rook: Cordoned node hangs operator

Is this a bug report or feature request?

  • Bug Report

Expected behavior: When a node in a cluster is cordoned and drained, the cluster should continue operating normally. Exactly what should happen to the OSDs on the “failed” node is up for debate (I don’t have a strong opinion on that), but whatever happens, the operator should not get stuck in an endless loop.

Deviation from expected behavior:

The operator gets stuck in an endless loop. An unabridged log can be found here.

The gist is that it starts provisioning OSDs on all the nodes and ends with:

2019-01-29 01:04:53.933828 E | op-cluster: failed to create cluster in namespace rook-ceph. failed to start the osds. 1 failures encountered while running osds in namespace rook-ceph: node ip-172-20-65-210.us-gov-east-1.compute.internal did not resolve to start osds

That’s the cordoned node. The operator then starts over and prints the same output again.

I’ve included one full iteration of the loop at the end of this issue.

How to reproduce it (minimal and precise):

Create a four-node cluster and wait for it to be healthy. Pick a node that does not have a mon or mgr running on it (otherwise you will hit #2570 instead), cordon that node, and then restart the operator.
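
For reference, here is a rough kubectl sketch of those steps. The namespace and label selectors assume the default 0.9 example manifests (operator in rook-ceph-system, cluster pods in rook-ceph, labels app=rook-ceph-mon / app=rook-ceph-mgr / app=rook-ceph-operator), so adjust them to your deployment:

# Find a node that is not running a mon or a mgr, then cordon it.
kubectl -n rook-ceph get pods -o wide -l 'app in (rook-ceph-mon, rook-ceph-mgr)'
kubectl cordon <node-without-mon-or-mgr>

# Restart the operator by deleting its pod; the deployment recreates it.
kubectl -n rook-ceph-system delete pod -l app=rook-ceph-operator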

Environment:

  • Cloud provider or hardware configuration: AWS
  • Rook version (use rook version inside of a Rook Pod): 0.9.2
  • Kubernetes version (use kubectl version): 1.11.6
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): KOPS
  • Storage backend status: Healthy

One full loop of the operator log:

2019-01-29 01:14:51.950899 I | op-mon: start running mons
2019-01-29 01:14:51.953069 I | cephmon: parsing mon endpoints: b=100.67.235.104:6790,c=100.70.37.58:6790,a=100.68.29.88:6790
2019-01-29 01:14:51.953119 I | op-mon: loaded: maxMonID=2, mons=map[b:0xc000569140 c:0xc000569220 a:0xc0005693e0], mapping=&{Node:map[a:0xc000a333b0 b:0xc000a333e0 c:0xc000a33410] Port:map[]}
2019-01-29 01:14:51.959242 I | op-mon: saved mon endpoints to config map map[mapping:{"node":{"a":{"Name":"ip-172-20-76-227.us-gov-east-1.compute.internal","Hostname":"ip-172-20-76-227.us-gov-east-1.compute.internal","Address":"172.20.76.227"},"b":{"Name":"ip-172-20-81-246.us-gov-east-1.compute.internal","Hostname":"ip-172-20-81-246.us-gov-east-1.compute.internal","Address":"172.20.81.246"},"c":{"Name":"ip-172-20-95-115.us-gov-east-1.compute.internal","Hostname":"ip-172-20-95-115.us-gov-east-1.compute.internal","Address":"172.20.95.115"}},"port":{}} data:b=100.67.235.104:6790,c=100.70.37.58:6790,a=100.68.29.88:6790 maxMonId:2]
2019-01-29 01:14:51.959420 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:14:51.959491 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:14:51.959550 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:14:51.967516 I | op-mon: Found 0 running nodes without mons
2019-01-29 01:14:51.967528 I | op-mon: All nodes are running mons. Adding all 7 nodes to the availability.
2019-01-29 01:14:51.967540 I | op-mon: ensuring mon rook-ceph-mon-b (b) is started
2019-01-29 01:14:51.967547 I | op-mon: looping to start mons. i=0, endIndex=3, c.Size=3
2019-01-29 01:14:51.995569 I | op-mon: mon b running at 100.67.235.104:6790
2019-01-29 01:14:52.140134 I | op-mon: mon c running at 100.70.37.58:6790
2019-01-29 01:14:52.540066 I | op-mon: mon a running at 100.68.29.88:6790
2019-01-29 01:14:52.940312 I | op-mon: saved mon endpoints to config map map[data:b=100.67.235.104:6790,c=100.70.37.58:6790,a=100.68.29.88:6790 maxMonId:2 mapping:{"node":{"a":{"Name":"ip-172-20-76-227.us-gov-east-1.compute.internal","Hostname":"ip-172-20-76-227.us-gov-east-1.compute.internal","Address":"172.20.76.227"},"b":{"Name":"ip-172-20-81-246.us-gov-east-1.compute.internal","Hostname":"ip-172-20-81-246.us-gov-east-1.compute.internal","Address":"172.20.81.246"},"c":{"Name":"ip-172-20-95-115.us-gov-east-1.compute.internal","Hostname":"ip-172-20-95-115.us-gov-east-1.compute.internal","Address":"172.20.95.115"}},"port":{}}]
2019-01-29 01:14:52.940517 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:14:52.940591 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:14:52.940654 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:14:52.940994 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:14:52.941094 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:14:52.941153 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:14:52.970477 I | op-mon: mons created: 3
2019-01-29 01:14:52.970491 I | op-mon: waiting for mon quorum with [b c a]
2019-01-29 01:14:53.144411 I | exec: Running command: ceph mon_status --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/851342638
2019-01-29 01:14:53.389489 I | op-mon: Ceph monitors formed quorum
2019-01-29 01:14:53.389506 I | op-mon: ensuring mon rook-ceph-mon-c (c) is started
2019-01-29 01:14:53.389511 I | op-mon: looping to start mons. i=1, endIndex=3, c.Size=3
2019-01-29 01:14:53.540680 I | op-mon: mon b running at 100.67.235.104:6790
2019-01-29 01:14:53.939948 I | op-mon: mon c running at 100.70.37.58:6790
2019-01-29 01:14:54.339967 I | op-mon: mon a running at 100.68.29.88:6790
2019-01-29 01:14:54.740427 I | op-mon: saved mon endpoints to config map map[maxMonId:2 mapping:{"node":{"a":{"Name":"ip-172-20-76-227.us-gov-east-1.compute.internal","Hostname":"ip-172-20-76-227.us-gov-east-1.compute.internal","Address":"172.20.76.227"},"b":{"Name":"ip-172-20-81-246.us-gov-east-1.compute.internal","Hostname":"ip-172-20-81-246.us-gov-east-1.compute.internal","Address":"172.20.81.246"},"c":{"Name":"ip-172-20-95-115.us-gov-east-1.compute.internal","Hostname":"ip-172-20-95-115.us-gov-east-1.compute.internal","Address":"172.20.95.115"}},"port":{}} data:b=100.67.235.104:6790,c=100.70.37.58:6790,a=100.68.29.88:6790]
2019-01-29 01:14:54.740617 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:14:54.740684 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:14:54.740740 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:14:54.741044 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:14:54.741103 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:14:54.741228 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:14:54.766054 I | op-mon: mons created: 3
2019-01-29 01:14:54.766068 I | op-mon: waiting for mon quorum with [b c a]
2019-01-29 01:14:54.943527 I | exec: Running command: ceph mon_status --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/443279541
2019-01-29 01:14:55.190851 I | op-mon: Ceph monitors formed quorum
2019-01-29 01:14:55.190869 I | op-mon: ensuring mon rook-ceph-mon-a (a) is started
2019-01-29 01:14:55.190874 I | op-mon: looping to start mons. i=2, endIndex=3, c.Size=3
2019-01-29 01:14:55.339956 I | op-mon: mon b running at 100.67.235.104:6790
2019-01-29 01:14:55.740032 I | op-mon: mon c running at 100.70.37.58:6790
2019-01-29 01:14:56.144426 I | op-mon: mon a running at 100.68.29.88:6790
2019-01-29 01:14:56.545559 I | op-mon: saved mon endpoints to config map map[maxMonId:2 mapping:{"node":{"a":{"Name":"ip-172-20-76-227.us-gov-east-1.compute.internal","Hostname":"ip-172-20-76-227.us-gov-east-1.compute.internal","Address":"172.20.76.227"},"b":{"Name":"ip-172-20-81-246.us-gov-east-1.compute.internal","Hostname":"ip-172-20-81-246.us-gov-east-1.compute.internal","Address":"172.20.81.246"},"c":{"Name":"ip-172-20-95-115.us-gov-east-1.compute.internal","Hostname":"ip-172-20-95-115.us-gov-east-1.compute.internal","Address":"172.20.95.115"}},"port":{}} data:a=100.68.29.88:6790,b=100.67.235.104:6790,c=100.70.37.58:6790]
2019-01-29 01:14:56.546060 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:14:56.546167 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:14:56.546233 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:14:56.546451 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:14:56.546615 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:14:56.546735 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:14:56.572512 I | op-mon: mons created: 3
2019-01-29 01:14:56.572525 I | op-mon: waiting for mon quorum with [b c a]
2019-01-29 01:14:56.743832 I | exec: Running command: ceph mon_status --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/654272144
2019-01-29 01:14:56.990341 I | op-mon: Ceph monitors formed quorum
2019-01-29 01:14:56.991722 I | op-mgr: start running mgr
2019-01-29 01:14:57.140262 I | ceph-spec: the keyring rook-ceph-mgr-a was already generated
2019-01-29 01:14:57.145761 I | op-mgr: deployment for mgr rook-ceph-mgr-a already exists. updating if needed
2019-01-29 01:14:57.147261 I | op-k8sutil: updating deployment rook-ceph-mgr-a
2019-01-29 01:14:59.158398 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mgr-a
2019-01-29 01:14:59.158414 I | op-mgr: skipping enabling orchestrator modules on releases older than nautilus
2019-01-29 01:14:59.158496 I | exec: Running command: ceph mgr module enable prometheus --force --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/266952879
2019-01-29 01:14:59.521901 I | exec: module 'prometheus' is already enabled
2019-01-29 01:14:59.522035 I | exec: Running command: ceph mgr module enable dashboard --force --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/776379970
2019-01-29 01:15:00.535352 I | exec: module 'dashboard' is already enabled
2019-01-29 01:15:05.539065 I | op-mgr: the dashboard secret was already generated
2019-01-29 01:15:05.539153 I | exec: Running command: ceph dashboard create-self-signed-cert --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/763709625
2019-01-29 01:15:05.848410 I | op-mgr: Running command: ceph dashboard set-login-credentials admin *******
2019-01-29 01:15:06.410938 I | op-mgr: restarting the mgr module
2019-01-29 01:15:06.411011 I | exec: Running command: ceph mgr module disable dashboard --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/253184339
2019-01-29 01:15:07.378090 I | exec: Running command: ceph mgr module enable dashboard --force --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/103294614
2019-01-29 01:15:08.406333 I | exec: Running command: ceph config get mgr. mgr/dashboard/url_prefix --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/559013629
2019-01-29 01:15:08.680413 I | exec: Error ENOENT: 
2019-01-29 01:15:08.680523 I | exec: Running command: ceph config rm mgr. mgr/dashboard/url_prefix --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/768972856
2019-01-29 01:15:08.974767 I | exec: Running command: ceph config get mgr. mgr/dashboard/server_port --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/288385591
2019-01-29 01:15:09.228942 I | exec: Running command: ceph config set mgr. mgr/dashboard/server_port 8443 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/688927274
2019-01-29 01:15:09.490869 I | exec: Running command: ceph config get mgr. mgr/dashboard/ssl --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/265497473
2019-01-29 01:15:09.753833 I | exec: Error ENOENT: 
2019-01-29 01:15:09.753965 I | exec: Running command: ceph config rm mgr. mgr/dashboard/ssl --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/204228844
2019-01-29 01:15:10.035554 I | op-mgr: dashboard service already exists
2019-01-29 01:15:10.063069 I | op-mgr: mgr metrics service already exists
2019-01-29 01:15:10.063088 I | op-osd: start running osds in namespace rook-ceph
2019-01-29 01:15:10.063171 I | exec: Running command: ceph osd set noscrub --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/203485531
2019-01-29 01:15:10.433133 I | exec: noscrub is set
2019-01-29 01:15:10.433279 I | exec: Running command: ceph osd set nodeep-scrub --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/351462142
2019-01-29 01:15:11.443769 I | exec: nodeep-scrub is set
2019-01-29 01:15:11.452536 I | op-osd: 3 of the 4 storage nodes are valid
2019-01-29 01:15:11.452548 I | op-osd: checking if orchestration is still in progress
2019-01-29 01:15:11.454163 I | op-osd: osd orchestration status for node ip-172-20-95-115.us-gov-east-1.compute.internal is completed
2019-01-29 01:15:11.455267 I | op-osd: osd orchestration status for node ip-172-20-76-227.us-gov-east-1.compute.internal is completed
2019-01-29 01:15:11.456304 I | op-osd: osd orchestration status for node ip-172-20-65-210.us-gov-east-1.compute.internal is completed
2019-01-29 01:15:11.457352 I | op-osd: osd orchestration status for node ip-172-20-81-246.us-gov-east-1.compute.internal is completed
2019-01-29 01:15:11.458347 I | op-osd: start provisioning the osds on nodes, if needed
2019-01-29 01:15:11.458371 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-76-227.us-gov-east-1.compute.internal will be c4ce2d2a12af5a96f04b61b55b645966
2019-01-29 01:15:12.046178 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-76-227.us-gov-east-1.compute.internal will be c4ce2d2a12af5a96f04b61b55b645966
2019-01-29 01:15:12.445594 I | op-osd: avail devices for node ip-172-20-76-227.us-gov-east-1.compute.internal: [{Name:nvme15n1 FullPath: Config:map[]} {Name:nvme14n1 FullPath: Config:map[]} {Name:nvme17n1 FullPath: Config:map[]} {Name:nvme16n1 FullPath: Config:map[]} {Name:nvme0n1 FullPath: Config:map[]}]
2019-01-29 01:15:12.445633 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-76-227.us-gov-east-1.compute.internal will be c4ce2d2a12af5a96f04b61b55b645966
2019-01-29 01:15:12.447079 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-c4ce2d2a12af5a96f04b61b55b645966 to start a new one
2019-01-29 01:15:12.455637 I | op-k8sutil: batch job rook-ceph-osd-prepare-c4ce2d2a12af5a96f04b61b55b645966 still exists
2019-01-29 01:15:14.456831 I | op-k8sutil: batch job rook-ceph-osd-prepare-c4ce2d2a12af5a96f04b61b55b645966 deleted
2019-01-29 01:15:14.462342 I | op-osd: osd provision job started for node ip-172-20-76-227.us-gov-east-1.compute.internal
2019-01-29 01:15:14.462369 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-81-246.us-gov-east-1.compute.internal will be f5fd917c1af0dab98d3bcaff7dd5efd2
2019-01-29 01:15:14.476775 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-81-246.us-gov-east-1.compute.internal will be f5fd917c1af0dab98d3bcaff7dd5efd2
2019-01-29 01:15:14.486584 I | op-osd: avail devices for node ip-172-20-81-246.us-gov-east-1.compute.internal: [{Name:nvme15n1 FullPath: Config:map[]} {Name:nvme18n1 FullPath: Config:map[]} {Name:nvme17n1 FullPath: Config:map[]} {Name:nvme16n1 FullPath: Config:map[]} {Name:nvme0n1 FullPath: Config:map[]}]
2019-01-29 01:15:14.486620 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-81-246.us-gov-east-1.compute.internal will be f5fd917c1af0dab98d3bcaff7dd5efd2
2019-01-29 01:15:14.488075 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-f5fd917c1af0dab98d3bcaff7dd5efd2 to start a new one
2019-01-29 01:15:14.496725 I | op-k8sutil: batch job rook-ceph-osd-prepare-f5fd917c1af0dab98d3bcaff7dd5efd2 still exists
2019-01-29 01:15:16.500120 I | op-k8sutil: batch job rook-ceph-osd-prepare-f5fd917c1af0dab98d3bcaff7dd5efd2 deleted
2019-01-29 01:15:16.505486 I | op-osd: osd provision job started for node ip-172-20-81-246.us-gov-east-1.compute.internal
2019-01-29 01:15:16.505515 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-95-115.us-gov-east-1.compute.internal will be 1f40e1b6860a20978e305358500171c4
2019-01-29 01:15:16.520879 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-95-115.us-gov-east-1.compute.internal will be 1f40e1b6860a20978e305358500171c4
2019-01-29 01:15:16.530758 I | op-osd: avail devices for node ip-172-20-95-115.us-gov-east-1.compute.internal: [{Name:nvme20n1 FullPath: Config:map[]} {Name:nvme22n1 FullPath: Config:map[]} {Name:nvme19n1 FullPath: Config:map[]} {Name:nvme0n1 FullPath: Config:map[]} {Name:nvme21n1 FullPath: Config:map[]}]
2019-01-29 01:15:16.530792 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-95-115.us-gov-east-1.compute.internal will be 1f40e1b6860a20978e305358500171c4
2019-01-29 01:15:16.532278 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-1f40e1b6860a20978e305358500171c4 to start a new one
2019-01-29 01:15:16.545518 I | op-k8sutil: batch job rook-ceph-osd-prepare-1f40e1b6860a20978e305358500171c4 still exists
2019-01-29 01:15:18.546637 I | op-k8sutil: batch job rook-ceph-osd-prepare-1f40e1b6860a20978e305358500171c4 deleted
2019-01-29 01:15:18.552228 I | op-osd: osd provision job started for node ip-172-20-95-115.us-gov-east-1.compute.internal
2019-01-29 01:15:18.552241 I | op-osd: start osds after provisioning is completed, if needed
2019-01-29 01:15:18.553746 I | op-osd: osd orchestration status for node ip-172-20-95-115.us-gov-east-1.compute.internal is starting
2019-01-29 01:15:18.553761 I | op-osd: osd orchestration status for node ip-172-20-76-227.us-gov-east-1.compute.internal is orchestrating
2019-01-29 01:15:18.553768 I | op-osd: osd orchestration status for node ip-172-20-65-210.us-gov-east-1.compute.internal is completed
2019-01-29 01:15:18.553773 I | op-osd: starting 0 osd daemons on node ip-172-20-65-210.us-gov-east-1.compute.internal
2019-01-29 01:15:18.553781 E | op-osd: node ip-172-20-65-210.us-gov-east-1.compute.internal did not resolve to start osds
2019-01-29 01:15:18.554865 I | op-osd: osd orchestration status for node ip-172-20-81-246.us-gov-east-1.compute.internal is orchestrating
2019-01-29 01:15:18.554876 I | op-osd: 1/4 node(s) completed osd provisioning, resource version 781456
2019-01-29 01:15:20.502156 I | op-osd: osd orchestration status for node ip-172-20-95-115.us-gov-east-1.compute.internal is computingDiff
2019-01-29 01:15:20.560645 I | op-osd: osd orchestration status for node ip-172-20-95-115.us-gov-east-1.compute.internal is orchestrating
2019-01-29 01:15:28.360175 I | op-osd: osd orchestration status for node ip-172-20-76-227.us-gov-east-1.compute.internal is completed
2019-01-29 01:15:28.360190 I | op-osd: starting 4 osd daemons on node ip-172-20-76-227.us-gov-east-1.compute.internal
2019-01-29 01:15:28.370104 I | op-osd: deployment for osd 0 already exists. updating if needed
2019-01-29 01:15:28.371868 I | op-k8sutil: updating deployment rook-ceph-osd-0
2019-01-29 01:15:30.384258 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-0
2019-01-29 01:15:30.384274 I | op-osd: started deployment for osd 0 (dir=false, type=)
2019-01-29 01:15:30.391354 I | op-osd: deployment for osd 3 already exists. updating if needed
2019-01-29 01:15:30.392951 I | op-k8sutil: updating deployment rook-ceph-osd-3
2019-01-29 01:15:32.404364 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-3
2019-01-29 01:15:32.404381 I | op-osd: started deployment for osd 3 (dir=false, type=)
2019-01-29 01:15:32.412032 I | op-osd: deployment for osd 6 already exists. updating if needed
2019-01-29 01:15:32.413498 I | op-k8sutil: updating deployment rook-ceph-osd-6
2019-01-29 01:15:34.424985 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-6
2019-01-29 01:15:34.425001 I | op-osd: started deployment for osd 6 (dir=false, type=)
2019-01-29 01:15:34.431980 I | op-osd: deployment for osd 9 already exists. updating if needed
2019-01-29 01:15:34.433459 I | op-k8sutil: updating deployment rook-ceph-osd-9
2019-01-29 01:15:36.444993 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-9
2019-01-29 01:15:36.445009 I | op-osd: started deployment for osd 9 (dir=false, type=)
2019-01-29 01:15:36.446255 I | op-osd: osd orchestration status for node ip-172-20-81-246.us-gov-east-1.compute.internal is completed
2019-01-29 01:15:36.446268 I | op-osd: starting 4 osd daemons on node ip-172-20-81-246.us-gov-east-1.compute.internal
2019-01-29 01:15:36.453729 I | op-osd: deployment for osd 1 already exists. updating if needed
2019-01-29 01:15:36.455453 I | op-k8sutil: updating deployment rook-ceph-osd-1
2019-01-29 01:15:38.469386 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-1
2019-01-29 01:15:38.469403 I | op-osd: started deployment for osd 1 (dir=false, type=)
2019-01-29 01:15:38.476070 I | op-osd: deployment for osd 10 already exists. updating if needed
2019-01-29 01:15:38.477617 I | op-k8sutil: updating deployment rook-ceph-osd-10
2019-01-29 01:15:40.489065 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-10
2019-01-29 01:15:40.489082 I | op-osd: started deployment for osd 10 (dir=false, type=)
2019-01-29 01:15:40.496244 I | op-osd: deployment for osd 4 already exists. updating if needed
2019-01-29 01:15:40.497874 I | op-k8sutil: updating deployment rook-ceph-osd-4
2019-01-29 01:15:42.509683 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-4
2019-01-29 01:15:42.509699 I | op-osd: started deployment for osd 4 (dir=false, type=)
2019-01-29 01:15:42.517015 I | op-osd: deployment for osd 7 already exists. updating if needed
2019-01-29 01:15:42.518642 I | op-k8sutil: updating deployment rook-ceph-osd-7
2019-01-29 01:15:44.530560 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-7
2019-01-29 01:15:44.530578 I | op-osd: started deployment for osd 7 (dir=false, type=)
2019-01-29 01:15:44.531826 I | op-osd: osd orchestration status for node ip-172-20-95-115.us-gov-east-1.compute.internal is completed
2019-01-29 01:15:44.531839 I | op-osd: starting 4 osd daemons on node ip-172-20-95-115.us-gov-east-1.compute.internal
2019-01-29 01:15:44.540238 I | op-osd: deployment for osd 8 already exists. updating if needed
2019-01-29 01:15:44.542802 I | op-k8sutil: updating deployment rook-ceph-osd-8
2019-01-29 01:15:46.555797 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-8
2019-01-29 01:15:46.555813 I | op-osd: started deployment for osd 8 (dir=false, type=)
2019-01-29 01:15:46.562648 I | op-osd: deployment for osd 11 already exists. updating if needed
2019-01-29 01:15:46.564461 I | op-k8sutil: updating deployment rook-ceph-osd-11
2019-01-29 01:15:48.578157 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-11
2019-01-29 01:15:48.578174 I | op-osd: started deployment for osd 11 (dir=false, type=)
2019-01-29 01:15:48.585256 I | op-osd: deployment for osd 2 already exists. updating if needed
2019-01-29 01:15:48.586919 I | op-k8sutil: updating deployment rook-ceph-osd-2
2019-01-29 01:15:50.597938 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-2
2019-01-29 01:15:50.597954 I | op-osd: started deployment for osd 2 (dir=false, type=)
2019-01-29 01:15:50.605049 I | op-osd: deployment for osd 5 already exists. updating if needed
2019-01-29 01:15:50.606661 I | op-k8sutil: updating deployment rook-ceph-osd-5
2019-01-29 01:15:52.618055 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-5
2019-01-29 01:15:52.618072 I | op-osd: started deployment for osd 5 (dir=false, type=)
2019-01-29 01:15:52.619364 I | op-osd: 4/4 node(s) completed osd provisioning
2019-01-29 01:15:52.619408 I | op-osd: checking if any nodes were removed
2019-01-29 01:15:52.626582 I | op-osd: processing 0 removed nodes
2019-01-29 01:15:52.626597 I | op-osd: done processing removed nodes
2019-01-29 01:15:52.626680 I | exec: Running command: ceph osd unset noscrub --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/414462533
2019-01-29 01:15:52.941460 I | exec: noscrub is unset
2019-01-29 01:15:52.941603 I | exec: Running command: ceph osd unset nodeep-scrub --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/844758752
2019-01-29 01:15:53.999285 I | exec: nodeep-scrub is unset
2019-01-29 01:15:53.999362 E | op-cluster: failed to create cluster in namespace rook-ceph. failed to start the osds. 1 failures encountered while running osds in namespace rook-ceph: node ip-172-20-65-210.us-gov-east-1.compute.internal did not resolve to start osds
2019-01-29 01:15:57.942502 I | op-mon: start running mons
2019-01-29 01:15:57.944846 I | cephmon: parsing mon endpoints: a=100.68.29.88:6790,b=100.67.235.104:6790,c=100.70.37.58:6790
2019-01-29 01:15:57.944907 I | op-mon: loaded: maxMonID=2, mons=map[a:0xc000a3b660 b:0xc000a3b6e0 c:0xc000a3b720], mapping=&{Node:map[a:0xc000ad84b0 b:0xc000ad84e0 c:0xc000ad8510] Port:map[]}
2019-01-29 01:15:57.953037 I | op-mon: saved mon endpoints to config map map[maxMonId:2 mapping:{"node":{"a":{"Name":"ip-172-20-76-227.us-gov-east-1.compute.internal","Hostname":"ip-172-20-76-227.us-gov-east-1.compute.internal","Address":"172.20.76.227"},"b":{"Name":"ip-172-20-81-246.us-gov-east-1.compute.internal","Hostname":"ip-172-20-81-246.us-gov-east-1.compute.internal","Address":"172.20.81.246"},"c":{"Name":"ip-172-20-95-115.us-gov-east-1.compute.internal","Hostname":"ip-172-20-95-115.us-gov-east-1.compute.internal","Address":"172.20.95.115"}},"port":{}} data:a=100.68.29.88:6790,b=100.67.235.104:6790,c=100.70.37.58:6790]
2019-01-29 01:15:57.953256 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:15:57.953358 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:15:57.953419 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:15:57.961761 I | op-mon: Found 0 running nodes without mons
2019-01-29 01:15:57.961772 I | op-mon: All nodes are running mons. Adding all 7 nodes to the availability.
2019-01-29 01:15:57.961781 I | op-mon: ensuring mon rook-ceph-mon-a (a) is started
2019-01-29 01:15:57.961785 I | op-mon: looping to start mons. i=0, endIndex=3, c.Size=3
2019-01-29 01:15:57.988492 I | op-mon: mon a running at 100.68.29.88:6790
2019-01-29 01:15:58.139267 I | op-mon: mon b running at 100.67.235.104:6790
2019-01-29 01:15:58.539288 I | op-mon: mon c running at 100.70.37.58:6790
2019-01-29 01:15:58.939169 I | op-mon: saved mon endpoints to config map map[data:a=100.68.29.88:6790,b=100.67.235.104:6790,c=100.70.37.58:6790 maxMonId:2 mapping:{"node":{"a":{"Name":"ip-172-20-76-227.us-gov-east-1.compute.internal","Hostname":"ip-172-20-76-227.us-gov-east-1.compute.internal","Address":"172.20.76.227"},"b":{"Name":"ip-172-20-81-246.us-gov-east-1.compute.internal","Hostname":"ip-172-20-81-246.us-gov-east-1.compute.internal","Address":"172.20.81.246"},"c":{"Name":"ip-172-20-95-115.us-gov-east-1.compute.internal","Hostname":"ip-172-20-95-115.us-gov-east-1.compute.internal","Address":"172.20.95.115"}},"port":{}}]
2019-01-29 01:15:58.939367 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:15:58.939440 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:15:58.939511 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:15:58.939891 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:15:58.939976 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:15:58.940044 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:15:58.966688 I | op-mon: mons created: 3
2019-01-29 01:15:58.966703 I | op-mon: waiting for mon quorum with [a b c]
2019-01-29 01:15:59.143117 I | exec: Running command: ceph mon_status --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/322474175
2019-01-29 01:15:59.392536 I | op-mon: Ceph monitors formed quorum
2019-01-29 01:15:59.392554 I | op-mon: ensuring mon rook-ceph-mon-b (b) is started
2019-01-29 01:15:59.392559 I | op-mon: looping to start mons. i=1, endIndex=3, c.Size=3
2019-01-29 01:15:59.539012 I | op-mon: mon a running at 100.68.29.88:6790
2019-01-29 01:15:59.939062 I | op-mon: mon b running at 100.67.235.104:6790
2019-01-29 01:16:00.339325 I | op-mon: mon c running at 100.70.37.58:6790
2019-01-29 01:16:00.739249 I | op-mon: saved mon endpoints to config map map[mapping:{"node":{"a":{"Name":"ip-172-20-76-227.us-gov-east-1.compute.internal","Hostname":"ip-172-20-76-227.us-gov-east-1.compute.internal","Address":"172.20.76.227"},"b":{"Name":"ip-172-20-81-246.us-gov-east-1.compute.internal","Hostname":"ip-172-20-81-246.us-gov-east-1.compute.internal","Address":"172.20.81.246"},"c":{"Name":"ip-172-20-95-115.us-gov-east-1.compute.internal","Hostname":"ip-172-20-95-115.us-gov-east-1.compute.internal","Address":"172.20.95.115"}},"port":{}} data:a=100.68.29.88:6790,b=100.67.235.104:6790,c=100.70.37.58:6790 maxMonId:2]
2019-01-29 01:16:00.739454 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:16:00.739532 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:16:00.739595 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:16:00.739975 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:16:00.740052 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:16:00.740115 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:16:00.765674 I | op-mon: mons created: 3
2019-01-29 01:16:00.765686 I | op-mon: waiting for mon quorum with [a b c]
2019-01-29 01:16:00.942810 I | exec: Running command: ceph mon_status --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/176262418
2019-01-29 01:16:01.192271 I | op-mon: Ceph monitors formed quorum
2019-01-29 01:16:01.192289 I | op-mon: ensuring mon rook-ceph-mon-c (c) is started
2019-01-29 01:16:01.192294 I | op-mon: looping to start mons. i=2, endIndex=3, c.Size=3
2019-01-29 01:16:01.339418 I | op-mon: mon a running at 100.68.29.88:6790
2019-01-29 01:16:01.739336 I | op-mon: mon b running at 100.67.235.104:6790
2019-01-29 01:16:02.139032 I | op-mon: mon c running at 100.70.37.58:6790
2019-01-29 01:16:02.539777 I | op-mon: saved mon endpoints to config map map[data:a=100.68.29.88:6790,b=100.67.235.104:6790,c=100.70.37.58:6790 maxMonId:2 mapping:{"node":{"a":{"Name":"ip-172-20-76-227.us-gov-east-1.compute.internal","Hostname":"ip-172-20-76-227.us-gov-east-1.compute.internal","Address":"172.20.76.227"},"b":{"Name":"ip-172-20-81-246.us-gov-east-1.compute.internal","Hostname":"ip-172-20-81-246.us-gov-east-1.compute.internal","Address":"172.20.81.246"},"c":{"Name":"ip-172-20-95-115.us-gov-east-1.compute.internal","Hostname":"ip-172-20-95-115.us-gov-east-1.compute.internal","Address":"172.20.95.115"}},"port":{}}]
2019-01-29 01:16:02.539998 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:16:02.540071 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:16:02.540158 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:16:02.540500 I | cephconfig: writing config file /var/lib/rook/rook-ceph/rook-ceph.config
2019-01-29 01:16:02.540583 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-01-29 01:16:02.540643 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph
2019-01-29 01:16:02.566428 I | op-mon: mons created: 3
2019-01-29 01:16:02.566441 I | op-mon: waiting for mon quorum with [a b c]
2019-01-29 01:16:02.742725 I | exec: Running command: ceph mon_status --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/912803145
2019-01-29 01:16:02.989233 I | op-mon: Ceph monitors formed quorum
2019-01-29 01:16:02.990417 I | op-mgr: start running mgr
2019-01-29 01:16:03.139009 I | ceph-spec: the keyring rook-ceph-mgr-a was already generated
2019-01-29 01:16:03.145469 I | op-mgr: deployment for mgr rook-ceph-mgr-a already exists. updating if needed
2019-01-29 01:16:03.147039 I | op-k8sutil: updating deployment rook-ceph-mgr-a
2019-01-29 01:16:05.157357 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mgr-a
2019-01-29 01:16:05.157376 I | op-mgr: skipping enabling orchestrator modules on releases older than nautilus
2019-01-29 01:16:05.157454 I | exec: Running command: ceph mgr module enable prometheus --force --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/167670292
2019-01-29 01:16:06.023809 I | exec: module 'prometheus' is already enabled
2019-01-29 01:16:06.023930 I | exec: Running command: ceph mgr module enable dashboard --force --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/345368675
2019-01-29 01:16:07.035952 I | exec: module 'dashboard' is already enabled
2019-01-29 01:16:12.039170 I | op-mgr: the dashboard secret was already generated
2019-01-29 01:16:12.039262 I | exec: Running command: ceph dashboard create-self-signed-cert --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/399057510
2019-01-29 01:16:12.651603 I | op-mgr: Running command: ceph dashboard set-login-credentials admin *******
2019-01-29 01:16:13.217692 I | op-mgr: restarting the mgr module
2019-01-29 01:16:13.217764 I | exec: Running command: ceph mgr module disable dashboard --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/196636808
2019-01-29 01:16:14.182904 I | exec: Running command: ceph mgr module enable dashboard --force --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/158514759
2019-01-29 01:16:15.190442 I | exec: Running command: ceph config get mgr. mgr/dashboard/url_prefix --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/275768058
2019-01-29 01:16:15.452293 I | exec: Error ENOENT: 
2019-01-29 01:16:15.452402 I | exec: Running command: ceph config rm mgr. mgr/dashboard/url_prefix --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/045387281
2019-01-29 01:16:15.746012 I | exec: Running command: ceph config get mgr. mgr/dashboard/server_port --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/637574716
2019-01-29 01:16:15.993953 I | exec: Running command: ceph config set mgr. mgr/dashboard/server_port 8443 --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/088017515
2019-01-29 01:16:16.254603 I | exec: Running command: ceph config get mgr. mgr/dashboard/ssl --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/947160270
2019-01-29 01:16:16.524186 I | exec: Error ENOENT: 
2019-01-29 01:16:16.524291 I | exec: Running command: ceph config rm mgr. mgr/dashboard/ssl --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/345648597
2019-01-29 01:16:16.804249 I | op-mgr: dashboard service already exists
2019-01-29 01:16:16.831236 I | op-mgr: mgr metrics service already exists
2019-01-29 01:16:16.831254 I | op-osd: start running osds in namespace rook-ceph
2019-01-29 01:16:16.831333 I | exec: Running command: ceph osd set noscrub --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/019289904
2019-01-29 01:16:17.252081 I | exec: noscrub is set
2019-01-29 01:16:17.252223 I | exec: Running command: ceph osd set nodeep-scrub --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/034856143
2019-01-29 01:16:18.264329 I | exec: nodeep-scrub is set
2019-01-29 01:16:18.273286 I | op-osd: 3 of the 4 storage nodes are valid
2019-01-29 01:16:18.273297 I | op-osd: checking if orchestration is still in progress
2019-01-29 01:16:18.275034 I | op-osd: osd orchestration status for node ip-172-20-95-115.us-gov-east-1.compute.internal is completed
2019-01-29 01:16:18.276210 I | op-osd: osd orchestration status for node ip-172-20-76-227.us-gov-east-1.compute.internal is completed
2019-01-29 01:16:18.277320 I | op-osd: osd orchestration status for node ip-172-20-65-210.us-gov-east-1.compute.internal is completed
2019-01-29 01:16:18.278505 I | op-osd: osd orchestration status for node ip-172-20-81-246.us-gov-east-1.compute.internal is completed
2019-01-29 01:16:18.279685 I | op-osd: start provisioning the osds on nodes, if needed
2019-01-29 01:16:18.279707 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-76-227.us-gov-east-1.compute.internal will be c4ce2d2a12af5a96f04b61b55b645966
2019-01-29 01:16:18.866909 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-76-227.us-gov-east-1.compute.internal will be c4ce2d2a12af5a96f04b61b55b645966
2019-01-29 01:16:19.266198 I | op-osd: avail devices for node ip-172-20-76-227.us-gov-east-1.compute.internal: [{Name:nvme15n1 FullPath: Config:map[]} {Name:nvme14n1 FullPath: Config:map[]} {Name:nvme17n1 FullPath: Config:map[]} {Name:nvme16n1 FullPath: Config:map[]} {Name:nvme0n1 FullPath: Config:map[]}]
2019-01-29 01:16:19.266546 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-76-227.us-gov-east-1.compute.internal will be c4ce2d2a12af5a96f04b61b55b645966
2019-01-29 01:16:19.268320 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-c4ce2d2a12af5a96f04b61b55b645966 to start a new one
2019-01-29 01:16:19.276116 I | op-k8sutil: batch job rook-ceph-osd-prepare-c4ce2d2a12af5a96f04b61b55b645966 still exists
2019-01-29 01:16:21.277387 I | op-k8sutil: batch job rook-ceph-osd-prepare-c4ce2d2a12af5a96f04b61b55b645966 deleted
2019-01-29 01:16:21.283515 I | op-osd: osd provision job started for node ip-172-20-76-227.us-gov-east-1.compute.internal
2019-01-29 01:16:21.283548 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-81-246.us-gov-east-1.compute.internal will be f5fd917c1af0dab98d3bcaff7dd5efd2
2019-01-29 01:16:21.299088 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-81-246.us-gov-east-1.compute.internal will be f5fd917c1af0dab98d3bcaff7dd5efd2
2019-01-29 01:16:21.309442 I | op-osd: avail devices for node ip-172-20-81-246.us-gov-east-1.compute.internal: [{Name:nvme15n1 FullPath: Config:map[]} {Name:nvme18n1 FullPath: Config:map[]} {Name:nvme17n1 FullPath: Config:map[]} {Name:nvme16n1 FullPath: Config:map[]} {Name:nvme0n1 FullPath: Config:map[]}]
2019-01-29 01:16:21.309470 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-81-246.us-gov-east-1.compute.internal will be f5fd917c1af0dab98d3bcaff7dd5efd2
2019-01-29 01:16:21.311146 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-f5fd917c1af0dab98d3bcaff7dd5efd2 to start a new one
2019-01-29 01:16:21.320038 I | op-k8sutil: batch job rook-ceph-osd-prepare-f5fd917c1af0dab98d3bcaff7dd5efd2 still exists
2019-01-29 01:16:23.323186 I | op-k8sutil: batch job rook-ceph-osd-prepare-f5fd917c1af0dab98d3bcaff7dd5efd2 deleted
2019-01-29 01:16:23.329048 I | op-osd: osd provision job started for node ip-172-20-81-246.us-gov-east-1.compute.internal
2019-01-29 01:16:23.329077 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-95-115.us-gov-east-1.compute.internal will be 1f40e1b6860a20978e305358500171c4
2019-01-29 01:16:23.343855 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-95-115.us-gov-east-1.compute.internal will be 1f40e1b6860a20978e305358500171c4
2019-01-29 01:16:23.355456 I | op-osd: avail devices for node ip-172-20-95-115.us-gov-east-1.compute.internal: [{Name:nvme20n1 FullPath: Config:map[]} {Name:nvme22n1 FullPath: Config:map[]} {Name:nvme19n1 FullPath: Config:map[]} {Name:nvme0n1 FullPath: Config:map[]} {Name:nvme21n1 FullPath: Config:map[]}]
2019-01-29 01:16:23.355507 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName ip-172-20-95-115.us-gov-east-1.compute.internal will be 1f40e1b6860a20978e305358500171c4
2019-01-29 01:16:23.357091 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-1f40e1b6860a20978e305358500171c4 to start a new one
2019-01-29 01:16:23.364595 I | op-k8sutil: batch job rook-ceph-osd-prepare-1f40e1b6860a20978e305358500171c4 still exists
2019-01-29 01:16:25.365781 I | op-k8sutil: batch job rook-ceph-osd-prepare-1f40e1b6860a20978e305358500171c4 deleted
2019-01-29 01:16:25.371023 I | op-osd: osd provision job started for node ip-172-20-95-115.us-gov-east-1.compute.internal
2019-01-29 01:16:25.371038 I | op-osd: start osds after provisioning is completed, if needed
2019-01-29 01:16:25.372549 I | op-osd: osd orchestration status for node ip-172-20-95-115.us-gov-east-1.compute.internal is starting
2019-01-29 01:16:25.372565 I | op-osd: osd orchestration status for node ip-172-20-76-227.us-gov-east-1.compute.internal is orchestrating
2019-01-29 01:16:25.372572 I | op-osd: osd orchestration status for node ip-172-20-65-210.us-gov-east-1.compute.internal is completed
2019-01-29 01:16:25.372576 I | op-osd: starting 0 osd daemons on node ip-172-20-65-210.us-gov-east-1.compute.internal
2019-01-29 01:16:25.372612 E | op-osd: node ip-172-20-65-210.us-gov-east-1.compute.internal did not resolve to start osds
2019-01-29 01:16:25.373719 I | op-osd: osd orchestration status for node ip-172-20-81-246.us-gov-east-1.compute.internal is orchestrating
2019-01-29 01:16:25.373731 I | op-osd: 1/4 node(s) completed osd provisioning, resource version 781698
2019-01-29 01:16:27.349728 I | op-osd: osd orchestration status for node ip-172-20-95-115.us-gov-east-1.compute.internal is computingDiff
2019-01-29 01:16:27.409517 I | op-osd: osd orchestration status for node ip-172-20-95-115.us-gov-east-1.compute.internal is orchestrating
2019-01-29 01:16:35.179314 I | op-osd: osd orchestration status for node ip-172-20-76-227.us-gov-east-1.compute.internal is completed
2019-01-29 01:16:35.179329 I | op-osd: starting 4 osd daemons on node ip-172-20-76-227.us-gov-east-1.compute.internal
2019-01-29 01:16:35.188823 I | op-osd: deployment for osd 0 already exists. updating if needed
2019-01-29 01:16:35.190438 I | op-k8sutil: updating deployment rook-ceph-osd-0
2019-01-29 01:16:37.202574 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-0
2019-01-29 01:16:37.202591 I | op-osd: started deployment for osd 0 (dir=false, type=)
2019-01-29 01:16:37.210072 I | op-osd: deployment for osd 3 already exists. updating if needed
2019-01-29 01:16:37.212339 I | op-k8sutil: updating deployment rook-ceph-osd-3
2019-01-29 01:16:39.223461 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-3
2019-01-29 01:16:39.223477 I | op-osd: started deployment for osd 3 (dir=false, type=)
2019-01-29 01:16:39.230452 I | op-osd: deployment for osd 6 already exists. updating if needed
2019-01-29 01:16:39.232116 I | op-k8sutil: updating deployment rook-ceph-osd-6
2019-01-29 01:16:41.243358 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-6
2019-01-29 01:16:41.243373 I | op-osd: started deployment for osd 6 (dir=false, type=)
2019-01-29 01:16:41.250663 I | op-osd: deployment for osd 9 already exists. updating if needed
2019-01-29 01:16:41.252307 I | op-k8sutil: updating deployment rook-ceph-osd-9
2019-01-29 01:16:43.263620 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-9
2019-01-29 01:16:43.263636 I | op-osd: started deployment for osd 9 (dir=false, type=)
2019-01-29 01:16:43.264914 I | op-osd: osd orchestration status for node ip-172-20-81-246.us-gov-east-1.compute.internal is completed
2019-01-29 01:16:43.264927 I | op-osd: starting 4 osd daemons on node ip-172-20-81-246.us-gov-east-1.compute.internal
2019-01-29 01:16:43.271900 I | op-osd: deployment for osd 7 already exists. updating if needed
2019-01-29 01:16:43.273496 I | op-k8sutil: updating deployment rook-ceph-osd-7
2019-01-29 01:16:45.287592 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-7
2019-01-29 01:16:45.287609 I | op-osd: started deployment for osd 7 (dir=false, type=)
2019-01-29 01:16:45.294599 I | op-osd: deployment for osd 1 already exists. updating if needed
2019-01-29 01:16:45.296414 I | op-k8sutil: updating deployment rook-ceph-osd-1
2019-01-29 01:16:47.307942 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-1
2019-01-29 01:16:47.307958 I | op-osd: started deployment for osd 1 (dir=false, type=)
2019-01-29 01:16:47.315019 I | op-osd: deployment for osd 10 already exists. updating if needed
2019-01-29 01:16:47.316772 I | op-k8sutil: updating deployment rook-ceph-osd-10
2019-01-29 01:16:49.328180 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-10
2019-01-29 01:16:49.328197 I | op-osd: started deployment for osd 10 (dir=false, type=)
2019-01-29 01:16:49.335090 I | op-osd: deployment for osd 4 already exists. updating if needed
2019-01-29 01:16:49.336654 I | op-k8sutil: updating deployment rook-ceph-osd-4
2019-01-29 01:16:51.347410 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-4
2019-01-29 01:16:51.347428 I | op-osd: started deployment for osd 4 (dir=false, type=)
2019-01-29 01:16:51.348680 I | op-osd: osd orchestration status for node ip-172-20-95-115.us-gov-east-1.compute.internal is completed
2019-01-29 01:16:51.348693 I | op-osd: starting 4 osd daemons on node ip-172-20-95-115.us-gov-east-1.compute.internal
2019-01-29 01:16:51.355840 I | op-osd: deployment for osd 11 already exists. updating if needed
2019-01-29 01:16:51.358169 I | op-k8sutil: updating deployment rook-ceph-osd-11
2019-01-29 01:16:53.369339 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-11
2019-01-29 01:16:53.369356 I | op-osd: started deployment for osd 11 (dir=false, type=)
2019-01-29 01:16:53.376398 I | op-osd: deployment for osd 2 already exists. updating if needed
2019-01-29 01:16:53.378066 I | op-k8sutil: updating deployment rook-ceph-osd-2
2019-01-29 01:16:55.391355 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-2
2019-01-29 01:16:55.391370 I | op-osd: started deployment for osd 2 (dir=false, type=)
2019-01-29 01:16:55.398684 I | op-osd: deployment for osd 5 already exists. updating if needed
2019-01-29 01:16:55.400349 I | op-k8sutil: updating deployment rook-ceph-osd-5
2019-01-29 01:16:57.411192 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-5
2019-01-29 01:16:57.411210 I | op-osd: started deployment for osd 5 (dir=false, type=)
2019-01-29 01:16:57.418246 I | op-osd: deployment for osd 8 already exists. updating if needed
2019-01-29 01:16:57.420105 I | op-k8sutil: updating deployment rook-ceph-osd-8
2019-01-29 01:16:59.431775 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-8
2019-01-29 01:16:59.431791 I | op-osd: started deployment for osd 8 (dir=false, type=)
2019-01-29 01:16:59.432942 I | op-osd: 4/4 node(s) completed osd provisioning
2019-01-29 01:16:59.432976 I | op-osd: checking if any nodes were removed
2019-01-29 01:16:59.438886 I | op-osd: processing 0 removed nodes
2019-01-29 01:16:59.438898 I | op-osd: done processing removed nodes
2019-01-29 01:16:59.438976 I | exec: Running command: ceph osd unset noscrub --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/265164770
2019-01-29 01:16:59.747852 I | exec: noscrub is unset
2019-01-29 01:16:59.747967 I | exec: Running command: ceph osd unset nodeep-scrub --cluster=rook-ceph --conf=/var/lib/rook/rook-ceph/rook-ceph.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/462548953
2019-01-29 01:17:00.806085 I | exec: nodeep-scrub is unset
2019-01-29 01:17:00.806173 E | op-cluster: failed to create cluster in namespace rook-ceph. failed to start the osds. 1 failures encountered while running osds in namespace rook-ceph: node ip-172-20-65-210.us-gov-east-1.compute.internal did not resolve to start osds

Most upvoted comments

@travisn Thank you, that was it.

The configmap name contained a shortened identifier for the node. Is there a way to get this identifier automatically? Never mind: the actual node name is in the configmap’s label.
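
For anyone else looking for it, listing the configmaps together with their labels is one way to map the shortened identifier back to a node name; the grep filter below is only an illustration of how the OSD status configmaps are named in this cluster:

# Show the OSD status configmaps and their labels; the node label on each
# configmap carries the full node name that the hashed configmap name encodes.
kubectl -n rook-ceph get configmaps --show-labels | grep rook-ceph-osd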

I wouldn’t expect Rook to move any pods off a node that has merely been “cordoned”; cordoning a node can be a pre-step to maintenance, but not necessarily. Someone could also cordon a node to isolate it and investigate a potential issue.

However, the cordon + drain combination is still valid for reducing the number of pods to move before maintenance.

In the end, I wouldn’t move anything if a node is not drained.

Am I missing something?