
@mmack
Created August 7, 2019 16:09
operator logs - multiple restarts combined
This file has been truncated.
2019-08-07 08:15:34.970472 I | rookcmd: starting Rook v1.0.4 with arguments '/usr/local/bin/rook ceph operator'
2019-08-07 08:15:34.970677 I | rookcmd: flag values: --alsologtostderr=false, --csi-attacher-image=quay.io/k8scsi/csi-attacher:v1.0.1, --csi-cephfs-image=quay.io/cephcsi/cephfsplugin:v1.0.0, --csi-cephfs-plugin-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin.yaml, --csi-cephfs-provisioner-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin-provisioner.yaml, --csi-enable-cephfs=false, --csi-enable-rbd=false, --csi-provisioner-image=quay.io/k8scsi/csi-provisioner:v1.0.1, --csi-rbd-image=quay.io/cephcsi/rbdplugin:v1.0.0, --csi-rbd-plugin-template-path=/etc/ceph-csi/rbd/csi-rbdplugin.yaml, --csi-rbd-provisioner-template-path=/etc/ceph-csi/rbd/csi-rbdplugin-provisioner.yaml, --csi-registrar-image=quay.io/k8scsi/csi-node-driver-registrar:v1.0.2, --csi-snapshotter-image=quay.io/k8scsi/csi-snapshotter:v1.0.1, --help=false, --log-flush-frequency=5s, --log-level=DEBUG, --log_backtrace_at=:0, --log_dir=, --log_file=, --logtostderr=true, --mon-healthcheck-interval=45s, --mon-out-timeout=10m0s, --skip_headers=false, --stderrthreshold=2, --v=0, --vmodule=
2019-08-07 08:15:34.975461 I | cephcmd: starting operator
2019-08-07 08:15:35.174011 I | op-agent: getting flexvolume dir path from FLEXVOLUME_DIR_PATH env var
2019-08-07 08:15:35.174045 I | op-agent: discovered flexvolume dir path from source env var. value: /var/lib/kubelet/volumeplugins
2019-08-07 08:15:35.174060 W | op-agent: Invalid ROOK_ENABLE_FSGROUP value "". Defaulting to "true".
2019-08-07 08:15:35.194290 I | op-agent: rook-ceph-agent daemonset already exists, updating ...
2019-08-07 08:15:35.214189 I | op-discover: rook-discover daemonset already exists, updating ...
2019-08-07 08:15:35.263608 I | operator: rook-provisioner ceph.rook.io/block started using ceph.rook.io flex vendor dir
I0807 08:15:35.263870 9 leaderelection.go:217] attempting to acquire leader lease rook-ceph-stage-primary/ceph.rook.io-block...
2019-08-07 08:15:35.264166 I | operator: rook-provisioner rook.io/block started using rook.io flex vendor dir
2019-08-07 08:15:35.264186 I | operator: Watching the current namespace for a cluster CRD
2019-08-07 08:15:35.264200 I | op-cluster: start watching clusters in all namespaces
2019-08-07 08:15:35.264231 I | op-cluster: Enabling hotplug orchestration: ROOK_DISABLE_DEVICE_HOTPLUG=
I0807 08:15:35.264294 9 leaderelection.go:217] attempting to acquire leader lease rook-ceph-stage-primary/rook.io-block...
2019-08-07 08:15:35.468758 I | op-cluster: start watching legacy rook clusters in all namespaces
2019-08-07 08:15:35.563786 I | op-cluster: starting cluster in namespace rook-ceph-stage-primary
2019-08-07 08:15:35.573772 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.573808 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.573822 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.573834 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.573845 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.573855 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.573866 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.573878 D | op-cluster: Skipping -> Node is not tolerable for cluster rook-ceph-stage-primary
2019-08-07 08:15:35.573893 D | op-cluster: Skipping -> Node is not tolerable for cluster rook-ceph-stage-primary
2019-08-07 08:15:35.573966 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.573978 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.573989 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.573999 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.574015 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.574032 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.574042 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.574053 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.574063 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.574074 D | op-cluster: Skipping -> Node is not tolerable for cluster rook-ceph-stage-primary
2019-08-07 08:15:35.574085 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.574097 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.574107 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:15:35.869491 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:36.485556 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:36.529362 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:37.087551 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:37.186820 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:37.802268 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:37.918118 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:38.075055 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:38.102276 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:38.475586 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:38.503847 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:38.750840 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:38.894848 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:39.056290 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:39.131732 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:39.423252 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:40.388068 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:40.405126 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:41.663150 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:41.671075 I | op-k8sutil: waiting for job rook-ceph-detect-version to complete...
2019-08-07 08:15:41.819030 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:41.879829 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:42.094045 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:45.863799 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:46.505032 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:46.550926 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:46.791127 I | op-cluster: Detected ceph image version: 14.2.1 nautilus
2019-08-07 08:15:46.791164 I | op-cluster: CephCluster rook-ceph-stage-primary status: Creating
2019-08-07 08:15:46.815365 D | op-mon: Acquiring lock for mon orchestration
2019-08-07 08:15:46.815386 D | op-mon: Acquired lock for mon orchestration
2019-08-07 08:15:46.815395 I | op-mon: start running mons
2019-08-07 08:15:46.815401 D | op-mon: establishing ceph cluster info
2019-08-07 08:15:46.862663 D | op-mon: found existing monitor secrets for cluster rook-ceph-stage-primary
2019-08-07 08:15:46.868854 I | op-mon: parsing mon endpoints: b=100.67.17.84:6789,a=100.70.46.205:6789,f=100.69.115.5:6789,g=100.66.122.247:6789,h=100.64.242.138:6789
2019-08-07 08:15:46.869165 I | op-mon: loaded: maxMonID=7, mons=map[b:0xc000f7c960 a:0xc000f7c9a0 f:0xc000f7c9e0 g:0xc000f7ca20 h:0xc000f7ca60], mapping=&{Node:map[f:0xc000f7ad20 g:0xc000f7ad50 h:0xc000f7ad80 a:0xc000f7abd0 b:0xc000f7ac60 c:0xc000f7ac90 d:0xc000f7acc0 e:0xc000f7acf0] Port:map[]}
2019-08-07 08:15:46.878860 D | op-mon: updating config map rook-ceph-mon-endpoints that already exists
2019-08-07 08:15:46.980140 I | op-mon: saved mon endpoints to config map map[data:h=100.64.242.138:6789,b=100.67.17.84:6789,a=100.70.46.205:6789,f=100.69.115.5:6789,g=100.66.122.247:6789 maxMonId:7 mapping:{"node":{"a":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"},"b":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"},"c":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"},"d":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"},"e":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"},"f":{"Name":"k8s-worker-102.lxstage.domain.com","Hostname":"k8s-worker-102.lxstage.domain.com","Address":"172.22.254.186"},"g":{"Name":"k8s-worker-103.lxstage.domain.com","Hostname":"k8s-worker-103.lxstage.domain.com","Address":"172.22.254.185"},"h":{"Name":"k8s-worker-104.lxstage.domain.com","Hostname":"k8s-worker-104.lxstage.domain.com","Address":"172.22.254.187"}},"port":{}}]
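
[editor's note] The two op-mon lines above show the operator reading mon endpoints back out of the rook-ceph-mon-endpoints ConfigMap as a flat name=ip:port list. A minimal Go sketch of that kind of parsing (illustrative only, not Rook's actual code; parseMonEndpoints is a made-up helper):

// Illustrative sketch, not Rook's parser: splits a
// "name=host:port,name=host:port" string like the one logged at
// "op-mon: parsing mon endpoints" into a name -> endpoint map.
package main

import (
	"fmt"
	"strings"
)

func parseMonEndpoints(raw string) map[string]string {
	mons := map[string]string{}
	for _, entry := range strings.Split(raw, ",") {
		parts := strings.SplitN(entry, "=", 2)
		if len(parts) != 2 {
			continue // skip malformed entries
		}
		mons[parts[0]] = parts[1]
	}
	return mons
}

func main() {
	raw := "b=100.67.17.84:6789,a=100.70.46.205:6789,f=100.69.115.5:6789"
	for name, endpoint := range parseMonEndpoints(raw) {
		fmt.Printf("mon %s -> %s\n", name, endpoint)
	}
}
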
2019-08-07 08:15:46.990759 D | op-config: Generated and stored config file:
[global]
mon_allow_pool_delete = true
mon_max_pg_per_osd = 1000
osd_pg_bits = 11
osd_pgp_bits = 11
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 100
osd_pool_default_pgp_num = 100
rbd_default_features = 3
fatal_signal_handlers = false
osd pool default pg num = 512
osd pool default pgp num = 512
osd pool default size = 3
osd pool default min size = 2
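
[editor's note] The generated file above carries both underscore-style keys (osd_pool_default_size = 1) and space-style entries (osd pool default size = 3). Ceph treats spaces and underscores in option names as interchangeable, so these name the same option, and the later occurrence effectively wins. A minimal sketch of that normalize-and-last-wins reading (assumption-level illustration, not Ceph's actual parser):

// Minimal sketch, not Ceph's parser: map spaces in option names to
// underscores so "osd pool default size" and "osd_pool_default_size"
// collapse to one key; when a key repeats, the later value wins,
// which is how the block at the bottom of the generated file shadows
// the defaults above it.
package main

import (
	"fmt"
	"strings"
)

func main() {
	lines := []string{
		"osd_pool_default_size = 1",
		"osd pool default size = 3",
	}
	opts := map[string]string{}
	for _, line := range lines {
		kv := strings.SplitN(line, "=", 2)
		if len(kv) != 2 {
			continue // skip non key=value lines
		}
		key := strings.ReplaceAll(strings.TrimSpace(kv[0]), " ", "_")
		opts[key] = strings.TrimSpace(kv[1])
	}
	fmt.Println(opts["osd_pool_default_size"]) // prints 3
}
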
2019-08-07 08:15:47.107174 D | op-config: updating config secret &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-config,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[{ceph.rook.io/v1 CephCluster rook-ceph-stage-primary 76235f05-b792-11e9-9b32-0050568460f6 <nil> 0xc000c66c6c}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string][]byte{},Type:kubernetes.io/rook,StringData:map[string]string{mon_host: [v2:100.66.122.247:3300,v1:100.66.122.247:6789],[v2:100.64.242.138:3300,v1:100.64.242.138:6789],[v2:100.67.17.84:3300,v1:100.67.17.84:6789],[v2:100.70.46.205:3300,v1:100.70.46.205:6789],[v2:100.69.115.5:3300,v1:100.69.115.5:6789],mon_initial_members: g,h,b,a,f,},}
2019-08-07 08:15:47.108942 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:47.221865 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:47.307554 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:15:47.307695 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:15:47.308020 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:15:47.506978 D | op-cfg-keyring: updating secret for rook-ceph-mons-keyring
2019-08-07 08:15:47.820149 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:47.906253 D | op-cfg-keyring: updating secret for rook-ceph-admin-keyring
2019-08-07 08:15:47.941113 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:48.105248 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:48.126638 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:48.384410 I | op-mon: targeting the mon count 5
2019-08-07 08:15:48.503586 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:48.522982 D | op-mon: there are 22 nodes available for 5 mons
2019-08-07 08:15:48.531396 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:48.763453 D | op-mon: mon pod on node k8s-worker-00.lxstage.domain.com
2019-08-07 08:15:48.763491 D | op-mon: mon pod on node k8s-worker-101.lxstage.domain.com
2019-08-07 08:15:48.763506 D | op-mon: mon pod on node k8s-worker-102.lxstage.domain.com
2019-08-07 08:15:48.763517 D | op-mon: mon pod on node k8s-worker-103.lxstage.domain.com
2019-08-07 08:15:48.763527 D | op-mon: mon pod on node k8s-worker-104.lxstage.domain.com
2019-08-07 08:15:48.763635 I | op-mon: Found 15 running nodes without mons
2019-08-07 08:15:48.763648 D | op-mon: mon b already assigned to a node, no need to assign
2019-08-07 08:15:48.763656 D | op-mon: mon a already assigned to a node, no need to assign
2019-08-07 08:15:48.763664 D | op-mon: mon f already assigned to a node, no need to assign
2019-08-07 08:15:48.763671 D | op-mon: mon g already assigned to a node, no need to assign
2019-08-07 08:15:48.763681 D | op-mon: mon h already assigned to a node, no need to assign
2019-08-07 08:15:48.763688 D | op-mon: mons have been assigned to nodes
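
[editor's note] The assignment pass above skips every mon that already has a node in the mapping; only an unassigned mon would be placed on one of the 15 running nodes without mons. A rough sketch of that logic (hypothetical, not Rook's implementation; node names shortened):

// Hedged sketch of the assignment pass suggested by the lines above,
// not Rook's code: mons that already have a node keep it; any
// unassigned mon would take the next free node.
package main

import "fmt"

func main() {
	assigned := map[string]string{ // mon -> node (from the mon mapping)
		"a": "k8s-worker-00", "b": "k8s-worker-101", "f": "k8s-worker-102",
		"g": "k8s-worker-103", "h": "k8s-worker-104",
	}
	freeNodes := []string{"k8s-worker-20", "k8s-worker-21"} // nodes without mons
	for _, mon := range []string{"b", "a", "f", "g", "h"} {
		if node, ok := assigned[mon]; ok {
			fmt.Printf("mon %s already assigned to %s, no need to assign\n", mon, node)
			continue
		}
		// otherwise take the next free node and assign it
		node := freeNodes[0]
		freeNodes = freeNodes[1:]
		assigned[mon] = node
	}
}
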
2019-08-07 08:15:48.763696 I | op-mon: checking for basic quorum with existing mons
2019-08-07 08:15:48.763712 D | op-k8sutil: creating service rook-ceph-mon-b
2019-08-07 08:15:48.768644 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:48.929591 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:48.951069 D | op-k8sutil: updating service %s
2019-08-07 08:15:49.078115 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:49.163165 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:49.444049 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:49.511084 I | op-mon: mon b endpoint are [v2:100.67.17.84:3300,v1:100.67.17.84:6789]
2019-08-07 08:15:49.511125 D | op-k8sutil: creating service rook-ceph-mon-a
2019-08-07 08:15:49.736984 D | op-k8sutil: updating service %s
2019-08-07 08:15:50.110154 I | op-mon: mon a endpoint are [v2:100.70.46.205:3300,v1:100.70.46.205:6789]
2019-08-07 08:15:50.110183 D | op-k8sutil: creating service rook-ceph-mon-f
2019-08-07 08:15:50.345645 D | op-k8sutil: updating service %s
2019-08-07 08:15:50.409198 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:50.427027 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:50.909707 I | op-mon: mon f endpoint are [v2:100.69.115.5:3300,v1:100.69.115.5:6789]
2019-08-07 08:15:50.909752 D | op-k8sutil: creating service rook-ceph-mon-g
2019-08-07 08:15:51.143482 D | op-k8sutil: updating service %s
2019-08-07 08:15:51.511371 I | op-mon: mon g endpoint are [v2:100.66.122.247:3300,v1:100.66.122.247:6789]
2019-08-07 08:15:51.511416 D | op-k8sutil: creating service rook-ceph-mon-h
2019-08-07 08:15:51.655176 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:51.737280 D | op-k8sutil: updating service %s
2019-08-07 08:15:51.845879 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:51.909671 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:52.163109 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:52.313741 I | op-mon: mon h endpoint are [v2:100.64.242.138:3300,v1:100.64.242.138:6789]
I0807 08:15:52.511248 9 leaderelection.go:227] successfully acquired lease rook-ceph-stage-primary/ceph.rook.io-block
I0807 08:15:52.511373 9 controller.go:769] Starting provisioner controller ceph.rook.io/block_rook-ceph-operator-6b8b758497-mjppr_882783de-b8eb-11e9-a796-0ef7d43f73a7!
I0807 08:15:52.511433 9 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph-stage-primary", Name:"ceph.rook.io-block", UID:"782fe0db-b792-11e9-9b32-0050568460f6", APIVersion:"v1", ResourceVersion:"309425206", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-6b8b758497-mjppr_882783de-b8eb-11e9-a796-0ef7d43f73a7 became leader
2019-08-07 08:15:52.709851 D | op-mon: updating config map rook-ceph-mon-endpoints that already exists
I0807 08:15:53.511641 9 controller.go:818] Started provisioner controller ceph.rook.io/block_rook-ceph-operator-6b8b758497-mjppr_882783de-b8eb-11e9-a796-0ef7d43f73a7!
I0807 08:15:53.511802 9 controller.go:1196] provision "admin-d0277887/datadir-zk-1" class "default": started
2019-08-07 08:15:53.706218 I | op-mon: saved mon endpoints to config map map[data:h=100.64.242.138:6789,b=100.67.17.84:6789,a=100.70.46.205:6789,f=100.69.115.5:6789,g=100.66.122.247:6789 maxMonId:7 mapping:{"node":{"a":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"},"b":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"},"c":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"},"d":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"},"e":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"},"f":{"Name":"k8s-worker-102.lxstage.domain.com","Hostname":"k8s-worker-102.lxstage.domain.com","Address":"172.22.254.186"},"g":{"Name":"k8s-worker-103.lxstage.domain.com","Hostname":"k8s-worker-103.lxstage.domain.com","Address":"172.22.254.185"},"h":{"Name":"k8s-worker-104.lxstage.domain.com","Hostname":"k8s-worker-104.lxstage.domain.com","Address":"172.22.254.187"}},"port":{}}]
I0807 08:15:54.370458 9 controller.go:1205] provision "admin-d0277887/datadir-zk-1" class "default": persistentvolume "pvc-95509a25-9be6-11e9-9a2e-0050568460f6" already exists, skipping
I0807 08:15:54.706616 9 leaderelection.go:227] successfully acquired lease rook-ceph-stage-primary/rook.io-block
I0807 08:15:54.706736 9 controller.go:769] Starting provisioner controller rook.io/block_rook-ceph-operator-6b8b758497-mjppr_8827a7f0-b8eb-11e9-a796-0ef7d43f73a7!
I0807 08:15:54.707055 9 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph-stage-primary", Name:"rook.io-block", UID:"782fdbfc-b792-11e9-9b32-0050568460f6", APIVersion:"v1", ResourceVersion:"309425234", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-6b8b758497-mjppr_8827a7f0-b8eb-11e9-a796-0ef7d43f73a7 became leader
I0807 08:15:55.707080 9 controller.go:818] Started provisioner controller rook.io/block_rook-ceph-operator-6b8b758497-mjppr_8827a7f0-b8eb-11e9-a796-0ef7d43f73a7!
2019-08-07 08:15:55.822268 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:55.905890 D | op-config: Generated and stored config file:
[global]
mon_allow_pool_delete = true
mon_max_pg_per_osd = 1000
osd_pg_bits = 11
osd_pgp_bits = 11
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 100
osd_pool_default_pgp_num = 100
rbd_default_features = 3
fatal_signal_handlers = false
osd pool default pg num = 512
osd pool default pgp num = 512
osd pool default size = 3
osd pool default min size = 2
2019-08-07 08:15:56.305880 D | op-config: updating config secret &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-config,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[{ceph.rook.io/v1 CephCluster rook-ceph-stage-primary 76235f05-b792-11e9-9b32-0050568460f6 <nil> 0xc000c66c6c}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string][]byte{},Type:kubernetes.io/rook,StringData:map[string]string{mon_host: [v2:100.64.242.138:3300,v1:100.64.242.138:6789],[v2:100.67.17.84:3300,v1:100.67.17.84:6789],[v2:100.70.46.205:3300,v1:100.70.46.205:6789],[v2:100.69.115.5:3300,v1:100.69.115.5:6789],[v2:100.66.122.247:3300,v1:100.66.122.247:6789],mon_initial_members: h,b,a,f,g,},}
2019-08-07 08:15:56.526364 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:56.573245 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:56.717255 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:15:56.717438 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:15:56.717570 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:15:56.719244 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:15:56.719427 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:15:56.719585 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:15:56.719613 D | op-mon: monConfig: %+v&{rook-ceph-mon-b b 100.67.17.84 6789 0xc000d7e0a0}
2019-08-07 08:15:56.719804 D | op-mon: Starting mon: rook-ceph-mon-b
2019-08-07 08:15:56.734031 I | op-mon: deployment for mon rook-ceph-mon-b already exists. updating if needed
2019-08-07 08:15:56.873159 I | op-k8sutil: updating deployment rook-ceph-mon-b
2019-08-07 08:15:56.970399 D | op-k8sutil: deployment rook-ceph-mon-b status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-b-6859887bb" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:15:56 +0000 UTC LastTransitionTime:2019-08-07 08:15:56 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:15:57.129243 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:57.244011 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:57.864032 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:57.958529 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:58.126207 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:58.152443 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:58.529207 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:58.560171 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:58.862419 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:58.949214 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:58.975461 D | op-k8sutil: deployment rook-ceph-mon-b status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-b-6859887bb" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:15:56 +0000 UTC LastTransitionTime:2019-08-07 08:15:56 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:15:59.102645 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:59.169981 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:15:59.464693 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:00.427977 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:00.457285 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:00.980805 D | op-k8sutil: deployment rook-ceph-mon-b status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-b-6859887bb" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:15:56 +0000 UTC LastTransitionTime:2019-08-07 08:15:56 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:01.672577 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:01.869940 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:01.940639 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:02.138782 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:02.985928 D | op-k8sutil: deployment rook-ceph-mon-b status={ObservedGeneration:2 Replicas:1 UpdatedReplicas:1 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:1 Conditions:[{Type:Available Status:False LastUpdateTime:2019-08-07 08:15:56 +0000 UTC LastTransitionTime:2019-08-07 08:15:56 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:16:02 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:ReplicaSetUpdated Message:ReplicaSet "rook-ceph-mon-b-5df554cc8c" is progressing.}] CollisionCount:<nil>}
2019-08-07 08:16:04.991403 D | op-k8sutil: deployment rook-ceph-mon-b status={ObservedGeneration:2 Replicas:1 UpdatedReplicas:1 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:1 Conditions:[{Type:Available Status:False LastUpdateTime:2019-08-07 08:15:56 +0000 UTC LastTransitionTime:2019-08-07 08:15:56 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:16:02 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:ReplicaSetUpdated Message:ReplicaSet "rook-ceph-mon-b-5df554cc8c" is progressing.}] CollisionCount:<nil>}
2019-08-07 08:16:05.863257 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:06.542304 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:06.592550 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:07.003248 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-b
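
[editor's note] The op-k8sutil lines between "updating deployment rook-ceph-mon-b" and "finished waiting" show a status poll on a roughly two-second cadence until the new ReplicaSet comes up. A self-contained sketch of such a wait loop (getStatus is a hypothetical stand-in for the real API call, not Rook's k8sutil):

// Illustrative wait loop mirroring the ~2s polling visible in the
// op-k8sutil lines above: keep polling until the updated replicas
// report ready or a timeout expires.
package main

import (
	"fmt"
	"time"
)

type deploymentStatus struct {
	UpdatedReplicas, ReadyReplicas int32
}

// getStatus is a hypothetical stand-in for a real API call.
func getStatus(name string) deploymentStatus {
	return deploymentStatus{UpdatedReplicas: 1, ReadyReplicas: 1}
}

func waitForUpdatedDeployment(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		s := getStatus(name)
		if s.UpdatedReplicas > 0 && s.ReadyReplicas >= s.UpdatedReplicas {
			fmt.Printf("finished waiting for updated deployment %s\n", name)
			return nil
		}
		time.Sleep(2 * time.Second) // matches the ~2s cadence in the log
	}
	return fmt.Errorf("timed out waiting for deployment %s", name)
}

func main() {
	_ = waitForUpdatedDeployment("rook-ceph-mon-b", 5*time.Minute)
}
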
2019-08-07 08:16:07.003295 D | op-mon: monConfig: %+v&{rook-ceph-mon-a a 100.70.46.205 6789 0xc000d7e1e0}
2019-08-07 08:16:07.003469 D | op-mon: Starting mon: rook-ceph-mon-a
2019-08-07 08:16:07.014533 I | op-mon: deployment for mon rook-ceph-mon-a already exists. updating if needed
2019-08-07 08:16:07.018670 I | op-k8sutil: updating deployment rook-ceph-mon-a
2019-08-07 08:16:07.075606 D | op-k8sutil: deployment rook-ceph-mon-a status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-a-6bd9c7f566" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:16:07.144857 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:07.362200 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:07.886313 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:07.980817 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:08.145790 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:08.170630 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:08.557508 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:08.585633 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:08.862437 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:08.978260 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:09.080890 D | op-k8sutil: deployment rook-ceph-mon-a status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-a-6bd9c7f566" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:07 +0000 UTC LastTransitionTime:2019-08-07 08:16:07 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:09.125252 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:09.203922 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:09.493240 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:10.450839 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:10.475024 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:11.087203 D | op-k8sutil: deployment rook-ceph-mon-a status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-a-6bd9c7f566" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:07 +0000 UTC LastTransitionTime:2019-08-07 08:16:07 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:11.694325 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:11.896825 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:11.964010 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:12.158431 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:13.092131 D | op-k8sutil: deployment rook-ceph-mon-a status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-a-6bd9c7f566" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:07 +0000 UTC LastTransitionTime:2019-08-07 08:16:07 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:15.097998 D | op-k8sutil: deployment rook-ceph-mon-a status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-a-6bd9c7f566" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:07 +0000 UTC LastTransitionTime:2019-08-07 08:16:07 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:15.862552 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:16.565863 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:16.644987 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:17.109591 D | op-k8sutil: deployment rook-ceph-mon-a status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-a-6bd9c7f566" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:07 +0000 UTC LastTransitionTime:2019-08-07 08:16:07 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:17.163643 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:17.288510 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:17.904336 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:17.998424 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:18.172428 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:18.189714 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:18.579258 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:18.612581 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:18.862522 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:18.998619 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:19.114190 D | op-k8sutil: deployment rook-ceph-mon-a status={ObservedGeneration:2 Replicas:1 UpdatedReplicas:1 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:1 Conditions:[{Type:Available Status:False LastUpdateTime:2019-08-07 08:16:07 +0000 UTC LastTransitionTime:2019-08-07 08:16:07 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:16:18 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:ReplicaSetUpdated Message:ReplicaSet "rook-ceph-mon-a-b6c75f5d7" is progressing.}] CollisionCount:<nil>}
2019-08-07 08:16:19.142422 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:19.224032 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:19.513740 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:20.470416 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:20.493311 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:21.118874 D | op-k8sutil: deployment rook-ceph-mon-a status={ObservedGeneration:2 Replicas:1 UpdatedReplicas:1 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:1 Conditions:[{Type:Available Status:False LastUpdateTime:2019-08-07 08:16:07 +0000 UTC LastTransitionTime:2019-08-07 08:16:07 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:16:18 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:ReplicaSetUpdated Message:ReplicaSet "rook-ceph-mon-a-b6c75f5d7" is progressing.}] CollisionCount:<nil>}
2019-08-07 08:16:21.707159 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:21.914587 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:21.994993 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:22.179208 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:23.124290 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-a
2019-08-07 08:16:23.124339 D | op-mon: monConfig: %+v&{rook-ceph-mon-f f 100.69.115.5 6789 0xc000d7e3c0}
2019-08-07 08:16:23.124500 D | op-mon: Starting mon: rook-ceph-mon-f
2019-08-07 08:16:23.134370 I | op-mon: deployment for mon rook-ceph-mon-f already exists. updating if needed
2019-08-07 08:16:23.138508 I | op-k8sutil: updating deployment rook-ceph-mon-f
2019-08-07 08:16:23.177822 D | op-k8sutil: deployment rook-ceph-mon-f status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-f-7788fbf6ff" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:16:25.182576 D | op-k8sutil: deployment rook-ceph-mon-f status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-f-7788fbf6ff" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:23 +0000 UTC LastTransitionTime:2019-08-07 08:16:23 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:25.874495 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:26.582665 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:26.664078 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:27.178498 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:27.192549 D | op-k8sutil: deployment rook-ceph-mon-f status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-f-7788fbf6ff" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:23 +0000 UTC LastTransitionTime:2019-08-07 08:16:23 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:27.311520 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:27.924234 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:28.017625 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:28.192813 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:28.210412 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:28.601781 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:28.633407 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:28.863399 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:29.022832 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:29.163273 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:29.197645 D | op-k8sutil: deployment rook-ceph-mon-f status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-f-7788fbf6ff" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:23 +0000 UTC LastTransitionTime:2019-08-07 08:16:23 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:29.259036 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:29.539256 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:30.489377 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:30.513396 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:31.203208 D | op-k8sutil: deployment rook-ceph-mon-f status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-f-7788fbf6ff" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:23 +0000 UTC LastTransitionTime:2019-08-07 08:16:23 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:31.733770 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:31.937657 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:32.021816 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:32.197720 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:33.209671 D | op-k8sutil: deployment rook-ceph-mon-f status={ObservedGeneration:2 Replicas:1 UpdatedReplicas:1 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:1 Conditions:[{Type:Available Status:False LastUpdateTime:2019-08-07 08:16:23 +0000 UTC LastTransitionTime:2019-08-07 08:16:23 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:16:31 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:ReplicaSetUpdated Message:ReplicaSet "rook-ceph-mon-f-7966c549fb" is progressing.}] CollisionCount:<nil>}
2019-08-07 08:16:35.215885 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-f
2019-08-07 08:16:35.215956 D | op-mon: monConfig: %+v&{rook-ceph-mon-g g 100.66.122.247 6789 0xc000d7e460}
2019-08-07 08:16:35.216119 D | op-mon: Starting mon: rook-ceph-mon-g
2019-08-07 08:16:35.225997 I | op-mon: deployment for mon rook-ceph-mon-g already exists. updating if needed
2019-08-07 08:16:35.230249 I | op-k8sutil: updating deployment rook-ceph-mon-g
2019-08-07 08:16:35.274256 D | op-k8sutil: deployment rook-ceph-mon-g status={ObservedGeneration:2 Replicas:1 UpdatedReplicas:0 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-g-9d47d458d" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:16:35.890823 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:36.599456 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:36.690096 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:37.196104 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:37.284955 D | op-k8sutil: deployment rook-ceph-mon-g status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-g-9d47d458d" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:35 +0000 UTC LastTransitionTime:2019-08-07 08:16:35 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:37.328795 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:37.944250 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:38.041418 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:38.210999 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:38.228863 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:38.627409 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:38.664305 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:38.881857 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:39.051084 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:39.186422 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:39.280189 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:39.289685 D | op-k8sutil: deployment rook-ceph-mon-g status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-g-9d47d458d" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:35 +0000 UTC LastTransitionTime:2019-08-07 08:16:35 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:39.558477 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:40.512772 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:40.536722 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:41.294532 D | op-k8sutil: deployment rook-ceph-mon-g status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-g-9d47d458d" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:35 +0000 UTC LastTransitionTime:2019-08-07 08:16:35 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:41.747691 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:41.951280 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:42.045598 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:42.216335 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:43.299987 D | op-k8sutil: deployment rook-ceph-mon-g status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-g-9d47d458d" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:35 +0000 UTC LastTransitionTime:2019-08-07 08:16:35 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:45.366712 D | op-k8sutil: deployment rook-ceph-mon-g status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-g-9d47d458d" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:35 +0000 UTC LastTransitionTime:2019-08-07 08:16:35 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:45.906487 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:46.621249 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:46.711758 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:47.216984 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:47.353040 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:47.379898 D | op-k8sutil: deployment rook-ceph-mon-g status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-g-9d47d458d" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:35 +0000 UTC LastTransitionTime:2019-08-07 08:16:35 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:47.961210 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:48.063579 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:48.228172 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:48.246325 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:48.652585 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:48.686357 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:48.905038 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:49.071220 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:49.213048 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:49.300572 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:49.384954 D | op-k8sutil: deployment rook-ceph-mon-g status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-g-9d47d458d" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:35 +0000 UTC LastTransitionTime:2019-08-07 08:16:35 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:49.585053 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:50.531886 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:50.556360 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:51.389959 D | op-k8sutil: deployment rook-ceph-mon-g status={ObservedGeneration:2 Replicas:1 UpdatedReplicas:1 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:1 Conditions:[{Type:Available Status:False LastUpdateTime:2019-08-07 08:16:35 +0000 UTC LastTransitionTime:2019-08-07 08:16:35 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:16:51 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:ReplicaSetUpdated Message:ReplicaSet "rook-ceph-mon-g-6b49f6c769" is progressing.}] CollisionCount:<nil>}
2019-08-07 08:16:51.764667 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:51.975531 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:52.067959 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:52.238657 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:53.395492 D | op-k8sutil: deployment rook-ceph-mon-g status={ObservedGeneration:2 Replicas:1 UpdatedReplicas:1 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:1 Conditions:[{Type:Available Status:False LastUpdateTime:2019-08-07 08:16:35 +0000 UTC LastTransitionTime:2019-08-07 08:16:35 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:16:51 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:ReplicaSetUpdated Message:ReplicaSet "rook-ceph-mon-g-6b49f6c769" is progressing.}] CollisionCount:<nil>}
2019-08-07 08:16:55.400320 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-g
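[editor note] The block above shows op-k8sutil polling the rook-ceph-mon-g Deployment roughly every two seconds until the new ReplicaSet has rolled out. A minimal sketch of such a wait loop with a recent client-go; the helper name and the exact completion condition are assumptions, not Rook's actual code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForUpdatedDeployment polls a Deployment until the controller has
// observed the latest generation and all replicas are updated and available,
// mirroring the ~2s status polls in the log above. (Illustrative only:
// the real operator's completion condition may differ.)
func waitForUpdatedDeployment(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		want := int32(1)
		if d.Spec.Replicas != nil {
			want = *d.Spec.Replicas
		}
		s := d.Status
		if s.ObservedGeneration >= d.Generation && s.UpdatedReplicas == want && s.AvailableReplicas == want {
			fmt.Printf("finished waiting for updated deployment %s\n", name)
			return nil
		}
		fmt.Printf("deployment %s status=%+v\n", name, s) // same dump as the log lines above
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}
```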
2019-08-07 08:16:55.400373 D | op-mon: monConfig: &{rook-ceph-mon-h h 100.64.242.138 6789 0xc000d7e4b0}
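[editor note] For orientation, the monConfig dump above prints a struct along these lines; the field names are guesses inferred from the logged values (the trailing hex value is a pointer-typed field), not necessarily Rook's exact definition:

```go
// Guessed shape of the struct behind the "monConfig" debug line above.
type monConfig struct {
	ResourceName string      // "rook-ceph-mon-h": name of the mon's Deployment/Service
	DaemonName   string      // "h": the Ceph daemon id
	PublicIP     string      // "100.64.242.138": service IP the mon is reachable on
	Port         int32       // 6789: the classic msgr v1 mon port
	DataPathMap  interface{} // 0xc000d7e4b0: pointer to per-daemon path config (guess)
}
```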
2019-08-07 08:16:55.400541 D | op-mon: Starting mon: rook-ceph-mon-h
2019-08-07 08:16:55.409180 I | op-mon: deployment for mon rook-ceph-mon-h already exists. updating if needed
2019-08-07 08:16:55.413078 I | op-k8sutil: updating deployment rook-ceph-mon-h
2019-08-07 08:16:55.425059 D | op-k8sutil: deployment rook-ceph-mon-h status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-h-7449c75457" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:16:55.928671 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:56.637894 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:56.762470 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:57.236899 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:57.373203 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:57.435532 D | op-k8sutil: deployment rook-ceph-mon-h status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-h-7449c75457" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:55 +0000 UTC LastTransitionTime:2019-08-07 08:16:55 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:57.982263 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:58.088031 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:58.245732 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:58.269474 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:58.681822 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:58.712005 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:58.924765 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:59.097587 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:59.239927 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:59.335941 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:16:59.440165 D | op-k8sutil: deployment rook-ceph-mon-h status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-h-7449c75457" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:55 +0000 UTC LastTransitionTime:2019-08-07 08:16:55 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:16:59.605666 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:00.560147 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:00.574881 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:01.444903 D | op-k8sutil: deployment rook-ceph-mon-h status={ObservedGeneration:2 Replicas:0 UpdatedReplicas:0 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:0 Conditions:[{Type:Progressing Status:True LastUpdateTime:2019-08-06 14:22:03 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-h-7449c75457" has successfully progressed.} {Type:Available Status:False LastUpdateTime:2019-08-07 08:16:55 +0000 UTC LastTransitionTime:2019-08-07 08:16:55 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.}] CollisionCount:<nil>}
2019-08-07 08:17:01.783044 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:01.991336 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:02.103442 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:02.254963 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:03.449782 D | op-k8sutil: deployment rook-ceph-mon-h status={ObservedGeneration:2 Replicas:1 UpdatedReplicas:1 ReadyReplicas:0 AvailableReplicas:0 UnavailableReplicas:1 Conditions:[{Type:Available Status:False LastUpdateTime:2019-08-07 08:16:55 +0000 UTC LastTransitionTime:2019-08-07 08:16:55 +0000 UTC Reason:MinimumReplicasUnavailable Message:Deployment does not have minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:17:01 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:ReplicaSetUpdated Message:ReplicaSet "rook-ceph-mon-h-858f958" is progressing.}] CollisionCount:<nil>}
2019-08-07 08:17:05.454291 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-h
2019-08-07 08:17:05.454323 I | op-mon: mons created: 5
2019-08-07 08:17:05.454343 I | op-mon: waiting for mon quorum with [b a f g h]
2019-08-07 08:17:05.507250 I | op-mon: mons running: [b a f g h]
2019-08-07 08:17:05.507702 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/938857954
2019-08-07 08:17:05.962339 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:06.652747 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:06.754597 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:07.264035 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:07.394812 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:08.000612 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:08.113535 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:08.266396 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:08.296032 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:08.705321 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:08.736433 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:08.943591 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:09.120390 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:09.260575 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:09.354077 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:09.627104 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:10.578064 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:10.593365 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:11.796471 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:12.008823 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:12.129545 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:12.281109 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:15.961109 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:16.666150 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:16.771993 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:17.279816 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:17.413929 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:18.021558 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:18.139894 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:18.289248 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:18.310348 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:18.732874 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:18.759578 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:18.964169 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:19.142920 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:19.276635 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:19.376408 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:19.647735 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:20.594692 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:20.611101 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:21.165733 I | exec: timed out
2019-08-07 08:17:21.165887 D | op-mon: failed to get mon_status, err: mon status failed. exit status 1
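[editor note] The pattern above (Running command → exec: timed out → mon status failed) repeats for every probe below. A rough Go sketch of a bounded exec like this, where a context deadline kills a hung CLI; the 20-second budget and the helper name are assumptions, only the argument list comes from the log:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// monStatus runs the same "ceph mon_status" probe as logged above, but
// bounded by a context so a hung CLI is killed instead of blocking forever.
func monStatus(cluster string) ([]byte, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second) // assumed budget
	defer cancel()
	base := "/var/lib/rook/" + cluster
	cmd := exec.CommandContext(ctx, "ceph", "mon_status",
		"--connect-timeout=15", // the librados-level timeout seen in the log
		"--cluster="+cluster,
		"--conf="+base+"/"+cluster+".config",
		"--keyring="+base+"/client.admin.keyring",
		"--format", "json")
	out, err := cmd.Output()
	if ctx.Err() == context.DeadlineExceeded {
		return nil, fmt.Errorf("mon status failed. timed out")
	}
	if err != nil {
		return nil, fmt.Errorf("mon status failed. %v", err)
	}
	return out, nil
}

func main() {
	if _, err := monStatus("rook-ceph-stage-primary"); err != nil {
		fmt.Println(err)
	}
}
```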
2019-08-07 08:17:21.812569 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:22.038662 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:22.159787 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:22.301286 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:25.978209 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:26.295000 I | op-mon: mons running: [b a f g h]
2019-08-07 08:17:26.295248 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/842899929
2019-08-07 08:17:26.684319 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:26.794552 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:27.302705 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:27.469497 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:28.049426 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:28.160279 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:28.306569 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:28.328980 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:28.759201 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:28.783577 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:28.995080 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:29.162225 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:29.294409 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:29.394892 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:29.673306 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:30.611749 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:30.635453 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:31.830448 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:32.054955 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:32.182104 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:32.320364 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:35.991748 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:36.699639 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:36.813036 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:37.317149 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:37.466473 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:38.068851 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:38.176232 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:38.330554 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:38.345054 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:38.785329 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:38.810237 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:39.014408 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:39.185544 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:39.319438 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:39.415978 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:39.692689 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:40.633509 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:40.653456 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:41.846852 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:41.880743 I | exec: timed out
2019-08-07 08:17:41.880875 D | op-mon: failed to get mon_status, err: mon status failed. exit status 1
2019-08-07 08:17:42.076226 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:42.209567 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:42.340813 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:46.006061 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:46.716547 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:46.835954 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:46.940979 I | op-mon: mon h is not yet running
2019-08-07 08:17:46.941019 I | op-mon: mons running: [b a f g]
2019-08-07 08:17:46.941244 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/641573220
2019-08-07 08:17:47.363667 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:47.492806 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:48.093832 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:48.196499 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:48.347242 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:48.363297 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:48.816282 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:48.834217 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:49.035772 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:49.208003 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:49.339346 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:49.434754 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:49.720111 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:50.654885 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:50.676668 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:51.861379 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:52.096128 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:52.236838 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:52.357681 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:56.027051 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:56.734378 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:56.852407 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:57.353227 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:57.518103 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:58.111177 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:58.214239 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:58.364128 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:58.386542 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:58.840104 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:58.857079 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:59.058047 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:59.238559 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:59.360201 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:59.460641 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:17:59.738035 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:00.676400 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:00.698844 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:01.877290 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:02.118065 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:02.262779 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:02.382693 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:02.465536 I | exec: timed out
2019-08-07 08:18:02.465685 D | op-mon: failed to get mon_status, err: mon status failed. exit status 1
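[editor note] At this point the operator settles into its quorum-wait loop: list which mons are running, note any that are not (mon h here), re-probe, sleep, retry. A compressed, self-contained sketch of that control flow, with invented stand-ins for the real Kubernetes and Ceph lookups:

```go
package main

import (
	"fmt"
	"time"
)

// Illustrative stand-ins; the real operator asks the Kubernetes API and the
// ceph CLI for these answers.
func monsRunning() []string           { return []string{"b", "a", "f", "g"} }
func inQuorum(expected []string) bool { return false }

// waitForQuorum sketches the loop in the log: probe, log failures, sleep,
// retry, until quorum is reached or an overall deadline expires.
func waitForQuorum(expected []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		fmt.Printf("mons running: %v\n", monsRunning())
		if inQuorum(expected) {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for mon quorum with %v", expected)
		}
		time.Sleep(5 * time.Second) // short pause after each ~15s probe, as in the log
	}
}

func main() {
	_ = waitForQuorum([]string{"b", "a", "f", "g", "h"}, 30*time.Second)
}
```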
2019-08-07 08:18:06.048075 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:06.752161 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:06.879430 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:07.372577 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:07.521862 I | op-mon: mon h is not yet running
2019-08-07 08:18:07.521899 I | op-mon: mons running: [b a f g]
2019-08-07 08:18:07.522138 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/124545395
2019-08-07 08:18:07.539698 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:08.163833 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:08.233331 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:08.387386 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:08.406595 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:08.865544 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:08.886556 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:09.086259 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:09.262023 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:09.382072 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:09.485121 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:09.842275 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:10.692210 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:10.716090 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:11.893003 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:12.132998 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:12.288482 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:12.399959 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:16.066260 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:16.768226 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:16.898266 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:17.387791 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:17.566207 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:18.151169 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:18.248799 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:18.417569 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:18.426025 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:18.892266 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:18.909398 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:19.112717 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:19.282741 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:19.399748 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:19.505112 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:19.873376 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:20.715067 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:20.736145 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:21.913508 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:22.148052 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:22.316806 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:22.423065 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:23.065672 I | exec: timed out
2019-08-07 08:18:23.065828 D | op-mon: failed to get mon_status, err: mon status failed. exit status 1
2019-08-07 08:18:26.090634 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:26.781770 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:26.923931 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:27.403776 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:27.588083 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:28.167959 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:28.213814 I | op-mon: mon h is not yet running
2019-08-07 08:18:28.213848 I | op-mon: mons running: [b a f g]
2019-08-07 08:18:28.214032 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/125074486
2019-08-07 08:18:28.267346 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:28.464100 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:28.465006 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:28.920926 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:28.931406 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:29.141205 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:29.305229 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:29.418793 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:29.527272 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:29.894532 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:30.737318 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:30.757517 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:31.929245 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:32.164083 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:32.334745 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:32.439665 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:36.109746 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:36.800009 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:36.944309 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:37.427915 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:37.615369 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:38.191508 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:38.282879 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:38.455818 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:38.463096 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:38.952303 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:38.959967 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:39.168783 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:39.326877 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:39.438659 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:39.548172 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:39.921157 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:40.752888 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:40.782776 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:41.945642 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:42.181506 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:42.359523 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:42.464268 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:43.865685 I | exec: timed out
2019-08-07 08:18:43.865826 D | op-mon: failed to get mon_status, err: mon status failed. exit status 1
2019-08-07 08:18:46.125123 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:46.862154 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:46.959535 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:47.442308 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:47.638678 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:48.205728 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:48.302418 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:48.475817 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:48.489566 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:48.932249 I | op-mon: mon h is not yet running
2019-08-07 08:18:48.932286 I | op-mon: mons running: [b a f g]
2019-08-07 08:18:48.932455 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/716009501
2019-08-07 08:18:48.978280 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:48.987545 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:49.263887 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:49.363646 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:49.463753 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:49.566785 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:49.939394 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:50.772032 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:50.802116 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:51.959583 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:52.202393 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:52.385765 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:52.484501 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:56.139266 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:56.840506 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:56.982817 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:57.460810 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:57.665702 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:58.227294 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:58.324471 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:58.497473 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:58.511091 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:59.005990 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:59.012176 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:59.214482 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:59.363153 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:59.501536 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:59.596400 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:18:59.967426 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:00.789261 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:00.824855 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:01.981795 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:02.223236 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:02.417658 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:02.508971 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:06.163921 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:06.187026 I | exec: timed out
2019-08-07 08:19:04.461 7fa5677fe700 1 librados: shutdown
2019-08-07 08:19:06.187156 D | op-mon: failed to get mon_status, err: mon status failed. signal: segmentation fault (core dumped)
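[editor note] Unlike the earlier probes, this one did not merely time out: the ceph CLI died with SIGSEGV ("core dumped") just after librados logged its shutdown. In Go, death-by-signal surfaces through *exec.ExitError, roughly as follows (the invocation is a placeholder):

```go
package main

import (
	"log"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("ceph", "mon_status", "--format", "json") // placeholder args
	if err := cmd.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			// WaitStatus distinguishes a plain "exit status 1" from a signal
			// death such as the "signal: segmentation fault (core dumped)" above.
			if ws, ok := exitErr.Sys().(syscall.WaitStatus); ok && ws.Signaled() {
				log.Printf("ceph killed by signal: %v", ws.Signal())
				return
			}
		}
		log.Printf("ceph exited with error: %v", err)
	}
}
```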
2019-08-07 08:19:06.862631 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:07.012956 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:07.484079 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:07.689546 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:08.245061 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:08.343892 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:08.514050 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:08.529679 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:09.038561 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:09.062301 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:09.233748 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:09.387953 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:09.518797 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:09.611821 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:09.988356 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:10.862148 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:10.863621 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:11.473681 I | op-mon: mon h is not yet running
2019-08-07 08:19:11.473717 I | op-mon: mons running: [b a f g]
2019-08-07 08:19:11.473996 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/167156440
2019-08-07 08:19:11.997991 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:12.263796 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:12.441378 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:12.563153 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:13.509698 I | exec: 2019-08-07 08:19:12.372 7fb7ded09700 1 librados: starting msgr at
2019-08-07 08:19:12.372 7fb7ded09700 1 librados: starting objecter
2019-08-07 08:19:12.373 7fb7ded09700 1 librados: setting wanted keys
2019-08-07 08:19:12.373 7fb7ded09700 1 librados: calling monclient init
2019-08-07 08:19:12.463 7fb7ded09700 1 librados: init done
2019-08-07 08:19:13.380 7fb7ded09700 10 librados: watch_flush enter
2019-08-07 08:19:13.380 7fb7ded09700 10 librados: watch_flush exit
2019-08-07 08:19:13.381 7fb7ded09700 1 librados: shutdown
2019-08-07 08:19:13.510172 D | cephclient: MON STATUS: {Quorum:[0 1 2 3] MonMap:{Mons:[{Name:a Rank:0 Address:100.70.46.205:6789/0} {Name:b Rank:1 Address:100.67.17.84:6789/0} {Name:f Rank:2 Address:100.69.115.5:6789/0} {Name:g Rank:3 Address:100.66.122.247:6789/0} {Name:h Rank:4 Address:100.64.242.138:6789/0}]}}
2019-08-07 08:19:13.510200 I | op-mon: Monitors in quorum: [a b f g]
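Worth noting: the monmap already lists five mons (a, b, f, g and the freshly created h), but the quorum ranks [0 1 2 3] resolve to only [a b f g]; mon h was reported as not yet running a few lines earlier. A quick way to watch it join, assuming jq is available in the toolbox, is:

    ceph quorum_status --format json | jq -r '.quorum_names[]'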
2019-08-07 08:19:13.510216 I | exec: Running command: ceph version
2019-08-07 08:19:15.664463 D | cephclient: ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
2019-08-07 08:19:15.664527 I | exec: Running command: ceph versions
2019-08-07 08:19:16.174026 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:16.870716 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:17.063941 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:17.563997 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:17.763563 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:18.075798 D | cephclient: {
"mon": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 5
},
"mgr": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 1
},
"osd": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 48
},
"mds": {},
"overall": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 54
}
}
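Every daemon reports the same release here, so the version probe finds nothing to converge. The same summary can be pulled by hand; `ceph versions` already emits JSON, so (again assuming jq) the overall tally is just:

    ceph versions | jq '.overall'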
2019-08-07 08:19:18.076019 I | exec: Running command: ceph mon enable-msgr2
2019-08-07 08:19:18.270835 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:18.362440 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:18.563975 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:18.564811 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:19.064538 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:19.077472 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:19.263720 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:19.463695 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:19.563633 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:19.632646 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:20.063849 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:20.373199 I | cephclient: successfully enabled msgr2 protocol
2019-08-07 08:19:20.373257 D | op-mon: mon endpoints used are: g=100.66.122.247:6789,h=100.64.242.138:6789,b=100.67.17.84:6789,a=100.70.46.205:6789,f=100.69.115.5:6789
2019-08-07 08:19:20.373268 D | op-mon: Released lock for mon orchestration
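enable-msgr2 flips the monitors to advertise the Nautilus v2 wire protocol on port 3300 alongside the legacy v1 port 6789 still recorded in the endpoint list above. Whether the v2 addresses actually took can be checked with:

    ceph mon dump | grep 'mon\.'
    # after msgr2, expect e.g.  0: [v2:100.70.46.205:3300/0,v1:100.70.46.205:6789/0] mon.a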
2019-08-07 08:19:20.373285 I | op-mgr: start running mgr
2019-08-07 08:19:20.373474 I | exec: Running command: ceph auth get-or-create-key mgr.a mon allow * mds allow * osd allow * --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/724360279
2019-08-07 08:19:20.862748 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:20.873172 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:22.063170 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:22.263543 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:22.468733 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:22.547009 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:22.904290 I | exec: 2019-08-07 08:19:21.387 7f1c79ec3700 1 librados: starting msgr at
2019-08-07 08:19:21.387 7f1c79ec3700 1 librados: starting objecter
2019-08-07 08:19:21.462 7f1c79ec3700 1 librados: setting wanted keys
2019-08-07 08:19:21.462 7f1c79ec3700 1 librados: calling monclient init
2019-08-07 08:19:21.468 7f1c79ec3700 1 librados: init done
2019-08-07 08:19:22.774 7f1c79ec3700 10 librados: watch_flush enter
2019-08-07 08:19:22.774 7f1c79ec3700 10 librados: watch_flush exit
2019-08-07 08:19:22.775 7f1c79ec3700 1 librados: shutdown
2019-08-07 08:19:22.908363 D | op-mgr: legacy mgr key rook-ceph-mgr-a is already removed
2019-08-07 08:19:22.911772 D | op-cfg-keyring: updating secret for rook-ceph-mgr-a-keyring
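The get-or-create-key call above mints (or re-reads) the mgr.a daemon key with the broad mon/mds/osd caps Rook gives managers, and the operator then mirrors it into the rook-ceph-mgr-a-keyring secret. Both sides can be inspected directly:

    ceph auth get mgr.a
    kubectl -n rook-ceph-stage-primary get secret rook-ceph-mgr-a-keyring -o yaml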
2019-08-07 08:19:22.916394 I | exec: Running command: ceph config-key get mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/062735818
2019-08-07 08:19:24.974821 I | exec: 2019-08-07 08:19:23.762 7f698614d700 1 librados: starting msgr at
2019-08-07 08:19:23.762 7f698614d700 1 librados: starting objecter
2019-08-07 08:19:23.763 7f698614d700 1 librados: setting wanted keys
2019-08-07 08:19:23.763 7f698614d700 1 librados: calling monclient init
2019-08-07 08:19:23.776 7f698614d700 1 librados: init done
Error ENOENT: error obtaining 'mgr/dashboard/server_addr': (2) No such file or directory
2019-08-07 08:19:24.874 7f698614d700 10 librados: watch_flush enter
2019-08-07 08:19:24.874 7f698614d700 10 librados: watch_flush exit
2019-08-07 08:19:24.875 7f698614d700 1 librados: shutdown
2019-08-07 08:19:24.975132 I | exec: Running command: ceph config-key del mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/351026337
2019-08-07 08:19:26.263832 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:26.885873 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:27.062726 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:27.072155 I | exec: 2019-08-07 08:19:25.870 7f3dbce10700 1 librados: starting msgr at
2019-08-07 08:19:25.870 7f3dbce10700 1 librados: starting objecter
2019-08-07 08:19:25.870 7f3dbce10700 1 librados: setting wanted keys
2019-08-07 08:19:25.870 7f3dbce10700 1 librados: calling monclient init
2019-08-07 08:19:25.876 7f3dbce10700 1 librados: init done
no such key 'mgr/dashboard/server_addr'
2019-08-07 08:19:27.002 7f3dbce10700 10 librados: watch_flush enter
2019-08-07 08:19:27.002 7f3dbce10700 10 librados: watch_flush exit
2019-08-07 08:19:27.003 7f3dbce10700 1 librados: shutdown
2019-08-07 08:19:27.072328 I | op-mgr: clearing http bind fix mod=dashboard ver=12.0.0 luminous changed=false err=<nil>
2019-08-07 08:19:27.072473 I | exec: Running command: ceph config-key get mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/481584012
2019-08-07 08:19:27.527578 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:27.763855 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:28.362156 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:28.382874 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:28.557736 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:28.569295 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:29.087744 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:29.098900 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:29.276178 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:29.463800 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:29.465300 I | exec: 2019-08-07 08:19:27.983 7f1e14b1f700 1 librados: starting msgr at
2019-08-07 08:19:27.983 7f1e14b1f700 1 librados: starting objecter
2019-08-07 08:19:27.984 7f1e14b1f700 1 librados: setting wanted keys
2019-08-07 08:19:27.984 7f1e14b1f700 1 librados: calling monclient init
2019-08-07 08:19:28.066 7f1e14b1f700 1 librados: init done
Error ENOENT: error obtaining 'mgr/dashboard/a/server_addr': (2) No such file or directory
2019-08-07 08:19:29.361 7f1e14b1f700 10 librados: watch_flush enter
2019-08-07 08:19:29.361 7f1e14b1f700 10 librados: watch_flush exit
2019-08-07 08:19:29.362 7f1e14b1f700 1 librados: shutdown
2019-08-07 08:19:29.465529 I | exec: Running command: ceph config-key del mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/074820987
2019-08-07 08:19:29.561650 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:29.662199 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:30.063779 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:30.863812 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:30.891614 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:31.807305 I | exec: 2019-08-07 08:19:30.385 7fcf33bc9700 1 librados: starting msgr at
2019-08-07 08:19:30.385 7fcf33bc9700 1 librados: starting objecter
2019-08-07 08:19:30.461 7fcf33bc9700 1 librados: setting wanted keys
2019-08-07 08:19:30.461 7fcf33bc9700 1 librados: calling monclient init
2019-08-07 08:19:30.467 7fcf33bc9700 1 librados: init done
no such key 'mgr/dashboard/a/server_addr'
2019-08-07 08:19:31.676 7fcf33bc9700 10 librados: watch_flush enter
2019-08-07 08:19:31.676 7fcf33bc9700 10 librados: watch_flush exit
2019-08-07 08:19:31.677 7fcf33bc9700 1 librados: shutdown
2019-08-07 08:19:31.807483 I | op-mgr: clearing http bind fix mod=dashboard ver=12.0.0 luminous changed=false err=<nil>
2019-08-07 08:19:31.807622 I | exec: Running command: ceph config get mgr.a mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/459142814
2019-08-07 08:19:32.063125 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:32.280359 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:32.562349 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:32.564433 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:34.071122 I | exec: 2019-08-07 08:19:32.775 7fd4dc943700 1 librados: starting msgr at
2019-08-07 08:19:32.775 7fd4dc943700 1 librados: starting objecter
2019-08-07 08:19:32.775 7fd4dc943700 1 librados: setting wanted keys
2019-08-07 08:19:32.775 7fd4dc943700 1 librados: calling monclient init
2019-08-07 08:19:32.862 7fd4dc943700 1 librados: init done
2019-08-07 08:19:33.967 7fd4dc943700 10 librados: watch_flush enter
2019-08-07 08:19:33.967 7fd4dc943700 10 librados: watch_flush exit
2019-08-07 08:19:33.968 7fd4dc943700 1 librados: shutdown
2019-08-07 08:19:34.071432 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/009826661
2019-08-07 08:19:36.212456 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:36.274484 I | exec: 2019-08-07 08:19:34.976 7f90a11f6700 1 librados: starting msgr at
2019-08-07 08:19:34.976 7f90a11f6700 1 librados: starting objecter
2019-08-07 08:19:34.976 7f90a11f6700 1 librados: setting wanted keys
2019-08-07 08:19:34.976 7f90a11f6700 1 librados: calling monclient init
2019-08-07 08:19:35.061 7f90a11f6700 1 librados: init done
2019-08-07 08:19:36.203 7f90a11f6700 10 librados: watch_flush enter
2019-08-07 08:19:36.203 7f90a11f6700 10 librados: watch_flush exit
2019-08-07 08:19:36.204 7f90a11f6700 1 librados: shutdown
2019-08-07 08:19:36.274651 I | op-mgr: clearing http bind fix mod=dashboard ver=13.0.0 mimic changed=true err=<nil>
2019-08-07 08:19:36.274790 I | exec: Running command: ceph config get mgr.a mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/501342080
2019-08-07 08:19:36.962495 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:37.077281 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:37.563701 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:37.779554 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:38.362376 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:38.462321 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:38.573921 I | exec: 2019-08-07 08:19:37.261 7f6c8e628700 1 librados: starting msgr at
2019-08-07 08:19:37.261 7f6c8e628700 1 librados: starting objecter
2019-08-07 08:19:37.262 7f6c8e628700 1 librados: setting wanted keys
2019-08-07 08:19:37.262 7f6c8e628700 1 librados: calling monclient init
2019-08-07 08:19:37.267 7f6c8e628700 1 librados: init done
2019-08-07 08:19:38.499 7f6c8e628700 10 librados: watch_flush enter
2019-08-07 08:19:38.500 7f6c8e628700 10 librados: watch_flush exit
2019-08-07 08:19:38.501 7f6c8e628700 1 librados: shutdown
2019-08-07 08:19:38.574128 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/847906527
2019-08-07 08:19:38.574872 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:38.588101 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:39.163967 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:39.165117 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:39.295434 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:39.463278 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:39.585108 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:39.762252 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:40.063482 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:40.878400 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:40.962314 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:41.062097 I | exec: 2019-08-07 08:19:39.661 7f9e80430700 1 librados: starting msgr at
2019-08-07 08:19:39.661 7f9e80430700 1 librados: starting objecter
2019-08-07 08:19:39.662 7f9e80430700 1 librados: setting wanted keys
2019-08-07 08:19:39.662 7f9e80430700 1 librados: calling monclient init
2019-08-07 08:19:39.667 7f9e80430700 1 librados: init done
2019-08-07 08:19:40.897 7f9e80430700 10 librados: watch_flush enter
2019-08-07 08:19:40.897 7f9e80430700 10 librados: watch_flush exit
2019-08-07 08:19:40.899 7f9e80430700 1 librados: shutdown
2019-08-07 08:19:41.062271 I | op-mgr: clearing http bind fix mod=dashboard ver=13.0.0 mimic changed=true err=<nil>
2019-08-07 08:19:41.062414 I | exec: Running command: ceph config-key get mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/847055538
2019-08-07 08:19:42.057058 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:42.301731 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:42.563864 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:42.589965 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:43.265031 I | exec: 2019-08-07 08:19:41.971 7f9b8bd32700 1 librados: starting msgr at
2019-08-07 08:19:41.971 7f9b8bd32700 1 librados: starting objecter
2019-08-07 08:19:41.972 7f9b8bd32700 1 librados: setting wanted keys
2019-08-07 08:19:41.972 7f9b8bd32700 1 librados: calling monclient init
2019-08-07 08:19:41.981 7f9b8bd32700 1 librados: init done
Error ENOENT: error obtaining 'mgr/prometheus/server_addr': (2) No such file or directory
2019-08-07 08:19:43.084 7f9b8bd32700 10 librados: watch_flush enter
2019-08-07 08:19:43.084 7f9b8bd32700 10 librados: watch_flush exit
2019-08-07 08:19:43.162 7f9b8bd32700 1 librados: shutdown
2019-08-07 08:19:43.265301 I | exec: Running command: ceph config-key del mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/067831913
2019-08-07 08:19:45.374629 I | exec: 2019-08-07 08:19:44.086 7f5f277f5700 1 librados: starting msgr at
2019-08-07 08:19:44.086 7f5f277f5700 1 librados: starting objecter
2019-08-07 08:19:44.087 7f5f277f5700 1 librados: setting wanted keys
2019-08-07 08:19:44.087 7f5f277f5700 1 librados: calling monclient init
2019-08-07 08:19:44.170 7f5f277f5700 1 librados: init done
no such key 'mgr/prometheus/server_addr'
2019-08-07 08:19:45.267 7f5f277f5700 10 librados: watch_flush enter
2019-08-07 08:19:45.268 7f5f277f5700 10 librados: watch_flush exit
2019-08-07 08:19:45.269 7f5f277f5700 1 librados: shutdown
2019-08-07 08:19:45.374811 I | op-mgr: clearing http bind fix mod=prometheus ver=12.0.0 luminous changed=false err=<nil>
2019-08-07 08:19:45.375006 I | exec: Running command: ceph config-key get mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/576681140
2019-08-07 08:19:46.264482 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:47.063733 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:47.264004 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:47.566318 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:47.772332 I | exec: 2019-08-07 08:19:46.180 7fa4d3a26700 1 librados: starting msgr at
2019-08-07 08:19:46.180 7fa4d3a26700 1 librados: starting objecter
2019-08-07 08:19:46.180 7fa4d3a26700 1 librados: setting wanted keys
2019-08-07 08:19:46.180 7fa4d3a26700 1 librados: calling monclient init
2019-08-07 08:19:46.264 7fa4d3a26700 1 librados: init done
Error ENOENT: error obtaining 'mgr/prometheus/a/server_addr': (2) No such file or directory
2019-08-07 08:19:47.665 7fa4d3a26700 10 librados: watch_flush enter
2019-08-07 08:19:47.666 7fa4d3a26700 10 librados: watch_flush exit
2019-08-07 08:19:47.667 7fa4d3a26700 1 librados: shutdown
2019-08-07 08:19:47.772638 I | exec: Running command: ceph config-key del mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/602111107
2019-08-07 08:19:47.800482 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:48.362257 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:48.462446 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:48.663993 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:48.664867 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:49.164057 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:49.165179 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:49.363786 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:49.486087 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:49.663804 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:49.762630 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:50.072213 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:50.075235 I | exec: 2019-08-07 08:19:48.588 7f2f26a5e700 1 librados: starting msgr at
2019-08-07 08:19:48.588 7f2f26a5e700 1 librados: starting objecter
2019-08-07 08:19:48.661 7f2f26a5e700 1 librados: setting wanted keys
2019-08-07 08:19:48.661 7f2f26a5e700 1 librados: calling monclient init
2019-08-07 08:19:48.668 7f2f26a5e700 1 librados: init done
no such key 'mgr/prometheus/a/server_addr'
2019-08-07 08:19:50.006 7f2f26a5e700 10 librados: watch_flush enter
2019-08-07 08:19:50.006 7f2f26a5e700 10 librados: watch_flush exit
2019-08-07 08:19:50.007 7f2f26a5e700 1 librados: shutdown
2019-08-07 08:19:50.075407 I | op-mgr: clearing http bind fix mod=prometheus ver=12.0.0 luminous changed=false err=<nil>
2019-08-07 08:19:50.075551 I | exec: Running command: ceph config get mgr.a mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/594296326
2019-08-07 08:19:50.963994 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:50.965392 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:52.163183 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:52.323684 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:52.472484 I | exec: 2019-08-07 08:19:51.075 7f74e559b700 1 librados: starting msgr at
2019-08-07 08:19:51.075 7f74e559b700 1 librados: starting objecter
2019-08-07 08:19:51.076 7f74e559b700 1 librados: setting wanted keys
2019-08-07 08:19:51.076 7f74e559b700 1 librados: calling monclient init
2019-08-07 08:19:51.165 7f74e559b700 1 librados: init done
2019-08-07 08:19:52.395 7f74e559b700 10 librados: watch_flush enter
2019-08-07 08:19:52.395 7f74e559b700 10 librados: watch_flush exit
2019-08-07 08:19:52.396 7f74e559b700 1 librados: shutdown
2019-08-07 08:19:52.472888 I | exec: Running command: ceph config rm mgr.a mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/943357357
2019-08-07 08:19:52.664026 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:52.664827 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:54.773036 I | exec: 2019-08-07 08:19:53.377 7fc23edd5700 1 librados: starting msgr at
2019-08-07 08:19:53.377 7fc23edd5700 1 librados: starting objecter
2019-08-07 08:19:53.382 7fc23edd5700 1 librados: setting wanted keys
2019-08-07 08:19:53.382 7fc23edd5700 1 librados: calling monclient init
2019-08-07 08:19:53.475 7fc23edd5700 1 librados: init done
2019-08-07 08:19:54.697 7fc23edd5700 10 librados: watch_flush enter
2019-08-07 08:19:54.697 7fc23edd5700 10 librados: watch_flush exit
2019-08-07 08:19:54.698 7fc23edd5700 1 librados: shutdown
2019-08-07 08:19:54.773236 I | op-mgr: clearing http bind fix mod=prometheus ver=13.0.0 mimic changed=false err=<nil>
2019-08-07 08:19:54.773384 I | exec: Running command: ceph config get mgr.a mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/875147048
2019-08-07 08:19:56.270843 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:56.962658 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:57.069683 I | exec: 2019-08-07 08:19:55.678 7f4d15c1c700 1 librados: starting msgr at
2019-08-07 08:19:55.678 7f4d15c1c700 1 librados: starting objecter
2019-08-07 08:19:55.762 7f4d15c1c700 1 librados: setting wanted keys
2019-08-07 08:19:55.762 7f4d15c1c700 1 librados: calling monclient init
2019-08-07 08:19:55.767 7f4d15c1c700 1 librados: init done
2019-08-07 08:19:56.962 7f4d15c1c700 10 librados: watch_flush enter
2019-08-07 08:19:56.962 7f4d15c1c700 10 librados: watch_flush exit
2019-08-07 08:19:56.963 7f4d15c1c700 1 librados: shutdown
2019-08-07 08:19:57.070019 I | exec: Running command: ceph config rm mgr.a mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/945062503
2019-08-07 08:19:57.162977 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:57.663634 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:57.862059 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:58.354705 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:58.467678 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:58.665276 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:58.666876 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:59.164039 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:59.177071 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:59.362303 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:59.562288 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:59.570995 I | exec: 2019-08-07 08:19:58.063 7fa0596c2700 1 librados: starting msgr at
2019-08-07 08:19:58.063 7fa0596c2700 1 librados: starting objecter
2019-08-07 08:19:58.064 7fa0596c2700 1 librados: setting wanted keys
2019-08-07 08:19:58.064 7fa0596c2700 1 librados: calling monclient init
2019-08-07 08:19:58.070 7fa0596c2700 1 librados: init done
2019-08-07 08:19:59.499 7fa0596c2700 10 librados: watch_flush enter
2019-08-07 08:19:59.499 7fa0596c2700 10 librados: watch_flush exit
2019-08-07 08:19:59.500 7fa0596c2700 1 librados: shutdown
2019-08-07 08:19:59.571179 I | op-mgr: clearing http bind fix mod=prometheus ver=13.0.0 mimic changed=false err=<nil>
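The long run of get/del and get/rm pairs above is Rook's "http bind fix": before (re)deploying the mgr it scrubs any stale server_addr settings for the dashboard and prometheus modules, trying both the luminous-era config-key store and the mimic-and-later centralized config, for the global key as well as the per-daemon (.../a/...) variant. Done by hand for one of the keys, it is the same pair of commands the log shows:

    ceph config-key del mgr/dashboard/server_addr    # luminous-style key/value store
    ceph config rm mgr.a mgr/dashboard/server_addr   # mimic+ centralized config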
2019-08-07 08:19:59.572194 D | op-mgr: starting mgr deployment: &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-mgr-a,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-ceph-mgr,ceph-version: 14.2.1,ceph_daemon_id: a,instance: a,mgr: a,rook-version: v1.0.4,rook_cluster: rook-ceph-stage-primary,},Annotations:map[string]string{prometheus.io/port: 9283,prometheus.io/scrape: true,},OwnerReferences:[{ceph.rook.io/v1 CephCluster rook-ceph-stage-primary 76235f05-b792-11e9-9b32-0050568460f6 <nil> 0xc000c66c6c}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{app: rook-ceph-mgr,ceph_daemon_id: a,instance: a,mgr: a,rook_cluster: rook-ceph-stage-primary,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-mgr-a,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-ceph-mgr,ceph_daemon_id: a,instance: a,mgr: a,rook_cluster: rook-ceph-stage-primary,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{rook-ceph-config {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:rook-ceph-config,},Items:[{ceph.conf ceph.conf 0xc0010555f8}],DefaultMode:nil,Optional:nil,} nil nil nil nil nil nil nil nil nil}} {rook-ceph-mgr-a-keyring {nil nil nil nil nil &SecretVolumeSource{SecretName:rook-ceph-mgr-a-keyring,Items:[],DefaultMode:nil,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {rook-ceph-log {&HostPathVolumeSource{Path:/opt/rook/rook-ceph-stage-primary/rook-ceph-stage-primary/log,Type:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {ceph-daemon-data {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{mgr ceph/ceph:v14.2.1-20190430 [ceph-mgr] [--fsid=7dd854f1-2892-4201-ab69-d4797f12ac50 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=$(ROOK_CEPH_MON_HOST) --mon-initial-members=$(ROOK_CEPH_MON_INITIAL_MEMBERS) --id=a --foreground] [{mgr 0 6800 TCP } {http-metrics 0 9283 TCP } {dashboard 0 7000 TCP }] [] [{CONTAINER_IMAGE ceph/ceph:v14.2.1-20190430 nil} {POD_NAME EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_NAMESPACE &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {NODE_NAME &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_MEMORY_LIMIT 
&EnvVarSource{FieldRef:nil,ResourceFieldRef:&ResourceFieldSelector{ContainerName:,Resource:limits.memory,Divisor:0,},ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_MEMORY_REQUEST &EnvVarSource{FieldRef:nil,ResourceFieldRef:&ResourceFieldSelector{ContainerName:,Resource:requests.memory,Divisor:0,},ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_CPU_LIMIT &EnvVarSource{FieldRef:nil,ResourceFieldRef:&ResourceFieldSelector{ContainerName:,Resource:limits.cpu,Divisor:1,},ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_CPU_REQUEST &EnvVarSource{FieldRef:nil,ResourceFieldRef:&ResourceFieldSelector{ContainerName:,Resource:requests.cpu,Divisor:0,},ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {ROOK_CEPH_MON_HOST &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:rook-ceph-config,},Key:mon_host,Optional:nil,},}} {ROOK_CEPH_MON_INITIAL_MEMBERS &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:rook-ceph-config,},Key:mon_initial_members,Optional:nil,},}} {ROOK_OPERATOR_NAMESPACE rook-ceph-stage-primary nil} {ROOK_CEPH_CLUSTER_CRD_VERSION v1 nil} {ROOK_VERSION v1.0.4 nil} {ROOK_CEPH_CLUSTER_CRD_NAME rook-ceph-stage-primary nil}] {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{1073741824 0} {<nil>} 1Gi BinarySI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{1073741824 0} {<nil>} 1Gi BinarySI}]} [{rook-ceph-config true /etc/ceph <nil> } {rook-ceph-mgr-a-keyring true /etc/ceph/keyring-store/ <nil> } {rook-ceph-log false /var/log/ceph <nil> } {ceph-daemon-data false /var/lib/ceph/mgr/ceph-a <nil> }] [] nil nil nil nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:nil,ActiveDeadlineSeconds:nil,DNSPolicy:,NodeSelector:map[string]string{},ServiceAccountName:rook-ceph-mgr,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:nil,ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[{[{rook-namespace NotIn [rook-ceph-stage-primary]}] []}],},PreferredDuringSchedulingIgnoredDuringExecution:[],},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:nil,Paused:false,ProgressDeadlineSeconds:nil,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
2019-08-07 08:19:59.607248 I | op-mgr: deployment for mgr rook-ceph-mgr-a already exists. updating if needed
2019-08-07 08:19:59.611860 I | op-k8sutil: updating deployment rook-ceph-mgr-a
2019-08-07 08:19:59.629102 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:19:59.638064 D | op-k8sutil: deployment rook-ceph-mgr-a status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:21:41 +0000 UTC LastTransitionTime:2019-08-06 14:21:41 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:21:41 +0000 UTC LastTransitionTime:2019-08-06 14:21:41 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mgr-a-5d469cc9b5" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:19:59.727196 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:00.102269 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:00.918262 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:00.969693 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:01.645067 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mgr-a
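The verbose dump a few lines up is the Deployment spec the operator wants for mgr a; since one already existed it was patched in place and the operator waited for the rollout to settle. The applied object can be compared against that dump with:

    kubectl -n rook-ceph-stage-primary get deployment rook-ceph-mgr-a -o yaml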
2019-08-07 08:20:01.645251 I | exec: Running command: ceph mgr module enable orchestrator_cli --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/994410650
2019-08-07 08:20:02.108642 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:02.363573 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:02.628498 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:02.638714 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:04.476431 I | exec: 2019-08-07 08:20:02.481 7f1ca203f700 1 librados: starting msgr at
2019-08-07 08:20:02.481 7f1ca203f700 1 librados: starting objecter
2019-08-07 08:20:02.561 7f1ca203f700 1 librados: setting wanted keys
2019-08-07 08:20:02.561 7f1ca203f700 1 librados: calling monclient init
2019-08-07 08:20:02.568 7f1ca203f700 1 librados: init done
module 'orchestrator_cli' is already enabled (always-on)
2019-08-07 08:20:04.388 7f1ca203f700 10 librados: watch_flush enter
2019-08-07 08:20:04.389 7f1ca203f700 10 librados: watch_flush exit
2019-08-07 08:20:04.390 7f1ca203f700 1 librados: shutdown
2019-08-07 08:20:04.476731 I | exec: Running command: ceph mgr module enable rook --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/343293745
2019-08-07 08:20:06.363780 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:06.670880 I | exec: 2019-08-07 08:20:05.386 7efef87c4700 1 librados: starting msgr at
2019-08-07 08:20:05.386 7efef87c4700 1 librados: starting objecter
2019-08-07 08:20:05.386 7efef87c4700 1 librados: setting wanted keys
2019-08-07 08:20:05.386 7efef87c4700 1 librados: calling monclient init
2019-08-07 08:20:05.465 7efef87c4700 1 librados: init done
module 'rook' is already enabled
2019-08-07 08:20:06.578 7efef87c4700 10 librados: watch_flush enter
2019-08-07 08:20:06.578 7efef87c4700 10 librados: watch_flush exit
2019-08-07 08:20:06.580 7efef87c4700 1 librados: shutdown
2019-08-07 08:20:06.671286 I | exec: Running command: ceph orchestrator set backend rook --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/614405852
2019-08-07 08:20:06.971561 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:07.163871 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:07.617860 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:07.862158 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:08.377899 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:08.483094 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:08.662481 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:08.663461 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:09.162179 I | exec: 2019-08-07 08:20:07.668 7f3392fec700 1 librados: starting msgr at
2019-08-07 08:20:07.668 7f3392fec700 1 librados: starting objecter
2019-08-07 08:20:07.668 7f3392fec700 1 librados: setting wanted keys
2019-08-07 08:20:07.668 7f3392fec700 1 librados: calling monclient init
2019-08-07 08:20:07.674 7f3392fec700 1 librados: init done
2019-08-07 08:20:08.983 7f3392fec700 10 librados: watch_flush enter
2019-08-07 08:20:08.983 7f3392fec700 10 librados: watch_flush exit
2019-08-07 08:20:08.984 7f3392fec700 1 librados: shutdown
2019-08-07 08:20:09.162489 I | exec: Running command: ceph mgr module enable prometheus --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/280519307
2019-08-07 08:20:09.262537 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:09.263871 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:09.372483 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:09.563697 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:09.664016 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:09.762567 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:10.162516 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:10.963781 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:11.062418 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:11.471874 I | exec: 2019-08-07 08:20:10.261 7f851464c700 1 librados: starting msgr at
2019-08-07 08:20:10.261 7f851464c700 1 librados: starting objecter
2019-08-07 08:20:10.262 7f851464c700 1 librados: setting wanted keys
2019-08-07 08:20:10.262 7f851464c700 1 librados: calling monclient init
2019-08-07 08:20:10.269 7f851464c700 1 librados: init done
module 'prometheus' is already enabled
2019-08-07 08:20:11.394 7f851464c700 10 librados: watch_flush enter
2019-08-07 08:20:11.395 7f851464c700 10 librados: watch_flush exit
2019-08-07 08:20:11.396 7f851464c700 1 librados: shutdown
2019-08-07 08:20:11.472292 I | exec: Running command: ceph mgr module enable dashboard --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/791871598
2019-08-07 08:20:12.163191 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:12.362484 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:12.664487 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:12.665141 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:14.474669 I | exec: 2019-08-07 08:20:12.380 7f5b1d16e700 1 librados: starting msgr at
2019-08-07 08:20:12.380 7f5b1d16e700 1 librados: starting objecter
2019-08-07 08:20:12.380 7f5b1d16e700 1 librados: setting wanted keys
2019-08-07 08:20:12.380 7f5b1d16e700 1 librados: calling monclient init
2019-08-07 08:20:12.465 7f5b1d16e700 1 librados: init done
module 'dashboard' is already enabled
2019-08-07 08:20:14.452 7f5b1d16e700 10 librados: watch_flush enter
2019-08-07 08:20:14.452 7f5b1d16e700 10 librados: watch_flush exit
2019-08-07 08:20:14.453 7f5b1d16e700 1 librados: shutdown
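At this point the orchestrator_cli, rook, prometheus and dashboard modules have all been confirmed enabled (most were already on, as the outputs show). The resulting module set is visible with, assuming jq once more:

    ceph mgr module ls | jq -r '.enabled_modules[]'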
2019-08-07 08:20:16.362795 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:16.989231 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:17.163176 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:17.653889 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:17.879498 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:18.395621 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:18.502290 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:18.677317 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:18.683231 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:19.214943 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:19.221560 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:19.392647 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:19.479197 I | op-mgr: the dashboard secret was already generated
2019-08-07 08:20:19.479236 I | op-mgr: Running command: ceph dashboard set-login-credentials admin *******
2019-08-07 08:20:19.480320 D | exec: Running command: ceph dashboard set-login-credentials admin GKzKzG9om2 --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/190695669
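One caveat worth flagging: op-mgr masks the password in its own line, but the debug-level exec line directly above echoes the generated dashboard password in cleartext, so operator logs captured at DEBUG should be scrubbed before sharing. Rotating it afterwards is the same command (14.2.x still accepts the password inline; newer releases expect it via -i from a file):

    ceph dashboard set-login-credentials admin <new-password>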
2019-08-07 08:20:19.565124 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:19.762043 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:19.770990 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:20.147208 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:20.964185 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:21.062470 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:22.140342 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:22.169187 I | exec: 2019-08-07 08:20:20.585 7f28ec9c2700 1 librados: starting msgr at
2019-08-07 08:20:20.585 7f28ec9c2700 1 librados: starting objecter
2019-08-07 08:20:20.661 7f28ec9c2700 1 librados: setting wanted keys
2019-08-07 08:20:20.661 7f28ec9c2700 1 librados: calling monclient init
2019-08-07 08:20:20.667 7f28ec9c2700 1 librados: init done
2019-08-07 08:20:22.079 7f28ec9c2700 10 librados: watch_flush enter
2019-08-07 08:20:22.080 7f28ec9c2700 10 librados: watch_flush exit
2019-08-07 08:20:22.081 7f28ec9c2700 1 librados: shutdown
2019-08-07 08:20:22.169358 I | op-mgr: restarting the mgr module
2019-08-07 08:20:22.169498 I | exec: Running command: ceph mgr module disable dashboard --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/998836176
2019-08-07 08:20:22.381233 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:22.681461 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:22.694429 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:24.473615 I | exec: 2019-08-07 08:20:23.083 7f100fed5700 1 librados: starting msgr at
2019-08-07 08:20:23.083 7f100fed5700 1 librados: starting objecter
2019-08-07 08:20:23.084 7f100fed5700 1 librados: setting wanted keys
2019-08-07 08:20:23.084 7f100fed5700 1 librados: calling monclient init
2019-08-07 08:20:23.165 7f100fed5700 1 librados: init done
2019-08-07 08:20:24.409 7f100fed5700 10 librados: watch_flush enter
2019-08-07 08:20:24.409 7f100fed5700 10 librados: watch_flush exit
2019-08-07 08:20:24.410 7f100fed5700 1 librados: shutdown
2019-08-07 08:20:24.473975 I | exec: Running command: ceph mgr module enable dashboard --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/664511727
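(To make the mgr pick up the new credentials, the operator bounces the dashboard module with a disable followed by a forced enable. Done by hand this is just the two module subcommands below, a sketch under the same assumptions as above; the --cluster/--conf/--keyring flags are identical to the logged commands:

    ceph mgr module disable dashboard
    ceph mgr module enable dashboard --force
)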
2019-08-07 08:20:26.362290 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:26.573229 I | exec: 2019-08-07 08:20:25.366 7fac34d9a700 1 librados: starting msgr at
2019-08-07 08:20:25.366 7fac34d9a700 1 librados: starting objecter
2019-08-07 08:20:25.367 7fac34d9a700 1 librados: setting wanted keys
2019-08-07 08:20:25.367 7fac34d9a700 1 librados: calling monclient init
2019-08-07 08:20:25.372 7fac34d9a700 1 librados: init done
2019-08-07 08:20:26.501 7fac34d9a700 10 librados: watch_flush enter
2019-08-07 08:20:26.501 7fac34d9a700 10 librados: watch_flush exit
2019-08-07 08:20:26.502 7fac34d9a700 1 librados: shutdown
2019-08-07 08:20:26.573526 I | exec: Running command: ceph config get mgr.a mgr/dashboard/url_prefix --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/957682050
2019-08-07 08:20:27.063946 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:27.185656 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:27.671916 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:27.963136 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:28.463978 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:28.562766 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:28.764216 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:28.765032 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:29.262166 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:29.262211 I | exec: 2019-08-07 08:20:27.489 7f959ae09700 1 librados: starting msgr at
2019-08-07 08:20:27.489 7f959ae09700 1 librados: starting objecter
2019-08-07 08:20:27.561 7f959ae09700 1 librados: setting wanted keys
2019-08-07 08:20:27.561 7f959ae09700 1 librados: calling monclient init
2019-08-07 08:20:27.567 7f959ae09700 1 librados: init done
2019-08-07 08:20:29.081 7f959ae09700 10 librados: watch_flush enter
2019-08-07 08:20:29.081 7f959ae09700 10 librados: watch_flush exit
2019-08-07 08:20:29.083 7f959ae09700 1 librados: shutdown
2019-08-07 08:20:29.262485 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/url_prefix --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/026335481
2019-08-07 08:20:29.262711 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:29.463821 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:29.573462 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:29.763714 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:29.790893 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:30.166844 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:30.977985 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:31.062751 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:31.671331 I | exec: 2019-08-07 08:20:30.269 7fdae61ad700 1 librados: starting msgr at
2019-08-07 08:20:30.269 7fdae61ad700 1 librados: starting objecter
2019-08-07 08:20:30.270 7fdae61ad700 1 librados: setting wanted keys
2019-08-07 08:20:30.270 7fdae61ad700 1 librados: calling monclient init
2019-08-07 08:20:30.363 7fdae61ad700 1 librados: init done
2019-08-07 08:20:31.602 7fdae61ad700 10 librados: watch_flush enter
2019-08-07 08:20:31.602 7fdae61ad700 10 librados: watch_flush exit
2019-08-07 08:20:31.603 7fdae61ad700 1 librados: shutdown
2019-08-07 08:20:31.671650 I | exec: Running command: ceph config get mgr.a mgr/dashboard/server_port --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/183726084
2019-08-07 08:20:32.163093 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:32.403015 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:32.702807 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:32.710387 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:33.972241 I | exec: 2019-08-07 08:20:32.580 7f746da25700 1 librados: starting msgr at
2019-08-07 08:20:32.580 7f746da25700 1 librados: starting objecter
2019-08-07 08:20:32.581 7f746da25700 1 librados: setting wanted keys
2019-08-07 08:20:32.581 7f746da25700 1 librados: calling monclient init
2019-08-07 08:20:32.665 7f746da25700 1 librados: init done
2019-08-07 08:20:33.896 7f746da25700 10 librados: watch_flush enter
2019-08-07 08:20:33.896 7f746da25700 10 librados: watch_flush exit
2019-08-07 08:20:33.898 7f746da25700 1 librados: shutdown
2019-08-07 08:20:33.972589 I | exec: Running command: ceph config set mgr.a mgr/dashboard/server_port 7000 --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/117497235
2019-08-07 08:20:36.212491 I | exec: 2019-08-07 08:20:34.885 7f433b35d700 1 librados: starting msgr at
2019-08-07 08:20:34.885 7f433b35d700 1 librados: starting objecter
2019-08-07 08:20:34.885 7f433b35d700 1 librados: setting wanted keys
2019-08-07 08:20:34.885 7f433b35d700 1 librados: calling monclient init
2019-08-07 08:20:34.965 7f433b35d700 1 librados: init done
2019-08-07 08:20:36.081 7f433b35d700 10 librados: watch_flush enter
2019-08-07 08:20:36.082 7f433b35d700 10 librados: watch_flush exit
2019-08-07 08:20:36.161 7f433b35d700 1 librados: shutdown
2019-08-07 08:20:36.212813 I | exec: Running command: ceph config get mgr.a mgr/dashboard/ssl --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/676736470
2019-08-07 08:20:36.363796 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:37.023893 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:37.263388 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:37.763747 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:37.963592 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:38.463824 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:38.475345 I | exec: 2019-08-07 08:20:37.162 7fcbcfd52700 1 librados: starting msgr at
2019-08-07 08:20:37.162 7fcbcfd52700 1 librados: starting objecter
2019-08-07 08:20:37.163 7fcbcfd52700 1 librados: setting wanted keys
2019-08-07 08:20:37.163 7fcbcfd52700 1 librados: calling monclient init
2019-08-07 08:20:37.169 7fcbcfd52700 1 librados: init done
2019-08-07 08:20:38.368 7fcbcfd52700 10 librados: watch_flush enter
2019-08-07 08:20:38.368 7fcbcfd52700 10 librados: watch_flush exit
2019-08-07 08:20:38.369 7fcbcfd52700 1 librados: shutdown
2019-08-07 08:20:38.475669 I | exec: Running command: ceph config set mgr.a mgr/dashboard/ssl false --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/627476285
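(Taken together, the config get/rm/set calls above converge the dashboard settings of mgr "a" onto Rook's defaults: no URL prefix, plain HTTP on port 7000. The same end state could be applied directly with the commands the operator itself is running, a sketch with the repeated connection flags omitted:

    ceph config rm  mgr.a mgr/dashboard/url_prefix
    ceph config set mgr.a mgr/dashboard/server_port 7000
    ceph config set mgr.a mgr/dashboard/ssl false
)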
2019-08-07 08:20:38.546120 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:38.764153 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:38.765040 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:39.269034 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:39.272212 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:39.463822 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:39.591058 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:39.763794 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:39.862239 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:40.263924 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:40.876187 I | exec: 2019-08-07 08:20:39.478 7fa8476dc700 1 librados: starting msgr at
2019-08-07 08:20:39.478 7fa8476dc700 1 librados: starting objecter
2019-08-07 08:20:39.478 7fa8476dc700 1 librados: setting wanted keys
2019-08-07 08:20:39.478 7fa8476dc700 1 librados: calling monclient init
2019-08-07 08:20:39.566 7fa8476dc700 1 librados: init done
2019-08-07 08:20:40.768 7fa8476dc700 10 librados: watch_flush enter
2019-08-07 08:20:40.769 7fa8476dc700 10 librados: watch_flush exit
2019-08-07 08:20:40.770 7fa8476dc700 1 librados: shutdown
2019-08-07 08:20:40.930242 I | op-mgr: dashboard service already exists
2019-08-07 08:20:40.965017 I | op-mgr: mgr metrics service already exists
2019-08-07 08:20:40.965050 I | op-osd: start running osds in namespace rook-ceph-stage-primary
2019-08-07 08:20:40.984850 I | op-osd: 4 of the 4 storage nodes are valid
2019-08-07 08:20:40.984876 I | op-osd: start provisioning the osds on nodes, if needed
2019-08-07 08:20:40.985104 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-101.lxstage.domain.com will be 713725d9f667a079e331f69f263a1fd0
2019-08-07 08:20:40.997307 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:41.003136 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-101.lxstage.domain.com will be 713725d9f667a079e331f69f263a1fd0
2019-08-07 08:20:41.015605 I | op-osd: osd provision job started for node k8s-worker-101.lxstage.domain.com
2019-08-07 08:20:41.015648 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-102.lxstage.domain.com will be 16f7814f5a22fc71d100e5b6a4b5bf2b
2019-08-07 08:20:41.027978 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-102.lxstage.domain.com will be 16f7814f5a22fc71d100e5b6a4b5bf2b
2019-08-07 08:20:41.042743 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:41.062279 I | op-osd: osd provision job started for node k8s-worker-102.lxstage.domain.com
2019-08-07 08:20:41.062389 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-103.lxstage.domain.com will be 3421d9d141eb39906bd993a89171141d
2019-08-07 08:20:41.282053 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-103.lxstage.domain.com will be 3421d9d141eb39906bd993a89171141d
2019-08-07 08:20:41.293464 I | op-osd: osd provision job started for node k8s-worker-103.lxstage.domain.com
2019-08-07 08:20:41.293533 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-104.lxstage.domain.com will be 314ab5663f2f709906d970de379b1dc7
2019-08-07 08:20:41.682462 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-104.lxstage.domain.com will be 314ab5663f2f709906d970de379b1dc7
2019-08-07 08:20:41.693398 I | op-osd: osd provision job started for node k8s-worker-104.lxstage.domain.com
2019-08-07 08:20:41.693425 I | op-osd: start osds after provisioning is completed, if needed
2019-08-07 08:20:41.882455 I | op-osd: osd orchestration status for node k8s-worker-102.lxstage.domain.com is starting
2019-08-07 08:20:41.882491 I | op-osd: osd orchestration status for node k8s-worker-104.lxstage.domain.com is starting
2019-08-07 08:20:41.882508 I | op-osd: osd orchestration status for node k8s-worker-103.lxstage.domain.com is starting
2019-08-07 08:20:41.882522 I | op-osd: osd orchestration status for node k8s-worker-101.lxstage.domain.com is starting
2019-08-07 08:20:41.882533 I | op-osd: 0/4 node(s) completed osd provisioning, resource version 309430392
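(At this point one provision job per storage node has been launched, with node names longer than 63 characters replaced by a fixed-length hash to satisfy the Kubernetes object-name limit, and the operator polls each job's orchestration status until it reports completed. To follow the same provisioning from outside the operator, something like the sketch below should work; the app=rook-ceph-osd-prepare label is an assumption about how Rook v1.0 labels these prepare jobs:

    kubectl -n rook-ceph-stage-primary get jobs
    kubectl -n rook-ceph-stage-primary logs -l app=rook-ceph-osd-prepare --tail=20
)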
2019-08-07 08:20:42.170563 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:42.422267 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:42.474077 I | op-osd: osd orchestration status for node k8s-worker-102.lxstage.domain.com is computingDiff
2019-08-07 08:20:42.572070 I | op-osd: osd orchestration status for node k8s-worker-102.lxstage.domain.com is orchestrating
2019-08-07 08:20:42.726194 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:42.735195 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:42.779379 I | op-osd: osd orchestration status for node k8s-worker-101.lxstage.domain.com is computingDiff
2019-08-07 08:20:42.877156 I | op-osd: osd orchestration status for node k8s-worker-101.lxstage.domain.com is orchestrating
2019-08-07 08:20:44.152205 I | op-osd: osd orchestration status for node k8s-worker-103.lxstage.domain.com is computingDiff
2019-08-07 08:20:44.367635 I | op-osd: osd orchestration status for node k8s-worker-103.lxstage.domain.com is orchestrating
2019-08-07 08:20:46.362757 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:46.545680 I | op-osd: osd orchestration status for node k8s-worker-104.lxstage.domain.com is computingDiff
2019-08-07 08:20:46.760751 I | op-osd: osd orchestration status for node k8s-worker-104.lxstage.domain.com is orchestrating
2019-08-07 08:20:47.042408 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:47.222156 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:47.717372 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:47.950194 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:48.454149 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:48.563195 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:48.738375 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:48.749242 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:49.362473 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:49.363717 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:49.467133 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:49.626065 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:49.734661 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:49.839069 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:50.213846 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:51.013762 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:51.063747 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:52.184206 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:52.439720 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:52.743823 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:52.761980 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:56.372216 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:57.064843 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:57.245389 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:57.508011 I | op-osd: osd orchestration status for node k8s-worker-102.lxstage.domain.com is completed
2019-08-07 08:20:57.508041 I | op-osd: starting 12 osd daemons on node k8s-worker-102.lxstage.domain.com
2019-08-07 08:20:57.508073 D | op-osd: start osd {24 /var/lib/rook/osd24 /var/lib/rook/osd24/rook-ceph-stage-primary.config ceph /var/lib/rook/osd24/keyring a92aae31-64e2-4ad4-987e-3d2d211af869 false false true}
2019-08-07 08:20:57.519927 I | op-osd: deployment for osd 24 already exists. updating if needed
2019-08-07 08:20:57.523948 I | op-k8sutil: updating deployment rook-ceph-osd-24
2019-08-07 08:20:57.537315 D | op-k8sutil: deployment rook-ceph-osd-24 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:52 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-24-7c9f6598b4" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:20:57.735511 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:57.972662 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:58.475569 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:58.579434 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:58.756162 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:58.770094 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:59.362409 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:59.363675 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:59.490286 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:59.542091 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-24
2019-08-07 08:20:59.542116 I | op-osd: started deployment for osd 24 (dir=false, type=bluestore)
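(For every OSD reported by a completed node the operator repeats the same idempotent pattern visible above: look up the existing deployment, update it in place if needed, then block until the rollout finishes before moving on to the next OSD, which is why the log advances roughly one rook-ceph-osd-N every two seconds. The equivalent manual check is a standard rollout query, sketched here for osd 24:

    kubectl -n rook-ceph-stage-primary rollout status deployment/rook-ceph-osd-24
)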
2019-08-07 08:20:59.542139 D | op-osd: start osd {35 /var/lib/rook/osd35 /var/lib/rook/osd35/rook-ceph-stage-primary.config ceph /var/lib/rook/osd35/keyring cb017176-928b-4db4-9cb6-66629080f53b false false true}
2019-08-07 08:20:59.551635 I | op-osd: deployment for osd 35 already exists. updating if needed
2019-08-07 08:20:59.555521 I | op-k8sutil: updating deployment rook-ceph-osd-35
2019-08-07 08:20:59.568123 D | op-k8sutil: deployment rook-ceph-osd-35 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:54 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-35-7b4689f654" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:20:59.643299 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:59.755744 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:20:59.859218 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:00.238499 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:01.037740 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:01.090202 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:01.573299 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-35
2019-08-07 08:21:01.573329 I | op-osd: started deployment for osd 35 (dir=false, type=bluestore)
2019-08-07 08:21:01.573353 D | op-osd: start osd {39 /var/lib/rook/osd39 /var/lib/rook/osd39/rook-ceph-stage-primary.config ceph /var/lib/rook/osd39/keyring 8ab42eea-0fea-44c4-b5bc-1a2a02fbfd58 false false true}
2019-08-07 08:21:01.670450 I | op-osd: deployment for osd 39 already exists. updating if needed
2019-08-07 08:21:01.674758 I | op-k8sutil: updating deployment rook-ceph-osd-39
2019-08-07 08:21:01.690801 D | op-k8sutil: deployment rook-ceph-osd-39 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:55 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-39-76dddddf9d" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:02.201381 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:02.463484 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:02.770967 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:02.791131 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:03.695670 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-39
2019-08-07 08:21:03.695706 I | op-osd: started deployment for osd 39 (dir=false, type=bluestore)
2019-08-07 08:21:03.695732 D | op-osd: start osd {4 /var/lib/rook/osd4 /var/lib/rook/osd4/rook-ceph-stage-primary.config ceph /var/lib/rook/osd4/keyring b830d73e-ce5c-4915-8f5a-6d9a2df98280 false false true}
2019-08-07 08:21:03.704831 I | op-osd: deployment for osd 4 already exists. updating if needed
2019-08-07 08:21:03.708624 I | op-k8sutil: updating deployment rook-ceph-osd-4
2019-08-07 08:21:03.720352 D | op-k8sutil: deployment rook-ceph-osd-4 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:55 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-4-6bb55d5fc6" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:05.725890 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-4
2019-08-07 08:21:05.725984 I | op-osd: started deployment for osd 4 (dir=false, type=bluestore)
2019-08-07 08:21:05.726010 D | op-osd: start osd {42 /var/lib/rook/osd42 /var/lib/rook/osd42/rook-ceph-stage-primary.config ceph /var/lib/rook/osd42/keyring 5c3d6a9f-c43f-4994-b4ed-91b28cd221b2 false false true}
2019-08-07 08:21:05.736649 I | op-osd: deployment for osd 42 already exists. updating if needed
2019-08-07 08:21:05.740794 I | op-k8sutil: updating deployment rook-ceph-osd-42
2019-08-07 08:21:05.755479 D | op-k8sutil: deployment rook-ceph-osd-42 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:56 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-42-7db765d5db" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:06.389769 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:07.083846 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:07.264343 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:07.753243 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:07.760082 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-42
2019-08-07 08:21:07.760113 I | op-osd: started deployment for osd 42 (dir=false, type=bluestore)
2019-08-07 08:21:07.760139 D | op-osd: start osd {8 /var/lib/rook/osd8 /var/lib/rook/osd8/rook-ceph-stage-primary.config ceph /var/lib/rook/osd8/keyring d8f10678-a97e-4dab-ad71-bd528eb8baf1 false false true}
2019-08-07 08:21:07.769556 I | op-osd: deployment for osd 8 already exists. updating if needed
2019-08-07 08:21:07.773572 I | op-k8sutil: updating deployment rook-ceph-osd-8
2019-08-07 08:21:07.789126 D | op-k8sutil: deployment rook-ceph-osd-8 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:29:01 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:28:58 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-8-865d7db956" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:07.997066 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:08.498577 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:08.602600 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:08.862967 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:08.863915 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:09.362781 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:09.364050 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:09.511077 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:09.664754 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:09.775389 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:09.798425 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-8
2019-08-07 08:21:09.798454 I | op-osd: started deployment for osd 8 (dir=false, type=bluestore)
2019-08-07 08:21:09.798480 D | op-osd: start osd {12 /var/lib/rook/osd12 /var/lib/rook/osd12/rook-ceph-stage-primary.config ceph /var/lib/rook/osd12/keyring b9f40551-ce08-4e7f-9ef2-8652c42bf641 false false true}
2019-08-07 08:21:09.812535 I | op-osd: deployment for osd 12 already exists. updating if needed
2019-08-07 08:21:09.817201 I | op-k8sutil: updating deployment rook-ceph-osd-12
2019-08-07 08:21:09.839005 D | op-k8sutil: deployment rook-ceph-osd-12 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:51 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-12-9c8b8b7b7" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:09.881177 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:10.362482 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:11.069437 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:11.110609 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:11.844183 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-12
2019-08-07 08:21:11.844214 I | op-osd: started deployment for osd 12 (dir=false, type=bluestore)
2019-08-07 08:21:11.844239 D | op-osd: start osd {16 /var/lib/rook/osd16 /var/lib/rook/osd16/rook-ceph-stage-primary.config ceph /var/lib/rook/osd16/keyring b8653dd3-724d-47e4-851b-a967072ace81 false false true}
2019-08-07 08:21:11.857117 I | op-osd: deployment for osd 16 already exists. updating if needed
2019-08-07 08:21:11.862513 I | op-k8sutil: updating deployment rook-ceph-osd-16
2019-08-07 08:21:11.884491 D | op-k8sutil: deployment rook-ceph-osd-16 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:53 +0000 UTC LastTransitionTime:2019-08-06 14:28:53 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:50 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-16-56ff6dbb7c" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:12.220760 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:12.484569 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:12.795315 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:12.863707 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:13.889806 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-16
2019-08-07 08:21:13.889843 I | op-osd: started deployment for osd 16 (dir=false, type=bluestore)
2019-08-07 08:21:13.889871 D | op-osd: start osd {20 /var/lib/rook/osd20 /var/lib/rook/osd20/rook-ceph-stage-primary.config ceph /var/lib/rook/osd20/keyring 30b2a1ea-9a68-4b46-89bf-aba7bf8f29ff false false true}
2019-08-07 08:21:13.902373 I | op-osd: deployment for osd 20 already exists. updating if needed
2019-08-07 08:21:13.906401 I | op-k8sutil: updating deployment rook-ceph-osd-20
2019-08-07 08:21:13.922116 D | op-k8sutil: deployment rook-ceph-osd-20 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:58 +0000 UTC LastTransitionTime:2019-08-06 14:28:58 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:58 +0000 UTC LastTransitionTime:2019-08-06 14:28:51 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-20-654fc7c8bb" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:15.928013 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-20
2019-08-07 08:21:15.928046 I | op-osd: started deployment for osd 20 (dir=false, type=bluestore)
2019-08-07 08:21:15.928072 D | op-osd: start osd {27 /var/lib/rook/osd27 /var/lib/rook/osd27/rook-ceph-stage-primary.config ceph /var/lib/rook/osd27/keyring 7bf33094-0c89-44fe-a1be-fd9507ec4f21 false false true}
2019-08-07 08:21:15.937514 I | op-osd: deployment for osd 27 already exists. updating if needed
2019-08-07 08:21:15.941435 I | op-k8sutil: updating deployment rook-ceph-osd-27
2019-08-07 08:21:15.953190 D | op-k8sutil: deployment rook-ceph-osd-27 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:52 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-27-7499b6bbb9" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:16.409597 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:17.107945 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:17.362876 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:17.862710 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:17.958382 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-27
2019-08-07 08:21:17.958417 I | op-osd: started deployment for osd 27 (dir=false, type=bluestore)
2019-08-07 08:21:17.958445 D | op-osd: start osd {31 /var/lib/rook/osd31 /var/lib/rook/osd31/rook-ceph-stage-primary.config ceph /var/lib/rook/osd31/keyring 2d708cb1-f88a-4c88-a7c7-b7c041be05dd false false true}
2019-08-07 08:21:17.984636 I | op-osd: deployment for osd 31 already exists. updating if needed
2019-08-07 08:21:17.988794 I | op-k8sutil: updating deployment rook-ceph-osd-31
2019-08-07 08:21:18.002050 D | op-k8sutil: deployment rook-ceph-osd-31 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:53 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-31-5db5d7b676" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:18.035377 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:18.518660 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:18.644547 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:18.796897 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:18.863221 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:19.374650 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:19.384037 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:19.532887 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:19.697192 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:19.800614 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:19.907165 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:20.007347 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-31
2019-08-07 08:21:20.007380 I | op-osd: started deployment for osd 31 (dir=false, type=bluestore)
2019-08-07 08:21:20.007406 D | op-osd: start osd {0 /var/lib/rook/osd0 /var/lib/rook/osd0/rook-ceph-stage-primary.config ceph /var/lib/rook/osd0/keyring eb12b21e-aae3-4cae-862d-febe86377aa0 false false true}
2019-08-07 08:21:20.023502 I | op-osd: deployment for osd 0 already exists. updating if needed
2019-08-07 08:21:20.029984 I | op-k8sutil: updating deployment rook-ceph-osd-0
2019-08-07 08:21:20.052412 D | op-k8sutil: deployment rook-ceph-osd-0 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-0-68d4c56d68" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:20.362504 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:21.091571 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:21.132256 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:22.058080 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-0
2019-08-07 08:21:22.058115 I | op-osd: started deployment for osd 0 (dir=false, type=bluestore)
2019-08-07 08:21:22.061803 I | op-osd: osd orchestration status for node k8s-worker-101.lxstage.domain.com is completed
2019-08-07 08:21:22.061831 I | op-osd: starting 12 osd daemons on node k8s-worker-101.lxstage.domain.com
2019-08-07 08:21:22.061861 D | op-osd: start osd {23 /var/lib/rook/osd23 /var/lib/rook/osd23/rook-ceph-stage-primary.config ceph /var/lib/rook/osd23/keyring ef828443-ff97-479f-8994-7107e3855e51 false false true}
2019-08-07 08:21:22.071231 I | op-osd: deployment for osd 23 already exists. updating if needed
2019-08-07 08:21:22.075440 I | op-k8sutil: updating deployment rook-ceph-osd-23
2019-08-07 08:21:22.087921 D | op-k8sutil: deployment rook-ceph-osd-23 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:52 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-23-58d557c87b" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:22.239594 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:22.509204 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:22.811200 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:22.863748 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:24.092742 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-23
2019-08-07 08:21:24.092778 I | op-osd: started deployment for osd 23 (dir=false, type=bluestore)
2019-08-07 08:21:24.092805 D | op-osd: start osd {32 /var/lib/rook/osd32 /var/lib/rook/osd32/rook-ceph-stage-primary.config ceph /var/lib/rook/osd32/keyring c9c709f2-77df-47e1-947c-42ec722bc985 false false true}
2019-08-07 08:21:24.102572 I | op-osd: deployment for osd 32 already exists. updating if needed
2019-08-07 08:21:24.162275 I | op-k8sutil: updating deployment rook-ceph-osd-32
2019-08-07 08:21:24.264557 D | op-k8sutil: deployment rook-ceph-osd-32 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:54 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-32-5cc7f87b68" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:26.366619 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-32
2019-08-07 08:21:26.366655 I | op-osd: started deployment for osd 32 (dir=false, type=bluestore)
2019-08-07 08:21:26.366682 D | op-osd: start osd {44 /var/lib/rook/osd44 /var/lib/rook/osd44/rook-ceph-stage-primary.config ceph /var/lib/rook/osd44/keyring 92160976-b391-4f54-92ab-b6bf109f1b6c false false true}
2019-08-07 08:21:26.376839 I | op-osd: deployment for osd 44 already exists. updating if needed
2019-08-07 08:21:26.380877 I | op-k8sutil: updating deployment rook-ceph-osd-44
2019-08-07 08:21:26.393781 D | op-k8sutil: deployment rook-ceph-osd-44 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:56 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-44-589488bbc7" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:26.425298 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:27.126100 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:27.362147 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:27.784792 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:28.060221 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:28.399734 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-44
2019-08-07 08:21:28.399769 I | op-osd: started deployment for osd 44 (dir=false, type=bluestore)
2019-08-07 08:21:28.399795 D | op-osd: start osd {47 /var/lib/rook/osd47 /var/lib/rook/osd47/rook-ceph-stage-primary.config ceph /var/lib/rook/osd47/keyring faef812d-e78e-421e-a26e-da75dac8f2c2 false false true}
2019-08-07 08:21:28.409502 I | op-osd: deployment for osd 47 already exists. updating if needed
2019-08-07 08:21:28.413938 I | op-k8sutil: updating deployment rook-ceph-osd-47
2019-08-07 08:21:28.427580 D | op-k8sutil: deployment rook-ceph-osd-47 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:57 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-47-5c868b766b" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:28.538879 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:28.661287 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:28.822288 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:28.837843 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:29.401687 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:29.413014 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:29.554187 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:29.719624 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:29.830548 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:29.928799 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:30.362439 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:30.432985 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-47
2019-08-07 08:21:30.433016 I | op-osd: started deployment for osd 47 (dir=false, type=bluestore)
2019-08-07 08:21:30.433043 D | op-osd: start osd {10 /var/lib/rook/osd10 /var/lib/rook/osd10/rook-ceph-stage-primary.config ceph /var/lib/rook/osd10/keyring efe3a633-a626-4c83-94dc-56f51c448709 false false true}
2019-08-07 08:21:30.443662 I | op-osd: deployment for osd 10 already exists. updating if needed
2019-08-07 08:21:30.448044 I | op-k8sutil: updating deployment rook-ceph-osd-10
2019-08-07 08:21:30.461264 D | op-k8sutil: deployment rook-ceph-osd-10 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-10-66c6c7d648" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:31.122814 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:31.162128 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:32.252143 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:32.466781 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-10
2019-08-07 08:21:32.466811 I | op-osd: started deployment for osd 10 (dir=false, type=bluestore)
2019-08-07 08:21:32.466836 D | op-osd: start osd {14 /var/lib/rook/osd14 /var/lib/rook/osd14/rook-ceph-stage-primary.config ceph /var/lib/rook/osd14/keyring de07ca94-6bae-4ad3-bf60-855aa41c2979 false false true}
2019-08-07 08:21:32.476804 I | op-osd: deployment for osd 14 already exists. updating if needed
2019-08-07 08:21:32.481233 I | op-k8sutil: updating deployment rook-ceph-osd-14
2019-08-07 08:21:32.494442 D | op-k8sutil: deployment rook-ceph-osd-14 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:50 +0000 UTC LastTransitionTime:2019-08-06 14:28:50 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:56 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-14-7595bf794b" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:32.527281 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:32.828012 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:32.867997 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:34.499308 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-14
2019-08-07 08:21:34.499344 I | op-osd: started deployment for osd 14 (dir=false, type=bluestore)
2019-08-07 08:21:34.499371 D | op-osd: start osd {18 /var/lib/rook/osd18 /var/lib/rook/osd18/rook-ceph-stage-primary.config ceph /var/lib/rook/osd18/keyring 69330f11-accc-440f-813b-660cea9942d1 false false true}
2019-08-07 08:21:34.508396 I | op-osd: deployment for osd 18 already exists. updating if needed
2019-08-07 08:21:34.512433 I | op-k8sutil: updating deployment rook-ceph-osd-18
2019-08-07 08:21:34.524594 D | op-k8sutil: deployment rook-ceph-osd-18 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:54 +0000 UTC LastTransitionTime:2019-08-06 14:28:54 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:50 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-18-668f89c6b8" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:36.440109 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:36.530119 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-18
2019-08-07 08:21:36.530148 I | op-osd: started deployment for osd 18 (dir=false, type=bluestore)
2019-08-07 08:21:36.530172 D | op-osd: start osd {2 /var/lib/rook/osd2 /var/lib/rook/osd2/rook-ceph-stage-primary.config ceph /var/lib/rook/osd2/keyring 1f672af0-17db-432c-8c41-b39ad136f489 false false true}
2019-08-07 08:21:36.540012 I | op-osd: deployment for osd 2 already exists. updating if needed
2019-08-07 08:21:36.544236 I | op-k8sutil: updating deployment rook-ceph-osd-2
2019-08-07 08:21:36.556631 D | op-k8sutil: deployment rook-ceph-osd-2 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:57 +0000 UTC LastTransitionTime:2019-08-06 14:28:57 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:57 +0000 UTC LastTransitionTime:2019-08-06 14:28:51 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-2-5749874989" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:37.149031 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:37.362620 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:37.802300 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:38.082821 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:38.560342 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:38.563334 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-2
2019-08-07 08:21:38.563362 I | op-osd: started deployment for osd 2 (dir=false, type=bluestore)
2019-08-07 08:21:38.563388 D | op-osd: start osd {28 /var/lib/rook/osd28 /var/lib/rook/osd28/rook-ceph-stage-primary.config ceph /var/lib/rook/osd28/keyring 56d3533e-e622-42cb-af01-dd0148938cc9 false false true}
2019-08-07 08:21:38.572140 I | op-osd: deployment for osd 28 already exists. updating if needed
2019-08-07 08:21:38.576082 I | op-k8sutil: updating deployment rook-ceph-osd-28
2019-08-07 08:21:38.587184 D | op-k8sutil: deployment rook-ceph-osd-28 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:53 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-28-6fd8675475" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:38.677980 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:38.842891 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:38.857458 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:39.434049 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:39.444280 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:39.574115 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:39.744390 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:39.850058 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:39.951105 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:40.362787 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:40.593210 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-28
2019-08-07 08:21:40.593241 I | op-osd: started deployment for osd 28 (dir=false, type=bluestore)
2019-08-07 08:21:40.593267 D | op-osd: start osd {36 /var/lib/rook/osd36 /var/lib/rook/osd36/rook-ceph-stage-primary.config ceph /var/lib/rook/osd36/keyring 7f17f7c0-f694-4ab3-8372-3e60bc4fe7b8 false false true}
2019-08-07 08:21:40.603462 I | op-osd: deployment for osd 36 already exists. updating if needed
2019-08-07 08:21:40.607693 I | op-k8sutil: updating deployment rook-ceph-osd-36
2019-08-07 08:21:40.680483 D | op-k8sutil: deployment rook-ceph-osd-36 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:55 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-36-667bd6d6b4" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:41.161753 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:41.179925 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:42.266850 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:42.545692 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:42.685813 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-36
2019-08-07 08:21:42.685854 I | op-osd: started deployment for osd 36 (dir=false, type=bluestore)
2019-08-07 08:21:42.685889 D | op-osd: start osd {40 /var/lib/rook/osd40 /var/lib/rook/osd40/rook-ceph-stage-primary.config ceph /var/lib/rook/osd40/keyring 0acf76b9-d717-40c8-8445-030ca90c5a53 false false true}
2019-08-07 08:21:42.697859 I | op-osd: deployment for osd 40 already exists. updating if needed
2019-08-07 08:21:42.703858 I | op-k8sutil: updating deployment rook-ceph-osd-40
2019-08-07 08:21:42.717568 D | op-k8sutil: deployment rook-ceph-osd-40 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:56 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-40-5c6d8d887c" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:42.862456 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:42.964606 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:44.722578 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-40
2019-08-07 08:21:44.722611 I | op-osd: started deployment for osd 40 (dir=false, type=bluestore)
2019-08-07 08:21:44.722637 D | op-osd: start osd {6 /var/lib/rook/osd6 /var/lib/rook/osd6/rook-ceph-stage-primary.config ceph /var/lib/rook/osd6/keyring b76ffc2d-7a56-4c3f-82c5-4975dcebfef6 false false true}
2019-08-07 08:21:44.773366 I | op-osd: deployment for osd 6 already exists. updating if needed
2019-08-07 08:21:44.777618 I | op-k8sutil: updating deployment rook-ceph-osd-6
2019-08-07 08:21:44.790666 D | op-k8sutil: deployment rook-ceph-osd-6 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:29:01 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:28:58 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-6-67fdbb49d9" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:46.456207 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:46.795691 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-6
2019-08-07 08:21:46.795723 I | op-osd: started deployment for osd 6 (dir=false, type=bluestore)
2019-08-07 08:21:46.799721 I | op-osd: osd orchestration status for node k8s-worker-103.lxstage.domain.com is completed
2019-08-07 08:21:46.799745 I | op-osd: starting 12 osd daemons on node k8s-worker-103.lxstage.domain.com
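[editor's note] The `osd orchestration status for node ... is completed` line means the provisioning step for that node reported success, and the operator then fans out one Deployment per OSD it reported — which is why a `start osd {...}` / update / wait cycle follows for each of the 12 OSDs. A sketch of that fan-out, reusing the hypothetical `osdInfo` type and the update-and-wait helper from the earlier notes; `nodeStatus` and the `start` callback are illustrative names, not Rook's API (Rook passes provisioning results back through the Kubernetes API, details omitted here):

```go
// Fragment; reuses osdInfo from the earlier sketch and assumes "fmt" is
// imported. nodeStatus is an assumed shape for the per-node result.
type nodeStatus struct {
	Status string    // e.g. "completed"
	OSDs   []osdInfo // one entry per "start osd {...}" line
}

func startOSDsForNode(node string, status nodeStatus, start func(osdInfo) error) error {
	if status.Status != "completed" {
		return fmt.Errorf("node %s has not finished provisioning: %q", node, status.Status)
	}
	// "starting N osd daemons on node ..."
	for _, osd := range status.OSDs {
		// Each call corresponds to one cycle in the log: "deployment for
		// osd N already exists. updating if needed", then the
		// update-and-wait pair.
		if err := start(osd); err != nil {
			return fmt.Errorf("failed to start osd %d on node %s: %v", osd.ID, node, err)
		}
	}
	return nil
}
```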
2019-08-07 08:21:46.799774 D | op-osd: start osd {5 /var/lib/rook/osd5 /var/lib/rook/osd5/rook-ceph-stage-primary.config ceph /var/lib/rook/osd5/keyring 86a95d79-08b6-4717-8085-f90cd38926f5 false false true}
2019-08-07 08:21:46.809500 I | op-osd: deployment for osd 5 already exists. updating if needed
2019-08-07 08:21:46.813502 I | op-k8sutil: updating deployment rook-ceph-osd-5
2019-08-07 08:21:46.826538 D | op-k8sutil: deployment rook-ceph-osd-5 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:29:01 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:28:57 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-5-76d7b66dcb" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:47.171204 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:47.340972 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:47.816952 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:48.105078 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:48.582334 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:48.695055 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:48.831408 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-5
2019-08-07 08:21:48.831440 I | op-osd: started deployment for osd 5 (dir=false, type=bluestore)
2019-08-07 08:21:48.831466 D | op-osd: start osd {1 /var/lib/rook/osd1 /var/lib/rook/osd1/rook-ceph-stage-primary.config ceph /var/lib/rook/osd1/keyring fe8f5ef7-2254-43dc-94ca-b941c5dc15d8 false false true}
2019-08-07 08:21:48.841236 I | op-osd: deployment for osd 1 already exists. updating if needed
2019-08-07 08:21:48.845498 I | op-k8sutil: updating deployment rook-ceph-osd-1
2019-08-07 08:21:48.864202 D | op-k8sutil: deployment rook-ceph-osd-1 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-1-5dbb888c4d" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:48.864500 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:48.875367 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:49.460129 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:49.472007 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:49.592534 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:49.768294 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:49.871638 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:49.970805 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:50.362535 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:50.871880 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-1
2019-08-07 08:21:50.871936 I | op-osd: started deployment for osd 1 (dir=false, type=bluestore)
2019-08-07 08:21:50.871974 D | op-osd: start osd {17 /var/lib/rook/osd17 /var/lib/rook/osd17/rook-ceph-stage-primary.config ceph /var/lib/rook/osd17/keyring 29ab89f6-4fe3-4c8c-ab8c-bcfe1d0e0639 false false true}
2019-08-07 08:21:50.888572 I | op-osd: deployment for osd 17 already exists. updating if needed
2019-08-07 08:21:50.892278 I | op-k8sutil: updating deployment rook-ceph-osd-17
2019-08-07 08:21:50.904084 D | op-k8sutil: deployment rook-ceph-osd-17 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:54 +0000 UTC LastTransitionTime:2019-08-06 14:28:54 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:50 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-17-84d6f675b5" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:51.173811 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:51.204129 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:52.363245 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:52.564712 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:52.862755 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:52.909247 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-17
2019-08-07 08:21:52.909278 I | op-osd: started deployment for osd 17 (dir=false, type=bluestore)
2019-08-07 08:21:52.909306 D | op-osd: start osd {21 /var/lib/rook/osd21 /var/lib/rook/osd21/rook-ceph-stage-primary.config ceph /var/lib/rook/osd21/keyring 10e1ccf6-c478-4de8-a112-918900834da8 false false true}
2019-08-07 08:21:52.919355 I | op-osd: deployment for osd 21 already exists. updating if needed
2019-08-07 08:21:52.924513 I | op-k8sutil: updating deployment rook-ceph-osd-21
2019-08-07 08:21:52.930403 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:52.943835 D | op-k8sutil: deployment rook-ceph-osd-21 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:58 +0000 UTC LastTransitionTime:2019-08-06 14:28:58 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:58 +0000 UTC LastTransitionTime:2019-08-06 14:28:51 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-21-79b54f444d" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:54.948818 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-21
2019-08-07 08:21:54.948852 I | op-osd: started deployment for osd 21 (dir=false, type=bluestore)
2019-08-07 08:21:54.948878 D | op-osd: start osd {33 /var/lib/rook/osd33 /var/lib/rook/osd33/rook-ceph-stage-primary.config ceph /var/lib/rook/osd33/keyring 2e065866-5c7b-4350-a30f-60a772d59043 false false true}
2019-08-07 08:21:54.958072 I | op-osd: deployment for osd 33 already exists. updating if needed
2019-08-07 08:21:54.962441 I | op-k8sutil: updating deployment rook-ceph-osd-33
2019-08-07 08:21:54.975094 D | op-k8sutil: deployment rook-ceph-osd-33 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:54 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-33-74669d77c5" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:56.474390 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:56.980175 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-33
2019-08-07 08:21:56.980209 I | op-osd: started deployment for osd 33 (dir=false, type=bluestore)
2019-08-07 08:21:56.980235 D | op-osd: start osd {37 /var/lib/rook/osd37 /var/lib/rook/osd37/rook-ceph-stage-primary.config ceph /var/lib/rook/osd37/keyring 6b42cb02-f49e-4882-ac02-612326983b18 false false true}
2019-08-07 08:21:56.990274 I | op-osd: deployment for osd 37 already exists. updating if needed
2019-08-07 08:21:56.997343 I | op-k8sutil: updating deployment rook-ceph-osd-37
2019-08-07 08:21:57.081668 D | op-k8sutil: deployment rook-ceph-osd-37 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:55 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-37-54b4c45bf" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:57.186531 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:57.358078 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:57.835542 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:58.128264 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:58.603510 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:58.711772 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:58.881480 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:58.896552 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:59.086605 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-37
2019-08-07 08:21:59.086636 I | op-osd: started deployment for osd 37 (dir=false, type=bluestore)
2019-08-07 08:21:59.086663 D | op-osd: start osd {41 /var/lib/rook/osd41 /var/lib/rook/osd41/rook-ceph-stage-primary.config ceph /var/lib/rook/osd41/keyring 1eddc614-9ec1-4c10-bc12-5477d92d06da false false true}
2019-08-07 08:21:59.097526 I | op-osd: deployment for osd 41 already exists. updating if needed
2019-08-07 08:21:59.101596 I | op-k8sutil: updating deployment rook-ceph-osd-41
2019-08-07 08:21:59.177722 D | op-k8sutil: deployment rook-ceph-osd-41 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:56 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-41-78d847cb7b" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:21:59.488600 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:59.499648 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:59.615179 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:59.788463 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:59.891412 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:21:59.995738 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:00.388736 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:01.182924 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-41
2019-08-07 08:22:01.182956 I | op-osd: started deployment for osd 41 (dir=false, type=bluestore)
2019-08-07 08:22:01.182984 D | op-osd: start osd {45 /var/lib/rook/osd45 /var/lib/rook/osd45/rook-ceph-stage-primary.config ceph /var/lib/rook/osd45/keyring ad32e238-704f-4c67-9335-d997c3364672 false false true}
2019-08-07 08:22:01.198473 I | op-osd: deployment for osd 45 already exists. updating if needed
2019-08-07 08:22:01.266383 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:01.266725 I | op-k8sutil: updating deployment rook-ceph-osd-45
2019-08-07 08:22:01.267218 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:01.368582 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-45
2019-08-07 08:22:01.368625 I | op-osd: started deployment for osd 45 (dir=false, type=bluestore)
2019-08-07 08:22:01.368656 D | op-osd: start osd {13 /var/lib/rook/osd13 /var/lib/rook/osd13/rook-ceph-stage-primary.config ceph /var/lib/rook/osd13/keyring c30cff50-9e79-4d07-9e6f-af7c227f1594 false false true}
2019-08-07 08:22:01.378831 I | op-osd: deployment for osd 13 already exists. updating if needed
2019-08-07 08:22:01.563196 I | op-k8sutil: updating deployment rook-ceph-osd-13
2019-08-07 08:22:01.586726 D | op-k8sutil: deployment rook-ceph-osd-13 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:55 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-13-674bdc9fcf" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:02.306508 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:02.581034 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:02.884493 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:02.953552 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:03.591737 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-13
2019-08-07 08:22:03.591774 I | op-osd: started deployment for osd 13 (dir=false, type=bluestore)
2019-08-07 08:22:03.591802 D | op-osd: start osd {25 /var/lib/rook/osd25 /var/lib/rook/osd25/rook-ceph-stage-primary.config ceph /var/lib/rook/osd25/keyring 963a7d1c-0b2d-42a1-a9f4-5f8d719d6bcb false false true}
2019-08-07 08:22:03.602125 I | op-osd: deployment for osd 25 already exists. updating if needed
2019-08-07 08:22:03.606386 I | op-k8sutil: updating deployment rook-ceph-osd-25
2019-08-07 08:22:03.679397 D | op-k8sutil: deployment rook-ceph-osd-25 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:52 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-25-6fcd97f94b" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:05.684565 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-25
2019-08-07 08:22:05.684602 I | op-osd: started deployment for osd 25 (dir=false, type=bluestore)
2019-08-07 08:22:05.684631 D | op-osd: start osd {29 /var/lib/rook/osd29 /var/lib/rook/osd29/rook-ceph-stage-primary.config ceph /var/lib/rook/osd29/keyring 5b8289ff-184a-4c7a-b77b-c5524376ef96 false false true}
2019-08-07 08:22:05.694427 I | op-osd: deployment for osd 29 already exists. updating if needed
2019-08-07 08:22:05.698592 I | op-k8sutil: updating deployment rook-ceph-osd-29
2019-08-07 08:22:05.712077 D | op-k8sutil: deployment rook-ceph-osd-29 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:53 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-29-7c6cc8c596" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:06.490859 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:07.207887 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:07.381148 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:07.717103 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-29
2019-08-07 08:22:07.717139 I | op-osd: started deployment for osd 29 (dir=false, type=bluestore)
2019-08-07 08:22:07.717165 D | op-osd: start osd {9 /var/lib/rook/osd9 /var/lib/rook/osd9/rook-ceph-stage-primary.config ceph /var/lib/rook/osd9/keyring 150221d5-12d0-4bcd-89be-a7119253ba02 false false true}
2019-08-07 08:22:07.726110 I | op-osd: deployment for osd 9 already exists. updating if needed
2019-08-07 08:22:07.730196 I | op-k8sutil: updating deployment rook-ceph-osd-9
2019-08-07 08:22:07.767183 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-9
2019-08-07 08:22:07.767210 I | op-osd: started deployment for osd 9 (dir=false, type=bluestore)
2019-08-07 08:22:07.770722 I | op-osd: osd orchestration status for node k8s-worker-104.lxstage.domain.com is completed
2019-08-07 08:22:07.770748 I | op-osd: starting 12 osd daemons on node k8s-worker-104.lxstage.domain.com
2019-08-07 08:22:07.770778 D | op-osd: start osd {30 /var/lib/rook/osd30 /var/lib/rook/osd30/rook-ceph-stage-primary.config ceph /var/lib/rook/osd30/keyring 534b5403-e8db-466f-8b82-7a39beb26c8e false false true}
2019-08-07 08:22:07.781741 I | op-osd: deployment for osd 30 already exists. updating if needed
2019-08-07 08:22:07.785671 I | op-k8sutil: updating deployment rook-ceph-osd-30
2019-08-07 08:22:07.809820 D | op-k8sutil: deployment rook-ceph-osd-30 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:53 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-30-5f7bbb76dc" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:07.857625 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:08.149766 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:08.621346 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:08.731689 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:08.899192 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:08.912706 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:09.516296 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:09.527470 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:09.633873 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:09.808523 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:09.814807 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-30
2019-08-07 08:22:09.814834 I | op-osd: started deployment for osd 30 (dir=false, type=bluestore)
2019-08-07 08:22:09.814859 D | op-osd: start osd {34 /var/lib/rook/osd34 /var/lib/rook/osd34/rook-ceph-stage-primary.config ceph /var/lib/rook/osd34/keyring 5510ae67-d8fd-4b5a-ad33-b6855bdd96c5 false false true}
2019-08-07 08:22:09.823464 I | op-osd: deployment for osd 34 already exists. updating if needed
2019-08-07 08:22:09.827437 I | op-k8sutil: updating deployment rook-ceph-osd-34
2019-08-07 08:22:09.841187 D | op-k8sutil: deployment rook-ceph-osd-34 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:54 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-34-7f55d7c95f" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:09.909075 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:10.013704 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:10.408617 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:11.224803 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:11.262384 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:11.846253 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-34
2019-08-07 08:22:11.846285 I | op-osd: started deployment for osd 34 (dir=false, type=bluestore)
2019-08-07 08:22:11.846313 D | op-osd: start osd {43 /var/lib/rook/osd43 /var/lib/rook/osd43/rook-ceph-stage-primary.config ceph /var/lib/rook/osd43/keyring 15869916-ebe1-4283-b9c7-dda18dacee5d false false true}
2019-08-07 08:22:11.855522 I | op-osd: deployment for osd 43 already exists. updating if needed
2019-08-07 08:22:11.859599 I | op-k8sutil: updating deployment rook-ceph-osd-43
2019-08-07 08:22:11.872767 D | op-k8sutil: deployment rook-ceph-osd-43 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:56 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-43-85fb57984c" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:12.327194 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:12.600340 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:12.913199 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:12.977559 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:13.879675 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-43
2019-08-07 08:22:13.879709 I | op-osd: started deployment for osd 43 (dir=false, type=bluestore)
2019-08-07 08:22:13.879736 D | op-osd: start osd {46 /var/lib/rook/osd46 /var/lib/rook/osd46/rook-ceph-stage-primary.config ceph /var/lib/rook/osd46/keyring f5afc7df-8126-4dee-9d5f-7396b2e3b776 false false true}
2019-08-07 08:22:13.890804 I | op-osd: deployment for osd 46 already exists. updating if needed
2019-08-07 08:22:13.898143 I | op-k8sutil: updating deployment rook-ceph-osd-46
2019-08-07 08:22:13.927122 D | op-k8sutil: deployment rook-ceph-osd-46 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:57 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-46-667c7cc9cb" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:15.931967 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-46
2019-08-07 08:22:15.931996 I | op-osd: started deployment for osd 46 (dir=false, type=bluestore)
2019-08-07 08:22:15.932019 D | op-osd: start osd {7 /var/lib/rook/osd7 /var/lib/rook/osd7/rook-ceph-stage-primary.config ceph /var/lib/rook/osd7/keyring 62b6e926-dde5-4aac-a145-182b3e5aab80 false false true}
2019-08-07 08:22:15.942051 I | op-osd: deployment for osd 7 already exists. updating if needed
2019-08-07 08:22:15.946214 I | op-k8sutil: updating deployment rook-ceph-osd-7
2019-08-07 08:22:15.962716 D | op-k8sutil: deployment rook-ceph-osd-7 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:29:01 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:28:58 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-7-d85649bcc" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:16.504726 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:17.231176 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:17.411311 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:17.875627 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:17.968027 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-7
2019-08-07 08:22:17.968059 I | op-osd: started deployment for osd 7 (dir=false, type=bluestore)
2019-08-07 08:22:17.968084 D | op-osd: start osd {19 /var/lib/rook/osd19 /var/lib/rook/osd19/rook-ceph-stage-primary.config ceph /var/lib/rook/osd19/keyring 15fa3676-063e-4750-9f65-dc2d31106a6f false false true}
2019-08-07 08:22:17.983023 I | op-osd: deployment for osd 19 already exists. updating if needed
2019-08-07 08:22:17.987137 I | op-k8sutil: updating deployment rook-ceph-osd-19
2019-08-07 08:22:18.000159 D | op-k8sutil: deployment rook-ceph-osd-19 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:55 +0000 UTC LastTransitionTime:2019-08-06 14:28:55 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:55 +0000 UTC LastTransitionTime:2019-08-06 14:28:50 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-19-d4774b88d" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:18.170696 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:18.640169 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:18.748441 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:18.918146 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:18.931676 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:19.540537 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:19.563027 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:19.652654 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:19.827755 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:19.926305 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:20.005291 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-19
2019-08-07 08:22:20.005322 I | op-osd: started deployment for osd 19 (dir=false, type=bluestore)
2019-08-07 08:22:20.005349 D | op-osd: start osd {15 /var/lib/rook/osd15 /var/lib/rook/osd15/rook-ceph-stage-primary.config ceph /var/lib/rook/osd15/keyring ee183c15-8b68-47bc-bbd8-2aa2ab0f068e false false true}
2019-08-07 08:22:20.020288 I | op-osd: deployment for osd 15 already exists. updating if needed
2019-08-07 08:22:20.025816 I | op-k8sutil: updating deployment rook-ceph-osd-15
2019-08-07 08:22:20.163806 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:20.167991 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-15
2019-08-07 08:22:20.168017 I | op-osd: started deployment for osd 15 (dir=false, type=bluestore)
2019-08-07 08:22:20.168044 D | op-osd: start osd {22 /var/lib/rook/osd22 /var/lib/rook/osd22/rook-ceph-stage-primary.config ceph /var/lib/rook/osd22/keyring 4e40f781-204a-4785-ab26-fa9704d2c915 false false true}
2019-08-07 08:22:20.177455 I | op-osd: deployment for osd 22 already exists. updating if needed
2019-08-07 08:22:20.181364 I | op-k8sutil: updating deployment rook-ceph-osd-22
2019-08-07 08:22:20.196680 D | op-k8sutil: deployment rook-ceph-osd-22 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:58 +0000 UTC LastTransitionTime:2019-08-06 14:28:58 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:58 +0000 UTC LastTransitionTime:2019-08-06 14:28:51 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-22-8694f8c86" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:20.437120 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:21.245790 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:21.268886 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:22.201812 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-22
2019-08-07 08:22:22.201845 I | op-osd: started deployment for osd 22 (dir=false, type=bluestore)
2019-08-07 08:22:22.201871 D | op-osd: start osd {26 /var/lib/rook/osd26 /var/lib/rook/osd26/rook-ceph-stage-primary.config ceph /var/lib/rook/osd26/keyring 5c0d1771-9e93-41f1-bcc1-d95c810b4311 false false true}
2019-08-07 08:22:22.213743 I | op-osd: deployment for osd 26 already exists. updating if needed
2019-08-07 08:22:22.262308 I | op-k8sutil: updating deployment rook-ceph-osd-26
2019-08-07 08:22:22.279755 D | op-k8sutil: deployment rook-ceph-osd-26 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:52 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-26-5d666577d6" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:22.347413 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:22.620216 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:22.922879 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:23.007235 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:24.367150 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-26
2019-08-07 08:22:24.367185 I | op-osd: started deployment for osd 26 (dir=false, type=bluestore)
2019-08-07 08:22:24.367212 D | op-osd: start osd {3 /var/lib/rook/osd3 /var/lib/rook/osd3/rook-ceph-stage-primary.config ceph /var/lib/rook/osd3/keyring 2447ad07-8fb4-4e28-b7a5-12a8e8a126ad false false true}
2019-08-07 08:22:24.377327 I | op-osd: deployment for osd 3 already exists. updating if needed
2019-08-07 08:22:24.381358 I | op-k8sutil: updating deployment rook-ceph-osd-3
2019-08-07 08:22:24.394360 D | op-k8sutil: deployment rook-ceph-osd-3 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:53 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-3-7cc6bb9c4f" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:26.398989 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-3
2019-08-07 08:22:26.399025 I | op-osd: started deployment for osd 3 (dir=false, type=bluestore)
2019-08-07 08:22:26.399053 D | op-osd: start osd {38 /var/lib/rook/osd38 /var/lib/rook/osd38/rook-ceph-stage-primary.config ceph /var/lib/rook/osd38/keyring f244ef2a-c589-49e2-8850-0b0f9f42cf49 false false true}
2019-08-07 08:22:26.408226 I | op-osd: deployment for osd 38 already exists. updating if needed
2019-08-07 08:22:26.412298 I | op-k8sutil: updating deployment rook-ceph-osd-38
2019-08-07 08:22:26.428139 D | op-k8sutil: deployment rook-ceph-osd-38 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:55 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-38-5db6b86dd5" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:26.526487 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:27.247463 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:27.427873 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:27.891785 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:28.191660 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:28.433392 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-38
2019-08-07 08:22:28.433428 I | op-osd: started deployment for osd 38 (dir=false, type=bluestore)
2019-08-07 08:22:28.433452 D | op-osd: start osd {11 /var/lib/rook/osd11 /var/lib/rook/osd11/rook-ceph-stage-primary.config ceph /var/lib/rook/osd11/keyring 5bd122c4-3b6a-4304-bb03-29c43a2ec2a5 false false true}
2019-08-07 08:22:28.448685 I | op-osd: deployment for osd 11 already exists. updating if needed
2019-08-07 08:22:28.452609 I | op-k8sutil: updating deployment rook-ceph-osd-11
2019-08-07 08:22:28.466386 D | op-k8sutil: deployment rook-ceph-osd-11 status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-11-75487b9bcd" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:22:28.655866 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:28.765235 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:28.944515 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:28.951153 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:29.564658 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:29.663541 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:29.671901 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:29.849417 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:29.948445 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:30.056183 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:30.457681 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:30.470927 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-11
2019-08-07 08:22:30.470955 I | op-osd: started deployment for osd 11 (dir=false, type=bluestore)
2019-08-07 08:22:30.474294 I | op-osd: 4/4 node(s) completed osd provisioning
2019-08-07 08:22:30.474387 I | op-osd: checking if any nodes were removed
2019-08-07 08:22:30.765417 D | op-osd: adding osd rook-ceph-osd-0 to node k8s-worker-102.lxstage.domain.com
2019-08-07 08:22:30.765461 D | op-osd: adding osd rook-ceph-osd-1 to node k8s-worker-103.lxstage.domain.com
2019-08-07 08:22:30.765475 D | op-osd: adding osd rook-ceph-osd-10 to node k8s-worker-101.lxstage.domain.com
2019-08-07 08:22:30.765486 D | op-osd: adding osd rook-ceph-osd-11 to node k8s-worker-104.lxstage.domain.com
2019-08-07 08:22:30.765496 D | op-osd: adding osd rook-ceph-osd-12 to node k8s-worker-102.lxstage.domain.com
2019-08-07 08:22:30.765507 D | op-osd: adding osd rook-ceph-osd-13 to node k8s-worker-103.lxstage.domain.com
2019-08-07 08:22:30.765520 D | op-osd: adding osd rook-ceph-osd-14 to node k8s-worker-101.lxstage.domain.com
2019-08-07 08:22:30.765533 D | op-osd: adding osd rook-ceph-osd-15 to node k8s-worker-104.lxstage.domain.com
2019-08-07 08:22:30.765545 D | op-osd: adding osd rook-ceph-osd-16 to node k8s-worker-102.lxstage.domain.com
2019-08-07 08:22:30.765555 D | op-osd: adding osd rook-ceph-osd-17 to node k8s-worker-103.lxstage.domain.com
2019-08-07 08:22:30.765567 D | op-osd: adding osd rook-ceph-osd-18 to node k8s-worker-101.lxstage.domain.com
2019-08-07 08:22:30.765584 D | op-osd: adding osd rook-ceph-osd-19 to node k8s-worker-104.lxstage.domain.com
2019-08-07 08:22:30.765595 D | op-osd: adding osd rook-ceph-osd-2 to node k8s-worker-101.lxstage.domain.com
2019-08-07 08:22:30.765610 D | op-osd: adding osd rook-ceph-osd-20 to node k8s-worker-102.lxstage.domain.com
2019-08-07 08:22:30.765626 D | op-osd: adding osd rook-ceph-osd-21 to node k8s-worker-103.lxstage.domain.com
2019-08-07 08:22:30.765637 D | op-osd: adding osd rook-ceph-osd-22 to node k8s-worker-104.lxstage.domain.com
2019-08-07 08:22:30.765646 D | op-osd: adding osd rook-ceph-osd-23 to node k8s-worker-101.lxstage.domain.com
2019-08-07 08:22:30.765657 D | op-osd: adding osd rook-ceph-osd-24 to node k8s-worker-102.lxstage.domain.com
2019-08-07 08:22:30.765669 D | op-osd: adding osd rook-ceph-osd-25 to node k8s-worker-103.lxstage.domain.com
2019-08-07 08:22:30.765682 D | op-osd: adding osd rook-ceph-osd-26 to node k8s-worker-104.lxstage.domain.com
2019-08-07 08:22:30.765697 D | op-osd: adding osd rook-ceph-osd-27 to node k8s-worker-102.lxstage.domain.com
2019-08-07 08:22:30.765708 D | op-osd: adding osd rook-ceph-osd-28 to node k8s-worker-101.lxstage.domain.com
2019-08-07 08:22:30.765718 D | op-osd: adding osd rook-ceph-osd-29 to node k8s-worker-103.lxstage.domain.com
2019-08-07 08:22:30.765727 D | op-osd: adding osd rook-ceph-osd-3 to node k8s-worker-104.lxstage.domain.com
2019-08-07 08:22:30.765736 D | op-osd: adding osd rook-ceph-osd-30 to node k8s-worker-104.lxstage.domain.com
2019-08-07 08:22:30.765746 D | op-osd: adding osd rook-ceph-osd-31 to node k8s-worker-102.lxstage.domain.com
2019-08-07 08:22:30.765757 D | op-osd: adding osd rook-ceph-osd-32 to node k8s-worker-101.lxstage.domain.com
2019-08-07 08:22:30.765773 D | op-osd: adding osd rook-ceph-osd-33 to node k8s-worker-103.lxstage.domain.com
2019-08-07 08:22:30.765786 D | op-osd: adding osd rook-ceph-osd-34 to node k8s-worker-104.lxstage.domain.com
2019-08-07 08:22:30.765795 D | op-osd: adding osd rook-ceph-osd-35 to node k8s-worker-102.lxstage.domain.com
2019-08-07 08:22:30.765810 D | op-osd: adding osd rook-ceph-osd-36 to node k8s-worker-101.lxstage.domain.com
2019-08-07 08:22:30.765865 D | op-osd: adding osd rook-ceph-osd-37 to node k8s-worker-103.lxstage.domain.com
2019-08-07 08:22:30.765877 D | op-osd: adding osd rook-ceph-osd-38 to node k8s-worker-104.lxstage.domain.com
2019-08-07 08:22:30.765886 D | op-osd: adding osd rook-ceph-osd-39 to node k8s-worker-102.lxstage.domain.com
2019-08-07 08:22:30.765895 D | op-osd: adding osd rook-ceph-osd-4 to node k8s-worker-102.lxstage.domain.com
2019-08-07 08:22:30.765914 D | op-osd: adding osd rook-ceph-osd-40 to node k8s-worker-101.lxstage.domain.com
2019-08-07 08:22:30.765925 D | op-osd: adding osd rook-ceph-osd-41 to node k8s-worker-103.lxstage.domain.com
2019-08-07 08:22:30.765945 D | op-osd: adding osd rook-ceph-osd-42 to node k8s-worker-102.lxstage.domain.com
2019-08-07 08:22:30.765957 D | op-osd: adding osd rook-ceph-osd-43 to node k8s-worker-104.lxstage.domain.com
2019-08-07 08:22:30.765966 D | op-osd: adding osd rook-ceph-osd-44 to node k8s-worker-101.lxstage.domain.com
2019-08-07 08:22:30.765974 D | op-osd: adding osd rook-ceph-osd-45 to node k8s-worker-103.lxstage.domain.com
2019-08-07 08:22:30.765983 D | op-osd: adding osd rook-ceph-osd-46 to node k8s-worker-104.lxstage.domain.com
2019-08-07 08:22:30.765993 D | op-osd: adding osd rook-ceph-osd-47 to node k8s-worker-101.lxstage.domain.com
2019-08-07 08:22:30.766002 D | op-osd: adding osd rook-ceph-osd-5 to node k8s-worker-103.lxstage.domain.com
2019-08-07 08:22:30.766016 D | op-osd: adding osd rook-ceph-osd-6 to node k8s-worker-101.lxstage.domain.com
2019-08-07 08:22:30.766032 D | op-osd: adding osd rook-ceph-osd-7 to node k8s-worker-104.lxstage.domain.com
2019-08-07 08:22:30.766042 D | op-osd: adding osd rook-ceph-osd-8 to node k8s-worker-102.lxstage.domain.com
2019-08-07 08:22:30.766050 D | op-osd: adding osd rook-ceph-osd-9 to node k8s-worker-103.lxstage.domain.com
2019-08-07 08:22:30.783112 I | op-osd: processing 0 removed nodes
2019-08-07 08:22:30.783135 I | op-osd: done processing removed nodes
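The adding-osd lines above are the operator rebuilding its OSD-to-node map before checking for removed nodes. The same layout can be read back from the cluster itself; a sketch using ceph osd tree with the connection flags this log shows the operator passing to its other ceph commands:

    # show each OSD under its CRUSH host bucket, matching the map built above
    ceph osd tree \
      --cluster=rook-ceph-stage-primary \
      --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config \
      --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring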
2019-08-07 08:22:30.783144 I | exec: Running command: ceph versions
2019-08-07 08:22:31.265187 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:31.287949 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:32.362293 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:32.663743 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:32.963179 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:33.063841 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:33.199601 D | cephclient: {
"mon": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 5
},
"mgr": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 1
},
"osd": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 48
},
"mds": {},
"overall": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 54
}
}
2019-08-07 08:22:33.199641 D | cephclient: {
"mon": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 5
},
"mgr": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 1
},
"osd": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 48
},
"mds": {},
"overall": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 54
}
}
2019-08-07 08:22:33.199727 I | op-osd: len of version.Osd is 1
2019-08-07 08:22:33.199742 I | op-osd: v is ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
2019-08-07 08:22:33.199782 I | op-osd: osdVersion is: 14.2.1 nautilus
2019-08-07 08:22:33.199797 I | exec: Running command: ceph osd require-osd-release nautilus
2019-08-07 08:22:35.371253 I | cephclient: successfully disallowed pre-nautilus osds and enabled all new nautilus-only functionality
2019-08-07 08:22:35.371296 I | op-osd: completed running osds in namespace rook-ceph-stage-primary
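For reference, the version gate the operator just applied can be run by hand; both commands below are exactly the ones logged above, and the second is safe to repeat once all daemons report nautilus:

    # confirm mon/mgr/osd all report 14.2.1 nautilus before raising the floor
    ceph versions
    # disallow pre-nautilus OSDs from booting; idempotent once set
    ceph osd require-osd-release nautilus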
2019-08-07 08:22:35.371315 I | rbd-mirror: configure rbd-mirroring with 0 workers
2019-08-07 08:22:35.379628 I | rbd-mirror: no extra daemons to remove
2019-08-07 08:22:35.379654 I | op-cluster: Done creating rook instance in namespace rook-ceph-stage-primary
2019-08-07 08:22:35.379670 I | op-cluster: CephCluster rook-ceph-stage-primary status: Created
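The Created status written here lands on the CephCluster custom resource; a sketch for checking it, assuming the standard Rook CRD name (not shown in this log):

    # list the CephCluster resource; its status should now read Created
    kubectl -n rook-ceph-stage-primary get cephcluster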
2019-08-07 08:22:35.401670 I | op-pool: start watching pool resources in namespace rook-ceph-stage-primary
2019-08-07 08:22:35.405717 I | exec: Running command: ceph osd crush dump --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/664653176
2019-08-07 08:22:35.406357 I | op-pool: start watching legacy rook pools in all namespaces
2019-08-07 08:22:35.406397 I | op-object: start watching object store resources in namespace rook-ceph-stage-primary
2019-08-07 08:22:35.464402 I | op-object: start watching legacy rook objectstores in all namespaces
2019-08-07 08:22:35.464440 I | op-object: start watching object store user resources in namespace rook-ceph-stage-primary
2019-08-07 08:22:35.464455 I | op-file: start watching filesystem resource in namespace rook-ceph-stage-primary
2019-08-07 08:22:35.470362 I | op-file: start watching legacy rook filesystems in all namespaces
2019-08-07 08:22:35.470403 I | op-nfs: start watching ceph nfs resource in namespace rook-ceph-stage-primary
2019-08-07 08:22:35.470426 I | op-cluster: ceph status check interval is 60s
2019-08-07 08:22:35.470635 D | op-cluster: checking health of cluster
2019-08-07 08:22:35.470835 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/179877495
2019-08-07 08:22:35.562902 I | op-cluster: finalizer already set on cluster rook-ceph-stage-primary
2019-08-07 08:22:35.563152 D | op-cluster: update event for cluster rook-ceph-stage-primary
2019-08-07 08:22:35.563315 D | op-cluster: update event for cluster rook-ceph-stage-primary is not supported
2019-08-07 08:22:36.564114 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:37.263402 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:37.463870 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:37.909897 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:38.263647 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:38.763937 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:38.785841 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:38.965350 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:38.972484 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:39.664108 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:39.665268 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:39.762459 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:39.870870 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:39.972398 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:40.163882 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:40.563962 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:41.364445 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:41.365719 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:41.562429 I | exec: 2019-08-07 08:22:37.969 7f02a1043700 1 librados: starting msgr at
2019-08-07 08:22:37.969 7f02a1043700 1 librados: starting objecter
2019-08-07 08:22:37.969 7f02a1043700 1 librados: setting wanted keys
2019-08-07 08:22:37.969 7f02a1043700 1 librados: calling monclient init
2019-08-07 08:22:38.064 7f02a1043700 1 librados: init done
2019-08-07 08:22:41.263 7f02a1043700 10 librados: watch_flush enter
2019-08-07 08:22:41.263 7f02a1043700 10 librados: watch_flush exit
2019-08-07 08:22:41.361 7f02a1043700 1 librados: shutdown
2019-08-07 08:22:41.564132 I | op-pool: creating pool rook-ceph-stage-primary-pool in namespace rook-ceph-stage-primary
2019-08-07 08:22:41.564288 I | exec: Running command: ceph osd crush rule create-simple rook-ceph-stage-primary-pool default host --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/841130346
2019-08-07 08:22:42.467053 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:42.763986 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:42.968646 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:43.062720 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:43.262417 I | exec: 2019-08-07 08:22:37.669 7f85770f3700 1 librados: starting msgr at
2019-08-07 08:22:37.669 7f85770f3700 1 librados: starting objecter
2019-08-07 08:22:37.669 7f85770f3700 1 librados: setting wanted keys
2019-08-07 08:22:37.669 7f85770f3700 1 librados: calling monclient init
2019-08-07 08:22:37.766 7f85770f3700 1 librados: init done
2019-08-07 08:22:43.061 7f85770f3700 10 librados: watch_flush enter
2019-08-07 08:22:43.061 7f85770f3700 10 librados: watch_flush exit
2019-08-07 08:22:43.062 7f85770f3700 1 librados: shutdown
2019-08-07 08:22:43.264103 D | op-cluster: Cluster status: {Health:{Status:HEALTH_WARN Checks:map[MON_DOWN:{Severity:HEALTH_WARN Summary:{Message:1/5 mons down, quorum a,b,f,g}}]} FSID:7dd854f1-2892-4201-ab69-d4797f12ac50 ElectionEpoch:234 Quorum:[0 1 2 3] QuorumNames:[a b f g] MonMap:{Epoch:5 FSID:7dd854f1-2892-4201-ab69-d4797f12ac50 CreatedTime:2019-08-05 15:05:49.660802 ModifiedTime:2019-08-05 15:09:42.905706 Mons:[{Name:a Rank:0 Address:100.70.46.205:6789/0} {Name:b Rank:1 Address:100.67.17.84:6789/0} {Name:f Rank:2 Address:100.69.115.5:6789/0} {Name:g Rank:3 Address:100.66.122.247:6789/0} {Name:h Rank:4 Address:100.64.242.138:6789/0}]} OsdMap:{OsdMap:{Epoch:161 NumOsd:48 NumUpOsd:48 NumInOsd:48 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:512}] Version:0 NumPgs:512 DataBytes:125898804 UsedBytes:52301021184 AvailableBytes:51126529277952 TotalBytes:51178830299136 ReadBps:0 WriteBps:0 ReadOps:0 WriteOps:0 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:118 ActiveGID:534391 ActiveName:a ActiveAddr:100.192.28.144:6801/1 Available:true Standbys:[]}}
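The HEALTH_WARN above is the MON_DOWN check: mon h is in the monmap but outside the quorum a,b,f,g. A sketch for inspecting the warning by hand; the first command is the one the health loop runs per the log, and ceph health detail is a standard command for expanding the check:

    # same JSON the operator parses into the Cluster status struct above
    ceph status --format json
    # human-readable expansion of MON_DOWN, naming the mon that is down
    ceph health detail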
2019-08-07 08:22:43.362222 D | op-cluster: update event for cluster rook-ceph-stage-primary
2019-08-07 08:22:43.362436 D | op-cluster: update event for cluster rook-ceph-stage-primary is not supported
2019-08-07 08:22:44.877177 I | exec: 2019-08-07 08:22:43.687 7fd5945bb700 1 librados: starting msgr at
2019-08-07 08:22:43.687 7fd5945bb700 1 librados: starting objecter
2019-08-07 08:22:43.761 7fd5945bb700 1 librados: setting wanted keys
2019-08-07 08:22:43.761 7fd5945bb700 1 librados: calling monclient init
2019-08-07 08:22:43.767 7fd5945bb700 1 librados: init done
rule rook-ceph-stage-primary-pool already exists
2019-08-07 08:22:44.769 7fd5945bb700 10 librados: watch_flush enter
2019-08-07 08:22:44.769 7fd5945bb700 10 librados: watch_flush exit
2019-08-07 08:22:44.771 7fd5945bb700 1 librados: shutdown
2019-08-07 08:22:44.877486 I | exec: Running command: ceph osd pool create rook-ceph-stage-primary-pool 0 replicated rook-ceph-stage-primary-pool --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/473995201
2019-08-07 08:22:46.575611 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:46.968820 I | exec: 2019-08-07 08:22:45.862 7fdfb1bb3700 1 librados: starting msgr at
2019-08-07 08:22:45.862 7fdfb1bb3700 1 librados: starting objecter
2019-08-07 08:22:45.863 7fdfb1bb3700 1 librados: setting wanted keys
2019-08-07 08:22:45.863 7fdfb1bb3700 1 librados: calling monclient init
2019-08-07 08:22:45.869 7fdfb1bb3700 1 librados: init done
pool 'rook-ceph-stage-primary-pool' already exists
2019-08-07 08:22:46.861 7fdfb1bb3700 10 librados: watch_flush enter
2019-08-07 08:22:46.861 7fdfb1bb3700 10 librados: watch_flush exit
2019-08-07 08:22:46.862 7fdfb1bb3700 1 librados: shutdown
2019-08-07 08:22:46.969139 I | exec: Running command: ceph osd pool set rook-ceph-stage-primary-pool size 4 --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/808008748
2019-08-07 08:22:47.362150 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:47.466155 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:47.963771 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:48.263667 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:48.763845 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:48.862270 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:48.971591 I | exec: 2019-08-07 08:22:47.883 7f85eba3a700 1 librados: starting msgr at
2019-08-07 08:22:47.883 7f85eba3a700 1 librados: starting objecter
2019-08-07 08:22:47.883 7f85eba3a700 1 librados: setting wanted keys
2019-08-07 08:22:47.883 7f85eba3a700 1 librados: calling monclient init
2019-08-07 08:22:47.966 7f85eba3a700 1 librados: init done
set pool 1 size to 4
2019-08-07 08:22:48.895 7f85eba3a700 10 librados: watch_flush enter
2019-08-07 08:22:48.895 7f85eba3a700 10 librados: watch_flush exit
2019-08-07 08:22:48.897 7f85eba3a700 1 librados: shutdown
2019-08-07 08:22:48.971985 I | exec: Running command: ceph osd pool application enable rook-ceph-stage-primary-pool rbd --yes-i-really-mean-it --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/615296923
2019-08-07 08:22:49.062348 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:49.063423 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:49.664105 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:49.665167 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:49.762652 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:49.963726 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:50.062477 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:50.162387 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:50.562223 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:51.305736 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:51.328812 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:52.073753 I | exec: 2019-08-07 08:22:49.982 7f3d785bf700 1 librados: starting msgr at
2019-08-07 08:22:49.982 7f3d785bf700 1 librados: starting objecter
2019-08-07 08:22:50.060 7f3d785bf700 1 librados: setting wanted keys
2019-08-07 08:22:50.060 7f3d785bf700 1 librados: calling monclient init
2019-08-07 08:22:50.067 7f3d785bf700 1 librados: init done
enabled application 'rbd' on pool 'rook-ceph-stage-primary-pool'
2019-08-07 08:22:51.985 7f3d785bf700 10 librados: watch_flush enter
2019-08-07 08:22:51.985 7f3d785bf700 10 librados: watch_flush exit
2019-08-07 08:22:51.986 7f3d785bf700 1 librados: shutdown
2019-08-07 08:22:52.073938 I | cephclient: creating replicated pool rook-ceph-stage-primary-pool succeeded, buf:
2019-08-07 08:22:52.073961 I | op-pool: created pool rook-ceph-stage-primary-pool
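The pool reconcile above is idempotent: the CRUSH rule and pool already existed, so only the size and the application tag were (re)applied. The equivalent manual sequence, copied verbatim from the commands in this log:

    # rule and pool creation are no-ops when they already exist
    ceph osd crush rule create-simple rook-ceph-stage-primary-pool default host
    ceph osd pool create rook-ceph-stage-primary-pool 0 replicated rook-ceph-stage-primary-pool
    # four replicas, tagged for rbd, matching the pool spec being reconciled
    ceph osd pool set rook-ceph-stage-primary-pool size 4
    ceph osd pool application enable rook-ceph-stage-primary-pool rbd --yes-i-really-mean-it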
2019-08-07 08:22:52.389010 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:52.677731 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:52.989121 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:53.081573 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:56.590183 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:57.362761 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:57.490300 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:57.947777 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:58.256654 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:58.715063 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:58.818411 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:59.003775 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:59.013402 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:59.645918 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:59.664220 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:59.730591 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:22:59.917306 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:00.010580 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:00.115426 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:00.565040 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:01.362754 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:01.364151 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:02.408478 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:02.693460 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:03.010663 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:03.103755 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:06.644387 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:07.362419 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:07.510381 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:07.966924 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:08.362308 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:08.743428 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:08.843065 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:09.022792 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:09.033445 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:09.669534 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:09.691867 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:09.750093 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:09.942237 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:10.030975 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:10.135745 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:10.585858 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:11.362532 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:11.384129 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:12.420738 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:12.711033 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:13.036523 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:13.132092 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:16.662676 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:17.335493 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:17.530428 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:17.982977 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:18.297559 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:18.762144 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:18.856502 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:19.040715 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:19.050014 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:19.694586 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:19.719110 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:19.865526 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:19.969787 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:20.047692 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:20.155349 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:20.470854 D | op-mon: checking health of mons
2019-08-07 08:23:20.470997 D | op-mon: Acquiring lock for mon orchestration
2019-08-07 08:23:20.471009 D | op-mon: Acquired lock for mon orchestration
2019-08-07 08:23:20.471018 D | op-mon: Checking health for mons in cluster. rook-ceph-stage-primary
2019-08-07 08:23:20.495139 D | op-mon: targeting the mon count 5
2019-08-07 08:23:20.495384 D | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/431791166
2019-08-07 08:23:20.663962 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:21.462586 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:21.464069 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:22.463294 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:22.568222 I | exec: 2019-08-07 08:23:21.377 7f31f9797700 1 librados: starting msgr at
2019-08-07 08:23:21.377 7f31f9797700 1 librados: starting objecter
2019-08-07 08:23:21.378 7f31f9797700 1 librados: setting wanted keys
2019-08-07 08:23:21.378 7f31f9797700 1 librados: calling monclient init
2019-08-07 08:23:21.463 7f31f9797700 1 librados: init done
2019-08-07 08:23:22.461 7f31f9797700 10 librados: watch_flush enter
2019-08-07 08:23:22.462 7f31f9797700 10 librados: watch_flush exit
2019-08-07 08:23:22.463 7f31f9797700 1 librados: shutdown
2019-08-07 08:23:22.568590 D | cephclient: MON STATUS: {Quorum:[0 1 2 3] MonMap:{Mons:[{Name:a Rank:0 Address:100.70.46.205:6789/0} {Name:b Rank:1 Address:100.67.17.84:6789/0} {Name:f Rank:2 Address:100.69.115.5:6789/0} {Name:g Rank:3 Address:100.66.122.247:6789/0} {Name:h Rank:4 Address:100.64.242.138:6789/0}]}}
2019-08-07 08:23:22.568627 D | op-mon: Mon status: {Quorum:[0 1 2 3] MonMap:{Mons:[{Name:a Rank:0 Address:100.70.46.205:6789/0} {Name:b Rank:1 Address:100.67.17.84:6789/0} {Name:f Rank:2 Address:100.69.115.5:6789/0} {Name:g Rank:3 Address:100.66.122.247:6789/0} {Name:h Rank:4 Address:100.64.242.138:6789/0}]}}
2019-08-07 08:23:22.568648 D | op-mon: mon a found in quorum
2019-08-07 08:23:22.568657 D | op-mon: mon b found in quorum
2019-08-07 08:23:22.568665 D | op-mon: mon f found in quorum
2019-08-07 08:23:22.568675 D | op-mon: mon g found in quorum
2019-08-07 08:23:22.568700 D | op-mon: mon h NOT found in quorum. Mon status: {Quorum:[0 1 2 3] MonMap:{Mons:[{Name:a Rank:0 Address:100.70.46.205:6789/0} {Name:b Rank:1 Address:100.67.17.84:6789/0} {Name:f Rank:2 Address:100.69.115.5:6789/0} {Name:g Rank:3 Address:100.66.122.247:6789/0} {Name:h Rank:4 Address:100.64.242.138:6789/0}]}}
2019-08-07 08:23:22.568711 W | op-mon: mon h not found in quorum, waiting for timeout before failover
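mon h holds rank 4 in the monmap but the quorum list is only [0 1 2 3], which is what trips the failover path. A sketch for making the same comparison by hand, using the command the health check itself runs in this log:

    # quorum array vs. monmap.mons: any mon in the map but not in the array is out of quorum
    ceph mon_status --format json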
2019-08-07 08:23:22.594758 D | op-mon: there are 22 nodes available for 5 mons
2019-08-07 08:23:22.613897 D | op-mon: mon pod on node k8s-worker-00.lxstage.domain.com
2019-08-07 08:23:22.613936 D | op-mon: mon pod on node k8s-worker-101.lxstage.domain.com
2019-08-07 08:23:22.613952 D | op-mon: mon pod on node k8s-worker-102.lxstage.domain.com
2019-08-07 08:23:22.613963 D | op-mon: mon pod on node k8s-worker-103.lxstage.domain.com
2019-08-07 08:23:22.614015 I | op-mon: rebalance: enough nodes available 16 to failover mon a
2019-08-07 08:23:22.614025 I | op-mon: Failing over monitor a
2019-08-07 08:23:22.614058 I | op-mon: starting new mon: &{ResourceName:rook-ceph-mon-i DaemonName:i PublicIP: Port:6789 DataPathMap:0xc000944b40}
2019-08-07 08:23:22.614077 D | op-k8sutil: creating service rook-ceph-mon-i
2019-08-07 08:23:22.638048 I | op-mon: mon i endpoint are [v2:100.70.92.237:3300,v1:100.70.92.237:6789]
2019-08-07 08:23:22.672733 D | op-mon: there are 22 nodes available for 5 mons
2019-08-07 08:23:22.685012 D | op-mon: mon pod on node k8s-worker-00.lxstage.domain.com
2019-08-07 08:23:22.685041 D | op-mon: mon pod on node k8s-worker-101.lxstage.domain.com
2019-08-07 08:23:22.685055 D | op-mon: mon pod on node k8s-worker-102.lxstage.domain.com
2019-08-07 08:23:22.685066 D | op-mon: mon pod on node k8s-worker-103.lxstage.domain.com
2019-08-07 08:23:22.685117 I | op-mon: Found 16 running nodes without mons
2019-08-07 08:23:22.685131 D | op-mon: mon i assigned to node k8s-worker-01.lxstage.domain.com
2019-08-07 08:23:22.685141 D | op-mon: using IP 172.22.254.150 for node k8s-worker-01.lxstage.domain.com
2019-08-07 08:23:22.685151 D | op-mon: mons have been assigned to nodes
2019-08-07 08:23:22.685175 D | op-mon: monConfig: &{rook-ceph-mon-i i 100.70.92.237 6789 0xc000944b40}
2019-08-07 08:23:22.685336 D | op-mon: Starting mon: rook-ceph-mon-i
2019-08-07 08:23:22.700111 I | op-mon: mons created: 1
2019-08-07 08:23:22.700140 I | op-mon: waiting for mon quorum with [i]
2019-08-07 08:23:22.716012 I | op-mon: mon i is not yet running
2019-08-07 08:23:22.716038 I | op-mon: mons running: []
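The replacement mon i was pinned to k8s-worker-01 with IP 100.70.92.237, and the operator now polls until its pod runs. A sketch for watching that from kubectl; the app label is an assumption based on Rook's usual mon pod labels, not something this log shows:

    # watch rook-ceph-mon-i appear and go Running (label assumed: app=rook-ceph-mon)
    kubectl -n rook-ceph-stage-primary get pods -l app=rook-ceph-mon -w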
2019-08-07 08:23:22.731051 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:23.054841 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:23.172887 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:26.675490 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:27.368458 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:27.564836 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:27.730171 I | op-mon: mon i is not yet running
2019-08-07 08:23:27.730199 I | op-mon: mons running: []
2019-08-07 08:23:27.997670 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:28.317585 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:28.784718 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:28.874155 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:29.062454 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:29.068213 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:29.762190 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:29.763646 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:29.784827 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:30.005251 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:30.070049 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:30.172852 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:30.626318 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:31.403529 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:31.418549 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:32.449303 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:32.741833 I | op-mon: mons running: [i]
2019-08-07 08:23:32.742129 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/621743749
2019-08-07 08:23:32.764583 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:33.075004 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:33.263850 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:35.470848 D | op-osd: Checking osd processes status.
2019-08-07 08:23:35.470943 D | op-osd: OSDs with previously detected Down status: map[]
2019-08-07 08:23:35.471190 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/132499488
2019-08-07 08:23:36.690629 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:37.378055 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:37.574640 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:38.016569 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:38.342829 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:38.805267 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:38.890492 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:39.083188 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:39.091234 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:39.747458 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:39.757095 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:39.808506 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:40.021835 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:40.089889 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:40.190604 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:40.646062 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:41.422816 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:41.434389 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:42.466491 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:42.764395 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:43.091848 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:43.262140 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:43.362790 D | op-cluster: checking health of cluster
2019-08-07 08:23:43.363148 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/901585663
2019-08-07 08:23:46.763826 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:47.463878 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:47.663855 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:48.062530 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:48.364177 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:48.824296 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:48.963651 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:49.164042 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:49.164889 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:49.166224 I | exec: 2019-08-07 08:23:42.966 7f0f1b8b4700 1 librados: starting msgr at
2019-08-07 08:23:42.966 7f0f1b8b4700 1 librados: starting objecter
2019-08-07 08:23:42.967 7f0f1b8b4700 1 librados: setting wanted keys
2019-08-07 08:23:42.967 7f0f1b8b4700 1 librados: calling monclient init
2019-08-07 08:23:43.061 7f0f1b8b4700 1 librados: init done
2019-08-07 08:23:48.861 7f0f1b8b4700 10 librados: watch_flush enter
2019-08-07 08:23:48.861 7f0f1b8b4700 10 librados: watch_flush exit
2019-08-07 08:23:48.862 7f0f1b8b4700 1 librados: shutdown
2019-08-07 08:23:49.166592 D | cephclient: MON STATUS: {Quorum:[0 1 2 3 5] MonMap:{Mons:[{Name:a Rank:0 Address:100.70.46.205:6789/0} {Name:b Rank:1 Address:100.67.17.84:6789/0} {Name:f Rank:2 Address:100.69.115.5:6789/0} {Name:g Rank:3 Address:100.66.122.247:6789/0} {Name:h Rank:4 Address:100.64.242.138:6789/0} {Name:i Rank:5 Address:100.70.92.237:6789/0}]}}
2019-08-07 08:23:49.166620 I | op-mon: Monitors in quorum: [a b f g i]
2019-08-07 08:23:49.166632 I | op-mon: ensuring removal of unhealthy monitor a
2019-08-07 08:23:49.180978 D | op-mon: removing monitor a
2019-08-07 08:23:49.181180 I | exec: Running command: ceph mon remove a --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/611279954
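With i in quorum, the operator shrinks the monmap back to five by removing the mon it chose to fail over. The same removal and verification by hand; both commands appear verbatim in this log:

    # drop mon a from the monmap, then confirm the surviving quorum members
    ceph mon remove a
    ceph mon_status --format json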
2019-08-07 08:23:49.365615 I | exec: 2019-08-07 08:23:43.064 7fd1f08b5700 1 librados: starting msgr at
2019-08-07 08:23:43.064 7fd1f08b5700 1 librados: starting objecter
2019-08-07 08:23:43.065 7fd1f08b5700 1 librados: setting wanted keys
2019-08-07 08:23:43.065 7fd1f08b5700 1 librados: calling monclient init
2019-08-07 08:23:43.166 7fd1f08b5700 1 librados: init done
2019-08-07 08:23:48.965 7fd1f08b5700 10 librados: watch_flush enter
2019-08-07 08:23:48.965 7fd1f08b5700 10 librados: watch_flush exit
2019-08-07 08:23:49.062 7fd1f08b5700 1 librados: shutdown
2019-08-07 08:23:49.368273 D | op-osd: osd dump &{[{0 1 1} {1 1 1} {2 1 1} {3 1 1} {4 1 1} {5 1 1} {6 1 1} {7 1 1} {8 1 1} {9 1 1} {10 1 1} {11 1 1} {12 1 1} {13 1 1} {14 1 1} {15 1 1} {16 1 1} {17 1 1} {18 1 1} {19 1 1} {20 1 1} {21 1 1} {22 1 1} {23 1 1} {24 1 1} {25 1 1} {26 1 1} {27 1 1} {28 1 1} {29 1 1} {30 1 1} {31 1 1} {32 1 1} {33 1 1} {34 1 1} {35 1 1} {36 1 1} {37 1 1} {38 1 1} {39 1 1} {40 1 1} {41 1 1} {42 1 1} {43 1 1} {44 1 1} {45 1 1} {46 1 1} {47 1 1}]}
2019-08-07 08:23:49.368297 D | op-osd: validating status of osd.0
2019-08-07 08:23:49.368308 D | op-osd: osd.0 is healthy.
2019-08-07 08:23:49.368316 D | op-osd: validating status of osd.1
2019-08-07 08:23:49.368325 D | op-osd: osd.1 is healthy.
2019-08-07 08:23:49.368333 D | op-osd: validating status of osd.2
2019-08-07 08:23:49.368340 D | op-osd: osd.2 is healthy.
2019-08-07 08:23:49.368346 D | op-osd: validating status of osd.3
2019-08-07 08:23:49.368354 D | op-osd: osd.3 is healthy.
2019-08-07 08:23:49.368362 D | op-osd: validating status of osd.4
2019-08-07 08:23:49.368369 D | op-osd: osd.4 is healthy.
2019-08-07 08:23:49.368377 D | op-osd: validating status of osd.5
2019-08-07 08:23:49.368385 D | op-osd: osd.5 is healthy.
2019-08-07 08:23:49.368392 D | op-osd: validating status of osd.6
2019-08-07 08:23:49.368399 D | op-osd: osd.6 is healthy.
2019-08-07 08:23:49.368407 D | op-osd: validating status of osd.7
2019-08-07 08:23:49.368414 D | op-osd: osd.7 is healthy.
2019-08-07 08:23:49.368422 D | op-osd: validating status of osd.8
2019-08-07 08:23:49.368430 D | op-osd: osd.8 is healthy.
2019-08-07 08:23:49.368436 D | op-osd: validating status of osd.9
2019-08-07 08:23:49.368443 D | op-osd: osd.9 is healthy.
2019-08-07 08:23:49.368450 D | op-osd: validating status of osd.10
2019-08-07 08:23:49.368458 D | op-osd: osd.10 is healthy.
2019-08-07 08:23:49.368464 D | op-osd: validating status of osd.11
2019-08-07 08:23:49.368472 D | op-osd: osd.11 is healthy.
2019-08-07 08:23:49.368479 D | op-osd: validating status of osd.12
2019-08-07 08:23:49.368486 D | op-osd: osd.12 is healthy.
2019-08-07 08:23:49.368493 D | op-osd: validating status of osd.13
2019-08-07 08:23:49.368500 D | op-osd: osd.13 is healthy.
2019-08-07 08:23:49.368507 D | op-osd: validating status of osd.14
2019-08-07 08:23:49.368514 D | op-osd: osd.14 is healthy.
2019-08-07 08:23:49.368521 D | op-osd: validating status of osd.15
2019-08-07 08:23:49.368529 D | op-osd: osd.15 is healthy.
2019-08-07 08:23:49.368535 D | op-osd: validating status of osd.16
2019-08-07 08:23:49.368543 D | op-osd: osd.16 is healthy.
2019-08-07 08:23:49.368549 D | op-osd: validating status of osd.17
2019-08-07 08:23:49.368557 D | op-osd: osd.17 is healthy.
2019-08-07 08:23:49.368563 D | op-osd: validating status of osd.18
2019-08-07 08:23:49.368571 D | op-osd: osd.18 is healthy.
2019-08-07 08:23:49.368577 D | op-osd: validating status of osd.19
2019-08-07 08:23:49.368585 D | op-osd: osd.19 is healthy.
2019-08-07 08:23:49.368592 D | op-osd: validating status of osd.20
2019-08-07 08:23:49.368600 D | op-osd: osd.20 is healthy.
2019-08-07 08:23:49.368607 D | op-osd: validating status of osd.21
2019-08-07 08:23:49.368615 D | op-osd: osd.21 is healthy.
2019-08-07 08:23:49.368622 D | op-osd: validating status of osd.22
2019-08-07 08:23:49.368631 D | op-osd: osd.22 is healthy.
2019-08-07 08:23:49.368637 D | op-osd: validating status of osd.23
2019-08-07 08:23:49.368645 D | op-osd: osd.23 is healthy.
2019-08-07 08:23:49.368652 D | op-osd: validating status of osd.24
2019-08-07 08:23:49.368661 D | op-osd: osd.24 is healthy.
2019-08-07 08:23:49.368667 D | op-osd: validating status of osd.25
2019-08-07 08:23:49.368676 D | op-osd: osd.25 is healthy.
2019-08-07 08:23:49.368682 D | op-osd: validating status of osd.26
2019-08-07 08:23:49.368691 D | op-osd: osd.26 is healthy.
2019-08-07 08:23:49.368697 D | op-osd: validating status of osd.27
2019-08-07 08:23:49.368705 D | op-osd: osd.27 is healthy.
2019-08-07 08:23:49.368712 D | op-osd: validating status of osd.28
2019-08-07 08:23:49.368720 D | op-osd: osd.28 is healthy.
2019-08-07 08:23:49.368727 D | op-osd: validating status of osd.29
2019-08-07 08:23:49.368735 D | op-osd: osd.29 is healthy.
2019-08-07 08:23:49.368741 D | op-osd: validating status of osd.30
2019-08-07 08:23:49.368750 D | op-osd: osd.30 is healthy.
2019-08-07 08:23:49.368757 D | op-osd: validating status of osd.31
2019-08-07 08:23:49.368766 D | op-osd: osd.31 is healthy.
2019-08-07 08:23:49.368772 D | op-osd: validating status of osd.32
2019-08-07 08:23:49.368781 D | op-osd: osd.32 is healthy.
2019-08-07 08:23:49.368788 D | op-osd: validating status of osd.33
2019-08-07 08:23:49.368797 D | op-osd: osd.33 is healthy.
2019-08-07 08:23:49.368803 D | op-osd: validating status of osd.34
2019-08-07 08:23:49.368813 D | op-osd: osd.34 is healthy.
2019-08-07 08:23:49.368819 D | op-osd: validating status of osd.35
2019-08-07 08:23:49.368828 D | op-osd: osd.35 is healthy.
2019-08-07 08:23:49.368835 D | op-osd: validating status of osd.36
2019-08-07 08:23:49.368844 D | op-osd: osd.36 is healthy.
2019-08-07 08:23:49.368850 D | op-osd: validating status of osd.37
2019-08-07 08:23:49.368859 D | op-osd: osd.37 is healthy.
2019-08-07 08:23:49.368866 D | op-osd: validating status of osd.38
2019-08-07 08:23:49.368874 D | op-osd: osd.38 is healthy.
2019-08-07 08:23:49.368881 D | op-osd: validating status of osd.39
2019-08-07 08:23:49.368890 D | op-osd: osd.39 is healthy.
2019-08-07 08:23:49.368897 D | op-osd: validating status of osd.40
2019-08-07 08:23:49.368913 D | op-osd: osd.40 is healthy.
2019-08-07 08:23:49.368919 D | op-osd: validating status of osd.41
2019-08-07 08:23:49.368928 D | op-osd: osd.41 is healthy.
2019-08-07 08:23:49.368936 D | op-osd: validating status of osd.42
2019-08-07 08:23:49.368944 D | op-osd: osd.42 is healthy.
2019-08-07 08:23:49.368950 D | op-osd: validating status of osd.43
2019-08-07 08:23:49.368960 D | op-osd: osd.43 is healthy.
2019-08-07 08:23:49.368967 D | op-osd: validating status of osd.44
2019-08-07 08:23:49.368976 D | op-osd: osd.44 is healthy.
2019-08-07 08:23:49.368983 D | op-osd: validating status of osd.45
2019-08-07 08:23:49.368992 D | op-osd: osd.45 is healthy.
2019-08-07 08:23:49.368999 D | op-osd: validating status of osd.46
2019-08-07 08:23:49.369008 D | op-osd: osd.46 is healthy.
2019-08-07 08:23:49.369015 D | op-osd: validating status of osd.47
2019-08-07 08:23:49.369024 D | op-osd: osd.47 is healthy.
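The per-OSD validation above reduces to checking Up==1 and In==1 for each entry of the osd dump. A shorter manual spot check; ceph osd stat is a standard command, though not one this log shows the operator using:

    # expect: 48 osds: 48 up, 48 in
    ceph osd stat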
2019-08-07 08:23:49.774687 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:49.781748 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:49.863704 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:50.064499 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:50.162207 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:50.262467 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:50.665040 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:51.464041 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:51.465419 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:52.563162 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:52.862153 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:53.110451 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:53.263128 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:53.567168 I | exec: 2019-08-07 08:23:47.073 7f55261f9700 1 librados: starting msgr at
2019-08-07 08:23:47.073 7f55261f9700 1 librados: starting objecter
2019-08-07 08:23:47.161 7f55261f9700 1 librados: setting wanted keys
2019-08-07 08:23:47.161 7f55261f9700 1 librados: calling monclient init
2019-08-07 08:23:47.262 7f55261f9700 1 librados: init done
2019-08-07 08:23:53.271 7f55261f9700 10 librados: watch_flush enter
2019-08-07 08:23:53.271 7f55261f9700 10 librados: watch_flush exit
2019-08-07 08:23:53.362 7f55261f9700 1 librados: shutdown
2019-08-07 08:23:53.568596 D | op-cluster: Cluster status: {Health:{Status:HEALTH_WARN Checks:map[MON_DOWN:{Severity:HEALTH_WARN Summary:{Message:1/6 mons down, quorum a,b,f,g,i}}]} FSID:7dd854f1-2892-4201-ab69-d4797f12ac50 ElectionEpoch:238 Quorum:[0 1 2 3 5] QuorumNames:[a b f g i] MonMap:{Epoch:6 FSID:7dd854f1-2892-4201-ab69-d4797f12ac50 CreatedTime:2019-08-05 15:05:49.660802 ModifiedTime:2019-08-07 08:23:32.428920 Mons:[{Name:a Rank:0 Address:100.70.46.205:6789/0} {Name:b Rank:1 Address:100.67.17.84:6789/0} {Name:f Rank:2 Address:100.69.115.5:6789/0} {Name:g Rank:3 Address:100.66.122.247:6789/0} {Name:h Rank:4 Address:100.64.242.138:6789/0} {Name:i Rank:5 Address:100.70.92.237:6789/0}]} OsdMap:{OsdMap:{Epoch:163 NumOsd:48 NumUpOsd:48 NumInOsd:48 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:512}] Version:0 NumPgs:512 DataBytes:125898804 UsedBytes:52305739776 AvailableBytes:51126524559360 TotalBytes:51178830299136 ReadBps:0 WriteBps:0 ReadOps:0 WriteOps:0 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:118 ActiveGID:534391 ActiveName:a ActiveAddr:100.192.28.144:6801/1 Available:true Standbys:[]}}
2019-08-07 08:23:53.584139 D | op-cluster: update event for cluster rook-ceph-stage-primary
2019-08-07 08:23:53.584385 D | op-cluster: update event for cluster rook-ceph-stage-primary is not supported
2019-08-07 08:23:56.729173 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:57.421315 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:57.613011 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:58.066928 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:58.391730 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:58.843590 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:58.922316 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:59.129501 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:59.135764 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:59.803735 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:59.809742 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:23:59.849962 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:00.075415 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:00.139859 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:00.231628 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:00.692535 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:01.465125 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:01.473219 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:02.505345 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:02.801890 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:03.127499 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:03.271412 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:03.876768 I | exec: 2019-08-07 08:23:51.770 7fc3fb3ab700 1 librados: starting msgr at
2019-08-07 08:23:51.770 7fc3fb3ab700 1 librados: starting objecter
2019-08-07 08:23:51.860 7fc3fb3ab700 1 librados: setting wanted keys
2019-08-07 08:23:51.860 7fc3fb3ab700 1 librados: calling monclient init
2019-08-07 08:23:51.867 7fc3fb3ab700 1 librados: init done
removing mon.a at [v2:100.70.46.205:3300/0,v1:100.70.46.205:6789/0], there will be 5 monitors
2019-08-07 08:24:03.827 7fc3fb3ab700 10 librados: watch_flush enter
2019-08-07 08:24:03.827 7fc3fb3ab700 10 librados: watch_flush exit
2019-08-07 08:24:03.829 7fc3fb3ab700 1 librados: shutdown
2019-08-07 08:24:03.877014 I | op-mon: removed monitor a
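
The "removing mon.a at [...], there will be 5 monitors" line above is the CLI output of the monitor removal; `ceph mon remove <name>` is the underlying Ceph command. A hedged sketch of driving it from Go with os/exec — running it requires a working ceph CLI against this cluster, and the --cluster/--conf/--keyring values are copied from the operator's other invocations visible in this log:

package main

import (
	"fmt"
	"os/exec"
)

// removeMon shells out to `ceph mon remove <name>`, mirroring the flags
// the operator passes to its other ceph invocations in this log.
func removeMon(name string) error {
	cmd := exec.Command("ceph", "mon", "remove", name,
		"--cluster=rook-ceph-stage-primary",
		"--conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config",
		"--keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring")
	out, err := cmd.CombinedOutput()
	// e.g. "removing mon.a at [...], there will be 5 monitors"
	fmt.Printf("%s", out)
	return err
}

func main() {
	if err := removeMon("a"); err != nil {
		fmt.Println("mon removal failed:", err)
	}
}
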
2019-08-07 08:24:03.931843 D | op-mon: updating config map rook-ceph-mon-endpoints that already exists
2019-08-07 08:24:03.943332 I | op-mon: saved mon endpoints to config map map[data:b=100.67.17.84:6789,f=100.69.115.5:6789,g=100.66.122.247:6789,h=100.64.242.138:6789,i=100.70.92.237:6789 maxMonId:8 mapping:{"node":{"b":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"},"c":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"},"d":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"},"e":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"},"f":{"Name":"k8s-worker-102.lxstage.domain.com","Hostname":"k8s-worker-102.lxstage.domain.com","Address":"172.22.254.186"},"g":{"Name":"k8s-worker-103.lxstage.domain.com","Hostname":"k8s-worker-103.lxstage.domain.com","Address":"172.22.254.185"},"h":{"Name":"k8s-worker-104.lxstage.domain.com","Hostname":"k8s-worker-104.lxstage.domain.com","Address":"172.22.254.187"},"i":{"Name":"k8s-worker-01.lxstage.domain.com","Hostname":"k8s-worker-01.lxstage.domain.com","Address":"172.22.254.150"}},"port":{}}]
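
The rook-ceph-mon-endpoints configmap logged above carries three keys: `data` (name=ip:port pairs), `maxMonId`, and `mapping`, a JSON document of mon-to-node assignments. A small Go sketch decoding the `mapping` value (standard library only; the sample is a shortened copy of the payload above):

package main

import (
	"encoding/json"
	"fmt"
)

// NodeInfo mirrors the per-mon entries in the configmap's "mapping" key.
type NodeInfo struct {
	Name     string
	Hostname string
	Address  string
}

type MonMapping struct {
	Node map[string]NodeInfo `json:"node"`
	Port map[string]int      `json:"port"`
}

func main() {
	raw := `{"node":{"b":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"}},"port":{}}`

	var m MonMapping
	if err := json.Unmarshal([]byte(raw), &m); err != nil {
		panic(err)
	}
	for mon, node := range m.Node {
		fmt.Printf("mon %s -> %s (%s)\n", mon, node.Name, node.Address)
	}
}
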
2019-08-07 08:24:03.963795 D | op-config: Generated and stored config file:
[global]
mon_allow_pool_delete = true
mon_max_pg_per_osd = 1000
osd_pg_bits = 11
osd_pgp_bits = 11
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 100
osd_pool_default_pgp_num = 100
rbd_default_features = 3
fatal_signal_handlers = false
osd pool default pg num = 512
osd pool default pgp num = 512
osd pool default size = 3
osd pool default min size = 2
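
Note the generated file carries both spellings of the same options: `osd_pool_default_pg_num = 100` from the operator defaults and `osd pool default pg num = 512` from the cluster's config override. Ceph treats spaces and underscores in option names as interchangeable, so both lines set one option, and under the usual last-one-wins parsing the later values (512/512/3/2) are the ones in effect. A sketch of that normalization, assuming last-one-wins semantics:

package main

import (
	"fmt"
	"strings"
)

// normalize maps an option name to its canonical form: Ceph treats
// ' ' and '_' in option names as the same character.
func normalize(key string) string {
	return strings.ReplaceAll(strings.TrimSpace(key), " ", "_")
}

func main() {
	lines := []string{
		"osd_pool_default_pg_num = 100",
		"osd pool default pg num = 512", // same option; later value wins
	}
	opts := map[string]string{}
	for _, l := range lines {
		parts := strings.SplitN(l, "=", 2)
		opts[normalize(parts[0])] = strings.TrimSpace(parts[1])
	}
	fmt.Println(opts["osd_pool_default_pg_num"]) // 512
}
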
2019-08-07 08:24:03.969350 D | op-config: updating config secret &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-config,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[{ceph.rook.io/v1 CephCluster rook-ceph-stage-primary 76235f05-b792-11e9-9b32-0050568460f6 <nil> 0xc000c66c6c}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string][]byte{},Type:kubernetes.io/rook,StringData:map[string]string{mon_host: [v2:100.69.115.5:3300,v1:100.69.115.5:6789],[v2:100.66.122.247:3300,v1:100.66.122.247:6789],[v2:100.64.242.138:3300,v1:100.64.242.138:6789],[v2:100.70.92.237:3300,v1:100.70.92.237:6789],[v2:100.67.17.84:3300,v1:100.67.17.84:6789],mon_initial_members: f,g,h,i,b,},}
2019-08-07 08:24:03.975090 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:24:03.975271 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:24:03.975451 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:24:03.975801 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:24:03.976021 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:24:03.976177 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:24:03.976193 D | op-mon: Released lock for mon orchestration
2019-08-07 08:24:06.745930 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:07.440404 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:07.639279 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:08.086240 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:08.415450 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:08.863994 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:08.944447 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:09.150179 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:09.156231 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:09.862531 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:09.864098 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:09.872081 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:10.106204 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:10.160846 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:10.245347 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:10.724533 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:11.482165 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:11.494198 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:12.527803 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:12.862600 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:13.145191 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:13.297680 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:16.761570 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:17.459200 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:17.658701 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:18.104800 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:18.451069 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:18.884155 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:18.957243 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:19.169940 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:19.176410 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:19.862257 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:19.869825 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:19.894801 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:20.136711 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:20.178112 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:20.263660 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:20.748621 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:21.497445 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:21.515655 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:22.544377 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:22.836020 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:23.158836 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:23.320836 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:26.862109 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:27.476822 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:27.678866 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:28.124085 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:28.471279 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:28.901296 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:28.977196 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:29.188929 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:29.200396 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:29.889679 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:29.896274 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:29.916739 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:30.158765 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:30.201303 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:30.280562 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:30.771857 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:31.520101 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:31.532683 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:32.563567 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:32.862237 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:33.174593 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:33.340621 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:36.862579 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:37.495700 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:37.699750 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:38.137537 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:38.500405 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:38.919085 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:39.001169 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:39.208566 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:39.216852 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:39.913755 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:39.921615 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:39.941061 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:40.178027 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:40.218592 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:40.300162 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:40.790902 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:41.537925 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:41.551524 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:42.579414 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:42.876760 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:43.190762 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:43.367446 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:46.818050 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:47.514698 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:47.718410 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:48.167311 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:48.527809 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:48.939836 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:48.976327 D | op-mon: checking health of mons
2019-08-07 08:24:48.976362 D | op-mon: Acquiring lock for mon orchestration
2019-08-07 08:24:48.976373 D | op-mon: Acquired lock for mon orchestration
2019-08-07 08:24:48.976383 D | op-mon: Checking health for mons in cluster. rook-ceph-stage-primary
2019-08-07 08:24:48.996635 D | op-mon: targeting the mon count 5
2019-08-07 08:24:48.996872 D | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/976512393
2019-08-07 08:24:49.062790 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:49.262866 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:49.263223 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:49.369260 D | op-osd: Checking osd processes status.
2019-08-07 08:24:49.369309 D | op-osd: OSDs with previously detected Down status: map[]
2019-08-07 08:24:49.369497 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/600692564
2019-08-07 08:24:49.964243 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:49.965438 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:49.966743 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:50.263898 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:50.265227 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:50.362727 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:50.862848 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:51.564214 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:51.662466 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:52.663246 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:52.963729 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:53.263328 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:53.463859 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:53.662223 D | op-cluster: checking health of cluster
2019-08-07 08:24:53.662473 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/626022051
2019-08-07 08:24:54.866997 I | exec: 2019-08-07 08:24:50.771 7f449d8df700 1 librados: starting msgr at
2019-08-07 08:24:50.771 7f449d8df700 1 librados: starting objecter
2019-08-07 08:24:50.772 7f449d8df700 1 librados: setting wanted keys
2019-08-07 08:24:50.772 7f449d8df700 1 librados: calling monclient init
2019-08-07 08:24:50.865 7f449d8df700 1 librados: init done
2019-08-07 08:24:54.461 7f449d8df700 10 librados: watch_flush enter
2019-08-07 08:24:54.461 7f449d8df700 10 librados: watch_flush exit
2019-08-07 08:24:54.561 7f449d8df700 1 librados: shutdown
2019-08-07 08:24:54.867372 D | cephclient: MON STATUS: {Quorum:[0 1 2] MonMap:{Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0}]}}
2019-08-07 08:24:54.867409 D | op-mon: Mon status: {Quorum:[0 1 2] MonMap:{Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0}]}}
2019-08-07 08:24:54.867427 D | op-mon: mon b found in quorum
2019-08-07 08:24:54.867436 D | op-mon: mon f found in quorum
2019-08-07 08:24:54.867444 D | op-mon: mon g found in quorum
2019-08-07 08:24:54.867471 D | op-mon: mon h NOT found in quorum. Mon status: {Quorum:[0 1 2] MonMap:{Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0}]}}
2019-08-07 08:24:54.867483 W | op-mon: mon h not found in quorum, waiting for timeout before failover
2019-08-07 08:24:54.867509 D | op-mon: mon i NOT found in quorum. Mon status: {Quorum:[0 1 2] MonMap:{Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0}]}}
2019-08-07 08:24:54.867520 W | op-mon: mon i not found in quorum, waiting for timeout before failover
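
The two warnings above come from comparing the monmap against the quorum ranks: h (rank 3) and i (rank 4) appear in the monmap but their ranks are absent from Quorum:[0 1 2], so the operator starts the failover countdown (--mon-out-timeout, 10m0s per the startup flags). A reduced Go sketch of that membership test:

package main

import "fmt"

type Mon struct {
	Name string
	Rank int
}

// outOfQuorum returns the monmap members whose rank is missing from the
// quorum list, i.e. the mons that become failover candidates on timeout.
func outOfQuorum(mons []Mon, quorum []int) []string {
	in := map[int]bool{}
	for _, r := range quorum {
		in[r] = true
	}
	var missing []string
	for _, m := range mons {
		if !in[m.Rank] {
			missing = append(missing, m.Name)
		}
	}
	return missing
}

func main() {
	mons := []Mon{{"b", 0}, {"f", 1}, {"g", 2}, {"h", 3}, {"i", 4}}
	fmt.Println(outOfQuorum(mons, []int{0, 1, 2})) // [h i]
}
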
2019-08-07 08:24:55.064734 D | op-mon: there are 22 nodes available for 5 mons
2019-08-07 08:24:55.164356 D | op-mon: mon pod on node k8s-worker-101.lxstage.domain.com
2019-08-07 08:24:55.164394 D | op-mon: mon pod on node k8s-worker-102.lxstage.domain.com
2019-08-07 08:24:55.164407 D | op-mon: mon pod on node k8s-worker-103.lxstage.domain.com
2019-08-07 08:24:55.164418 D | op-mon: mon pod on node k8s-worker-01.lxstage.domain.com
2019-08-07 08:24:55.164469 I | op-mon: rebalance: enough nodes available 16 to failover mon d
2019-08-07 08:24:55.164478 I | op-mon: Failing over monitor d
2019-08-07 08:24:55.164509 I | op-mon: starting new mon: &{ResourceName:rook-ceph-mon-j DaemonName:j PublicIP: Port:6789 DataPathMap:0xc0013a60a0}
2019-08-07 08:24:55.164589 D | op-k8sutil: creating service rook-ceph-mon-j
2019-08-07 08:24:55.190015 I | op-mon: mon j endpoint are [v2:100.79.195.199:3300,v1:100.79.195.199:6789]
2019-08-07 08:24:55.663849 D | op-mon: there are 22 nodes available for 5 mons
2019-08-07 08:24:55.764391 D | op-mon: mon pod on node k8s-worker-101.lxstage.domain.com
2019-08-07 08:24:55.764423 D | op-mon: mon pod on node k8s-worker-102.lxstage.domain.com
2019-08-07 08:24:55.764436 D | op-mon: mon pod on node k8s-worker-103.lxstage.domain.com
2019-08-07 08:24:55.764448 D | op-mon: mon pod on node k8s-worker-01.lxstage.domain.com
2019-08-07 08:24:55.764505 I | op-mon: Found 16 running nodes without mons
2019-08-07 08:24:55.764517 D | op-mon: mon j assigned to node k8s-worker-00.lxstage.domain.com
2019-08-07 08:24:55.764527 D | op-mon: using IP 172.22.254.105 for node k8s-worker-00.lxstage.domain.com
2019-08-07 08:24:55.764534 D | op-mon: mons have been assigned to nodes
2019-08-07 08:24:55.764559 D | op-mon: monConfig: &{rook-ceph-mon-j j 100.79.195.199 6789 0xc0013a60a0}
2019-08-07 08:24:55.764723 D | op-mon: Starting mon: rook-ceph-mon-j
2019-08-07 08:24:55.773072 I | op-mon: mons created: 1
2019-08-07 08:24:55.773105 I | op-mon: waiting for mon quorum with [j]
2019-08-07 08:24:55.782626 I | op-mon: mon j is not yet running
2019-08-07 08:24:55.782655 I | op-mon: mons running: []
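
After scheduling mon j, the operator polls until the pod is running and the mon can join quorum ("mons running: []" here, "mons running: [j]" a few seconds further down). A schematic of such a poll loop — not Rook's actual implementation; the listRunningMons helper is hypothetical and simply simulates the mon appearing on the third poll:

package main

import (
	"fmt"
	"time"
)

var polls int

// listRunningMons is a hypothetical stand-in for the operator's pod
// listing; it returns mon j once enough polls have elapsed.
func listRunningMons() []string {
	polls++
	if polls >= 3 {
		return []string{"j"}
	}
	return nil
}

// waitForMon polls until the named mon is running or the timeout expires.
func waitForMon(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		for _, m := range listRunningMons() {
			if m == name {
				fmt.Println("mons running:", []string{m})
				return nil
			}
		}
		fmt.Println("mon", name, "is not yet running")
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("timed out waiting for mon %s", name)
}

func main() {
	if err := waitForMon("j", time.Minute); err != nil {
		fmt.Println(err)
	}
}
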
2019-08-07 08:24:56.664583 I | exec: 2019-08-07 08:24:51.865 7efd956cb700 1 librados: starting msgr at
2019-08-07 08:24:51.865 7efd956cb700 1 librados: starting objecter
2019-08-07 08:24:51.867 7efd956cb700 1 librados: setting wanted keys
2019-08-07 08:24:51.867 7efd956cb700 1 librados: calling monclient init
2019-08-07 08:24:51.963 7efd956cb700 1 librados: init done
2019-08-07 08:24:56.373 7efd956cb700 10 librados: watch_flush enter
2019-08-07 08:24:56.373 7efd956cb700 10 librados: watch_flush exit
2019-08-07 08:24:56.462 7efd956cb700 1 librados: shutdown
2019-08-07 08:24:56.667316 D | op-osd: osd dump &{[{0 1 1} {1 1 1} {2 1 1} {3 1 1} {4 1 1} {5 1 1} {6 1 1} {7 1 1} {8 1 1} {9 1 1} {10 1 1} {11 1 1} {12 1 1} {13 1 1} {14 1 1} {15 1 1} {16 1 1} {17 1 1} {18 1 1} {19 1 1} {20 1 1} {21 1 1} {22 1 1} {23 1 1} {24 1 1} {25 1 1} {26 1 1} {27 1 1} {28 1 1} {29 1 1} {30 1 1} {31 1 1} {32 1 1} {33 1 1} {34 1 1} {35 1 1} {36 1 1} {37 1 1} {38 1 1} {39 1 1} {40 1 1} {41 1 1} {42 1 1} {43 1 1} {44 1 1} {45 1 1} {46 1 1} {47 1 1}]}
2019-08-07 08:24:56.667342 D | op-osd: validating status of osd.0
2019-08-07 08:24:56.667351 D | op-osd: osd.0 is healthy.
2019-08-07 08:24:56.667359 D | op-osd: validating status of osd.1
2019-08-07 08:24:56.667366 D | op-osd: osd.1 is healthy.
2019-08-07 08:24:56.667373 D | op-osd: validating status of osd.2
2019-08-07 08:24:56.667381 D | op-osd: osd.2 is healthy.
2019-08-07 08:24:56.667388 D | op-osd: validating status of osd.3
2019-08-07 08:24:56.667395 D | op-osd: osd.3 is healthy.
2019-08-07 08:24:56.667401 D | op-osd: validating status of osd.4
2019-08-07 08:24:56.667409 D | op-osd: osd.4 is healthy.
2019-08-07 08:24:56.667415 D | op-osd: validating status of osd.5
2019-08-07 08:24:56.667422 D | op-osd: osd.5 is healthy.
2019-08-07 08:24:56.667429 D | op-osd: validating status of osd.6
2019-08-07 08:24:56.667436 D | op-osd: osd.6 is healthy.
2019-08-07 08:24:56.667443 D | op-osd: validating status of osd.7
2019-08-07 08:24:56.667450 D | op-osd: osd.7 is healthy.
2019-08-07 08:24:56.667456 D | op-osd: validating status of osd.8
2019-08-07 08:24:56.667465 D | op-osd: osd.8 is healthy.
2019-08-07 08:24:56.667473 D | op-osd: validating status of osd.9
2019-08-07 08:24:56.667480 D | op-osd: osd.9 is healthy.
2019-08-07 08:24:56.667487 D | op-osd: validating status of osd.10
2019-08-07 08:24:56.667497 D | op-osd: osd.10 is healthy.
2019-08-07 08:24:56.667504 D | op-osd: validating status of osd.11
2019-08-07 08:24:56.667512 D | op-osd: osd.11 is healthy.
2019-08-07 08:24:56.667520 D | op-osd: validating status of osd.12
2019-08-07 08:24:56.667528 D | op-osd: osd.12 is healthy.
2019-08-07 08:24:56.667535 D | op-osd: validating status of osd.13
2019-08-07 08:24:56.667543 D | op-osd: osd.13 is healthy.
2019-08-07 08:24:56.667550 D | op-osd: validating status of osd.14
2019-08-07 08:24:56.667558 D | op-osd: osd.14 is healthy.
2019-08-07 08:24:56.667565 D | op-osd: validating status of osd.15
2019-08-07 08:24:56.667573 D | op-osd: osd.15 is healthy.
2019-08-07 08:24:56.667579 D | op-osd: validating status of osd.16
2019-08-07 08:24:56.667587 D | op-osd: osd.16 is healthy.
2019-08-07 08:24:56.667594 D | op-osd: validating status of osd.17
2019-08-07 08:24:56.667601 D | op-osd: osd.17 is healthy.
2019-08-07 08:24:56.667608 D | op-osd: validating status of osd.18
2019-08-07 08:24:56.667615 D | op-osd: osd.18 is healthy.
2019-08-07 08:24:56.667622 D | op-osd: validating status of osd.19
2019-08-07 08:24:56.667631 D | op-osd: osd.19 is healthy.
2019-08-07 08:24:56.667638 D | op-osd: validating status of osd.20
2019-08-07 08:24:56.667646 D | op-osd: osd.20 is healthy.
2019-08-07 08:24:56.667653 D | op-osd: validating status of osd.21
2019-08-07 08:24:56.667661 D | op-osd: osd.21 is healthy.
2019-08-07 08:24:56.667668 D | op-osd: validating status of osd.22
2019-08-07 08:24:56.667676 D | op-osd: osd.22 is healthy.
2019-08-07 08:24:56.667682 D | op-osd: validating status of osd.23
2019-08-07 08:24:56.667690 D | op-osd: osd.23 is healthy.
2019-08-07 08:24:56.667697 D | op-osd: validating status of osd.24
2019-08-07 08:24:56.667705 D | op-osd: osd.24 is healthy.
2019-08-07 08:24:56.667712 D | op-osd: validating status of osd.25
2019-08-07 08:24:56.667720 D | op-osd: osd.25 is healthy.
2019-08-07 08:24:56.667727 D | op-osd: validating status of osd.26
2019-08-07 08:24:56.667735 D | op-osd: osd.26 is healthy.
2019-08-07 08:24:56.667741 D | op-osd: validating status of osd.27
2019-08-07 08:24:56.667749 D | op-osd: osd.27 is healthy.
2019-08-07 08:24:56.667756 D | op-osd: validating status of osd.28
2019-08-07 08:24:56.667764 D | op-osd: osd.28 is healthy.
2019-08-07 08:24:56.667771 D | op-osd: validating status of osd.29
2019-08-07 08:24:56.667779 D | op-osd: osd.29 is healthy.
2019-08-07 08:24:56.667786 D | op-osd: validating status of osd.30
2019-08-07 08:24:56.667796 D | op-osd: osd.30 is healthy.
2019-08-07 08:24:56.667803 D | op-osd: validating status of osd.31
2019-08-07 08:24:56.667811 D | op-osd: osd.31 is healthy.
2019-08-07 08:24:56.667818 D | op-osd: validating status of osd.32
2019-08-07 08:24:56.667826 D | op-osd: osd.32 is healthy.
2019-08-07 08:24:56.667833 D | op-osd: validating status of osd.33
2019-08-07 08:24:56.667842 D | op-osd: osd.33 is healthy.
2019-08-07 08:24:56.667848 D | op-osd: validating status of osd.34
2019-08-07 08:24:56.667857 D | op-osd: osd.34 is healthy.
2019-08-07 08:24:56.667863 D | op-osd: validating status of osd.35
2019-08-07 08:24:56.667872 D | op-osd: osd.35 is healthy.
2019-08-07 08:24:56.667878 D | op-osd: validating status of osd.36
2019-08-07 08:24:56.667887 D | op-osd: osd.36 is healthy.
2019-08-07 08:24:56.667893 D | op-osd: validating status of osd.37
2019-08-07 08:24:56.667902 D | op-osd: osd.37 is healthy.
2019-08-07 08:24:56.667968 D | op-osd: validating status of osd.38
2019-08-07 08:24:56.667977 D | op-osd: osd.38 is healthy.
2019-08-07 08:24:56.667984 D | op-osd: validating status of osd.39
2019-08-07 08:24:56.667993 D | op-osd: osd.39 is healthy.
2019-08-07 08:24:56.668000 D | op-osd: validating status of osd.40
2019-08-07 08:24:56.668009 D | op-osd: osd.40 is healthy.
2019-08-07 08:24:56.668016 D | op-osd: validating status of osd.41
2019-08-07 08:24:56.668026 D | op-osd: osd.41 is healthy.
2019-08-07 08:24:56.668035 D | op-osd: validating status of osd.42
2019-08-07 08:24:56.668045 D | op-osd: osd.42 is healthy.
2019-08-07 08:24:56.668052 D | op-osd: validating status of osd.43
2019-08-07 08:24:56.668062 D | op-osd: osd.43 is healthy.
2019-08-07 08:24:56.668069 D | op-osd: validating status of osd.44
2019-08-07 08:24:56.668079 D | op-osd: osd.44 is healthy.
2019-08-07 08:24:56.668086 D | op-osd: validating status of osd.45
2019-08-07 08:24:56.668095 D | op-osd: osd.45 is healthy.
2019-08-07 08:24:56.668103 D | op-osd: validating status of osd.46
2019-08-07 08:24:56.668112 D | op-osd: osd.46 is healthy.
2019-08-07 08:24:56.668120 D | op-osd: validating status of osd.47
2019-08-07 08:24:56.668130 D | op-osd: osd.47 is healthy.
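
The block above is the per-OSD health pass over `ceph osd dump --format json`: each tuple in the dump is {id up in}, and an OSD is only flagged when its up bit drops to 0 (hence the earlier "OSDs with previously detected Down status: map[]"). A compact Go sketch of the same pass; the struct tags follow the field names in Ceph's osd dump JSON (`osds`, `osd`, `up`, `in`):

package main

import (
	"encoding/json"
	"fmt"
)

// OsdDump mirrors the subset of `ceph osd dump --format json` used
// here: per-OSD id plus the up/in flags (0 or 1).
type OsdDump struct {
	Osds []struct {
		ID int `json:"osd"`
		Up int `json:"up"`
		In int `json:"in"`
	} `json:"osds"`
}

func main() {
	// Two-entry sample; the cluster above reports 48 OSDs, all up/in.
	raw := `{"osds":[{"osd":0,"up":1,"in":1},{"osd":1,"up":0,"in":1}]}`
	var dump OsdDump
	if err := json.Unmarshal([]byte(raw), &dump); err != nil {
		panic(err)
	}
	for _, o := range dump.Osds {
		if o.Up == 1 {
			fmt.Printf("osd.%d is healthy.\n", o.ID)
		} else {
			fmt.Printf("osd.%d is down (in=%d).\n", o.ID, o.In)
		}
	}
}
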
2019-08-07 08:24:56.837721 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:57.563972 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:57.763833 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:58.185578 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:58.562451 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:58.574067 I | exec: 2019-08-07 08:24:56.868 7f9e3ed57700 1 librados: starting msgr at
2019-08-07 08:24:56.868 7f9e3ed57700 1 librados: starting objecter
2019-08-07 08:24:56.868 7f9e3ed57700 1 librados: setting wanted keys
2019-08-07 08:24:56.868 7f9e3ed57700 1 librados: calling monclient init
2019-08-07 08:24:56.877 7f9e3ed57700 1 librados: init done
2019-08-07 08:24:58.501 7f9e3ed57700 10 librados: watch_flush enter
2019-08-07 08:24:58.501 7f9e3ed57700 10 librados: watch_flush exit
2019-08-07 08:24:58.503 7f9e3ed57700 1 librados: shutdown
2019-08-07 08:24:58.575405 D | op-cluster: Cluster status: {Health:{Status:HEALTH_WARN Checks:map[SLOW_OPS:{Severity:HEALTH_WARN Summary:{Message:1 slow ops, oldest one blocked for 41 sec, mon.i has slow ops}} MON_DOWN:{Severity:HEALTH_WARN Summary:{Message:2/5 mons down, quorum b,f,g}}]} FSID:7dd854f1-2892-4201-ab69-d4797f12ac50 ElectionEpoch:244 Quorum:[0 1 2] QuorumNames:[b f g] MonMap:{Epoch:7 FSID:7dd854f1-2892-4201-ab69-d4797f12ac50 CreatedTime:2019-08-05 15:05:49.660802 ModifiedTime:2019-08-07 08:24:03.789490 Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0}]} OsdMap:{OsdMap:{Epoch:163 NumOsd:48 NumUpOsd:48 NumInOsd:48 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:512}] Version:0 NumPgs:512 DataBytes:125898804 UsedBytes:52305739776 AvailableBytes:51126524559360 TotalBytes:51178830299136 ReadBps:0 WriteBps:0 ReadOps:0 WriteOps:0 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:118 ActiveGID:534391 ActiveName:a ActiveAddr:100.192.28.144:6801/1 Available:true Standbys:[]}}
2019-08-07 08:24:58.589582 D | op-cluster: update event for cluster rook-ceph-stage-primary
2019-08-07 08:24:58.589797 D | op-cluster: update event for cluster rook-ceph-stage-primary is not supported
2019-08-07 08:24:58.960712 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:59.037399 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:59.245762 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:59.260786 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:59.966655 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:24:59.972772 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:00.063925 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:00.216819 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:00.260831 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:00.337966 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:00.863946 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:00.875347 I | op-mon: mons running: [j]
2019-08-07 08:25:00.875519 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/122391974
2019-08-07 08:25:01.581582 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:01.589133 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:02.617921 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:02.912134 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:03.247661 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:03.426411 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
W0807 08:25:04.568057 9 reflector.go:289] github.com/rook/rook/pkg/operator/ceph/cluster/controller.go:165: watch of *v1.ConfigMap ended with: too old resource version: 309424868 (309435089)
2019-08-07 08:25:05.574994 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-34.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-34.lxstage.domain.com,UID:55496c48-b855-11e9-a92c-0050568460f6,ResourceVersion:308328514,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-34.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"9e5d1b78-3970-4485-98de-c1eb5a4c812e","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-927885ea3b564514904ff1dfe4422d9a-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/01c8f018-9747-4b04-9e26-7bea7b3b5d59 /dev/mapper/usr","size":1065345024,"uuid":"2404bbfd-94ed-471f-a7cd-c03d22e22793","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.575087 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-34.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-34.lxstage.domain.com,UID:55496c48-b855-11e9-a92c-0050568460f6,ResourceVersion:308328514,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-34.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"9e5d1b78-3970-4485-98de-c1eb5a4c812e","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-927885ea3b564514904ff1dfe4422d9a-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/01c8f018-9747-4b04-9e26-7bea7b3b5d59 /dev/mapper/usr","size":1065345024,"uuid":"2404bbfd-94ed-471f-a7cd-c03d22e22793","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.575639 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.575716 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-22.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-22.lxstage.domain.com,UID:553e54da-b855-11e9-a92c-0050568460f6,ResourceVersion:308328503,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-22.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"f482272e-8513-4f93-b420-2eaed81862e8","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-927885ea3b564514904ff1dfe4422d9a-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/01c8f018-9747-4b04-9e26-7bea7b3b5d59 /dev/mapper/usr","size":1065345024,"uuid":"25877d99-6227-4a99-bb48-1693afcec88e","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.575774 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-22.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-22.lxstage.domain.com,UID:553e54da-b855-11e9-a92c-0050568460f6,ResourceVersion:308328503,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-22.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"f482272e-8513-4f93-b420-2eaed81862e8","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-927885ea3b564514904ff1dfe4422d9a-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/01c8f018-9747-4b04-9e26-7bea7b3b5d59 /dev/mapper/usr","size":1065345024,"uuid":"25877d99-6227-4a99-bb48-1693afcec88e","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.576056 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.576124 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-104.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-104.lxstage.domain.com,UID:553aead7-b855-11e9-a92c-0050568460f6,ResourceVersion:308328500,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-104.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"nvme0n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100208 /dev/disk/by-id/nvme-eui.33395a304d1002080025385800000002 /dev/disk/by-path/pci-0000:88:00.0-nvme-1","size":6401252745216,"uuid":"bd197001-56b1-4549-a6a0-0ceb378a3e51","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100208","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002080025385800000002","wwnVendorExtension":"","empty":true},{"name":"nvme1n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100191 /dev/disk/by-id/nvme-eui.33395a304d1001910025385800000002 /dev/disk/by-path/pci-0000:89:00.0-nvme-1","size":6401252745216,"uuid":"86f1995f-fc58-47ea-ab97-86739ef403ee","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100191","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1001910025385800000002","wwnVendorExtension":"","empty":true},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/ata-DELLBOSS_VD_38815da2a1660010 /dev/disk/by-path/pci-0000:5e:00.0-ata-1","size":239990276096,"uuid":"5303323f-20cf-4c34-ac7e-36f1a912fdbc","serial":"DELLBOSS_VD_38815da2a1660010","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":237503036928,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"","model":"DELLBOSS_VD","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"665ba344-37ac-4ba9-913f-96c77f71e68d","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.576191 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-104.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-104.lxstage.domain.com,UID:553aead7-b855-11e9-a92c-0050568460f6,ResourceVersion:308328500,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-104.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"nvme0n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100208 /dev/disk/by-id/nvme-eui.33395a304d1002080025385800000002 /dev/disk/by-path/pci-0000:88:00.0-nvme-1","size":6401252745216,"uuid":"bd197001-56b1-4549-a6a0-0ceb378a3e51","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100208","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002080025385800000002","wwnVendorExtension":"","empty":true},{"name":"nvme1n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100191 /dev/disk/by-id/nvme-eui.33395a304d1001910025385800000002 /dev/disk/by-path/pci-0000:89:00.0-nvme-1","size":6401252745216,"uuid":"86f1995f-fc58-47ea-ab97-86739ef403ee","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100191","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1001910025385800000002","wwnVendorExtension":"","empty":true},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/ata-DELLBOSS_VD_38815da2a1660010 /dev/disk/by-path/pci-0000:5e:00.0-ata-1","size":239990276096,"uuid":"5303323f-20cf-4c34-ac7e-36f1a912fdbc","serial":"DELLBOSS_VD_38815da2a1660010","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":237503036928,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"","model":"DELLBOSS_VD","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"665ba344-37ac-4ba9-913f-96c77f71e68d","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.576615 I | op-cluster: device lists are equal. skipping orchestration
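
[annotation] The same pattern repeats for every node below: rook-discover rewrites its per-node local-device ConfigMap, the operator's onDeviceCMUpdate handler fires with the old and new objects, and because the serialized "devices" payload is unchanged, orchestration is skipped. A minimal Go sketch of that equality check, assuming (as the dumps suggest) it reduces to comparing the "devices" value of the old and new ConfigMap Data; the configMap type here is a stdlib-only stand-in for corev1.ConfigMap, not the canonical Kubernetes type, and deviceListsEqual is a hypothetical helper, not Rook's actual function:

package main

import "fmt"

// configMap is a pared-down, stdlib-only stand-in for corev1.ConfigMap,
// carrying only the Data field that matters for this check.
type configMap struct {
	Data map[string]string
}

// devicesKey matches the "devices:" key visible in the ConfigMap dumps above.
const devicesKey = "devices"

// deviceListsEqual mirrors the check implied by the log line
// "device lists are equal. skipping orchestration": if the serialized
// device payloads of the old and new ConfigMaps match, there is nothing
// new for the operator to orchestrate.
func deviceListsEqual(oldCM, newCM configMap) bool {
	return oldCM.Data[devicesKey] == newCM.Data[devicesKey]
}

func main() {
	oldCM := configMap{Data: map[string]string{devicesKey: `[{"name":"sda","empty":false}]`}}
	newCM := configMap{Data: map[string]string{devicesKey: `[{"name":"sda","empty":false}]`}}
	if deviceListsEqual(oldCM, newCM) {
		fmt.Println("device lists are equal. skipping orchestration")
	}
}

[annotation] Under that assumption, a device-hotplug update only triggers a fresh OSD orchestration when the discovered list actually differs, which is why none of the updates in this section cause any action.
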
2019-08-07 08:25:05.576671 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-29.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-29.lxstage.domain.com,UID:55436ddc-b855-11e9-a92c-0050568460f6,ResourceVersion:308328508,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-29.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"d1d564cd-ca6e-460b-aa63-ad64a299fae9","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"259b305e-5785-4169-b22b-f881abb5cd73","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.576719 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-29.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-29.lxstage.domain.com,UID:55436ddc-b855-11e9-a92c-0050568460f6,ResourceVersion:308328508,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-29.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"d1d564cd-ca6e-460b-aa63-ad64a299fae9","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"259b305e-5785-4169-b22b-f881abb5cd73","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.576971 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.577033 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-31.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-31.lxstage.domain.com,UID:55455ada-b855-11e9-a92c-0050568460f6,ResourceVersion:308328510,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-31.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"59276db9-9556-4b0f-842b-4fd292041d65","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"c01bd8e2-436d-4611-bf44-73ad933e7a05","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.577084 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-31.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-31.lxstage.domain.com,UID:55455ada-b855-11e9-a92c-0050568460f6,ResourceVersion:308328510,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-31.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"59276db9-9556-4b0f-842b-4fd292041d65","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"c01bd8e2-436d-4611-bf44-73ad933e7a05","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.577318 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.577380 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-03.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-03.lxstage.domain.com,UID:5534ba75-b855-11e9-a92c-0050568460f6,ResourceVersion:308328495,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-03.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sdb","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/usb-HP_iLO_Internal_SD-CARD_000002660A01-0:0 /dev/disk/by-path/pci-0000:00:1d.0-usb-0:1.3.1:1.0-scsi-0:0:0:0","size":7971274752,"uuid":"8ed6101c-a731-4730-bf60-910545277464","serial":"HP_iLO_Internal_SD-CARD_000002660A01-0:0","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sdb9","Size":2684354560,"Label":"","Filesystem":""},{"Name":"sdb7","Size":115326976,"Label":"","Filesystem":""},{"Name":"sdb5","Size":262127616,"Label":"","Filesystem":""},{"Name":"sdb1","Size":4161536,"Label":"","Filesystem":""},{"Name":"sdb8","Size":299876352,"Label":"","Filesystem":""},{"Name":"sdb6","Size":262127616,"Label":"","Filesystem":""}],"filesystem":"","vendor":"HP_iLO","model":"Internal_SD-CARD","wwn":"","wwnVendorExtension":"","empty":false},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/scsi-3600508b1001c3efce7fde6de0a5e8403 /dev/disk/by-id/wwn-0x600508b1001c3efce7fde6de0a5e8403 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:1:0:0","size":299966445568,"uuid":"2c392bda-997d-463d-9a22-390b60db2af7","serial":"3600508b1001c3efce7fde6de0a5e8403","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":297479206400,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"HP","model":"LOGICAL_VOLUME","wwn":"0x600508b1001c3efc","wwnVendorExtension":"0x600508b1001c3efce7fde6de0a5e8403","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"f8845dae-06e2-419c-8832-ae47f21eca5a","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.577433 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-03.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-03.lxstage.domain.com,UID:5534ba75-b855-11e9-a92c-0050568460f6,ResourceVersion:308328495,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-03.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sdb","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/usb-HP_iLO_Internal_SD-CARD_000002660A01-0:0 /dev/disk/by-path/pci-0000:00:1d.0-usb-0:1.3.1:1.0-scsi-0:0:0:0","size":7971274752,"uuid":"8ed6101c-a731-4730-bf60-910545277464","serial":"HP_iLO_Internal_SD-CARD_000002660A01-0:0","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sdb9","Size":2684354560,"Label":"","Filesystem":""},{"Name":"sdb7","Size":115326976,"Label":"","Filesystem":""},{"Name":"sdb5","Size":262127616,"Label":"","Filesystem":""},{"Name":"sdb1","Size":4161536,"Label":"","Filesystem":""},{"Name":"sdb8","Size":299876352,"Label":"","Filesystem":""},{"Name":"sdb6","Size":262127616,"Label":"","Filesystem":""}],"filesystem":"","vendor":"HP_iLO","model":"Internal_SD-CARD","wwn":"","wwnVendorExtension":"","empty":false},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/scsi-3600508b1001c3efce7fde6de0a5e8403 /dev/disk/by-id/wwn-0x600508b1001c3efce7fde6de0a5e8403 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:1:0:0","size":299966445568,"uuid":"2c392bda-997d-463d-9a22-390b60db2af7","serial":"3600508b1001c3efce7fde6de0a5e8403","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":297479206400,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"HP","model":"LOGICAL_VOLUME","wwn":"0x600508b1001c3efc","wwnVendorExtension":"0x600508b1001c3efce7fde6de0a5e8403","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"f8845dae-06e2-419c-8832-ae47f21eca5a","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.577823 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.577880 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-04.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-04.lxstage.domain.com,UID:5535f98f-b855-11e9-a92c-0050568460f6,ResourceVersion:308328496,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-04.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sdb","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/usb-HP_iLO_Internal_SD-CARD_000002660A01-0:0 /dev/disk/by-path/pci-0000:00:1d.0-usb-0:1.3.1:1.0-scsi-0:0:0:0","size":7971274752,"uuid":"dc0f4dd1-62ad-4b32-aaf2-bef01773886f","serial":"HP_iLO_Internal_SD-CARD_000002660A01-0:0","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sdb9","Size":2684354560,"Label":"","Filesystem":""},{"Name":"sdb7","Size":115326976,"Label":"","Filesystem":""},{"Name":"sdb5","Size":262127616,"Label":"","Filesystem":""},{"Name":"sdb1","Size":4161536,"Label":"","Filesystem":""},{"Name":"sdb8","Size":299876352,"Label":"","Filesystem":""},{"Name":"sdb6","Size":262127616,"Label":"","Filesystem":""}],"filesystem":"","vendor":"HP_iLO","model":"Internal_SD-CARD","wwn":"","wwnVendorExtension":"","empty":false},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/scsi-3600508b1001c18081424e9e4e94c4477 /dev/disk/by-id/wwn-0x600508b1001c18081424e9e4e94c4477 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:1:0:0","size":299966445568,"uuid":"b83aceff-263d-4d60-b608-5130c02cf07b","serial":"3600508b1001c18081424e9e4e94c4477","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":297479206400,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"HP","model":"LOGICAL_VOLUME","wwn":"0x600508b1001c1808","wwnVendorExtension":"0x600508b1001c18081424e9e4e94c4477","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-959135d6b3894b3b8125503de238d5c4-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/efb984fe-ffa0-4411-9d43-b17ad24c8a6e /dev/mapper/usr","size":1065345024,"uuid":"4e442375-4090-414c-b441-41dd6dfbea46","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.577939 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-04.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-04.lxstage.domain.com,UID:5535f98f-b855-11e9-a92c-0050568460f6,ResourceVersion:308328496,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-04.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sdb","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/usb-HP_iLO_Internal_SD-CARD_000002660A01-0:0 /dev/disk/by-path/pci-0000:00:1d.0-usb-0:1.3.1:1.0-scsi-0:0:0:0","size":7971274752,"uuid":"dc0f4dd1-62ad-4b32-aaf2-bef01773886f","serial":"HP_iLO_Internal_SD-CARD_000002660A01-0:0","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sdb9","Size":2684354560,"Label":"","Filesystem":""},{"Name":"sdb7","Size":115326976,"Label":"","Filesystem":""},{"Name":"sdb5","Size":262127616,"Label":"","Filesystem":""},{"Name":"sdb1","Size":4161536,"Label":"","Filesystem":""},{"Name":"sdb8","Size":299876352,"Label":"","Filesystem":""},{"Name":"sdb6","Size":262127616,"Label":"","Filesystem":""}],"filesystem":"","vendor":"HP_iLO","model":"Internal_SD-CARD","wwn":"","wwnVendorExtension":"","empty":false},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/scsi-3600508b1001c18081424e9e4e94c4477 /dev/disk/by-id/wwn-0x600508b1001c18081424e9e4e94c4477 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:1:0:0","size":299966445568,"uuid":"b83aceff-263d-4d60-b608-5130c02cf07b","serial":"3600508b1001c18081424e9e4e94c4477","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":297479206400,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"HP","model":"LOGICAL_VOLUME","wwn":"0x600508b1001c1808","wwnVendorExtension":"0x600508b1001c18081424e9e4e94c4477","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-959135d6b3894b3b8125503de238d5c4-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/efb984fe-ffa0-4411-9d43-b17ad24c8a6e /dev/mapper/usr","size":1065345024,"uuid":"4e442375-4090-414c-b441-41dd6dfbea46","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.578326 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.578380 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-21.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-21.lxstage.domain.com,UID:553d1962-b855-11e9-a92c-0050568460f6,ResourceVersion:308328502,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-21.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"627dbf49-bba9-431b-9eb5-729ee2b63ff4","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"f4bf5fd7-e5ff-4c9e-ba2d-c4c69ac98b5e","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.578430 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-21.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-21.lxstage.domain.com,UID:553d1962-b855-11e9-a92c-0050568460f6,ResourceVersion:308328502,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-21.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"627dbf49-bba9-431b-9eb5-729ee2b63ff4","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"f4bf5fd7-e5ff-4c9e-ba2d-c4c69ac98b5e","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.578667 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.578723 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-33.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-33.lxstage.domain.com,UID:55480cfb-b855-11e9-a92c-0050568460f6,ResourceVersion:308328513,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-33.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"9ceb4be4-2a93-4eec-af78-abe9fca514a6","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-927885ea3b564514904ff1dfe4422d9a-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/01c8f018-9747-4b04-9e26-7bea7b3b5d59 /dev/mapper/usr","size":1065345024,"uuid":"1a8b1df0-1015-4fe6-8292-0e9435fccf13","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.578820 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-33.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-33.lxstage.domain.com,UID:55480cfb-b855-11e9-a92c-0050568460f6,ResourceVersion:308328513,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-33.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"9ceb4be4-2a93-4eec-af78-abe9fca514a6","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-927885ea3b564514904ff1dfe4422d9a-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/01c8f018-9747-4b04-9e26-7bea7b3b5d59 /dev/mapper/usr","size":1065345024,"uuid":"1a8b1df0-1015-4fe6-8292-0e9435fccf13","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.579065 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.579128 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-00.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-00.lxstage.domain.com,UID:55311a46-b855-11e9-a92c-0050568460f6,ResourceVersion:308328491,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-00.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"3750885f-533e-4205-b55c-d0658846df00","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"dfecc577-6736-4b93-8d17-35684244b049","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.579173 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-00.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-00.lxstage.domain.com,UID:55311a46-b855-11e9-a92c-0050568460f6,ResourceVersion:308328491,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-00.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"3750885f-533e-4205-b55c-d0658846df00","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"dfecc577-6736-4b93-8d17-35684244b049","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.579407 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.579467 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-102.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-102.lxstage.domain.com,UID:55383ded-b855-11e9-a92c-0050568460f6,ResourceVersion:308328498,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-102.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"nvme0n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100222 /dev/disk/by-id/nvme-eui.33395a304d1002220025385800000002 /dev/disk/by-path/pci-0000:88:00.0-nvme-1","size":6401252745216,"uuid":"fff59f2a-373b-4131-93ef-29f92850450d","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100222","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002220025385800000002","wwnVendorExtension":"","empty":true},{"name":"nvme1n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100210 /dev/disk/by-id/nvme-eui.33395a304d1002100025385800000002 /dev/disk/by-path/pci-0000:89:00.0-nvme-1","size":6401252745216,"uuid":"71e683e3-37e0-455c-8b17-15c5638e73ff","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100210","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002100025385800000002","wwnVendorExtension":"","empty":true},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/ata-DELLBOSS_VD_c86b42613b290010 /dev/disk/by-path/pci-0000:5e:00.0-ata-1","size":239990276096,"uuid":"4075636c-c69f-4e3f-8380-a76e35a4a0c2","serial":"DELLBOSS_VD_c86b42613b290010","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":237503036928,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"","model":"DELLBOSS_VD","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"499b8e15-d15d-434e-a16e-be999fbd459c","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.579518 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-102.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-102.lxstage.domain.com,UID:55383ded-b855-11e9-a92c-0050568460f6,ResourceVersion:308328498,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-102.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"nvme0n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100222 /dev/disk/by-id/nvme-eui.33395a304d1002220025385800000002 /dev/disk/by-path/pci-0000:88:00.0-nvme-1","size":6401252745216,"uuid":"fff59f2a-373b-4131-93ef-29f92850450d","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100222","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002220025385800000002","wwnVendorExtension":"","empty":true},{"name":"nvme1n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100210 /dev/disk/by-id/nvme-eui.33395a304d1002100025385800000002 /dev/disk/by-path/pci-0000:89:00.0-nvme-1","size":6401252745216,"uuid":"71e683e3-37e0-455c-8b17-15c5638e73ff","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100210","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002100025385800000002","wwnVendorExtension":"","empty":true},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/ata-DELLBOSS_VD_c86b42613b290010 /dev/disk/by-path/pci-0000:5e:00.0-ata-1","size":239990276096,"uuid":"4075636c-c69f-4e3f-8380-a76e35a4a0c2","serial":"DELLBOSS_VD_c86b42613b290010","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":237503036928,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"","model":"DELLBOSS_VD","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"499b8e15-d15d-434e-a16e-be999fbd459c","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.579954 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.580022 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-01.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-01.lxstage.domain.com,UID:5532891e-b855-11e9-a92c-0050568460f6,ResourceVersion:308328493,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-01.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sdb","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/usb-HP_iLO_Internal_SD-CARD_000002660A01-0:0 /dev/disk/by-path/pci-0000:00:1d.0-usb-0:1.3.1:1.0-scsi-0:0:0:0","size":7971274752,"uuid":"16d712e2-d4f0-47e7-8c66-ea3529d086c9","serial":"HP_iLO_Internal_SD-CARD_000002660A01-0:0","type":"disk","rotational":true,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"HP_iLO","model":"Internal_SD-CARD","wwn":"","wwnVendorExtension":"","empty":true},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/scsi-3600508b1001c9bd50e1a1dab91331710 /dev/disk/by-id/wwn-0x600508b1001c9bd50e1a1dab91331710 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:1:0:0","size":299966445568,"uuid":"0f8cedc7-3264-4fae-a009-5fc0da645cb8","serial":"3600508b1001c9bd50e1a1dab91331710","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":297479206400,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"HP","model":"LOGICAL_VOLUME","wwn":"0x600508b1001c9bd5","wwnVendorExtension":"0x600508b1001c9bd50e1a1dab91331710","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"d5445e1c-7e08-4bd3-a49d-3b2985965aeb","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.580072 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-01.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-01.lxstage.domain.com,UID:5532891e-b855-11e9-a92c-0050568460f6,ResourceVersion:308328493,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-01.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sdb","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/usb-HP_iLO_Internal_SD-CARD_000002660A01-0:0 /dev/disk/by-path/pci-0000:00:1d.0-usb-0:1.3.1:1.0-scsi-0:0:0:0","size":7971274752,"uuid":"16d712e2-d4f0-47e7-8c66-ea3529d086c9","serial":"HP_iLO_Internal_SD-CARD_000002660A01-0:0","type":"disk","rotational":true,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"HP_iLO","model":"Internal_SD-CARD","wwn":"","wwnVendorExtension":"","empty":true},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/scsi-3600508b1001c9bd50e1a1dab91331710 /dev/disk/by-id/wwn-0x600508b1001c9bd50e1a1dab91331710 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:1:0:0","size":299966445568,"uuid":"0f8cedc7-3264-4fae-a009-5fc0da645cb8","serial":"3600508b1001c9bd50e1a1dab91331710","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":297479206400,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"HP","model":"LOGICAL_VOLUME","wwn":"0x600508b1001c9bd5","wwnVendorExtension":"0x600508b1001c9bd50e1a1dab91331710","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"d5445e1c-7e08-4bd3-a49d-3b2985965aeb","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.580430 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.580483 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-32.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-32.lxstage.domain.com,UID:55469731-b855-11e9-a92c-0050568460f6,ResourceVersion:308328512,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-32.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"d10bd427-e0ae-42f0-99c4-843c7cd41eb1","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"b8a9ccc8-2d40-470c-b4a2-793c7f33351a","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.580531 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-32.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-32.lxstage.domain.com,UID:55469731-b855-11e9-a92c-0050568460f6,ResourceVersion:308328512,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-32.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"d10bd427-e0ae-42f0-99c4-843c7cd41eb1","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"b8a9ccc8-2d40-470c-b4a2-793c7f33351a","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.580768 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.580816 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-103.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-103.lxstage.domain.com,UID:55398132-b855-11e9-a92c-0050568460f6,ResourceVersion:308328499,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-103.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"nvme0n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100213 /dev/disk/by-id/nvme-eui.33395a304d1002130025385800000002 /dev/disk/by-path/pci-0000:88:00.0-nvme-1","size":6401252745216,"uuid":"392a8a38-c5c9-444e-9d37-bb9131800a8b","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100213","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002130025385800000002","wwnVendorExtension":"","empty":true},{"name":"nvme1n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100203 /dev/disk/by-id/nvme-eui.33395a304d1002030025385800000002 /dev/disk/by-path/pci-0000:89:00.0-nvme-1","size":6401252745216,"uuid":"44f25539-6a46-40d6-aeed-64b610ea4514","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100203","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002030025385800000002","wwnVendorExtension":"","empty":true},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/ata-DELLBOSS_VD_b38edfa4a8c20010 /dev/disk/by-path/pci-0000:5e:00.0-ata-1","size":239990276096,"uuid":"edfbb30b-c225-4564-9a9a-9cc1b0f6e365","serial":"DELLBOSS_VD_b38edfa4a8c20010","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":237503036928,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"","model":"DELLBOSS_VD","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"6171e063-d085-45c9-850e-20b747568767","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.580875 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-103.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-103.lxstage.domain.com,UID:55398132-b855-11e9-a92c-0050568460f6,ResourceVersion:308328499,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-103.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"nvme0n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100213 /dev/disk/by-id/nvme-eui.33395a304d1002130025385800000002 /dev/disk/by-path/pci-0000:88:00.0-nvme-1","size":6401252745216,"uuid":"392a8a38-c5c9-444e-9d37-bb9131800a8b","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100213","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002130025385800000002","wwnVendorExtension":"","empty":true},{"name":"nvme1n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100203 /dev/disk/by-id/nvme-eui.33395a304d1002030025385800000002 /dev/disk/by-path/pci-0000:89:00.0-nvme-1","size":6401252745216,"uuid":"44f25539-6a46-40d6-aeed-64b610ea4514","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100203","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002030025385800000002","wwnVendorExtension":"","empty":true},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/ata-DELLBOSS_VD_b38edfa4a8c20010 /dev/disk/by-path/pci-0000:5e:00.0-ata-1","size":239990276096,"uuid":"edfbb30b-c225-4564-9a9a-9cc1b0f6e365","serial":"DELLBOSS_VD_b38edfa4a8c20010","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":237503036928,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"","model":"DELLBOSS_VD","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"6171e063-d085-45c9-850e-20b747568767","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.581303 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.581354 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-20.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-20.lxstage.domain.com,UID:553c4a0d-b855-11e9-a92c-0050568460f6,ResourceVersion:308328501,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-20.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"ed5f72fc-b048-46f5-a749-75f778105ec9","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-927885ea3b564514904ff1dfe4422d9a-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/01c8f018-9747-4b04-9e26-7bea7b3b5d59 /dev/mapper/usr","size":1065345024,"uuid":"7470250e-2911-4b7b-ba71-e358f5e34f9a","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.581406 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-20.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-20.lxstage.domain.com,UID:553c4a0d-b855-11e9-a92c-0050568460f6,ResourceVersion:308328501,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-20.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"ed5f72fc-b048-46f5-a749-75f778105ec9","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-927885ea3b564514904ff1dfe4422d9a-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/01c8f018-9747-4b04-9e26-7bea7b3b5d59 /dev/mapper/usr","size":1065345024,"uuid":"7470250e-2911-4b7b-ba71-e358f5e34f9a","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.581638 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.581687 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-02.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-02.lxstage.domain.com,UID:5533c7d1-b855-11e9-a92c-0050568460f6,ResourceVersion:308328494,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-02.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sdb","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/usb-HP_iLO_Internal_SD-CARD_000002660A01-0:0 /dev/disk/by-path/pci-0000:00:1d.0-usb-0:1.3.1:1.0-scsi-0:0:0:0","size":7971274752,"uuid":"ec19f4bd-5206-4112-89d4-9e1d88db736e","serial":"HP_iLO_Internal_SD-CARD_000002660A01-0:0","type":"disk","rotational":true,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"HP_iLO","model":"Internal_SD-CARD","wwn":"","wwnVendorExtension":"","empty":true},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/scsi-3600508b1001cba66daa52b3010096c8d /dev/disk/by-id/wwn-0x600508b1001cba66daa52b3010096c8d /dev/disk/by-path/pci-0000:02:00.0-scsi-0:1:0:0","size":299966445568,"uuid":"c3afd08e-453a-475b-811a-b05a10869fa1","serial":"3600508b1001cba66daa52b3010096c8d","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":297479206400,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"HP","model":"LOGICAL_VOLUME","wwn":"0x600508b1001cba66","wwnVendorExtension":"0x600508b1001cba66daa52b3010096c8d","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"24861766-72ee-4ecb-a1aa-10a35f9f87e5","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.581734 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-02.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-02.lxstage.domain.com,UID:5533c7d1-b855-11e9-a92c-0050568460f6,ResourceVersion:308328494,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-02.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sdb","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/usb-HP_iLO_Internal_SD-CARD_000002660A01-0:0 /dev/disk/by-path/pci-0000:00:1d.0-usb-0:1.3.1:1.0-scsi-0:0:0:0","size":7971274752,"uuid":"ec19f4bd-5206-4112-89d4-9e1d88db736e","serial":"HP_iLO_Internal_SD-CARD_000002660A01-0:0","type":"disk","rotational":true,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"HP_iLO","model":"Internal_SD-CARD","wwn":"","wwnVendorExtension":"","empty":true},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/scsi-3600508b1001cba66daa52b3010096c8d /dev/disk/by-id/wwn-0x600508b1001cba66daa52b3010096c8d /dev/disk/by-path/pci-0000:02:00.0-scsi-0:1:0:0","size":299966445568,"uuid":"c3afd08e-453a-475b-811a-b05a10869fa1","serial":"3600508b1001cba66daa52b3010096c8d","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":297479206400,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"HP","model":"LOGICAL_VOLUME","wwn":"0x600508b1001cba66","wwnVendorExtension":"0x600508b1001cba66daa52b3010096c8d","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"24861766-72ee-4ecb-a1aa-10a35f9f87e5","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.582076 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.582122 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-30.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-30.lxstage.domain.com,UID:5544a030-b855-11e9-a92c-0050568460f6,ResourceVersion:308328509,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-30.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"3282db70-a860-417d-ab2a-b9168f06d233","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"7829e666-ba7b-4542-9c49-ad3cb48caaac","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.582172 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-30.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-30.lxstage.domain.com,UID:5544a030-b855-11e9-a92c-0050568460f6,ResourceVersion:308328509,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-30.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"3282db70-a860-417d-ab2a-b9168f06d233","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"7829e666-ba7b-4542-9c49-ad3cb48caaac","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.582409 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.582453 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-24.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-24.lxstage.domain.com,UID:55420365-b855-11e9-a92c-0050568460f6,ResourceVersion:308328507,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-24.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"0b69e054-f432-472f-9e39-811c3d6e29c6","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"18304b73-1985-406f-866d-015322a3401b","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.582502 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-24.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-24.lxstage.domain.com,UID:55420365-b855-11e9-a92c-0050568460f6,ResourceVersion:308328507,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-24.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"0b69e054-f432-472f-9e39-811c3d6e29c6","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"18304b73-1985-406f-866d-015322a3401b","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.582740 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.582800 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-101.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-101.lxstage.domain.com,UID:55371015-b855-11e9-a92c-0050568460f6,ResourceVersion:308328497,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-101.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"nvme0n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100268 /dev/disk/by-id/nvme-eui.33395a304d1002680025385800000002 /dev/disk/by-path/pci-0000:88:00.0-nvme-1","size":6401252745216,"uuid":"766d290d-63ed-4040-ba29-29ef807028c2","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100268","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002680025385800000002","wwnVendorExtension":"","empty":true},{"name":"nvme1n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100269 /dev/disk/by-id/nvme-eui.33395a304d1002690025385800000002 /dev/disk/by-path/pci-0000:89:00.0-nvme-1","size":6401252745216,"uuid":"7e78e2b1-6011-422e-bd15-efc6e7442544","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100269","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002690025385800000002","wwnVendorExtension":"","empty":true},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/ata-DELLBOSS_VD_295c7f5887420010 /dev/disk/by-path/pci-0000:5e:00.0-ata-1","size":239990276096,"uuid":"55450a37-59eb-43e8-8d6e-4ebd76a00c12","serial":"DELLBOSS_VD_295c7f5887420010","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":237503036928,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"","model":"DELLBOSS_VD","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"944a1b47-5184-4d4e-9e13-55476b7082c1","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.582849 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-101.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-101.lxstage.domain.com,UID:55371015-b855-11e9-a92c-0050568460f6,ResourceVersion:308328497,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-101.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"nvme0n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100268 /dev/disk/by-id/nvme-eui.33395a304d1002680025385800000002 /dev/disk/by-path/pci-0000:88:00.0-nvme-1","size":6401252745216,"uuid":"766d290d-63ed-4040-ba29-29ef807028c2","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100268","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002680025385800000002","wwnVendorExtension":"","empty":true},{"name":"nvme1n1","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/nvme-Dell_Express_Flash_PM1725a_6.4TB_SFF__S39ZNX0M100269 /dev/disk/by-id/nvme-eui.33395a304d1002690025385800000002 /dev/disk/by-path/pci-0000:89:00.0-nvme-1","size":6401252745216,"uuid":"7e78e2b1-6011-422e-bd15-efc6e7442544","serial":"Dell Express Flash PM1725a 6.4TB SFF_ S39ZNX0M100269","type":"disk","rotational":false,"readOnly":false,"Partitions":null,"filesystem":"","vendor":"","model":"Dell Express Flash PM1725a 6.4TB SFF","wwn":"eui.33395a304d1002690025385800000002","wwnVendorExtension":"","empty":true},{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/ata-DELLBOSS_VD_295c7f5887420010 /dev/disk/by-path/pci-0000:5e:00.0-ata-1","size":239990276096,"uuid":"55450a37-59eb-43e8-8d6e-4ebd76a00c12","serial":"DELLBOSS_VD_295c7f5887420010","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":237503036928,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"","model":"DELLBOSS_VD","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"944a1b47-5184-4d4e-9e13-55476b7082c1","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.583265 I | op-cluster: device lists are equal. skipping orchestration
2019-08-07 08:25:05.583313 D | op-cluster: onDeviceCMUpdate old device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-23.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-23.lxstage.domain.com,UID:553fba88-b855-11e9-a92c-0050568460f6,ResourceVersion:308328506,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-23.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"698ed55f-ad71-4f3e-8a58-04360ad1b0cc","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"0cbe88dc-cb4d-4a88-b632-09b5499caaa6","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.583358 D | op-cluster: onDeviceCMUpdate new device cm: &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:local-device-k8s-worker-23.lxstage.domain.com,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:/api/v1/namespaces/rook-ceph-stage-primary/configmaps/local-device-k8s-worker-23.lxstage.domain.com,UID:553fba88-b855-11e9-a92c-0050568460f6,ResourceVersion:308328506,Generation:0,CreationTimestamp:2019-08-06 14:20:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-discover,rook.io/node: k8s-worker-23.lxstage.domain.com,velero.io/backup-name: rook-initial,velero.io/restore-name: rook4-1,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{devices: [{"name":"sda","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-path/pci-0000:00:10.0-scsi-0:0:0:0","size":107374182400,"uuid":"698ed55f-ad71-4f3e-8a58-04360ad1b0cc","serial":"","type":"disk","rotational":true,"readOnly":false,"Partitions":[{"Name":"sda4","Size":1073741824,"Label":"USR-B","Filesystem":""},{"Name":"sda2","Size":2097152,"Label":"BIOS-BOOT","Filesystem":""},{"Name":"sda9","Size":104886943232,"Label":"ROOT","Filesystem":"ext4"},{"Name":"sda7","Size":67108864,"Label":"OEM-CONFIG","Filesystem":""},{"Name":"sda3","Size":1073741824,"Label":"USR-A","Filesystem":"ext4"},{"Name":"sda1","Size":134217728,"Label":"EFI-SYSTEM","Filesystem":"vfat"},{"Name":"sda6","Size":134217728,"Label":"OEM","Filesystem":"ext4"}],"filesystem":"","vendor":"VMware","model":"Virtual_disk","wwn":"","wwnVendorExtension":"","empty":false},{"name":"dm-0","parent":"","hasChildren":false,"devLinks":"/dev/disk/by-id/dm-name-usr /dev/disk/by-id/dm-uuid-CRYPT-VERITY-20ca38800670494eb5ec7025c6f1e8ac-usr /dev/disk/by-id/raid-usr /dev/disk/by-uuid/21ca7b39-3739-4b86-9123-865631f253a4 /dev/mapper/usr","size":1065345024,"uuid":"0cbe88dc-cb4d-4a88-b632-09b5499caaa6","serial":"","type":"crypt","rotational":true,"readOnly":true,"Partitions":null,"filesystem":"ext4","vendor":"","model":"","wwn":"","wwnVendorExtension":"","empty":false}],},BinaryData:map[string][]byte{},}
2019-08-07 08:25:05.583595 I | op-cluster: device lists are equal. skipping orchestration
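
The onDeviceCMUpdate pairs above are the operator's handler for the per-node local-device configmaps that rook-discover refreshes periodically: the old and new objects are dumped in full, the serialized device inventory in each is compared, and orchestration is skipped when nothing changed ("device lists are equal. skipping orchestration"). A minimal Go sketch of that comparison, assuming the inventory is stored as a single JSON string under the "devices" key as the dumps above show; this is an illustration of the pattern, not the Rook implementation:

package main

import "fmt"

// configMapData stands in for the Data field of a Kubernetes ConfigMap;
// the real handler receives *v1.ConfigMap objects from an informer.
type configMapData map[string]string

// deviceListsEqual compares the serialized device inventories. Because
// the discover daemon writes the whole list as one JSON string under
// the "devices" key, plain string equality detects any change.
func deviceListsEqual(oldData, newData configMapData) bool {
    return oldData["devices"] == newData["devices"]
}

func main() {
    oldCM := configMapData{"devices": `[{"name":"sda","empty":false}]`}
    newCM := configMapData{"devices": `[{"name":"sda","empty":false}]`}
    if deviceListsEqual(oldCM, newCM) {
        fmt.Println("device lists are equal. skipping orchestration")
    }
}
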
2019-08-07 08:25:06.857057 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:07.550869 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:07.760402 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:08.206872 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:08.580197 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:08.991147 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:09.058538 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:09.268259 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:09.298091 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:09.996446 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:10.003482 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:10.012740 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:10.244657 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:10.282072 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:10.356918 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:10.859632 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:11.602548 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:11.624768 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:12.631444 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:12.929957 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:13.273176 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:13.449864 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:16.366081 I | exec: timed out
2019-08-07 08:25:16.366238 D | op-mon: failed to get mon_status, err: mon status failed. exit status 1
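
This timed-out/failed pair repeats for the rest of the capture: the mon health check (run on the 45s --mon-healthcheck-interval) shells out to the ceph CLI with --connect-timeout=15, the exec wrapper gives up, and the check retries on the next tick. A minimal sketch of that exec-with-deadline pattern; the 20-second wrapper timeout is an assumption, since the real value is not visible in this log:

package main

import (
    "context"
    "fmt"
    "os/exec"
    "time"
)

// monStatus runs `ceph mon_status` under a context deadline, mirroring
// the command line in the log. The 20s wrapper timeout is assumed.
func monStatus(cluster string) error {
    ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
    defer cancel()

    cmd := exec.CommandContext(ctx, "ceph", "mon_status",
        "--connect-timeout=15", "--cluster="+cluster, "--format", "json")
    out, err := cmd.Output()
    if ctx.Err() == context.DeadlineExceeded {
        return fmt.Errorf("mon status failed. timed out")
    }
    if err != nil {
        return fmt.Errorf("mon status failed. %v", err)
    }
    _ = out // on success, parse the JSON mon map here
    return nil
}

func main() {
    if err := monStatus("rook-ceph-stage-primary"); err != nil {
        fmt.Println("failed to get mon_status, err:", err)
    }
}
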
2019-08-07 08:25:16.881597 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:17.564722 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:17.785265 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:18.224971 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:18.605058 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:19.009774 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:19.077021 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:19.288814 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:19.363103 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:20.025923 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:20.067416 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:20.162621 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:20.265009 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:20.302247 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:20.382069 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:20.887407 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:21.379289 I | op-mon: mons running: [j]
2019-08-07 08:25:21.379540 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/330675405
2019-08-07 08:25:21.664024 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:21.665332 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:22.649110 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:22.948110 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:23.294985 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:23.479706 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:26.909215 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:27.584497 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:27.805581 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:28.249858 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:28.635118 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:29.026638 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:29.092666 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:29.306533 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:29.331280 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:30.065283 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:30.066344 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:30.072602 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:30.287674 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:30.323320 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:30.403284 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:30.915791 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:31.640501 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:31.659098 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:32.664990 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:32.963102 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:33.311747 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:33.503136 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:36.936591 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:36.965946 I | exec: timed out
2019-08-07 08:25:36.966092 D | op-mon: failed to get mon_status, err: mon status failed. exit status 1
2019-08-07 08:25:37.606535 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:37.863936 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:38.269269 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:38.657003 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:39.045630 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:39.116720 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:39.325462 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:39.348212 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:40.082575 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:40.099393 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:40.102710 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:40.309291 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:40.344007 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:40.427234 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:40.941279 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:41.662869 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:41.678295 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:41.977126 I | op-mon: mons running: [j]
2019-08-07 08:25:41.977361 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/627594184
2019-08-07 08:25:42.680198 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:42.981603 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:43.333860 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:43.524900 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:46.951722 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:47.652621 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:47.842695 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:48.290728 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:48.686324 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:49.066220 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:49.131614 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:49.344490 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:49.365307 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:50.124881 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:50.131217 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:50.136591 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:50.326076 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:50.367749 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:50.449049 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:50.957987 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:51.691452 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:51.697983 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:52.693229 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:52.999202 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:53.350791 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:53.559382 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:56.668383 D | op-osd: Checking osd processes status.
2019-08-07 08:25:56.668454 D | op-osd: OSDs with previously detected Down status: map[]
2019-08-07 08:25:56.668702 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/802457223
2019-08-07 08:25:56.975380 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:57.569024 I | exec: timed out
2019-08-07 08:25:57.569165 D | op-mon: failed to get mon_status, err: mon status failed. exit status 1
2019-08-07 08:25:57.669839 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:57.866244 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:58.309413 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:58.588973 D | op-cluster: checking health of cluster
2019-08-07 08:25:58.589223 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/272300602
2019-08-07 08:25:58.763564 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:59.087722 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:59.163649 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:59.362405 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:25:59.388254 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:00.149676 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:00.159763 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:00.173993 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:00.364705 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:00.386836 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:00.469530 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:00.977708 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:01.712655 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:01.717974 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:02.582945 I | op-mon: mons running: [j]
2019-08-07 08:26:02.583129 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/188268625
2019-08-07 08:26:02.763250 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:03.063755 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:03.370493 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:03.583442 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:06.990839 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:07.691562 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:07.886700 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:08.328708 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:08.729503 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:09.119928 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:09.175840 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:09.381655 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:09.419444 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:10.175935 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:10.182858 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:10.198211 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:10.377990 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:10.409651 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:10.489543 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:10.998673 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:11.734850 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:11.741123 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:12.365985 I | exec: timed out
2019-08-07 08:26:12.366141 W | op-osd: Failed OSD status check: failed to get osd dump: exit status 1
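
This is the OSD monitor pass that started at 08:25:56: it keeps a map of OSDs already seen down (empty here, hence map[]), fetches `ceph osd dump`, and diffs the up/down state; because the dump itself timed out, the whole check is abandoned for this round. A sketch of that down-OSD bookkeeping, assuming only the standard `osd dump --format json` fields (the osds array with osd and up); the surrounding types are illustrative:

package main

import (
    "encoding/json"
    "fmt"
)

// osdDump models the fields of `ceph osd dump --format json` needed here.
type osdDump struct {
    OSDs []struct {
        OSD int `json:"osd"`
        Up  int `json:"up"`
    } `json:"osds"`
}

// downOSDs returns the set of OSD ids reported down in a dump.
func downOSDs(raw []byte) (map[int]struct{}, error) {
    var dump osdDump
    if err := json.Unmarshal(raw, &dump); err != nil {
        return nil, fmt.Errorf("failed to get osd dump: %v", err)
    }
    down := map[int]struct{}{}
    for _, o := range dump.OSDs {
        if o.Up == 0 {
            down[o.OSD] = struct{}{}
        }
    }
    return down, nil
}

func main() {
    raw := []byte(`{"osds":[{"osd":0,"up":1},{"osd":1,"up":0}]}`)
    down, err := downOSDs(raw)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("OSDs with previously detected Down status:", down)
}
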
2019-08-07 08:26:12.726445 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:13.040003 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:13.388748 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:13.618567 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:14.165813 I | exec: timed out
2019-08-07 08:26:14.166027 E | op-cluster: failed to get ceph status. failed to get status: exit status 1
2019-08-07 08:26:17.010772 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:17.706244 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:17.906432 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:18.265968 I | exec: timed out
2019-08-07 08:26:18.266117 D | op-mon: failed to get mon_status, err: mon status failed. exit status 1
2019-08-07 08:26:18.346394 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:18.754118 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:19.143786 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:19.191829 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:19.403359 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:19.439178 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:20.208690 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:20.214040 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:20.224380 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:20.403440 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:20.427042 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:20.514238 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:21.025755 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:21.753408 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:21.763717 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:22.740945 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:23.058892 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:23.280413 I | op-mon: mons running: [j]
2019-08-07 08:26:23.280608 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/687647612
2019-08-07 08:26:23.462378 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:23.663836 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:26.675762 I | exec: 2019-08-07 08:26:25.488 7f95a2067700 1 librados: starting msgr at
2019-08-07 08:26:25.488 7f95a2067700 1 librados: starting objecter
2019-08-07 08:26:25.489 7f95a2067700 1 librados: setting wanted keys
2019-08-07 08:26:25.489 7f95a2067700 1 librados: calling monclient init
2019-08-07 08:26:25.567 7f95a2067700 1 librados: init done
2019-08-07 08:26:26.600 7f95a2067700 10 librados: watch_flush enter
2019-08-07 08:26:26.600 7f95a2067700 10 librados: watch_flush exit
2019-08-07 08:26:26.601 7f95a2067700 1 librados: shutdown
2019-08-07 08:26:26.676238 D | cephclient: MON STATUS: {Quorum:[0 1 2 4 5] MonMap:{Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0} {Name:j Rank:5 Address:100.79.195.199:6789/0}]}}
2019-08-07 08:26:26.676270 I | op-mon: Monitors in quorum: [b f g i j]
2019-08-07 08:26:26.676282 I | op-mon: ensuring removal of unhealthy monitor d
2019-08-07 08:26:26.680503 I | op-mon: dead mon rook-ceph-mon-d was already gone
2019-08-07 08:26:26.680531 D | op-mon: removing monitor d
2019-08-07 08:26:26.680731 I | exec: Running command: ceph mon remove d --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/539885739
2019-08-07 08:26:27.063826 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:27.763938 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:27.963935 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:28.371826 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:28.773695 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:28.872572 I | exec: 2019-08-07 08:26:27.662 7efffc805700 1 librados: starting msgr at
2019-08-07 08:26:27.662 7efffc805700 1 librados: starting objecter
2019-08-07 08:26:27.663 7efffc805700 1 librados: setting wanted keys
2019-08-07 08:26:27.663 7efffc805700 1 librados: calling monclient init
2019-08-07 08:26:27.669 7efffc805700 1 librados: init done
mon.d does not exist or has already been removed
2019-08-07 08:26:28.761 7efffc805700 10 librados: watch_flush enter
2019-08-07 08:26:28.761 7efffc805700 10 librados: watch_flush exit
2019-08-07 08:26:28.762 7efffc805700 1 librados: shutdown
2019-08-07 08:26:28.872739 I | op-mon: removed monitor d
2019-08-07 08:26:28.876733 I | op-mon: dead mon service rook-ceph-mon-d was already gone
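
Note that mon removal is idempotent at both layers: the rook-ceph-mon-d deployment and service were already gone, and ceph itself answered "mon.d does not exist or has already been removed" while the command still succeeded. A hedged spot-check that d is out of the monmap, with the same conf/keyring as above:

  ceph mon dump --cluster=rook-ceph-stage-primary \
    --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config \
    --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring
  # the dump should list only b,f,g,h,i,j; `ceph quorum_status` shows who actually votes
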
2019-08-07 08:26:28.885242 D | op-mon: updating config map rook-ceph-mon-endpoints that already exists
2019-08-07 08:26:28.890566 I | op-mon: saved mon endpoints to config map map[data:f=100.69.115.5:6789,g=100.66.122.247:6789,h=100.64.242.138:6789,i=100.70.92.237:6789,b=100.67.17.84:6789,j=100.79.195.199:6789 maxMonId:9 mapping:{"node":{"b":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"},"c":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"},"e":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"},"f":{"Name":"k8s-worker-102.lxstage.domain.com","Hostname":"k8s-worker-102.lxstage.domain.com","Address":"172.22.254.186"},"g":{"Name":"k8s-worker-103.lxstage.domain.com","Hostname":"k8s-worker-103.lxstage.domain.com","Address":"172.22.254.185"},"h":{"Name":"k8s-worker-104.lxstage.domain.com","Hostname":"k8s-worker-104.lxstage.domain.com","Address":"172.22.254.187"},"i":{"Name":"k8s-worker-01.lxstage.domain.com","Hostname":"k8s-worker-01.lxstage.domain.com","Address":"172.22.254.150"},"j":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"}},"port":{}}]
2019-08-07 08:26:28.901105 D | op-config: Generated and stored config file:
[global]
mon_allow_pool_delete = true
mon_max_pg_per_osd = 1000
osd_pg_bits = 11
osd_pgp_bits = 11
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 100
osd_pool_default_pgp_num = 100
rbd_default_features = 3
fatal_signal_handlers = false
osd pool default pg num = 512
osd pool default pgp num = 512
osd pool default size = 3
osd pool default min size = 2
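
The generated file deliberately carries two spellings of the pool defaults: the underscore entries near the top are the operator's stock defaults, and the space-separated entries at the bottom look like user-supplied overrides (likely via the rook-config-override configmap). Ceph's parser treats spaces and underscores in option names as the same key, and the last assignment in a section should win, so the effective defaults here would be pg_num/pgp_num 512, size 3, min_size 2. Assuming the ceph-conf tool is present in the image, that can be checked directly:

  ceph-conf -c /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config \
    --lookup "osd pool default pg num"
  # expected: 512 (the later space-form entry overriding the earlier 100)
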
2019-08-07 08:26:28.904408 D | op-config: updating config secret &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-config,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[{ceph.rook.io/v1 CephCluster rook-ceph-stage-primary 76235f05-b792-11e9-9b32-0050568460f6 <nil> 0xc000c66c6c}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string][]byte{},Type:kubernetes.io/rook,StringData:map[string]string{mon_host: [v2:100.66.122.247:3300,v1:100.66.122.247:6789],[v2:100.64.242.138:3300,v1:100.64.242.138:6789],[v2:100.70.92.237:3300,v1:100.70.92.237:6789],[v2:100.67.17.84:3300,v1:100.67.17.84:6789],[v2:100.79.195.199:3300,v1:100.79.195.199:6789],[v2:100.69.115.5:3300,v1:100.69.115.5:6789],mon_initial_members: g,h,i,b,j,f,},}
2019-08-07 08:26:28.909029 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:26:28.909202 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:26:28.909370 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:26:28.909735 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:26:28.909901 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:26:28.910066 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:26:28.910083 D | op-mon: Released lock for mon orchestration
2019-08-07 08:26:29.161148 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:29.208724 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:29.423451 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:29.456206 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:30.263008 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:30.264327 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:30.265746 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:30.425142 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:30.444561 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:30.535149 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:31.057749 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:31.772405 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:31.787890 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:32.761216 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:33.074650 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:33.430882 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:33.667459 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:37.052634 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:37.748142 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:37.944423 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:38.388531 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:38.798903 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:39.262309 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:39.265586 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:39.445408 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:39.475630 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:40.262613 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:40.268602 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:40.281236 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:40.443517 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:40.463081 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:40.552930 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:41.082428 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:41.799749 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:41.863687 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:42.863174 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:43.097278 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:43.450882 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:43.692434 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:47.072751 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:47.764360 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:47.975501 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:48.408644 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:48.862481 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:49.199358 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:49.262321 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:49.466882 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:49.562071 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:50.272960 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:50.295537 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:50.305869 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:50.461375 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:50.482089 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:50.574494 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:51.103886 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:51.823118 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:51.830436 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:52.797734 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:53.114848 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:53.468534 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:53.717033 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:57.089484 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:57.782252 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:58.003492 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:58.425802 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:58.863434 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:59.222007 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:59.265661 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:59.488651 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:26:59.510734 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:00.302836 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:00.326997 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:00.332296 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:00.481563 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:00.500100 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:00.594639 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:01.127529 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:01.841342 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:01.849243 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:02.815276 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:03.139969 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:03.496870 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:03.748686 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:07.103900 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:07.798785 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:08.019936 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:08.442118 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:08.875255 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:09.244352 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:09.286091 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:09.504426 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:09.530303 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:10.324590 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:10.348408 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:10.356681 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:10.500217 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:10.518895 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:10.613574 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:11.151842 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:11.858103 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:11.866881 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:12.366433 D | op-osd: Checking osd processes status.
2019-08-07 08:27:12.366485 D | op-osd: OSDs with previously detected Down status: map[]
2019-08-07 08:27:12.366716 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/510435854
2019-08-07 08:27:12.833185 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:13.163788 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:13.563238 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:13.772237 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:13.962162 D | op-mon: checking health of mons
2019-08-07 08:27:13.962201 D | op-mon: Acquiring lock for mon orchestration
2019-08-07 08:27:13.962211 D | op-mon: Acquired lock for mon orchestration
2019-08-07 08:27:13.962221 D | op-mon: Checking health for mons in cluster. rook-ceph-stage-primary
2019-08-07 08:27:14.262348 D | op-cluster: checking health of cluster
2019-08-07 08:27:14.262601 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/823288853
2019-08-07 08:27:14.362094 D | op-mon: targeting the mon count 5
2019-08-07 08:27:14.362315 D | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/475278448
2019-08-07 08:27:14.462470 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:14.463762 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:15.386826 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:15.563818 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:15.565141 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:16.262161 I | exec: 2019-08-07 08:27:13.172 7f5f898da700 1 librados: starting msgr at
2019-08-07 08:27:13.172 7f5f898da700 1 librados: starting objecter
2019-08-07 08:27:13.173 7f5f898da700 1 librados: setting wanted keys
2019-08-07 08:27:13.173 7f5f898da700 1 librados: calling monclient init
2019-08-07 08:27:13.181 7f5f898da700 1 librados: init done
2019-08-07 08:27:15.862 7f5f898da700 10 librados: watch_flush enter
2019-08-07 08:27:15.862 7f5f898da700 10 librados: watch_flush exit
2019-08-07 08:27:15.961 7f5f898da700 1 librados: shutdown
2019-08-07 08:27:16.264806 D | op-osd: osd dump &{[{0 1 1} {1 1 1} {2 1 1} {3 1 1} {4 1 1} {5 1 1} {6 1 1} {7 1 1} {8 1 1} {9 1 1} {10 1 1} {11 1 1} {12 1 1} {13 1 1} {14 1 1} {15 1 1} {16 1 1} {17 1 1} {18 1 1} {19 1 1} {20 1 1} {21 1 1} {22 1 1} {23 1 1} {24 1 1} {25 1 1} {26 1 1} {27 1 1} {28 1 1} {29 1 1} {30 1 1} {31 1 1} {32 1 1} {33 1 1} {34 1 1} {35 1 1} {36 1 1} {37 1 1} {38 1 1} {39 1 1} {40 1 1} {41 1 1} {42 1 1} {43 1 1} {44 1 1} {45 1 1} {46 1 1} {47 1 1}]}
2019-08-07 08:27:16.264831 D | op-osd: validating status of osd.0
2019-08-07 08:27:16.264841 D | op-osd: osd.0 is healthy.
2019-08-07 08:27:16.264849 D | op-osd: validating status of osd.1
2019-08-07 08:27:16.264856 D | op-osd: osd.1 is healthy.
2019-08-07 08:27:16.264863 D | op-osd: validating status of osd.2
2019-08-07 08:27:16.264870 D | op-osd: osd.2 is healthy.
2019-08-07 08:27:16.264877 D | op-osd: validating status of osd.3
2019-08-07 08:27:16.264884 D | op-osd: osd.3 is healthy.
2019-08-07 08:27:16.264891 D | op-osd: validating status of osd.4
2019-08-07 08:27:16.264898 D | op-osd: osd.4 is healthy.
2019-08-07 08:27:16.264917 D | op-osd: validating status of osd.5
2019-08-07 08:27:16.264925 D | op-osd: osd.5 is healthy.
2019-08-07 08:27:16.264932 D | op-osd: validating status of osd.6
2019-08-07 08:27:16.264939 D | op-osd: osd.6 is healthy.
2019-08-07 08:27:16.264946 D | op-osd: validating status of osd.7
2019-08-07 08:27:16.264955 D | op-osd: osd.7 is healthy.
2019-08-07 08:27:16.264961 D | op-osd: validating status of osd.8
2019-08-07 08:27:16.264969 D | op-osd: osd.8 is healthy.
2019-08-07 08:27:16.264976 D | op-osd: validating status of osd.9
2019-08-07 08:27:16.264984 D | op-osd: osd.9 is healthy.
2019-08-07 08:27:16.264991 D | op-osd: validating status of osd.10
2019-08-07 08:27:16.264999 D | op-osd: osd.10 is healthy.
2019-08-07 08:27:16.265006 D | op-osd: validating status of osd.11
2019-08-07 08:27:16.265014 D | op-osd: osd.11 is healthy.
2019-08-07 08:27:16.265021 D | op-osd: validating status of osd.12
2019-08-07 08:27:16.265028 D | op-osd: osd.12 is healthy.
2019-08-07 08:27:16.265036 D | op-osd: validating status of osd.13
2019-08-07 08:27:16.265044 D | op-osd: osd.13 is healthy.
2019-08-07 08:27:16.265050 D | op-osd: validating status of osd.14
2019-08-07 08:27:16.265059 D | op-osd: osd.14 is healthy.
2019-08-07 08:27:16.265066 D | op-osd: validating status of osd.15
2019-08-07 08:27:16.265075 D | op-osd: osd.15 is healthy.
2019-08-07 08:27:16.265082 D | op-osd: validating status of osd.16
2019-08-07 08:27:16.265090 D | op-osd: osd.16 is healthy.
2019-08-07 08:27:16.265096 D | op-osd: validating status of osd.17
2019-08-07 08:27:16.265104 D | op-osd: osd.17 is healthy.
2019-08-07 08:27:16.265111 D | op-osd: validating status of osd.18
2019-08-07 08:27:16.265119 D | op-osd: osd.18 is healthy.
2019-08-07 08:27:16.265126 D | op-osd: validating status of osd.19
2019-08-07 08:27:16.265134 D | op-osd: osd.19 is healthy.
2019-08-07 08:27:16.265141 D | op-osd: validating status of osd.20
2019-08-07 08:27:16.265148 D | op-osd: osd.20 is healthy.
2019-08-07 08:27:16.265155 D | op-osd: validating status of osd.21
2019-08-07 08:27:16.265164 D | op-osd: osd.21 is healthy.
2019-08-07 08:27:16.265171 D | op-osd: validating status of osd.22
2019-08-07 08:27:16.265180 D | op-osd: osd.22 is healthy.
2019-08-07 08:27:16.265187 D | op-osd: validating status of osd.23
2019-08-07 08:27:16.265195 D | op-osd: osd.23 is healthy.
2019-08-07 08:27:16.265202 D | op-osd: validating status of osd.24
2019-08-07 08:27:16.265211 D | op-osd: osd.24 is healthy.
2019-08-07 08:27:16.265217 D | op-osd: validating status of osd.25
2019-08-07 08:27:16.265226 D | op-osd: osd.25 is healthy.
2019-08-07 08:27:16.265233 D | op-osd: validating status of osd.26
2019-08-07 08:27:16.265241 D | op-osd: osd.26 is healthy.
2019-08-07 08:27:16.265248 D | op-osd: validating status of osd.27
2019-08-07 08:27:16.265257 D | op-osd: osd.27 is healthy.
2019-08-07 08:27:16.265263 D | op-osd: validating status of osd.28
2019-08-07 08:27:16.265272 D | op-osd: osd.28 is healthy.
2019-08-07 08:27:16.265279 D | op-osd: validating status of osd.29
2019-08-07 08:27:16.265288 D | op-osd: osd.29 is healthy.
2019-08-07 08:27:16.265295 D | op-osd: validating status of osd.30
2019-08-07 08:27:16.265304 D | op-osd: osd.30 is healthy.
2019-08-07 08:27:16.265311 D | op-osd: validating status of osd.31
2019-08-07 08:27:16.265320 D | op-osd: osd.31 is healthy.
2019-08-07 08:27:16.265327 D | op-osd: validating status of osd.32
2019-08-07 08:27:16.265336 D | op-osd: osd.32 is healthy.
2019-08-07 08:27:16.265342 D | op-osd: validating status of osd.33
2019-08-07 08:27:16.265351 D | op-osd: osd.33 is healthy.
2019-08-07 08:27:16.265358 D | op-osd: validating status of osd.34
2019-08-07 08:27:16.265367 D | op-osd: osd.34 is healthy.
2019-08-07 08:27:16.265374 D | op-osd: validating status of osd.35
2019-08-07 08:27:16.265383 D | op-osd: osd.35 is healthy.
2019-08-07 08:27:16.265389 D | op-osd: validating status of osd.36
2019-08-07 08:27:16.265399 D | op-osd: osd.36 is healthy.
2019-08-07 08:27:16.265406 D | op-osd: validating status of osd.37
2019-08-07 08:27:16.265415 D | op-osd: osd.37 is healthy.
2019-08-07 08:27:16.265422 D | op-osd: validating status of osd.38
2019-08-07 08:27:16.265432 D | op-osd: osd.38 is healthy.
2019-08-07 08:27:16.265439 D | op-osd: validating status of osd.39
2019-08-07 08:27:16.265448 D | op-osd: osd.39 is healthy.
2019-08-07 08:27:16.265455 D | op-osd: validating status of osd.40
2019-08-07 08:27:16.265464 D | op-osd: osd.40 is healthy.
2019-08-07 08:27:16.265471 D | op-osd: validating status of osd.41
2019-08-07 08:27:16.265480 D | op-osd: osd.41 is healthy.
2019-08-07 08:27:16.265487 D | op-osd: validating status of osd.42
2019-08-07 08:27:16.265496 D | op-osd: osd.42 is healthy.
2019-08-07 08:27:16.265504 D | op-osd: validating status of osd.43
2019-08-07 08:27:16.265513 D | op-osd: osd.43 is healthy.
2019-08-07 08:27:16.265520 D | op-osd: validating status of osd.44
2019-08-07 08:27:16.265530 D | op-osd: osd.44 is healthy.
2019-08-07 08:27:16.265536 D | op-osd: validating status of osd.45
2019-08-07 08:27:16.265546 D | op-osd: osd.45 is healthy.
2019-08-07 08:27:16.265553 D | op-osd: validating status of osd.46
2019-08-07 08:27:16.265563 D | op-osd: osd.46 is healthy.
2019-08-07 08:27:16.265571 D | op-osd: validating status of osd.47
2019-08-07 08:27:16.265580 D | op-osd: osd.47 is healthy.
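
Every OSD in the dump reports up=1/in=1 (the two flags after each id), so this pass records no down OSDs and the earlier "Failed OSD status check" was purely a client-side timeout. Equivalent hedged spot-checks from a toolbox shell:

  ceph osd stat        # one-line summary, e.g. "48 osds: 48 up, 48 in"
  ceph osd tree down   # filters the tree to down OSDs; should print none here
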
2019-08-07 08:27:17.163833 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:17.363164 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:17.862515 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:18.063185 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:18.463677 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:18.963325 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:19.266572 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:19.362497 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:19.564086 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:19.564874 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:20.362671 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:20.462175 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:20.463949 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:20.563900 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:20.663866 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:21.166595 I | exec: 2019-08-07 08:27:17.662 7f6219667700 1 librados: starting msgr at
2019-08-07 08:27:17.662 7f6219667700 1 librados: starting objecter
2019-08-07 08:27:17.664 7f6219667700 1 librados: setting wanted keys
2019-08-07 08:27:17.664 7f6219667700 1 librados: calling monclient init
2019-08-07 08:27:17.670 7f6219667700 1 librados: init done
2019-08-07 08:27:20.868 7f6219667700 10 librados: watch_flush enter
2019-08-07 08:27:20.868 7f6219667700 10 librados: watch_flush exit
2019-08-07 08:27:20.960 7f6219667700 1 librados: shutdown
2019-08-07 08:27:21.167036 D | cephclient: MON STATUS: {Quorum:[0 1 2 4 5] MonMap:{Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0} {Name:j Rank:5 Address:100.79.195.199:6789/0}]}}
2019-08-07 08:27:21.167079 D | op-mon: Mon status: {Quorum:[0 1 2 4 5] MonMap:{Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0} {Name:j Rank:5 Address:100.79.195.199:6789/0}]}}
2019-08-07 08:27:21.167098 D | op-mon: mon b found in quorum
2019-08-07 08:27:21.167110 D | op-mon: mon f found in quorum
2019-08-07 08:27:21.167118 D | op-mon: mon g found in quorum
2019-08-07 08:27:21.167147 D | op-mon: mon h NOT found in quorum. Mon status: {Quorum:[0 1 2 4 5] MonMap:{Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0} {Name:j Rank:5 Address:100.79.195.199:6789/0}]}}
2019-08-07 08:27:21.167160 W | op-mon: mon h not found in quorum, waiting for timeout before failover
2019-08-07 08:27:21.167168 D | op-mon: mon i found in quorum
2019-08-07 08:27:21.167176 I | op-mon: mon i is back in quorum, removed from mon out timeout list
2019-08-07 08:27:21.167185 D | op-mon: mon j found in quorum
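
Reading the quorum math: the monmap has six mons (ranks 0-5) but the quorum set is [0 1 2 4 5], so rank 3 (mon h) is the absentee. The operator does not fail h over immediately; it arms a timeout first, and mon i, which had apparently been on that timeout list, is cleared now that it votes again. The same comparison can be made by hand:

  ceph quorum_status --format json
  # compare "quorum_names" (b,f,g,i,j) against the mons listed under "monmap"
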
2019-08-07 08:27:21.171421 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:21.365657 D | op-mon: there are 22 nodes available for 6 mons
2019-08-07 08:27:21.380353 D | op-mon: mon pod on node k8s-worker-101.lxstage.domain.com
2019-08-07 08:27:21.380386 D | op-mon: mon pod on node k8s-worker-102.lxstage.domain.com
2019-08-07 08:27:21.380399 D | op-mon: mon pod on node k8s-worker-103.lxstage.domain.com
2019-08-07 08:27:21.380411 D | op-mon: mon pod on node k8s-worker-01.lxstage.domain.com
2019-08-07 08:27:21.380421 D | op-mon: mon pod on node k8s-worker-00.lxstage.domain.com
2019-08-07 08:27:21.380466 I | op-mon: rebalance: enough nodes available 15 to failover mon e
2019-08-07 08:27:21.380475 I | op-mon: ensuring removal of unhealthy monitor e
2019-08-07 08:27:21.383997 I | op-mon: dead mon rook-ceph-mon-e was already gone
2019-08-07 08:27:21.384029 D | op-mon: removing monitor e
2019-08-07 08:27:21.384190 I | exec: Running command: ceph mon remove e --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/371474703
2019-08-07 08:27:21.962968 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:21.964371 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:22.863090 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:22.864784 I | exec: 2019-08-07 08:27:17.466 7f93cef08700 1 librados: starting msgr at
2019-08-07 08:27:17.466 7f93cef08700 1 librados: starting objecter
2019-08-07 08:27:17.467 7f93cef08700 1 librados: setting wanted keys
2019-08-07 08:27:17.467 7f93cef08700 1 librados: calling monclient init
2019-08-07 08:27:17.566 7f93cef08700 1 librados: init done
2019-08-07 08:27:22.665 7f93cef08700 10 librados: watch_flush enter
2019-08-07 08:27:22.665 7f93cef08700 10 librados: watch_flush exit
2019-08-07 08:27:22.666 7f93cef08700 1 librados: shutdown
2019-08-07 08:27:22.866165 D | op-cluster: Cluster status: {Health:{Status:HEALTH_WARN Checks:map[MON_DOWN:{Severity:HEALTH_WARN Summary:{Message:1/6 mons down, quorum b,f,g,i,j}}]} FSID:7dd854f1-2892-4201-ab69-d4797f12ac50 ElectionEpoch:544 Quorum:[0 1 2 4 5] QuorumNames:[b f g i j] MonMap:{Epoch:8 FSID:7dd854f1-2892-4201-ab69-d4797f12ac50 CreatedTime:2019-08-05 15:05:49.660802 ModifiedTime:2019-08-07 08:24:59.011086 Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0} {Name:j Rank:5 Address:100.79.195.199:6789/0}]} OsdMap:{OsdMap:{Epoch:163 NumOsd:48 NumUpOsd:48 NumInOsd:48 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:512}] Version:0 NumPgs:512 DataBytes:125898804 UsedBytes:52305739776 AvailableBytes:51126524559360 TotalBytes:51178830299136 ReadBps:0 WriteBps:0 ReadOps:0 WriteOps:0 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:118 ActiveGID:534391 ActiveName:a ActiveAddr:100.192.28.144:6801/1 Available:true Standbys:[]}}
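
The parsed status agrees with the mon bookkeeping above: HEALTH_WARN with a single MON_DOWN check, six mons in the map and five in quorum, all 48 OSDs up/in, and 512 PGs active+clean. For the human-readable form of the same warning:

  ceph health detail
  # expands MON_DOWN to the specific laggard, e.g. "mon.h (rank 3) ... is down (out of quorum)"
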
2019-08-07 08:27:22.880455 D | op-cluster: update event for cluster rook-ceph-stage-primary
2019-08-07 08:27:22.880672 D | op-cluster: update event for cluster rook-ceph-stage-primary is not supported
2019-08-07 08:27:23.176586 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:23.563308 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:23.862292 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:24.375893 I | exec: 2019-08-07 08:27:23.266 7f307a73e700 1 librados: starting msgr at
2019-08-07 08:27:23.266 7f307a73e700 1 librados: starting objecter
2019-08-07 08:27:23.266 7f307a73e700 1 librados: setting wanted keys
2019-08-07 08:27:23.266 7f307a73e700 1 librados: calling monclient init
2019-08-07 08:27:23.273 7f307a73e700 1 librados: init done
mon.e does not exist or has already been removed
2019-08-07 08:27:24.268 7f307a73e700 10 librados: watch_flush enter
2019-08-07 08:27:24.268 7f307a73e700 10 librados: watch_flush exit
2019-08-07 08:27:24.269 7f307a73e700 1 librados: shutdown
2019-08-07 08:27:24.376118 I | op-mon: removed monitor e
2019-08-07 08:27:24.379784 I | op-mon: dead mon service rook-ceph-mon-e was already gone
2019-08-07 08:27:24.388634 D | op-mon: updating config map rook-ceph-mon-endpoints that already exists
2019-08-07 08:27:24.394979 I | op-mon: saved mon endpoints to config map map[data:b=100.67.17.84:6789,j=100.79.195.199:6789,f=100.69.115.5:6789,g=100.66.122.247:6789,h=100.64.242.138:6789,i=100.70.92.237:6789 maxMonId:9 mapping:{"node":{"b":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"},"c":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"},"f":{"Name":"k8s-worker-102.lxstage.domain.com","Hostname":"k8s-worker-102.lxstage.domain.com","Address":"172.22.254.186"},"g":{"Name":"k8s-worker-103.lxstage.domain.com","Hostname":"k8s-worker-103.lxstage.domain.com","Address":"172.22.254.185"},"h":{"Name":"k8s-worker-104.lxstage.domain.com","Hostname":"k8s-worker-104.lxstage.domain.com","Address":"172.22.254.187"},"i":{"Name":"k8s-worker-01.lxstage.domain.com","Hostname":"k8s-worker-01.lxstage.domain.com","Address":"172.22.254.150"},"j":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"}},"port":{}}]
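
The rebuilt configmap now lists six endpoints (b,f,g,h,i,j) and the node mapping no longer carries e (the lingering c entry exists only in the mapping, not in the endpoint data). The same data can be read straight from Kubernetes, assuming access to the namespace:

  kubectl -n rook-ceph-stage-primary get configmap rook-ceph-mon-endpoints \
    -o jsonpath='{.data.data}'
  # prints the comma-separated mon=ip:6789 pairs shown in the log line above
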
2019-08-07 08:27:24.406052 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:24.407440 D | op-config: Generated and stored config file:
[global]
mon_allow_pool_delete = true
mon_max_pg_per_osd = 1000
osd_pg_bits = 11
osd_pgp_bits = 11
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 100
osd_pool_default_pgp_num = 100
rbd_default_features = 3
fatal_signal_handlers = false
osd pool default pg num = 512
osd pool default pgp num = 512
osd pool default size = 3
osd pool default min size = 2
2019-08-07 08:27:24.410699 D | op-config: updating config secret &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-config,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[{ceph.rook.io/v1 CephCluster rook-ceph-stage-primary 76235f05-b792-11e9-9b32-0050568460f6 <nil> 0xc000c66c6c}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string][]byte{},Type:kubernetes.io/rook,StringData:map[string]string{mon_host: [v2:100.64.242.138:3300,v1:100.64.242.138:6789],[v2:100.70.92.237:3300,v1:100.70.92.237:6789],[v2:100.67.17.84:3300,v1:100.67.17.84:6789],[v2:100.79.195.199:3300,v1:100.79.195.199:6789],[v2:100.69.115.5:3300,v1:100.69.115.5:6789],[v2:100.66.122.247:3300,v1:100.66.122.247:6789],mon_initial_members: h,i,b,j,f,g,},}
2019-08-07 08:27:24.416064 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:27:24.416237 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:27:24.416402 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:27:24.416753 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:27:24.416941 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:27:24.417099 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:27:24.417117 D | op-mon: Released lock for mon orchestration
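
At this point the orchestration settles: quorum is b,f,g,i,j, mon e is fully cleaned up, and mon h sits on the failover timer. If h stays out past the timeout, the operator would schedule a replacement mon (the next id after maxMonId 9, presumably k). A simple way to watch that play out:

  kubectl -n rook-ceph-stage-primary get pods -l app=rook-ceph-mon -o wide -w
  # a new rook-ceph-mon-k pod appearing would indicate the failover fired
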
2019-08-07 08:27:27.143652 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:27.862515 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:28.065048 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:28.480045 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:28.945280 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:29.362240 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:29.363582 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:29.546053 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:29.563245 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:30.379352 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:30.388476 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:30.404936 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:30.554416 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:30.658985 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:31.195827 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:31.891650 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:31.900779 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:32.863428 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:33.194503 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:33.555390 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:33.863819 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:34.427652 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:37.156508 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:37.862490 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:38.084183 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:38.505171 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:38.965795 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:39.362587 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:39.363860 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:39.567352 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:39.578747 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:40.409532 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:40.416639 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:40.430730 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:40.571883 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:40.677192 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:41.219316 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:41.910337 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:41.922361 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:42.878786 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:43.210333 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:43.573994 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:43.862011 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:44.459679 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:47.178749 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:47.872571 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:48.109358 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:48.521292 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:48.988039 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:49.364324 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:49.365665 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:49.585556 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:49.594320 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:50.428314 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:50.447346 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:50.454751 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:50.590043 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:50.706105 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:51.245826 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:51.930006 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:51.945126 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:52.895098 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:53.235369 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:53.592166 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:53.878899 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:54.482930 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:57.196055 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:57.892027 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:58.128541 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:58.541855 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:59.010759 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:59.362629 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:59.380597 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:59.602842 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:27:59.615310 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:00.446859 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:00.473202 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:00.481888 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:00.656595 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:00.730642 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:01.268721 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:01.948407 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:01.977938 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:02.915583 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:03.266535 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:03.606738 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:03.901705 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:04.503173 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:07.211922 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:07.907995 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:08.145462 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:08.560332 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:09.049226 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:09.375007 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:09.400763 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:09.417302 D | op-mon: checking health of mons
2019-08-07 08:28:09.417332 D | op-mon: Acquiring lock for mon orchestration
2019-08-07 08:28:09.417342 D | op-mon: Acquired lock for mon orchestration
2019-08-07 08:28:09.417350 D | op-mon: Checking health for mons in cluster. rook-ceph-stage-primary
2019-08-07 08:28:09.438250 D | op-mon: targeting the mon count 5
2019-08-07 08:28:09.438469 D | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/056213282
2019-08-07 08:28:09.663926 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:09.665700 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:10.478861 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:10.563862 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:10.565247 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:10.676285 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:10.762593 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:11.293728 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:11.610114 I | exec: 2019-08-07 08:28:10.387 7fb4afd59700 1 librados: starting msgr at
2019-08-07 08:28:10.387 7fb4afd59700 1 librados: starting objecter
2019-08-07 08:28:10.387 7fb4afd59700 1 librados: setting wanted keys
2019-08-07 08:28:10.387 7fb4afd59700 1 librados: calling monclient init
2019-08-07 08:28:10.466 7fb4afd59700 1 librados: init done
2019-08-07 08:28:11.480 7fb4afd59700 10 librados: watch_flush enter
2019-08-07 08:28:11.480 7fb4afd59700 10 librados: watch_flush exit
2019-08-07 08:28:11.561 7fb4afd59700 1 librados: shutdown
2019-08-07 08:28:11.610585 D | cephclient: MON STATUS: {Quorum:[0 1 2 4 5] MonMap:{Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0} {Name:j Rank:5 Address:100.79.195.199:6789/0}]}}
2019-08-07 08:28:11.610629 D | op-mon: Mon status: {Quorum:[0 1 2 4 5] MonMap:{Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0} {Name:j Rank:5 Address:100.79.195.199:6789/0}]}}
2019-08-07 08:28:11.610646 D | op-mon: mon b found in quorum
2019-08-07 08:28:11.610656 D | op-mon: mon f found in quorum
2019-08-07 08:28:11.610664 D | op-mon: mon g found in quorum
2019-08-07 08:28:11.610703 D | op-mon: mon h NOT found in quorum. Mon status: {Quorum:[0 1 2 4 5] MonMap:{Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0} {Name:j Rank:5 Address:100.79.195.199:6789/0}]}}
2019-08-07 08:28:11.610720 W | op-mon: mon h not found in quorum, waiting for timeout before failover
2019-08-07 08:28:11.610732 D | op-mon: mon i found in quorum
2019-08-07 08:28:11.610744 D | op-mon: mon j found in quorum
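The quorum check above pairs each monitor in the monmap against the quorum rank list: mon h holds rank 3, and 3 is absent from Quorum [0 1 2 4 5], so h is flagged as the failover candidate once the --mon-out-timeout (10m0s, per the operator flags) expires. A minimal Go sketch of that membership test; the struct shapes and field names are assumptions inferred from the "MON STATUS" line printed above, not Rook's actual types:

package main

import "fmt"

// Shapes inferred from the MON STATUS log line; field names are assumptions.
type Mon struct {
	Name string
	Rank int
}

type MonStatus struct {
	Quorum []int // ranks currently in quorum, e.g. [0 1 2 4 5]
	Mons   []Mon // all monitors known to the monmap
}

// monsNotInQuorum returns the names of monitors whose rank is missing
// from the quorum list -- these are the failover candidates.
func monsNotInQuorum(s MonStatus) []string {
	inQuorum := map[int]bool{}
	for _, r := range s.Quorum {
		inQuorum[r] = true
	}
	var out []string
	for _, m := range s.Mons {
		if !inQuorum[m.Rank] {
			out = append(out, m.Name)
		}
	}
	return out
}

func main() {
	s := MonStatus{
		Quorum: []int{0, 1, 2, 4, 5},
		Mons:   []Mon{{"b", 0}, {"f", 1}, {"g", 2}, {"h", 3}, {"i", 4}, {"j", 5}},
	}
	fmt.Println(monsNotInQuorum(s)) // [h]
}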
2019-08-07 08:28:11.673892 D | op-mon: there are 22 nodes available for 6 mons
2019-08-07 08:28:11.689105 D | op-mon: mon pod on node k8s-worker-101.lxstage.domain.com
2019-08-07 08:28:11.689142 D | op-mon: mon pod on node k8s-worker-102.lxstage.domain.com
2019-08-07 08:28:11.689155 D | op-mon: mon pod on node k8s-worker-103.lxstage.domain.com
2019-08-07 08:28:11.689168 D | op-mon: mon pod on node k8s-worker-01.lxstage.domain.com
2019-08-07 08:28:11.689177 D | op-mon: mon pod on node k8s-worker-00.lxstage.domain.com
2019-08-07 08:28:11.689242 I | op-mon: rebalance: enough nodes available 15 to failover mon c
2019-08-07 08:28:11.689258 I | op-mon: ensuring removal of unhealthy monitor c
2019-08-07 08:28:11.693127 I | op-mon: dead mon rook-ceph-mon-c was already gone
2019-08-07 08:28:11.693159 D | op-mon: removing monitor c
2019-08-07 08:28:11.693340 I | exec: Running command: ceph mon remove c --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/564581913
2019-08-07 08:28:11.965236 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:12.002837 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:12.963225 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:13.283777 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:13.663298 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:13.676642 I | exec: 2019-08-07 08:28:12.569 7fc9d8c88700 1 librados: starting msgr at
2019-08-07 08:28:12.569 7fc9d8c88700 1 librados: starting objecter
2019-08-07 08:28:12.569 7fc9d8c88700 1 librados: setting wanted keys
2019-08-07 08:28:12.569 7fc9d8c88700 1 librados: calling monclient init
2019-08-07 08:28:12.574 7fc9d8c88700 1 librados: init done
mon.c does not exist or has already been removed
2019-08-07 08:28:13.584 7fc9d8c88700 10 librados: watch_flush enter
2019-08-07 08:28:13.584 7fc9d8c88700 10 librados: watch_flush exit
2019-08-07 08:28:13.585 7fc9d8c88700 1 librados: shutdown
2019-08-07 08:28:13.676791 I | op-mon: removed monitor c
2019-08-07 08:28:13.681942 I | op-mon: dead mon service rook-ceph-mon-c was already gone
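The removal sequence for the dead mon c above boils down to shelling out to the same `ceph mon remove` command the operator logs at 08:28:11, then cleaning up the Kubernetes deployment and service. A hedged sketch of just the CLI step, with flags copied verbatim from the logged command line and error handling simplified; note from the output above that "mon.c does not exist or has already been removed" must also count as success, since the step retries:

package main

import (
	"fmt"
	"os/exec"
)

// removeMon shells out to the ceph CLI exactly as logged above;
// conf/keyring paths are taken from the logged command line.
func removeMon(name string) error {
	cmd := exec.Command("ceph", "mon", "remove", name,
		"--connect-timeout=15",
		"--cluster=rook-ceph-stage-primary",
		"--conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config",
		"--keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("ceph mon remove %s failed: %v: %s", name, err, out)
	}
	// "mon.X does not exist or has already been removed" still exits 0,
	// which keeps the removal idempotent across operator restarts.
	return nil
}

func main() {
	if err := removeMon("c"); err != nil {
		fmt.Println(err)
	}
}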
2019-08-07 08:28:13.693282 D | op-mon: updating config map rook-ceph-mon-endpoints that already exists
2019-08-07 08:28:13.700065 I | op-mon: saved mon endpoints to config map map[mapping:{"node":{"b":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"},"f":{"Name":"k8s-worker-102.lxstage.domain.com","Hostname":"k8s-worker-102.lxstage.domain.com","Address":"172.22.254.186"},"g":{"Name":"k8s-worker-103.lxstage.domain.com","Hostname":"k8s-worker-103.lxstage.domain.com","Address":"172.22.254.185"},"h":{"Name":"k8s-worker-104.lxstage.domain.com","Hostname":"k8s-worker-104.lxstage.domain.com","Address":"172.22.254.187"},"i":{"Name":"k8s-worker-01.lxstage.domain.com","Hostname":"k8s-worker-01.lxstage.domain.com","Address":"172.22.254.150"},"j":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"}},"port":{}} data:g=100.66.122.247:6789,h=100.64.242.138:6789,i=100.70.92.237:6789,b=100.67.17.84:6789,j=100.79.195.199:6789,f=100.69.115.5:6789 maxMonId:9]
2019-08-07 08:28:13.709668 D | op-config: Generated and stored config file:
[global]
mon_allow_pool_delete = true
mon_max_pg_per_osd = 1000
osd_pg_bits = 11
osd_pgp_bits = 11
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 100
osd_pool_default_pgp_num = 100
rbd_default_features = 3
fatal_signal_handlers = false
osd pool default pg num = 512
osd pool default pgp num = 512
osd pool default size = 3
osd pool default min size = 2
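Note that the generated file sets each pool default twice: once in underscore form (the operator's built-in defaults, size 1 / pg_num 100) and once in space form (the cluster's override settings, size 3 / pg_num 512). Ceph treats spaces and underscores in option names as equivalent and lets the later assignment in a section win, so the effective values are the space-form ones. A small Go sketch of that normalize-and-overwrite resolution (the parsing here is illustrative, not Ceph's actual config loader):

package main

import (
	"fmt"
	"strings"
)

// normalize maps "osd pool default size" and "osd_pool_default_size"
// to the same key, mirroring Ceph's option-name handling.
func normalize(key string) string {
	return strings.ReplaceAll(strings.TrimSpace(key), " ", "_")
}

func main() {
	lines := []string{
		"osd_pool_default_size = 1",
		"osd pool default size = 3", // later assignment wins
	}
	effective := map[string]string{}
	for _, l := range lines {
		parts := strings.SplitN(l, "=", 2)
		effective[normalize(parts[0])] = strings.TrimSpace(parts[1])
	}
	fmt.Println(effective["osd_pool_default_size"]) // 3
}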
2019-08-07 08:28:13.712879 D | op-config: updating config secret &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-config,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[{ceph.rook.io/v1 CephCluster rook-ceph-stage-primary 76235f05-b792-11e9-9b32-0050568460f6 <nil> 0xc000c66c6c}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string][]byte{},Type:kubernetes.io/rook,StringData:map[string]string{mon_host: [v2:100.67.17.84:3300,v1:100.67.17.84:6789],[v2:100.79.195.199:3300,v1:100.79.195.199:6789],[v2:100.69.115.5:3300,v1:100.69.115.5:6789],[v2:100.66.122.247:3300,v1:100.66.122.247:6789],[v2:100.64.242.138:3300,v1:100.64.242.138:6789],[v2:100.70.92.237:3300,v1:100.70.92.237:6789],mon_initial_members: b,j,f,g,h,i,},}
2019-08-07 08:28:13.718763 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:28:13.719009 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:28:13.719184 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:28:13.719579 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:28:13.719723 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:28:13.719871 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:28:13.719887 D | op-mon: Released lock for mon orchestration
2019-08-07 08:28:13.928028 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:14.522105 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:16.362157 D | op-osd: Checking osd processes status.
2019-08-07 08:28:16.362218 D | op-osd: OSDs with previously detected Down status: map[]
2019-08-07 08:28:16.362418 D | exec: Running command: ceph osd dump --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/611973796
2019-08-07 08:28:17.264002 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:17.963982 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:18.164303 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:18.476876 I | exec: 2019-08-07 08:28:17.188 7f1e3aec5700 1 librados: starting msgr at
2019-08-07 08:28:17.188 7f1e3aec5700 1 librados: starting objecter
2019-08-07 08:28:17.261 7f1e3aec5700 1 librados: setting wanted keys
2019-08-07 08:28:17.261 7f1e3aec5700 1 librados: calling monclient init
2019-08-07 08:28:17.269 7f1e3aec5700 1 librados: init done
2019-08-07 08:28:18.379 7f1e3aec5700 10 librados: watch_flush enter
2019-08-07 08:28:18.379 7f1e3aec5700 10 librados: watch_flush exit
2019-08-07 08:28:18.380 7f1e3aec5700 1 librados: shutdown
2019-08-07 08:28:18.479459 D | op-osd: osd dump &{[{0 1 1} {1 1 1} {2 1 1} {3 1 1} {4 1 1} {5 1 1} {6 1 1} {7 1 1} {8 1 1} {9 1 1} {10 1 1} {11 1 1} {12 1 1} {13 1 1} {14 1 1} {15 1 1} {16 1 1} {17 1 1} {18 1 1} {19 1 1} {20 1 1} {21 1 1} {22 1 1} {23 1 1} {24 1 1} {25 1 1} {26 1 1} {27 1 1} {28 1 1} {29 1 1} {30 1 1} {31 1 1} {32 1 1} {33 1 1} {34 1 1} {35 1 1} {36 1 1} {37 1 1} {38 1 1} {39 1 1} {40 1 1} {41 1 1} {42 1 1} {43 1 1} {44 1 1} {45 1 1} {46 1 1} {47 1 1}]}
2019-08-07 08:28:18.479483 D | op-osd: validating status of osd.0
2019-08-07 08:28:18.479493 D | op-osd: osd.0 is healthy.
2019-08-07 08:28:18.479501 D | op-osd: validating status of osd.1
2019-08-07 08:28:18.479509 D | op-osd: osd.1 is healthy.
2019-08-07 08:28:18.479515 D | op-osd: validating status of osd.2
2019-08-07 08:28:18.479523 D | op-osd: osd.2 is healthy.
2019-08-07 08:28:18.479529 D | op-osd: validating status of osd.3
2019-08-07 08:28:18.479536 D | op-osd: osd.3 is healthy.
2019-08-07 08:28:18.479543 D | op-osd: validating status of osd.4
2019-08-07 08:28:18.479550 D | op-osd: osd.4 is healthy.
2019-08-07 08:28:18.479556 D | op-osd: validating status of osd.5
2019-08-07 08:28:18.479564 D | op-osd: osd.5 is healthy.
2019-08-07 08:28:18.479570 D | op-osd: validating status of osd.6
2019-08-07 08:28:18.479577 D | op-osd: osd.6 is healthy.
2019-08-07 08:28:18.479585 D | op-osd: validating status of osd.7
2019-08-07 08:28:18.479592 D | op-osd: osd.7 is healthy.
2019-08-07 08:28:18.479599 D | op-osd: validating status of osd.8
2019-08-07 08:28:18.479606 D | op-osd: osd.8 is healthy.
2019-08-07 08:28:18.479613 D | op-osd: validating status of osd.9
2019-08-07 08:28:18.479620 D | op-osd: osd.9 is healthy.
2019-08-07 08:28:18.479626 D | op-osd: validating status of osd.10
2019-08-07 08:28:18.479634 D | op-osd: osd.10 is healthy.
2019-08-07 08:28:18.479640 D | op-osd: validating status of osd.11
2019-08-07 08:28:18.479648 D | op-osd: osd.11 is healthy.
2019-08-07 08:28:18.479655 D | op-osd: validating status of osd.12
2019-08-07 08:28:18.479662 D | op-osd: osd.12 is healthy.
2019-08-07 08:28:18.479669 D | op-osd: validating status of osd.13
2019-08-07 08:28:18.479676 D | op-osd: osd.13 is healthy.
2019-08-07 08:28:18.479684 D | op-osd: validating status of osd.14
2019-08-07 08:28:18.479692 D | op-osd: osd.14 is healthy.
2019-08-07 08:28:18.479699 D | op-osd: validating status of osd.15
2019-08-07 08:28:18.479707 D | op-osd: osd.15 is healthy.
2019-08-07 08:28:18.479714 D | op-osd: validating status of osd.16
2019-08-07 08:28:18.479722 D | op-osd: osd.16 is healthy.
2019-08-07 08:28:18.479728 D | op-osd: validating status of osd.17
2019-08-07 08:28:18.479737 D | op-osd: osd.17 is healthy.
2019-08-07 08:28:18.479743 D | op-osd: validating status of osd.18
2019-08-07 08:28:18.479751 D | op-osd: osd.18 is healthy.
2019-08-07 08:28:18.479758 D | op-osd: validating status of osd.19
2019-08-07 08:28:18.479766 D | op-osd: osd.19 is healthy.
2019-08-07 08:28:18.479773 D | op-osd: validating status of osd.20
2019-08-07 08:28:18.479781 D | op-osd: osd.20 is healthy.
2019-08-07 08:28:18.479788 D | op-osd: validating status of osd.21
2019-08-07 08:28:18.479796 D | op-osd: osd.21 is healthy.
2019-08-07 08:28:18.479802 D | op-osd: validating status of osd.22
2019-08-07 08:28:18.479810 D | op-osd: osd.22 is healthy.
2019-08-07 08:28:18.479817 D | op-osd: validating status of osd.23
2019-08-07 08:28:18.479825 D | op-osd: osd.23 is healthy.
2019-08-07 08:28:18.479831 D | op-osd: validating status of osd.24
2019-08-07 08:28:18.479839 D | op-osd: osd.24 is healthy.
2019-08-07 08:28:18.479846 D | op-osd: validating status of osd.25
2019-08-07 08:28:18.479853 D | op-osd: osd.25 is healthy.
2019-08-07 08:28:18.479860 D | op-osd: validating status of osd.26
2019-08-07 08:28:18.479868 D | op-osd: osd.26 is healthy.
2019-08-07 08:28:18.479874 D | op-osd: validating status of osd.27
2019-08-07 08:28:18.479883 D | op-osd: osd.27 is healthy.
2019-08-07 08:28:18.479889 D | op-osd: validating status of osd.28
2019-08-07 08:28:18.479898 D | op-osd: osd.28 is healthy.
2019-08-07 08:28:18.479961 D | op-osd: validating status of osd.29
2019-08-07 08:28:18.479970 D | op-osd: osd.29 is healthy.
2019-08-07 08:28:18.479976 D | op-osd: validating status of osd.30
2019-08-07 08:28:18.479986 D | op-osd: osd.30 is healthy.
2019-08-07 08:28:18.479992 D | op-osd: validating status of osd.31
2019-08-07 08:28:18.480001 D | op-osd: osd.31 is healthy.
2019-08-07 08:28:18.480007 D | op-osd: validating status of osd.32
2019-08-07 08:28:18.480016 D | op-osd: osd.32 is healthy.
2019-08-07 08:28:18.480023 D | op-osd: validating status of osd.33
2019-08-07 08:28:18.480031 D | op-osd: osd.33 is healthy.
2019-08-07 08:28:18.480038 D | op-osd: validating status of osd.34
2019-08-07 08:28:18.480047 D | op-osd: osd.34 is healthy.
2019-08-07 08:28:18.480053 D | op-osd: validating status of osd.35
2019-08-07 08:28:18.480062 D | op-osd: osd.35 is healthy.
2019-08-07 08:28:18.480069 D | op-osd: validating status of osd.36
2019-08-07 08:28:18.480079 D | op-osd: osd.36 is healthy.
2019-08-07 08:28:18.480086 D | op-osd: validating status of osd.37
2019-08-07 08:28:18.480095 D | op-osd: osd.37 is healthy.
2019-08-07 08:28:18.480101 D | op-osd: validating status of osd.38
2019-08-07 08:28:18.480111 D | op-osd: osd.38 is healthy.
2019-08-07 08:28:18.480117 D | op-osd: validating status of osd.39
2019-08-07 08:28:18.480126 D | op-osd: osd.39 is healthy.
2019-08-07 08:28:18.480133 D | op-osd: validating status of osd.40
2019-08-07 08:28:18.480142 D | op-osd: osd.40 is healthy.
2019-08-07 08:28:18.480148 D | op-osd: validating status of osd.41
2019-08-07 08:28:18.480157 D | op-osd: osd.41 is healthy.
2019-08-07 08:28:18.480164 D | op-osd: validating status of osd.42
2019-08-07 08:28:18.480173 D | op-osd: osd.42 is healthy.
2019-08-07 08:28:18.480180 D | op-osd: validating status of osd.43
2019-08-07 08:28:18.480189 D | op-osd: osd.43 is healthy.
2019-08-07 08:28:18.480196 D | op-osd: validating status of osd.44
2019-08-07 08:28:18.480206 D | op-osd: osd.44 is healthy.
2019-08-07 08:28:18.480222 D | op-osd: validating status of osd.45
2019-08-07 08:28:18.480232 D | op-osd: osd.45 is healthy.
2019-08-07 08:28:18.480238 D | op-osd: validating status of osd.46
2019-08-07 08:28:18.480248 D | op-osd: osd.46 is healthy.
2019-08-07 08:28:18.480261 D | op-osd: validating status of osd.47
2019-08-07 08:28:18.480271 D | op-osd: osd.47 is healthy.
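Each {id 1 1} triple in the osd dump above is one OSD with its up and in flags, and the per-OSD validation that follows is just a walk over that list, remembering anything not up for the next check (the "OSDs with previously detected Down status" map seen earlier). A sketch with field names assumed from the printed triples:

package main

import "fmt"

// OSDStatus mirrors the {ID Up In} triples in the "osd dump" line above;
// Up/In print as 1 (healthy) or 0.
type OSDStatus struct {
	ID int
	Up int
	In int
}

// validate reports healthy OSDs and collects down ones, which the
// operator would carry into the next status check.
func validate(osds []OSDStatus) (down []int) {
	for _, o := range osds {
		if o.Up != 1 {
			down = append(down, o.ID)
			continue
		}
		fmt.Printf("osd.%d is healthy.\n", o.ID)
	}
	return down
}

func main() {
	fmt.Println(validate([]OSDStatus{{0, 1, 1}, {1, 1, 1}, {2, 0, 1}})) // [2]
}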
2019-08-07 08:28:18.576734 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:19.070903 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:19.390981 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:19.414950 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:19.640343 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:19.650146 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:20.495927 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:20.526403 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:20.537395 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:20.763300 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:20.771159 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:21.363999 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:21.987199 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:22.062422 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:22.879656 D | op-cluster: checking health of cluster
2019-08-07 08:28:22.880032 D | exec: Running command: ceph status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/087198131
2019-08-07 08:28:22.945139 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:23.362174 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:23.663120 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:23.963928 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:24.563801 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:25.272333 I | exec: 2019-08-07 08:28:23.685 7f110342c700 1 librados: starting msgr at
2019-08-07 08:28:23.685 7f110342c700 1 librados: starting objecter
2019-08-07 08:28:23.761 7f110342c700 1 librados: setting wanted keys
2019-08-07 08:28:23.761 7f110342c700 1 librados: calling monclient init
2019-08-07 08:28:23.770 7f110342c700 1 librados: init done
2019-08-07 08:28:25.168 7f110342c700 10 librados: watch_flush enter
2019-08-07 08:28:25.168 7f110342c700 10 librados: watch_flush exit
2019-08-07 08:28:25.170 7f110342c700 1 librados: shutdown
2019-08-07 08:28:25.273729 D | op-cluster: Cluster status: {Health:{Status:HEALTH_WARN Checks:map[MON_DOWN:{Severity:HEALTH_WARN Summary:{Message:1/6 mons down, quorum b,f,g,i,j}}]} FSID:7dd854f1-2892-4201-ab69-d4797f12ac50 ElectionEpoch:544 Quorum:[0 1 2 4 5] QuorumNames:[b f g i j] MonMap:{Epoch:8 FSID:7dd854f1-2892-4201-ab69-d4797f12ac50 CreatedTime:2019-08-05 15:05:49.660802 ModifiedTime:2019-08-07 08:24:59.011086 Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0} {Name:j Rank:5 Address:100.79.195.199:6789/0}]} OsdMap:{OsdMap:{Epoch:163 NumOsd:48 NumUpOsd:48 NumInOsd:48 Full:false NearFull:false NumRemappedPgs:0}} PgMap:{PgsByState:[{StateName:active+clean Count:512}] Version:0 NumPgs:512 DataBytes:125898804 UsedBytes:52305739776 AvailableBytes:51126524559360 TotalBytes:51178830299136 ReadBps:0 WriteBps:0 ReadOps:0 WriteOps:0 RecoveryBps:0 RecoveryObjectsPerSec:0 RecoveryKeysPerSec:0 CacheFlushBps:0 CacheEvictBps:0 CachePromoteBps:0} MgrMap:{Epoch:118 ActiveGID:534391 ActiveName:a ActiveAddr:100.192.28.144:6801/1 Available:true Standbys:[]}}
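The HEALTH_WARN above carries a single check, MON_DOWN, with the summary "1/6 mons down, quorum b,f,g,i,j" -- consistent with mon h still being in the monmap but out of quorum. A sketch of decoding just the health portion of `ceph status --format json` on Nautilus; the struct is trimmed to the fields this log actually prints:

package main

import (
	"encoding/json"
	"fmt"
)

// Health matches the health block of `ceph status --format json`.
type Health struct {
	Status string `json:"status"`
	Checks map[string]struct {
		Severity string `json:"severity"`
		Summary  struct {
			Message string `json:"message"`
		} `json:"summary"`
	} `json:"checks"`
}

func main() {
	raw := []byte(`{"status":"HEALTH_WARN","checks":{"MON_DOWN":{"severity":"HEALTH_WARN","summary":{"message":"1/6 mons down, quorum b,f,g,i,j"}}}}`)
	var h Health
	if err := json.Unmarshal(raw, &h); err != nil {
		panic(err)
	}
	for name, c := range h.Checks {
		fmt.Printf("%s: %s (%s)\n", name, c.Summary.Message, c.Severity)
	}
}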
2019-08-07 08:28:25.288688 D | op-cluster: update event for cluster rook-ceph-stage-primary
2019-08-07 08:28:25.288902 D | op-cluster: update event for cluster rook-ceph-stage-primary is not supported
2019-08-07 08:28:27.251545 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:27.953058 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:28.186056 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:28.596483 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:29.091323 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:29.411265 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:29.439968 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:29.659176 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:29.666063 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:30.513811 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:30.549596 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:30.562857 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:30.718474 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:30.803644 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:31.338788 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:32.009459 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:32.041712 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:32.963737 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:33.315816 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:33.661283 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:33.980864 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:34.559543 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:37.265783 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:37.966893 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:38.206901 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:38.618493 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:39.115954 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:39.427882 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:39.452622 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:39.679831 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:39.687022 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:40.531235 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:40.575327 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:40.598760 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:40.732851 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:40.823672 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:41.362512 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:42.033354 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:42.060995 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:42.982380 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:43.362207 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:43.678274 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:44.006144 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:44.579439 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:47.287820 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:47.987314 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:48.224298 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:48.636789 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:49.139655 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:49.450204 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:49.467701 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:49.698247 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:49.705255 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:50.559632 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:50.605955 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:50.623469 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:50.758345 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:50.845370 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:51.376073 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:52.049752 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:52.085827 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:52.996302 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:53.362485 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:53.693945 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:54.030639 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:54.604094 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:28:55.176512 I | operator: shutdown signal received, exiting...
Unable to retrieve container logs for docker://84b78541a4d56cfec16eaa248e8c5e84a41e193d5b4513239e952ed6f85d8923
2019-08-07 08:34:13.366360 I | rookcmd: starting Rook v1.0.4 with arguments '/usr/local/bin/rook ceph operator'
2019-08-07 08:34:13.366518 I | rookcmd: flag values: --alsologtostderr=false, --csi-attacher-image=quay.io/k8scsi/csi-attacher:v1.0.1, --csi-cephfs-image=quay.io/cephcsi/cephfsplugin:v1.0.0, --csi-cephfs-plugin-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin.yaml, --csi-cephfs-provisioner-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin-provisioner.yaml, --csi-enable-cephfs=false, --csi-enable-rbd=false, --csi-provisioner-image=quay.io/k8scsi/csi-provisioner:v1.0.1, --csi-rbd-image=quay.io/cephcsi/rbdplugin:v1.0.0, --csi-rbd-plugin-template-path=/etc/ceph-csi/rbd/csi-rbdplugin.yaml, --csi-rbd-provisioner-template-path=/etc/ceph-csi/rbd/csi-rbdplugin-provisioner.yaml, --csi-registrar-image=quay.io/k8scsi/csi-node-driver-registrar:v1.0.2, --csi-snapshotter-image=quay.io/k8scsi/csi-snapshotter:v1.0.1, --help=false, --log-flush-frequency=5s, --log-level=DEBUG, --log_backtrace_at=:0, --log_dir=, --log_file=, --logtostderr=true, --mon-healthcheck-interval=45s, --mon-out-timeout=10m0s, --skip_headers=false, --stderrthreshold=2, --v=0, --vmodule=
2019-08-07 08:34:13.371658 I | cephcmd: starting operator
2019-08-07 08:34:13.572143 I | op-agent: getting flexvolume dir path from FLEXVOLUME_DIR_PATH env var
2019-08-07 08:34:13.572231 I | op-agent: discovered flexvolume dir path from source env var. value: /var/lib/kubelet/volumeplugins
2019-08-07 08:34:13.572292 W | op-agent: Invalid ROOK_ENABLE_FSGROUP value "". Defaulting to "true".
2019-08-07 08:34:13.583703 I | op-agent: rook-ceph-agent daemonset already exists, updating ...
2019-08-07 08:34:13.600096 I | op-discover: rook-discover daemonset already exists, updating ...
2019-08-07 08:34:13.606229 I | operator: rook-provisioner ceph.rook.io/block started using ceph.rook.io flex vendor dir
I0807 08:34:13.606310 8 leaderelection.go:217] attempting to acquire leader lease rook-ceph-stage-primary/ceph.rook.io-block...
2019-08-07 08:34:13.606667 I | operator: rook-provisioner rook.io/block started using rook.io flex vendor dir
2019-08-07 08:34:13.606686 I | operator: Watching the current namespace for a cluster CRD
2019-08-07 08:34:13.606695 I | op-cluster: start watching clusters in all namespaces
2019-08-07 08:34:13.606725 I | op-cluster: Enabling hotplug orchestration: ROOK_DISABLE_DEVICE_HOTPLUG=
I0807 08:34:13.606752 8 leaderelection.go:217] attempting to acquire leader lease rook-ceph-stage-primary/rook.io-block...
2019-08-07 08:34:13.963604 I | op-cluster: start watching legacy rook clusters in all namespaces
2019-08-07 08:34:13.965775 I | op-cluster: starting cluster in namespace rook-ceph-stage-primary
2019-08-07 08:34:13.969320 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969363 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969380 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969393 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969405 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969418 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969430 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969442 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969459 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969472 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969486 D | op-cluster: Skipping -> Node is not tolerable for cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969504 D | op-cluster: Skipping -> Node is not tolerable for cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969518 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969542 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969556 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969568 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969580 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969598 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969610 D | op-cluster: Skipping -> Node is not tolerable for cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969621 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969633 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:13.969647 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:34:14.005623 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:14.337635 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:14.853020 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:15.353929 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:17.893445 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:18.619118 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:18.867773 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:19.321649 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:19.972102 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:19.984711 I | op-k8sutil: waiting for job rook-ceph-detect-version to complete...
2019-08-07 08:34:20.164142 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:20.165278 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:20.336575 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:20.416008 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:21.315719 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:21.486182 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:21.538574 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:21.568544 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:21.614242 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:22.162235 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:22.725499 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:22.811715 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:23.562538 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:24.021169 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:24.363240 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:24.881827 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:25.032255 I | op-cluster: Detected ceph image version: 14.2.1 nautilus
2019-08-07 08:34:25.032282 I | op-cluster: CephCluster rook-ceph-stage-primary status: Creating
2019-08-07 08:34:25.059170 D | op-mon: Acquiring lock for mon orchestration
2019-08-07 08:34:25.059195 D | op-mon: Acquired lock for mon orchestration
2019-08-07 08:34:25.059205 I | op-mon: start running mons
2019-08-07 08:34:25.059213 D | op-mon: establishing ceph cluster info
2019-08-07 08:34:25.064864 D | op-mon: found existing monitor secrets for cluster rook-ceph-stage-primary
2019-08-07 08:34:25.069869 I | op-mon: parsing mon endpoints: j=100.79.195.199:6789,f=100.69.115.5:6789,g=100.66.122.247:6789,h=100.64.242.138:6789,i=100.70.92.237:6789,b=100.67.17.84:6789
2019-08-07 08:34:25.070099 I | op-mon: loaded: maxMonID=9, mons=map[f:0xc000f16a80 g:0xc000f16ae0 h:0xc000f16b20 i:0xc000f16b60 b:0xc000f16ba0 j:0xc000f16a00], mapping=&{Node:map[b:0xc0007988a0 f:0xc0007989c0 g:0xc0007989f0 h:0xc000798a20 i:0xc000798a50 j:0xc000798a80] Port:map[]}
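maxMonID=9 lines up with the newest mon name j: Rook derives mon names from a monotonically increasing index (a=0, b=1, ... j=9), so the next failover mon would be named k. A sketch of that mapping, assuming a simple letter encoding; Rook extends the scheme past z, which this sketch does not attempt:

package main

import "fmt"

// indexToName turns a mon index into its letter name (a=0, b=1, ...).
// Single-letter form only.
func indexToName(i int) string {
	return string(rune('a' + i%26))
}

func main() {
	fmt.Println(indexToName(9))  // j -- matches maxMonID=9 above
	fmt.Println(indexToName(10)) // k -- what the next failover mon would get
}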
2019-08-07 08:34:25.080482 D | op-mon: updating config map rook-ceph-mon-endpoints that already exists
2019-08-07 08:34:25.084473 I | op-mon: saved mon endpoints to config map map[data:j=100.79.195.199:6789,f=100.69.115.5:6789,g=100.66.122.247:6789,h=100.64.242.138:6789,i=100.70.92.237:6789,b=100.67.17.84:6789 maxMonId:9 mapping:{"node":{"b":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"},"f":{"Name":"k8s-worker-102.lxstage.domain.com","Hostname":"k8s-worker-102.lxstage.domain.com","Address":"172.22.254.186"},"g":{"Name":"k8s-worker-103.lxstage.domain.com","Hostname":"k8s-worker-103.lxstage.domain.com","Address":"172.22.254.185"},"h":{"Name":"k8s-worker-104.lxstage.domain.com","Hostname":"k8s-worker-104.lxstage.domain.com","Address":"172.22.254.187"},"i":{"Name":"k8s-worker-01.lxstage.domain.com","Hostname":"k8s-worker-01.lxstage.domain.com","Address":"172.22.254.150"},"j":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"}},"port":{}}]
2019-08-07 08:34:25.097891 D | op-config: Generated and stored config file:
[global]
mon_allow_pool_delete = true
mon_max_pg_per_osd = 1000
osd_pg_bits = 11
osd_pgp_bits = 11
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 100
osd_pool_default_pgp_num = 100
rbd_default_features = 3
fatal_signal_handlers = false
osd pool default pg num = 512
osd pool default pgp num = 512
osd pool default size = 3
osd pool default min size = 2
2019-08-07 08:34:25.196333 D | op-config: updating config secret &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-config,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[{ceph.rook.io/v1 CephCluster rook-ceph-stage-primary 76235f05-b792-11e9-9b32-0050568460f6 <nil> 0xc000be9e2c}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string][]byte{},Type:kubernetes.io/rook,StringData:map[string]string{mon_host: [v2:100.66.122.247:3300,v1:100.66.122.247:6789],[v2:100.64.242.138:3300,v1:100.64.242.138:6789],[v2:100.70.92.237:3300,v1:100.70.92.237:6789],[v2:100.67.17.84:3300,v1:100.67.17.84:6789],[v2:100.79.195.199:3300,v1:100.79.195.199:6789],[v2:100.69.115.5:3300,v1:100.69.115.5:6789],mon_initial_members: g,h,i,b,j,f,},}
2019-08-07 08:34:25.379711 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:25.396752 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:34:25.396901 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:34:25.397235 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:34:25.595353 D | op-cfg-keyring: updating secret for rook-ceph-mons-keyring
2019-08-07 08:34:25.995768 D | op-cfg-keyring: updating secret for rook-ceph-admin-keyring
2019-08-07 08:34:26.579976 I | op-mon: targeting the mon count 5
2019-08-07 08:34:26.672260 D | op-mon: there are 22 nodes available for 6 mons
2019-08-07 08:34:26.805829 D | op-mon: mon pod on node k8s-worker-101.lxstage.domain.com
2019-08-07 08:34:26.805860 D | op-mon: mon pod on node k8s-worker-102.lxstage.domain.com
2019-08-07 08:34:26.805873 D | op-mon: mon pod on node k8s-worker-103.lxstage.domain.com
2019-08-07 08:34:26.805884 D | op-mon: mon pod on node k8s-worker-104.lxstage.domain.com
2019-08-07 08:34:26.805895 D | op-mon: mon pod on node k8s-worker-01.lxstage.domain.com
2019-08-07 08:34:26.805970 I | op-mon: Found 15 running nodes without mons
2019-08-07 08:34:26.805982 D | op-mon: mon j already assigned to a node, no need to assign
2019-08-07 08:34:26.805990 D | op-mon: mon f already assigned to a node, no need to assign
2019-08-07 08:34:26.805998 D | op-mon: mon g already assigned to a node, no need to assign
2019-08-07 08:34:26.806007 D | op-mon: mon h already assigned to a node, no need to assign
2019-08-07 08:34:26.806017 D | op-mon: mon i already assigned to a node, no need to assign
2019-08-07 08:34:26.806027 D | op-mon: mon b already assigned to a node, no need to assign
2019-08-07 08:34:26.806034 D | op-mon: mons have been assigned to nodes
2019-08-07 08:34:26.806041 I | op-mon: checking for basic quorum with existing mons
2019-08-07 08:34:26.806060 D | op-k8sutil: creating service rook-ceph-mon-j
2019-08-07 08:34:27.225427 D | op-k8sutil: updating service rook-ceph-mon-j
2019-08-07 08:34:27.797348 I | op-mon: mon j endpoint are [v2:100.79.195.199:3300,v1:100.79.195.199:6789]
2019-08-07 08:34:27.797387 D | op-k8sutil: creating service rook-ceph-mon-f
2019-08-07 08:34:27.908698 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:28.031093 D | op-k8sutil: updating service rook-ceph-mon-f
2019-08-07 08:34:28.398175 I | op-mon: mon f endpoint are [v2:100.69.115.5:3300,v1:100.69.115.5:6789]
2019-08-07 08:34:28.398217 D | op-k8sutil: creating service rook-ceph-mon-g
2019-08-07 08:34:28.627048 D | op-k8sutil: updating service rook-ceph-mon-g
2019-08-07 08:34:28.662821 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:28.889646 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:28.998295 I | op-mon: mon g endpoint are [v2:100.66.122.247:3300,v1:100.66.122.247:6789]
2019-08-07 08:34:28.998331 D | op-k8sutil: creating service rook-ceph-mon-h
2019-08-07 08:34:29.226494 D | op-k8sutil: updating service rook-ceph-mon-h
2019-08-07 08:34:29.341173 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:29.599322 I | op-mon: mon h endpoint are [v2:100.64.242.138:3300,v1:100.64.242.138:6789]
2019-08-07 08:34:29.599358 D | op-k8sutil: creating service rook-ceph-mon-i
2019-08-07 08:34:29.825624 D | op-k8sutil: updating service rook-ceph-mon-i
2019-08-07 08:34:29.991692 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:30.129225 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:30.135097 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:30.359348 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:30.398073 I | op-mon: mon i endpoint are [v2:100.70.92.237:3300,v1:100.70.92.237:6789]
2019-08-07 08:34:30.398108 D | op-k8sutil: creating service rook-ceph-mon-b
2019-08-07 08:34:30.437198 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
I0807 08:34:30.597114 8 leaderelection.go:227] successfully acquired lease rook-ceph-stage-primary/rook.io-block
I0807 08:34:30.597230 8 controller.go:769] Starting provisioner controller rook.io/block_rook-ceph-operator-6b8b758497-tvpx8_22bd3280-b8ee-11e9-833c-261ab998204b!
I0807 08:34:30.597302 8 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph-stage-primary", Name:"rook.io-block", UID:"782fdbfc-b792-11e9-9b32-0050568460f6", APIVersion:"v1", ResourceVersion:"309445825", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-6b8b758497-tvpx8_22bd3280-b8ee-11e9-833c-261ab998204b became leader
2019-08-07 08:34:30.827289 D | op-k8sutil: updating service rook-ceph-mon-b
2019-08-07 08:34:31.332377 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:31.508671 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:31.563851 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:31.597604 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:31.631205 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
I0807 08:34:31.997514 8 controller.go:818] Started provisioner controller rook.io/block_rook-ceph-operator-6b8b758497-tvpx8_22bd3280-b8ee-11e9-833c-261ab998204b!
2019-08-07 08:34:32.153444 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
I0807 08:34:32.200038 8 leaderelection.go:227] successfully acquired lease rook-ceph-stage-primary/ceph.rook.io-block
I0807 08:34:32.200158 8 controller.go:769] Starting provisioner controller ceph.rook.io/block_rook-ceph-operator-6b8b758497-tvpx8_22bd12bb-b8ee-11e9-833c-261ab998204b!
I0807 08:34:32.200171 8 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph-stage-primary", Name:"ceph.rook.io-block", UID:"782fe0db-b792-11e9-9b32-0050568460f6", APIVersion:"v1", ResourceVersion:"309445844", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-6b8b758497-tvpx8_22bd12bb-b8ee-11e9-833c-261ab998204b became leader
2019-08-07 08:34:32.598175 I | op-mon: mon b endpoint are [v2:100.67.17.84:3300,v1:100.67.17.84:6789]
2019-08-07 08:34:32.748238 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:32.830122 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
I0807 08:34:33.400508 8 controller.go:818] Started provisioner controller ceph.rook.io/block_rook-ceph-operator-6b8b758497-tvpx8_22bd12bb-b8ee-11e9-833c-261ab998204b!
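For reference, the 'attempting to acquire' / 'successfully acquired lease' pairs above come from client-go leader election: each provisioner takes a lease backed by the Endpoints object named in the event (rook.io-block, ceph.rook.io-block) before starting its controller. A minimal sketch of that pattern, assuming client-go APIs of the v11 (2019) era; the lock name, identity, and timings here are illustrative:

package example

import (
    "context"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/leaderelection"
    "k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLeaderElection blocks until this replica holds the lease, then
// runs start(); only one replica at a time runs the controller.
func runWithLeaderElection(cs kubernetes.Interface, ns, lockName, identity string, start func(context.Context)) {
    lock := &resourcelock.EndpointsLock{
        EndpointsMeta: metav1.ObjectMeta{Namespace: ns, Name: lockName}, // e.g. "rook.io-block"
        Client:        cs.CoreV1(),
        LockConfig:    resourcelock.ResourceLockConfig{Identity: identity},
    }
    leaderelection.RunOrDie(context.TODO(), leaderelection.LeaderElectionConfig{
        Lock:          lock,
        LeaseDuration: 15 * time.Second,
        RenewDeadline: 10 * time.Second,
        RetryPeriod:   2 * time.Second,
        Callbacks: leaderelection.LeaderCallbacks{
            OnStartedLeading: start,               // "successfully acquired lease"
            OnStoppedLeading: func() { /* lease lost; stop work */ },
        },
    })
}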
I0807 08:34:33.400702 8 controller.go:1196] provision "admin-d0277887/datadir-zk-1" class "default": started
2019-08-07 08:34:33.578412 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:33.598783 D | op-mon: updating config map rook-ceph-mon-endpoints that already exists
I0807 08:34:34.000204 8 controller.go:1205] provision "admin-d0277887/datadir-zk-1" class "default": persistentvolume "pvc-95509a25-9be6-11e9-9a2e-0050568460f6" already exists, skipping
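The skip above works because the external-provisioner library derives the PersistentVolume name deterministically from the claim UID (pvc-<uid>), so a retried provision can simply check whether the PV already exists. A small sketch of that idempotency check, assuming the non-context client-go Get signature of that era (newer releases add a context argument):

package example

import (
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
)

// pvExistsForClaim reports whether the deterministically named PV for a
// claim is already present, making provisioning safely retryable.
func pvExistsForClaim(cs kubernetes.Interface, claim *corev1.PersistentVolumeClaim) (bool, error) {
    pvName := "pvc-" + string(claim.UID) // e.g. pvc-95509a25-9be6-11e9-9a2e-0050568460f6
    _, err := cs.CoreV1().PersistentVolumes().Get(pvName, metav1.GetOptions{})
    if apierrors.IsNotFound(err) {
        return false, nil // not provisioned yet; proceed
    }
    if err != nil {
        return false, err
    }
    return true, nil // "already exists, skipping"
}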
2019-08-07 08:34:34.040560 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:34.196717 I | op-mon: saved mon endpoints to config map map[data:i=100.70.92.237:6789,b=100.67.17.84:6789,j=100.79.195.199:6789,f=100.69.115.5:6789,g=100.66.122.247:6789,h=100.64.242.138:6789 maxMonId:9 mapping:{"node":{"b":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"},"f":{"Name":"k8s-worker-102.lxstage.domain.com","Hostname":"k8s-worker-102.lxstage.domain.com","Address":"172.22.254.186"},"g":{"Name":"k8s-worker-103.lxstage.domain.com","Hostname":"k8s-worker-103.lxstage.domain.com","Address":"172.22.254.185"},"h":{"Name":"k8s-worker-104.lxstage.domain.com","Hostname":"k8s-worker-104.lxstage.domain.com","Address":"172.22.254.187"},"i":{"Name":"k8s-worker-01.lxstage.domain.com","Hostname":"k8s-worker-01.lxstage.domain.com","Address":"172.22.254.150"},"j":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"}},"port":{}}]
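The data field in that config map is a flat comma-separated list of name=host:port pairs. A sketch of parsing it back into a map, using only standard-library string handling (no Rook APIs assumed):

package example

import "strings"

// parseMonEndpoints turns "i=100.70.92.237:6789,b=100.67.17.84:6789,..."
// into {"i": "100.70.92.237:6789", "b": "100.67.17.84:6789", ...}.
func parseMonEndpoints(data string) map[string]string {
    mons := map[string]string{}
    for _, pair := range strings.Split(data, ",") {
        kv := strings.SplitN(pair, "=", 2)
        if len(kv) == 2 {
            mons[kv[0]] = kv[1]
        }
    }
    return mons
}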
2019-08-07 08:34:34.380025 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:34.906133 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:34.995715 D | op-config: Generated and stored config file:
[global]
mon_allow_pool_delete = true
mon_max_pg_per_osd = 1000
osd_pg_bits = 11
osd_pgp_bits = 11
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 100
osd_pool_default_pgp_num = 100
rbd_default_features = 3
fatal_signal_handlers = false
osd pool default pg num = 512
osd pool default pgp num = 512
osd pool default size = 3
osd pool default min size = 2
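Note the apparent duplicates in the generated file: Ceph treats spaces and underscores in option names as interchangeable, so "osd pool default pg num = 512" overrides the "osd_pool_default_pg_num = 100" default written a few lines earlier. A sketch of that normalization, assuming simple last-value-wins semantics within the section:

package example

import "strings"

// normalizeOption maps "osd pool default pg num" and
// "osd_pool_default_pg_num" to the same canonical key.
func normalizeOption(name string) string {
    return strings.ReplaceAll(strings.TrimSpace(name), " ", "_")
}

// mergeOptions applies last-wins semantics over normalized keys, the way
// the duplicated entries above resolve to 512/3/2.
func mergeOptions(pairs [][2]string) map[string]string {
    merged := map[string]string{}
    for _, p := range pairs {
        merged[normalizeOption(p[0])] = p[1]
    }
    return merged
}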
2019-08-07 08:34:35.394867 D | op-config: updating config secret &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-config,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[{ceph.rook.io/v1 CephCluster rook-ceph-stage-primary 76235f05-b792-11e9-9b32-0050568460f6 <nil> 0xc000be9e2c}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string][]byte{},Type:kubernetes.io/rook,StringData:map[string]string{mon_host: [v2:100.67.17.84:3300,v1:100.67.17.84:6789],[v2:100.79.195.199:3300,v1:100.79.195.199:6789],[v2:100.69.115.5:3300,v1:100.69.115.5:6789],[v2:100.66.122.247:3300,v1:100.66.122.247:6789],[v2:100.64.242.138:3300,v1:100.64.242.138:6789],[v2:100.70.92.237:3300,v1:100.70.92.237:6789],mon_initial_members: b,j,f,g,h,i,},}
2019-08-07 08:34:35.407860 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:35.597340 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:34:35.597525 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:34:35.597718 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:34:35.662214 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:34:35.662465 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:34:35.662649 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:34:35.662682 D | op-mon: monConfig: &{rook-ceph-mon-j j 100.79.195.199 6789 0xc00015ea00}
2019-08-07 08:34:35.662869 D | op-mon: Starting mon: rook-ceph-mon-j
2019-08-07 08:34:35.680837 D | op-mon: monConfig: &{rook-ceph-mon-f f 100.69.115.5 6789 0xc00015fef0}
2019-08-07 08:34:35.681040 D | op-mon: Starting mon: rook-ceph-mon-f
2019-08-07 08:34:35.709082 I | op-mon: deployment for mon rook-ceph-mon-f already exists. updating if needed
2019-08-07 08:34:35.713065 I | op-k8sutil: updating deployment rook-ceph-mon-f
2019-08-07 08:34:35.760694 D | op-k8sutil: deployment rook-ceph-mon-f status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-07 08:16:34 +0000 UTC LastTransitionTime:2019-08-07 08:16:34 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:16:34 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-f-7966c549fb" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:34:37.765715 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-f
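The update-then-wait cycle above polls the deployment status until the controller has observed the new generation and no replicas are unavailable (the timestamps show roughly two-second polls). A hedged sketch of such a readiness loop, using era-appropriate client-go calls; the retry count is illustrative:

package example

import (
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// waitForDeployment polls until the latest generation is observed and all
// replicas are updated and available, or gives up after ~60s.
func waitForDeployment(cs kubernetes.Interface, ns, name string) error {
    for attempt := 0; attempt < 30; attempt++ {
        d, err := cs.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if d.Status.ObservedGeneration >= d.Generation &&
            d.Status.UpdatedReplicas == d.Status.Replicas &&
            d.Status.UnavailableReplicas == 0 {
            return nil // "finished waiting for updated deployment"
        }
        time.Sleep(2 * time.Second)
    }
    return fmt.Errorf("timed out waiting for deployment %s/%s", ns, name)
}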
2019-08-07 08:34:37.765765 D | op-mon: monConfig: &{rook-ceph-mon-g g 100.66.122.247 6789 0xc0005245a0}
2019-08-07 08:34:37.765944 D | op-mon: Starting mon: rook-ceph-mon-g
2019-08-07 08:34:37.776397 I | op-mon: deployment for mon rook-ceph-mon-g already exists. updating if needed
2019-08-07 08:34:37.781258 I | op-k8sutil: updating deployment rook-ceph-mon-g
2019-08-07 08:34:37.793348 D | op-k8sutil: deployment rook-ceph-mon-g status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-07 08:16:54 +0000 UTC LastTransitionTime:2019-08-07 08:16:54 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:16:54 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-g-6b49f6c769" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:34:37.923616 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:38.655224 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:38.909299 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:39.369808 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:39.798278 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-g
2019-08-07 08:34:39.798327 D | op-mon: monConfig: &{rook-ceph-mon-h h 100.64.242.138 6789 0xc0005245f0}
2019-08-07 08:34:39.798497 D | op-mon: Starting mon: rook-ceph-mon-h
2019-08-07 08:34:39.871206 I | op-mon: deployment for mon rook-ceph-mon-h already exists. updating if needed
2019-08-07 08:34:39.876481 I | op-k8sutil: updating deployment rook-ceph-mon-h
2019-08-07 08:34:39.892773 D | op-k8sutil: deployment rook-ceph-mon-h status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-07 08:29:57 +0000 UTC LastTransitionTime:2019-08-07 08:29:57 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:29:57 +0000 UTC LastTransitionTime:2019-08-07 08:29:53 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-h-858f958" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:34:40.016454 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:40.157177 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:40.162203 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:40.384450 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:40.455716 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:41.350529 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:41.529331 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:41.597812 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:41.666733 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:41.764185 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:41.898271 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-h
2019-08-07 08:34:41.898317 D | op-mon: monConfig: &{rook-ceph-mon-i i 100.70.92.237 6789 0xc000524690}
2019-08-07 08:34:41.898480 D | op-mon: Starting mon: rook-ceph-mon-i
2019-08-07 08:34:41.910877 I | op-mon: deployment for mon rook-ceph-mon-i already exists. updating if needed
2019-08-07 08:34:41.914851 I | op-k8sutil: updating deployment rook-ceph-mon-i
2019-08-07 08:34:41.928567 D | op-k8sutil: deployment rook-ceph-mon-i status={ObservedGeneration:2 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-07 08:23:31 +0000 UTC LastTransitionTime:2019-08-07 08:23:31 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:23:31 +0000 UTC LastTransitionTime:2019-08-07 08:23:22 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-i-c64d84df8" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:34:42.174571 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:42.766607 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:42.862194 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:43.596612 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:43.933970 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-i
2019-08-07 08:34:43.934017 D | op-mon: monConfig: &{rook-ceph-mon-b b 100.67.17.84 6789 0xc000524a50}
2019-08-07 08:34:43.934181 D | op-mon: Starting mon: rook-ceph-mon-b
2019-08-07 08:34:43.962323 I | op-mon: deployment for mon rook-ceph-mon-b already exists. updating if needed
2019-08-07 08:34:43.967129 I | op-k8sutil: updating deployment rook-ceph-mon-b
2019-08-07 08:34:43.981215 D | op-k8sutil: deployment rook-ceph-mon-b status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-07 08:16:05 +0000 UTC LastTransitionTime:2019-08-07 08:16:05 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:16:05 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-b-5df554cc8c" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:34:44.054251 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:44.407341 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:44.933403 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:45.427362 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:45.985789 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-b
2019-08-07 08:34:45.985823 I | op-mon: mons created: 6
2019-08-07 08:34:45.985843 I | op-mon: waiting for mon quorum with [j f g h i b]
2019-08-07 08:34:46.098137 I | op-mon: mons running: [j f g h i b]
2019-08-07 08:34:46.098510 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/648164830
2019-08-07 08:34:47.964024 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:47.972919 I | exec: 2019-08-07 08:34:46.787 7f7b066d5700 1 librados: starting msgr at
2019-08-07 08:34:46.787 7f7b066d5700 1 librados: starting objecter
2019-08-07 08:34:46.787 7f7b066d5700 1 librados: setting wanted keys
2019-08-07 08:34:46.787 7f7b066d5700 1 librados: calling monclient init
2019-08-07 08:34:46.867 7f7b066d5700 1 librados: init done
2019-08-07 08:34:47.900 7f7b066d5700 10 librados: watch_flush enter
2019-08-07 08:34:47.900 7f7b066d5700 10 librados: watch_flush exit
2019-08-07 08:34:47.901 7f7b066d5700 1 librados: shutdown
2019-08-07 08:34:47.973412 D | cephclient: MON STATUS: {Quorum:[0 1 2 3 4] MonMap:{Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0} {Name:j Rank:5 Address:100.79.195.199:6789/0}]}}
2019-08-07 08:34:47.973440 I | op-mon: Monitors in quorum: [b f g h i]
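In the mon_status payload, quorum is reported as ranks and resolved against the monmap: ranks [0 1 2 3 4] map to [b f g h i], with j (rank 5) not yet joined. A sketch of that decoding, using only the JSON fields visible in the MON STATUS line above:

package example

import "encoding/json"

type monStatus struct {
    Quorum []int `json:"quorum"`
    MonMap struct {
        Mons []struct {
            Name string `json:"name"`
            Rank int    `json:"rank"`
        } `json:"mons"`
    } `json:"monmap"`
}

// monsInQuorum resolves quorum ranks to monitor names.
func monsInQuorum(raw []byte) ([]string, error) {
    var status monStatus
    if err := json.Unmarshal(raw, &status); err != nil {
        return nil, err
    }
    byRank := map[int]string{}
    for _, m := range status.MonMap.Mons {
        byRank[m.Rank] = m.Name
    }
    names := make([]string, 0, len(status.Quorum))
    for _, rank := range status.Quorum {
        names = append(names, byRank[rank])
    }
    return names, nil
}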
2019-08-07 08:34:47.973455 I | exec: Running command: ceph version
2019-08-07 08:34:48.670056 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:48.927731 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:49.463856 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:50.063627 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:50.174712 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:50.263572 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:50.275118 D | cephclient: ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
2019-08-07 08:34:50.275176 I | exec: Running command: ceph versions
2019-08-07 08:34:50.463316 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:50.479191 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:51.368285 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:51.563780 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:51.662390 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:51.673085 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:51.677604 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:52.262767 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:52.607745 D | cephclient: {
"mon": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 6
},
"mgr": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 1
},
"osd": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 48
},
"mds": {},
"overall": {
"ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 55
}
}
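The `ceph versions` payload is just a map of daemon type to version-string counts. A quick sketch that checks the whole cluster runs a single version before an orchestration step (here: 14.2.1 across all 55 daemons):

package example

import "encoding/json"

// allOnOneVersion reports whether the "overall" bucket of `ceph versions`
// contains exactly one version string.
func allOnOneVersion(raw []byte) (bool, error) {
    var versions map[string]map[string]int
    if err := json.Unmarshal(raw, &versions); err != nil {
        return false, err
    }
    return len(versions["overall"]) == 1, nil
}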
2019-08-07 08:34:52.662050 I | exec: Running command: ceph mon enable-msgr2
2019-08-07 08:34:52.787470 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:52.869398 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:53.663070 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:54.163576 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:54.463241 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:54.662364 I | cephclient: successfully enabled msgr2 protocol
2019-08-07 08:34:54.662427 D | op-mon: mon endpoints used are: b=100.67.17.84:6789,j=100.79.195.199:6789,f=100.69.115.5:6789,g=100.66.122.247:6789,h=100.64.242.138:6789,i=100.70.92.237:6789
2019-08-07 08:34:54.662440 D | op-mon: Released lock for mon orchestration
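Every "Running command:" line in this log follows one invocation pattern: a ceph subcommand plus fixed --connect-timeout/--cluster/--conf/--keyring flags pointing at the generated admin config, with JSON output routed through a temporary --out-file. A minimal sketch of such a wrapper (ioutil is period-appropriate for 2019-era Go):

package example

import (
    "io/ioutil"
    "os"
    "os/exec"
)

// runCeph shells out the way the operator does: fixed connection flags,
// JSON format, and the result captured via a temp --out-file.
func runCeph(cluster, conf, keyring string, args ...string) ([]byte, error) {
    outFile, err := ioutil.TempFile("", "ceph-out")
    if err != nil {
        return nil, err
    }
    outFile.Close()
    defer os.Remove(outFile.Name())

    full := append(args,
        "--connect-timeout=15",
        "--cluster="+cluster,
        "--conf="+conf,
        "--keyring="+keyring,
        "--format", "json",
        "--out-file", outFile.Name(),
    )
    cmd := exec.Command("ceph", full...)
    if out, err := cmd.CombinedOutput(); err != nil {
        return out, err // stderr carries the librados chatter seen above
    }
    return ioutil.ReadFile(outFile.Name())
}

With the paths from this log, the msgr2 step above would look like runCeph("rook-ceph-stage-primary", "/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config", "/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring", "mon", "enable-msgr2").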
2019-08-07 08:34:54.662461 I | op-mgr: start running mgr
2019-08-07 08:34:54.662732 I | exec: Running command: ceph auth get-or-create-key mgr.a mon allow * mds allow * osd allow * --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/591273893
2019-08-07 08:34:54.962389 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:55.474655 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:56.974375 I | exec: 2019-08-07 08:34:55.577 7f253d753700 1 librados: starting msgr at
2019-08-07 08:34:55.577 7f253d753700 1 librados: starting objecter
2019-08-07 08:34:55.578 7f253d753700 1 librados: setting wanted keys
2019-08-07 08:34:55.578 7f253d753700 1 librados: calling monclient init
2019-08-07 08:34:55.666 7f253d753700 1 librados: init done
2019-08-07 08:34:56.868 7f253d753700 10 librados: watch_flush enter
2019-08-07 08:34:56.868 7f253d753700 10 librados: watch_flush exit
2019-08-07 08:34:56.869 7f253d753700 1 librados: shutdown
2019-08-07 08:34:56.978449 D | op-mgr: legacy mgr key rook-ceph-mgr-a is already removed
2019-08-07 08:34:56.981874 D | op-cfg-keyring: updating secret for rook-ceph-mgr-a-keyring
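The key fetched by `ceph auth get-or-create-key mgr.a ...` above ends up in the rook-ceph-mgr-a-keyring secret that the mgr pod later mounts under /etc/ceph/keyring-store/. A hedged sketch of writing such a secret; the "keyring" data key is an assumption based on the standard Ceph keyring file format, and the Create signature is the pre-context client-go one:

package example

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// storeMgrKeyring persists a mgr daemon key as a Kubernetes secret in
// standard Ceph keyring format; daemonID here would be "a".
func storeMgrKeyring(cs kubernetes.Interface, ns, daemonID, key string) error {
    keyring := fmt.Sprintf("[mgr.%s]\n\tkey = %s\n", daemonID, key)
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{
            Name:      fmt.Sprintf("rook-ceph-mgr-%s-keyring", daemonID),
            Namespace: ns,
        },
        StringData: map[string]string{"keyring": keyring}, // data key is an assumption
    }
    _, err := cs.CoreV1().Secrets(ns).Create(secret)
    return err
}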
2019-08-07 08:34:56.985564 I | exec: Running command: ceph config-key get mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/269272256
2019-08-07 08:34:57.962883 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:58.685813 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:58.964265 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:34:59.173316 I | exec: 2019-08-07 08:34:57.782 7f86debb1700 1 librados: starting msgr at
2019-08-07 08:34:57.782 7f86debb1700 1 librados: starting objecter
2019-08-07 08:34:57.782 7f86debb1700 1 librados: setting wanted keys
2019-08-07 08:34:57.782 7f86debb1700 1 librados: calling monclient init
2019-08-07 08:34:57.866 7f86debb1700 1 librados: init done
Error ENOENT: error obtaining 'mgr/dashboard/server_addr': (2) No such file or directory
2019-08-07 08:34:59.093 7f86debb1700 10 librados: watch_flush enter
2019-08-07 08:34:59.094 7f86debb1700 10 librados: watch_flush exit
2019-08-07 08:34:59.095 7f86debb1700 1 librados: shutdown
2019-08-07 08:34:59.173613 I | exec: Running command: ceph config-key del mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/348214559
2019-08-07 08:34:59.463763 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:00.071700 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:00.262546 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:00.264862 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:00.462752 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:00.562866 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:01.463786 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:01.466196 I | exec: 2019-08-07 08:35:00.162 7f8911d5a700 1 librados: starting msgr at
2019-08-07 08:35:00.162 7f8911d5a700 1 librados: starting objecter
2019-08-07 08:35:00.163 7f8911d5a700 1 librados: setting wanted keys
2019-08-07 08:35:00.163 7f8911d5a700 1 librados: calling monclient init
2019-08-07 08:35:00.167 7f8911d5a700 1 librados: init done
no such key 'mgr/dashboard/server_addr'
2019-08-07 08:35:01.361 7f8911d5a700 10 librados: watch_flush enter
2019-08-07 08:35:01.361 7f8911d5a700 10 librados: watch_flush exit
2019-08-07 08:35:01.362 7f8911d5a700 1 librados: shutdown
2019-08-07 08:35:01.466365 I | op-mgr: clearing http bind fix mod=dashboard ver=12.0.0 luminous changed=false err=<nil>
2019-08-07 08:35:01.466520 I | exec: Running command: ceph config-key get mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/929695730
2019-08-07 08:35:01.570007 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:01.663271 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:01.762246 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:01.763744 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:02.264173 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:02.863937 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:02.962642 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:03.664997 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:03.862162 I | exec: 2019-08-07 08:35:02.470 7fed54494700 1 librados: starting msgr at
2019-08-07 08:35:02.470 7fed54494700 1 librados: starting objecter
2019-08-07 08:35:02.470 7fed54494700 1 librados: setting wanted keys
2019-08-07 08:35:02.470 7fed54494700 1 librados: calling monclient init
2019-08-07 08:35:02.477 7fed54494700 1 librados: init done
Error ENOENT: error obtaining 'mgr/dashboard/a/server_addr': (2) No such file or directory
2019-08-07 08:35:03.672 7fed54494700 10 librados: watch_flush enter
2019-08-07 08:35:03.672 7fed54494700 10 librados: watch_flush exit
2019-08-07 08:35:03.761 7fed54494700 1 librados: shutdown
2019-08-07 08:35:03.862447 I | exec: Running command: ceph config-key del mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/520864425
2019-08-07 08:35:04.163956 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:04.462441 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:04.983722 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:05.563790 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:06.176439 I | exec: 2019-08-07 08:35:04.781 7fd868eac700 1 librados: starting msgr at
2019-08-07 08:35:04.781 7fd868eac700 1 librados: starting objecter
2019-08-07 08:35:04.861 7fd868eac700 1 librados: setting wanted keys
2019-08-07 08:35:04.861 7fd868eac700 1 librados: calling monclient init
2019-08-07 08:35:04.867 7fd868eac700 1 librados: init done
no such key 'mgr/dashboard/a/server_addr'
2019-08-07 08:35:06.098 7fd868eac700 10 librados: watch_flush enter
2019-08-07 08:35:06.099 7fd868eac700 10 librados: watch_flush exit
2019-08-07 08:35:06.100 7fd868eac700 1 librados: shutdown
2019-08-07 08:35:06.176554 I | op-mgr: clearing http bind fix mod=dashboard ver=12.0.0 luminous changed=false err=<nil>
2019-08-07 08:35:06.176670 I | exec: Running command: ceph config get mgr.a mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/017122804
2019-08-07 08:35:07.975793 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:08.674547 I | exec: 2019-08-07 08:35:07.283 7f10deaa2700 1 librados: starting msgr at
2019-08-07 08:35:07.283 7f10deaa2700 1 librados: starting objecter
2019-08-07 08:35:07.283 7f10deaa2700 1 librados: setting wanted keys
2019-08-07 08:35:07.283 7f10deaa2700 1 librados: calling monclient init
2019-08-07 08:35:07.365 7f10deaa2700 1 librados: init done
2019-08-07 08:35:08.569 7f10deaa2700 10 librados: watch_flush enter
2019-08-07 08:35:08.569 7f10deaa2700 10 librados: watch_flush exit
2019-08-07 08:35:08.570 7f10deaa2700 1 librados: shutdown
2019-08-07 08:35:08.674871 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/989291203
2019-08-07 08:35:08.763986 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:08.966969 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:09.463842 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:10.163759 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:10.262512 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:10.263864 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:10.463337 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:10.563288 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:10.974553 I | exec: 2019-08-07 08:35:09.568 7f4ff20d1700 1 librados: starting msgr at
2019-08-07 08:35:09.568 7f4ff20d1700 1 librados: starting objecter
2019-08-07 08:35:09.569 7f4ff20d1700 1 librados: setting wanted keys
2019-08-07 08:35:09.569 7f4ff20d1700 1 librados: calling monclient init
2019-08-07 08:35:09.575 7f4ff20d1700 1 librados: init done
2019-08-07 08:35:10.904 7f4ff20d1700 10 librados: watch_flush enter
2019-08-07 08:35:10.904 7f4ff20d1700 10 librados: watch_flush exit
2019-08-07 08:35:10.905 7f4ff20d1700 1 librados: shutdown
2019-08-07 08:35:10.974746 I | op-mgr: clearing http bind fix mod=dashboard ver=13.0.0 mimic changed=true err=<nil>
2019-08-07 08:35:10.974893 I | exec: Running command: ceph config get mgr.a mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/110891334
2019-08-07 08:35:11.462377 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:11.588914 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:11.663105 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:11.762334 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:11.763889 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:12.251838 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:12.863870 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:12.962676 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:13.298713 I | exec: 2019-08-07 08:35:11.887 7fb416841700 1 librados: starting msgr at
2019-08-07 08:35:11.887 7fb416841700 1 librados: starting objecter
2019-08-07 08:35:11.888 7fb416841700 1 librados: setting wanted keys
2019-08-07 08:35:11.888 7fb416841700 1 librados: calling monclient init
2019-08-07 08:35:11.965 7fb416841700 1 librados: init done
2019-08-07 08:35:13.168 7fb416841700 10 librados: watch_flush enter
2019-08-07 08:35:13.168 7fb416841700 10 librados: watch_flush exit
2019-08-07 08:35:13.169 7fb416841700 1 librados: shutdown
2019-08-07 08:35:13.299093 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/105652205
2019-08-07 08:35:13.678489 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:14.163719 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:14.485936 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:15.063847 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:15.563857 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:15.577274 I | exec: 2019-08-07 08:35:14.277 7fc694440700 1 librados: starting msgr at
2019-08-07 08:35:14.277 7fc694440700 1 librados: starting objecter
2019-08-07 08:35:14.277 7fc694440700 1 librados: setting wanted keys
2019-08-07 08:35:14.277 7fc694440700 1 librados: calling monclient init
2019-08-07 08:35:14.361 7fc694440700 1 librados: init done
2019-08-07 08:35:15.470 7fc694440700 10 librados: watch_flush enter
2019-08-07 08:35:15.470 7fc694440700 10 librados: watch_flush exit
2019-08-07 08:35:15.471 7fc694440700 1 librados: shutdown
2019-08-07 08:35:15.577472 I | op-mgr: clearing http bind fix mod=dashboard ver=13.0.0 mimic changed=true err=<nil>
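That long get/del then get/rm sequence is one cleanup routine applied per mgr module: it probes the Luminous-era location (cluster-wide config-key store) and the Mimic-and-later location (per-daemon config database), for both mgr/<mod>/server_addr and mgr/<mod>/<id>/server_addr. A sketch of the loop's shape, with the command runner left abstract (any wrapper like the runCeph sketch further up would do); errors from missing keys are tolerated, just as ENOENT is above:

package example

import "fmt"

// clearHTTPBindFix removes stale server_addr overrides for one mgr module,
// checking every location used across Ceph releases.
func clearHTTPBindFix(runCeph func(args ...string) error, mod, daemonID string) {
    keys := []string{
        fmt.Sprintf("mgr/%s/server_addr", mod),              // e.g. mgr/dashboard/server_addr
        fmt.Sprintf("mgr/%s/%s/server_addr", mod, daemonID), // e.g. mgr/dashboard/a/server_addr
    }
    // Luminous-style: cluster-wide config-key store.
    for _, key := range keys {
        runCeph("config-key", "get", key)
        runCeph("config-key", "del", key)
    }
    // Mimic-and-later: per-daemon config database.
    for _, key := range keys {
        runCeph("config", "get", "mgr."+daemonID, key)
        runCeph("config", "rm", "mgr."+daemonID, key)
    }
}

The routine then runs once per module, which is why the same eight commands repeat for mod=dashboard and mod=prometheus below.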
2019-08-07 08:35:15.577618 I | exec: Running command: ceph config-key get mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/228064360
2019-08-07 08:35:17.775385 I | exec: 2019-08-07 08:35:16.477 7f84532ba700 1 librados: starting msgr at
2019-08-07 08:35:16.477 7f84532ba700 1 librados: starting objecter
2019-08-07 08:35:16.483 7f84532ba700 1 librados: setting wanted keys
2019-08-07 08:35:16.483 7f84532ba700 1 librados: calling monclient init
2019-08-07 08:35:16.563 7f84532ba700 1 librados: init done
Error ENOENT: error obtaining 'mgr/prometheus/server_addr': (2) No such file or directory
2019-08-07 08:35:17.701 7f84532ba700 10 librados: watch_flush enter
2019-08-07 08:35:17.701 7f84532ba700 10 librados: watch_flush exit
2019-08-07 08:35:17.703 7f84532ba700 1 librados: shutdown
2019-08-07 08:35:17.775669 I | exec: Running command: ceph config-key del mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/402842279
2019-08-07 08:35:18.063839 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:18.763852 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:18.984394 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:19.463944 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:20.066381 I | exec: 2019-08-07 08:35:18.765 7fbaad958700 1 librados: starting msgr at
2019-08-07 08:35:18.765 7fbaad958700 1 librados: starting objecter
2019-08-07 08:35:18.766 7fbaad958700 1 librados: setting wanted keys
2019-08-07 08:35:18.766 7fbaad958700 1 librados: calling monclient init
2019-08-07 08:35:18.772 7fbaad958700 1 librados: init done
no such key 'mgr/prometheus/server_addr'
2019-08-07 08:35:19.876 7fbaad958700 10 librados: watch_flush enter
2019-08-07 08:35:19.876 7fbaad958700 10 librados: watch_flush exit
2019-08-07 08:35:19.878 7fbaad958700 1 librados: shutdown
2019-08-07 08:35:20.066578 I | op-mgr: clearing http bind fix mod=prometheus ver=12.0.0 luminous changed=false err=<nil>
2019-08-07 08:35:20.066723 I | exec: Running command: ceph config-key get mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/548204506
2019-08-07 08:35:20.163630 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:20.263035 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:20.264268 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:20.473934 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:20.536876 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:21.464011 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:21.663997 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:21.687890 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:21.763900 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:21.765355 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:22.270997 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:22.503844 I | exec: 2019-08-07 08:35:21.081 7f647a691700 1 librados: starting msgr at
2019-08-07 08:35:21.081 7f647a691700 1 librados: starting objecter
2019-08-07 08:35:21.082 7f647a691700 1 librados: setting wanted keys
2019-08-07 08:35:21.082 7f647a691700 1 librados: calling monclient init
2019-08-07 08:35:21.167 7f647a691700 1 librados: init done
Error ENOENT: error obtaining 'mgr/prometheus/a/server_addr': (2) No such file or directory
2019-08-07 08:35:22.374 7f647a691700 10 librados: watch_flush enter
2019-08-07 08:35:22.374 7f647a691700 10 librados: watch_flush exit
2019-08-07 08:35:22.375 7f647a691700 1 librados: shutdown
2019-08-07 08:35:22.504129 I | exec: Running command: ceph config-key del mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/331514225
2019-08-07 08:35:22.852366 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:22.963832 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:23.763202 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:24.148549 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:24.563247 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:24.774569 I | exec: 2019-08-07 08:35:23.385 7fa7680b4700 1 librados: starting msgr at
2019-08-07 08:35:23.385 7fa7680b4700 1 librados: starting objecter
2019-08-07 08:35:23.461 7fa7680b4700 1 librados: setting wanted keys
2019-08-07 08:35:23.461 7fa7680b4700 1 librados: calling monclient init
2019-08-07 08:35:23.467 7fa7680b4700 1 librados: init done
no such key 'mgr/prometheus/a/server_addr'
2019-08-07 08:35:24.700 7fa7680b4700 10 librados: watch_flush enter
2019-08-07 08:35:24.700 7fa7680b4700 10 librados: watch_flush exit
2019-08-07 08:35:24.701 7fa7680b4700 1 librados: shutdown
2019-08-07 08:35:24.774760 I | op-mgr: clearing http bind fix mod=prometheus ver=12.0.0 luminous changed=false err=<nil>
2019-08-07 08:35:24.774968 I | exec: Running command: ceph config get mgr.a mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/626049564
2019-08-07 08:35:25.065024 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:25.562240 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:27.064825 I | exec: 2019-08-07 08:35:25.684 7f679c480700 1 librados: starting msgr at
2019-08-07 08:35:25.684 7f679c480700 1 librados: starting objecter
2019-08-07 08:35:25.761 7f679c480700 1 librados: setting wanted keys
2019-08-07 08:35:25.761 7f679c480700 1 librados: calling monclient init
2019-08-07 08:35:25.767 7f679c480700 1 librados: init done
2019-08-07 08:35:26.871 7f679c480700 10 librados: watch_flush enter
2019-08-07 08:35:26.871 7f679c480700 10 librados: watch_flush exit
2019-08-07 08:35:26.961 7f679c480700 1 librados: shutdown
2019-08-07 08:35:27.065156 I | exec: Running command: ceph config rm mgr.a mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/968795339
2019-08-07 08:35:28.015779 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:28.764057 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:29.063875 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:29.275521 I | exec: 2019-08-07 08:35:27.884 7f23a8126700 1 librados: starting msgr at
2019-08-07 08:35:27.884 7f23a8126700 1 librados: starting objecter
2019-08-07 08:35:27.885 7f23a8126700 1 librados: setting wanted keys
2019-08-07 08:35:27.885 7f23a8126700 1 librados: calling monclient init
2019-08-07 08:35:27.966 7f23a8126700 1 librados: init done
2019-08-07 08:35:29.168 7f23a8126700 10 librados: watch_flush enter
2019-08-07 08:35:29.168 7f23a8126700 10 librados: watch_flush exit
2019-08-07 08:35:29.169 7f23a8126700 1 librados: shutdown
2019-08-07 08:35:29.275706 I | op-mgr: clearing http bind fix mod=prometheus ver=13.0.0 mimic changed=false err=<nil>
2019-08-07 08:35:29.275850 I | exec: Running command: ceph config get mgr.a mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/323598254
2019-08-07 08:35:29.474759 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:30.163672 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:30.262251 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:30.267680 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:30.563374 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:30.564651 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:31.562299 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:31.575286 I | exec: 2019-08-07 08:35:30.174 7f1769a9c700 1 librados: starting msgr at
2019-08-07 08:35:30.174 7f1769a9c700 1 librados: starting objecter
2019-08-07 08:35:30.261 7f1769a9c700 1 librados: setting wanted keys
2019-08-07 08:35:30.261 7f1769a9c700 1 librados: calling monclient init
2019-08-07 08:35:30.266 7f1769a9c700 1 librados: init done
2019-08-07 08:35:31.466 7f1769a9c700 10 librados: watch_flush enter
2019-08-07 08:35:31.466 7f1769a9c700 10 librados: watch_flush exit
2019-08-07 08:35:31.467 7f1769a9c700 1 librados: shutdown
2019-08-07 08:35:31.575576 I | exec: Running command: ceph config rm mgr.a mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/254084917
2019-08-07 08:35:31.662621 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:31.709706 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:31.766633 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:31.862229 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:32.363837 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:32.869393 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:32.962365 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:33.762602 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:33.873409 I | exec: 2019-08-07 08:35:32.562 7f4f8bde8700 1 librados: starting msgr at
2019-08-07 08:35:32.562 7f4f8bde8700 1 librados: starting objecter
2019-08-07 08:35:32.562 7f4f8bde8700 1 librados: setting wanted keys
2019-08-07 08:35:32.562 7f4f8bde8700 1 librados: calling monclient init
2019-08-07 08:35:32.567 7f4f8bde8700 1 librados: init done
2019-08-07 08:35:33.768 7f4f8bde8700 10 librados: watch_flush enter
2019-08-07 08:35:33.768 7f4f8bde8700 10 librados: watch_flush exit
2019-08-07 08:35:33.769 7f4f8bde8700 1 librados: shutdown
2019-08-07 08:35:33.873578 I | op-mgr: clearing http bind fix mod=prometheus ver=13.0.0 mimic changed=false err=<nil>
2019-08-07 08:35:33.874708 D | op-mgr: starting mgr deployment: &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-mgr-a,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-ceph-mgr,ceph-version: 14.2.1,ceph_daemon_id: a,instance: a,mgr: a,rook-version: v1.0.4,rook_cluster: rook-ceph-stage-primary,},Annotations:map[string]string{prometheus.io/port: 9283,prometheus.io/scrape: true,},OwnerReferences:[{ceph.rook.io/v1 CephCluster rook-ceph-stage-primary 76235f05-b792-11e9-9b32-0050568460f6 <nil> 0xc000be9e2c}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{app: rook-ceph-mgr,ceph_daemon_id: a,instance: a,mgr: a,rook_cluster: rook-ceph-stage-primary,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-mgr-a,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-ceph-mgr,ceph_daemon_id: a,instance: a,mgr: a,rook_cluster: rook-ceph-stage-primary,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{rook-ceph-config {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:rook-ceph-config,},Items:[{ceph.conf ceph.conf 0xc000c82288}],DefaultMode:nil,Optional:nil,} nil nil nil nil nil nil nil nil nil}} {rook-ceph-mgr-a-keyring {nil nil nil nil nil &SecretVolumeSource{SecretName:rook-ceph-mgr-a-keyring,Items:[],DefaultMode:nil,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {rook-ceph-log {&HostPathVolumeSource{Path:/opt/rook/rook-ceph-stage-primary/rook-ceph-stage-primary/log,Type:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {ceph-daemon-data {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{mgr ceph/ceph:v14.2.1-20190430 [ceph-mgr] [--fsid=7dd854f1-2892-4201-ab69-d4797f12ac50 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=$(ROOK_CEPH_MON_HOST) --mon-initial-members=$(ROOK_CEPH_MON_INITIAL_MEMBERS) --id=a --foreground] [{mgr 0 6800 TCP } {http-metrics 0 9283 TCP } {dashboard 0 7000 TCP }] [] [{CONTAINER_IMAGE ceph/ceph:v14.2.1-20190430 nil} {POD_NAME EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_NAMESPACE &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {NODE_NAME &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_MEMORY_LIMIT 
&EnvVarSource{FieldRef:nil,ResourceFieldRef:&ResourceFieldSelector{ContainerName:,Resource:limits.memory,Divisor:0,},ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_MEMORY_REQUEST &EnvVarSource{FieldRef:nil,ResourceFieldRef:&ResourceFieldSelector{ContainerName:,Resource:requests.memory,Divisor:0,},ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_CPU_LIMIT &EnvVarSource{FieldRef:nil,ResourceFieldRef:&ResourceFieldSelector{ContainerName:,Resource:limits.cpu,Divisor:1,},ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_CPU_REQUEST &EnvVarSource{FieldRef:nil,ResourceFieldRef:&ResourceFieldSelector{ContainerName:,Resource:requests.cpu,Divisor:0,},ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {ROOK_CEPH_MON_HOST &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:rook-ceph-config,},Key:mon_host,Optional:nil,},}} {ROOK_CEPH_MON_INITIAL_MEMBERS &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:rook-ceph-config,},Key:mon_initial_members,Optional:nil,},}} {ROOK_OPERATOR_NAMESPACE rook-ceph-stage-primary nil} {ROOK_CEPH_CLUSTER_CRD_VERSION v1 nil} {ROOK_VERSION v1.0.4 nil} {ROOK_CEPH_CLUSTER_CRD_NAME rook-ceph-stage-primary nil}] {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{1073741824 0} {<nil>} 1Gi BinarySI}] map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{1073741824 0} {<nil>} 1Gi BinarySI}]} [{rook-ceph-config true /etc/ceph <nil> } {rook-ceph-mgr-a-keyring true /etc/ceph/keyring-store/ <nil> } {rook-ceph-log false /var/log/ceph <nil> } {ceph-daemon-data false /var/lib/ceph/mgr/ceph-a <nil> }] [] nil nil nil nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:nil,ActiveDeadlineSeconds:nil,DNSPolicy:,NodeSelector:map[string]string{},ServiceAccountName:rook-ceph-mgr,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:nil,ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[{[{rook-namespace NotIn [rook-ceph-stage-primary]}] []}],},PreferredDuringSchedulingIgnoredDuringExecution:[],},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:nil,Paused:false,ProgressDeadlineSeconds:nil,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
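Distilled from the dump above, the operationally relevant core of the mgr pod is the ceph-mgr container with its identity flags and three service ports. A condensed sketch of just the container, with values copied from the log and everything else (resources, env wiring, volumes) deliberately elided:

package example

import corev1 "k8s.io/api/core/v1"

// mgrContainer captures the core of the rook-ceph-mgr-a pod spec logged
// above; the full spec also mounts rook-ceph-config and the keyring secret.
var mgrContainer = corev1.Container{
    Name:    "mgr",
    Image:   "ceph/ceph:v14.2.1-20190430",
    Command: []string{"ceph-mgr"},
    Args: []string{
        "--id=a", "--foreground",
        "--keyring=/etc/ceph/keyring-store/keyring",
        "--log-to-stderr=true", "--err-to-stderr=true",
    },
    Ports: []corev1.ContainerPort{
        {Name: "mgr", ContainerPort: 6800},
        {Name: "http-metrics", ContainerPort: 9283}, // matches the prometheus.io/port annotation
        {Name: "dashboard", ContainerPort: 7000},
    },
}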
2019-08-07 08:35:33.889458 I | op-mgr: deployment for mgr rook-ceph-mgr-a already exists. updating if needed
2019-08-07 08:35:33.895389 I | op-k8sutil: updating deployment rook-ceph-mgr-a
2019-08-07 08:35:33.911737 D | op-k8sutil: deployment rook-ceph-mgr-a status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:21:41 +0000 UTC LastTransitionTime:2019-08-06 14:21:41 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:21:41 +0000 UTC LastTransitionTime:2019-08-06 14:21:41 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mgr-a-5d469cc9b5" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:35:34.170125 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:34.527749 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:35.069349 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:35.562401 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:35.916548 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mgr-a
2019-08-07 08:35:35.916789 I | exec: Running command: ceph mgr module enable orchestrator_cli --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/359803152
2019-08-07 08:35:38.034332 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:38.684902 I | exec: 2019-08-07 08:35:36.786 7f2b2d14f700 1 librados: starting msgr at
2019-08-07 08:35:36.786 7f2b2d14f700 1 librados: starting objecter
2019-08-07 08:35:36.861 7f2b2d14f700 1 librados: setting wanted keys
2019-08-07 08:35:36.861 7f2b2d14f700 1 librados: calling monclient init
2019-08-07 08:35:36.868 7f2b2d14f700 1 librados: init done
module 'orchestrator_cli' is already enabled (always-on)
2019-08-07 08:35:38.605 7f2b2d14f700 10 librados: watch_flush enter
2019-08-07 08:35:38.605 7f2b2d14f700 10 librados: watch_flush exit
2019-08-07 08:35:38.607 7f2b2d14f700 1 librados: shutdown
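
Note: the module state the operator is converging on here can be checked with the same ceph CLI it shells out to. A minimal sketch, reusing the --cluster/--conf/--keyring flags visible in the commands above; orchestrator_cli should report as always-on, matching the message in the log:

    # list mgr modules; orchestrator_cli shows up under always-on modules
    ceph mgr module ls --format json \
        --cluster=rook-ceph-stage-primary \
        --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config \
        --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring
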
2019-08-07 08:35:38.685300 I | exec: Running command: ceph mgr module enable rook --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/065331503
2019-08-07 08:35:38.762945 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:39.064012 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:39.563840 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:40.176761 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:40.274805 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:40.362863 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:40.563243 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:40.573856 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:41.488514 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:41.673421 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:41.745391 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:41.785869 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:41.862216 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:41.876159 I | exec: 2019-08-07 08:35:39.670 7f7a1b6de700 1 librados: starting msgr at
2019-08-07 08:35:39.670 7f7a1b6de700 1 librados: starting objecter
2019-08-07 08:35:39.670 7f7a1b6de700 1 librados: setting wanted keys
2019-08-07 08:35:39.670 7f7a1b6de700 1 librados: calling monclient init
2019-08-07 08:35:39.760 7f7a1b6de700 1 librados: init done
module 'rook' is already enabled
2019-08-07 08:35:41.789 7f7a1b6de700 10 librados: watch_flush enter
2019-08-07 08:35:41.789 7f7a1b6de700 10 librados: watch_flush exit
2019-08-07 08:35:41.790 7f7a1b6de700 1 librados: shutdown
2019-08-07 08:35:41.876409 I | exec: Running command: ceph orchestrator set backend rook --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/196501186
2019-08-07 08:35:42.363445 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:42.963953 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:42.971542 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:43.728630 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:44.187571 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:44.210804 I | exec: 2019-08-07 08:35:42.861 7f8db5ba2700 1 librados: starting msgr at
2019-08-07 08:35:42.861 7f8db5ba2700 1 librados: starting objecter
2019-08-07 08:35:42.864 7f8db5ba2700 1 librados: setting wanted keys
2019-08-07 08:35:42.864 7f8db5ba2700 1 librados: calling monclient init
2019-08-07 08:35:42.869 7f8db5ba2700 1 librados: init done
2019-08-07 08:35:44.081 7f8db5ba2700 10 librados: watch_flush enter
2019-08-07 08:35:44.081 7f8db5ba2700 10 librados: watch_flush exit
2019-08-07 08:35:44.083 7f8db5ba2700 1 librados: shutdown
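
The `set backend rook` call above wires the mgr orchestrator module to the Rook operator. A hedged sketch for verifying it took effect, using the Nautilus-era spelling (later Ceph releases rename this to `ceph orch status`):

    # confirm the rook backend is active for the orchestrator module
    ceph orchestrator status \
        --cluster=rook-ceph-stage-primary \
        --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config \
        --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring
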
2019-08-07 08:35:44.211154 I | exec: Running command: ceph mgr module enable prometheus --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/211071289
2019-08-07 08:35:44.563194 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:45.163861 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:45.586334 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:46.275748 I | exec: 2019-08-07 08:35:45.084 7faa5d745700 1 librados: starting msgr at
2019-08-07 08:35:45.084 7faa5d745700 1 librados: starting objecter
2019-08-07 08:35:45.161 7faa5d745700 1 librados: setting wanted keys
2019-08-07 08:35:45.161 7faa5d745700 1 librados: calling monclient init
2019-08-07 08:35:45.167 7faa5d745700 1 librados: init done
module 'prometheus' is already enabled
2019-08-07 08:35:46.238 7faa5d745700 10 librados: watch_flush enter
2019-08-07 08:35:46.238 7faa5d745700 10 librados: watch_flush exit
2019-08-07 08:35:46.239 7faa5d745700 1 librados: shutdown
2019-08-07 08:35:46.276071 I | exec: Running command: ceph mgr module enable dashboard --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/039981380
2019-08-07 08:35:48.063899 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:48.477549 I | exec: 2019-08-07 08:35:47.265 7f6a463c1700 1 librados: starting msgr at
2019-08-07 08:35:47.265 7f6a463c1700 1 librados: starting objecter
2019-08-07 08:35:47.265 7f6a463c1700 1 librados: setting wanted keys
2019-08-07 08:35:47.265 7f6a463c1700 1 librados: calling monclient init
2019-08-07 08:35:47.272 7f6a463c1700 1 librados: init done
module 'dashboard' is already enabled
2019-08-07 08:35:48.401 7f6a463c1700 10 librados: watch_flush enter
2019-08-07 08:35:48.401 7f6a463c1700 10 librados: watch_flush exit
2019-08-07 08:35:48.402 7f6a463c1700 1 librados: shutdown
2019-08-07 08:35:48.770716 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:49.056508 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:49.517224 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:50.205759 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:50.302665 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:50.306430 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:50.532471 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:50.592471 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:51.509509 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:51.695809 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:51.768142 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:51.804689 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:51.842917 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:52.337431 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:52.912305 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:52.990834 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:53.481979 I | op-mgr: the dashboard secret was already generated
2019-08-07 08:35:53.482013 I | op-mgr: Running command: ceph dashboard set-login-credentials admin *******
2019-08-07 08:35:53.482192 D | exec: Running command: ceph dashboard set-login-credentials admin ******* --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/710731731
2019-08-07 08:35:53.763286 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:54.205358 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:54.566043 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:55.163810 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:55.608392 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:55.870340 I | exec: 2019-08-07 08:35:54.386 7f8a26173700 1 librados: starting msgr at
2019-08-07 08:35:54.386 7f8a26173700 1 librados: starting objecter
2019-08-07 08:35:54.387 7f8a26173700 1 librados: setting wanted keys
2019-08-07 08:35:54.387 7f8a26173700 1 librados: calling monclient init
2019-08-07 08:35:54.466 7f8a26173700 1 librados: init done
2019-08-07 08:35:55.794 7f8a26173700 10 librados: watch_flush enter
2019-08-07 08:35:55.794 7f8a26173700 10 librados: watch_flush exit
2019-08-07 08:35:55.795 7f8a26173700 1 librados: shutdown
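
The admin password passed above (redacted as *******) is the one the operator generated earlier and stored in a Kubernetes secret. Following the Rook v1.0 docs, it can be recovered with:

    # decode the generated dashboard admin password
    kubectl -n rook-ceph-stage-primary get secret rook-ceph-dashboard-password \
        -o jsonpath="{['data']['password']}" | base64 --decode
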
2019-08-07 08:35:55.870535 I | op-mgr: restarting the mgr module
2019-08-07 08:35:55.870680 I | exec: Running command: ceph mgr module disable dashboard --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/021133590
2019-08-07 08:35:58.071820 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:58.072973 I | exec: 2019-08-07 08:35:56.781 7f49c9f7a700 1 librados: starting msgr at
2019-08-07 08:35:56.781 7f49c9f7a700 1 librados: starting objecter
2019-08-07 08:35:56.782 7f49c9f7a700 1 librados: setting wanted keys
2019-08-07 08:35:56.782 7f49c9f7a700 1 librados: calling monclient init
2019-08-07 08:35:56.867 7f49c9f7a700 1 librados: init done
2019-08-07 08:35:57.995 7f49c9f7a700 10 librados: watch_flush enter
2019-08-07 08:35:57.995 7f49c9f7a700 10 librados: watch_flush exit
2019-08-07 08:35:57.996 7f49c9f7a700 1 librados: shutdown
2019-08-07 08:35:58.073289 I | exec: Running command: ceph mgr module enable dashboard --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/994678653
2019-08-07 08:35:58.784068 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:59.162252 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:35:59.563749 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:00.231779 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:00.319965 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:00.332262 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:00.551811 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:00.620274 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:01.173246 I | exec: 2019-08-07 08:35:58.986 7f7c11ca9700 1 librados: starting msgr at
2019-08-07 08:35:58.986 7f7c11ca9700 1 librados: starting objecter
2019-08-07 08:35:59.062 7f7c11ca9700 1 librados: setting wanted keys
2019-08-07 08:35:59.062 7f7c11ca9700 1 librados: calling monclient init
2019-08-07 08:35:59.067 7f7c11ca9700 1 librados: init done
2019-08-07 08:36:01.096 7f7c11ca9700 10 librados: watch_flush enter
2019-08-07 08:36:01.096 7f7c11ca9700 10 librados: watch_flush exit
2019-08-07 08:36:01.097 7f7c11ca9700 1 librados: shutdown
2019-08-07 08:36:01.173528 I | exec: Running command: ceph config get mgr.a mgr/dashboard/url_prefix --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/390354616
2019-08-07 08:36:01.536709 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:01.763803 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:01.790950 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:01.830749 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:01.867572 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:02.362219 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:02.964033 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:03.062452 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:03.669443 I | exec: 2019-08-07 08:36:02.177 7fd28a67c700 1 librados: starting msgr at
2019-08-07 08:36:02.177 7fd28a67c700 1 librados: starting objecter
2019-08-07 08:36:02.177 7fd28a67c700 1 librados: setting wanted keys
2019-08-07 08:36:02.177 7fd28a67c700 1 librados: calling monclient init
2019-08-07 08:36:02.265 7fd28a67c700 1 librados: init done
2019-08-07 08:36:03.560 7fd28a67c700 10 librados: watch_flush enter
2019-08-07 08:36:03.560 7fd28a67c700 10 librados: watch_flush exit
2019-08-07 08:36:03.561 7fd28a67c700 1 librados: shutdown
2019-08-07 08:36:03.669749 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/url_prefix --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/702195383
2019-08-07 08:36:03.758163 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:04.263770 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:04.663373 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:05.164000 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:05.663717 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:05.972735 I | exec: 2019-08-07 08:36:04.579 7f18485b5700 1 librados: starting msgr at
2019-08-07 08:36:04.579 7f18485b5700 1 librados: starting objecter
2019-08-07 08:36:04.579 7f18485b5700 1 librados: setting wanted keys
2019-08-07 08:36:04.579 7f18485b5700 1 librados: calling monclient init
2019-08-07 08:36:04.662 7f18485b5700 1 librados: init done
2019-08-07 08:36:05.868 7f18485b5700 10 librados: watch_flush enter
2019-08-07 08:36:05.868 7f18485b5700 10 librados: watch_flush exit
2019-08-07 08:36:05.869 7f18485b5700 1 librados: shutdown
2019-08-07 08:36:05.973096 I | exec: Running command: ceph config get mgr.a mgr/dashboard/server_port --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/808294058
2019-08-07 08:36:08.088520 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:08.273508 I | exec: 2019-08-07 08:36:06.886 7f05eb04d700 1 librados: starting msgr at
2019-08-07 08:36:06.886 7f05eb04d700 1 librados: starting objecter
2019-08-07 08:36:06.962 7f05eb04d700 1 librados: setting wanted keys
2019-08-07 08:36:06.962 7f05eb04d700 1 librados: calling monclient init
2019-08-07 08:36:06.967 7f05eb04d700 1 librados: init done
2019-08-07 08:36:08.161 7f05eb04d700 10 librados: watch_flush enter
2019-08-07 08:36:08.161 7f05eb04d700 10 librados: watch_flush exit
2019-08-07 08:36:08.163 7f05eb04d700 1 librados: shutdown
2019-08-07 08:36:08.273833 I | exec: Running command: ceph config set mgr.a mgr/dashboard/server_port 7000 --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/405520897
2019-08-07 08:36:08.862506 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:09.163995 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:09.563687 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:10.270925 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:10.366846 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:10.366921 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:10.574244 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:10.662549 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:10.974370 I | exec: 2019-08-07 08:36:09.264 7f8d389cb700 1 librados: starting msgr at
2019-08-07 08:36:09.264 7f8d389cb700 1 librados: starting objecter
2019-08-07 08:36:09.265 7f8d389cb700 1 librados: setting wanted keys
2019-08-07 08:36:09.265 7f8d389cb700 1 librados: calling monclient init
2019-08-07 08:36:09.272 7f8d389cb700 1 librados: init done
2019-08-07 08:36:10.868 7f8d389cb700 10 librados: watch_flush enter
2019-08-07 08:36:10.868 7f8d389cb700 10 librados: watch_flush exit
2019-08-07 08:36:10.869 7f8d389cb700 1 librados: shutdown
2019-08-07 08:36:10.974648 I | exec: Running command: ceph config get mgr.a mgr/dashboard/ssl --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/998777708
2019-08-07 08:36:11.564901 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:11.764163 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:11.862369 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:11.864415 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:11.962548 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:12.464519 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:12.963609 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:13.062381 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:13.372095 I | exec: 2019-08-07 08:36:11.977 7f3fee3c5700 1 librados: starting msgr at
2019-08-07 08:36:11.977 7f3fee3c5700 1 librados: starting objecter
2019-08-07 08:36:11.978 7f3fee3c5700 1 librados: setting wanted keys
2019-08-07 08:36:11.978 7f3fee3c5700 1 librados: calling monclient init
2019-08-07 08:36:12.066 7f3fee3c5700 1 librados: init done
2019-08-07 08:36:13.299 7f3fee3c5700 10 librados: watch_flush enter
2019-08-07 08:36:13.299 7f3fee3c5700 10 librados: watch_flush exit
2019-08-07 08:36:13.300 7f3fee3c5700 1 librados: shutdown
2019-08-07 08:36:13.372401 I | exec: Running command: ceph config set mgr.a mgr/dashboard/ssl false --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/501345755
2019-08-07 08:36:13.863236 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:14.264491 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:14.663368 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:15.262479 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:15.667382 I | exec: 2019-08-07 08:36:14.370 7f0fdc3af700 1 librados: starting msgr at
2019-08-07 08:36:14.370 7f0fdc3af700 1 librados: starting objecter
2019-08-07 08:36:14.371 7f0fdc3af700 1 librados: setting wanted keys
2019-08-07 08:36:14.371 7f0fdc3af700 1 librados: calling monclient init
2019-08-07 08:36:14.376 7f0fdc3af700 1 librados: init done
2019-08-07 08:36:15.561 7f0fdc3af700 10 librados: watch_flush enter
2019-08-07 08:36:15.561 7f0fdc3af700 10 librados: watch_flush exit
2019-08-07 08:36:15.562 7f0fdc3af700 1 librados: shutdown
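
The two config writes above (server_port 7000, ssl false) are derived from the dashboard block of the CephCluster CR, whose name rook-ceph-stage-primary appears as ROOK_CEPH_CLUSTER_CRD_NAME in the mgr deployment spec dumped earlier. A sketch of driving the same settings declaratively (field names per the Rook v1.0 CRD; verify against your version):

    # set the dashboard options on the CR instead of via raw `ceph config`
    kubectl -n rook-ceph-stage-primary patch cephcluster rook-ceph-stage-primary \
        --type merge -p '{"spec":{"dashboard":{"enabled":true,"port":7000,"ssl":false}}}'
    # read back what the operator reconciles against
    kubectl -n rook-ceph-stage-primary get cephcluster rook-ceph-stage-primary \
        -o jsonpath='{.spec.dashboard}'
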
2019-08-07 08:36:15.668574 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:15.704663 I | op-mgr: dashboard service already exists
2019-08-07 08:36:15.742313 I | op-mgr: mgr metrics service already exists
2019-08-07 08:36:15.742357 I | op-osd: start running osds in namespace rook-ceph-stage-primary
2019-08-07 08:36:15.766288 I | op-osd: 4 of the 4 storage nodes are valid
2019-08-07 08:36:15.766307 I | op-osd: start provisioning the osds on nodes, if needed
2019-08-07 08:36:15.766414 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-101.lxstage.domain.com will be 713725d9f667a079e331f69f263a1fd0
2019-08-07 08:36:15.778250 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-101.lxstage.domain.com will be 713725d9f667a079e331f69f263a1fd0
2019-08-07 08:36:15.781572 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-713725d9f667a079e331f69f263a1fd0 to start a new one
2019-08-07 08:36:15.797199 I | op-k8sutil: batch job rook-ceph-osd-prepare-713725d9f667a079e331f69f263a1fd0 still exists
2019-08-07 08:36:17.801265 I | op-k8sutil: batch job rook-ceph-osd-prepare-713725d9f667a079e331f69f263a1fd0 deleted
2019-08-07 08:36:17.807174 I | op-osd: osd provision job started for node k8s-worker-101.lxstage.domain.com
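
The 32-hex-character suffix in the job name is a digest of the node name, used because the formatted name would exceed the 63-character limit the operator enforces (the Kubernetes DNS-label/label-value length). Judging by the digest length it looks like md5; a sketch, assuming that hashing scheme:

    # reproduce the hashed job-name suffix for a long node name (assumed md5)
    echo -n k8s-worker-101.lxstage.domain.com | md5sum
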
2019-08-07 08:36:17.807226 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-102.lxstage.domain.com will be 16f7814f5a22fc71d100e5b6a4b5bf2b
2019-08-07 08:36:17.817950 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-102.lxstage.domain.com will be 16f7814f5a22fc71d100e5b6a4b5bf2b
2019-08-07 08:36:17.821958 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-16f7814f5a22fc71d100e5b6a4b5bf2b to start a new one
2019-08-07 08:36:17.854138 I | op-k8sutil: batch job rook-ceph-osd-prepare-16f7814f5a22fc71d100e5b6a4b5bf2b still exists
2019-08-07 08:36:18.111476 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:18.819450 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:19.108663 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:19.577667 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:19.858067 I | op-k8sutil: batch job rook-ceph-osd-prepare-16f7814f5a22fc71d100e5b6a4b5bf2b deleted
2019-08-07 08:36:19.864637 I | op-osd: osd provision job started for node k8s-worker-102.lxstage.domain.com
2019-08-07 08:36:19.864692 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-103.lxstage.domain.com will be 3421d9d141eb39906bd993a89171141d
2019-08-07 08:36:19.881626 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-103.lxstage.domain.com will be 3421d9d141eb39906bd993a89171141d
2019-08-07 08:36:19.885982 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-3421d9d141eb39906bd993a89171141d to start a new one
2019-08-07 08:36:19.912823 I | op-k8sutil: batch job rook-ceph-osd-prepare-3421d9d141eb39906bd993a89171141d still exists
2019-08-07 08:36:20.285826 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:20.354959 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:20.366341 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:20.589138 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:20.670286 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:21.585948 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:21.778797 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:21.852022 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:21.870535 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:21.916330 I | op-k8sutil: batch job rook-ceph-osd-prepare-3421d9d141eb39906bd993a89171141d deleted
2019-08-07 08:36:21.917528 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:21.923147 I | op-osd: osd provision job started for node k8s-worker-103.lxstage.domain.com
2019-08-07 08:36:21.923205 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-104.lxstage.domain.com will be 314ab5663f2f709906d970de379b1dc7
2019-08-07 08:36:21.940307 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-104.lxstage.domain.com will be 314ab5663f2f709906d970de379b1dc7
2019-08-07 08:36:21.944228 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-314ab5663f2f709906d970de379b1dc7 to start a new one
2019-08-07 08:36:21.971543 I | op-k8sutil: batch job rook-ceph-osd-prepare-314ab5663f2f709906d970de379b1dc7 still exists
2019-08-07 08:36:22.418728 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:22.984743 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:23.065148 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:23.815136 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:23.975118 I | op-k8sutil: batch job rook-ceph-osd-prepare-314ab5663f2f709906d970de379b1dc7 deleted
2019-08-07 08:36:23.981834 I | op-osd: osd provision job started for node k8s-worker-104.lxstage.domain.com
2019-08-07 08:36:23.981861 I | op-osd: start osds after provisioning is completed, if needed
2019-08-07 08:36:24.067418 I | op-osd: osd orchestration status for node k8s-worker-102.lxstage.domain.com is orchestrating
2019-08-07 08:36:24.067458 I | op-osd: osd orchestration status for node k8s-worker-104.lxstage.domain.com is starting
2019-08-07 08:36:24.067475 I | op-osd: osd orchestration status for node k8s-worker-103.lxstage.domain.com is starting
2019-08-07 08:36:24.067491 I | op-osd: osd orchestration status for node k8s-worker-101.lxstage.domain.com is starting
2019-08-07 08:36:24.067502 I | op-osd: 0/4 node(s) completed osd provisioning, resource version 309447904
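
While the four prepare jobs run, their progress can be followed from the Kubernetes side; Rook labels them app=rook-ceph-osd-prepare. A sketch:

    # watch the per-node osd-prepare jobs the operator just (re)created
    kubectl -n rook-ceph-stage-primary get jobs -l app=rook-ceph-osd-prepare -w
    # tail the job for k8s-worker-101 (hashed name taken from the log above)
    kubectl -n rook-ceph-stage-primary logs -f \
        job/rook-ceph-osd-prepare-713725d9f667a079e331f69f263a1fd0
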
2019-08-07 08:36:24.280573 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:24.626087 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:25.228991 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:25.695234 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:28.128761 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:28.833175 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:29.128041 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:29.595126 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:30.362425 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:30.465022 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:30.466245 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:30.640431 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:30.691124 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:31.600531 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:31.799152 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:31.876649 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:31.890023 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:31.943747 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:32.441390 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:33.062514 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:33.083244 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:33.828846 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:34.302150 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:34.643562 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:34.932226 I | op-osd: osd orchestration status for node k8s-worker-101.lxstage.domain.com is computingDiff
2019-08-07 08:36:35.036550 I | op-osd: osd orchestration status for node k8s-worker-101.lxstage.domain.com is orchestrating
2019-08-07 08:36:35.253803 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:35.715969 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:36.392250 I | op-osd: osd orchestration status for node k8s-worker-102.lxstage.domain.com is completed
2019-08-07 08:36:36.392282 I | op-osd: starting 12 osd daemons on node k8s-worker-102.lxstage.domain.com
2019-08-07 08:36:36.392318 D | op-osd: start osd {4 /var/lib/rook/osd4 /var/lib/rook/osd4/rook-ceph-stage-primary.config ceph /var/lib/rook/osd4/keyring b830d73e-ce5c-4915-8f5a-6d9a2df98280 false false true}
2019-08-07 08:36:36.401992 I | op-osd: deployment for osd 4 already exists. updating if needed
2019-08-07 08:36:36.462281 I | op-k8sutil: updating deployment rook-ceph-osd-4
2019-08-07 08:36:36.476855 D | op-k8sutil: deployment rook-ceph-osd-4 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:55 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-4-6bb55d5fc6" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:36:38.145850 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:38.482094 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-4
2019-08-07 08:36:38.482128 I | op-osd: started deployment for osd 4 (dir=false, type=bluestore)
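
Each "started deployment for osd N" line corresponds to a rook-ceph-osd-N Deployment that the operator updates and then waits on. The same rollouts can be inspected directly (sketch, using the app=rook-ceph-osd label Rook applies to them):

    # list per-OSD deployments and check one rollout
    kubectl -n rook-ceph-stage-primary get deploy -l app=rook-ceph-osd
    kubectl -n rook-ceph-stage-primary rollout status deploy/rook-ceph-osd-4
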
2019-08-07 08:36:38.482156 D | op-osd: start osd {16 /var/lib/rook/osd16 /var/lib/rook/osd16/rook-ceph-stage-primary.config ceph /var/lib/rook/osd16/keyring b8653dd3-724d-47e4-851b-a967072ace81 false false true}
2019-08-07 08:36:38.491724 I | op-osd: deployment for osd 16 already exists. updating if needed
2019-08-07 08:36:38.495781 I | op-k8sutil: updating deployment rook-ceph-osd-16
2019-08-07 08:36:38.578063 D | op-k8sutil: deployment rook-ceph-osd-16 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:53 +0000 UTC LastTransitionTime:2019-08-06 14:28:53 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:50 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-16-56ff6dbb7c" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:36:38.848199 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:39.145444 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:39.613022 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:40.332318 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:40.398140 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:40.405630 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:40.583222 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-16
2019-08-07 08:36:40.583251 I | op-osd: started deployment for osd 16 (dir=false, type=bluestore)
2019-08-07 08:36:40.583278 D | op-osd: start osd {20 /var/lib/rook/osd20 /var/lib/rook/osd20/rook-ceph-stage-primary.config ceph /var/lib/rook/osd20/keyring 30b2a1ea-9a68-4b46-89bf-aba7bf8f29ff false false true}
2019-08-07 08:36:40.662567 I | op-osd: deployment for osd 20 already exists. updating if needed
2019-08-07 08:36:40.663682 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:40.667101 I | op-k8sutil: updating deployment rook-ceph-osd-20
2019-08-07 08:36:40.681619 D | op-k8sutil: deployment rook-ceph-osd-20 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:58 +0000 UTC LastTransitionTime:2019-08-06 14:28:58 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:58 +0000 UTC LastTransitionTime:2019-08-06 14:28:51 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-20-654fc7c8bb" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:36:40.712837 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:41.662468 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:41.822005 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:41.900727 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:41.916900 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:41.978881 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:42.461646 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:42.686962 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-20
2019-08-07 08:36:42.686998 I | op-osd: started deployment for osd 20 (dir=false, type=bluestore)
2019-08-07 08:36:42.687023 D | op-osd: start osd {31 /var/lib/rook/osd31 /var/lib/rook/osd31/rook-ceph-stage-primary.config ceph /var/lib/rook/osd31/keyring 2d708cb1-f88a-4c88-a7c7-b7c041be05dd false false true}
2019-08-07 08:36:42.697485 I | op-osd: deployment for osd 31 already exists. updating if needed
2019-08-07 08:36:42.701870 I | op-k8sutil: updating deployment rook-ceph-osd-31
2019-08-07 08:36:42.720457 D | op-k8sutil: deployment rook-ceph-osd-31 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:53 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-31-5db5d7b676" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:36:43.062416 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:43.107840 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:43.858142 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:44.320361 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:44.661565 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:44.725562 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-31
2019-08-07 08:36:44.725591 I | op-osd: started deployment for osd 31 (dir=false, type=bluestore)
2019-08-07 08:36:44.725619 D | op-osd: start osd {27 /var/lib/rook/osd27 /var/lib/rook/osd27/rook-ceph-stage-primary.config ceph /var/lib/rook/osd27/keyring 7bf33094-0c89-44fe-a1be-fd9507ec4f21 false false true}
2019-08-07 08:36:44.735679 I | op-osd: deployment for osd 27 already exists. updating if needed
2019-08-07 08:36:44.739825 I | op-k8sutil: updating deployment rook-ceph-osd-27
2019-08-07 08:36:44.756208 D | op-k8sutil: deployment rook-ceph-osd-27 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:52 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-27-7499b6bbb9" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:36:45.283358 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:45.740295 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:46.761295 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-27
2019-08-07 08:36:46.761337 I | op-osd: started deployment for osd 27 (dir=false, type=bluestore)
2019-08-07 08:36:46.761364 D | op-osd: start osd {35 /var/lib/rook/osd35 /var/lib/rook/osd35/rook-ceph-stage-primary.config ceph /var/lib/rook/osd35/keyring cb017176-928b-4db4-9cb6-66629080f53b false false true}
2019-08-07 08:36:46.773878 I | op-osd: deployment for osd 35 already exists. updating if needed
2019-08-07 08:36:46.778537 I | op-k8sutil: updating deployment rook-ceph-osd-35
2019-08-07 08:36:46.792475 D | op-k8sutil: deployment rook-ceph-osd-35 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:54 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-35-7b4689f654" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:36:48.164304 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:48.797500 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-35
2019-08-07 08:36:48.797539 I | op-osd: started deployment for osd 35 (dir=false, type=bluestore)
2019-08-07 08:36:48.797566 D | op-osd: start osd {39 /var/lib/rook/osd39 /var/lib/rook/osd39/rook-ceph-stage-primary.config ceph /var/lib/rook/osd39/keyring 8ab42eea-0fea-44c4-b5bc-1a2a02fbfd58 false false true}
2019-08-07 08:36:48.808161 I | op-osd: deployment for osd 39 already exists. updating if needed
2019-08-07 08:36:48.814164 I | op-k8sutil: updating deployment rook-ceph-osd-39
2019-08-07 08:36:48.870902 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-39
2019-08-07 08:36:48.870978 I | op-osd: started deployment for osd 39 (dir=false, type=bluestore)
2019-08-07 08:36:48.871002 D | op-osd: start osd {42 /var/lib/rook/osd42 /var/lib/rook/osd42/rook-ceph-stage-primary.config ceph /var/lib/rook/osd42/keyring 5c3d6a9f-c43f-4994-b4ed-91b28cd221b2 false false true}
2019-08-07 08:36:48.872148 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:48.881623 I | op-osd: deployment for osd 42 already exists. updating if needed
2019-08-07 08:36:48.885295 I | op-k8sutil: updating deployment rook-ceph-osd-42
2019-08-07 08:36:48.898363 D | op-k8sutil: deployment rook-ceph-osd-42 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:56 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-42-7db765d5db" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:36:49.170559 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:49.637252 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:50.365222 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:50.412468 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:50.428672 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:50.675412 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:50.729480 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:50.903315 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-42
2019-08-07 08:36:50.903348 I | op-osd: started deployment for osd 42 (dir=false, type=bluestore)
2019-08-07 08:36:50.903375 D | op-osd: start osd {8 /var/lib/rook/osd8 /var/lib/rook/osd8/rook-ceph-stage-primary.config ceph /var/lib/rook/osd8/keyring d8f10678-a97e-4dab-ad71-bd528eb8baf1 false false true}
2019-08-07 08:36:50.912547 I | op-osd: deployment for osd 8 already exists. updating if needed
2019-08-07 08:36:50.962204 I | op-k8sutil: updating deployment rook-ceph-osd-8
2019-08-07 08:36:50.978510 D | op-k8sutil: deployment rook-ceph-osd-8 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:29:01 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:28:58 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-8-865d7db956" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:36:51.662515 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:51.843726 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:51.932798 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:51.940246 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:52.006535 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:36:52.410110 I | operator: shutdown signal received, exiting...
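
This is one of the restarts the gist title refers to: the operator catches a shutdown signal mid-orchestration, and a new instance starts about two seconds later. To see the restart count and the log of the instance that just exited (sketch, assuming the default app=rook-ceph-operator pod label):

    kubectl -n rook-ceph-stage-primary get pod -l app=rook-ceph-operator
    # substitute the pod name printed above
    kubectl -n rook-ceph-stage-primary logs <rook-ceph-operator-pod> --previous
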
2019-08-07 08:36:54.281994 I | rookcmd: starting Rook v1.0.4 with arguments '/usr/local/bin/rook ceph operator'
2019-08-07 08:36:54.282184 I | rookcmd: flag values: --alsologtostderr=false, --csi-attacher-image=quay.io/k8scsi/csi-attacher:v1.0.1, --csi-cephfs-image=quay.io/cephcsi/cephfsplugin:v1.0.0, --csi-cephfs-plugin-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin.yaml, --csi-cephfs-provisioner-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin-provisioner.yaml, --csi-enable-cephfs=false, --csi-enable-rbd=false, --csi-provisioner-image=quay.io/k8scsi/csi-provisioner:v1.0.1, --csi-rbd-image=quay.io/cephcsi/rbdplugin:v1.0.0, --csi-rbd-plugin-template-path=/etc/ceph-csi/rbd/csi-rbdplugin.yaml, --csi-rbd-provisioner-template-path=/etc/ceph-csi/rbd/csi-rbdplugin-provisioner.yaml, --csi-registrar-image=quay.io/k8scsi/csi-node-driver-registrar:v1.0.2, --csi-snapshotter-image=quay.io/k8scsi/csi-snapshotter:v1.0.1, --help=false, --log-flush-frequency=5s, --log-level=DEBUG, --log_backtrace_at=:0, --log_dir=, --log_file=, --logtostderr=true, --mon-healthcheck-interval=45s, --mon-out-timeout=10m0s, --skip_headers=false, --stderrthreshold=2, --v=0, --vmodule=
2019-08-07 08:36:54.362541 I | cephcmd: starting operator
2019-08-07 08:36:55.586757 I | op-agent: getting flexvolume dir path from FLEXVOLUME_DIR_PATH env var
2019-08-07 08:36:55.586789 I | op-agent: discovered flexvolume dir path from source env var. value: /var/lib/kubelet/volumeplugins
2019-08-07 08:36:55.586805 W | op-agent: Invalid ROOK_ENABLE_FSGROUP value "". Defaulting to "true".
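
The warning above repeats on every start because ROOK_ENABLE_FSGROUP is unset in the operator's environment. A sketch for setting it explicitly (assuming the default deployment name rook-ceph-operator; note this rolls the operator pod, i.e. causes another restart):

    kubectl -n rook-ceph-stage-primary set env deploy/rook-ceph-operator \
        ROOK_ENABLE_FSGROUP=true
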
2019-08-07 08:36:55.673761 I | op-agent: rook-ceph-agent daemonset already exists, updating ...
2019-08-07 08:36:55.693969 I | op-discover: rook-discover daemonset already exists, updating ...
2019-08-07 08:36:55.701410 I | operator: rook-provisioner ceph.rook.io/block started using ceph.rook.io flex vendor dir
I0807 08:36:55.701553 8 leaderelection.go:217] attempting to acquire leader lease rook-ceph-stage-primary/ceph.rook.io-block...
2019-08-07 08:36:55.702154 I | operator: rook-provisioner rook.io/block started using rook.io flex vendor dir
2019-08-07 08:36:55.702186 I | operator: Watching the current namespace for a cluster CRD
2019-08-07 08:36:55.702201 I | op-cluster: start watching clusters in all namespaces
2019-08-07 08:36:55.702248 I | op-cluster: Enabling hotplug orchestration: ROOK_DISABLE_DEVICE_HOTPLUG=
I0807 08:36:55.703193 8 leaderelection.go:217] attempting to acquire leader lease rook-ceph-stage-primary/rook.io-block...
2019-08-07 08:36:56.064169 I | op-cluster: start watching legacy rook clusters in all namespaces
2019-08-07 08:36:56.067322 I | op-cluster: starting cluster in namespace rook-ceph-stage-primary
2019-08-07 08:36:56.068077 D | op-cluster: Skipping -> Node is not tolerable for cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068100 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068112 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068123 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068134 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068144 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068156 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068167 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068181 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068191 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068201 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068211 D | op-cluster: Skipping -> Node is not tolerable for cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068222 D | op-cluster: Skipping -> Node is not tolerable for cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068235 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068246 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068256 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068267 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068277 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068288 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068299 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068309 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.068319 D | op-cluster: Skipping -> Do not use all Nodes in cluster rook-ceph-stage-primary
2019-08-07 08:36:56.164668 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and is still schedulable
2019-08-07 08:36:58.177190 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and is still schedulable
2019-08-07 08:36:58.889299 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and is still schedulable
2019-08-07 08:36:59.192814 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and is still schedulable
2019-08-07 08:36:59.655153 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:00.387333 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:00.428345 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:00.444871 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:00.688433 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:00.758240 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:01.677052 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:01.864494 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:01.956639 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:01.961836 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:02.033898 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:02.084948 I | op-k8sutil: waiting for job rook-ceph-detect-version to complete...
2019-08-07 08:37:02.508553 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:03.071183 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:03.162293 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:03.903045 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:04.352607 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:04.702682 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:05.338086 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:05.780539 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:07.190026 I | op-cluster: Detected ceph image version: 14.2.1 nautilus
2019-08-07 08:37:07.190061 I | op-cluster: CephCluster rook-ceph-stage-primary status: Creating
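The operator has matched the running image to Nautilus 14.2.1 and moved the CephCluster resource to Creating. A minimal way to read that phase back by hand (not from the log; assumes kubectl access to the namespace and the status.state field that Rook v1.0 populates on the CR):

$ kubectl -n rook-ceph-stage-primary get cephcluster rook-ceph-stage-primary -o jsonpath='{.status.state}'
Creating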
2019-08-07 08:37:07.213116 D | op-mon: Acquiring lock for mon orchestration
2019-08-07 08:37:07.213140 D | op-mon: Acquired lock for mon orchestration
2019-08-07 08:37:07.213151 I | op-mon: start running mons
2019-08-07 08:37:07.213159 D | op-mon: establishing ceph cluster info
2019-08-07 08:37:07.218731 D | op-mon: found existing monitor secrets for cluster rook-ceph-stage-primary
2019-08-07 08:37:07.223113 I | op-mon: parsing mon endpoints: i=100.70.92.237:6789,b=100.67.17.84:6789,j=100.79.195.199:6789,f=100.69.115.5:6789,g=100.66.122.247:6789,h=100.64.242.138:6789
2019-08-07 08:37:07.223312 I | op-mon: loaded: maxMonID=9, mons=map[i:0xc000a861e0 b:0xc000a86240 j:0xc000a862a0 f:0xc000a86300 g:0xc000a863a0 h:0xc000a86460], mapping=&{Node:map[j:0xc0007cd890 b:0xc0007cd740 f:0xc0007cd7d0 g:0xc0007cd800 h:0xc0007cd830 i:0xc0007cd860] Port:map[]}
2019-08-07 08:37:07.231862 D | op-mon: updating config map rook-ceph-mon-endpoints that already exists
2019-08-07 08:37:07.262139 I | op-mon: saved mon endpoints to config map map[maxMonId:9 mapping:{"node":{"b":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"},"f":{"Name":"k8s-worker-102.lxstage.domain.com","Hostname":"k8s-worker-102.lxstage.domain.com","Address":"172.22.254.186"},"g":{"Name":"k8s-worker-103.lxstage.domain.com","Hostname":"k8s-worker-103.lxstage.domain.com","Address":"172.22.254.185"},"h":{"Name":"k8s-worker-104.lxstage.domain.com","Hostname":"k8s-worker-104.lxstage.domain.com","Address":"172.22.254.187"},"i":{"Name":"k8s-worker-01.lxstage.domain.com","Hostname":"k8s-worker-01.lxstage.domain.com","Address":"172.22.254.150"},"j":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"}},"port":{}} data:f=100.69.115.5:6789,g=100.66.122.247:6789,h=100.64.242.138:6789,i=100.70.92.237:6789,b=100.67.17.84:6789,j=100.79.195.199:6789]
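The endpoint list and mon-to-node mapping above are persisted in the rook-ceph-mon-endpoints ConfigMap, which the operator reloads on every orchestration pass. A quick manual check of the same data (not part of the log; assumes kubectl access):

$ kubectl -n rook-ceph-stage-primary get configmap rook-ceph-mon-endpoints -o jsonpath='{.data.data}'
f=100.69.115.5:6789,g=100.66.122.247:6789,h=100.64.242.138:6789,i=100.70.92.237:6789,b=100.67.17.84:6789,j=100.79.195.199:6789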
2019-08-07 08:37:07.294566 D | op-config: Generated and stored config file:
[global]
mon_allow_pool_delete = true
mon_max_pg_per_osd = 1000
osd_pg_bits = 11
osd_pgp_bits = 11
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 100
osd_pool_default_pgp_num = 100
rbd_default_features = 3
fatal_signal_handlers = false
osd pool default pg num = 512
osd pool default pgp num = 512
osd pool default size = 3
osd pool default min size = 2
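Note the two spellings in the block above: Ceph treats spaces and underscores in option names as equivalent, and when a key repeats in a section the last value wins. The space-separated entries at the bottom are presumably the cluster's user-supplied overrides, so the effective settings are size 3, min_size 2 and 512 PGs rather than the 1/1/100 defaults listed first. One way to confirm what a daemon actually loaded (not from the log; run from inside a mon pod, assuming its admin socket is reachable):

$ ceph daemon mon.b config get osd_pool_default_size
{
    "osd_pool_default_size": "3"
}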
2019-08-07 08:37:07.494413 D | op-config: updating config secret &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-config,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[{ceph.rook.io/v1 CephCluster rook-ceph-stage-primary 76235f05-b792-11e9-9b32-0050568460f6 <nil> 0xc000beeb2c}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string][]byte{},Type:kubernetes.io/rook,StringData:map[string]string{mon_host: [v2:100.70.92.237:3300,v1:100.70.92.237:6789],[v2:100.67.17.84:3300,v1:100.67.17.84:6789],[v2:100.79.195.199:3300,v1:100.79.195.199:6789],[v2:100.69.115.5:3300,v1:100.69.115.5:6789],[v2:100.66.122.247:3300,v1:100.66.122.247:6789],[v2:100.64.242.138:3300,v1:100.64.242.138:6789],mon_initial_members: i,b,j,f,g,h,},}
2019-08-07 08:37:07.696899 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:37:07.697093 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:37:07.697404 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:37:07.895027 D | op-cfg-keyring: updating secret for rook-ceph-mons-keyring
2019-08-07 08:37:08.197607 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:08.294158 D | op-cfg-keyring: updating secret for rook-ceph-admin-keyring
2019-08-07 08:37:08.904224 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:08.911830 I | op-mon: targeting the mon count 5
2019-08-07 08:37:09.273761 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:09.365238 D | op-mon: there are 22 nodes available for 6 mons
2019-08-07 08:37:09.380671 D | op-mon: mon pod on node k8s-worker-102.lxstage.domain.com
2019-08-07 08:37:09.380707 D | op-mon: mon pod on node k8s-worker-103.lxstage.domain.com
2019-08-07 08:37:09.380720 D | op-mon: mon pod on node k8s-worker-104.lxstage.domain.com
2019-08-07 08:37:09.380732 D | op-mon: mon pod on node k8s-worker-01.lxstage.domain.com
2019-08-07 08:37:09.380742 D | op-mon: mon pod on node k8s-worker-00.lxstage.domain.com
2019-08-07 08:37:09.380797 I | op-mon: Found 15 running nodes without mons
2019-08-07 08:37:09.380809 D | op-mon: mon i already assigned to a node, no need to assign
2019-08-07 08:37:09.380816 D | op-mon: mon b already assigned to a node, no need to assign
2019-08-07 08:37:09.380824 D | op-mon: mon j already assigned to a node, no need to assign
2019-08-07 08:37:09.380831 D | op-mon: mon f already assigned to a node, no need to assign
2019-08-07 08:37:09.380838 D | op-mon: mon g already assigned to a node, no need to assign
2019-08-07 08:37:09.380845 D | op-mon: mon h already assigned to a node, no need to assign
2019-08-07 08:37:09.380852 D | op-mon: mons have been assigned to nodes
2019-08-07 08:37:09.380859 I | op-mon: checking for basic quorum with existing mons
2019-08-07 08:37:09.380874 D | op-k8sutil: creating service rook-ceph-mon-i
2019-08-07 08:37:09.528207 D | op-k8sutil: updating service rook-ceph-mon-i
2019-08-07 08:37:09.670563 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:10.099011 I | op-mon: mon i endpoints are [v2:100.70.92.237:3300,v1:100.70.92.237:6789]
2019-08-07 08:37:10.099050 D | op-k8sutil: creating service rook-ceph-mon-b
2019-08-07 08:37:10.325974 D | op-k8sutil: updating service rook-ceph-mon-b
2019-08-07 08:37:10.445467 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:10.452985 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:10.464227 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:10.696662 I | op-mon: mon b endpoints are [v2:100.67.17.84:3300,v1:100.67.17.84:6789]
2019-08-07 08:37:10.696700 D | op-k8sutil: creating service rook-ceph-mon-j
2019-08-07 08:37:10.707046 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:10.780816 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:10.926028 D | op-k8sutil: updating service rook-ceph-mon-j
2019-08-07 08:37:11.297828 I | op-mon: mon j endpoints are [v2:100.79.195.199:3300,v1:100.79.195.199:6789]
2019-08-07 08:37:11.297870 D | op-k8sutil: creating service rook-ceph-mon-f
2019-08-07 08:37:11.525762 D | op-k8sutil: updating service rook-ceph-mon-f
2019-08-07 08:37:11.695871 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:11.886896 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:11.897434 I | op-mon: mon f endpoints are [v2:100.69.115.5:3300,v1:100.69.115.5:6789]
2019-08-07 08:37:11.897467 D | op-k8sutil: creating service rook-ceph-mon-g
2019-08-07 08:37:11.985290 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:11.995013 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:12.070953 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:12.125155 D | op-k8sutil: updating service rook-ceph-mon-g
2019-08-07 08:37:12.496165 I | op-mon: mon g endpoints are [v2:100.66.122.247:3300,v1:100.66.122.247:6789]
2019-08-07 08:37:12.496199 D | op-k8sutil: creating service rook-ceph-mon-h
2019-08-07 08:37:12.562731 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:12.723364 D | op-k8sutil: updating service rook-ceph-mon-h
2019-08-07 08:37:13.095758 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:13.175801 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and is still schedulable
I0807 08:37:13.295465 8 leaderelection.go:227] successfully acquired lease rook-ceph-stage-primary/ceph.rook.io-block
I0807 08:37:13.295588 8 controller.go:769] Starting provisioner controller ceph.rook.io/block_rook-ceph-operator-6b8b758497-qk4dz_835ac704-b8ee-11e9-847f-429adf364821!
I0807 08:37:13.295751 8 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph-stage-primary", Name:"ceph.rook.io-block", UID:"782fe0db-b792-11e9-9b32-0050568460f6", APIVersion:"v1", ResourceVersion:"309448915", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-6b8b758497-qk4dz_835ac704-b8ee-11e9-847f-429adf364821 became leader
2019-08-07 08:37:13.497429 I | op-mon: mon h endpoints are [v2:100.64.242.138:3300,v1:100.64.242.138:6789]
2019-08-07 08:37:13.913998 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:14.366579 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and is still schedulable
I0807 08:37:14.395970 8 controller.go:818] Started provisioner controller ceph.rook.io/block_rook-ceph-operator-6b8b758497-qk4dz_835ac704-b8ee-11e9-847f-429adf364821!
I0807 08:37:14.396081 8 controller.go:1196] provision "admin-d0277887/datadir-zk-1" class "default": started
2019-08-07 08:37:14.697347 D | op-mon: updating config map rook-ceph-mon-endpoints that already exists
2019-08-07 08:37:14.715806 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and is still schedulable
I0807 08:37:15.098993 8 controller.go:1205] provision "admin-d0277887/datadir-zk-1" class "default": persistentvolume "pvc-95509a25-9be6-11e9-9a2e-0050568460f6" already exists, skipping
I0807 08:37:15.295681 8 leaderelection.go:227] successfully acquired lease rook-ceph-stage-primary/rook.io-block
I0807 08:37:15.295816 8 controller.go:769] Starting provisioner controller rook.io/block_rook-ceph-operator-6b8b758497-qk4dz_835af5d2-b8ee-11e9-847f-429adf364821!
I0807 08:37:15.295813 8 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph-stage-primary", Name:"rook.io-block", UID:"782fdbfc-b792-11e9-9b32-0050568460f6", APIVersion:"v1", ResourceVersion:"309448953", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-6b8b758497-qk4dz_835af5d2-b8ee-11e9-847f-429adf364821 became leader
2019-08-07 08:37:15.374374 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:15.499397 I | op-mon: saved mon endpoints to config map map[data:j=100.79.195.199:6789,f=100.69.115.5:6789,g=100.66.122.247:6789,h=100.64.242.138:6789,i=100.70.92.237:6789,b=100.67.17.84:6789 maxMonId:9 mapping:{"node":{"b":{"Name":"k8s-worker-101.lxstage.domain.com","Hostname":"k8s-worker-101.lxstage.domain.com","Address":"172.22.254.183"},"f":{"Name":"k8s-worker-102.lxstage.domain.com","Hostname":"k8s-worker-102.lxstage.domain.com","Address":"172.22.254.186"},"g":{"Name":"k8s-worker-103.lxstage.domain.com","Hostname":"k8s-worker-103.lxstage.domain.com","Address":"172.22.254.185"},"h":{"Name":"k8s-worker-104.lxstage.domain.com","Hostname":"k8s-worker-104.lxstage.domain.com","Address":"172.22.254.187"},"i":{"Name":"k8s-worker-01.lxstage.domain.com","Hostname":"k8s-worker-01.lxstage.domain.com","Address":"172.22.254.150"},"j":{"Name":"k8s-worker-00.lxstage.domain.com","Hostname":"k8s-worker-00.lxstage.domain.com","Address":"172.22.254.105"}},"port":{}}]
2019-08-07 08:37:15.806368 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and is still schedulable
I0807 08:37:16.296186 8 controller.go:818] Started provisioner controller rook.io/block_rook-ceph-operator-6b8b758497-qk4dz_835af5d2-b8ee-11e9-847f-429adf364821!
2019-08-07 08:37:17.094602 D | op-config: Generated and stored config file:
[global]
mon_allow_pool_delete = true
mon_max_pg_per_osd = 1000
osd_pg_bits = 11
osd_pgp_bits = 11
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 100
osd_pool_default_pgp_num = 100
rbd_default_features = 3
fatal_signal_handlers = false
osd pool default pg num = 512
osd pool default pgp num = 512
osd pool default size = 3
osd pool default min size = 2
2019-08-07 08:37:17.494427 D | op-config: updating config secret &Secret{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-config,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[{ceph.rook.io/v1 CephCluster rook-ceph-stage-primary 76235f05-b792-11e9-9b32-0050568460f6 <nil> 0xc000beeb2c}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string][]byte{},Type:kubernetes.io/rook,StringData:map[string]string{mon_host: [v2:100.64.242.138:3300,v1:100.64.242.138:6789],[v2:100.70.92.237:3300,v1:100.70.92.237:6789],[v2:100.67.17.84:3300,v1:100.67.17.84:6789],[v2:100.79.195.199:3300,v1:100.79.195.199:6789],[v2:100.69.115.5:3300,v1:100.69.115.5:6789],[v2:100.66.122.247:3300,v1:100.66.122.247:6789],mon_initial_members: h,i,b,j,f,g,},}
2019-08-07 08:37:17.896683 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:37:17.896882 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:37:17.897044 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:37:17.897357 I | cephconfig: writing config file /var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config
2019-08-07 08:37:17.897494 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-08-07 08:37:17.897616 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-stage-primary
2019-08-07 08:37:17.897643 D | op-mon: monConfig: &{rook-ceph-mon-i i 100.70.92.237 6789 0xc000b87680}
2019-08-07 08:37:17.897825 D | op-mon: Starting mon: rook-ceph-mon-i
2019-08-07 08:37:17.910656 I | op-mon: deployment for mon rook-ceph-mon-i already exists. updating if needed
2019-08-07 08:37:17.923756 I | op-k8sutil: updating deployment rook-ceph-mon-i
2019-08-07 08:37:17.936574 D | op-k8sutil: deployment rook-ceph-mon-i status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-07 08:23:31 +0000 UTC LastTransitionTime:2019-08-07 08:23:31 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:23:31 +0000 UTC LastTransitionTime:2019-08-07 08:23:22 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-i-c64d84df8" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:37:18.263959 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:18.919777 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:19.262415 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:19.694777 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:19.942651 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-i
2019-08-07 08:37:19.942704 D | op-mon: monConfig: &{rook-ceph-mon-b b 100.67.17.84 6789 0xc000b876d0}
2019-08-07 08:37:19.942879 D | op-mon: Starting mon: rook-ceph-mon-b
2019-08-07 08:37:19.951595 D | op-mon: monConfig: &{rook-ceph-mon-j j 100.79.195.199 6789 0xc000b87720}
2019-08-07 08:37:19.951770 D | op-mon: Starting mon: rook-ceph-mon-j
2019-08-07 08:37:19.983465 I | op-mon: deployment for mon rook-ceph-mon-j already exists. updating if needed
2019-08-07 08:37:20.005295 I | op-k8sutil: updating deployment rook-ceph-mon-j
2019-08-07 08:37:20.032897 D | op-k8sutil: deployment rook-ceph-mon-j status={ObservedGeneration:1 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-07 08:34:38 +0000 UTC LastTransitionTime:2019-08-07 08:34:38 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:34:38 +0000 UTC LastTransitionTime:2019-08-07 08:34:35 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-j-5f4f668ddc" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:37:20.464025 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:20.475453 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:20.485852 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:20.726082 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:20.799963 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:21.763031 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:21.910615 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:22.004878 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:22.014576 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:22.037497 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-j
2019-08-07 08:37:22.037539 D | op-mon: monConfig: &{rook-ceph-mon-f f 100.69.115.5 6789 0xc000b87770}
2019-08-07 08:37:22.037706 D | op-mon: Starting mon: rook-ceph-mon-f
2019-08-07 08:37:22.047756 I | op-mon: deployment for mon rook-ceph-mon-f already exists. updating if needed
2019-08-07 08:37:22.052444 I | op-k8sutil: updating deployment rook-ceph-mon-f
2019-08-07 08:37:22.066378 D | op-k8sutil: deployment rook-ceph-mon-f status={ObservedGeneration:4 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-07 08:16:34 +0000 UTC LastTransitionTime:2019-08-07 08:16:34 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:16:34 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-f-7966c549fb" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:37:22.103803 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:22.549935 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:23.121151 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:23.194541 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:23.927504 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:24.071506 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-f
2019-08-07 08:37:24.071548 D | op-mon: monConfig: &{rook-ceph-mon-g g 100.66.122.247 6789 0xc000b877c0}
2019-08-07 08:37:24.071719 D | op-mon: Starting mon: rook-ceph-mon-g
2019-08-07 08:37:24.081053 I | op-mon: deployment for mon rook-ceph-mon-g already exists. updating if needed
2019-08-07 08:37:24.087359 I | op-k8sutil: updating deployment rook-ceph-mon-g
2019-08-07 08:37:24.185559 D | op-k8sutil: deployment rook-ceph-mon-g status={ObservedGeneration:4 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-07 08:16:54 +0000 UTC LastTransitionTime:2019-08-07 08:16:54 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:16:54 +0000 UTC LastTransitionTime:2019-08-06 14:22:03 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-g-6b49f6c769" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:37:24.382182 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:24.734859 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:25.391250 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:25.826256 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:26.191511 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-g
2019-08-07 08:37:26.191563 D | op-mon: monConfig: &{rook-ceph-mon-h h 100.64.242.138 6789 0xc000b87810}
2019-08-07 08:37:26.191734 D | op-mon: Starting mon: rook-ceph-mon-h
2019-08-07 08:37:26.201153 I | op-mon: deployment for mon rook-ceph-mon-h already exists. updating if needed
2019-08-07 08:37:26.205563 I | op-k8sutil: updating deployment rook-ceph-mon-h
2019-08-07 08:37:26.219027 D | op-k8sutil: deployment rook-ceph-mon-h status={ObservedGeneration:2 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-07 08:29:57 +0000 UTC LastTransitionTime:2019-08-07 08:29:57 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-07 08:29:57 +0000 UTC LastTransitionTime:2019-08-07 08:29:53 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mon-h-858f958" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:37:28.224186 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mon-h
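Each mon deployment is updated in sequence and the operator blocks until the rollout is observed. The equivalent manual wait (not from the log; assumes kubectl access):

$ kubectl -n rook-ceph-stage-primary rollout status deployment/rook-ceph-mon-h
deployment "rook-ceph-mon-h" successfully rolled out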
2019-08-07 08:37:28.224221 I | op-mon: mons created: 6
2019-08-07 08:37:28.224243 I | op-mon: waiting for mon quorum with [i b j f g h]
2019-08-07 08:37:28.264192 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:28.480393 I | op-mon: mons running: [i b j f g h]
2019-08-07 08:37:28.480873 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/677721698
2019-08-07 08:37:28.962550 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:29.264015 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:29.763979 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:30.482338 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:30.498318 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:30.562053 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:30.564962 I | exec: 2019-08-07 08:37:29.369 7fc9bf814700 1 librados: starting msgr at
2019-08-07 08:37:29.369 7fc9bf814700 1 librados: starting objecter
2019-08-07 08:37:29.370 7fc9bf814700 1 librados: setting wanted keys
2019-08-07 08:37:29.370 7fc9bf814700 1 librados: calling monclient init
2019-08-07 08:37:29.375 7fc9bf814700 1 librados: init done
2019-08-07 08:37:30.381 7fc9bf814700 10 librados: watch_flush enter
2019-08-07 08:37:30.381 7fc9bf814700 10 librados: watch_flush exit
2019-08-07 08:37:30.383 7fc9bf814700 1 librados: shutdown
2019-08-07 08:37:30.565167 D | cephclient: MON STATUS: {Quorum:[1 2 3 4 5] MonMap:{Mons:[{Name:b Rank:0 Address:100.67.17.84:6789/0} {Name:f Rank:1 Address:100.69.115.5:6789/0} {Name:g Rank:2 Address:100.66.122.247:6789/0} {Name:h Rank:3 Address:100.64.242.138:6789/0} {Name:i Rank:4 Address:100.70.92.237:6789/0} {Name:j Rank:5 Address:100.79.195.199:6789/0}]}}
2019-08-07 08:37:30.565176 I | op-mon: Monitors in quorum: [f g h i j]
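The quorum ranks [1 2 3 4 5] in the mon_status output map to mons f, g, h, i and j; rank 0 (mon b) is out of quorum at this instant, which is why only five of the six running mons are listed. The same view is available directly from the CLI (not from the log):

$ ceph quorum_status --format json-pretty   # names and ranks currently in quorum
$ ceph mon stat                             # one-line epoch/quorum summary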
2019-08-07 08:37:30.565183 I | exec: Running command: ceph version
2019-08-07 08:37:30.764223 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:30.863690 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:31.764521 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:31.962390 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:32.062664 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:32.064006 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:32.162117 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:32.574160 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:32.762180 D | cephclient: ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
2019-08-07 08:37:32.762229 D | cephclient: ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)
2019-08-07 08:37:32.762263 I | exec: Running command: ceph versions
2019-08-07 08:37:33.163984 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:33.213658 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:33.946737 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:34.463820 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:34.763402 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:35.071243 D | cephclient: {
    "mon": {
        "ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 6
    },
    "mgr": {
        "ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 1
    },
    "osd": {
        "ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 48
    },
    "mds": {},
    "overall": {
        "ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 55
    }
}
2019-08-07 08:37:35.071276 D | cephclient: {
    "mon": {
        "ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 6
    },
    "mgr": {
        "ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 1
    },
    "osd": {
        "ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 48
    },
    "mds": {},
    "overall": {
        "ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable)": 55
    }
}
2019-08-07 08:37:35.071383 I | exec: Running command: ceph mon enable-msgr2
2019-08-07 08:37:35.463860 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:35.848514 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:37.177108 I | cephclient: successfully enabled msgr2 protocol
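With msgr2 enabled, every mon advertises a v2 (port 3300) address alongside the legacy v1 (port 6789) one, matching the [v2:...,v1:...] pairs written to the config secret earlier. A spot check (not from the log; the output line is the typical Nautilus ceph mon dump shape, using mon b's address from this cluster):

$ ceph mon dump
...
0: [v2:100.67.17.84:3300/0,v1:100.67.17.84:6789/0] mon.b
...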
2019-08-07 08:37:37.177162 D | op-mon: mon endpoints used are: i=100.70.92.237:6789,b=100.67.17.84:6789,j=100.79.195.199:6789,f=100.69.115.5:6789,g=100.66.122.247:6789,h=100.64.242.138:6789
2019-08-07 08:37:37.177175 D | op-mon: Released lock for mon orchestration
2019-08-07 08:37:37.177194 I | op-mgr: start running mgr
2019-08-07 08:37:37.177427 I | exec: Running command: ceph auth get-or-create-key mgr.a mon allow * mds allow * osd allow * --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/886605913
2019-08-07 08:37:38.263251 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:38.964145 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:39.270325 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:39.570772 I | exec: 2019-08-07 08:37:38.169 7f69b2e02700 1 librados: starting msgr at
2019-08-07 08:37:38.169 7f69b2e02700 1 librados: starting objecter
2019-08-07 08:37:38.169 7f69b2e02700 1 librados: setting wanted keys
2019-08-07 08:37:38.169 7f69b2e02700 1 librados: calling monclient init
2019-08-07 08:37:38.175 7f69b2e02700 1 librados: init done
2019-08-07 08:37:39.500 7f69b2e02700 10 librados: watch_flush enter
2019-08-07 08:37:39.500 7f69b2e02700 10 librados: watch_flush exit
2019-08-07 08:37:39.501 7f69b2e02700 1 librados: shutdown
2019-08-07 08:37:39.575622 D | op-mgr: legacy mgr key rook-ceph-mgr-a is already removed
2019-08-07 08:37:39.579387 D | op-cfg-keyring: updating secret for rook-ceph-mgr-a-keyring
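The mgr.a key fetched above with ceph auth get-or-create-key now lives in the rook-ceph-mgr-a-keyring secret. Its capabilities can be reviewed at any time (not from the log; key value redacted, caps as requested in the command at 08:37:37):

$ ceph auth get mgr.a
[mgr.a]
    key = <redacted>
    caps mds = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"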
2019-08-07 08:37:39.583472 I | exec: Running command: ceph config-key get mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/188531172
2019-08-07 08:37:39.763819 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:40.564101 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:40.565285 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:40.566472 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:40.757211 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:40.863132 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:41.763873 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:41.905149 I | exec: 2019-08-07 08:37:40.572 7f7524d8b700 1 librados: starting msgr at
2019-08-07 08:37:40.572 7f7524d8b700 1 librados: starting objecter
2019-08-07 08:37:40.572 7f7524d8b700 1 librados: setting wanted keys
2019-08-07 08:37:40.572 7f7524d8b700 1 librados: calling monclient init
2019-08-07 08:37:40.665 7f7524d8b700 1 librados: init done
Error ENOENT: error obtaining 'mgr/dashboard/server_addr': (2) No such file or directory
2019-08-07 08:37:41.773 7f7524d8b700 10 librados: watch_flush enter
2019-08-07 08:37:41.773 7f7524d8b700 10 librados: watch_flush exit
2019-08-07 08:37:41.860 7f7524d8b700 1 librados: shutdown
2019-08-07 08:37:41.905427 I | exec: Running command: ceph config-key del mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/451529203
2019-08-07 08:37:41.963901 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:42.062457 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:42.063769 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:42.163471 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:42.662579 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:43.162337 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:43.262137 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:43.963241 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:44.275039 I | exec: 2019-08-07 08:37:42.962 7fbe3b7cf700 1 librados: starting msgr at
2019-08-07 08:37:42.962 7fbe3b7cf700 1 librados: starting objecter
2019-08-07 08:37:42.962 7fbe3b7cf700 1 librados: setting wanted keys
2019-08-07 08:37:42.962 7fbe3b7cf700 1 librados: calling monclient init
2019-08-07 08:37:42.968 7fbe3b7cf700 1 librados: init done
no such key 'mgr/dashboard/server_addr'
2019-08-07 08:37:44.196 7fbe3b7cf700 10 librados: watch_flush enter
2019-08-07 08:37:44.196 7fbe3b7cf700 10 librados: watch_flush exit
2019-08-07 08:37:44.197 7fbe3b7cf700 1 librados: shutdown
2019-08-07 08:37:44.275232 I | op-mgr: clearing http bind fix mod=dashboard ver=12.0.0 luminous changed=false err=<nil>
2019-08-07 08:37:44.275380 I | exec: Running command: ceph config-key get mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/246967990
2019-08-07 08:37:44.463677 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:44.773259 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:45.463881 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:45.874077 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:46.562122 I | exec: 2019-08-07 08:37:45.183 7faaa392d700 1 librados: starting msgr at
2019-08-07 08:37:45.183 7faaa392d700 1 librados: starting objecter
2019-08-07 08:37:45.261 7faaa392d700 1 librados: setting wanted keys
2019-08-07 08:37:45.261 7faaa392d700 1 librados: calling monclient init
2019-08-07 08:37:45.268 7faaa392d700 1 librados: init done
Error ENOENT: error obtaining 'mgr/dashboard/a/server_addr': (2) No such file or directory
2019-08-07 08:37:46.382 7faaa392d700 10 librados: watch_flush enter
2019-08-07 08:37:46.383 7faaa392d700 10 librados: watch_flush exit
2019-08-07 08:37:46.461 7faaa392d700 1 librados: shutdown
2019-08-07 08:37:46.562388 I | exec: Running command: ceph config-key del mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/411191453
2019-08-07 08:37:48.362795 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:48.776894 I | exec: 2019-08-07 08:37:47.388 7f092d1e0700 1 librados: starting msgr at
2019-08-07 08:37:47.388 7f092d1e0700 1 librados: starting objecter
2019-08-07 08:37:47.388 7f092d1e0700 1 librados: setting wanted keys
2019-08-07 08:37:47.388 7f092d1e0700 1 librados: calling monclient init
2019-08-07 08:37:47.466 7f092d1e0700 1 librados: init done
no such key 'mgr/dashboard/a/server_addr'
2019-08-07 08:37:48.703 7f092d1e0700 10 librados: watch_flush enter
2019-08-07 08:37:48.703 7f092d1e0700 10 librados: watch_flush exit
2019-08-07 08:37:48.705 7f092d1e0700 1 librados: shutdown
2019-08-07 08:37:48.777170 I | op-mgr: clearing http bind fix mod=dashboard ver=12.0.0 luminous changed=false err=<nil>
2019-08-07 08:37:48.777356 I | exec: Running command: ceph config get mgr.a mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/442887000
2019-08-07 08:37:48.976370 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:49.288164 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:49.763823 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:50.563994 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:50.565141 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:50.566316 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:50.779718 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:50.862155 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:51.175253 I | exec: 2019-08-07 08:37:49.688 7fae4c72a700 1 librados: starting msgr at
2019-08-07 08:37:49.688 7fae4c72a700 1 librados: starting objecter
2019-08-07 08:37:49.761 7fae4c72a700 1 librados: setting wanted keys
2019-08-07 08:37:49.761 7fae4c72a700 1 librados: calling monclient init
2019-08-07 08:37:49.767 7fae4c72a700 1 librados: init done
2019-08-07 08:37:51.069 7fae4c72a700 10 librados: watch_flush enter
2019-08-07 08:37:51.069 7fae4c72a700 10 librados: watch_flush exit
2019-08-07 08:37:51.071 7fae4c72a700 1 librados: shutdown
2019-08-07 08:37:51.175563 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/555459799
2019-08-07 08:37:51.781874 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:51.970665 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:52.071488 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:52.162282 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:52.190338 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:52.663933 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:53.183760 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:53.262169 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:53.476729 I | exec: 2019-08-07 08:37:52.169 7f0c1c15c700 1 librados: starting msgr at
2019-08-07 08:37:52.169 7f0c1c15c700 1 librados: starting objecter
2019-08-07 08:37:52.170 7f0c1c15c700 1 librados: setting wanted keys
2019-08-07 08:37:52.170 7f0c1c15c700 1 librados: calling monclient init
2019-08-07 08:37:52.263 7f0c1c15c700 1 librados: init done
2019-08-07 08:37:53.392 7f0c1c15c700 10 librados: watch_flush enter
2019-08-07 08:37:53.392 7f0c1c15c700 10 librados: watch_flush exit
2019-08-07 08:37:53.393 7f0c1c15c700 1 librados: shutdown
2019-08-07 08:37:53.476879 I | op-mgr: clearing http bind fix mod=dashboard ver=13.0.0 mimic changed=true err=<nil>
2019-08-07 08:37:53.477026 I | exec: Running command: ceph config get mgr.a mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/795025994
2019-08-07 08:37:53.978355 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:54.437702 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:54.790428 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:55.463725 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:55.768297 I | exec: 2019-08-07 08:37:54.385 7ff75875c700 1 librados: starting msgr at
2019-08-07 08:37:54.385 7ff75875c700 1 librados: starting objecter
2019-08-07 08:37:54.460 7ff75875c700 1 librados: setting wanted keys
2019-08-07 08:37:54.460 7ff75875c700 1 librados: calling monclient init
2019-08-07 08:37:54.467 7ff75875c700 1 librados: init done
2019-08-07 08:37:55.580 7ff75875c700 10 librados: watch_flush enter
2019-08-07 08:37:55.580 7ff75875c700 10 librados: watch_flush exit
2019-08-07 08:37:55.661 7ff75875c700 1 librados: shutdown
2019-08-07 08:37:55.768595 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/311661857
2019-08-07 08:37:55.963863 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:57.973018 I | exec: 2019-08-07 08:37:56.688 7fc75f2cf700 1 librados: starting msgr at
2019-08-07 08:37:56.688 7fc75f2cf700 1 librados: starting objecter
2019-08-07 08:37:56.688 7fc75f2cf700 1 librados: setting wanted keys
2019-08-07 08:37:56.688 7fc75f2cf700 1 librados: calling monclient init
2019-08-07 08:37:56.771 7fc75f2cf700 1 librados: init done
2019-08-07 08:37:57.868 7fc75f2cf700 10 librados: watch_flush enter
2019-08-07 08:37:57.868 7fc75f2cf700 10 librados: watch_flush exit
2019-08-07 08:37:57.870 7fc75f2cf700 1 librados: shutdown
2019-08-07 08:37:57.973195 I | op-mgr: clearing http bind fix mod=dashboard ver=13.0.0 mimic changed=true err=<nil>
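Everything from 08:37:39 to this point is Rook's "http bind fix" for the dashboard module: before (re)starting the mgr, the operator probes and clears any stale server_addr pins, first in the luminous-style config-key store and then in the mimic+ per-daemon config database, so the dashboard binds wherever the new mgr pod lands. Condensed from the commands in the log (a sketch; the same sequence repeats for the prometheus module below):

# luminous-style locations (config-key store); ENOENT here is expected
$ ceph config-key get mgr/dashboard/server_addr
$ ceph config-key del mgr/dashboard/server_addr
$ ceph config-key get mgr/dashboard/a/server_addr
$ ceph config-key del mgr/dashboard/a/server_addr
# mimic+ locations (per-daemon config database)
$ ceph config get mgr.a mgr/dashboard/server_addr && ceph config rm mgr.a mgr/dashboard/server_addr
$ ceph config get mgr.a mgr/dashboard/a/server_addr && ceph config rm mgr.a mgr/dashboard/a/server_addr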
2019-08-07 08:37:57.973332 I | exec: Running command: ceph config-key get mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/550114316
2019-08-07 08:37:58.363741 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:59.062354 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:59.369006 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and is still schedulable
2019-08-07 08:37:59.764841 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:00.372555 I | exec: 2019-08-07 08:37:58.865 7f6f65835700 1 librados: starting msgr at
2019-08-07 08:37:58.865 7f6f65835700 1 librados: starting objecter
2019-08-07 08:37:58.866 7f6f65835700 1 librados: setting wanted keys
2019-08-07 08:37:58.866 7f6f65835700 1 librados: calling monclient init
2019-08-07 08:37:58.872 7f6f65835700 1 librados: init done
Error ENOENT: error obtaining 'mgr/prometheus/server_addr': (2) No such file or directory
2019-08-07 08:38:00.268 7f6f65835700 10 librados: watch_flush enter
2019-08-07 08:38:00.268 7f6f65835700 10 librados: watch_flush exit
2019-08-07 08:38:00.269 7f6f65835700 1 librados: shutdown
2019-08-07 08:38:00.372888 I | exec: Running command: ceph config-key del mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/816706043
2019-08-07 08:38:00.563726 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:00.575067 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:00.586511 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:00.863221 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:00.879892 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:01.862296 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:02.063973 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:02.097651 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:02.163412 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:02.224737 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:02.663815 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:02.776435 I | exec: 2019-08-07 08:38:01.368 7f0e6b5fb700 1 librados: starting msgr at
2019-08-07 08:38:01.368 7f0e6b5fb700 1 librados: starting objecter
2019-08-07 08:38:01.369 7f0e6b5fb700 1 librados: setting wanted keys
2019-08-07 08:38:01.369 7f0e6b5fb700 1 librados: calling monclient init
2019-08-07 08:38:01.375 7f0e6b5fb700 1 librados: init done
no such key 'mgr/prometheus/server_addr'
2019-08-07 08:38:02.668 7f0e6b5fb700 10 librados: watch_flush enter
2019-08-07 08:38:02.668 7f0e6b5fb700 10 librados: watch_flush exit
2019-08-07 08:38:02.669 7f0e6b5fb700 1 librados: shutdown
2019-08-07 08:38:02.776619 I | op-mgr: clearing http bind fix mod=prometheus ver=12.0.0 luminous changed=false err=<nil>
2019-08-07 08:38:02.776768 I | exec: Running command: ceph config-key get mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/145501982
2019-08-07 08:38:03.263813 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:03.269407 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:04.063056 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:04.462593 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:04.811352 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and is still schedulable
2019-08-07 08:38:04.978045 I | exec: 2019-08-07 08:38:03.586 7fad969dc700 1 librados: starting msgr at
2019-08-07 08:38:03.586 7fad969dc700 1 librados: starting objecter
2019-08-07 08:38:03.586 7fad969dc700 1 librados: setting wanted keys
2019-08-07 08:38:03.586 7fad969dc700 1 librados: calling monclient init
2019-08-07 08:38:03.669 7fad969dc700 1 librados: init done
Error ENOENT: error obtaining 'mgr/prometheus/a/server_addr': (2) No such file or directory
2019-08-07 08:38:04.871 7fad969dc700 10 librados: watch_flush enter
2019-08-07 08:38:04.871 7fad969dc700 10 librados: watch_flush exit
2019-08-07 08:38:04.872 7fad969dc700 1 librados: shutdown
2019-08-07 08:38:04.978292 I | exec: Running command: ceph config-key del mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/163672549
2019-08-07 08:38:05.562161 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:05.963996 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:07.203991 I | exec: 2019-08-07 08:38:05.886 7f5c064fe700 1 librados: starting msgr at
2019-08-07 08:38:05.886 7f5c064fe700 1 librados: starting objecter
2019-08-07 08:38:05.886 7f5c064fe700 1 librados: setting wanted keys
2019-08-07 08:38:05.886 7f5c064fe700 1 librados: calling monclient init
2019-08-07 08:38:05.964 7f5c064fe700 1 librados: init done
no such key 'mgr/prometheus/a/server_addr'
2019-08-07 08:38:07.074 7f5c064fe700 10 librados: watch_flush enter
2019-08-07 08:38:07.074 7f5c064fe700 10 librados: watch_flush exit
2019-08-07 08:38:07.076 7f5c064fe700 1 librados: shutdown
2019-08-07 08:38:07.204186 I | op-mgr: clearing http bind fix mod=prometheus ver=12.0.0 luminous changed=false err=<nil>
2019-08-07 08:38:07.204378 I | exec: Running command: ceph config get mgr.a mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/906543104
2019-08-07 08:38:08.332885 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:09.063816 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:09.362481 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:09.373561 I | exec: 2019-08-07 08:38:07.984 7fadb9321700 1 librados: starting msgr at
2019-08-07 08:38:07.984 7fadb9321700 1 librados: starting objecter
2019-08-07 08:38:07.985 7fadb9321700 1 librados: setting wanted keys
2019-08-07 08:38:07.985 7fadb9321700 1 librados: calling monclient init
2019-08-07 08:38:08.066 7fadb9321700 1 librados: init done
2019-08-07 08:38:09.267 7fadb9321700 10 librados: watch_flush enter
2019-08-07 08:38:09.267 7fadb9321700 10 librados: watch_flush exit
2019-08-07 08:38:09.269 7fadb9321700 1 librados: shutdown
2019-08-07 08:38:09.373848 I | exec: Running command: ceph config rm mgr.a mgr/prometheus/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/929084255
2019-08-07 08:38:09.862996 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:10.562097 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:10.662426 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:10.663626 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:10.863247 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:10.962563 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:11.764738 I | exec: 2019-08-07 08:38:10.282 7f461b35d700 1 librados: starting msgr at
2019-08-07 08:38:10.282 7f461b35d700 1 librados: starting objecter
2019-08-07 08:38:10.283 7f461b35d700 1 librados: setting wanted keys
2019-08-07 08:38:10.283 7f461b35d700 1 librados: calling monclient init
2019-08-07 08:38:10.366 7f461b35d700 1 librados: init done
2019-08-07 08:38:11.661 7f461b35d700 10 librados: watch_flush enter
2019-08-07 08:38:11.661 7f461b35d700 10 librados: watch_flush exit
2019-08-07 08:38:11.662 7f461b35d700 1 librados: shutdown
2019-08-07 08:38:11.764985 I | op-mgr: clearing http bind fix mod=prometheus ver=13.0.0 mimic changed=false err=<nil>
2019-08-07 08:38:11.765134 I | exec: Running command: ceph config get mgr.a mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/404154674
2019-08-07 08:38:11.863315 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:12.063841 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:12.128745 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:12.163543 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:12.262307 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:12.662361 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:13.264019 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:13.292672 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:14.062408 I | exec: 2019-08-07 08:38:12.685 7ff08bd50700 1 librados: starting msgr at
2019-08-07 08:38:12.685 7ff08bd50700 1 librados: starting objecter
2019-08-07 08:38:12.763 7ff08bd50700 1 librados: setting wanted keys
2019-08-07 08:38:12.763 7ff08bd50700 1 librados: calling monclient init
2019-08-07 08:38:12.771 7ff08bd50700 1 librados: init done
2019-08-07 08:38:13.874 7ff08bd50700 10 librados: watch_flush enter
2019-08-07 08:38:13.874 7ff08bd50700 10 librados: watch_flush exit
2019-08-07 08:38:13.961 7ff08bd50700 1 librados: shutdown
2019-08-07 08:38:14.062713 I | exec: Running command: ceph config rm mgr.a mgr/prometheus/a/server_addr --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/550104297
2019-08-07 08:38:14.063213 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:14.475361 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:14.832444 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:15.563730 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:15.962288 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:16.564220 I | exec: 2019-08-07 08:38:14.986 7fbdb9673700 1 librados: starting msgr at
2019-08-07 08:38:14.986 7fbdb9673700 1 librados: starting objecter
2019-08-07 08:38:15.062 7fbdb9673700 1 librados: setting wanted keys
2019-08-07 08:38:15.062 7fbdb9673700 1 librados: calling monclient init
2019-08-07 08:38:15.068 7fbdb9673700 1 librados: init done
2019-08-07 08:38:16.382 7fbdb9673700 10 librados: watch_flush enter
2019-08-07 08:38:16.382 7fbdb9673700 10 librados: watch_flush exit
2019-08-07 08:38:16.462 7fbdb9673700 1 librados: shutdown
2019-08-07 08:38:16.564425 I | op-mgr: clearing http bind fix mod=prometheus ver=13.0.0 mimic changed=false err=<nil>
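The block above is the operator's "http bind fix" sweep: for each mgr key layout it knows (luminous-era config-key entries, mimic+ per-daemon config), it reads the prometheus server_addr key and removes it if present; every pass here reports changed=false, so there was nothing to clear. A minimal manual equivalent, using only commands that appear verbatim in the exec lines above (run from a toolbox pod, where the --cluster/--conf/--keyring flags the operator appends are unnecessary):

    # luminous-style key-value store
    ceph config-key get mgr/prometheus/server_addr
    ceph config-key del mgr/prometheus/server_addr
    # mimic+ centralized config, scoped to the mgr.a daemon
    ceph config get mgr.a mgr/prometheus/server_addr
    ceph config rm mgr.a mgr/prometheus/server_addr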
2019-08-07 08:38:16.565604 D | op-mgr: starting mgr deployment: &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-mgr-a,GenerateName:,Namespace:rook-ceph-stage-primary,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-ceph-mgr,ceph-version: 14.2.1,ceph_daemon_id: a,instance: a,mgr: a,rook-version: v1.0.4,rook_cluster: rook-ceph-stage-primary,},Annotations:map[string]string{prometheus.io/port: 9283,prometheus.io/scrape: true,},OwnerReferences:[{ceph.rook.io/v1 CephCluster rook-ceph-stage-primary 76235f05-b792-11e9-9b32-0050568460f6 <nil> 0xc000beeb2c}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{app: rook-ceph-mgr,ceph_daemon_id: a,instance: a,mgr: a,rook_cluster: rook-ceph-stage-primary,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rook-ceph-mgr-a,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rook-ceph-mgr,ceph_daemon_id: a,instance: a,mgr: a,rook_cluster: rook-ceph-stage-primary,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{rook-ceph-config {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:rook-ceph-config,},Items:[{ceph.conf ceph.conf 0xc00061df58}],DefaultMode:nil,Optional:nil,} nil nil nil nil nil nil nil nil nil}} {rook-ceph-mgr-a-keyring {nil nil nil nil nil &SecretVolumeSource{SecretName:rook-ceph-mgr-a-keyring,Items:[],DefaultMode:nil,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {rook-ceph-log {&HostPathVolumeSource{Path:/opt/rook/rook-ceph-stage-primary/rook-ceph-stage-primary/log,Type:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {ceph-daemon-data {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{mgr ceph/ceph:v14.2.1-20190430 [ceph-mgr] [--fsid=7dd854f1-2892-4201-ab69-d4797f12ac50 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=$(ROOK_CEPH_MON_HOST) --mon-initial-members=$(ROOK_CEPH_MON_INITIAL_MEMBERS) --id=a --foreground] [{mgr 0 6800 TCP } {http-metrics 0 9283 TCP } {dashboard 0 7000 TCP }] [] [{CONTAINER_IMAGE ceph/ceph:v14.2.1-20190430 nil} {POD_NAME EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_NAMESPACE &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {NODE_NAME &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_MEMORY_LIMIT &EnvVarSource{FieldRef:nil,ResourceFieldRef:&ResourceFieldSelector{ContainerName:,Resource:limits.memory,Divisor:0,},ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_MEMORY_REQUEST &EnvVarSource{FieldRef:nil,ResourceFieldRef:&ResourceFieldSelector{ContainerName:,Resource:requests.memory,Divisor:0,},ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_CPU_LIMIT &EnvVarSource{FieldRef:nil,ResourceFieldRef:&ResourceFieldSelector{ContainerName:,Resource:limits.cpu,Divisor:1,},ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_CPU_REQUEST &EnvVarSource{FieldRef:nil,ResourceFieldRef:&ResourceFieldSelector{ContainerName:,Resource:requests.cpu,Divisor:0,},ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {ROOK_CEPH_MON_HOST &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:rook-ceph-config,},Key:mon_host,Optional:nil,},}} {ROOK_CEPH_MON_INITIAL_MEMBERS &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:rook-ceph-config,},Key:mon_initial_members,Optional:nil,},}} {ROOK_OPERATOR_NAMESPACE rook-ceph-stage-primary nil} {ROOK_CEPH_CLUSTER_CRD_VERSION v1 nil} {ROOK_VERSION v1.0.4 nil} {ROOK_CEPH_CLUSTER_CRD_NAME rook-ceph-stage-primary nil}] {map[cpu:{{2 0} {<nil>} 2 DecimalSI} memory:{{1073741824 0} {<nil>} 1Gi BinarySI}] map[memory:{{1073741824 0} {<nil>} 1Gi BinarySI} cpu:{{2 0} {<nil>} 2 DecimalSI}]} [{rook-ceph-config true /etc/ceph <nil> } {rook-ceph-mgr-a-keyring true /etc/ceph/keyring-store/ <nil> } {rook-ceph-log false /var/log/ceph <nil> } {ceph-daemon-data false /var/lib/ceph/mgr/ceph-a <nil> }] [] nil nil nil nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:nil,ActiveDeadlineSeconds:nil,DNSPolicy:,NodeSelector:map[string]string{},ServiceAccountName:rook-ceph-mgr,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:nil,ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[{[{rook-namespace NotIn [rook-ceph-stage-primary]}] []}],},PreferredDuringSchedulingIgnoredDuringExecution:[],},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:nil,Paused:false,ProgressDeadlineSeconds:nil,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
2019-08-07 08:38:16.578615 I | op-mgr: deployment for mgr rook-ceph-mgr-a already exists. updating if needed
2019-08-07 08:38:16.582524 I | op-k8sutil: updating deployment rook-ceph-mgr-a
2019-08-07 08:38:16.596717 D | op-k8sutil: deployment rook-ceph-mgr-a status={ObservedGeneration:4 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:21:41 +0000 UTC LastTransitionTime:2019-08-06 14:21:41 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:21:41 +0000 UTC LastTransitionTime:2019-08-06 14:21:41 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-mgr-a-5d469cc9b5" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:38:18.351621 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:18.662842 I | op-k8sutil: finished waiting for updated deployment rook-ceph-mgr-a
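At this point the operator has applied its generated mgr deployment and polled the status until the rollout finished (ObservedGeneration/ReadyReplicas caught up, per the status line above). The same rollout can be watched from outside the operator; the kubectl invocation below is a suggestion and not taken from this log:

    kubectl -n rook-ceph-stage-primary rollout status deployment/rook-ceph-mgr-a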
2019-08-07 08:38:18.663128 I | exec: Running command: ceph mgr module enable orchestrator_cli --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/945665332
2019-08-07 08:38:19.063875 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:19.362123 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:19.806692 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:20.585600 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:20.613616 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:20.637769 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:20.788000 I | exec: 2019-08-07 08:38:19.587 7f6624a04700 1 librados: starting msgr at
2019-08-07 08:38:19.587 7f6624a04700 1 librados: starting objecter
2019-08-07 08:38:19.661 7f6624a04700 1 librados: setting wanted keys
2019-08-07 08:38:19.661 7f6624a04700 1 librados: calling monclient init
2019-08-07 08:38:19.666 7f6624a04700 1 librados: init done
module 'orchestrator_cli' is already enabled (always-on)
2019-08-07 08:38:20.740 7f6624a04700 10 librados: watch_flush enter
2019-08-07 08:38:20.740 7f6624a04700 10 librados: watch_flush exit
2019-08-07 08:38:20.741 7f6624a04700 1 librados: shutdown
2019-08-07 08:38:20.788272 I | exec: Running command: ceph mgr module enable rook --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/770038531
2019-08-07 08:38:20.832676 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:20.963810 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:21.852618 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:22.070486 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:22.162675 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:22.163962 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:22.280834 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:22.673446 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:22.976038 I | exec: 2019-08-07 08:38:21.680 7ffadc0e5700 1 librados: starting msgr at
2019-08-07 08:38:21.680 7ffadc0e5700 1 librados: starting objecter
2019-08-07 08:38:21.681 7ffadc0e5700 1 librados: setting wanted keys
2019-08-07 08:38:21.681 7ffadc0e5700 1 librados: calling monclient init
2019-08-07 08:38:21.763 7ffadc0e5700 1 librados: init done
module 'rook' is already enabled
2019-08-07 08:38:22.939 7ffadc0e5700 10 librados: watch_flush enter
2019-08-07 08:38:22.939 7ffadc0e5700 10 librados: watch_flush exit
2019-08-07 08:38:22.940 7ffadc0e5700 1 librados: shutdown
2019-08-07 08:38:22.976298 I | exec: Running command: ceph orchestrator set backend rook --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/144072326
2019-08-07 08:38:23.263829 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:23.310399 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:24.063108 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:24.563848 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:24.862431 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:25.276440 I | exec: 2019-08-07 08:38:23.873 7f9ee6fce700 1 librados: starting msgr at
2019-08-07 08:38:23.873 7f9ee6fce700 1 librados: starting objecter
2019-08-07 08:38:23.874 7f9ee6fce700 1 librados: setting wanted keys
2019-08-07 08:38:23.874 7f9ee6fce700 1 librados: calling monclient init
2019-08-07 08:38:23.878 7f9ee6fce700 1 librados: init done
2019-08-07 08:38:25.174 7f9ee6fce700 10 librados: watch_flush enter
2019-08-07 08:38:25.174 7f9ee6fce700 10 librados: watch_flush exit
2019-08-07 08:38:25.175 7f9ee6fce700 1 librados: shutdown
2019-08-07 08:38:25.276711 I | exec: Running command: ceph mgr module enable prometheus --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/251435053
2019-08-07 08:38:25.563948 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:25.962159 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:28.371938 I | exec: 2019-08-07 08:38:26.262 7f174a4aa700 1 librados: starting msgr at
2019-08-07 08:38:26.262 7f174a4aa700 1 librados: starting objecter
2019-08-07 08:38:26.262 7f174a4aa700 1 librados: setting wanted keys
2019-08-07 08:38:26.262 7f174a4aa700 1 librados: calling monclient init
2019-08-07 08:38:26.270 7f174a4aa700 1 librados: init done
module 'prometheus' is already enabled
2019-08-07 08:38:28.335 7f174a4aa700 10 librados: watch_flush enter
2019-08-07 08:38:28.335 7f174a4aa700 10 librados: watch_flush exit
2019-08-07 08:38:28.336 7f174a4aa700 1 librados: shutdown
2019-08-07 08:38:28.372264 I | exec: Running command: ceph mgr module enable dashboard --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/526341544
2019-08-07 08:38:28.465481 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:29.063978 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:29.462556 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:29.863863 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:30.601720 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:30.630996 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:30.663094 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:30.771952 I | exec: 2019-08-07 08:38:29.584 7f706c819700 1 librados: starting msgr at
2019-08-07 08:38:29.584 7f706c819700 1 librados: starting objecter
2019-08-07 08:38:29.585 7f706c819700 1 librados: setting wanted keys
2019-08-07 08:38:29.585 7f706c819700 1 librados: calling monclient init
2019-08-07 08:38:29.664 7f706c819700 1 librados: init done
module 'dashboard' is already enabled
2019-08-07 08:38:30.690 7f706c819700 10 librados: watch_flush enter
2019-08-07 08:38:30.690 7f706c819700 10 librados: watch_flush exit
2019-08-07 08:38:30.691 7f706c819700 1 librados: shutdown
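All four module enables in this stretch (orchestrator_cli, rook, prometheus, dashboard) are no-ops on this cluster; Ceph answers "already enabled" for each, with orchestrator_cli being always-on in Nautilus, and the operator then points the orchestrator at its rook backend via the ceph orchestrator set backend rook exec line. To inspect the module set by hand (a standard ceph command, though it does not appear in this log):

    ceph mgr module ls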
2019-08-07 08:38:30.856999 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:30.944099 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:31.873790 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:32.092299 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:32.166170 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:32.189964 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:32.311064 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:32.705173 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:33.279190 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:33.333541 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:34.043216 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:34.512709 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:34.868061 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:35.565313 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:35.776846 I | op-mgr: the dashboard secret was already generated
2019-08-07 08:38:35.776876 I | op-mgr: Running command: ceph dashboard set-login-credentials admin *******
2019-08-07 08:38:35.777135 D | exec: Running command: ceph dashboard set-login-credentials admin GKzKzG9om2 --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/603056871
2019-08-07 08:38:35.979419 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:37.974792 I | exec: 2019-08-07 08:38:36.587 7fb47d983700 1 librados: starting msgr at
2019-08-07 08:38:36.587 7fb47d983700 1 librados: starting objecter
2019-08-07 08:38:36.662 7fb47d983700 1 librados: setting wanted keys
2019-08-07 08:38:36.662 7fb47d983700 1 librados: calling monclient init
2019-08-07 08:38:36.668 7fb47d983700 1 librados: init done
2019-08-07 08:38:37.897 7fb47d983700 10 librados: watch_flush enter
2019-08-07 08:38:37.897 7fb47d983700 10 librados: watch_flush exit
2019-08-07 08:38:37.898 7fb47d983700 1 librados: shutdown
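The dashboard login step above uses the Nautilus-era positional form, with the password masked in the INFO line but visible in the DEBUG exec line. A manual equivalent, with the command taken from the exec line and <password> left as a placeholder (note that later Ceph releases switch to reading the password from a file with -i):

    ceph dashboard set-login-credentials admin <password>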
2019-08-07 08:38:37.975036 I | op-mgr: restarting the mgr module
2019-08-07 08:38:37.975185 I | exec: Running command: ceph mgr module disable dashboard --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/049587482
2019-08-07 08:38:38.463964 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:39.072674 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:39.463875 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:39.862270 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:40.470902 I | exec: 2019-08-07 08:38:38.888 7f96636c4700 1 librados: starting msgr at
2019-08-07 08:38:38.888 7f96636c4700 1 librados: starting objecter
2019-08-07 08:38:38.961 7f96636c4700 1 librados: setting wanted keys
2019-08-07 08:38:38.961 7f96636c4700 1 librados: calling monclient init
2019-08-07 08:38:38.968 7f96636c4700 1 librados: init done
2019-08-07 08:38:40.446 7f96636c4700 10 librados: watch_flush enter
2019-08-07 08:38:40.446 7f96636c4700 10 librados: watch_flush exit
2019-08-07 08:38:40.448 7f96636c4700 1 librados: shutdown
2019-08-07 08:38:40.471247 I | exec: Running command: ceph mgr module enable dashboard --force --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/770386353
2019-08-07 08:38:40.663975 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:40.665187 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:40.685499 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:40.873752 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:40.962396 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:41.963985 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:42.163747 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:42.184581 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:42.263524 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:42.333658 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:42.724371 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:43.296008 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:43.350351 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:43.773231 I | exec: 2019-08-07 08:38:41.485 7ff0c45fb700 1 librados: starting msgr at
2019-08-07 08:38:41.485 7ff0c45fb700 1 librados: starting objecter
2019-08-07 08:38:41.485 7ff0c45fb700 1 librados: setting wanted keys
2019-08-07 08:38:41.485 7ff0c45fb700 1 librados: calling monclient init
2019-08-07 08:38:41.566 7ff0c45fb700 1 librados: init done
2019-08-07 08:38:43.684 7ff0c45fb700 10 librados: watch_flush enter
2019-08-07 08:38:43.684 7ff0c45fb700 10 librados: watch_flush exit
2019-08-07 08:38:43.685 7ff0c45fb700 1 librados: shutdown
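"restarting the mgr module" is implemented as a plain disable/enable cycle of the dashboard module, exactly as the two exec lines above show:

    ceph mgr module disable dashboard
    ceph mgr module enable dashboard --force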
2019-08-07 08:38:43.773544 I | exec: Running command: ceph config get mgr.a mgr/dashboard/url_prefix --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/281901916
2019-08-07 08:38:44.063218 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:44.563656 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:44.884055 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:45.588455 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:45.973156 I | exec: 2019-08-07 08:38:44.688 7f48be793700 1 librados: starting msgr at
2019-08-07 08:38:44.688 7f48be793700 1 librados: starting objecter
2019-08-07 08:38:44.761 7f48be793700 1 librados: setting wanted keys
2019-08-07 08:38:44.761 7f48be793700 1 librados: calling monclient init
2019-08-07 08:38:44.768 7f48be793700 1 librados: init done
2019-08-07 08:38:45.866 7f48be793700 10 librados: watch_flush enter
2019-08-07 08:38:45.866 7f48be793700 10 librados: watch_flush exit
2019-08-07 08:38:45.867 7f48be793700 1 librados: shutdown
2019-08-07 08:38:45.973426 I | exec: Running command: ceph config rm mgr.a mgr/dashboard/url_prefix --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/242146059
2019-08-07 08:38:46.006628 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:48.175262 I | exec: 2019-08-07 08:38:46.866 7fc9d3a65700 1 librados: starting msgr at
2019-08-07 08:38:46.866 7fc9d3a65700 1 librados: starting objecter
2019-08-07 08:38:46.866 7fc9d3a65700 1 librados: setting wanted keys
2019-08-07 08:38:46.866 7fc9d3a65700 1 librados: calling monclient init
2019-08-07 08:38:46.872 7fc9d3a65700 1 librados: init done
2019-08-07 08:38:48.064 7fc9d3a65700 10 librados: watch_flush enter
2019-08-07 08:38:48.064 7fc9d3a65700 10 librados: watch_flush exit
2019-08-07 08:38:48.066 7fc9d3a65700 1 librados: shutdown
2019-08-07 08:38:48.175594 I | exec: Running command: ceph config get mgr.a mgr/dashboard/server_port --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/106335982
2019-08-07 08:38:48.463972 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:49.092621 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:49.463871 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:49.869592 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:50.564761 I | exec: 2019-08-07 08:38:49.085 7fc4c258a700 1 librados: starting msgr at
2019-08-07 08:38:49.085 7fc4c258a700 1 librados: starting objecter
2019-08-07 08:38:49.161 7fc4c258a700 1 librados: setting wanted keys
2019-08-07 08:38:49.161 7fc4c258a700 1 librados: calling monclient init
2019-08-07 08:38:49.168 7fc4c258a700 1 librados: init done
2019-08-07 08:38:50.383 7fc4c258a700 10 librados: watch_flush enter
2019-08-07 08:38:50.383 7fc4c258a700 10 librados: watch_flush exit
2019-08-07 08:38:50.461 7fc4c258a700 1 librados: shutdown
2019-08-07 08:38:50.565116 I | exec: Running command: ceph config set mgr.a mgr/dashboard/server_port 7000 --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/060142453
2019-08-07 08:38:50.646895 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:50.683397 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:50.711884 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:50.890766 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:50.978630 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:51.963860 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:52.163754 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:52.207886 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:52.262664 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:52.362381 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:52.763801 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:53.313190 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:53.365848 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:54.072647 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:54.560301 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:54.904009 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:55.631494 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:56.031886 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:57.476171 I | exec: 2019-08-07 08:38:51.487 7f75d4ffa700 1 librados: starting msgr at
2019-08-07 08:38:51.487 7f75d4ffa700 1 librados: starting objecter
2019-08-07 08:38:51.561 7f75d4ffa700 1 librados: setting wanted keys
2019-08-07 08:38:51.561 7f75d4ffa700 1 librados: calling monclient init
2019-08-07 08:38:51.567 7f75d4ffa700 1 librados: init done
2019-08-07 08:38:57.443 7f75d4ffa700 10 librados: watch_flush enter
2019-08-07 08:38:57.443 7f75d4ffa700 10 librados: watch_flush exit
2019-08-07 08:38:57.444 7f75d4ffa700 1 librados: shutdown
2019-08-07 08:38:57.476510 I | exec: Running command: ceph config get mgr.a mgr/dashboard/ssl --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/741283408
2019-08-07 08:38:58.462199 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:59.108441 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:59.448493 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:38:59.890155 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:00.669184 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:00.705719 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:00.741409 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:00.905835 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:01.004176 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:01.929779 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:02.154102 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:02.238898 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:02.272206 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:02.386587 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:02.771200 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:03.329056 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:03.390318 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:04.101620 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:04.581564 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:04.921398 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:05.655863 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:06.050558 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:08.464352 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:08.782147 I | exec: 2019-08-07 08:38:58.381 7f49389b4700 1 librados: starting msgr at
2019-08-07 08:38:58.381 7f49389b4700 1 librados: starting objecter
2019-08-07 08:38:58.381 7f49389b4700 1 librados: setting wanted keys
2019-08-07 08:38:58.381 7f49389b4700 1 librados: calling monclient init
2019-08-07 08:39:07.536 7f49389b4700 1 librados: init done
2019-08-07 08:39:08.702 7f49389b4700 10 librados: watch_flush enter
2019-08-07 08:39:08.702 7f49389b4700 10 librados: watch_flush exit
2019-08-07 08:39:08.704 7f49389b4700 1 librados: shutdown
2019-08-07 08:39:08.782476 I | exec: Running command: ceph config set mgr.a mgr/dashboard/ssl false --connect-timeout=15 --cluster=rook-ceph-stage-primary --conf=/var/lib/rook/rook-ceph-stage-primary/rook-ceph-stage-primary.config --keyring=/var/lib/rook/rook-ceph-stage-primary/client.admin.keyring --format json --out-file /tmp/012541295
2019-08-07 08:39:09.162621 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:09.480690 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:09.912382 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:10.691048 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:10.725193 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:10.772435 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:10.926617 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:11.028589 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:11.951863 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:12.176199 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:12.267427 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:12.293803 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:12.421662 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:12.791845 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:13.351503 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:13.412068 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:14.120836 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:14.663881 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:14.963331 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:15.686716 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:16.074133 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:18.473061 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:19.151174 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:19.498848 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:19.928775 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:19.979570 I | exec: 2019-08-07 08:39:13.985 7fb30b92b700 1 librados: starting msgr at
2019-08-07 08:39:13.985 7fb30b92b700 1 librados: starting objecter
2019-08-07 08:39:13.985 7fb30b92b700 1 librados: setting wanted keys
2019-08-07 08:39:13.985 7fb30b92b700 1 librados: calling monclient init
2019-08-07 08:39:14.067 7fb30b92b700 1 librados: init done
2019-08-07 08:39:19.957 7fb30b92b700 10 librados: watch_flush enter
2019-08-07 08:39:19.957 7fb30b92b700 10 librados: watch_flush exit
2019-08-07 08:39:19.958 7fb30b92b700 1 librados: shutdown
2019-08-07 08:39:20.015341 I | op-mgr: dashboard service already exists
2019-08-07 08:39:20.053503 I | op-mgr: mgr metrics service already exists
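Net effect of the dashboard configuration pass that just finished: no url_prefix, plain HTTP on port 7000, SSL off. As one manual sketch, with every key and value taken from the exec lines above:

    ceph config rm mgr.a mgr/dashboard/url_prefix
    ceph config set mgr.a mgr/dashboard/server_port 7000
    ceph config set mgr.a mgr/dashboard/ssl false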
2019-08-07 08:39:20.053541 I | op-osd: start running osds in namespace rook-ceph-stage-primary
2019-08-07 08:39:20.074270 I | op-osd: 4 of the 4 storage nodes are valid
2019-08-07 08:39:20.074298 I | op-osd: start provisioning the osds on nodes, if needed
2019-08-07 08:39:20.074447 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-101.lxstage.domain.com will be 713725d9f667a079e331f69f263a1fd0
2019-08-07 08:39:20.084858 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-101.lxstage.domain.com will be 713725d9f667a079e331f69f263a1fd0
2019-08-07 08:39:20.088385 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-713725d9f667a079e331f69f263a1fd0 to start a new one
2019-08-07 08:39:20.102199 I | op-k8sutil: batch job rook-ceph-osd-prepare-713725d9f667a079e331f69f263a1fd0 still exists
2019-08-07 08:39:20.709832 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:20.763510 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:20.791670 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:20.948537 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:21.049406 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:21.971661 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:22.106175 I | op-k8sutil: batch job rook-ceph-osd-prepare-713725d9f667a079e331f69f263a1fd0 deleted
2019-08-07 08:39:22.113245 I | op-osd: osd provision job started for node k8s-worker-101.lxstage.domain.com
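A rook-ceph-osd-prepare-<node> job name built from the full FQDN would exceed the 63-character limit for Kubernetes names, so op-k8sutil substitutes a 32-hex-character digest of the node name. The length and format are consistent with an MD5 hex digest; that interpretation is an assumption from the log format, and can be checked against the hash printed above:

    echo -n k8s-worker-101.lxstage.domain.com | md5sum
    # compare with 713725d9f667a079e331f69f263a1fd0 from the log line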
2019-08-07 08:39:22.113301 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-102.lxstage.domain.com will be 16f7814f5a22fc71d100e5b6a4b5bf2b
2019-08-07 08:39:22.122683 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-102.lxstage.domain.com will be 16f7814f5a22fc71d100e5b6a4b5bf2b
2019-08-07 08:39:22.126431 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-16f7814f5a22fc71d100e5b6a4b5bf2b to start a new one
2019-08-07 08:39:22.144969 I | op-k8sutil: batch job rook-ceph-osd-prepare-16f7814f5a22fc71d100e5b6a4b5bf2b still exists
2019-08-07 08:39:22.199075 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:22.298118 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:22.319280 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:22.448709 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:22.809440 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:23.368843 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:23.430027 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:24.135116 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:24.148441 I | op-k8sutil: batch job rook-ceph-osd-prepare-16f7814f5a22fc71d100e5b6a4b5bf2b deleted
2019-08-07 08:39:24.154886 I | op-osd: osd provision job started for node k8s-worker-102.lxstage.domain.com
2019-08-07 08:39:24.161978 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-103.lxstage.domain.com will be 3421d9d141eb39906bd993a89171141d
2019-08-07 08:39:24.177524 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-103.lxstage.domain.com will be 3421d9d141eb39906bd993a89171141d
2019-08-07 08:39:24.182193 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-3421d9d141eb39906bd993a89171141d to start a new one
2019-08-07 08:39:24.210861 I | op-k8sutil: batch job rook-ceph-osd-prepare-3421d9d141eb39906bd993a89171141d still exists
2019-08-07 08:39:24.617691 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:24.964372 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:25.762493 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:26.100524 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:26.214569 I | op-k8sutil: batch job rook-ceph-osd-prepare-3421d9d141eb39906bd993a89171141d deleted
2019-08-07 08:39:26.221029 I | op-osd: osd provision job started for node k8s-worker-103.lxstage.domain.com
2019-08-07 08:39:26.221085 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-104.lxstage.domain.com will be 314ab5663f2f709906d970de379b1dc7
2019-08-07 08:39:26.232306 I | op-k8sutil: format and nodeName longer than 63 chars, nodeName k8s-worker-104.lxstage.domain.com will be 314ab5663f2f709906d970de379b1dc7
2019-08-07 08:39:26.243437 I | op-k8sutil: Removing previous job rook-ceph-osd-prepare-314ab5663f2f709906d970de379b1dc7 to start a new one
2019-08-07 08:39:26.268612 I | op-k8sutil: batch job rook-ceph-osd-prepare-314ab5663f2f709906d970de379b1dc7 still exists
2019-08-07 08:39:28.272413 I | op-k8sutil: batch job rook-ceph-osd-prepare-314ab5663f2f709906d970de379b1dc7 deleted
2019-08-07 08:39:28.280861 I | op-osd: osd provision job started for node k8s-worker-104.lxstage.domain.com
2019-08-07 08:39:28.280888 I | op-osd: start osds after provisioning is completed, if needed
2019-08-07 08:39:28.285363 I | op-osd: osd orchestration status for node k8s-worker-102.lxstage.domain.com is orchestrating
2019-08-07 08:39:28.285399 I | op-osd: osd orchestration status for node k8s-worker-104.lxstage.domain.com is starting
2019-08-07 08:39:28.285417 I | op-osd: osd orchestration status for node k8s-worker-103.lxstage.domain.com is starting
2019-08-07 08:39:28.285432 I | op-osd: osd orchestration status for node k8s-worker-101.lxstage.domain.com is orchestrating
2019-08-07 08:39:28.285442 I | op-osd: 0/4 node(s) completed osd provisioning, resource version 309451282
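Provisioning runs as one prepare job per node, and the operator tracks per-node orchestration status (starting -> computingDiff -> orchestrating -> completed) until all nodes report completed. The jobs and their pods can also be listed directly; the label selector below follows the usual Rook convention and is an assumption, not read from this log:

    kubectl -n rook-ceph-stage-primary get jobs,pods -l app=rook-ceph-osd-prepare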
2019-08-07 08:39:28.491409 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:29.173052 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:29.526616 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:29.949401 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:30.108091 I | op-osd: osd orchestration status for node k8s-worker-104.lxstage.domain.com is computingDiff
2019-08-07 08:39:30.339403 I | op-osd: osd orchestration status for node k8s-worker-104.lxstage.domain.com is orchestrating
2019-08-07 08:39:30.729623 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:30.762128 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:30.812749 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:30.967048 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:31.074108 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:31.991491 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:32.226492 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:32.325278 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:32.350238 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:32.475524 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:32.832660 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:33.392751 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:33.462293 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:34.149160 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:34.640138 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:34.976755 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:35.729417 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:36.119156 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:38.514187 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:39.189505 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:39.544728 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:39.972566 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:40.080121 I | op-osd: osd orchestration status for node k8s-worker-101.lxstage.domain.com is completed
2019-08-07 08:39:40.080153 I | op-osd: starting 12 osd daemons on node k8s-worker-101.lxstage.domain.com
2019-08-07 08:39:40.080185 D | op-osd: start osd {23 /var/lib/rook/osd23 /var/lib/rook/osd23/rook-ceph-stage-primary.config ceph /var/lib/rook/osd23/keyring ef828443-ff97-479f-8994-7107e3855e51 false false true}
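The braced dump in the "start osd {23 ...}" line is Go's positional %v rendering of a struct. From the values alone, the fields appear to be: OSD id, data path, config path, cluster name, keyring path, OSD UUID, and three flags; the later "(dir=false, type=bluestore)" lines suggest the first two booleans mean directory-backed and filestore. The field names below are guesses for illustration only, inferred from order and type, not taken from Rook's source.

```go
// A guess at the struct behind the "start osd {23 /var/lib/rook/osd23 ...}"
// dumps; only the field order and types are inferred from the log output.
package osd

type osdInfo struct {
	ID          int    // 23
	DataPath    string // /var/lib/rook/osd23
	ConfigPath  string // /var/lib/rook/osd23/rook-ceph-stage-primary.config
	Cluster     string // ceph
	KeyringPath string // /var/lib/rook/osd23/keyring
	UUID        string // ef828443-ff97-479f-8994-7107e3855e51
	IsDirectory bool   // false -> "dir=false"
	IsFileStore bool   // false -> "type=bluestore"
	// The third flag is true for every OSD in this log; its meaning is not
	// recoverable from the output alone.
	UnknownFlag bool
}
```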
2019-08-07 08:39:40.089602 I | op-osd: deployment for osd 23 already exists. updating if needed
2019-08-07 08:39:40.093432 I | op-k8sutil: updating deployment rook-ceph-osd-23
2019-08-07 08:39:40.104836 D | op-k8sutil: deployment rook-ceph-osd-23 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:52 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-23-58d557c87b" has successfully progressed.}] CollisionCount:<nil>}
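Each "updating deployment ... finished waiting for updated deployment" pair brackets a ~2s poll of the Deployment's status, and the dumped fields (ObservedGeneration, UpdatedReplicas, ReadyReplicas, AvailableReplicas, Conditions) are exactly what a rollout check consumes. A sketch of such a readiness test using the real k8s.io/api types; the helper itself is illustrative rather than Rook's actual k8sutil function.

```go
// Sketch of the readiness test behind the deployment status dumps: a rollout
// is done once the controller has observed the latest generation and all
// replicas are updated and available.
package deploy

import appsv1 "k8s.io/api/apps/v1"

// rolloutComplete reports whether d's pod template change has fully rolled out.
func rolloutComplete(d *appsv1.Deployment) bool {
	replicas := int32(1) // Kubernetes default when Spec.Replicas is nil
	if d.Spec.Replicas != nil {
		replicas = *d.Spec.Replicas
	}
	return d.Status.ObservedGeneration >= d.Generation &&
		d.Status.UpdatedReplicas == replicas &&
		d.Status.AvailableReplicas == replicas &&
		d.Status.UnavailableReplicas == 0
}
```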
2019-08-07 08:39:40.762697 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:40.773448 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:40.833250 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:40.988409 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:41.094241 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:42.010669 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:42.110365 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-23
2019-08-07 08:39:42.110395 I | op-osd: started deployment for osd 23 (dir=false, type=bluestore)
2019-08-07 08:39:42.110420 D | op-osd: start osd {28 /var/lib/rook/osd28 /var/lib/rook/osd28/rook-ceph-stage-primary.config ceph /var/lib/rook/osd28/keyring 56d3533e-e622-42cb-af01-dd0148938cc9 false false true}
2019-08-07 08:39:42.120403 I | op-osd: deployment for osd 28 already exists. updating if needed
2019-08-07 08:39:42.124401 I | op-k8sutil: updating deployment rook-ceph-osd-28
2019-08-07 08:39:42.142073 D | op-k8sutil: deployment rook-ceph-osd-28 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:53 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-28-6fd8675475" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:39:42.248417 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:42.345177 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:42.377118 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:42.500150 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:42.855234 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:43.416672 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:43.470221 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:44.146839 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-28
2019-08-07 08:39:44.146872 I | op-osd: started deployment for osd 28 (dir=false, type=bluestore)
2019-08-07 08:39:44.146897 D | op-osd: start osd {32 /var/lib/rook/osd32 /var/lib/rook/osd32/rook-ceph-stage-primary.config ceph /var/lib/rook/osd32/keyring c9c709f2-77df-47e1-947c-42ec722bc985 false false true}
2019-08-07 08:39:44.156690 I | op-osd: deployment for osd 32 already exists. updating if needed
2019-08-07 08:39:44.163481 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:44.163685 I | op-k8sutil: updating deployment rook-ceph-osd-32
2019-08-07 08:39:44.178623 D | op-k8sutil: deployment rook-ceph-osd-32 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:54 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-32-5cc7f87b68" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:39:44.665918 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:44.995291 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:45.763357 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:46.140609 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:46.184164 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-32
2019-08-07 08:39:46.184194 I | op-osd: started deployment for osd 32 (dir=false, type=bluestore)
2019-08-07 08:39:46.184220 D | op-osd: start osd {14 /var/lib/rook/osd14 /var/lib/rook/osd14/rook-ceph-stage-primary.config ceph /var/lib/rook/osd14/keyring de07ca94-6bae-4ad3-bf60-855aa41c2979 false false true}
2019-08-07 08:39:46.201415 I | op-osd: deployment for osd 14 already exists. updating if needed
2019-08-07 08:39:46.205836 I | op-k8sutil: updating deployment rook-ceph-osd-14
2019-08-07 08:39:46.218747 D | op-k8sutil: deployment rook-ceph-osd-14 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:50 +0000 UTC LastTransitionTime:2019-08-06 14:28:50 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:56 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-14-7595bf794b" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:39:48.224358 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-14
2019-08-07 08:39:48.224397 I | op-osd: started deployment for osd 14 (dir=false, type=bluestore)
2019-08-07 08:39:48.224424 D | op-osd: start osd {18 /var/lib/rook/osd18 /var/lib/rook/osd18/rook-ceph-stage-primary.config ceph /var/lib/rook/osd18/keyring 69330f11-accc-440f-813b-660cea9942d1 false false true}
2019-08-07 08:39:48.274052 I | op-osd: deployment for osd 18 already exists. updating if needed
2019-08-07 08:39:48.278291 I | op-k8sutil: updating deployment rook-ceph-osd-18
2019-08-07 08:39:48.291972 D | op-k8sutil: deployment rook-ceph-osd-18 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:54 +0000 UTC LastTransitionTime:2019-08-06 14:28:54 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:50 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-18-668f89c6b8" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:39:48.537697 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:49.214466 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:49.563969 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:49.989824 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:50.296953 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-18
2019-08-07 08:39:50.296990 I | op-osd: started deployment for osd 18 (dir=false, type=bluestore)
2019-08-07 08:39:50.297018 D | op-osd: start osd {36 /var/lib/rook/osd36 /var/lib/rook/osd36/rook-ceph-stage-primary.config ceph /var/lib/rook/osd36/keyring 7f17f7c0-f694-4ab3-8372-3e60bc4fe7b8 false false true}
2019-08-07 08:39:50.308461 I | op-osd: deployment for osd 36 already exists. updating if needed
2019-08-07 08:39:50.312596 I | op-k8sutil: updating deployment rook-ceph-osd-36
2019-08-07 08:39:50.329684 D | op-k8sutil: deployment rook-ceph-osd-36 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:55 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-36-667bd6d6b4" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:39:50.772291 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:50.793453 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:50.864520 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:51.011262 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:51.122568 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:52.031956 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:52.278355 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:52.335347 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-36
2019-08-07 08:39:52.335382 I | op-osd: started deployment for osd 36 (dir=false, type=bluestore)
2019-08-07 08:39:52.335409 D | op-osd: start osd {40 /var/lib/rook/osd40 /var/lib/rook/osd40/rook-ceph-stage-primary.config ceph /var/lib/rook/osd40/keyring 0acf76b9-d717-40c8-8445-030ca90c5a53 false false true}
2019-08-07 08:39:52.344421 I | op-osd: deployment for osd 40 already exists. updating if needed
2019-08-07 08:39:52.348533 I | op-k8sutil: updating deployment rook-ceph-osd-40
2019-08-07 08:39:52.373126 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:52.376517 D | op-k8sutil: deployment rook-ceph-osd-40 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:56 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-40-5c6d8d887c" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:39:52.402267 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:52.525442 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:52.883685 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:53.444064 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:53.491451 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:54.177324 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:54.382010 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-40
2019-08-07 08:39:54.382043 I | op-osd: started deployment for osd 40 (dir=false, type=bluestore)
2019-08-07 08:39:54.382072 D | op-osd: start osd {44 /var/lib/rook/osd44 /var/lib/rook/osd44/rook-ceph-stage-primary.config ceph /var/lib/rook/osd44/keyring 92160976-b391-4f54-92ab-b6bf109f1b6c false false true}
2019-08-07 08:39:54.409346 I | op-osd: deployment for osd 44 already exists. updating if needed
2019-08-07 08:39:54.413550 I | op-k8sutil: updating deployment rook-ceph-osd-44
2019-08-07 08:39:54.478143 D | op-k8sutil: deployment rook-ceph-osd-44 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:56 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-44-589488bbc7" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:39:54.686496 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:55.029376 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:55.809251 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:56.169132 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:56.483279 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-44
2019-08-07 08:39:56.483312 I | op-osd: started deployment for osd 44 (dir=false, type=bluestore)
2019-08-07 08:39:56.483344 D | op-osd: start osd {47 /var/lib/rook/osd47 /var/lib/rook/osd47/rook-ceph-stage-primary.config ceph /var/lib/rook/osd47/keyring faef812d-e78e-421e-a26e-da75dac8f2c2 false false true}
2019-08-07 08:39:56.494206 I | op-osd: deployment for osd 47 already exists. updating if needed
2019-08-07 08:39:56.498303 I | op-k8sutil: updating deployment rook-ceph-osd-47
2019-08-07 08:39:56.591532 D | op-k8sutil: deployment rook-ceph-osd-47 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:57 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-47-5c868b766b" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:39:58.557645 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:58.597240 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-47
2019-08-07 08:39:58.597270 I | op-osd: started deployment for osd 47 (dir=false, type=bluestore)
2019-08-07 08:39:58.597303 D | op-osd: start osd {6 /var/lib/rook/osd6 /var/lib/rook/osd6/rook-ceph-stage-primary.config ceph /var/lib/rook/osd6/keyring b76ffc2d-7a56-4c3f-82c5-4975dcebfef6 false false true}
2019-08-07 08:39:58.608369 I | op-osd: deployment for osd 6 already exists. updating if needed
2019-08-07 08:39:58.612423 I | op-k8sutil: updating deployment rook-ceph-osd-6
2019-08-07 08:39:58.628268 D | op-k8sutil: deployment rook-ceph-osd-6 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:29:01 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:28:58 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-6-67fdbb49d9" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:39:59.263870 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:39:59.610536 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:00.016682 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:00.633664 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-6
2019-08-07 08:40:00.633699 I | op-osd: started deployment for osd 6 (dir=false, type=bluestore)
2019-08-07 08:40:00.633725 D | op-osd: start osd {10 /var/lib/rook/osd10 /var/lib/rook/osd10/rook-ceph-stage-primary.config ceph /var/lib/rook/osd10/keyring efe3a633-a626-4c83-94dc-56f51c448709 false false true}
2019-08-07 08:40:00.643765 I | op-osd: deployment for osd 10 already exists. updating if needed
2019-08-07 08:40:00.647877 I | op-k8sutil: updating deployment rook-ceph-osd-10
2019-08-07 08:40:00.662360 D | op-k8sutil: deployment rook-ceph-osd-10 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-10-66c6c7d648" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:00.792055 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:00.811636 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:00.886753 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:01.035643 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:01.158123 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:02.054857 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:02.301361 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:02.394253 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:02.424923 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:02.550169 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:02.667610 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-10
2019-08-07 08:40:02.667642 I | op-osd: started deployment for osd 10 (dir=false, type=bluestore)
2019-08-07 08:40:02.667669 D | op-osd: start osd {2 /var/lib/rook/osd2 /var/lib/rook/osd2/rook-ceph-stage-primary.config ceph /var/lib/rook/osd2/keyring 1f672af0-17db-432c-8c41-b39ad136f489 false false true}
2019-08-07 08:40:02.677950 I | op-osd: deployment for osd 2 already exists. updating if needed
2019-08-07 08:40:02.682288 I | op-k8sutil: updating deployment rook-ceph-osd-2
2019-08-07 08:40:02.768068 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-2
2019-08-07 08:40:02.768100 I | op-osd: started deployment for osd 2 (dir=false, type=bluestore)
2019-08-07 08:40:02.772215 I | op-osd: osd orchestration status for node k8s-worker-102.lxstage.domain.com is completed
2019-08-07 08:40:02.772242 I | op-osd: starting 12 osd daemons on node k8s-worker-102.lxstage.domain.com
2019-08-07 08:40:02.772273 D | op-osd: start osd {24 /var/lib/rook/osd24 /var/lib/rook/osd24/rook-ceph-stage-primary.config ceph /var/lib/rook/osd24/keyring a92aae31-64e2-4ad4-987e-3d2d211af869 false false true}
2019-08-07 08:40:02.782810 I | op-osd: deployment for osd 24 already exists. updating if needed
2019-08-07 08:40:02.789131 I | op-k8sutil: updating deployment rook-ceph-osd-24
2019-08-07 08:40:02.805105 D | op-k8sutil: deployment rook-ceph-osd-24 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:52 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-24-7c9f6598b4" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:02.910890 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:03.468091 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:03.513020 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:04.201638 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:04.705134 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:04.809900 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-24
2019-08-07 08:40:04.809949 I | op-osd: started deployment for osd 24 (dir=false, type=bluestore)
2019-08-07 08:40:04.809979 D | op-osd: start osd {27 /var/lib/rook/osd27 /var/lib/rook/osd27/rook-ceph-stage-primary.config ceph /var/lib/rook/osd27/keyring 7bf33094-0c89-44fe-a1be-fd9507ec4f21 false false true}
2019-08-07 08:40:04.819676 I | op-osd: deployment for osd 27 already exists. updating if needed
2019-08-07 08:40:04.823735 I | op-k8sutil: updating deployment rook-ceph-osd-27
2019-08-07 08:40:04.838366 D | op-k8sutil: deployment rook-ceph-osd-27 status={ObservedGeneration:4 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:52 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-27-7499b6bbb9" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:05.056626 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:05.835594 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:06.188844 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:06.847191 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-27
2019-08-07 08:40:06.847222 I | op-osd: started deployment for osd 27 (dir=false, type=bluestore)
2019-08-07 08:40:06.847250 D | op-osd: start osd {31 /var/lib/rook/osd31 /var/lib/rook/osd31/rook-ceph-stage-primary.config ceph /var/lib/rook/osd31/keyring 2d708cb1-f88a-4c88-a7c7-b7c041be05dd false false true}
2019-08-07 08:40:06.857558 I | op-osd: deployment for osd 31 already exists. updating if needed
2019-08-07 08:40:06.862014 I | op-k8sutil: updating deployment rook-ceph-osd-31
2019-08-07 08:40:06.874473 D | op-k8sutil: deployment rook-ceph-osd-31 status={ObservedGeneration:4 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:53 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-31-5db5d7b676" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:08.574456 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:08.879877 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-31
2019-08-07 08:40:08.879926 I | op-osd: started deployment for osd 31 (dir=false, type=bluestore)
2019-08-07 08:40:08.879955 D | op-osd: start osd {39 /var/lib/rook/osd39 /var/lib/rook/osd39/rook-ceph-stage-primary.config ceph /var/lib/rook/osd39/keyring 8ab42eea-0fea-44c4-b5bc-1a2a02fbfd58 false false true}
2019-08-07 08:40:09.063107 I | op-osd: deployment for osd 39 already exists. updating if needed
2019-08-07 08:40:09.066916 I | op-k8sutil: updating deployment rook-ceph-osd-39
2019-08-07 08:40:09.168104 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-39
2019-08-07 08:40:09.168133 I | op-osd: started deployment for osd 39 (dir=false, type=bluestore)
2019-08-07 08:40:09.168161 D | op-osd: start osd {8 /var/lib/rook/osd8 /var/lib/rook/osd8/rook-ceph-stage-primary.config ceph /var/lib/rook/osd8/keyring d8f10678-a97e-4dab-ad71-bd528eb8baf1 false false true}
2019-08-07 08:40:09.178375 I | op-osd: deployment for osd 8 already exists. updating if needed
2019-08-07 08:40:09.182772 I | op-k8sutil: updating deployment rook-ceph-osd-8
2019-08-07 08:40:09.195823 D | op-k8sutil: deployment rook-ceph-osd-8 status={ObservedGeneration:4 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:29:01 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:28:58 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-8-865d7db956" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:09.262393 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:09.635268 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:10.038804 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:10.813677 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:10.834235 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:10.912066 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:11.053152 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:11.180039 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:11.200367 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-8
2019-08-07 08:40:11.200397 I | op-osd: started deployment for osd 8 (dir=false, type=bluestore)
2019-08-07 08:40:11.200427 D | op-osd: start osd {0 /var/lib/rook/osd0 /var/lib/rook/osd0/rook-ceph-stage-primary.config ceph /var/lib/rook/osd0/keyring eb12b21e-aae3-4cae-862d-febe86377aa0 false false true}
2019-08-07 08:40:11.278636 I | op-osd: deployment for osd 0 already exists. updating if needed
2019-08-07 08:40:11.283094 I | op-k8sutil: updating deployment rook-ceph-osd-0
2019-08-07 08:40:11.299190 D | op-k8sutil: deployment rook-ceph-osd-0 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-0-68d4c56d68" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:12.081232 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:12.327201 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:12.417887 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:12.445089 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:12.583770 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:12.938264 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:13.305438 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-0
2019-08-07 08:40:13.305474 I | op-osd: started deployment for osd 0 (dir=false, type=bluestore)
2019-08-07 08:40:13.305501 D | op-osd: start osd {20 /var/lib/rook/osd20 /var/lib/rook/osd20/rook-ceph-stage-primary.config ceph /var/lib/rook/osd20/keyring 30b2a1ea-9a68-4b46-89bf-aba7bf8f29ff false false true}
2019-08-07 08:40:13.314557 I | op-osd: deployment for osd 20 already exists. updating if needed
2019-08-07 08:40:13.318444 I | op-k8sutil: updating deployment rook-ceph-osd-20
2019-08-07 08:40:13.337453 D | op-k8sutil: deployment rook-ceph-osd-20 status={ObservedGeneration:4 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:58 +0000 UTC LastTransitionTime:2019-08-06 14:28:58 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:58 +0000 UTC LastTransitionTime:2019-08-06 14:28:51 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-20-654fc7c8bb" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:13.487791 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:13.531429 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:14.217528 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:14.762261 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:15.071189 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:15.342029 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-20
2019-08-07 08:40:15.342061 I | op-osd: started deployment for osd 20 (dir=false, type=bluestore)
2019-08-07 08:40:15.342085 D | op-osd: start osd {35 /var/lib/rook/osd35 /var/lib/rook/osd35/rook-ceph-stage-primary.config ceph /var/lib/rook/osd35/keyring cb017176-928b-4db4-9cb6-66629080f53b false false true}
2019-08-07 08:40:15.350680 I | op-osd: deployment for osd 35 already exists. updating if needed
2019-08-07 08:40:15.354557 I | op-k8sutil: updating deployment rook-ceph-osd-35
2019-08-07 08:40:15.366799 D | op-k8sutil: deployment rook-ceph-osd-35 status={ObservedGeneration:4 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:54 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-35-7b4689f654" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:15.861315 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:16.262456 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:17.371875 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-35
2019-08-07 08:40:17.371938 I | op-osd: started deployment for osd 35 (dir=false, type=bluestore)
2019-08-07 08:40:17.371975 D | op-osd: start osd {4 /var/lib/rook/osd4 /var/lib/rook/osd4/rook-ceph-stage-primary.config ceph /var/lib/rook/osd4/keyring b830d73e-ce5c-4915-8f5a-6d9a2df98280 false false true}
2019-08-07 08:40:17.381153 I | op-osd: deployment for osd 4 already exists. updating if needed
2019-08-07 08:40:17.385528 I | op-k8sutil: updating deployment rook-ceph-osd-4
2019-08-07 08:40:17.398210 D | op-k8sutil: deployment rook-ceph-osd-4 status={ObservedGeneration:4 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:55 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-4-6bb55d5fc6" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:18.590879 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:19.280633 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:19.402636 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-4
2019-08-07 08:40:19.402672 I | op-osd: started deployment for osd 4 (dir=false, type=bluestore)
2019-08-07 08:40:19.402699 D | op-osd: start osd {42 /var/lib/rook/osd42 /var/lib/rook/osd42/rook-ceph-stage-primary.config ceph /var/lib/rook/osd42/keyring 5c3d6a9f-c43f-4994-b4ed-91b28cd221b2 false false true}
2019-08-07 08:40:19.411530 I | op-osd: deployment for osd 42 already exists. updating if needed
2019-08-07 08:40:19.419670 I | op-k8sutil: updating deployment rook-ceph-osd-42
2019-08-07 08:40:19.435358 D | op-k8sutil: deployment rook-ceph-osd-42 status={ObservedGeneration:4 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:56 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-42-7db765d5db" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:19.657992 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:20.061060 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:20.836780 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:20.851996 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:20.930702 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:21.076740 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:21.201340 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:21.440364 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-42
2019-08-07 08:40:21.440397 I | op-osd: started deployment for osd 42 (dir=false, type=bluestore)
2019-08-07 08:40:21.440424 D | op-osd: start osd {12 /var/lib/rook/osd12 /var/lib/rook/osd12/rook-ceph-stage-primary.config ceph /var/lib/rook/osd12/keyring b9f40551-ce08-4e7f-9ef2-8652c42bf641 false false true}
2019-08-07 08:40:21.455892 I | op-osd: deployment for osd 12 already exists. updating if needed
2019-08-07 08:40:21.460117 I | op-k8sutil: updating deployment rook-ceph-osd-12
2019-08-07 08:40:21.475303 D | op-k8sutil: deployment rook-ceph-osd-12 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:51 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-12-9c8b8b7b7" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:22.104273 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:22.348831 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:22.442957 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:22.471020 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:22.612499 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:22.963047 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:23.480786 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-12
2019-08-07 08:40:23.480818 I | op-osd: started deployment for osd 12 (dir=false, type=bluestore)
2019-08-07 08:40:23.480845 D | op-osd: start osd {16 /var/lib/rook/osd16 /var/lib/rook/osd16/rook-ceph-stage-primary.config ceph /var/lib/rook/osd16/keyring b8653dd3-724d-47e4-851b-a967072ace81 false false true}
2019-08-07 08:40:23.491021 I | op-osd: deployment for osd 16 already exists. updating if needed
2019-08-07 08:40:23.495393 I | op-k8sutil: updating deployment rook-ceph-osd-16
2019-08-07 08:40:23.517384 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:23.524116 D | op-k8sutil: deployment rook-ceph-osd-16 status={ObservedGeneration:4 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:53 +0000 UTC LastTransitionTime:2019-08-06 14:28:53 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:50 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-16-56ff6dbb7c" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:23.553688 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:24.262338 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:24.762214 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:25.087740 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:25.529585 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-16
2019-08-07 08:40:25.529619 I | op-osd: started deployment for osd 16 (dir=false, type=bluestore)
2019-08-07 08:40:25.533403 I | op-osd: osd orchestration status for node k8s-worker-103.lxstage.domain.com is computingDiff
2019-08-07 08:40:25.533779 I | op-osd: osd orchestration status for node k8s-worker-103.lxstage.domain.com is orchestrating
2019-08-07 08:40:25.534661 I | op-osd: osd orchestration status for node k8s-worker-104.lxstage.domain.com is completed
2019-08-07 08:40:25.534680 I | op-osd: starting 12 osd daemons on node k8s-worker-104.lxstage.domain.com
2019-08-07 08:40:25.534710 D | op-osd: start osd {19 /var/lib/rook/osd19 /var/lib/rook/osd19/rook-ceph-stage-primary.config ceph /var/lib/rook/osd19/keyring 15fa3676-063e-4750-9f65-dc2d31106a6f false false true}
2019-08-07 08:40:25.549443 I | op-osd: deployment for osd 19 already exists. updating if needed
2019-08-07 08:40:25.554868 I | op-k8sutil: updating deployment rook-ceph-osd-19
2019-08-07 08:40:25.668646 D | op-k8sutil: deployment rook-ceph-osd-19 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:55 +0000 UTC LastTransitionTime:2019-08-06 14:28:55 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:55 +0000 UTC LastTransitionTime:2019-08-06 14:28:50 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-19-d4774b88d" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:25.892271 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:26.227858 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:27.673873 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-19
2019-08-07 08:40:27.673973 I | op-osd: started deployment for osd 19 (dir=false, type=bluestore)
2019-08-07 08:40:27.674002 D | op-osd: start osd {3 /var/lib/rook/osd3 /var/lib/rook/osd3/rook-ceph-stage-primary.config ceph /var/lib/rook/osd3/keyring 2447ad07-8fb4-4e28-b7a5-12a8e8a126ad false false true}
2019-08-07 08:40:27.684098 I | op-osd: deployment for osd 3 already exists. updating if needed
2019-08-07 08:40:27.688555 I | op-k8sutil: updating deployment rook-ceph-osd-3
2019-08-07 08:40:27.705822 D | op-k8sutil: deployment rook-ceph-osd-3 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:59 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:53 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-3-7cc6bb9c4f" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:28.628239 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:29.297049 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:29.680654 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:29.710797 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-3
2019-08-07 08:40:29.710826 I | op-osd: started deployment for osd 3 (dir=false, type=bluestore)
2019-08-07 08:40:29.710851 D | op-osd: start osd {7 /var/lib/rook/osd7 /var/lib/rook/osd7/rook-ceph-stage-primary.config ceph /var/lib/rook/osd7/keyring 62b6e926-dde5-4aac-a145-182b3e5aab80 false false true}
2019-08-07 08:40:29.729294 I | op-osd: deployment for osd 7 already exists. updating if needed
2019-08-07 08:40:29.735095 I | op-k8sutil: updating deployment rook-ceph-osd-7
2019-08-07 08:40:29.748779 D | op-k8sutil: deployment rook-ceph-osd-7 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:29:01 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:01 +0000 UTC LastTransitionTime:2019-08-06 14:28:58 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-7-d85649bcc" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:30.082590 D | op-cluster: Skipping cluster update. Updated node k8s-worker-24.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:30.854221 D | op-cluster: Skipping cluster update. Updated node k8s-worker-04.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:30.872501 D | op-cluster: Skipping cluster update. Updated node k8s-worker-33.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:30.952352 D | op-cluster: Skipping cluster update. Updated node k8s-worker-29.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:31.107298 D | op-cluster: Skipping cluster update. Updated node k8s-worker-00.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:31.222631 D | op-cluster: Skipping cluster update. Updated node k8s-worker-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:31.754339 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-7
2019-08-07 08:40:31.754377 I | op-osd: started deployment for osd 7 (dir=false, type=bluestore)
2019-08-07 08:40:31.754404 D | op-osd: start osd {38 /var/lib/rook/osd38 /var/lib/rook/osd38/rook-ceph-stage-primary.config ceph /var/lib/rook/osd38/keyring f244ef2a-c589-49e2-8850-0b0f9f42cf49 false false true}
2019-08-07 08:40:31.764311 I | op-osd: deployment for osd 38 already exists. updating if needed
2019-08-07 08:40:31.768528 I | op-k8sutil: updating deployment rook-ceph-osd-38
2019-08-07 08:40:31.780874 D | op-k8sutil: deployment rook-ceph-osd-38 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:55 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-38-5db6b86dd5" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:32.123168 D | op-cluster: Skipping cluster update. Updated node k8s-worker-34.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:32.371418 D | op-cluster: Skipping cluster update. Updated node k8s-worker-20.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:32.462918 D | op-cluster: Skipping cluster update. Updated node k8s-worker-22.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:32.495107 D | op-cluster: Skipping cluster update. Updated node k8s-worker-103.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:32.639583 D | op-cluster: Skipping cluster update. Updated node k8s-worker-104.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:32.986407 D | op-cluster: Skipping cluster update. Updated node k8s-worker-101.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:33.534035 D | op-cluster: Skipping cluster update. Updated node k8s-worker-03.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:33.574575 D | op-cluster: Skipping cluster update. Updated node k8s-worker-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:33.786633 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-38
2019-08-07 08:40:33.786663 I | op-osd: started deployment for osd 38 (dir=false, type=bluestore)
2019-08-07 08:40:33.786689 D | op-osd: start osd {43 /var/lib/rook/osd43 /var/lib/rook/osd43/rook-ceph-stage-primary.config ceph /var/lib/rook/osd43/keyring 15869916-ebe1-4283-b9c7-dda18dacee5d false false true}
2019-08-07 08:40:33.795552 I | op-osd: deployment for osd 43 already exists. updating if needed
2019-08-07 08:40:33.799694 I | op-k8sutil: updating deployment rook-ceph-osd-43
2019-08-07 08:40:33.812745 D | op-k8sutil: deployment rook-ceph-osd-43 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:29:00 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:29:00 +0000 UTC LastTransitionTime:2019-08-06 14:28:56 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-43-85fb57984c" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:34.251767 D | op-cluster: Skipping cluster update. Updated node k8s-master-01.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:34.756588 D | op-cluster: Skipping cluster update. Updated node k8s-worker-31.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:35.104142 D | op-cluster: Skipping cluster update. Updated node k8s-master-02.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:35.818366 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-43
2019-08-07 08:40:35.818404 I | op-osd: started deployment for osd 43 (dir=false, type=bluestore)
2019-08-07 08:40:35.818430 D | op-osd: start osd {11 /var/lib/rook/osd11 /var/lib/rook/osd11/rook-ceph-stage-primary.config ceph /var/lib/rook/osd11/keyring 5bd122c4-3b6a-4304-bb03-29c43a2ec2a5 false false true}
2019-08-07 08:40:35.836195 I | op-osd: deployment for osd 11 already exists. updating if needed
2019-08-07 08:40:35.840542 I | op-k8sutil: updating deployment rook-ceph-osd-11
2019-08-07 08:40:35.857830 D | op-k8sutil: deployment rook-ceph-osd-11 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:49 +0000 UTC LastTransitionTime:2019-08-06 14:28:49 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-11-75487b9bcd" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:35.912683 D | op-cluster: Skipping cluster update. Updated node k8s-worker-32.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:36.258678 D | op-cluster: Skipping cluster update. Updated node k8s-worker-30.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:37.862863 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-11
2019-08-07 08:40:37.862989 I | op-osd: started deployment for osd 11 (dir=false, type=bluestore)
2019-08-07 08:40:37.863021 D | op-osd: start osd {15 /var/lib/rook/osd15 /var/lib/rook/osd15/rook-ceph-stage-primary.config ceph /var/lib/rook/osd15/keyring ee183c15-8b68-47bc-bbd8-2aa2ab0f068e false false true}
2019-08-07 08:40:37.873025 I | op-osd: deployment for osd 15 already exists. updating if needed
2019-08-07 08:40:37.877893 I | op-k8sutil: updating deployment rook-ceph-osd-15
2019-08-07 08:40:37.891730 D | op-k8sutil: deployment rook-ceph-osd-15 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Available Status:True LastUpdateTime:2019-08-06 14:28:52 +0000 UTC LastTransitionTime:2019-08-06 14:28:52 +0000 UTC Reason:MinimumReplicasAvailable Message:Deployment has minimum availability.} {Type:Progressing Status:True LastUpdateTime:2019-08-06 14:28:59 +0000 UTC LastTransitionTime:2019-08-06 14:28:50 +0000 UTC Reason:NewReplicaSetAvailable Message:ReplicaSet "rook-ceph-osd-15-8f74594b6" has successfully progressed.}] CollisionCount:<nil>}
2019-08-07 08:40:38.654899 D | op-cluster: Skipping cluster update. Updated node k8s-worker-21.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:39.311961 D | op-cluster: Skipping cluster update. Updated node k8s-worker-23.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:39.701088 D | op-cluster: Skipping cluster update. Updated node k8s-worker-102.lxstage.domain.com was and it is still schedulable
2019-08-07 08:40:39.966762 I | op-k8sutil: finished waiting for updated deployment rook-ceph-osd-15
2019-08-07 08:40:39.966794 I | op-osd: started deployment for osd 15 (dir=false, type=bluestore)
2019-08-07 08:40:39.966819 D | op-osd: start osd {22 /var/lib/rook/osd22 /var/lib/rook/osd22/rook-ceph-stage-primary.config ceph /var/lib/rook/osd22/keyring 4e40f781-204a-4785-ab26-fa9704d2c915 false false true}
2019-08-07 08:40:39.976243 I | op-osd: deployment for osd 22 already exists. updating if needed
2019-08-07 08:40:39.980322 I | op-k8sutil: updating deployment rook-ceph-osd-22
2019-08-07 08:40:39.994441 D | op-k8sutil: deployment rook-ceph-osd-22 status={ObservedGeneration:3 Replicas:1 UpdatedReplicas:1 ReadyReplicas:1 AvailableReplicas:1 UnavailableReplicas:0 Conditions:[{Type:Avai