ceph-container: OSD unable to join CRUSH

I start a new OSD.

2017-10-05T15:49:36+02:00 vm02 /ceph-osd-sda  2017-10-05T13:49:36Z vm02 confd[36443]: INFO Backend set to etcd
2017-10-05T15:49:36+02:00 vm02 /ceph-osd-sda  2017-10-05 13:49:36  /entrypoint.sh: Device detected, assuming ceph-disk scenario is desired
2017-10-05T15:49:36+02:00 vm02 /ceph-osd-sda  2017-10-05T13:49:36Z vm02 confd[36443]: INFO Target config /etc/ceph/ceph.conf out of sync
2017-10-05T15:49:36+02:00 vm02 /ceph-osd-sda  2017-10-05T13:49:36Z vm02 confd[36443]: INFO Backend nodes set to http://10.3.60.25:2379
2017-10-05T15:49:36+02:00 vm02 /ceph-osd-sda  true
2017-10-05T15:49:36+02:00 vm02 /ceph-osd-sda  2017-10-05T13:49:36Z vm02 confd[36443]: INFO Starting confd
2017-10-05T15:49:36+02:00 vm02 /ceph-osd-sda  2017-10-05 13:49:36  /entrypoint.sh: Adding bootstrap keyrings.
2017-10-05T15:49:36+02:00 vm02 /ceph-osd-sda  2017-10-05 13:49:36  /entrypoint.sh: Preparing and activating /dev/sda
2017-10-05T15:49:36+02:00 vm02 /ceph-osd-sda  2017-10-05T13:49:36Z vm02 confd[36443]: INFO Target config /etc/ceph/ceph.conf has been updated
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph --setuser ceph --setgroup ceph
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph --setuser ceph --setgroup ceph
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  HEALTH_WARN 418 pgs backfill_wait; 24 pgs backfilling; 389 pgs degraded; 2 pgs recovery_wait; 393 pgs stuck unclean; 389 pgs undersized; recovery 426849/3885729 objects degraded (10.985%); recovery 212687/3885729 objects misplaced (5.474%); noout flag(s) set
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_type
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  get_dm_uuid: get_dm_uuid /dev/sda uuid path is /sys/dev/block/8:0/dm/uuid
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph --setuser ceph --setgroup ceph
2017-10-05T15:49:37+02:00 vm02 /ceph-osd-sda  command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

After that the Docker container dies, restarts, and then enters a restart loop with this log:

2017-10-05T15:49:48+02:00 vm02 /ceph-osd-sda  true
2017-10-05T15:49:48+02:00 vm02 /ceph-osd-sda  2017-10-05 13:49:48  /entrypoint.sh: Adding bootstrap keyrings.
2017-10-05T15:49:48+02:00 vm02 /ceph-osd-sda  2017-10-05T13:49:48Z vm02 confd[45664]: INFO Backend nodes set to http://10.3.60.25:2379
2017-10-05T15:49:48+02:00 vm02 /ceph-osd-sda  2017-10-05 13:49:48  /entrypoint.sh: Bootstrapped OSD found; activating /dev/sda
2017-10-05T15:49:48+02:00 vm02 /ceph-osd-sda  2017-10-05T13:49:48Z vm02 confd[45664]: INFO Starting confd
2017-10-05T15:49:48+02:00 vm02 /ceph-osd-sda  2017-10-05T13:49:48Z vm02 confd[45664]: INFO Backend set to etcd
2017-10-05T15:49:49+02:00 vm02 /ceph-osd-sda  /dev/nvme0n1p1

The OSDs aren’t added to the OSD tree. I don’t know what is wrong, since I don’t really see any errors in the log.

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Comments: 19 (15 by maintainers)

Most upvoted comments

@Raboo, according to #805, our issue is already fixed; I need to test this.

To create your own mirror:

  1. fork ceph-docker
  2. git clone your fork locally
  3. git reset --hard [COMMIT]
  4. git push origin HEAD --force
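The steps above boil down to rewinding your fork's branch to a pinned commit and force-pushing it. Here is a minimal, self-contained sketch of that sequence; it uses a local bare repository as a stand-in for your GitHub fork (the paths and commit messages are hypothetical), since in practice you would clone your real fork and substitute the commit you want to pin for `[COMMIT]`.

```shell
set -e
tmp=$(mktemp -d)

# Stand-in for your fork of ceph-docker (a real fork on GitHub in practice).
git init --bare "$tmp/fork.git"

# "git clone your fork locally" -- here we just init and add the remote.
git init "$tmp/clone" && cd "$tmp/clone"
git remote add origin "$tmp/fork.git"

# Two hypothetical commits; `pin` plays the role of [COMMIT].
git -c user.email=a@b -c user.name=a commit --allow-empty -m "commit A"
pin=$(git rev-parse HEAD)
git -c user.email=a@b -c user.name=a commit --allow-empty -m "commit B"
git push origin HEAD                 # the fork now tracks the newer commit B

git reset --hard "$pin"              # step 3: rewind to the pinned commit
git push origin HEAD --force         # step 4: fork now serves exactly that commit
```

After the force-push, the fork's branch head is exactly the pinned commit, which is what makes the Docker Hub automated build below produce an image of that specific commit.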

After that, create an automated build on Docker Hub and link it to your fork. In the build settings of your Docker Hub repo, configure it to build with the following settings.

Type: Branch
Name: master
Dockerfile Location: /ceph-releases/[RELEASE]/[OS]/[OS_VERSION]/daemon
Docker Tag Name: tag-build-master-[RELEASE]-[OS]-[OS_VERSION]

Then trigger a build; after a few minutes you will have a mirror of the specific commit you chose.