ceph-container: Permission denied when creating a journal in a new OSD container
Hi, thank you for your work on these containers. I am running into a small issue, and this is the output:
DEBUG:ceph-disk:OSD id is 0
DEBUG:ceph-disk:Initializing OSD...
INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.y_qXKn/activate.monmap
got monmap epoch 1
INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/lib/ceph/tmp/mnt.y_qXKn/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.y_qXKn --osd-journal /var/lib/ceph/tmp/mnt.y_qXKn/journal --osd-uuid 29cd5619-cb46-41da-be4e-05f86180b67c --keyring /var/lib/ceph/tmp/mnt.y_qXKn/keyring --setuser ceph --setgroup ceph
2015-12-03 16:18:44.042200 7f8c7bbb1940 -1 filestore(/var/lib/ceph/tmp/mnt.y_qXKn) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.y_qXKn/journal: (13) Permission denied
2015-12-03 16:18:44.042222 7f8c7bbb1940 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13
2015-12-03 16:18:44.042257 7f8c7bbb1940 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.y_qXKn: (13) Permission denied
ERROR:ceph-disk:Failed to activate
DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.y_qXKn
INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.y_qXKn
I start the OSD container using this command:
sudo docker run -d --net=host \
--privileged=true \
-v /var/lib/ceph/:/var/lib/ceph/ \
-v /dev/:/dev/ \
-e OSD_DEVICE=/dev/sda \
-e OSD_TYPE=disk \
-e MON_IP_AUTO_DETECT=4 \
-e KV_TYPE=consul \
-e KV_IP=192.168.1.6 \
-e KV_PORT=8500 \
-e OSD_FORCE_ZAP=1 \
ceph/daemon osd
The Ceph monitor should already be running; I started it like this (and verify it right after):
sudo docker run -d --net=host \
-v /var/lib/ceph/:/var/lib/ceph/ \
-e MON_NAME=ceph_node1 \
-e MON_IP=192.168.1.41 \
-e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
-e CEPH_CLUSTER_NETWORK=192.168.0.0/24 \
-e MON_IP_AUTO_DETECT=4 \
-e KV_TYPE=consul \
-e KV_IP=192.168.1.6 \
-e KV_PORT=8500 \
ceph/daemon mon
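To confirm the monitor is actually up I do something like this (the container ID placeholder is just whatever Docker assigned on my node):

sudo docker ps --filter ancestor=ceph/daemon
# run a status check inside the monitor container; it should show the mon in quorum
sudo docker exec <mon_container_id> ceph -s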
I have the default settings from populate.sh (on a side note, kviator wasn't working in a golang container, so I used curl instead).
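In case it helps, this is roughly how I push values with curl instead of kviator; the key path below is just what I used on my setup, not necessarily the exact layout populate.sh expects:

# consul KV HTTP API, same host/port as in the run commands above
KV=http://192.168.1.6:8500/v1/kv
# write one config value
curl -s -X PUT -d "192.168.0.0/24" ${KV}/ceph-config/ceph/network/public_network
# read it back to verify
curl -s "${KV}/ceph-config/ceph/network/public_network?raw"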
I also have 6 hard drives per node; how can I use multiple drives with OSD_TYPE=disk? Should I start one OSD per hard drive?
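If one OSD container per drive is the way to go, I imagine it would look roughly like this on my nodes (the device list and the loop are my own sketch, not something taken from the docs):

for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
  sudo docker run -d --net=host \
    --privileged=true \
    -v /var/lib/ceph/:/var/lib/ceph/ \
    -v /dev/:/dev/ \
    -e OSD_DEVICE=${dev} \
    -e OSD_TYPE=disk \
    -e KV_TYPE=consul \
    -e KV_IP=192.168.1.6 \
    -e KV_PORT=8500 \
    ceph/daemon osd
done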
Thank you
About this issue
- Original URL
- State: closed
- Created 9 years ago
- Comments: 86 (71 by maintainers)
Commits related to this issue
- Merge pull request #203 from fmeppo/master Fix for issue #171 — committed to ceph/ceph-container by leseb 8 years ago
Same issue here:
command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/lib/ceph/tmp/mnt.onP01K/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.onP01K --osd-journal /var/lib/ceph/tmp/mnt.onP01K/journal --osd-uuid 7902072a-e34d-41d6-b091-bdc624640650 --keyring /var/lib/ceph/tmp/mnt.onP01K/keyring --setuser ceph --setgroup disk
mount_activate: Failed to activate
unmount: Unmounting /var/lib/ceph/tmp/mnt.onP01K
command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.onP01K
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4994, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4945, in main
    args.func(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3299, in main_activate
    reactivate=args.reactivate,
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3056, in mount_activate
    (osd_id, cluster) = activate(path, activate_key_template, init)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3232, in activate
    keyring=keyring,
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2725, in mkfs
    '--setgroup', get_ceph_group(),
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2672, in ceph_osd_mkfs
    raise Error('%s failed : %s' % (str(arguments), error))
ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', '0', '--monmap', '/var/lib/ceph/tmp/mnt.onP01K/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.onP01K', '--osd-journal', '/var/lib/ceph/tmp/mnt.onP01K/journal', '--osd-uuid', '7902072a-e34d-41d6-b091-bdc624640650', '--keyring', '/var/lib/ceph/tmp/mnt.onP01K/keyring', '--setuser', 'ceph', '--setgroup', 'disk'] failed : 2016-06-30 12:56:44.787350 7f2423018800 -1 filestore(/var/lib/ceph/tmp/mnt.onP01K) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.onP01K/journal: (2) No such file or directory
2016-06-30 12:56:44.787447 7f2423018800 -1 OSD::mkfs: ObjectStore::mkfs failed with error -2
2016-06-30 12:56:44.787522 7f2423018800 -1 ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.onP01K: (2) No such file or directory
I don't think this issue is specific to Docker; it looks like a general ceph-disk issue. I have the same problem creating disks with dmcrypt (plain keys) using ceph 10.2.1 on Ubuntu 14.04.4.
Here is the upstart log:
I had the same issue with permission denied on CoreOS. When I change the script so that ceph-osd runs with --setuser root --setgroup root it works, but that doesn't seem like the right thing to do. The SELinux commands did not work on my CoreOS distro; I'm not sure if there is an equivalent. I could write to /etc/ceph and /var/lib/ceph to write the config and run the monitor, so I'm not sure this is even an issue.
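To be concrete, the workaround amounts to changing the mkfs invocation along these lines (quoted from memory; the exact line in the script and the variables it uses at that point may differ):

# instead of --setuser ceph --setgroup ceph, run the mkfs step as root
ceph-osd --cluster ceph --mkfs --mkkey -i ${OSD_ID} \
  --osd-data ${OSD_PATH} --osd-journal ${OSD_PATH}/journal \
  --setuser root --setgroup root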
The process is pretty confusing: I thought running ceph/daemon osd would create an OSD, but from this thread I gather that I need to run ceph osd create from the monitor first. Is that right? How does the number reported by ceph osd create relate to the OSD_ID? I don't quite follow here: what should the number be? Can it be the same on all nodes, or should it be unique? I am used to taking the IP of my node, removing the dots, and using it as an ID, so that it is unique in the cluster and can be determined on the fly, but that didn't work with Ceph; the number is probably too big. When I used 0 or 1 it worked, but I eventually want to provision nodes without having to fix IDs ahead of time, so if it needs to be unique I need another strategy.
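For what it's worth, my current understanding of how that number relates to the OSD_ID is roughly this (the data-directory layout is my assumption):

# ask the monitor for the next free OSD id; it returns 0, 1, 2, ...
# and it has to be unique across the whole cluster, not per node
OSD_ID=$(ceph osd create)
# the daemon then expects its data under a directory named after that id
mkdir -p /var/lib/ceph/osd/ceph-${OSD_ID}
echo "this OSD will run as osd.${OSD_ID}"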
Another issue I see is that the OSD daemon scans for IDs in a directory, which means one Docker container can run multiple OSDs, as opposed to running one OSD per container. Short of modifying the script, there is no easy way to define which OSD_ID a container should use.
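As far as I can tell, that scan is roughly equivalent to something like this (my paraphrase of the behaviour, not a quote from the script):

# each data directory matching the cluster name becomes one OSD in this container
for osd_path in /var/lib/ceph/osd/ceph-*; do
  OSD_ID=${osd_path##*-}              # e.g. /var/lib/ceph/osd/ceph-3 -> 3
  ceph-osd --cluster ceph -i ${OSD_ID} &
done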
The doc is great when you know what you're looking for, but as a beginner all this is very confusing. Thanks for enlightening me…