moby: docker does not remove btrfs subvolumes when destroying container
I receive the following error when deleting a container which created a btrfs subvolume (as happens when you run docker in docker).
# docker run --rm fedora:20 sh -c 'yum -y -q install btrfs-progs && btrfs subvolume create /test'
Public key for lzo-2.08-1.fc20.x86_64.rpm is not installed
Public key for e2fsprogs-libs-1.42.8-3.fc20.x86_64.rpm is not installed
Importing GPG key 0x246110C1:
Userid : "Fedora (20) <fedora@fedoraproject.org>"
Fingerprint: c7c9 a9c8 9153 f201 83ce 7cba 2eb1 61fa 2461 10c1
Package : fedora-release-20-3.noarch (@fedora-updates/$releasever)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-20-x86_64
Create subvolume '//test'
FATA[0033] Error response from daemon: Cannot destroy container c9badf5fc87bb9bfb50a3ee6e5e7c840476bd704e62404c9136aab4d27239d1e: Driver btrfs failed to remove root filesystem c9badf5fc87bb9bfb50a3ee6e5e7c840476bd704e62404c9136aab4d27239d1e: Failed to destroy btrfs snapshot: directory not empty
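For what it's worth, the leftover appears to be the subvolume created inside the container: the btrfs graph driver keeps each container's root filesystem as a btrfs snapshot, and a subvolume created inside the container (the /test above) ends up nested below that snapshot, which is what makes the destroy fail with "directory not empty". Assuming the default /var/lib/docker graph directory, the nested subvolume can be confirmed with something like:
# btrfs subvolume list /var/lib/docker | grep c9badf5fc87b
Any entry whose path ends in .../test under btrfs/subvolumes/c9badf5fc87b... is the subvolume blocking removal.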
Info:
# docker info
Containers: 22
Images: 47
Storage Driver: btrfs
Execution Driver: native-0.2
Kernel Version: 3.13.2-gentoo
Operating System: Gentoo/Linux
CPUs: 8
Total Memory: 15.64 GiB
Name: whistler
ID: RL3I:O6RS:UJRN:UU74:WAGE:4X5B:T2ZU:ZRSU:BN6Q:WN7L:QTPM:VCLN
Username: phemmer
Registry: [https://index.docker.io/v1/]
WARNING: No swap limit support
# docker version
Client API version: 1.16
Go version (client): go1.3.3
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8
About this issue
- State: closed
- Created 9 years ago
- Comments: 49 (15 by maintainers)
Commits related to this issue
- [circle] using docker 1.9, https://github.com/docker/docker/issues/9939 — committed to Sabayon/docker-stage3-base-amd64 by mudler 9 years ago
- Update docker commands to solve error https://github.com/docker/docker/issues/9939 — committed to tablexi/chef-sendmail-ses by phoolish 7 years ago
Since people here say this still happens, maybe it would be a good idea to reopen this ticket?
To anyone else having to derp with this, I had to do something similar to @TomasTomecek
^that worked for me 😉
@rgbkrk https://github.com/docker/docker/blob/620339f166984540f15aadef2348646eee9a5b42/contrib/nuke-graph-directory.sh 😉
Anyone know a way to distinguish the orphan subvolumes? I don’t want to blow away my running containers.
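One rough way to narrow it down, assuming the default /var/lib/docker graph directory and an older daemon where the directory names under btrfs/subvolumes correspond to container and image layer IDs, is to print every subvolume directory the daemon no longer knows about and review the list by hand before touching anything:

# collect every container and image ID the daemon still knows about
known=$(docker ps -aq --no-trunc; docker images -aq --no-trunc)
for d in /var/lib/docker/btrfs/subvolumes/*; do
    id=$(basename "$d")    # directory name is a layer/container ID, possibly with an -init suffix
    echo "$known" | grep -q "${id%-init}" || echo "possible orphan: $d"
done

This only prints candidates and deliberately deletes nothing; a name that fails to match could still belong to something you care about.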
Loop:
Please reopen and fix #9939!
I just had to cleanup due to the same reason!
This is still an issue 😢
@johnharris85 Shoot me an email at dockergist@mailinator.com
@thechane I wrote a script that traverses the docker structures, finds which directories are orphaned, and optionally deletes them. If it'd be useful to people I can put it up on GitHub and share here. I've only done basic testing on Oracle Linux, but it should apply to other OSes.
Latest news on ticket #38207 (which is trying to get this one reopened): @thaJeztah needs someone who can reproduce the error and provide all the details asked for by the issue template. I think there are many more listeners on this thread, so if someone is still experiencing this issue, there might be a chance.
Using btrfs commands I could remove those sub-volumes, e.g.:
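For instance, assuming the default /var/lib/docker location and a leftover directory for a hypothetical <orphaned-id>, the nested subvolume (the /test from the original report) has to be deleted before its parent:

# btrfs subvolume delete /var/lib/docker/btrfs/subvolumes/<orphaned-id>/test
# btrfs subvolume delete /var/lib/docker/btrfs/subvolumes/<orphaned-id>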
A fix embedded in “docker clean” would be nice.
Awesome, thank you!
@hcoin, This indeed helps:
Anyway, it seems that btrfs needs some periodic cleaning. On my side, I use this function in my .bashrc to do it:
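A minimal sketch of such a helper, assuming the leftover subvolumes sit under /var/lib/docker/btrfs/subvolumes; it relies on every btrfs subvolume root having inode number 256 to find nested subvolumes and remove them before their parent:

docker_btrfs_clean() {
    # delete a leftover container subvolume plus anything nested inside it
    local target=$1
    # subvolume roots always have inode 256; -depth removes the deepest ones first
    find "$target" -mindepth 1 -depth -inum 256 -exec btrfs subvolume delete {} \;
    btrfs subvolume delete "$target"
}

Call it as docker_btrfs_clean /var/lib/docker/btrfs/subvolumes/<orphaned-id> on a directory you have already confirmed is orphaned.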
Thx devopxy, that worked for me too… This command deletes all sub-volumes present in the current directory: btrfs subvolume delete *
Duplicate of https://github.com/docker/docker/issues/7773, closing; let me know though if you believe differently.
ONLY WHEN ceph -s reports ‘totally normal, everything you care about is running and all is perfect’ should you run docker system prune -a --volumes. If you do it under any other ceph operating condition – not good.
This may break your docker build cache and other stuff
Just ran into this today and had to clean it out. There doesn’t appear to be any real solution… is there?
I’ve deactivated btrfs for now and I’m using overlay2
I’m having this problem too.
@TomasTomecek me too
I had a problem with this today (BTRFS seemed to be out of space because of it) with version 1.9.1
Any progress?
Still hitting this. Are you planning to address this anytime soon?