moby: docker does not remove btrfs subvolumes when destroying container

I receive the following error when deleting a container which created a btrfs subvolume (as happens when you run docker in docker).

# docker run --rm fedora:20 sh -c 'yum -y -q install btrfs-progs && btrfs subvolume create /test'
Public key for lzo-2.08-1.fc20.x86_64.rpm is not installed
Public key for e2fsprogs-libs-1.42.8-3.fc20.x86_64.rpm is not installed
Importing GPG key 0x246110C1:
 Userid     : "Fedora (20) <fedora@fedoraproject.org>"
 Fingerprint: c7c9 a9c8 9153 f201 83ce 7cba 2eb1 61fa 2461 10c1
 Package    : fedora-release-20-3.noarch (@fedora-updates/$releasever)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-20-x86_64
Create subvolume '//test'
FATA[0033] Error response from daemon: Cannot destroy container c9badf5fc87bb9bfb50a3ee6e5e7c840476bd704e62404c9136aab4d27239d1e: Driver btrfs failed to remove root filesystem c9badf5fc87bb9bfb50a3ee6e5e7c840476bd704e62404c9136aab4d27239d1e: Failed to destroy btrfs snapshot: directory not empty 

Info:

# docker info
Containers: 22
Images: 47
Storage Driver: btrfs
Execution Driver: native-0.2
Kernel Version: 3.13.2-gentoo
Operating System: Gentoo/Linux
CPUs: 8
Total Memory: 15.64 GiB
Name: whistler
ID: RL3I:O6RS:UJRN:UU74:WAGE:4X5B:T2ZU:ZRSU:BN6Q:WN7L:QTPM:VCLN
Username: phemmer
Registry: [https://index.docker.io/v1/]
WARNING: No swap limit support

# docker version
Client API version: 1.16
Go version (client): go1.3.3
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8

About this issue

  • State: closed
  • Created 9 years ago
  • Comments: 49 (15 by maintainers)

Most upvoted comments

Since people here say this still happens, maybe it would be a good idea to reopen this ticket?

To anyone else having to derp with this: I had to do something similar to @TomasTomecek

┌[root@lovell] 
└[/var/lib/docker]> btrfs subvolume delete btrfs/subvolumes/*

^that worked for me 😉

Anyone know a way to distinguish the orphan subvolumes? I don’t want to blow away my running containers.

Please reopen and fix #9939!

I just had to cleanup due to the same reason!

This is still an issue 😢

@johnharris85 Shoot me an email at dockergist@mailinator.com

@thechane I wrote a script that traverses the docker structures, finds which directories are orphaned, and optionally deletes them. If it’d be useful to people I can put it up on GitHub and share it here. I’ve only done basic testing on Oracle Linux but it should apply to other OSes.

Latest news: on ticket #38207, which tries to reopen this one, @thaJeztah needs someone who is capable of reproducing the error and providing all the details asked for by the issue template. There are many more listeners on this thread, so if someone is still experiencing this issue, I think there might be a chance.

Using btrfs commands I could remove those sub-volumes, e.g.:

btrfs subvolume delete eb669bae4f4aa17f3c432d956f481146e4ac77e3f1803fee15e1f2b17787510d-init

A fix embedded in “docker clean” would be nice.

Awesome, thank you!

@hcoin, this indeed helps:

$ docker system prune -a --volumes
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all volumes not used by at least one container
  - all images without at least one container associated to them
  - all build cache

Are you sure you want to continue? [y/N] y
Deleted Volumes:
...
Deleted Images:
...
Total reclaimed space: 5.033GB

Anyway, it seems that btrfs needs some periodic cleaning operations. On my side, I use this function in my .bashrc to do it:

# drop any alias with the same name so the function definition below wins
unalias btrfsCleanup 2>/dev/null
btrfsCleanup() {
    echo "btrfsCleanup"
    sudo btrfs fi show                       # list btrfs filesystems
    sudo btrfs fi df /                       # space usage of the root filesystem
    sudo btrfs fi usage /
    sudo btrfs balance start -dusage=80 /    # compact data chunks that are <=80% full
    sudo btrfs scrub start -d /              # verify checksums (runs in the background)
    sleep 120                                # give balance/scrub time to make progress
    sudo btrfs fi df /var                    # same again for the /var filesystem
    sudo btrfs fi usage /var
    sudo btrfs balance start -dusage=80 /var
    sudo btrfs scrub start -d /var
    sleep 120
    sudo btrfs fi df /var
    sudo btrfs fi usage /var
    echo "Done"
}

Using btrfs commands I could remove those sub-volumes, e.g.:

btrfs subvolume delete eb669bae4f4aa17f3c432d956f481146e4ac77e3f1803fee15e1f2b17787510d-init

Thx devopxy, that worked for me too… This command deletes all sub-volumes present in the current directory: btrfs subvolume delete *

duplicate of https://github.com/docker/docker/issues/7773, closing, lmk tho if you believe differently

ONLY when ceph -s reports ‘totally normal, everything you care about is running and all is perfect’ should you run docker system prune -a --volumes. If you do it under any other ceph operating condition: not good.

The following helps:

pushd /var/lib/docker/btrfs/subvolumes/
btrfs subvolume delete *
popd

This may break your docker build cache and other stuff

Just ran into this today and had to clean it out. There doesn’t appear to be any real solution…is there?

I've deactivated btrfs for now and I'm using overlay2.

I’m having this problem too.

I had a problem with this today (btrfs seemed to be out of space because of it) with version 1.9.1.

Any progress?

Still hitting this. Are you planning to address this anytime soon?