moby: Docker does not free up disk space after container, volume and image removal
Versions & co
Docker
Docker version
$ docker version
Client:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64
Server:
Version: 1.8.2
API version: 1.20
Go version: go1.4.2
Git commit: 0a8c2e3
Built: Thu Sep 10 19:19:00 UTC 2015
OS/Arch: linux/amd64
Docker info:
$ docker info
Containers: XXX
Images: XXX
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: XXX
Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.0-26-generic
Operating System: Ubuntu 14.04.3 LTS
CPUs: 1
Total Memory: XXX GiB
Name: XXX
ID: XXXX:XXXX:XXXX:XXXX
Operating system
Linux 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Issue
Here is how I currently deploy my application:
- Build a new image based on a new version of my application code
- Up a new container based on the image created in
1
- Remove the previous container and its volume with the command
docker rm -v xxxxx
- Remove all the unused images with
docker rmi $(docker images -q)
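A minimal sketch of that deploy cycle as one script (the image name, container names, and tagging scheme are hypothetical placeholders, not details from the original post):

```bash
#!/bin/sh
# Hypothetical deploy script illustrating the four steps above.
set -e

TAG="$(date +%Y%m%d%H%M%S)"
NEW_CONTAINER="myapp-$TAG"
OLD_CONTAINER="$1"            # name or id of the container from the previous deploy

# 1. Build a new image based on the new version of the application code
docker build -t "myapp:$TAG" .

# 2. Start a new container based on the image created in step 1
docker run -d --name "$NEW_CONTAINER" "myapp:$TAG"

# 3. Remove the previous container and its anonymous volumes
if [ -n "$OLD_CONTAINER" ]; then
  docker rm -v "$OLD_CONTAINER"
fi

# 4. Remove all unused images (images still in use simply fail to delete)
docker rmi $(docker images -q) 2>/dev/null || true
```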
However, little by little, I’m running out of disk space. I made sure I don’t have any orphan volumes, unused containers and images, etc…
I found a forum post saying the following:
It’s a kernel problem with devicemapper, which affects the RedHat family of OS (RedHat, Fedora, CentOS, and Amazon Linux). Deleted containers don’t free up mapped disk space. This means that on the affected OSs you’ll slowly run out of space as you start and restart containers.
The Docker project is aware of this, and the kernel is supposedly fixed in upstream (https://github.com/docker/docker/issues/3182).
My machine is a Linux host on AWS, so I wonder whether the kernel I'm using could be related to the issue referenced above. If not, does anyone have an idea about what could be the origin of this problem? I spent the whole day looking for a solution, but could not find one so far 😦
About this issue
- Original URL
- State: open
- Created 8 years ago
- Reactions: 68
- Comments: 137 (25 by maintainers)
I have the same issue.
I stopped all docker containers, however when I run this command:
`sudo lsof -nP | grep '(deleted)'`
I get:
Only when I do `sudo service docker restart` does it free the space.
Here is the best picture to describe it:
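The command output and the picture are not reproduced above; as a rough text-only equivalent, a sketch (assuming GNU lsof and awk, run as root) that sums the space still pinned by deleted files the docker daemon keeps open:

```bash
# Column 7 of lsof's default output is SIZE/OFF (the file size in bytes for regular files).
# Files opened more than once are counted more than once, so treat this as an estimate.
sudo lsof -nP | grep docker | grep '(deleted)' \
  | awk '{ total += $7 } END { printf "%.2f GiB pinned by deleted files\n", total/1024/1024/1024 }'
```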
I have found a work-around in the meantime. It's a little tedious, but it clears up space on my Ubuntu 16.04 VM. Essentially, it's a "double-tap" on the system. Running as root or any system sudoer (and not cd'd into your /var/lib/docker/aufs directory when running these commands):
- stop the docker service and clear out /var/lib/docker/
- check the service with `sudo systemctl status docker`
- confirm the /var/lib/docker/aufs directory is empty with `ls /var/lib/docker/aufs/`
- check the reclaimed space with `df`. Happy times!

Might be worth checking if it's actually /var/lib/docker that's growing in size / taking up your disk space, or a different directory. Note: to remove unused ("dangling") images, you can run `docker rmi $(docker images -aq --filter dangling=true)`.
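A quick way to do that check (a sketch; run as root so du can read all of docker's directories):

```bash
# Show the largest top-level directories under /var/lib/docker, biggest first,
# without crossing filesystem boundaries.
sudo du -xsh /var/lib/docker/* | sort -rh | head -n 15
```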
Same problem here. `docker image prune -a` helped in my case.

`docker info`:

Containers: 27
 Running: 27
 Paused: 0
 Stopped: 0
Images: 25
Server Version: 18.03.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: l1ghr85txalrh7ykc41lry2uu
 Is Manager: true
 ClusterID: uhf7o5vxegl0xp8p2qisnenh9
 Managers: 4
 Nodes: 4
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 10.139.0.4
 Manager Addresses:
  10.139.0.11:2377
  10.139.0.12:2377
  10.139.0.3:2377
  10.139.0.4:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.13.0-38-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 40
Total Memory: 125.6GiB
Name: worker2
ID: 7CRN:SS6K:ST4S:63TY:RLSZ:6LXB:PNOF:4Y4G:KHEA:NXQ6:X3QR:BWJS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
@HWiese1980 docker (up until docker 17.06) removed containers when `docker rm --force` was used, even if there was an issue with removing the actual layers (which could happen if the process running in a container was keeping the mount busy); as a result, those layers got "orphaned" (docker no longer had a reference to them) and were left around. Docker 17.06 and up will (in the same situation) keep the container registered (in a "dead" state), which allows you to remove the container (and layers) at a later stage.

However, if you've been running older versions of docker and have a cleanup script that uses `docker rm -f`, chances are those layers accumulated over time. You can choose to do a "full" cleanup (you'll lose all your local images, volumes, and containers, so only do this if there's no important information in them): stop the docker service, and `rm -rf /var/lib/docker`. Alternatively, you can stop the docker service, move the directory (as a backup), and start the service again.

In your situation, it looks like there's no (or very little) data in the `volumes` directory, so if there are no images or containers on your host, it may be "safe" to just remove the `/var/lib/docker` directory. If you're sure you have no images, containers, or volumes that you need to keep, you can stop the docker daemon (`sudo systemctl stop docker`) and remove that directory (`sudo rm -rf /var/lib/docker`). Obviously: be very careful when typing those commands 😅

I had this problem for a very long time (about 1 year). And it suddenly stopped, about a month ago.
During the last year I tried the various commands listed in this thread. Running `docker system prune -af` periodically can help, but it doesn't solve the real problem, only its symptoms.

As I mentioned, this problem stopped in my case about a month ago. My drive was completely full and, whatever I tried, it remained full. I decided to track down the exact source of the disk usage. I found a log file of ~30 GB. I looked into it and found error messages output by one of my scripts (running inside one of my docker containers). As it turned out, the script itself was not working properly and was throwing endless invalid errors.

There was a 3-step solution: (1) I erased the log file and freed the ~30 GB of space. (2) I stopped the script, preventing the log file from growing out of control. (3) I debugged the script and fixed the true cause of the issue.

I have never had any such issues since. I just checked, and I have over 30 GB of free space as of writing this message.

My guess is that this is not a docker issue or bug. It's a symptom of our own bugs, running inside one or more docker containers.
Good luck finding the true cause of your problems!
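A related sketch for spotting oversized container logs on the host and capping docker's own json-file logs going forward (the log in the story above was the script's own file, so this only covers the case where the culprit is a container's stdout log; the 50m/3 limits are arbitrary examples, and the tee command overwrites any existing daemon.json, so merge by hand if you already have one):

```bash
# Find the biggest json-file container logs (the default logging driver keeps them
# under /var/lib/docker/containers/<id>/<id>-json.log).
sudo du -sh /var/lib/docker/containers/*/*-json.log 2>/dev/null | sort -rh | head -n 10

# Cap log size for future containers, then restart the daemon to apply it.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "50m", "max-file": "3" }
}
EOF
sudo systemctl restart docker
```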
This issue is definitely still happening. We have a Jenkins builder node with a nightly cronjob doing `docker system prune -af`, and we were still losing disk space under `/var/lib/docker/aufs`. Docker version is 18.03.0-ce. We were running Debian with kernel 3.16 and have now upgraded to kernel 4.9 and switched to overlay2; we hope this will be resolved now.

Is there already a solution for this issue? /var/lib/docker/aufs takes a damn lot of space on my disk. There are no images and containers left anymore. I can't get rid of it without manually deleting it, which I'm afraid of doing because I don't know which of that data is still needed.
I'm suffering a similar problem on Debian Jessie. I freed ~400 MB with a service restart, but have 2.1 GB of old container garbage inside /var/lib/docker/aufs with just one container running.
I think I got hit by the same thing. I installed docker earlier today on this new laptop, so it was clean before, and built a few images to test. Getting low on space, I took care to call `docker rm` on any stopped container (produced by my builds; I never used `-f` to remove them), and then `docker rmi` on all untagged images. Currently I have this. I already restarted docker, which didn't change anything. I think I'll remove everything ending with `-removing` in the `diff/` directory; thankfully nothing important depends on the docker images on this laptop, but still, I wouldn't like this to happen on a server.

This is also true of Ubuntu. It's not a Fedora or RedHat problem, it's a docker problem…
Don't ever remove /var/lib/docker/aufs unless you really mean to: it will make your existing containers completely useless.
Have you tried `docker system prune`? Also, when you remove a container, do you use the `-v` option? It seems that volumes are not removed by default when containers are removed via `docker rm`.

@stephencookefp if you can afford to lose the containers without fuss, I've got the following in my `.bashrc`/`.profile`. I then run `docker_rm_all && docker_rmi_all && docker_murder_aufs`. If you don't have images or containers, you don't need the `docker_rm_all` or `docker_rmi_all`.
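The function definitions themselves are not shown above; a rough guess at what such `.bashrc` helpers could look like (these are not the commenter's originals, and `docker_murder_aufs` deletes all aufs layer data):

```bash
# Hypothetical reconstructions of the helpers referenced above.
docker_rm_all() {
  # Remove every container (running or stopped) together with its anonymous volumes.
  docker rm -f -v $(docker ps -aq) 2>/dev/null
}

docker_rmi_all() {
  # Remove every image.
  docker rmi -f $(docker images -q) 2>/dev/null
}

docker_murder_aufs() {
  # Stop the daemon, wipe the leaked aufs layer data, and start the daemon again.
  # WARNING: destroys all image and container filesystems stored under aufs.
  sudo service docker stop
  sudo rm -rf /var/lib/docker/aufs
  sudo service docker start
}
```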
I think there are two different issues here, with different "solutions": for the first, `docker system prune -a` frees the space; the second is docker holding on to files that have already been deleted. One can determine that issue 2 is one's issue by running `sudo lsof -nP | grep docker | grep '(deleted)'`, as @groyee has shown earlier in this thread. If you get output, then docker is holding on to freed space.

Thank you all so much! I wish I'd read this thread from the bottom up 😃
I have a similar problem where clearing out my volumes, images, and containers did not free up the disk space. I traced the culprit to this file, which is 96 GB: /Users/MyUserAccount/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
However, looks like this is a known issue for Macs: https://github.com/docker/for-mac/issues/371 https://github.com/docker/docker/issues/23437
`17.09.0~ce-0~ubuntu` definitely solved it for me.

Removed the comment and blocked per the community guidelines
edit: @soar removed your last comment as well, to keep the discussion on-topic
I confirm that after upgrading from 17.06.1-ce to 17.06.2-ce the size of the /var/lib/docker/aufs/diff directory dropped from 25G to 539M. Using an Azure VM, Ubuntu 16.04
Found a slightly less radical way of cleaning up space. It seems it does not require reinstalling docker-ce (maybe not even a restart), but it still requires removing all images and containers, as well as manually removing directories and files:

Not 100% sure whether the "l" directory needs to exist or whether the overlay2 storage backend will recreate it if it does not exist.

If the sha256 hashes are not removed, "docker pull" will fail with:

It looks like even if all images are removed, the hashes still exist. Maybe prune is failing to remove some image layers. I also tried "docker image prune --force" and those hashes were not removed.

Now everything is cleaned up, but I will do some more testing in a few hours/days when the problem appears again.
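The exact directories and files removed in that comment are not listed above; as a heavily hedged sketch of what such a manual overlay2 cleanup might look like (an educated guess, not the commenter's exact steps; it assumes every image and container is disposable):

```bash
# DANGER: deletes all overlay2 layer data and image metadata (including the
# sha256 content hashes mentioned above). Only do this if nothing needs to be kept.
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/overlay2/*        # layer data plus the "l" symlink dir
sudo rm -rf /var/lib/docker/image/overlay2/*  # image metadata / sha256 hashes
sudo systemctl start docker
```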
Hello! I have the same problem, but with 7.4G of space used. Any ideas?
@EChaffraix be careful with `prune -af`, where `-a` stands for "all" and tries to wipe all unused images forcefully; I was bitten by it earlier.

After reading this thread I have upgraded one of my build systems from:

to

But nothing has changed. `aufs/diff` still weighs 40 GB:

With no containers and images in the system:
Yep, I already confirmed that 😦 To be more accurate, the folders growing in size are `/var/lib/docker/aufs/diff` and `/var/lib/docker/aufs/mnt`. The size of any other folder under `/var/lib/docker` is not really significant.

Thanks, I'm already doing that. On each deployment, I remove the previous container with the `-v` option to also remove the associated volume. Which is the reason why I don't understand why my disk space is decreasing over time 😦
I faced a similar issue to @albertca, but on Ubuntu 22.04 and a smaller-scale project. `docker system prune -a -f` doesn't delete everything; there is still a huge amount of data in `/var/lib/docker`, mostly in the `overlay2` (more than 1 GB) and `buildkit` (300 MiB) directories. I also have another issue that might be related: `docker system df` throws an error:

If you're on macOS and use Docker Desktop, the docker daemon runs in a lightweight VM. That VM uses a disk image for the Linux filesystem (`~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw`), in which docker stores its images, containers, and volumes. The disk image uses a "sparse" format, which means the disk image has a "maximum" size set, but only uses the amount of physical disk space that's actually used. Unfortunately, many tools on macOS (including the Finder) do not show the size correctly, and show the "maximum" size of the image, not the actual size.
You can run the following command to see both the "actual" and "maximum" size of the disk image:
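The command itself is missing above; one way to see both numbers (a sketch, assuming the default Docker Desktop path mentioned earlier) is:

```bash
# The first column (-s, in KB because of -k) is the space actually allocated;
# the size field shows the sparse file's maximum size.
ls -klsh ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
```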
In this case, the "maximum" size is 60G, and the actual size is 17583868k (so ~17.5G). The physical size of the image should match the actual amount of data used inside the VM, but there may be some delay between cleaning up storage inside the VM (e.g., using the `docker system prune` command) and the disk image shrinking (the VM periodically runs a "trim" to shrink the disk image's physical size, but may pick an "idle" moment to do so).

You can use `docker system df` to see how much space is taken up inside the VM, e.g.:

Note that logs related to containers are not taken into account, nor are "other" files inside the VM (`docker`, `containerd`, `kubernetes` itself, as well as system logs, etc.).

A production environment does not mean "hand-holding". That's the way package managers and installers work: updating a package will restart the service; for example, updating nginx with `apt install nginx` would also restart the web server.

I am seeing large issues with space below /var/lib/docker/aufs/diff.
Has a fix been released, or is there a workaround using a different storage driver?

Which is a bit of a joke for running four containers. How can I get this space back?
Guys, simply upgrade to `v17.06.2-ce-rc1` or `v17.07.0-ce`: https://download.docker.com/linux/ubuntu/dists/xenial/pool/test/amd64/

Confirming this issue. Can you folks at least attach a warning when it starts taking up too much space? I do something like this, and it becomes very noticeable fairly quickly what the issue is:

Is there an official work-around to this issue or, better yet, when are you planning to actually fix it?
I’ve resolved this issue (when running WSL2) by manually recompacting the WSL2 ext4.vhdx size:
Same shit. Upgraded docker to 17.05.0, cleaned all the orphaned trash with `docker system prune -af`, removed unnecessary images. But it did not help very much. There is still a lot of data in `/var/lib/docker/aufs/diff`.

Grepping for the largest folder's id in `/var/lib/docker/image/aufs/layerdb/mounts/*/mount-id` gives an id which does not match any container id.
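Building on that check, a sketch for testing whether a large directory under aufs/diff is still referenced by any container (the directory name is a placeholder):

```bash
# Substitute the name of a suspiciously large directory under /var/lib/docker/aufs/diff.
DIR_ID="0123456789abcdef"

# Every container records the aufs mount id it uses in a mount-id file;
# no output here means no container references that directory (it is orphaned).
grep -l "$DIR_ID" /var/lib/docker/image/aufs/layerdb/mounts/*/mount-id
```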
Two commands cleaned a lot of space:
`docker system prune -af`
`docker system prune --volumes`
I reduced docker's disk usage from 80G to 20G.
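Note that the two can be combined into one pass (a sketch; with `-a`, `--volumes` also removes every unused volume, so skip it if named volumes must survive):

```bash
# Remove stopped containers, unused networks, all unused images, build cache,
# and unused volumes in a single command.
docker system prune -af --volumes
```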
Well, I'll translate what our friend tried to say: updating nginx does not produce any downtime. It's true, many packages take care of that; they don't induce any downtime without notice.

@Mihai-Mircea This is probably related to a docker daemon restart during the update, not to the updated code itself. When restarting, it frees some (leaked?) references to files and allows them to be really deleted. As far as I can tell, this is a bug.
Thanks for a great product! I’m also having this issue.
MacOS 10.12.6
The only way I'm able to solve this is to go through the Docker GUI and reset all data.
Please let me know if there’s any more information I can give you.
What I found was that I upgraded docker-ce-17.06.1 -> docker-ce-17.06.2, and when I restarted dockerd it started to remove those `-removing` dirs. It took a while before `docker ps` started to respond.

But that was, of course, on Linux directly; the upgrade in the KVM on OS X may be different…
Thank you @Lewiscowles1986 - freed up 22GB; my server was unusable. Docker, Inc - this issue should really be prioritized - what the hell is going on?

Update: 3 days and already full again. I'm switching to `overlay2` and will see if it clears up the problem.

`docker-ce_17.06.2` is in stable now, so an `apt update && apt upgrade` should do it.

Had the same issue on 17.06.1-ce build 874a737; over 600GB used in /var/lib/docker/aufs with no images in `docker ps -a` or `docker images -a`. Deleting /var/lib/docker/aufs and reinstalling docker resolved the issue at the time, but I can see the folder growing disproportionately to the containers available. This is on a CI server with heavy docker utilization.
Just hit this on a server and a laptop, both running Ubuntu 16.04 and Docker 17.06. 30 GB in `/var/lib/docker/aufs`, while the actual images and volumes account for just 5 GB. Running `docker system prune -af` freed up only 120 MB.

Is it safe to remove all containers and `/var/lib/docker/aufs/*`, then recreate the containers? Will named volumes stay intact?

Anyhow, we need a simple workaround that can safely remove all the clutter and can be set up in cron.
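For the cron part, a sketch of what a nightly cleanup entry could look like (the schedule and log path are arbitrary examples; per the rest of this thread, prune does not reclaim layers orphaned by older docker versions, and `--volumes` also deletes unused named volumes):

```bash
# /etc/cron.d/docker-prune (sketch): prune unused docker data every night at 03:30.
30 3 * * * root /usr/bin/docker system prune -af --volumes >> /var/log/docker-prune.log 2>&1
```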
This issue should be reopened. I can also report a cluttered `/var/lib/docker/aufs` directory.

@vitalyzhakov is this a fresh installation, or a version of docker that was upgraded from an earlier version? Older versions of docker (when using `docker rm --force`) forcibly removed containers, and when their filesystem was in use, those layers could be left behind after the container was removed. So if you upgraded from an earlier version and used `docker rm --force`, it's possible those layers are leftovers from older versions. However, without more information that would be hard to tell.
GARBAGE="/var/lib/docker/aufs/diff"
du -hd 1 "$GARBAGE" | sort -hrk 1 | head -25
find "$GARBAGE" -maxdepth 1 -name '*-removing' -exec rm -rf '{}' \;

This will be helpful.
The problem is not version-specific. The update process refreshes something, somehow, and that fixes the problem.

I can confirm that this is a bug in 17.06.1-ce; after upgrading and restarting docker, all the space is freed.

I know that. I'm just saying that a kind notice ("y/n") would be nice for software that aims to be used in production environments.
I can confirm that since we moved away from aufs to overlay2 on Ubuntu 16 we have had no issues with space being cleared; `docker system prune -a -f` works. The workaround is to drop aufs and use overlay2. It works!
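A sketch of how that switch can be configured (switching the storage driver hides existing images and containers from docker, so re-pull or back up first; this also overwrites any existing daemon.json):

```bash
# Select the overlay2 storage driver, then restart the daemon.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
```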
@jcberthon Did you try it?
https://github.com/moby/moby/issues/22207
I can also report many `*-removing` files in the diff and layers folders, just like in the ls output from @tshirtman. It looks like docker can't remove those files when pulling / updating images or creating containers, even though I stop all containers before updating. See https://stackoverflow.com/a/45798794
Maybe this issue should be moved to the docker side?
I’m re-opening the issue considering the “recent” activity.