moby: Device-mapper does not release free space from removed images

Docker claims, via docker info, to have freed space after an image is deleted, but the data file retains its former size, and the sparse file backing the device-mapper storage driver will continue to grow without bound as more extents are allocated.

I am using lxc-docker on Ubuntu 13.10:

Linux ergodev-zed 3.11.0-14-generic #21-Ubuntu SMP Tue Nov 12 17:04:55 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

This sequence of commands reveals the problem:

Doing a docker pull stackbrew/ubuntu:13.10 increased the space usage reported by docker info. Before the pull:

Containers: 0
Images: 0
Driver: devicemapper
 Pool Name: docker-252:0-131308-pool
 Data file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 291.5 Mb
 Data Space Total: 102400.0 Mb
 Metadata Space Used: 0.7 Mb
 Metadata Space Total: 2048.0 Mb
WARNING: No swap limit support

And after docker pull stackbrew/ubuntu:13.10:

Containers: 0
Images: 3
Driver: devicemapper
 Pool Name: docker-252:0-131308-pool
 Data file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 413.1 Mb
 Data Space Total: 102400.0 Mb
 Metadata Space Used: 0.8 Mb
 Metadata Space Total: 2048.0 Mb
WARNING: No swap limit support

And after docker rmi 8f71d74c8cfc, docker info returns:

Containers: 0
Images: 0
Driver: devicemapper
 Pool Name: docker-252:0-131308-pool
 Data file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 291.5 Mb
 Data Space Total: 102400.0 Mb
 Metadata Space Used: 0.7 Mb
 Metadata Space Total: 2048.0 Mb
WARNING: No swap limit support

The only problem is that the data file has expanded to 414 MiB (849016 512-byte sectors) according to stat. Some of that space is properly reused after an image has been deleted, but the data file never shrinks. And under some mysterious condition (not yet reproducible) I have 291.5 MiB allocated that can’t even be reused.
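For anyone unfamiliar with how a loopback-backed pool behaves, here is a minimal sketch (the path is an example, unrelated to docker’s data file) of why the file’s apparent size and its allocated blocks diverge: a sparse file’s apparent size is set up front, extents are only allocated as they are written, and deletes inside the pool never hand them back to the filesystem.

```shell
# Minimal sparse-file demo (example path, not docker's).
f=/tmp/sparse-demo
truncate -s 100M "$f"                        # apparent size 100 MiB, ~0 blocks allocated
stat -c 'apparent=%s allocated_blocks=%b' "$f"
dd if=/dev/zero of="$f" bs=1M count=10 conv=notrunc 2>/dev/null
sync
stat -c 'apparent=%s allocated_blocks=%b' "$f"   # blocks grew; apparent size unchanged
rm "$f"
```

This is the same pattern docker info shows above: writes grow the allocated blocks, but nothing in the delete path shrinks them again.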

My dmsetup ls looks like this when there are 0 images installed:

# dmsetup ls
docker-252:0-131308-pool        (252:2)
ergodev--zed--vg-root   (252:0)
cryptswap       (252:1)

And a du of the data file shows this:

# du /var/lib/docker/devicemapper/devicemapper/data -h
656M    /var/lib/docker/devicemapper/devicemapper/data

How can I have docker reclaim space, and why doesn’t docker automatically do this when images are removed?

About this issue

  • Original URL
  • State: closed
  • Created 11 years ago
  • Reactions: 10
  • Comments: 206 (82 by maintainers)


Most upvoted comments

Deeply reluctant as I am to once again resurrect this ancient thread, there is still no meaningful advice in it about how to work around this issue on an existing machine that encounters it.

This is my best effort at a tl;dr for the entire thread; I hope it helps others who find it.

Issue encountered

A significant (and growing) amount of space on your volume is taken up by /var/lib/docker, and you’re using ext3.

Resolution

You’re out of luck. Upgrade your file system or see blowing docker away at the bottom.

Issue encountered

A significant (and growing) amount of space on your volume is taken up by /var/lib/docker, and you’re not using ext3 (e.g. the system currently uses xfs or ext4).

Resolution

You may be able to reclaim space on your device using standard docker commands.

Read http://blog.yohanliyanage.com/2015/05/docker-clean-up-after-yourself/

Run these commands:

docker volume ls
docker ps
docker images

If you have nothing listed in any of these, see blowing docker away at the bottom.

If you see old stale images, unused containers, etc. you can perform manual cleanup with:

# Delete 'exited' containers
docker rm -v $(docker ps -a -q -f status=exited)

# Delete 'dangling' images
docker rmi $(docker images -f "dangling=true" -q)

# Delete 'dangling' volumes
docker volume rm $(docker volume ls -qf dangling=true)

This should reclaim much of the hidden container space in the devicemapper.
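A small variation on the cleanup commands above, for scripted use: piping through xargs -r (GNU --no-run-if-empty) avoids the error the $(…) forms produce when one of the lists is empty.

```shell
# Same cleanup as above, but safe to run on an already-clean host:
# xargs -r (GNU) runs nothing at all when its input is empty.
docker ps -aq -f status=exited     | xargs -r docker rm -v
docker images -qf dangling=true    | xargs -r docker rmi
docker volume ls -qf dangling=true | xargs -r docker volume rm
```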

Blowing docker away

Didn’t work? You’re out of luck.

Your best bet at this point is:

service docker stop
rm -rf /var/lib/docker
service docker start

This will destroy all your docker images. Make sure to export ones you want to keep before doing this.

Ultimately, please read https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/#configure-direct-lvm-mode-for-production; but I hope this will assist others who find this thread.

If you have problems using the advice above, open a new ticket that specifically addresses the issue you encounter and link to this issue; do not post it here.

First of all, if you still have this problem, please open a new issue;

Wait. I did report an issue

You replied on a three-year-old, closed issue; following the discussion above, the original issue was resolved. Your issue may be the same, but it needs more research to be sure; the errors you’re reporting indicate that it may actually be something else.

I really recommend opening a new issue rather than commenting on a closed one.

provided the details of my machine and setup, which I’m not obliged to.

You’re not obliged to, but without any information to go on, it’s unlikely to be resolved. So, when reporting a bug, please include the information that’s asked for in the template: https://raw.githubusercontent.com/docker/docker/master/.github/ISSUE_TEMPLATE.md

None of devteam responded to my and others bug reports in half a year period.

If you mean “one of the maintainers”, please keep in mind that there are almost 24000 issues and PRs, and fewer than 20 maintainers, many of whom do this alongside their daily job. Not every comment will be noticed, especially if it’s on a closed issue.

If this is not recommended feature to use - why is it silent and default one?

It’s the default if aufs, btrfs, and zfs are not supported; you can find the priority used when selecting drivers in daemon/graphdriver/driver_linux.go. It’s still ranked above overlay because, unfortunately, there are some remaining issues with that driver that may affect some people.

Automatically selecting a graphdriver is just to “get things running”; the best driver for your situation depends on your use-case. Docker cannot make that decision automatically, so this is up to the user to configure.

If you do not care for people using devicemapper - I might be even ok with this.

Reading back the discussion above, I see that the upstream devicemapper maintainers have looked into this multiple times, trying to assist users reporting these issues and to resolve them. The issue was resolved for those who reported it, or in some cases depended on distros updating their devicemapper version. I don’t think that can be considered “not caring”.

Also, why is default installation uses ‘strongly discouraged’ storage option?

Running on loop devices is fine for getting docker running, and currently the only way to set up devicemapper automatically. For production, and to get a better performance overall, use direct-lvm, as explained in the devicemapper section in the storage driver user guide.
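For reference, the direct-lvm setup the guide describes boils down to something like the following sketch. The device name /dev/xvdf and the pool sizes are placeholders, and the commands wipe the device; consult the linked guide before running anything like this.

```shell
# Sketch of a direct-lvm setup for the devicemapper driver.
# /dev/xvdf is a placeholder for a spare block device -- this destroys its contents.
pvcreate /dev/xvdf
vgcreate docker /dev/xvdf
lvcreate --wipesignatures y -n thinpool docker -l 95%VG
lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
lvconvert -y --zero n -c 512K \
  --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
# Then point the daemon at the pool instead of the loopback files:
#   dockerd --storage-driver=devicemapper \
#           --storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool
```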

Why wasn’t I told so at installation?

That’s really out of scope for the installation. If you’re going to use some software in production, it should be reasonable to assume that you familiarize yourself with that software and know what’s needed to set it up for your use case. Some maintainers even argued whether the warning should be output at all. Linux is not a hand-holding OS (does your distro show a warning that data loss can occur if you’re using RAID-0? That you have ports open in your firewall?)

@mercuriete On your dev machine just uninstall docker, delete the directory and reinstall it. Works fine.

@SvenDowideit @shykes @vieux @alexlarsson @Zolmeister @creack @crosbymichael @dhrp @jamtur01 @tianon @erikh @LK4D4

Tagging the top committers because this is ridiculous. There needs to be a public conversation about this and an admission from Docker’s team that this bug can lead to containers or systems that periodically break and need to be recreated. I already know several people who have had to implement insane devops solutions like periodically reimaging their Docker hosts every week or two because their build bots have so much churn. I introduced this issue close to a year ago and there’s been, as near as I can tell, no definitive solution created, and older kernels that are ostensibly supported are not.

Docker team: please do the research to determine which kernel versions are affected, why, and what patch fixes the issue, and document it. Publish that information along with the kernel versions you support, because right now consumers of Docker are getting bitten by this issue over and over, as evidenced by the fact that I still get emails about it every week or two. Seriously, this is a breaking issue, and it’s been a pain point since before 1.0.

As I see it, there are several possible options to fix this issue in a satisfactory way that would stop the emails I keep getting for +1s on this issue:

  1. Notify users when Device-Mapper is being used on an unsupported kernel and provide them with detailed instructions for how to reclaim space, and if possible automatically set up a process in Docker to do this. I would advise that this notice also be emitted when using the docker CLI against a host that suffers from this problem, so that users remotely managing hosts from the docker CLI are made aware that some hosts may not reclaim space correctly.

  2. Fix the problem (somehow). I don’t know enough about kernel development to know what this would entail, but, based on my novice reading, I suggest this:

    a. As device mapper is a kernel module, bring a functional, working version of it into the Docker source tree as something like dm-docker

    b. Make sufficient changes to dm-docker that it can coexist with device mapper.

    c. On affected platforms, install the dm-docker kernel module on installation and default to using dm-docker.

  3. Amend your installation docs and the docker.com site to include a warning about affected kernel versions, add a runtime check to the packages that verifies correct device-mapper operation, and report to the user if it fails.

This should be a blocking issue for the next stable release of Docker, because it’s just plain unacceptable to keep punting on it and leaving users in the lurch.

no one forced you to use Docker.

That’s like Oracle telling a Java developer to use PHP due to a JVM bug. That’s also not consistent with the elevator pitch here

Three years ago, Docker made an esoteric Linux kernel technology called containerization simple and accessible to everyone.

I’m sure a lot of people are grateful that Docker took off like it did, and that couldn’t have happened without volunteering from the community. However, it shouldn’t be this hard to admit that it has its problems too, without implicitly dropping the “I’m an upstream contributor, so shut up and listen” line whenever someone brings up an uncomfortable point.

Reinstalling docker to release disk space is the most ridiculous answer I came across while looking for a solution to this issue. Not only is that a waste of time, it’s not even allowed in most environments. It’s a good way to get paid if you’re an hourly worker.

I have exactly the same problem here on an Amazon Linux EC2 instance.

Linux ip-172-31-25-154 4.4.5-15.26.amzn1.x86_64 #1 SMP Wed Mar 16 17:15:34 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

On instances where I install new docker images on a regular basis the only solution is to do the following:

service docker stop
yum remove docker -y
rm -rf /var/lib/docker
yum install docker -y
service docker start

I don’t really think such a thing is acceptable in a production environment

some extra info:

df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       20G   20G     0 100% /

I can’t believe it’s still an issue! Come on guys, I’m still having this problem.

Wait. I did report an issue, provided the details of my machine and setup, which I’m not obliged to. None of devteam responded to my and others bug reports in half a year period. Now I stated this fact, you call my behavior bitchy? Do you even open-source? I’m looking for Go project to work on, and it will not be Docker, I give you that. Is this your goal? On 23 Jun 2016 16:45, “gregory grey” ror6ax@gmail.com wrote:

If this is not recommended feature to use - why is it silent and default one? If you do not care for people using devicemapper - I might be even ok with this. But do inform the user about it! Do you realize the amount of headache people have due to this amazing ‘workaround’ you settled on?? On 23 Jun 2016 4:32 p.m., “kpande” notifications@github.com wrote:

workaround is to avoid using the docker device-mapper driver, unfortunately.



rm -rf /var/lib/docker

You can also use nuke-graph-directory.sh.

There’s your first problem @misterbigstuff…you bought something that’s open source?

As this bug has been there for years and it seems it still isn’t fixed, could you add to the docker documentation about devicemapper how to safely destroy all docker information? I mean, on this page: https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/ put something like “Cleaning up device mapper” and how to do it.

I will try rm -rf /var/lib/docker, but I don’t feel comfortable doing that. Can somebody tell me if it is safe?

I am using Gentoo Linux on my daily laptop. I tried docker to learn it, but it is filling up my disk, and reinstalling the whole system is not an option, because it is not a VM and reinstalling Gentoo takes time.

Thank you for your work.

This issue is marked closed. We need a resolution. No workaround, no reconfiguration. What is the real status, and what are the configuration settings that are implicated? Dropping and recreating a production Docker node is not acceptable.

There’s always rkt

Running docker system prune freed a lot of space on my machines.

https://docs.docker.com/engine/reference/commandline/system_prune/

After buying one of these pieces of garbage and seeing the state of support, I returned it.

Hi,

I seemed to have the same problem under Ubuntu 14.04. However, the cause was unwanted volumes (cf. the blog post http://blog.yohanliyanage.com/2015/05/docker-clean-up-after-yourself/). Running the command

docker run -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker:/var/lib/docker --rm martin/docker-cleanup-volumes

released a lot of disk space.

I’m looking at creating a tool like fstrim that can be used to get back the space.

Note: it’s not fully fixed until you also run a kernel with http://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-next&id=6d03f6ac888f2cfc9c840db0b965436d32f1816d in it. Without that, the docker fix is only partial.
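The mechanism that kernel patch enables can be sketched independently of docker: a discard (or an explicit hole punch) deallocates a file’s extents without changing its apparent size, which is exactly what the loopback data file needs. A minimal demonstration, assuming a filesystem with hole-punch support (e.g. ext4, xfs, tmpfs) and the util-linux fallocate tool:

```shell
# Hole punching: deallocate extents without shrinking the apparent size.
f=/tmp/punch-demo
dd if=/dev/zero of="$f" bs=1M count=8 2>/dev/null
sync
stat -c 'before: apparent=%s blocks=%b' "$f"
fallocate --punch-hole --offset 0 --length 4M "$f"
stat -c 'after:  apparent=%s blocks=%b' "$f"   # blocks drop, apparent size unchanged
rm "$f"
```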

I completely agree. It’s amazing that a product like Docker just keeps on eating away disk space with nothing you can do about it except for uninstall/reinstall.

I’m using service docker start

Right now I have a box with 0 images. docker volume ls -qf dangling=true shows nothing. docker volume ls shows a lot of volumes, which are by definition orphaned, since there are no images to own them. docker volume rm $(docker volume ls) shows lots of messages like these:

Error response from daemon: get local: no such volume
Error response from daemon: Conflict: remove 6989acc79fd53d26e3d4668117a7cb6fbd2998e6214d5c4843ee9fceda66fb14: volume is in use - [77e0eddb05f2b53e22cca97aa8bdcd51620c94acd2020b04b779e485c7563c57]

The device mapper directory eats up 30 GiB. Docker version 1.10.2, build c3959b1; CentOS 7, kernel 3.10.0-327.10.1.el7.x86_64.

+1, I’m very interested in hearing some discussion on this subject. My strategy so far has been

  • be careful what you build/pull
  • be prepared to blow away your /var/lib/docker 😐

@AaronFriel, which version of Docker are you on? 0.7.1?

Well, this is kind of… lame.

In my case I found this issue after I had uninstalled Docker and deleted the /var/lib/docker directory, so I couldn’t run the equivalent of service docker stop and service docker start.

I found that my system was not reporting the space from deleting /var/lib/docker as freed (I had ~14 GB sitting in what seemed like limbo).

The fix for this is simply to remount your filesystem; in my case I just rebooted, and the space was reclaimed.
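What was most likely happening there is the classic deleted-but-still-open file: unlinking a file does not free its blocks while any process holds it open, so a loopback file removed under a running dockerd stays allocated until the daemon exits (or the machine reboots). The effect is easy to see via /proc, and lsof can list offenders:

```shell
# A deleted file's space is only freed when the last open handle closes.
f=/tmp/limbo-demo
dd if=/dev/zero of="$f" bs=1M count=4 2>/dev/null
exec 3<"$f"            # hold the file open on fd 3
rm "$f"                # "deleted", but the blocks remain allocated
ls -l /proc/$$/fd/3    # the symlink target shows the path with "(deleted)"
exec 3<&-              # closing the handle finally releases the space
# On a real host, list deleted-but-open files (look for /var/lib/docker paths):
#   lsof +L1
```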

we fixed the leaky volume issue with a dirty hack, since it was preventing the docker daemon from starting up before the service timed out on our high-churn docker hosts:

PRODUCTION [root@ws-docker01.prod ~]$ cat /etc/systemd/system/docker.service.d/docker-high-churn.conf 
[Service]
ExecStartPre=-/bin/rm -rf /var/lib/docker/containers
ExecStopPost=-/bin/rm -rf /var/lib/docker/volumes

which fixes the issue without flushing the pre-cached images.

This isn’t fixed in any meaningful way. On a fully up-to-date Ubuntu 14.04.x installation (the latest LTS release) with the latest version of Docker (installed via $ wget -qO- https://get.docker.com/ | sh), Docker continuously leaks space with no easy way to reclaim it. docker stop $(docker ps -q) && docker rm $(docker ps -q -a) && docker rmi $(docker images -q) only releases a small amount of space.

The only way to reclaim all space is with the following hack:

$ sudo service docker stop
$ sudo rm -rf /var/lib/docker
$ sudo service docker start

Which then requires re-pulling any images you might need.

@tomlux The devicemapper loopback mode you’re using is mostly meant as a way to easily toy around with docker. For serious work, loopback will be slower and has some limitations. I’d very highly recommend having a read of http://www.projectatomic.io/docs/docker-storage-recommendation/

You’ll get better performance, and won’t hit things like this, assuming you’ve applied all the system updates.

Still exists in 0.7.2 on Ubuntu 12.04.3 LTS.

A lot of the space is in docker/devicemapper/devicemapper/data and metadata, but also in docker/devicemapper/mnt

It’s neat that I learned you can see the container file systems in docker/devicemapper/mnt/SOME_KIND_OF_ID/rootfs,

but it’s not neat that my hard disk is almost completely eaten up and that this is only fixable by rm -rf /var/lib/docker.