moby: 'docker system df' does not list all space being used
For me, docker system df -v shows:
Images space usage:
REPOSITORY TAG IMAGE ID CREATED SIZE SHARED SIZE UNIQUE SIZE CONTAINERS
nuccy latest eb40b1ff80bc About an hour ago 20.37GB 53.7MB 20.32GB 0
debian stretch d508d16c64cd 4 weeks ago 100.6MB 0B 100.6MB 0
bitnami/minideb stretch fa89f2915564 5 weeks ago 53.7MB 53.7MB 0B 0
Containers space usage:
CONTAINER ID IMAGE COMMAND LOCAL VOLUMES SIZE CREATED STATUS NAMES
Local Volumes space usage:
VOLUME NAME LINKS SIZE
6aec75524e3dedd97dca14c77a7d0b2ed6c52bf08b8626dfcfb1e7d6704a3bb3 0 0B
a9fe3492b569a6fd5d27af16f614d6a82a2ab332d948f924e644350313d37706 0 0B
sv-cfg 0 597.3kB
Build cache usage: 35.05MB
CACHE ID CACHE TYPE SIZE CREATED LAST USED USAGE SHARED
857e4ad1a7b8 regular 1.62MB 6 weeks ago 0 false
13d5529fd232 regular 101MB 2 weeks ago 2 weeks ago 12 true
6659c036b8dc internal 880kB 6 weeks ago 2 weeks ago 27 false
8c934103363f frontend 16.3MB 6 weeks ago 13 days ago 44 false
ca8bf53712bd regular 53.7MB 2 weeks ago 8 hours ago 8 true
00deee01db1f frontend 16.3MB 10 days ago 8 hours ago 4 false
So about 20 GB of space is being used.
However, the directory /var/lib/docker/overlay2 indicates there are more images, meaning that about 40 GB of space is being used by Docker:
root /var/lib/docker/overlay2 # du -hsx * | sort -rh
19G fd8a111f864daa966c7cdde633e81ca9f0c6a6b085a01b699442f1dd2ae23707
17G 81b29726d152a815fa118513ea844e9aee654111185bb020461f8fdc5ff363cd
245M 8975705b1d4c7d7d7132d9d8f1c2b20cd393e18b262e16d6108335ba1c7b2f2b
111M c3ab47614a0b32adbf20d6a246e0e07d6b7d3260d18b24320acccab076ac3f53
61M 44c04b2edb92b6b7f4ffed950da58b3a990e9fd2c0fa80f098340f4309abfdf3
50M 74168178318e1abff6f59d96aed3995e7e67ae211ea6b96c37ff5c2d7226e695
16M f30cb0f43677d828528edf31d8c7b41041f5ed23d4ec15258446a9fa615d61dc
16M 37ffdb7679240e4c31634aa5ec9c7a7b6a8c8f4fecb5acde7628cdbc4d4ac7c5
6,1M 2f218061dc4f061e6dd625c47d96ba2f2cb5e5aa791ec65dd59728cc8ca7186c
2,4M eb31594cba19cc0dd48a684c869be28832ce940d1dd3612445e4ade2c711a1d9
1,7M c8bdbe9b03b4fb9bda5d86df450a911a87a5b51fe8149bfcc5e40f32860f243c
884K 04ead42412872adf89de4ee1293f45d15c6bc3b66f786ff78e26f0de737bc0df
76K l
44K 39dd1d95705e295e0a5cded06e7c858fc6010fcc5c52599717ba7108b6071f1f
40K d6ff20865169fc9c11d854760e9f7e0476a522cec83801faa6f2bc8a19697d18
36K 75006f6885eb22a524cc1fdb8543e30e00464e211a6d3aba25610d46ea98d421
28K 64c65b9b4796cd0e9d1c23c846c74c3a1a017b0f074571e75ab470e4b4a3f8a1
A docker system prune
wouldn’t prune those images either.
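For what it's worth, plain docker system prune is conservative: it removes stopped containers, dangling images, unused networks, and dangling build cache, but keeps tagged-but-unused images and named volumes. A hedged sketch of the more aggressive variants (guarded behind an environment variable, since these flags are destructive):

```shell
# Plain "docker system prune" keeps tagged-but-unused images and named volumes.
# The flags below remove those too -- destructive, so this sketch only runs
# when RUN_PRUNE=1 is set and the docker CLI is actually available.
if [ "${RUN_PRUNE:-0}" = "1" ] && command -v docker >/dev/null 2>&1; then
  docker system prune -af --volumes   # -a: also unused (not just dangling) images
  docker builder prune -af            # BuildKit build cache under /var/lib/docker/buildkit
else
  echo "dry run: docker system prune -af --volumes; docker builder prune -af"
fi
```

None of this helps, of course, if the space is held by overlay2 directories that Docker no longer tracks, which is what this issue is about.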
I noticed this when my server ran out of space and docker system df still showed moderate usage, which is simply not true.
I’m on CentOS 7.
# docker version
Client:
Version: 18.09.3
API version: 1.39
Go version: go1.10.8
Git commit: 774a1f4
Built: Thu Feb 28 06:33:21 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.3
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: 774a1f4
Built: Thu Feb 28 06:02:24 2019
OS/Arch: linux/amd64
Experimental: true
Is #30975 related?
About this issue
- Original URL
- State: open
- Created 5 years ago
- Reactions: 8
- Comments: 35 (9 by maintainers)
Well, the issue seems to be that the devs see this as userspace reporting incorrect free space on the device.
Those arguments don't hold up, however: if my HDD is 100% full, it's 100% full. I don't understand why my bug report (and those of many others) is not taken seriously.
There is obviously something wrong, and it needs to be taken seriously and addressed properly.
I’m having a similar problem. After deleting everything (docker rm + docker rmi) and doing a system prune, this is what I’m getting:
I notice that BuildKit is leaving a 23 GB metadata.db file, despite reporting used space as 0. I would expect this metadata.db file to have been cleaned out by "docker system prune -a", or by restarting the Docker service. I tried both; neither worked. There is also a bunch of folders under /var/lib/docker/buildkit/net, reported as 16 kB each, for a total of about 380 MB.
The rest of the space is mostly used under overlay2/:
To solve the situation I had to stop the docker service, unmount the /var/lib/docker filesystem, mkfs, remount, and restart the service.
Not 100% sure, but I think the -x / --one-file-system option does that, and ignores mounts.
With --one-file-system:
Without:
At least in our case, container log files are clearly not the issue. All commands were executed as root.
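A minimal, self-contained illustration of what -x changes (the demo directory is a throwaway; no Docker needed): du with -x stays on the filesystem it started on, so bind mounts or overlay mounts nested under the tree are not descended into and not double-counted.

```shell
# Demonstrate du's -x / --one-file-system flag on a throwaway directory.
# Without a nested mount both forms agree; with one (e.g. an overlay mount
# for a running container) only "du -x" skips it.
demo=$(mktemp -d)
mkdir -p "$demo/sub"
dd if=/dev/zero of="$demo/sub/blob" bs=1024 count=64 2>/dev/null
du -sh  "$demo"    # descends into everything, including foreign mounts
du -shx "$demo"    # stays on one filesystem
rm -rf "$demo"
```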
Per the requirements for opening a new issue when there is already one outlining my problem: I am experiencing the same issue as @Nuc1eoN. docker system df shows very different totals from the actual size of /var/lib/docker/overlay2. My details are as follows:
OS:
Docker Info:
All Existing Containers:
Docker System DF Output:
NOTE: We use BuildKit, so we have build cache objects.
Host DF Output:
NOTE: /var/lib/docker/ is under our root partition on /, which is /dev/nvme0n1p1. Which raises another question: why is overlay not showing up as a mount in the df -h output?
And here is the final bit, showing the output of ncdu -x on /var/lib/docker/overlay2 (I am not listing every single dir line in the output… it would crush this GitHub page… but instead am posting the header dir and the final total report):
Screenshot:
The total size reported by ncdu -x for the overlay2 directory is 678.1 GB, whereas docker system df shows a combined total of around 445 GB.
Also make sure to exclude container log files from that, as they're not included in Docker's size output.
What else, apart from log data, is relevant and not taken into account? Could it be that Moby/Docker somehow lost track of what belongs to it, and is therefore neither cleaning it up in /var/lib/docker nor counting it in docker system df? Or could it be related to issue #21925 (Docker does not free up disk space after container, volume and image removal)?
Currently we have a huge difference: /var/lib/docker/ is more than twice the size of what docker system df reports (33G), but Docker shows a bit more than 12.58GB. Meaning, /var/lib/docker/ contains more than double the size Docker is reporting.
And there aren't any huge logs either.
Docker version:
Docker is running within a user namespace (hence /var/lib/docker/4099.500000/), and this is on Ubuntu 18.04.4 LTS:
4.15.0-106-generic #107-Ubuntu SMP Thu Jun 4 11:27:52 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
du --max-depth=1 -ch /var/lib/docker/4099.500000:
Apart from the issue of inaccurate results: what was the first release containing the garbage collector? Could the garbage collector solve the issue of the growing space?
There are a lot of entries that are neither containers nor images any more. I checked it this way (got the idea from these comments):
The result gave 77 entries. Could this be related? The size seems to be significant, too. I checked it with
PS: This seems to be a duplicate of issue #942, so this issue might be a bit older. PPS: Could the issue of the growing storage be related to #32360? PPPS: These problems also seem to be described on StackOverflow, for instance here or here.
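The cross-check described above (overlay2 entries that no image or container references any more) can be sketched roughly like this. The overlay2 layout and the GraphDriver fields are assumptions about a stock overlay2 setup (UpperDir holds a layer's own diff, LowerDir the colon-separated parent chain); run as root, and treat the output as candidates to investigate, not as safe-to-delete:

```shell
# List every directory under /var/lib/docker/overlay2, then subtract the
# layer directories still referenced by some image or container via the
# GraphDriver metadata.  Guarded so it is a no-op where docker is missing.
if command -v docker >/dev/null 2>&1 && [ -d /var/lib/docker/overlay2 ]; then
  ls /var/lib/docker/overlay2 | grep -vx l | sort > /tmp/all-layers
  { docker ps -aq       | xargs -r docker inspect -f '{{ .GraphDriver.Data.UpperDir }}:{{ .GraphDriver.Data.LowerDir }}'
    docker image ls -q  | xargs -r docker inspect -f '{{ .GraphDriver.Data.UpperDir }}:{{ .GraphDriver.Data.LowerDir }}'
  } 2>/dev/null | tr ':' '\n' \
    | sed -n 's|^/var/lib/docker/overlay2/\([^/]*\)/.*|\1|p' \
    | sort -u > /tmp/used-layers
  comm -23 /tmp/all-layers /tmp/used-layers    # entries no image/container references
else
  echo "docker or /var/lib/docker/overlay2 not available; skipping"
fi
```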
du will not understand that container filesystems are overlay mounts and not actually taking up extra space.
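To see that overlap concretely: a running container's merged filesystem appears as an overlay mount, and du without -x descends into it even though the lower layers live elsewhere under overlay2 and get counted again. The active overlay mounts are visible directly (this reads /proc, so Linux only):

```shell
# Show active overlay mounts; each "merged" dir listed here is a view that
# du would traverse on top of the underlying layer directories.
grep -w overlay /proc/self/mounts 2>/dev/null || echo "no overlay mounts active"
# For a figure matching what the disk actually holds, stay on one filesystem:
# du -shx /var/lib/docker/overlay2
```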