moby: Error pulling image (...) no space left on device

Hi, I’m trying to pull an image from a private corporate registry onto a machine.

The image is a Node.js image with some custom environment variables, plus a Node.js application with npm dependencies.

The machine has 4GB of RAM.

user@machine:~$ docker pull my.private.registry.corp/org/image
Pulling repository my.private.registry.corp/org/image
a076dcf8de89: Pulling dependent layers 
511136ea3c5a: Download complete 
27d47432a69b: Download complete 
5f92234dcf1e: Download complete 
51a9c7c1f8bb: Download complete 
5ba9dab47459: Download complete 
1b5cb86bd8eb: Download complete 
e052bcc1b051: Download complete 
a076dcf8de89: Error pulling image (latest) from my.private.registry.corp/org/image, Untar exit status 1 open /tmp/app/node_modules/node-rest-client/node_modules/xml2js/node_modules/xmlbuilder/node_modules/lodash/utility/attempt.js: no space left on device
9a7129a697b6: Download complete 
5a4df78f03f1: Download complete 
91e17b8f0ad0: Download complete 
27fd9249b530: Download complete 
21d10d188d73: Download complete 
95c43c63c917: Download complete 
875aec76aa78: Download complete 
a8ed7d8cb50f: Download complete 
300671eaa3d4: Download complete 
be9f77f4e0cb: Download complete 
4f58aff69463: Download complete 
de93133d1b6e: Download complete 
1965d5845989: Download complete 
cae34eb68397: Download complete 
dfcf337450ef: Download complete 
bf8c96846c44: Download complete 
e76410f1b8c1: Download complete 
644792e8361d: Download complete 
8a9b2274fed7: Download complete 
3fc11a89092f: Error downloading dependent layers 
FATA[0007] Error pulling image (latest) from my.private.registry.corp/org/image, Untar exit status 1 open /tmp/app/node_modules/node-rest-client/node_modules/xml2js/node_modules/xmlbuilder/node_modules/lodash/utility/attempt.js: no space left on device 

The strange thing is that there is space left on the device.

$ df -h
Filesystem                                          Size  Used Avail Use% Mounted on
/dev/mapper/packer--UBUNTU1204--BUILD--54--vg-root   39G   23G   15G  61% /
udev                                                2.0G   12K  2.0G   1% /dev
tmpfs                                               395M  244K  395M   1% /run
none                                                5.0M     0  5.0M   0% /run/lock
none                                                2.0G  188K  2.0G   1% /run/shm
/dev/sda1                                           228M   28M  189M  13% /boot
cgroup                                              2.0G     0  2.0G   0% /sys/fs/cgroup

About this issue

  • State: closed
  • Created 9 years ago
  • Reactions: 18
  • Comments: 66 (20 by maintainers)

Most upvoted comments

Pain.

Had the same issue. Over 90% of my inodes were in use; I couldn’t even pull an image anymore. After running the following shell script, inode usage dropped to 7%.

# remove stopped containers
docker rm $(docker ps -a -q)

# remove dangling images
docker rmi $(docker images -q --filter "dangling=true")

It removes non-running containers first (drop that line if you don’t want that). Afterwards, it removes all the dangling images, which was actually the major factor in the inode usage drop.

@jcheroske just use docker system prune (or, if you only want to remove images, docker image prune), which were made for exactly that purpose.
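
For reference, the prune commands in question:

# remove stopped containers, unused networks and dangling images
docker system prune

# remove only dangling images
docker image prune

# add -a to also remove images not used by any container
docker image prune -a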

Still a pain in Docker 1.12. Shocked this is still an open issue.

Check the dm.basesize option: https://docs.docker.com/engine/reference/commandline/dockerd/#storage-driver-options. The default is 10GB, which is probably why you’re hitting the limit while extracting your 10GB image.
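
For illustration, a minimal sketch of raising that limit, assuming the devicemapper storage driver is in use (note that dm.basesize only applies to base devices created after the change):

# raise the devicemapper base device size from the 10GB default
dockerd --storage-driver=devicemapper --storage-opt dm.basesize=20G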

You might want to see if you have free inodes left with df -i.

We’ve been having major issues (#9755) with Docker exhausting inodes.

Same issue here. @SamVerschueren’s suggestion regarding removing stopped containers/dangling images only gained about 8%. Not on CoreOS, just a plain EC2 image (Amazon Linux 4.1.17-22.30.amzn1.x86_64) using ext4.

This is a CI/CD server that does a lot of builds.

docker info
Containers: 0
Images: 979
Server Version: 1.9.1
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.1.17-22.30.amzn1.x86_64
Operating System: Amazon Linux AMI 2015.09
CPUs: 2
Total Memory: 3.862 GiB

Before:

df -i
Filesystem       Inodes    IUsed  IFree IUse% Mounted on
/dev/xvda1     14024704 14007626  17078  100% /

After:

df -i
Filesystem       Inodes    IUsed   IFree IUse% Mounted on
/dev/xvda1     14024704 12897618 1127086   92% /

rm -rf /var/lib/docker was basically the only way to fix it. Took hours.

If you’re like me and have very few containers, @SamVerschueren’s solution might help. I am running OS X 10.11.3 with Docker version 1.9.1, build a34a1d5. Just by removing the dangling images I was able to build new containers without getting the ‘no space left on device’ error. It would be helpful if Docker would clean up after itself, but I don’t fully understand the scope of this issue, so maybe I am wrong to wish for that.

@whosthatknocking overlay and overlay2 are the same with respect to POSIX. The issue here is inode exhaustion, which is only a problem with overlay.

The fixes are

  1. Use overlay2 if you have a recent kernel (4.x) that supports it; see the sketch below this list.
  2. Reformat the drive you are using to have more inodes if you are on an old kernel.
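
A minimal sketch of fix 1, using the standard dockerd storage-driver options:

# select overlay2 explicitly; requires a 4.x kernel with overlay2 support
dockerd --storage-driver=overlay2

# or persist the choice in the daemon configuration instead of a flag
echo '{ "storage-driver": "overlay2" }' > /etc/docker/daemon.json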

I am going to close this issue as overlay2 should be as stable as overlay now; feel free to continue to comment if there are still issues, or open a new issue if you have specific problems using `overlay2`.

I build the file system with the following unit. I will look for the inode tuning tweak you mentioned. If you happen to have the reference, please post it.

[Unit]
Description=Formats the disk drive
[Service]
Type=oneshot
RemainAfterExit=yes
Environment="LABEL=var-lib-docker"
Environment="DEV=/dev/xvdb"
ExecStart=-/bin/bash -c "wipefs -a -f $DEV && mkfs.ext4 -F -L $LABEL $DEV && echo wiped"
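
Regarding the inode tuning tweak: mkfs.ext4 accepts -i to set the bytes-per-inode ratio (the ext4 default is 16384), so a smaller value yields more inodes. A minimal sketch, reusing the $DEV and $LABEL variables from the unit above:

# allocate one inode per 8 KiB instead of the 16 KiB default,
# roughly doubling the inode count on the filesystem
ExecStart=-/bin/bash -c "wipefs -a -f $DEV && mkfs.ext4 -F -L $LABEL -i 8192 $DEV && echo wiped"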

We faced a similar problem, but inode usage seems fine (40% used) and disk usage is about 81%. We are pulling a Docker image of about 1.7 GB, so roughly 2 GB of space should still have been left, yet the following error occurred. I am not sure why this happened. I am a beginner in this area and may not have much background on this topic. Any help is deeply appreciated.

+ /usr/local/bin/docker version
Client:
 Version:      17.05.0-ce

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5

+ /usr/local/bin/docker build --rm=true -f Dockerfile -t $VALUE . --pull=true
Sending build context to Docker daemon  1.677GB
.........
ee804876babe: Pull complete
225c66a863d8: Pull complete
2df5bb5034a3: Pull complete
96acbc28c73d: Pull complete
write /var/lib/docker/tmp/GetImageBlob194874586: no space left on device

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G   12K  3.9G   1% /dev
tmpfs           799M  520K  798M   1% /run
/dev/vda1        20G   16G  3.8G  81% /

$ df -i
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
udev           1018557    415 1018142    1% /dev
tmpfs          1022081   1346 1020735    1% /run
/dev/vda1      1310720 523561  787159   40% /

@mbentley super, it worked! Just for reference, here’s what my Docker daemon options look like:

OPTIONS="--default-ulimit nofile=1024:4096 -H tcp://0.0.0.0:4243 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --storage-driver=devicemapper --storage-opt=dm.thinpooldev=/dev/mapper/docker-thinpool --storage-opt=dm.use_deferred_removal=true --storage-opt=dm.use_deferred_deletion=true --storage-opt dm.basesize=50G"