moby: Docker push fails with "file integrity checksum failed" seemingly at random
Docker save and docker push both fail for some images seemingly at random with “file integrity checksum failed”
I use a custom-made tool to build a bunch of Docker images at once, and on basically every Docker version I've used (CE 17.06.0, 1.12.1, 1.12.6, 1.11.1), I get an error where some of the built images can't be pushed or saved, with the message:
file integrity checksum failed for "services/base_server/lib/apt/libgtk2.0/libasan2_5.4.0-6ubuntu1~16.04.4_amd64.deb"
or equivalent (always a .deb file, but that might just be because the majority of files are .deb files).
Steps to reproduce the issue: I can't give repro steps outside of a single computer; the build system seems to work everywhere else.
Describe the results you received:
file integrity checksum failed for "services/base_server/lib/apt/libgtk2.0/libasan2_5.4.0-6ubuntu1~16.04.4_amd64.deb"
Additional information you deem important (e.g. issue happens only occasionally): I've tried overlay2 and aufs, so this doesn't seem to be a graph-driver bug. Others at my company don't get this bug either (same environment, OS, and Docker version).
I tried upgrading my kernel, then my entire OS (Ubuntu 14.04 to Ubuntu 16.04), and multiple Docker versions, but I might just be cursed.
Output of docker version:
Client:
Version: 17.06.0-ce
API version: 1.30
Go version: go1.8.3
Git commit: 02c1d87
Built: Fri Jun 23 21:23:31 2017
OS/Arch: linux/amd64
Server:
Version: 17.06.0-ce
API version: 1.30 (minimum version 1.12)
Go version: go1.8.3
Git commit: 02c1d87
Built: Fri Jun 23 21:19:04 2017
OS/Arch: linux/amd64
Experimental: false
Output of docker info:
Containers: 6
Running: 4
Paused: 0
Stopped: 2
Images: 427
Server Version: 17.06.0-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 421
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfb82a876ecc11b5ca0977d1733adbe58599088a
runc version: 2d41c047c83e09a6d61d464906feb2a2f3c52aa4
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.8.0-58-generic
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 12
Total Memory: 62.85GiB
ID: 46HT:IF2S:6OKV:J7K3:VQUG:ZZTN:PPC6:ZJFR:HRO2:P5FR:3WN5:PFVC
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
Additional environment details (AWS, VirtualBox, physical, etc.):
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 1
- Comments: 25 (7 by maintainers)
docker system prune -a solved the problem.
Hi,
I was having something similar when using docker push. My image was over 100 MB, on Docker Version 17.06.2-ce-win27. To fix it I had to increase the memory allocated to the Docker engine: go to Docker settings > Advanced.
Regards, RJ
Fixed: this was because one of my RAM sticks was bad, and single-bit flips were happening on operations with a lot of writes.
I'm not going to close this, because I think a "try a memory check" message would save a lot of headache if this ever happens again.
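For anyone who suspects flaky RAM like this, a quick first check can be run from a shell with memtester; this is only a sketch, assuming the memtester package is installed and roughly 1 GiB of memory is free (a full memtest86+ run from boot media is the more thorough test):
# lock and exercise 1 GiB of RAM for 3 passes; any reported failure points to bad memory
sudo memtester 1024M 3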
Had exactly the same problem. It worked after I deleted the images and rebuilt them.
Rebuilding the image is a good solution.
docker system prune -a
Use with caution! This is like rm -rf .!
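If you know which image is affected, a more targeted alternative to prune -a is to remove and rebuild just that image. A sketch, where myrepo/myimage:tag is a placeholder for the affected image:
# remove only the affected image, then rebuild and push it
docker image rm myrepo/myimage:tag
docker build -t myrepo/myimage:tag .
docker push myrepo/myimage:tag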
Facing the same issue on Docker CE 19.03.5, at random. Loading the image into the Docker runtime with "docker load -i image.tgz" reports that the images were loaded successfully and the return code is 0. But if I push an image loaded with that command to the registry, the issue happens. If I try to save the image from the Docker runtime with "docker save -o /tmp/myimage.tar <myrepo/myorname/image:tag>", I get the same error.
My workaround is to clear the kernel caches with "sync; echo 3 > /proc/sys/vm/drop_caches", delete the image, and reload it again.
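That workaround, spelled out as a sketch (run as root; the image name and tarball path are placeholders):
# flush the page cache, dentries and inodes
sync; echo 3 > /proc/sys/vm/drop_caches
# remove the corrupted copy of the image, then load and push it again
docker image rm myrepo/myimage:tag
docker load -i image.tgz
docker push myrepo/myimage:tag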
Facing a similar problem and unable to solve it using the solutions provided in this thread. Please advise on how to resolve it. Rebuilding is not an option, as it took me days to reach this stage with the Docker image.
I've also encountered this issue using Docker for Mac (with Experimental being both true and false).
Following @RoshanJeewantha's advice to increase the RAM allocated to the Docker VM seems to have resolved the issue. Is the OOM killer nuking things during the build process? This is the only thing I can think of. (Extensive hardware tests indicate that the machines I'm running Docker on most definitely do not have bad RAM.)
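One way to check the OOM-killer theory is to look at the kernel log right after a failed build (on Docker for Mac that means the Linux VM's kernel log, not the macOS host's). A sketch; the exact log wording varies by kernel version:
# look for OOM-killer activity in the kernel ring buffer and the journal
dmesg -T | grep -iE 'out of memory|oom-killer|killed process'
journalctl -k --since "1 hour ago" | grep -i oom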