buildah: Subsequent push failures

Description

When pushing the same image to a Docker registry under multiple tags, every push after the first one fails.

Steps to reproduce the issue:

  1. buildah bud on a Dockerfile
  2. buildah commit
  3. buildah push <image> docker://<image>:tag1
  4. buildah push <image> docker://<image>:tag2
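
For concreteness, a minimal sequence along those lines would look like the following; the image name myimage and registry host registry.example.com are placeholders, not taken from the original report:

# build the image from a Dockerfile
buildah bud -t myimage .
# the first push succeeds
buildah push myimage docker://registry.example.com/myimage:tag1
# pushing the same image under a second tag fails
buildah push myimage docker://registry.example.com/myimage:tag2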

Describe the results you received:

Getting image source signatures
Copying blob sha256:f637bb87f74f0710bea59badbff1240cddaa7c47a12656789829396349d82b00

0 B / 578.24 MiB [------------------------------------------------------------] 8 B / 578.24 MiB [>-----------------------------------------------------------] 512 B / 578.24 MiB [>------------------------------------------------------] 0s Patch https://crucible.lab:4000/v2/oci/portagedir/blobs/uploads/ee903314-762d-4bdd-8f4c-e4ee585a4cd4?_state=ez4VTc9aem0iZ0biV_Ujr0XtKFz83gsO3g9jlnFhCrh7Ik5hbWUiOiJvY2kvcG9ydGFnZWRpciIsIlVVSUQiOiJlZTkwMzMxNC03NjJkLTRiZGQtOGY0Yy1lNGVlNTg1YTRjZDQiLCJPZmZzZXQiOjAsIlN0YXJ0ZWRBdCI6IjIwMTgtMDgtMzFUMTM6NTg6NTIuNjg0OTkyNDA3WiJ9: open /var/lib/containers/storage/overlay/f637bb87f74f0710bea59badbff1240cddaa7c47a12656789829396349d82b00/merged/Manifest: no such file or directory

Describe the results you expected:

A successful push to the registry.

The ideal results would be that buildah notices that those blobs already exist on the registry and then just adds a tag. I’ve never seen that behavior, however, and it just pushes one copy of the blob for every tag I use.
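
For reference, the per-blob existence check is just a HEAD request against the registry (the debug log further down in this thread shows buildah issuing exactly that before uploading). A hand-rolled sketch, where the registry address, repository name, and digest are placeholders:

# returns 200 if the blob already exists in this repository, 404 otherwise
curl -sI http://127.0.0.1:5000/v2/my-alpine/blobs/sha256:<digest> | head -n 1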

Output of buildah version:

Version:         1.4-dev
Go Version:      go1.10.3
Image Spec:      1.0.0
Runtime Spec:    1.0.0
CNI Spec:        0.4.0
libcni Version:  v0.7.0-alpha1
Git Commit:      20b236a8
Built:           Fri Aug 31 13:53:03 2018
OS/Arch:         linux/amd64

Output of uname -a:

Linux bb08613bf569 4.17.0-gentoo #1 SMP Thu Jun 7 02:49:57 UTC 2018 x86_64 Intel(R) Xeon(R) CPU D-1587 @ 1.70GHz GenuineIntel GNU/Linux

Output of cat /etc/containers/storage.conf:

cat: /etc/containers/storage.conf: No such file or directory

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 21 (7 by maintainers)

Most upvoted comments

I’m seeing the same issue doing a skopeo copy containers-storage:… docker://… on a buildah-built image.

My first thought was this was related to doing concurrent buildah calls on different images, but I’ll keep testing and attempt to come up with a reproducer.
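
For reference, the skopeo invocation in question has roughly this shape; the local storage reference and registry name below are placeholders, not my actual setup:

# copy an image from local containers-storage straight to a registry
skopeo copy containers-storage:localhost/myimage:latest docker://registry.example.com/myimage:latest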

I saw something similar to this while playing last week. I was not able to push a second time to a dir:/tmp/image directory.

It almost seems that something is being modified in containers/storage on the push.

# buildah push docker.io/library/alpine:latest dir:/tmp/dan
Getting image source signatures
Copying blob sha256:73046094a9b835e443af1a9d736fcfc11a994107500e474d0abf399499ed280c
 3.00 KiB / 4.46 MiB [>-----------------------------------------------------] 0s
error pushing image "docker.io/library/alpine:latest" to "dir:/tmp/dan": error copying layers and metadata: Error writing blob: open /var/lib/containers/storage/overlay/73046094a9b835e443af1a9d736fcfc11a994107500e474d0abf399499ed280c/merged/bin/busybox: no such file or directory

What is weird is that the file is still there.

# ctr=$(buildah from docker.io/library/alpine:latest)
# mnt=$(buildah mount $ctr)
# ls -l $mnt/bin/busybox
-rwxr-xr-x. 1 root root 796312 May 30 06:46 /var/lib/containers/storage/overlay/fc28213940301dc3664236d2e2dd5b4599acafd186fd08138e9ecfce8724b505/merged/bin/busybox

Thanks, I’m seeing it now against a private registry too. What’s interesting is that I was having problems getting the registry to behave, and after pushing busybox with docker, my second push of alpine failed with a busybox reference… See below.

# docker run -d -p 5000:5000 registry:2

# docker tag busybox 127.0.0.1:5000/busybox

# docker push 127.0.0.1:5000/busybox
The push refers to a repository [127.0.0.1:5000/busybox]
f9d9e4e6e2f0: Pushed 
latest: digest: sha256:5e8e0509e829bb8f990249135a36e81a3ecbe94294e7a185cc14616e5fad96bd size: 527

# buildah pull alpine

# buildah push --tls-verify=false alpine 127.0.0.1:5000/my-alpine:first
Getting image source signatures
Copying blob sha256:73046094a9b835e443af1a9d736fcfc11a994107500e474d0abf399499ed280c
 4.46 MiB / 4.46 MiB [======================================================] 0s
Copying config sha256:11cd0b38bc3ceb958ffb2f9bd70be3fb317ce7d255c8a4c3f4af30e298aa1aab
 1.48 KiB / 1.48 KiB [======================================================] 0s
Writing manifest to image destination
Storing signatures

# buildah --debug push --tls-verify=false alpine 127.0.0.1:5000/my-alpine:second
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: override_kernelcheck=true           
DEBU[0000] overlay test mount with multiple lowers succeeded 
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,overlay.override_kernel_check=true]localhost/alpine:latest" 
DEBU[0000] reference "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,overlay.override_kernel_check=true]localhost/alpine:latest" does not resolve to an image ID 
DEBU[0000] error locating image "localhost/alpine": image not known 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,overlay.override_kernel_check=true]docker.io/library/alpine:latest" 
DEBU[0000] Using registries.d directory /etc/containers/registries.d for sigstore configuration 
DEBU[0000]  Using "default-docker" configuration        
DEBU[0000]   Using file:///var/lib/atomic/sigstore      
DEBU[0000] Looking for TLS certificates and private keys in /etc/docker/certs.d/127.0.0.1:5000 
DEBU[0000] IsRunningImageAllowed for image containers-storage:[overlay@/var/lib/containers/storage]docker.io/library/alpine:latest@11cd0b38bc3ceb958ffb2f9bd70be3fb317ce7d255c8a4c3f4af30e298aa1aab 
DEBU[0000]  Using default policy section                
DEBU[0000]  Requirement 0: allowed                      
DEBU[0000] Overall: allowed                             
Getting image source signatures
DEBU[0000] Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json] 
DEBU[0000] ... will first try using the original manifest unmodified 
DEBU[0000] Checking /v2/my-alpine/blobs/sha256:73046094a9b835e443af1a9d736fcfc11a994107500e474d0abf399499ed280c 
DEBU[0000] GET https://127.0.0.1:5000/v2/               
DEBU[0000] Ping https://127.0.0.1:5000/v2/ err &url.Error{Op:"Get", URL:"https://127.0.0.1:5000/v2/", Err:(*errors.errorString)(0xc420146be0)} 
DEBU[0000] GET http://127.0.0.1:5000/v2/                
DEBU[0000] Ping http://127.0.0.1:5000/v2/ status 200    
DEBU[0000] HEAD http://127.0.0.1:5000/v2/my-alpine/blobs/sha256:73046094a9b835e443af1a9d736fcfc11a994107500e474d0abf399499ed280c 
DEBU[0000] ... not present                              
Copying blob sha256:73046094a9b835e443af1a9d736fcfc11a994107500e474d0abf399499ed280c
DEBU[0000] exporting filesystem layer "73046094a9b835e443af1a9d736fcfc11a994107500e474d0abf399499ed280c" without compression for blob "sha256:73046094a9b835e443af1a9d736fcfc11a994107500e474d0abf399499ed280c" 
DEBU[0000] No compression detected                      
 0 B / 4.46 MiB [--------------------------------------------------------------]DEBU[0000] Compressing blob on the fly                  
DEBU[0000] Uploading /v2/my-alpine/blobs/uploads/       
DEBU[0000] POST http://127.0.0.1:5000/v2/my-alpine/blobs/uploads/ 
DEBU[0000] PATCH http://127.0.0.1:5000/v2/my-alpine/blobs/uploads/63b25493-1106-4f76-bd54-6ca65413c030?_state=HLRBKeCfloQ4GWh2ANJonz_rAD3eY3qlK_tJhIPinoZ7Ik5hbWUiOiJteS1hbHBpbmUiLCJVVUlEIjoiNjNiMjU0OTMtMTEwNi00Zjc2LWJkNTQtNmNhNjU0MTNjMDMwIiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDE4LTA4LTMxVDIwOjM1OjI5LjU4OTg1Njc4OVoifQ%3D%3D 
DEBU[0000] Error uploading layer chunked, response (*http.Response)(nil) 
 3.00 KiB / 4.46 MiB [>-----------------------------------------------------] 0s
ERRO[0000] Patch http://127.0.0.1:5000/v2/my-alpine/blobs/uploads/63b25493-1106-4f76-bd54-6ca65413c030?_state=HLRBKeCfloQ4GWh2ANJonz_rAD3eY3qlK_tJhIPinoZ7Ik5hbWUiOiJteS1hbHBpbmUiLCJVVUlEIjoiNjNiMjU0OTMtMTEwNi00Zjc2LWJkNTQtNmNhNjU0MTNjMDMwIiwiT2Zmc2V0IjowLCJTdGFydGVkQXQiOiIyMDE4LTA4LTMxVDIwOjM1OjI5LjU4OTg1Njc4OVoifQ%!D(MISSING)%!D(MISSING): open /var/lib/containers/storage/overlay/73046094a9b835e443af1a9d736fcfc11a994107500e474d0abf399499ed280c/merged/bin/busybox: no such file or directory 

The ideal results would be that buildah notices that those blobs already exist on the registry and then just adds a tag. I’ve never seen that behavior, however, and it just pushes one copy of the blob for every tag I use.

That indeed does not currently happen, I am working on making that possible.
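
At the registry API level, “just adding a tag” amounts to re-PUTting the existing manifest under the new tag, with no blob uploads at all. A manual sketch against the registry from the example above (127.0.0.1:5000, repository my-alpine, tags first and second):

# fetch the manifest stored under the first tag
curl -s -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
    http://127.0.0.1:5000/v2/my-alpine/manifests/first > manifest.json
# put the same manifest back under the second tag; the blobs are never re-uploaded
curl -s -X PUT -H 'Content-Type: application/vnd.docker.distribution.manifest.v2+json' \
    --data-binary @manifest.json \
    http://127.0.0.1:5000/v2/my-alpine/manifests/second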

I have no idea what is going on; this is something in c/storage.

Only a tiny comment on the workaround:

buildah push <image> docker://<image>:tag1
buildah rmi --all
buildah pull <image>:tag1
buildah tag <image>:tag1 <image>:tag2
buildah push docker://<image>:tag2

This should be easier (and notably faster), since the registry already has all the blobs and only the manifest needs to be copied:

buildah push <image> docker://<image>:tag1
skopeo copy docker://<image>:tag1 docker://<image>:tag2
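
The same pattern extends to any number of additional tags; a sketch with placeholder names:

# push the blobs once, then add the remaining tags registry-side
buildah push myimage docker://registry.example.com/myimage:tag1
for t in tag2 tag3; do
    skopeo copy docker://registry.example.com/myimage:tag1 \
                docker://registry.example.com/myimage:$t
done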