kaniko: Layer fails to extract when creating multistage image
Actual behavior
I get a "file name too long" error from lstat when pulling my docker image built with kaniko. The error is:
$ docker pull fluidattacks/alpine-kaniko:latest
latest: Pulling from fluidattacks/alpine-kaniko
050382585609: Already exists
97b8426a6f54: Pull complete
2d05ad4487d2: Extracting [==================================================>] 211.2kB/211.2kB
failed to register layer: lstat /var/lib/docker/overlay2/6c957bc6aa940ccff95bea6b9bbaa45d6561e1179a085bfa1d344c1e826e127a/diff/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.config/gcloud/docker_credential_gcr_config.json: file name too long
My logs after building the container and pushing it are:
Running with gitlab-runner 12.1.0-rc1 (6da35412)
on docker-auto-scale 72989761
Using Docker executor with image gcr.io/kaniko-project/executor:debug ...
Pulling docker image gcr.io/kaniko-project/executor:debug ...
Using docker image sha256:60ef6732686c9655a6c28a3d2d805f4f0642d5e403c7c24ffc79e0c8d00bd0a0 for gcr.io/kaniko-project/executor:debug ...
Running on runner-72989761-project-10466586-concurrent-0 via runner-72989761-srm-1563296684-cd599c9c...
Fetching changes...
Initialized empty Git repository in /builds/fluidattacks/default/.git/
Created fresh repository.
From https://gitlab.com/fluidattacks/default
* [new branch] dsalazaratfluid -> origin/dsalazaratfluid
* [new branch] master -> origin/master
Checking out 0b9a7e74 as dsalazaratfluid...
Skipping Git submodules setup
$ sh ./ci-scripts/build-public.sh alpine-kaniko latest
INFO[0000] Resolved base name gcr.io/kaniko-project/executor:debug to gcr.io/kaniko-project/executor:debug
INFO[0000] Resolved base name alpine:latest to alpine:latest
INFO[0000] Resolved base name gcr.io/kaniko-project/executor:debug to gcr.io/kaniko-project/executor:debug
INFO[0000] Resolved base name alpine:latest to alpine:latest
INFO[0000] Downloading base image gcr.io/kaniko-project/executor:debug
2019/07/16 17:06:06 No matching credentials were found, falling back on anonymous
INFO[0000] Error while retrieving image from cache: getting file info: stat /cache/sha256:7587952834538c83a73b881def2f1bbb8ad73d545699105a96a2a5e370fa56bc: no such file or directory
INFO[0000] Downloading base image gcr.io/kaniko-project/executor:debug
2019/07/16 17:06:06 No matching credentials were found, falling back on anonymous
INFO[0000] Downloading base image alpine:latest
INFO[0000] Error while retrieving image from cache: getting file info: stat /cache/sha256:57334c50959f26ce1ee025d08f136c2292c128f84e7b229d1b0da5dac89e9866: no such file or directory
INFO[0000] Downloading base image alpine:latest
INFO[0001] Built cross stage deps: map[0:[/kaniko]]
INFO[0001] Downloading base image gcr.io/kaniko-project/executor:debug
2019/07/16 17:06:07 No matching credentials were found, falling back on anonymous
INFO[0001] Error while retrieving image from cache: getting file info: stat /cache/sha256:7587952834538c83a73b881def2f1bbb8ad73d545699105a96a2a5e370fa56bc: no such file or directory
INFO[0001] Downloading base image gcr.io/kaniko-project/executor:debug
2019/07/16 17:06:07 No matching credentials were found, falling back on anonymous
INFO[0001] Only file modification time will be considered when snapshotting
INFO[0002] Taking snapshot of full filesystem...
INFO[0002] Saving file /kaniko for later use.
INFO[0005] Deleting filesystem...
INFO[0005] Downloading base image alpine:latest
INFO[0005] Error while retrieving image from cache: getting file info: stat /cache/sha256:57334c50959f26ce1ee025d08f136c2292c128f84e7b229d1b0da5dac89e9866: no such file or directory
INFO[0005] Downloading base image alpine:latest
INFO[0005] Only file modification time will be considered when snapshotting
INFO[0005] Unpacking rootfs as cmd RUN apk update && apk upgrade && apk add --no-cache bash git requires it.
INFO[0006] Taking snapshot of full filesystem...
INFO[0006] ENV DOCKER_CONFIG='/kaniko/.docker'
INFO[0006] ENV GOOGLE_APPLICATION_CREDENTIALS='/kaniko/.docker/config.json'
INFO[0006] RUN apk update && apk upgrade && apk add --no-cache bash git
INFO[0006] cmd: /bin/sh
INFO[0006] args: [-c apk update && apk upgrade && apk add --no-cache bash git]
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
v3.10.1-2-gbc3922e64b [http://dl-cdn.alpinelinux.org/alpine/v3.10/main]
v3.10.1-1-gb7bbae6e40 [http://dl-cdn.alpinelinux.org/alpine/v3.10/community]
OK: 10327 distinct packages available
OK: 6 MiB in 14 packages
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/11) Installing ncurses-terminfo-base (6.1_p20190518-r0)
(2/11) Installing ncurses-terminfo (6.1_p20190518-r0)
(3/11) Installing ncurses-libs (6.1_p20190518-r0)
(4/11) Installing readline (8.0.0-r0)
(5/11) Installing bash (5.0.0-r0)
Executing bash-5.0.0-r0.post-install
(6/11) Installing ca-certificates (20190108-r0)
(7/11) Installing nghttp2-libs (1.38.0-r0)
(8/11) Installing libcurl (7.65.1-r0)
(9/11) Installing expat (2.2.7-r0)
(10/11) Installing pcre2 (10.33-r0)
(11/11) Installing git (2.22.0-r0)
Executing busybox-1.30.1-r2.trigger
Executing ca-certificates-20190108-r0.trigger
OK: 30 MiB in 25 packages
INFO[0007] Taking snapshot of full filesystem...
INFO[0009] COPY --from=kaniko /kaniko /kaniko
INFO[0012] Taking snapshot of files...
INFO[0027] Deleting filesystem...
2019/07/16 17:06:33 existing blob: sha256:0503825856099e6adb39c8297af09547f69684b7016b7f3680ed801aa310baaa
2019/07/16 17:06:34 pushed blob: sha256:2d05ad4487d2149a48e960513b97f95007da6709b7ffbf1dbbe9f7ac64b840fc
2019/07/16 17:06:34 pushed blob: sha256:a2f5d816c3ee3bc28b743061365a37ebe07ae65a61bd7f38de402e721d6d0881
2019/07/16 17:06:35 pushed blob: sha256:97b8426a6f549d1f8d6aecd69aa80b89736e3f739a3d1a0b8e12857a373bf68f
2019/07/16 17:06:36 index.docker.io/fluidattacks/alpine-kaniko:latest: digest: sha256:39f2d5410b9b1c2d6e9e8acbdce66064226731fe45af7a2cf37aa35db56717cc size: 756
Job succeeded
Expected behavior
The image should pull correctly.
To Reproduce
Steps to reproduce the behavior:
- Build the Dockerfile with kaniko
- Push it to the registry
- Try to pull it
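Concretely, with the script and image names used in this report:
# build and push with kaniko (this runs /kaniko/executor, see build-public.sh below)
sh ./ci-scripts/build-public.sh alpine-kaniko latest
# pulling the result then fails with "file name too long"
docker pull fluidattacks/alpine-kaniko:latest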
Additional Information
- Dockerfile:
# This image builds alpine with bash, git and kaniko executor
FROM gcr.io/kaniko-project/executor:debug as kaniko
FROM alpine:latest
ENV DOCKER_CONFIG='/kaniko/.docker'
ENV GOOGLE_APPLICATION_CREDENTIALS='/kaniko/.docker/config.json'
RUN apk update && \
apk upgrade && \
apk add --no-cache \
bash \
git
COPY --from=kaniko /kaniko /kaniko
- GitLab CI script:
alpine-kaniko:
  stage: setup
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  retry: 1
  script:
    - sh ./ci-scripts/build-public.sh alpine-kaniko latest
- build-public.sh:
echo "{\"auths\":{\"${DOCKER_HUB_URL}\":\
{\"username\":\"${DOCKER_HUB_USER}\",\
\"password\":\"${DOCKER_HUB_PASS}\"}}}" > /kaniko/.docker/config.json
/kaniko/executor \
--cleanup \
--context "dockerfiles/public/$1/" \
--dockerfile "dockerfiles/public/$1/Dockerfile" \
--destination "fluidattacks/$1:$2" \
--snapshotMode time
About this issue
- State: open
- Created 5 years ago
- Reactions: 3
- Comments: 20 (3 by maintainers)
@dsalaza4 I’ve been facing the same issue, but here’s how I worked around it.
For my use case I’m doing roughly the same thing, but with AWS ECR rather than GCR.
I noticed that this usually happens with the config.json file, where kaniko somehow saves the file under a long recursive filename (which I think is probably a bug, but I’m not so sure). To avoid that, I copy config.json and the other relevant files from the base /kaniko image individually, instead of copying the whole /kaniko folder (which I think causes this issue), so that these files are not affected when the executor builds and modifies the filesystem. In my case (with ECR) I do this with my own custom config.json. For you, I guess something like this would work as well (you just need to use Google API credentials for your use case):
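A rough sketch of the idea — the exact files and paths (e.g. the docker-credential-gcr helper) are assumptions you’d adjust to your own needs:

FROM gcr.io/kaniko-project/executor:debug as kaniko

FROM alpine:latest
ENV DOCKER_CONFIG='/kaniko/.docker'
RUN apk update && apk upgrade && apk add --no-cache bash git
# copy individual files instead of the whole /kaniko folder
COPY --from=kaniko /kaniko/executor /kaniko/executor
COPY --from=kaniko /kaniko/docker-credential-gcr /kaniko/docker-credential-gcr
COPY --from=kaniko /kaniko/.docker/config.json /kaniko/.docker/config.json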
Build and run this image to check if this works:
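For instance (the image name here is just a placeholder):
# build the image from the Dockerfile above and inspect what ended up in /kaniko
docker build -t alpine-kaniko-test .
docker run --rm alpine-kaniko-test find /kaniko -maxdepth 2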
However, if you copy the entire /kaniko folder during the build, the find command instead reveals the long recursive kaniko/0/kaniko/0/… paths shown in the error above. You can read further on what else you need to copy w.r.t. GCR and the Google APIs.
Hopefully it works for you as well since it works for me.
The problem is that the official kaniko images are FROM scratch, so if you want to use e.g. Git to extract useful building/tagging info from revision control, you’re SOL. kaniko:debug just adds BusyBox, and at least in the case of Git there are no official static binaries available that could easily be added to the kaniko image. TBH, it would be much better if there were e.g. a kaniko:alpine image available.

Okay, let’s step back a little and try to figure out what we were trying to solve in the first place, i.e. your credentials being saved and visible in the resulting docker image. Prior to that, with the solution I proposed before, it worked and you were able to build a docker image.
From what I understand, although you don’t require the docker credentials at build time, you still require them at runtime, i.e. in the live kaniko executor container started from the resulting image, when you run the executor to build and push images. The good thing is that you don’t need to bake the credentials into the image; you can inject them into the running executor container with the echo command, as before.
The problem with this approach is that the config where the docker credentials were stored got wiped out when the executor deleted the filesystem while building the image. I said it had a default whitelisted directory that prevents such directories from being deleted, however I was wrong. By default it doesn’t whitelist the /home/jenkins/agent directory, but you can instruct kaniko to whitelist a directory by setting a VOLUME directive in the Dockerfile (let’s go with /build now instead of /home/jenkins/agent for simplicity). So can you try the following Dockerfile:
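Something along these lines (the DOCKER_CONFIG value is my assumption, chosen so it matches the /build/.docker/config.json path mentioned below):

FROM gcr.io/kaniko-project/executor:debug as kaniko

FROM alpine:latest
RUN apk update && apk upgrade && apk add --no-cache bash git
# copy the executor (and whatever else you need) file by file rather than the whole /kaniko folder
COPY --from=kaniko /kaniko/executor /kaniko/executor
# VOLUME makes kaniko whitelist /build so it survives the executor's filesystem cleanup
VOLUME /build
ENV DOCKER_CONFIG='/build/.docker'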
Then you can echo your credentials into the /build/.docker/config.json file.
The way I’m doing it in my builds is the following:
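Roughly this, with the registry and tag as placeholders that match the next paragraph:
# build and push the seed image with plain docker, not kaniko
docker build -t my-registry/my-repo:pr179 .
docker push my-registry/my-repo:pr179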
Now I use my-registry/my-repo:pr179 inside my CI pipeline to build images with kaniko going forward. This can also be used to build the same image recursively with kaniko. So think of this image as the seed image, built by docker instead of kaniko; from here on, all images are built inside this container with the kaniko executor, as in the sketch at the end of this comment.
The reason it worked for me before was that I had been using the jenkins/jnlp-slave:alpine base image instead of the pure alpine image, and that already had the VOLUME directive set to /home/jenkins/agent, which made kaniko automatically whitelist the directory. So in this case, the /build directory should now be whitelisted and your credentials shouldn’t be wiped out when building and pushing the image.
I’ve tried it on my end, hopefully it works for you too.
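A sketch of that executor run inside the seed container — the flags mirror build-public.sh above, and the registry, repository and credential names are placeholders:
# inject credentials into the whitelisted /build volume at runtime
mkdir -p /build/.docker
echo "{\"auths\":{\"my-registry\":{\"username\":\"${MY_USER}\",\"password\":\"${MY_PASS}\"}}}" > /build/.docker/config.json
# then build and push any image from inside this container
/kaniko/executor \
  --cleanup \
  --context . \
  --dockerfile Dockerfile \
  --destination my-registry/my-repo:some-tag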
I’m reopening this issue as I just found out that although copying the executor and other needed files fixes the initial problem, it seems like we’re still having some serious issues when it comes to what we’re actually copying.
Let me explain in more detail.
When executing the COPY instruction from the Dockerfile above:
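COPY --from=kaniko /kaniko /kaniko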
one might expect those files to come from the stage specified earlier in the same Dockerfile:
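FROM gcr.io/kaniko-project/executor:debug as kaniko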
That is not the case with Kaniko. If the container you’re using to build the image is also gcr.io/kaniko-project/executor:debug, the files get copied over from the container that is building the image instead of from the stage declared in the Dockerfile. I know it’s a little confusing, so I will show the particular example that made me realize this.
Right after I built the container with kaniko, I went in to check the files within the /kaniko folder. There, I found out that my config.json file looked like this:
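It mirrored the echo from build-public.sh, with MY_USER and MY_PASS standing in for the real values and the registry URL being whatever DOCKER_HUB_URL expanded to:

{
  "auths": {
    "${DOCKER_HUB_URL}": {
      "username": "MY_USER",
      "password": "MY_PASS"
    }
  }
}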
where both MY_USER and MY_PASS were visible. This was because, when building the image, in order to be able to push it to the registry, I had logged in first with the echo from build-public.sh:
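echo "{\"auths\":{\"${DOCKER_HUB_URL}\":\
{\"username\":\"${DOCKER_HUB_USER}\",\
\"password\":\"${DOCKER_HUB_PASS}\"}}}" > /kaniko/.docker/config.json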
I tried many ways to reference the files from the stage instead of the files from the building container, with no success.
I also tried building the image with docker, and voila, it worked as expected. My config.json looked like this (the default kaniko config.json):
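That is, no stored credentials at all, only credential-helper entries pointing at docker-credential-gcr, roughly along these lines (the exact registry list depends on the kaniko version):

{
  "credHelpers": {
    "gcr.io": "gcr",
    "us.gcr.io": "gcr",
    "eu.gcr.io": "gcr",
    "asia.gcr.io": "gcr"
  }
}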
What a bummer 😞