kaniko: Kaniko build fails : too many levels of symbolic links
Actual behavior: after adding the symlink-creating RUN instructions to the Dockerfile (see below), the build fails with:
```
time="2022-08-17T10:25:08+08:00" level=info msg="Returning cached image manifest"
time="2022-08-17T10:25:08+08:00" level=info msg="Executing 0 build triggers"
time="2022-08-17T10:25:08+08:00" level=info msg="Unpacking rootfs as cmd RUN ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' > /etc/timezone && rm -rf /usr/local/openresty/nginx/conf/* && mkdir -p /usr/local/openresty/nginx/run/ requires it."
time="2022-08-17T10:25:13+08:00" level=error msg="Error: stat /var/spool/mail: too many levels of symbolic links
error calling stat on /var/spool/mail.
github.com/GoogleContainerTools/kaniko/pkg/util.mkdirAllWithPermissions
    /root/go/pkg/mod/github.com/!google!container!tools/kaniko@v1.8.1/pkg/util/fs_util.go:787
github.com/GoogleContainerTools/kaniko/pkg/util.ExtractFile
    /root/go/pkg/mod/github.com/!google!container!tools/kaniko@v1.8.1/pkg/util/fs_util.go:348
github.com/GoogleContainerTools/kaniko/pkg/util.GetFSFromLayers
    /root/go/pkg/mod/github.com/!google!container!tools/kaniko@v1.8.1/pkg/util/fs_util.go:205
github.com/GoogleContainerTools/kaniko/pkg/util.GetFSFromImage
    /root/go/pkg/mod/github.com/!google!container!tools/kaniko@v1.8.1/pkg/util/fs_util.go:131
github.com/GoogleContainerTools/kaniko/pkg/executor.(*stageBuilder).build.func1
    /root/go/pkg/mod/github.com/!google!container!tools/kaniko@v1.8.1/pkg/executor/build.go:330
github.com/GoogleContainerTools/kaniko/pkg/util.Retry
    /root/go/pkg/mod/github.com/!google!container!tools/kaniko@v1.8.1/pkg/util/util.go:165
github.com/GoogleContainerTools/kaniko/pkg/executor.(*stageBuilder).build
    /root/go/pkg/mod/github.com/!google!container!tools/kaniko@v1.8.1/pkg/executor/build.go:334
github.com/GoogleContainerTools/kaniko/pkg/executor.DoBuild
    /root/go/pkg/mod/github.com/!google!container!tools/kaniko@v1.8.1/pkg/executor/build.go:632
main.main
    /data/devops/workspace/src/kaniko-build/cmd/main.go:134
runtime.main
    /data/devops/apps/go/1.18.2/src/runtime/proc.go:250
runtime.goexit
    /data/devops/apps/go/1.18.2/src/runtime/asm_amd64.s:1571
failed to get filesystem from image
github.com/GoogleContainerTools/kaniko/pkg/executor.(*stageBuilder).build
    /root/go/pkg/mod/github.com/!google!container!tools/kaniko@v1.8.1/pkg/executor/build.go:335
github.com/GoogleContainerTools/kaniko/pkg/executor.DoBuild
    /root/go/pkg/mod/github.com/!google!container!tools/kaniko@v1.8.1/pkg/executor/build.go:632
main.main
    /data/devops/workspace/src/kaniko-build/cmd/main.go:134
runtime.main
    /data/devops/apps/go/1.18.2/src/runtime/proc.go:250
runtime.goexit
    /data/devops/apps/go/1.18.2/src/runtime/asm_amd64.s:1571
error building stage
github.com/GoogleContainerTools/kaniko/pkg/executor.DoBuild
    /root/go/pkg/mod/github.com/!google!container!tools/kaniko@v1.8.1/pkg/executor/build.go:633
main.main
    /data/devops/workspace/src/kaniko-build/cmd/main.go:134
runtime.main
    /data/devops/apps/go/1.18.2/src/runtime/proc.go:250
runtime.goexit
    /data/devops/apps/go/1.18.2/src/runtime/asm_amd64.s:1571
"
```
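The failing call is a plain stat on /var/spool/mail, and the same error is easy to reproduce outside kaniko with an ordinary symlink cycle. A minimal sketch, using a throwaway temp directory instead of the real /var/spool:

```shell
# Reproduce "too many levels of symbolic links" with a two-link cycle.
dir=$(mktemp -d)
mkdir -p "$dir/var/spool"
ln -s "$dir/var/spool/mail" "$dir/var/mail"   # /var/mail -> /var/spool/mail
ln -s "$dir/var/mail" "$dir/var/spool/mail"   # /var/spool/mail -> /var/mail
# stat -L dereferences the link, so it walks the cycle, as kaniko's stat does:
err=$(stat -L "$dir/var/spool/mail" 2>&1)
echo "$err"
rm -rf "$dir"
```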
Expected behavior: the build completes and the image is produced without the symlink error.
To Reproduce — steps to reproduce the behavior:
- use github.com/GoogleContainerTools/kaniko v1.8.1 as a library dependency for development
- Dockerfile:

```dockerfile
FROM bkrepo/openrestry:0.0.1
LABEL maintainer="Tencent BlueKing Devops"
ENV INSTALL_PATH="/data/workspace/"
ENV LANG="en_US.UTF-8"
RUN ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
    echo 'Asia/Shanghai' > /etc/timezone && \
    rm -rf /usr/local/openresty/nginx/conf/ /usr/local/openresty/nginx/log/ /usr/local/openresty/nginx/run/ && \
    mkdir -p /data/workspace/ /data/bkce/ci/ /data/bkce/logs/ci/nginx/ /data/bkce/logs/run/ && \
    ln -snf /data/bkce/ci/gateway /usr/local/openresty/nginx/conf && \
    ln -snf /data/bkce/logs/ci/nginx /usr/local/openresty/nginx/log && \
    ln -snf /data/bkce/logs/run /usr/local/openresty/nginx/run && \
    ln -snf /data/bkce /data/bkee && \
    chown -R nobody:nobody /data/bkce/logs/
WORKDIR /usr/local/openresty/nginx/
CMD ./sbin/nginx -g 'daemon off;'
```
- main.go:

```go
options := &config.KanikoOptions{
	RegistryOptions: config.RegistryOptions{
		InsecurePull:      true,
		SkipTLSVerify:     true,
		SkipTLSVerifyPull: true,
		Insecure:          true,
	},
	DockerfilePath:   dockerFilePath,
	RunV2:            false,
	SrcContext:       dockerBuildDir,
	Destinations:     []string{"image"},
	SkipUnusedStages: true,
	SnapshotMode:     "full",
	BuildArgs:        strings.Split(param.DockerBuildArgs, "\n"),
	TarPath:          imageTarDir,
	// NoPush: true,
	CustomPlatform: "linux/amd64",
}
```
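For reference, those library options map roughly onto the executor's command-line flags. A sketch only — flag names are from kaniko's README, and the paths and destination are placeholders:

```shell
# Approximate CLI equivalent of the KanikoOptions above (paths illustrative).
/kaniko/executor \
  --dockerfile=/workspace/Dockerfile \
  --context=dir:///workspace \
  --destination=image \
  --insecure --insecure-pull \
  --skip-tls-verify --skip-tls-verify-pull \
  --skip-unused-stages \
  --snapshot-mode=full \
  --custom-platform=linux/amd64 \
  --tar-path=/out/image.tar
```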
Additional Information
- Dockerfile Please provide either the Dockerfile you’re trying to build or one that can reproduce this error.
- Build Context Please provide or clearly describe any files needed to build the Dockerfile (ADD/COPY commands)
- Kaniko Image (fully qualified with digest)
Triage Notes for the Maintainers
| Description | Yes/No |
|---|---|
| Please check if this is a new feature you are proposing | |
| Please check if the build works in docker but not in kaniko | |
| Please check if this error is seen when you use the `--cache` flag | |
| Please check if your dockerfile is a multistage dockerfile | |
About this issue
- Original URL
- State: open
- Created 2 years ago
- Reactions: 10
- Comments: 17
A workaround: add the build args `--ignore-path=/var/mail --ignore-path=/var/spool/mail`.

OK, so I provide a Jenkins where people can use kaniko to build Docker images, and recently a colleague stumbled over this; he was using an Oracle Java image, and I asked him to just switch the base image.
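Spelled out as a full executor invocation, the `--ignore-path` workaround might look like this. A sketch only — the context, dockerfile, and destination values are placeholders:

```shell
# Skip the problematic paths when unpacking the base image's rootfs.
/kaniko/executor \
  --context=dir:///workspace \
  --dockerfile=/workspace/Dockerfile \
  --destination=image \
  --ignore-path=/var/mail \
  --ignore-path=/var/spool/mail
```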
sadly today i was working on a service which uses the oracle graalvm base image and well…it has the same issue
i then looked at the two images:

kaniko:
```
$ ls -l /var/mail
drwxr-xr-x 1 root root 0 Aug 9 2022 mail
$ ls -l /var/spool/mail
lrwxrwxrwx 1 root root 9 Aug 9 2022 mail -> /var/mail
```

graal:
```
$ ls -l /var/mail
lrwxrwxrwx. 1 root root 10 May 5 2021 mail -> spool/mail
$ ls -l /var/spool/mail
drwxrwxr-x. 1 root root 0 Apr 11 2018 mail
```
whoopsie, a turnaround.

so what do i do? just reverse the symlinks to match the kaniko layout 😄

et voila, kaniko works just fine 😃

i feel like this is a kaniko issue, but this should work around it
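Mixing the two layouts is exactly what produces the cycle. A minimal sketch in a temp directory, with paths mirroring the two `ls` listings:

```shell
dir=$(mktemp -d)
mkdir -p "$dir/var/spool"
# graal-style: /var/mail is a relative link into spool/
ln -s spool/mail "$dir/var/mail"
# kaniko-style: /var/spool/mail is an absolute link back to /var/mail
ln -s "$dir/var/mail" "$dir/var/spool/mail"
# Following either link now chases the other one forever:
err=$(stat -L "$dir/var/mail" 2>&1)
echo "$err"
rm -rf "$dir"
```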
In my case the issue seems to be caused by building two images sequentially without using the `--cleanup` flag (we originally omitted it as a workaround for #1568). The first image being built was based on nginx:1.21.3, having a relative symlink target (../mail) for /var/spool/mail. The second one was based on golang:1.19.0-alpine3.16, having an absolute symlink target (/var/mail) for /var/spool/mail. We could "fix" the issue by using the workaround suggested in #1568.
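The only difference between the two base images here is the symlink target style. A small sketch of relative vs absolute targets — the temp-directory paths and link names are illustrative:

```shell
dir=$(mktemp -d)
mkdir -p "$dir/var/spool" "$dir/var/mail"
# nginx-style: relative target, resolved against the link's own directory
ln -s ../mail "$dir/var/spool/mail_rel"
# alpine-style: absolute target
ln -s /var/mail "$dir/var/spool/mail_abs"
rel_target=$(readlink "$dir/var/spool/mail_rel")
abs_target=$(readlink "$dir/var/spool/mail_abs")
echo "$rel_target"   # ../mail
echo "$abs_target"   # /var/mail
rm -rf "$dir"
```

A relative target survives being extracted under a different root; an absolute one points back into whatever filesystem is current, which is why stacking layers from both styles without a cleanup in between can close a loop.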
Actually, I think I have an idea of what might be happening. This is a theory, but it does explain the behaviour.
Kaniko seems to extract the entire filesystem of the container locally, hence the message that looks like this:

```
INFO[0001] Unpacking rootfs as cmd RUN rm -rf /var/spool/mail requires it.
```

At this point, it may fall over because of the symlink loop. This will happen if you've taken the Kaniko binaries and put them into a new image whose base image is NOT scratch, because that base image will have directories to support the base OS. The Kaniko image, however, is built from scratch — see the files in this folder: https://github.com/GoogleContainerTools/kaniko/tree/main/deploy. For example:
https://github.com/GoogleContainerTools/kaniko/blob/fe2413e6e3c8caf943d50cf1d233a561943df1d6/deploy/Dockerfile#L47
As a result, when Kaniko extracts the rootfs from the image you specify in your Dockerfile, there is no symlink loop, since there's no filesystem on the Kaniko image, as it's been built from scratch.
As a test, try mounting your local folder into the Kaniko image and build that way, so something like this:
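A sketch of such a command, assuming the build context sits in the current directory; the image tag and flags are illustrative:

```shell
# Run the stock (scratch-based) kaniko image with the context bind-mounted in.
docker run --rm \
  -v "$PWD":/workspace \
  gcr.io/kaniko-project/executor:latest \
  --dockerfile=/workspace/Dockerfile \
  --context=dir:///workspace \
  --no-push
```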
And see if it fails. Alternatively, build from the kaniko image as a base, and add your files on top:
Then build and run it with your choice of switches.
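A sketch of that build-on-top approach — the wrapper Dockerfile name and tag are hypothetical:

```shell
# Layer the build context onto the scratch-based kaniko image.
cat > Dockerfile.kaniko-test <<'EOF'
FROM gcr.io/kaniko-project/executor:latest
COPY . /workspace
EOF

docker build -t kaniko-test -f Dockerfile.kaniko-test .
docker run --rm kaniko-test \
  --dockerfile=/workspace/Dockerfile \
  --context=dir:///workspace \
  --no-push
```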
Dockerfile-fedora-test:
Example output:
Also having this same issue. The suggestion by @sambonbonne unfortunately didn't help, since I get the error at the very START, before it can even get to the `rm -rf /var/spool/mail` line.

I had the same issue; the problem was that the base image I was using (in my FROM) has a "loop symlink": /var/mail was a symlink to /var/spool/mail, and /var/spool/mail was a symlink to /var/mail (I don't know why, it's an Alpine image from Docker Hub). For now I "fixed" it by adding a `RUN rm -rf /var/mail /var/spool/mail` as the first instruction of the image, but it's not really pretty.
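A minimal sketch of why that workaround helps: deleting both ends of the cycle and recreating a real directory makes the stat succeed again (a temp directory stands in for the real paths):

```shell
dir=$(mktemp -d)
mkdir -p "$dir/var/spool"
ln -s spool/mail "$dir/var/mail"
ln -s "$dir/var/mail" "$dir/var/spool/mail"
before=$(stat -L "$dir/var/mail" 2>&1)          # fails: symlink loop
# The workaround: remove both links, put a real directory back.
rm -rf "$dir/var/mail" "$dir/var/spool/mail"
mkdir "$dir/var/mail"
after=$(stat -L -c %F "$dir/var/mail")
echo "$after"
rm -rf "$dir"
```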