# kaniko: Building multiple images with `--cleanup` fails in v1.5.0 when calling `chmod`
## Actual behavior

When building multiple images (using `--cleanup` on each of them), the first build succeeds but subsequent builds fail with the following error:

```
ERROR: Process exited immediately after creation. See output below
Executing sh script inside container kaniko of pod jobName-branchName-buildNumber-cpmb7-q99pr-0v57c
OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "chdir to cwd (\"/workspace\") set in config.json failed: no such file or directory": unknown
```

This error occurs when running `chmod +x docker-entrypoint.sh` inside the Kaniko container in a Jenkins build.
## Expected behavior

In v1.3.0, all builds run with no errors, and the same is expected of v1.5.0.
## To Reproduce

This runs inside a loop in a Jenkins build. Each iteration writes a Dockerfile to the current working directory, which is a shared mounted volume such as `/home/jenkins/agent/workspace/jobName_branchName`.

```groovy
container('kaniko') {
    script {
        jobs.each { job ->
            writeFile(file: 'Dockerfile', text: job.dockerfile)
            writeFile(file: 'docker-entrypoint.sh', text: job.entrypoint)
            sh 'chmod +x docker-entrypoint.sh'
            sh "/kaniko/executor -f `pwd`/Dockerfile -c `pwd` --cache=true --destination=${job.dockerImage}:${job.tag} --cleanup"
        }
    }
}
```

There is also a volume mount at `/kaniko/.docker` with credentials in `config.json`.

I can confirm that removing `--cleanup` causes the build to succeed, though it is not a good workaround because the filesystem will be dirty and can corrupt the images.
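The failure mode can be demonstrated without Jenkins or kaniko at all. The sketch below (an illustration, not part of the original pipeline) simulates what happens: once the directory configured as a step's working directory has been deleted, any new process told to start there cannot `chdir` into it, which is exactly the `chdir to cwd ("/workspace")` error above.

```shell
# Minimal simulation of the failure mode, assuming only a POSIX shell.
# Deleting a directory that a later process uses as its cwd reproduces
# the "chdir to cwd failed: no such file or directory" behavior.
tmp=$(mktemp -d)
mkdir "$tmp/workspace"
rm -rf "$tmp/workspace"   # what --cleanup effectively does to /workspace
cd "$tmp/workspace" 2>/dev/null || echo "chdir failed: no such file or directory"
```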
## Additional Information

- Dockerfile

The Dockerfile is generated dynamically, but here is an example of the final output:

```dockerfile
FROM python:3.6.12-buster
WORKDIR /app
COPY docker-entrypoint.sh .
COPY requirements.txt .
RUN pip3 install -r ./requirements.txt
COPY myapp .
ENTRYPOINT [ "./docker-entrypoint.sh" ]
CMD [ "python3", "myscript.py" ]
```

- Kaniko image (fully qualified with digest)

Working image: `v1.3.0-debug`, `sha256:473d6dfb011c69f32192e668d86a47c0235791e7e857c870ad70c5e86ec07e8c`
Failing image: `v1.5.0-debug`, `sha256:a0f4fc8cbd93a94ad5ab2148b44b80d26eb42611330b598d77f3f591f606379a`
## Triage Notes for the Maintainers

| Description | Yes/No |
|---|---|
| Please check if this a new feature you are proposing | |
| Please check if the build works in docker but not in kaniko | |
| Please check if this error is seen when you use `--cache` flag | |
| Please check if your dockerfile is a multistage dockerfile | |
## About this issue
- Original URL
- State: open
- Created 3 years ago
- Reactions: 14
- Comments: 19
I ran into this error late Friday… with fresh eyes this morning I worked with `kubectl` to directly create a kaniko pod/container, and `kubectl exec` into it as I assume Jenkins does… I found that the `--cleanup` command was removing the `/workspace` directory and that subsequent `kubectl exec` commands fail because the `WORKDIR` `/workspace` no longer exists. The quick and dirty workaround that I am using is to append `&& mkdir -p /workspace` to the end of my `/kaniko/executor` command… using @jmmk’s example…
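Combining that workaround with the executor invocation from the reproduction above, the step might look like the following sketch. `IMAGE` and `TAG` are placeholder variables introduced here for illustration, not names from the original pipeline:

```shell
# Sketch of the workaround: re-create /workspace after --cleanup deletes it,
# so the next sh step's working directory still exists.
# IMAGE and TAG are hypothetical placeholders.
/kaniko/executor -f "$(pwd)/Dockerfile" -c "$(pwd)" --cache=true \
  --destination="${IMAGE}:${TAG}" --cleanup \
  && mkdir -p /workspace
```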
In `v1.6.0`, kaniko’s `--cleanup` flag deletes the `/busybox` directory as well, so you can’t `mkdir /workspace`, since the `mkdir` executable is located in `/busybox`. An undocumented “feature” is to add `/busybox` in `--ignore-paths` so that `--cleanup` doesn’t delete it. Using the example above:
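A sketch of that v1.6.0 variant, assuming kaniko's repeatable `--ignore-path` flag (the comment above calls it `--ignore-paths`; check your version's flag name). `IMAGE` and `TAG` remain placeholder variables:

```shell
# Keep /busybox out of --cleanup's reach so mkdir is still available
# afterwards, then re-create /workspace for the next sh step.
/kaniko/executor -f "$(pwd)/Dockerfile" -c "$(pwd)" --cache=true \
  --destination="${IMAGE}:${TAG}" --cleanup \
  --ignore-path=/busybox \
  && mkdir -p /workspace
```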
Hi, I am also seeing this issue in my Jenkins build setup using a Kaniko container. The first image is successfully created, but subsequent stages building other images fail. Rolling back to v1.3.0 resolves the issue, and the pipeline then works with the identical Jenkinsfile.

I am also using the `--cleanup` flag to clean the filesystem, and removing it is not a solution I am considering at the moment.
… to me this begs the question: should `--cleanup` remove the contents of `/workspace` rather than removing `/workspace` entirely?

@austinorth I pinned the build using `--cleanup` to version `1.3.0`. Until we hear something back, and unless you require a feature from recent builds, that seems the best way to go.

@allenhsu we also had the issue without using `--cleanup`. Are you using a multi-stage build? I believe those do the equivalent of `--cleanup` between each stage.

I’m also seeing this. Reverting to 1.3.0 fixes the issue for now.
Ran into this same issue today; the `--cleanup && mkdir -p /workspace` trick worked.

I would second @LelandSindt that `--cleanup` should be more targeted in how it cleans up directories, or we need another solution that’s Jenkins-aware so that this doesn’t blow up Jenkins the way it currently does. No lie, this was pretty surprising to run into, considering Kaniko claims to focus on the Kubernetes experience and Jenkins is likely the primary way folks will be doing CI/CD with Kaniko. If you’re using Jenkins and building more than one image in the same container, you’re going to hit this for sure. 🐛
In fairness, I am not using Kubernetes but Docker in ECS. The config would be under https://your-jenkins.com/manage/configureClouds/ in “ECS agent templates”; it is similar to the Docker agent template, and I assume there will be something comparable for K8s.
There is a solution for K8s in cloudbees https://docs.cloudbees.com/docs/cloudbees-ci-kb/latest/cloudbees-ci-on-modern-cloud-platforms/what-you-need-to-know-when-using-kaniko-from-kubernetes-jenkins-agents
When is this going to be fixed? I have a requirement to build multiple images using one container; all Dockerfiles contain `chmod`, AND I need to save and push them using the `--tarPath` flag, pushing to an ECR using Crane. I’ve reverted to v1.3.0 so that the `--cleanup` flag works, but the `--tarPath` flag is not available in that version!

Anyone find a workaround for this? I noticed the latest release (1.6.0) still results in this same error.