helm: Cannot upgrade a release with Job
Output of `helm version`: 3.1.1
Output of `kubectl version`: 1.16.0 client / 1.15.7 server
Cloud Provider/Platform (AKS, GKE, Minikube etc.): AKS
We’re deploying a chart that contains a batch/v1 Job and a batch/v1beta1 CronJob, amongst other things. We needed to update the image tag of the container that’s deployed as the Job/CronJob, and during `helm upgrade --install --atomic` we get this error:
Error: UPGRADE FAILED: release X failed, and has been rolled back due to atomic being set: cannot patch "X" with kind Job: Job.batch "Y" is invalid: spec.template: Invalid value: [skipped]: field is immutable
The only difference between the two deployments is the value of the container image tag.
About this issue
- State: closed
- Created 4 years ago
- Comments: 17 (6 by maintainers)
The existing Job needs to be deleted first, because the `template` section of a Job is immutable and cannot be updated in place. So you have the following two options: delete the Job manually before upgrading (e.g. `kubectl delete job <name>`), or set `ttlSecondsAfterFinished`, which deletes the Job automatically a specified number of seconds after it finishes.
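A minimal sketch of the `ttlSecondsAfterFinished` approach (the Job name, image, and TTL value here are placeholders; note that on clusters as old as the 1.15 server mentioned above, the alpha `TTLAfterFinished` feature gate had to be enabled for this field to take effect):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate            # hypothetical Job name
spec:
  # Delete the Job (and its pods) 100 seconds after it finishes,
  # so the next helm upgrade can create it afresh instead of
  # trying to patch the immutable template.
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
        - name: migrate
          image: example.com/migrate:v2   # the tag being bumped
      restartPolicy: Never
```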
If helm bills itself as a “package manager for Kubernetes”, shouldn’t it be a bit more helpful in cases like this? Whenever I run into this situation, I have to uninstall the release and then reinstall the chart anew. It seems like helm could do that for me with a flag on helm install…
Hi everyone, in addition to all of the above responses you can look at Helm chart hooks: https://helm.sh/docs/topics/charts_hooks/ I solved my issue thanks to @distnie and their second point, which suggests using `ttlSecondsAfterFinished`. The purpose of my Job was to fill the database with seeds, and I covered the remaining edge cases by only allowing the Job to run as a `post-install` hook, as sketched below.
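A rough sketch of the hook approach (the annotations are standard Helm hook annotations; the Job name and image are placeholders). With `before-hook-creation` as the delete policy, Helm removes any leftover Job before creating the new one, which sidesteps the immutable-field patch entirely:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-seed               # hypothetical Job name
  annotations:
    # Run this Job only as a post-install hook, outside the normal
    # install/upgrade of the release's other resources.
    "helm.sh/hook": post-install
    # Delete any Job left over from a previous run before creating this one.
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      containers:
        - name: seed
          image: example.com/seed:v1   # placeholder image
      restartPolicy: Never
```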
As mentioned earlier, if you know of a way to improve the situation, please feel free to submit a patch.
The situation you are describing is orthogonal to the issue described above. I’d suggest taking a look at the issue queue for a larger write-up on it; there are other PRs looking to alleviate that problem, but it’s not the same as what’s being discussed here.
To anyone who came here looking for similar use cases around upgrading immutable resources, the pre-deploy section in the Flux docs may address it: https://fluxcd.io/docs/use-cases/running-jobs/
The issue is that Kubernetes considers certain fields to be immutable: you cannot change certain values on certain types of Kubernetes objects after creation. However, Helm doesn’t know which fields those are, because the schema does not expose that information. So Helm sends the manifest to Kubernetes, which responds with an error naming the immutable field. The deployment fails because the object cannot be patched.
The solution is for your charts not to attempt to modify immutable fields. You can find more information in the Kubernetes documentation at https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ (though you may have to go to the Kubernetes API reference to find out exactly which fields are immutable).
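One hypothetical way for a chart to do that (a common pattern, not something prescribed in this thread): bake the release revision into the Job name, so every upgrade creates a brand-new Job instead of patching the old one. The `migrate` name and `.Values.image.tag` value below are placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  # .Release.Revision increments on every upgrade, so each upgrade
  # renders a Job with a new name and Helm creates it rather than
  # patching the previous (immutable) one.
  name: {{ .Release.Name }}-migrate-{{ .Release.Revision }}
spec:
  template:
    spec:
      containers:
        - name: migrate
          image: example.com/migrate:{{ .Values.image.tag }}   # hypothetical value
      restartPolicy: Never
```

Note that completed Jobs pile up under this pattern unless you also set `ttlSecondsAfterFinished` or clean them up out of band.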
FWIW, many of the Kubernetes controllers will not let you change certain things in the spec template’s metadata (the labels in particular), because Kubernetes uses that information to find the objects a controller has deployed.