helm: helm upgrade fails with spec.clusterIP: Invalid value: "": field is immutable
When issuing `helm upgrade`, it shows an error like the one below ("my-service" is changed from `clusterIP: None` to `type: LoadBalancer`, with no `clusterIP` field):
Error: UPGRADE FAILED: Service "my-service" is invalid: spec.clusterIP: Invalid value: "": field is immutable
However, all other pods with the new version are still restarted; only the "my-service" type does not change to the new type "LoadBalancer".
I understand why the upgrade failed: helm does not support changing certain fields. But why does helm still upgrade other services/pods by restarting them? Shouldn't helm do nothing if there is any error during the upgrade? I expected helm to treat the whole set of services as a package and either upgrade all or none, but it seems my expectation might be wrong.
And if we ever end up in such a situation, what should we do to get out of it? For example, how do we upgrade "my-service" to the new type?
And if I use the --dry-run option, helm does not show any errors.
Is this considered a bug or expected behaviour, i.e. the upgrade throws an error but some services still get upgraded?
Output of helm version:
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.10-gke.27", GitCommit:"145f9e21a4515947d6fb10819e5a336aff1b6959", GitTreeState:"clean", BuildDate:"2020-02-21T18:01:40Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider/Platform (AKS, GKE, Minikube etc.): GKE and Minikube
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 97
- Comments: 70 (17 by maintainers)
Update! The upgrade fails ONLY when using `helm upgrade --install` w/ `--force`. Less of a blocker now.

FYI, the issue raised by the OP and the comments raised here about `--force` are separate, discrete issues. Let's try to focus on OP's issue here.

To clarify, the issue OP is describing is a potential regression @n1koo identified in https://github.com/helm/helm/issues/7956#issuecomment-620749552. That seems like a legitimate bug.
The other comments mentioning that the removal of `--force` works for them: this is intentional and expected behaviour from Kubernetes' point of view. With `--force`, you are asking Helm to make a PUT request against Kubernetes. Effectively, you are asking Kubernetes to take your target manifests (the templates rendered in your chart from `helm upgrade`) as the source of truth and overwrite the resources in your cluster with the rendered manifests. This is identical to `kubectl apply --overwrite`.

In most cases, your templates don't specify a cluster IP, which means that `helm upgrade --force` is asking to remove (or change) the service's cluster IP. This is an illegal operation from Kubernetes' point of view. This is also documented in #7082.
This is also why removing `--force` works: Helm makes a PATCH request, diffing against the live state and merging the cluster IP into the patched manifest, thereby preserving the cluster IP across the upgrade.

If you want to forcefully remove and re-create the object like what was done in Helm 2, have a look at #7431.
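To make the PUT-versus-PATCH distinction concrete, here is a small illustrative sketch — not Helm's actual code, and the JSON snippets are hypothetical:

```shell
# Live object in the cluster -- Kubernetes has filled in clusterIP:
live='{"type":"ClusterIP","clusterIP":"10.0.0.42"}'
# Rendered chart template -- it has no clusterIP key at all:
rendered='{"type":"ClusterIP"}'
# --force (PUT) submits the rendered manifest wholesale, so clusterIP
# would effectively become "" -- rejected as an immutable-field change.
echo "PUT body: $rendered"
# A three-way PATCH diffs against the live state instead, so the
# assigned clusterIP is merged back in and preserved:
echo "PATCH result: $live"
```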
Hope this clarifies things.
Moving forward, let’s try to focus on OP’s issue here.
Not enough information has been provided to reproduce. Please tell us how to create a reproducible chart, and which Helm commands you used.
Hit this in helm 3.5.2
We have the same behavior with v3.2.0, downgrading to v3.1.3 is our temporary fix
Helm version 3.5.0 still does not work, but without `--force` it works.

I'm on helm 3.3.4 and this is still an issue.
I thought I made it fairly clear in my earlier comment that there are two separate, unique cases where a user can see this error. One is OP’s case. The other is from the use of --force. We are focusing on OP’s issue here.
Out of respect for the people who are experiencing the same issue as the OP, please stop hijacking this thread to talk about --force. We are trying to discuss how to resolve OP’s issue. If you want to talk about topics that are irrelevant to the issue that the OP described, please either open a new ticket or have a look at the suggestions I made earlier.
@tibetsam with regards to fixing this for Helm 2: no. We are no longer providing bug fixes for Helm 2. See https://helm.sh/blog/helm-v2-deprecation-timeline/ for more info.
Disabling --force flag made it work.
@jbilliau-rcd
try not using --force
@pre
I think there is something wack happening with the three way merge. Perhaps the last-applied annotation is being improperly recorded somehow.
@EvgeniGordeev This is going to be a crude solution, but it worked for me with a small downtime: uninstall the chart, then reinstall it.
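In command form, that crude workaround looks roughly like this (release and chart names are placeholders; on Helm 2 the first command would be `helm delete --purge my-release` instead):

```shell
# Delete the release entirely, then install it again so the Service
# is created fresh with the new type. This causes downtime.
helm uninstall my-release
helm install my-release ./mychart
```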
So far I have found a reasonable workaround: the `--reuse-values` flag. Works for my case.

I ended up figuring it out; apparently you can change labels on a `Deployment` and on the pod spec, but NOT on the match selector… Kubernetes does not like that. Which is strange to me; how else am I supposed to modify my deployment to only select pods with `version: "v2"` during, say, a canary deployment? Currently, I have no way of doing that, so I'm confused on that part.

We have the same problem on v3.4.1 with the --force flag.
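For reference, the immutable field the earlier comment is describing is the Deployment's selector; a minimal sketch (all names are placeholders) of the part the API server refuses to change in place:

```shell
# Writing the manifest to a file just to show the field in question.
# Changing spec.selector.matchLabels (e.g. version: v1 -> v2) on an
# existing Deployment is rejected with "field is immutable".
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      version: v1   # immutable once the Deployment exists
  template:
    metadata:
      labels:
        app: my-app
        version: v1
    spec:
      containers:
      - name: my-app
        image: nginx:1.14.2
EOF
```

The usual escape hatch is to delete and re-create the Deployment (or run a second Deployment alongside), since the selector cannot be edited in place.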
Also, after an upgrade from helm 2, on 3.6.3 removing `--force` worked.

I think I managed to reproduce OP's problem with the JupyterHub helm chart. Hopefully, with the instructions below, you will manage to reproduce the issue:
Important: the JupyterHub helm chart does not contain a `spec.clusterIP` field in its Service specifications, as you can see (for example) here: https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/c0a43af12a89d54bcd6dcb927fdcc2f623a14aca/jupyterhub/templates/hub/service.yaml#L17-L29

I am using helm and kind to reproduce the problem:
How to reproduce
FYI I am following the instructions for helm file installation (link)
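The exact commands did not survive in this copy of the thread; a sketch of their general shape, assuming the standard JupyterHub chart repository and kind (release and cluster names are placeholders):

```shell
# Placeholder reproduction sketch -- not the commenter's exact commands.
kind create cluster --name helm-repro
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
# First install works; Kubernetes assigns clusterIPs to the Services.
helm upgrade --install jhub jupyterhub/jupyterhub
# Re-running the same upgrade with --force fails on spec.clusterIP.
helm upgrade --install --force jhub jupyterhub/jupyterhub
```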
(note the `--force` option). Error:
@technosophos
Removing `--force` resolves the issue with ClusterIP when you migrate to helm 3, as helm 2 does not try to upgrade ClusterIP while helm 3 does. But helm 3 is not able to resolve the issue with immutable fields such as `matchLabels`.

Hello @technosophos @bacongobbler, we have the same 2 issues:
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}

1. A `Service` template without `clusterIP`, where kubernetes assigns the `clusterIP` automatically: after migrating to helm 3 with `helm 2to3 convert` and trying to upgrade the same release with `helm3 upgrade --install --force`, it fails. If I do the same without `--force`, `helm3 upgrade --install` works fine without error.
2. `spec.selector.matchLabels` in a Deployment, which is an immutable field: without `--force` I get an error, and if I do the same with `--force` I also get an error.

Is it possible to implement the same behaviour for `--force` as in helm 2, where we could upgrade immutable fields without any error?

So https://github.com/helm/helm/issues/6378#issuecomment-557746499 is correct. Please read that before continuing on with this issue. If `clusterIP: ""` is set, Kubernetes will assign an IP. On the next Helm upgrade, if `clusterIP: ""` is set again, it will give the error above, because it appears to Kubernetes that you are trying to reset the IP. (Yes, Kubernetes modifies the `spec:` section of a service!)

When the `Create` method bypasses the 3-way diff, it sets `clusterIP: ""` instead of setting it to the IP address assigned by Kubernetes.

To reproduce:
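The concrete steps were not preserved in this copy; based on the surrounding explanation, the shape of the reproduction is roughly (placeholder names, assuming a chart whose Service template literally sets `clusterIP: ""`):

```shell
# Sketch only: ./mychart's Service template contains clusterIP: "".
helm install my-release ./mychart   # Kubernetes assigns a real clusterIP
helm upgrade my-release ./mychart   # first upgrade
helm upgrade my-release ./mychart   # second upgrade -> immutable-field error
```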
The second time you run the upgrade, it will fail.
Closing as a duplicate of #6378. @cablespaghetti found the deeper explanation for this behaviour, which is described in great detail.
Let us know if that does not work for you.
Hi, here are the steps to reproduce. Create the two service YAML files below.
nginx.yaml
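The file contents did not survive in this copy of the thread; a hypothetical reconstruction consistent with the description below (nginx at 1.14.2) might look like:

```shell
mkdir -p helm1/templates
# Hypothetical reconstruction -- the original nginx.yaml was not preserved.
cat > helm1/templates/nginx.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF
```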
prometheus.yaml
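Likewise, a hypothetical reconstruction of prometheus.yaml as a headless Service (`clusterIP: None`, matching the error in the original report):

```shell
mkdir -p helm1/templates
# Hypothetical reconstruction -- the original prometheus.yaml was not preserved.
cat > helm1/templates/prometheus.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  clusterIP: None
  selector:
    app: prometheus
  ports:
  - port: 9090
EOF
```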
Then put these two files in helm1/templates/ and install. It shows the prometheus service using a clusterIP, and the nginx version is 1.14.2.
Now update nginx.yaml to the new version, 1.16,
and prometheus.yaml by changing it to type LoadBalancer.
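A sketch of the changed prometheus.yaml for the helm2 copy of the chart (again a hypothetical reconstruction):

```shell
mkdir -p helm2/templates
# The Service type changes; the clusterIP: None line is dropped.
cat > helm2/templates/prometheus.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  type: LoadBalancer
  selector:
    app: prometheus
  ports:
  - port: 9090
EOF
```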
Now put them as helm2 and do the upgrade. You can see that the upgrade throws an error, but the nginx service goes through and is upgraded to the new version, while prometheus is not upgraded: it is still using a ClusterIP.
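The commands for that step would be roughly as follows (release name is a placeholder, and helm1/helm2 are assumed to be complete charts with a Chart.yaml):

```shell
helm install my-release ./helm1
# The upgrade reports the clusterIP immutable-field error for the
# prometheus Service, yet the nginx Deployment is still rolled out.
helm upgrade my-release ./helm2
```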
helm list shows
helm history