helm: helm upgrade fails with spec.clusterIP: Invalid value: "": field is immutable

When issuing helm upgrade, it shows an error like the one below (“my-service” is changed from “clusterIP: None” to “type: LoadBalancer”, without a clusterIP field):

Error: UPGRADE FAILED: Service "my-service" is invalid: spec.clusterIP: Invalid value: "": field is immutable 

However, all other pods are still restarted with the new version; only the “my-service” type does not change to the new type “LoadBalancer”.

I understand why the upgrade failed: helm does not support changing certain fields. But why does helm still upgrade the other services/pods by restarting them? Shouldn't helm do nothing if there is any error during the upgrade? I expected helm to treat the whole set of services as a package and either upgrade all or none, but it seems my expectation might be wrong.

And if we ever end up in such a situation, what should we do to get out of it? For example, how can we upgrade “my-service” to the new type?

Also, if I use the --dry-run option, helm does not show any errors.

Is this considered a bug or expected behaviour, i.e. the upgrade throws an error but some services still get upgraded?

Output of helm version:

Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.10-gke.27", GitCommit:"145f9e21a4515947d6fb10819e5a336aff1b6959", GitTreeState:"clean", BuildDate:"2020-02-21T18:01:40Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): GKE and Minikube

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 97
  • Comments: 70 (17 by maintainers)

Most upvoted comments

Update! The upgrade fails ONLY when using helm upgrade --install with --force. Less of a blocker now.

FYI, the issue raised by the OP and the comments raised here about --force are separate, discrete issues. Let’s try to focus on OP’s issue here.

To clarify, the issue OP is describing is a potential regression @n1koo identified in https://github.com/helm/helm/issues/7956#issuecomment-620749552. That seems like a legitimate bug.

The other comments mentioning the removal of --force working for them is intentional and expected behaviour from Kubernetes’ point of view. With --force, you are asking Helm to make a PUT request against Kubernetes. Effectively, you are asking Kubernetes to take your target manifests (the templates rendered in your chart from helm upgrade) as the source of truth and overwrite the resources in your cluster with the rendered manifests. This is identical to kubectl apply --overwrite.

In most cases, your templates don’t specify a cluster IP, which means that helm upgrade --force is asking to remove (or change) the service’s cluster IP. This is an illegal operation from Kubernetes’ point of view.
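To make this concrete, here is a sketch (the service name and assigned IP are made up) of the mismatch between what a chart renders and what actually lives in the cluster:

# What the chart renders (no clusterIP):
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name
spec:
  type: ClusterIP
  ports:
    - port: 80

# What is stored in the cluster after creation:
#   spec:
#     clusterIP: 10.96.47.11   # example value assigned by the API server
# A PUT of the rendered manifest asks Kubernetes to clear clusterIP,
# which it rejects with "field is immutable".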

This is also documented in #7082.

This is also why removing --force works: Helm makes a PATCH operation, diffing against the live state, merging in the cluster IP into the patched manifest, preserving the cluster IP over the upgrade.
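The same contrast can be reproduced with plain kubectl, which may help when debugging (a sketch; service.yaml stands in for the rendered template from your chart):

# PUT semantics (what --force does): the file becomes the entire object,
# so a missing clusterIP reads as an attempt to clear it and is rejected
kubectl replace -f service.yaml

# PATCH semantics (what a normal upgrade does): only the fields present in
# the file are merged into the live object, so the assigned clusterIP survives
kubectl apply -f service.yaml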

If you want to forcefully remove and re-create the object like what was done in Helm 2, have a look at #7431.

Hope this clarifies things.

Moving forward, let’s try to focus on OP’s issue here.

Not enough information has been provided to reproduce. Please tell us how to create a reproducible chart, and which Helm commands you used.

Hit this in helm 3.5.2

We have the same behavior with v3.2.0; downgrading to v3.1.3 is our temporary fix.

Upgrading to helm version 3.5.0 solved the issue.

Helm version 3.5.0 still does not work, but without --force it worked.

I’m in helm 3.3.4 and this is still an issue

I thought I made it fairly clear in my earlier comment that there are two separate, unique cases where a user can see this error. One is OP’s case. The other is from the use of --force. We are focusing on OP’s issue here.

Out of respect for the people who are experiencing the same issue as the OP, please stop hijacking this thread to talk about --force. We are trying to discuss how to resolve OP’s issue. If you want to talk about topics that are irrelevant to the issue that the OP described, please either open a new ticket or have a look at the suggestions I made earlier.

@tibetsam with regards to fixing this for Helm 2: no. We are no longer providing bug fixes for Helm 2. See https://helm.sh/blog/helm-v2-deprecation-timeline/ for more info.

I am having this issue when the Helm chart version changes and there is an existing deployment.

Using Helm v3.2.0

Disabling the --force flag made it work.

@jbilliau-rcd

try not using --force

@pre

I think there is something wack happening with the three way merge. Perhaps the last-applied annotation is being improperly recorded somehow.

@EvgeniGordeev This is going to be a crude solution, but it worked for me with a small downtime: uninstall the chart and reinstall it.
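A minimal sketch of that workaround with Helm 3 syntax (release and chart names are hypothetical); the workload is down between the two commands:

helm uninstall my-release              # removes all release objects, including the stuck Service
helm install my-release ./my-chart     # recreates everything from the new templates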

So far I have found a reasonable workaround: the --reuse-values flag. Works for my case.
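For reference, a sketch of that flag in use (names are hypothetical); --reuse-values keeps the values from the last release and merges in any new overrides instead of resetting to the chart defaults:

helm upgrade my-release ./my-chart --reuse-values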

I ended up figuring it out; apparently you can change labels on a Deployment and on the pod spec, but NOT on the match selector… Kubernetes does not like that. Which is strange to me; how else am I supposed to modify my deployment to only select pods on version “v2” during, say, a canary deployment? Currently I have no way of doing that, so I'm confused on that part.

We have the same problem on v3.4.1 with --force flag.

Also, after an upgrade from Helm 2 to 3.6.3, removing --force worked.

I think I managed to reproduce OP's problem with the jupyterhub helm chart. Hopefully, with the instructions below, you will manage to reproduce the issue:


Important: the Jupyterhub helm chart does not contain a spec.clusterIP field in its Service specifications, as you can see (for example) here: https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/c0a43af12a89d54bcd6dcb927fdcc2f623a14aca/jupyterhub/templates/hub/service.yaml#L17-L29


I am using helm and kind to reproduce the problem:

➜ helm version
version.BuildInfo{Version:"v3.4.0", GitCommit:"7090a89efc8a18f3d8178bf47d2462450349a004", GitTreeState:"clean", GoVersion:"go1.14.10"}

➜ kind version
kind v0.9.0 go1.15.2 linux/amd64

➜ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-14T07:30:52Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}

How to reproduce

  1. Create a new kind cluster:
kind create cluster
  2. Create a file called config.yaml with the following content (randomly generated hex):
proxy:
  secretToken: "3a4bbf7405dfe1096ea2eb9736c0df299299f94651fe0605cfb1c6c5700a6786"

FYI I am following the instructions for helm file installation (link)

  3. Add the helm repository:
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
  4. Install the chart (with the --force option):
RELEASE=jhub
NAMESPACE=jhub

helm upgrade --cleanup-on-fail --force \
  --install $RELEASE jupyterhub/jupyterhub \
  --namespace $NAMESPACE \
  --create-namespace \
  --version=0.9.0 \
  --values config.yaml
  5. Repeat step 4

Error:

Error: UPGRADE FAILED: failed to replace object: PersistentVolumeClaim "hub-db-dir" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
        AccessModes:      []core.PersistentVolumeAccessMode{"ReadWriteOnce"},
        Selector:         nil,
        Resources:        core.ResourceRequirements{Requests: core.ResourceList{s"storage": {i: resource.int64Amount{value: 1073741824}, s: "1Gi", Format: "BinarySI"}}},
-       VolumeName:       "",
+       VolumeName:       "pvc-c614de5c-4749-4755-bd3a-6e603605c44e",
-       StorageClassName: nil,
+       StorageClassName: &"standard",
        VolumeMode:       &"Filesystem",
        DataSource:       nil,
  }
 && failed to replace object: Service "hub" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "proxy-api" is invalid: spec.clusterIP: Invalid value: "": field is immutable && failed to replace object: Service "proxy-public" is invalid: spec.clusterIP: Invalid value: "": field is immutable

@technosophos removing --force resolves the issue with ClusterIP when you migrate to Helm 3, as Helm 2 doesn't try to upgrade the ClusterIP while Helm 3 does. Helm 3 is not able to resolve the issue with immutable fields such as matchLabels.

Hello @technosophos @bacongobbler, we have the same 2 issues:

version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}

  1. Issue: we have a Service template without clusterIP, but Kubernetes will assign a clusterIP automatically:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Values.image.name }}
    release: {{ .Release.Name }}
spec:
  type: ClusterIP
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.port }}
      protocol: TCP
      name: http
  selector:
    app: {{ .Values.image.name }}
    release: {{ .Release.Name }}

After migrating to Helm 3 with helm 2to3 convert, trying to upgrade the same release with helm3 upgrade --install --force gives:

failed to replace object: Service "dummy-stage" is invalid: spec.clusterIP: Invalid value: "": field is immutable

If I do the same without --force, helm3 upgrade --install works fine without errors.

  2. Issue: if I want to change spec.selector.matchLabels in a Deployment, which is an immutable field, without --force I get this error:
cannot patch "dummy-stage" with kind Deployment: Deployment.apps "dummy-stage" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"web-nerf-dummy-app"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

If I do the same with --force I get this error:

failed to replace object: Deployment.apps "dummy-stage" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/name":"web-nerf-dummy-app"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

Is it possible to implement the same behaviour for --force as in Helm 2, where we could upgrade immutable fields without any error?
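One possible way out of the immutable-selector case, sketched under the assumption that briefly orphaning the pods is acceptable (this is not an official Helm recommendation): delete only the Deployment object, keep its pods running, and let the next upgrade recreate it with the new matchLabels. The Deployment name is taken from the error messages above; the release and chart names are hypothetical.

# Delete the Deployment but orphan its ReplicaSets/pods so they keep serving
# (--cascade=orphan on recent kubectl; older versions use --cascade=false)
kubectl delete deployment dummy-stage --cascade=orphan

# Recreate the Deployment with the new selector; pods carrying the old
# labels are no longer managed and may need manual cleanup afterwards
helm3 upgrade --install my-release ./my-chart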

So https://github.com/helm/helm/issues/6378#issuecomment-557746499 is correct. Please read that before continuing with this issue. If clusterIP: "" is set, Kubernetes will assign an IP. On the next Helm upgrade, if clusterIP: "" is set again, it will give the error above, because it appears to Kubernetes that you are trying to reset the IP. (Yes, Kubernetes modifies the spec: section of a service!)

When the Create method bypasses the 3-way diff, it sets clusterIP: "" instead of setting it to the IP address assigned by Kubernetes.

To reproduce:

$ helm create issue7956
$ # edit issue7956/templates/service.yaml and add `clusterIP: ""` under `spec:`
$ helm upgrade --install issue7956 issue7956
...
$ helm upgrade issue7956 issue7956
Error: UPGRADE FAILED: cannot patch "issue-issue7956" with kind Service: Service "issue-issue7956" is invalid: spec.clusterIP: Invalid value: "": field is immutable

The second time you run the upgrade, it will fail.
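For concreteness, the edited chunk of the template would look roughly like this (a sketch; the surrounding fields are whatever helm create generated):

# issue7956/templates/service.yaml (excerpt after the edit)
spec:
  clusterIP: ""                        # rendered literally on every upgrade, which
                                       # Kubernetes reads as clearing the assigned IP
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}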

Closing as a duplicate of #6378. @cablespaghetti found the deeper explanation for this behaviour, which is described in great detail.

Let us know if that does not work for you.

Hi, here are the reproduction steps. Start with the two YAML files below.

nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

prometheus.yaml

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: prometheus
spec:
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus
        name: prometheus
        ports:
        - containerPort: 9090
        imagePullPolicy: Always
      hostname: prometheus
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  selector:
    app: prometheus
  clusterIP: None
  ports:
  - name: headless
    port: 9090
    targetPort: 0

Then put these two files in helm1/templates/ and install. The output shows that the prometheus service is a headless ClusterIP service and the nginx image version is 1.14.2.

# helm upgrade --install test helm1
Release "test" does not exist. Installing it now.
NAME: test
LAST DEPLOYED: Tue Apr 21 20:42:55 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP    35d
prometheus   ClusterIP   None         <none>        9090/TCP   7s

# kubectl describe deployment nginx |grep Image
    Image:        nginx:1.14.2

Now update nginx.yaml to the new image version 1.16:

        image: nginx:1.16

and update the Service in prometheus.yaml by changing it to type LoadBalancer:

spec:
  selector:
    app: prometheus
  ports:
  - name: "9090"
    port: 9090
    protocol: TCP
    targetPort: 9090
  type: LoadBalancer

Now put them in a second chart, helm2, and do the upgrade. You can see the upgrade throws an error, but the nginx change goes through (it is upgraded to the new version), while prometheus is not upgraded: it is still a headless ClusterIP service.

# helm upgrade --install test helm2
Error: UPGRADE FAILED: cannot patch "prometheus" with kind Service: Service "prometheus" is invalid: spec.clusterIP: Invalid value: "": field is immutable

# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP    35d
prometheus   ClusterIP   None         <none>        9090/TCP   5m34s

# kubectl describe deployment nginx |grep Image
    Image:        nginx:1.16

helm list shows

# helm list
NAME	NAMESPACE	REVISION	UPDATED                                	STATUS	CHART                                    	APP VERSION
test	default  	2       	2020-04-21 20:48:20.133644429 -0700 PDT	failed	

helm history

# helm history test
REVISION	UPDATED                 	STATUS  	CHART       APP VERSION	DESCRIPTION                                                                                                                                               
1       	Tue Apr 21 20:42:55 2020	deployed	helm-helm	1.0.0.6    	Install complete                                                                                                                                          
2       	Tue Apr 21 20:48:20 2020	failed  	helm-helm	1.0.0.6    	Upgrade "test" failed: cannot patch "prometheus" with kind Service: Service "prometheus" is invalid: spec.clusterIP: Invalid value: "": field is immutable
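Following up on the OP's question of how to get out of this state: a crude sketch, assuming a brief outage of the service is acceptable (for recreate semantics see the discussion around #7431), is to delete the stuck Service and re-run the upgrade so it is recreated with the new type:

# Delete only the stuck Service; the rest of the release is untouched
kubectl delete service prometheus

# Re-run the upgrade; the Service is recreated as type LoadBalancer
helm upgrade --install test helm2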