helm: Error when updating StatefulSets

The first time I upgrade a StatefulSet, I get an error like Error: UPGRADE FAILED: StatefulSet.apps "eerie-tortoise-consul" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas' are forbidden. Subsequent attempts work with no error.

An easy way to repro for us:

  1. helm install stable/consul
  2. helm upgrade wobbly-bat stable/consul --set=Memory=200Mi
  3. Observe error
  4. helm upgrade wobbly-bat stable/consul --set=Memory=300Mi
  5. No error, statefulset is updated

Client: &version.Version{SemVer:"v2.2.3", GitCommit:"1402a4d6ec9fb349e17b912e32fe259ca21181e3", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.2.3", GitCommit:"1402a4d6ec9fb349e17b912e32fe259ca21181e3", GitTreeState:"clean"}

Logs

2017/03/21 00:16:36 storage.go:133: Getting release history for 'eerie-tortoise'
2017/03/21 00:16:37 release_server.go:913: Executing pre-upgrade hooks for eerie-tortoise
2017/03/21 00:16:37 release_server.go:940: Hooks complete for pre-upgrade eerie-tortoise
2017/03/21 00:16:37 client.go:381: generating strategic merge patch for *runtime.Unstructured
2017/03/21 00:16:37 client.go:393: Looks like there are no changes for eerie-tortoise-consul-tls
2017/03/21 00:16:37 client.go:393: Looks like there are no changes for eerie-tortoise-consul-statsdexport-config
2017/03/21 00:16:37 client.go:393: Looks like there are no changes for eerie-tortoise-consul-config
2017/03/21 00:16:37 client.go:393: Looks like there are no changes for eerie-tortoise-consul-alerts
2017/03/21 00:16:37 client.go:393: Looks like there are no changes for eerie-tortoise-consul
2017/03/21 00:16:37 client.go:393: Looks like there are no changes for eerie-tortoise-consul-ing
2017/03/21 00:16:37 client.go:381: generating strategic merge patch for *runtime.Unstructured
2017/03/21 00:16:37 client.go:246: error updating the resource eerie-tortoise-consul: StatefulSet.apps "eerie-tortoise-consul" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas' are forbidden.
2017/03/21 00:16:37 release_server.go:324: warning: Upgrade "eerie-tortoise" failed: StatefulSet.apps "eerie-tortoise-consul" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas' are forbidden.
2017/03/21 00:16:37 storage.go:53: Updating "eerie-tortoise" (v13) in storage
2017/03/21 00:16:37 storage.go:45: Create release "eerie-tortoise" (v14) in storage
2017/03/21 00:16:45 storage.go:133: Getting release history for 'eerie-tortoise'
2017/03/21 00:16:45 release_server.go:913: Executing pre-upgrade hooks for eerie-tortoise
2017/03/21 00:16:45 release_server.go:940: Hooks complete for pre-upgrade eerie-tortoise
2017/03/21 00:16:45 client.go:381: generating strategic merge patch for *runtime.Unstructured
2017/03/21 00:16:45 client.go:393: Looks like there are no changes for eerie-tortoise-consul-tls
2017/03/21 00:16:46 client.go:393: Looks like there are no changes for eerie-tortoise-consul-statsdexport-config
2017/03/21 00:16:46 client.go:393: Looks like there are no changes for eerie-tortoise-consul-config
2017/03/21 00:16:46 client.go:393: Looks like there are no changes for eerie-tortoise-consul-alerts
2017/03/21 00:16:46 client.go:393: Looks like there are no changes for eerie-tortoise-consul
2017/03/21 00:16:46 client.go:393: Looks like there are no changes for eerie-tortoise-consul-ing
2017/03/21 00:16:46 client.go:393: Looks like there are no changes for eerie-tortoise-consul
2017/03/21 00:16:46 release_server.go:913: Executing post-upgrade hooks for eerie-tortoise
2017/03/21 00:16:46 release_server.go:940: Hooks complete for post-upgrade eerie-tortoise
2017/03/21 00:16:46 storage.go:53: Updating "eerie-tortoise" (v14) in storage
2017/03/21 00:16:46 storage.go:45: Create release "eerie-tortoise" (v15) in storage
2017/03/21 00:16:46 storage.go:133: Getting release history for 'eerie-tortoise'
2017/03/21 00:16:47 client.go:155: Doing get for: 'eerie-tortoise-consul-gossip-key'
2017/03/21 00:16:47 client.go:155: Doing get for: 'eerie-tortoise-consul-tls'
2017/03/21 00:16:47 client.go:155: Doing get for: 'eerie-tortoise-consul-statsdexport-config'
2017/03/21 00:16:47 client.go:155: Doing get for: 'eerie-tortoise-consul-config'
2017/03/21 00:16:47 client.go:155: Doing get for: 'eerie-tortoise-consul-alerts'
2017/03/21 00:16:47 client.go:155: Doing get for: 'eerie-tortoise-consul'
2017/03/21 00:16:47 client.go:155: Doing get for: 'eerie-tortoise-consul-ing'
2017/03/21 00:16:47 client.go:155: Doing get for: 'eerie-tortoise-consul'

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 1
  • Comments: 27 (6 by maintainers)

Most upvoted comments

I had a similar error when adding a second container (k8s 1.7.8). I solved it by manually deleting the StatefulSet without deleting its running pods: kubectl delete sts --cascade=false dev-kafka

Then running helm upgrade succeeded, and it automatically started rolling the old pods.
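
For reference, the full workaround sequence looks roughly like this (dev-kafka is the StatefulSet name from the comment above; the release and chart names are placeholders):

# Delete only the StatefulSet object; --cascade=false orphans its pods, so nothing is restarted
kubectl delete sts dev-kafka --cascade=false
# Re-run the upgrade so Helm recreates the StatefulSet with the new spec
helm upgrade <RELEASE NAME> <CHART>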

What’s the recommended way of updating StatefulSets managed by Helm with changes beyond containers and replicas?

I managed to solve this issue by deleting and purging my helm release, then redeploying:

helm delete --purge <RELEASE NAME>

then helm upgrade --install --wait <RELEASE NAME> <CHART>
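
Put together, that sequence is roughly the following (Helm 2 syntax; <RELEASE NAME> and <CHART> are placeholders). Note that it recreates everything from scratch, so the StatefulSet's pods are deleted, although PVCs created from volumeClaimTemplates are normally left behind:

# Remove the release and its stored history, deleting the StatefulSet and its pods
helm delete --purge <RELEASE NAME>
# Reinstall and wait for the new resources to become ready
helm upgrade --install --wait <RELEASE NAME> <CHART>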

I just faced the same issue with 2.10.0, and the only fix I found is @balboah's solution (delete the sts before helm upgrade). But I'm still not happy with that, because I don't want my CI/CD pipeline to run that step before a failing deployment; I would have to roll back to the previous version to recreate the sts, whereas before, Helm would take care of that. Is anyone still facing this?

PS. What’s the recommended way of updating StatefulSets managed by Helm with changes beyond containers and replicas?

Also, curious about above ^

The changes were only to the container resources (spec.template.spec.containers[].resources), so we would expect no errors. We received the error only on the first attempt; subsequent attempts were verified to stick.

PS. What’s the recommended way of updating StatefulSets managed by Helm with changes beyond containers and replicas?

FWIW, still seeing this on k8s 1.8.5 & 1.8.6 with helm 2.7.2 and 2.8.0-rc.1

I had this problem with 2.8.2, then upgraded to 2.9.1 and had no issue.

With k8s 1.9 I get the error every time I try to update my chart (incubator/cassandra). The only way I found to update the number of replicas is to use the --reuse-values flag:

$ helm upgrade --reuse-values --set config.cluster_size=6 cassandra incubator/cassandra
Release "cassandra" has been upgraded. Happy Helming!
LAST DEPLOYED: Sun Feb 25 11:29:20 2018
NAMESPACE: cassandra
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                 TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)                                       AGE
cassandra-cassandra  ClusterIP  None        <none>       7000/TCP,7001/TCP,7199/TCP,9042/TCP,9160/TCP  9m

==> v1beta1/StatefulSet
NAME                 DESIRED  CURRENT  AGE
cassandra-cassandra  6        4        9m
...

Also interested. What is the process when I do need to update a StatefulSet? It cannot be “delete and create”, because:

  1. It would destroy the entire set, which is not what I am looking for (e.g. I want to add a persistent volume or change a memory reservation).
  2. It doesn't work for CD pipelines (I'm not going to ask CircleCI, Jenkins, or Concourse to parse the kube .yml, look for StatefulSets, and delete them before applying!).