helm: Helm upgrade failing since 2.12
Helm has been failing since 2.12 whenever a new resource is added to an existing chart: the first helm upgrade after adding the resource succeeds, but every subsequent upgrade fails. On 2.14.1 the error message is prettier, but the error persists.
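A minimal sketch of the failing sequence, using a placeholder chart directory and a hypothetical template name (the real chart is not included in this report); the reporter's actual output and error follow below:

# initial install of the chart (placeholder chart path and release name)
helm install ./processei --name processei

# add a new resource to the chart, e.g. a hypothetical templates/bounceruri-secret.yaml,
# then repackage it
helm package ./processei

# first upgrade after adding the resource: succeeds and creates the new Secret
helm upgrade processei processei-1.1.tgz

# any later upgrade with the same chart: fails with
# 'kind Secret with the name "bounceruri" already exists in the cluster
#  and wasn't defined in the previous release'
helm upgrade processei processei-1.1.tgz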
helm upgrade processei processei-1.1.tgz
Release "processei" has been upgraded.
LAST DEPLOYED: Fri Jun 14 22:15:26 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                                DATA  AGE
processei-nginx-ingress-controller  5     137d

==> v1/Deployment
NAME                 READY  UP-TO-DATE  AVAILABLE  AGE
processei-pgbouncer  1/1    1           1          3s

==> v1/PersistentVolume
NAME        CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS     CLAIM                         STORAGECLASS  REASON  AGE
postgresql  25Gi      RWO           Retain          Bound      default/processei-postgresql  postgresql            137d
rabbitmq    8Gi       RWO           Retain          Available                                rabbitmq              137d

==> v1/PersistentVolumeClaim
NAME                  STATUS  VOLUME      CAPACITY  ACCESS MODES  STORAGECLASS  AGE
processei-postgresql  Bound   postgresql  25Gi      RWO           postgresql    137d

==> v1/Pod(related)
NAME                                                      READY  STATUS   RESTARTS  AGE
processei-captcha-85754bbc88-qswj7                        1/1    Running  6         7d20h
processei-cronos-65f555f99-8dp9r                          1/1    Running  0         3h44m
processei-nginx-ingress-controller-775bc6d4b7-kmmkq       1/1    Running  0         7d19h
processei-nginx-ingress-default-backend-7fb678dbdc-hsjgk  1/1    Running  3         29d
processei-notifications-notifications-f7fbd7b68-lxbr4     1/1    Running  0         3h44m
processei-pgbouncer-689d6cb76d-fpxtz                      2/2    Running  0         2s
processei-postgresql-b449c7f5-dscm7                       2/2    Running  0         3h44m
processei-processei-68bbf8544b-shgzr                      1/1    Running  0         31m
processei-worker-api-8997986d6-5sgf9                      1/1    Running  0         3h44m
processei-worker-api-8997986d6-ln9cd                      1/1    Running  0         3h43m
processei-worker-api-8997986d6-m64jv                      1/1    Running  0         3h44m
rabbitmq-0                                                1/1    Running  7         137d

==> v1/Secret
NAME                  TYPE    DATA  AGE
bounceruri            Opaque  4     3s
processei-postgresql  Opaque  1     137d

==> v1/Service
NAME                                     TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)                                      AGE
db-postgresql                            NodePort   10.97.151.52   <none>       5432:5432/TCP                                137d
pgbouncer-service                        ClusterIP  10.111.112.50  <none>       5432/TCP,9127/TCP                            3s
processei-nginx-ingress-controller       NodePort   10.103.242.63  <none>       80:80/TCP,443:443/TCP                        137d
processei-nginx-ingress-default-backend  NodePort   10.105.27.158  <none>       80:28100/TCP                                 137d
rabbitmq                                 NodePort   10.106.97.9    <none>       5672:5672/TCP,4369:6170/TCP,25672:24324/TCP  137d
rabbitmq-management                      NodePort   10.110.115.81  <none>       15672:31059/TCP                              137d
service-captcha                          NodePort   10.106.40.26   <none>       80:25218/TCP                                 137d
service-cronos                           NodePort   10.110.40.77   <none>       80:28721/TCP                                 137d
service-processei                        NodePort   10.110.77.41   <none>       80:9135/TCP                                  137d

==> v1/ServiceAccount
NAME                     SECRETS  AGE
processei-nginx-ingress  1        137d

==> v1beta1/ClusterRole
NAME                     AGE
processei-nginx-ingress  137d

==> v1beta1/ClusterRoleBinding
NAME                     AGE
processei-nginx-ingress  137d

==> v1beta1/Deployment
NAME                                     READY  UP-TO-DATE  AVAILABLE  AGE
processei-captcha                        1/1    1           1          137d
processei-cronos                         1/1    1           1          137d
processei-nginx-ingress-controller       1/1    1           1          137d
processei-nginx-ingress-default-backend  1/1    1           1          137d
processei-notifications-notifications    1/1    1           1          137d
processei-postgresql                     1/1    1           1          137d
processei-processei                      1/1    1           1          137d
processei-worker-api                     3/3    3           3          137d

==> v1beta1/PodDisruptionBudget
NAME                                     MIN AVAILABLE  MAX UNAVAILABLE  ALLOWED DISRUPTIONS  AGE
processei-nginx-ingress-controller       1              N/A              0                    137d
processei-nginx-ingress-default-backend  1              N/A              0                    137d

==> v1beta1/Role
NAME                     AGE
processei-nginx-ingress  137d

==> v1beta1/RoleBinding
NAME                     AGE
processei-nginx-ingress  137d

==> v1beta1/StatefulSet
NAME      READY  AGE
rabbitmq  1/1    137d
root@master:~# helm upgrade processei processei-1.1.tgz
UPGRADE FAILED
Error: kind Secret with the name "bounceruri" already exists in the cluster and wasn't defined in the previous release. Before upgrading, please either delete the resource from the cluster or remove it from the chart

root@master:~# helm upgrade processei processei-1.1.tgz --reuse-values
UPGRADE FAILED
Error: kind Secret with the name "bounceruri" already exists in the cluster and wasn't defined in the previous release. Before upgrading, please either delete the resource from the cluster or remove it from the chart
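One workaround, following what the error message itself suggests, is to delete the conflicting resource from the cluster and re-run the upgrade; this sketch assumes the Secret is owned by the chart and can safely be recreated by it:

# delete the Secret that Helm considers unmanaged, then retry the upgrade
kubectl delete secret bounceruri
helm upgrade processei processei-1.1.tgz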
Output of helm version:
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}

Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider/Platform (AKS, GKE, Minikube etc.): self-hosted
About this issue
- State: closed
- Created 5 years ago
- Reactions: 1
- Comments: 16 (4 by maintainers)
This problem persists for us even with the max revision history set to 10.
Helm version 2.14.3
I'm experiencing this bug as well. I just rolled out 2.14 for my org in the hope of getting more bug fixes than bugs, but apparently I got unlucky this time. Is anyone able to re-open this issue, @rafagonc perhaps?
@karuppiah7890 I had it at 15 and reduced it to 3; once I did a deployment, Tiller deleted the old revisions and all subsequent deployments worked.
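A sketch of that workaround, assuming Helm 2 with the default ConfigMap storage backend (Tiller keeps one ConfigMap per release revision in kube-system); the release name processei is taken from the report above:

# redeploy Tiller with a lower revision-history cap so old revisions get pruned
helm init --upgrade --history-max 3

# inspect the stored revisions for the release; after the next deployment,
# revisions beyond the cap should be removed by Tiller
kubectl -n kube-system get configmaps -l OWNER=TILLER,NAME=processei

# then run the upgrade again
helm upgrade processei processei-1.1.tgz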