helm: Error: UPGRADE FAILED: cannot re-use a name that is still in use
Output of `helm version`:

```
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
```
Output of `kubectl version`:

```
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:26:04Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.12-gke.1", GitCommit:"f47fa5292f604d07539ddbf7e5840b77d686051b", GitTreeState:"clean", BuildDate:"2018-05-11T16:56:15Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}
```
Cloud Provider/Platform (AKS, GKE, Minikube etc.): GKE
I got the following error during `helm upgrade`:

```
Error: UPGRADE FAILED: cannot re-use a name that is still in use
```
It’d be nice to know which name that is, and of what type.
For those stuck when the initial deployment fails: after you fix your chart but still cannot deploy under the same release name, check the Secrets with kubectl and remove the one corresponding to the release (see the sketch below).
PS: I am testing with Helm 3; I'm not sure whether Helm 2 stores releases in the cluster in exactly the same way.
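A minimal sketch of that cleanup, assuming Helm 3's default storage, where release records are kept as Secrets named `sh.helm.release.v1.<release>.v<revision>` in the release's namespace; the release name `myapp` and namespace `staging` are placeholders:

```sh
# List the Helm-owned release Secrets in the namespace
kubectl get secret -n staging -l "owner=helm"

# Delete the record of the failed release so the name can be reused
kubectl delete secret sh.helm.release.v1.myapp.v1 -n staging
```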
I see — my chart templates had a YAML parse error; after fixing it, everything worked.
The problem is even bigger when your initial deployment fails. In that scenario, there's nothing to roll back to, and everything seems to be stuck.
I was wondering: is anything being done to improve the user experience around this specific issue?
The error message is really misleading, and it would be nice to add some clue that this may be caused by a bad template and not necessarily by a release name already being in use.
Today I found that the `--force` flag makes this problem worse. With the flag I get "Error: UPGRADE FAILED: a released named x is in use, cannot re-use a name that is still in use"; removing the flag returns the actual cause: "Error: UPGRADE FAILED: YAML parse error on …/templates/secret.yaml: error converting YAML to JSON: yaml: line 11: did not find expected key".
@abdennour An update on https://github.com/helm/helm/issues/4174#issuecomment-605485703.
Helm v2 stores release data as ConfigMaps (default) or Secrets in the namespace of the Tiller instance (`kube-system` by default). It can be retrieved with the command:

```
kubectl get configmap,secret -n <tiller_namespace> -l "OWNER=TILLER"
```

The name is the release version/revision, and that is what you delete. For example:

```
kubectl delete configmap mysql-2.v1 -n kube-system
```

Helm v3 stores release data as Secrets (default) or ConfigMaps in the namespace of the release. It can be retrieved using the command:

```
kubectl get secret --all-namespaces -l "owner=helm"
```

The name is the release version/revision (which includes an object prefix), and that is what you delete. For example:

```
kubectl delete secret sh.helm.release.v1.foo.v1 -n test
```

This error always happens after a random `transport is closing` error. That means: if my `helm upgrade --install` fails with a `transport is closing` error (timeout), then the release enters `PENDING_UPGRADE` status, and all subsequent `helm upgrade --install` runs will result in:

- `Error: UPGRADE FAILED: "namehere" has no deployed releases` if `--force` is NOT passed
- `Error: UPGRADE FAILED: a released named namehere is in use, cannot re-use a name that is still in use` if `--force` IS passed

There are no YAML errors in the deployment. It deploys correctly when a different release name is provided.
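A hedged recovery sketch for the stuck `PENDING_UPGRADE` case under Helm 3, assuming the default Secret storage and that the `status` label on release Secrets reflects the release state; `namehere`, `mynamespace`, and the revision number are placeholders:

```sh
# Find the release record stuck in pending-upgrade
kubectl get secret -n mynamespace -l "owner=helm,status=pending-upgrade"

# Delete only that revision's record; earlier deployed revisions stay intact,
# so the next `helm upgrade --install` proceeds from the last good state
kubectl delete secret sh.helm.release.v1.namehere.v4 -n mynamespace
```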
@vijaygos, you can try `helm template` or `helm lint` to analyze your templates (see the sketch below).

With Helm 2, all release metadata is saved in Tiller. With Helm 3, all release metadata is saved as Secrets in the namespace of the release. If you get "cannot re-use a name that is still in use", you may need to check for orphaned Secrets and delete them, as shown in the comment above.
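A minimal sketch of that pre-flight check, assuming the chart lives at `./mychart` and the release is named `myrelease` (both placeholders); these commands render and inspect the chart locally, so a YAML parse error surfaces here instead of as the misleading name-reuse error:

```sh
# Static analysis of the chart for common problems
helm lint ./mychart

# Render the templates locally; a bad template aborts with the real parse error
helm template myrelease ./mychart > /dev/null
```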
Same here. I got this error and it took me ages to correlate it with a YAML error. Thanks @dragon9783 for sharing your finding. I'll think about how to test the YAML before running into that misleading error.
For anyone else who finds themselves here: I found that this less helpful error was a result of the presence of `--force`. Running `helm upgrade --force --dry-run ...` produced the generic "cannot re-use a name that is still in use" message, while `helm upgrade --dry-run ...` (without `--force`) gave an actionable error, in my case pointing to the "wontfix" bug of incorrect ordering of prereleases in #6190 (c/o https://github.com/Masterminds/semver/issues/69).
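A short sketch of that diagnostic pattern (`myrelease` and `./mychart` are placeholders); dropping `--force` from a dry run often reveals the underlying failure:

```sh
# With --force: the root cause can be masked behind the name-reuse error
helm upgrade --force --dry-run myrelease ./mychart

# Without --force: the actual failure surfaces (YAML parse error, semver ordering, ...)
helm upgrade --dry-run myrelease ./mychart
```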
It happens for me because I'm using the Terraform Helm provider, and I was waiting until the deployment is complete. If it never becomes ready (which can easily happen due to a missing environment variable, etc.), then Terraform times out. I can still revert with kubectl, but I can't "fix" it by doing a new deployment. I'm in a better state when I don't wait until the new pod is ready, but I don't like that configuration (I may have dependent deploys in the future).
The problem had already occurred at this point, and if I use Terraform to destroy the deployment (something you never really want to do in prod, because your fallback, the last build's ReplicaSet, is still up while your new build is waiting for readiness), it doesn't remove the secret.
Is there a way to "clean" the secret instead of deleting it, so that I can just create a new ReplicaSet in the same Deployment, as one normally would with a new version? I'd like to do it this way to avoid downtime.
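One hedged workaround instead of "cleaning" a record in place, assuming Helm 3's per-revision Secrets: delete only the Secret of the failed revision and keep the earlier ones. The previously deployed ReplicaSet keeps serving traffic, so there is no downtime, and the release name becomes usable again (`myapp`, `prod`, and the revision numbers are placeholders):

```sh
# Each revision has its own record; list them to spot the failed one
kubectl get secret -n prod -l "name=myapp,owner=helm"

# Remove just the stuck revision (e.g. v3); v1/v2 records and the
# running workloads are untouched
kubectl delete secret sh.helm.release.v1.myapp.v3 -n prod

# A subsequent upgrade then creates a fresh revision as usual
helm upgrade myapp ./mychart -n prod
```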
@illinar +1, this method solved my problem.
+1 on @illinar's answer; I also had to remove ConfigMaps and Services. I am also testing with Helm 3.
Thanks!
I had the same issue: after I installed and removed a chart, I got the error:

```
Error: cannot re-use a name that is still in use
```

Also, `helm list` would show that the chart is not there anymore. I tried linting and templating, and the chart returned a successful result, but I found that installing under a different name would then highlight leftover resources from the previous installation. Once those resources were deleted, I could redeploy with the old name (see the sketch below for tracking them down).
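A hedged sketch for hunting down such leftovers, assuming the chart applies the common `app.kubernetes.io/instance` label and that Helm 3 stamps its release-name annotation on managed objects (treat both selectors as assumptions; `myapp` is a placeholder):

```sh
# Resources still carrying the old release's instance label
kubectl get all,configmap,secret --all-namespaces -l "app.kubernetes.io/instance=myapp"

# Cross-check which release a suspect object claims to belong to
kubectl get deployment myapp -o jsonpath='{.metadata.annotations.meta\.helm\.sh/release-name}'
```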
Same problem here. I cannot install under the old name anymore; I can only install under a new name… really bad for me.