helm: Error: UPGRADE FAILED: cannot re-use a name that is still in use

Output of helm version:

Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:26:04Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.12-gke.1", GitCommit:"f47fa5292f604d07539ddbf7e5840b77d686051b", GitTreeState:"clean", BuildDate:"2018-05-11T16:56:15Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): GKE

I got the following error during helm upgrade: Error: UPGRADE FAILED: cannot re-use a name that is still in use

It’d be nice to know which name that is, and of what type.

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 19
  • Comments: 30 (2 by maintainers)

Most upvoted comments

For those stuck when the initial deployment fails: after you fix your chart but still cannot deploy with the same release name, check the Secrets with kubectl and remove the one corresponding to the release.

$ kubectl -n your-namespace get secrets

NAME                  TYPE                          DATA   AGE
your-release-name.v1  helm.sh/release               1      5m

$ kubectl -n your-namespace delete secret your-release-name.v1

PS: I am testing with Helm 3; I’m not sure whether Helm 2 stores releases in the cluster in exactly the same way.

I see; my chart templates had a YAML parse error. After fixing it, the upgrade worked fine.

The problem is even bigger when your initial deployment fails. In that scenario, there’s nothing to roll back to, and everything seems to be stuck.

I was wondering whether anything is being done to improve the user experience around this specific issue.

The error message is really misleading; it would be nice to add a hint that this may be caused by a bad template rather than necessarily by a release name already being in use.

Today I found that the --force parameter makes this problem worse. With the parameter I get “Error: UPGRADE FAILED: a released named x is in use, cannot re-use a name that is still in use”; removing the parameter returns “Error: UPGRADE FAILED: YAML parse error on …/templates/secret.yaml: error converting YAML to JSON: yaml: line 11: did not find expected key”.

@abdennour An update on https://github.com/helm/helm/issues/4174#issuecomment-605485703.

Helm v2 stores release data as ConfigMaps (default) or Secrets in the namespace of the Tiller instance (kube-system by default).

It can be retrieved with the command: kubectl get configmap/secret -n <tiller_namespace> -l "OWNER=TILLER"

For example:

$ kubectl get configmap -n kube-system -l "OWNER=TILLER"
NAME         DATA   AGE
mysql-2.v1   1      4d17h
mysql-5.v1   1      4d17h

The name encodes the release name and revision, and that is the object you delete. For example: kubectl delete configmap mysql-2.v1 -n kube-system

Helm v3 stores release data as Secrets (default) or ConfigMaps in the namespace of the release.

It can be retrieved using the command: kubectl get secret --all-namespaces -l "owner=helm"

$ kubectl get secret --all-namespaces -l "owner=helm"
NAMESPACE   NAME                               TYPE                 DATA   AGE
default     sh.helm.release.v1.bar.v1          helm.sh/release.v1   1      5d22h
default     sh.helm.release.v1.mydemo.v1       helm.sh/release.v1   1      2d16h
default     sh.helm.release.v1.mydemo.v2       helm.sh/release.v1   1      2d16h
default     sh.helm.release.v1.mysql.v1        helm.sh/release.v1   1      5d12h
default     sh.helm.release.v1.tester-del.v1   helm.sh/release.v1   1      5d12h
test        sh.helm.release.v1.foo.v1          helm.sh/release.v1   1      3d17h

The name encodes the release name and revision (with an object prefix), and that is the Secret you delete. For example: kubectl delete secret sh.helm.release.v1.foo.v1 -n test
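
If you only need the records for one release rather than everything owned by Helm, the release Secrets also carry a name label you can filter on (a sketch assuming Helm 3’s default Secrets backend; verify the label set on your cluster):

$ kubectl get secret -n test -l "owner=helm,name=foo"
NAME                        TYPE                 DATA   AGE
sh.helm.release.v1.foo.v1   helm.sh/release.v1   1      3d17h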

This error always happens after a random transport is closing error. That is, if my helm upgrade --install fails with a transport is closing error (timeout), the release enters PENDING_UPGRADE status and every subsequent helm upgrade --install results in:

  • Error: UPGRADE FAILED: "namehere" has no deployed releases if --force is NOT passed
  • Error: UPGRADE FAILED: a released named namehere is in use, cannot re-use a name that is still in use if --force IS passed

There are no YAML errors in the deployment. It deploys correctly when a different release name is provided.
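
One way I have seen to get out of the stuck PENDING_UPGRADE state (a sketch, not an official procedure, assuming Helm 2 with the default ConfigMap storage since transport is closing points at Tiller; the release name, revision, and chart path are placeholders) is to find the pending revision and delete only its record, so the last deployed revision becomes current again:

$ helm history namehere                                 # find the revision stuck in PENDING_UPGRADE
$ kubectl -n kube-system delete configmap namehere.v3   # v3 is a placeholder for that revision
$ helm upgrade --install namehere ./chart               # retry the upgrade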

@vijaygos you can try helm template or helm lint to analyze your templates.
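
For example (a minimal sketch; the chart path and values file are placeholders, and the helm template invocation uses Helm 3 syntax):

$ helm lint ./mychart -f values.yaml
$ helm template my-release ./mychart -f values.yaml > /dev/null   # fails fast on template/YAML errors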

With Helm 2, all release metadata is saved by Tiller. With Helm 3, all release metadata is saved as Secrets in the same namespace as the release. If you get “cannot re-use a name that is still in use”, you may need to check for orphaned release Secrets and delete them:

kubectl -n ${NAMESPACE} delete secret -l name=${HELM_RELEASE}

Same here. I got this error and it took me ages to correlate it with a YAML error. Thanks @dragon9783 for sharing your finding. I’ll look into testing the YAML before running into that misleading error.

For anyone else who finds themselves here, I found that this less helpful error was a result of the presence of --force. Running helm upgrade --force --dry-run ... produced this:

UPGRADE FAILED
Error: a release named staging is in use, cannot re-use a name that is still in use
Error: UPGRADE FAILED: a release named staging is in use, cannot re-use a name that is still in use

while helm upgrade --dry-run ... (without --force) gave this actionable error:

UPGRADE FAILED
Error: Chart requires kubernetesVersion: >= 1.15.0 which is incompatible with Kubernetes v1.15.12-gke.2
Error: UPGRADE FAILED: Chart requires kubernetesVersion: >= 1.15.0 which is incompatible with Kubernetes v1.15.12-gke.2

In my case this pointed to the “wontfix” bug about incorrect ordering of prereleases in #6190 (c/o https://github.com/Masterminds/semver/issues/69).
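
If you hit that same constraint mismatch, a common workaround (an assumption based on how Masterminds/semver treats suffixes like -gke.2 as prereleases, not something confirmed in this thread) is to allow prereleases in the chart’s version constraint by appending -0, e.g. in Chart.yaml:

kubeVersion: ">= 1.15.0-0"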

This happens for me because I’m using the Terraform Helm provider and was waiting until the deployment completes. If it never becomes ready (which can easily happen due to a missing environment variable, etc.), Terraform times out. I can still revert with kubectl, but I can’t “fix” it by doing a new deployment. I’m in a better state when I don’t wait until the new pod is ready, but I don’t like that configuration (I may have dependent deploys in the future).

The problem had already occurred at this point, and if I use Terraform to destroy the deployment (something you never really want to do in prod, because your fallback, the last build’s ReplicaSet, is still up while your new build is waiting for readiness), it doesn’t remove the secret.

Is there a way to “clean” the secret instead of deleting it, so that I can just create a new ReplicaSet in the same Deployment, as one normally would with a new version? I’d like to do it this way to avoid downtime.
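
One option that avoids deleting anything (a sketch; the release name, namespace, and revision number are placeholders, and whether it clears the stuck status depends on your Helm version) is to roll the release back to its last successfully deployed revision, which leaves the existing ReplicaSet in place:

$ helm history my-release -n my-namespace      # note the last revision whose status is "deployed"
$ helm rollback my-release 2 -n my-namespace   # roll back to that revision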

@illinar +1, this method solved my problem.

+1 on @illinar’s answer; I also had to remove ConfigMaps and Services. I am also testing with Helm 3.

Thanks!

I had the same issue: after installing and then removing a chart, I got Error: cannot re-use a name that is still in use.

helm list also showed that the release was not there anymore.

I tried linting and templating, and the chart passed both, but I found that changing the release name highlighted leftover resources from a previous installation.

Once those resources were deleted, I could redeploy under the old name.
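
A quick way to check for those leftovers (a sketch assuming Helm 3 with the default Secrets storage; old-release-name is a placeholder):

$ helm list --all-namespaces --all                  # includes releases in failed or uninstalling states
$ kubectl get secret --all-namespaces -l "owner=helm" | grep old-release-name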

Same problem here. I can no longer install under the old name, only under a new one. Really bad for me.