helm: UPGRADE FAILED: No resource with the name "" found

Repro

Create a simple Chart.yaml:

name: upgrade-repro
version: 0.1.0

With a single K8S resource in the templates/ dir:

kind: ConfigMap
apiVersion: v1
metadata:
  name: cm1
data:
  example.property.1: hello

Install the chart:

helm install .
exasperated-op
Last Deployed: Tue Sep 13 12:43:23 2016
Namespace: default
Status: DEPLOYED

Resources:
==> v1/ConfigMap
NAME      DATA      AGE
cm1       1         0s

Verify the release exists:

helm status exasperated-op
Last Deployed: Tue Sep 13 12:43:23 2016
Namespace: default
Status: DEPLOYED

Resources:
==> v1/ConfigMap
NAME      DATA      AGE
cm1       1         1m

Now add a 2nd K8S resource in templates/ dir:

kind: ConfigMap
apiVersion: v1
metadata:
  name: cm2
data:
  example.property.2: hello

Upgrade the chart:

helm upgrade exasperated-op .
Error: UPGRADE FAILED: Looks like there are no changes for cm1

That’s weird. Bump the version in Chart.yaml:

name: upgrade-repro
version: 0.2.0

Try upgrade again:

helm upgrade exasperated-op .
Error: UPGRADE FAILED: No resource with the name cm2 found.

Expected

helm upgrade should create the cm2 resource instead of erroring that it doesn’t exist.

Edit: to be clear: helm is creating the cm2 ConfigMap, but helm fails regardless.

Current state after performing steps

helm status exasperated-op
Last Deployed: Tue Sep 13 12:43:23 2016
Namespace: default
Status: DEPLOYED

Resources:
==> v1/ConfigMap
NAME      DATA      AGE
cm1       1         6m

kubectl get configmap --namespace default
NAME           DATA      AGE
cm1            1         6m
cm2            1         4m
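
For completeness, a hedged way to inspect what Tiller itself recorded for this release (this assumes Helm 2's default ConfigMap storage backend, where each revision is stored as a ConfigMap in kube-system labelled with OWNER=TILLER and the release NAME). The stored manifest for the DEPLOYED revision still only contains cm1, which is why the upgrade's diff cannot find cm2:

kubectl -n kube-system get configmap -l "OWNER=TILLER,NAME=exasperated-op" --show-labels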

About this issue

  • State: closed
  • Created 8 years ago
  • Reactions: 124
  • Comments: 110 (37 by maintainers)

Most upvoted comments

This is a process I use to recover from this problem (so far it has worked every time without any incident… but be careful anyway):

  1. Run helm list and find the latest revision for the affected chart:

    NAME        REVISION UPDATED                  STATUS  CHART              NAMESPACE
    fetlife-web 381      Thu Mar 15 19:46:00 2018 FAILED  fetlife-web-0.1.0  default
    
  2. Go from there and find the latest revision in the DEPLOYED state:

    kubectl -n kube-system edit cm fetlife-web.v381
    kubectl -n kube-system edit cm fetlife-web.v380
    kubectl -n kube-system edit cm fetlife-web.v379
    kubectl -n kube-system edit cm fetlife-web.v378
    
  3. Once you find the last DEPLOYED revision, change its state from DEPLOYED to SUPERSEDED and save the file (see the sketch after this list for a label-based shortcut)

  4. Try to do the helm upgrade again; if it's successful, then you are done!

  5. If you encounter upgrade error like this:

    Error: UPGRADE FAILED: "fetlife-web" has no deployed releases
    

    then edit the status of the very last revision from FAILED to DEPLOYED

    kubectl -n kube-system edit cm fetlife-web.v381
    
  6. Try to do the helm upgrade again; if it fails again, just flip the table…
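
A minimal sketch of a shortcut for steps 2 and 3, assuming Tiller's default ConfigMap storage backend (the release name is the one from the helm list output above); the STATUS label is what Tiller filters on when it looks up the last deployed release:

# List every stored revision together with its STATUS label
kubectl -n kube-system get cm -l "OWNER=TILLER,NAME=fetlife-web" -L STATUS

# Flip a revision's status without opening an editor (revision number is illustrative)
kubectl -n kube-system label cm fetlife-web.v380 STATUS=SUPERSEDED --overwrite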

This happens frequently with our usage of helm and requires a full --purge. That is not a solution.

Is there someone assigned to fix this? Is there a PR for this already? Can I help with anything?

I’ve been bitten by this issue more than once since it’s an easy situation to get yourself into but apparently there’s no easy way to get out of. I suppose the “good” part in my case is that resources are updated even with the error on the release (not sure if that makes me happy or worried)

I think helm should either forbid the user from getting into this wrong state or correctly handle it. Are there any real fixes to this outside of deleting everything (that is only viable for non-production uses)?

Atomic will not resolve the issue

Example chart: https://github.com/distorhead/ex-helm-upgrade-failure

  1. Check out master, run deploy.
git clone https://github.com/distorhead/ex-helm-upgrade-failure
cd ex-helm-upgrade-failure
helm upgrade --atomic --install --namespace myns myrelease .

Chart contains 2 deployments – myserver1 and myserver2:

Release "myrelease" does not exist. Installing it now.
NAME:   myrelease
LAST DEPLOYED: Tue Feb  5 23:48:57 2019
NAMESPACE: myns
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME       READY  UP-TO-DATE  AVAILABLE  AGE
myserver1  1/1    1           1          5s
myserver2  1/1    1           1          5s

  2. Make a breaking change: delete deployment myserver1 from the chart and modify deployment myserver2 with a user error (delete the image field, for example):
git checkout break-atomic
git diff master
diff --git a/templates/deploy.yaml b/templates/deploy.yaml
index 198516e..64be153 100644
--- a/templates/deploy.yaml
+++ b/templates/deploy.yaml
@@ -1,21 +1,5 @@
 apiVersion: apps/v1beta1
 kind: Deployment
-metadata:
-  name: myserver1
-spec:
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        service: myserver1
-    spec:
-      containers:
-      - name: main
-        command: ["/bin/bash", "-c", "while true ; do date ; sleep 1 ; done"]
-        image: ubuntu:16.04
----
-apiVersion: apps/v1beta1
-kind: Deployment
 metadata:
   name: myserver2
 spec:
@@ -28,4 +12,3 @@ spec:
       containers:
       - name: main
         command: ["/bin/bash", "-c", "while true ; do date ; sleep 1 ; done"]
-        image: ubuntu:16.04

  3. Run deploy:
git checkout break-atomic
helm upgrade --atomic --install --namespace myns myrelease .

Say hello to our friend again:

UPGRADE FAILED
ROLLING BACK
Error: Deployment.apps "myserver2" is invalid: spec.template.spec.containers[0].image: Required value
Error: no Deployment with the name "myserver1" found

@bacongobbler @thomastaylor312 @jkroepke

We’re also affected by this issue. We use the latest helm 2.10 with GKE 10.6. When can I expect this to be fixed? Is there a reasonable workaround for the issue? Removing the whole deployment with the --purge option is a poor option.

I believe that with the --cleanup-on-fail flag enabled, this error case should go away. Closing as resolved via #4871 and #5143.
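
For reference, a hedged usage sketch of that flag once it lands (the release and chart reuse the repro at the top of this issue); --cleanup-on-fail asks Helm to delete resources that were newly created during an upgrade that ends up failing:

helm upgrade exasperated-op . --cleanup-on-fail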

I was able to workaround this with helm rollback and specifying the most recent revision (the one that failed)
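
A minimal sketch of that workaround, with the release name and revision number purely illustrative; helm history shows which revision is the failed one:

helm history exasperated-op
helm rollback exasperated-op 2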

First, I think Helm should be OK with namespaces and other resources already existing if it’s trying to (re-)install them. Kubernetes is all about “make the configuration right, and let kube figure out how to make the world match the config.” Second I think Helm should be all-or-nothing. If a deploy fails, the cluster should be in the state it was before the deploy started. If there are two releases that both want to create namespace X, then there’s a reference counting problem. If there is a release that wants to create namespace X, but it already exists, then there’s a provenance problem. However, helm can record this using annotations on the objects, and do the right thing.

It is not applicable if you use CI/CD. What happens if an upgrade fails and you use a rolling-update strategy? Must I delete my still-working release?

@bacongobbler I could be way off here. I am facing similar issues on upgrade. Judging by how difficult it is to solve this problem, I wonder if something more fundamental needs to be reconsidered. Part of the complexity appears to be due to the fact that Helm maintains its own version of the known configuration, separate from the actual source of truth, which is Kubernetes. Would the system be more reliable if Helm only kept a copy of previously deployed charts for the purposes of history and rollback, but didn't use it at all during an upgrade? Instead, Helm would get the truth from kubectl itself, and then always have a two-way diff to perform.

If a helm chart says it should have resource X, and kubectl sees an existing resource X, then:

  • If the existing resource is tagged as being controlled by Helm, then Helm performs the required upgrade
  • If the existing resource is not tagged as being controlled by Helm, then Helm fails with a clean error message (or some command line flag can be used to --force the upgrade and cause Helm to take ownership of existing Resource X).

If the helm chart says it should have resource X and there isn’t one according to kubectl, then Helm creates it.

If kubectl reports that it has a resource Y tagged as being controlled by this helm chart, and there is no resource Y in this helm chart, then helm deletes the resource.

Any resources not tagged as being controlled by this helm chart are always ignored by helm when performing the upgrade, except in the case mentioned above where the helm chart says it needs resource X and X exists but isn’t tagged.

If for some reason the roll-out of a helm chart happens and fails, and only half the resources were rolled out, then during a rollback helm would use the stored config files from the previous successful deployment and run the exact same algorithm, or things could be left in a broken state relative to some helm command line flag. If the user attempts to upgrade again, since kubernetes is used as the source of truth and not the last-known successful deployment, it should still be a simple 2-way diff between the new helm chart and the existing state of the system.
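
For concreteness, a hedged sketch of how "tagged as being controlled by Helm" can already be approximated today: charts conventionally stamp their objects with release/heritage labels rendered from .Release.Name and .Release.Service, so ownership can be checked with kubectl. Note these labels are a chart convention, not something Tiller adds on its own, and the object name below is hypothetical:

# "some-config" is a hypothetical object; "Tiller" in heritage suggests a Helm 2 release
# rendered it, while an empty result suggests it was created outside Helm
kubectl get configmap some-config -o jsonpath='{.metadata.labels.heritage} {.metadata.labels.release}{"\n"}'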

@brendan-rius You’re welcome to contribute code to fix this issue, or think of ideas. See #3805 and #4146 for some pointers.

Getting similar error while upgrading:

$ helm upgrade --install bunny ./app --namespace=staging --reuse-values --debug
[debug] Created tunnel using local port: '53859'

[debug] SERVER: "127.0.0.1:53859"

Error: UPGRADE FAILED: no ConfigMap with the name "bunny-proxy-config" found

The ConfigMap is created:

$ k get configmap
NAME                 DATA      AGE
bunny-proxy-config   1         7m

My configmap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "proxy.fullname" . }}-config
  labels:
    app: {{ template "proxy.name" . }}
    chart: {{ template "proxy.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  asd: qwe
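
One hedged way to double-check which name that template actually renders (helm template has shipped with the client since Helm 2.8; the chart path and release name are the ones from the upgrade command above):

helm template ./app --name bunny | grep -A 3 'kind: ConfigMap'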

this can easily be worked around by manually intervening and deleting the resource

@bacongobbler Please, understand that this resource might be a production namespace’s Service or Deployment object, which might (and already did) heavily disrupt our service guarantees.

I found our issue was because of a failed deploy.

Helm doesn’t attempt to clean up after a failed deploy, which means things like the new ConfigMap I added above get created but without a reference in the ‘prior’ deploy. That means when the next deploy occurs, helm finds the resource in k8s and expects it to be referenced in the latest deployed revision (or something; I’m not sure what exact logic it uses to find the ‘prior’ release) to check what changes there are. It’s not in that release, so it cannot find the resource, and fails.

This is mainly an issue when developing a chart as a failed deploy puts k8s in a state helm does not properly track. When I figured out this is what was happening I knew I just needed to delete the ConfigMap from k8s and try the deploy again.
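
As a concrete, hedged sketch of that recovery, reusing the names from the repro at the top of this issue (delete the orphaned object the failed release created, then retry the upgrade):

kubectl delete configmap cm2 --namespace default
helm upgrade exasperated-op .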

@bacongobbler @michelleN Is there anything that makes it hard to improve the error message for this issue?

I believe the error message should state that "there is a conflict because the resource wasn't created by helm and manual intervention is required" and not "not found". Just this small change to the error would improve the user experience by a good margin.

In our case we were adding a configmap to a chart and the chart fails to be upgraded with:

Error: UPGRADE FAILED: no resource with the name “<configmap-name>” found

Note: We’re using 2.7.2; on later versions this message has changed to include the type of the resource that can’t be found.

I believe this happens because when helm is determining what has changed it looks for the new configmap resource in the old release, and fails to find it. See https://github.com/kubernetes/helm/blob/master/pkg/kube/client.go#L276-L280 for the code where this error comes from.

Tiller logs for the failing upgrade:

[tiller] 2018/05/03 19:09:14 preparing update for staging-collector
[storage] 2018/05/03 19:09:14 getting deployed release from "staging-collector" history
[tiller] 2018/05/03 19:10:39 getting history for release staging-collector
[storage] 2018/05/03 19:10:39 getting release history for "staging-collector"
[tiller] 2018/05/03 19:10:41 preparing update for staging-collector
[storage] 2018/05/03 19:10:41 getting deployed release from "staging-collector" history
[storage] 2018/05/03 19:10:42 getting last revision of "staging-collector"
[storage] 2018/05/03 19:10:42 getting release history for "staging-collector"
[tiller] 2018/05/03 19:10:44 rendering collector chart using values
[tiller] 2018/05/03 19:10:44 creating updated release for staging-collector
[storage] 2018/05/03 19:10:44 creating release "staging-collector.v858"
[tiller] 2018/05/03 19:10:44 performing update for staging-collector
[tiller] 2018/05/03 19:10:44 executing 0 pre-upgrade hooks for staging-collector
[tiller] 2018/05/03 19:10:44 hooks complete for pre-upgrade staging-collector
[kube] 2018/05/03 19:10:44 building resources from updated manifest
[kube] 2018/05/03 19:10:44 checking 3 resources for changes
[tiller] 2018/05/03 19:10:44 warning: Upgrade "staging-collector" failed: no resource with the name "collector-config" found 
[storage] 2018/05/03 19:10:44 updating release "staging-collector.v857"
[storage] 2018/05/03 19:10:44 updating release "staging-collector.v858" 
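
In case it helps anyone gather the same evidence, a hedged way to pull logs like these, assuming the standard tiller-deploy Deployment in kube-system:

kubectl -n kube-system logs deploy/tiller-deploy | grep staging-collector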

I think @distorhead might want to take a look at that one and see if it also resolves his concerns he raised in https://github.com/helm/helm/pull/4871. Other than that, it looks like --atomic should address the concern assuming you always use the --atomic flag.

I don’t believe there have been any proposed solutions to address the issue once you get into this particular state, but I could be wrong. If the mitigation strategy for this issue is

  • manually go through the cluster’s live state and fix it as per the workaround
  • upgrade to Helm 2.13.0 and use helm upgrade --atomic going forward

Then I think this is safe to close.

Reposting my comment here since it’s more related to this issue than 3-way merge strategies:

If a three way merge is not viable for new deployments in Helm 2.y.z, when will #1193 be fixed? The bug has been open for nearly two years with no clear resolution planned for Helm 2.0.

At this point, we’re stumped on how to proceed. We’ve discussed the bug for weeks and none of the proposed solutions will work in all cases, either by introducing new bugs or significantly changing tiller’s upgrade behaviour.

For example, @michelleN and I brainstormed earlier this week and thought of two possible solutions, neither of which are particularly fantastic:

  1. When an upgrade fails, we automatically roll back and delete resources that were created during this release.

This is very risky as the cluster may be in an unknown state after a failed upgrade, so Helm may be unable to proceed in a clean fashion, potentially causing application downtime.

  2. During an upgrade, if we are creating a new resource and we see that it already exists, we instead apply those changes to the existing resource, or delete/re-create it.

This is extremely risky as Helm may delete objects that were installed via other packages or through kubectl create, neither of which users may want.

The safest option so far has been to ask users to manually intervene in the case of this conflict, which I’ll demonstrate below.

If anyone has suggestions/feedback/alternative proposals, we’d love to hear your thoughts.

@bacongobbler, if no support for the 3-way merge feature is planned, we need an alternative or a workaround. Otherwise, #1193 is a seriously painful blocker.

To re-iterate the issue as well as the workaround:

When an upgrade that installs new resources fails, the release goes into a FAILED state and stops the upgrade process. The next time you call helm upgrade, Helm does a diff against the last DEPLOYED release. In the last DEPLOYED release, this object did not exist, so it tries to create the new resource, but fails because it already exists. The error message is completely misleading as @arturictus points out.

This can easily be worked around by manually intervening and deleting the resource that the error reports as “not found”. Following the example I demonstrated in https://github.com/helm/helm/pull/4223#issuecomment-397413568:

><> helm fetch --untar https://github.com/helm/helm/files/2103643/foo-0.1.0.tar.gz
><> helm install ./foo/
...
><> vim foo/templates/service.yaml                    # change the service name from "foo" to "foo-bar"
><> kubectl create -f foo/templates/service.yaml      # create the service
service "foo-bar" created
><> helm upgrade $(helm last) ./foo/
Error: UPGRADE FAILED: no Service with the name "foo-bar" found
><> kubectl delete svc foo-bar
service "foo-bar" deleted
><> helm upgrade $(helm last) ./foo/
Release "riotous-echidna" has been upgraded. Happy Helming!
...
><> kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
foo-bar      ClusterIP   10.104.143.52   <none>        80/TCP    3s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   1h

In other words, deleting resources created during the FAILED release works around the issue.

Thanks for putting this workaround together @bacongobbler - it’s essentially what we came to as a process as well. One painful issue here is that during complex upgrades many new resources - at times a few dependency levels deep - may find themselves in this state. I haven’t yet found a way to fully enumerate these states automatically, leading to situations where one needs to repeatedly fail an upgrade to “search” for all relevant resources. For example, a newly added dependency recently had its own dependency on a postgresql chart. In order to resolve this issue it was necessary to delete a secret, configmap, service, deployment and pvc - each found the long way 'round.

We are seeing this problem, too. Our reproduce steps:

  1. helm install a chart that successfully installs a deployment
  2. Update the chart to include a custom resource in addition to existing deployment
  3. Change the image of the deployment podspec to mimic a deployment failure
  4. helm install new chart. This will cause a rolling update of the deployment, which we’ve intentionally set up to fail.
  5. The helm install should fail, but the custom resource is left behind in k8s etcd. (verify using kubectl)
  6. (At this point, helm is in a bad state)
  7. Fix the chart: put a good image in the deployment podspec.
  8. helm install. We expect this to work, but it doesn’t. It reports “No resource with name ___”. The name is that of the custom resource.
  9. Recovery: delete the residual custom resource using kubectl. Now helm install will work.

Note that the first attempt at helm install with a newly introduced custom resource in the chart must fail in order to get into this state.

Hi,

we have the same issue as @dilumr described… with version 2.11.0:

Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

Error: UPGRADE FAILED: no ConfigMap with the name "xxx" found

Still seeing this in v2.9.1 (currently released stable version)

I disagree that it’s “very dangerous” to back out of an upgrade. I think doing so is the correct solution.

Kubernetes is declarative. Snapshot what the cluster state was before attempting to upgrade. If there’s an error partway through, then roll back to the snapshot. If someone has script hooks that would leave the cluster in a bad state when doing this, then that’s their own fault. (Maybe that could be solved with rollback hooks, too)

Of course, it would be great if an upgrade was pre-flighted and didn’t fail in the first place as much as possible. Errors in dependency charts generated by values or --set arguments should be possible to check before trying to change anything, for example. Things like forgetting to bump the version number could also be pre-flighted to avoid making changes when it won’t work.
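
A hedged sketch of the pre-flight checks that already exist, using the chart from the repro at the top of this issue; neither catches every class of failure described here, but both run before anything in the cluster is changed:

helm lint .
helm upgrade exasperated-op . --dry-run --debug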

This (ugly) fix works for me:

0. I'm getting this error:

helm upgrade az-test-2-prom ./prometheus --namespace monitor --set cluster_name="az-test-2" -f values.yaml
Error: UPGRADE FAILED: no ConfigMap with the name "az-test-2-prom-prometheus-grafana-config" found

1. Find the last DEPLOYED revision:

export TEMPLATE='{{range .items}}{{.metadata.name}}{{"\t"}}{{.metadata.labels.STATUS}}{{"\n"}}{{end}}'
kubectl -nkube-system get cm -l 'OWNER=TILLER' -ogo-template="$TEMPLATE"
az-test-2-prom.v1	SUPERSEDED
az-test-2-prom.v10	SUPERSEDED
az-test-2-prom.v11	SUPERSEDED
az-test-2-prom.v12	SUPERSEDED
az-test-2-prom.v13	SUPERSEDED
az-test-2-prom.v14	SUPERSEDED
az-test-2-prom.v15	SUPERSEDED
az-test-2-prom.v16	SUPERSEDED
az-test-2-prom.v17	DEPLOYED
az-test-2-prom.v18	FAILED
az-test-2-prom.v19	FAILED
az-test-2-prom.v2	SUPERSEDED
az-test-2-prom.v20	FAILED
az-test-2-prom.v21	FAILED
az-test-2-prom.v22	FAILED
az-test-2-prom.v23	FAILED
az-test-2-prom.v24	FAILED
az-test-2-prom.v25	FAILED
az-test-2-prom.v26	FAILED
az-test-2-prom.v27	FAILED
az-test-2-prom.v28	FAILED
az-test-2-prom.v29	FAILED
az-test-2-prom.v3	SUPERSEDED
az-test-2-prom.v30	FAILED
az-test-2-prom.v4	SUPERSEDED
az-test-2-prom.v5	FAILED
az-test-2-prom.v6	SUPERSEDED
az-test-2-prom.v7	SUPERSEDED
az-test-2-prom.v8	SUPERSEDED
az-test-2-prom.v9	FAILED

In this output v17 is deployed

2. Delete all revisions from v17 onward (v17 through v30):

for ii in {17..30}
> do
>   kubectl -nkube-system delete cm az-test-2-prom.v${ii}
> done

3. Update release v16 to DEPLOYED:

kubectl -nkube-system patch cm az-test-2-prom.v16 -p '{"metadata": {"labels": {"STATUS": "DEPLOYED"}}}'

4. (Important) Find all new resources that were added since the last DEPLOYED revision (v16) and delete them, for example:

kubectl -nmonitor delete cm az-test-2-prom-prometheus-grafana-config
kubectl -nmonitor delete svc ...

5. Run helm upgrade ... again and see "Happy Helming".

Yes I know. I’m just explaining and observing the bug’s behaviour so others know what is involved. 😃

My situation was that I had a new resource, and I deployed the new version of the helm chart with the new resource. That deployment failed because I fat-fingered some yaml. Well, the new objects were created in kubernetes. I fixed the yaml and ran the upgrade on my chart again, and voila, the error message that the resource is not found appears. I had to go into kubernetes and remove the new resources (in my case a role and rolebinding) that were created by the failed deployment. After that, the check Helm does to see whether the object already exists (https://github.com/kubernetes/helm/blob/7432bdd716c4bc34ad95a85a761c7cee50a74ca3/pkg/kube/client.go#L257) no longer succeeds, and the resources are created again. Seems like a bug, where maybe new resources from a failed release should be accounted for?

@michelleN thanks! Sorry I haven’t had time this week to attempt a repro on master. Looking forward to upgrading soon!

@michelleN I’ve prepared a PR to change the error text: #5460.

@jkroepke just pointed out to me that PR #5143 provides a good workaround for this. When the --atomic flag is released in the next minor version, you should be able to use it to automatically purge or rollback when there is an error.

@bacongobbler given you have been involved with most of the back and forth on this one, is there something else that can be done to fully fix this, or would the --atomic flag be sufficient?

We observe the same problem. It happens if you have a template that is either:

  • in a {{if $condition -}} statement
  • or in a {{ range $index, $value := $array-}}

I am hitting this issue as well with the latest helm 2.12.0 and kubernetes 1.10.11. Even rolling back to the latest good release as @aguilarm suggested did not work, and deleting the resources that helm complains about does not help either; after the upgrade command fails, it leaves those same resources partially recreated. Very annoying for a prod env…

I have 2 clusters with very similar environments, the main difference between the two being the total number of nodes. In one case a helm delete --purge followed by a fresh helm install worked, but in the other it did not, and I have yet to figure out a way to bring it up to the latest template changes.

Going back to @bacongobbler’s earlier comment:

  1. During an upgrade, if we are creating a new resource and we see that it already exists, we instead apply those changes to the existing resource, or delete/re-create it. This is extremely risky as Helm may delete objects that were installed via other packages or through kubectl create, neither of which users may want.

I wonder if we can mitigate this risk by making the new behaviour opt-in? Within a given namespace I generally use helm exclusively, and I suspect this is the case for many. If I could give Helm install/upgrade a flag to tell it that anything in the given namespace that isn’t part of an existing release is fine to delete/overwrite, would that help?

Since you also said “via other packages”, I presume you don’t want Helm to have to examine other releases as part of performing a release, so my suggestion wouldn’t work except in the single-release-per-namespace model. To reply to that objection, I would say: if you want to manage multiple packages in a namespace and still get this behaviour, create an umbrella chart whose sole purpose is to specify the chart dependencies you want. Then use the new flag (“--exclusive”?) when deploying that umbrella chart.

Obviously this doesn’t solve the problem for all use cases, but perhaps it’s enough of a workaround.

I’m running into a similar issue where I have a chart with bundled dependencies. If I add a new dependency and run a helm upgrade the result is the same as described. The resources are properly created however helm returns an error.

So, if this is installed: helm install -n my-release

my-thing/
  Chart.yaml
  charts/
    depended-upon-thing/

And then a new chart is added as a dependency:

my-thing/
  Chart.yaml
  charts/
    depended-upon-thing/
    new-dependency/

When the release is upgraded with helm upgrade my-release my-thing, helm produces the following error:

Error: UPGRADE FAILED: No resource with the name new-dependency found.