helm: Helm upgrade not working as expected

Hi,

I am trying to upgrade one of my packages, but the changes I made to the “deployment.yaml” template in the chart are not there after the upgrade. I added the following lines to the spec of my Kubernetes deployment.yaml file:

spec:
  containers:
  - env:
    - name: LOGBACK_DB_ACQUIRE_INCREMENT
      value: "1"
    - name: LOGBACK_DB_MAX_IDLE_TIME_EXCESS_CONNECTIONS
      value: "10"
    - name: LOGBACK_DB_MAX_POOL_SIZE
      value: "2"
    - name: LOGBACK_DB_MIN_POOL_SIZE
      value: "1"

I tried upgrading with the following command:

helm upgrade ironic-molly spring-app-0.1.2.tgz --recreate-pods

where “ironic-molly” is the release name and spring-app-0.1.2.tgz is my chart with the changes.

Helm’s output says that the release was upgraded, but the changes I made are missing from the deployment.yaml. What might be causing this issue?
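One way to narrow down where the change gets lost (a minimal sketch; the deployment name below is a placeholder, not taken from the report) is to compare what Helm rendered for the release against what is actually live in the cluster:

# What Helm recorded as the manifest for this release
helm get manifest ironic-molly | grep -A1 LOGBACK_DB

# What is actually running in the cluster (replace <deployment-name> with your deployment)
kubectl get deployment <deployment-name> -o yaml | grep -A1 LOGBACK_DB

If the first command shows the new env vars but the second does not, the rendered release and the live object have diverged, which is the symptom discussed in the rest of this thread.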

Regards, Muhammed Roshan

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 5
  • Comments: 43 (6 by maintainers)

Most upvoted comments

We are experiencing the same behaviour.

Description

Changed imagePullSecrets directly in the template cluster-service-chart/templates/deployment.yaml (not via a variable):

      imagePullSecrets:
      - name: cluster-service-pull-secret

then applied the new chart with:

helm upgrade -f some_values.yaml --install cluster-service <NEW_CHART>

helm get cluster-service shows the new deployment, but:

Expected:

kubectl get deploy/cluster-service -o yaml should show the new imagePullSecrets value.

Actual:

kubectl get deploy/cluster-service -o yaml shows the old imagePullSecrets value, and the whole deployment looks old.
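A quick way to confirm the mismatch described above (a sketch, assuming the release and the deployment are both named cluster-service, as in this report):

# Rendered manifest stored with the release
helm get manifest cluster-service | grep -A1 imagePullSecrets

# imagePullSecrets on the live Deployment object
kubectl get deploy/cluster-service -o jsonpath='{.spec.template.spec.imagePullSecrets}'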

Version

helm version                                                           
Client: &version.Version{SemVer:"v2.5.0", GitCommit:"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.5.0", GitCommit:"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"}

I’m also seeing this. helm get shows the correct output, but the deployment is never updated.

I’m also facing the same issue when installing the stable/jenkins chart using v2.10.0. After the initial installation, the kube resources would not reflect the change in Agent.resources after calling helm upgrade -f values.yaml [RELEASE] stable/jenkins. Calling helm get values [RELEASE] showed that the values were correctly set in the chart; however, the resources did not reflect the change. I also tried adding the --force and --recreate-pods options to the helm upgrade command, but they produced the same results (--force didn’t even recreate the deployment). Lastly, I tried manually deleting the Jenkins deployment and then calling upgrade again (with those same options), but that did not work. Without being able to change Agent.resources, the Jenkins executor dies with an OOM during every build.

The solution was to delete the release entirely and reinstall, this time with my desired values set in values.yaml first. Luckily my org was not yet reliant on Jenkins when I ran into this problem; otherwise this solution would not have been feasible.
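For reference, a minimal sketch of that delete-and-reinstall workaround on Helm 2 (the [RELEASE] placeholder mirrors the notation above; --purge removes the stored release history so the name can be reused):

# Remove the release and its stored revision history (Helm 2)
helm delete --purge [RELEASE]

# Reinstall with the desired values already in values.yaml
helm install --name [RELEASE] -f values.yaml stable/jenkins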

helm upgrade performs inconsistently; sometimes updated values propagate all the way down to the resources, and sometimes they don’t. It’s unclear whether the problem lies solely within helm upgrade or whether it’s chart dependent. However, it’s clear from this thread that the same problem has appeared across various charts, and it deserves more investigation.

I am also seeing this; in my case an environment variable changed but the deployment was not updated.

helm template | kubectl apply is a common use case, yes. 😃
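That workflow looks roughly like this (a sketch; the chart path, release name, and values file are placeholders):

# Render the chart locally and apply the result directly, bypassing Tiller's patching
helm template --name my-release -f values.yaml ./my-chart | kubectl apply -f -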

I was having this issue with the 3.0.0-alpha1 release but solved it by installing Helm 2.14.0.

@GloriaPG can you please share what issues you are experiencing, ideally in a new ticket? We need test cases to help verify whether you found a bug, whether we need to clarify something in the documentation, or whether there’s something else we can do to help.

Consolidating this into #5915. Helm 3 has recently implemented a three-way merge patch strategy, which should alleviate some of the issues here. Please follow up in that ticket so we can keep the conversation relevant and all in one place. Thanks!

Thanks, I managed a “maybe” safer workaround by piping helm get into a file, removing the release info at the top, and applying what’s left with kubectl. That kills the whole concept of Helm, though 😢
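A slightly cleaner variant of that workaround (a sketch, assuming Helm 2, where helm get manifest prints only the rendered Kubernetes manifests without the release info header) skips the manual editing step:

# Apply the manifests Helm already rendered for the release, bypassing Tiller's patch logic
helm get manifest <release-name> | kubectl apply -f -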

We still haven’t identified the underlying issue that’s causing this… Typically we only escalate to a bug once someone has actually pinpointed how to reproduce the problem.

If someone can provide some steps to reproduce this with a public chart that would be helpful to diagnose the issue. Thanks!

Maybe the label on this issue should be changed 😃 With all the reports over a long period and the me-toos, it could be considered kind/bug or something?

I am also seeing this issue when changing the YAML of a DaemonSet; the container spec is not getting updated…