helm: --wait returns true when pod cannot start

Hi, I’ve just observed that helm install --wait returned success (exit code 0) even though a pod ended up in an ImagePullBackOff state.

Is this expected behaviour? Reading the help text for --wait, it doesn’t seem so:

“if set, will wait until all Pods, PVCs, Services, and minimum number of Pods of a Deployment are in a ready state before marking the release as successful.”

❯ helm version
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
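Until this is addressed, one hedged workaround in CI is to re-check the pods' container waiting reasons after helm install --wait returns. The helper below only performs the text check; the release name and label selector in the usage comment are assumptions, not from this issue:

```shell
#!/bin/sh
# Hedged CI guard (sketch): helm install --wait in v2.7.2 may exit 0 even when
# a pod is stuck in ImagePullBackOff, so re-check the waiting reasons ourselves.
# Reads waiting reasons on stdin; returns non-zero if a fatal one appears.
check_pull_errors() {
  ! grep -Eq 'ImagePullBackOff|ErrImagePull'
}

# Usage in a pipeline (names are hypothetical):
#   helm install --wait --name myrelease ./chart
#   kubectl get pods -l release=myrelease \
#     -o jsonpath='{..containerStatuses..waiting.reason}' | check_pull_errors
```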

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 1
  • Comments: 20 (10 by maintainers)

Most upvoted comments

Ok, update on the issue. Due to the way k8s handles different API object versions, the fix for this will be non-trivial, hacky, and likely brittle based on all current observations. I’ll be bringing this up in the Helm dev call next week for further discussion and ideas.

I believe I found the issue with this. I am trying out a fix right now

This is a real issue, especially in CI/CD environments … I would love to see this treated with a little more urgency 🙂

If that is the case, it’d be nice from a CI/CD perspective if we could add a wait-until-running option that waits for pods to pass their readinessProbes.

Thoughts?
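Lacking such a flag, a rough sketch of the suggested wait-until-running behaviour is to chain an explicit kubectl wait after the install. The release name, label selector, and timeout below are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the proposed wait-until-running behaviour using today's tooling.
# Builds the kubectl command that blocks until pods pass their readinessProbes;
# kubectl wait exits non-zero on timeout, which fails a CI job.
wait_ready_cmd() {
  release="$1"   # hypothetical release name
  timeout="$2"   # e.g. 120s
  printf 'kubectl wait --for=condition=Ready pod -l release=%s --timeout=%s\n' \
    "$release" "$timeout"
}

# Usage (after helm install --wait --name myrelease ./chart):
#   eval "$(wait_ready_cmd myrelease 120s)"
```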