helm: Helm hook "post-install" not working for a dependency chart

I am trying to get this working: https://helm.sh/docs/topics/charts_hooks/#writing-a-hook

Requirements: I want the console pod to start last in Chart X below, which has a dependency on the console chart.

Solution Space: I thought of using Helm hooks as a lifecycle mechanism to deploy the console at the end.

Result/Behavior: The annotations get applied, but the console pod starts as soon as I deploy the whole chart.

Problem: Because Console is the UI for the Kafka cluster, it needs to wait for the Kafka broker to be available. I want it to run post-install of the Helm chart, but it doesn’t…

What am I doing wrong? Or is it not a supported feature?
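For context, the hook annotations I added look roughly like this (a minimal sketch only; the resource name, image, and hook values are illustrative, and the actual templates are in the screenshots below):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-cluster-console                         # illustrative name
  annotations:
    "helm.sh/hook": post-install                   # hook annotations from the linked docs
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console
  template:
    metadata:
      labels:
        app: console
    spec:
      containers:
        - name: console
          image: example.com/kafka-console:latest  # placeholder image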

(Three screenshots attached, dated 2022-10-06.)

Output of helm version:

version.BuildInfo{Version:"v3.10.0", GitCommit:"ce66412a723e4d89555dc67217607c6579ffcb21", GitTreeState:"clean", GoVersion:"go1.19.1"}

Output of kubectl version:

kubectl version                    
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2", GitCommit:"5835544ca568b757a8ecae5c153f317e5736700e", GitTreeState:"clean", BuildDate:"2022-09-21T14:25:45Z", GoVersion:"go1.19.1", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:38:15Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.):

  1. Docker-Desktop
  2. Rancher on Azure VM

PS: I was unable to add a label (not sure why); in my view this is a question/support type of ticket.

Most upvoted comments

Cool… I would say this question has been answered, and I understand the challenge behind it. As of now there isn’t any way to do this (and it’s not a Helm limitation, but Kubernetes behavior), so I will depend on the k8s scheduler to kill/restart the failing pod.

That would need to go into k8s first. They’d need to provide a standard for communicating how a custom resource’s status would be considered “ready”.

These are the resources that helm knows how to wait for: https://github.com/helm/helm/blob/main/pkg/kube/wait.go#L109-L145

If they’re not in that list, no waiting is done. You can’t really wait for a ConfigMap or a Secret to be ready, and the same is true for any custom resource. There’s just no readiness standard enforced by the Kubernetes API server.
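For example, a custom resource like the Strimzi Kafka CR reports readiness through operator-specific status conditions, roughly like the sketch below (field values are illustrative); Helm has no generic way to know that these mean “ready”:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
status:
  conditions:
    - type: Ready                                # convention chosen by the operator, not enforced by the API server
      status: "True"
      lastTransitionTime: "2022-10-06T14:00:00Z" # illustrative timestamp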

I mean… I like the console you’re using, but the rest of it’s so big and slow… 😉

@joejulian - I didn’t earlier (when I reported the issue), but I just did, and I still get the same result.

If you want to experiment, I have it all in my private Git repo - https://github.com/robin-carry/strimzi-parent

git clone https://github.com/robin-carry/strimzi-parent
cd strimzi-parent
./0-init.sh
helm install my-cluster . --wait

I still see the my-cluster-console-**** pod start immediately and run, trying to connect to the broker endpoint, which isn’t available yet.

PS: I realized my private repo didn’t have the Helm hook code checked in. I just pushed it - in case you had already synced the code, please run ./0-init.sh and git pull --rebase origin main once again.