helm: CRDs not being applied correctly

I’m seeing the following error when installing the chart in this PR: https://github.com/kubernetes/charts/pull/2369

$ helm install --name istio . --namespace istio-system
Error: unable to decode "": no kind "CustomResourceDefinition" is registered for version "apiextensions.k8s.io/v1beta1"

If I render the files with CRDs manually using helm template and install them using kubectl create -f, they work just fine.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-15T08:51:09Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T08:56:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
$ helm version
Client: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"} 

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 13
  • Comments: 42 (17 by maintainers)

Most upvoted comments

What’s the status of this? Is ordered support for CRDs on the roadmap for a future release, or is it being worked on?

@bacongobbler The crd-install hook seems to work great when installing a CRD, but I’m trying to figure out the implications it has when upgrading a chart whose CRD needs changes.

Since the CRD is installed via a hook, it’s not attached to that specific chart deployment, so if I need to change something in the CRD, it doesn’t get updated in the cluster unless I tear down the chart and install it again. This also means that you can’t add a CRD to a chart that has already been deployed.

Deletes are also tricky: since the CRD doesn’t get removed, I can’t reliably delete and reinstall a chart. The "helm.sh/hook-delete-policy": before-hook-creation annotation makes the reinstall process a little better, but typically the install takes two tries.
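For concreteness, here is a minimal sketch of the annotations being discussed, applied to a hypothetical CRD template (the resource name and group are made up); the crd-install hook and the before-hook-creation delete policy are the pieces referred to above:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: myresources.example.com            # hypothetical CRD name
  annotations:
    "helm.sh/hook": crd-install
    # delete the previously created hook resource before creating it again,
    # which eases reinstalls; it does not make upgrades or deletes of the CRD work
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: myresources
    singular: myresource
    kind: MyResource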

I’m still digging and getting familiar with how all these particular technologies work together. Am I missing something with how the CRD lifecycle is supposed to be managed with Helm?

Is there any reason not to have all CRDs annotated by default? (Why is the hook needed at all?)

How does it work with nested charts? Can I nest a chart that provides a CRD, and then in the outer chart register objects of that type?

@dabelenda I’ve suggested elsewhere that subcharts in requirements.yaml could be weighted/ordered. Then you could split deployments into phases and control the order at that level? https://github.com/helm/helm/issues/1780#issuecomment-357387173

It would be nice to be able to, e.g., apply weights to subcharts in requirements.yaml, with charts of the same weight being sorted together using the current algorithm. All weights, including the parent chart’s, would be zero by default. Negative weights would install before the parent chart, positive weights after.

With this optional feature, the processing of all existing charts would be unaffected, and only people who really need ordering would specify weights in requirements.yaml.
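As a purely hypothetical sketch, a weighted requirements.yaml under this proposal might look like the following; the weight field does not exist in Helm and the subchart names are made up:

dependencies:
  - name: my-crds                 # hypothetical subchart containing only the CRD manifests
    version: 0.1.0
    repository: "file://./charts/my-crds"
    weight: -1                    # proposed: negative weights install before the parent chart
  - name: my-app                  # hypothetical subchart with resources that use those CRDs
    version: 0.1.0
    repository: "file://./charts/my-app"
    weight: 1                     # proposed: positive weights install after the parent chart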

And cert-manager.

Well, the workaround I have in place to get this done as a subchart (for cert-manager), which combines two bugs in Helm:

cd path/my-chart-dir
helm install stable/cert-manager -n temp-release # creates CRDs
helm delete --purge temp-release  # doesn't clean up CRDs lol
helm install actual-release .

Not a sustainable path forward, but I thought it was funny.

I see this issue remains open after a year with no recent discussion. I just hit the same problem using helm v2.10.0 and a new chart of my own. Just curious if there is any officially sanctioned workaround or fix in progress? Thanks.

OK, so I found a really evil hack around the ordering issue. It turns out Helm doesn’t process the template files as YAML; it instead uses a regex to extract the kind attribute (WTF?!). So, knowing the ordering (https://github.com/kubernetes/helm/blob/master/pkg/tiller/kind_sorter.go#L29), you can add the following to the top of the file, and Helm will order the file as if it were that kind of resource:

# kind: Namespace

It looks like this might only work when using helm template though, not with tiller, which is fine for us because tiller has security issues and we don’t use it.
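For illustration, a sketch of the hack in context, assuming the regex really does pick up the first match: the comment goes at the very top of the template file, here the same hypothetical CRD as in the earlier sketch, so the sorter treats the file like a Namespace and places it near the front of the install order:

# kind: Namespace
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: myresources.example.com            # hypothetical CRD name
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: myresources
    singular: myresource
    kind: MyResource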

Look at prometheus-operator: Helm does not wait for the third-party resource to become fully registered, therefore a one-off install hook is necessary.

Probably because support for the crd-install hook was dropped in Helm 3 in favour of a crds/ directory.
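For context, a rough sketch of the Helm 3 layout being referred to (chart and file names are made up): CRD manifests live in a crds/ directory at the chart root as plain, untemplated YAML, and Helm 3 installs them before rendering the rest of the chart; it does not upgrade or delete them afterwards.

mychart/
  Chart.yaml
  crds/
    myresource-crd.yaml          # plain CRD manifest, installed before templates/ is rendered
  templates/
    myresource-instance.yaml     # can rely on the CRD already existing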

I’ve been hearing the same thing from others, for example istio.

Did you try with the crd-install hook? (Still waiting for an answer on why the crd-install hook isn’t always the default on CRDs, too.)

I believe this was fixed with the new crd-install hook introduced in Helm 2.9. Can others try this out and see if that works?

Using a pre-install hook won’t work. Pre-install hooks are validated at the same time as the chart, so the validation fails because the CRD hasn’t been applied yet (as @bamb00 found out).

By using the crd-install hook, CRDs are validated before the rest of the chart goes through the validation process, so the CRD can be installed first and the instances of that CRD elsewhere in the chart will then validate and install.

TPR/CRDs are a particular issue that often necessitates some ordering control. But there are other ordering issues for umbrella charts with many sub-charts. It would be nice to have a simple and general ordering mechanism, rather than a special behavior for TPR/CRD. As identified here, it is critical to consider when validation occurs.

One possible approach would be to allow weights to be applied to subcharts in requirements.yaml, with the resources for charts of the same weight being sorted together with the current algorithm, validated together, and applied together. All weights, including the parent chart’s, would be zero by default. Negative weights would install before the parent chart, positive weights would install after.

Deletions would follow the same grouping in reverse, deleting resources for equally weighted groups of parent and/or sub-charts, from highest weight to lowest weight.

With this optional feature, the processing of all existing charts (with no weights) would be unaffected - in deployment order, validation behavior, and deletion alike. Only people who really need ordering would specify weights in requirements.yaml.

This mechanism would enable TPR/CRDs to be applied first, simply by including them in a lower weight chart (parent or sub-chart) than the charts (parent and/or sub-charts) that rely on those TPR/CRDs.

Considerations:

  1. Would tiller need to wait for the previously deployed group to be successfully registered and/or running? If that is needed, chart authors could use hooks to, e.g., delay until CRDs have fully registered. And possibly the current ‘--wait’ mechanism could apply to each group of resources? Could ‘--wait’ also include checking that TPR/CRDs have registered in the API?
  2. What will the partial success/clean-up behaviour be? Since each group of charts of a given weight (parent and/or sub-charts) is validated and applied before the next group can be validated, subsequent groups might fail to validate or fail to apply.
  3. This proposal is basically the current sort, validate and apply process, except applied in multiple rounds. You could simulate the same effect with multiple separate Helm chart packages, shared values files, and a script to apply them in the desired order. But that means housing dependencies like TPR/CRDs in separate chart packages.
  4. The expectation is 90% of charts will continue to have no weights at all. Some charts with TPR/CRD would have two weights in the whole chart. And some complicated umbrella charts might need a handful of different weights.
  5. Numeric weights are easy and flexible, but not super human friendly. Perhaps ‘named’ weights would be a better way to go. Charts could be ‘weighted’ in requirements.yaml with just the labels “first” or “last”, or left unlabelled, equating to weights of -1, 1, and 0 respectively (or -MaxInt, MaxInt, and 0 respectively), with only huge umbrella charts needing to resort to numeric labels.
  6. A limitation of this approach is that to achieve ordering you need to have at least one (embedded) subchart. So a minimal chart with TPR/CRD would have an embedded subchart.
  7. I know squat about the Helm code: does the above have complete show-stoppers that make it impossible, or is it an unreasonably complicated way forward?

Hey @Ciantic and @JulienBreux, this is because your local types/discovery cache is stale after you install a CRD. We have handled this with the new crds/ directory installation in Helm 3; it will invalidate the cache for you. This is due to the way Kubernetes discovery caching works, not to Helm itself per se. To get around it, you can delete ~/.kube/cache/discovery (or append the cluster name to the end of the path to remove just its specific cache) between running the install of Istio init and installing Istio itself.

I’m using Helm 3 and experience the same problem sporadically, this time with Istio resources:

Error: apiVersion "config.istio.io/v1alpha2" in istio/charts/mixer/templates/config.yaml is not available

It works occasionally, but most often it just throws that error. It doesn’t happen every time.

I am “only” creating the Project, Quotas, and RoleBindings here. The need to create a dummy subchart just to create a Project and then apply Quotas and RoleBindings is weird. Every single CRD that has dependencies will have to be in a different subchart. It would create hard-to-understand charts if the dependency tree is non-trivial.

I think of prometheus-operator: Prometheus, ServiceMonitor, etc.

This issue applies to more than just waiting for the CRD before trying to use it. Even if the resource kind were registered, you still have the issue where one resource could depend on another, but Helm deploys them in the wrong order.

While allowing weights in requirements.yaml makes things better, I don’t think it solves the issue very well. The scenario I just ran into is trying to use helm with OpenShift to create projects. Helm is unaware of the Project resource kind, and so it tries to put things in the project before creating the project. The project chart consists of the Project resource, plus some RoleBindings, and a few other things. I don’t think it makes sense to have to split these into multiple subcharts just to get the Project first. And on top of that, now you have to deal with global values to hold the project name.

Unfortunately I don’t know of a good solution.