kustomize: handling of CRDs inconsistent with k8s handling starting in 1.16

Until Kubernetes 1.15, kustomize's merge patch strategy was aligned with Kubernetes. When applying a merge patch, kustomize and kubernetes behaved very similarly:

  • Using SMP (strategicMergePatch) for K8s native objects (Deployment, Service)
  • Using the less powerful JMP (JsonMergePatch) for CRDs. JMP reacts differently, especially when merging lists that follow the pattern of the containers list in the pod template spec (see the example after this list)
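
To make the difference concrete, here is a minimal sketch (the container and image names are made up). The base declares a containers list, and a patch adds an environment variable:

# base
spec:
  template:
    spec:
      containers:
      - name: app
        image: nginx:1.0

# patch
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: FOO
          value: bar

With SMP, the containers list is merged by its patchMergeKey (name), so the result keeps image: nginx:1.0 and gains the env entry. With JMP (RFC 7386), lists are opaque values: the patch's containers array replaces the base's array wholesale, and image is silently dropped.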

The side effect was that when using a CRD, for instance Argo's Rollout instead of a Deployment, the merging behavior was different even though the fields were identical. But at least kustomize and kubectl behaved the same way.

Starting with 1.16, a major improvement has been made in CRD handling:

Server-side apply will now use the openapi provided in the CRD validation field to help figure out how to correctly merge objects and update ownership. See kubernetes PR.
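
In practice this means a CRD can annotate its validation schema with the x-kubernetes-* list extensions that server-side apply consults when merging. An abridged sketch (the field layout here is assumed for illustration, not copied from a real CRD):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: rollouts.argoproj.io
spec:
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            template:
              properties:
                spec:
                  properties:
                    containers:
                      type: array
                      # merge list items by key, like SMP does for native types
                      x-kubernetes-list-type: map
                      x-kubernetes-list-map-keys:
                      - name
                      items:
                        type: object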

This means that kubectl apply/patch will end up effectively using SMP for both K8s native objects and CRDs, while kustomize will still use JMP for CRDs, and will hence mislead the user.

Using kustomize with CRDs (Istio, Argo, Prometheus, upcoming kubeadm) was already a quite tedious process for creating the configuration, even with auto-discovery of the CRD JSON schema. But it now looks like the merge will also be inconsistent with the Kubernetes one.
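
For reference, the existing crds field points at files containing OpenAPI definitions rather than plain CRD manifests; a typical kustomization looks like this (the path is hypothetical):

crds:
- crds/rollout-types.json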

Will most likely have to write a KEP to help kustomize users. Potential solutions:

1. Create a procedure based on the kustomize external Go plugin mechanism: a plugin that registers the CRD's Go types into the overall scheme registry kustomize uses to decide whether to use SMP or JMP.
2. Follow the kubectl pattern, which runs most of the code on the server side.

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 6
  • Comments: 17 (6 by maintainers)

Most upvoted comments

We are highly interested in a solution for kustomize to be able to perform strategic merge patching for CRDs. We are relying more and more on CRDs and are constantly fighting kustomize to be able to do what we want. A stop-gap solution we are starting to employ is using a plugin to perform the strategic merge patching for specific CRDs, but we are finding limits to the plugin approach.

Given that kubernetes itself is moving in a direction where CRDs are being used more heavily in kubernetes proper, I think this will become more and more important even for vanilla kubernetes.

I think it’s great news that Kubernetes v1.16 is able to look at a CRD definition alone and figure out how to perform strategic merge patching. If the K8s API server can perform strategic merge patching based on the CRD definition alone, I see no reason why kustomize couldn’t do the same. It could leverage the same library/techniques the API server uses, even enabling a client-side solution given only the CRD definitions.

Our ideal solution would look something like:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

crds:
- github.com/argoproj/argo-rollouts//manifests/crds?ref=release-0.12

resources:
- my-rollout.yaml

patchesStrategicMerge:
- add-environment-variable.yaml
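
For illustration, add-environment-variable.yaml would then be an ordinary strategic merge patch against the custom resource (the container name and env var are made up, and the spec layout assumes Rollout mirrors the Deployment pod template):

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-rollout
spec:
  template:
    spec:
      containers:
      - name: app          # matched by merge key, not replaced wholesale
        env:
        - name: LOG_LEVEL
          value: debug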

Notice that the crds section would support pointing to a remote as well as to local files. Supporting remote references is desirable because CRD definitions are often centrally defined/controlled, as they are tied to, and upgraded with, the clusters.

Another thing to note is that we feel the crds: section should be improved to accept plain CRD YAMLs, as opposed to the OpenAPI validation files of the current behavior. This makes it trivial to do something like:

kubectl get crd rollouts.argoproj.io -o yaml > rollout-crd.yaml

And then have the kustomization.yaml reference the generated file in its crds section:

crds:
- rollout-crd.yaml

Is my proposal something Kustomize would consider?

This is the beginning of a fix: creating a dummy transformer/plugin which only performs the scheme.AddToScheme operation: here
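
For context, an external transformer like that is wired in through a small YAML config whose apiVersion/kind resolve to the compiled Go plugin; every name below is hypothetical:

# kustomization.yaml
transformers:
- register-rollout-scheme.yaml

# register-rollout-scheme.yaml: config for the Go plugin that calls
# scheme.AddToScheme so kustomize picks SMP instead of JMP for Rollouts
apiVersion: someteam.example.com/v1
kind: RolloutSchemeRegistrar
metadata:
  name: register-rollout-scheme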

Nice. What a clever way of doing this!

However, I feel that a plugin approach to solving this still does not make for a good experience. It would require either getting the plugin upstreamed into kustomize core (which isn’t scalable from a kustomize perspective), or having an external Go plugin and dealing with a distribution problem on clients, which would need to build different versions because of the requirement to match Go library dependencies.

IMO, the ideal experience would be to simply have a kustomization.yaml point to a CRD specification YAML (either locally on disk, or as a kustomize remote), and have kustomize be able to figure out strategic merge patching of the CRDs automatically.

Would love to hear from @Liujingfang1 @monopole about this, since we would be eager to work on this.

Note that kubectl apply behavior has not yet changed for custom resources. It still does client-side apply.

Great point and thanks for highlighting this.

Although, I think kustomize’s ability to SMP for the purposes of resource composition vs. kubectl’s ability to SMP for the purposes of applying/patching should be orthogonal issues.

In other words, I don’t necessarily agree that we must have consistent behavior between SMP in kustomize and SMP as part of kubectl apply, especially if it means waiting to change kustomize’s behavior to support SMP only once kubectl apply (client-side) gets it.