jaeger-operator: Too long CRD definition
Error from server (Invalid): error when creating "STDIN": CustomResourceDefinition.apiextensions.k8s.io "jaegers.jaegertracing.io" is invalid: metadata.annotations: Too long: must have at most 262144 characters
From master, right now, onto docker-desktop on macOS.
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:07:57Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Repro:
curl --silent -L https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml \
> k8s/base/jaegertracing.io_jaegers_crd.yaml
kubectl apply -f k8s/base/jaegertracing.io_jaegers_crd.yaml
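The failure mode here is the client-side `kubectl apply` machinery: it stores the full object in the `kubectl.kubernetes.io/last-applied-configuration` annotation, and Kubernetes caps an object's total annotation size at 262144 bytes. A quick way to check whether a manifest will trip the limit, using the file fetched above:

```sh
# Client-side apply copies the entire object into an annotation, so a
# manifest close to or over 262144 bytes cannot be applied client-side.
wc -c k8s/base/jaegertracing.io_jaegers_crd.yaml
```

(`kubectl apply --server-side`, available with newer clients and servers than the 1.15 setup above, avoids the annotation entirely.)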
About this issue
- State: closed
- Created 4 years ago
- Reactions: 4
- Comments: 25 (13 by maintainers)
See #854. In short, use `kubectl create -f ...`, or remove the `validation` node if you absolutely have to use `kubectl apply`.

`kubectl create` and `kubectl apply` are not the same. I only use `kubectl apply`, and now my workflow is broken because of the CRD. Another related issue: https://github.com/kubernetes/kubernetes/issues/82292
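A minimal sketch of both suggested workarounds, assuming the CRD file from the repro above; the `del` path targets the `apiextensions.k8s.io/v1beta1` layout, where the schema sits under `.spec.validation`, and the expression uses yq v4 syntax:

```sh
# Workaround 1: create does not write the last-applied-configuration
# annotation, so the size limit is never hit.
kubectl create -f k8s/base/jaegertracing.io_jaegers_crd.yaml

# Workaround 2: strip the validation node, then apply as usual.
yq eval 'del(.spec.validation)' k8s/base/jaegertracing.io_jaegers_crd.yaml \
  | kubectl apply -f -
```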
Given the number of people who seem affected by this one, I'll try to apply the suggested workaround by disabling the description fields.
This is not a good enough solution if kustomize uses apply and if I'm expected to continuously re-run apply from a scheduled job (as I am). Can you at least ship a CRD definition without the validation node, so that the kubectl tooling works as expected?
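For concreteness, the kind of recurring workflow that breaks here is roughly the following (the overlay layout is illustrative):

```sh
# A scheduled reconciliation job: render with kustomize, re-apply the result.
# This fails as soon as one rendered object exceeds the annotation limit.
kustomize build k8s/base | kubectl apply -f -
```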
@jpkrohling thanks for this! I've tested the updated CRD and it works fine with kustomize + apply.
We've worked around this by removing all "description" fields from the provided CRDs, based on a suggestion in the upstream Kubebuilder issue: kubernetes-sigs/kubebuilder#1140
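For anyone wanting to reproduce that workaround locally, a sketch assuming yq v4 (older yq releases use a different syntax); verify the trimmed output before applying it:

```sh
# Recursively drop every "description" key, shrinking the CRD well below
# the 262144-byte annotation limit, then apply the trimmed copy.
yq eval 'del(.. | .description?)' k8s/base/jaegertracing.io_jaegers_crd.yaml \
  > k8s/base/jaegers_crd_no_descriptions.yaml
kubectl apply -f k8s/base/jaegers_crd_no_descriptions.yaml
```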
Would you consider publishing the CRDs without the description fields, to allow them to be compatible with `kubectl apply` out of the box?

The issue has been reported a few times in different channels, but everyone else seems to be happy with `kubectl create` being an alternative to `kubectl apply`. In any case, could you provide the `kustomize` commands you are using?

You're suggesting `yq` or `python`, which are commonly used to transform YAML to JSON and then filter it. Your expectations of our environment don't match the real world; it has to run on Linux, Windows and macOS. And why ship `validation` in the schema at all, if it's so utterly useless that you make it sound like it can simply be removed?

Anecdotally: I'm storing modified versions of tooling output, e.g. for the Istio mesh, which we got configured that way because they have a bug (https://github.com/istio/istio/issues/20082) that seems to be extra problematic when its output is applied to already-applied k8s state, triggering a much worse bug, a grey failure: https://github.com/istio/istio/issues/20454. Every bug report in that repo is preceded by a 1-3 week triage phase where they ensure it's not "operator error"; which it sometimes is, but as in the above bugs, it also makes the DX suck.
But why am I stupid enough to apply the output of the tooling (istioctl) straight to the cluster? Because the documentation says that's how it's done, and it doesn't explain the link: that the Helm chart is inlined in the tool and can actually be used to generate k8s manifests. They've made what was previously explicit implicit, and this makes people make mistakes. Your suggestion is exactly this: let's work around a bug in an upstream project with a hack, and it will bite someone in the ass sooner or later 😉
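The safer pattern being gestured at is to make the rendering step explicit again: generate the manifests to a file, review or commit them, and only then apply. A sketch assuming a 1.4-era istioctl, where `manifest generate` renders the inlined charts:

```sh
# Render Istio's inlined charts to plain k8s manifests, review them,
# and apply the reviewed file instead of piping tool output at the cluster.
istioctl manifest generate > istio-rendered.yaml
kubectl apply -f istio-rendered.yaml
```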