tidb-operator: Error occurred when running `kubectl apply`

Bug Report

What version of Kubernetes are you using?

Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:41:28Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:32:41Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}

What version of TiDB Operator are you using?

Defaulted container "tidb-scheduler" out of: tidb-scheduler, kube-scheduler
TiDB Operator Version: version.Info{GitVersion:"v1.2.4", GitCommit:"684ce274555ac739a58c6ed07ff673a2d98a5429", GitTreeState:"clean", BuildDate:"2021-10-21T03:22:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

What storage classes exist in the Kubernetes cluster and what are used for PD/TiKV pods?

NAME                 PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   k8s.io/minikube-hostpath   Delete          Immediate           false                  8m52s

sukki_qin@cloudshell:~$ kubectl get pvc -n tidb-admin
No resources found in tidb-admin namespace.

What’s the status of the TiDB cluster pods?

sukki_qin@cloudshell:~$ kubectl get po -n tidb-admin -o wide
NAME                                       READY   STATUS             RESTARTS        AGE   IP           NODE       NOMINATED NODE   READINESS GATES
tidb-controller-manager-7c79b4567c-lmd7w   1/1     Running            0               10m   172.17.0.3   minikube   <none>           <none>
tidb-scheduler-759b84cd6d-5zpw2            1/2     CrashLoopBackOff   6 (4m18s ago)   10m   172.17.0.4   minikube   <none>           <none>
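
A quick diagnostic sketch for the crash-looping scheduler pod, using the pod name from the listing above, to see which of its two containers is failing and why:

# Show container statuses and recent events for the scheduler pod.
kubectl describe pod -n tidb-admin tidb-scheduler-759b84cd6d-5zpw2

# Logs from the previous (crashed) run; use -c kube-scheduler instead if that
# is the container reported as failing.
kubectl logs -n tidb-admin tidb-scheduler-759b84cd6d-5zpw2 -c tidb-scheduler --previous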

What did you do?

I tried to deploy a test cluster on minikube, following the instructions at https://docs.pingcap.com/zh/tidb-in-kubernetes/stable/get-started#使用-minikube-创建-kubernetes-集群 (the "Create a Kubernetes cluster using minikube" section of the Chinese Get Started guide).

What did you expect to see?

customresourcedefinition.apiextensions.k8s.io/tidbclusters.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/backups.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/restores.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/backupschedules.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbmonitors.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbinitializers.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbclusterautoscalers.pingcap.com created

What did you see instead?

customresourcedefinition.apiextensions.k8s.io/backupschedules.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/backups.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/dmclusters.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/restores.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbclusterautoscalers.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbinitializers.pingcap.com created
customresourcedefinition.apiextensions.k8s.io/tidbmonitors.pingcap.com created
The CustomResourceDefinition "tidbclusters.pingcap.com" is invalid: metadata.annotations: Too long: must have at most 262144 bytes

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 15 (8 by maintainers)

Most upvoted comments

@donbowman Does kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml --server-side=true work for you? Server-side apply has been enabled by default since Kubernetes v1.16.
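
The error comes from client-side apply itself: kubectl apply stores the full object in the kubectl.kubernetes.io/last-applied-configuration annotation, and the tidbclusters CRD is larger than the 262144-byte annotation size limit, whereas server-side apply does not write that annotation. A minimal sketch of the suggested workaround, using the same manifest URL as in the comment above:

# Server-side apply avoids the oversized last-applied-configuration annotation
# on the large CRD objects.
kubectl apply --server-side=true -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml

# Confirm that tidbclusters.pingcap.com was created this time.
kubectl get crd | grep pingcap.com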

@sukkiCat I’m sorry for the late reply.

  1. Since you’re using Kubernetes v1.22.2, we no longer recommend tidb-scheduler, so you can disable it when deploying TiDB Operator (see the sketch below this list).
  2. Some issues with Kubernetes v1.22 were fixed recently (see #4151) and will be released in TiDB Operator v1.3. Until then, could you use a lower Kubernetes version (or a nightly build of TiDB Operator that includes #4151) for testing?
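
A minimal sketch of point 1, assuming TiDB Operator is installed from the pingcap/tidb-operator Helm chart and that the chart exposes a scheduler.create switch for the tidb-scheduler Deployment (check the chart's values.yaml for your version before relying on it):

# Install or upgrade TiDB Operator without the tidb-scheduler component.
# scheduler.create=false is an assumed chart value; verify it in values.yaml.
helm upgrade --install tidb-operator pingcap/tidb-operator \
  --namespace tidb-admin \
  --version v1.2.4 \
  --set scheduler.create=false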