kubernetes: Getting a new CRD by short name or category returns an error on the first request

Is this a BUG REPORT or FEATURE REQUEST?:


/kind bug


What happened:

When I created a CRD, e.g.:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: crontabs.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  # version name to use for REST API: /apis/<group>/<version>
  version: v1
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: crontabs
    # singular name to be used as an alias on the CLI and for display
    singular: crontab
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: CronTab
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - ct
    categories:
    - testcat

Then I ran kubectl get testcat. The first time, the result was an error:

error: the server doesn't have a resource type "testcat"

The second time, it returned the normal result:

No resources found.

This reproduces reliably every time I create a new CRD.
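For reference, the repro steps as a shell session. The manifest filename is an assumption (save the CRD manifest above under any name), and this of course requires a running cluster:

```shell
# Create the CRD from the manifest above (filename is an assumption)
kubectl apply -f crontab-crd.yaml

# First request by category: fails with
#   error: the server doesn't have a resource type "testcat"
kubectl get testcat

# Second request with the identical command: succeeds with
#   No resources found.
kubectl get testcat
```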

What you expected to happen:

kubectl should return No resources found. on the first request for the new CRD, instead of an error.

Anything else we need to know?:

I also found that requesting the short name, e.g. kubectl get ct, behaves the same way: it returns an error the first time and works normally the second time. 😲

Environment:

  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:


About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 35 (28 by maintainers)

Most upvoted comments

Looks like this happens because the cache files in .kube/http-cache/ or .kube/cache/ are not synced with the kube-apiserver.

Deleting all files under .kube/http-cache/ and .kube/cache/ forces a re-sync, and the error no longer shows up.
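A minimal sketch of that workaround. The paths assume the default kubectl cache locations (newer releases keep the discovery cache under ~/.kube/cache/discovery):

```shell
#!/bin/sh
# Hypothetical sketch: clear kubectl's client-side caches so the next
# request re-fetches the API resource list from the apiserver.
# Paths are assumptions based on the default kubectl cache locations.
rm -rf "$HOME/.kube/http-cache"       # HTTP response cache
rm -rf "$HOME/.kube/cache/discovery"  # discovery (resource list) cache
```

After clearing the caches, the next kubectl get testcat should hit the apiserver directly and return the normal result on the first try.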

This looks like a cli issue.

/unshrug

Looks like an important problem we want to solve (?)