kubernetes: `kubectl get <resource>.<group>` invalidates discovery

What happened:

Ran the following on a cluster with many custom resources installed:

kubectl api-resources --verbs=list --namespaced -o name | \
  xargs -n 1 kubectl get --show-kind --ignore-not-found -n my-namespace

The command took longer to run than expected. Adding trace logging with --v=6 showed that kubectl get was invalidating discovery data on every invocation, meaning every kubectl get call fully re-fetched discovery data.

This can be reproduced by making a kubectl get call with a single <resource>.<group> argument, as output by api-resources -o name:

... install a CRD that defines a `foos` resource in the `example.com` API group ...

# fetch to ensure discovery data including the new custom resource is cached
kubectl get foos.example.com

# refetch and observe discovery data is invalidated
kubectl get foos.example.com --v=6
I1109 15:06:06.874348   74544 loader.go:375] Config loaded from file:  /Users/liggitt/go/src/k8s.io/kubernetes/_output/certs/admin.kubeconfig
I1109 15:06:06.879882   74544 discovery.go:214] Invalidating discovery information
I1109 15:06:06.889069   74544 round_trippers.go:444] GET https://localhost:6443/api?timeout=32s 200 OK in 9 milliseconds
I1109 15:06:06.912802   74544 round_trippers.go:444] GET https://localhost:6443/apis?timeout=32s 200 OK in 1 milliseconds
I1109 15:06:06.939306   74544 round_trippers.go:444] GET https://localhost:6443/apis/certificates.k8s.io/v1beta1?timeout=32s 200 OK in 2 milliseconds
I1109 15:06:06.940452   74544 round_trippers.go:444] GET https://localhost:6443/apis/authorization.k8s.io/v1?timeout=32s 200 OK in 3 milliseconds
...

What you expected to happen:

Cached discovery data to be used
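The cost compounds in the original pipeline: xargs invokes kubectl once per listable resource, and each invocation throws away the cache and re-fetches every group document. A toy sketch of that behavior (all names here are made up for illustration; this is not client-go's actual cache API):

```go
package main

import "fmt"

// discoveryCache models a client-side discovery cache that, once
// invalidated, must re-fetch every API group document on the next lookup.
type discoveryCache struct {
	groups  []string          // group/version endpoints the server exposes
	docs    map[string]string // cached discovery documents
	fetches int               // counts round trips to the server
}

// lookup returns the cached document for a group, repopulating the whole
// cache first if it was invalidated.
func (c *discoveryCache) lookup(group string) string {
	if c.docs == nil { // cache was invalidated: refetch everything
		c.docs = map[string]string{}
		for _, g := range c.groups {
			c.docs[g] = "doc-for-" + g
			c.fetches++ // one GET per group/version, as in the --v=6 log
		}
	}
	return c.docs[group]
}

// invalidate drops all cached documents.
func (c *discoveryCache) invalidate() { c.docs = nil }

func main() {
	c := &discoveryCache{groups: []string{
		"apps/v1", "rbac.authorization.k8s.io/v1", "example.com/mylevel1",
	}}
	// 50 `kubectl get` calls that each invalidate first: 50 full refetches.
	for i := 0; i < 50; i++ {
		c.invalidate()
		c.lookup("example.com/mylevel1")
	}
	fmt.Println(c.fetches) // 150 = 50 invocations × 3 group documents
}
```

With many CRDs installed, the per-invocation refetch is dozens of GETs, which matches the observed slowdown.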

About this issue

  • State: open
  • Created 4 years ago
  • Comments: 22 (20 by maintainers)

Most upvoted comments

Just one note: this problem also happens for built-in types.

$ kubectl get roles.rbac.authorization.k8s.io

invalidates the entire discovery cache.

We generally suggest that users use fully qualified names instead of short names. However, this issue might have a negative impact on adoption of fully qualified names.

AFAIK, aggregated discovery already has a feature to invalidate the cache only for specific resources, so in the future this slowness will be resolved.

That’s a convention (followed by all the APIs shipped by the Kubernetes project), but it is not guaranteed for custom/aggregated resources:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: weirds.example.com
spec:
  group: example.com
  scope: Namespaced
  versions:
  - name: mylevel1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: mylevel2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
  names:
    singular: weird
    plural: weirds
    kind: Weird
    listKind: WeirdList
---
kind: Weird
apiVersion: example.com/mylevel1
metadata:
  name: test

kubectl get weirds.mylevel1.example.com
NAME   AGE
test   15s

If a.b.c.d matches a group, version, and kind, we should prefer that, since it is fully qualified.
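That preference can be sketched as a resolver that tries the fully qualified <resource>.<version>.<group> interpretation before falling back to <resource>.<group>. This is an illustrative sketch of the proposed ordering, not kubectl's actual resolver; the `gvr` type and `resolve` function are invented for the example:

```go
package main

import (
	"fmt"
	"strings"
)

// gvr identifies a discovered resource (group/version/resource).
type gvr struct{ group, version, resource string }

// resolve parses a dotted argument like "weirds.mylevel1.example.com"
// against the known resources. A fully qualified
// <resource>.<version>.<group> match wins over <resource>.<group>.
func resolve(arg string, known []gvr) (gvr, bool) {
	parts := strings.SplitN(arg, ".", 2)
	if len(parts) != 2 {
		return gvr{}, false
	}
	resource, rest := parts[0], parts[1]
	// First pass: prefer a fully qualified resource.version.group match.
	for _, k := range known {
		if k.resource == resource && rest == k.version+"."+k.group {
			return k, true
		}
	}
	// Second pass: fall back to resource.group (first served version).
	for _, k := range known {
		if k.resource == resource && rest == k.group {
			return k, true
		}
	}
	return gvr{}, false
}

func main() {
	known := []gvr{
		{group: "example.com", version: "mylevel1", resource: "weirds"},
		{group: "example.com", version: "mylevel2", resource: "weirds"},
	}
	r, _ := resolve("weirds.mylevel1.example.com", known)
	fmt.Println(r.version) // mylevel1: the fully qualified match wins
	r, _ = resolve("weirds.example.com", known)
	fmt.Println(r.version) // mylevel1: falls back to resource.group
}
```

With the `weirds` CRD above, `weirds.mylevel1.example.com` is ambiguous on its face (version `mylevel1` of group `example.com`, or resource `weirds` in a hypothetical group `mylevel1.example.com`); ordering the fully qualified check first resolves it the way the comment proposes.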