metrics-server: error: Metrics API not available

What happened:

kubectl get pods --all-namespaces 
NAMESPACE     NAME                                       READY   STATUS             RESTARTS       AGE
kube-system   calico-kube-controllers-85578c44bf-526bd   1/1     Running            0              89m
kube-system   calico-node-4x7zk                          1/1     Running            0              80m
kube-system   calico-node-6bfnp                          1/1     Running            5 (84m ago)    119m
kube-system   calico-node-79tnt                          1/1     Running            0              71m
kube-system   calico-node-h99hx                          1/1     Running            0              82m
kube-system   calico-node-r4dk4                          1/1     Running            0              83m
kube-system   calico-typha-866bf4ccff-xb4kl              1/1     Running            0              89m
kube-system   coredns-5d78c9869d-gbhnw                   0/1     CrashLoopBackOff   39 (10s ago)   159m
kube-system   coredns-5d78c9869d-zklwl                   0/1     CrashLoopBackOff   39 (16s ago)   159m
kube-system   etcd-k0.xlab.io                            1/1     Running            2              159m
kube-system   kube-apiserver-k0.xlab.io                  1/1     Running            0              159m
kube-system   kube-controller-manager-k0.xlab.io         1/1     Running            0              159m
kube-system   kube-proxy-8wrl7                           1/1     Running            0              71m
kube-system   kube-proxy-9d5xs                           1/1     Running            0              82m
kube-system   kube-proxy-ksq4n                           1/1     Running            0              83m
kube-system   kube-proxy-r926v                           1/1     Running            0              159m
kube-system   kube-proxy-w954b                           1/1     Running            0              80m
kube-system   kube-scheduler-k0.xlab.io                  1/1     Running            0              159m
kube-system   metrics-server-7866664974-bzt4j            1/1     Running            0              2m29s
kubectl apply -f metrics-server.yaml
kubectl top node
error: Metrics API not available

What you expected to happen: `kubectl top node` should show node metrics.

Anything else we need to know?: I am using the latest metrics-server YAML manifest.
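For reference, the latest released manifest can be fetched and applied like this (a sketch; this is the URL pattern used by metrics-server releases):

```shell
# Download the latest released metrics-server manifest and apply it.
curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl apply -f components.yaml
```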

Environment:

  • Kubernetes distribution (GKE, EKS, Kubeadm, the hard way, etc.): Kubeadm on my local servers.

  • Container Network Setup (flannel, calico, etc.): Calico

  • Kubernetes version (use kubectl version):

kubectl version -o yaml
clientVersion:
  buildDate: "2023-06-14T09:53:42Z"
  compiler: gc
  gitCommit: 25b4e43193bcda6c7328a6d147b1fb73a33f1598
  gitTreeState: clean
  gitVersion: v1.27.3
  goVersion: go1.20.5
  major: "1"
  minor: "27"
  platform: linux/amd64
kustomizeVersion: v5.0.1
serverVersion:
  buildDate: "2023-06-14T09:47:40Z"
  compiler: gc
  gitCommit: 25b4e43193bcda6c7328a6d147b1fb73a33f1598
  gitTreeState: clean
  gitVersion: v1.27.3
  goVersion: go1.20.5
  major: "1"
  minor: "27"
  platform: linux/amd64
  • Metrics Server manifest:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls=true
        image: registry.k8s.io/metrics-server/metrics-server:v0.6.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

  • Kubelet config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <MyKey>
    server: https://k0.xlab.io:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:k0.xlab.io
  name: system:node:k0.xlab.io@kubernetes
current-context: system:node:k0.xlab.io@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:k0.xlab.io
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
  • Metrics server logs:
I0701 16:17:05.854367       1 serving.go:342] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0701 16:17:06.442804       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0701 16:17:06.442814       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0701 16:17:06.443407       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0701 16:17:06.443418       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0701 16:17:06.444083       1 secure_serving.go:267] Serving securely on [::]:4443
I0701 16:17:06.444097       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0701 16:17:06.444102       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0701 16:17:06.444104       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0701 16:17:06.444098       1 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
W0701 16:17:06.444137       1 shared_informer.go:372] The sharedIndexInformer has started, run more than once is not allowed
I0701 16:17:06.544671       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0701 16:17:06.544688       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0701 16:17:06.544698       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
  • Status of Metrics API:
Name:         v1beta1.metrics.k8s.io
Namespace:    
Labels:       k8s-app=metrics-server
Annotations:  <none>
API Version:  apiregistration.k8s.io/v1
Kind:         APIService
Metadata:
  Creation Timestamp:  2023-07-01T16:17:04Z
  Resource Version:    20032
  UID:                 bb670fc2-666f-4617-ac5b-4405bbb2328c
Spec:
  Group:                     metrics.k8s.io
  Group Priority Minimum:    100
  Insecure Skip TLS Verify:  true
  Service:
    Name:            metrics-server
    Namespace:       kube-system
    Port:            443
  Version:           v1beta1
  Version Priority:  100
Status:
  Conditions:
    Last Transition Time:  2023-07-01T16:17:04Z
    Message:               failing or missing response from https://10.104.75.22:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.75.22:443/apis/metrics.k8s.io/v1beta1": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    Reason:                FailedDiscoveryCheck
    Status:                False
    Type:                  Available
Events:                    <none>
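The FailedDiscoveryCheck condition above means the aggregation layer cannot get a response from the metrics-server Service. It can be probed with standard kubectl commands (a quick diagnostic sketch; the APIService and Service names are the ones from the manifest above):

```shell
# Show whether the aggregated metrics API has become Available.
kubectl get apiservice v1beta1.metrics.k8s.io

# Probe the metrics endpoint through the API server proxy; this fails
# with the same timeout for as long as the APIService is unavailable.
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes

# Check that the metrics-server Service has endpoints at all.
kubectl -n kube-system get endpoints metrics-server
```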

/kind bug

About this issue

  • Original URL
  • State: open
  • Created a year ago
  • Comments: 20 (3 by maintainers)

Most upvoted comments

Help me pls!!

This worked for me, thanks to @NileshGule:

  1. Deploy the metrics server:
[deploy metrics server](https://gist.github.com/NileshGule/8f772cf04ea6ae9c76d3f3e9186165c2#deploy-metrics-server)
  2. Open the file in editor mode:
k -n kube-system edit deploy metrics-server
  3. Under the containers section, add only the command part:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
  4. Check if the metrics-server is running now:
k -n kube-system get pods
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-9d57d8f49-d26pd   1/1     Running   3          25h
canal-5xf7z                               2/2     Running   0          11m
canal-mgtxd                               2/2     Running   0          11m
coredns-7cbb7cccb8-gpnp5                  1/1     Running   0          25h
coredns-7cbb7cccb8-qqcs6                  1/1     Running   0          25h
etcd-controlplane                         1/1     Running   0          25h
kube-apiserver-controlplane               1/1     Running   2          25h
kube-controller-manager-controlplane      1/1     Running   2          25h
kube-proxy-mk759                          1/1     Running   0          25h
kube-proxy-wmp2n                          1/1     Running   0          25h
kube-scheduler-controlplane               1/1     Running   2          25h
metrics-server-678d4b775-gqb65            1/1     Running   0          48s
  5. Now try the top command:
controlplane $ k top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
controlplane   85m          8%     1211Mi          64%       
node01         34m          3%     957Mi           50%     
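The `kubectl edit` step above can also be done non-interactively with `kubectl patch` (a sketch; it assumes the deployment is named metrics-server in kube-system, as in the manifest above):

```shell
# Append --kubelet-insecure-tls to the container args via a JSON patch;
# "-" at the end of the path appends to the args array.
kubectl -n kube-system patch deployment metrics-server --type=json -p '[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-",
   "value": "--kubelet-insecure-tls"}
]'

# Wait for the rollout to finish, then retry:
kubectl -n kube-system rollout status deployment metrics-server
kubectl top node
```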

Thanks so much for this! It works for me!

I had a similar issue: metrics-server was up and running, but the top command was not working as expected ("error: Metrics API not available"). This was on a 5-node cluster installed by kubeadm on Ubuntu machines, with Calico as the pod network and cri-o as the container runtime engine.

Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3
Calico: v3.26.1
metrics-server: v0.6.4

Since Calico is the networking plugin for my CNI, I just added the two lines below to my metrics-server deployment, with reference to https://datacenterdope.wordpress.com/2020/01/20/installing-kubernetes-metrics-server-with-kubeadm/:

  • --kubelet-insecure-tls —> goes in the spec.containers.args section
  • hostNetwork: true —> goes at the pod spec level (spec.template.spec), alongside containers, not inside it

After adding these two lines to the metrics-server deployment, the top command started working, because the metrics-server pod could now communicate with the API server; without them, you may end up seeing "Readiness probe failed" for the metrics-server deployment.
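The hostNetwork change can be applied without opening an editor (a sketch, assuming the standard deployment name; kubectl patch accepts the patch in YAML as well as JSON):

```shell
# Switch the metrics-server pod onto the host network via a
# strategic-merge patch; hostNetwork sits at the pod spec level.
kubectl -n kube-system patch deployment metrics-server --type=strategic -p '
spec:
  template:
    spec:
      hostNetwork: true
'
# The --kubelet-insecure-tls flag still has to be added to the
# container args separately, e.g. with kubectl edit.
```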
