metrics-server: Metrics server deployment fails with "no metrics to serve"

What happened: "Failed probe" probe="metric-storage-ready" err="no metrics to serve"

What you expected to happen: Successful deployment

Anything else we need to know?: Apple M1

Environment:

  • Kubernetes distribution (GKE, EKS, Kubeadm, the hard way, etc.): Docker Desktop 4.9.1

  • Container Network Setup (flannel, calico, etc.): none

  • Kubernetes version (use kubectl version): 1.24.0

  • Metrics Server manifest

spoiler for Metrics Server manifest:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
  • Kubelet config:
spoiler for Kubelet config:
  • Metrics server logs:
spoiler for Metrics Server logs:
0703 07:29:45.898838       1 server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"
E0703 07:29:46.125476      1 scraper.go:140] "Failed to scrape node" err="Get \"https://192.168.65.4:10250/metrics/resource\": context deadline exceeded" node="docker-desktop" 
  • Status of Metrics API:
spoiler for Status of Metrics API:
Name:         v1beta1.metrics.k8s.io
Namespace:    
Labels:       k8s-app=metrics-server
Annotations:  <none>
API Version:  apiregistration.k8s.io/v1
Kind:         APIService
Metadata:
Creation Timestamp:  2022-07-03T07:10:45Z
Managed Fields:
  API Version:  apiregistration.k8s.io/v1
  Fields Type:  FieldsV1
  fieldsV1:
    f:status:
      f:conditions:
        .:
        k:{"type":"Available"}:
          .:
          f:lastTransitionTime:
          f:message:
          f:reason:
          f:status:
          f:type:
  Manager:      kube-apiserver
  Operation:    Update
  Subresource:  status
  Time:         2022-07-03T07:10:45Z
  API Version:  apiregistration.k8s.io/v1
  Fields Type:  FieldsV1
  fieldsV1:
    f:metadata:
      f:annotations:
        .:
        f:kubectl.kubernetes.io/last-applied-configuration:
      f:labels:
        .:
        f:k8s-app:
    f:spec:
      f:group:
      f:groupPriorityMinimum:
      f:insecureSkipTLSVerify:
      f:service:
        .:
        f:name:
        f:namespace:
        f:port:
      f:version:
      f:versionPriority:
  Manager:         kubectl-client-side-apply
  Operation:       Update
  Time:            2022-07-03T07:10:45Z
Resource Version:  575651
UID:               7c929d93-5c90-4b91-8df7-ec04d3cdc561
Spec:
Group:                     metrics.k8s.io
Group Priority Minimum:    100
Insecure Skip TLS Verify:  true
Service:
  Name:            metrics-server
  Namespace:       kube-system
  Port:            443
Version:           v1beta1
Version Priority:  100
Status:
Conditions:
  Last Transition Time:  2022-07-03T07:10:45Z
  Message:               endpoints for service/metrics-server in "kube-system" have no addresses with port name "https"
  Reason:                MissingEndpoints
  Status:                False
  Type:                  Available
Events:                    <none>

/kind bug

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 22 (6 by maintainers)

Most upvoted comments

How did you modify --metric-resolution? I added it via a Helm value in an ArgoCD application, and now I have both values in the pod.

Used kustomization.yaml this way:

resources:
  - https://github.com/kubernetes-sigs/metrics-server/releases/download/metrics-server-helm-chart-3.8.2/components.yaml

patchesStrategicMerge:
- |-
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: metrics-server
    namespace: kube-system
  spec:
    template:
      spec:
        containers:
          - name: metrics-server
            # Note: As `args` is an array `$patch` won't work on it.
            args:
              - --cert-dir=/tmp
              - --secure-port=10250
              - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
              - --kubelet-use-node-status-port
              - --metric-resolution=15s
              - --kubelet-insecure-tls=true
            ports:
              - $patch: replace
              - containerPort: 10250
                name: https
                protocol: TCP

@yangjunmyfm192085 Thanks for pointing to that other issue. I solved the problem by increasing the scrape interval via --metric-resolution=40s. Metrics server is now running.
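The same kustomization approach can carry the larger interval declaratively; a minimal sketch using a JSON6902-style patch (the `/args/4` index assumes the argument order of the manifest in this issue, so verify it against your base):

```yaml
# kustomization.yaml (sketch only): assumes the upstream components.yaml
# base and that --metric-resolution is the fifth container argument.
resources:
  - https://github.com/kubernetes-sigs/metrics-server/releases/download/metrics-server-helm-chart-3.8.2/components.yaml

patches:
  - target:
      kind: Deployment
      name: metrics-server
      namespace: kube-system
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/args/4
        value: --metric-resolution=40s
```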

@yangjunmyfm192085 It isn't related to metrics-server, because I'm able to see pod logs using the kubectl logs command. I tried viewing logs in tail mode (<0>) and it works correctly. So I changed the k9s.logger.sinceSeconds option in my k9s configuration, and now I see logs instantly when I press the <l> logs key.

At last I got CPU/RAM metrics, but I'm still seeing the following log at pod startup: server.go:187] "Failed probe" probe="metric-storage-ready" err="no metrics to serve"

However, it works well and I'm able to analyse metrics.

OK, this is normal: metrics-server needs at least two scrape cycles before it has data to serve.
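That warm-up period can be estimated from the scrape interval; a small sketch of the arithmetic (the two-cycle minimum is from the comment above, not a documented guarantee):

```shell
# Rough warm-up estimate: metrics-server needs at least two completed
# scrape cycles before it has data, so "no metrics to serve" is
# expected for roughly resolution * 2 seconds after startup.
RESOLUTION=15   # seconds, from --metric-resolution=15s in the manifest
CYCLES=2        # minimum scrape cycles before metrics exist
WARMUP=$((RESOLUTION * CYCLES))
echo "expect readiness failures for roughly ${WARMUP}s after startup"
```

With the manifest's 15s resolution that is about 30 seconds, which lines up with the readinessProbe's initialDelaySeconds: 20 plus one 10s period.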

I'm facing this issue as well. I went through every similar issue and tried the following things:

  • Adding --kubelet-insecure-tls to args
  • Increasing --metric-resolution to 30s or even more
  • Using --kubelet-preferred-address-types=InternalIP

Absolutely nothing works. I don't know what to do anymore. The Kubernetes version is the same, and I am also on an Apple M1.

@superherointj Solved my problem, thanks dude!

Thank you @superherointj. But can I do this trick in an ArgoCD YAML file, without Kustomize?
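If you deploy via the Helm chart (for example through Argo CD), the duplicate-args problem mentioned earlier in the thread usually comes from adding --metric-resolution to the chart's `args` value while `defaultArgs` still contains its own copy. Here is a sketch of an Argo CD Application fragment that overrides `defaultArgs` instead; the `defaultArgs`/`args` split is an assumption about how the metrics-server chart is structured, so verify it against your chart version's values.yaml:

```yaml
# Sketch: override the chart's defaultArgs so --metric-resolution
# appears only once in the pod (field values are assumptions; check
# your chart version).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: metrics-server
  namespace: argocd
spec:
  project: default
  destination:
    namespace: kube-system
    server: https://kubernetes.default.svc
  source:
    chart: metrics-server
    repoURL: https://kubernetes-sigs.github.io/metrics-server/
    targetRevision: 3.8.2
    helm:
      values: |
        defaultArgs:
          - --cert-dir=/tmp
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
          - --kubelet-use-node-status-port
          - --metric-resolution=40s
        args:
          - --kubelet-insecure-tls
```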

Solved the "no metrics to serve" issue by:

  1. Using port 10250 for metrics-server (instead of 4443).
  2. Opening firewall port TCP/10250.
