metrics-server: Unable to fetch pod metrics & request failed - "401 Unauthorized"

Added “fixes” that reduce the errors: git diff deploy/1.8+/metrics-server-deployment.yaml

       - name: metrics-server
         image: k8s.gcr.io/metrics-server-amd64:v0.3.1
+        command:
+          - /metrics-server
+          - --kubelet-insecure-tls
+          - --kubelet-preferred-address-types=InternalIP
         imagePullPolicy: Always
         volumeMounts:
         - name: tmp-dir
➜  metrics-server  git:(master) ✗ kubectl top nodes
error: metrics not available yet
➜  metrics-server  git:(master) ✗ kubectl top pod

kubectl -n kube-system logs -f metrics-server-68df9fbc9f-fsvgn

E0129 00:52:04.760832       1 reststorage.go:144] unable to fetch pod metrics for pod istio-system/istio-policy-56c4579578-k5szz: no metrics known for pod
E0129 00:52:07.145193       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-controller-manager-ip-10-132-10-233.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145211       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/coredns-784bfc9fbd-pw6hz: no metrics known for pod
E0129 00:52:07.145215       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-scheduler-ip-10-132-11-28.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145218       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/etcd-server-ip-10-132-10-233.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145221       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-ip-10-132-11-127.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145224       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-controller-manager-ip-10-132-9-84.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145227       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-apiserver-ip-10-132-9-84.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145230       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/etcd-server-ip-10-132-9-84.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145233       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-scheduler-ip-10-132-10-233.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145236       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-ip-10-132-10-233.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145239       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/etcd-server-events-ip-10-132-11-28.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145242       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-ip-10-132-9-104.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145244       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/etcd-server-events-ip-10-132-9-84.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145247       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-controller-manager-ip-10-132-11-28.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145250       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-ip-10-132-11-28.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145254       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-ip-10-132-9-84.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145257       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/dns-controller-7fb44784-np4bd: no metrics known for pod
E0129 00:52:07.145260       1 reststorage.go:144] unable to fetch pod metrics for pod istio-system/istio-policy-56c4579578-k5szz: no metrics known for pod
E0129 00:52:07.145263       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-scheduler-ip-10-132-9-84.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145266       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/etcd-server-events-ip-10-132-10-233.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145269       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/coredns-784bfc9fbd-q8f52: no metrics known for pod
E0129 00:52:07.145272       1 reststorage.go:144] unable to fetch pod metrics for pod istio-system/istio-grafana-post-install-cwx6n: no metrics known for pod
E0129 00:52:07.145277       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-apiserver-ip-10-132-10-233.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145296       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-apiserver-ip-10-132-11-28.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145305       1 reststorage.go:144] unable to fetch pod metrics for pod istio-system/istio-cleanup-secrets-56bjj: no metrics known for pod
E0129 00:52:07.145310       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/tiller-deploy-57f988f854-zjftk: no metrics known for pod
E0129 00:52:07.145318       1 reststorage.go:144] unable to fetch pod metrics for pod istio-system/istio-security-post-install-98mvv: no metrics known for pod
E0129 00:52:07.145322       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/metrics-server-68df9fbc9f-fsvgn: no metrics known for pod
E0129 00:52:07.145325       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-ip-10-132-10-63.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:07.145329       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/etcd-server-ip-10-132-11-28.us-west-2.compute.internal: no metrics known for pod
E0129 00:52:34.895839       1 reststorage.go:144] unable to fetch pod metrics for pod istio-system/istio-policy-56c4579578-k5szz: no metrics known for pod
E0129 00:52:48.373899       1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-132-9-84.us-west-2.compute.internal: unable to fetch metrics from Kubelet ip-10-132-9-84.us-west-2.compute.internal (10.132.9.84): request failed - "401 Unauthorized", response: "Unauthorized", unable to fully scrape metrics from source kubelet_summary:ip-10-132-10-63.us-west-2.compute.internal: unable to fetch metrics from Kubelet ip-10-132-10-63.us-west-2.compute.internal (10.132.10.63): request failed - "401 Unauthorized", response: "Unauthorized", unable to fully scrape metrics from source kubelet_summary:ip-10-132-11-28.us-west-2.compute.internal: unable to fetch metrics from Kubelet ip-10-132-11-28.us-west-2.compute.internal (10.132.11.28): request failed - "401 Unauthorized", response: "Unauthorized", unable to fully scrape metrics from source kubelet_summary:ip-10-132-9-104.us-west-2.compute.internal: unable to fetch metrics from Kubelet ip-10-132-9-104.us-west-2.compute.internal (10.132.9.104): request failed - "401 Unauthorized", response: "Unauthorized", unable to fully scrape metrics from source kubelet_summary:ip-10-132-11-127.us-west-2.compute.internal: unable to fetch metrics from Kubelet ip-10-132-11-127.us-west-2.compute.internal (10.132.11.127): request failed - "401 Unauthorized", response: "Unauthorized", unable to fully scrape metrics from source kubelet_summary:ip-10-132-10-233.us-west-2.compute.internal: unable to fetch metrics from Kubelet ip-10-132-10-233.us-west-2.compute.internal (10.132.10.233): request failed - "401 Unauthorized", response: "Unauthorized"]
E0129 00:53:05.099637       1 reststorage.go:144] unable to fetch pod metrics for pod istio-system/istio-policy-56c4579578-k5szz: no metrics known for pod
E0129 00:53:35.216151       1 reststorage.go:144] unable to fetch pod metrics for pod istio-system/istio-policy-56c4579578-k5szz: no metrics known for pod

Is there a version that works (i.e., one of the 200 forks)? I’ve used k8s 1.10 and 1.11 on AWS via kops.

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 17
  • Comments: 37 (3 by maintainers)

Most upvoted comments

Metrics server may fail to authenticate if the kubelet is running with the --anonymous-auth=false flag. Passing the --authentication-token-webhook=true and --authorization-mode=Webhook flags to the kubelet can fix this. kops config for the kubelet:

kubelet:
  anonymousAuth: false
  authenticationTokenWebhook: true
  authorizationMode: Webhook
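
To roll this out, a sketch of the usual kops workflow (the --yes flags and cluster-name handling are assumptions about your setup; the rest of this thread confirms a rolling update is needed before nodes pick up kubelet changes):

kops edit cluster                     # add the kubelet block above under spec
kops update cluster --yes             # apply the new cluster spec
kops rolling-update cluster --yes     # replace nodes so the kubelet restarts with the new flags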

This might break authorization for the kubelet-api user if a ClusterRoleBinding to system:kubelet-api-admin has not been created, which can be fixed by creating the ClusterRoleBinding:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-api-admin
subjects:
- kind: User
  name: kubelet-api
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:kubelet-api-admin
  apiGroup: rbac.authorization.k8s.io
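
Assuming the binding above is saved as kubelet-api-admin.yaml (the filename is arbitrary), it can be applied with:

kubectl apply -f kubelet-api-admin.yaml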

@mabushey I believe using “args” is slightly better than “command”, since it respects the image’s entrypoint.

      - args:
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
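
For context, a minimal sketch of the container spec from the diff above rewritten to use args instead of command (image, flags, and volume mount taken from the manifests elsewhere in this thread):

      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1
        imagePullPolicy: Always
        args:
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp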

I changed from using “args” to “command” and I don’t see the 401 Unauthorized now. However, kubectl logs -f metrics-server... -n kube-system still shows “no metrics known for pod”:

$ k logs -f metrics-server-68df9fbc9f-dgr8v -n kube-system
I0312 03:55:41.841800       1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
W0312 03:55:42.433339       1 authentication.go:166] cluster doesn't provide client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication to extension api-server won't work.
W0312 03:55:42.439873       1 authentication.go:210] cluster doesn't provide client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication to extension api-server won't work.
[restful] 2019/03/12 03:55:42 log.go:33: [restful/swagger] listing is available at https://:443/swaggerapi
[restful] 2019/03/12 03:55:42 log.go:33: [restful/swagger] https://:443/swaggerui/ is mapped to folder /swagger-ui/
I0312 03:55:42.488139       1 serve.go:96] Serving securely on [::]:443
E0312 03:55:46.554516       1 reststorage.go:144] unable to fetch pod metrics for pod default/iconverse-nlp-0: no metrics known for pod
E0312 03:55:46.554540       1 reststorage.go:144] unable to fetch pod metrics for pod default/iconverse-nlp-1: no metrics known for pod
  E0312 05:08:42.634201       1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-192-168-84-18.ap-southeast-1.compute.internal: [unable to get CPU for container "iconverse-connector" in pod default/iconverse-connector-0 on node "192.168.84.18", discarding data: missing cpu usage metric, unable to get CPU for container "iconverse-fluentd" in pod default/iconverse-connector-0 on node "192.168.84.18", discarding data: missing cpu usage metric], unable to fully scrape metrics from source kubelet_summary:ip-192-168-22-244.ap-southeast-1.compute.internal: [unable to get CPU for container "iconverse-fluentd" in pod default/iconverse-converse-0 on node "192.168.22.244", discarding data: missing cpu usage metric, unable to get CPU for container "iconverse-converse" in pod default/iconverse-converse-0 on node "192.168.22.244", discarding data: missing cpu usage metric, unable to get CPU for container "iconverse-fluentd" in pod default/iconverse-admin-0 on node "192.168.22.244", discarding data: missing cpu usage metric, unable to get CPU for container "iconverse-admin" in pod default/iconverse-admin-0 on node "192.168.22.244", discarding data: missing cpu usage metric, unable to get CPU for container "iconverse-ui" in pod default/iconverse-ui-0 on node "192.168.22.244", discarding data: missing cpu usage metric]]

kubectl top nodes shows valid data with resource percentages. kubectl top pod does not show any percentage at all.

@zahid0 Oh, no. My script had a mistake: it should have run kops rolling-update, but I ran kops update. Now it works.

I can get the metrics. Thanks a lot for your help 😃

kubectl top nodes
NAME                                             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ip-1-2-3-4.ap-southeast-1.compute.internal   160m         8%     1585Mi          41%
ip-1-2-3-4.ap-southeast-1.compute.internal   1151m        57%    2397Mi          30%
ip-1-2-3-4.ap-southeast-1.compute.internal   1005m        50%    2769Mi          35%

@vinhnglx kops rolling-update is required after terraform apply. https://github.com/kubernetes/kops/blob/master/docs/terraform.md#caveats

@zoltan-fedor @zhanghan12 I had a similar issue (though kubectl top pods still doesn’t show % for me), but basically HPA works now.

My setup is kops 1.13 with k8s 1.13.5 & Istio.

I have a setup similar to the other comments:

  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook

And running metrics-server as:

/metrics-server
--v=10
--kubelet-insecure-tls
--kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP

What really helped me was being able to debug the issue by setting the log level (--v=10 above).

And I found this article by Rancher:

By default, HPA will try to read metrics (resource and custom) with user system:anonymous

Following the guide and creating the additional ClusterRoleBinding for the system:anonymous user seems to have fixed the issue for me.
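
The Rancher article itself isn’t reproduced here, but roughly, the extra binding could look like this (the binding name is arbitrary, and the roleRef reuses the system:aggregated-metrics-reader ClusterRole defined later in this thread; the role the guide actually uses may differ):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hpa-anonymous-metrics-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:aggregated-metrics-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:anonymous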

@zahid0 I’m using kops to install Kubernetes with a VPC, private subnets, and the Calico CNI for networking. I’m not able to SSH to the instances to check the kubelet.

But I already set authentication-token-webhook=true and authorization-mode=Webhook using the kops edit cluster command:

# kops edit cluster

kind: Cluster
metadata:
  creationTimestamp: 2019-01-29T06:45:14Z
  name: xxx.xxx.com
spec:
  # ...
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook  

And it still shows the 401 Unauthorized

Here is my working config. kops cluster spec for the kubelet:

  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook

Metrics server yaml:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - deployments
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
    - port: 443
      protocol: TCP
      targetPort: 443
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: gcr.io/google_containers/metrics-server-amd64:v0.3.1
        imagePullPolicy: Always
        command:
            - /metrics-server
            - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
            - --kubelet-insecure-tls
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp

role.yaml from https://github.com/kubernetes/kops/issues/5706 and the following ClusterRoleBinding:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kubelet-api-admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
subjects:
- kind: User
  name: kubelet-api
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:kubelet-api-admin
  apiGroup: rbac.authorization.k8s.io
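
After applying these manifests, a quick way to confirm the aggregated API is registered and metrics are flowing (the APIService name comes from the manifest above):

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes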

I have created the metrics server with the deployment below and added the kubelet config in kops, but I still get 401.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: gcr.io/google_containers/metrics-server-amd64:v0.3.1
        imagePullPolicy: Always
        command:
            - /metrics-server
            - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
            - --kubelet-insecure-tls
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp

Logs:

E0209 22:52:55.288570       1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-172-20-64-197.compute.internal: unable to fetch metrics from Kubelet ip-172-20-64-197.compute.internal (172.20.64.197): request failed - "401 Unauthorized", response: "Unauthorized", unable to fully scrape metrics from source kubelet_summary:ip-172-20-100-28.compute.internal: unable to get CPU for container "sentinel" in pod default/redis-sentinel-744bj on node "172.20.100.28", discarding data: missing cpu usage metric]
E0209 22:53:55.273084       1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-172-20-117-178.us-west-2.compute.internal: unable to get CPU for container "nginx-ingress" in pod default/nginx-ingress-rc-gj9tl on node "172.20.117.178", discarding data: missing cpu usage metric, unable to fully scrape metrics from source kubelet_summary:ip-172-20-64-197.compute.internal: unable to fetch metrics from Kubelet ip-172-20-64-197.compute.internal (172.20.64.197): request failed - "401 Unauthorized", response: "Unauthorized"]
E0209 22:54:55.286313       1 manager.go:102] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-172-20-64-197.compute.internal: unable to fetch metrics from Kubelet ip-172-20-64-197.compute.internal (172.20.64.197): request failed - "401 Unauthorized", response: "Unauthorized"
E0209 22:56:55.264838       1 manager.go:102] unable to fully collect metrics: unable to extract connection information for node "ip-172-20-69-45.compute.internal": node ip-172-20-69-45.compute.internal is not ready
E0209 22:57:55.266908       1 manager.go:102] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-172-20-69-45.compute.internal: unable to get CPU for container "kafka" in pod default/kafka-1 on node "172.20.69.45", discarding data: missing cpu usage metric
E0209 22:59:55.255294       1 manager.go:102] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-172-20-117-178.compute.internal: unable to get CPU for container "nginx-ingress" in pod default/nginx-ingress-rc-gj9tl on node "172.20.117.178", discarding data: missing cpu usage metric
E0209 23:03:52.091447       1 reststorage.go:144] unable to fetch pod metrics for pod default/baker-xxx: no metrics known for pod
E0209 23:05:55.297098       1 manager.go:102] unable to fully collect metrics: unable to fully scrape metrics from source kubelet_summary:ip-172-20-117-178.compute.internal: unable to get CPU for container "nginx-ingress" in pod default/nginx-ingress-rc-gj9tl on node "172.20.117.178", discarding data: missing cpu usage metric

This works, thank you very much. Saved some prod time 😃 https://github.com/kubernetes-sigs/metrics-server/issues/212#issuecomment-459321884

@vinhnglx do you mind showing the output of kops update cluster and kops rolling-update cluster?

@vinhnglx could you check the arguments passed to the kubelet on one of the nodes? If you run the kubelet using systemd, then SSH to the instance and run sudo systemctl status kubelet. Make sure the --authentication-token-webhook=true and --authorization-mode=Webhook flags are passed. Checking the kubelet logs may also help (run journalctl -u kubelet on the node).
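
For example, something along these lines on the node (a sketch; the exact kubelet invocation depends on how kops provisioned the instance):

# show the flags the running kubelet was started with
ps -ef | grep [k]ubelet | tr ' ' '\n' | grep -E 'authentication-token-webhook|authorization-mode'
# tail the kubelet's systemd journal
sudo journalctl -u kubelet -n 100 --no-pager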

Thanks @rajeshkodali.

I still hit the error “401 Unauthorized”

E0214 03:16:54.413600       1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:ip-10-10-2-189.ap-southeast-1.compute.internal: unable to fetch metrics from Kubelet ip-10-10-2-189.ap-southeast-1.compute.internal (10.10.2.189): request failed - "401 Unauthorized", response: "Unauthorized", unable to fully scrape metrics from source kubelet_summary:ip-10-10-1-140.ap-southeast-1.compute.internal: unable to fetch metrics from Kubelet ip-10-10-1-140.ap-southeast-1.compute.internal (10.10.1.140): request failed - "401 Unauthorized", response: "Unauthorized", unable to fully scrape metrics from source kubelet_summary:ip-10-10-1-124.ap-southeast-1.compute.internal: unable to fetch metrics from Kubelet ip-10-10-1-124.ap-southeast-1.compute.internal (10.10.1.124): request failed - "401 Unauthorized", response: "Unauthorized"]
E0214 03:17:03.349620       1 reststorage.go:144] unable to fetch pod metrics for pod default/backend-14-feb-2019-10-20-15-5bb5b77bcc-stb4t: no metrics known for pod
E0214 03:17:17.633193       1 reststorage.go:144] unable to fetch pod metrics for pod default/frontend-14-feb-2019-10-20-04-5d5c4678bc-k7vpv: no metrics known for pod
E0214 03:17:33.357307       1 reststorage.go:144] unable to fetch pod metrics for pod default/backend-14-feb-2019-10-20-15-5bb5b77bcc-stb4t: no metrics known for pod