helm: Couldn't get resource list for metrics error

With the latest version of Helm, I’m getting warnings like this when running operations. I’m wondering whether it has to do with deprecated API versions or something else.

% helm upgrade --dry-run -i xxx somedir -f helm-charts/values-default.yaml
E0126 14:24:31.061339    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
E0126 14:24:31.366546    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
E0126 14:24:31.493404    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
E0126 14:24:31.698458    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
E0126 14:24:31.980491    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
E0126 14:24:32.227059    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
E0126 14:24:32.369217    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
E0126 14:24:32.477016    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
E0126 14:24:32.605685    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
E0126 14:24:32.789270    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
E0126 14:24:33.063156    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
E0126 14:24:33.291941    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
E0126 14:24:33.426387    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1
[....]

(The command still succeeds, but the output is messy and unsightly.)

Output of helm version: version.BuildInfo{Version:"v3.11.0", GitCommit:"472c5736ab01133de504a826bd9ee12cbe4e7904", GitTreeState:"clean", GoVersion:"go1.19.5"}

Output of kubectl version:

Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.25.4-gke.2100

Cloud Provider/Platform (AKS, GKE, Minikube etc.): GKE

About this issue

  • State: closed
  • Created a year ago
  • Reactions: 15
  • Comments: 28 (6 by maintainers)

Most upvoted comments

I finally had some time to dig into this on my own cluster.

E0126 14:24:31.061339    6338 memcache.go:255] couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1

is client-go trying to fill its discovery cache. client-go prints these messages when an APIService is registered but no longer has a controller serving it.

In my case, I had upgraded prometheus-adapter, which seems to have changed from custom.metrics.k8s.io to metrics.k8s.io. To fix this, I just had to delete the stale APIService:

kubectl delete apiservices v1beta1.custom.metrics.k8s.io
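
If you are not sure which APIService is the stale one, you can list them first and look for entries whose AVAILABLE column is False. This is just a minimal check (assuming kubectl access to the same cluster), not specific to prometheus-adapter:

# List registered APIServices; unserved ones show AVAILABLE=False
kubectl get apiservices

# Narrow the list down to the entries that are not being served
kubectl get apiservices | grep False

Only delete an APIService if the component that registered it has really been removed or replaced.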

This is not a helm problem.

Ignoring stderr doesn’t seem like a good fix; it would hide important or fatal errors along with this noise.

Presumably, Helm should either downgrade that library, look for an upstream fix of it, or swallow these particular errors itself in a more limited way.
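
Until something like that lands, one user-side stopgap is to filter just these lines out of stderr instead of discarding stderr entirely. This is only a sketch, assuming a bash shell and the same command from the report; everything else written to stderr still gets through, and Helm’s exit code is unaffected:

# Drop only the "couldn't get resource list" noise; keep all other stderr output
helm upgrade --dry-run -i xxx somedir -f helm-charts/values-default.yaml \
  2> >(grep -v "couldn't get resource list" >&2)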

Hello! If you are affected by this issue and you are using KEDA, there is a workaround you can use: https://github.com/kedacore/keda/issues/4224#issuecomment-1426749781

EKS 1.24, kubectl 1.25.4, getting the same error.

I am running into a similar issue. My Helm version is v3.11.0, and the installation completed successfully after throwing these warnings.

E0202 12:51:02.217710 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:03.197970 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:04.851745 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:06.071295 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:09.482704 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:10.459653 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:12.063372 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:13.374725 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:14.701756 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:16.460971 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:17.431306 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:18.396043 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:19.380456 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:20.363210 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:21.333310 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized
E0202 12:51:22.318098 99829 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: Unauthorized

Will this cause any problems in the future, or do I have to roll back to Helm v3.10.3 and re-install?
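
For the Unauthorized variant it can help to rule Helm out by querying the same discovery endpoint directly with kubectl; a quick check, using the group/version from the log above:

# If this also returns Unauthorized, the problem is in the metrics APIService, not in Helm
kubectl get --raw /apis/metrics.k8s.io/v1beta1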

Issue fixed. In my previous deployment configuration I used Helm v3.10.3, and my client was later updated to v3.11.0, which is why I was getting the issue. When I rolled my client back to v3.10.3, the issue went away.

In almost any type of CI pipeline that’s configured properly, the job will fail based on the process’s exit code (success on exit code 0, failure on any other exit code), so suppressing stderr should not make it “succeed” on failure. However, suppressing all stderr would potentially make debugging a failure more challenging in some cases IMO.
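
A quick way to see this with the command from the original report (just a sketch): run it and check the shell’s exit status; the memcache lines go to stderr, but the status stays 0 when the release operation itself succeeds:

helm upgrade --dry-run -i xxx somedir -f helm-charts/values-default.yaml
# Prints 0 on success even though the warnings were written to stderr
echo "helm exit code: $?"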

seems like this is probably the change that introduced it: https://github.com/helm/helm/pull/11622

I posted an issue upstream at: https://github.com/kubernetes/client-go/issues/1223

Two notes:

  1. memcache.go is part of kubernetes/client-go: https://github.com/kubernetes/client-go/blob/master/discovery/cached/memory/memcache.go#L255
  2. external.metrics.k8s.io isn’t part of kubernetes upstream: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis

I don’t think (as @joejulian mentions) this is a Helm issue. If I had to guess, a new version of a common metrics provider has removed its external.metrics.k8s.io/v1beta1 API.
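
If you want to check that guess against your own cluster, you can inspect the registered external metrics APIService directly; a sketch, assuming the object exists under the usual <version>.<group> name:

# Shows the backing service (spec.service) and the Available condition (status.conditions)
kubectl get apiservice v1beta1.external.metrics.k8s.io -o yaml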

Hi @mmorejon, this is the PR in the custom-metrics-apiserver (the upstream) and this is the PR in KEDA. As I said, the next KEDA release will have this problem fixed independently of the tooling version that you are using 😄

The problem is that the tooling needs to be updated to a fixed version of the Kubernetes client. The brew source probably isn’t using the latest versions…

In parallel, a fix has been released as part of custom-metrics-apiserver (the library that projects like prometheus-adapter and KEDA use), so future releases of metrics servers should work properly even if you use affected tooling versions (for example, in KEDA we will solve it in the next release).

No. Please check with the metrics provider.