keda: KEDA might break an existing deployment on a cluster which already has another External Metrics Adapter installed

KEDA uses a metrics adapter based on the custom-metrics-apiserver library. As part of the deployment, the user needs to specify a cluster-wide APIService object named v1beta1.external.metrics.k8s.io; see the library example and the KEDA deployment.
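For context, this is roughly what such an APIService object looks like. The snippet below is only a sketch: the backing service name and namespace are illustrative assumptions for a default KEDA install, not copied from the actual manifests.

  apiVersion: apiregistration.k8s.io/v1
  kind: APIService
  metadata:
    name: v1beta1.external.metrics.k8s.io   # cluster-wide name; only one object with this name can exist
  spec:
    group: external.metrics.k8s.io
    version: v1beta1
    service:
      name: keda-operator-metrics-apiserver   # illustrative; whichever adapter should serve external metrics
      namespace: keda
    groupPriorityMinimum: 100
    versionPriority: 100
    insecureSkipTLSVerify: true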

I wonder what would happen if a user has already deployed another Metrics Adapter (which uses the same APIService-based approach) and we try to install KEDA. It will probably replace the original APIService definition with the KEDA one, so KEDA will work, but the original stuff installed on the cluster probably won't. We should not break things, or we should make it clear that this could happen.
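For what it's worth, a quick way to see which adapter currently backs the endpoint before installing KEDA is to inspect the existing APIService (plain kubectl, nothing KEDA-specific assumed):

  kubectl get apiservice v1beta1.external.metrics.k8s.io \
    -o jsonpath='{.spec.service.namespace}/{.spec.service.name}{"\n"}'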

We should investigate the possibilities and whether there are better solutions for dealing with the metrics. Or maybe my assumptions are wrong, in which case please correct me.

About this issue

  • Original URL
  • State: open
  • Created 5 years ago
  • Reactions: 67
  • Comments: 53 (32 by maintainers)

Most upvoted comments

Ran into this because the datadog helm chart also creates the APIService object with the same name, v1beta1.external.metrics.k8s.io.

hello all,

we are also experiencing the same issue here. We are using kube-state-metrics as a metric provider for our scaling strategy and they keep overriding the v1beta1.external.metrics.k8s.io APIService; it has to be either one or the other. I would be glad to help if I can!

We are definitely aware of this and that it’s a pain point, sorry! We have a very smart guy on our team who will look into a POC for contributing this upstream.

We ran into the same situation as @v-yarotsky mentioned. We have datadog installed with datadog-cluster-agent-metrics-api:

➜  ~ kubectl get apiservice | grep external.metrics                                                                                                              
v1beta1.external.metrics.k8s.io             default/datadog-cluster-agent-metrics-api

It does not overwrite or break the existing v1beta1.external.metrics.k8s.io APIService. It just won’t install KEDA and complains:

Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: APIService "v1beta1.external.metrics.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "keda": current value is "datadog"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "keda": current value is "default"

We are working on a proposal to fix this limitation directly in k8s, but we are still drafting the KEP. I hope it can be fixed during the next months, but for now, thanks for your workaround!

Team, I’m facing the below error. What is the workaround? Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: APIService "v1beta1.external.metrics.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "keda"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
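For what it's worth, a generic Helm technique for this kind of ownership conflict is to let the new release adopt the existing APIService by adding the label and annotations that Helm validates. The commands below are only a sketch of that general approach (standard kubectl; release name keda in namespace keda is assumed). Be aware that once KEDA owns the object it will point it at KEDA's metrics server, so the adapter that registered it before loses the external metrics endpoint:

  kubectl label apiservice v1beta1.external.metrics.k8s.io \
    app.kubernetes.io/managed-by=Helm --overwrite
  kubectl annotate apiservice v1beta1.external.metrics.k8s.io \
    meta.helm.sh/release-name=keda \
    meta.helm.sh/release-namespace=keda --overwrite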

trying to deploy keda on a k8s cluster with an existing datadog deployment:

 helm install keda kedacore/keda --namespace keda

Error: rendered manifests contain a resource that already exists. Unable to continue with install: APIService "v1beta1.external.metrics.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "keda": current value is "datadog-agent"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "keda": current value is "default"

and the same error when trying to deploy datadog on a cluster with an existing keda deployment:

rendered manifests contain a resource that already exists. Unable to continue with install: APIService "v1beta1.external.metrics.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "datadog-agent": current value is "keda"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "default": current value is "keda"

is there a workaround for this?

We are using k8s-cloudwatch-adapter, which is already deprecated, and we are planning to migrate to KEDA. As expected, both of them use the same APIService-based approach, so we would appreciate any solution for a smooth migration.

We have another metrics server in place. Is there any workaround? By default we have custom-metrics/custom-metrics-stackdriver-adapter in place.

Hello guys, I managed to work around this by disabling the metricsProvider from Datadog. This way you can deploy both charts in your cluster. Keep in mind that the Datadog metricsProvider is what lets Datadog autoscale using Custom Metrics.

https://docs.datadoghq.com/containers/guide/cluster_agent_autoscaling_metrics/
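For reference, that switch lives under the cluster agent section of the Datadog chart values. The snippet below is a sketch to check against the chart's own documentation rather than an exact excerpt:

  # values.yaml for the datadog helm chart (key path assumed; verify against the chart docs)
  clusterAgent:
    metricsProvider:
      enabled: false   # stops the Datadog Cluster Agent from registering v1beta1.external.metrics.k8s.io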

@slv-306 As @JorTurFer mentioned, this is supported and documented in our FAQ: https://keda.sh/docs/latest/faq/

May I ask you to create a GitHub Discussion for questions that are not related to this issue, please? This helps us keep the conversation more focused - Thank you!

I believe this issue over at the datadog-agent repo is related: https://github.com/DataDog/datadog-agent/issues/10764

We’re running a cluster that is using both the datadog agent and KEDA with some datadog triggers, and have run into the problems described in the datadog-agent issue linked above. It looks like:

2022-02-02 22:02:10 UTC | CLUSTER | ERROR | (app/app.go:292 in start) | Could not start admission controller: unable to retrieve the complete list of server APIs: external.metrics.k8s.io/v1beta1: the server is currently unable to handle the request
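That error usually indicates that the external metrics APIService is registered but its backing service cannot be reached. A quick way to check the availability condition (plain kubectl; the sample output is illustrative, not taken from this cluster) is:

  kubectl get apiservice v1beta1.external.metrics.k8s.io
  # NAME                              SERVICE                                AVAILABLE                      AGE
  # v1beta1.external.metrics.k8s.io   keda/keda-operator-metrics-apiserver   False (FailedDiscoveryCheck)   2d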

The name v1beta1.external.metrics.k8s.io is reserved by Kubernetes and it is expected that this endpoint provides external metrics. And this is a cluster-wide object, so there can be only one per cluster.

@Aarthisk I am planning to look at other options as well.

@zroubalik I’ll make sure you get label permissions 😃. Adding needs-discussion and help-wanted in case someone gets a chance to validate whether it is a bug