pulumi-kubernetes: Can't install Prometheus Operator via Helm3: Duplicate resource URN
Problem description
Using Pulumi 2.7.1, I’ve been trying to install the prometheus-operator Helm chart from the stable repo, version 9.2.2.
I’m using the following Pulumi definition to do it:
public static void DeployPrometheus(CustomResourceOptions k8sCustomResourceOptions)
{
    var appName = "prometheus";
    var appLabels = new InputMap<string>
    {
        { "app", appName }
    };

    /**
     * Enable Prometheus data storage on a persistent volume claim.
     */
    // See https://docs.microsoft.com/en-us/azure/aks/azure-disks-dynamic-pv for more details on provisioning
    var prometheusVolumeClaim = new Pulumi.Kubernetes.Core.V1.PersistentVolumeClaim("prometheus-volume-claim", new PersistentVolumeClaimArgs
    {
        Metadata = new ObjectMetaArgs
        {
            Name = "prometheus",
        },
        ApiVersion = "v1",
        Spec = new PersistentVolumeClaimSpecArgs
        {
            AccessModes = { "ReadWriteOnce" }, // can only be mounted read-write by a single node
            StorageClassName = "default",      // standard Azure disk
            Resources = new ResourceRequirementsArgs
            {
                Requests =
                {
                    // TODO: make storage amounts configurable
                    { "storage", "15Gi" } // 15 GiB of storage
                }
            }
        }
    }, k8sCustomResourceOptions);
    // need to work around CRD bug here https://github.com/pulumi/pulumi-kubernetes/issues/1023
    //var configGroup = new ConfigGroup("prometheus-crds", new ConfigGroupArgs
    //{
    //    Files = new[]
    //    {
    //        "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.38/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml",
    //        "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.38/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml",
    //        "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.38/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml",
    //        "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.38/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml",
    //        "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.38/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml",
    //        "https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.38/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml"
    //    },
    //}, new ComponentResourceOptions
    //{
    //    Provider = k8sCustomResourceOptions.Provider
    //});
    var prometheusChartValues = prometheusVolumeClaim.Metadata.Apply(metadata => new Dictionary<string, object>
    {
        /* define Kubelet metrics scraping */
        ["kubelet"] = new Dictionary<string, object>
        {
            ["enabled"] = true,
            ["serviceMonitor"] = new Dictionary<string, object>
            {
                ["https"] = false
            },
        },
        ["kubeControllerManager"] = new Dictionary<string, object>
        {
            ["enabled"] = false
        },
        ["kubeScheduler"] = new Dictionary<string, object>
        {
            ["enabled"] = false
        },
        ["kubeEtcd"] = new Dictionary<string, object>
        {
            ["enabled"] = false
        },
        ["kubeProxy"] = new Dictionary<string, object>
        {
            ["enabled"] = false
        },
        ["prometheusOperator"] = new Dictionary<string, object>
        {
            ["createCustomResource"] = false
        },
        ["prometheus"] = new Dictionary<string, object>
        {
            ["prometheusSpec"] = new Dictionary<string, object>
            {
                ["volumes"] = new[]
                {
                    new Dictionary<string, object>
                    {
                        ["name"] = "prometheus-storage",
                        ["persistentVolumeClaim"] = new Dictionary<string, object>
                        {
                            ["claimName"] = metadata.Name
                        },
                    }
                },
                ["volumeMounts"] = new[]
                {
                    new Dictionary<string, object>
                    {
                        ["name"] = "prometheus-storage",
                        ["mountPath"] = "/prometheus/"
                    }
                }
            }
        },
    });
    var objectNames = new ConcurrentBag<string>();

    ImmutableDictionary<string, object> OmitCustomResource(ImmutableDictionary<string, object> obj, CustomResourceOptions opts)
    {
        var metadata = (ImmutableDictionary<string, object>)obj["metadata"];
        var resourceName = $"{obj["kind"]}.{metadata["name"]}";
        if (objectNames.Contains(resourceName))
        {
            Console.WriteLine($"Deduplicated [{resourceName}]");
            // Replace the duplicate object with an empty List so Pulumi skips it
            return new Dictionary<string, object>
            {
                ["apiVersion"] = "v1",
                ["kind"] = "List",
                ["items"] = new object[0],
            }.ToImmutableDictionary();
        }

        objectNames.Add(resourceName);
        Console.WriteLine(resourceName);
        return obj;
    }
    var prometheusOperatorChart = new Chart("pm", new Pulumi.Kubernetes.Helm.ChartArgs
    {
        Repo = "stable",
        Chart = "prometheus-operator",
        Version = "9.2.2",
        Namespace = "monitoring",
        Values = prometheusChartValues,
        Transformations = { OmitCustomResource }
    }, new ComponentResourceOptions
    {
        Provider = k8sCustomResourceOptions.Provider,
        //DependsOn = configGroup
    });
}
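For context, a minimal sketch of how this helper might be invoked from a stack; the provider wiring and the kubeConfig source are assumptions, not part of the original report:

var provider = new Pulumi.Kubernetes.Provider("k8s", new Pulumi.Kubernetes.ProviderArgs
{
    KubeConfig = kubeConfig, // e.g. exported from an AKS cluster resource
});

DeployPrometheus(new CustomResourceOptions
{
    Provider = provider
});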
I’m using the chart’s transformation mechanism in an attempt to resolve the error described below.
Errors & Logs
I get a similar error to this one each time I attempt to run this chart installation:
error: Duplicate resource URN 'urn:pulumi:dev::sdkbin-resources::kubernetes:helm.sh/v2:Chart$kubernetes:apiextensions.k8s.io/v1:CustomResourceDefinition::monitoring/pm-prometheus-operator-alertmanager'; try giving it a unique name
It’s always one of the custom resource definitions - it’s not always the same one that throws the error though. This time it was the alertmanager CRD that caused the error - other times it’s been the prometheus CRD or the node-exporter.
I attempted to fix this issue by adding the OmitCustomResource filtering code to my application to omit any duplicates - I also added some logging to see what those filtered / unfiltered outputs were:
- CustomResourceDefinition.alertmanagers.monitoring.coreos.com
- CustomResourceDefinition.podmonitors.monitoring.coreos.com
- CustomResourceDefinition.prometheuses.monitoring.coreos.com
- CustomResourceDefinition.prometheusrules.monitoring.coreos.com
- CustomResourceDefinition.servicemonitors.monitoring.coreos.com
- CustomResourceDefinition.thanosrulers.monitoring.coreos.com
- PodSecurityPolicy.pm-grafana
- PodSecurityPolicy.pm-grafana-test
- PodSecurityPolicy.pm-kube-state-metrics
- PodSecurityPolicy.pm-prometheus-node-exporter
- PodSecurityPolicy.pm-prometheus-operator-alertmanager
- PodSecurityPolicy.pm-prometheus-operator-operator
- PodSecurityPolicy.pm-prometheus-operator-prometheus
- ServiceAccount.pm-grafana
- ServiceAccount.pm-grafana-test
- ServiceAccount.pm-kube-state-metrics
- ServiceAccount.pm-prometheus-node-exporter
- ServiceAccount.pm-prometheus-operator-alertmanager
- ServiceAccount.pm-prometheus-operator-operator
- ServiceAccount.pm-prometheus-operator-prometheus
- Secret.pm-grafana
- Secret.alertmanager-pm-prometheus-operator-alertmanager
- ConfigMap.pm-grafana-config-dashboards
- ConfigMap.pm-grafana
- ConfigMap.pm-grafana-test
- ConfigMap.pm-prometheus-operator-grafana-datasource
- ConfigMap.pm-prometheus-operator-apiserver
- ConfigMap.pm-prometheus-operator-cluster-total
- ConfigMap.pm-prometheus-operator-k8s-coredns
- ConfigMap.pm-prometheus-operator-k8s-resources-cluster
- ConfigMap.pm-prometheus-operator-k8s-resources-namespace
- ConfigMap.pm-prometheus-operator-k8s-resources-node
- ConfigMap.pm-prometheus-operator-k8s-resources-pod
- ConfigMap.pm-prometheus-operator-k8s-resources-workload
- ConfigMap.pm-prometheus-operator-k8s-resources-workloads-namespace
- ConfigMap.pm-prometheus-operator-kubelet
- ConfigMap.pm-prometheus-operator-namespace-by-pod
- ConfigMap.pm-prometheus-operator-namespace-by-workload
- ConfigMap.pm-prometheus-operator-node-cluster-rsrc-use
- ConfigMap.pm-prometheus-operator-node-rsrc-use
- ConfigMap.pm-prometheus-operator-nodes
- ConfigMap.pm-prometheus-operator-persistentvolumesusage
- ConfigMap.pm-prometheus-operator-pod-total
- ConfigMap.pm-prometheus-operator-prometheus
- ConfigMap.pm-prometheus-operator-statefulset
- ConfigMap.pm-prometheus-operator-workload-total
- ClusterRole.pm-grafana-clusterrole
- ClusterRole.pm-kube-state-metrics
- ClusterRole.psp-pm-kube-state-metrics
- ClusterRole.psp-pm-prometheus-node-exporter
- ClusterRole.pm-prometheus-operator-operator
- ClusterRole.pm-prometheus-operator-operator-psp
- ClusterRole.pm-prometheus-operator-prometheus
- ClusterRole.pm-prometheus-operator-prometheus-psp
- ClusterRoleBinding.pm-grafana-clusterrolebinding
- ClusterRoleBinding.pm-kube-state-metrics
- ClusterRoleBinding.psp-pm-kube-state-metrics
- ClusterRoleBinding.psp-pm-prometheus-node-exporter
- ClusterRoleBinding.pm-prometheus-operator-operator
- ClusterRoleBinding.pm-prometheus-operator-operator-psp
- ClusterRoleBinding.pm-prometheus-operator-prometheus
- ClusterRoleBinding.pm-prometheus-operator-prometheus-psp
- Role.pm-grafana
- Role.pm-grafana-test
- Role.pm-prometheus-operator-alertmanager
- RoleBinding.pm-grafana
- RoleBinding.pm-grafana-test
- RoleBinding.pm-prometheus-operator-alertmanager
- Service.pm-grafana
- Service.pm-kube-state-metrics
- Service.pm-prometheus-node-exporter
- Service.pm-prometheus-operator-alertmanager
- Service.pm-prometheus-operator-coredns
- Service.pm-prometheus-operator-operator
- Service.pm-prometheus-operator-prometheus
- DaemonSet.pm-prometheus-node-exporter
- Deployment.pm-grafana
- Deployment.pm-kube-state-metrics
- Deployment.pm-prometheus-operator-operator
- Alertmanager.pm-prometheus-operator-alertmanager
- MutatingWebhookConfiguration.pm-prometheus-operator-admission
- Prometheus.pm-prometheus-operator-prometheus
- PrometheusRule.pm-prometheus-operator-alertmanager.rules
- PrometheusRule.pm-prometheus-operator-general.rules
- PrometheusRule.pm-prometheus-operator-k8s.rules
- PrometheusRule.pm-prometheus-operator-kube-apiserver-availability.rules
- PrometheusRule.pm-prometheus-operator-kube-apiserver-slos
- PrometheusRule.pm-prometheus-operator-kube-apiserver.rules
- PrometheusRule.pm-prometheus-operator-kube-prometheus-general.rules
- PrometheusRule.pm-prometheus-operator-kube-prometheus-node-recording.rules
- PrometheusRule.pm-prometheus-operator-kube-state-metrics
- PrometheusRule.pm-prometheus-operator-kubelet.rules
- PrometheusRule.pm-prometheus-operator-kubernetes-apps
- PrometheusRule.pm-prometheus-operator-kubernetes-resources
- PrometheusRule.pm-prometheus-operator-kubernetes-storage
- PrometheusRule.pm-prometheus-operator-kubernetes-system-apiserver
- PrometheusRule.pm-prometheus-operator-kubernetes-system-kubelet
- PrometheusRule.pm-prometheus-operator-kubernetes-system
- PrometheusRule.pm-prometheus-operator-node-exporter.rules
- PrometheusRule.pm-prometheus-operator-node-exporter
- PrometheusRule.pm-prometheus-operator-node-network
- PrometheusRule.pm-prometheus-operator-node.rules
- PrometheusRule.pm-prometheus-operator-prometheus-operator
- PrometheusRule.pm-prometheus-operator-prometheus
- ServiceMonitor.pm-prometheus-operator-alertmanager
- ServiceMonitor.pm-prometheus-operator-coredns
- ServiceMonitor.pm-prometheus-operator-apiserver
- ServiceMonitor.pm-prometheus-operator-kube-state-metrics
- ServiceMonitor.pm-prometheus-operator-kubelet
- ServiceMonitor.pm-prometheus-operator-node-exporter
- ServiceMonitor.pm-prometheus-operator-grafana
- ServiceMonitor.pm-prometheus-operator-operator
- ServiceMonitor.pm-prometheus-operator-prometheus
- ValidatingWebhookConfiguration.pm-prometheus-operator-admission
- PodSecurityPolicy.pm-prometheus-operator-admission
- ServiceAccount.pm-prometheus-operator-admission
- ClusterRole.pm-prometheus-operator-admission
- ClusterRoleBinding.pm-prometheus-operator-admission
- Role.pm-prometheus-operator-admission
- RoleBinding.pm-prometheus-operator-admission
- Pod.pm-grafana-test
- Job.pm-prometheus-operator-admission-create
- Job.pm-prometheus-operator-admission-patch
As you can see, there were no duplicates included in this list - nor did my logging code for finding duplicates catch any.
Affected product version(s)
2.7.1
Reproducing the issue
The code I included above will reproduce it.
Suggestions for a fix
I’ve tried looking at other similar issues in this repo, such as https://github.com/pulumi/pulumi-kubernetes/issues/1023. It’s not clear to me what the issue is here, but I suspect it has something to do with the prometheus-operator chart pulling in other charts that all create the same custom resources under the covers. If my transform could also apply to those dependent charts, that would be wonderful.
Or is my issue something else?
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 30 (13 by maintainers)
The problem here isn’t with the Kubernetes CustomResourceDefinition, but with Pulumi’s URN generation. Specifically, in @Aaronontheweb’s case there is an Alertmanager generated by the helm chart called monitoring/pm-prometheus-operator-alertmanager, but there’s also a ServiceMonitor called monitoring/pm-prometheus-operator-alertmanager. Because these are not standard Kubernetes resource kinds, Pulumi generates a URN for a CustomResourceDefinition type (is this the fallback type for unknown resource kinds?). The result is 2 resources with the same URN.
Kubernetes has no issue with resources sharing a name, as long as they are not of the same Kind, so from the helm chart’s perspective there is no issue. I would consider it a bug/missing feature in Pulumi: it would be nice if Pulumi included the Kubernetes resource Kind in the URN, even for unknown kinds. I worked around it by applying this transform:
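A minimal C# sketch of that kind of workaround, assuming the same usings as the code above; it renames the colliding ServiceMonitor so its URN no longer clashes with the Alertmanager’s. The kind checked and the suffix scheme are illustrative, not the commenter’s exact code:

ImmutableDictionary<string, object> DisambiguateServiceMonitors(ImmutableDictionary<string, object> obj, CustomResourceOptions opts)
{
    var kind = (string)obj["kind"];
    if (kind != "ServiceMonitor")
        return obj;

    // Suffix the kind onto metadata.name so the generated URN is unique,
    // e.g. pm-prometheus-operator-alertmanager-servicemonitor.
    // ServiceMonitors are matched by labels rather than by name, so
    // renaming them should not break the chart's internal references.
    var metadata = (ImmutableDictionary<string, object>)obj["metadata"];
    var renamed = metadata.SetItem("name", $"{metadata["name"]}-{kind.ToLowerInvariant()}");
    return obj.SetItem("metadata", renamed);
}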
FWIW - I do not consider this “a bug in the charts” exactly. Since the charts do work correctly via the Helm CLI, it is a bug in Pulumi’s support for deploying Helm charts that we don’t handle this as smoothly - even if it is “poor design on behalf of the chart author”.
We should definitely look into what we can do to fix/improve Pulumi’s support for charts like these (especially as the majority of the most used charts have this sort of “issue”).
I’ve got this problem with cert-manager’s CRDs. I’m deploying multiple clusters in my pulumi stack and cert-manager to each cluster.
Pulumi throws a duplicate URN error because the helm chart uses a hardcoded name for its CRD, “certificaterequests.cert-manager.io”.
Unfortunately I can’t work around this issue by using “resourcePrefix” because then my resource names start to exceed 53 characters (also it makes the naming very silly with a double {name}-{name} prefix).
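For reference, the resourcePrefix approach looks roughly like this in C#; the chart details, clusterName, and clusterProvider here are illustrative, and ResourcePrefix prefixes the auto-generated Pulumi resource names (and thus the URNs):

var certManager = new Chart($"cert-manager-{clusterName}", new Pulumi.Kubernetes.Helm.ChartArgs
{
    Chart = "cert-manager",
    Repo = "jetstack",
    Version = "v1.0.0",
    Namespace = "cert-manager",
    // Distinguish URNs per cluster, e.g. cluster1-certificaterequests.cert-manager.io,
    // at the cost of long, doubled-up resource names.
    ResourcePrefix = clusterName,
}, new ComponentResourceOptions { Provider = clusterProvider });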
I also can’t work around this problem by using a resource transformation to rename the CRD, because the CRD name must be spec.names.plural + "." + spec.group, as enforced by the k8s API. I also can’t work around this problem by using k8s.yaml.ConfigFile or k8s.yaml.parse, because they also map resource names to URNs.
I think pulumi needs to do something to better handle URN generation for helm charts.
Perhaps an urnPrefix would work?
@lblackstone and I are having a discussion on how best to support this. It seems this was introduced by #1102, and we need to figure out a way to either A) let the user opt out of installing the CRDs, because they might be installed out of band by another process, or B) detect if the CRD is already present in the API and omit it automatically.
The workaround at the moment is to use a transformation to drop the resource, something a bit like this:
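A minimal C# sketch of such a transformation, reusing the empty-List trick from the OmitCustomResource helper above (illustrative, not the exact snippet referenced):

ImmutableDictionary<string, object> DropCrds(ImmutableDictionary<string, object> obj, CustomResourceOptions opts)
{
    if ((string)obj["kind"] == "CustomResourceDefinition")
    {
        // Replace the CRD with an empty List so Pulumi skips it entirely;
        // this assumes the CRDs are installed out of band.
        return new Dictionary<string, object>
        {
            ["apiVersion"] = "v1",
            ["kind"] = "List",
            ["items"] = new object[0],
        }.ToImmutableDictionary();
    }
    return obj;
}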
I can confirm that release v3.8.0 with #1741 fixes this issue for me.
Specifically, I can remove the crd name transformation hacks I mention above and it works as expected! 🎉
Would be nice to at least have the flag --skip-crds exposed on the v3 Chart object; for now this workaround works for us. 👍
Yes - I believe @jaxxstorm is working on a plan for how best to expose this option.
If that’s the case, then it severely limits Pulumi’s usefulness - we can’t install most Helm charts.