pulumi-kubernetes: Python Helm charts no longer deploy into EKS cluster

Since #1539, when deploying a Helm v3 chart in Python, the resources in that chart are no longer deployed to the provider specified on the chart. This means that, for example, attempting to deploy a chart to an EKS cluster from Python silently deploys it to whatever cluster is in your kubeconfig instead.

Steps to reproduce

Assuming eks.cluster is an EKS cluster resource, run a program like:

import pulumi
import pulumi_kubernetes as k8s

k8s.helm.v3.Chart(
    "some-helm-chart",
    k8s.helm.v3.ChartOpts(
        chart="whatever",
        # ...
    ),
    # The chart's resources should be created by the cluster's provider,
    # not by whatever kubeconfig is active locally.
    opts=pulumi.ResourceOptions(provider=eks.cluster.provider),
)

The chart’s resources are deployed to whatever cluster kubectl is currently configured to use, rather than to the EKS cluster.

Likely cause

This is almost certainly fallout from #1539. v3.0.0 does not exhibit this bug, but v3.1.0 (which includes #1539) does.

cc @lukehoban @lblackstone

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 1
  • Comments: 18 (11 by maintainers)

Most upvoted comments

@eliskovets we’ve been using the workaround provided by @lblackstone in https://github.com/pulumi/pulumi-kubernetes/issues/1565#issuecomment-834791256 to good effect for a while now.
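
For anyone landing here, this is a minimal sketch of the kind of workaround being referenced, assuming it amounts to building an explicit k8s.Provider from the cluster’s kubeconfig rather than relying on the provider instance exposed by eks.Cluster; the resource names are placeholders, and the linked comment is the authoritative version:

import json

import pulumi
import pulumi_eks as eks
import pulumi_kubernetes as k8s

cluster = eks.Cluster("some-cluster")

# Construct an explicit Kubernetes provider from the cluster's kubeconfig,
# instead of passing cluster.provider directly to the chart.
k8s_provider = k8s.Provider(
    "eks-k8s",
    kubeconfig=cluster.kubeconfig.apply(json.dumps),
)

k8s.helm.v3.Chart(
    "some-helm-chart",
    k8s.helm.v3.ChartOpts(
        chart="whatever",
        # ...
    ),
    opts=pulumi.ResourceOptions(provider=k8s_provider),
)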

I don’t mean to sound ungrateful, but I’d like to advocate for reopening this issue until pulumi/pulumi#7012 is fixed or some guardrails are put in place here! While I understand that the aforementioned issue is tracking the root cause, it’s not discoverable for folks who are just trying to understand why pulumi_eks and pulumi_kubernetes are not playing well together.

In particular, pulumi/pulumi#3383 means the bug presents as silently deploying into whatever cluster is currently active in the user’s kubeconfig. This is the kind of bug that can take down prod. (We were lucky and it only took down our staging environment, but that’s only because I happened to have staging as my active cluster, not prod.)

Since pulumi/pulumi#7012 looks like it’s not a quick fix, it’d be great to get some assertions in the SDK to at least prevent disaster, or at the very least some warnings in the docs.

@lblackstone, thanks for the workaround, but I don’t think it’ll work for us, since swapping the provider out like that results in Pulumi attempting to replace all the resources in the cluster. I tried to work around this with provider aliases, but it looks like those aren’t wired up yet (pulumi/pulumi#3979).
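
For context, the alias-based attempt presumably looked something like the sketch below, where the cluster and provider names are placeholders and the explicit provider replaces an older one; per pulumi/pulumi#3979 the alias isn’t honored, so Pulumi still wants to replace the chart’s resources:

import json

import pulumi
import pulumi_eks as eks
import pulumi_kubernetes as k8s

cluster = eks.Cluster("some-cluster")

# Hypothetical: alias the explicitly constructed provider to the logical name
# of the provider it replaces, hoping Pulumi treats them as the same resource
# and skips replacing the chart's children. Per pulumi/pulumi#3979, provider
# aliases are not honored yet, so the replacement still happens.
k8s_provider = k8s.Provider(
    "eks-k8s",
    kubeconfig=cluster.kubeconfig.apply(json.dumps),
    opts=pulumi.ResourceOptions(aliases=[pulumi.Alias(name="old-provider")]),
)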

Closing as this is now tracked in pulumi/pulumi#7012.

Thanks, @lblackstone. For now we’re happy to stick on v3.0.0, but I’ll try out your workaround if we need to upgrade.