pulumi: Crash when using k8s.yaml.ConfigFile

What happened?

Using the k8s.yaml.ConfigFile resource crashes during pulumi up.
```
error: Program failed with an unhandled exception:
error: Traceback (most recent call last):
  File "/home/alex/.pulumi/bin/pulumi-language-python-exec", line 107, in <module>
    loop.run_until_complete(coro)
  File "/usr/lib64/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/home/alex/git2/cloud/venv/lib64/python3.9/site-packages/pulumi/runtime/stack.py", line 126, in run_in_stack
    await run_pulumi_func(lambda: Stack(func))
  File "/home/alex/git2/cloud/venv/lib64/python3.9/site-packages/pulumi/runtime/stack.py", line 49, in run_pulumi_func
    func()
  File "/home/alex/git2/cloud/venv/lib64/python3.9/site-packages/pulumi/runtime/stack.py", line 126, in <lambda>
    await run_pulumi_func(lambda: Stack(func))
  File "/home/alex/git2/cloud/venv/lib64/python3.9/site-packages/pulumi/runtime/stack.py", line 149, in __init__
    func()
  File "/home/alex/.pulumi/bin/pulumi-language-python-exec", line 106, in <lambda>
    coro = pulumi.runtime.run_in_stack(lambda: runpy.run_path(args.PROGRAM, run_name='__main__'))
  File "/usr/lib64/python3.9/runpy.py", line 288, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/usr/lib64/python3.9/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "pulumi_main.py", line 27, in <module>
    from infra import regions as _
  File "/home/alex/git2/cloud/./infra/regions.py", line 166, in <module>
    Region(
  File "/home/alex/git2/cloud/./infra/regions.py", line 127, in __init__
    CertManager(cluster, cluster_dns_zone.provision_zone)
  File "/home/alex/git2/cloud/./infra/tls.py", line 40, in __init__
    self.crds = k8s.yaml.ConfigFile(
  File "/home/alex/git2/cloud/venv/lib64/python3.9/site-packages/pulumi_kubernetes/yaml/yaml.py", line 380, in __init__
    __ret__ = invoked.value["result"]
TypeError: 'NoneType' object is not subscriptable
error: an unhandled error occurred: Program exited with non-zero exit code: 1
```
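The failing SDK line is `__ret__ = invoked.value["result"]`: the invoke evidently came back with no value, so `invoked.value` is None and subscripting it raises the TypeError. A minimal stand-alone illustration of that failure mode (plain Python, not Pulumi code; `invoked_value` is a hypothetical stand-in for the SDK object):

```python
# Stand-in for what the SDK received: the invoke returned no value,
# so the attribute is None instead of a dict like {"result": ...}.
invoked_value = None

try:
    invoked_value["result"]  # the same subscript the SDK performs
    err = None
except TypeError as exc:
    err = str(exc)  # matches the error in the traceback above

print(err)  # 'NoneType' object is not subscriptable
```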
Steps to reproduce
The following code triggers it when using pulumi-kubernetes 3.20.0 or 3.20.1:
```python
self.crds = k8s.yaml.ConfigFile(
    "cert-manager-crds",
    file="https://github.com/jetstack/cert-manager/releases/download/v1.8.1/cert-manager.crds.yaml",
    opts=pulumi.ResourceOptions(
        parent=self,
        provider=cluster.provider,
    ),
)
```
Note that this does not happen if the resource is initially created with an older version (confirmed with 3.19.1) and then upgraded to the new provider.
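Since 3.19.1 is confirmed unaffected, one stopgap (my own suggestion, not an official recommendation) is to pin the provider below 3.20.0 until a fix lands, e.g. in requirements.txt:

```
pulumi-kubernetes==3.19.1
```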
Expected Behavior
Pulumi should create the resources defined in the yaml file in the kubernetes cluster specified by the provider in the ResourceOptions.
Actual Behavior
Pulumi crashes in the preview.
Versions used
```
CLI
Version      3.34.1
Go Version   go1.17.11
Go Compiler  gc

Plugins
NAME              VERSION
aws               5.9.1
cloudinit         1.3.0
command           0.2.0
docker-buildkit   0.1.17
eks               0.40.0
fivetran          0.1.6
frontegg          0.2.22
honeycomb         0.0.11
kubernetes        3.20.0
kubernetes-proxy  0.1.3
linkerd-link      0.0.7
postgresql        3.5.0
postgresql-exec   0.1.1
python            unknown
random            4.7.0
tls               4.6.0

Host
OS       fedora
Version  35
Arch     x86_64

This project is written in python: executable='/home/alex/git3/cloud/venv/bin/python3' version='3.9.13'
```
{{Stack output redacted because it's huge.}}
Dependencies:

```
NAME                                 VERSION
analytics-python                     1.4.0
black                                22.6.0
boto3                                1.24.26
boto3-stubs                          1.24.26
cryptography-347-stubs               1.0.0
dj-database-url                      0.5.0
django-csp                           3.7
django-migrations-formatter          1.0.0
django-proxy                         1.2.1
django-ratelimit                     3.0.1
django-simple-history                3.1.1
django-types                         0.15.0
djangorestframework-camel-case       1.3.0
djangorestframework-types            0.7.0
docker-image-py                      0.1.12
drf-spectacular                      0.22.1
gunicorn                             20.1.0
honeycomb-stubs                      0.2.2
isort                                5.10.1
jwcrypto                             1.3.1
kubernetes                           21.7.0
kubernetes-stubs                     21.7.0
mypy-boto3-ec2                       1.24.32
mypy-boto3-elb                       1.24.0
mypy-boto3-elbv2                     1.24.20.post14
mypy-boto3-resourcegroupstaggingapi  1.24.0
mypy-boto3-route53                   1.24.1
mypy-boto3-s3                        1.24.0
mypy-boto3-sts                       1.24.0
oauth2client                         4.1.3
pip                                  22.1.2
pre-commit                           2.19.0
psycopg2                             2.9.3
pulumi-cloudinit                     1.3.0
pulumi-command                       0.2.0
pulumi-docker-buildkit               0.1.17
pulumi-eks                           0.40.0
pulumi-fivetran                      0.1.6
pulumi-frontegg                      0.2.22
pulumi-honeycomb                     0.0.11
pulumi-kubernetes-proxy              0.1.3
pulumi-linkerd-link                  0.0.7
pulumi-postgresql                    3.5.0
pulumi-postgresql-exec               0.1.1
pulumi-random                        4.7.0
pulumi-tls                           4.6.0
pydantic                             1.9.1
pyOpenSSL                            22.0.0
python-json-logger                   2.0.2
pytz                                 2022.1
sentry-sdk                           1.5.12
slack-sdk                            3.16.2
timeout-decorator                    0.5.0
types-cachetools                     5.2.1
types-pyOpenSSL                      22.0.3
wheel                                0.37.1
whitenoise                           6.2.0
```
Additional context
This seems very similar to (possibly a duplicate of) https://github.com/pulumi/pulumi-kubernetes/issues/2038, but that one is in TypeScript, so it's hard to tell.
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Reactions: 1
- Comments: 22 (15 by maintainers)
@danylo-omelchenko The issue you are seeing is discussed in https://github.com/pulumi/pulumi-kubernetes/issues/2038. For the time being, you might want to consider splitting your Kubernetes code into a separate dependent stack using a stack reference: https://www.pulumi.com/docs/intro/concepts/stack/#stackreferences. Alternatively, you could create the ConfigFile resource in an apply block, but that comes with the downside of not having previews for the ConfigFile resources. We are still looking at alternatives for the above issue at the moment.
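The apply-block workaround mentioned above amounts to deferring resource creation until an output resolves. A toy sketch of that shape, using a fake Output class rather than the real `pulumi.Output` (all names here are illustrative, not Pulumi's API), to show why such resources only materialize at deploy time and therefore get no preview:

```python
class FakeOutput:
    """Toy stand-in for pulumi.Output: wraps a value and supports .apply()."""

    def __init__(self, value):
        self._value = value

    def apply(self, fn):
        # Real Pulumi only runs fn once the value is known during deployment,
        # which is why resources created inside fn do not appear in previews.
        return FakeOutput(fn(self._value))


def create_crds(provider_name):
    # Placeholder for k8s.yaml.ConfigFile("cert-manager-crds", ...,
    # opts=ResourceOptions(provider=...)) in real code.
    return f"ConfigFile(cert-manager-crds, provider={provider_name})"


provider = FakeOutput("cluster-provider")
crds = provider.apply(create_crds)
print(crds._value)  # ConfigFile(cert-manager-crds, provider=cluster-provider)
```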
Figured out the issue was ultimately related to the k8s provider used. If someone encounters the same error, there is, as of today, an open bug with a workaround here.
Found the right import for the new library; https://github.com/pulumi/pulumi/pull/10284 should fix this.