terraform-provider-flux: Customizing document unclear or incorrect?
Hi!
With reference to the doc
-
Question about the document: it specifies this file, but the kustomization file it is supposed to generate uses a different filename.
-
Via the above method it seems possible to add a strategic merge patch. However, I'd like to configure something like command-line arguments (e.g. `--concurrent=6`, etc.), but that seems to be possible only with a JSON patch:
```yaml
- op: add
  path: /spec/template/spec/containers/0/args/0
  value: --concurrent=6
```
Is there a way to add `patchesJson6902` with the current customization implementation?
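For context, in plain kustomize such a JSON6902 patch is wired up through `patchesJson6902` in a `kustomization.yaml`. A sketch (the target and patch file name here are illustrative, not from the provider's docs):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patchesJson6902:
  - target:
      group: apps
      version: v1
      kind: Deployment
      name: kustomize-controller
    # concurrency-patch.yaml would contain the JSON6902 ops shown above
    path: concurrency-patch.yaml
```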
About this issue
- Original URL
- State: open
- Created 3 years ago
- Reactions: 1
- Comments: 19 (4 by maintainers)
FWIW, I believe it won't even matter if there were a single stage of `kubectl_manifest` install. Ideally, the kustomize build would have to be done in Terraform and then `kubectl_manifest` used to apply the result; that way Terraform will not see a diff in subsequent runs. I had a quick look at the kustomize provider but couldn't understand how to get that into the mix.

If anybody agrees with the direction of performing the kustomize build in Terraform and then applying the resultant YAML to the cluster, and happens to think of a solution for how to get it done, please do share.
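One possible shape for that direction, assuming the kbst/terraform-provider-kustomization interface is as documented (the path and resource names here are illustrative, and you should verify the attributes against the provider version you use):

```hcl
# Run the kustomize build inside Terraform: the overlay directory
# contains kustomization.yaml plus the patches, layered on top of
# the generated Flux manifests.
data "kustomization_build" "flux" {
  path = "./clusters/cluster1"
}

# Apply the already-patched result, so Terraform's desired state
# matches what actually runs in the cluster and later plans show no diff.
resource "kustomization_resource" "flux" {
  for_each = data.kustomization_build.flux.ids
  manifest = data.kustomization_build.flux.manifests[each.value]
}
```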
@kingdonb @cmharden first I’d like to thank you for your responses and taking the time to follow up on this thread.
@cmharden I have been using exactly that spec you’ve shared.
I think I have failed to explain the problem with this approach, and would like to try to explain it here again, and why I previously suggested that we do the JSON6902 patch in the `kustomization.yaml` in addition to the existing strategic merge patch customization.

The steps
So far all good, however:
Some output
We start with the original manifest deployed; this is taken from the running pod's spec (`kubectl get po -l app=helm-controller -oyaml`):
The first-ever plan with the patch @cmharden shared (albeit for helm-controller, not kustomization controller):
We see the change being applied:
Change is in the arguments on the new pod:
So far so good.
But, now if I run terraform plan again it wants to update the deployment spec for helm-controller:
This is because the `module.flux.kubectl_manifest.install` resource is trying to apply its contents; it has no idea about the `kustomization` patch of the `flux-system` Kustomization.

Issue itself
I have achieved it by creating a `./clusters/cluster1/kustomization.yaml` file with content:

Whether you create such a file in TF or create it manually is up to you. This won't fix TF reporting changes in plans, but at least it's much easier to manage, in my opinion.
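The actual file content is omitted above; an illustrative sketch of what such a `kustomization.yaml` might contain (this is an assumption on my part, not the author's file, using kustomize's inline `patches` form):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  - target:
      kind: Deployment
      name: helm-controller
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --concurrent=6
```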
To avoid the TF reporting changes I have used the `ignore_fields` parameter in the `kubectl_manifest` resource to ignore the changes done by Kustomize. It's not an ideal solution, but it works for now.
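For reference, the `ignore_fields` workaround on a `kubectl_manifest` resource (gavinbunney/kubectl provider) might look roughly like this; which field paths to ignore depends on what the in-cluster Kustomization patches, and the data source wiring is a sketch you should check against your provider versions:

```hcl
# Split the generated Flux install manifests into individual documents.
data "kubectl_file_documents" "install" {
  content = data.flux_install.main.content
}

resource "kubectl_manifest" "install" {
  for_each  = data.kubectl_file_documents.install.manifests
  yaml_body = each.value

  # Ignore the container spec, which the flux-system Kustomization
  # rewrites in-cluster, so subsequent plans stay clean.
  ignore_fields = ["spec.template.spec.containers"]
}
```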
Some thoughts
Adding anything to the `flux_sync` resource won't matter, because the manifests come from the `flux_install` resource. `flux_install` generates the YAML manifests without any changes/patches applied to them, and these original manifests are consumed by the `kubectl_manifest` resource. The changes will be reported in the TF plan until you provide patched manifests to `kubectl_manifest`, so it won't try to update/revert them.

I have never used `flux_sync`'s `patch_names` param, but it seems to me the entire idea is flawed. Specifying it in `flux_sync` will cause a double roll-out: once by `kubectl_manifest` using the original manifests, and a couple of seconds later by Flux itself using Kustomize, no matter what I do. In order to really provide a solution to customize Flux deployments, `flux_install` needs to provide ready-to-go manifests to `kubectl_manifest`. Have I missed something here?

I have found that provider as well and even raised an FR to support our use case (https://github.com/kbst/terraform-provider-kustomization/issues/163), where I got pointed to another issue from 2021 in which they declined the idea. But it should still be doable with `local_file`: you can save the generated manifests from `flux_install` and `flux_sync` as files during plan/apply execution, have a `kustomization.yaml` as a file in the repository, and use them with `kustomize_build` to generate the final manifests for `kubectl_manifest`. I am not a fan of this approach, because I run TF in ephemeral k8s pods, so any generated local files would show up as a change in each TF plan/apply. But if you are running it from (for example) a workstation it could work.

Hello and thank you for bringing this up. My use case is simply to enable the AWS ECR auto login as described here.
Since it is container args that need to be edited, `patchesStrategicMerge` will not work.
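A JSON6902 `add` on the args list is the usual way around that; a sketch for the ECR case (the controller name and flag here are my assumption based on the image-reflector-controller docs of that era, so verify them against your Flux version):

```yaml
# ecr-autologin-patch.yaml — JSON6902 ops, referenced from
# patchesJson6902 in kustomization.yaml with a Deployment target
# of image-reflector-controller
- op: add
  path: /spec/template/spec/containers/0/args/-
  value: --aws-autologin-for-ecr
```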
@kingdonb Assign to me, please