terraform-provider-flux: Customizing document unclear or incorrect?

Hi!

With reference to the Customizing doc:

  1. A question about the document: it specifies this file, but the kustomization file it is supposed to generate references a different filename.

  2. Via the above method it seems possible to add a strategic merge patch; however, I'd like to configure command-line arguments (e.g. --concurrent=6), which seems to be possible only with a JSON patch:

    - op: add
      path: /spec/template/spec/containers/0/args/0
      value: --concurrent=6

Is there a way to add patchesJson6902 with the current customization implementation?
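For reference, here is a hand-written sketch of what the generated kustomization.yaml would need to carry for that to work. patchesJson6902 is standard kustomize, but the resource name, repository, branch and file paths below are illustrative, not what the provider emits today; patch-concurrent.yaml would hold the ops shown above.

resource "github_repository_file" "kustomize" {
  repository = "gitops"
  branch     = "main"
  file       = "clusters/my-cluster/flux-system/kustomization.yaml"
  content    = <<-EOT
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
    - gotk-components.yaml
    - gotk-sync.yaml
    patchesJson6902:
    - target:
        group: apps
        version: v1
        kind: Deployment
        name: kustomize-controller
        namespace: flux-system
      path: patch-concurrent.yaml
  EOT
}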

About this issue

  • State: open
  • Created 3 years ago
  • Reactions: 1
  • Comments: 19 (4 by maintainers)

Most upvoted comments

FWIW, I believe it wouldn't matter even if there were a single stage of kubectl_manifest install. Ideally, the kustomize build would be done in Terraform and then kubectl_manifest would apply the result; that way Terraform would not see a diff in subsequent runs. I had a quick look at the kustomize provider but couldn't understand how to get it into the mix.

If anybody agrees with the direction of performing the kustomize build in Terraform and then applying the resulting YAML to the cluster, and happens to think of a way to get it done, please do share.
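A rough, untested sketch of that direction, assuming the kbst/kustomization and gavinbunney/kubectl providers, with the path pointing at whatever directory holds gotk-components.yaml, gotk-sync.yaml and the patched kustomization.yaml:

terraform {
  required_providers {
    kustomization = { source = "kbst/kustomization" }
    kubectl       = { source = "gavinbunney/kubectl" }
  }
}

# Render the final, already-patched manifests with kustomize inside Terraform,
# so what kubectl_manifest applies matches what ends up in the cluster and
# subsequent plans see no diff.
data "kustomization_build" "flux" {
  path = "./clusters/dev-beta/flux-system"
}

resource "kubectl_manifest" "install" {
  # manifests is a map of resource ID => JSON-encoded manifest (JSON is valid YAML)
  for_each  = data.kustomization_build.flux.manifests
  yaml_body = each.value
}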

@kingdonb @cmharden first I’d like to thank you for your responses and taking the time to follow up on this thread.

@cmharden I have been using exactly that spec you’ve shared.

I think I failed to explain the problem with this approach, so I would like to try again here, and also explain why I previously suggested doing the JSON6902 patch in the kustomization.yaml in addition to the existing strategic merge patch customization.

The steps

  • Add the patch to terraform (exactly the one you've shared @cmharden, although I've patched helm-controller instead, but that doesn't matter)
  • Run terraform plan and apply: the files get committed to GitHub, and consequently the helm-controller deployment is patched and the pod is upgraded to use the new spec.

So far all good, however:

  • Now if I run terraform plan again, the plan shows that it wants to update the deployment spec of helm-controller. This is not desirable: it is not idempotent, and applying it will trigger a rolling upgrade.
  • If I run terraform apply, the helm-controller deployment is reverted to its original state; a few seconds later the Kustomization patch kicks in and reapplies itself, resulting in yet another rolling upgrade.

Some output

We start with the original manifest deployed; this is taken from the running pod's spec (kubectl get po -l app=helm-controller -oyaml):

  spec:
    containers:
    - args:
      - --events-addr=http://notification-controller.flux-system.svc.cluster.local/
      - --watch-all-namespaces=true
      - --log-level=info
      - --log-encoding=json
      - --enable-leader-election

The first-ever plan with the patch @cmharden shared (albeit for helm-controller, not kustomize-controller):

Terraform will perform the following actions:

  # module.flux.github_repository_file.kustomize will be updated in-place
  ~ resource "github_repository_file" "kustomize" {
      ~ content             = <<-EOT

            apiVersion: kustomize.config.k8s.io/v1beta1
            kind: Kustomization
            resources:
            - gotk-sync.yaml
            - gotk-components.yaml
          + patchesStrategicMerge:
          + - patch-main.yaml
        EOT
        id                  = "gitops/clusters/dev-beta/flux-system/kustomization.yaml"
      ~ overwrite_on_create = false -> true
        # (8 unchanged attributes hidden)
    }

  # module.flux.github_repository_file.patches["main"] will be created
  + resource "github_repository_file" "patches" {
      + branch              = "dev-beta"
      + commit_author       = (known after apply)
      + commit_email        = (known after apply)
      + commit_message      = (known after apply)
      + commit_sha          = (known after apply)
      + content             = <<-EOT
            ---
            apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
            kind: Kustomization
            metadata:
              name: flux-system
              namespace: flux-system
            spec:
              patches:
                - patch: |-
                    - op: add
                      path: /spec/template/spec/containers/0/args/0
                      value: --concurrent=6
                    - op: replace
                      path: /spec/template/spec/containers/0/resources/limits/cpu
                      value: "2"
                    - op: replace
                      path: /spec/template/spec/containers/0/resources/limits/memory
                      value: "2Gi"
                  target:
                    group: apps
                    version: v1
                    kind: Deployment
                    name: helm-controller
                    namespace: flux-system
        EOT
      + file                = "clusters/dev-beta/flux-system/patch-main.yaml"
      + id                  = (known after apply)
      + overwrite_on_create = false
      + repository          = "gitops"
      + sha                 = (known after apply)
    }

Plan: 1 to add, 1 to change, 0 to destroy.

We see the change being applied:

NAME                                       READY   STATUS              RESTARTS   AGE
helm-controller-5648bfdd55-bxrf7           0/1     ContainerCreating   0          2s
helm-controller-f76d67496-zt25t            1/1     Running             0          4d15h
kustomize-controller-7977b87f7b-5nhxf      1/1     Running             0          4d15h
notification-controller-656cdc97bc-6x7kj   1/1     Running             0          4d15h
source-controller-68d4f75576-844vk         1/1     Running             0          4d15h

The change shows up in the arguments of the new pod:

    - args:
      - --concurrent=6   ### \o/
      - --events-addr=http://notification-controller.flux-system.svc.cluster.local/
      - --watch-all-namespaces=true
      - --log-level=info
      - --log-encoding=json
      - --enable-leader-election

So far so good.

But now, if I run terraform plan again, it wants to update the deployment spec for helm-controller:

Terraform will perform the following actions:

  # module.flux.kubectl_manifest.install["apps/v1/deployment/flux-system/helm-controller"] will be updated in-place
  ~ resource "kubectl_manifest" "install" {
        id                      = "/apis/apps/v1/namespaces/flux-system/deployments/helm-controller"
        name                    = "helm-controller"
      ~ yaml_incluster          = (sensitive value)
        # (12 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

This is because module.flux.kubectl_manifest.install is trying to apply its original contents; it has no idea about the patch applied by the flux-system Kustomization.

Issue itself

I have achieved it by creating a ./clusters/cluster1/kustomization.yaml file with this content:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - flux-system
  - my_other_manifests
  - some_other_manifest.yaml
patches:
  # enable Native AWS ECR Auto-Login https://fluxcd.io/docs/guides/image-update/#using-native-aws-ecr-auto-login
  - patch: |-
      - op: add
        path: /spec/template/spec/containers/0/args/0
        value: --aws-autologin-for-ecr
    target:
      kind: Deployment
      name: "image-reflector-controller"

Whether you create such a file in TF or manually is up to you; a sketch of the TF route follows. This won't fix TF reporting changes in plans, but at least it's much easier to manage, in my opinion.
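For the TF route, something like this would do (a sketch; the repository, branch and local source path are illustrative):

# Keep the cluster-level kustomization.yaml shown above under Terraform control.
resource "github_repository_file" "cluster_kustomization" {
  repository = "gitops"
  branch     = "main"
  file       = "clusters/cluster1/kustomization.yaml"
  content    = file("${path.module}/files/cluster1-kustomization.yaml")
}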

To stop TF from reporting those changes, I have used the ignore_fields parameter of the kubectl_manifest resource to ignore the fields rewritten by Kustomize.

resource "kubectl_manifest" "install" {
  for_each   = { for v in local.install : lower(join("/", compact([v.data.apiVersion, v.data.kind, lookup(v.data.metadata, "namespace", ""), v.data.metadata.name]))) => v.content }
  depends_on = [kubernetes_namespace.flux_system]
  yaml_body  = each.value
  ignore_fields = ["spec.template.spec.containers.0.args"]
}

It’s not an ideal solution, but it works for now.

Some thoughts

Adding anything to the flux_sync resource won't matter, because the manifests come from the flux_install resource. flux_install generates the YAML manifests without any changes/patches applied to them, and those original manifests are consumed by the kubectl_manifest resource. The changes will keep being reported in the TF plan until you provide already-patched manifests to kubectl_manifest, so that it won't try to update/revert them.

I have never used flux_sync's patch_names param, but it seems to me the entire idea is flawed: specifying patches in flux_sync will cause a double roll-out, once by kubectl_manifest using the original manifests, and a couple of seconds later by Flux itself using Kustomize, no matter what I do. To really support customizing the Flux deployments, flux_install needs to hand ready-to-go manifests to kubectl_manifest. Have I missed something here?

I have found that provider as well and even raised an FR to support our use case (https://github.com/kbst/terraform-provider-kustomization/issues/163), where I got pointed to another issue from 2021 in which they declined the idea. It should still be doable with local_file, though: save the manifests generated by flux_install and flux_sync as files during plan/apply execution, keep a kustomization.yaml as a file in the repository, and feed them to kustomization_build to produce the final manifests for kubectl_manifest. I am not a fan of this approach, because I run TF in ephemeral k8s pods, so the generated local files would show up as a change in every TF plan/apply. But if you are running it from, say, a workstation, it could work.
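For completeness, a sketch of that local_file wiring (untested; it assumes the legacy flux_install/flux_sync data sources, which expose the generated YAML as a content attribute, and the build only succeeds once the files actually exist on disk, which is exactly the plan-time annoyance mentioned above):

resource "local_file" "gotk_components" {
  filename = "${path.module}/flux-system/gotk-components.yaml"
  content  = data.flux_install.main.content
}

resource "local_file" "gotk_sync" {
  filename = "${path.module}/flux-system/gotk-sync.yaml"
  content  = data.flux_sync.main.content
}

# A kustomization.yaml referencing the two files above (plus the patches) is
# kept as a real file in the same directory; kustomization_build then renders
# the patched manifests for kubectl_manifest, as in the earlier sketch.
data "kustomization_build" "flux" {
  path       = "${path.module}/flux-system"
  depends_on = [local_file.gotk_components, local_file.gotk_sync]
}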

Hello and thank you for bringing this up. My use case is simply to enable the AWS ECR auto login as described here.

Since it is container args that need to be edited, patchesStrategicMerge will not work: args is a plain list of strings with no merge key, so a strategic merge patch would replace the whole list rather than append to it.

@kingdonb Assign to me, please