flux2: Breaking changes in Flux due to Kustomize v4

Starting with version 0.15.0, Flux and its controllers have been upgraded to Kustomize v4. While Kustomize v4 comes with many improvements and bug fixes, it introduces a couple of breaking changes.

Remote archives

Due to the removal of hashicorp/go-getter from Kustomize v4, the set of URLs accepted by Kustomize in the resources field is reduced to file system paths, URLs to plain YAML files, and values compatible with git clone.

This means you can no longer use resources from archives (zip, tgz, etc.).

No longer works:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://github.com/rook/rook/archive/refs/heads/master.zip//rook-master/cluster/examples/kubernetes/ceph/crds.yaml

Works:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://raw.githubusercontent.com/rook/rook/v1.6.0/cluster/examples/kubernetes/ceph/crds.yaml
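
Also works (git clone compatible; a minimal sketch, assuming the referenced repository contains a kustomization.yaml at the given path — the repository and ref are shown only as an illustration):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- github.com/rancher/system-upgrade-controller?ref=v0.9.1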

Non-string YAML keys

Due to a bug in Kustomize v4, if you have non-string keys in your manifests, the controller will fail to build the final manifest.

The non-string keys bug affects Helm releases such as nginx-ingress, for example:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: nginx-ingress
spec:
  values:
    tcp:
      2222: "app/server:2222"

The above will fail with an error ending in json: unsupported type: map[interface {}]interface {}.

To fix this issue, you have to make the YAML keys into strings, e.g.:

  values:
    tcp:
      "2222": "app/server:2222"

Duplicate YAML keys

Unlike Helm, the Kustomize YAML parser (kyaml) does not accept duplicate keys: where Helm drops the duplicates, Kustomize errors out. This impacts helm-controller, as it uses kustomize/kyaml to label objects reconciled by a HelmRelease.

For example, a chart that adds the app.kubernetes.io/name label more than once will result in a HelmRelease install failure:

map[string]interface {}(nil): yaml: unmarshal errors:
line 21: mapping key "app.kubernetes.io/name" already defined at line 20
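
A minimal sketch of the kind of rendered manifest that triggers this, with an illustrative label value:

metadata:
  labels:
    app.kubernetes.io/name: my-app
    app.kubernetes.io/name: my-app # duplicate key, rejected by kyaml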

YAML formatting

Due to a bug in Kustomize v4 that makes the image-automation-controller crash when YAMLs contain non-ASCII characters, we had to update the underlying go-yaml package to fix the panics.

The gopkg.in/yaml.v3 update means that the indentation style changed:

From:

spec:
  containers:
  - name: one
    image: image1:v1.0.0 # {"$imagepolicy": "automation-ns:policy1"}
  - name: two
    image: image2:v1.0.0 # {"$imagepolicy": "automation-ns:policy2"}

To:

spec:
  containers:
    - name: one
      image: image1:v1.0.0 # {"$imagepolicy": "automation-ns:policy1"}
    - name: two
      image: image2:v1.0.0 # {"$imagepolicy": "automation-ns:policy2"}


Most upvoted comments

maybe flux should not include such massive breaking changes in minor releases.

You may not be aware, but flux2 has no GA release, so we can’t bump the major version before going GA (aka 2.0.0). Every minor release of flux2 could come with breaking changes; we try to communicate those ahead of time. In the case of Kustomize v4, I documented the whole thing months ago here: https://github.com/fluxcd/flux2/issues/918

I am pretty shocked how easily the kustomize crowd breaks established standards, given how dogmatic they are about their templating philosophy. On that note, maybe flux should not include such massive breaking changes in minor releases.

From an operational perspective, this is a nightmare. I guess we will stay on flux 0.14.0 for some time until this has settled.

@stefanprodan Thank you for pushing back on this and clearly documenting the impacts. 👍

Due to the removal of hashicorp/go-getter from Kustomize v4…

🤦‍♂️

Looks like this is fixed in upstream now: kubernetes-sigs/kustomize#3675

Have a confirmed fix, but need to do the required writing of more extensive tests. Will aim to have it available before EOD UTC.

@onedr0p this is a newly introduced security constraint set too tight. I will get this sorted now, and ensure a regression test is added.

I stumbled upon the “Duplicate YAML keys” problem in one of my releases. Fixing it is rather easy.

I’m a little concerned about how to avoid this kind of failure in the future. What is the test that needs to be added to CI so it would break before merging to master/develop?

I found a very awkward way to do it, but I wonder if someone found something more sustainable…

and what is your way?

Would there maybe be a way to ignore those errors?

No, there is no workaround. I think Helm itself will soon error out the same as Flux, once they update their YAML processor package.

The helm-controller does run a default Kustomize plugin to be able to trace resources that originate from a HelmRelease by adding labels.

The impact of this may however have been underestimated with the recent changes in Kustomize v4, and we may want to provide some sort of configuration flag to disable this default behavior for charts it does not cope with.
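
For context, the tracing labels look roughly like this on a reconciled object (a sketch; the exact label keys are an assumption about current helm-controller behavior and may differ between versions):

metadata:
  labels:
    helm.toolkit.fluxcd.io/name: nginx-ingress # name of the owning HelmRelease (assumed key)
    helm.toolkit.fluxcd.io/namespace: default # namespace of the owning HelmRelease (assumed key)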

I’m getting similar error with flux version 0.30.2. It used to work with 0.28.0.

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
  - ../../base
patchesStrategicMerge:
  - ../../gotk-patches.yaml
✗ accumulating resources: accumulation err='accumulating resources from '../../base': fs-security-constraint read /tmp/flux-bootstrap-3660114182/clusters/base: path '/tmp/flux-bootstrap-3660114182/clusters/base' is not in or below '/tmp/flux-bootstrap-3660114182/clusters/staging'': fs-security-constraint abs /tmp/flux-bootstrap-3660114182/clusters/base: path '/tmp/flux-bootstrap-3660114182/clusters/base' is not in or below '/tmp/flux-bootstrap-3660114182/clusters/staging'

Users running into issues after updating to v0.29.0 should see smooth operation again with v0.29.1. Sorry about any inconvenience it may have caused; the Terraform provider release will follow shortly.

I just upgraded to v0.29.0 and noticed that a kustomization like this is no longer supported. Is this safe to assume now?

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/rancher/system-upgrade-controller?ref=v0.9.1

I looked into it a bit more and discovered this PR https://github.com/kubernetes-sigs/kustomize/pull/4453 which makes it seem like the below would work:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - git::https://github.com/rancher/system-upgrade-controller?ref=v0.9.1

But I am getting an error:

kustomize build failed: accumulating resources: accumulation err='accumulating resources from 'system-upgrade': read /tmp/apps2651878754/cluster/apps/system-upgrade: is a directory': recursed accumulation of path '/tmp/apps2651878754/cluster/apps/system-upgrade': accumulating resources: accumulation err='accumulating resources from 'system-upgrade-controller': read /tmp/apps2651878754/cluster/apps/system-upgrade/system-upgrade-controller: is a directory': recursed accumulation of path '/tmp/apps2651878754/cluster/apps/system-upgrade/system-upgrade-controller': accumulating resources: accumulation err='accumulating resources from 'git::https://github.com/rancher/system-upgrade-controller?ref=v0.9.1': open /tmp/apps2651878754/cluster/apps/system-upgrade/system-upgrade-controller/git::https:/github.com/rancher/system-upgrade-controller?ref=v0.9.1: no such file or directory': fs-security-constraint abs /tmp/kustomize-356128291: path '/tmp/kustomize-356128291' is not in or below '/tmp/apps2651878754'

And there’s already a new kustomize release which includes the fix: https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv4.4.0

Can’t wait for the update to this newer version of kustomize in Flux, anchor support is amazing.

@masterkain:

is this bug related?

No. That is an issue with your YAML: you are defining the volumeMount name twice in both initContainers.

          volumeMounts:
          - name: data #### Here...
            mountPath: /var/www/html/sendy/uploads
            name: sendy-data #### ... and here.
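
A corrected version declares the name once per mount; sendy-data here is an assumption, keep whichever volume name is actually defined in the pod's volumes:

          volumeMounts:
          - name: sendy-data
            mountPath: /var/www/html/sendy/uploads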

@onedr0p I’ve updated the issue with examples, let me know if it answers your question.