kustomize-controller: badrequest, error: data values must be of type string

Describe the bug

We had Flux v1 set up with 8 separate GitLab repos supporting 8 different Kubernetes clusters.

About a month ago we started migrating to Flux v2, as the idea of all the clusters sharing a common base and only kustomizing the parts that differ was appealing (when we started we only had 2 clusters, which was far more manageable, but it has grown).

This all went really well until we migrated one particular cluster, which for some reason has a problem with every Secret. The error message is in the title, but a fuller example is:

Secret/kubernetes-dashboard/kubernetes-dashboard-csrf badrequest, error: data values must be of type string...

We haven’t touched this secret; it is just part of the original YAML for installing kubernetes-dashboard. The strange thing is that it works on the other 7 clusters, which share exactly the same code!?!
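If it helps with debugging, I can dump the secret from one of the working clusters and pull the kustomize-controller logs from the failing one. Roughly the commands I would use (the context name is a placeholder for our kubeconfig contexts):

kubectl --context <working-cluster> -n kubernetes-dashboard get secret kubernetes-dashboard-csrf -o yaml
kubectl -n flux-system logs deployment/kustomize-controller | grep kubernetes-dashboard-csrf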

This is the Flux version all clusters started out on:

flux: v0.21.1
helm-controller: v0.12.1
kustomize-controller: v0.16.0
notification-controller: v0.18.1
source-controller: v0.17.2

and Kubernetes v1.21.1.

I have been trying to debug this for the last week, and in doing so I upgraded the problem cluster to:

flux: v0.23.0
helm-controller: v0.13.0
kustomize-controller: v0.18.0
notification-controller: v0.18.1
source-controller: v0.18.0

That didn’t change anything, so I also upgraded Kubernetes to v1.22.3. It still didn’t help.

I have run out of ideas… I nearly forgot to mention the important part: I did of course try putting quotes around the data values, and when that didn’t work, around the keys as well, but it didn’t make the slightest bit of difference.

An example of one Secret that is fine on the other 7 clusters but not this one:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: cert-manager-vault-approle
  namespace: cert-manager
data: 
  secretId: "base64encodedstring"

Even without the quotes around the base64-encoded string, all of the other clusters work fine.

This secret doesn’t already exist in the problem cluster either; I deleted it to make sure the issue wasn’t caused by existing values.
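One more variation I could try, purely as an experiment (I haven’t confirmed it makes any difference), is using stringData instead of data so the API server does the base64 encoding itself:

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: cert-manager-vault-approle
  namespace: cert-manager
stringData:
  secretId: plain-text-placeholder-value  # not base64; the API server encodes it into data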

Please help. I really don’t want to revert to Flux v1 for this one cluster, but I’ve run out of ideas for things to try.

Steps to reproduce

I can’t tell you how to reproduce this: as I said, we have done the same thing for 8 clusters and only 1 has the problem. It is also worth noting that, apart from the different client sites they host, this cluster has been configured identically to one of the working clusters all the way through.
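The closest I can get to a manual reproduction is applying the same Secret manifest by hand with server-side apply, which as far as I understand is what the kustomize-controller does (secret.yaml here being the manifest shown above):

kubectl apply --server-side -f secret.yaml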

To bootstrap we ran:

sudo GITLAB_TOKEN=our-token /usr/local/bin/flux bootstrap gitlab \
  --namespace=flux-system \
  --hostname=gitlab.selfhosted.com \
  --owner=admin \
  --repository=flux-system \
  --path=clusters/dev \
  --branch=main \
  --token-auth

Expected behavior

That it should work the same as the others

Screenshots and recordings

No response

OS / Distro

Centos Stream 8

Flux version

N/A

Flux check

► checking prerequisites
✔ Kubernetes 1.22.3 >=1.19.0-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.13.0
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.18.0
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.18.1
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v0.18.0
✔ all checks passed

Git provider

GitLab (self-hosted)

Container Registry provider

No response

Additional context

No response

Code of Conduct

  • I agree to follow this project’s Code of Conduct

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 30 (9 by maintainers)

Most upvoted comments

Ah yes, the infamous Elasticsearch webhook that blocks server-side apply… Going to close this, as there’s nothing we can do about it in Flux; people have to update Elasticsearch.
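To check whether a webhook like that is installed in a given cluster, listing the admission webhook configurations is usually enough (the exact names vary per installation):

kubectl get validatingwebhookconfigurations
kubectl get mutatingwebhookconfigurations

Comparing that output between the failing cluster and a working one should show the difference.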