kustomize-controller: pod stuck in CrashLoopBackOff (OOMKilled)
We created a Flux configuration in our Kubernetes cluster, and the kustomize-controller pod keeps getting stuck in a CrashLoopBackOff state. The logs don't point to any particular root cause for the crash.
Pod status:
$ kubectl get pods -n flux-system
NAME                                       READY   STATUS             RESTARTS        AGE
fluxconfig-agent-7bbdd4f98f-b6gpk          2/2     Running            0               2d22h
fluxconfig-controller-cc788b88f-wjs9l      2/2     Running            0               2d22h
helm-controller-67c6cf57b-qmgrz            1/1     Running            0               2d22h
kustomize-controller-7cfb84c5fd-bg94s      0/1     CrashLoopBackOff   21 (114s ago)   2d22h
notification-controller-5485c8d468-xgvg9   1/1     Running            0               2d22h
source-controller-95c44bbf8-f9cht          1/1     Running            0               2d22h
Pod events:
$ kubectl describe pod kustomize-controller-7cfb84c5fd-bg94s -n flux-system
...
Events:
  Type     Reason   Age                  From     Message
  ----     ------   ----                 ----     -------
  Normal   Pulled   30m (x13 over 86m)   kubelet  Container image "mcr.microsoft.com/oss/fluxcd/kustomize-controller:v0.27.1" already present on machine
  Normal   Created  30m (x13 over 86m)   kubelet  Created container manager
  Normal   Started  30m (x13 over 86m)   kubelet  Started container manager
  Warning  BackOff  67s (x306 over 85m)  kubelet  Back-off restarting failed container
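The events only show a generic back-off, so given the OOMKilled in the title, it is worth confirming the termination reason directly from the container's last state (a diagnostic sketch, reusing the pod name from above):

$ kubectl get pod kustomize-controller-7cfb84c5fd-bg94s -n flux-system \
    -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

If this prints OOMKilled, the container is being killed for exceeding its memory limit rather than crashing on its own, which would also explain why the logs below end abruptly without an error.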
Pod logs:
$ kubectl logs kustomize-controller-7cfb84c5fd-bg94s -n flux-system
{"level":"info","ts":"2022-09-09T17:51:36.868Z","logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":"2022-09-09T17:51:36.870Z","logger":"setup","msg":"starting manager"}
{"level":"info","ts":"2022-09-09T17:51:36.871Z","msg":"Starting server","kind":"health probe","addr":"[::]:9440"}
{"level":"info","ts":"2022-09-09T17:51:36.871Z","msg":"Starting server","path":"/metrics","kind":"metrics","addr":"[::]:8080"}
I0909 17:51:36.973225 7 leaderelection.go:248] attempting to acquire leader lease flux-system/kustomize-controller-leader-election...
I0909 17:52:21.794522 7 leaderelection.go:258] successfully acquired lease flux-system/kustomize-controller-leader-election
{"level":"info","ts":"2022-09-09T17:52:21.795Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: *v1beta2.Kustomization"}
{"level":"info","ts":"2022-09-09T17:52:21.795Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: *v1beta2.OCIRepository"}
{"level":"info","ts":"2022-09-09T17:52:21.795Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: *v1beta2.GitRepository"}
{"level":"info","ts":"2022-09-09T17:52:21.795Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: *v1beta2.Bucket"}
{"level":"info","ts":"2022-09-09T17:52:21.795Z","logger":"controller.kustomization","msg":"Starting Controller","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization"}
{"level":"info","ts":"2022-09-09T17:52:21.901Z","logger":"controller.kustomization","msg":"Starting workers","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","worker count":4}
{"level":"info","ts":"2022-09-09T17:52:21.901Z","logger":"controller.kustomization","msg":"All dependencies are ready, proceeding with reconciliation","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"main-config4-data-api","namespace":"main-config4"}
{"level":"info","ts":"2022-09-09T17:52:21.925Z","logger":"controller.kustomization","msg":"Dependencies do not meet ready condition, retrying in 30s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"main-config5-frontend","namespace":"main-config5"}
{"level":"info","ts":"2022-09-09T17:52:21.930Z","logger":"controller.kustomization","msg":"All dependencies are ready, proceeding with reconciliation","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"smilr-data-api","namespace":"smilr"}
{"level":"info","ts":"2022-09-09T17:52:21.932Z","logger":"controller.kustomization","msg":"Dependencies do not meet ready condition, retrying in 30s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"smilr-frontend","namespace":"smilr"}
{"level":"info","ts":"2022-09-09T17:52:21.935Z","logger":"controller.kustomization","msg":"All dependencies are ready, proceeding with reconciliation","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"main-config-data-api","namespace":"main-config"}
{"level":"info","ts":"2022-09-09T17:52:22.856Z","logger":"controller.kustomization","msg":"server-side apply completed","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"smilr-data-api","namespace":"smilr","output":{"Deployment/default/data-api":"unchanged","Service/default/data-api":"unchanged"}}
{"level":"info","ts":"2022-09-09T17:52:22.898Z","logger":"controller.kustomization","msg":"server-side apply completed","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"main-config-data-api","namespace":"main-config","output":{"Deployment/default/data-api":"configured","Service/default/data-api":"configured"}}
{"level":"info","ts":"2022-09-09T17:52:22.903Z","logger":"controller.kustomization","msg":"server-side apply completed","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"main-config4-data-api","namespace":"main-config4","output":{"Deployment/default/data-api":"configured","Service/default/data-api":"configured"}}
Ok, mystery solved then: add a .sourceignore file to that repo and exclude all paths that don't contain Kubernetes YAMLs. Still, downloading gigabytes of unrelated data in-cluster is very problematic, so I suggest you create a dedicated repo for that app called <app-name>-deploy containing only the YAMLs and point Flux at that. A better option would be to push the manifests from that repo to ACR and let Flux sync the manifests from there, see https://fluxcd.io/flux/cheatsheets/oci-artifacts/
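For reference, a minimal .sourceignore sketch; the Flux docs state the file follows .gitignore pattern syntax, and the directory name here is a placeholder for wherever the Kubernetes YAMLs actually live:

$ cat .sourceignore
# ignore everything in the repo...
/*
# ...except the directory containing the Kubernetes manifests (placeholder path)
!/manifests/

And a sketch of the OCI route from the linked cheatsheet: push only the manifests to ACR from CI with flux push artifact, then sync them with an OCIRepository source (registry, repository name, and paths are placeholders):

$ flux push artifact oci://myregistry.azurecr.io/manifests/my-app:$(git rev-parse --short HEAD) \
    --path="./manifests" \
    --source="$(git config --get remote.origin.url)" \
    --revision="$(git rev-parse HEAD)"

With that in place the cluster pulls a small OCI artifact instead of cloning gigabytes of repo history.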