flux2: Bootstrapping new cluster fails on k3s v1.20

I have a k3s cluster running on a Raspberry Pi on my home network. I tried to bootstrap a new GitOps Toolkit (GOTK) repository using the following command:

flux bootstrap github \
--owner=$GITHUB_USER \
--repository=$CONFIG_REPO \
--branch=master \
--path=./clusters/my-cluster \
--personal \
--kubeconfig=/etc/rancher/k3s/k3s.yaml
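
For completeness: flux bootstrap github expects a GitHub personal access token exported in the GITHUB_TOKEN environment variable, and the other variables in the command above are set the same way. The placeholder values here are illustrative:

export GITHUB_USER=<github-username>
export CONFIG_REPO=<repository-name>
export GITHUB_TOKEN=<personal-access-token>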

The output of the bootstrap command (note the "context deadline exceeded" error right after waiting for the "flux-system/flux-system" Kustomization to be reconciled):

► connecting to github.com
► cloning branch "master" from Git repository "https://github.com/argamanza/raspberry-pi-flux-config.git"
✔ cloned repository
► generating component manifests
✔ generated component manifests
✔ component manifests are up to date
► installing toolkit.fluxcd.io CRDs
◎ waiting for CRDs to be reconciled
✔ CRDs reconciled successfully
► installing components in "flux-system" namespace
✔ installed components
✔ reconciled components
► determining if source secret "flux-system/flux-system" exists
✔ source secret up to date
► generating sync manifests
✔ generated sync manifests
✔ sync manifests are up to date
► applying sync manifests
✔ reconciled sync configuration
◎ waiting for Kustomization "flux-system/flux-system" to be reconciled
✗ context deadline exceeded
► confirming components are healthy
✔ source-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ helm-controller: deployment ready
✔ notification-controller: deployment ready
✔ all components are healthy
✗ bootstrap failed with 1 health check failure(s)
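
If the timeout were merely transient, the Kustomization could be retriggered without re-running the whole bootstrap (a standard flux CLI command, shown here for reference):

flux reconcile kustomization flux-system --with-source --kubeconfig=/etc/rancher/k3s/k3s.yaml

As the controller logs below show, though, the failure is not transient.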

The kustomize-controller logs hint at what the issue might be:

{"level":"info","ts":"2021-04-24T20:55:51.200Z","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":"2021-04-24T20:55:51.202Z","logger":"setup","msg":"starting manager"}
I0424 20:55:51.206769       7 leaderelection.go:243] attempting to acquire leader lease flux-system/kustomize-controller-leader-election...
{"level":"info","ts":"2021-04-24T20:55:51.307Z","msg":"starting metrics server","path":"/metrics"}
I0424 20:56:30.436269       7 leaderelection.go:253] successfully acquired lease flux-system/kustomize-controller-leader-election
{"level":"info","ts":"2021-04-24T20:56:30.436Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
{"level":"info","ts":"2021-04-24T20:56:30.437Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
{"level":"info","ts":"2021-04-24T20:56:30.538Z","logger":"controller.kustomization","msg":"Starting EventSource","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","source":"kind source: /, Kind="}
{"level":"info","ts":"2021-04-24T20:56:30.639Z","logger":"controller.kustomization","msg":"Starting Controller","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization"}
{"level":"info","ts":"2021-04-24T20:56:30.639Z","logger":"controller.kustomization","msg":"Starting workers","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","worker count":4}
{"level":"info","ts":"2021-04-24T20:56:47.576Z","logger":"controller.kustomization","msg":"Kustomization applied in 2.713132582s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"configured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"unchanged","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"unchanged","namespace/flux-system":"unchanged","networkpolicy.networking.k8s.io/allow-egress":"unchanged","networkpolicy.networking.k8s.io/allow-scraping":"unchanged","networkpolicy.networking.k8s.io/allow-webhooks":"unchanged","service/notification-controller":"unchanged","service/source-controller":"unchanged","service/webhook-receiver":"unchanged","serviceaccount/helm-controller":"unchanged","serviceaccount/kustomize-controller":"unchanged","serviceaccount/notification-controller":"unchanged","serviceaccount/source-controller":"unchanged"}}
{"level":"error","ts":"2021-04-24T20:56:47.609Z","logger":"controller.kustomization","msg":"unable to update status after reconciliation","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"flux-system\" is invalid: status.snapshot.entries.namespace: Invalid value: \"null\": status.snapshot.entries.namespace in body must be of type string: \"null\""}
{"level":"error","ts":"2021-04-24T20:56:47.609Z","logger":"controller.kustomization","msg":"Reconciler error","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"flux-system\" is invalid: status.snapshot.entries.namespace: Invalid value: \"null\": status.snapshot.entries.namespace in body must be of type string: \"null\""}
{"level":"info","ts":"2021-04-24T20:56:53.835Z","logger":"controller.kustomization","msg":"Kustomization applied in 2.470822475s","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","output":{"clusterrole.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/cluster-reconciler-flux-system":"unchanged","clusterrolebinding.rbac.authorization.k8s.io/crd-controller-flux-system":"unchanged","customresourcedefinition.apiextensions.k8s.io/alerts.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/gitrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmcharts.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmreleases.helm.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/helmrepositories.source.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/kustomizations.kustomize.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/providers.notification.toolkit.fluxcd.io":"configured","customresourcedefinition.apiextensions.k8s.io/receivers.notification.toolkit.fluxcd.io":"configured","deployment.apps/helm-controller":"configured","deployment.apps/kustomize-controller":"configured","deployment.apps/notification-controller":"configured","deployment.apps/source-controller":"configured","gitrepository.source.toolkit.fluxcd.io/flux-system":"unchanged","kustomization.kustomize.toolkit.fluxcd.io/flux-system":"unchanged","namespace/flux-system":"unchanged","networkpolicy.networking.k8s.io/allow-egress":"unchanged","networkpolicy.networking.k8s.io/allow-scraping":"unchanged","networkpolicy.networking.k8s.io/allow-webhooks":"unchanged","service/notification-controller":"unchanged","service/source-controller":"unchanged","service/webhook-receiver":"unchanged","serviceaccount/helm-controller":"unchanged","serviceaccount/kustomize-controller":"unchanged","serviceaccount/notification-controller":"unchanged","serviceaccount/source-controller":"unchanged"}}
{"level":"error","ts":"2021-04-24T20:56:53.863Z","logger":"controller.kustomization","msg":"unable to update status after reconciliation","reconciler group":"kustomize.toolkit.fluxcd.io","reconciler kind":"Kustomization","name":"flux-system","namespace":"flux-system","error":"Kustomization.kustomize.toolkit.fluxcd.io \"flux-system\" is invalid: status.snapshot.entries.namespace: Invalid value: \"null\": status.snapshot.entries.namespace in body must be of type string: \"null\""}

From the logs I can tell that status.snapshot.entries.namespace shouldn't be null for the flux-system Kustomization. After running the same bootstrap procedure against a local cluster provisioned with kind, I can see that the Kustomization is indeed missing the status.snapshot data on the k3s cluster, while on the local kind cluster it is populated:

K3S@RaspberryPi:

kubectl describe kustomization flux-system -n flux-system

Name:         flux-system
Namespace:    flux-system
Labels:       kustomize.toolkit.fluxcd.io/checksum=1d4c5beef02b0043768a476cc3fed578aa3ed6f0
              kustomize.toolkit.fluxcd.io/name=flux-system
              kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations:  <none>
API Version:  kustomize.toolkit.fluxcd.io/v1beta1
Kind:         Kustomization
Metadata:
  Creation Timestamp:  2021-04-24T19:42:50Z
  Finalizers:
    finalizers.fluxcd.io
  Generation:  1
...
...
Status:
  Conditions:
    Last Transition Time:  2021-04-24T19:43:30Z
    Message:               reconciliation in progress
    Reason:                Progressing
    Status:                Unknown
    Type:                  Ready
Events:
  Type    Reason  Age   From                  Message
  ----    ------  ----  ----                  -------
  Normal  info    57m   kustomize-controller  customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io configured
...

kind@local:

kubectl describe kustomization flux-system -n flux-system

Name:         flux-system
Namespace:    flux-system
Labels:       kustomize.toolkit.fluxcd.io/checksum=1d4c5beef02b0043768a476cc3fed578aa3ed6f0
              kustomize.toolkit.fluxcd.io/name=flux-system
              kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations:  <none>
API Version:  kustomize.toolkit.fluxcd.io/v1beta1
Kind:         Kustomization
Metadata:
  Creation Timestamp:  2021-04-25T12:35:37Z
  Finalizers:
    finalizers.fluxcd.io
  Generation:  1
...
...
Status:
  Conditions:
    Last Transition Time:   2021-04-25T12:37:02Z
    Message:                Applied revision: master/dbce13415e4118bb071b58dab20d1f2bec527a14
    Reason:                 ReconciliationSucceeded
    Status:                 True
    Type:                   Ready
  Last Applied Revision:    master/dbce13415e4118bb071b58dab20d1f2bec527a14
  Last Attempted Revision:  master/dbce13415e4118bb071b58dab20d1f2bec527a14
  Observed Generation:      1
  Snapshot:
    Checksum:  1d4c5beef02b0043768a476cc3fed578aa3ed6f0
    Entries:
      Kinds:
        /v1, Kind=Namespace:                                     Namespace
        apiextensions.k8s.io/v1, Kind=CustomResourceDefinition:  CustomResourceDefinition
        rbac.authorization.k8s.io/v1, Kind=ClusterRole:          ClusterRole
        rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding:   ClusterRoleBinding
      Namespace:
      Kinds:
        /v1, Kind=Service:                                        Service
        /v1, Kind=ServiceAccount:                                 ServiceAccount
        apps/v1, Kind=Deployment:                                 Deployment
        kustomize.toolkit.fluxcd.io/v1beta1, Kind=Kustomization:  Kustomization
        networking.k8s.io/v1, Kind=NetworkPolicy:                 NetworkPolicy
        source.toolkit.fluxcd.io/v1beta1, Kind=GitRepository:     GitRepository
      Namespace:                                                  flux-system
Events:
  Type    Reason  Age    From                  Message
  ----    ------  ----   ----                  -------
  Normal  info    3m53s  kustomize-controller  customresourcedefinition.apiextensions.k8s.io/buckets.source.toolkit.fluxcd.io configured
...
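
For reference, the kind comparison used the same bootstrap flow; a minimal local reproduction (assuming kind and the flux CLI are installed; the cluster name is arbitrary) looks something like:

kind create cluster --name flux-test
flux bootstrap github \
--owner=$GITHUB_USER \
--repository=$CONFIG_REPO \
--branch=master \
--path=./clusters/my-cluster \
--personal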

This is where my debugging process hit a dead end: I couldn't find a reason why status.snapshot doesn't populate on K3S@RaspberryPi while it does on kind@local, given the same bootstrap process.

Since the issue only occurs on my Raspberry Pi, I suspect it might be some kind of networking issue that prevents the kustomize-controller from fetching updates from GitHub, and that I need to handle port forwarding or something similar, but I'm not sure.
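
One cheap way to test that theory would be to check outbound connectivity from a throwaway pod inside the cluster (the pod name and image below are just examples):

kubectl run net-test --rm -it --restart=Never \
--image=curlimages/curl --command -- curl -sI https://github.com

If GitHub is reachable, this prints the HTTP response headers and the pod is cleaned up on exit.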

  • Kubernetes version: v1.20.6+k3s1
  • Git provider: GitHub
  • Flux version: 0.13.1

flux check
► checking prerequisites
✔ kubectl 1.20.6+k3s1 >=1.18.0-0
✔ Kubernetes 1.20.6+k3s1 >=1.16.0-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.10.0
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.11.1
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.13.0
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v0.12.1
✔ all checks passed

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 1
  • Comments: 20 (7 by maintainers)

Most upvoted comments

This is a bug in k3s; hopefully it will be fixed once k3s catches up with Kubernetes 1.21.

Upgraded to v1.19.10+k3s1 (as far as I can go) and it's still not working - odd, as people above seem to have had luck?

Scratch that - k3os with k3s v1.19.10 works 😃
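
(For anyone comparing builds: the exact server version a cluster reports can be confirmed with kubectl, e.g. kubectl version --short.)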