source-controller: HelmCharts red herring when a k3s cluster supplies a conflicting CRD with the same name
I was testing an OCI Chart Repository:
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
  namespace: podinfo
spec:
  interval: 30s
  timeout: 60s
  type: oci
  url: oci://ghcr.io/kingdonb/podinfo/helm
and noticed I had forgotten to make it public. The errors were confusing:
$ flux get hr -n podinfo
NAME     REVISION  SUSPENDED  READY  MESSAGE
podinfo            False      False  HelmChart 'podinfo/podinfo-podinfo' is not ready
There is no indication that there was an auth failure until I checked the logs of source-controller:
{"level":"error","ts":"2022-07-13T13:47:47.998Z","logger":"controller.helmchart","msg":"Reconciler error","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart","name":"podinfo-podinfo","namespace":"podinfo","error":"chart pull error: chart pull error: failed to get chart version for remote reference: GET \"https://ghcr.io/v2/kingdonb/podinfo/helm/podinfo/tags/list\": GET \"https://ghcr.io/token?scope=repository%3Akingdonb%2Fpodinfo%2Fhelm%2Fpodinfo%3Apull&service=ghcr.io\": unexpected status code 401: unauthorized: authentication required"}
OK, there’s the error. (An aside: I think this should be reported somewhere as a condition.)
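For reference, the HelmRepository API supports authenticating against a private OCI registry via spec.secretRef. A minimal sketch, assuming a registry credentials Secret named ghcr-auth exists in the same namespace (the Secret name and credentials are hypothetical, not from this issue):

```yaml
# Hypothetical: point the HelmRepository at registry credentials so that a
# private oci:// repository does not fail with "401 unauthorized".
# The Secret could be created with, e.g.:
#   kubectl create secret docker-registry ghcr-auth \
#     --namespace podinfo --docker-server=ghcr.io \
#     --docker-username=<user> --docker-password=<token>
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
  namespace: podinfo
spec:
  interval: 30s
  timeout: 60s
  type: oci
  url: oci://ghcr.io/kingdonb/podinfo/helm
  secretRef:
    name: ghcr-auth  # assumed Secret name, must exist in namespace podinfo
```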
I made the chart public. I’m explaining the sequence of events as I tested them because I am not certain which steps are important for reproducing the error.
{"level":"info","ts":"2022-07-13T13:53:49.380Z","logger":"controller.helmchart","msg":"pulled 'podinfo' chart with version '6.1.14'","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart","name":"podinfo-podinfo","namespace":"podinfo","reconcileID":"96c30552-3a25-4f16-bafd-05edd3fd85c6"}
{"level":"info","ts":"2022-07-13T13:54:49.391Z","logger":"controller.helmchart","msg":"garbage collected 1 artifacts","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart","name":"podinfo-podinfo","namespace":"podinfo","reconcileID":"ca9eaf9d-b4a0-4ede-86a1-0af6dcb43790"}
{"level":"info","ts":"2022-07-13T13:54:49.600Z","logger":"controller.helmchart","msg":"artifact up-to-date with remote revision: '6.1.14'","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart","name":"podinfo-podinfo","namespace":"podinfo","reconcileID":"ca9eaf9d-b4a0-4ede-86a1-0af6dcb43790"}
The release succeeded, but I was waiting to see the HelmChart get created and never observed it. The garbage collector had already deleted it. I’m not sure why; it seems like a bug!
$ k get helmchart -A
No resources found
$ k get helmrelease
NAME     AGE    READY  STATUS
podinfo  8m25s  True   Release reconciliation succeeded
The chart does work, but it has apparently been garbage collected in error. Subsequent log messages do not show the chart being recreated and garbage collected again, yet the source-controller apparently still remembers it:
{"level":"info","ts":"2022-07-13T13:56:49.947Z","logger":"controller.helmchart","msg":"artifact up-to-date with remote revision: '6.1.14'","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart","name":"podinfo-podinfo","namespace":"podinfo","reconcileID":"820fe10e-059c-4206-86e7-4e03fa1b3701"}
{"level":"info","ts":"2022-07-13T13:57:50.112Z","logger":"controller.helmchart","msg":"artifact up-to-date with remote revision: '6.1.14'","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart","name":"podinfo-podinfo","namespace":"podinfo","reconcileID":"bed4bd43-a507-4554-8bc6-5c5bf1c1fda7"}
{"level":"info","ts":"2022-07-13T13:58:50.282Z","logger":"controller.helmchart","msg":"artifact up-to-date with remote revision: '6.1.14'","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart","name":"podinfo-podinfo","namespace":"podinfo","reconcileID":"24418ba2-5114-4ab5-b64a-ac33f9d3d363"}
{"level":"info","ts":"2022-07-13T13:59:50.466Z","logger":"controller.helmchart","msg":"artifact up-to-date with remote revision: '6.1.14'","reconciler group":"source.toolkit.fluxcd.io","reconciler kind":"HelmChart","name":"podinfo-podinfo","namespace":"podinfo","reconcileID":"b4b32138-0cc9-45fc-b98f-dfa60f723a4e"}
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 19 (19 by maintainers)
Commits related to this issue
- Add troubleshooting note for k3s HelmChart Follow-up to cover common likely issue noticed while attempting to debug a red-herring issue in fluxcd/source-controller#831 Signed-off-by: Kingdon Barrett... — committed to fluxcd/website by kingdonb 2 years ago
Might still be good to have this noted somewhere as a k3s-specific gotcha around kubectl get helmchart, as I think more people are going to run into this.
Sorry to waste so much time on this; there is no issue:
@souleb @somtochiama you had it right: this is a conflict with the helm controller provided by k3s. It installed some CRDs on my vcluster that I didn’t ask for, and when I run kubectl get helmcharts I am reaching the wrong HelmChart CRD. Sorry for the noise, everyone.
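For anyone hitting the same collision: kubectl resolves a bare resource name like helmchart to only one API group, but the fully qualified resource.group form disambiguates. A sketch, assuming both the Flux source.toolkit.fluxcd.io CRD and the k3s helm.cattle.io CRD are installed on the cluster:

```shell
# Show every API resource named "helmcharts" and which groups define it
kubectl api-resources | grep -i helmchart

# Query the Flux HelmChart CRD explicitly, rather than whichever group
# kubectl happens to resolve the short name to
kubectl get helmcharts.source.toolkit.fluxcd.io -A

# Query the k3s helm-controller's CRD explicitly
kubectl get helmcharts.helm.cattle.io -A
```

With the fully qualified name, kubectl get helmcharts.source.toolkit.fluxcd.io -A would have shown the Flux HelmChart that the short name was hiding.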
k3s has its own helm-controller. Maybe it’s interfering.
source-controller does not delete HelmCharts; this would be a helm-controller bug. I’ll let @hiddeco move this issue if that’s the case.