helm: [helm-3.3.0-rc.1] Helm Lint fails due to blank resource name

I’ve got a Helm chart which uses internal Helm libraries to initialize a database and a user in a DB cluster as a pre-install hook.

# templates/pg-job.yaml
{{- if .Values.pgJob.enabled }}
  {{- include "zenlibs.pgJob" (list . .Values.pgJob) }}
{{- end }}

# values.yaml
[...]
pgJob:
  enabled: true
  init: true
  clean: true
  admin:
    secret:
      name: dev-aurora
  host: *pgHost
  port: 5432
  version: 10.7

Brief output of helm template example .

# Source: example/templates/pg-job.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-secret-role
[...]
# Source: example/templates/pg-job.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-secret-rolebinding
[...]
# Source: example/templates/pg-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-initpg
[...]
# Source: example/templates/pg-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-cleanpg
[...]

Output of helm lint

==> Linting .
[ERROR] templates/pg-job.yaml: object name does not conform to Kubernetes naming requirements: ""

Error: 1 chart(s) linted, 1 chart(s) failed

helm lint succeeds if I use helm lint --set pgJob.enabled=false

Extra Info

Output of helm version:

» helm version
version.BuildInfo{Version:"v3.3.0-rc.1", GitCommit:"5c2dfaad847df2ac8f289d278186d048f446c70c", GitTreeState:"dirty", GoVersion:"go1.14.4"}

Output of kubectl version:

» kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"archive", BuildDate:"2020-07-01T16:28:46Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.11-eks-af3caf", GitCommit:"af3caf6136cd355f467083651cc1010a499f59b1", GitTreeState:"clean", BuildDate:"2020-03-27T21:51:36Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): AWS EKS

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 33 (8 by maintainers)

Most upvoted comments

@mattfarina I created a PR here: #8496. But I don’t know if this issue is considered blocking for 3.3. Feel free to assign it to whichever release you want.

It may be that an empty YAML document was somehow being created, and the name checker was simply the first lint rule to trip over it. An empty YAML document (e.g. the content between two consecutive --- separators) will pass the parsing validation check, but will fail every subsequent check.
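
For illustration only (not from the chart above), a rendered manifest like the following contains an empty document between two consecutive separators; it parses cleanly, but has no name or kind for the later rules to check:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
---
---
apiVersion: v1
kind: Secret
metadata:
  name: example-secret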

Ah! Okay, I found the issue. The package sigs.k8s.io/yaml gets this YAML:

  apiVersion: v1
kind: Pod
metadata:
    name: foo

And parses it into this result:

rules.K8sYamlStruct{
  APIVersion:"v1", 
  Kind:"", 
  Metadata:rules.k8sYamlMetadata{
    Namespace:"",
    Name:"",
  }
}

So the parser figures out the indent depth from the first line and seemingly ignores all content outdented from there. I think the linter is right to catch this case… but clearly we need a better error message.
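
For comparison, a sketch of the same document with the first line outdented back to column zero, which should parse with every field populated:

apiVersion: v1
kind: Pod
metadata:
  name: foo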

It appears that in at least some parts of Helm we do a strings.TrimSpace() on YAML docs prior to parsing, which we actually shouldn’t do… but which in this case actually saves us from an error. I’m definitely not going to change any of those. But we might have other bugs in Helm related to this sort of thing.

Edit: Fixed package name for YAML parser

I think I will try modifying the linter to provide an error when apiVersion is not found. That should definitely be the first thing that fails a lint, and I am not sure we currently check for it.

The - next to {{ or }} tells the template engine to remove all whitespace next to the brackets. So…

foo {{- "bar" }} would be foobar while foo {{ "bar" }} would be foo bar. The {{- removes the whitespace to the left of it… the space between the o and the {{-.

So, use the - when you want the whitespace removed.
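
A hypothetical template snippet putting both behaviors side by side (.Values.suffix is made up here and assumed to render as bar; .Values.pgJob.enabled comes from the values above):

# Inline: the dash chomps the space between "foo" and the action.
with-dash: foo {{- .Values.suffix }}      # renders as  with-dash: foobar
without-dash: foo {{ .Values.suffix }}    # renders as  without-dash: foo bar

# On a control line, the dash keeps the line from leaving a blank line behind.
{{- if .Values.pgJob.enabled }}
enabled: true
{{- end }}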

@rblaine95 If you feel comfortable compiling Helm, I built a version that will print out each template before it sends it into the linter. I’m using it now to try to reproduce what you are encountering. But you might be able to spot things faster on your end.

Here’s the branch:

https://github.com/technosophos/k8s-helm/tree/fix/8467-linter-failing

I’ll keep updating here if I find anything else new.

@rblaine95 It might be worth running helm install with --dry-run --debug. It may show some more of the rendering, and it doesn’t install anything into the cluster.
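
For example, from the chart directory (release name assumed here):

» helm install example . --dry-run --debug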

@rblaine95 I tried to reproduce this but failed to. I created a chart with only a _helpers.tpl file and the following template:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "example.fullname" . }}-role
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "example.fullname" . }}-rolebinding
---
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "example.fullname" . }}-initpg
---
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "example.fullname" . }}-cleanpg

The chart was created using the command helm create example and I used v3.3.0-rc.1 for all of this.

When running helm lint I was unable to reproduce the issue. I wonder if the problem is in the [...] that you didn’t share (and may not be able to). Or, if the actual name has an issue. The names need to be safe for DNS; Helm does its checking for this with the same regular expression Kubernetes uses.
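
As a rough illustration of that rule (broadly: lowercase alphanumerics and dashes, starting and ending with an alphanumeric; the names below are made up):

metadata:
  name: example-initpg     # valid
# name: Example_InitPG     # invalid: uppercase letters and an underscore
# name: -example           # invalid: leading hyphen
# name: ""                 # invalid: empty, which is what the lint error above reports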