helm: Helm 3 doesn't create namespace

Output of helm version:

version.BuildInfo{Version:"v3.0.0-alpha.1", GitCommit:"b9a54967f838723fe241172a6b94d18caf8bcdca", GitTreeState:"clean"}

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:08:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.8-eks-7c34c0", GitCommit:"7c34c0d2f2d0f11f397d55a46945193a0e22d8f3", GitTreeState:"clean", BuildDate:"2019-03-01T22:49:39Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): AWS/EKS

centos7:[root@localhost linux-amd64]$ kubectl get ns nginx
Error from server (NotFound): namespaces "nginx" not found
centos7:[root@localhost linux-amd64]$ ./helm install nginx stable/nginx-ingress
Error: create: failed to create: namespaces "nginx" not found
centos7:[root@localhost linux-amd64]$ kubectl create ns nginx
namespace/nginx created
centos7:[root@localhost linux-amd64]$ ./helm install nginx stable/nginx-ingress
NAME: nginx
LAST DEPLOYED: 2019-05-17 15:30:04.283642019 +0100 BST m=+4.727707147
NAMESPACE: nginx
STATUS: deployed

NOTES:
The nginx-ingress controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace nginx get services -o wide -w nginx-nginx-ingress-controller'

An example Ingress that makes use of the controller:

  apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 10
  • Comments: 18 (7 by maintainers)

Most upvoted comments

The change to remove the creation of the namespace during helm install was intentional. This was made to mimic the same behaviour as kubectl create --namespace foo -f deployment.yaml: namespaces are a cluster-wide resource, and the user installing resources into a namespace may not have the administrative rights to create the namespace itself, as that implies full administrative rights over the cluster (as opposed to being bound to a role with restricted rights within a particular namespace). Without this change, users must have cluster admin rights to install a chart, and we want to ensure that administrators can grant users only a restricted set of roles for each install. This is also a big reason why Tiller was removed in the first place.
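To make the RBAC point concrete, here is a minimal sketch (the names are illustrative, not taken from this thread): a namespace-scoped Role can grant a user everything needed to install typical chart resources inside its namespace, but it can never grant create on namespaces, because Namespace is a cluster-scoped resource and would require a ClusterRole instead.

  # Illustrative Role: enough to install typical chart resources into "foo",
  # but incapable of creating the namespace itself, since Namespace is
  # cluster-scoped and outside the reach of any namespaced Role.
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: chart-installer
    namespace: foo
  rules:
    - apiGroups: ["", "apps"]
      resources: ["deployments", "services", "configmaps", "secrets"]
      verbs: ["create", "get", "list", "watch", "update", "delete"]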

Additionally, there have been several asks in the community to allow modifying the namespace helm install creates (e.g. #3503), and the UX to support use cases like that would be incredibly painful to achieve. Offloading the namespace creation to a separate tool (perhaps a plugin?) provides users a way to solve these issues without imposing a restrictive user experience around these use cases.
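A minimal sketch of that "separate tool" idea, assuming nothing beyond kubectl itself (the release, chart, and namespace names here are placeholders): pre-create the namespace idempotently, then hand off to helm.

  # Hypothetical wrapper steps; the dry-run | apply pipe makes the
  # namespace creation idempotent, so re-runs are safe.
  $ kubectl create namespace my-ns --dry-run -o yaml | kubectl apply -f -
  $ helm install my-release stable/nginx-ingress --namespace my-ns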

Note: I’m just relaying the information I know about this subject and why it was removed. I suggest asking @adamreese more about the justification for the removal.

Digging more into this. I was hoping that I could simply add a namespace.yaml definition to my chart as follows:

apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Release.Namespace }}
  labels:
    app: {{ template "foo.name" . }}
    chart: {{ template "foo.chart" . }}
    release: {{ .Release.Name }}

Ideally nothing should prevent the namespace creation with all the "bells and whistles" that community members were proposing, as long as it is created in the previously documented order.

However, as I suspected, the reason the "target" namespace, {{ .Release.Namespace }}, must exist before invoking the install (and, I suspect, upgrade) command is directly tied to the Helm 3 storage implementation, which relies on the target namespace to store release information. https://github.com/helm/helm/blob/master/pkg/action/install.go#L254

At a very high level, Helm 3 performs the following steps, in order:

  • apply CRDs (CRDs do not require rendering; also, create-collision errors are ignored)
  • render chart artifacts (other than CRDs)
  • save release information into a Secret (this requires the target namespace, and is where the "namespace not found" error is triggered)
  • install the rendered artifacts.
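One way to observe the third step (a sketch, assuming the default Secret storage driver; exact Secret naming may differ between v3 alphas): after the successful install from the top of this issue, the release record is stored as a Secret inside the target namespace.

  # Inspect the target namespace after a successful install; the release
  # record appears here as a Helm-owned Secret. If the namespace is missing,
  # that write fails before any chart resources are created, producing the
  # 'namespaces "nginx" not found' error shown above.
  $ kubectl get secrets --namespace nginx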

Ran into the same issue… but resolved it by switching context, as mentioned in #5628. At this point I believe it's just confusing to have --namespace as an option.

@jeremy-donson - I 100% agree with you that that's the best practice, but the error message is confusing, as you can see below. Since it's not supposed to work, the message points you in the wrong direction: "You must pass '--namespace=kube-system' to perform this operation."

$ helm install metricserver stable/metrics-server
Error: the namespace from the provided object "kube-system" does not match the namespace "default". You must pass '--namespace=kube-system' to perform this operation.

$ helm install metricserver stable/metrics-server --namespace=kube-system
Error: the namespace from the provided object "kube-system" does not match the namespace "default". You must pass '--namespace=kube-system' to perform this operation.

$ kubectl config set-context kube-system --cluster=kubernetes --user=kubernetes-admin --namespace=kube-system
Context "kube-system" created.

$ kubectl config use-context kube-system
Switched to context "kube-system".

$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kube-system                   kubernetes   kubernetes-admin   kube-system
          kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
          metallb                       kubernetes   kubernetes-admin   metallb
          nfstorage                     kubernetes   kubernetes-admin   nfstorage

$ helm install metricserver stable/metrics-server
NAME: metricserver
LAST DEPLOYED: 2019-05-26 14:37:45.582245559 -0700 PDT m=+2.942929639
NAMESPACE: kube-system
STATUS: deployed

NOTES:
The metric server has been deployed.

@javajon The "Cleanup" topic is relevant, yet I think it is of a wider scope, specifically when it involves cluster-level components. Two points:

  1. Helm 3's consistency in handling CRDs vs. namespaces.
  2. The fact that a single namespace can be targeted by multiple Helm charts makes, IMO, a compelling case for giving the release namespace special treatment (in line with what Helm 2 did for namespaces, or what Helm 3 does for CRDs).

@bacongobbler, yes, you did. Unfortunately, for me, those did not fully make sense.

After getting to the bottom of this, I guess the explanation that would make more sense (for me) would be the one you gave earlier (multiple times) plus: "oh, yeah… we are also using the "target" namespace, {{ .Release.Namespace }}, in Helm 3 for other reasons, like saving release information. Thus, it is problematic to keep supporting namespace creation via the --namespace flag," or something like that.

Either way, thank you for your help.

I explained the issue and the motivation for the removal of this feature several times in previous comments. Please read my previous comments more carefully.

However, it seems @bacongobbler thinks otherwise 😃

@ichekrygin, it is incredibly frustrating having to repeat the same statement over and over. I will repeat myself one more time.

I mentioned several times in that thread that the discussion is different from the one originally raised by the OP. The discussion in the first comment talks about the --namespace flag being ignored in earlier versions of Helm 3: resources were being deployed in the default namespace regardless of what you provided to --namespace.

Here’s an excerpt from the first comment:

It seems the latest helm3 from dev-v3 branch doesn’t take the --namespace parameter.

The only similarity between these two tickets is that they are both related to the --namespace flag.

That’s it.

Hope that clears things up.

or can any Helm 3 maintainers/contributors comment on the desirability of having Helm 3 as a "one-stop shop"?

As I mentioned earlier:

By supporting the auto-creation of the namespace, community members were proposing new features to Helm which would allow them to modify the namespace during creation, including attaching annotations, labels, policies, constraints, quotas, etc. The list goes on. #3503 is one such example. The creation and management of the namespace is clearly out of scope of helm install, whose goal was to fetch, unpack and install a chart into a cluster. Do one thing and do it well.

[…]

That being said, we are open to suggestions and are always happy to discuss alternative solutions.

Do you have an alternative solution that does not fall to the same design flaws as indicated earlier?

Interesting: #5628 pre-dates this issue and, more importantly, is dedicated to the same topic. However, it seems @bacongobbler thinks otherwise 😃

As noted earlier, this discussion is getting off topic from the OP's original issue.

That’s great feedback, thank you @sudermanjr.

In my initial cursory testing I am also seeing that the namespace flag is ignored and the current-context namespace is being used during a helm install command.

Yeah, we’re tracking that discussion in #5628 😃

I can verify this bug with the v3 master code, using the scaffold chart, as follows:

$ helm install chrt-5753 chrt-5753/ 
NAME: chrt-5753
LAST DEPLOYED: 2019-05-27 16:29:33.256126018 +0100 IST m=+0.094848038
NAMESPACE: default
STATUS: deployed

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods -l "app=chrt-5753,release=chrt-5753" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"

$ kubectl get all --namespace default
NAME                             READY   STATUS    RESTARTS   AGE
pod/chrt-5753-7f5d576f95-pgr5g   1/1     Running   0          13m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/chrt-5753    ClusterIP   10.97.177.223   <none>        80/TCP    13m
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   108d

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/chrt-5753   1/1     1            1           13m

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/chrt-5753-7f5d576f95   1         1         1       13m

$  kubectl config set-context dind --namespace=new-ns
Context "dind" modified.

$ kubectl get namespaces
NAME          STATUS   AGE
default       Active   108d
kube-public   Active   108d
kube-system   Active   108d

$ helm install chrt-5753-v2 chrt-5753/ 
Error: create: failed to create: namespaces "new-ns" not found

$ kubectl create namespace new-ns
namespace/new-ns created

$ kubectl get namespaces
NAME          STATUS   AGE
default       Active   108d
kube-public   Active   108d
kube-system   Active   108d
new-ns        Active   16s

$ helm install chrt-5753-v2 chrt-5753/ 
NAME: chrt-5753-v2
LAST DEPLOYED: 2019-05-27 16:46:18.555525534 +0100 IST m=+0.082804290
NAMESPACE: new-ns
STATUS: deployed

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods -l "app=chrt-5753,release=chrt-5753-v2" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80

$ kubectl get all --namespace new-ns
NAME                                READY   STATUS    RESTARTS   AGE
pod/chrt-5753-v2-5fcbc484bd-8h8fn   1/1     Running   0          49s

NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/chrt-5753-v2   ClusterIP   10.100.121.64   <none>        80/TCP    49s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/chrt-5753-v2   1/1     1            1           49s

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/chrt-5753-v2-5fcbc484bd   1         1         1       49s