dashboard: Dashboard complains on startup: x509: failed to load system roots and no roots provided

Issue details

I’m following the documentation at http://kubernetes.io/docs/user-guide/ui/, but it already fails at the first step. The container fails to start with this log output:

Starting HTTP server on port 9090
Creating API server client for https://10.101.10.1:443
E0927 10:59:50.111556       1 config.go:267] Expected to load root CA config from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, but got err: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.101.10.1:443/version: x509: failed to load system roots and no roots provided

Nowhere on that page is it explained how to deal with this issue, and a Google search doesn’t provide enlightenment. We have service accounts enabled and the pod has the default one attached. When I take a look at the service account with kubectl describe, I get the following:

Name:           default
Namespace:      kube-system
Labels:         <none>

Image pull secrets:     <none>

Mountable secrets:      default-token-6x2t1

Tokens:                 default-token-6x2t1

I have no idea how to continue from here. Which cert is the dashboard looking for, and what’s the best way to get it into the container? Also, is the documentation outdated, or am I doing something weird? The (pretty simple) recipe does not seem to work for me.
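For debugging, a quick way to see what the dashboard is (not) finding is to list the keys in the token secret and check what is actually mounted at the path from the error. This is only a sketch; the secret name comes from the describe output above and the dashboard pod name is hypothetical:

# A healthy token secret has three data keys: ca.crt, namespace, token
kubectl --namespace=kube-system get secret default-token-6x2t1 -o yaml | grep -A5 '^data:'

# If the pod stays up long enough, inspect the mount itself
# (replace kubernetes-dashboard-xxxxx with the real pod name):
kubectl --namespace=kube-system exec kubernetes-dashboard-xxxxx -- \
  ls /var/run/secrets/kubernetes.io/serviceaccount/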

Environment

We’re running the containers on CoreOS on AWS, currently on Kubernetes 1.3.6, with an update to 1.4.0 planned soon.

Dashboard version: v1.4.0
Kubernetes version: v1.3.6
Operating system: CoreOS stable
Node.js version: Not sure, using the default gcr image
Go version: Same as Node.js version
Steps to reproduce

Follow the guide as described here: http://kubernetes.io/docs/user-guide/ui/

Observed result
Starting HTTP server on port 9090
Creating API server client for https://10.101.10.1:443
E0927 10:59:50.111556       1 config.go:267] Expected to load root CA config from /var/run/secrets/kubernetes.io/serviceaccount/ca.crt, but got err: open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.101.10.1:443/version: x509: failed to load system roots and no roots provided

And then the pod stays in CrashLoopBackOff status.

Expected result

A working UI!

About this issue

  • State: closed
  • Created 8 years ago
  • Reactions: 5
  • Comments: 27 (9 by maintainers)

Most upvoted comments

Cluster configuration

Keep in mind that this is my dev configuration. I’m also using certificate-based authentication to connect to the cluster. You can enable more authentication/authorization plugins if you want; this is just my basic setup.

API Server

--bind-address=0.0.0.0 \
--etcd-servers=http://127.0.0.1:2379 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--secure-port=443 \
--advertise-address=192.168.0.101 \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota \
--tls-cert-file=/home/floreks/kubernetes/apiserver.crt \
--tls-private-key-file=/home/floreks/kubernetes/apiserver.key \
--client-ca-file=/home/floreks/kubernetes/ca.crt \
--service-account-key-file=/home/floreks/kubernetes/apiserver.key

Kubelet

--require-kubeconfig \
--kubeconfig=/home/floreks/.kube/config \
--allow-privileged=true \
--cluster-domain=cluster.local \
--hostname-override=floreks-ms-7916 \
--cluster-dns=10.0.0.10

Controller manager

--kubeconfig=/home/floreks/.kube/config \
--service-account-private-key-file=/home/floreks/kubernetes/apiserver.key \
--root-ca-file=/home/floreks/kubernetes/ca.crt
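
Two of these flags matter directly for the original error: --service-account-private-key-file must be the private key matching the apiserver’s --service-account-key-file (here they are literally the same file), and the contents of --root-ca-file are what the controller manager copies into each token secret as ca.crt. A quick sanity check, assuming the file names above, that the pieces belong together:

# The apiserver cert should verify against the CA handed to pods as ca.crt
openssl verify -CAfile /home/floreks/kubernetes/ca.crt /home/floreks/kubernetes/apiserver.crt

# The token-signing key and the apiserver TLS cert should share a modulus
openssl rsa -in /home/floreks/kubernetes/apiserver.key -noout -modulus | openssl md5
openssl x509 -in /home/floreks/kubernetes/apiserver.crt -noout -modulus | openssl md5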

Proxy

--kubeconfig=/home/floreks/.kube/config \
--proxy-mode=iptables

Scheduler

--kubeconfig=/home/floreks/.kube/config

Kubeconfig

current-context: default-context
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/floreks/kubernetes/ca.crt
    server: https://192.168.0.101
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    user: admin
  name: default-context
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: /home/floreks/kubernetes/admin.crt
    client-key: /home/floreks/kubernetes/admin.key
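
A quick way to confirm the kubeconfig above is wired up correctly (paths as listed; any read-only call will do):

kubectl --kubeconfig=/home/floreks/.kube/config get componentstatuses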

Certificates configuration

I’m using a simple script to generate the needed certs. The correct SAN address/hostname needs to be set in the openssl config file.

Config & script

floreks@floreks-MS-7916:~/kubernetes$ cat worker-openssl.cnf 
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.0.101

floreks@floreks-MS-7916:~/kubernetes$ cat openssl.cnf 
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.0.0.1
IP.2 = 192.168.0.101
floreks@floreks-MS-7916:~/kubernetes$ cat generate-certs.sh 
#!/bin/bash

# Generate CA
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 365 -out ca.crt -subj "/CN=kube-ca"

# Generate api server
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver.crt -days 365 -extensions v3_req -extfile openssl.cnf

# Generate kubelet
openssl genrsa -out kubelet.key 2048
openssl req -new -key kubelet.key -out kubelet.csr -subj "/CN=kubelet" -config worker-openssl.cnf
openssl x509 -req -in kubelet.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet.crt -days 365 -extensions v3_req -extfile worker-openssl.cnf

# Generate admin
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -out admin.csr -subj "/CN=kube-admin"
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt -days 365
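
If a client later rejects the apiserver cert, the SANs are the first thing to check: the service IP (10.0.0.1 here) and the kubernetes.default.svc names must all be present. To inspect the generated cert:

# Print the Subject Alternative Names baked into the cert
openssl x509 -in apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'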

By installing my admin certificate in the browser I can connect to the deployed dashboard. More about how to do this in kubernetes/kubernetes#31665.

[Screenshot from 2016-09-30 09:38:47]

Note: You may have to delete the default secrets and the dashboard pod in order for it to pick up the service account token; a sketch follows below. After that it should work.
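
A minimal sketch of that cleanup, assuming the namespace and secret name from the original report; the controller manager recreates the secret (now including ca.crt) and the recreated pod mounts the fresh one:

# Delete the stale token secret; it is regenerated with ca.crt included
kubectl --namespace=kube-system delete secret default-token-6x2t1

# Delete the dashboard pod so it is recreated with the new secret
# (pod name is hypothetical):
kubectl --namespace=kube-system delete pod kubernetes-dashboard-xxxxx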

I’m still wondering why the default didn’t work for you… I’ll keep this open for further investigation.

Of course:

apiserver:

/usr/local/bin/apiserver \
--allow-privileged=true \
--bind-address=0.0.0.0 \
--secure-port=443 \
--etcd-servers=http://127.0.0.1:2379 \
--advertise-address=${COREOS_PRIVATE_IPV4} \
--service-cluster-ip-range=10.101.10.0/23 \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota \
--enable-swagger-ui=true \
--logtostderr=true \
--cloud-provider=aws \
--tls-cert-file=/etc/k8s/api-cert \
--tls-private-key-file=/etc/k8s/api-key \
--client-ca-file=/etc/k8s/ca-cert \
--service-account-key-file=/etc/k8s/api-key \
--token-auth-file=/etc/k8s/tokens
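
One difference from the earlier setup is --token-auth-file. For reference, the static token file is CSV with one credential per line: token, user name, user UID, and optionally a quoted group list. A hypothetical example (values are placeholders):

# Example /etc/k8s/tokens contents, one line per credential:
XXXX,kubelet,kubelet-uid,"system:nodes"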

For good measure, here’s the controller manager as well:

/usr/local/bin/controller-manager \
--address=0.0.0.0 \
--logtostderr=true \
--master=${INSECURE_KUBERNETES_API_ENDPOINT} \
--service-account-private-key-file=/etc/k8s/api-key \
--root-ca-file=/etc/k8s/ca-cert \
--cloud-provider=aws 

And scheduler:

/usr/local/bin/scheduler \
--address=0.0.0.0 \
--logtostderr=true \
--master=${INSECURE_KUBERNETES_API_ENDPOINT}

Also, our kubeconfig:

apiVersion: v1
kind: Config
clusters: 
- cluster:
    certificate-authority: /etc/k8s/ca-cert
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet
current-context: kubelet
users:
- name: kubelet
  user:
    token: XXXX
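
Since this cluster authenticates with --token-auth-file, a quick check that the token and CA actually work against the secure port; the address is taken from the failing request in the dashboard log, and XXXX is the placeholder token:

# Hit the version endpoint over TLS, trusting the cluster CA
curl --cacert /etc/k8s/ca-cert \
     -H "Authorization: Bearer XXXX" \
     https://10.101.10.1:443/version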

Ah, the --kubeconfig flag was added after the 1.4 release. Can you try the latest :canary tag or compile dashboard at HEAD?

Alas:

/opt/dashboard # ./dashboard --kubeconfig=/etc/k8s/kubeconfig
unknown flag: --kubeconfig
Usage of ./dashboard:
      --alsologtostderr value          log to standard error as well as files
      --apiserver-host string          The address of the Kubernetes Apiserver to connect to in the format of protocol://address:port, e.g., http://localhost:8080. If not specified, the assumption is that the binary runs inside a Kubernetes cluster and local discovery is attempted.
      --heapster-host string           The address of the Heapster Apiserver to connect to in the format of protocol://address:port, e.g., http://localhost:8082. If not specified, the assumption is that the binary runs inside a Kubernetes cluster and service proxy will be used.
      --log-flush-frequency duration   Maximum number of seconds between log flushes (default 5s)
      --log_backtrace_at value         when logging hits line file:N, emit a stack trace (default :0)
      --log_dir value                  If non-empty, write log files in this directory
      --logtostderr value              log to standard error instead of files (default true)
      --port int                       The port to listen to for incoming HTTP requests (default 9090)
      --stderrthreshold value          logs at or above this threshold go to stderr (default 2)
  -v, --v value                        log level for V logs
      --vmodule value                  comma-separated list of pattern=N settings for file-filtered logging