rancher: kubernetes-dashboard and tiller won't connect to Kubernetes API on fresh install
Rancher Versions: Server: 1.3.3; Kubernetes (if applicable): 1.5.1
Docker Version: 1.12.3
OS and where are the hosts located? (cloud, bare metal, etc): Single RancherOS host running on a VM.
Setup Details: (single node rancher vs. HA rancher, internal DB vs. external DB) Single node rancher.
Environment Type: (Cattle/Kubernetes/Swarm/Mesos) Single Kubernetes environment.
Steps to Reproduce: I’ve now been able to replicate this twice (both fresh installs) on two different remote VMs (one private cloud and one public cloud), but didn’t see this problem when I used VirtualBox running on my own laptop.
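For context, the installs follow the standard single-node Rancher bootstrap; a minimal sketch (the image tag matches the server version above, everything else is the stock procedure rather than the literal commands used):
# start the Rancher server container on the host
sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:v1.3.3
# then, in the Rancher UI: create a Kubernetes environment and register the host
# using the docker run command generated under Infrastructure -> Hosts -> Add Host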
Results:
Neither kubernetes-dashboard nor tiller are able to connect to the Kubernetes API. On kubernetes-dashboard I see this in the logs:
Using HTTP port: 9090
Creating API server client for https://10.43.0.1:443
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: the server has asked for the client to provide credentials
Refer to the troubleshooting guide for more information: https://github.com/kubernetes/dashboard/blob/master/docs/user-guide/troubleshooting.md
and on tiller I see this:
Cannot initialize Kubernetes connection: the server has asked for the client to provide credentials
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x401a19]
goroutine 1 [running]:
panic(0x155bd80, 0xc42000a030)
/usr/local/go/src/runtime/panic.go:500 +0x1a1
main.start(0x201e620, 0x2051450, 0x0, 0x0)
/home/ubuntu/.go_workspace/src/k8s.io/helm/cmd/tiller/tiller.go:93 +0x739
k8s.io/helm/vendor/github.com/spf13/cobra.(*Command).execute(0x201e620, 0xc42000a250, 0x0, 0x0, 0x201e620, 0xc42000a250)
/home/ubuntu/.go_workspace/src/k8s.io/helm/vendor/github.com/spf13/cobra/command.go:603 +0x439
k8s.io/helm/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x201e620, 0x0, 0x1708aa4, 0x12)
/home/ubuntu/.go_workspace/src/k8s.io/helm/vendor/github.com/spf13/cobra/command.go:689 +0x367
k8s.io/helm/vendor/github.com/spf13/cobra.(*Command).Execute(0x201e620, 0x2051144, 0x16f6045)
/home/ubuntu/.go_workspace/src/k8s.io/helm/vendor/github.com/spf13/cobra/command.go:648 +0x2b
main.main()
/home/ubuntu/.go_workspace/src/k8s.io/helm/cmd/tiller/tiller.go:80 +0x16c
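(For anyone reproducing: both sets of logs above can be pulled with kubectl once it is pointed at the environment’s kubeconfig, assuming both pods live in kube-system as usual; the pod names below are placeholders and will differ per install.)
# find the system pods for the dashboard and tiller
kubectl --namespace kube-system get pods
# then dump the logs for each
kubectl --namespace kube-system logs kubernetes-dashboard-<pod-id>
kubectl --namespace kube-system logs tiller-deploy-<pod-id>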
In both cases, this seems to be the pertinent bit:
Reason: the server has asked for the client to provide credentials
which seems to indicate that the client is attempting to connect to the API without providing an authentication token.
Using one of the deployed containers, if I try to connect to the Kubernetes API without providing an authentication token (e.g. curl -k https://10.43.0.1:443) then I get an Unauthorized response, but I’m able to verify that I can connect to the API using the provided token as follows:
TOKEN_VALUE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $TOKEN_VALUE" https://10.43.0.1:443
By running kubectl get secrets --all-namespaces | grep service-account I’m able to find all of the service-account tokens that exist, and by running commands like kubectl get --namespace default -o jsonpath="{.data.token}" secret default-token-z00td | base64 -D for each of these, I can see that the correct token has been set everywhere.
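To spare checking every secret by hand, the same check can be scripted; a rough sketch (it assumes kubectl is configured for this environment, and uses base64 -d as on Linux rather than -D):
# decode the token from every service-account secret, in every namespace
for ns_name in $(kubectl get secrets --all-namespaces | grep service-account | awk '{print $1 "/" $2}'); do
  ns=${ns_name%%/*}; name=${ns_name#*/}
  echo "== $ns_name =="
  kubectl get --namespace "$ns" -o jsonpath='{.data.token}' secret "$name" | base64 -d
  echo
done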
Finally, by viewing the Kubernetes config in the API for the Dashboard I can see that it has a volume definition for /var/run/secrets/kubernetes.io/serviceaccount, though I’m unable to confirm this within the running container as the container doesn’t start.
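The same wiring can also be checked from outside the container; something like the following should show whether the service-account secret is actually declared and mounted in the dashboard pod spec (the pod name is a placeholder):
# list the kube-system pods and inspect the dashboard pod’s declared volumes
kubectl --namespace kube-system get pods
kubectl --namespace kube-system get pod <dashboard-pod-name> -o jsonpath='{.spec.volumes}'
# describe also shows whether the secret volume destined for
# /var/run/secrets/kubernetes.io/serviceaccount was created successfully
kubectl --namespace kube-system describe pod <dashboard-pod-name>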
I’ve also tried installing more recent versions of Rancher (1.3.4, 1.4.0 and 1.4.1), and the problem also seems to occur in those, with the added downside that those more recent versions no longer have the Rancher dashboard to help debug what’s going wrong.
Any ideas on why this might be happening and/or where I go from here?
About this issue
- State: closed
- Created 7 years ago
- Comments: 17 (1 by maintainers)
Are those really “fresh” installs on newly provisioned hosts (VMs, droplets, whatever)?
Because I have seen this problem when trying to start from scratch on a host. Deleting the containers is not enough; I also had to delete the volumes. Bringing up a Rancher k8s environment on a cleaned host works for me.
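Concretely, the kind of host cleanup meant here is roughly the following (a sketch, not the exact commands; it is destructive and assumes nothing else on the host matters):
# remove every container on the host, then the leftover volumes that keep old state and credentials around
docker rm -f $(docker ps -aq)
docker volume rm $(docker volume ls -q)
# depending on the setup, Rancher agent state under /var/lib/rancher/state may also need clearing before re-registering the host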
Apologies for not sharing sooner, but here’s the work-around I was provided for this issue until a proper fix is made available.