minikube: Using --profile with kubeadm causes kubeadm init: Process exited with status 1
Is this a BUG REPORT or FEATURE REQUEST? (choose one): bug report
Please provide the following details:
Environment:
- OS: macOS 10.13.3 (cat: /etc/os-release: No such file or directory)
- minikube version: v0.25.0
- VM driver: "DriverName": "virtualbox",
- ISO version: "Boot2DockerURL": "file:///Users/kevinrosendahl/.minikube/cache/iso/minikube-v0.25.1.iso",
What happened: minikube fails to start when passing in a profile
What you expected to happen: minikube successfully starts a cluster with a given profile
How to reproduce it (as minimally and precisely as possible):
minikube start -p repro --bootstrapper kubeadm
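For comparison, the same start without -p (which uses the default minikube profile) presumably does not hit this, since the VM hostname then matches the hard-coded kubelet override discussed in the diagnosis below:
$ minikube start --bootstrapper kubeadm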
Output of minikube logs (if applicable):
$ minikube -p repro logs
-- Logs begin at Sat 2018-02-24 00:55:03 UTC, end at Sat 2018-02-24 00:59:07 UTC. --
-- No entries --
Anything else we need to know: Here are the logs of the command:
$ minikube start -p repro --bootstrapper kubeadm
Starting local Kubernetes v1.9.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0223 17:13:39.910248 90965 start.go:276] Error starting cluster: kubeadm init error running command: sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --skip-preflight-checks: Process exited with status 1
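The kubeadm command below is rerun by hand from inside the VM; the profile-aware ssh subcommand gets you a shell there:
$ minikube ssh -p repro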
Rerunning kubeadm init shows that it’s failing to taint/label the node:
$ sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/var/lib/localkube/certs/"
[kubeconfig] Using existing up-to-date KubeConfig file: "admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 10.503617 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node repro as master by adding a label and a taint
timed out waiting for the condition
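One way to see why the [markmaster] step stalls is to list the registered nodes from inside the VM; a sketch, assuming kubectl is available in the VM and using kubeadm's default admin kubeconfig path:
$ kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
If no node named repro ever shows up, the taint/label step has nothing to mark and eventually times out.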
It appears the underlying issue is that the kubelet never successfully registers the node with the apiserver in the first place:
$ journalctl -u kubelet -f
-- Logs begin at Sat 2018-02-24 00:55:03 UTC. --
Feb 24 01:01:43 repro kubelet[3386]: I0224 01:01:43.604083 3386 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Feb 24 01:01:43 repro kubelet[3386]: I0224 01:01:43.620252 3386 kubelet_node_status.go:82] Attempting to register node minikube
Feb 24 01:01:43 repro kubelet[3386]: E0224 01:01:43.622745 3386 kubelet_node_status.go:106] Unable to register node "minikube" with API server: nodes "minikube" is forbidden: node "repro" cannot modify node "minikube"
Looks like the kubelet is being passed --hostname-override=minikube, which I believe causes it to attempt to register the node as minikube instead of repro, even though it authenticates as the user system:node:repro. The Node authorizer only allows a kubelet to modify its own Node object, so the registration is rejected.
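This is easy to confirm from inside the VM by dumping the kubelet's systemd unit (the same unit journalctl reads above) and looking for the flag:
$ systemctl cat kubelet | grep hostname-override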
Also, FWIW, this does not seem to affect the localkube bootstrapper.
Hope this helps, let me know if there’s any more information I can provide.
About this issue
- State: closed
- Created 6 years ago
- Reactions: 8
- Comments: 17 (5 by maintainers)
I believe we accidentally fixed this. I was able to start up two VMs on VirtualBox and macOS with minikube v0.33.1:
$ minikube start -p repro --bootstrapper kubeadm
$ minikube start -p repro2 --bootstrapper kubeadm
$ minikube ssh -p repro hostname
$ minikube ssh -p repro2 hostname
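Each hostname call presumably returns the profile name (repro and repro2 respectively), which would confirm the kubelet now registers each node under its own profile name rather than a hard-coded minikube.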
I do note the following warning in kubeadm land, however: