linux-build: kubeadm init hangs indefinitely

I was running into an issue where kubeadm init was failing for me on the image bionic-containers-rock64-0.6.44-239-arm64.img. I was able to work around it by running apt-get purge kubelet kubectl kubeadm; apt-get install kubelet=1.9.8-00 kubeadm=1.9.8-00 kubectl=1.9.8-00. If others are running into this, it may be worth changing which version of Kubernetes is installed out of the box.

Edit: My master node setup in case it helps anyone else:

  • set up hostname, SSH configuration, /etc/hosts
  • scp over my k8s files
  • install crictl:
    • sudo apt-get install -y golang
    • go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
    • sudo mv go/bin/crictl /usr/bin
  • move to version 1.9.8-00:
    • sudo apt-get purge kubectl kubeadm kubelet
    • sudo apt-get install kubectl=1.9.8-00 kubeadm=1.9.8-00 kubelet=1.9.8-00
    • sudo systemctl enable kubelet.service
  • sudo kubeadm init
  • kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
  • install helm:
    • wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-arm64.tar.gz
    • tar -xzvf helm-v2.9.1-linux-arm64.tar.gz
    • sudo mv linux-arm64/helm /usr/bin/ && rm -rf linux-arm64/
    • kubectl create -f k8s/rbac-config.yml
    • helm init --service-account tiller --tiller-image=jessestuart/tiller:v2.9.1
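The contents of k8s/rbac-config.yml aren't shown above; a minimal sketch of the standard Tiller RBAC setup for Helm v2 (a tiller service account in kube-system bound to cluster-admin, per the Helm docs) would look something like:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

(cluster-admin is the simplest binding for a single-user cluster; a scoped Role is safer in shared setups.)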

I also tried moving Docker back to the last version verified for Kubernetes (docker-ce 17.03) while trying to get 1.10 to work, to no avail.

About this issue

  • State: open
  • Created 6 years ago
  • Comments: 20 (3 by maintainers)

Most upvoted comments

Ok so…

  1. install etcd from Debian
  2. run kubeadm init until it hangs. Before a kubeadm reset, save /etc/kubernetes/pki/etcd (way easier than doing the openssl steps manually). Alternatively you can use the alpha feature kubeadm alpha phase certs.
  3. configure etcd. There are at least two things to be done:
  • in /etc/kubernetes/etcd.env, force this: ETCD_UNSUPPORTED_ARCH=arm64
  • in the systemd service file (etcd.service) you should have at least these lines:
[Service]
EnvironmentFile=/etc/kubernetes/etcd.env
...
ExecStart=/usr/bin/etcd \
...
    --cert-file=/etc/kubernetes/pki/etcd/server.crt \
    --key-file=/etc/kubernetes/pki/etcd/server.key \
    --client-cert-auth \
    --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
    --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \
    --peer-key-file=/etc/kubernetes/pki/etcd/peer.key \
    --peer-client-cert-auth \
    --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
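The /etc/kubernetes/etcd.env file referenced by the unit above isn't shown; a sketch using etcd's standard ETCD_* environment variables might look like this (replace $YOURIP with your node's actual IP, since systemd does not expand variables inside an EnvironmentFile; ETCD_NAME and ETCD_DATA_DIR values are assumptions):

ETCD_UNSUPPORTED_ARCH=arm64
ETCD_NAME=master
ETCD_DATA_DIR=/var/lib/etcd
ETCD_LISTEN_CLIENT_URLS=https://$YOURIP:2379
ETCD_ADVERTISE_CLIENT_URLS=https://$YOURIP:2379

ETCD_UNSUPPORTED_ARCH=arm64 is the critical line here: without it, etcd refuses to start on arm64.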
  4. start etcd and check that it's sane:
systemctl daemon-reload
systemctl start etcd
export ETCDCTL_CA_FILE=/etc/kubernetes/pki/etcd/ca.crt
export ETCDCTL_CERT_FILE=/etc/kubernetes/pki/etcd/client.crt
export ETCDCTL_KEY_FILE=/etc/kubernetes/pki/etcd/client.key
export ETCDCTL_ENDPOINTS=https://$YOURIP:2379
etcdctl cluster-health
  5. tell kubeadm where to find your etcd in a config file:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: $YOURIP
etcd:
  endpoints:
  - https://$YOURIP:2379
  caFile: /etc/kubernetes/pki/etcd/ca.crt
  certFile: /etc/kubernetes/pki/etcd/client.crt
  keyFile: /etc/kubernetes/pki/etcd/client.key
networking:
  podSubnet: 10.244.0.0/16

and finally start kubeadm with kubeadm init --config=/path/to/previous.config --ignore-preflight-errors=all