kubernetes: kubeadm join fails with "[discovery] Failed to request cluster info, will try again: getsockopt: connection refused"

Hi, I have created a Kubernetes cluster, but kubeadm join returns this error:

root@nodo1:~# kubeadm join --token 53762b.7b646ca3e558be4c 10.0.2.15:6443 --discovery-token-ca-cert-hash sha256:40b15a8b5914e531cd938d8635aab0ef9cdf3b977adf573365c4dd557f17f406
[preflight] Running pre-flight checks.
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
        [WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "10.0.2.15:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.2.15:6443"
[discovery] Failed to request cluster info, will try again: [Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.0.2.15:6443: getsockopt: connection refused]
[discovery] Failed to request cluster info, will try again: [Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.0.2.15:6443: getsockopt: connection refused]

Can you help me?

Port 6443 is listening on the master, and the firewall is not enabled:

tcp6       0      0 :::6443                 :::*                    LISTEN      17208/kube-apiserve
root@master:~# ufw status
Status: inactive
root@nodo1:~# ufw status
Status: inactive
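
A quick way to test reachability of the API server port from the node (a sketch; nc and curl assumed available on nodo1):

nc -zv 10.0.2.15 6443
curl -k https://10.0.2.15:6443/version

Even an HTTP 401/403 from curl would prove the port is reachable; "connection refused" means the TCP connection itself is failing.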

Thanks so much!

Xavier.

Steps

Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Every Vagrant development environment requires a box. You can search for 
# boxes at https://atlas.hashicorp.com/search.

BOX_IMAGE = "bento/ubuntu-16.04"
NODO_COUNT = 2

Vagrant.configure("2") do |config|
  config.vm.define "master" do |subconfig|
    subconfig.vm.box = BOX_IMAGE
    subconfig.vm.hostname = "master"
    subconfig.vm.network :private_network, ip: "10.0.0.10"
  end

  (1..NODO_COUNT).each do |i|
    config.vm.define "nodo#{i}" do |subconfig|
      subconfig.vm.box = BOX_IMAGE
      subconfig.vm.hostname = "nodo#{i}"
      subconfig.vm.network :private_network, ip: "10.0.0.#{i + 10}"
    end
  end

  # Install avahi on all machines
  config.vm.provision "shell", inline: <<-SHELL
    apt-get install -y avahi-daemon libnss-mdns
  SHELL
end

vagrant up
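
Note that with VirtualBox, eth0 on every VM is the NAT interface and typically gets the same 10.0.2.15 address; the 10.0.0.x addresses above land on eth1. A quick sanity check after vagrant up (a sketch):

vagrant ssh master -c "ip -4 addr show eth1"
vagrant ssh nodo1 -c "ip -4 addr show eth1"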

Docker installation

Master and all nodes

sudo apt-get remove docker docker-engine docker.io
sudo apt-get update
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce
sudo docker run hello-world
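
The preflight warning shown earlier ("docker version is greater than the most recently validated version") comes from installing the latest docker-ce. To install the validated 17.03 line instead, something like this should work (a sketch; the exact version string below is hypothetical, pick a real one from the madison output):

apt-cache madison docker-ce
sudo apt-get install -y docker-ce=17.03.2~ce-0~ubuntu-xenial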

Kubeadm installation

Master and all nodes

https://kubernetes.io/docs/setup/independent/install-kubeadm/

sudo -i
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
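
Optionally, hold the packages so unattended upgrades don't bump the cluster version later (not part of the original steps):

apt-mark hold kubelet kubeadm kubectl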

Cluster initialization

Only on the master

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

sudo -i
swapoff -a

To make this permanent, comment out the swap entry in /etc/fstab so it looks similar to this:
#UUID=xxxxxxxx-xxxx-xxxxx-xxx-xxxxxxxxxxx none            swap    sw              0       0
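
A one-liner that comments out any swap entries (a sketch; it keeps a .bak backup, but review /etc/fstab afterwards):

sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab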
kubeadm init

Output:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 53762b.7b646ca3e558be4c 10.0.2.15:6443 --discovery-token-ca-cert-hash sha256:40b15a8b5914e531cd938d8635aab0ef9cdf3b977adf573365c4dd557f17f406

Without root:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

With root:

sudo -i
export KUBECONFIG=/etc/kubernetes/admin.conf

Weave net installation

sudo -i
export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
root@master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                      1/1       Running   0          59s
kube-system   kube-apiserver-master            1/1       Running   0          59s
kube-system   kube-controller-manager-master   1/1       Running   0          59s
kube-system   kube-dns-6f4fd4bdf-wc56k         3/3       Running   0          7m
kube-system   kube-proxy-5frr9                 1/1       Running   0          7m
kube-system   kube-scheduler-master            1/1       Running   0          59s
kube-system   weave-net-7ccwp                  2/2       Running   0          1m
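
Only the master is registered at this point; once a worker joins (next step) it should also appear in:

kubectl get nodes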

Node join

root@nodo1:~# kubeadm join --token 53762b.7b646ca3e558be4c 10.0.2.15:6443 --discovery-token-ca-cert-hash sha256:40b15a8b5914e531cd938d8635aab0ef9cdf3b977adf573365c4dd557f17f406
[preflight] Running pre-flight checks.
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
        [WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "10.0.2.15:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.2.15:6443"
[discovery] Failed to request cluster info, will try again: [Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.0.2.15:6443: getsockopt: connection refused]
[discovery] Failed to request cluster info, will try again: [Get https://10.0.2.15:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: dial tcp 10.0.2.15:6443: getsockopt: connection refused]

But on the master:

root@master:~# netstat -ntlp | grep LISTEN
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      16673/kubelet
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      17363/kube-proxy
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      17062/kube-schedule
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      16982/etcd
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      17125/kube-controll
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      16982/etcd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1077/sshd
tcp        0      0 0.0.0.0:6783            0.0.0.0:*               LISTEN      18096/weaver
tcp        0      0 127.0.0.1:6784          0.0.0.0:*               LISTEN      18096/weaver
tcp6       0      0 :::10250                :::*                    LISTEN      16673/kubelet
tcp6       0      0 :::6443                 :::*                    LISTEN      17208/kube-apiserve
tcp6       0      0 :::10255                :::*                    LISTEN      16673/kubelet
tcp6       0      0 :::10256                :::*                    LISTEN      17363/kube-proxy
tcp6       0      0 :::22                   :::*                    LISTEN      1077/sshd
tcp6       0      0 :::6781                 :::*                    LISTEN      18517/weave-npc
tcp6       0      0 :::6782                 :::*                    LISTEN      18096/weaver
root@master:~# ufw status
Status: inactive

And on “nodo1”:

root@nodo1:~# netstat -ntlp | grep LISTEN
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1100/sshd
tcp6       0      0 :::22                   :::*                    LISTEN      1100/sshd
root@nodo1:~# ufw status
Status: inactive
root@nodo1:~# ping 10.0.2.15
PING 10.0.2.15 (10.0.2.15) 56(84) bytes of data.
64 bytes from 10.0.2.15: icmp_seq=1 ttl=64 time=0.028 ms
64 bytes from 10.0.2.15: icmp_seq=2 ttl=64 time=0.026 ms
64 bytes from 10.0.2.15: icmp_seq=3 ttl=64 time=0.025 ms

Network configuration

root@master:~# ifconfig
datapath  Link encap:Ethernet  HWaddr 7e:d8:32:a6:f0:f0
          inet6 addr: fe80::7cd8:32ff:fea6:f0f0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:46 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:5410 (5.4 KB)  TX bytes:2084 (2.0 KB)

docker0   Link encap:Ethernet  HWaddr 02:42:45:5d:27:2e
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:45ff:fe5d:272e/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:258 (258.0 B)

eth0      Link encap:Ethernet  HWaddr 08:00:27:10:47:e3
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe10:47e3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:347437 errors:0 dropped:0 overruns:0 frame:0
          TX packets:154788 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:447474386 (447.4 MB)  TX bytes:9757559 (9.7 MB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:82:cf:89
          inet addr:10.0.0.10  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe82:cf89/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:71 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:9602 (9.6 KB)  TX bytes:3594 (3.5 KB)
          Interrupt:16 Base address:0xd240

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:249310 errors:0 dropped:0 overruns:0 frame:0
          TX packets:249310 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:59983692 (59.9 MB)  TX bytes:59983692 (59.9 MB)

vethwe-bridge Link encap:Ethernet  HWaddr 12:86:24:e5:8a:fb
          inet6 addr: fe80::1086:24ff:fee5:8afb/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:31 errors:0 dropped:0 overruns:0 frame:0
          TX packets:47 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4078 (4.0 KB)  TX bytes:6144 (6.1 KB)

vethwe-datapath Link encap:Ethernet  HWaddr 2e:9e:56:4c:12:df
          inet6 addr: fe80::2c9e:56ff:fe4c:12df/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:47 errors:0 dropped:0 overruns:0 frame:0
          TX packets:31 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6144 (6.1 KB)  TX bytes:4078 (4.0 KB)

vethweplae45578 Link encap:Ethernet  HWaddr ee:25:03:21:b3:cd
          inet6 addr: fe80::ec25:3ff:fe21:b3cd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:6895 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8427 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1765424 (1.7 MB)  TX bytes:1746029 (1.7 MB)

vxlan-6784 Link encap:Ethernet  HWaddr b2:90:c2:01:0a:9f
          inet6 addr: fe80::b090:c2ff:fe01:a9f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:65485  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:16 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

weave     Link encap:Ethernet  HWaddr ce:39:f3:93:17:2d
          inet addr:10.32.0.1  Bcast:0.0.0.0  Mask:255.240.0.0
          inet6 addr: fe80::cc39:f3ff:fe93:172d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1376  Metric:1
          RX packets:6925 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8438 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1672462 (1.6 MB)  TX bytes:1747837 (1.7 MB)

root@nodo1:~# ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:56:63:e1:5b
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:56ff:fe63:e15b/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:258 (258.0 B)

eth0      Link encap:Ethernet  HWaddr 08:00:27:10:47:e3
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe10:47e3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:235452 errors:0 dropped:0 overruns:0 frame:0
          TX packets:113598 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:210171231 (210.1 MB)  TX bytes:7163294 (7.1 MB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:23:4c:a7
          inet addr:10.0.0.11  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe23:4ca7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:58 errors:0 dropped:0 overruns:0 frame:0
          TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6878 (6.8 KB)  TX bytes:3582 (3.5 KB)
          Interrupt:16 Base address:0xd240

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:290 errors:0 dropped:0 overruns:0 frame:0
          TX packets:290 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:17152 (17.1 KB)  TX bytes:17152 (17.1 KB)

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 17 (3 by maintainers)

Most upvoted comments

The issue is that you have a firewall running on your master node that should be disabled. It’s blocking incoming traffic. systemctl stop firewalld and it will be fine.

By default, when VMs are created in VirtualBox using Vagrant, every VM gets the same NAT IP address (10.0.2.15/24) and the same MAC address. Use the per-node IP addresses instead, for example:

Master - 192.168.33.101/24
Worker1 - 192.168.33.102/24
Worker2 - 192.168.33.103/24

Run the command below on the master:

kubeadm init --apiserver-advertise-address=192.168.33.101
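
The kubeadm join command printed by that init then points at the reachable address, along these lines (token and hash are placeholders):

kubeadm join --token <token> 192.168.33.101:6443 --discovery-token-ca-cert-hash sha256:<hash>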

Disabling the firewall worked, but is there a way to create an exception in the firewall so that I won't have to disable it entirely?
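
For reference, instead of stopping firewalld you can open just the ports the control plane needs, e.g. (a sketch based on the ports the kubeadm docs list; adjust to your setup):

# On the master: API server
firewall-cmd --permanent --add-port=6443/tcp
# On all nodes: kubelet API
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload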

@jerryduren, you need to pass the --apiserver-advertise-address=<master-node-external-ip> option when initializing a cluster.

In my case: kubeadm init --apiserver-advertise-address=10.1.1.2
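
In the Vagrant setup above, the equivalent would be to re-initialize with the master's private-network address from the Vagrantfile (a sketch; token and hash are placeholders):

kubeadm reset
kubeadm init --apiserver-advertise-address=10.0.0.10
# and on each worker:
kubeadm join --token <token> 10.0.0.10:6443 --discovery-token-ca-cert-hash sha256:<hash>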

Hi Ilya,

You can close the issue; the problem was the IP used in the kubeadm join command.

kubeadm had generated the IP of the bridge interface (the VirtualBox NAT address, 10.0.2.15), and the correct IP is the IP of the master host.

That solved the issue.

The battle continues.

Thanks so much!!!

Xavier.

On 26 Jan 2018, 7:05 PM, “Ilya Dmitrichenko” notifications@github.com wrote:

I’m not entirely sure this is not one of the known issues; I’m tempted to close… cc @luxas https://github.com/luxas.
