kubernetes: kubeadm --apiserver-advertise-address not working
What happened:
I have 2 nodes with a VPN between them. I also have a network interface with Internet access. When I run the ip addr show
command, the output looks like this:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether de:1c:44:57:a0:52 brd ff:ff:ff:ff:ff:ff
inet 10.64.214.163/31 brd 10.64.214.163 scope global dynamic ens2
valid_lft 69044sec preferred_lft 69044sec
inet6 2001:bc8:47a8:1b51::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::dc1c:44ff:fe57:a052/64 scope link
valid_lft forever preferred_lft forever
3: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 100
link/none
inet 10.8.0.10 peer 10.8.0.9/32 scope global tun0
valid_lft forever preferred_lft forever
inet6 fe80::8964:83e6:a102:47be/64 scope link stable-privacy
valid_lft forever preferred_lft forever
After I create the cluster by running
kubeadm init --ignore-preflight-errors=NumCPU --apiserver-advertise-address=10.8.0.10 --pod-network-cidr=192.168.0.0/16
and then run kubectl get nodes -o wide
I get:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
eh-s01 Ready master 5d v1.15.3 10.64.214.163 <none> Debian GNU/Linux 10 (buster) 4.19.0-5-amd64 docker://19.3.2
eh-s02 Ready <none> 5d v1.15.3 10.64.30.39 <none> Debian GNU/Linux 10 (buster) 4.19.0-5-amd64 docker://19.3.2
As you can see, the INTERNAL-IP column shows the IP of the ens2 interface instead of the tun0 IP.
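For reference, the registered addresses can be read straight from the node objects; a minimal check using kubectl's built-in jsonpath output (node names taken from the listing above):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
Here it prints 10.64.214.163 for eh-s01 rather than the expected 10.8.0.10.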
What you expected to happen:
The IP of the tun0 interface should appear in the INTERNAL-IP column.
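Note that --apiserver-advertise-address only controls the address the API server advertises; the INTERNAL-IP column is reported by the kubelet, which by default registers the address of the interface holding the default route. A minimal workaround sketch, assuming the standard kubeadm kubelet drop-in on Debian (/etc/default/kubelet) and the tun0 address shown above:
# /etc/default/kubelet -- sourced by the kubeadm-installed kubelet service unit
KUBELET_EXTRA_ARGS=--node-ip=10.8.0.10
Then restart the kubelet on each node (with that node's own tun0 address) so it re-registers:
systemctl restart kubelet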
How to reproduce it (as minimally and precisely as possible):
- Create an OpenVPN server
- Join the OpenVPN server from each node
- Create a cluster with
kubeadm init --ignore-preflight-errors=NumCPU --apiserver-advertise-address=[node private ip] --pod-network-cidr=192.168.0.0/16
(a config-file equivalent is sketched after this list)
- Install the Calico CNI:
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml >> cni.log
- Run
kubectl get nodes -o wide
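The same init settings can also be expressed as a kubeadm configuration file, which is where the kubelet's node-ip can be pinned in one place; a minimal sketch, assuming the v1beta2 config API that ships with kubeadm v1.15 and the addresses from this report:
# kubeadm-config.yaml -- sketch only; substitute each node's VPN address
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.8.0.10
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 10.8.0.10
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16
It is then passed to init instead of the individual flags:
kubeadm init --ignore-preflight-errors=NumCPU --config kubeadm-config.yaml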
Environment:
- Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: Scaleway
- OS (e.g. cat /etc/os-release):
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
- Kernel (e.g. uname -a): Linux EH-S01 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) x86_64 GNU/Linux
- Install tools: kubeadm
- Network plugin and version (if this is a network-related bug): calico v3.8.2
- Others:
Hi guys, I’m trying to set up a k8s cluster on bare metal. For now, I have 3 servers with public IPs and VLAN interfaces (network: 10.10.0.0/24) connected via vSwitch. I’m using kubeadm to create a single control-plane cluster. I want the cluster to run on the VLAN’s internal IPs, with access from the Internet through the master node’s public IP. Is this the correct init command?
kubeadm init --control-plane-endpoint=10.10.0.1 --apiserver-advertise-address=10.10.0.1
How can I check that all of the cluster’s traffic goes through the internal network rather than the public one?
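One way to check, sketched under the assumption that the public interface is named eth0 and the VLAN interface vlan0 (both hypothetical names here), and that the API server listens on the default port 6443:
# No API-server traffic should appear on the public interface...
tcpdump -ni eth0 port 6443
# ...while the VLAN interface should carry it
tcpdump -ni vlan0 port 6443
Confirming that kubectl get nodes -o wide shows INTERNAL-IP values inside 10.10.0.0/24 is a quick first check as well.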