kubeadm: kubeadm init “--apiserver-advertise-address=publicIP” not working, private IP works 1.13 version
BUG REPORT
Versions
kubeadm version: v1.13
Environment:
- Kubernetes version: v1.13.3
- Cloud provider or hardware configuration: GCP
- OS (e.g. from /etc/os-release): Ubuntu 16.04.5 LTS (Xenial Xerus)
- Kernel (e.g. uname -a): Linux ubuntu 4.15.0-1026-gcp #27~16.04.1-Ubuntu
What happened?
"sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=32.xxx.xx.xxx(PublicIP)"
Running kubeadm init with the public IP fails; with the private IP it succeeds.
What you expected to happen?
Expected kubeadm init to succeed with the public IP, but it failed.
How to reproduce it (as minimally and precisely as possible)?
After installing the packages with "apt-get install -y kubelet kubeadm kubectl", I tried to set up a single-node cluster with kubeadm.
Anything else we need to know?
After installing Docker 18.09.1, I tried to create a single-node cluster.
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 8
- Comments: 42 (6 by maintainers)
I found a solution.
Instead of using --apiserver-advertise-address=publicIp, use --apiserver-cert-extra-sans=publicIp.
Don't forget to replace the private IP with the public IP in your .kube/config if you use kubectl remotely.
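As a sketch, the workaround above looks like this; the IP addresses are hypothetical placeholders, and the sed edit assumes the generated kubeconfig references the private IP:

```shell
# Hypothetical addresses for illustration:
PRIVATE_IP=10.128.0.2
PUBLIC_IP=32.10.20.30

# Initialize with the public IP added only as an extra SAN on the serving cert:
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-cert-extra-sans="${PUBLIC_IP}"

# For remote kubectl access, point the kubeconfig at the public IP:
sed -i "s/${PRIVATE_IP}/${PUBLIC_IP}/" "$HOME/.kube/config"
```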
Hi, I have reproduced your issue and found the root cause. Please find my analysis below.
During kubeadm init, the kubelet starts kube-proxy, kube-apiserver, kube-controller-manager, and kube-scheduler, and tries to bind these services to the public IP address (assigned by GCP) of the VM.
But the problem on GCP is that the public IP address does not reside on the VM; it is a NAT mapping. The tricky part to understand is that packets sent to the NAT address are forwarded to the VM and vice versa, but a process or application on the VM cannot bind to that NAT IP address. The IP address with which you intend to create the cluster has to reside on the VM.
That is why it works with the internal IP address but not with the public IP address.
You can verify this by checking ‘tail -f /var/log/syslog’ while creating the cluster.
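You can also verify the NAT explanation above directly by comparing the addresses on the VM's interfaces with the address the outside world sees (the echo service URL is just one common option):

```shell
# List the IPv4 addresses actually assigned to this VM's interfaces;
# on GCP only the private (RFC 1918) address normally appears here:
ip -4 addr show

# Compare with the address external hosts see (one public echo service):
curl -s https://ifconfig.me; echo
```

If the public IP is missing from `ip -4 addr show`, no local process can bind to it, which is exactly why `--apiserver-advertise-address=publicIP` fails.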
Please let me know if this addressed your issue.
-M
@Zhang21 after a lot of days and nights of research… I finally found a way to make k8s work over the WAN.
short answer
--apiserver-advertise-address=publicIP is necessary; this tells the k8s workers to communicate with the master over the public IP. The default is the private IP, which leads to "10.96.0.1:443: i/o timeout".
flannel.alpha.coreos.com/public-ip-overwrite=publicIP is necessary; this sets the flannel pod node IP to the public IP.
full answer
Run ifconfig on the master and check whether the public IP appears on any of the master's interfaces. Some cloud providers use an Elastic IP, so no interface carries the public IP; in that case you must add the public IP interface info yourself, following Cloud_floating_IP_persistent. If you don't add the public IP interface, kubeadm init --apiserver-advertise-address=publicIP will not succeed.
As for --apiserver-advertise-address=publicIP, I use --control-plane-endpoint=publicIP --upload-certs --apiserver-advertise-address=publicIP myself; I think just --apiserver-advertise-address will also be OK.
I solved this problem by forwarding the private IP of the master node to the public IP of the master node on the worker node. Specifically, this was the command I ran on the worker node before running kubeadm join:
sudo iptables -t nat -A OUTPUT -d <Private IP of master node> -j DNAT --to-destination <Public IP of master node>
@x22n Your solution results in a verification error.
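A sketch of the two workarounds above, with hypothetical addresses and node name (note the flannel annotation should carry each node's own public IP, not the master's):

```shell
# Hypothetical addresses for illustration:
MASTER_PRIVATE_IP=10.128.0.2
MASTER_PUBLIC_IP=32.10.20.30
NODE_PUBLIC_IP=32.10.20.31

# On a worker, rewrite traffic destined for the master's private IP so it
# reaches the public IP instead (run before `kubeadm join`):
sudo iptables -t nat -A OUTPUT -d "${MASTER_PRIVATE_IP}" \
  -j DNAT --to-destination "${MASTER_PUBLIC_IP}"

# Tell flannel to advertise this node's public IP (node name is hypothetical):
kubectl annotate node worker-1 \
  "flannel.alpha.coreos.com/public-ip-overwrite=${NODE_PUBLIC_IP}" --overwrite
```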
Update
It turns out that the verification issue is due to leftover credentials (in $HOME/.kube) from the last master, before kubeadm reset was performed. This can be resolved by removing those leftover credentials.
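One way to clear the stale credentials, sketched under the assumption that the master has already been re-initialized with kubeadm init (/etc/kubernetes/admin.conf is the standard kubeadm location for the admin kubeconfig):

```shell
# Remove credentials left over from the previous cluster:
rm -rf "$HOME/.kube"

# Re-copy the freshly generated admin kubeconfig:
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```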
And, the following seems to be a clean solution.
I just tested the solution above on Compute Engine.
--apiserver-cert-extra-sans=publicIp does not solve the problem. Yes, it adds the public IP to the certs, but it does not affect the connection procedure. The worker nodes look for the apiserver-advertise-address during join, so they will not connect to the private IP if no route exists. The API server itself has two parameters, --advertise-address ip and --bind-address ip, which looks reasonable; but how can these addresses be configured during kubeadm init?
I managed to solve the problem by enabling inbound traffic on port 6443 and using the flag --control-plane-endpoint instead of --apiserver-advertise-address.
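A sketch of that approach on GCP; the firewall rule name and addresses are hypothetical, and --apiserver-cert-extra-sans is added so remote kubectl can also verify the cert:

```shell
PUBLIC_IP=32.10.20.30

# Allow inbound traffic to the API server port (GCP example; rule name is made up):
gcloud compute firewall-rules create k8s-apiserver \
  --direction=INGRESS --allow=tcp:6443

# Use a stable endpoint instead of advertising the non-bindable public IP:
sudo kubeadm init \
  --control-plane-endpoint="${PUBLIC_IP}:6443" \
  --apiserver-cert-extra-sans="${PUBLIC_IP}" \
  --pod-network-cidr=10.244.0.0/16
```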
I am facing the same issue as well: kubeadm init fails with the public IP and reports that the kubelet is misconfigured ("This error is likely caused by: …").
yes… still have the problem… I am tired; it's not worth spending a lot of time on it. I finally bought another machine inside the LAN. I found spending money makes me happy.
I am trying on a different cloud provider but no luck. Any suggestion or solution?
Thank you @spockmang for the explanation. Just for the record, I faced the same issue on OpenStack.
I'm for closing the issue here because this is not a kubeadm problem.
Probably the best option to get a suggestion here is to reopen the issue in k/k and tag SIG Network and SIG Cloud Provider. @neolit123 opinions?
Hi, even I have the exact same problem trying on Azure VMs (Red Hat 7 OS). Does anyone have suggestions to resolve the issue?
This is what I get every time I run kubeadm init… I tried re-installing kubelet and kubeadm, ran yum update, and restarted the VM.
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred: timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker. Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
Thanks Sagar
Thanks, @spockmang, for the insight. Do you have a recommendation or steps to make it work with a public IP address?
I am able to use curl and ping.