kubeadm: kubeadm 1.9.2 doesn't work over proxy

Versions

kubeadm version (use kubeadm version):

Environment:

  • Kubernetes version (use kubectl version): kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration: Vmware / Proxmox

  • OS (e.g. from /etc/os-release): Debian GNU/Linux 9 (stretch)

  • Kernel (e.g. uname -a): 4.9.65-3+deb9u2

  • Others:

What happened?

I try to execute kubeadm init --pod-network-cidr=192.168.0.0/16 and it gets stuck at:

[init] This might take a minute or longer if the control plane images have to be pulled.
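
To see where it hangs, the kubelet and Docker side can be checked on the node; a minimal sketch, assuming Docker as the container runtime and systemd-managed services:

journalctl -u kubelet -f        # watch kubelet logs for image pull / connection errors
docker ps -a                    # have any control plane containers been created yet?
docker images | grep gcr.io     # have the control plane images been pulled at all?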

What you expected to happen?

kubeadm runs fine and I get a working cluster node.

Anything else we need to know?

The problem is that the first time I created a cluster, I did it on my VMware Player with NAT and full access to the internet. On the second try, I created VMs (two for the master on Proxmox VE (KVM) and two nodes on VMware vSphere). That network is restricted, with no direct internet connection, so I added the following to /etc/profile:

export http_proxy="http://192.168.42.214:3128"
export https_proxy="http://192.168.42.214:3128"
export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com,.example.local,192.168.0.0/16,10.96.0.0/12,172.25.50.21,172.25.50.22,172.25.50.23,172.25.50.24"
export HTTP_PROXY="http://192.168.42.214:3128"
export HTTPS_PROXY="http://192.168.42.214:3128"
export NO_PROXY="localhost,127.0.0.1,localaddress,.localdomain.com,.example.local,192.168.0.0/16,10.96.0.0/12,172.25.50.21,172.25.50.22,172.25.50.23,172.25.50.24"
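
To verify that a new login shell actually picks these up, the node can be checked right before running kubeadm init:

. /etc/profile
env | grep -i _proxy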

In the firewall log I can see that there is still traffic going directly to 173.194.76.82 (gcr.io) via HTTPS instead of through the proxy. That is bad. Also, kubeadm hangs forever. So I added the host to the whitelist on the firewall (NAT), and then I got:

...
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 75.501467 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ina-test-kubm-01 as master by adding a label and a taint
[markmaster] Master ina-test-kubm-01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
...
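
To confirm that the control plane images really arrived once the firewall rule was in place, a sketch (assuming Docker as the runtime; for 1.9 the images live under gcr.io/google_containers):

docker images | grep gcr.io/google_containers
# should list kube-apiserver-amd64, kube-controller-manager-amd64, kube-scheduler-amd64, etcd-amd64, ...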

Now I can go forward with the network part.
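
The network manifest also has to be fetched through the proxy; a sketch with a placeholder URL (the real manifest depends on the chosen CNI plugin, e.g. Calico for 192.168.0.0/16):

curl -x http://192.168.42.214:3128 -o calico.yaml https://example.com/path/to/calico.yaml   # placeholder URL
kubectl apply -f calico.yaml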

Most upvoted comments

SOLVED - I had a cgroup driver mismatch between docker and kubelet. Rectified it and init completed successfully.
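
For reference, a sketch of how such a mismatch can be checked and fixed (the kubelet drop-in path is the one kubeadm's Debian packages used around 1.9; adjust if yours differs):

docker info 2>/dev/null | grep -i cgroup                              # e.g. "Cgroup Driver: cgroupfs"
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# make the kubelet's --cgroup-driver value match Docker's, then:
systemctl daemon-reload && systemctl restart kubelet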

hi,

# env | grep -i _proxy
HTTP_PROXY=http://192.168.42.214:3128
https_proxy=http://192.168.42.214:3128
http_proxy=http://192.168.42.214:3128
no_proxy=localhost,127.0.0.1,localaddress,.localdomain.com,.localdomain.local,192.168.0.0/16,10.96.0.0/12,172.25.50.21,172.25.50.22,172.25.50.23,172.25.50.24
NO_PROXY=localhost,127.0.0.1,localaddress,.localdomain.com,.localdomain.local,192.168.0.0/16,10.96.0.0/12,172.25.50.21,172.25.50.22,172.25.50.23,172.25.50.24
HTTPS_PROXY=http://192.168.42.214:3128

I had the same problem on the worker nodes too, so I assume that one or more processes drop the environment variables or do not use them.

What I can imagine is that the process (dash/sh) doesn't read /etc/profile …
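
That would match how systemd behaves: services like the Docker daemon and the kubelet do not read /etc/profile at all, so they never see the proxy variables. A sketch of checking and fixing that for Docker via a systemd drop-in (drop-in file name is arbitrary; values taken from the settings above):

systemctl show --property=Environment docker    # is the proxy visible to the daemon?

# if not, create /etc/systemd/system/docker.service.d/http-proxy.conf containing:
# [Service]
# Environment="HTTP_PROXY=http://192.168.42.214:3128"
# Environment="HTTPS_PROXY=http://192.168.42.214:3128"
# Environment="NO_PROXY=localhost,127.0.0.1,localaddress,.localdomain.com,.example.local,192.168.0.0/16,10.96.0.0/12,172.25.50.21,172.25.50.22,172.25.50.23,172.25.50.24"

systemctl daemon-reload && systemctl restart docker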