microk8s: Error: Get https://10.152.183.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.152.183.1:443: connect: no route to host

Please run microk8s.inspect and attach the generated tarball to this issue.

Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-flanneld is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-apiserver is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Service snap.microk8s.daemon-proxy is running
  Service snap.microk8s.daemon-kubelet is running
  Service snap.microk8s.daemon-scheduler is running
  Service snap.microk8s.daemon-controller-manager is running
  Service snap.microk8s.daemon-etcd is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy openSSL information to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster

WARNING: Docker is installed.
Add the following lines to /etc/docker/daemon.json:
    { "insecure-registries" : ["localhost:32000"] }
and then restart docker with: sudo systemctl restart docker
Building the report tarball
  Report tarball is at /var/snap/microk8s/1079/inspection-report-20191210_050225.tar.gz

After initializing helm (by creating the tiller service account), it successfully deploys the tiller pod, but helm is not able to communicate with tiller.

helm ls
Error: Get https://10.152.183.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.152.183.1:443: connect: no route to host

Aliases are added for helm and kubectl:

alias helm='microk8s.helm'
alias kubectl='microk8s.kubectl'
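For reference, a minimal way to make those aliases persistent for the current user (a sketch; adjust the file to your shell setup):

# Append the aliases to ~/.bash_aliases and reload them in the current shell
echo "alias kubectl='microk8s.kubectl'" >> ~/.bash_aliases
echo "alias helm='microk8s.helm'" >> ~/.bash_aliases
source ~/.bash_aliases

Alternatively, sudo snap alias microk8s.kubectl kubectl (and likewise for microk8s.helm) creates a system-wide snap alias.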

kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-9b8997588-6lmzt          0/1     Running   0          18h
tiller-deploy-68cff9d9cb-hgl2f   1/1     Running   0          22h

The tiller pod is running without any errors.

Also, when enabling dns with microk8s.enable dns, the coredns pod never becomes ready even though its status shows Running.

logs of coredns:

2019-12-10T04:25:56.424Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-12-10T04:26:06.424Z [INFO] plugin/ready: Still waiting on: "kubernetes"
E1210 04:26:14.260047       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Endpoints: Get https://10.152.183.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: i/o timeout
E1210 04:26:16.280384       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Endpoints: Get https://10.152.183.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: connect: no route to host
2019-12-10T04:26:16.424Z [INFO] plugin/ready: Still waiting on: "kubernetes"
E1210 04:26:18.296309       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Endpoints: Get https://10.152.183.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: connect: no route to host
E1210 04:26:20.304944       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Service: Get https://10.152.183.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: i/o timeout
E1210 04:26:20.312509       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Endpoints: Get https://10.152.183.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: connect: no route to host
2019-12-10T04:26:26.424Z [INFO] plugin/ready: Still waiting on: "kubernetes"
E1210 04:26:27.313029       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Namespace: Get https://10.152.183.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: i/o timeout
E1210 04:26:29.336441       1 reflector.go:134] pkg/mod/k8s.io/client-go@v10.0.0+incompatible/tools/cache/reflector.go:95: Failed to list *v1.Namespace: Get https://10.152.183.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: connect: no route to host
2019-12-10T04:26:36.424Z [INFO] plugin/ready: Still waiting on: "kubernetes"
2019-12-10T04:26:46.424Z [INFO] plugin/ready: Still waiting on: "kubernetes"
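The logs suggest the coredns pod cannot reach the in-cluster API service at 10.152.183.1:443 at all. As a quick check of pod-to-API connectivity (a sketch; the curlimages/curl image and the api-check pod name are just examples, not part of the original report):

# Inspect why the coredns pod is not Ready, then test the API service from inside a throwaway pod
microk8s.kubectl -n kube-system describe pod coredns-9b8997588-6lmzt
microk8s.kubectl run api-check --rm -it --restart=Never --image=curlimages/curl -- curl -k -m 5 https://10.152.183.1:443/version

Any HTTP response from the second command, even a 401/403, proves connectivity; a timeout or "no route to host" points at the host firewall or CNI path rather than the API server itself.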

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 20 (6 by maintainers)

Most upvoted comments

I spent some time on Oracle Cloud; here is what is probably biting us.

If you do a sudo iptables -S you will see that the INPUT chain ends with:

-A INPUT -j REJECT --reject-with icmp-host-prohibited

The forward chain starts with:

-A FORWARD -j REJECT --reject-with icmp-host-prohibited

If you remove these two rules, traffic should be able to flow to the API server:

sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited

I do not know much about firewalls, but it seems to me that these two rules, which are there by default, work against the default policies:

-P INPUT ACCEPT
-P FORWARD ACCEPT

Here is some info in case you want to create your own ingress/egress rules: the pods get IPs in 10.1.0.0/16 and the services get IPs in 10.152.183.0/24.
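For example, if you prefer targeted rules instead of deleting the REJECTs, something along these lines should work (a sketch based on the CIDRs above; the rule positions are assumptions, adjust them to your chain layout):

# Accept traffic from the pod and service networks ahead of the REJECT rules
sudo iptables -I INPUT 1 -s 10.1.0.0/16 -j ACCEPT
sudo iptables -I INPUT 1 -s 10.152.183.0/24 -j ACCEPT
sudo iptables -I FORWARD 1 -s 10.1.0.0/16 -j ACCEPT
sudo iptables -I FORWARD 1 -d 10.1.0.0/16 -j ACCEPT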

Also on Oracle Cloud, I ended up adding these to my /etc/iptables/rules.v4 before the existing -A INPUT -j REJECT rule:

-A INPUT -i vxlan.calico -j ACCEPT
-A INPUT -i cali+ -j ACCEPT

These mimic the ufw rules described in https://microk8s.io/docs/troubleshooting#heading--common-issues and are a bit more strict than -A INPUT -j ACCEPT.

I also commented out the FORWARD reject rule:

#-A FORWARD -j REJECT --reject-with icmp-host-prohibited

After making those two modifications to /etc/iptables/rules.v4, I ran:

 sudo iptables-restore < /etc/iptables/rules.v4
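To confirm the restore took effect, listing the chains should show the new ACCEPT rules and no FORWARD reject (just a verification step, not part of the original comment):

# Print the first rules of each chain to verify the changes were applied
sudo iptables -S INPUT | head
sudo iptables -S FORWARD | head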

@ktsakalozos I am facing the same issue on-prem; any help is appreciated. I have a freshly installed Kubernetes 1.16.0 cluster. This is a single-master cluster that we use for testing and integration with other network elements. I am intermittently facing issues with the nginx pod talking to the API, like this:

Error trying to get the default server TLS secret nginx-ingress/default-server-secret: could not get nginx-ingress/default-server-secret: Get "https://192.168.209.1:443/api/v1/namespaces/nginx-ingress/secrets/default-server-secret": dial tcp 192.168.209.1:443: i/o timeout

[root@001 ~/kubernetes-ingress/deployments] kubectl logs -n nginx-ingress nginx-ingress-57cdc75bdb-9kdrk
I0612 10:08:40.885600       1 main.go:169] Starting NGINX Ingress controller Version=1.6.3 GitCommit=b9378d56
F0612 10:09:10.893413       1 main.go:275] Error trying to get the default server TLS secret nginx-ingress/default-server-secret: could not get nginx-ingress/default-server-secret: Get https://192.168.209.1:443/api/v1/namespaces/nginx-ingress/secrets/default-server-secret: dial tcp 192.168.209.1:443: i/o timeout

Looks like the pod cannot access the Kubernetes API. This is usually a network configuration issue, solved with:

sudo iptables -P FORWARD ACCEPT
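Note that this policy change does not survive a reboot on its own; on Debian/Ubuntu hosts one way to persist it is the iptables-persistent package (an assumption about your distribution, not part of the original answer):

# Install the persistence service and save the current rules and policies
sudo apt-get install iptables-persistent
sudo netfilter-persistent save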

Please go through the common issues section at https://microk8s.io/docs/troubleshooting#common-issues