rancher: Can provision k8s cluster on AWS, but it's not available to manage

Rancher versions:
rancher/server or rancher/rancher: v2.0.0-beta3
rancher/agent or rancher/rancher-agent:
Rancher user interface: v2.0.35

Operating system and kernel: (cat /etc/os-release, uname -r preferred) AWS EC2 t2.medium, eu-central-1a

Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO) AWS

Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB) Single node Rancher, internal DB. Default setup.

Environment Template: (Cattle/Kubernetes/Swarm/Mesos) Kubernetes.

Steps to Reproduce:

  1. Install Rancher
  2. Create Kubernetes nodes on AWS - one master and one worker.

Results: The EC2 instances are created successfully, but Rancher shows this error: [workerPlane] Failed to bring up Worker Plane: Failed to verify healthcheck: Failed to check https://localhost:10250/healthz for service [kubelet] on host [35.158.243.106]: Get https://localhost:10250/healthz: dial tcp 127.0.0.1:10250: getsockopt: connection refused.

(Screenshot of the error in the Rancher UI, 2018-04-30 at 10:59:18)
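A minimal diagnostic sketch, assuming SSH access to the worker node named in the error (35.158.243.106); it only checks whether the kubelet container is running and whether anything answers on port 10250:

    # Run on the worker node itself.
    # Is the kubelet container up (or restarting/exited)?
    docker ps -a --filter name=kubelet

    # Does anything answer the healthcheck that Rancher is probing?
    curl -sk https://localhost:10250/healthz

    # If the container keeps exiting, its recent logs usually say why.
    docker logs --tail=100 kubelet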

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 18 (6 by maintainers)

Most upvoted comments

If the error is Failed to check https://localhost:10250/healthz for service [kubelet], please post the output of docker logs --tail=all kubelet, as that will reveal why the kubelet can't start or is unreachable.
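A hedged sketch of collecting that output from the worker node; the SSH key path, user, and output file are placeholders for whatever your EC2 instance uses:

    # Placeholders: adjust the key, user, and host for your node.
    ssh -i ~/.ssh/your-key.pem ubuntu@35.158.243.106 \
      'docker logs --tail=all kubelet 2>&1' > kubelet.log

    # Then paste or attach kubelet.log to the issue.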

@phynias For using the cloud provider, please check https://rancher.com/docs/rancher/v2.x/en/concepts/clusters/cloud-providers/#amazon
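As I read that page, the two things most often missing when the Amazon cloud provider is enabled are an IAM instance profile on the nodes (with the permissions the doc lists) and node hostnames that match the EC2 private DNS names. A hedged way to inspect both; the instance ID and region below are placeholders:

    # Run from a machine with the AWS CLI configured; replace the instance ID and region.
    aws ec2 describe-instances \
      --instance-ids i-0123456789abcdef0 \
      --region eu-central-1 \
      --query 'Reservations[].Instances[].{Profile:IamInstanceProfile.Arn,PrivateDns:PrivateDnsName,Tags:Tags}' \
      --output json

    # Run on the node itself: this should match PrivateDns above when the AWS provider is enabled.
    hostname -f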

And just to clarify: the only time I see this error is when I edit a cluster and set its "Cloud Provider" to AWS. If it is set to None, I see no errors and everything else appears to work. I need to set it to AWS so I can use EBS volumes.
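For verifying that the provider actually took effect once the cluster comes up, a small sketch, assuming kubectl access to the cluster; the jsonpath simply prints each node's providerID, which the AWS cloud provider populates:

    # Nodes registered through the AWS cloud provider get a spec.providerID like aws:///<az>/<instance-id>.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}'

    # Dynamic EBS provisioning then needs a StorageClass backed by the kubernetes.io/aws-ebs provisioner.
    kubectl get storageclass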