kubernetes: Running Kubernetes Locally via Docker - `kubectl get nodes` returns `The connection to the server localhost:8080 was refused - did you specify the right host or port?`

Going through this guide to set up Kubernetes locally via Docker, I end up with the error message stated above.

Steps taken:

  • export K8S_VERSION='1.3.0-alpha.1' (tried 1.2.0 as well)
  • copy-paste the docker run command
  • download the appropriate kubectl binary and put it on PATH (which kubectl works)
  • (optionally) setup the cluster
  • run kubectl get nodes

In short, no magic. I am running this locally on Ubuntu 14.04, Docker 1.10.3. If you need more information, let me know.

About this issue

  • State: closed
  • Created 8 years ago
  • Reactions: 40
  • Comments: 62 (2 by maintainers)

Most upvoted comments

@xificurC @jankoprowski Have you checked whether the apiserver is running?

Please take a look at our troubleshooting guide: http://kubernetes.io/docs/troubleshooting/

If you still need help, please ask on stackoverflow.
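
For the docker-based local setup from the original report, a quick first check is whether anything is serving on port 8080 at all (a sketch; the container name filter and the insecure port are assumptions based on the guide's defaults):

docker ps | grep -i apiserver
curl -s http://localhost:8080/healthz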

You can solve this with “kubectl config”:

$ kubectl config set-cluster demo-cluster --server=http://master.example.com:8080
$ kubectl config set-context demo-system --cluster=demo-cluster
$ kubectl config use-context demo-system
$ kubectl get nodes
NAME                 STATUS    AGE
master.example.com   Ready     3h
node1.example.com    Ready     2h
node2.example.com    Ready     2h
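
To double-check which server the current context points at (using the demo names from above):

$ kubectl config current-context
demo-system
$ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
http://master.example.com:8080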

Hello, I'm getting the following error on CentOS 7. How can I solve this issue?

[root@ip-172-31-11-12 system]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Similar to @sumitkau, I solved my problem by pointing kubectl at a different kubeconfig location:

kubectl --kubeconfig /etc/kubernetes/admin.conf get no

You can also copy /etc/kubernetes/admin.conf to ~/.kube/config and it works, but I don't know whether that's good practice or not!

Running on Mac OS High Sierra, I solved this by enabling Kubernetes built into Docker itself.

(screenshots: Docker preferences with Kubernetes enabled)

If this happens in GCP, the command below will most likely resolve the issue:

gcloud container clusters get-credentials your-cluster --zone your-zone --project your-project
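
After fetching credentials, it is worth confirming that kubectl actually switched away from the stale localhost entry:

kubectl config current-context
kubectl get nodes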

Thanks to @mamirkhani. I solved this error. However, I just found this info in the "kubeadm init" output:

Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

I think this is the recommended solution.
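
Note that export KUBECONFIG=$HOME/admin.conf only lasts for the current shell; one way to make it persistent (a sketch, assuming bash) is:

echo 'export KUBECONFIG=$HOME/admin.conf' >> ~/.bashrc
source ~/.bashrc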

In my case I just had to remove ~/.kube/config, which was left over from a previous attempt.

I had this issue. This solution worked for me:

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

If you don't have admin.conf, please install kubeadm, and then remove ~/.kube/cache:

rm -rf ~/.kube/cache

I was trying to get the status from a remote system using Ansible and was facing the same issue. I tried this and it worked:

kubectl --kubeconfig ./admin.conf get pods --all-namespaces -o wide
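
The same approach works from any remote workstation: copy the admin kubeconfig down once and point kubectl at it (a sketch; the host name and paths are illustrative):

scp root@master.example.com:/etc/kubernetes/admin.conf ./admin.conf
kubectl --kubeconfig ./admin.conf get nodes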

I had this issue. This solution worked for me:

export KUBECONFIG=/etc/kubernetes/admin.conf

Try using --server to specify your master:

kubectl --server=16.187.189.90:8080 get pod -o wide

You need to switch context:

kubectl config use-context docker-for-desktop
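
To list the contexts available before switching (the name can differ between setups):

kubectl config get-contexts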

I had the same problem. When creating a cluster via the web GUI in Google Cloud and trying to run kubectl, I get

The connection to the server localhost:8080 was refused - did you specify the right host or port?

All you have to do is fetch the kubectl config for your cluster, which will be stored in $HOME/.kube/config:

$ gcloud container clusters get-credentials guestbook2
Fetching cluster endpoint and auth data.
kubeconfig entry generated for guestbook2.

Now kubectl works just fine

You must run these commands first -

[user@k8s-master ~]# mkdir -p $HOME/.kube
[user@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[user@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

While I know that there might be multiple reasons for failure here, in my case removing ~/.kube/cache helped immediately.

Hi, I still hit this problem with:

kubernetes-master-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-node-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-unit-test-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-ansible-0.6.0-0.1.gitd65ebd5.el7.noarch
kubernetes-client-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-1.4.0-0.1.git87d9d8d.el7.x86_64

If I configure KUBE_API_ADDRESS with the value below, I hit this error:

KUBE_API_ADDRESS="--insecure-bind-address=10.10.10.xx"

but it works if I pass the option --server=10.10.10.xx:8080 on the command line.

If I configure KUBE_API_ADDRESS with the value below, it works fine:

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

Update the entry in /etc/kubernetes/apiserver (on the master server):

KUBE_API_PORT="--port=8080"

then do a systemctl restart kube-apiserver.
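
After restarting the apiserver, you can confirm the insecure port is actually answering (a sketch; substitute your master's address for 10.10.10.xx):

curl -s http://10.10.10.xx:8080/version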

Use the commands below. They worked for me.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

I deleted the old config from ~/.kube and then restarted Docker (for macOS), and it rebuilt the config folder. All good now when I do 'kubectl get nodes'.

kubectl is expecting ~/.kube/config as the filename for its configuration.

The quick fix that worked for me was to create a symbolic link:

ln -s ~/.kube/config.conjure-canonical-kubern-e82 ~/.kube/config

N.B. This was for a “conjure-up kubernetes” deployment.

Running on Mac OS High Sierra, I solved this by enabling Kubernetes built into Docker itself.

It works. Quite simple. If you are using the desktop software, it's best to look in the preference settings for a solution first. Haha.

Try reinstalling minikube if you have one, or try using kubectl proxy --port=8080.

Running on Mac OS High Sierra, I solved this by enabling Kubernetes built into Docker itself.

tks

This issue had me confused for a week, but it seems to be working for me now. If you have this issue, first of all you need to know which node it happens on.

If it is a master node, then make sure all of the Kubernetes pods are running with the command kubectl get pods --all-namespaces.

Mine looks like this:

kube-system   etcd-kubernetes-master01                      1/1   Running   2   6d
kube-system   kube-apiserver-kubernetes-master01            1/1   Running   3   6d
kube-system   kube-controller-manager-kubernetes-master01   1/1   Running   2   6d
kube-system   kube-dns-2425271678-3kkl1                     3/3   Running   6   6d
kube-system   kube-flannel-ds-brw34                         2/2   Running   6   6d
kube-system   kube-flannel-ds-psxc8                         2/2   Running   7   6d
kube-system   kube-proxy-45n1h                              1/1   Running   2   6d
kube-system   kube-proxy-fsn6f                              1/1   Running   2   6d
kube-system   kube-scheduler-kubernetes-master01            1/1   Running   2   6d

If it does not, then verify that you have these files in your /etc/kubernetes/ directory: admin.conf, controller-manager.conf, kubelet.conf, manifests, pki, scheduler.conf. If you do, then copy the admin config as a normal user (not the ROOT user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then see if kubectl version works or not. If it still does not work, follow the tutorial at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/, tear down your cluster, and rebuild your master.

If it happens on a (slave) node, then make sure you have the files kubelet.conf, manifests, and pki in your /etc/kubernetes/ directory. In kubelet.conf the server field should point to your master IP, the same setting as in your master node's admin.conf. If you don't have kubelet.conf, that is probably because you haven't run the command to join your node to the master, e.g. kubeadm join --token f34tverg45ytt34tt 192.168.1.170:6443; you should get this command (token) after your master node is built.

After logging in as a normal user on the (slave) node, you probably won't see a config file in ~/.kube. Create that folder, copy admin.conf from your master node into ~/.kube/ on this (slave) node as config (as a normal user), and then try kubectl version. It works for me.
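
If the join command (token) from the original kubeadm init output has been lost or has expired, a new one can be generated on the master (a sketch; requires a reasonably recent kubeadm):

sudo kubeadm token create --print-join-command
# run the printed "kubeadm join ..." line on the (slave) node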

I faced a similar issue, which was resolved with export KUBECONFIG=/etc/kubernetes/admin.conf

I hadn't run this:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

and that caused the problem.

Running on Mac OS High Sierra, I solved this by enabling Kubernetes built into Docker itself.

It works. Quite simple. If you are using the desktop software, it's best to look in the preference settings for a solution first. Haha.

Nope, still doesn’t work. And yes, this was the first thing I also tried.

Thanks to @mamirkhani. I solved this error. However, I just found this info in the "kubeadm init" output:

Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):

sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf

I think this is the recommended solution.

This answer works for me, because the machine needs to know where the master (admin) is, not localhost.

I did a minikube status, which indicated that kubectl had a stale pointer:

WARNING: Your kubectl is pointing to stale minikube-vm. To fix the kubectl context, run minikube update-context

I then ran minikube update-context and then minikube start --driver=docker. After that, kubectl get pods worked:

NAME                                   READY   STATUS    RESTARTS   AGE
kubernetes-bootcamp-57978f5f5d-96b97   1/1     Running   1          47h

In my case, I had rebooted the Kubernetes master node, and on restart the swap partition was re-enabled by default.

  1. sudo systemctl status kubelet
kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf, 90-local-extras.conf
   Active: activating (auto-restart) (Result: exit-code) since 금 2018-04-20 15:27:00 KST; 6s ago
     Docs: http://kubernetes.io/docs/
  Process: 17247 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 17247 (code=exited, status=255)
  2. sudo swapon -s
Filename	type 		size	Used	priority
/dev/sda6	partition	950267	3580	-1
  3. sudo swapoff /dev/sda6

  4. sudo systemctl status kubelet

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2019-01-14 08:28:56 -05; 15min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 7018 (kubelet)
    Tasks: 25 (limit: 3319)
   CGroup: /system.slice/kubelet.service
           └─7018 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes
  5. kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8smaster   Ready    master   47h   v1.13.2
k8snode1    Ready    <none>   45h   v1.13.2
k8snode2    Ready    <none>   45h   v1.13.2
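
To keep swap from coming back on the next reboot, it can also be disabled permanently (a sketch; the sed one-liner assumes the swap entry in /etc/fstab is not already commented out):

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab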

Delete the minikube VM and its config files, then reinstall minikube (v0.25.2); other versions may have pitfalls.

$ minikube delete
$ rm -rf ~/.minikube
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.25.2/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
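
After reinstalling, start a fresh cluster so that a new ~/.kube/config gets written (a sketch; driver and VM settings depend on your environment):

$ minikube start
$ kubectl get nodes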

I am getting the above error:

[admin ~]$ kubectl cluster-info
Kubernetes master is running at https://xxxxx:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server xxxxx:6443 was refused - did you specify the right host or port?
[admin~]$ kubectl cluster-info dump
The connection to the server xxxx:6443 was refused - did you specify the right host or port?

Getting the same error when using kubectl get pods --all-namespaces.
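
When the refused address is the secure port 6443 (a kubeadm cluster), a first check on the master is whether the kubelet and the apiserver container are actually up (a sketch; assumes docker as the container runtime):

sudo systemctl status kubelet
sudo docker ps | grep kube-apiserver
sudo journalctl -u kubelet -n 50   # recent kubelet logs if the apiserver container is missing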