kubernetes: Running Kubernetes Locally via Docker - `kubectl get nodes` returns `The connection to the server localhost:8080 was refused - did you specify the right host or port?`
Going through this guide to set up Kubernetes locally via Docker, I end up with the error message stated above.
Steps taken:
- export K8S_VERSION='1.3.0-alpha.1' (tried 1.2.0 as well)
- copy-paste the docker run command
- download the appropriate kubectl binary and put it on PATH (which kubectl works; sketched below)
- (optionally) set up the cluster
- run kubectl get nodes
In short, no magic. I am running this locally on Ubuntu 14.04, Docker 1.10.3. If you need more information, let me know.
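For reference, the kubectl download step probably looked something like this; the release-bucket URL pattern and the v1.2.0/amd64 choice are assumptions on my part, not quoted from the guide:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.2.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
which kubectl   # should now print /usr/local/bin/kubectl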
About this issue
- State: closed
- Created 8 years ago
- Reactions: 40
- Comments: 62 (2 by maintainers)
@xificurC @jankoprowski Have you checked whether the apiserver is running?
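A few quick checks (my suggestion, assuming the Docker-based local setup from the guide):
docker ps | grep -i -E 'hyperkube|apiserver'   # is the API server container actually running?
curl -s http://localhost:8080/healthz          # should print "ok" if the insecure port is serving
kubectl config view --minify                   # confirm which server kubectl is pointing at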
Please take a look at our troubleshooting guide: http://kubernetes.io/docs/troubleshooting/
If you still need help, please ask on stackoverflow.
You can solve this with “kubectl config”:
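For example, a minimal sketch that points kubectl at a local insecure apiserver; the cluster and context names here are made up, and the server address must match your setup:
kubectl config set-cluster local --server=http://localhost:8080
kubectl config set-context local --cluster=local
kubectl config use-context local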
Hello, I’m getting the following error on CentOS 7; how can I solve this issue?
Similar to @sumitkau, I solved my problem by pointing kubectl at the new kubeconfig location: kubectl --kubeconfig /etc/kubernetes/admin.conf get no. You can also copy /etc/kubernetes/admin.conf to ~/.kube/config and it works, but I don’t know whether that is good practice or not!
Running on macOS High Sierra, I solved this by enabling the Kubernetes support built into Docker itself.
If this happens in GCP, the command below will most likely resolve the issue:
gcloud container clusters get-credentials your-cluster --zone your-zone --project your-project
Thanks to @mamirkhani. I solved this error. However, I just found this info in the “kubeadm init” output:
Your Kubernetes master has initialized successfully! To start using your cluster, you need to run (as a regular user):
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
I think this is the recommended solution.
In my case I just had to remove ~/.kube/config, which was left over from a previous attempt.
I had this issue too. This solution worked for me: if you don’t have admin.conf, please install kubeadm, and then remove ~/.kube/cache.
I was trying to get the status from a remote system using Ansible and was facing the same issue. I tried this and it worked: kubectl --kubeconfig ./admin.conf get pods --all-namespaces -o wide
I had this issue too. This solution worked for me: export KUBECONFIG=/etc/kubernetes/admin.conf
Try using --server to specify your master: kubectl --server=16.187.189.90:8080 get pod -o wide
You need to switch context.
kubectl config use-context docker-for-desktop
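To see which contexts are available before switching (my note: newer Docker Desktop releases name the context docker-desktop instead of docker-for-desktop):
kubectl config get-contexts      # list every context kubectl knows about
kubectl config current-context   # confirm which one is active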
I had the same problem. When creating a cluster via the web GUI in Google Cloud and trying to run kubectl, I get
The connection to the server localhost:8080 was refused - did you specify the right host or port?
All you have to do is fetch the kubectl config for your cluster, which will be stored in $HOME/.kube/config:
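Presumably the fetch step is the same gcloud container clusters get-credentials command quoted earlier in this thread; once it has run, you can confirm where kubectl now points:
kubectl config current-context
kubectl cluster-info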
Now kubectl works just fine
You must run these commands first:
[user@k8s-master ~]# mkdir -p $HOME/.kube
[user@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[user@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
While I know that there might be multiple reasons for failure here, in my case removing ~/.kube/cache helped immediately.
Hi, I still hit this problem with:
kubernetes-master-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-node-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-unit-test-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-ansible-0.6.0-0.1.gitd65ebd5.el7.noarch
kubernetes-client-1.4.0-0.1.git87d9d8d.el7.x86_64
kubernetes-1.4.0-0.1.git87d9d8d.el7.x86_64
If I configure KUBE_API_ADDRESS with the value KUBE_API_ADDRESS="--insecure-bind-address=10.10.10.xx", I get this error, and it works if I pass the option "--server=10.10.10.xx:8080" on the command line.
If I configure KUBE_API_ADDRESS with the value KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0", it works fine.
Update the entry in /etc/kubernetes/apiserver (on the master server) to KUBE_API_PORT="--port=8080", then do a systemctl restart kube-apiserver.
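A sketch of that change (the file path and key come from the old RPM/systemd packaging of the apiserver; check the existing line in your own file before editing):
sudo sed -i 's/^KUBE_API_PORT=.*/KUBE_API_PORT="--port=8080"/' /etc/kubernetes/apiserver
sudo systemctl restart kube-apiserver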
Use the commands below. They worked for me.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Deleted the old config from ~/.kube and then restarted Docker (for macOS), and it rebuilt the config folder. All good now when I do ‘kubectl get nodes’.
kubectl is expecting ~/.kube/config as the filename for its configuration.
The quick fix that worked for me was to create a symbolic link:
ln -s ~/.kube/config.conjure-canonical-kubern-e82 ~/.kube/config
N.B. This was for a “conjure-up kubernetes” deployment.
It works, quite simple. If you are using the desktop software, it’s better to look for the solution in the preference settings first. Haha.
Try reinstalling minikube if you have one, or try using kubectl proxy --port=8080. Thanks.
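For context (my note, not the commenter’s): kubectl proxy only helps if kubectl already has a working kubeconfig; it then serves the cluster API on the chosen local port, for example:
kubectl proxy --port=8080 &
curl -s http://localhost:8080/api   # should return the API versions if the proxy reached the cluster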
This issue had confused me for a week, but it seems to be working for me now. If you have this issue, first of all you need to know which node it happens on.
If it is a master node, then make sure all of the Kubernetes pods are running with kubectl get pods --all-namespaces; mine looks like this:
kube-system   etcd-kubernetes-master01                      1/1   Running   2   6d
kube-system   kube-apiserver-kubernetes-master01            1/1   Running   3   6d
kube-system   kube-controller-manager-kubernetes-master01   1/1   Running   2   6d
kube-system   kube-dns-2425271678-3kkl1                     3/3   Running   6   6d
kube-system   kube-flannel-ds-brw34                         2/2   Running   6   6d
kube-system   kube-flannel-ds-psxc8                         2/2   Running   7   6d
kube-system   kube-proxy-45n1h                              1/1   Running   2   6d
kube-system   kube-proxy-fsn6f                              1/1   Running   2   6d
kube-system   kube-scheduler-kubernetes-master01            1/1   Running   2   6d
If they are not, then verify that you have these files in your /etc/kubernetes/ directory: admin.conf, controller-manager.conf, kubelet.conf, manifests, pki, scheduler.conf. If you do, then copy them as a normal user (not the root user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then see whether kubectl version works. If it still does not, follow the tutorial at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ to tear down your cluster and rebuild your master.
If it happens on a (slave) node, then make sure you have the files kubelet.conf, manifests, and pki in /etc/kubernetes/, and that the server field in kubelet.conf points to your master IP, the same setting as in your master node’s admin.conf. If you don’t have kubelet.conf, that is probably because you haven’t run the command to join your node to your master:
kubeadm join --token f34tverg45ytt34tt 192.168.1.170:6443
You get this command (and token) after your master node is built. After logging in as a normal user on the (slave) node, you probably won’t see a config file in ~/.kube; create that folder, then copy admin.conf from your master node into ~/.kube/ on the (slave) node as config (as a normal user), and then try kubectl version. It works for me.
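If the original join token has expired, a fresh join command can usually be printed on the master; this is standard kubeadm behaviour, not something from the comment above:
kubeadm token create --print-join-command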
I faced a similar issue, which was resolved with
export KUBECONFIG=/etc/kubernetes/admin.conf
I didn’t run this:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
and that is what caused the problem.
Nope, still doesn’t work. And yes, this was the first thing I also tried.
This answer works for me, because the machine needs to know where the master (admin) is, not localhost.
I did a minikube status, which indicated that kubectl had a stale pointer:
WARNING: Your kubectl is pointing to stale minikube-vm. To fix the kubectl context, run minikube update-context
I then ran minikube update-context and then minikube start --driver=docker. After that, kubectl get pods worked:
NAME                                   READY   STATUS    RESTARTS   AGE
kubernetes-bootcamp-57978f5f5d-96b97   1/1     Running   1          47h
In my case, I had rebooted the Kubernetes master node, and on restart the swap partition was re-enabled by default. This is what I ran:
systemctl status kubelet
swapon -s
sudo swapoff /dev/sda6
sudo systemctl status kubelet
kubectl get nodes
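A follow-up sketch of my own, not part of the comment above: to keep swap disabled across reboots, the swap entry in /etc/fstab also has to be dealt with, e.g.:
sudo swapoff -a                 # turn off all active swap now
sudo vi /etc/fstab              # comment out the swap line so it stays off after reboot
sudo systemctl restart kubelet  # the kubelet refuses to run with swap enabled by default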
Delete the minikube VM and its configuration files, then reinstall minikube (v0.25.2); other versions may have pitfalls.
I am getting the above error.
Getting the same error while using kubectl get pods --all-namespaces.