kubernetes: Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
I don’t know why this error occurred; I have retried many times but the problem still exists.
kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-27T01:14:37Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"darwin/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
/kind bug
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Comments: 42 (7 by maintainers)
As the comment above suggests^^^, it might be a proxy issue in your local cluster; the /version handler will never return a non-200 status unless you misconfigured your proxy.
/close
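One way to check for a proxy in the path is to probe the API server's /version endpoint directly. A minimal sketch, assuming a local API server address (read your real address from your kubeconfig, as noted in the comments):

```shell
# Sketch: probe /version directly to see whether a proxy is mangling the
# response. APISERVER below is an assumption -- read the real address with:
#   kubectl config view -o jsonpath='{.clusters[0].cluster.server}'
APISERVER=${APISERVER:-https://127.0.0.1:6443}
code=$(curl -ks -o /dev/null -w '%{http_code}' --max-time 5 "$APISERVER/version" || true)
echo "HTTP status from $APISERVER/version: $code"
# 200 means the API server answered; 000 means no connection at all;
# anything else suggests a proxy or load balancer sitting in between.
```

A non-200 status here, combined with the empty ("") body in the kubectl error, is consistent with a proxy intercepting the connection.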
Has this issue been resolved? The suggestion from @yue9944882 did not seem to help.
I am also getting the same error. Even the kubectl version command is not working.
kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"darwin/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
Same problem here on Mac OSX
Had the same issue, found myself a solution from https://github.com/docker/for-mac/issues/2990. In short, deleting the following folders + restarting docker did the thing for me:
rm -rf ~/Library/Group\ Containers/group.com.docker/pki/
rm -rf ~/.kube
From what I understood, the pki folder holds the Kubernetes cert/key info, and removing it results in the certs getting regenerated on restart, apparently making the stack operational again, at least in my case.
Maybe you set a proxy.
Solution:
I am attempting to run kubectl inside of a docker container. Both the OSX host and the docker container have the same kubectl version installed, but the docker container instance doesn’t connect to the Kubernetes nodes. The nodes are running in GKE.
From Docker Container:
From OSX Host:
This has been working fine for the last couple of months when all of a sudden it just stopped working.
I solved it by upgrading Docker for Mac to version 3.3.3. I believe they reverted some HTTPS-related changes in 3.3.2, which probably fixes the issue.
Retried several times and then failed:
success:
So you may need to set your proxy to pass 192.168.0.0/16 directly.
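The suggestion above can be sketched as environment variables. The proxy address below is a placeholder, and the CIDR should be replaced with your own cluster's network range:

```shell
# Placeholder proxy address -- substitute your real proxy here.
export HTTPS_PROXY=http://proxy.example.com:3128
# Bypass the proxy for local addresses and the cluster CIDR so that
# kubectl reaches the API server directly.
export NO_PROXY=localhost,127.0.0.1,192.168.0.0/16
echo "NO_PROXY=$NO_PROXY"
```

Note that Go-based tools such as kubectl understand CIDR notation in NO_PROXY, while some other tools only match exact hosts or domain suffixes.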
I am attempting to run kubectl inside of a docker container too.
This had been working fine until yesterday, and then it suddenly stopped working for me.
Anyone can help?
The stress test reproduced the problem: kubelet cannot update the node status after the node OOMs.
Restarting kubelet will fix this problem.
Just happened to me too.
Hi, all. Restart Docker to solve the error. On Windows use the GUI; it’s easier. Right-click on the Docker ‘badge’ at the bottom right of the screen, the one showing the whale with the containers. Select ‘Restart Docker’. Docker will restart and you can run ‘kubectl cluster-info’ to verify. All the kubectl commands will now work on the command line/cmd/terminal.
If you want to activate/initialize Kubernetes right-click the same icon and go to ‘Settings’. Select ‘Kubernetes’ and you will be good. Good luck.
Hi All,
Please be informed that I am facing the same issue, even though it was working fine two days ago. Could you please help me solve this error?
sdv9719@kmaster:~$ kubectl get nodes
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
I have checked my proxy server and found the details below.
root@loadbalancer:~# nc -v localhost 6443
nc: connect to localhost port 6443 (tcp) failed: Connection refused
I have created the setup below: 3 master nodes, 2 worker nodes, 1 load balancer.
I actually solved mine by modifying the way I was launching my docker container. What had been working for months now required that I specify --network host. Didn’t try other non-host network types, but this actually resolved it. DNS and other networking were working regardless of the change, but that allowed kubectl to start behaving again.
Have the same issue after a new install with kubeadm. Worked fine until reboot… Is there a way to get logs from the apiserver to help?
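For reference, the --network host fix looks roughly like this; the image name and kubeconfig mount are assumptions for illustration, not what the commenter used:

```shell
# Sketch: run kubectl from a container that shares the host's network
# stack, so it sees the same routes and DNS as the host. Image and
# kubeconfig path are assumptions.
run="docker run --rm --network host -v $HOME/.kube:/root/.kube:ro bitnami/kubectl:latest version --client"
# Only execute when docker is actually installed on this machine.
if command -v docker >/dev/null 2>&1; then
  eval "$run" || echo "docker run failed (daemon may not be running)"
else
  echo "docker not found; would run: $run"
fi
```

Without --network host the container sits on docker's default bridge network, which is one place a stale proxy or routing configuration can break the connection to the API server.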