kubernetes: Error from server (InternalError): an error on the server ("") has prevented the request from succeeding

I don’t know why this error occurred; I retried many times but the problem persists.

kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-27T01:14:37Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"darwin/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding

/kind bug

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 42 (7 by maintainers)

Most upvoted comments

// CauseTypeUnexpectedServerResponse is used to report when the server responded to the client
// without the expected return type. The presence of this cause indicates the error may be
// due to an intervening proxy or the server software malfunctioning.

“code”: 502

As the comment suggests ^^^, it might be a proxy issue in your local cluster; the /version handler will never return a non-200 status unless you have misconfigured your proxy.
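A quick way to test the proxy theory is to hit the /version endpoint directly with the proxy bypassed. This is only a sketch: the server address comes from whatever your kubeconfig contains, and `-k` is used purely as a one-off diagnostic shortcut.

```shell
# Print the apiserver address from the current kubeconfig context
server=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
echo "apiserver: $server"

# Request /version directly, telling curl to skip any configured proxy;
# -k skips certificate verification, which is acceptable for a diagnostic
curl -sk --noproxy '*' "$server/version"
```

If this returns the version JSON while a plain `kubectl version` fails, the proxy is almost certainly the culprit.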

/close

Has this issue been resolved? @yue9944882’s suggestion did not seem to help.

I am also getting the same error. Even the kubectl version command is not working.

kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"darwin/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding

Same problem here on Mac OSX


Had the same issue; found a solution in https://github.com/docker/for-mac/issues/2990. In short, deleting the following folders and restarting Docker did the trick for me:

rm -rf ~/Library/Group\ Containers/group.com.docker/pki/
rm -rf ~/.kube

From what I understood, the pki folder holds the Kubernetes cert/key material; removing it causes the certs to be regenerated on restart, which made the stack operational again, at least in my case.

Maybe you have a proxy set.

solution:

unset https_proxy
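A fuller variant clears every common proxy variable for the current shell; both the lowercase and uppercase spellings are worth clearing, since different tools read different ones:

```shell
# Clear all common proxy variables so nothing intercepts apiserver traffic
unset http_proxy https_proxy all_proxy no_proxy \
      HTTP_PROXY HTTPS_PROXY ALL_PROXY NO_PROXY

# Confirm nothing proxy-related remains in the environment
env | grep -i _proxy || echo "no proxy variables set"
```

Note that this only affects the current shell session; if the variables come from your shell profile, remove them there to make the change permanent.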

I am attempting to run kubectl inside a Docker container. Both the OSX host and the container have the same kubectl version installed, but the container instance can’t connect to the Kubernetes nodes. The nodes are running in GKE.

From Docker Container:

kubectl version   
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding

From OSX Host:

kubectl version                                                                                                                                                                                             
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.16-gke.2100", GitCommit:"36d0b0a39224fef7a40df3d2bc61dfd96c8c7f6a", GitTreeState:"clean", BuildDate:"2021-03-16T09:15:29Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}

This has been working fine for the last couple of months when all of a sudden it just stopped working.

I solved it by upgrading Docker for Mac to version 3.3.3. I believe they reverted some HTTPS-related changes from 3.3.2, which probably fixes the issue.

Retried several times and it failed:

kubectl version --v=7

Success:

http_proxy='' https_proxy='' kubectl version --v=7
[xxxxxxx@Mac-mini /Users/xxxxxxx ]$ kubectl version --v=7
I1029 00:24:48.805208   43758 loader.go:375] Config loaded from file:  /Users/xxxxxxx/.kube/config
I1029 00:24:48.805852   43758 cert_rotation.go:137] Starting client certificate rotation controller
I1029 00:24:48.805878   43758 round_trippers.go:420] GET https://192.168.64.3:8443/version?timeout=32s
I1029 00:24:48.805885   43758 round_trippers.go:427] Request Headers:
I1029 00:24:48.805888   43758 round_trippers.go:431]     Accept: application/json, */*
I1029 00:24:48.805892   43758 round_trippers.go:431]     User-Agent: kubectl/v1.18.8 (darwin/amd64) kubernetes/9f2892a
I1029 00:24:50.157245   43758 round_trippers.go:446] Response Status:  in 1351 milliseconds
I1029 00:24:50.157315   43758 request.go:907] Got a Retry-After 1s response for attempt 1 to https://192.168.64.3:8443/version?timeout=32s
I1029 00:24:51.158746   43758 round_trippers.go:420] GET https://192.168.64.3:8443/version?timeout=32s
I1029 00:24:51.158772   43758 round_trippers.go:427] Request Headers:
I1029 00:24:51.158780   43758 round_trippers.go:431]     Accept: application/json, */*
I1029 00:24:51.158786   43758 round_trippers.go:431]     User-Agent: kubectl/v1.18.8 (darwin/amd64) kubernetes/9f2892a
^C
[xxxxxxx@Mac-mini /Users/xxxxxxx ]$ http_proxy='' https_proxy='' kubectl version --v=7
I1029 00:25:11.782094   43766 loader.go:375] Config loaded from file:  /Users/xxxxxxx/.kube/config
I1029 00:25:11.782724   43766 cert_rotation.go:137] Starting client certificate rotation controller
I1029 00:25:11.782763   43766 round_trippers.go:420] GET https://192.168.64.3:8443/version?timeout=32s
I1029 00:25:11.782770   43766 round_trippers.go:427] Request Headers:
I1029 00:25:11.782773   43766 round_trippers.go:431]     Accept: application/json, */*
I1029 00:25:11.782777   43766 round_trippers.go:431]     User-Agent: kubectl/v1.18.8 (darwin/amd64) kubernetes/9f2892a
I1029 00:25:11.795078   43766 round_trippers.go:446] Response Status: 200 OK in 12 milliseconds
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:32:58Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
[xxxxxxx@Mac-mini /Users/xxxxxxx ]$

So you may need to configure your proxy to pass 192.168.0.0/16 traffic through directly.
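If you need to keep the proxy for everything else, a no_proxy exclusion is the usual approach. A sketch (the subnet is taken from the minikube address in the log above; note that not every proxy implementation honors CIDR notation in no_proxy, so you may need to list the exact apiserver IP instead):

```shell
# Exclude the local cluster subnet (and localhost) from proxying;
# export both spellings since different tools read different ones
export no_proxy="192.168.0.0/16,localhost,127.0.0.1"
export NO_PROXY="$no_proxy"
echo "$NO_PROXY"
```

Put the exports in your shell profile if you want them to survive new sessions.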

I am attempting to run kubectl inside a Docker container too.

FROM python:3.6.6-stretch

# Setup
ENV LANG C.UTF-8
ENV EDITOR vim
ADD . /var/www/app
WORKDIR /var/www/app/ansible
RUN apt-get update && apt-get install -y curl git vim

# Install gcloud
RUN apt-get update && apt-get -y install apt-transport-https ca-certificates gnupg
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
RUN apt-get update && apt-get install -y google-cloud-sdk

# Install docker
ENV DOCKER_CLIENT_VERSION 1.13.0
RUN curl -fsSL https://get.docker.com/builds/Linux/x86_64/docker-${DOCKER_CLIENT_VERSION}.tgz | tar -xzC /usr/local/bin --strip=1 docker/docker

# Install kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.19.7/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl

# Install ansible
RUN python3 -m pip install --upgrade pip
RUN python3 -m pip install -r requirements.txt
RUN ansible-galaxy collection install google.cloud:==1.0.0
RUN ansible-galaxy collection install community.kubernetes:==1.1.1
RUN ansible-galaxy collection install community.general:==1.2.0

This had been working fine until yesterday, when it suddenly stopped working for me.

root@12a1e67ef6eb:/var/www/app/ansible# kubectl version --v=10
I0505 13:39:32.849948     281 loader.go:375] Config loaded from file:  /root/.kube/config
I0505 13:39:32.851008     281 round_trippers.go:424] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.19.7 (linux/amd64) kubernetes/1dd5338" 'https://example.com/version?timeout=32s'
I0505 13:39:32.853138     281 round_trippers.go:444] GET https://example.com/version?timeout=32s  in 1 milliseconds
I0505 13:39:32.853278     281 round_trippers.go:450] Response Headers:
I0505 13:39:32.853465     281 request.go:933] Got a Retry-After 1s response for attempt 1 to https://example.com/version?timeout=32s
I0505 13:39:33.858936     281 round_trippers.go:424] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.19.7 (linux/amd64) kubernetes/1dd5338" 'https://example.com/version?timeout=32s'
I0505 13:39:33.863128     281 round_trippers.go:444] GET https://example.com/version?timeout=32s  in 4 milliseconds
I0505 13:39:33.863228     281 round_trippers.go:450] Response Headers:
I0505 13:39:42.870215     281 request.go:1097] Response Body:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
I0505 13:39:42.870813     281 helpers.go:216] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "an error on the server (\"\") has prevented the request from succeeding",
  "reason": "InternalError",
  "details": {
    "causes": [
      {
        "reason": "UnexpectedServerResponse"
      }
    ],
    "retryAfterSeconds": 1
  },
  "code": 500
}]
F0505 13:39:42.870886     281 helpers.go:115] Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00013a001, 0xc000397770, 0x97, 0xe8)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x2d04b80, 0xc000000003, 0x0, 0x0, 0xc0006273b0, 0x2ae4041, 0xa, 0x73, 0x40b300)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x2d04b80, 0x3, 0x0, 0x0, 0x2, 0xc000877af0, 0x1, 0x1)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:718 +0x165
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1442
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0001acfc0, 0x68, 0x1)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1f0
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x1e5b480, 0xc0006ad540, 0x1d06cf0)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x945
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/version.NewCmdVersion.func1(0xc0006af080, 0xc000197940, 0x0, 0x1)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/version/version.go:79 +0x117
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0006af080, 0xc000197910, 0x1, 0x1, 0xc0006af080, 0xc000197910)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:846 +0x2c2
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc000439600, 0xc00013c120, 0xc000118150, 0x3)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950 +0x375
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887
main.main()
        _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x21d

goroutine 18 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x2d04b80)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

goroutine 8 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x1d06c28, 0x1e5b780, 0xc000476000, 0x1700000001, 0xc0001140c0)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1d06c28, 0x12a05f200, 0x0, 0x1, 0xc0001140c0)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x1d06c28, 0x12a05f200, 0xc0001140c0)
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs
        /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96

Can anyone help?

The stress test reproduced the problem: kubelet cannot update the node status after the node OOMs.

2020-08-31T11:59:05+0800 node-105-152 kubelet[47510]: E0831 11:59:05.877473   47510 kubelet_node_status.go:402] Error updating node status, will retry: error getting node "node-105-152": an error on the server ("") has prevented the request from succeeding (get nodes node-105-152)
2020-08-31T11:59:05+0800 node-105-152 kubelet[47510]: E0831 11:59:05.877580   47510 kubelet_node_status.go:389] Unable to update node status: update node status exceeds retry count

Restarting kubelet will fix this problem.
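For a systemd-based install (an assumption; adjust to however kubelet is managed on your nodes), the restart looks like:

```shell
# Restart kubelet on the affected node and confirm it came back up
sudo systemctl restart kubelet
sudo systemctl is-active kubelet

# Then watch the node status recover from a machine with cluster access
kubectl get nodes -w
```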

Just happened to me too.


Hi, all. Restart Docker to resolve the error. On Windows, use the GUI; it’s easier. Right-click the Docker icon (the whale with the containers) in the system tray at the bottom right of the screen and select ‘Restart Docker’. Once Docker restarts, run ‘kubectl cluster-info’ to verify. All kubectl commands will now work from the command line/cmd/terminal.

If you want to activate/initialize Kubernetes, right-click the same icon, go to ‘Settings’, and select ‘Kubernetes’. Good luck.

Hi All,

Please be informed that I am facing the same issue, even though it was working fine two days ago. Could you please help me solve this error?

sdv9719@kmaster:~$ kubectl get nodes
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding

I have checked my proxy server and found the details below.

root@loadbalancer:~# nc -v localhost 6443
nc: connect to localhost port 6443 (tcp) failed: Connection refused

I have created the following: 3 master nodes, 2 worker nodes, 1 load balancer.

I actually solved mine by modifying the way I was launching my Docker container. What had been working for months now required that I specify --network host. I didn’t try other non-host network types, but this resolved it. DNS and other networking was working regardless of the change, but this allowed kubectl to start behaving again.
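For reference, a hypothetical invocation along those lines (“my-kubectl-image” is a placeholder for whatever image you build, and the mount assumes your credentials live in ~/.kube):

```shell
# Run the container on the host network so kubectl resolves and reaches the
# apiserver exactly as the host does; mount the kubeconfig read-only
docker run --rm \
  --network host \
  -v "$HOME/.kube:/root/.kube:ro" \
  my-kubectl-image kubectl version
```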

Have the same issue after a fresh install with kubeadm. It worked fine until reboot… Is there a way to get logs from the apiserver to help?