kubernetes: kubectl commands time out
I’m running a Kubernetes cluster. Today, when I was about to deploy some updates, I started getting timeouts from the server.
Running $ kubectl get nodes yields
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
Running $ kubectl get pods --all-namespaces yields
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
Running $ kubectl get deployments yields
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.extensions)
Running $ kubectl get svc yields
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get services)
Running $ kubectl cluster-info yields (note that there is no output after the master line)
Kubernetes master is running at http://localhost:8080
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
As I get these timeouts for every command, troubleshooting is impossible.
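One way to keep debugging despite that is to bypass kubectl and query the apiserver directly with curl against the insecure localhost:8080 endpoint that cluster-info reports. A minimal sketch (the -m flag caps each request's runtime; /healthz/etcd assumes your apiserver version exposes the per-check health endpoints):
$ curl -m 10 http://localhost:8080/healthz
$ curl -m 10 http://localhost:8080/healthz/etcd
$ curl -m 70 http://localhost:8080/api/v1/namespaces/default/pods
If /healthz answers quickly but the pods list hangs, the apiserver process itself is up and the slowness sits behind it.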
I ran $ kubectl get pods -v 7 and, after a series of cached discovery responses, got this:
I0309 11:31:57.517168 18939 round_trippers.go:414] GET http://localhost:8080/api
I0309 11:31:57.517229 18939 round_trippers.go:421] Request Headers:
I0309 11:31:57.517251 18939 round_trippers.go:424] Accept: application/json, */*
I0309 11:31:57.517270 18939 round_trippers.go:424] User-Agent: kubectl/v1.8.8 (linux/amd64) kubernetes/2f73858
I0309 11:31:59.092098 18939 round_trippers.go:439] Response Status: 200 OK in 1574 milliseconds
I0309 11:31:59.102091 18939 round_trippers.go:414] GET http://localhost:8080/apis
I0309 11:31:59.102130 18939 round_trippers.go:421] Request Headers:
I0309 11:31:59.102149 18939 round_trippers.go:424] Accept: application/json, */*
I0309 11:31:59.102168 18939 round_trippers.go:424] User-Agent: kubectl/v1.8.8 (linux/amd64) kubernetes/2f73858
I0309 11:31:59.290325 18939 round_trippers.go:439] Response Status: 200 OK in 188 milliseconds
I0309 11:31:59.308861 18939 round_trippers.go:414] GET http://localhost:8080/apis/apiregistration.k8s.io/v1beta1
I0309 11:31:59.308900 18939 round_trippers.go:421] Request Headers:
I0309 11:31:59.308919 18939 round_trippers.go:424] Accept: application/json, */*
I0309 11:31:59.309211 18939 round_trippers.go:424] User-Agent: kubectl/v1.8.8 (linux/amd64) kubernetes/2f73858
I0309 11:31:59.721500 18939 round_trippers.go:439] Response Status: 200 OK in 408 milliseconds
I0309 11:31:59.756420 18939 round_trippers.go:414] GET http://localhost:8080/api/v1
I0309 11:31:59.756537 18939 round_trippers.go:421] Request Headers:
I0309 11:31:59.756571 18939 round_trippers.go:424] Accept: application/json, */*
I0309 11:31:59.756591 18939 round_trippers.go:424] User-Agent: kubectl/v1.8.8 (linux/amd64) kubernetes/2f73858
I0309 11:31:59.792099 18939 round_trippers.go:439] Response Status: 200 OK in 35 milliseconds
I0309 11:31:59.936968 18939 cached_discovery.go:119] returning cached discovery info from /root/.kube/cache/discovery/localhost_8080/servergroups.json
I0309 11:31:59.937209 18939 cached_discovery.go:72] returning cached discovery info from /root/.kube/cache/discovery/localhost_8080/apiregistration.k8s.io/v1beta1/serverresources.json
I0309 11:31:59.937674 18939 cached_discovery.go:72] returning cached discovery info from /root/.kube/cache/discovery/localhost_8080/v1/serverresources.json
I0309 11:31:59.937822 18939 cached_discovery.go:119] returning cached discovery info from /root/.kube/cache/discovery/localhost_8080/servergroups.json
I0309 11:31:59.937953 18939 cached_discovery.go:72] returning cached discovery info from /root/.kube/cache/discovery/localhost_8080/apiregistration.k8s.io/v1beta1/serverresources.json
I0309 11:31:59.938375 18939 cached_discovery.go:72] returning cached discovery info from /root/.kube/cache/discovery/localhost_8080/v1/serverresources.json
I0309 11:31:59.938511 18939 cached_discovery.go:119] returning cached discovery info from /root/.kube/cache/discovery/localhost_8080/servergroups.json
I0309 11:31:59.938658 18939 cached_discovery.go:72] returning cached discovery info from /root/.kube/cache/discovery/localhost_8080/apiregistration.k8s.io/v1beta1/serverresources.json
I0309 11:31:59.939083 18939 cached_discovery.go:72] returning cached discovery info from /root/.kube/cache/discovery/localhost_8080/v1/serverresources.json
I0309 11:31:59.940167 18939 cached_discovery.go:119] returning cached discovery info from /root/.kube/cache/discovery/localhost_8080/servergroups.json
I0309 11:31:59.940296 18939 cached_discovery.go:72] returning cached discovery info from /root/.kube/cache/discovery/localhost_8080/apiregistration.k8s.io/v1beta1/serverresources.json
I0309 11:31:59.940732 18939 cached_discovery.go:72] returning cached discovery info from /root/.kube/cache/discovery/localhost_8080/v1/serverresources.json
I0309 11:31:59.941176 18939 round_trippers.go:414] GET http://localhost:8080/api/v1/namespaces/default/pods
I0309 11:31:59.941200 18939 round_trippers.go:421] Request Headers:
I0309 11:31:59.941219 18939 round_trippers.go:424] Accept: application/json
I0309 11:31:59.941237 18939 round_trippers.go:424] User-Agent: kubectl/v1.8.8 (linux/amd64) kubernetes/2f73858
I0309 11:33:00.217042 18939 round_trippers.go:439] Response Status: 504 Gateway Timeout in 60275 milliseconds
I0309 11:33:00.217951 18939 helpers.go:207] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server was unable to return a response in the time allotted, but may still be processing the request (get pods)",
  "reason": "Timeout",
  "details": {
    "kind": "pods",
    "causes": [
      {
        "reason": "UnexpectedServerResponse",
        "message": "{\"metadata\":{},\"status\":\"Failure\",\"message\":\"Timeout: request did not complete within 1m0s\",\"reason\":\"Timeout\",\"details\":{},\"code\":504}"
      }
    ]
  },
  "code": 504
}]
F0309 11:33:00.218018 18939 helpers.go:120] Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
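Note what the verbose output shows: the discovery endpoints (/api, /apis, /api/v1) all return 200 within a second or two, while the pods list sits for the full 60 seconds and then comes back as a 504 from the apiserver itself. That pattern usually points at the apiserver's backing store (etcd) being slow or unhealthy rather than at the client. A sketch of checks to run on the master, assuming a systemd-managed apiserver unit and etcd v3 with kubeadm-style certificate paths (adjust unit names and paths for your setup):
$ sudo journalctl -u kube-apiserver --since "10 minutes ago" | grep -i etcd
$ ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    endpoint health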
@magic-chenyang Are you using any proxies? If so, try
$ export no_proxy="127.0.0.1,[apiserver_ip]"
before running kubectl subcommands.
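A quick way to confirm whether a proxy is in play at all is to inspect the environment kubectl runs under (most tooling honors both the lowercase and uppercase variants):
$ env | grep -i proxy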
A very similar issue is affecting a Kubernetes cluster that I’m working with (Kubernetes 1.11.2).
We have some long-running jobs which can take minutes or hours to complete. Clients poll the status of the jobs every 5 seconds or so until they complete, using the batch jobs client from the client-go package (https://godoc.org/k8s.io/client-go/kubernetes/typed/batch/v1#JobInterface). What happens is that polling runs happily for a while, but then one of the requests fails with this error message:
This seems very similar to OP’s issue.
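For reference, the failure mode is easy to reproduce from the shell with a loop that mimics that polling; a rough sketch, with my-job as a hypothetical job name and an explicit client-side timeout so a slow apiserver fails fast instead of hanging:
$ while true; do kubectl get job my-job -o jsonpath='{.status.succeeded}' --request-timeout=10s || echo "poll failed"; sleep 5; done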