kubernetes: kubectl --request-timeout (or any other client configuration flag) prevents use of in-cluster configuration

What happened: When running kubectl from a pod inside the cluster, any command that includes the --request-timeout option fails instantly.

What you expected to happen: kubectl should respect --request-timeout (or, at a minimum, give a better error message). The real issue is that there is no way to make sure that kubectl will time out.

We expect to need a timeout under adverse conditions, but kubectl run in-cluster never times out, so we have to fall back to process-level timeouts and kill operations.
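
For reference, the process-level workaround looks roughly like this (a sketch using GNU coreutils timeout; the durations are arbitrary):

# Give kubectl 30 seconds, then SIGTERM it; force-kill 5 seconds later if it ignores the signal.
timeout --kill-after=5 30 kubectl get pods || echo "kubectl did not complete in time" >&2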

How to reproduce it (as minimally and precisely as possible): Create a trivial container that can run kubectl, and give the pod a service account bound to a role that allows "kubectl get pods" (for the example below). It does not matter whether you grant full access or a limited role: the --request-timeout option always causes kubectl to fail when run within the cluster. A minimal setup is sketched below.
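
Something like this, assuming the default namespace (the names are illustrative, not from the original report):

kubectl create serviceaccount kubectl-test
kubectl create role pod-reader --verb=get --verb=list --resource=pods
kubectl create rolebinding kubectl-test-pod-reader --role=pod-reader --serviceaccount=default:kubectl-test
# Run any image that contains kubectl with spec.serviceAccountName: kubectl-test,
# then from inside that pod:
kubectl get pods                       # works
kubectl get pods --request-timeout 30  # fails immediately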

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T15:57:19Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T15:57:19Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration: Azure

  • OS (e.g: cat /etc/os-release): Ubuntu 18.04.4 LTS

  • Kernel (e.g. uname -a): Linux k8s-master-18861755-2 5.3.0-1031-azure #32~18.04.1-Ubuntu SMP Mon Jun 22 15:27:23 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools: aks-engine 0.53.0

  • Network plugin and version (if this is a network-related bug): Azure CNI

  • Others:

From a pod in the cluster, I can issue kubectl get pods without trouble, but as soon as the command includes --request-timeout it fails instantly:

# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T15:57:19Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T15:57:19Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
# kubectl get pods --request-timeout 30
The connection to the server localhost:8080 was refused - did you specify the right host or port?
# kubectl get pods
NAME                                                              READY   STATUS      RESTARTS   AGE
acr-transformer-6f497d647c-5rx7d                                  1/1     Running     0          18m
acr-transformer-6f497d647c-d9j2s                                  1/1     Running     0          18m
. . . .
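
For what it's worth, the in-cluster configuration is clearly available inside the pod (otherwise the plain kubectl get pods above could not work), so the fallback to localhost:8080 is triggered by the flag itself rather than by missing credentials. A quick check, using the standard in-cluster environment variables and service-account mount path:

env | grep KUBERNETES_SERVICE
ls /var/run/secrets/kubernetes.io/serviceaccount/   # ca.crt  namespace  token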

About this issue

  • State: open
  • Created 4 years ago
  • Reactions: 1
  • Comments: 37 (6 by maintainers)

Most upvoted comments

@smlx: What do vegetarian zombies eat? Grrrrrainnnnnssss.

In response to this:

/lifecycle frozen /joke


Note that this is still real: if you have a container that is trying to use RBAC correctly to control API access from within the cluster, kubectl cannot use the --request-timeout option and still work. It must instead supply its own kubeconfig file with credentials and rights just to be able to use --request-timeout, which has other problems and concerns. One way of doing that is sketched below.
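
For illustration, this is the kind of thing I mean: passing the mounted service-account credentials explicitly (here as flags rather than a generated kubeconfig file) so that --request-timeout is honoured. A sketch of the workaround, with the drawbacks noted above:

SA=/var/run/secrets/kubernetes.io/serviceaccount
kubectl get pods \
  --server=https://kubernetes.default.svc \
  --certificate-authority=$SA/ca.crt \
  --token="$(cat $SA/token)" \
  --namespace="$(cat $SA/namespace)" \
  --request-timeout=30s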

I have not come up with a good workaround for this, and the missing timeout causes all sorts of problems: without that setting, kubectl can effectively hang forever and lose forward progress in whatever script or work it is doing.