kubernetes: Unable to connect to container or do port-forward while kubectl is behind a proxy

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened: I can’t connect to a pod (kubectl exec -it) or do port forwarding (kubectl port-forward) while behind a proxy.

What you expected to happen: Connect to the pod’s container and be able to do a port-forward (this is mostly to be able to run Helm locally and communicate with Tiller running inside the cluster).

How to reproduce it (as minimally and precisely as possible): I’m using a SOCKS proxy to communicate with the cluster nodes. I set all the proxy environment variables (http_proxy, https_proxy, and no_proxy) for both Docker and kubectl. I can communicate with the cluster using kubectl (kubectl get nodes, kubectl create, etc.), but I can’t connect to a container or do a port-forward.
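For reference, the proxy environment is set along these lines (a sketch with illustrative values; the real proxy endpoint is site-specific):

export http_proxy=socks5://127.0.0.1:1080
export https_proxy=socks5://127.0.0.1:1080
export no_proxy=localhost,127.0.0.1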

$ kubectl exec -it prometheus-kube-prometheus-0 bash -n monitoring
Defaulting container name to prometheus.
Use 'kubectl describe pod/prometheus-kube-prometheus-0 -n monitoring' to see all of the containers in this pod.
error: error sending request: Post https://<server ip>/api/v1/namespaces/monitoring/pods/prometheus-kube-prometheus-0/exec?command=bash&container=prometheus&container=prometheus&stdin=true&stdout=true&tty=true: EOF

The same happens with port-forward:

$ kubectl port-forward tiller-deploy-546cf9696c-qx5vx 4134 -n kube-system
error: error upgrading connection: error sending request: Post https://<server ip>/api/v1/namespaces/kube-system/pods/tiller-deploy-546cf9696c-qx5vx/portforward: EOF

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T20:00:41Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1+coreos.0", GitCommit:"59359d9fdce74738ac9a672d2f31e9a346c5cece", GitTreeState:"clean", BuildDate:"2017-10-12T21:53:13Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g. from /etc/os-release):
NAME="Red Hat Enterprise Linux Server"
VERSION="7.3 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.3"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.3 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.3:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.3"
  • Kernel (e.g. uname -a):
Linux tapps773 3.10.0-514.26.1.el7.x86_64 #1 SMP Tue Jun 20 01:16:02 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: kubespray / kubeadm

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 38
  • Comments: 75 (12 by maintainers)

Most upvoted comments

I use socks5 proxies all the time via ssh, and it would be great if I could use it with kubectl.

Any ideas for a fix to make kubectl work with a SOCKS5 proxy?

For non-SPDY requests, kubectl uses the proxying support in Go’s net/http without modification. This works correctly for SOCKS5 proxies.

For SPDY connections, there’s a separate RoundTripper in the k8s.io/apimachinery/pkg/util/httpstream/spdy package. If I’m understanding the code correctly, it assumes that any configured proxy is an HTTP or HTTPS proxy and attempts to set up the tunnel with the CONNECT method.
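The split is easy to see from the shell (a sketch; the proxy address and pod name are illustrative):

# Plain requests use Go's net/http proxy support, which understands socks5://
HTTPS_PROXY=socks5://127.0.0.1:1337 kubectl get pods
# Streaming requests (exec/attach/port-forward) use the SPDY RoundTripper,
# which assumes an HTTP CONNECT proxy, so they fail
HTTPS_PROXY=socks5://127.0.0.1:1337 kubectl exec -it mypod -- sh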

The difference is very evident in a tcpdump of the connection between kubectl and the SOCKS5 proxy. For a kubectl exec we can see two requests in the dump.

$ env HTTPS_PROXY=socks5://127.0.0.1:1337 kubectl -n default -v6 exec shell-47h9q date
I0324 20:35:25.751847 1307825 loader.go:375] Config loaded from file:  /home/terin/.kube/config
I0324 20:35:25.984289 1307825 round_trippers.go:443] GET https://kubernetes.local/api/v1/namespaces/default/pods/shell-47h9q 200 OK in 218 milliseconds
I0324 20:35:26.006706 1307825 round_trippers.go:443] POST https://kubernetes.local/api/v1/namespaces/default/pods/shell-47h9q/exec?command=date&container=shell&stderr=true&stdout=true  in 1 milliseconds
F0324 20:35:26.006800 1307825 helpers.go:115] error: error sending request: Post https://kubernetes.local/api/v1/namespaces/default/pods/shell-47h9q/exec?command=date&container=shell&stderr=true&stdout=true: read tcp 127.0.0.1:39334->127.0.0.1:1337: read: connection reset by peer

[Screenshot: packet capture of the two kubectl→SOCKS5 proxy connections, 2020-03-24 20:27]

The first connection is kubectl making a GET request for information about the pod. In packets 4 and 6 we can see proxy authentication being negotiated and accepted, followed by packets 8 and 10 setting up the upstream connection and the success message. Standard application traffic (in this case, the TLS handshake followed by HTTP request and response) follows on the same connection.

The second connection is kubectl attempting to set up the SPDY connection for the exec command. The first message from the client does not set up the SOCKS5 tunnel but instead jumps straight to the TLS handshake. The SOCKS5 proxy, rightly, resets the connection.

I note that Go’s implementation switches on the proxy URL scheme to choose different behaviors. It looks like we would need to add an implementation for SOCKS5 (it’s unfortunate that the one in net/http is unexported).

So I’m encountering this issue using an SSH SOCKS proxy (ssh -D [port_number]), and I’ve worked around it by installing an HTTP proxy (tinyproxy in my case) on the remote machine and then using SSH port forwarding to reach it instead (-L 8888:127.0.0.1:8888).
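Concretely, the workaround looks something like this (a sketch; the host and pod names are placeholders):

# forward local port 8888 to tinyproxy listening on the remote machine
ssh user@remote-host -L 8888:127.0.0.1:8888 -C -N
# point kubectl at the forwarded HTTP proxy
HTTPS_PROXY=http://127.0.0.1:8888 kubectl exec -it mypod -- bash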

A similar workaround in case you don’t control the SOCKS proxy or don’t use SSH for your proxy is to use a local HTTP(S) proxy which supports a SOCKS proxy as its target as noted here. I haven’t tested this method, but it seems like a solid solution.
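As an example of that approach (untested on my end), Privoxy can forward to a SOCKS5 upstream; assuming the SOCKS proxy listens on 127.0.0.1:1080, the relevant config lines would be:

listen-address 127.0.0.1:8118
forward-socks5 / 127.0.0.1:1080 .

and kubectl would then be pointed at HTTPS_PROXY=http://127.0.0.1:8118.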

For my use case, I am attempting to use a bastion host via Google Cloud IAP to connect to a private Kubernetes endpoint. @11xor6’s comment works for me.

On the bastion, you only need to run (example for Ubuntu):

sudo apt-get -y update && sudo apt-get install -y tinyproxy

By default, tinyproxy accepts connections from localhost on port 8888, which is what we want anyway. You should be able to start the port forward to access the proxy like this (accessible on local port 1234):

gcloud beta compute ssh --zone ${google_compute_instance.bastion.zone} ${google_compute_instance.bastion.name} --tunnel-through-iap --ssh-flag='-L 1234:127.0.0.1:8888 -C -N'

For a normal SSH connection, just do this:

ssh username@hostname -L 1234:127.0.0.1:8888 -C -N

Then this command should work:

HTTPS_PROXY=http://127.0.0.1:1234 helm list

This is affecting me as well. I did a packet capture, and what I found with kubectl exec was that:

  • It made an initial GET request for the Pod over the SOCKS proxy just fine
  • It then opened a second TCP connection to the SOCKS proxy (so far so good)
  • It then tried to do a TLS handshake on this socket without first sending a SOCKS connect request through it

At a guess, it’s forgotten that the proxy is SOCKS5 and not HTTPS for the second connection?

Another workaround, while waiting for https://github.com/kubernetes/kubernetes/pull/84205 to merge: use Polipo to convert the SOCKS5 proxy into an HTTP proxy.

polipo socksParentProxy=proxy-url:port
env HTTPS_PROXY=http://127.0.0.1:8123 kubectl exec -it ubuntu -- bash

@leonK-BO You can try this:

git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
curl -s https://github.com/jtyr/kubernetes/commit/43deaf0ba43ee5d257418fd95e9a93c5d4afa0c5.diff | git apply -
go build -o ~/bin/kubectl ./cmd/kubectl

This uses a patch that I extracted from #84205.

FYI, the work to fully support SOCKS proxies was merged into master a while ago: https://github.com/kubernetes/kubernetes/pull/105632.

As of today you can build kubectl from the master branch, so patching is no longer needed. It should be part of Kubernetes 1.24, which will be released in a month.
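For example (assuming a Go toolchain is installed):

git clone https://github.com/kubernetes/kubernetes.git
cd kubernetes
go build -o ~/bin/kubectl ./cmd/kubectl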

I would love to get this feature, same problem here.

The fix has been introduced in Kubernetes 1.24. /close
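With 1.24 or newer, pointing kubectl directly at the SOCKS proxy should also work for the streaming commands (illustrative address and pod name):

HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl exec -it mypod -- sh
HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl port-forward mypod 8080:80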

Quick update: the work to add SOCKS5 compatibility has moved from #84205 to #105632 and may be completed soon.

TL;DR / ELI5: To get everything to work smoothly right now, use an HTTP proxy over SOCKS because kubectl and SOCKS don’t seem to get along when it comes to streaming stuff like kubectl exec. So, either use tinyproxy on your proxy server or opt for Polipo combined with an existing SOCKS proxy.

Also confirming that both of these approaches work fine with self-signed certificates when interacting with the API server (e.g. with certificate-authority-data in .kube/config). FWIW (for people googling their way here): my use case was proxying a connection to GKE Autopilot, which uses a self-signed certificate and an endpoint pointing directly to an IP address. I wasn’t sure how all of these proxies would affect TLS (particularly since my situation required finding a way to work around Netskope).

Hello,

I have a similar issue. My client is behind a corporate proxy and most kubectl commands work fine. But if I try to exec into a pod shell, then I get the following error:

kubectl exec -i -t -n default web-6798b588cf-4gzvq -c web "--" sh -c "clear; (bash || ash || sh)"
error: unable to upgrade connection: error dialing backend: dial tcp: lookup .....eks.amazonaws.com on 10.58.194.16:53: no such host

So kubectl tries to resolve the public EKS domain with our internal DNS, and our internal DNS does not support resolution of public domains. It looks like a similar root cause to me. Do you know if there is a workaround for that, or will this bug be fixed soon?

Use SOCKS5:

ssh -D 1337 -qCN [ssh server IP]
HTTPS_PROXY=socks5://localhost:1337 kubectl get pods

@mulatinho: here’s the active PR: https://github.com/kubernetes/kubernetes/pull/84205

There are still a few open questions since last fall, and I haven’t gotten a response from the reviewers.

@k8s-triage-robot: Closing this issue, marking it as “Not Planned”.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.