kubernetes: Port-forwarding not working due to missing socat command
I think the problem fixed by #17157 might still be present. I’m using the spark example with the ubuntu provider. I’m forwarding to local port 8081 below because I’m running the command on my k8s master, which is already using 8080.
$ which socat
/usr/bin/socat
$ kubectl port-forward zeppelin-controller-oea9w 8081:8080
I0116 11:51:55.986614 22704 portforward.go:213] Forwarding from 127.0.0.1:8081 -> 8080
I0116 11:51:55.986782 22704 portforward.go:213] Forwarding from [::1]:8081 -> 8080
I0116 11:52:03.982497 22704 portforward.go:247] Handling connection for 8081
E0116 11:52:04.116196 22704 portforward.go:318] an error occurred forwarding 8081 -> 8080: error forwarding port 8080 to pod zeppelin-controller-oea9w_default, uid : unable to do port forwarding: socat not found.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"}
Purportedly, 1.1.2 should already include the fix from #17157.
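For what it’s worth, the “socat not found” error above is raised by the kubelet on the node hosting the pod, so socat has to be present on that node rather than on the machine where kubectl runs. A minimal sketch of how one might verify and work around this, assuming SSH access to a Debian/Ubuntu node (the SSH user and the jsonpath output flag are assumptions on my part, not from the original report):
# The error comes from the node-side kubelet, so check the node that runs the pod.
NODE=$(kubectl get pod zeppelin-controller-oea9w -o jsonpath='{.spec.nodeName}')
ssh ubuntu@"$NODE" 'command -v socat || echo "socat missing on this node"'
# Quick workaround on a Debian/Ubuntu node: install socat directly.
ssh ubuntu@"$NODE" 'sudo apt-get update && sudo apt-get install -y socat'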
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 3
- Comments: 33 (23 by maintainers)
Commits related to this issue
- Add socat, which is required for port-forwarding (https://github.com/kubernetes/kubernetes/issues/19765) — committed to slapers/centos-k8s by slapers 8 years ago
- Add new socat package to kubelet job This is necessary to make 'kubectl port-forward' work. Details: https://github.com/kubernetes/kubernetes/issues/19765#issuecomment-178271557 The socat blob wi... — committed to Amit-PivotalLabs/kubo-release by amitkgupta 7 years ago
The reason the recommended kubelet-wrapper approach bothers me, and why I have opted not to use it thus far, is that it relies on a separate distribution of the kubelet itself on CoreOS’s quay.io account. On a few occasions when I’ve tried it, the version of k8s I wanted to use had not yet been packaged and released, and I only found that out when I brought up a new cluster and everything failed because it couldn’t download the kubelet. I don’t think it’s a reasonable compromise to add a custom distribution of the kubelet to my dependency chain, even though it is of course my choice to use CoreOS in the first place. Sure, CoreOS’s release process could be improved so that new versions of the kubelet land there faster, but either way, it’s another thing that can break that just doesn’t need to be there.
One of the main points of containers (at least as they’ve been sold since the initial Docker hype) is that you no longer have to worry about system dependencies, because applications have everything they need packaged along with them. I think the best way forward is for the official kubelet image to either package system-level dependencies along with itself, or have a mechanism to download and run separate images containing the tools it needs via whatever container runtime the user selects. Any additional images needed by kubelet should also be maintained on gcr.io and be included as part of the k8s release process.
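As a rough illustration of the “package system-level dependencies with the kubelet image” option, a layered image could look something like the sketch below. The hyperkube base image, tag, and target registry are illustrative assumptions, not an official recipe:
# Hypothetical sketch: layer socat on top of a containerized kubelet image.
# Base image, tag, and registry names are placeholders for illustration only.
cat > Dockerfile <<'EOF'
FROM gcr.io/google_containers/hyperkube-amd64:v1.2.4
RUN apt-get update \
    && apt-get install -y --no-install-recommends socat \
    && rm -rf /var/lib/apt/lists/*
EOF
docker build -t registry.example.com/hyperkube-socat:v1.2.4 .
docker push registry.example.com/hyperkube-socat:v1.2.4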
One option for socat in particular (which would likely apply to most, if not all, other external kubelet executable dependencies) is to add it to the pause container. I’m actually going to look at doing that as part of #25113.
Originally I had assumed a good place to put socat and other necessary utilities would be in the pod infrastructure image.
Fair points @crawford, and thanks for the additional context. As it applies to this issue, it still seems like the official kubelet release needs a better story around packaging its dependencies with it in some way. There may be more tweaks that CoreOS has made to the distro-provided kubelet image, but the changes we’re talking about would benefit any theoretical system with similar properties, and as stable as these networking tools are, it’s probably a good idea to lock down specific versions of them that have been shown to work with the kubelet in e2e tests.
If you run the kubelet via kubelet-wrapper on CoreOS, the dependencies (like socat) are included in the rkt fly container the kubelet is run in. CoreOS issue: https://github.com/coreos/bugs/issues/1114
/cc @crawford @aaronlevy
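For anyone who wants to confirm that the wrapper-provided kubelet really does carry socat, something like the following should work. This is only a sketch: it assumes the kubelet was started through kubelet-wrapper (rkt fly) and that its pod shows up in rkt list; the awk-based UUID lookup is an assumption, not a documented interface:
# Find the kubelet's rkt pod UUID, then look for socat inside its filesystem.
UUID=$(sudo rkt list --no-legend | awk '/kubelet/ {print $1; exit}')
sudo rkt enter "$UUID" /bin/sh -c 'command -v socat'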