helm: write: broken pipe

Since version 2.8.0, I’m getting the following error while running helm upgrade --install release_name chart:

E0209 11:21:52.322201 93759 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:53674->127.0.0.1:53680: write tcp4 127.0.0.1:53674->127.0.0.1:53680: write: broken pipe

Anyone got a hint as to what could be a possible cause for this?

Edit: 2.8.1 does not fix this. I’m on macOS.

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 82
  • Comments: 99 (19 by maintainers)

Most upvoted comments

use safari instead of chrome

So you’re ending maintenance of Helm 2 now, when Helm 3 isn’t even stable yet and merging everything will also take time?

I am afraid that it must be deferred until 2.13 because it required regenerating one of the protobuf classes (due to internal gRPC changes). Consequently, the binary protocol may be incompatible with earlier versions of 2.12. This is actually the reason why we only update gRPC when necessary.

That said, I will see if we can speed up the 2.13 release process at all. If this fixes a major issue, it’s worth getting out the door.

Helm 3 is out. Try using that.
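For anyone moving off Helm 2, a rough sketch of the migration path using the commonly used helm-2to3 plugin (the release name below is just an example):

# Install the 2to3 plugin into the Helm 3 client
helm plugin install https://github.com/helm/helm-2to3

# Migrate Helm 2 configuration (repos, plugins) and then individual releases
helm 2to3 move config
helm 2to3 convert my-release

# Once everything is converted, optionally remove Helm 2 data and Tiller
helm 2to3 cleanup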

Why is the issue closed? As it stands right now, it is impossible to use Helm in CI/CD with TLS due to this issue. Do we have to wait for Helm 3?

Update: I believe it should be stated on the front page that there is a major issue, so that people won’t waste time with Helm for the time being.

Update 2, my temporary solution: in the CI/CD script, prior to running any Helm install/upgrade commands, I disable exit on non-zero exit codes (set +e), run the helm command, re-enable exit on non-zero exit codes (set -e), and use kubectl rollout status with a timeout to wait for the deployment to become available. If the timeout is hit, it means something has gone wrong. In my case I only care about deployments becoming available. For example:

set +e
helm upgrade --install prometheus stable/prometheus
set -e
kubectl rollout status --timeout=30s deployment/prometheus-server
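A slightly fuller sketch of that workaround as it might appear in a CI script, using the same example release and deployment names and treating the rollout status as the real success signal:

#!/usr/bin/env bash
# Run the upgrade without letting the spurious "broken pipe" error fail the job
set +e
helm upgrade --install prometheus stable/prometheus
helm_exit=$?
set -e
echo "helm exited with ${helm_exit} (ignored; rollout status below is authoritative)"

# Treat the rollout status (with a timeout) as the real success/failure signal
if ! kubectl rollout status --timeout=120s deployment/prometheus-server; then
  echo "deployment did not become available within the timeout" >&2
  exit 1
fi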

I opened a PR that updates us to a newer version of gRPC, which seems to have about a dozen network and TLS fixes since our last version. I’m hopeful that will fix this issue.

We’re experiencing this on about half of our deploys, and unfortunately it makes the CD system think the deployment failed.

2018-09-20T10:54:29.4029987Z E0920 10:54:26.165265   25441 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:34899->127.0.0.1:42674: write tcp4 127.0.0.1:34899->127.0.0.1:42674: write: broken pipe
2018-09-20T10:54:29.4182951Z ##[error]E0920 10:54:26.165265   25441 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:34899->127.0.0.1:42674: write tcp4 127.0.0.1:34899->127.0.0.1:42674: write: broken pipe

Tested using Helm 2.10 and 2.8.2 on both Linux and Windows, using TLS and Azure’s Kubernetes 1.11.2.

It occurs very regularly and is considerably inconveniencing us.

I was just dealing with a similar issue, and running helm init --upgrade --history-max=0 seemed to fix it for me.

error:

E0531 14:54:27.094118   97288 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:53496->127.0.0.1:53498: write tcp4 127.0.0.1:53496->127.0.0.1:53498: write: broken pipe
Error: UPGRADE FAILED: "dev" has no deployed releases
kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

helm version
Client: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}

I was able to track down the cause of this issue to expired Tiller and Helm certificates.

Issue Description

I originally secured my Tiller installation following these instructions https://v2.helm.sh/docs/using_helm/#using-ssl-between-helm-and-tiller. In the tutorial, the duration for the validity of the Helm and Tiller certificates is set to 365 days. I originally generated the certificates with this value, but over 365 days ago.

When running any Helm command (e.g. helm version or helm list), I received an error message similar to this:

an error occurred forwarding 49855 -> 44134: error forwarding port 44134 to pod 4106da54d86955cc3f88c866cf45afdaf0c6edf9f471ad669f23ba56dc77e6ab, uid : exit status 1: 2020/05/27 21:33:00 socat[15077] E write(5, 0x5642d8387150, 24): Broken pipe
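A quick way to confirm that the certificates have in fact expired is to check their notAfter dates with openssl; the file names below are the ones used in the tutorial and may differ in your setup:

# Print the expiry date of the Helm client and Tiller certificates
openssl x509 -in helm.cert.pem -noout -enddate
openssl x509 -in tiller.cert.pem -noout -enddate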

Issue Resolution

  • Back up your existing certificates
  • Using your existing Tiller key, Helm key, CA key, and CA certificate, generate new Helm and Tiller certificates following the original instructions described in https://v2.helm.sh/docs/using_helm/#using-ssl-between-helm-and-tiller
  • Upgrade your Tiller setup with these new certificates. I did not have to edit the Tiller secrets in the cluster manually, as the following command updated the secrets for me (note that this will also upgrade Tiller in your cluster to the corresponding version of Helm you’re using):
helm init \
  --service-account tiller \
  --tiller-namespace tiller \
  --tiller-tls \
  --tiller-tls-cert tiller.crt \
  --tiller-tls-key ~/.ssh/tiller.key \
  --tiller-tls-verify \
  --tls-ca-cert ~/.ssh/ca.helm.crt \
  --upgrade
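Afterwards, you can verify that Tiller accepts the new certificates by running a simple TLS-enabled command against it; the client certificate and key file names below are placeholders and should match whatever you generated for the Helm client:

helm version \
  --tiller-namespace tiller \
  --tls \
  --tls-verify \
  --tls-ca-cert ~/.ssh/ca.helm.crt \
  --tls-cert helm.cert.pem \
  --tls-key helm.key.pem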

I believe the AKS folks are aware of this bug and have hit it themselves. It appears to be an upstream issue, not necessarily a helm problem. I’ll see if I can ping one of them and see if they can provide any updates on this ticket.

Can confirm that this is still an issue in 2.13

$ helm install appscode/kubedb-catalog --name kubedb-catalog --version 0.10.0 --namespace kube-system 
NAME:   kubedb-catalog                                                                                                                     
E0303 20:13:56.088186   21260 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:33299->127.0.0.1:44514: write tcp4 127.0.0.1:33299->127.0.0.1:44514: write: broken pipe
LAST DEPLOYED: Sun Mar  3 20:13:54 2019
NAMESPACE: kube-system
STATUS: DEPLOYED
...

I was just trying out KubeDB and got this. I see a lot of people saying that removing --wait solved the problem for them, but what I think happens is that with that flag you are just a lot more likely to hit this, since the connection stays open until the deployment finishes. It does not remove the issue completely.

Just making sure it’s included:

Azure AKS running k8s version 1.12.6

export TILLER_NAMESPACE="tiller"
export HELM_TLS_ENABLE="true"

Is there anything else we can provide for debugging this?

Upgraded to 2.13.0 and this issue is still happening, @technosophos.

kubectl v1.13.4, Helm 2.13.0, AKS 1.11.3, using TLS

Using Azure Devops hosted agents


2019-03-01T06:39:43.4897111Z [command]C:\hostedtoolcache\windows\helm\2.13.0\x64\windows-amd64\helm.exe upgrade --tiller-namespace [tillerns] --namespace [snip] --install --values [values] --wait --tls --tls-ca-cert D:\a\_temp\ca.cert.pem --tls-cert D:\a\_temp\helm-vsts.cert.pem --tls-key D:\a\_temp\helm-vsts.key.pem --values [values] --values [values] --values [values] --values [values] [release] [chart]
2019-03-01T06:40:01.6728072Z E0301 06:39:44.751630    3996 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:1704->127.0.0.1:1706: write tcp4 127.0.0.1:1704->127.0.0.1:1706: wsasend: An established connection was aborted by the software in your host machine.
2019-03-01T06:40:01.6739267Z Release "[release]" has been upgraded. Happy Helming!
2019-03-01T06:40:01.6762749Z E0301 06:40:00.814924    3996 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:1704->127.0.0.1:1707: write tcp4 127.0.0.1:1704->127.0.0.1:1707: wsasend: An established connection was aborted by the software in your host machine.

Client and server version for completeness,

2019-03-01T07:03:22.2064935Z Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
2019-03-01T07:03:22.2065659Z Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
2019-03-01T07:09:04.9754415Z [command]C:\hostedtoolcache\windows\kubectl\1.13.2\x64\kubectl.exe version -o json
2019-03-01T07:09:05.8327359Z {
2019-03-01T07:09:05.8327545Z   "clientVersion": {
2019-03-01T07:09:05.8327674Z     "major": "1",
2019-03-01T07:09:05.8327817Z     "minor": "13",
2019-03-01T07:09:05.8327886Z     "gitVersion": "v1.13.2",
2019-03-01T07:09:05.8327992Z     "gitCommit": "cff46ab41ff0bb44d8584413b598ad8360ec1def",
2019-03-01T07:09:05.8328073Z     "gitTreeState": "clean",
2019-03-01T07:09:05.8328164Z     "buildDate": "2019-01-10T23:35:51Z",
2019-03-01T07:09:05.8328241Z     "goVersion": "go1.11.4",
2019-03-01T07:09:05.8328355Z     "compiler": "gc",
2019-03-01T07:09:05.8328441Z     "platform": "windows/amd64"
2019-03-01T07:09:05.8328539Z   },
2019-03-01T07:09:05.8328646Z   "serverVersion": {
2019-03-01T07:09:05.8328712Z     "major": "1",
2019-03-01T07:09:05.8328796Z     "minor": "11",
2019-03-01T07:09:05.8328863Z     "gitVersion": "v1.11.3",
2019-03-01T07:09:05.8328972Z     "gitCommit": "a4529464e4629c21224b3d52edfe0ea91b072862",
2019-03-01T07:09:05.8329052Z     "gitTreeState": "clean",
2019-03-01T07:09:05.8329146Z     "buildDate": "2018-09-09T17:53:03Z",
2019-03-01T07:09:05.8329265Z     "goVersion": "go1.10.3",
2019-03-01T07:09:05.8329352Z     "compiler": "gc",
2019-03-01T07:09:05.8329437Z     "platform": "linux/amd64"
2019-03-01T07:09:05.8329507Z   }
2019-03-01T07:09:05.8329588Z }

Edit:

Upgraded the cluster to rule out k8s version. Same behavior on AKS 1.12.5

Edit 2:

I just noticed the error message has changed somewhat, from write: broken pipe to wsasend: An established connection was aborted by the software in your host machine. Maybe it’s different but related?

Edit 3:

Tried Linux for good measure. The difference in the error message was just Windows vs. Linux.

2019-03-01T08:24:47.4912692Z E0301 08:24:34.921142    4263 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:40201->127.0.0.1:35456: write tcp4 127.0.0.1:40201->127.0.0.1:35456: write: broken pipe

same issue as @Vhab;

using helm/tiller 2.11.0 with TLS and AKS 1.11.2.

The helm client is being run via a vsts-agent Docker image running in AKS; it’s part of several Azure DevOps CD pipelines. I’ve also received the error running the helm client locally on Debian stretch.

I’ve only experienced the errors with helm upgrade and/or install commands, and typically after the post-deploy report:

E1004 08:10:19.411418   14567 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:35543->127.0.0.1:41728: write tcp4 127.0.0.1:35543->127.0.0.1:41728: write: broken pipe

It would be good if there was a fix so my CD pipeline stops alerting a failure when the actual helm deployment works fine 😄

edit: tried @bacongobbler’s suggestion to manually set up the connection, and the following error intermittently occurred when using kubectl port-forward for multiple commands. kubectl port-forward for uses other than helm works fine on my cluster.

E1004 10:30:03.107023      35 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44134->127.0.0.1:38992: write tcp4 127.0.0.1:44134->127.0.0.1:38992: write: broken pipe
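For reference, a rough sketch of what manually setting up the connection looks like, assuming the default tiller-deploy deployment in kube-system and placeholder certificate paths:

# Forward Tiller's gRPC port to localhost in the background
kubectl -n kube-system port-forward deployment/tiller-deploy 44134:44134 &

# Point the helm client at the forwarded port instead of letting it open its own tunnel
export HELM_HOST=127.0.0.1:44134
helm list --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem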

I can confirm that export HELM_TLS_HOSTNAME=<tiller-deployment-name> works; with it I didn’t get the broken pipe error. 😃

The current thinking is that this is an upstream bug. So I’m not sure there is much left that we can do. But it definitely won’t impact Helm 3, which no longer uses the port forwarding/tunneling aspect of Kubernetes.

Also, for those of you impacted… are you using TLS? I hadn’t thought about whether the gRPC layer might be having TLS-related troubles.

@marrobi no updates to share at this time, sorry.

Same issue here, using helm install. EKS Kubernetes version: 1.11.0, Helm client: 2.12.1, Helm server: 2.12.0.

@bacongobbler any update on this? Again Azure DevOps/VSTS hosted agent and AKS. Thanks.

Same problem when running a job from VSTS that deploys to AKS on Azure:

Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Kubernetes: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

E1002 11:57:35.278788 207 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:38037->127.0.0.1:40594: write tcp4 127.0.0.1:38037->127.0.0.1:40594: write: broken pipe

A bit scary that this issue has been hanging around since February…

I’m getting the same issue using an AKS cluster within Azure.

This issue is in the way kubectl port forwarding is handled and not related to helm itself. There is an issue open on the kubernetes repo about this.

I was facing a similar issue in a different setting, where I was uploading files into a pod, and the reason I was getting a broken pipe turned out to be the memory limits set on the pod. The file I was uploading was larger than the pod’s memory limit, so I was getting the following error:

portforward.go:400] an error occurred forwarding 5000 -> 5000: error forwarding port 5000 to pod 9d0e07887b021ac9a2144416bc7736ce9b22302da25483ac730c5737e2554d7c, uid : exit status 1: 2019/05/17 03:54:30 socat[13000] E write(5, 0x186ed70, 8192): Broken pipe

On increasing the pod limits I was able to upload the file successfully.
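If you suspect the same cause, a quick way to compare the file size against the pod’s memory limit (the pod name, namespace, and file name below are placeholders):

# Show the memory limit of the first container in the pod
kubectl -n my-namespace get pod my-pod \
  -o jsonpath='{.spec.containers[0].resources.limits.memory}'

# Compare against the size of the file being copied into the pod
ls -lh my-large-file.tar.gz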

Here is the thing: TLS validation errors close the connection unexpectedly, and the kubectl proxy in the background complains about it without helm printing the actual error.

In my case it was as simple as adding “localhost” to the server certificate hosts and setting:

export HELM_TLS_HOSTNAME=localhost

openssl s_client -connect was key to narrowing it down, and then translating the findings into helm flags.
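A minimal sketch of that debugging approach, assuming the default tiller-deploy service in kube-system and a tutorial-style CA file name:

# Forward Tiller's port so its TLS endpoint is reachable locally
kubectl -n kube-system port-forward svc/tiller-deploy 44134:44134 &

# Inspect the certificate Tiller presents and check which hostnames it is valid for
openssl s_client -connect 127.0.0.1:44134 -CAfile ca.cert.pem -showcerts </dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'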

This should definitely be flagged as a bug. The real error is completely silent, even with --debug.

Thank you, I forgot to re-open this. Yes, right now signs are pointing towards a bug upstream affecting Helm.

On the bright side, this issue should not be a problem for Helm 3 given that we’ve removed tiller and interact directly with the API server. 😃

Still seeing this occur in our deployments; it’s causing some issues with a tool we’re writing for Helm.

Version information if it helps at all:

ubuntu@kubenode01:/opt/flagship$ helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
ubuntu@kubenode01:/opt/flagship$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
ubuntu@kubenode01:/opt/flagship$

Our tool output, but this is really just regurgitating information returned to Helm:

ubuntu@kubenode01:/opt/flagship$ barrelman apply --diff barrelman-testing.yaml
INFO[0000] Using config                                  file=/home/ubuntu/.barrelman/config
NewSession Context:
INFO[0000] Connected to Tiller                           Host=":44160" clientServerCompatible=true tillerVersion=v2.11.0
INFO[0000] Using kube config                             file=/home/ubuntu/.kube/config
INFO[0000] syncronizing with remote chart repositories
Enumerating objects: 25, done.
Counting objects: 100% (25/25), done.
Compressing objects: 100% (24/24), done.
Total 30 (delta 7), reused 6 (delta 1), pack-reused 5
E1201 18:12:09.572396   16908 portforward.go:316] error copying from local connection to remote stream: read tcp4 127.0.0.1:44160->127.0.0.1:60512: read: connection reset by peer
E1201 18:12:10.122456   16908 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44160->127.0.0.1:60532: write tcp4 127.0.0.1:44160->127.0.0.1:60532: write: broken pipe
E1201 18:12:10.379437   16908 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44160->127.0.0.1:60546: write tcp4 127.0.0.1:44160->127.0.0.1:60546: write: broken pipe
E1201 18:12:11.184402   16908 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44160->127.0.0.1:60556: write tcp4 127.0.0.1:44160->127.0.0.1:60556: write: broken pipe
ERRO[0003] Failed to get results from Tiller             cause="rpc error: code = Unknown desc = \"kube-proxy\" has no deployed releases"
ubuntu@kubenode01:/opt/flagship$

Would you mind explaining a bit more in detail? What didn’t work for you and why? Do you have logs? That would be most helpful.

@oivindoh I’m running on k8s in AWS. It used to be ok up until 2.7.2.

I face the same issue. Any suggested workarounds? This is very annoying.