devspace: On Windows 10, devspace keeps exiting and throwing an error after running fine for a few minutes.

What happened?

  1. I run devspace up.
  2. devspace deploys and I am able to work in the container.
  3. After a few minutes I get kicked out of the container and out of devspace.
  • I can run devspace up again to reconnect and keep working, but it keeps happening.
  4. I see this in the logs: {"level":"error","msg":"Runtime error occurred: error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:61572-\u003e127.0.0.1:61574: write tcp4 127.0.0.1:61572-\u003e127.0.0.1:61574: wsasend: An established connection was aborted by the software in your host machine.","time":"2018-10-31T11:11:26-05:00"}

What did you expect to happen instead?
I should be able to keep working in the container for as long as I need to.

How can we reproduce the bug? (as minimally and precisely as possible)
Follow the same steps I did in the “What happened” section.

Local Environment:

  • Operating System: Windows 10
  • Deployment method: helm

Kubernetes Cluster:

  • Cloud Provider: Bare metal via Rancher 2.0
  • Kubernetes Version: Client v1.10.2, Server v1.11.1

Anything else we need to know?
This might be happening during the sync operation; I'm not sure, but it seems more stable after everything is synced up. I'll update when I know more.

/kind bug

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 19

Most upvoted comments

@KaelBaldwin Okay, I'll open a new issue for that and close this one. That is also the only solution we can implement right now, since we cannot find out exactly who is closing the connections and why.

EDIT: Should you find out why the connections are getting closed and that there is a better solution, feel free to reopen the issue.

@FabianKramm they all failed this time, including kubectl exec, which had persisted every other time. I did some monitoring after getting that result:

[screenshot: network monitoring graph showing a sharp spike in data transfer]

Looks like the data transfer spikes very high at a certain point, which is impressive! Haha. I'm thinking what's happening here is that the sync saturates my network interface and the connections get dropped.

Looking into it further, I have a rather large file in my repo at the moment that I was using for test data. I'm betting that transferring it is what causes the overload.
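
A rough way I could test that theory (just my own plan, not anything from the devspace docs): temporarily move the large file out of the directory that gets synced and see whether the disconnects stop. The path below is a placeholder for wherever the test-data file actually lives.

  # Move the large test-data file out of the synced project directory (placeholder path),
  # then start devspace again and watch whether the connection still drops.
  mv ./testdata/large-test-data.bin ../large-test-data.bin
  devspace up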

So I noticed today, while starting it up, that I got kicked out again. The first time I got kicked out, I decided to exec into the container via kubectl, as you mentioned, while also running devspace up in another terminal.

I ran devspace up first and it connected fine.

I ran kubectl exec and it failed to connect. I then checked the devspace terminal, and it had indeed disconnected.

So this does seem to be a connectivity issue.
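
For reference, the two-terminal check above looked roughly like this; the pod name and namespace below are placeholders for whatever devspace actually created in the cluster.

  # Terminal 1: start the devspace terminal as usual
  devspace up

  # Terminal 2: after terminal 1 gets kicked out, try a plain kubectl exec to see
  # whether a connection to the pod can still be established at all
  kubectl exec -it <pod-name> -n <namespace> -- /bin/sh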

I wonder if the data transfer during the sync process is causing a timeout that makes devspace give up its connection somewhere and kick me out.

FYI, this is an on-premises bare-metal cluster, so connectivity to the nodes should be fine. But the cluster can get pretty busy; test servers are being built on it regularly, so it could be a combination of traffic and maybe resource usage.
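
If resource pressure is part of it, a rough spot-check I could run the next time it disconnects (assuming metrics are available in the cluster, e.g. via metrics-server or heapster) would be:

  # See whether node or pod resource usage lines up with the disconnects
  kubectl top nodes
  kubectl top pods -n <namespace>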