moby: write unix /var/run/docker.sock->@: write: broken pipe
I’m sending commands to different containers; this was working correctly until a few days ago. Since then, the only things that changed are: a new Docker version was released, and I switched from a TCP connection to a socket connection. The rest of the commands execute fine.
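For context, "sending commands to containers" over the socket goes through the exec endpoints of the Docker API (v1.23, matching the version output below). A rough sketch with curl — the container name `app` and the command are placeholders, not taken from the report:

```shell
# Create an exec instance in a (hypothetical) container named "app"...
EXEC_ID=$(curl -s --unix-socket /var/run/docker.sock \
    -H 'Content-Type: application/json' \
    -d '{"AttachStdout": true, "AttachStderr": true, "Cmd": ["date"]}' \
    http://localhost/v1.23/containers/app/exec \
    | sed 's/.*"Id":"\([^"]*\)".*/\1/')

# ...then start it; this is the POST /exec/{id}/start call that fails
# in the daemon logs below.
curl -s --unix-socket /var/run/docker.sock \
    -H 'Content-Type: application/json' \
    -d '{"Detach": false, "Tty": false}' \
    "http://localhost/v1.23/exec/$EXEC_ID/start"
```

This needs curl with `--unix-socket` support (7.40+); with a TCP connection the same calls go to `http://host:2375/v1.23/...` instead.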
Information:
This environment runs inside of a virtual machine with Debian Jessie.
$ docker version
Client:
 Version: 1.11.0
 API version: 1.23
 Go version: go1.5.4
 Git commit: 4dc5990
 Built: Wed Apr 13 18:17:17 2016
 OS/Arch: linux/amd64

Server:
 Version: 1.11.0
 API version: 1.23
 Go version: go1.5.4
 Git commit: 4dc5990
 Built: Wed Apr 13 18:17:17 2016
 OS/Arch: linux/amd64
$ docker info
Containers: 9
 Running: 6
 Paused: 0
 Stopped: 3
Images: 12
Server Version: 1.11.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 89
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.963 GiB
Name: docker-host
ID: 4NGE:LLRW:4LEL:CE6Q:S7BN:UZ6Q:LA6K:4WHG:PNRI:LHAD:TWFW:WDBJ
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
$ uname -a
Linux docker-host 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-1 (2016-03-06) x86_64 GNU/Linux
$ cat /etc/issue
Debian GNU/Linux 8 \n \l
Apr 21 07:35:46 docker-host docker[11709]: time="2016-04-21T07:35:46.790486036Z" level=error msg="attach: stdout: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 21 07:35:46 docker-host docker[11709]: time="2016-04-21T07:35:46.931491694Z" level=error msg="Error running exec in container: attach failed with error: write unix /var/run/docker.sock->@: write: broken pipe\n"
Apr 21 07:35:46 docker-host docker[11709]: time="2016-04-21T07:35:46.931563085Z" level=error msg="Handler for POST /exec/fc6c278bc319d31ae125e6bfe62aab3bed6ae54d64223597e2c7a367fd540153/start returned error: attach failed with error: write unix /var/run/docker.sock->@: write: broken pipe"
Apr 21 07:35:46 docker-host docker[11709]: 2016/04/21 07:35:46 http: response.WriteHeader on hijacked connection
Apr 21 07:35:46 docker-host docker[11709]: 2016/04/21 07:35:46 http: response.Write on hijacked connection
Update: The problem also occurs when using a TCP connection, although there are fewer of these errors than when using a socket connection.
Apr 21 08:33:24 docker-host docker[25468]: time="2016-04-21T08:33:24.772961231Z" level=error msg="attach: stderr: write tcp 10.99.0.99:2375->172.17.0.2:40968: write: broken pipe"
Apr 21 08:33:25 docker-host docker[25468]: time="2016-04-21T08:33:25.890528192Z" level=error msg="Error running exec in container: attach failed with error: write tcp 10.99.0.99:2375->172.17.0.2:40968: write: broken pipe\n"
Apr 21 08:33:25 docker-host docker[25468]: time="2016-04-21T08:33:25.890563806Z" level=error msg="Handler for POST /exec/aca8b3206f70586afa35766edf858827b57dd1e7a5643878df1996c8c818f061/start returned error: attach failed with error: write tcp 10.99.0.99:2375->172.17.0.2:40968: write: broken pipe"
Apr 21 08:33:25 docker-host docker[25468]: 2016/04/21 08:33:25 http: response.WriteHeader on hijacked connection
Apr 21 08:33:25 docker-host docker[25468]: 2016/04/21 08:33:25 http: response.Write on hijacked connection
docker-php issue: https://github.com/docker-php/docker-php/issues/195
About this issue
- Original URL
- State: open
- Created 8 years ago
- Reactions: 15
- Comments: 35 (8 by maintainers)
I hit the same issue on my Kubernetes worker node, and I have no idea about this. I’m also not sure whether this issue affects the status of the nodes.
I am getting this issue under heavy stdout loads
This is because Docker is trying to write a response to a client connection, but the client has already closed the connection.
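That failure mode is easy to reproduce outside of Docker: any server process that keeps writing to a Unix stream socket after its peer hangs up gets `EPIPE`, which is exactly the `write: broken pipe` in the daemon logs. A minimal self-contained sketch (plain Python sockets, nothing Docker-specific):

```python
import os
import socket
import tempfile
import threading
import time


def demo_broken_pipe():
    """Return the exception raised when the server side keeps writing
    to a Unix socket after the client has disconnected."""
    path = os.path.join(tempfile.mkdtemp(), "demo.sock")
    listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    listener.bind(path)
    listener.listen(1)

    result = {}

    def server():
        conn, _ = listener.accept()
        time.sleep(0.2)                  # by now the client has hung up
        try:
            while True:                  # keep streaming "container output"
                conn.sendall(b"log line\n" * 1024)
        except OSError as exc:           # EPIPE surfaces as BrokenPipeError
            result["error"] = type(exc).__name__
        finally:
            conn.close()

    t = threading.Thread(target=server)
    t.start()

    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(path)
    client.close()                       # disconnect mid-stream, like a dying attach client
    t.join()
    listener.close()
    os.unlink(path)
    return result.get("error")


print(demo_broken_pipe())                # prints: BrokenPipeError
```

So the daemon-side log lines are a symptom, not the root cause: the interesting question is why the attached client (a monitoring agent, a crashing container, a CLI being killed) went away mid-stream.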
2016/08/02 17:04:43 http: multiple response.WriteHeader calls
time="2016-08-02T17:04:43.813279646Z" level=error msg="Handler for GET /containers/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
$ docker -v
Docker version 1.11.2, build b9f10c9/1.11.2
AWS/ECS AMI
Investigating…
We encountered this on Docker Engine version 19.03.15, on an AWS instance with an attached gp3 EBS volume at the default values of 3000 IOPS and 125 MiB/s throughput. We considered increasing the IOPS value after reading this comment. However, that gave us the idea that maybe the daemon was having issues starting with some of the unused/dangling images on the server, as the daemon was using 50% CPU and almost exhausting the available memory even without running containers (with the exception of a single Datadog container).
After running a cleanup of those images, which took an unusually long time (15 min) to delete just 2.8 GB, the problem went away, and memory and CPU went back to normal idle levels.
As mentioned above, we do have a running Datadog container on every instance, but only 1 out of many was having this issue, so it could also be a combination of a bad unused/dangling image and the Datadog container trying to collect data.
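The comment above doesn’t show the exact cleanup command; on Docker 19.03 removing unused/dangling images is typically done with `docker image prune` (a sketch, not necessarily what the commenter ran):

```shell
# Remove dangling (untagged) images only:
docker image prune -f

# Remove all images not referenced by at least one container:
docker image prune -a -f
```

`-f` skips the confirmation prompt; drop it to review what would be deleted first.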
@thaJeztah sorry to bother you, but maybe you can provide some information about the state of this issue? I have the same problem with 17.06.2-ce.
@Aisuko yes, we started noticing fewer docker.sock broken pipe errors. In our case, we had a combination of some containers logging like hell, constantly crashing, being OOMKilled, restarting, and too many containers on a single node. Maybe monitoring and tweaking this can help reduce the load on the Docker API, as well as switching to SSD.
The problem went away for us when we stopped using a particular setting in the Datadog agent (collecting Docker image stats).
I’m seeing this problem when running the DataDog agent (v6) in Docker, which monitors docker.sock to upload stats. See https://github.com/aws/amazon-ecs-agent/issues/1489
It’s happening very consistently to me, but it would be fairly complicated for me to make a minimal repro, since our setup has most of the work going through Amazon’s ECS agent.
@allencloud I would say yes… but I’m sure you’ve got some situation that will cause me to eat my words… 😃
I’m getting this all over the place. I thought it was an issue with older versions, but this host was updated right after I first saw it, and now I’m getting it again less than a day later.