moby: docker info and docker ps hang indefinitely after the daemon has been up for a few weeks

Output of docker version:

docker --version
Docker version 1.12.0, build 8eab29e
(however, we have also seen this on v1.10.0 and v1.11.0)

Output of docker info:

Containers: 32
 Running: 0
 Paused: 0
 Stopped: 32
Images: 188
Server Version: 1.12.0
Storage Driver: devicemapper
 Pool Name: vg--docker-thinpool-tpool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: ext4
 Data file: 
 Metadata file: 
 Data Space Used: 83.23 GB
 Data Space Total: 219.9 GB
 Data Space Available: 136.7 GB
 Metadata Space Used: 107.3 MB
 Metadata Space Total: 2.751 GB
 Metadata Space Available: 2.644 GB
 Thin Pool Minimum Free Space: 21.99 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Library Version: 1.02.77 (2012-10-15)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null bridge host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor
Kernel Version: 3.13.0-74-generic
Operating System: Ubuntu 14.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 60 GiB
Name: REDACTED
ID: SGZ3:CS2C:KBGQ:OD6P:CI4C:ZT3C:KATV:CA6M:LU3K:WVRG:NMP2:5CC4
Docker Root Dir: /mnt/docker/1.12.0-0
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 15
 Goroutines: 23
 System Time: 2016-08-24T22:47:32.632920815Z
 EventsListeners: 0
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Insecure Registries:
 127.0.0.0/8

Additional environment details (AWS, VirtualBox, physical, etc.):

uname -a
Linux REDACTED 3.13.0-74-generic #118-Ubuntu SMP Thu Dec 17 22:52:10 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
hosted on: AWS EC2

Steps to reproduce the issue:

  1. Build containers constantly (we use Docker in our build infrastructure).
  2. docker info / docker ps hang forever.

Describe the results you received: docker info / docker ps hang forever.

Describe the results you expected: docker info / docker ps return normally (the output in this ticket was captured after killing and restarting the daemon).
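
One way the hang can be detected programmatically is to issue the same API calls that back docker ps and docker info with a deadline, so a wedged daemon surfaces as an error instead of blocking forever. A minimal sketch, assuming a Docker Go SDK version in which ContainerList still accepts types.ContainerListOptions; the 30-second timeout is an arbitrary choice:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	// Connect using DOCKER_HOST and related environment variables.
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		panic(err)
	}

	// Bound every call so a hung daemon turns into an error after 30s.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Equivalent of `docker ps -a`.
	if _, err := cli.ContainerList(ctx, types.ContainerListOptions{All: true}); err != nil {
		fmt.Println("container list failed or timed out:", err)
		return
	}

	// Equivalent of `docker info`.
	if _, err := cli.Info(ctx); err != nil {
		fmt.Println("info failed or timed out:", err)
		return
	}
	fmt.Println("daemon responded normally")
}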

Additional information you deem important (e.g. issue happens only occasionally): The issue happens most often with our build servers, which build hundreds of images a day. It also happens with our hosting servers on AWS ECS (which don't build containers, just pull them down and run them). We are using docker-gc to clean up old images and containers.

Some log output:

time="2016-08-23T21:08:35.942361075Z" level=debug msg="Calling GET /v1.17/containers/ca227290a2a5b4f288cd755a179064e50814df49c288e72a5e7d6f304ff7a650/json"

time="2016-08-23T21:08:35.942421925Z" level=error msg="Handler for GET /v1.17/containers/ca227290a2a5b4f288cd755a179064e50814df49c288e72a5e7d6f304ff7a650/j
son returned error: No such container: ca227290a2a5b4f288cd755a179064e50814df49c288e72a5e7d6f304ff7a650" 
time="2016-08-23T21:08:35.942507414Z" level=debug msg="Calling GET /v1.17/containers/ca227290a2a5b4f288cd755a179064e50814df49c288e72a5e7d6f304ff7a650/json"

time="2016-08-23T21:08:35.942553081Z" level=error msg="Handler for GET /v1.17/containers/ca227290a2a5b4f288cd755a179064e50814df49c288e72a5e7d6f304ff7a650/j
son returned error: No such container: ca227290a2a5b4f288cd755a179064e50814df49c288e72a5e7d6f304ff7a650" 
time="2016-08-23T21:08:36.336137532Z" level=debug msg="sandbox set key processing took 54.229861ms for container d100e952c5b95e1f1ae4e74920f4b41598b7f9a375
418e3df198e67e0b9bcc54" 
time="2016-08-23T21:08:37.062898936Z" level=debug msg="libcontainerd: received containerd event: &types.Event{Type:\"start-container\", Id:\"d100e952c5b95e
1f1ae4e74920f4b41598b7f9a375418e3df198e67e0b9bcc54\", Status:0x0, Pid:\"\", Timestamp:(*timestamp.Timestamp)(0xc827aa3a60)}" 
time="2016-08-23T21:08:37.246141086Z" level=debug msg="libcontainerd: event unhandled: type:\"start-container\" id:\"d100e952c5b95e1f1ae4e74920f4b41598b7f9
a375418e3df198e67e0b9bcc54\" timestamp:<seconds:1471986517 nanos:62683323 > " 
time="2016-08-23T21:08:37.246636283Z" level=debug msg="Calling GET /v1.18/containers/d100e952c5b95e1f1ae4e74920f4b41598b7f9a375418e3df198e67e0b9bcc54/json"

time="2016-08-23T21:08:37.246808273Z" level=debug msg="Calling GET /v1.17/containers/d100e952c5b95e1f1ae4e74920f4b41598b7f9a375418e3df198e67e0b9bcc54/json"

time="2016-08-23T21:08:37.246867087Z" level=debug msg="Calling GET /v1.17/containers/d100e952c5b95e1f1ae4e74920f4b41598b7f9a375418e3df198e67e0b9bcc54/json"

time="2016-08-23T21:08:37.925925818Z" level=debug msg="devmapper: activateDeviceIfNeeded(7491b49612670c8475364e05c83ce6ce6ed7e54373353ff438267346839eb774-i
nit)" 
time="2016-08-23T21:08:37.928587328Z" level=debug msg="container mounted via layerStore: /mnt/docker/1.12.0-0/devicemapper/mnt/3183570706a237711af2ea0c561b
465bc0e53bf14d121b34df67c60d8e166129/rootfs" 
time="2016-08-23T21:08:37.930307759Z" level=debug msg="devmapper: UnmountDevice(hash=3183570706a237711af2ea0c561b465bc0e53bf14d121b34df67c60d8e166129)" 
time="2016-08-23T21:08:41.560392730Z" level=debug msg="devmapper: Unmount(/mnt/docker/1.12.0-0/devicemapper/mnt/3183570706a237711af2ea0c561b465bc0e53bf14d1
21b34df67c60d8e166129)" 

Most upvoted comments

@jonaseberle your hang should be solved by https://github.com/docker/docker/pull/27405

Your daemon is blocked on the OpenFile call in openReaderFromFifo.
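
For context, a minimal self-contained sketch (not the daemon's actual code; the path and timings are made up) of why such an OpenFile can block: opening a FIFO read-only does not return until some process opens the write end, so with no writer the goroutine is stuck indefinitely, while an O_NONBLOCK open returns at once:

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

func main() {
	path := "/tmp/example.fifo" // hypothetical path for the demo
	_ = os.Remove(path)
	if err := syscall.Mkfifo(path, 0600); err != nil {
		panic(err)
	}
	defer os.Remove(path)

	done := make(chan struct{})
	go func() {
		// Blocking open: O_RDONLY on a FIFO waits for a writer to open
		// the other end. With no writer, this goroutine never returns.
		f, err := os.OpenFile(path, os.O_RDONLY, 0)
		if err == nil {
			f.Close()
		}
		close(done)
	}()

	select {
	case <-done:
		fmt.Println("open returned (a writer appeared)")
	case <-time.After(2 * time.Second):
		fmt.Println("open still blocked after 2s: no writer on the FIFO")
	}

	// Opening with O_NONBLOCK returns immediately even without a writer,
	// which is one way to avoid parking a goroutine on the open itself.
	fd, err := syscall.Open(path, syscall.O_RDONLY|syscall.O_NONBLOCK, 0)
	if err == nil {
		fmt.Println("non-blocking open returned immediately")
		syscall.Close(fd)
	}
}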