moby: [1.11-rc2] Cannot start a container because of oci runtime error (mkdir reports file exists)

Cannot start a simple container on 1.11-rc2. This is the first time I have updated to 1.11-rc2, and so far not a single container has started successfully.

Output of docker version:

mrjana@dev-1:/vagrant/gowork/src/github.com/docker/libnetwork$ docker version
Client:
 Version:      1.11.0-rc2
 API version:  1.23
 Go version:   go1.5.3
 Git commit:   388f544
 Built:        Fri Mar 25 19:58:22 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.0-rc2
 API version:  1.23
 Go version:   go1.5.3
 Git commit:   388f544
 Built:        Fri Mar 25 19:58:22 2016
 OS/Arch:      linux/amd64

Output of docker info:

mrjana@dev-1:/vagrant/gowork/src/github.com/docker/libnetwork$ docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 268
Server Version: 1.11.0-rc2
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 3744
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 4.2.0-30-generic
Operating System: Ubuntu 15.10
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.86 GiB
Name: dev-1
ID: ZOYO:M435:W46N:OV2A:5W5F:PUNV:2UBJ:JSDH:RNHQ:GPHY:SOJY:RP26
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): true
 File Descriptors: 12
 Goroutines: 29
 System Time: 2016-03-31T13:56:53.886266826-07:00
 EventsListeners: 0
Username: mrjana
Registry: https://index.docker.io/v1/
WARNING: No swap limit support

Additional environment details (AWS, VirtualBox, physical, etc.):

Steps to reproduce the issue:

  1. Start the daemon: sudo /usr/bin/docker daemon -D
  2. Run: docker run -it --rm busybox sh

Describe the results you received:

docker: Error response from daemon: rpc error: code = 2 desc = "oci runtime error: could not synchronise with container process: mkdir /run/docker/libcontainerd/3d8f9d203304cb7e2f803e70ea5dfa7baaa533f26b01d71a7a86eb0b2a48ec6d/rootfs: file exists".

Excerpts from the docker daemon logs with debugging enabled:

DEBU[0018] Calling POST /v1.23/containers/3d8f9d203304cb7e2f803e70ea5dfa7baaa533f26b01d71a7a86eb0b2a48ec6d/attach?stderr=1&stdin=1&stdout=1&stream=1
DEBU[0018] attach: stdin: begin
DEBU[0018] attach: stdout: begin
DEBU[0018] attach: stderr: begin
DEBU[0018] Calling POST /v1.23/containers/3d8f9d203304cb7e2f803e70ea5dfa7baaa533f26b01d71a7a86eb0b2a48ec6d/start
DEBU[0018] container mounted via layerStore: /var/lib/docker/aufs/mnt/fbb07bd1a3790112ff71520711f43164681336f20e919788c0958cb2e9b640a9
DEBU[0018] Assigning addresses for endpoint trusting_williams's interface on network bridge
DEBU[0018] RequestAddress(LocalDefault/172.17.0.0/16, <nil>, map[])
DEBU[0018] Assigning addresses for endpoint trusting_williams's interface on network bridge
DEBU[0018] Programming external connectivity on endpoint trusting_williams (2e9c13275ef22433978f7116976a56adc22b2e58c28ef548cb3b605d5716d8a7)
DEBU[0018] attach: stderr: end
DEBU[0018] Revoking external connectivity on endpoint trusting_williams (2e9c13275ef22433978f7116976a56adc22b2e58c28ef548cb3b605d5716d8a7)
DEBU[0018] attach: stdin: end
DEBU[0018] attach: stdout: end
DEBU[0018] Releasing addresses for endpoint trusting_williams's interface on network bridge
DEBU[0018] ReleaseAddress(LocalDefault/172.17.0.0/16, 172.17.0.2)
ERRO[0018] Handler for POST /v1.23/containers/3d8f9d203304cb7e2f803e70ea5dfa7baaa533f26b01d71a7a86eb0b2a48ec6d/start returned error: rpc error: code = 2 desc = "oci runtime error: could not synchronise with container process: mkdir /run/docker/libcontainerd/3d8f9d203304cb7e2f803e70ea5dfa7baaa533f26b01d71a7a86eb0b2a48ec6d/rootfs: file exists"
DEBU[0018] Calling DELETE /v1.23/containers/3d8f9d203304cb7e2f803e70ea5dfa7baaa533f26b01d71a7a86eb0b2a48

About this issue

  • State: closed
  • Created 8 years ago
  • Reactions: 15
  • Comments: 64 (27 by maintainers)

Most upvoted comments

OK, the real problem in my case was the --volume option. My docker run command included this flag:

--volume "/opt/app/records.json:${WORKDIR}/records.json"

The BIG problem here is that the --volume flag created /opt/app/records.json on the host as a FOLDER (because the path did not exist yet), while ${WORKDIR}/records.json inside the image is a FILE, and that mismatch is what breaks the container.

In sum, DO NOT rely on --volume to create the host path automatically when you are mounting a file: the auto-created path will always be a directory.

Maybe it can be considered a bug?
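
To make the commenter's point concrete, here is a minimal sketch; myimage is a hypothetical image in which ${WORKDIR}/records.json is a regular file:

# The host path does not exist, so Docker auto-creates it as a directory;
# the bind mount then puts a directory where the image expects a file:
docker run --volume "/opt/app/records.json:${WORKDIR}/records.json" myimage

# Workaround: create the host path as a regular file up front,
# so the mount is file-on-file:
mkdir -p /opt/app && touch /opt/app/records.json
docker run --volume "/opt/app/records.json:${WORKDIR}/records.json" myimage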

Seems like a reboot fixed this issue for me.

Same issue here … (reboot doesn’t help).

→ docker info
Containers: 9
 Running: 0
 Paused: 0
 Stopped: 9
Images: 1301
Server Version: 1.11.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 1172
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: null host bridge
Kernel Version: 4.2.0-35-generic
Operating System: Ubuntu 15.10
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.37 GiB
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support

I should stop updating Docker immediately; I've had a broken 1.7.0, 1.8.0, 1.9.0 and 1.11.0 😉

This is still a problem. I cannot reliably have my containers come back up after a daemon restart. Does anyone have a better solution than manual intervention (which of course causes an outage that lasts until a human can SSH into the box)?

I got the same problem, and rebooting the host system resolved it.

@tdterry I think your issue is a little different.

You should be able to fix it by running docker-runc delete a0033db8ff204a3fb2b550cc74890c65a58824f21bc88e46241b8f4761bb8945 or cleaning out the directories yourself under:

/run/runc
/run/containerd
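
Wrapped up as a script, that cleanup might look like the sketch below. It is only a sketch, assuming the Docker 1.11 layout named above; the full 64-character container ID is passed as the first argument:

#!/bin/sh
# Sketch: clear stale runtime state for one container, then retry it.
set -e
CID="$1"                            # full 64-character container ID
docker-runc delete "$CID" || true   # ignore failure if no runc state remains
rm -rf "/run/runc/$CID" "/run/containerd/$CID"
docker start "$CID"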

In case this helps anyone: I found that this can happen if docker ends abnormally, for example via

kill -9 {DOCKER_PID}

The solution I found, which doesn't seem to have any ill effects, is:

rm -rf /run/runc/80768bc717f353484ab54b306bca0506861688d0b1ae0f3d724208cb37cad047
rm -rf /run/containerd/80768bc717f353484ab54b306bca0506861688d0b1ae0f3d724208cb37cad047
docker start 80768bc717f353484ab54b306bca0506861688d0b1ae0f3d724208cb37cad047
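
If many containers are affected after such a crash, a batch version of the same cleanup is sketched below; it assumes the same /run layout, and uses --no-trunc because the state directories are keyed by full container IDs:

# Sketch: retry every exited container after clearing its stale state.
for CID in $(docker ps -aq --no-trunc --filter status=exited); do
    rm -rf "/run/runc/$CID" "/run/containerd/$CID"
    docker start "$CID" || echo "still failing: $CID"
done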

Turns out that my issue was unrelated to this in the end. It was merely an incorrect volume mount that completely threw me off.