moby: Cannot start container: [0] Id already in use

I can’t start an existing Docker container running MongoDB. docker ps -a shows that the container exists:

6d33bbf4bce9 mongo:latest "/entrypoint.sh mongo" 47 minutes ago Exited (-1) 22 minutes ago hotels-mongo

And when I start it with docker start hotels-mongo, the daemon throws this error: Error response from daemon: Cannot start container hotels-mongo: [0] Id already in use: Container with id exists: 6d33bbf4bce902279721419a062bcd7d2ddcffdd8ef13d70bcd5e7ae71cf5c10 Error: failed to start containers: [hotels-mongo]

It also can’t be started through Kitematic. This container worked for 2-4 weeks, and now it doesn’t.

Docker version:

Client: Version 1.8.3, API version 1.20
Server: Version 1.9.1, API version 1.21

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 23 (6 by maintainers)

Most upvoted comments

I ran into that problem too, and my solution is very simple: docker-compose down -v, then docker-compose up -d again.
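A minimal sketch of that sequence; note that the -v flag also removes the volumes declared in the Compose file, so any data stored in them is lost:

# Stop and remove the project's containers, networks, and volumes
docker-compose down -v
# Recreate the containers and start them in detached mode
docker-compose up -d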

Just found a manual workaround: all it needs is to delete the directory /run/docker/execdriver/native/<container ID>. For instance, in my case:

sudo rm -r /run/docker/execdriver/native/1fefdf0e627a2e68fbfe00208963fb6f059aeb20ddc54e06f21fe89e98a3be31
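A scripted form of the same workaround; the container name hotels-mongo is borrowed from the original report, and the docker inspect lookup is an addition of mine, not part of the original comment:

# Resolve the full 64-character container ID from the container name
CID=$(docker inspect --format '{{.Id}}' hotels-mongo)
# Remove the stale execdriver state directory, then try starting again
sudo rm -r "/run/docker/execdriver/native/${CID}"
docker start hotels-mongo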

For me, restarting the machine solved the problem.

> I ran into that problem too, and my solution is very simple: docker-compose down -v, then docker-compose up -d again.

Thanks! That worked for me!

We’re experiencing this issue as well: when a container is stopped, we need to rm the /run/docker/runtime-runc/moby/<containerid> directory in order to start it up again, because otherwise we get the error message: OCI runtime create failed: container with id exists: <containerid>: unknown
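For what it’s worth, a sketch of how that cleanup could be scripted across all stopped containers; the loop itself is an assumption of mine, not something we have validated at scale:

# Iterate over the full (untruncated) IDs of all exited containers
for cid in $(docker ps -aq --no-trunc --filter status=exited); do
  # Remove any stale runc state directory left behind for this container
  if [ -d "/run/docker/runtime-runc/moby/${cid}" ]; then
    sudo rm -rf "/run/docker/runtime-runc/moby/${cid}"
  fi
done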

We do not have a reliable way to reproduce this behavior, but it happens very often in our rather large environment (more than 30 impacted containers out of 900+).

Restarting dockerd or the server itself is not an option for us; any suggestions? I’d be happy to help create a patch if there are any ideas on where to start looking.

Server:
 Containers: 977
  Running: 403
  Paused: 0
  Stopped: 574
 Images: 31
 Server Version: 19.03.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.15.0-1054-aws
 Operating System: Ubuntu 18.04.3 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 72
 Total Memory: 137.5GiB
 Name: docker
 ID: IGSA:ZFGF:6WUS:GYDG:J5OV:AWYO:UNRD:HDFN:R6UR:SINP:X5A3:BEXB
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

I have the same issue:

Server: Docker Engine - Community
 Engine:
  Version:          18.09.0
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       4d60db4
  Built:            Wed Nov  7 00:16:44 2018
  OS/Arch:          linux/amd64
  Experimental:     false

I had daily reboots enabled on one of my webservers (Ubuntu 18.04.1). On SOME boots, the containers were not started. I ran service docker restart several times, and after the third call the containers started successfully.
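Roughly what my repeated restarts amounted to, as a sketch; the container name web is a placeholder for whichever container should be up after boot:

# Keep restarting the Docker daemon until the expected container is running
until docker ps --format '{{.Names}}' | grep -qx web; do
  sudo service docker restart
  sleep 10
done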

Previous log entries:

Dez 28 03:16:19 www dockerd[939]: time="2018-12-28T03:16:19.530886826Z" level=error msg="Failed to start container a8f0c68e1aa88c02c6a974c52d3f3063e3bb42198d95da1ece3045a353729ae1: id already in use"
Dez 28 03:16:19 www dockerd[939]: time="2018-12-28T03:16:19.872740411Z" level=error msg="Failed to start container 3d9141cd31048c5d56cca05e89c706ab497a42f135e8a634d13c9f28bb526c7e: id already in use"
Dez 28 03:16:20 www dockerd[939]: time="2018-12-28T03:16:20.871394183Z" level=error msg="Failed to start container 492bc3d57ddf9b03d1a6f93900f8fe256252b6e760c875221fbe4f5076e7e329: id already in use"
Dez 28 03:16:21 www dockerd[939]: time="2018-12-28T03:16:21.547818052Z" level=error msg="Failed to start container 274aea0d83a8db66a048dfe07135285953bcf8295e8e165e775b3c3ba039843e: id already in use"
Dez 28 03:16:21 www dockerd[939]: time="2018-12-28T03:16:21.582269285Z" level=error msg="Failed to start container eee6c4e907d0ab1c2d4d055abaad09e4072200fb1720e6c12626857fdc24bc9a: id already in use"
Dez 28 03:16:21 www dockerd[939]: time="2018-12-28T03:16:21.582393418Z" level=info msg="Loading containers: done."
Dez 28 03:16:22 www dockerd[939]: time="2018-12-28T03:16:22.137824204Z" level=info msg="Docker daemon" commit=4d60db4 graphdriver(s)=overlay2 version=18.09.0
Dez 28 03:16:22 www dockerd[939]: time="2018-12-28T03:16:22.149087112Z" level=info msg="Daemon has completed initialization"
Dez 28 03:16:22 www dockerd[939]: time="2018-12-28T03:16:22.305528298Z" level=info msg="API listen on /var/run/docker.sock"
Dez 28 03:16:22 www systemd[1]: Started Docker Application Container Engine.

At the time of the log entries, NO containers were up…

I just had this issue on 1.13.1 immediately after upgrading; I can’t start some (not all) of my containers.

A reboot of Ubuntu fixed it.

Closing this, since this has not been a valid error since at least 1.11… I think it may even have been cleared up by 1.10… but definitely by at least 1.11.

Docker has not executed containers itself since 1.11; it relies on runc from the OCI, and there is no execdriver anymore. The error was fixed in libcontainer (which runc uses) a while back, and it no longer exists.
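On a modern engine you can confirm that runc is the runtime in use:

# Print the daemon's default OCI runtime (expected output: runc)
docker info --format '{{.DefaultRuntime}}'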

If you run into more issues, please open a new one. Thanks!