moby: error: level=fatal msg="no sandbox present"
Output of docker version:
Docker version 1.11.0, build 4dc5990
Output of docker info:
Containers: 52
Running: 43
Paused: 0
Stopped: 9
Images: 48
Server Version: swarm/1.2.0
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 3
node02: 192.168.150.151:2375
└ Status: Healthy
└ Containers: 11
└ Reserved CPUs: 0 / 8
└ Reserved Memory: 0 B / 36.09 GiB
└ Labels: executiondriver=, kernelversion=4.2.0-16-generic, operatingsystem=Ubuntu 15.10, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-05-10T08:41:06Z
└ ServerVersion: 1.11.0
node01: 192.168.150.152:2375
└ Status: Healthy
└ Containers: 28
└ Reserved CPUs: 0 / 9
└ Reserved Memory: 0 B / 41.26 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.2.0-16-generic, operatingsystem=Ubuntu 15.10, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-05-10T08:41:12Z
└ ServerVersion: 1.10.3
node03: 192.168.150.150:2375
└ Status: Healthy
└ Containers: 13
└ Reserved CPUs: 0 / 6
└ Reserved Memory: 0 B / 24.72 GiB
└ Labels: executiondriver=native-0.2, kernelversion=4.2.0-16-generic, operatingsystem=Ubuntu 15.10, storagedriver=aufs
└ Error: (none)
└ UpdatedAt: 2016-05-10T08:41:14Z
└ ServerVersion: 1.10.3
Plugins:
Volume:
Network:
Kernel Version: 4.2.0-16-generic
Operating System: linux
Architecture: amd64
CPUs: 23
Total Memory: 102.1 GiB
Name: a698c6c4c271
I have a Swarm cluster with Consul and Registrator. I created an overlay network and run all my containers on that network, but after restarting node02 I can't start the old containers and get the following message:
Error response from daemon: rpc error: code = 2 desc = "oci runtime error: exit status 1: time="2016-05-10T11:34:07+03:00" level=fatal msg="no sandbox present for 3edaede1e26068579d9fc2d200156c7ab0df736eddb31119e6ff92cc0260d923" \n"
What should I do to resolve this problem?
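A few diagnostic commands that can help narrow this down — they only confirm whether the overlay network and the failing container are still consistent after the node restart; the overlay network name (`my-overlay`) is illustrative, not taken from the report:

```shell
# List overlay networks visible to the Swarm manager; the network the
# containers were started on should still appear here.
docker network ls --filter driver=overlay

# Inspect the overlay network ("my-overlay" is a placeholder name).
# After the node restart, the failing container may be missing from,
# or stale in, the "Containers" section of this output.
docker network inspect my-overlay

# Show the failing container's state and the network it was attached to
# (container ID taken from the error message above).
docker inspect --format '{{.State.Status}} {{.HostConfig.NetworkMode}}' 3edaede1e260
```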
About this issue
- Original URL
- State: open
- Created 8 years ago
- Reactions: 2
- Comments: 20 (6 by maintainers)
And the only way to fix that issue after it has happened -
None of which is an option for production Docker use.
I have the same issue; my workaround was to rename the container. It's just a single container name that I'm no longer able to use.
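A minimal sketch of that rename workaround, assuming the stuck container is called `web` (both container names and the run options below are illustrative, not from this thread):

```shell
# "web" is assumed to be the container whose sandbox is gone; since it can
# no longer start with "no sandbox present", recreate it under a new name.

# Optionally remove the broken container (its name may stay unusable anyway,
# as reported above).
docker rm -f web

# Start a replacement under a different name on the same overlay network
# ("my-overlay" and "nginx" are placeholders for the real network and image).
docker run -d --name web2 --net my-overlay nginx
```

The trade-off is that the original container name is effectively abandoned, which is why this is a workaround rather than a fix.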