moby: [1.12-rc2] docker should not allow `start` and `restart` on swarm-managed containers

test case 1 (no ports published)

docker service create --name foobar --replicas 5  nginx:alpine

# (wait for tasks to be up)

docker stop $(docker ps --filter label=com.docker.swarm.service.name=foobar -q)

# (wait for swarm to recover)

docker start $(docker ps --filter label=com.docker.swarm.service.name=foobar -aq)

After this, there are 10 containers running; docker doesn't kill/remove the excess containers.

I’d expect docker to kill/remove those containers
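
For reference, a quick way to see the duplication (a sketch, reusing the label filter from the commands above):

# count running containers for the service; with 5 replicas this now prints 10
docker ps --filter label=com.docker.swarm.service.name=foobar -q | wc -l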

test case 2

docker service create --name foobar --replicas 5 -p 8008:80  nginx:alpine

# (wait for tasks to be up)

docker stop $(docker ps --filter label=com.docker.swarm.service.name=foobar -q)

# (wait for swarm to recover - note in my case it didn't fully recover, but hung at 3/5)

docker start $(docker ps --filter label=com.docker.swarm.service.name=foobar -aq)

results in:

Error response from daemon: Address already in use
Error response from daemon: Address already in use
Error response from daemon: Address already in use
Error response from daemon: Address already in use
Error response from daemon: Address already in use
Error response from daemon: Address already in use
0467a6877366
2ab26cbf3e20
Error response from daemon: Address already in use
fd07976c407f
Error response from daemon: Address already in use
Error response from daemon: Address already in use
79d552f268aa
Error response from daemon: Address already in use
0d659b1bafa1
Error: failed to start containers: 7f4936520a2a, 26a3b00257e1, 616064cf7938, 3c3fcb883565, 9e15f45321e6, b2fcb6182fe7, 8583b7c2dabf, f887d00e4811, 72e0596b161a, 249800a970a0
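
docker start prints the ID of each container it manages to start and an error for each one it can't, so the service ends up in a mixed state. A rough way to see that mix per container (sketch, same label filter as above):

# show each of the service's containers with its current status
docker ps -a --filter label=com.docker.swarm.service.name=foobar --format '{{.ID}}\t{{.Status}}'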

Tested on Docker for Mac, but similar results on other platforms

Client:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   906eacd
 Built:        Fri Jun 17 20:35:33 2016
 OS/Arch:      darwin/amd64
 Experimental: true

Server:
 Version:      1.12.0-rc2
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   a7119de
 Built:        Fri Jun 17 22:09:20 2016
 OS/Arch:      linux/amd64
 Experimental: true

About this issue

  • Original URL
  • State: open
  • Created 8 years ago
  • Comments: 15 (12 by maintainers)

Most upvoted comments

@thaJeztah I changed the title on this issue to make it clearer.

Containers are managed by a task that must maintain an accurate state of the container. Tasks can only proceed in one direction, towards failure or success. As tasks proceed through their lifecycle, they report state that monotonically increases. If you start up containers that are managed as tasks, this guarantee is broken and the task will no longer manage the container.

The lifecycle of containers managed by a service is set by the manager (via the dispatcher). The manager passes the list of assigned tasks down to a worker, and the worker holds on to those containers as long as the tasks remain assigned. This gives the manager some control over the lifecycle of containers in the cluster, to support investigation after a crash.

For the most part, it's okay to stop containers. They will be restarted by the orchestrator. If a container is restarted by another mechanism, the task controller won't try to shut it down again. This is a property of the system to handle cases like this. The issue here is that they won't be shut down automatically, since the user has effectively taken control of them. If we didn't do this, we would shut the container down immediately after you started it.
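
A small sketch of the "stopping is fine" case described above, using the same label filter as in the report (the replacement is started by the orchestrator, not by the user):

# stop the container backing one task
docker stop $(docker ps --filter label=com.docker.swarm.service.name=foobar -q | head -n 1)

# after the restart delay, the orchestrator brings the service back to the declared replica count
docker ps --filter label=com.docker.swarm.service.name=foobar -q | wc -l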

The current behavior is that these will go away when the service is deleted. For a long-running service this may be problematic, but they can be pruned from the assignment set to keep it manageable.

TL;DR: If you take over the management of an individual container, swarm mode will no longer manage its lifecycle. It is up to the user to delete the extra containers. In general, don't start containers managed by a service, but there isn't a good reason to prevent it.
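
Since cleanup is left to the user, one possible sketch (the com.docker.swarm.task.id label is the one swarm puts on containers it creates; which task IDs are still live has to be checked against the service itself, and the IDs to remove are placeholders):

# list the service's containers together with the task each was created for
docker ps -a --filter label=com.docker.swarm.service.name=foobar --format '{{.ID}}\t{{.Label "com.docker.swarm.task.id"}}\t{{.Status}}'

# remove the containers that no longer back a task swarm tracks
docker rm -f <extra-container-id> ...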

Also, there is a networking problem preventing the new containers from starting, which I suspect is the cause of the service hanging at 3/5 replicas.