compose: `docker compose up` recreates running containers whose configs have not changed in docker-compose.yml
Description
`docker compose up` stops and recreates running containers whose configuration has not changed.
Steps to reproduce the issue:
- Run `docker compose up`
- Change the config for one service in `docker-compose.yml`
- Run `docker compose up` again
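The steps above can be sketched as a script. The service names and image tags here are hypothetical stand-ins, not taken from the original report; the `docker compose` invocations are left as comments since they need a running daemon:

```shell
# Write a minimal two-service compose file (hypothetical services).
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:1.23
  cache:
    image: redis:7
EOF

# First run creates both containers:
#   docker compose up -d

# Change the config of ONE service, e.g. bump only the cache image tag:
sed -i.bak 's/redis:7/redis:7.0/' docker-compose.yml

# Second run: only "cache" should be recreated, but the reported bug
# recreates unrelated containers too:
#   docker compose up -d
grep 'image:' docker-compose.yml
```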
Describe the results you received: Many containers unrelated to the service whose config changed are recreated and restarted.
Describe the results you expected: Only the container of the service with the new config should be recreated, and only the containers that depend on it should be restarted.
Additional information you deem important (e.g. issue happens only occasionally): After switching to v2 this happens almost all of the time; on v1 there was no such problem.
Output of `docker compose version`:

```
Docker Compose version v2.6.0
```
Output of `docker info`:

```
Client:
 Context: default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.8.2-docker)
  compose: Docker Compose (Docker Inc., v2.6.0)
  scan: Docker Scan (Docker Inc., v0.17.0)

Server:
 Containers: 45
  Running: 39
  Paused: 0
  Stopped: 6
 Images: 36
 Server Version: 20.10.17
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
 runc version: v1.1.2-0-ga916309
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.15.0-40-generic
 Operating System: Ubuntu 22.04 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 11.66GiB
 Name: aardvark-vm
 ID: CKOC:QAHC:ZJ6M:5INA:JRIE:BFCW:EUIA:OCHV:BRZ2:MF3K:C4PO:EGDQ
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
```
Additional environment details:
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Reactions: 23
- Comments: 63 (12 by maintainers)
I’m getting the same issue, and all images are available locally. In my case it’s triggered by a `docker-compose run`: a linked running container gets recreated even though nothing has changed. I checked the hash; it does not change across the unexpected recreates.
EDIT: as pointed out in #10068, in my case the result of `docker-compose config --hash <service>` does not change, but the result of `docker inspect <container> -f '{{json .Config.Labels}}' | jq -r '."com.docker.compose.config-hash"'` does change. (v2.12.0)
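The label-extraction half of that comparison can be exercised without a daemon. In this sketch the JSON stands in for real `docker inspect <container> -f '{{json .Config.Labels}}'` output, and the hash value is fabricated for illustration:

```shell
# Sample of what the inspect format string might emit (hash is made up).
labels='{"com.docker.compose.config-hash":"deadbeef1234","com.docker.compose.project":"demo"}'

# Same jq filter as in the comment above: pull out the config hash label.
hash=$(echo "$labels" | jq -r '."com.docker.compose.config-hash"')
echo "$hash"
```

Comparing this value against `docker-compose config --hash <service>` is what reveals the mismatch described in #10068.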
@leranp download the binary from https://github.com/docker/compose/releases and install it under `~/.docker/cli-plugins`
@ndeloof I created this minimal working example; I hope you can reproduce it.

`docker-compose` = `docker compose`; it’s just that shell completion still works (zsh): `docker-compose: aliased to /usr/libexec/docker/cli-plugins/docker-compose`
It did happen to me with newer docker-compose binaries. The interesting thing is that some (semi-random) containers got restarted even when working correctly. I’ve used a very simple script to check if there are new images and, if so, to restart the containers. However, some of the containers got restarted even while working correctly. It seems that if the original creation of these containers (`docker-compose up -d`) was done with an older docker-compose binary (pre v2), the current version will sometimes decide that the container needs a restart. I had to take down all my containers and then start them up using the more modern v2.16 docker-compose binary. FYI.
Here is my example for you. It definitely seems to be something that only occurs after an initial `docker-compose up`, when all container images are initially pulled. As an example, here is us bringing up our docker compose stack (I have redacted unimportant details like container names and the AWS account ID):
docker-compose.yml:
terminal log:
Then take down one of the running containers and use `docker-compose up -d` to bring it back. You will notice that this version recreates ALL the containers, whereas docker-compose 1.x only restarts the stopped container:
If I then stop one container again and use `docker-compose up -d` to restart it, it now behaves as per v1.x and simply restarts the one stopped container:
I can consistently reproduce this; there is a clear issue here. Note that this also occurs if, after an initial up where all images are pulled, you edit the docker-compose file, update the image tag of one of the containers, and do a `docker-compose up -d`: it will recreate all containers. However, if you then try again, it will only recreate the container with the changed tag, as one would expect. Example of this:

There seems to be some bug here somewhere.
@debdutdeb It worked in 2.2.x, I believe. This is a regression. It’s also nonsense behaviour; there’s no way on earth it is desirable.
@debdutdeb I tried with your compose file and ran `docker compose up -d` twice (with around one minute in between) without modifying anything. On the first run, both of the containers came up; this is expected. On the second run, both of them were recreated, even though I had not modified anything.

Afterwards, I tried it again and it did not happen. Even with a modified compose file, it doesn’t always occur.
@ndeloof You missed the `s` at the end: `~/.docker/cli-plugins` (I wasted some time figuring out why it didn’t work).

@imjuzcy please check the `com.docker.compose.config-hash` label set on containers, and compare it with `docker compose config --hash "*"`. If the compose file has no change, those should match (and the container won’t be recreated). Please note that hash computation had some bugs in the past, so switching between compose releases might trigger containers to be recreated, but that is expected.

Same here, the recreation is making me crazy 🤣
@ndeloof how likely are we to get a point release including the bugfix? Or is there a release cycle expected imminently that will also include the aforementioned changes?
Found the root cause: indeed related to changes in 2.16.0, and unfortunately unrelated to the comparable https://github.com/docker/compose/issues/10068.
I can reproduce with your example script, thanks! Investigating to understand what’s wrong with hash computation while setting service labels…
@ndeloof This is exactly what I’m observing as wrong behaviour in #10068.
@marzzzello please use the following commands to check the configuration hashes:

Those should show the exact same hash.
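The comparison discussed in this thread can be wrapped in a small check. The docker invocations are shown as comments (they need a running daemon, and the exact output parsing is an assumption on my part); the comparison logic itself runs standalone:

```shell
# In a real check the two hashes would come from something like:
#   expected=$(docker compose config --hash <service>)
#   actual=$(docker inspect <container> \
#     -f '{{index .Config.Labels "com.docker.compose.config-hash"}}')
# compare_hashes is a hypothetical helper, not part of any docker CLI.
compare_hashes() {
  if [ "$1" = "$2" ]; then
    echo "match: container will not be recreated"
  else
    echo "MISMATCH: compose will recreate this container"
  fi
}

compare_hashes "abc123" "abc123"
compare_hashes "abc123" "def456"
```

A mismatch between the two values is exactly the symptom reported in #10068.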
I can’t see any change between 2.15.1 and 2.16.0 that could explain a different behavior…
I’m seeing this behaviour on 2.16.0. Will try to find a minimal example tomorrow.
@debdutdeb We are still working through updating our test environment with the latest version to confirm if we are still seeing the issue or not, we will update here once we know the result either way
@sillidev does it not make sense to recreate those containers if the one you’re running (`run` means a new container beyond what `up -d` created) depends on them directly or indirectly? ~You lost me there. This doesn’t seem like a regression.~ Rethinking: if `run` creates a new container to “run” a command, maybe it should run the dependencies individually as well?

Hi guys, this is a pretty bad regression of well-established docker compose behaviour. We will likely need to return to version 1.x until this is addressed. What needs to happen for the core developers to take a serious look at this?