moby: Re-deploying docker stack does not update containers with latest image
Description: Docker stacks re-deployed using a compose file are not updated, even if the image tagged `latest` changes.
Steps to reproduce the issue:
- The compose file uses a locally built image, `myapp:latest`.
- I can easily deploy the stack using `docker stack deploy --compose-file docker-compose.yaml dev1`.
- If I run the `docker stack deploy --compose-file docker-compose.yaml dev1` command again, I get a bunch of warnings about the image not being pinned:
Updating service dev1_myapp (id: zi9n4b4u1fl2et4q8p7qjyz9d)
unable to pin image myappi:latest to digest: errors:
denied: requested access to the resource is denied
unauthorized: authentication required
But otherwise the container is not updated, because the `myapp:latest` image reference didn’t change. However, I expected that running `docker stack deploy --compose-file docker-compose.yaml dev1` after building a newer `myapp` image (re-tagged `latest`) would trigger the shutdown of the old container and the start of a new one. Instead nothing happens (other than the warning messages shown above).
The only workaround to get new containers started is to run either `docker rm -f <container-id>` or `docker service update --image myapp:latest dev1_myapp`.
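The two workaround commands above, as a sketch (`<container-id>` stays a placeholder for the running container's ID; names are taken from the report):

```shell
# Option 1: force-remove the stale container; Swarm schedules a replacement task
docker rm -f <container-id>

# Option 2: explicitly point the service at the rebuilt image
docker service update --image myapp:latest dev1_myapp
```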
Using Docker for Mac (v1.13.1) in Swarm mode (no registries). Thanks!
Output of `docker version`:
Client:
Version: 1.13.1
API version: 1.26
Go version: go1.7.5
Git commit: 092cba3
Built: Wed Feb 8 08:47:51 2017
OS/Arch: darwin/amd64
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Go version: go1.7.5
Git commit: 092cba3
Built: Wed Feb 8 06:38:28 2017
OS/Arch: linux/amd64
Experimental: false
Output of `docker info`:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 7
Server Version: 1.13.1
Storage Driver: overlay
Backing Filesystem: extfs
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: 53z5vdy0zsvbxt23l47smfsvz
Is Manager: true
ClusterID: l6vaez9qzonfs0cw3mrpsc7kl
Managers: 1
Nodes: 1
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 10.0.2.15
Manager Addresses:
10.0.2.15:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1
runc version: 9df8b306d01f59d3a8029be411de015b7304dd8f
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-514.6.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.796 GiB
Name: msc-vbox
ID: ZHPK:HUJZ:TYAB:CHFG:YDGH:6XC6:C2A2:P76O:I7Q7:DQPN:5YWS:QA7Q
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Additional environment details (AWS, VirtualBox, physical, etc.): Docker For Mac, in Swarm mode. No registries (private or otherwise).
About this issue
- Original URL
- State: open
- Created 7 years ago
- Reactions: 61
- Comments: 44 (8 by maintainers)
Links to this issue
Commits related to this issue
- script/server: add Swarm support This sources the env file we're currently using, but this might change. We don't remove the stack, just deploy it. This should updated things, although we might need... — committed to mysociety/docker-consul by sagepe 5 years ago
For those reading this issue in late 2018, two thoughts on current swarm designs/workflows:
Don’t use the `latest` tag for clusters, servers, etc. In general, using `latest` (or reusing any image tag) over and over, especially with CI/CD, will get you into trouble eventually. Every team I’ve worked with hit this issue at some point (just google “don’t use latest tag”). This applies to any container technology (docker run, Kubernetes, Swarm, etc.). Even if Swarm properly resolved the image tag to the latest sha digest (which it now does, see below), it makes troubleshooting and validation much harder: with everything always running `account/image:latest`, commands like `docker service ls` and `docker stack ps` don’t help you know which versions you’re truly running. If you ever do service updates or rollbacks, the ls/ps output all looks the same and you end up having to dig in with lots of inspect commands. There are lots of other reasons not to use `latest`, and not just Swarm-related ones. `latest` is good for having the latest commit on your master branch tagged for quick one-off testing, demo solutions, etc. A good comparison: for testing/learning we all just `apt-get install mysql-server`, but on servers our ops will want a specific version, `apt-get install mysql-server=5.7.21-1ubuntu1`. We do that with our dependencies, so expect to do the same with container images. The most common approach is to put an environment variable for the image tag in your stack file; it is fed into each `docker stack deploy` run by your CI/CD system, which knows the git commit ID, tags the image it just built with that ID, pushes it to a registry, and sets the env var during the stack deploy.
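As a sketch, the CI/CD tagging workflow described above might look like this (the registry host, image name, and `TAG` variable are assumptions; the stack file would reference something like `image: registry.example.com/myapp:${TAG}`):

```shell
# Tag the freshly built image with the git commit ID instead of "latest"
TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/myapp:"$TAG" .
docker push registry.example.com/myapp:"$TAG"

# docker stack deploy substitutes ${TAG} from the environment into the stack file
TAG="$TAG" docker stack deploy -c stack.yml mystack
```

Because the tag changes on every commit, the service spec changes on every deploy, and Swarm rolls the tasks without any special flags.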
But I still want to use `latest` anyway, and have Swarm check for the latest sha digest (or any tag that I reuse every deploy). It turns out this should work out of the box as of 17.06: `docker stack deploy` should pull sha digests from the registry each time. This has changed a bit since then, and bugs were fixed, so this is where we’re at today: `docker stack deploy -c stack.yml mystack`, with 18.06 client and server versions, will turn the `latest` tag (or the lack of a tag) in the stack.yml into a sha digest and send that to the `service update` command. If the sha digests match, and nothing else in the yaml has changed, the service will not replace the task. Stack deploy now has a `docker stack deploy --resolve-image` option, which defaults to `always`, so it should always check the sha digest. `never` means it will not compare digests from the registry or local cache (even if the node's image cache has a different version). I can’t figure out what `changed` does.

Same issue here. Since there are supporters of both default behaviors - auto-pulling and digest-hardcoding - I would suggest adding a `--pull` option to the CLI or `pull: always` in the YML.

@saamalik, is the docker repo that your image is being pulled from private? I just did some more testing and this seems to be a repository authentication failure. Doing `docker stack deploy -c docker-compose.yml --with-registry-auth test` solved the problem for me. Related: #29295
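Combining the suggestions in the comments above, a deploy that both forwards registry credentials and forces digest resolution might look like this (the registry host and stack/file names are placeholders; `--resolve-image always` is the stated default on recent versions, so passing it explicitly is belt-and-braces):

```shell
# Log in first so --with-registry-auth has credentials to forward to the nodes
docker login registry.example.com

# Forward auth and resolve each tag to a sha digest on every deploy
docker stack deploy --with-registry-auth --resolve-image always \
  -c docker-compose.yml test
```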
We had a similar issue and were able to fix it by tagging our images with increasing version numbers in our build script, so docker was able to identify when an update was necessary.
Docker version 20.10.5 on Ubuntu, and it still does not work! I pulled a new image with the latest tag before running `docker stack deploy -c ...` to update a running stack, but it does not restart services for which a new image with a different digest exists locally. The only options for updating images are to manually restart the services or to tear down the whole stack.

I’m having the same issue. Using `docker stack deploy` does not update to the new digest. This is a deal breaker for me because I have a versioned docker-compose.yml (in Git) which triggers a docker stack update on every change, so I need to have a static tag. P.S.: I don’t think it’s an authentication issue, because I’m using `--with-registry-auth`.
What a shame. Having your stack point to latest, and using a CI process to tag each newly generated version as latest, is much more convenient than bumping the version every time you deploy a new version of a service, IMHO. It would be great to add a `pull: always` option to docker stack to force the service to point to the latest digest. I also have a problem where updating my service with `docker service update --image` works well, but when I update the stack, it rolls back to an older version (because it points to latest, not to latest@digest).
This is the same conclusion I came to - and the same workflow I ended up with.
Alright, I have now hit this issue again.
The build, tagging, and pushing of the new image all go fine. The image can be verified as present in the Nexus private registry, and the latest tag is updated. Then the `docker stack deploy --with-registry-auth -c stack.yml [stackname]` command runs fine, with no errors. And that’s it. `docker service ps [servicename]` shows no hint of an in-progress update, and neither does `docker service inspect [servicename]`. There are no errors from `journalctl | grep docker`. It is basically as if the `stack deploy` command did absolutely 100% nothing. Removing the stack manually and then rerunning the same `docker stack deploy` command rectifies the problem, and the correct `latest` version is actually used (verified with `docker service inspect [servicename]`). `docker info` output:

Just tested again with an image tagged latest from our own registry and updated the stack with `docker stack deploy --prune --with-registry-auth -c $FILE $STACK`, and the 7 containers based on that image were updated. But, as I mentioned in https://github.com/moby/moby/issues/31357#issuecomment-359363903, after getting the prompt back I had to wait another 30 seconds until the services were really updated. At this point only one thing is left: the tags of the images listed with `docker image list` were not updated. The image with the ID 9c827b80dd79 is the new latest.
Just a short comment - a mistake I made several times: when doing `docker stack deploy`, it takes some time until the containers are updated/changed. We have to keep that in mind when testing the different behaviour of `docker stack deploy` and `docker service update`. I sometimes did a `docker stack deploy` and thought nothing had happened. After that I did a `docker service update --force` and saw the service updating. It could be that `docker stack deploy` actually did the update - just not immediately.

Without wishing to add noise, I think it’s noteworthy that we (too) are moving away from `latest` during deployments. Instead, we tag images with a build number and update the compose/stack file. This adds a step but does provide an improved audit trail for the changes being deployed.

@saamalik using the `--image` flag with `service update` is the recommended way of updating the image, but I’m not sure how this links with `stack deploy`. It is likely that we will have to add some logic for updating the image when certain flags are provided for stack files. As for the warnings, I’m working on improving those.
@thaJeztah what do you suggest in this case?
After some external discussions I have been led to the conclusion that updating `latest` to `latest` is simply not really supported, and you should not do it.

The sane workflow is apparently to do “green field” deployments with the stack file and `docker stack deploy`; the stack file will then have all service tags as `latest`. After that, use `docker service update` to move the specific service to a new tag. This works for me, even though it complicates my life.
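The workflow described above, sketched with placeholder names (`mystack`, `myapp`, and the `1.2.3` tag are assumptions):

```shell
# One-time "green field" deployment; stack.yml references myapp:latest
docker stack deploy -c stack.yml mystack

# Later releases: move the individual service to an explicit new tag
docker service update --image myapp:1.2.3 mystack_myapp
```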
Having the same problems on 17.05-ce and 17.09-ce clusters.
`docker service update --with-registry-auth --image` gives flaky results. Sometimes it does nothing, sometimes it updates one replica and leaves the other one alone. Rarely it fully updates a service (and this is very, very rare). Scaling the service down to 0, and then back up to 2, seems to bring up new containers with the correct image.
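The scaling workaround from the comment above, as a sketch (the service name and replica count of 2 are assumptions):

```shell
# Tear the tasks down, then bring them back so fresh containers are created
docker service scale mystack_myapp=0
docker service scale mystack_myapp=2
```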
I am also having this issue when pulling from a private registry, despite using the `--with-registry-auth` flag and separately trying a forced `docker service update`. I always have to remove the stack to get the changes to go in.
@mujiburger @nishanttotla Not using a public/private registry; local images on dev systems.
Initially the `--with-registry-auth` option seemed promising, since all service containers were restarted; so I assumed that running with `--with-registry-auth` again would probably just update and restart the services with changed image IDs. Unfortunately, the second (and subsequent) runs of `docker stack deploy --with-registry-auth ...` didn’t do anything (despite newer images being mapped to `:latest`). Probably the initial mass restart happened because `--with-registry-auth` changed some portion of the service spec for all services. The ideal behavior, in my opinion, would be for `docker stack deploy` (with some option or not) to be able to update services targeting `:latest` if their image changes. `docker service update --image ...` does this correctly. It would be cool if we could get the same from `docker deploy`.
@TQuy the behaviour you’re describing means an image is pushed up and you’re expecting a locally built image to override it. Unfortunately, the pushed image takes priority from what I can tell, so if you want to ensure you’re using the locally built image, you should not `docker push` a new version; reserve a tag, e.g. `local`, which will use the image that is locally built and in the local repo.

Same thing for me on a testing Swarm cluster consisting of a mix of RancherOS and Ubuntu nodes. A stack uses the original images even if I perform a `docker stack rm` and then re-deploy; that has been blowing my mind and burning time. I have a CI pipeline deploying to the cluster, and the workaround for me is to append the CI build number to the end of the stack name. This makes the stack unique, and it pulls the latest images. In my CI script, I am basically finding the name of the existing stack, removing it, and then replacing it with a unique name formed by appending the build number from the CI process. In my case, ${1} is the CI build number that makes the stack name unique. This appears to force obtaining new images if there are any. I’m not currently incrementing the image version numbers, though I will be, and I’m sure that will address it too, as someone else mentioned.
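A minimal sketch of that CI step, assuming a POSIX shell script where `${1}` is the CI build number and `myapp` is the base stack name (both the base name and file path are assumptions, not taken from the original script):

```shell
#!/bin/sh
STACK_BASE=myapp
BUILD_NUMBER=${1}

# Remove any previously deployed stack whose name starts with the base name
for s in $(docker stack ls --format '{{.Name}}' | grep "^${STACK_BASE}"); do
  docker stack rm "$s"
done

# Deploy under a unique name so Swarm treats it as a brand-new stack
# and pulls fresh images instead of reusing the cached service spec
docker stack deploy -c docker-compose.yml "${STACK_BASE}-${BUILD_NUMBER}"
```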
Experiencing the exact same behavior on OS X 10.11.6, using `docker stack deploy -c my-stack.yml my-stack` to update ‘my-stack’. The initial deployment works fine, but updating fails. `docker info`: