moby: Configs not updated when redeploying stack
Description: Unable to update configs when the stack is redeployed.
Steps to reproduce the issue:
- Create a file config.yml
- Deploy this stack: `docker stack deploy --compose-file stack.yml stack`
Contents of stack.yml:

```yaml
version: "3.3"
services:
  test:
    image: effectivetrainings/runner
    configs:
      - source: config.yml
        target: /my-config.yml
configs:
  config.yml:
    file: ./config.yml
```
- Update the config.yml
- Redeploy the stack
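For reference, the whole sequence on a manager node looks roughly like this (a sketch; the config contents are arbitrary):

```bash
# create the initial config file and deploy the stack
echo "setting: 1" > config.yml
docker stack deploy --compose-file stack.yml stack

# change the config file and redeploy the same stack
echo "setting: 2" > config.yml
docker stack deploy --compose-file stack.yml stack   # fails with the error below
```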
Describe the results you received: Error response from daemon: rpc error: code = InvalidArgument desc = only updates to Labels are allowed
Describe the results you expected: Configs should be updated.
Additional information you deem important (e.g. issue happens only occasionally):
Output of `docker version`:
Client:
Version: 17.06.2-ce
API version: 1.30
Go version: go1.8.3
Git commit: cec0b72
Built: Tue Sep 5 20:12:06 2017
OS/Arch: darwin/amd64
Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:40:56 2017
OS/Arch: linux/amd64
Experimental: true
Output of `docker info`:
Containers: 34
Running: 4
Paused: 0
Stopped: 30
Images: 8
Server Version: 17.09.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
NodeID: osdigy0m5vmld2txqjhse4uk7
Is Manager: true
ClusterID: laqdnfslcq3ecfbq66q3xmuwa
Managers: 1
Nodes: 3
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Root Rotation In Progress: false
Node Address: 192.168.33.49
Manager Addresses:
192.168.33.49:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-93-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 992.1MiB
Name: worker-1
ID: GLS7:AHRQ:M3IM:FSLB:TE7N:DBVQ:5P7Y:IG4P:HSFF:G73W:VWUH:WVIY
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
**Additional environment details (AWS, VirtualBox, physical, etc.):**
About this issue
- Original URL
- State: open
- Created 7 years ago
- Reactions: 64
- Comments: 62 (14 by maintainers)
Links to this issue
Commits related to this issue
- Append hash to config name https://github.com/moby/moby/issues/35048#issuecomment-424372653 — committed to cjolowicz/docker-buildbot by cjolowicz 5 years ago
@danpantry you can if you use the `name` parameter. Doing so creates the secret with the given name (but not prefixed with the stack name). Without an env-var set, it uses the `0` default value, as was specified in the compose file. Deploying it with `CONFIG_VERSION=1`, `CONFIG_VERSION=2` (etc.) creates a new config for each version. Note that the configs were created as part of the stack, so they will be labeled as being part of it. As a result, removing the stack will remove all versions of the config, not just the latest one that was used.
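A minimal sketch of that approach, with illustrative service/config names (the original compose file from that comment isn't shown here):

```yaml
version: "3.5"
services:
  app:
    image: myorg/app            # hypothetical image
    configs:
      - source: app_config
        target: /app-config.yml
configs:
  app_config:
    file: ./config.yml
    # "name" requires compose file format 3.5+; the config object gets a new
    # name whenever CONFIG_VERSION changes, and "0" is used when it is unset
    name: app_config-${CONFIG_VERSION:-0}
```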
Just have faced the same issue. I solved this by mixing @thaJeztah's and @BretFisher's comments:
`SETTINGS_TIMESTAMP=$(date +%s) docker stack deploy...`
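A sketch of that combination (the stack and config names are assumptions): the compose file keys the config name off the variable, and every deploy passes a fresh timestamp:

```bash
# docker-compose.yml is assumed to declare the config with
#   name: app_config-${SETTINGS_TIMESTAMP:-0}
SETTINGS_TIMESTAMP=$(date +%s) docker stack deploy --compose-file docker-compose.yml mystack
```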
TL;DR currently there are 4 options around the `docker deploy` command workflow. As you can see, none of these are as elegant as `docker config update`. So, here's another bash script similar to the solutions above. I'm sure Docker can do better.
But the question is, should they really be? It is impractical to follow the suggested process of renaming a config just to be able to update it. I guess it's technically immutable by default since it's stored in the raft log?
I'm really surprised this conceptual problem is still not solved in Docker. Volumes or bind-mounts aren't swarm-compatible. Burning configs (forget about secrets) into images is a painful workaround. This is really a serious blocker for us, so we are starting to look at other orchestrators.
I get that it's not a bug, per se; however, it does fundamentally break a very valid use case for configs in a typical devops lifecycle. Can we explore an enhancement request to track the config versions? Even with secrets, we can track the version without exposing the secret. @thaJeztah
Let me try explaining why I think versioning is important.
At this point:
- `myservice` uses the old version of the config
- `myservice2` uses the new version of the config
- `myservice` will fail

Similar to the above: this will update the service to use the new config; if the config happens to have an error, there's no way to roll back, i.e., attempting to recover the failing service won't resolve the situation.
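For reference, swapping a running service to a new config object, as described above, can be done with the existing `docker config create` and `docker service update` flags; a sketch with assumed names:

```bash
# create a new config object from the updated file
docker config create myconfig-v2 ./config.yml

# point the service at the new config (this recreates the service's tasks)
docker service update \
  --config-rm myconfig-v1 \
  --config-add source=myconfig-v2,target=/my-config.yml \
  myservice

# the old object can be removed once no service references it
docker config rm myconfig-v1
```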
With versioning or "pinning", something like this could be done:
- Similar to how image digests are resolved when redeploying a service, "resolve" / "pin" to the current version of a config;
- Pinning to a specific version would allow for this;
- Some UX would have to be worked on (how to reference a specific version; `config@version`?);
- `docker service update --force` may not be ideal, as it will also update the image that's used.

For me, even the suggested config rotation approach does not work because it apparently requires distinct config targets:
Error response from daemon: rpc error: code = InvalidArgument desc = config references 'old.conf' and 'new.conf' have a conflicting target: '/config/current.conf'
So my container would have to check 2 locations for configs in order to make this work? This is impractical.
Also, having to choose a different name for the updated config is impractical because my compose file still references the original config name, i.e. when I run `docker deploy --compose-file docker-compose.yml` after rotation, it will fail because the original config will have been removed (except if the config is not external). So I would be forced to rotate once more before being able to use the compose file to update my stack/service again.

Ok, so there are the following options when dealing with swarm service configuration:
Environment-based configs always require the container to be recreated for the config to take effect: you can't change either CLI arguments or environment variables of already running processes.
For file-based configs, there are 3 possibilities for a change to be applied:
So there's only one use case that justifies this complex workflow of swarm config rolling updates: you want to change configuration without recreating the container. But even this use case is not possible with swarm configs: the container is always recreated when you use `--config-add`.

So why exactly don't we have a `--config-update` option and a `docker config update` command? Well, with all due respect, the explanations do sound artificial: do we really want rollbacks to revert to a previous config version? Configuration files behave a lot more like volumes than like images. So it's expected that if you run `docker config update`, it will restart all the services using that config, and it would not revert automatically if the config file is incompatible with those versions of the services.

Would that make config update a more dangerous command than service update with `--config-add` and `--config-rm` options? Well, yes: if a service update with `--config-rm` fails, the config will still be there, while `--config-update` permanently rewrites the config file, as there's only one version of the config file stored. But shouldn't it be up to the user to take that risk? And isn't it a path forward for swarm configs to be much more useful and easier to use than they are now?

I'm fairly new to Docker but I'm at a loss as to the expected workflow here. I have a docker-compose.yml which I'm trying to deploy in CI to a staging server with a `docker stack deploy`. One of the containers has a config file mapped in, where the file comes from the source repo. The first time I pushed a change updating the config file, I got the error mentioned by the OP. Am I missing something, or is this (IMO fairly simple) use case not supported in an automated stack deploy setup?

@ntwrkguru thanks for your constructive feedback
We must pin it to a specific version/hash, otherwise rescheduling tasks would lead to different tasks running with a different configuration (consider a node going down, and docker deploying new instances on a different node, with those using a different configuration than the other tasks).
Yes, this is a bit what I had in mind with `config@version`.

Swarm doesn't expose the "sha" of secrets (and configs) to prevent possible data leaking through the API, so I was thinking of using a revision/version for that. (I'm also thinking out loud here; we'll have to verify if it would work from a technical perspective 😄.)

Admittedly, that wouldn't give you the option to manually set the `:tag`, but when creating or updating a config, docker would print the revision. (I would personally not be against a `:tag` option, but we should give it some thought.)

Using the "latest" revision if `@version` is omitted is perhaps an option, but (sorry, there's a "but" 😃) those are the parts that need to be thought out. The reason I think the config should not automatically be updated is that it would make it difficult (or: impossible) to update a property of the service without also updating the configuration. You ("the user") should remain in full control over what happens when you (re-)deploy the stack.

Think of a situation where you modified the local configuration file (perhaps you were debugging locally, or in the process of updating the configuration), and you need to deploy an update to the stack (say: update the memory limit). You update the compose file, and deploy the stack. Now both the memory limit and the configuration are updated.

So even in the `config@latest` situation, this may not be desirable. Perhaps something similar to `--resolve-image=<always | changed | never>`, but for configs (and secrets)?

@tkgregory
In my setup I create the compose file from a template with Ansible. I append a sha-256 sum (first 7 digits only) of the content of the config file to the config name in the compose file.
When the content of the file changes, the sha sum changes as well and the redeployment is triggered.
You have to make sure that there is no timestamp or something like that in the config file. From time to time old unused configs should be purged.
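The same idea works without Ansible as a small deploy wrapper; a sketch with assumed file, config, and stack names:

```bash
#!/usr/bin/env bash
set -euo pipefail

# first 7 hex digits of the config file's content hash
CONFIG_HASH=$(sha256sum ./config.yml | cut -c1-7)

# the compose file is assumed to declare the config as
#   name: app_config-${CONFIG_HASH}
CONFIG_HASH="$CONFIG_HASH" docker stack deploy --compose-file docker-compose.yml mystack
```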
Please note that using the `name` property does not prefix the config name with the stack name.

@gaui fixing this is more important for configs than secrets IMO. All of my configs are specified in my docker-compose.yml as `file:`, referencing files stored in the source tree and version controlled. Ideally, CI/CD would be able to seamlessly update the running stacks whenever a config file changes in version control. All of the secrets are specified as `external` and are manually pushed to the swarm, not stored in a file. I think (hope?) this is the normal use case.

@djmaze, possibly. It's just all workarounds, not solutions.
I don't like the fact that to update a configuration or a secret I have to change the stack definition (docker-compose.yml). It feels unnatural.
You wouldn’t design a C++ program that you have to recompile each time a user wants to pass new input parameters, would you?
I’m not sure why we need to track the versioning of those items (apart from whatever the ‘current’ one is). These should be ephemeral from Docker’s perspective, right?
So, that was exactly what the users did as a workaround to this missing feature.
https://github.com/cjolowicz/docker-buildbot/commit/7dd84136ef8a04f8d26c78a4be45b7c4441edec0
Can’t we just do that instead inside Docker stack deploy? If someone made the code for that, would it be accepted?
It looks like Kubernetes is a lot more powerful in this regard: if ConfigMaps change, the new files are pushed to the running pods without needing a restart. So how do they achieve that? Apparently, they ignore the automatic-rollback scenario. If you changed the config and the container fails, it's your fault, and you should bring it back manually rather than relying on orchestration tools to save you from downtime. And even if users want to be cautious, in Kubernetes they can still create a new config and update the service to use the new one.
So to be on par with k8s, an easier config-update scenario should be implemented. Ideally, the user should also have the choice of whether they want containers to be recreated when updating configs, or config files to be updated "on the fly".
@IvanBoyko I can’t blame you; so are we. We use Ansible to create a Gluster multi-host volume and bind-mount to that, if it helps.
@ntwrkguru but with bind mounts, you have to make sure that the file makes it to a known location on the host system, right? That doesn't make things much easier, and it adds a requirement during deployment and a dependency on the host's state.
I'm getting away with this: a simple, handy one-liner to remove your secrets/configs before updating them.

For secrets:

```bash
docker secret ls | grep -v '^ID' | awk '{print $1}' | xargs docker secret rm
```

For configs:

```bash
docker config ls | grep -v '^ID' | awk '{print $1}' | xargs docker config rm
```
Do you guys think we are ever going to get this goodie?
It's very frustrating behavior. There's no point in keeping a docker-compose file if I need to manually update all my services on a stack change (config included). I'm probably going to fall back to cloud config management and just read configs directly from the application on redeployment.
I also made a quick tool to replace the `docker stack deploy` command using the same idea. The tool scans all the referenced configs/secrets defined in the compose file, calculates their hashes, and uses them for the variables that are passed directly to the real docker command; this way you don't need to remember those env variables anymore. Hope it can be useful to somebody, I use it every day: https://github.com/codestation/docker-deploy
@tkgregory
Well, if you want to test it, I made a bash script to append the hash after the config name 😃: https://github.com/moby/moby/issues/35048#issuecomment-384315250
I'm thinking the best way in your CI auto-deploy scenario is a config name with a hash, but not a git-commit-based hash (which changes on each commit): rather a simple date/time check on the config file. I talk a bit about that and a sample script here: https://youtu.be/oWrwi1NiViw
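A rough sketch of that date/time variant (names are assumptions; `date -r FILE` is the GNU coreutils form that prints a file's modification time):

```bash
# version the config by the file's modification time rather than a git commit
CONFIG_MTIME=$(date -r ./config.yml +%Y%m%d%H%M%S)

# the compose file is assumed to declare the config as
#   name: app_config-${CONFIG_MTIME}
CONFIG_MTIME="$CONFIG_MTIME" docker stack deploy --compose-file docker-compose.yml mystack
```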
I agree with John Laurel; there should be a way to update the configs, and Kubernetes proves that it is doable. None of the proposed alternatives are as clean as a proper `docker config update` mechanism.

I have made a script to update configs automatically, especially when using CI. If you want to try it: https://gist.github.com/mastertheif/233edf1b25bee9ca4365434ba6548571. It is a bit crude but it works, at least for me; you could even modify it to handle secrets, I think. It requires bash 4. Basically it takes a compose file and suffixes every config name with the hash of the actual file; that way, if a file changes, its name changes as well. Rollback should still work, as the previous config is preserved; on the other hand, the rest of that stack's configs are pruned.
Feel free to customize it for your needs
Cheers
How about a flag for `docker stack deploy`… something like `--update-configs` and `--update-secrets`? This lack of control over configs/secrets is a real issue for us. Hope this will be tackled soon.

@thaJeztah I'd like to revisit your earlier post: could we not pin the config version to the service to which it is attached? I.e., when a config is created, it has some hash. When it is attached to a service, the service sees service.hash such that, if needed, it can be recaptured (unless the user manually prunes the configs, of course). `docker config update` would simply add a new commit hash (similar to how a git commit works). Actually… it could even use a concept similar to images, where configs can be tagged and the default tag is latest. This way, updating service(s) would pull the latest config by default, but could also use a specific tag. Admittedly, managing SHA hashes is not intuitive, but if there's also the possibility of tagging the configs, it would help. Just thinking out loud… I'm tired of using Ansible and host files for configs. 😃

I'm also hitting this bug. Very annoying, as I deploy to a swarm cluster from our automated build servers and managing configurations manually is a PITA.
I'm not sure I get why the rollback scenario is an issue. Docker swarm already has to keep track of which version of the image it needs to roll back to. Why can't the configuration and secrets associated with that deployed service be attached in the same way?