moby: Configs not updated when redeploying stack

Description: Unable to update configs when the stack is redeployed.

Steps to reproduce the issue:

  1. Create a file config.yml
  2. Run this stack - docker stack deploy --compose-file stack.yml stack

Contents of stack.yml

version: "3.3"
services:
   test:
     image: effectivetrainings/runner
     configs:
     - source: config.yml
       target: /my-config.yml
configs:
  config.yml:
    file: ./config.yml
  3. Update the config.yml
  4. Redeploy the stack (full command sequence below)
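
For reference, the full sequence as shell commands (a sketch; the exact contents of config.yml don't matter, only that the file changes between deploys):

$ echo "setting: one" > config.yml
$ docker stack deploy --compose-file stack.yml stack
$ echo "setting: two" > config.yml
$ docker stack deploy --compose-file stack.yml stack   # this second deploy produces the error below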

Describe the results you received: Error response from daemon: rpc error: code = InvalidArgument desc = only updates to Labels are allowed

Describe the results you expected: Configs should be updated.

Additional information you deem important (e.g. issue happens only occasionally):

Output of docker version:

Client:
 Version:      17.06.2-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   cec0b72
 Built:        Tue Sep  5 20:12:06 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:40:56 2017
 OS/Arch:      linux/amd64
 Experimental: true

Output of docker info:

Containers: 34
 Running: 4
 Paused: 0
 Stopped: 30
Images: 8
Server Version: 17.09.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: osdigy0m5vmld2txqjhse4uk7
 Is Manager: true
 ClusterID: laqdnfslcq3ecfbq66q3xmuwa
 Managers: 1
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Root Rotation In Progress: false
 Node Address: 192.168.33.49
 Manager Addresses:
  192.168.33.49:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-93-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 992.1MiB
Name: worker-1
ID: GLS7:AHRQ:M3IM:FSLB:TE7N:DBVQ:5P7Y:IG4P:HSFF:G73W:VWUH:WVIY
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):

About this issue

  • State: open
  • Created 7 years ago
  • Reactions: 64
  • Comments: 62 (14 by maintainers)

Most upvoted comments

@danpantry you can if you use the name parameter. Doing so creates the config (or secret) with the given name (but not prefixed with the stack name);

version: '3.7'
services:
  nginx:
    image: nginx:alpine
    configs:
      - source: nginxconf
        target: /etc/nginx/foobar.conf
configs:
  nginxconf:
    name: nginx.conf-${CONFIG_VERSION:-0}
    file: ./nginx.conf

Without an env-var set (uses the 0 default value, as was specified in the compose file);

$ docker stack deploy -c docker-compose.yml mystack
Creating network mystack_default
Creating config nginx.conf-0
Creating service mystack_nginx

Deploying it with CONFIG_VERSION=1

$ CONFIG_VERSION=1 docker stack deploy -c docker-compose.yml mystack
Creating config nginx.conf-1
Updating service mystack_nginx (id: mjyzcchohvdak671lu9r581ba)

CONFIG_VERSION=2 (etc …)

$ CONFIG_VERSION=2 docker stack deploy -c docker-compose.yml mystack
Creating config nginx.conf-2
Updating service mystack_nginx (id: mjyzcchohvdak671lu9r581ba)

Note that the configs were created as part of the stack, so will be labeled as being part of it. As a result, removing the stack will remove all versions of the config, not just the latest one that was used;

$ docker stack rm mystack
Removing service mystack_nginx
Removing config nginx.conf-2
Removing config nginx.conf-1
Removing config nginx.conf-0
Removing network mystack_default

I just faced the same issue. I solved it by mixing @thaJeztah's and @BretFisher's comments.

configs:
  settings.yml:
    name: settings-${SETTINGS_TIMESTAMP}.yml
    file: foo.yml

SETTINGS_TIMESTAMP=$(date +%s) docker stack deploy...

TL;DR currently there are 4 options:

  • without swarm config:
    1. Use volumes with a driver that allows multi-host storage + docker service update --force
    2. Embed the config file into the docker image (thus making it truly immutable)
  • with swarm config:
    3. docker stack rm + docker config rm + docker config create + docker stack deploy, potentially suffering from downtime (with a blue/green deployment strategy to mitigate that)
    4. The ugly docker service update --config-add/--config-rm route (see the example right after this list), thus completely breaking the idempotent docker stack deploy workflow
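
For reference, option 4 boils down to something like this (hypothetical config and service names):

$ docker config create myconfig_v2 ./config.yml
$ docker service update --config-rm myconfig_v1 --config-add source=myconfig_v2,target=/my-config.yml myservice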

As you can see, none of these are as elegant as docker config update. So, here’s another bash script similar to the solutions above:

#!/bin/bash
set -e
set +x

# Rotate a swarm config in place: create a temporary copy, point the service at it,
# recreate the original config from the new file, then switch the service back to it.
config_update () {
  config_name=$1
  config_filepath=$2
  # "docker config rm" refuses to remove a config that is in use and names the service
  # in its error message (note: if the config exists but is unused, this removes it).
  service_name=$(docker config rm "$config_name" 2>&1 | grep -oP 'is in use by the following service: \K\S+' || true)
  if [ -z "$service_name" ]; then
    echo "There is no service using config $config_name, use docker stack deploy"
    exit 0
  fi
  # Clean up any leftover temporary config from a previous run.
  docker service update --config-rm "${config_name}_temp" "$service_name" || true
  docker config rm "${config_name}_temp" || true
  docker config create "${config_name}_temp" "$config_filepath"
  # Reuse the mount path of the first config reference on the service.
  mount_filepath=$(docker service inspect --format '{{(index .Spec.TaskTemplate.ContainerSpec.Configs 0).File.Name}}' "$service_name")
  # Switch the service to the temporary config, recreate the original config from the
  # new file, then switch back, so the service ends up with the original config name.
  docker service update --config-rm "$config_name" --config-add source="${config_name}_temp",target="$mount_filepath" "$service_name"
  docker config rm "$config_name"
  docker config create "$config_name" "$config_filepath"
  docker service update --config-rm "${config_name}_temp" --config-add source="$config_name",target="$mount_filepath" "$service_name"
}

config_name=$1
config_filepath=$2

if [ -z "$config_name" ] || [ -z "$config_filepath" ]; then
  echo "Usage: $0 <config name> <config file path>"
  exit 1
fi
config_update "$config_name" "$config_filepath"
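
Invocation would then look something like this (hypothetical script and config names):

$ ./update-config.sh myconfig ./config.yml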

I’m sure Docker can do better.

But the question is, should they really be? It is impractical to follow the suggested process of renaming a config in order to update it. I guess it's technically immutable by default since it's stored in the raft log?

I’m really surprised this conceptual problem is still not solved in Docker. Volumes or bind-mounts aren’t swarm-compatible. Burning configs (never mind secrets) into images is a painful workaround. This is a serious blocker for us, so we are starting to look at other orchestrators.

I get that it’s not a bug, per se…however it does fundamentally break a very valid use case for configs in a typical devops lifecycle. Can we explore an enhancement request to track the config versions? Even in secrets, we can track the version without exposing the secret. @thaJeztah

I’m not sure why we need to track the versioning of those items (apart from whatever the ‘current’ one is). These should be ephemeral from Docker’s perspective, right?

Let me try explaining why I think versioning is important;

$ docker config create myconfig ./config.cnf

$ docker service create --config src=myconfig,target=/foo/config.cnf --name myservice myimage

$ docker config update myconfig ./config-new.cnf

$ docker service create --config src=myconfig,target=/foo/config.cnf --name myservice2 myimage

At this point:

  • myservice uses the old version of the config
  • myservice2 uses the new version of the config
  • if a task fails / is re-deployed, that task will use the new config, and
  • if the config has an error, new tasks for myservice will fail

Similar to the above:

$ docker service update --force myservice

Will update the service to use the new config; if the config happens to have an error, there’s no way to roll back, i.e., attempting to recover the failing service;

$ docker service rollback myservice

Won’t resolve the situation.

With versioning or “pinning”, something like this could be done:

Similar to how image digests are resolved when deploying a service, “resolve” / “pin” to the current version of a config;

$ docker service create --config src=myconfig,target=/foo/config.cnf --name myservice myimage

$ docker service inspect myservice --format '{{ json .Spec.TaskTemplate.ContainerSpec.Configs}}' | jq
[
  {
    "File": {
      "Name": "/foo/config.cnf",
      "UID": "0",
      "GID": "0",
      "Mode": 292
    },
    "ConfigID": "vbc6o4k6xdct0oojky0hdpahw",
    "ConfigName": "myconfig",
    "Version": {
        "Index": 1007
    }
  }
]

Pinning to a specific version would allow;

  • updating a config / secret without affecting existing services
  • updating, and rolling back services to a previous version
  • running services with different versions of a config/secret (blue/green deployment, e.g.)

Some UX would have to be worked on;

  • be able to explicitly specify the version of a config/secret (config@version? see the sketch after this list)
  • update a config/secret to the latest version (just docker service update --force may not be ideal, as it will also update the image that’s used)
  • not directly related, but have a central command to update all services that use a config / secret, and update them all to the latest version (e.g., a key has been compromised, so rotating the key for all services that use it)
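
Purely as an illustration of the proposal (none of this syntax exists today, and the names and revision numbers are made up):

$ docker config update myconfig ./config-new.cnf                                          # proposed: records a new revision
$ docker service update --config-add source=myconfig@2,target=/foo/config.cnf myservice   # proposed: pin to an explicit revision
$ docker service rollback myservice                                                       # would then also restore the previously pinned revision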

For me, even the suggested config rotation approach does not work because it apparently requires distinct config targets: Error response from daemon: rpc error: code = InvalidArgument desc = config references 'old.conf' and 'new.conf' have a conflicting target: '/config/current.conf'

So my container would have to check 2 locations for configs in order to make this work? This is impractical.

Also having to choose a different name for the updated config is impractical because my compose file still references the original config name, i.e. when I do docker deploy --compose-file docker-compose.yml after rotation it will fail because the original config will have been removed (except if the config is not external). So I would be forced to rotate once more before being able to use the compose file for updating my stack/service again.

Ok, so there are the following options when dealing with swarm service configuration:

  1. File-based configs
     a. volumes
     b. bind mounts
     c. config mounts
  2. Environment-based configs
     a. CLI options
     b. environment variables

Environment-based configs always require the container to be recreated for changes to take effect: you can change neither the CLI arguments nor the environment variables of an already running process.

For file-based configs, there are 3 ways a change can be applied:

  • the application watches the config file for changes
  • you can force the application to re-read the config file by sending it a signal (see the example after this list)
  • the config file is only read when the application starts (essentially requiring the container to be recreated)
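
For example (a common pattern with a bind-mounted or volume-backed config file; hypothetical container name): nginx re-reads its configuration on SIGHUP, which can be sent without recreating the container:

$ docker kill --signal=HUP my-nginx-container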

So there’s only one use case that justifies this complex workflow of swarm config rolling updates: you want to change configuration without recreating the container. But even this use case is not possible with swarm configs: the container is always recreated when you use --config-add.

So why exactly don’t we have a --config-update option and a docker config update command? Well, with all due respect, the explanation does sound artificial: do we really want rollbacks to revert to a previous config version? Configuration files behave a lot more like volumes than like images. So it’s expected that if you run docker config update it will restart all the services using that config, and it would not revert automatically if the config file is incompatible with those versions of the services.

Would that make config update a more dangerous command than service update with the --config-add and --config-rm options? Well, yes: if service update with --config-rm fails, the config will still be there, while --config-update permanently rewrites the config file, since only one version of it is stored. But shouldn’t it be up to the user to take that risk? And isn’t it a path forward for swarm configs to become much more useful and easier to use than they are now?

I’m fairly new to Docker but I’m at a loss to the expected workflow here. I have a docker-compose.yml which I’m trying to deploy in CI to a staging server with a docker stack deploy. One of the containers has a config file mapped in, where the file is coming from the source repo. The first time I pushed a change updating the config file, I got the error mentioned by OP. Am I missing something, or is this use case (IMO fairly simple) not supported in an automated stack deploy setup?

@ntwrkguru thanks for your constructive feedback

Could we not pin the config version to the service to which it is attached? I.E. when a config is created, it has some hash. When it is attached to a service, the service sees service.hash such that, if needed, it can be recaptured (unless the user manually prunes the configs, of course).

We must pin it to a specific version/hash, otherwise rescheduling tasks would lead to different tasks running with a different configuration (consider a node going down, and docker deploying new instances on a different node, and those use a different configuration than the other tasks).

docker config update would simply add a new commit hash (similar to how a git commit works). Actually…could even use a concept similar to image where configs can be tagged and the default tag is latest?

Yes, this is a bit what I had in mind with:

  • be able to explicitly specify the version of a config/secret (config@version?)

Swarm doesn’t expose the “sha” of secrets (and configs) to prevent possible data leaking through the API, I was thinking of using a revision/version for that. (I’m also thinking out loud here; we’ll have to verify if it would work from a technical perspective 😄).

Admittedly, that wouldn’t give you the option to manually set the :tag, but when creating or updating a config, docker would print the revision. (I would personally not be against a :tag option, but should give it some thought)

This way, updating service(s) would pull the latest config by default, but can also use a specific tag. Admittedly, managing SHA hashes is not intuitive, but if there’s also the possibility of tagging the configs, it would help. Just thinking out loud.

Using the “latest” revision if @version is omitted; perhaps it’s an option, but (sorry, there’s a “but” 😃):

those are the parts that need to be thought out. The reason I think the config should not automatically be updated is that it would make it difficult (or: impossible) to update a property of the service without also updating the configuration. You (“the user”) should remain in full control over what happens when you (re-)deploy the stack;

Think of a situation where you modified the local configuration file (perhaps you were debugging locally, or in the progress of updating the configuration), and you need to deploy an update to the stack (say: update the memory-limit). You update the compose file, and deploy the stack. Now both the memory-limit and the configuration are updated.

So even in the config@latest situation, this may not be desirable.

Perhaps something similar to --resolve-image=<always | changed | never >, but for configs (and secrets)?
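
For reference, this is the existing flag the analogy points at (it exists today for images; the configs/secrets variant is only a suggestion):

$ docker stack deploy --resolve-image=changed -c docker-compose.yml mystack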

@tkgregory

Any suggestion how to use the dynamic config name (I’m appending a git hash) without forcing a redeployment of the services defined in a stack?

In my setup I create the compose file from a template with Ansible. I append a sha-256 sum (first 7 digits only) of the content of the config file to the config name in the compose file.

When the content of the file changes, the sha sum changes as well and the redeployment is triggered.

You have to make sure that there is no timestamp or something like that in the config file. From time to time old unused configs should be purged.

Please note that when using the name property, the config name is not prefixed with the stack name.
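
A minimal sketch of that approach in plain bash instead of Ansible (hypothetical file, config, and stack names; assumes the compose file declares the config with name: app-config-${CONFIG_VERSION} as in the earlier examples, and that sha256sum is available):

#!/bin/bash
# Version the config by its content: an unchanged file produces the same name, so no service update is triggered.
CONFIG_VERSION=$(sha256sum ./app-config.yml | cut -c1-7)
export CONFIG_VERSION
docker stack deploy -c docker-compose.yml mystack
# Occasionally purge old, now-unused versions; removing a config that is still in use simply fails and is skipped.
docker config ls --format '{{.Name}}' | grep '^app-config-' | grep -v "$CONFIG_VERSION" | xargs -r -n1 docker config rm || true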

@gaui fixing this is more important for configs than secrets IMO. All of my configs are specified in my docker-compose.yml as file:, referencing files stored in the source tree and version controlled. Ideally, CI/CD will be able to seamlessly update the running stacks whenever a config file changes in version control. All of the secrets are specified as external and are manually pushed to the swarm and not stored in a file. I think (hope?) this is the normal use case.

@djmaze, possibly. They’re all just workarounds, not solutions.
I don’t like the fact that to update a configuration or a secret I have to change the stack definition (docker-compose.yml). It feels unnatural.
You wouldn’t design a C++ program that has to be recompiled each time a user wants to pass new input parameters, would you?

I’m not sure why we need to track the versioning of those items (apart from whatever the ‘current’ one is). These should be ephemeral from Docker’s perspective, right?

There’s no hash stored/exposed; initially a hash was exposed, but during review this was removed out of security concerns (that was for secrets, but “configs” use the same mechanisms).

So, that was exactly what the users did as a workaround to this missing feature.

https://github.com/cjolowicz/docker-buildbot/commit/7dd84136ef8a04f8d26c78a4be45b7c4441edec0

Can’t we just do that inside docker stack deploy instead? If someone wrote the code for that, would it be accepted?

It looks like kubernetes is a lot more powerful in this regard:

If a ConfigMap changes, the new files are pushed to the running pods without needing a restart. So how do they achieve that? Apparently, they ignore the automatic-rollback scenario: if you change a config and the container fails, it’s your fault, and you should bring it back manually rather than rely on the orchestration tool to save you from downtime. And even users who want to be cautious in Kubernetes can still create a new config and update the service to use it.

So to be on par with k8s, an easier config-update scenario should be implemented. Ideally, the user should also have a choice of whether containers are recreated when updating configs, or the config files are updated “on the fly”.
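
For comparison, the usual in-place update of a ConfigMap looks something like this (a common pattern; the --dry-run spelling varies between kubectl versions):

$ kubectl create configmap app-config --from-file=config.yml --dry-run=client -o yaml | kubectl apply -f -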

@IvanBoyko I can’t blame you; so are we. We use Ansible to create a Gluster multi-host volume and bind-mount to that, if it helps.

@ntwrkguru but with bind mounts, you have to make sure that the file makes it to a known location on the host system, right? That doesn’t make things much easier, and adds a requirement during deployment and a dependency on the host’s state.

I’m getting away with this:

docker config rm $(docker config ls -q) || echo ''

A simple handy one-liner to remove your secrets/configs before updating them

For secrets: docker secret ls | grep -v '^ID' | awk '{print $1}' | xargs docker secret rm

For configs: docker config ls | grep -v '^ID' | awk '{print $1}' | xargs docker config rm

I have a docker-compose.yml which I’m trying to deploy in CI to a staging server with a docker stack deploy. One of the containers has a config file mapped in, where the file is coming from the source repo. The first time I pushed a change updating the config file, I got the error mentioned by OP. Am I missing something, or is this use case (IMO fairly simple) not supported in an automated stack deploy setup?

Do you guys think we are ever going to get this goodie?

It’s a very frustrating behavior. There’s no point in keeping a docker-compose file if I need to manually update all my services on a stack change (configs included). I’m probably going to fall back to cloud config management and just read configs directly from the application on redeployment.

I also made a quick tool to replace the docker stack deploy command using the same idea. The tool scans all the configs/secrets referenced in the compose file, calculates their hashes, and uses them for the variables that are passed directly to the real docker command; this way you don’t need to remember those env variables anymore.

Hope it can be useful to somebody, I use it every day: https://github.com/codestation/docker-deploy

@tkgregory

Well, if you want to test it, I made a bash script to append the hash after the config name 😃: https://github.com/moby/moby/issues/35048#issuecomment-384315250

I’m thinking the best way in your CI auto-deploy scenario is a config name with a hash: not a git-commit-based hash (which changes on every commit), but rather a simple date/time eval on the config file. I talk a bit about that and show a sample script here: https://youtu.be/oWrwi1NiViw

I agree with John Laurel, there should be a way to update the configs and Kubernetes proves that it is doable. None of the proposed alternatives are as clean as a proper docker config update mechanism.

I have made a script to update configs automatically, especially when using a CI. If you want to try it: https://gist.github.com/mastertheif/233edf1b25bee9ca4365434ba6548571 It is a bit crude but it works, at least for me; you could even modify it to handle secrets, I think. It requires bash 4. Basically it takes a compose file and suffixes every config name with the hash of the actual file; that way, if a file changes, its name changes as well. Rollback should still work, as the previous config is preserved; on the other hand, the rest of the stack-specific configs are pruned.

Feel free to customize it for your needs

Cheers

How about a flag for docker stack deploy… something like --update-configs and --update-secrets? This lack of control over configs/secrets is a real issue for us. Hope this will be tackled soon.

@thaJeztah I’d like to revisit your earlier post:

Some UX would have to be worked on;

be able to explicitly specify the version of a config/secret (config@version?)
update a config/secret to the latest version (just docker service update --force may not be ideal, as it will also update the image that's used)
not directly related, but have a central command to update all services that use a config / secret, and update them all to the latest version (e.g., a key has been compromised, so rotating the key for all services that use it)

Could we not pin the config version to the service to which it is attached? I.E. when a config is created, it has some hash. When it is attached to a service, the service sees service.hash such that, if needed, it can be recaptured (unless the user manually prunes the configs, of course).

docker config update would simply add a new commit hash (similar to how a git commit works). Actually…could even use a concept similar to image where configs can be tagged and the default tag is latest? This way, updating service(s) would pull the latest config by default, but can also use a specific tag. Admittedly, managing SHA hashes is not intuitive, but if there’s also the possibility of tagging the configs, it would help. Just thinking out loud…I’m tired of using Ansible and host files for configs. 😃

I’m also hitting this bug. Very annoying, as I deploy to a swarm cluster from our automated build servers and managing configurations manually is a PITA.

I’m not sure I get why the rollback scenario is an issue. Docker swarm already has to keep track of which version of the image it needs to roll back to. Why can’t the configuration and secrets associated with that deployed service be attached in the same way?