moby: Secrets: write-up best practices, do's and don'ts, roadmap

Handling secrets (passwords, keys and related) in Docker is a recurring topic. Many pull-requests have been ‘hijacked’ by people wanting to (mis)use a specific feature for handling secrets.

So far, we only discourage people from using those features, because they’re either provably insecure, or not designed for handling secrets and hence “possibly” insecure. We don’t offer real alternatives, at least not for all situations, and where we do, not with a practical example.

I just think “secrets” is something that has been left lingering for too long. This results in users (mis)using features that are not designed for this (with the side effect that discussions get polluted with feature requests in this area) and forces them to jump through hoops just to be able to work with secrets.

Features / hacks that are (mis)used for secrets

This list is probably incomplete, but worth a mention

  • Environment Variables. Probably the most used, because it’s part of the “12 factor app”. Environment variables are discouraged (see the sketch after this list), because they are:
    • Accessible by any process in the container, thus easily “leaked”
    • Preserved in intermediate layers of an image, and visible in docker inspect
    • Shared with any container linked to the container
  • Build-time environment variables (https://github.com/docker/docker/pull/9176, https://github.com/docker/docker/pull/15182). Build-time environment variables were not designed to handle secrets; for lack of other options, people plan to use them for this anyway. To avoid giving the impression that they are suitable for secrets, it’s been deliberately decided not to encrypt those variables in the process.
  • Squash / Flatten layers. (https://github.com/docker/docker/issues/332, https://github.com/docker/docker/pull/12198, https://github.com/docker/docker/pull/4232, https://github.com/docker/docker/pull/9591). Squashing layers will remove the intermediate layers from the final image; however, secrets used in those intermediate layers will still end up in the build cache.
  • Volumes. IIRC some people were able to use the fact that volumes are re-created for each build-step, allowing them to store secrets. I’m not sure this actually works, and can’t find the reference to how that’s done.
  • Manually building containers. Skip using a Dockerfile and manually build a container, committing the results to an image
  • Custom Hacks. For example, hosting secrets on a server, curl-ing the secrets and remove them afterwards, all in a single layer. (also see https://github.com/dockito/vault)
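
To make the leakage concrete, here’s a quick illustration (names and values are made up) of how easily environment variables and build-time values can be read back by anyone with access to the daemon:

# env vars are visible in plain text to anyone who can inspect the container:
docker run -d --name app -e DB_PASSWORD=hunter2 alpine sleep 300
docker inspect --format '{{.Config.Env}}' app

# values baked in via ENV (and non-predefined ARGs) show up in image history:
docker history --no-trunc some-image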

So, what’s needed?

  • Add documentation on “do’s” and “don’ts” when dealing with secrets; @diogomonica made some excellent points in https://github.com/docker/docker/pull/9176#issuecomment-99542089
  • Describe the officially “endorsed” / approved way to handle secrets, if possible, using the current features
  • Provide a roadmap / design for officially handling secrets; we may want to make this pluggable, so that we don’t have to re-invent the wheel and can use existing offerings in this area, for example, Vault, Keywhiz, Sneaker

The above should be written / designed with both build-time and run-time secrets in mind

@calavera created a quick-and-dirty proof-of-concept on how the new Volume-Drivers (https://github.com/docker/docker/pull/13161) could be used for this; https://github.com/calavera/docker-volume-keywhiz-fs

Note: Environment variables are used as the de-facto standard to pass configuration/settings, including secrets to containers. This includes official images on Docker Hub (e.g. MySQL, WordPress, PostgreSQL). These images should adopt the new ‘best practices’ when written/implemented.
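
As an aside, several official images (e.g. mysql and postgres) later adopted a *_FILE convention that reads a secret from a file instead of taking its value from the environment. A sketch; the mount path here is an assumption:

# the official mysql image reads the password from a file when given
# MYSQL_ROOT_PASSWORD_FILE instead of MYSQL_ROOT_PASSWORD:
docker run -d \
  -v /etc/secrets/mysql-root:/run/secrets/mysql-root:ro \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root \
  mysql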

In good tradition, here are some older proposals for handling secrets:

About this issue

  • State: open
  • Created 9 years ago
  • Reactions: 310
  • Comments: 222 (35 by maintainers)

Most upvoted comments

I know it’s off topic but has anyone else noticed that this issue has been active for almost a full year now! Tomorrow is its anniversary. 👏

So the thread begins with DO NOT DO ANY OF THESE THINGS…

… but I don’t see any PLEASE DO THESE THINGS INSTEAD…only various proposals/hacks that have mostly been rejected/closed.

What IS the official best-practice for now? As a docker user it’s somewhat frustrating to see a long list of things we shouldn’t do but then have no official alternatives offered up. Am I missing something? Does one not exist? I’m sure things are happening behind-the-scenes and that this is something that the docker team is working on, but as of right now, how do we best handle secret management until a canonical solution is presented?

After reading through this issue in full, I believe it would benefit immensely from being split into separate issues for “build-time” and “run-time” secrets, which have very different requirements

I guess what is being done here is kept secret 😄

This thread started in 2015.

It’s 2017.

Why is there no solution for build-time secrets that isn’t hackish and terrible, yet? It’s very obviously a big issue for a lot of people, but there’s still no actually good solution!

Multi-stage docker builds solve a lot of these issues.

In its simplest form you can inject secrets as build-args, and they will only be part of the image history of the images that explicitly says they need the argument. As neclimdul points out, the secrets will be available in the process listing during the build. IMO not a big issue, but we’ve taken another approach.

Our build server runs with some secrets mounted as volumes, so our CI copies e.g. /mnt/secrets/.npmrc into the current work directory. Then we use a Dockerfile similar to the one below.

FROM node:latest
WORKDIR /usr/src/app
COPY .npmrc .
RUN echo '{ "dependencies": [ "lodash" ] }' > package.json
RUN npm install
RUN ls -lah

FROM alpine:latest
WORKDIR /usr/src/app
COPY --from=0 /usr/src/app/node_modules ./node_modules
RUN ls -lah
CMD ["ls", "./node_modules"]

The resulting image will have the installed dependencies, but not the .npmrc or any traces of its content.

Using multi-stage builds gives you full control on how to expose build time secrets to the build process. You can get secrets from external stores like Vault, through volumes (which we mount from the Secrets store in Kubernetes), having them gpg encrypted in the repository, Travis secrets, etc.

@kepkin this is how I pass an ssh-key to docker build:

# serve the ssh private key once over http on a private port.
if command -v ncat >/dev/null 2>&1; then
  ncat -lp 8000 < "$HOME/.ssh/id_rsa" &
else
  nc -lp 8000 < "$HOME/.ssh/id_rsa" &
fi
nc_pid=$!
docker build --no-cache -t bob/app .
kill $nc_pid || true

and inside the Dockerfile where 172.17.0.1 is the docker gateway IP:

RUN \
  mkdir -p /root/.ssh && \
  curl -s http://172.17.0.1:8000 > /root/.ssh/id_rsa && \
  chmod 600 /root/.ssh/id_rsa && chmod 700 /root/.ssh && \
  ssh-keyscan -t rsa,dsa github.com > ~/.ssh/known_hosts && \
  git clone --depth 1 --single-branch --branch prod git@github.bob/app.git . && \
  npm i --production && \
  ... && \
  rm -rf /root/.npm /root/.node-gyp /root/.ssh

If someone has something simpler let us know.

When using multi-stage builds for this use case, be aware that the secret data will remain inside an untagged image in the local daemon until that image is deleted, so that it can be used as build cache in subsequent builds. It is not pushed to the registry when pushing the final tagged image, though.
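
For those relying on this pattern, a rough sketch of how to spot and clean up those leftover intermediate images with current docker CLI commands:

docker image ls --filter dangling=true   # untagged build-stage images
docker image prune                       # remove dangling images
docker builder prune                     # also drop the BuildKit build cache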

@Vanuan, so I guess your approach is basically: don’t use docker build for anything more than a basic environment. This is an issue created to change that. “You have to do it differently” IS the problem, not the solution.

People who push the issue want to have simpler and more straightforward approaches with docker images, not having to hack around docker limitations.

This is appalling. The self-proclaimed “world’s leading software container platform” cannot be bothered to securely pass build-time secrets into containers for the past 3 years now.

With a “we know better” and “don’t make software that allows mistakes” approach, and what can at best be described as an unfortunate omission at the design phase, there is no support and no visible progress towards one of the required features of DevOps software. All improvements suggested by the community, some even developed to the point of being merge-ready, are shut down in fear of someone abusing them. As a result of this cluster… failure, every way of passing private keys needed only for the build phase of a docker container requires saving those secrets in the build history, or having them visible in the process list, with the hope that, respectively, the build history never leaves the trusted machine or that no-one who isn’t supposed to ever sees the process list. Both will fail even the most permissive security audits.

This issue has been open for over 2 years now, to summarize what was known about the problem then and what to do about it. There is still no solution. By that, I don’t mean a comprehensive solution supporting the most complex secret-management schemes out of the box. There is no solution AT ALL: no host environment variables, no loading secrets from a file path outside the build context. Nothing that can be deemed secure in even the least stringent terms.

That is indeed a great article. Very good read. And exactly the sort of thing we have been hoping to see.

BTW:

Also found a couple of other secrets tools which seem to have been missed in the article. Sorry for any repetition / duplication. Didn’t notice them mentioned here yet either:

Build time secrets:

https://github.com/defunctzombie/docket

Run time secrets:

https://github.com/ehazlett/docker-volume-libsecret

What do people think? Many thanks.

For me:

These newer tools ^^ look very good now. And they certainly didn’t exist when we first started this ticket. BUT the main thing I feel is still missing the most:

Having a better capability for build-time secrets on the DockerHub. Which is poor there and forces an either-or choice: we must forgo the benefits of one solution for the benefits of the other, depending on which overall set of features is more important. Local building is definitely better for keeping secrets safe, but understandably worse than the DockerHub in other ways.

Since my issue #45642 was closed and points to this one, I want to add that since “runtime secrets” are not possible, the recommendation is to use a one-node swarm to access swarm secrets. But this is not easy to set up and maintain, and it no longer works if we use docker rootless (no overlay network, so no swarm mode).

Now that compose is part of the main cli tool (docker compose), please please please give us runtime secrets. (Exactly the same as swarm runtime secrets, just without swarm.)

Wow. It’s like nobody on the product management team has ever considered the use case where anything but unauthenticated open source software gets built in a docker container, or any language besides golang, where all dependencies are copied and pasted, sorry, ‘versioned’ into the Git repo.

I just can’t understand how folk could be so incredibly obtuse. The only explanation I can think of is that the product management team are not practitioners and have never used the product. I often see this characteristic manifest itself when organisations hire on the basis of jira/agile skills.

I’ll just keep using rocker until 2019 or whenever someone sees sense then.

On Sun, 22 Jan 2017, 23:47 Shane StClair, notifications@github.com wrote:

Also docker secrets only manages run time secrets, not build time secrets.


With regard to build-time secrets, Rocker’s MOUNT directive has proven to be useful for creating transient directories and files that only exist at build-time. Some of their templating features may also help in this situation but I haven’t thoroughly used those yet.

I’d love to see this functionality implemented as a Builder plugin in Docker core (as well as some of the other useful features Rockerfiles have)!

Put me in the camp of folk who need a good way to handle secrets during docker build. We use composer for some php projects and reference some private github repos for dependencies. This means if we want to build everything inside of containers then it needs ssh keys to access these private repos.

I’ve not found a good and sensible way to handle this predicament without defeating some of the other things that I find beneficial about docker (see: docker squash).

I’ve now had to regress to building parts of the application outside of the container and using COPY to bring the final product into the container. Meh.

I think docker build needs some functionality to handle ephemeral data like secrets so that they don’t find their way into the final shipping container.

@alexkolson As far as I understood, if you need secrets in runtime, you should either use volumes (filesystem secrets) or some services like HashiCorp Vault (network secrets).

For build-time secrets, it’s more complicated. Volumes are not supported at build time, so you have to use containers to execute commands that modify the filesystem, and then use docker commit.

So what’s missing is the ability to manage secrets at build time using nothing except a Dockerfile, without the need for docker commit.

Some people even say that using the filesystem for secrets is not secure, and that the docker daemon should provide some API to deliver secrets securely (using network/firewall/automounted volume?). But nobody even has an idea of what this API would look like and how one would use it.

Sorry, but I don’t like how the solutions are getting more complex as more people pitch in. HashiCorp Vault, for instance, is a full client-server solution with encrypted back-end storage. That adds considerably more moving parts. I’m sure some use cases demand this level of complexity, but I doubt most would. If the competing solution is to use host environment variables, I’m fairly sure which will end up being used by the majority of developers.

I’m looking for a solution that covers development (eg: github keys) and deployment (eg: nginx cert keys, db credentials). I don’t want to pollute the host with env vars or build tools, and of course no secrets should end up in github (unencrypted) or a docker image directory, even a private one.

build time secrets are now possible when using buildkit as builder; see the blog post here https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066

and the documentation; https://docs.docker.com/develop/develop-images/build_enhancements/

the RUN --mount option used for secrets will graduate to the default (stable) Dockerfile syntax soon
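
For reference, a minimal sketch of that secret mount in use (the file names and secret id are assumptions, not from the blog post):

cat > Dockerfile <<'EOF'
# syntax=docker/dockerfile:1
FROM node:latest
# the secret is mounted for this RUN step only and never lands in a layer
RUN --mount=type=secret,id=npmrc,dst=/root/.npmrc npm install
EOF

DOCKER_BUILDKIT=1 docker build --secret id=npmrc,src=$HOME/.npmrc -t app .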

Commenting partially, so I get notification 5 years from now, when Docker finally decides to give us a tiny step in the direction of proper credential management… and also, to give an outline of the hack I’m using at the moment, to help others, or to get holes poked in it that I’m unaware of.

Following the @mumoshu issue, I finally got the hint of using the predefined-args for build secrets.

So, essentially, I can use docker-compose, with a mapping like this:

  myProject:
    build:
      context: ../myProject/
      args: 
        - HTTPS_PROXY=${NEXUS_USERNAME}
        - NO_PROXY=${NEXUS_PASSWORD}

And then, in folder with the docker-compose.yml file, create a file named “.env” with key-value pairs of NEXUS_USERNAME and NEXUS_PASSWORD - and the proper values there.

Finally, in the Dockerfile itself, we specify our run command like so: RUN wget --user $HTTPS_PROXY --password $NO_PROXY <protected url>

And do NOT declare those as ARGs in the Dockerfile.

I haven’t found my credentials floating around in the resulting build anywhere yet… but I don’t know if I’m looking everywhere… As for the rest of the developers on my project, they each just have to create the .env file with the proper values for them.

What’s wrong with just implementing MOUNT like rocker mounts as @agilgur5 remarked earlier? I can’t believe this debate has gone so long that a team has had to effectively fork the docker build command in order to satisfy this really easy use case. Do we need another HTTP server in the mix? KISS.

I’ve recently read a good article about this from @jrslv where he proposes building a special docker image with secrets just to build your app, and then building another image for distribution using the results from running the build image.

So you have two Dockerfiles:

  • Dockerfile.build (here you simply copy all your secrets)
  • Dockerfile.dist (this one you will push to registry)

Now we can build our distribution like that:

#!/bin/sh
docker build -t hello-world-build -f Dockerfile.build .
docker run hello-world-build > build.tar.gz
docker build -t hello-world -f Dockerfile.dist .

Your secrets are safe, as you never push hello-world-build image.
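
For concreteness, a sketch of what the two Dockerfiles could contain (assumed contents, not taken from the article). Dockerfile.build does the secret-needing work, and its CMD streams the build output to stdout, which is what makes the docker run … > build.tar.gz step above work:

cat > Dockerfile.build <<'EOF'
FROM node:latest
COPY .npmrc /root/.npmrc
COPY . /app
RUN cd /app && npm install && npm run build
CMD ["tar", "-czf", "-", "-C", "/app", "dist"]
EOF

# Dockerfile.dist only unpacks the artifacts; ADD auto-extracts local tarballs
cat > Dockerfile.dist <<'EOF'
FROM alpine:latest
ADD build.tar.gz /srv/app
EOF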

I recommend reading @jrslv’s article for more details http://resources.codeship.com/ebooks/continuous-integration-continuous-delivery-with-docker

Thank you @thaJeztah I did just a little more digging and found that article shortly after posting (previous post is now deleted). Thanks again!

Secret mounts support was added to buildkit in https://github.com/moby/buildkit/pull/522 . They appear strictly on tmpfs, are excluded from build cache and can use a configurable data source. No PR yet that exposes it in a dockerfile syntax but should be a simple addition.

+1 build time secret != run time secret

As Paul points out. It is not desirable to bake internal repository credentials into the image.

Why is this so hard to comprehend?

On Thu, 16 Feb 2017, 14:42 Paul van der Linden, notifications@github.com wrote:

Why do we keep confusing build-time secrets with runtime secrets? There are many good ways already for docker (or related tools like kubernetes) to provide the runtime secrets. The only thing really missing is build-time secrets. These secrets are not used during run time, they are used during install time, this could be internal repositories for example. The only working way I have seen in this and related topics (but also advised against it), is exposing an http server to the container during build time. The http server approach makes things quite complicated to actually get to those secrets.


@gtmtech thanks for the suggestion, it inspired me to write this entrypoint:

#!/bin/bash

# export the contents of each file in /var/secrets as an env var
# named after the file (dashes -> underscores, uppercased)
if [ -d "/var/secrets" ]; then
  tmpfile="$(mktemp)"
  for file in /var/secrets/*
  do
    if [ -f "$file" ]; then
      file_contents=$(cat "$file")
      filename=$(basename "$file")
      underscored_filename="${filename//-/_}"
      capitalized_filename=${underscored_filename^^}
      # note: values containing single quotes would need extra escaping
      echo "export $capitalized_filename='$file_contents'" >> "$tmpfile"
    fi
  done

  source "$tmpfile"
  rm -f "$tmpfile"
fi

exec "$@"

I just add it to the Dockerfile like this (don’t forget to chmod +x it):

ENTRYPOINT ["/app/docker-entrypoint.sh"]

And voila. ENV vars available at runtime. Good enough 😃

@andriy-f yes, that works, as long as you:

  • (obviously) don’t copy the secret to the final stage 😉, or use the build stage / stage in which a secret is present as a “parent” for the final image
  • never push the build-stage to a registry
  • trust the host on which your daemon runs; i.e. take into account that your “build” stage is preserved as an image; someone with access to that image would be able to get access to your secret.

There’s a new “docker secret” command in Docker 1.13. It should be possible to close this issue once the documentation for that feature adequately covers the use cases mentioned here.

I think it might be worthwhile defining some tests for whatever (runtime) secrets mechanism anyone comes up with, because a lot of people on this thread are advocating for very weak security.

As a start I suggest:

  • The secret does not show up in docker inspect
  • After process 1 has been started, the secret is not available within any file accessible from the container (including volume mounted files)
  • The secret is not available in /proc/1/cmdline
  • The secret is transmitted to the container in an encrypted form

Any solution suggested above that violates one of these is problematic.

If we can agree on a definition of what behaviour a secret should follow, then at least that will weed out endless solutions that are not fit for purpose.

@Vanuan [Dockerfile] can’t have reproducibility. The RUN command guarantees that you and I cannot reasonably expect to get the exact same image out of two runs. Why? Because most of the time people use RUN to access network resources. If you want the same image as me you need to create your own image ‘FROM’ mine. No other arrangement will give us the same images. No other arrangement can give us the same images. All durable reproducibility comes from Docker Hub, not Dockerfile.

If the only defense for why we can’t have ephemeral data is because Docker thinks they can remove all of the ephemeral data, then you have to deprecate the RUN instruction.

docker build --secret is finally available in Docker 18.09 https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066

@thaJeztah Are we ready to close this issue?

Why do we keep confusing build-time secrets with runtime secrets? There are many good ways already for docker (or related tools like kubernetes) to provide the runtime secrets. The only thing really missing is build-time secrets. These secrets are not used during run time, they are used during install time, this could be internal repositories for example. The only working way I have seen in this and related topics (but also advised against it), is exposing an http server to the container during build time. The http server approach makes things quite complicated to actually get to those secrets.

The docker secret command currently looks to apply only to Docker Swarm (i.e. docker services), so it is not yet viable for generic Docker containers.

Worth noting (for anyone else like me that is stumbling upon this) that Docker Compose has support for an env_file option.

https://docs.docker.com/compose/compose-file/#env-file
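
The plain docker CLI has an equivalent flag, so a sketch of keeping values out of the image and supplying them at run time (the file name is assumed):

# secrets.env holds KEY=value pairs; it never enters the image or its history,
# though the variables are still visible via docker inspect, as noted above
docker run -d --env-file ./secrets.env myimage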

Oh, Dockerfile is far from self-sufficient. Especially for development environment. Consider this:

  • most developers don’t know all the different options of docker build, but almost everybody knows how to run bash scripts
  • docker build depends on the context directory. So unless you’re willing to wait for gigabytes of data (your source code with dependencies) to travel from one location to another for every single source line change, you won’t use it for development.
  • unless you build EVERYTHING from scratch, you have a dependency on the docker registry
  • it’s likely that you will depend on OS repositories (whether you use Debian or Alpine-based images), unless you boot up a container straight to the statically-built binary
  • unless you commit everything to git, you will have some project-level dependencies, be it npm, python package index, rubygems or anything else. So you’ll depend on some external package registry or its mirror
  • as most people noticed here you’ll depend on some secret package location for your private dependencies which you can’t publish to public repository, so you’ll depend on that
  • secrets provisioning is required to access that secure location, so you’ll depend on some system that will distribute secrets to developers
  • in addition to the Dockerfile, you’ll need docker-compose.yml, and it’s not cross-platform: you still depend on forward-/backslash differences.

Cross platform compatibility: With providing a Dockerfile, I know that any system that can run docker build will be able to build the image.

Dockerfile doesn’t ensure cross-platform compatibility. You still have to provide multiple Dockerfiles for multiple platforms. “Can run docker build” doesn’t mean “Uses Linux” anymore. Docker also supports Windows native images. You still have to use Cygwin + Linux VM if you want to run something specifically targeted for Linux machines on a Windows host.

Oh, and I didn’t even mention x86 vs ARM…

Known interface for users: If you know docker, you know some of the behavior of docker build for any project, even without looking at the Dockerfile

Unless you don’t. Everybody knows how to run a bash script without parameters or a single make command. Few people know how to correctly specify all the different command line options for docker build, docker run or docker-compose. It’s inevitable that you’ll have some wrapper bash or cmd script.


With all due respect to what the Docker folks did, I think you’re asking too much. I’m afraid the Moby project doesn’t have such a broad scope as to support all the development workflows imaginable.

I think the new “multi-stage build” functionality included in the latest Docker CE release solves a big part of our problems.

https://docs.docker.com/engine/userguide/eng-image/multistage-build/

FYI, I wrote https://github.com/abourget/secrets-bridge to address the build-time secrets problem.

It creates a throw-away configuration that you can pass as arguments, during the build process, it will connect to the host and fetch the secrets, use them, and then you can kill the host bridge. Even if the build-args are saved somewhere, they become useless the moment the server is killed.

The server supports SSH Agent Forwarding, tunnelled through a TLS websocket communication. It works on Windows too !

@hmalphettes This means you miss out on the benefits of shared lower layers between builds.

Since I didn’t see it mentioned, here’s another good article about handling secrets in AWS ECS: https://aws.amazon.com/blogs/security/how-to-manage-secrets-for-amazon-ec2-container-service-based-applications-by-using-amazon-s3-and-docker/

Note that there’s a work-in-progress PR for build-time secrets here; https://github.com/docker/docker/pull/28079 (runtime secrets for services will be in docker 1.13, see https://github.com/docker/docker/pull/27794)

If you are like me and you come here trying to decide what to do right now, then FWIW I’ll describe the solution I settled on, until something better comes around.

For run-time secrets I decided to use http://kubernetes.io/docs/user-guide/secrets/. This only works if you use kubernetes. Otherwise vault looks ok. Anything secret either in generated image or temporary layer is a bad idea.

Regarding build-time secrets: I can’t think of a build-time secrets use case other than distributing private code. At this point, I don’t see a better solution than performing anything “secret” on the host side and ADDing the generated package/jar/wheel/repo/etc. to the image. Saving one LOC by generating the package on the host side is not worth risking exposing ssh keys, or the complexity of running a proxy server as suggested in some comments.

Maybe adding a “-v” flag to docker build, similar to the docker run flag, could work well? It would temporarily share a directory between host and image, but also ensure it appears empty in the cache and in the generated image.

After what seems like forever (originally I heard it was slated for a Q4 2015 release), AWS ECS seems to have finally come through on their promise to bring IAM roles to docker apps. Here is the blog post as well.

Seems like this, combined with some KMS goodness, is a viable near-term solution. In theory you just have to bind the secrets to certain principals/IAM roles to keep non-auth roles from asking for something they shouldn’t, and leave safe storage to KMS.

Haven’t tried it yet, but it’s on my short list…

Kubernetes also has some secrets handling that reminds me a lot of Chef encrypted databags.

I understand this isn’t the platform-independent OSS way that is the whole point of this thread, but wanted to throw those two options out there for people playing in those infrastructure spaces who need something NOW

After researching this for a few hours, I cannot believe that there seems to be no officially recommended solution or workaround for build-time secrets, and something like https://github.com/dockito/vault seems to be the only viable option for build-time secrets (short of squashing the whole resulting image or building it manually in the first place). Unfortunately https://github.com/dockito/vault is quite specific to ssh keys, so off I go to try to adapt it for hosting git https credential store files as well…

@Vanuan They should both be kept as secret as possible, yes.

The app-id’s main purpose is to restrict access to certain secrets inside Vault via Policies. Anyone with access to the app-id gains access to that app-id’s policies’ secrets. The app-id should be provided by your deployment strategy. For example, if using Chef, you could set it in the parameter bags (or CustomJSON for OpsWorks). However, on its own, it won’t allow anyone access to Vault. So someone who gained access to Chef wouldn’t then be able to then go access Vault.

The user-id is NOT provided by Chef, and should be tied to specific machines. If your app is redundantly scaled across instances, each instance should have its own user-id. It doesn’t really matter where this user-id originates from (though they give suggestions), but it should not come from the same place that deployed the app-id (ie, Chef). As they said, it can be scripted, just through other means. Whatever software you use to scale instances could supply user-ids to the instances/docker containers and authorize the user-id to the app-id. It can also be done by hand if you don’t dynamically scale your instances. Every time a human adds a new instance, they create a new user-id, authorize it to the app-id, and supply it to the instance via whatever means best suits them.

Is this better than firewalling instances? Guess that depends. Firewalling doesn’t restrict access to secrets in Vault (afaik), and if someone gained access to your instances, they could easily enter your Vault.

This way, it’s hard for them to get all the pieces of the puzzle. To take it one step further, app-id also allows for CIDR blocks which you should use. If someone somehow got the app-id and user-id, they still couldn’t access Vault without being on that network.

(Again, this is my interpretation after grokking the documentation the best I could)

There are 2 solutions to build images with secrets.

Multi-stage build :

FROM ubuntu as intermediate
ARG USERNAME
ARG PASSWORD
RUN git clone https://${USERNAME}:${PASSWORD}@github.com/username/repository.git

FROM ubuntu
# copy the repository from the previous image
COPY --from=intermediate /your-repo /srv/your-repo

Then: docker build --build-arg USERNAME=username --build-arg PASSWORD=password -t my-image .
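
A quick (non-exhaustive) sanity check that the credentials did not end up in the final image’s metadata or history; note that the intermediate stage still holds them locally until pruned:

docker history --no-trunc my-image | grep -i password
docker inspect --format '{{.Config.Env}}' my-image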

Using an image builder: docker-build-with-secrets

@OJezu As I have stated multiple times on this issue, there is an open proposal with basically 0 comments on it. If you want to see secrets pushed forward, then please take the time to comment on the proposal.

Instead of coming in guns blazing and attacking the people who work on this every day, next time try asking questions and reading at least the latest comments on the issue you are commenting on.

Things can often look stalled when really there are just people hard at work. For build, see github.com/moby/buildkit where most of this work is happening today.

Thanks.

This is still pretty important, because multi-stage builds with arguments don’t leak into the image but still expose your secrets in the process list on the running system, so it’s not really solved.

Also docker secret only manages run time secrets, not build time secrets.

Sure @mcmatthew, though I must preface by saying I’m also still trying to master Vault so my experience is pretty light.

The way I have been trying to code it is that the only info you pass to the container is something needed for your code to be able to authenticate with Vault. If you’re using app-id backend, that would be the app-id itself, and the address of your Vault.

On container boot, your Rails app will notice it doesn’t have secrets yet, and must fetch them from Vault. It has the provided app-id, and will need to somehow generate its user-id. This user-id generation needs to be determined by you, but their documentation hints that “it is generally a value unique to a machine, such as a MAC address or instance ID, or a value hashed from these unique values.”

Once your Rails app has the app-id and user-id ready, it can then use Vault’s API to /login. From there you can then make API calls to get your needed secrets.

Now to clarify what I meant about storing them in memory: this varies depending on the type of app you’re using, but with Rails there should be a way to store your secrets in a userland variable cache that allows Rails to access the secrets from memory on every request instead of fetching them from Vault over and over (which, as you can imagine, would be slow). Take a look at this guide about caching in Rails, namely section 2.0, ensuring it uses memory_cache and not disk.

Lastly, make sure that however you code it, you do it in Rails and not with a special Docker entrypoint script or similar. Rails should check for secrets in memory and, if they don’t exist, fetch them.

I hope that helps. I know, a little high level, but this is how we’ve planned to tackle it.

@weemen AFAIK storing secrets in your image is also not a good idea. Your image should have no credentials baked in (including Vault tokens). Instead, use Vault’s app-id auth backend for your containers to get secrets at load time. Store them in the container’s memory somehow, depending on the app stack you’re using.

Also, Vault is working on an aws auth backend that will prove useful in the future if you’re using AWS as a cloud provider.

Nice one! You should use shred to safely delete the file though.

On Thursday, March 3, 2016, Juan Ignacio Donoso notifications@github.com wrote:

If I understand correctly, the /var/secrets dir should be mounted through volumes, right?? Also, regarding the comments about secrets not being written to disk: how bad is it to write them to disk and then delete them???


Rui Marinho

+1 Docker + HashiCorp Vault

Out of respect to this thread’s subscribers, I don’t think that it needs to devolve to tribalist snark.

If that’s the only problem, @binarytemple, then simply adding a docker image build --args-file ./my-secret-file flag should be a pretty easy fix for this whole problem, shouldn’t it? 🤔

It’s so simple and obvious: allow the mounting of volumes (files or directories) into the container during the build.

This is not a technical limitation; it’s a decision to not allow secrets in order to preserve the behaviour of: check out, run build, same inputs, same output, change a build arg if you want to invalidate the cache…

The problem is, the abstraction has become increasingly leaky, with people using all sorts of kludgy, insecure hacks to get a “secret” into a container.

Newsflash: exposing your SSH keyring via TCP, even on localhost, is not secure; neither is passing credentials via environment variables (hint: run ps, or peek in the /proc filesystem). Command arguments and environment variables are all there, naked, for the world to see.

For developers of golang code this traditionally hasn’t been an issue, as they copy and paste their dependencies into their projects rather than using a dependency management tool; golang developers call this practice ‘vendoring’.

For anyone working in other ecosystems where the build system fetches dependencies from Git, or repositories that require authentication, it’s a big problem.

I’m pretty sure there is some startup rule somewhere along the lines of, “don’t presume you know, how, or why, your users use the product”.

On Thu, 26 Jul 2018, 22:00 Dan Armbrust, notifications@github.com wrote:

This entire bug is smelly. I haven’t found a better way… there are several other approaches above, but I think all of the other secure ones require standing up a little http server to feed the information into the image. maybe less smelly, but more complexity, more tools, more moving parts.

Not sure that anyone has found a “good” solution… we are all stuck waiting on the docker people to do something about it… don’t hold your breath, since this bug was written in 2015, and they haven’t even proposed a roadmap yet, much less a solution.


I’m sorry, but each person in this ticket has a slightly different use case. Those are corner cases and require different solutions.

  1. I want to run production images on development machines. Use docker registry
  2. I want a distributed CI system, so that each developer has a reproducible build. Use docker run to build your project, use docker prune to clean up
  3. I want to build docker images so that I can distribute them. Use a dedicated CI server where you can run multistage builds.

I’m not going to refute all your points individually. Firstly, you can of course always find situations where the “single Dockerfile” approach does not work at all. However, I would argue that for almost all of the points you raised (which are all valid and relevant), the “custom script or makefile” approach is either just as bad or worse. Just as an example, take one point:

most developers don’t know all the different options of docker build, but almost everybody knows how to run bash scripts

If I am involved in 10 projects and they all use a Dockerfile, I need to learn about docker only once, but with your suggestion I need to learn 10 totally different build scripts. How do I wipe the cache and start from scratch for project Foo’s build_image.sh again? It’s not clear. If building the image is done with docker build, it is clear (of course I need to know how docker works, but I also need to know that to use the image that comes out of build_image.sh).

Overall, I guess the point that me and others are trying to make is that for /many/ scenarios the “single Dockerfile” approach seems to work really nicely for folks (which is a reason docker is so popular), in particular in the open source world where usually all resources are accessible without secrets. But if you try to apply the same pattern that you have come to love in a context where part of your resources needs credentials to access, the approach breaks down. There have been a number of suggestions (and implementations) of technologically not-too-complex ways to make it work, but nothing has come of it over a long time (this has been laid out many times above). Hence the frustration.

I appreciate that people are putting effort into this, for example with the linked proposal in #33343. My post is about motivating what some people want and why they keep coming back asking for it here.

So I can’t have a simple flow in which I just run docker build . wherever, either on a dev machine or CI, but instead have to depend on CI to build packages. Why even bother with docker then? I can write a travis file, or configure the flow in bamboo.

@dmitriid I understand your frustration that this feature has been missing. However this is not how to address an open source community (or any community).

I posted a link to a proposal above, and have seen exactly 0 comments on it except my own.

Here is the latest secrets proposal: #33343

I agree 100% with @cpuguy83. Relying on a build time flag to keep out secrets would be pretty risky. There was a proposal PR for build time (https://github.com/docker/docker/pull/30637) I’ll work on a rebase to get more feedback.

I’d add to the list of run-time requirements:

  • Container authentication/authorization when bootstrapping the first secret.

For instance, Vault provides for authorization with the AppRole Backend but is open-ended regarding how containers identify themselves.

Nick Sullivan presented Cloudflare’s PAL project a few weeks ago, promising to open source it soon, which should provide one potential answer to the authentication question using docker notary.

I am currently working on a solution using Vault:

  1. Builder machine has Vault installed and has a token saved locally
  2. When the build starts, the builder machine requests a new temporary token valid only for minutes (based on the build; even 1h would be acceptable)
  3. Injects the token as a build arg
  4. The Docker image also has Vault installed (or installs and removes it during the build), and using this token it can fetch the real secrets

It is important that the secrets are removed within the same command, so that when docker caches the given layer there are no leftovers. (This of course only applies to build-time secrets.)

I haven’t built this yet, but am working on it.

I just ran across something that might help in this regard: https://github.com/docker/docker/pull/13587

This looks like it has been available since docker v1.10.0, but I hadn’t noticed it till now. I think the solution I’m leaning toward at this point is using https://www.vaultproject.io/ to store and retrieve the secrets, storing them inside the container in a tmpfs filesystem mounted at /secrets or something of that nature. With the new ECS feature enabling IAM roles on containers, I believe I should be able to use Vault’s AWS EC2 auth to secure the authorization to the secrets themselves. (For platform independence I might be inclined to go with their App ID auth instead.)

In any case, the missing piece for me was where to securely put the secrets once they were retrieved. The tmpfs option seems like a good one to me. The only thing missing is that ECS doesn’t seem to support this parameter yet, which is why I submitted this today: https://github.com/aws/amazon-ecs-agent/issues/469

All together that seems like a pretty comprehensive solution IMHO.
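
For the non-ECS case, the tmpfs part of this is already expressible with a plain docker run flag; a minimal sketch (the mount path and size are arbitrary choices):

# an in-memory filesystem at /secrets: contents never touch the
# container's writable layer or the host's disk
docker run -d --tmpfs /secrets:rw,noexec,nosuid,size=1m myapp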

@jaredm4 Can you please clarify this statement?:

“Instead, use Vault’s app-id auth backend for your containers to get secrets on load time. Store them in the container’s memory somehow, depending on the app stack you’re using.”

I’m not yet clear on when/where to retrieve the secrets from Vault (or Keywhiz, etc). Is this done before docker run and passed to the run command? Does it happen at some point during container initialization (if so, any examples)? Should my application retrieve these when needed? For example, my rails app needs Google API keys; do I write something inside rails to call Vault when the keys are needed?

I think I’m clear on the need for using something like Vault, and clear on how to configure it; I’m just not clear on how to consume the service and get my yml files updated and ready when rails boots.

Any guidance here would be appreciated. Thanks

@stephank I’ve implemented a docker build tool at work that takes a slightly different approach. My main concern was not for build time secrets, but it takes care of that as well (keeping the secrets out of the built image, that is, how you get hold of the secrets in the first place is still up to you).

And that is by running a “build manager” with the project code in a VOLUME. The manager then runs any number of build tools in separate containers that mount the project code using volumes from the manager. So any built artifacts and other produced files are kept in the manager volume and follows along the build pipeline for each build step. At the end, the manager can build a final production image using the produced build result. Any secrets needed along the way have been available in the manager and/or the build containers, but not the final image. No docker image wizardry used, and build caches work as expected.

What the build pipeline looks like is entirely up to the project using a spec file configuring the build requirements.

As a matter of fact, I’m rather hyped about this tool, I’m just waiting for us to be able to release it as open source (pending company policy guidelines to be adopted)…

Been reading a bunch of these threads now, and one feature that would solve some use cases here, and would have use cases outside of secrets, is an --add flag for docker run that copies a file into the container, just like the ADD statement in Dockerfiles.

I may be wrong, but why all these complicated methods? I rely on standard unix file permissions. Hand over all secrets to docker with -v /etc/secrets/docker1:/etc/secrets, readable only by root, and then there’s a script running at container startup as root which passes the secrets to the appropriate places for the relevant programs (for example the apache config). These programs drop root permissions at startup, so if hacked they cannot read the root-owned secrets later. Is this method I use somehow flawed?
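
A sketch of that pattern (paths and the apache-style service are examples, not from the comment above):

# host side: root-owned secrets directory, mounted read-only
docker run -d -v /etc/secrets/docker1:/etc/secrets:ro myimage

# container startup script (runs as root), conceptually:
#   install -m 600 /etc/secrets/ssl.key /etc/apache2/ssl.key
#   exec apache2ctl -D FOREGROUND    # workers drop to www-data and
#                                    # can no longer read root-owned files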

Hi, I agree and think this approach ^^ should be generally recommended as the best way for RUNTIME secrets, unless anybody else here has a strong objection. After that we can also list any remaining corner cases (at RUNTIME) which are not covered by it.

Unfortunately I can’t see the secret squirrel taking off, because it’s simply too complicated for most regular non-technical persons to learn and adopt as a popular strategy.

So then that leaves (you’ve probably guessed it already)… build-time secrets!

But I think that’s progress! Since after a long time of not really getting anywhere, it maybe cuts things in half and solves approx 45-50% of the total problem.

And if there are still remaining problems around secrets, at least they will be more specific / focussed ones which we can keep progressing on / tackle afterwards.

@kepkin no offense, but that doesn’t make any sense. Secrets are definitely not safe, since they are in the tarball and the tarball is being ADDed to the production image; even if you remove the tarball, without squashing it will leak in some layer.

Correct; with the build-secrets, the secrets will not be stored in the image layer. They will be mounted during build, and will no longer be there after the build step completes (there may be an empty file afterwards, due to how Linux mount points work (they require the destination to exist), but the secrets themselves are not there).

For git operations, you can also use the --ssh option when using BuildKit, which will forward an ssh-agent into the container (https://docs.docker.com/develop/develop-images/build_enhancements/#using-ssh-to-access-private-data-in-builds)
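
A minimal sketch of that --ssh flow (the repository URL is a placeholder, and host-key setup is omitted):

# Dockerfile:
#   # syntax=docker/dockerfile:1
#   RUN --mount=type=ssh git clone git@github.com:org/private-repo.git /src
# build with the host's ssh-agent forwarded into that RUN step only:
DOCKER_BUILDKIT=1 docker build --ssh default .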

@petef19 you might benefit from the bit of knowledge on mounting secrets using the new DOCKER_BUILDKIT support in Docker: https://snyk.io/blog/10-best-practices-to-containerize-nodejs-web-applications-with-docker/

It isn’t the code bugs which trigger those bills, but the keys themselves 😂

On Fri, Mar 5, 2021, 11:12 Bryan Hunt notifications@github.com wrote:

I’ve seen this pattern many times and wondered to myself, “why would the software development team (who are busy installing npm and god knows what on their machines)“, be entrusted with runtime secrets such as the magic key that allows you to run up a $50k google bill overnight?

Surely this is a problem for the operations/security teams? Do they really need the application software developers handling this stuff?

On 4 Mar 2021, at 19:47, brybalicious notifications@github.com wrote:

Best practice is to encrypt secrets with a public key and allow only your runtime controller to decrypt and mount them. Encrypted secrets can be stored in a Git repository.

For Kubernetes there is https://github.com/bitnami-labs/sealed-secrets

Nice. What about in a simple setup that is not a cluster? Just a single Dockerfile…?


I’ve seen this pattern many times and wondered to myself, “why would the software development team (who are busy installing npm and god knows what on their machines)“, be entrusted with runtime secrets such as the magic key that allows you to run up a $50k google bill overnight?

Surely this is a problem for the operations/security teams? Do they really need the application software developers handling this stuff?

Best practice is to encrypt secrets with a public key and allow only your runtime controller to decrypt and mount them. Encrypted secrets can be stored in a Git repository.

For Kubernetes there is https://github.com/bitnami-labs/sealed-secrets

Cool. That closes the build time secrets question. Anything for runtime/devtime (ssh in OS X)?

@caub here’s some CLI help:

The Docker docs on formatting will help you come up with the rest of your inspect format:

docker service inspect --format='{{range .Spec.TaskTemplate.ContainerSpec.Secrets}}{{println .SecretName}}{{end}}' nginx

That’ll list all secret names in a service. If you wanted both name and ID, you could:

docker service inspect --format='{{range .Spec.TaskTemplate.ContainerSpec.Secrets}}{{println .SecretName .SecretID}}{{end}}' nginx

I always have my CI/CD (service update commands) or stack files hardcode the path so you don’t have that issue on rotation.

With labels you can have CI/CD automation identify the right secret if you’re not using stack files (without needing the secret name, which would be different each time).

Unfortunately most of the workarounds mentioned in this and the many other tickets still expose the secrets to the resulting image, or only work with specific languages where you only need dependencies at compile time and not at installation time.

@binarytemple that will never happen; the docker maintainers have already killed at least one PR, fully documented and fully implemented, of a safe secrets feature. Given the rest of the history (this 3-year-old ticket isn’t the oldest and definitely not the only ticket/PR on this topic), I think it’s safe to say the docker maintainers don’t understand the need for security, which is a big problem.

@yajo could be, yes it’s at least a workaround until buildkit ships with secrets mount. Good suggestion. Thanks. B

Yes, but there are some usages where security matters a little less:

  • you want to build on your own computer
  • you build on your enterprise CI server (like jenkins). Most of the time it’s about having access to a private repository (nexus, git, npm, etc), so your CI may have its own credentials for that.
  • you can use a VM created from docker-machine and remove it afterwards.

With all due respect to what the Docker folks did, I think you’re asking too much. I’m afraid the Moby project doesn’t have such a broad scope as to support all the development workflows imaginable.

It seems to me that what most people are asking for here is nothing of the sort, but only a simple way to use secrets in docker build that is no less secure than using them in your custom build_image.sh. One way to satisfy this need seems to be build-time mounts. They have downsides, and there are probably better ways, but what is being asked for is not coverage of every possible corner case.

@androa I like that solution, but I’m not sure how I feel about the secrets being copied to the work directory. That’s probably fine on a private CI server, but it’s not so great for local building, where you would be copying files you shouldn’t out of protected locations (not to mention that the copying itself is both annoying and dangerous, in that the copies might accidentally end up in source control). The other option would be to use a wider Docker build context, but for a lot of common secrets that could mean the whole root volume. Any suggestions on how to make this nice for both local and CI?

It’s weird. I talk to people and they’re doing all kinds of stupid hacks like using an HTTP service, throwing away everything (monitoring/granular permissions/simplicity) the POSIX/SELinux combo provides. I just don’t understand. The refusal seems illogical to me.

On Wed, 23 Aug 2017, 23:03 Michael Scott Shappe notifications@github.com wrote:

@binarytemple https://github.com/binarytemple I’ve started looking at Rocker as an alternative, actually…but only because of this strange mental block docker seems to have about build-time secrets.


@cpuguy83 Awesome! I kinda skipped over the last third of this discussion (and a few others), as it’s a lot to read (while at the same time looking for a solution), so I really missed your comment, sorry 😦

Alexandre, what you have done is extremely creative and skilled. It just makes me sad that it’s necessary to jump through all these hoops just to achieve what could be done if ‘docker build’ supported a ‘mount’ command, rather than the blind insistence that everything be copied into the container.

In my case I’m going to abandon ‘docker build’ and instead use Rocker or something of my own creation.

On Thu, 13 Jul 2017, 16:23 Alexandre Bourget, notifications@github.com wrote:

FYI, I wrote https://github.com/abourget/secrets-bridge to address the build-time secrets problem.

It creates a throw-away configuration that you can pass as arguments; during the build process, it will connect to the host, fetch the secrets, and use them, and then you can kill the host bridge. Even if the build args are saved somewhere, they become useless the moment the server is killed.

The server supports SSH agent forwarding, tunnelled through a TLS websocket. It works on Windows too!


I thought the same thing as @mixja, in that the secrets command only helps Swarm users and is not a more general solution (like what was done with attaching persistent volumes). How you manage your secrets (what they are and who has access to them) is very system-dependent and depends on which bits of paid and/or OSS software you cobble together to make your “platform”. With Docker the company moving into providing a platform, I’m not surprised that their first implementation is Swarm-based, just as Hashicorp is integrating Vault into Atlas; it makes sense.

Really, how the secrets are passed falls outside the scope of docker run. AWS does this kind of thing with roles and policies to grant/deny permissions, plus an SDK. Chef does it using encrypted data bags and crypto “bootstrapping” for auth. K8s has its own version of what was just released in Docker 1.13. I’m sure Mesos will add a similar implementation in time.

These implementations seem to fall into two camps:

  1. pass the secret via a volume mount that the “platform” provides (Chef / Docker secrets / K8s)
  2. pass credentials to talk to an external service to get things at boot (IAM/credstash/etc.)

I think I was hoping to see something more along the lines of the second option. In the first option there isn’t enough separation of concerns (the thing doing the launching also has access to all the keys), but this is a preference, and like everything else in system building, everybody likes to do it differently.

I’m encouraged that docker has taken this first step, and I hope a more general mechanism for docker run comes out of it (to support camp #2); sadly, that means I don’t think this thread’s initial mission has been met, so it shouldn’t be closed yet.
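
A minimal camp-2 sketch, assuming credstash as the external store and hypothetical names: the container is given only the credentials needed to talk to the store (e.g. via an IAM role) and fetches its secrets at boot:

# nothing secret is baked into the image or passed on the command line;
# the value is pulled from the external store just before the app starts
docker run -e AWS_REGION=us-east-1 myapp \
  sh -c 'export DB_PASS="$(credstash get db_pass)" && exec /usr/local/bin/app'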

Would be great if it were implemented the same way as Rocker; it can be simple and doesn’t need to be ‘enterprise’.

On Tue, 29 Nov 2016, 15:53 Michael Warkentin, notifications@github.com wrote:

Sounds like it:

This is currently for Swarm mode only as the backing store is Swarm and as such is only for Linux. This is the foundation for future secret support in Docker with potential improvements such as Windows support, different backing stores, etc.


I see that all four proposals currently in the OP are about secret storage 🙁

I’d say Docker should facilitate passing a secret/password to a container, but storing/managing those secrets is (and should be) out of scope for Docker.

When passing a secret, I’d say a run parameter is almost perfect, except that it is usually logged. So I’d narrow this down to a non-plaintext parameter feature. One approach would be encryption with keys generated per container instance.

As for how to manage secrets: anything the user wants, from a homebrew bash script to integration with software like Kubernetes.

I think the main problem/feature in all this is that you access Docker as root, so anything you put inside a container can be inspected, be it a token, a volume, a variable, an encryption key… anything.

So one idea would be to remove sudo and su from your container and add a USER command before any ENTRYPOINT or CMD. Anybody running your container would then have no way to run as root (if I’m not mistaken), and thus you could actually hide something from them.
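
A minimal sketch of that first idea (the image and binary path are placeholders; Debian’s base image already ships without sudo):

FROM debian:stretch
# dedicated unprivileged user; no sudo/su escape hatches in the image
RUN useradd --create-home appuser
USER appuser
ENTRYPOINT ["/usr/local/bin/app"]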

Another idea (the best, IMHO) would be to add the notion of users and groups to the Docker socket and to containers, so that you could say GROUP-A has access to containers with TAG-B, and USER-C belongs to GROUP-A, so it has access to those containers. It could even be a permission per operation (GROUP-A has access to start/stop for TAG-B, GROUP-B has access to exec, GROUP-C has access to rm/inspect, and so on).

If I understand correctly, the /var/secrets dir would be mounted through volumes, right? Also, regarding the comments about secrets not being written to disk: how bad is it to write them to disk and then delete them?

Vault -1. Vault has some operational characteristics (unsealing) that make it really undesirable for a lot of people.

Having a pluggable API would make the most sense.

We’ve been using the kind of approach that @gtmtech describes, with great success. We inject KMS-encrypted secrets via environment variables, then let code inside the container decrypt as required.

Typically that involves a simple shim entrypoint in front of the application. We currently implement that shim with a combination of shell and a small Golang binary (https://github.com/realestate-com-au/shush), but I like the sound of the pure-Go approach.
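
As a rough illustration of the pattern (treat the variable names as hypothetical, and check shush’s README for its exact CLI), the shim decrypts a KMS-encrypted value found in the environment and then execs the real process:

#!/bin/sh
# entrypoint shim: decrypt the KMS-encrypted value, export the plaintext,
# then replace this process with the actual application (passed as arguments)
DB_PASSWORD="$(shush decrypt "$KMS_ENCRYPTED_DB_PASSWORD")"
export DB_PASSWORD
unset KMS_ENCRYPTED_DB_PASSWORD
exec "$@"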

Example code and working scenarios here, @dreamcat4 @kaos:

https://github.com/gtmtechltd/secret-squirrel

Don’t know if this is any use or would work, but here’s a bit of a left-field suggestion for solving the case where I want to inject a secret into a container at runtime (e.g. a Postgres password).

If I could override the entrypoint at docker run time and set it to a script of my choosing, e.g. /sbin/get_secrets, the script could fetch secrets from a mechanism of my choosing (e.g. KMS) and then exec the original entrypoint, thus becoming a mere wrapper whose sole purpose is to set up environment variables with secrets in them INSIDE the container. Such a script could be supplied at runtime via a volume mount. This mechanism would not involve secrets ever being written to disk (one of my pet hates), or being leaked by docker (not part of docker inspect); they would only exist inside the environment of process 1 inside the container, which keeps the 12-factor-ness.

You can already do this (I believe) if entrypoint is not used in the image metadata but only cmd is, as entrypoint then wraps the command. As mentioned, the wrapper could then be mounted at runtime via a volume mount. If entrypoint is already used in the image metadata, then I think you cannot accomplish this at present, unless it is possible to see from inside the container what the original entrypoint was (not the command-line override); I’m not sure whether you can do that.

Finally, it would, I think, even be possible to supply an encrypted one-time key via traditional env-var injection, which the external /sbin/get_secrets could use to request the actual secrets (e.g. the Postgres password), adding an extra safeguard against docker leaking the one-time key.

I can’t work out if this is just layers on layers, or whether it potentially solves the issue… apologies if it’s just the former.
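
For what it’s worth, a minimal sketch of the wrapper (fetch-secret is a stand-in for whatever mechanism you choose, e.g. a KMS client):

#!/bin/sh
# /sbin/get_secrets: fetch secrets, put them in the environment of PID 1 only,
# then replace this process with the image's original command
export PGPASSWORD="$(fetch-secret postgres/password)"
exec "$@"

supplied at run time via a volume mount and an entrypoint override:

docker run -v "$PWD/get_secrets:/sbin/get_secrets:ro" \
  --entrypoint /sbin/get_secrets myimage /usr/local/bin/original-cmd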

+1 to a local file storage backend. For more advanced use cases, however, I would prefer the full power of a Hashicorp Vault-like solution. When we are talking about deployment in an organisation, the argument is that the people who provide and control secrets are different from the people who use them. This is a common security measure: keep the circle of people with controlling power limited to a few very trusted security engineers…