moby: Bug in --volumes-from, starting from docker 1.13

Description

The bug appears if you use “--volumes-from” with a container whose bind-mounted host directory does not exist.

Steps to reproduce the issue:

  1. docker create --name test -v /some/not/exists/dir:/some/dir node:6.9.1
  2. docker run -it --rm --volumes-from test node:6.9.1 bash

Describe the results you received:

docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting \\\"/some/not/exists/dir\\\" to rootfs \\\"/var/lib/docker/aufs/mnt/ea72e72f3c1d539878c6ff558bf8e386265d7cdd3e68e9d225697adaaa4b6a8d\\\" at \\\"/some/dir\\\" caused \\\"stat /some/not/exists/dir: no such file or directory\\\"\"".

Describe the results you expected:

Docker should create the directory /some/not/exists/dir on the host, as it did in versions before 1.13.
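
A workaround, in line with the bind-mount advice further down in this thread, is to create the host directory up-front; a minimal sketch using the reproduction steps above:

# create the missing host directory before starting the containers
$ mkdir -p /some/not/exists/dir
$ docker create --name test -v /some/not/exists/dir:/some/dir node:6.9.1
$ docker run -it --rm --volumes-from test node:6.9.1 bash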

Output of docker version:

Client:
 Version:      1.13.0
 API version:  1.25
 Go version:   go1.7.3
 Git commit:   49bf474
 Built:        Tue Jan 17 09:58:26 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.0
 API version:  1.25 (minimum version 1.12)
 Go version:   go1.7.3
 Git commit:   49bf474
 Built:        Tue Jan 17 09:58:26 2017
 OS/Arch:      linux/amd64
 Experimental: false

Output of docker info:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 1.13.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 7
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-47-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.953 GiB
Name: servers-dev-modera-org
ID: GP4V:2OYT:VNEG:7KJQ:AQJT:XZ34:2SWE:32Y4:OZMD:LX6K:EBEN:PUWK
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

About this issue

  • Original URL: https://github.com/moby/moby/issues/30441
  • State: open
  • Created 7 years ago
  • Comments: 23 (9 by maintainers)

Most upvoted comments

Hm, well, it’s not really a data-volume container, because the data is not stored in the container, or a volume, but uses a bind-mount from the host.

That section is pretty outdated, and in real need of an overhaul; starting with version 1.7, Docker added support for “volume” management commands (e.g. docker volume create), which allow you to manage volumes without having to start a “dummy” container for that.
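
For reference, the basic management commands (all part of the regular docker CLI; the volume name is illustrative):

# create, list, inspect, and remove a named volume
$ docker volume create mydata
$ docker volume ls
$ docker volume inspect mydata
$ docker volume rm mydata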

Let me add a bit of information;

If you want to store persistent data, you can use a volume. When creating a volume, and attaching it to a container, the data from inside the container (at the location you attach the volume) is copied to the volume the first time it’s run. This allows you to ship an image with some default data, and initialize the volume with that default data. For example;

# create a new, empty volume named "database-data"
$ docker volume create database-data

$ docker run -d \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  --name mydb \
  -v database-data:/var/lib/mysql \
  mysql:5.7.16

Because the “database-data” volume is still empty, the content of /var/lib/mysql is written to the volume. Any data written to /var/lib/mysql is written to the volume, and is preserved after the container is deleted. This allows you to upgrade the container without losing your data, for example, to upgrade from mysql 5.7.16 to mysql 5.7.17:

# stop and remove the database container
$ docker stop mydb
$ docker rm mydb

# start a new database container, but re-use the volume;
$ docker run -d \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  --name mydb \
  -v database-data:/var/lib/mysql \
  mysql:5.7.17

The location where the actual data is stored for a volume depends on the volume-driver that is used. By default (with the “local” volume driver), it’s stored in a directory inside /var/lib/docker on the host that the daemon runs on, but there are other plugins that allow you to store the data on (e.g.) s3, or ceph. Here is an overview of volume drivers. More enhancements are coming though; docker 1.13 adds support for “managed” plugins; this allows you to install plugins (such as volume drivers) through the docker plugin install command.
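
As a sketch of how that looks (the vieux/sshfs plugin is the example used in the Docker documentation; the ssh target below is illustrative):

# install a volume plugin, then create and use a volume backed by it
$ docker plugin install vieux/sshfs

# option names come from the plugin's own documentation
$ docker volume create -d vieux/sshfs -o sshcmd=user@host:/remote/path sshvolume
$ docker run -it --rm -v sshvolume:/data alpine ls /data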

bind-mounts

During development you sometimes want (e.g.) your local source code to be available inside the container, so that you can make code changes without having to rebuild the image; doing so is called “bind-mounting” a host directory. Bind-mounts are not volumes (but use the same flag, which can be confusing from a naming perspective).

Contrary to volumes, bind-mounts do not copy data from the container to the mounted directory. This is by design, because you’re giving the container access to a directory on the host, and not the other way round.
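
A quick way to see this difference (a sketch; the paths and volume name are illustrative): an empty host directory bind-mounted over a location that has content in the image simply hides that content, whereas an empty named volume is initialized with it;

# the empty bind-mount hides the image's default index.html (no output)
$ mkdir -p /tmp/empty
$ docker run --rm -v /tmp/empty:/usr/share/nginx/html nginx:alpine ls /usr/share/nginx/html

# an empty named volume gets the image's content copied into it instead
$ docker volume create demo
$ docker run --rm -v demo:/usr/share/nginx/html nginx:alpine ls /usr/share/nginx/html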

Although host-directories are automatically created when bind-mounting a directory, this feature is only for backward compatibility, and it’s advisable to create the directory on the host before bind-mounting it into the container;

$ mkdir -p ~/my-project/www
$ echo "Hello world!" > ~/my-project/www/index.html

Now you can bind-mount the directory into the container;

$ docker run -d -v ~/my-project/www:/usr/share/nginx/html/ -p 8080:80 nginx:alpine

The directory is bind-mounted “over” the existing files in the container, so when visiting http://localhost:8080, you’ll see “Hello world!”. Editing the local index.html will directly update the file that’s used in the container.
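
To illustrate (assuming the nginx container from above is still running):

$ curl http://localhost:8080
Hello world!

$ echo "Hello again!" > ~/my-project/www/index.html
$ curl http://localhost:8080
Hello again!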

Note that when bind-mounting files from the host, there are a number of things to take into account;

  • bind-mounting files/directories mounts them from the host that the daemon runs on. If the daemon is running on a remote server (which may be a “VM”), it won’t have access to your local files; instead, it will create an empty directory on the host, and bind-mount that directory. Docker for Mac, Docker for Windows, and Docker Toolbox have a special setup to automatically share your local files with the daemon running in a VM, making this work (even though the daemon is running in a Virtual Machine)
  • bind-mounting files/directories preserves the permissions as they are on the host, so the process in the container may not have permissions to read them (Docker for Mac uses a mechanism to automatically grant these permissions, but on Linux or Windows this is not the case); a minimal sketch for checking this follows this list
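
A sketch (not from the thread) for checking permissions as the container sees them; the image and paths are illustrative:

# list numeric uid/gid, since names inside the container resolve differently
$ docker run --rm -v ~/my-project/www:/data alpine ls -ln /data

# if the container's process runs as an unprivileged user, you may need
# to open up read permissions on the host first
$ chmod -R a+rX ~/my-project/www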

I have found a solution

Docker Compose v3

version: '3'

services:

  # Workspace Utilities Container
  workspace:
    build:
      context: ./docker/workspace
    volumes:
      - www:/var/www
    ports:
      - "22:22"
    tty: true

  # PHP-FPM Container
  php-fpm:
    build:
      context: ./docker/php-fpm
      dockerfile: Dockerfile-56.alpine
    depends_on:
      - redis
    volumes:
      - www:/var/www
    expose:
      - "9000"
    links:
      - workspace

  # Apache Server Container
  apache2:
    build:
      context: ./docker/apache2
      args:
        - PHP_SOCKET=php-fpm:9000
    volumes:
      - www:/var/www
      - ./docker/logs/apache2:/var/log/apache2
    ports:
      - "80:80"
      - "443:443"
    links:
      - php-fpm

  # MariaDB Container
  mariadb:
    build: ./docker/mariadb
    volumes:
      - mariadb:/var/lib/mysql
    ports:
      - "3306:3306"

  # Redis Container
  redis:
    build: ./docker/redis
    volumes:
      - redis:/data
    ports:
      - "6379:6379"

# Volumes Setup
volumes:
  www:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /path/to/folder/on/host/machine
  redis:
    driver: local
  mariadb:
    driver: local
  sessions:    # nothing is connected to this (- ./data/sessions:/sessions)
    driver: local

@vv3d0x I can confirm that adding the below driver_opts works for me as a way of avoiding having to use volumes_from, but is there a way to do it without having to specify the full path on the host machine in docker-compose.yml?

driver_opts:
  o: bind
  type: none
  device: /path/to/folder/on/host/machine

With v2 syntax, I could just mount the current working directory with .:/var/www/html, but if I put . in the above driver_opts, it mounts / and not .!

But what if I want to change the source folder for a volume? When using a named volume with “driver: local”, I have not found a way to change its mountpoint.

Basically, if you specify a host-directory, you’re using “bind-mounts”, not “volumes”. The confusing bit here is that they are both defined through volumes / --volume. Bind-mounts will work in a local development setup, but are discouraged for deploying an application as using bind-mounts will make your application host-dependent. (i.e. you need to prepare the host before being able to use a bind-mount).

There is a volume-plugin that allows you to specify a custom path on the host for its storage (see https://github.com/docker/docker/issues/19990, and https://github.com/CWSpear/local-persist), but it’s possible to do with just the local volume driver (also see the linked issue) (edit: I just saw you found that solution);

version: "3.0"

services:
  web:
    image: nginx:alpine
    volumes:
      - log-data:/usr/share/nginx/html/

volumes:
  log-data:
    driver_opts:
      type: none
      device: /var/log/
      o: bind

This assumes the host-directory (/var/log) is present on the host.
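
The same bind-backed named volume can also be created from the CLI; a sketch (the volume name is illustrative) which, as a bonus, avoids hard-coding the host path by letting the shell expand $PWD:

# CLI equivalent of the compose driver_opts above
$ docker volume create --driver local \
    --opt type=none \
    --opt device="$PWD" \
    --opt o=bind \
    www-data

# the volume now bind-mounts the current directory
$ docker run --rm -v www-data:/var/www alpine ls /var/www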

@vv3d0x some things I notice in your docker-compose file;

  • you’re using links. “legacy links” are deprecated, and discouraged as they are a “static” link between containers. For example, restarting the container you linked to will break the link, resulting in the other container no longer being able to connect. If you switch to the compose “2.x” or higher format, you can use “docker networks”, doing so allows you to connect to other containers using their service-name as hostname. Restarting containers will no longer break that connectivity. Legacy links also (by the original design) leak environment variables to every container that is connected, possibly leaking credentials to connected containers (see https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/)
  • I see you’re writing your apache logs to a file; you may want to consider having apache write logs to stdout / stderr. That way, you can use logging drivers, and use docker logs to view your logs, or send logs to a log collector (see https://docs.docker.com/engine/admin/logging/view_container_logs/)
  • Be aware of depends_on: although depends_on may work for initially starting up your stack, it does not resolve situations where a service is temporarily unavailable (e.g., during an update of that service, or if the service is running on a different machine, due to a network disruption). It’s advisable to design your services to be resilient against temporary outages of connected services, for example, by implementing a retry-loop if a dependent service is not available (see the sketch after this list).
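
A minimal sketch of such a retry-loop (not from the thread), as a shell entrypoint wrapper; the mariadb host and port are illustrative:

#!/bin/sh
# wait until the dependent service accepts TCP connections,
# then hand control over to the container's real command
until nc -z mariadb 3306; do
  echo "mariadb not ready yet; retrying..."
  sleep 2
done
exec "$@"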

@thaJeztah In reference to your answer https://github.com/moby/moby/issues/30441#issuecomment-275124802 (thanks for the very clear explanation):

In a setup like in the answer, with two separate images (and containers): nginx (a web server) and mydataimage (a specific website - static data files - to serve), how to deploy a new version of mydataimage without restarting nginx? More generally, how to deploy data packaged into a data image - a new version of it - to be consumed by other running container(s), using Docker?

IIUC, deploying a new version of mydataimage by merely restarting its container will not update the shared volume uploads-data and so nginx will not see the new data from the new mydataimage. The shared volume uploads-data will not be updated because it only gets initialized once, when mounted for the first time while it’s empty. Here it won’t be empty because it contains the previous version of mydataimage.

You can still use the same volume for two services. If I understand correctly, you have a separate image holding the data, purely for propagating the volume? Say that image is named mydataimage; you can do something like this;

version: "3.0"

services:
  init:
    image: mydataimage
    volumes:
      - uploads-data:/usr/share/nginx/html/uploads/
  web:
    image: nginx
    volumes:
      - uploads-data:/usr/share/nginx/html/uploads/

volumes:
  uploads-data:

Be aware though, that there will be a “race” condition; if the web service is started before the init service, then the volume is propagated by the content of the web service’s image, not the init service’s image.

So, you may want to do docker-compose up init manually to start the init service first (and propagate the volume), then start the other services. Alternatively (since propagating the volume is a one-time task), you can manually declare the volume “external” (i.e., the volume is not created by docker-compose, but created up-front), and run the image manually to propagate the volume;

version: "3.0"

services:
  web:
    image: nginx
    volumes:
      - uploads-data:/usr/share/nginx/html/uploads/

volumes:
  uploads-data:
    external:
      name: my-volume

Then;

# create the volume
docker volume create my-volume

# propagate the volume with data from the `/data` directory in the mydataimage image
# afterwards, the container is no longer needed, so the `--rm` flag
# removes the container after it exits (and has propagated the volume)
docker run -it --rm -v my-volume:/data mydataimage

# once done, start the docker-compose stack
docker-compose up -d

Is there a reason you’re using a separate image to propagate the volume, and not putting that in the services’ image itself?

Generally; volumes are used for runtime data (e.g. database data, uploads, sessions, etc), but the source code itself of your service should not be in a volume. That way you can update the source code by deploying a new version of the image (myapp:v1.0 -> myapp:v1.1), which will update the version of your application, but preserve the “runtime” data.

Why do you think I am getting the error:

Because you’re using the compose-file V1 schema, and that schema does not have support for defining volumes (i.e., no volumes: section); https://docs.docker.com/compose/compose-file/compose-file-v1/#volumes-volume_driver

So when parsing your compose-file with the V1 schema, volumes: is seen as the name of a service, and logstash_dir: is seen as an option for that service, which of course is not valid.

So far I have been using volumes_from not to copy the volume definition, but to share the content provided by a volume container’s exposed volume(s): I created an image that defines a volume, and populated that image with content under the volume path. I then start a container from that volume image, and configure other containers to mount volumes_from that volume container in order to read the content it shares. This doesn’t seem to be possible anymore, or is it?

The volumes_from is basically a “lazy” way to copy volume definitions from one container to another, so;

docker run -d --name one -v myvolume:/foo image-one

docker run -d --volumes-from=one image-two

Is the same as running;

docker run -d --name one -v myvolume:/foo image-one
docker run -d --name two -v myvolume:/foo image-two
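
To check that both containers indeed ended up with the same volume, docker inspect can be used; a sketch (the Mounts fields shown are part of the regular inspect output):

# both containers should report the same named volume mounted at /foo
$ docker inspect -f '{{ range .Mounts }}{{ .Name }} -> {{ .Destination }}{{ end }}' one
myvolume -> /foo
$ docker inspect -f '{{ range .Mounts }}{{ .Name }} -> {{ .Destination }}{{ end }}' two
myvolume -> /foo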

If you are deploying to AWS you should not use bind-mounts, but use named volumes instead (as in my example above), for example;

version: "3.0"

services:
  db:
    image: nginx
    volumes:
      - uploads-data:/usr/share/nginx/html/uploads/

volumes:
  uploads-data:

Which you can run with docker-compose;

docker-compose up -d
Creating network "foo_default" with the default driver
Creating volume "foo_uploads-data" with default driver
Creating foo_db_1