moby: mounting a volume inside a ro volume causes permanent machine mount failure
TL;DR - doing this in your docker-compose.yml:
volumes:
- ./abcd:/abcd:ro
- ./defg:/abcd/defg:ro
will permanently break volume mounts in your current docker-machine and all subsequently created docker-machines on the same host, until the host is rebooted.
Output of docker version:
Client:
Version: 1.11.1
API version: 1.23
Go version: go1.5.4
Git commit: 5604cbe
Built: Tue Apr 26 23:44:17 2016
OS/Arch: darwin/amd64
Server:
Version: 1.12.0
API version: 1.24
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 23:54:00 2016
OS/Arch: linux/amd64
Output of docker info:
Containers: 3
Running: 1
Paused: 0
Stopped: 2
Images: 36
Server Version: 1.12.0
Storage Driver: aufs
Root Dir: /mnt/sda1/var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 59
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null bridge host overlay
Kernel Version: 4.4.16-boot2docker
Operating System: Boot2Docker 1.12.0 (TCL 7.2); HEAD : e030bab - Fri Jul 29 00:29:14 UTC 2016
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 995.9 MiB
Name: newm
ID: ZWTS:ZUF2:VZOL:IDVC:WUWH:J3IK:GCR7:JT4G:QDTV:IYG4:RBBQ:ZC4R
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug mode (client): false
Debug mode (server): true
File Descriptors: 24
Goroutines: 43
System Time: 2016-08-12T10:27:29.323758623Z
EventsListeners: 1
Registry: https://index.docker.io/v1/
Labels:
provider=virtualbox
Additional environment details (AWS, VirtualBox, physical, etc.): Running docker on OSX 10.11.5 locally.
Steps to reproduce the issue:
1 - create docker-compose.yml
version: '2'
services:
crashtestdummy:
container_name: testcont
volumes:
- ./abcd:/abcd:ro
- ./defg:/abcd/defg:ro
# ^^^ this line is the problem, mounting within a ro mount
image: nginx
2 - create test data
mkdir abcd && touch abcd/foo
mkdir defg && touch defg/bar
3 - build
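Presumably run with something like:
docker-compose up   # assumed invocation, matching the "Creating testcont" / "Attaching to" output below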
Pulling crashtestdummy (nginx:latest)...
latest: Pulling from library/nginx
51f5c6a04d83: Pull complete
a3ed95caeb02: Pull complete
51d229e136d0: Pull complete
bcd41daec8cc: Pull complete
Digest: sha256:0fe6413f3e30fcc5920bc8fa769280975b10b1c26721de956e1428b9e2f29d04
Status: Downloaded newer image for nginx:latest
Creating testcont
ERROR: for crashtestdummy oci runtime error: rootfs_linux.go:53: mounting "/mnt/sda1/var/lib/docker/aufs/mnt/db0db52cc8e9b82a60cef552707ffe6e37f738f1fe57efaf87a95cac618edc60/abcd/defg" to rootfs "/mnt/sda1/var/lib/docker/aufs/mnt/db0db52cc8e9b82a60cef552707ffe6e37f738f1fe57efaf87a95cac618edc60" caused "mkdir /mnt/sda1/var/lib/docker/aufs/mnt/db0db52cc8e9b82a60cef552707ffe6e37f738f1fe57efaf87a95cac618edc60/abcd/defg: read-only file system"
Attaching to
4 - create a new machine
$ docker-machine create --driver virtualbox new4
$ eval $(docker-machine env new4)
5 - fix docker-compose.yml
version: '2'
services:
crashtestdummy:
container_name: testcont
volumes:
- ./abcd:/abcd:ro
- ./defg:/defg:ro
image: nginx
6 - build
Pulling crashtestdummy (nginx:latest)...
latest: Pulling from library/nginx
51f5c6a04d83: Pull complete
a3ed95caeb02: Pull complete
51d229e136d0: Pull complete
bcd41daec8cc: Pull complete
Digest: sha256:0fe6413f3e30fcc5920bc8fa769280975b10b1c26721de956e1428b9e2f29d04
Status: Downloaded newer image for nginx:latest
Creating testcont
7 - attach and inspect the mounts
docker exec -it testcont bash
root@c84386f82b63:/# ls -l /abcd
total 0
root@c84386f82b63:/# ls -l /defg
total 0
root@c84386f82b63:/#
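To distinguish a missing bind mount from a genuinely empty host directory, the mount table inside the container can be checked as well (a diagnostic sketch, using the container name from step 1):
docker exec testcont grep -e /abcd -e /defg /proc/mounts   # no matching lines means the binds were never made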
Describe the results you received: All mount points are now broken for this docker machine (and for all docker machines created after this point) until the host system is rebooted. Existing docker machines created before this point continue to work normally. If I fix the docker-compose.yml file, create a new docker machine and repeat the above steps, there are no errors about mount points failing; the volumes simply aren't mounted at all.
Describe the results you expected: Only the second mount point described in the compose file should fail; the other should work. Resolving the issue in the docker-compose.yml file should fix the problem when the container is rebuilt.
Additional information you deem important (e.g. issue happens only occasionally): This is reproducible every time for me.
docker inspect on the second machine shows the mounts should be fine; no errors were reported, but the volumes are simply not mounted.
"Mounts": [
{
"Source": "/data/docker-crash-test/defg",
"Destination": "/defg",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Source": "/data/docker-crash-test/abcd",
"Destination": "/abcd",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
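A quick way to check from the host whether the VirtualBox shared folder is still mounted inside the VM at all (a diagnostic sketch; the machine name new4 is from step 4):
docker-machine ssh new4 "mount | grep vboxsf"   # lists vboxsf shared-folder mounts inside the boot2docker VM; empty output means the share itself is gone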
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Comments: 21 (10 by maintainers)
So that’s the issue; docker cannot create a directory to use as a mount point for the host directory there. Create (empty) directories for those paths in your project and it probably works.
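For this reproduction that means pre-creating the nested mount point on the host (a minimal sketch, using the paths from step 1):
mkdir -p abcd/defg   # the mount point now exists, so docker never has to mkdir inside the read-only bind mount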
It won’t be fixed, as it’s not a bug.
@stampycode access to the docker socket is equal to having root access on the host; for example, something along these lines (an illustrative command):
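docker run -it --rm -v /:/host alpine chroot /host /bin/sh   # illustrative: bind-mounts the host's root filesystem and chroots into it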
gives you a root shell on the host, and this attack vector is described in the security docs: https://docs.docker.com/engine/security/security/#/docker-daemon-attack-surface
@stampycode Any person with access to the docker socket has full root access to the host.