moby: Unable to run Docker 1.0 inside LXC

A lot of people (including me) have had no issues running LXC containers inside an LXC container, but I’m unable to run Docker 1.0 inside one. I’m not sure whether it’s an issue with Docker itself or with Docker’s use of libcontainer.

The content of the config file associated with the host LXC container that I’m testing with is as follows:

# Template used to create this container: /usr/share/lxc/templates/lxc-ubuntu
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)

# Common configuration
lxc.include = /usr/share/lxc/config/ubuntu.common.conf

# Container specific configuration
lxc.rootfs = /var/lib/lxc/dockerception/rootfs
lxc.mount = /var/lib/lxc/dockerception/fstab
lxc.utsname = dockerception
lxc.arch = amd64

# Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.hwaddr = 00:16:3e:ce:da:87

lxc.aa_profile = unconfined
lxc.cgroup.devices.allow = b 7:* rwm
lxc.cgroup.devices.allow = c 10:237 rwm

And then, when SSH’d into the LXC container, running docker -d yields the following output and an exit code of 1:

2014/06/30 20:33:54 docker daemon: 1.0.1 990021a; execdriver: native; graphdriver: 
[c32d6fbe] +job initserver()
[c32d6fbe.initserver()] Creating server
[c32d6fbe] +job serveapi(unix:///var/run/docker.sock)
2014/06/30 20:33:54 Listening for HTTP on unix (/var/run/docker.sock)
[c32d6fbe] +job init_networkdriver()
[c32d6fbe.init_networkdriver()] creating new bridge for docker0
[c32d6fbe.init_networkdriver()] getting iface addr
[c32d6fbe] -job init_networkdriver() = OK (0)
2014/06/30 20:33:54 WARNING: mountpoint not found
Error loading docker apparmor profile: exit status 243 (/sbin/apparmor_parser: Unable to replace "docker-default".  Permission denied; attempted to load a profile while confined?
Warning failed to create cache: docker
)
[c32d6fbe] -job initserver() = ERR (1)
2014/06/30 20:33:55 )

Any idea what’s going on?

This is the output of uname -a:

Linux dockerception 3.13.0-29-generic #53-Ubuntu SMP Wed Jun 4 21:00:20 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

For [sudo] docker version:

Client version: 1.0.1
Client API version: 1.12
Go version (client): go1.2.1
Git commit (client): 990021a
2014/06/30 21:03:37 Cannot connect to the Docker daemon. Is 'docker -d' running on this host?

And for [sudo] docker -D info:

2014/06/30 21:04:23 Cannot connect to the Docker daemon. Is 'docker -d' running on this host?

About this issue

  • State: closed
  • Created 10 years ago
  • Comments: 30 (9 by maintainers)

Most upvoted comments

Reproducing (a consolidated shell sketch of these steps follows the list):

  1. Get a host with Ubuntu 14.04.
  2. Install LXC. Version 1.0.5
  3. Create a container. Let’s say an Ubuntu 15.04 one. lxc-create -t download -n YOURNAME -- -d ubuntu -r vivid -a amd64
  4. Edit the /var/lib/lxc/YOURNAME/config file and add the lines lxc.cgroup.devices.allow = a, lxc.mount.auto = cgroup and lxc.aa_profile = unconfined (throwing all security overboard)
  5. Start the container. lxc-start -n YOURNAME -d
  6. Attach to it. lxc-attach -n YOURNAME
  7. Install that latest and greatest docker. curl -sSL https://get.docker.com/ubuntu/ | sudo sh
  8. Notice that docker won’t start. Read why in /var/log/upstart/docker
  9. Install apparmor. apt-get install apparmor. Notice the Permission denied error, see also point 10.
  10. Running docker -d now works. Using service docker start yields yet another error in the log: /sbin/apparmor_parser: Unable to replace "docker-default". Permission denied; attempted to load a profile while confined?
  11. I can create a container now with the manually started Docker instance. docker run -i -t ubuntu /bin/bash
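
For reference, here is roughly what those steps look like as one shell session. This is only a sketch: the container name dockerception is an example, and the config lines are the ones from step 4.

# On the Ubuntu 14.04 host
apt-get install -y lxc
lxc-create -t download -n dockerception -- -d ubuntu -r vivid -a amd64

# Step 4: relax confinement (throws all security overboard)
cat >> /var/lib/lxc/dockerception/config <<EOF
lxc.aa_profile = unconfined
lxc.mount.auto = cgroup
lxc.cgroup.devices.allow = a
EOF

lxc-start -n dockerception -d
lxc-attach -n dockerception

# Inside the container (steps 7-10)
curl -sSL https://get.docker.com/ubuntu/ | sudo sh
apt-get install -y apparmor
docker -d    # works; 'service docker start' still logs the apparmor_parser error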

I think this is still broken and this ticket should be reopened. I’ve installed cgroup-lite too, to no avail, and using Ubuntu 14.04 as the LXC container didn’t work either. Also, removing all the security is rather blunt.

I’m wondering why the docker package doesn’t depend on apparmor and cgroup-lite BTW.

@VictorArgote: This is all I need to do after a default ‘lxc-create -t download …’:

cat >>/var/lib/lxc/$HOST/config <<EnD
lxc.aa_profile = unconfined
lxc.cgroup.devices.allow = a
lxc.cap.drop =
EnD

Then populate/mount ‘/sys/fs/cgroup’ directories with cgroupfs_mount. Works well for me on Trusty with 1.0.7 and 1.1.0. I’ve not tried 1.1.1 yet.
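
For anyone wondering, cgroupfs_mount refers to the helper shipped with Docker’s init scripts (and the cgroupfs-mount package); on a cgroup-v1 kernel it boils down to something like this sketch, run as root inside the container:

# Mount a tmpfs at /sys/fs/cgroup, then one cgroup hierarchy per enabled controller
mountpoint -q /sys/fs/cgroup || mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
cd /sys/fs/cgroup
for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
    mkdir -p $sys
    mountpoint -q $sys || mount -n -t cgroup -o $sys cgroup $sys || rmdir $sys
done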

I think LXC defaults to using lxcfs in certain circumstances, which doesn’t help docker. Make sure ‘cat /proc/1/mountinfo’ shows actual cgroup mounts. For example:

141 167 0:25 / /sys/fs/cgroup/cpu rw,relatime - cgroup cgroup rw,cpu,release_agent=/run/cgmanager/agents/cgm-release-agent.cpu

I think the key to getting Docker working is to make sure that /sys/fs/cgroup has the right stuff in it, with the right permissions, and that those are actual cgroup mounts rather than tmpfs mounts (or whatever lxcfs does).
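
A quick way to check from inside the container is to look at the mount table and inspect the filesystem type:

# The field after the '-' separator is the filesystem type; it should read 'cgroup',
# not 'tmpfs' or 'fuse.lxcfs'
grep '/sys/fs/cgroup' /proc/1/mountinfo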

EDIT: And I use the default docker execution driver, not lxc.

Thanks @haarts, for the ‘lxc.cgroup.devices.allow = a’. I also needed ‘lxc.cap.drop =’ to get it to work without any errors (or cheats).

Is there any way to make this more secure?

@echinthaka, for example, on Proxmox 4.0 you do it like this:

root@pve:/usr/share/lxc/config/common.conf.d# mv 00-lxcfs.conf 00-lxcfs.conf.disabled

and then restart your LXC instance.
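
If you manage the container with Proxmox’s pct tool, restarting it looks roughly like this (100 is only an example container ID):

pct stop 100
pct start 100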

I fixed it by first installing Ubuntu’s lxc package, and then ensuring that the Docker daemon’s execution driver is LXC, by running:

docker -d --exec-driver=lxc

And the daemon starts up like a charm. In fact, I can spin up new Docker containers without a problem.
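
Spelled out, the sequence inside the container is roughly the following; the docker info line is just one way to confirm which driver is active:

apt-get install -y lxc                    # the lxc exec driver needs the lxc userspace tools
docker -d --exec-driver=lxc &             # start the daemon with the lxc driver instead of native
docker info | grep 'Execution Driver'     # should report lxc-<version> rather than native-0.2
docker run -i -t ubuntu /bin/bash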

If you want this use case to work, you will have to help find a solution.

My bad. I only left this issue open because I thought there may genuinely be people who want to run Docker from inside an LXC container with only the default settings (i.e. running the Docker daemon with docker -d and no additional flags).

I personally don’t need Docker to be run using the default settings from inside an LXC; at least not yet.