moby: [graphdriver] prior storage driver "aufs" failed: invalid argument

When trying the docker 1.7 package (from the Docker-maintained repo) on Ubuntu 14.04, I now get the following error on boot:

INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
ERRO[0000] [graphdriver] prior storage driver "aufs" failed: invalid argument
FATA[0000] Error starting daemon: error initializing graphdriver: invalid argument
/var/run/docker.sock is up

This led me to bug #7321, because we use btrfs for /var/lib/docker. I’ve since changed it to ext4, but I’m still getting the errors above.

root@ip-10-128-16-91:/etc/init# uname -a
Linux ip-10-128-16-91 3.13.0-55-generic #92-Ubuntu SMP Sun Jun 14 18:32:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
root@ip-10-128-16-91:/etc/init# mount | grep docker
/dev/xvdh on /var/lib/docker type ext4 (rw)
root@ip-10-128-16-91:/etc/init# docker --version
Docker version 1.7.0, build 0baf609
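
For anyone hitting this, a minimal diagnostic sketch before deleting anything (assuming the default graph root): the daemon picks up the “prior” storage driver from the state a previous run left under /var/lib/docker, which is why removing the aufs directory, as discussed below, makes the error go away.

    # List leftover graphdriver state; an aufs/ directory here is what makes
    # the daemon try (and fail) to initialize the aufs driver.
    ls /var/lib/docker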

About this issue

  • State: closed
  • Created 9 years ago
  • Reactions: 1
  • Comments: 67 (18 by maintainers)

Most upvoted comments

Had the same issue when upgrading to kernel 4.0.x on Ubuntu, where aufs is no longer supported; rm -rf /var/lib/docker/aufs did the trick.

CAUTION! rm -rf /var/lib/docker/aufs also deletes all docker containers…

@zave the “fix” mentioned basically just gets rid of an actual error: the fact that you previously ran docker with the aufs driver, but the driver no longer works. Removing the aufs directory just nukes your containers and images so that docker picks the next available driver (which may be vfs in a worst-case scenario).

The linux-image-extra package is tied to your kernel version, so it has to be reinstalled after upgrading the kernel; see https://docs.docker.com/engine/installation/linux/ubuntulinux/#prerequisites-by-ubuntu-version for how to install it.

@PhE you have a completely different error message, and as such a different issue. Most likely you upgraded your kernel and did not install that kernel’s extras package, which includes the aufs kernel modules.

This happens to me all the time. It seems like it happens after running dist-upgrades in Ubuntu, and I always have to go in and run this:

sudo apt-get install linux-headers-$(uname -r) linux-image-extra-$(uname -r)

My issue came from a kernel upgrade. I was running:

    # apt-get -y update
    # apt-get -y upgrade
    # apt-get -y dist-upgrade
    # apt-get -y install aufs-tools openjdk-7-jre curl wget git vim make php-pear \
php5-dev php5-curl python mc gawk ssh grep sudo htop nmon mysql-client php5-cli \
sqlite3 sysstat sysdig linux-headers-$(uname -r) lxc bsdtar
    # apt-get -y install linux-image-extra-$(uname -r)
    # modprobe aufs
    # apt-get clean

So I think uname -r was returning the currently running kernel, not the new kernel installed by the dist-upgrade, so the extras package was installed for the wrong version and aufs wasn’t working properly.
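
If that is the cause, a hedged sketch of one way around it, assuming installed kernels show up as directories under /lib/modules (the package name pattern and that layout are assumptions here, not something this thread confirms):

    # Install the extras package for the newest *installed* kernel rather than
    # the one currently running; uname -r keeps reporting the old kernel until
    # you reboot into the new one.
    NEWEST_KERNEL=$(ls /lib/modules | sort -V | tail -n 1)
    sudo apt-get -y install "linux-image-extra-${NEWEST_KERNEL}"
    sudo reboot   # boot the new kernel so the matching aufs module can load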

Had the same issue. Was resolved after:

   sudo rm -rf /var/lib/docker/aufs
   sudo service docker start

Solution found there: http://log.rowanto.com/broken-docker-in-debian-jessie/

I’m running Linux Mint 18 and I’ve updated from kernel 4.4.0 to 4.8.11 (29.11.2016).

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2016-11-29 13:50:20 CET; 8s ago
     Docs: https://docs.docker.com
 Main PID: 9128 (code=exited, status=1/FAILURE)

Nov 29 13:50:19 XPS systemd[1]: Starting Docker Application Container Engine...
Nov 29 13:50:19 XPS dockerd[9128]: time="2016-11-29T13:50:19.400041011+01:00" level=info msg="libcontainerd: new containerd process, pid: 9135"
Nov 29 13:50:20 XPS dockerd[9128]: time="2016-11-29T13:50:20.406203559+01:00" level=error msg="[graphdriver] prior storage driver \"aufs\" failed: driver not supported"
Nov 29 13:50:20 XPS dockerd[9128]: time="2016-11-29T13:50:20.406360943+01:00" level=fatal msg="Error starting daemon: error initializing graphdriver: driver not supported"
Nov 29 13:50:20 XPS systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 13:50:20 XPS systemd[1]: Failed to start Docker Application Container Engine.
Nov 29 13:50:20 XPS systemd[1]: docker.service: Unit entered failed state.
Nov 29 13:50:20 XPS systemd[1]: docker.service: Failed with result 'exit-code'.

This helped me out:

 sudo rm -rf /var/lib/docker/aufs
 sudo service docker start

@jocull correct; if a dist-upgrade upgrades the kernel, chances are you need to install the correct version of the linux-image-extra package for the updated kernel. I think that installing the linux-image-extra-virtual package would help in that situation (and was recently added to the docs, to match what the install script installs; https://github.com/docker/docker/pull/25614)
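
For completeness, a short sketch of what those docs suggest (standard Ubuntu package names; adjust to your kernel flavour):

    # Install the extras for the running kernel plus the -virtual metapackage,
    # so the extras (and thus the aufs module) follow future kernel upgrades.
    sudo apt-get update
    sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual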

My experience shows docker doesn’t like to run in the root directory of a filesystem, meaning /var/lib/docker (or whatever is specified by -g) cannot be the root directory of an EBS volume or of an LVM volume on EBS. Giving docker one more level of directory in the same volume through -g solved the problem.

I have Docker version 1.10.2, build c3959b1 and ubuntu 14.04, Linux 3.13.0-79

# df
Filesystem                     1K-blocks    Used Available Use% Mounted on
...
/dev/mapper/data-docker         10190136   23144   9626320   1% /data/docker

DOCKER_OPTS="-g /data/docker/docker" works but DOCKER_OPTS="-g /data/docker" doesn’t.
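
A minimal sketch of that layout, using the same paths as the df output above (the /etc/default/docker location is an assumption based on the Ubuntu packaging):

    # Keep the graph root one level below the volume's mount point.
    sudo mkdir -p /data/docker/docker
    echo 'DOCKER_OPTS="-g /data/docker/docker"' | sudo tee -a /etc/default/docker
    sudo service docker restart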

Be aware that sudo rm -rf /var/lib/docker/aufs removes all existing images and containers. Basically it’s a “factory reset”
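
If you care about the images, a hedged sketch of one way to keep them, assuming the daemon can still be started somewhere (for example by booting back into the old kernel where aufs still loads); the image name is just a placeholder:

    # Export the images you want to keep, wipe the aufs state, then re-import.
    docker save -o /tmp/myimage.tar myimage:latest
    sudo rm -rf /var/lib/docker/aufs
    sudo service docker start
    docker load -i /tmp/myimage.tar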

So, I had this bite me again today. I was able to dig in some more. Here’s some background for what happens on my servers:

  1. I build an AMI using ansible that installs most of my non-server-specific software. This includes the docker ubuntu package, which comes with startup scripts.
  2. When I boot various servers, I check whether there is a drive mounted at /dev/xvdh. If there is, it is assumed to be for /var/lib/docker. I then:
       • shut down docker (using the upstart init script)
       • initialize (mkfs) the device
       • mount it as /var/lib/docker
       • start docker

The issue seems to be that sometimes, when stopping docker, it fails to unmount not only /var/lib/docker/aufs but also a subdirectory (in this last case: /var/lib/docker/aufs/mnt/ebe69f270843f32a24b4c16343f8c312897d38128bd36df8b25cfe4442a1de3e).

Since I’ve already mounted the new device over /var/lib/docker, I can’t unmount the leftover mounts underneath it. I was able to fix it manually by doing the following:

  1. stop docker (using the init script, though it was already down from crashing repeatedly due to this bug).
  2. umount /var/lib/docker
  3. umount /var/lib/docker/aufs/mnt/ebe69f270843f32a24b4c16343f8c312897d38128bd36df8b25cfe4442a1de3e
  4. umount /var/lib/docker/aufs
  5. mount /var/lib/docker
  6. start docker (using the init script)
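
Those manual steps, collapsed into a hedged one-shot sketch (device and paths as in this setup; it assumes /var/lib/docker has an fstab entry so a bare mount works, and that docker is stopped):

    # Expose and clear leftover aufs mounts hiding under the new mount,
    # then remount the data volume and bring docker back up.
    umount /var/lib/docker
    grep ' aufs ' /proc/mounts | awk '{print $2}' | sort -r | xargs -r -n 1 umount
    umount /var/lib/docker/aufs 2>/dev/null || true
    mount /var/lib/docker
    service docker start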

At this point I’m not sure how I can deal with this in a safe way without having to get manually involved, or writing a script to handle it. My current workflow, in ansible, goes like this:

- service: name=docker state=stopped
  when: new_docker_fs|changed
# Make absolutely sure aufs doesn't exist or docker freaks
# https://github.com/docker/docker/issues/14026
- shell: /bin/umount -t aufs -a -f
  ignore_errors: yes
  when: new_docker_fs|changed
- shell: /bin/umount -f /var/lib/docker/aufs
  ignore_errors: yes
  when: new_docker_fs|changed
- mount: name=/var/lib/docker src=/dev/xvdh fstype=ext4 state=mounted
  when: new_docker_fs|changed
- file: path=/var/lib/docker/aufs state=absent
  ignore_errors: yes
  when: new_docker_fs|changed

The change is that I now call out to the umount command directly and try to force the host to unmount every aufs mount it has. I’m not sure this will actually work, but I thought I’d at least give an update on where I am today in case this bites anyone else, and in case it gives @cpuguy83 any other info that might help.

see https://gist.github.com/parente/025dcb2b9400a12d1a9f

This is another example of Docker having major showstopping issues with one of the most widely-used distros (see #3182).

@cpuguy83

root@*:~# stop docker
stop: Unknown instance:
root@*:~# df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       8115168 1782524   5897368  24% /
none                   4       0         4   0% /sys/fs/cgroup
udev             3816100      12   3816088   1% /dev
tmpfs             765956     356    765600   1% /run
none                5120       0      5120   0% /run/lock
none             3829776       0   3829776   0% /run/shm
none              102400       0    102400   0% /run/user
/dev/xvdh      104857600     512 102731520   1% /var/lib/docker
root@*:~# umount /var/lib/docker
root@*:~# umount /var/lib/docker/aufs
root@*:~# df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       8115168 1782532   5897360  24% /
none                   4       0         4   0% /sys/fs/cgroup
udev             3816100      12   3816088   1% /dev
tmpfs             765956     356    765600   1% /run
none                5120       0      5120   0% /run/lock
none             3829776       0   3829776   0% /run/shm
none              102400       0    102400   0% /run/user
root@*:~# mount /var/lib/docker
root@*:~# start docker
docker start/running, process 25124
root@*:~# docker ps
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
root@*:~#

Docker starts correctly and works if I umount /var/lib/docker. We use it with an external EBS volume.

@Kelindar, thanks. I gave you a thumbs up and a thumbs down, because I think moving is better than deleting:

sudo mv /var/lib/docker/aufs /root/var-lib-docker-aufs-backup`date +%Y-%m-%d`

Yes, if they upgraded the kernel and did not install the required kernel modules after that, then aufs would no longer start. Docker does not automatically pick the next available driver in that situation, because then you won’t see your existing images and containers, so it deliberately fails to start to give you an option to resolve the situation.
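
If you would rather switch drivers than repair aufs (accepting that existing aufs-backed images and containers will no longer be visible), a hedged sketch for the upstart-based Ubuntu packaging, on the assumption that the packaged init script sources /etc/default/docker; the systemd variant follows in the next comment:

    # Tell the daemon to use the overlay driver instead of probing for aufs.
    echo 'DOCKER_OPTS="--storage-driver=overlay"' | sudo tee -a /etc/default/docker
    sudo service docker restart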

aufs is not available in the latest kernels, so you can use overlayfs instead. I just edited /etc/systemd/system/docker.service.d/docker.conf this way:

[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --storage-driver=overlay

Then do:

# systemctl stop docker.service
# systemctl daemon-reload
# systemctl start docker.service
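
A quick check that the daemon came up with the new driver (docker info reports the active storage driver):

    # Should print something like "Storage Driver: overlay".
    docker info | grep -i 'storage driver'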

I had the same problem after updating to 1.12.1 and rebooting. I solved it by removing /var/lib/docker/overlay (or just renaming it to something else, just in case) and restarting docker. I have no idea why it worked, but I’m glad it did.

Can also confirm @Kelindar’s solution worked for me. Debian stretch.

+1 to @Kelindar. I also had the same problem after upgrading to 4.0. But now all my containers are gone 😦

I don’t think there is any way to get those back?

Thanks @Kelindar! I also updated the linux kernel on Ubuntu and removing aufs saved me. 😃

In the end it looks like we had to manually remove /var/lib/docker/aufs after mounting the newly formatted external volume and starting docker 1.7. I’m not sure why it gets created in the first place, or what is creating it.

We also made sure to unmount /var/lib/docker/aufs before we mounted the new volume. I’m not entirely sure which of our steps finally fixed it for us, but I hope to have some time to narrow it down.