moby: Slow IO performance inside container compared with the host.

Output of docker version:

docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:
 OS/Arch:      linux/amd64

Output of docker info:

docker info
Containers: 2
Images: 3
Server Version: 1.9.1
Storage Driver: devicemapper
 Pool Name: docker-254:1-4458480-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem:
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 2.52 GB
 Data Space Total: 107.4 GB
 Data Space Available: 96.38 GB
 Metadata Space Used: 2.081 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.145 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.03.01 (2011-10-15)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.6-2-desktop
Operating System: openSUSE 13.2 (Harlequin) (x86_64)
CPUs: 4
Total Memory: 7.722 GiB
Name: gustavo-host
ID: MRRI:5WIP:JOYH:4KZT:BVMU:HMMR:4BL6:6NKP:VM5H:36AN:6LFR:YHK7
WARNING: No swap limit support

Additional environment details (AWS, VirtualBox, physical, etc.):

  • Physical Environment (8GB RAM, Core i5-4590, ext4)

Steps to reproduce the issue:

  1. Install openSuSE 13.2
  2. Install docker
  3. Run the Iometer benchmark on the host
  4. Run the Iometer benchmark in a container based on SUSE, using a data volume as the partition on which to execute the benchmark (a hedged example is sketched after this list)
  5. Compare results
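
Here is a minimal sketch of step 4, assuming the image is called "opensuse", the host directory /workspace is used as the data volume, and dd stands in for the Iometer workload (all of these are assumptions; the original report does not show the exact command used):

# run a simple disk workload inside the container, writing to the bind-mounted volume
mkdir -p /workspace
docker run --rm -v "/workspace:/workspace" opensuse \
    bash -c "dd if=/dev/zero of=/workspace/test.img bs=1M count=256 oflag=direct"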

Describe the results you received: Below is a table showing the results (attached as an image). For all tests, the performance inside the container was around 40% of the performance of the host.

Describe the results you expected: I expected the performance of the container to be closer to that of the host.

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Reactions: 4
  • Comments: 62 (30 by maintainers)

Most upvoted comments

Here are the test results. I attached an external 1 GB disk for this.

  1. ext3, data=ordered
echo y | mkfs.ext3 /dev/sdb
mount /dev/sdb /workspace
docker run --rm --net=host --read-only -v "/workspace:/workspace" opensuse bash -c "time dd if=/dev/zero of=/workspace/image.img bs=512 count=1000 oflag=dsync"
time dd if=/dev/zero of=/workspace/image.img bs=512 count=1000 oflag=dsync
umount /workspace

Results: 31.8 kB/s on docker and 293 kB/s on host

  2. ext3, data=journal
echo y | mkfs.ext3 /dev/sdb
mount /dev/sdb /workspace -o data=journal
docker run --rm --net=host --read-only -v "/workspace:/workspace" opensuse bash -c "time dd if=/dev/zero of=/workspace/image.img bs=512 count=1000 oflag=dsync"
time dd if=/dev/zero of=/workspace/image.img bs=512 count=1000 oflag=dsync
umount /workspace

Results: 406 kB/s on docker and 409 kB/s on host

  3. ext2
echo y | mkfs.ext2 /dev/sdb
mount /dev/sdb /workspace
docker run --rm --net=host --read-only -v "/workspace:/workspace" opensuse bash -c "time dd if=/dev/zero of=/workspace/image.img bs=512 count=1000 oflag=dsync"
time dd if=/dev/zero of=/workspace/image.img bs=512 count=1000 oflag=dsync
umount /workspace

Results: 757 kB/s on docker and 722 kB/s on host

  4. btrfs
mkfs.btrfs -f /dev/sdb
mount /dev/sdb /workspace
docker run --rm --net=host --read-only -v "/workspace:/workspace" opensuse bash -c "time dd if=/dev/zero of=/workspace/image.img bs=512 count=1000 oflag=dsync"
time dd if=/dev/zero of=/workspace/image.img bs=512 count=1000 oflag=dsync
umount /workspace

Results: 330 kB/s on docker and 317 kB/s on host

There was not much difference between running on the host and inside a container.
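
The four runs above follow the same pattern, so they can be scripted. A sketch, assuming /dev/sdb is a disposable scratch disk whose contents may be destroyed and that the opensuse image is available locally:

#!/bin/bash
# Repeat the dd comparison above for each filesystem, on the host and in a container.
set -e
run_test() {
    local label="$1"; shift
    mount "$@" /dev/sdb /workspace
    echo "== $label (container) =="
    docker run --rm --net=host --read-only -v "/workspace:/workspace" opensuse \
        bash -c "dd if=/dev/zero of=/workspace/image.img bs=512 count=1000 oflag=dsync"
    rm -f /workspace/image.img
    echo "== $label (host) =="
    dd if=/dev/zero of=/workspace/image.img bs=512 count=1000 oflag=dsync
    umount /workspace
}
mkdir -p /workspace
echo y | mkfs.ext3 /dev/sdb && run_test "ext3 data=ordered"
echo y | mkfs.ext3 /dev/sdb && run_test "ext3 data=journal" -o data=journal
echo y | mkfs.ext2 /dev/sdb && run_test "ext2"
mkfs.btrfs -f /dev/sdb      && run_test "btrfs"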

Basically, there should be no impact; when using a bind-mounted directory or a volume, there is nothing between the process and the disk, it is just a mounted directory. The only thing Docker can do is set a constraint (these are disabled by default), such as:

--device-read-bps=[]          Limit read rate (bytes per second) from a device (e.g., --device-read-bps=/dev/sda:1mb)
--device-read-iops=[]         Limit read rate (IO per second) from a device (e.g., --device-read-iops=/dev/sda:1000)
--device-write-bps=[]         Limit write rate (bytes per second) to a device (e.g., --device-write-bps=/dev/sda:1mb)
--device-write-iops=[]        Limit write rate (IO per second) to a device (e.g., --device-write-iops=/dev/sda:1000)
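
For illustration only (these limits were not used in this issue), a write cap could be applied like this, assuming the volume lives on /dev/sdb:

# throttle writes from the container to /dev/sdb
docker run --rm -v "/workspace:/workspace" \
    --device-write-bps /dev/sdb:1mb \
    --device-write-iops /dev/sdb:100 \
    opensuse bash -c "dd if=/dev/zero of=/workspace/image.img bs=512 count=1000 oflag=dsync"

Without such flags, container I/O through a bind mount or volume is not throttled.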