moby: Windows image build Docker runs out of memory on Windows 10

I’m opening this issue as a bug because, from what I’ve read, Docker is not supposed to enforce memory or disk space limits during a container’s build or run actions.

I have a Dockerfile that does a lot of work during the build and that I can’t really share to help with reproducing. This Dockerfile behaves differently on different hosts: on one of them the build runs out of memory, which contradicts the above. To help troubleshoot the issue I added a command that reports the free memory just before the build runs out of memory:

(Get-Counter -Counter "\Memory\Available MBytes").CounterSamples[0].CookedValue
  1. On my Windows 10 host (my workstation laptop) the container fails to build. The workstation has 16GB of memory. The reported free memory before the out-of-memory crash is 200MB.
  2. On a Windows Server 2016 Hyper-V instance hosted on my workstation the container builds successfully. The Hyper-V instance is assigned 4GB of memory. The reported free memory at the same point in the build is 538MB.
  3. On a Windows Server 2016 host on Azure the container builds successfully. The Azure VM is running with 7GB. The reported free memory at the same point in the build is 3000MB.
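For context, the memory check was added as a build step; a minimal sketch of such a step (the rest of the Dockerfile is omitted, so this fragment is illustrative only):

```dockerfile
# Hypothetical build step: print available memory just before the step that fails
RUN powershell -Command "(Get-Counter -Counter '\Memory\Available MBytes').CounterSamples[0].CookedValue"
```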

Each of the hosts reports the following version:

1. Windows 10 Host

Client:
 Version:      17.03.0-ce
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   60ccb22
 Built:        Thu Feb 23 10:40:59 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.03.0-ce
 API version:  1.26 (minimum version 1.24)
 Go version:   go1.7.5
 Git commit:   60ccb22
 Built:        Thu Feb 23 10:40:59 2017
 OS/Arch:      windows/amd64
 Experimental: true

2. Windows 2016 Host on Hyper-V

Client:
 Version:      17.03.0-ee-1
 API version:  1.26
 Go version:   go1.7.5
 Git commit:   9094a76
 Built:        Wed Mar  1 00:49:51 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.03.0-ee-1
 API version:  1.26 (minimum version 1.24)
 Go version:   go1.7.5
 Git commit:   9094a76
 Built:        Wed Mar  1 00:49:51 2017
 OS/Arch:      windows/amd64
 Experimental: false

3. Windows 2016 Host on Azure

Client:
Version:      1.12.2-cs2-ws-beta
API version:  1.25
Go version:   go1.7.1
Git commit:   050b611
Built:        Tue Oct 11 02:35:40 2016
OS/Arch:      windows/amd64

Server:
Version:      1.12.2-cs2-ws-beta
API version:  1.25
Go version:   go1.7.1
Git commit:   050b611
Built:        Tue Oct 11 02:35:40 2016
OS/Arch:      windows/amd64

I understand I haven’t provided all the information necessary to reproduce, but the artifacts referenced in the container are not freely available. What I can additionally say is that the same build succeeds on the Windows 10 host (case 1) when building on microsoft/windowsservercore:latest but fails when building on asarafian/mssql-server-windows-express:2014SP2. The difference between the two is the extra disk space required for SQL Server 2014SP2 and the memory that the SQL Server process takes. Keep in mind that the SQL Server has one very small database attached, so it’s strange that it makes that big of a difference.

I’m more than willing to help troubleshoot this issue, but I need some guidance on how. My feeling is that there is a difference in how Docker and containers behave between Windows 10 and Windows Server hosts. The Windows 10 machine has the most memory of all the hosts and reports 8-9GB free when the out-of-memory error is thrown. On the other hand, Windows Server 2016 manages better with less memory available to the host. As there is a difference in setting up Docker for Windows 10 versus Windows Server, is it possible that there are some limitations specific to Windows 10? If so, then I would consider a documentation fix to be this bug’s resolution, because I can’t find any relevant information besides how to limit the memory available to a container.

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 23 (13 by maintainers)

Most upvoted comments

I agree that it is completely unclear that containers ONLY work with Hyper-V isolation on Windows 10, and thus have the “hidden” 1GB memory limit.

Can I also assume we are probably missing a CPU usage/cores limit if Hyper-V is involved? Is there a 1-core limit by default?
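One way to confirm which isolation mode Docker actually applied to a container (a sketch; the container name is hypothetical, and the field may report “default” when no mode was given explicitly):

```shell
# Inspect the isolation mode recorded in the container's HostConfig
docker inspect --format "{{.HostConfig.Isolation}}" mycontainer
```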

Hold on, there are three completely separate scenarios here from the added me-toos. Let’s try to keep them isolated and not overload one issue. From a pure moby/moby perspective:

a) Windows containers running on Server OSes using the default “process” isolation.
b) Windows containers running on Client OSes using the default (and only available) “Hyper-V” isolation, or running on Server OSes using Hyper-V isolation (i.e. --isolation=hyperv).
c) Linux containers on Windows (aka LCOW), which are by definition Hyper-V isolation using a ‘utility’ VM.

There’s also d) which is the Docker-For-Windows non-experimental “LCOW” where a “real” or “not ‘utility’” Linux VM is used to host containers running on Windows. I’m not going to comment on that mode in which D4W runs as it’s a closed-source solution owned/maintained by Docker Inc.

On top of that, there’s docker compose which is a different beast entirely, and I’m also not going to comment on. Any issues there need to be addressed in that repo.

a) The -m option should work fine.
b) The -m option should work fine - here, the memory limit is applied to the utility VM hosting the container. The container itself is not constrained inside the UVM.
c) The -m option (along with many other parameters) is not hooked up and defaults to 1GB. Note also that LCOW is experimental (requires the daemon to be started with --experimental) and is NOT production ready. Many pieces remain missing from this still.
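For case b), this means -m can be used to raise, not just cap, the memory a Windows 10 (Hyper-V isolated) container sees, since it sizes the utility VM. A sketch (the image and size are illustrative examples, not from this thread):

```shell
# Hyper-V isolation: -m sizes the utility VM, so 4g here lifts the 1GB default
docker run --rm -m 4g --isolation=hyperv microsoft/windowsservercore powershell -Command "(Get-Counter '\Memory\Available MBytes').CounterSamples[0].CookedValue"
```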

For c), while you can build a private daemon to set the memory, the ultimate answer isn’t that simple. There really need to be two “memory” CLI/API parameters - one for the size of the utility VM, one to apply to the container (or containers from RS5+) running inside the utility VM. This work is still undefined and requires agreement from Docker maintainers on how it should look. HCS (the underlying interface to the Windows platform) is capable of supporting this, and I’m doing work to enable this in the Go binding for HCS (HCSShim) for RS5 support. It will still be a while before agreement is reached on what any Docker (moby) API/CLI should look like for this. An interim PR (I can dig out the link, but it should be easy enough to find) to force -m to refer to both the UVM and container memory was not accepted and remains pending.

In other words, the only workaround for c) is to build your own private dockerd.exe.

+1 - This caused days of head scratching, wondering why our containers were near-constantly sluggish when the host machine had gigabytes of memory free and we had not specified any limits on the containers. Adding mem_limit=4G to the service in the compose file fixed this.

The appropriate fix would be to remove the magical 1GB limit. Until then at least update the documentation which states contradictory information:

By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel scheduler will allow

I would like to clarify a small difference in the -m/--memory option for an out-of-the-box configuration between Client and Server OSes and their respective default isolation modes.

  • With a Server OS and the default isolation mode (process), it is only used to limit the memory available to the container.
  • With a client OS and the default isolation mode (hyperv), it would be used to increase the available memory, as the default of 1GB (I presume) is already too low, especially for microsoft/windowsservercore based images.

This is one of the reasons I was initially confused. I had read the general statements that

  • docker doesn’t enforce memory restrictions by default.
  • -m/--memory enforces optional memory restrictions.

And I made the logical deduction that something was wrong and created this issue. In my opinion, the documentation needs to improve a bit to make it clearer when, as a developer, you are working with Windows containers on a client OS, e.g. Windows 10.

Sorry to add noise… I’ve been fishing on various issues for any information I could find, or for anyone who could give me feedback. Thank you @jhowardmsft for detailing the scenarios so well, and confirming my suspicion about c). Can anyone point me at information about d), the “non-experimental” LCOW? Google might have a hard time with that…

… I’m only concerned with Linux images on Windows Server, since I’m unfortunate enough to have a customer who will only sysadmin Windows Server.

(I won’t comment on the failure of D4W to conform to the principle of least surprise. I lied. I’d submit that -m should be the only API needed, and the user should be informed of the behavior/consequences in the documentation. No one expects Windows to behave like Linux, and I’m sure we all have a hunch that the Windows kernel will be replaced with a custom Linux kernel eventually, so why add a second memory API to accommodate a temporary situation? Just say “in situation c), -m will allocate fixed memory in the utility VM; the default is 1GB.” – The situation now is that -m does nothing and containers run out of memory and crash.)

Also, @jhowardmsft, is this the PR you mention?

What’s the status on this? Is there some workaround I’m missing? I’m on Windows Server Core 1709, running Docker EE preview. Neither the -m nor the --memory parameter works for my Linux images (with or without LCOW enabled).

PS C:\> docker run -it -m 2g --rm busybox free -m
             total       used       free     shared    buffers     cached
Mem:           972        152        819         20          0         20
-/+ buffers/cache:        132        840

I have an image that peaks at 1.5 GB of memory usage (when I docker stats it on a Linux host). On Docker for Windows EE, the container crashes with an out-of-memory error (obviously due to only 972 MB being available).

Not sure where to post my question, because it involves both docker-compose and this memory issue.

I’m trying to increase the available memory of a container on Windows 10 using docker-compose.yml, like @FFLSH did:

version: '3.1'
...
deploy:
  resources:
    limits:
      memory: 3g

The problem is that when I check the total memory inside the container, it reports 1 GB. So what is wrong here: is this setting not mapped to the docker run --memory option, or am I using it in the wrong way?
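One thing worth ruling out (an assumption on my part, separate from the Windows 1GB default): in a version 3 compose file, deploy.resources.limits is honored by docker stack deploy (swarm mode) but ignored by a plain docker-compose up, unless a Compose release with the --compatibility flag is used. A version 2.x file applies the limit directly via mem_limit, as an earlier comment here reported working (the service name below is hypothetical; the image is the one from this thread):

```yaml
# Version 2.x style: mem_limit is applied by plain `docker-compose up`
version: '2.1'
services:
  db:
    image: asarafian/mssql-server-windows-express:2014SP2
    mem_limit: 3g
```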

> docker version
Client:
 Version:      17.06.1-ce
 API version:  1.30
 Go version:   go1.8.3
 Git commit:   874a737
 Built:        Thu Aug 17 22:48:20 2017
 OS/Arch:      windows/amd64

Server:
 Version:      17.06.1-ce
 API version:  1.30 (minimum version 1.24)
 Go version:   go1.8.3
 Git commit:   874a737
 Built:        Thu Aug 17 23:03:03 2017
 OS/Arch:      windows/amd64
 Experimental: true