moby: OOM when following "local" logs of high-log-output container

Description

When a container logs to the local driver, dockerd quickly runs out of memory while following the logs of a container with high log output.

This also occurs when following logs while using a driver such as syslog that supports dual logging (in which case the logs are read back through the local driver's cache).

This issue appears to be similar or identical to #39963.

Steps to reproduce the issue:

  1. Run a container that logs heavily to the local driver:

    ID=$(docker run --rm -d --log-driver local alpine cat /dev/urandom)
    
  2. Follow the container’s logs:

    docker logs -f $ID > /dev/null
    

Alternatively:

  1. Run a container that logs heavily to a driver that supports dual logging:

    ID=$(docker run --rm -d --log-driver syslog --log-opt syslog-address=udp://127.0.0.1:0 alpine cat /dev/urandom)
    
  2. Follow the container’s logs:

    docker logs -f $ID > /dev/null
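While either reproduction runs, the daemon's memory growth can be confirmed by sampling its resident set size from /proc. A minimal sketch (the rss_kb helper below is hypothetical and Linux-only, not part of Docker):

```shell
# Minimal sketch for watching dockerd's memory while reproducing (Linux only).
# rss_kb is a hypothetical helper: it reads the VmRSS line from /proc/<pid>/status.
rss_kb() {
  awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Example usage (assumes dockerd is running): sample its RSS once per second
# while `docker logs -f` is attached.
# PID=$(pidof dockerd)
# while sleep 1; do echo "$(date +%T): $(rss_kb "$PID") kB"; done
```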
    

Describe the results you received: dockerd exits with "fatal error: runtime: out of memory".

Stack trace: https://gist.github.com/2f674f1ff24c679ba4778f458facb7b2

Describe the results you expected: Docker remains stable and keeps a reasonable memory footprint.

Additional information you deem important (e.g. issue happens only occasionally): I have been unable to reproduce the issue when logging to the json-file driver.

Output of docker version:

Client: Docker Engine - Community
 Version:           20.10.5
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        55c4c88
 Built:             Tue Mar  2 20:18:20 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.5
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       363e9a8
  Built:            Tue Mar  2 20:16:15 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Output of docker info:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 1
 Server Version: 20.10.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: local
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc version: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-54-generic
 Operating System: Ubuntu 20.04.1 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 981.1MiB
 Name: ubuntu-focal
 ID: 5X6F:5OLY:MQHX:I5T3:AVJQ:AMIX:5OBO:XI4L:CI62:LRA7:INKW:26IT
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 30
  Goroutines: 40
  System Time: 2021-03-08T20:11:15.100104538Z
  EventsListeners: 0
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

Additional environment details (AWS, VirtualBox, physical, etc.):

This was initially observed in a Nomad environment. Nomad follows logs by default for Docker tasks. The host was a lightly loaded AWS m5.xlarge instance.

I have reproduced it on Ubuntu 16.04 and 20.04 using VirtualBox VMs, and on Docker for Mac 3.1.0 (20.10.2).

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 36 (15 by maintainers)

Most upvoted comments

FWIW I do not think "high" log output is necessary. I managed to trigger this in a Nomad setup where Nomad (I think) follows logs on every container, and the total log volume for a day was only a few MB in this test case.