moby: After upgrading to 17.12.0-ce containers are reported as "unknown" on start

Description

Just upgraded to docker 17.12.0-ce on one of our CentOS VPS boxes. Everything seems to run fine so far, but journalctl -u docker shows warnings about “unknown” containers, e.g.:

Dec 28 21:11:37 ***** dockerd[20364]: time="2017-12-28T21:11:37.395119873+01:00" level=warning msg="unknown container" container=380a254343d466da3481d10a9eb00b5ceb0246d5af80ed458875b4cff0ce5272 module=libcontainerd namespace=plugins.moby

Apparently it complains about every container that runs on the machine. I tried recreating all the containers, but that did not change anything. After a short investigation I noticed that this message is emitted on every container creation, at least in our setup.

Steps to reproduce the issue:

  1. docker pull alpine:3.7
  2. docker run --rm -it alpine:3.7 sh
  3. [ctrl+d]
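
The steps above can be scripted for a non-interactive repro (a sketch, assuming systemd journal access and that dockerd logs to the docker unit as shown in this report; the interactive sh + ctrl+d is replaced with a short-lived command):

```shell
# Reproduce: pull and run a throwaway alpine container, then let it exit.
docker pull alpine:3.7
docker run --rm alpine:3.7 true

# Look for the warning emitted on container start/stop:
journalctl -u docker --since "2 minutes ago" | grep 'msg="unknown container"'
```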

Describe the results you received:

On container start, I get:

Dec 28 21:37:16 ***** dockerd[20364]: time="2017-12-28T21:37:16.078480737+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/containers/create type="*events.ContainerCreate"
Dec 28 21:37:16 ***** dockerd[20364]: time="2017-12-28T21:37:16+01:00" level=info msg="shim docker-containerd-shim started" address="/containerd-shim/moby/3c63b33d715570dd874f244f3dc4dba4ac606f1420ce7e24d52401fbcdcb3402/shim.sock" debug=false module="containerd/tasks" pid=22712
Dec 28 21:37:16 ***** dockerd[20364]: time="2017-12-28T21:37:16.278804687+01:00" level=warning msg="unknown container" container=3c63b33d715570dd874f244f3dc4dba4ac606f1420ce7e24d52401fbcdcb3402 module=libcontainerd namespace=plugins.moby
Dec 28 21:37:16 ***** dockerd[20364]: time="2017-12-28T21:37:16.313901450+01:00" level=warning msg="unknown container" container=3c63b33d715570dd874f244f3dc4dba4ac606f1420ce7e24d52401fbcdcb3402 module=libcontainerd namespace=plugins.moby

On container stop:

Dec 28 21:37:28 ***** dockerd[20364]: time="2017-12-28T21:37:28.254949107+01:00" level=warning msg="unknown container" container=3c63b33d715570dd874f244f3dc4dba4ac606f1420ce7e24d52401fbcdcb3402 module=libcontainerd namespace=plugins.moby
Dec 28 21:37:28 ***** dockerd[20364]: time="2017-12-28T21:37:28+01:00" level=info msg="shim reaped" id=3c63b33d715570dd874f244f3dc4dba4ac606f1420ce7e24d52401fbcdcb3402 module="containerd/tasks"
Dec 28 21:37:28 ***** dockerd[20364]: time="2017-12-28T21:37:28.330940300+01:00" level=info msg="ignoring event" module=libcontainerd namespace=plugins.moby topic=/tasks/delete type="*events.TaskDelete"
Dec 28 21:37:28 ***** dockerd[20364]: time="2017-12-28T21:37:28.331907436+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 28 21:37:28 ***** dockerd[20364]: time="2017-12-28T21:37:28.376950290+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/containers/delete type="*events.ContainerDelete"

Describe the results you expected:

There should be no warnings.

Output of docker version:

Client:
 Version:       17.12.0-ce
 API version:   1.35
 Go version:    go1.9.2
 Git commit:    c97c6d6
 Built:         Wed Dec 27 20:10:14 2017
 OS/Arch:       linux/amd64

Server:
 Engine:
  Version:      17.12.0-ce
  API version:  1.35 (minimum version 1.12)
  Go version:   go1.9.2
  Git commit:   c97c6d6
  Built:        Wed Dec 27 20:12:46 2017
  OS/Arch:      linux/amd64
  Experimental: false

Output of docker info:

Containers: 6
 Running: 4
 Paused: 0
 Stopped: 2
Images: 48
Server Version: 17.12.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.4.27-x86_64-jb1
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.903GiB
Name: **************
ID: VBJF:EXSO:5X7A:CY5M:ITIS:6SNX:5JIK:HXKR:YHBY:QUNE:245Q:5LJG
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: nscheer
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, etc.):

The environment is provided by our hoster, as is the kernel. I also tested another machine running CentOS 7.4, but with an elrepo kernel (4.13.8-1.el7.elrepo.x86_64); IMHO this does not look like a kernel issue.

The daemon.json is quite simple, so no surprises here:

{
    "storage-driver": "overlay2",
    "log-driver": "json-file",
    "log-opts":
    {
        "max-size": "10m",
        "max-file": "3"
    }
}
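
To rule out a malformed configuration, the file can be syntax-checked independently (assuming the default path /etc/docker/daemon.json on CentOS):

```shell
# Validate that the daemon configuration is well-formed JSON before restarting dockerd.
python3 -m json.tool /etc/docker/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```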

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 9
  • Comments: 18 (9 by maintainers)

Most upvoted comments

FYI, I am encountering the same problem. I have servers where the docker journalctl logs are full of ‘unknown container’ messages. Apparently, if a container runs a healthcheck every minute, there will be an ‘unknown container’ log entry every minute.
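
The correlation with healthchecks is easy to check: start a container with a one-minute healthcheck interval (the image and health command here are illustrative, not from the report) and watch the journal:

```shell
# Start a container whose healthcheck fires every minute.
docker run -d --name hc-test \
  --health-cmd='true' \
  --health-interval=1m \
  alpine:3.7 sleep 600

# Roughly one warning per healthcheck run should show up:
journalctl -u docker --since "10 minutes ago" | grep 'msg="unknown container"'
```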

More importantly, just like bor8 commented, I also encounter problems with completely unresponsive containers: even docker inspect XYZ hangs forever, even though the application inside works fine. Other containers might respond fine. But then I need to completely reboot the server to get docker working again.
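
A hanging docker inspect can at least be bounded with coreutils timeout so it does not block a shell or a monitoring script forever (the container name here is hypothetical):

```shell
# Fail after 5 seconds instead of blocking indefinitely on an unresponsive daemon.
if ! timeout 5 docker inspect my-container > /dev/null 2>&1; then
    echo "docker inspect hung or failed for my-container"
fi
```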

My impression is that all of this started with the 17.12.0-ce update. So I’m curious whether it could be caused by this problem (in which case I’ll just wait for the patch to arrive) or is still some other issue.

Interestingly, most of the docker errors in journalctl also seem related to healthchecks:

Jan 24 11:57:44 dockersrv01 dockerd[1142]: time="2018-01-24T11:57:44.905869208+01:00" level=warning msg="Health check for container 5789bcff5bf2a93f5b72091664e24ae068c8adaa0e677fd9aa370bbf2182c533 error: context cancelled"
Jan 24 11:58:19 dockersrv01 dockerd[1142]: time="2018-01-24T11:58:19.981091888+01:00" level=warning msg="Health check for container 5789bcff5bf2a93f5b72091664e24ae068c8adaa0e677fd9aa370bbf2182c533 error: context cancelled"
Jan 28 01:16:31 dockersrv01 dockerd[1142]: time="2018-01-28T01:16:31.733072314+01:00" level=warning msg="Health check for container 5789bcff5bf2a93f5b72091664e24ae068c8adaa0e677fd9aa370bbf2182c533 error: context cancelled"
Jan 28 01:21:37 dockersrv01 dockerd[1142]: time="2018-01-28T01:21:37.469955160+01:00" level=warning msg="Health check for container 5789bcff5bf2a93f5b72091664e24ae068c8adaa0e677fd9aa370bbf2182c533 error: context cancelled"
Jan 28 01:23:18 dockersrv01 dockerd[1142]: time="2018-01-28T01:23:18.682106518+01:00" level=warning msg="Health check for container 5789bcff5bf2a93f5b72091664e24ae068c8adaa0e677fd9aa370bbf2182c533 error: context cancelled"
Jan 28 01:31:20 dockersrv01 dockerd[1142]: time="2018-01-28T01:31:20.508894226+01:00" level=warning msg="Health check for container 5789bcff5bf2a93f5b72091664e24ae068c8adaa0e677fd9aa370bbf2182c533 error: context cancelled"