moby: level=error msg="Handler for GET /containers/XXX/json returned error: No such container

Environment

  • CentOS 7.1.1503 kernel 3.10.0-229.el7.x86_64
  • docker version: 1.12.1

Problem description:

I recently encountered a problem: I removed a container with docker rm $containerID, but it was not really removed. There are lots of errors in /var/log/messages, such as

[op@localhost ~]$ sudo tail -f /var/log/messages
Dec  2 15:12:10 localhost dockerd: time="2016-12-02T15:12:10.114497059+08:00" level=error msg="Handler for GET /containers/30d8fecac0f9fcf78c56e9b1cab3939d52f5ac7cda53e5b6df46a175f8af6fbb/json returned error: No such container: 30d8fecac0f9fcf78c56e9b1cab3939d52f5ac7cda53e5b6df46a175f8af6fbb"
Dec  2 15:12:10 localhost dockerd: time="2016-12-02T15:12:10.114806985+08:00" level=error msg="Handler for GET /containers/e5a6cdf84d37efa8a8185e142cb326ff622f15a168b26ba08382c1b01f938db5/json returned error: No such container: e5a6cdf84d37efa8a8185e142cb326ff622f15a168b26ba08382c1b01f938db5"
Dec  2 15:12:10 localhost dockerd: time="2016-12-02T15:12:10.115129370+08:00" level=error msg="Handler for GET /containers/98e8c3bda8ca0adb138a7d3d6f2116453ba7543e4384c2d56b018fa6c7540018/json returned error: No such container: 98e8c3bda8ca0adb138a7d3d6f2116453ba7543e4384c2d56b018fa6c7540018"
Dec  2 15:12:10 localhost dockerd: time="2016-12-02T15:12:10.115444802+08:00" level=error msg="Handler for GET /containers/effb24af21f7f3cb7b682e4baaaf383f39fb3ca9b1cd20f1c01461c139ac02f2/json returned error: No such container: effb24af21f7f3cb7b682e4baaaf383f39fb3ca9b1cd20f1c01461c139ac02f2"
Dec  2 15:12:10 localhost dockerd: time="2016-12-02T15:12:10.115739337+08:00" level=error msg="Handler for GET /containers/f038482af075bbfdaaa4ac1844ef4ccacd0c7ba9b5f95a16229468c45913ccb1/json returned error: No such container: f038482af075bbfdaaa4ac1844ef4ccacd0c7ba9b5f95a16229468c45913ccb1"
Dec  2 15:12:10 localhost dockerd: time="2016-12-02T15:12:10.116081434+08:00" level=error msg="Handler for GET /containers/42097edcc80320f3727e7d9fe3f122ddf415af3827db40604d24aceee84c883b/json returned error: No such container: 42097edcc80320f3727e7d9fe3f122ddf415af3827db40604d24aceee84c883b"
Dec  2 15:12:10 localhost dockerd: time="2016-12-02T15:12:10.116373021+08:00" level=error msg="Handler for GET /containers/25e380857eda37c25b01b14afb51c74eba83978c358a8a489d8b59b14712ea41/json returned error: No such container: 25e380857eda37c25b01b14afb51c74eba83978c358a8a489d8b59b14712ea41"
Dec  2 15:12:10 localhost dockerd: time="2016-12-02T15:12:10.116663990+08:00" level=error msg="Handler for GET /containers/9a71ef7a2cd5ceae29e54d5827800af7acc1742b8b5cc7481071e3ec5c94a76f/json returned error: No such container: 9a71ef7a2cd5ceae29e54d5827800af7acc1742b8b5cc7481071e3ec5c94a76f"
Dec  2 15:12:10 localhost dockerd: time="2016-12-02T15:12:10.116973822+08:00" level=error msg="Handler for GET /containers/f5b24d5887ef1bcd2e021f365ae029254c6be9be1a0ec34aa540e627e076ddc8/json returned error: No such container: f5b24d5887ef1bcd2e021f365ae029254c6be9be1a0ec34aa540e627e076ddc8"
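For reference, the distinct container IDs being complained about can be pulled out of such a log with a short script. This is a minimal sketch (the regex simply matches the `No such container: <64-hex-id>` suffix of these dockerd lines; `missing_container_ids` is a made-up helper name):

```python
import re

# Matches the 64-hex-digit container ID in dockerd's
# "No such container: <id>" error messages.
LOG_PATTERN = re.compile(r"No such container: ([0-9a-f]{64})")

def missing_container_ids(log_lines):
    """Return the distinct container IDs mentioned in 'No such
    container' errors, preserving first-seen order."""
    seen = []
    for line in log_lines:
        m = LOG_PATTERN.search(line)
        if m and m.group(1) not in seen:
            seen.append(m.group(1))
    return seen
```

Feeding it the log excerpt above would yield the nine unique IDs the daemon keeps polling for.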

In the storage directory, I can find the container ID, but it can't be seen in docker ps -a:

[root@localhost docker]# ls
containers  devicemapper  image  network  swarm  tmp  trust  volumes

The removed container ID can be found in the devicemapper directory, but cannot be found in the containers directory.
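One way to spot such orphans is to compare the ID sets from the two directories. A minimal sketch of that comparison (the set logic is illustrative; `orphaned_ids` is a made-up helper, and in practice the inputs would come from listing the two directories under /var/lib/docker):

```python
def orphaned_ids(containers_dir_ids, devicemapper_ids):
    """IDs present in the devicemapper metadata but absent from the
    containers directory -- i.e. containers the daemon no longer
    tracks but whose storage was left behind."""
    return sorted(set(devicemapper_ids) - set(containers_dir_ids))

# In practice the two sets would come from something like:
#   containers = set(os.listdir("/var/lib/docker/containers"))
#   dm_meta    = set(os.listdir("/var/lib/docker/devicemapper/metadata"))
```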

How can I solve this problem?

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Reactions: 6
  • Comments: 20 (5 by maintainers)

Most upvoted comments

I’ve been having the same problem.

I’m running a Kubernetes cluster and I figured that these might be leftover container IDs from a previous installation, but that’s not the case: Kubernetes seems to flush out old etcd state when you reinstall a cluster.

I took some snapshots of my current runtime to give you more information on what’s going on:

This is a df -h: (screenshot)

Here I check my journal for a specific ID of one of the overlay filesystems above: (screenshot)

Here are the running containers: (screenshot)

And here are the Docker images: (screenshot)

FILE: /run/docker/libcontainerd/containerd/events.log (screenshot)

NOW HERE I FOUND SOMETHING INTERESTING: I searched for the missing container ID within the /run/docker folder and guess what I found…

container: 89b51ae9c3315147cb404414e7c2b46167d09e50697cea58215f724d0fcf6635

had this setting:

"root":{"path":"/var/lib/docker/overlay/28bb1dbcbcd10c02f59193cbd15203b6a504bd89b2a7e2cd1c7c2535061eb073/merged"}

Something seems to be using this root path to query for information, but it’s obviously not the right container ID…

(screenshot)
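That "root" entry is part of a JSON state record; extracting the overlay layer directory name from such a record can be sketched as follows (the key layout mirrors the snippet above, but `overlay_id_from_state` and the standalone record are illustrative assumptions, not actual daemon code):

```python
import json

def overlay_id_from_state(state_json):
    """Pull the overlay layer directory name out of a container state
    record whose 'root' entry looks like
    {"root": {"path": "/var/lib/docker/overlay/<layer-id>/merged"}}."""
    path = json.loads(state_json)["root"]["path"]
    parts = path.split("/")
    # .../overlay/<layer-id>/merged -> take the component after "overlay"
    return parts[parts.index("overlay") + 1]

record = ('{"root":{"path":"/var/lib/docker/overlay/'
          '28bb1dbcbcd10c02f59193cbd15203b6a504bd89b2a7e2cd1c7c2535061eb073'
          '/merged"}}')
```

Running it on the record above gives the `28bb1dbc…` layer ID, which is exactly the directory name that does not match the `89b51ae9…` container ID, illustrating the mismatch.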

I don’t know what this means since I’m not a Docker developer… but can you guys look into this, and possibly explain what is going on? 😃

I did have a quick look at the Kubernetes source, and it looks like their reconciliation loop defaults to 60 seconds. Every 60 seconds, they poll each Pod’s containers to check whether the actual state matches the expected state, so it’s possible that it is somehow using stale information there, but I’m not familiar enough with the code base to say anything more definitive.
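The reconciliation behaviour described above can be sketched as a simple expected-vs-actual comparison. This is not kubelet code, just an illustration of the kind of check such a loop would run every cycle (`reconcile` and its inputs are made-up names):

```python
def reconcile(expected, actual):
    """Compare the expected container set with the actually-running one.
    Returns (to_start, to_stop): containers that should be (re)created,
    and containers that are running but should not be.  A kubelet-style
    loop would run this comparison roughly every 60 seconds."""
    to_start = sorted(set(expected) - set(actual))
    to_stop = sorted(set(actual) - set(expected))
    return to_start, to_stop
```

If the `actual` side is fed stale container IDs (e.g. from old state files), a loop like this would keep querying the daemon for containers that no longer exist, which would produce exactly the repeating "No such container" errors in this issue.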