moby: Error response from daemon: OCI runtime create failed: unable to retrieve OCI runtime error

Steps to reproduce the issue:

  1. Run `docker run hello-world`

Describe the results you received:

docker: Error response from daemon: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/262f67d9beb653ac60b1c7cb3b2e183d7595b4a4a93f0dcfb0ce689a588cedcd/log.json: no such file or directory): docker-runc did not terminate sucessfully: unknown. ERRO[0000] error waiting for container: context canceled

Describe the results you expected:

The container should run successfully.

Additional information you deem important (e.g. issue happens only occasionally):
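A note on diagnosis: the docker info output below reports runc version: N/A (expected: b2567b37d7b75eb4cf325b77297b140ea686ce8f), which suggests the docker-runc binary itself is missing or cannot run. A minimal check, assuming the /usr/bin/docker-runc path used by docker-ce packages of this era:

ls -l /usr/bin/docker-runc   # does the runtime binary exist? (path is an assumption)
docker-runc --version        # can it execute and report a version?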

Output of docker version:

Client:
 Version:	17.12.0-ce
 API version:	1.35
 Go version:	go1.9.2
 Git commit:	c97c6d6
 Built:	Wed Dec 27 20:10:14 2017
 OS/Arch:	linux/amd64

Server:
 Engine:
  Version:	17.12.0-ce
  API version:	1.35 (minimum version 1.12)
  Go version:	go1.9.2
  Git commit:	c97c6d6
  Built:	Wed Dec 27 20:12:46 2017
  OS/Arch:	linux/amd64
  Experimental:	false

Output of docker info:

Containers: 16
 Running: 0
 Paused: 0
 Stopped: 16
Images: 2
Server Version: 17.12.0-ce
Storage Driver: devicemapper
 Pool Name: docker-253:0-1231531-pool
 Pool Blocksize: 65.54kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Udev Sync Supported: true
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 1.127GB
 Data Space Total: 107.4GB
 Data Space Available: 49.91GB
 Metadata Space Used: 1.974MB
 Metadata Space Total: 2.147GB
 Metadata Space Available: 2.146GB
 Thin Pool Minimum Free Space: 10.74GB
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.140-RHEL7 (2017-05-03)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: N/A (expected: b2567b37d7b75eb4cf325b77297b140ea686ce8f)
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-229.20.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.51GiB
Name: gerrit-android-apigateway-dev001-bjdxt9ssd.qiyi.virtual
ID: 2NZJ:NXOY:3V7L:RUNL:3LFA:NN4Z:LECB:5SPK:U7FJ:3QMH:KTAQ:DTIH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
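Regarding the loopback warning above: the `--storage-opt dm.thinpooldev` setting it mentions is normally placed in /etc/docker/daemon.json. A hedged sketch, where /dev/mapper/docker-thinpool is an example device name, not one from this host:

{
  "storage-driver": "devicemapper",
  "storage-opts": ["dm.thinpooldev=/dev/mapper/docker-thinpool"]
}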

Additional environment details (AWS, VirtualBox, physical, etc.):

Linux gerrit-android-apigateway-dev001-bjdxt9ssd.qiyi.virtual 3.10.0-229.20.1.el7.x86_64 #1 SMP Tue Nov 3 19:10:07 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

CentOS Linux release 7.1.1503 (Core)


Most upvoted comments

Having the same issue on CentOS Linux release 7.4.1708 (Core), Docker version 17.12.1-ce, build 7390fc6.

I have the same problem. Installing an older docker-ce should work, as shown below.

cat /etc/centos-release
CentOS Linux release 7.1.1503 (Core) 

yum install docker-ce-17.06.1.ce
systemctl start docker
docker pull nginx
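
Note: on top of the commands above, you would presumably want to remove the existing 17.12 packages first and then re-test the original repro; both steps are an assumption, not part of the comment:

yum remove docker-ce    # clear out the broken 17.12 packages before installing 17.06 (assumed step)
docker run hello-world  # after starting docker, the original repro should succeed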

sudo ln -s /usr/bin/nvidia-container-toolkit /usr/bin/nvidia-container-runtime-hook

Solved the problem for me.
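A quick check that the link is in place (path taken from the comment above):

ls -l /usr/bin/nvidia-container-runtime-hook   # should now point at /usr/bin/nvidia-container-toolkit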

I had the same issue. It is effectively a fatal error for Docker: running sudo service docker restart is not enough here; you have to reboot.