moby: OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown

Description

I cannot enter any containers with “docker exec -it …” on my CentOS 9 VMs with this runc version:

runc version 1.1.3
commit: v1.1.3-0-g6724737
spec: 1.0.2-dev
go: go1.17.13
libseccomp: 2.5.2

I get this error message:

OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown

Reproduce

Run any container (e.g. alpine:latest) and try to enter it:

docker exec -it <mycontainer> /bin/sh -l

OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown

Expected behavior

It should be possible to get inside the container with “docker exec -it …”.

docker version

Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun  6 23:03:29 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun  6 23:01:12 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.7
  GitCommit:        0197261a30bf81f1ee8e6a4dd2dea0ef95d67ccb
 runc:
  Version:          1.1.3
  GitCommit:        v1.1.3-0-g6724737
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

docker info

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.8.2-docker)
  scan: Docker Scan (Docker Inc., v0.17.0)

Server:
 Containers: 2
  Running: 2
  Paused: 0
  Stopped: 0
 Images: 16
 Server Version: 20.10.17
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 0197261a30bf81f1ee8e6a4dd2dea0ef95d67ccb
 runc version: v1.1.3-0-g6724737
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.14.0-142.el9.x86_64
 Operating System: CentOS Stream 9
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 3.569GiB
 Name: xxx
 ID: YKYV:PDNS:DNMI:S6P2:ZMFQ:HB7Q:UEU3:CKQY:JDSM:LDN3:WJQN:T6GR
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Additional Info

Systems I set up with runc 1.1.2 and kernel 5.14.0-130.el9.x86_64 did not have this issue.

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 25 (8 by maintainers)

Most upvoted comments

help me

[Screenshot 2022-08-21 18-13-37 attached]

Check this way

https://askubuntu.com/questions/1424317/docker-20-10-ubuntu-22-04-oci-runtime-exec-failed

Downgrade containerd.io to 1.6.6: sudo apt install containerd.io=1.6.6-1
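
The apt line above is for Debian/Ubuntu; on an RPM-based system such as the reporter’s CentOS Stream 9, the equivalent downgrade would go through dnf instead. A rough sketch (the exact version string available from download.docker.com may differ):

# downgrade containerd.io below 1.6.7 (the build that bundles runc 1.1.3), then restart the daemons
sudo dnf downgrade containerd.io-1.6.6
sudo systemctl restart containerd docker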

No need to add more “+1”s; a new release of runc is being worked on and should hopefully be available soon: https://github.com/opencontainers/runc/pull/3564. In the meantime, the workaround is to downgrade containerd.io as described above.

Same issue on Ubuntu 22.04:
Server Version: 20.10.17
Kernel Version: 5.15.0-46-generic

same here

We published containerd.io packages for containerd v1.6.8 with runc v1.1.4 (which contains a fix for this issue) to download.docker.com; if you installed docker and containerd using our RPM or DEB packages, then updating the package should resolve this issue.

The static binary packages (.tgz) for docker did not yet include runc v1.1.3, so they should not be affected, but an upcoming v20.10.18 patch release will contain containerd v1.6.8 and runc v1.1.4, and should likely arrive sometime next week.
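
For example, on an RPM-based install like the one in this report, picking up the fixed packages would look roughly like this (Debian/Ubuntu installs would use apt-get instead):

# pull in containerd.io 1.6.8, which bundles the fixed runc v1.1.4
sudo dnf update containerd.io
# Debian/Ubuntu equivalent
sudo apt-get update && sudo apt-get install --only-upgrade containerd.io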

It was an SELinux issue. I have now found a workaround by applying this:

setsebool -P daemons_dump_core on

and this policy:

module mydocker 1.0;

require {
        type container_runtime_t;
        type init_t;
        class bpf prog_run;
}

#============= container_runtime_t ==============

#!!!! This avc is allowed in the current policy
allow container_runtime_t init_t:bpf prog_run;
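
The comment does not show how the module above gets loaded; assuming the type-enforcement source is saved as mydocker.te, the standard SELinux tooling would compile and install it roughly like this:

# compile the type-enforcement source into a binary policy module
checkmodule -M -m -o mydocker.mod mydocker.te
# package the module
semodule_package -o mydocker.pp -m mydocker.mod
# load it into the running policy (requires root)
sudo semodule -i mydocker.pp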

Restart the container and try again.
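
Not part of the original comment, but on an SELinux-enabled host the recent AVC denials can be inspected to confirm that SELinux is what is blocking the exec (this assumes auditd is running):

# list recent denials and explain why they were blocked
sudo ausearch -m avc -ts recent | audit2why

Running audit2allow -M mydocker over the same audit input is what typically generates a policy module like the one shown above.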

Looks like the same issue that’s being discussed in https://github.com/moby/moby/discussions/43960

And a regression in runc; see https://github.com/opencontainers/runc/issues/3551

@rdziwinski could you perhaps open a new ticket with details (as asked in the form when you open the ticket)? It could still be the same issue, but it could also be something else resulting in the same error. Just to avoid potentially conflating separate issues, it’d be good to start with a fresh ticket.

I see you’re using crictl exec; when opening the ticket:

  • could you also check whether you see the same issue when starting a fresh container using docker run and docker exec (to narrow down possible causes)?
  • do you know if dockerd was restarted after upgrading containerd? (I recall we had issues where containerd shims were kept in memory, causing odd failures that were resolved after restarting.)