moby: Rootless mode doesn't start on Fedora 32 with SELinux enabled (but works on CentOS 8.2): "can't open lock file /run/xtables.lock: Permission denied"
EDIT: workaround: sudo dnf install -y policycoreutils-python-utils && sudo semanage permissive -a iptables_t
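(A quick sanity check, my addition rather than part of the report: after applying the workaround, iptables_t should appear among the customized permissive domains.)
# semanage is provided by policycoreutils-python-utils, installed above
$ sudo semanage permissive -l | grep iptables_t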
Description
Rootless mode doesn’t start on Fedora 32 with SELinux enabled. It works when SELinux is disabled.
(NOTE: “SELinux enabled” in this context just means getenforce = Enforcing, with the system_u:object_r:container_runtime_exec_t:s0 context for running dockerd. This issue is NOT about running dockerd-rootless.sh with --selinux-enabled.)
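For reference (not part of the original report), the enforcing state and the file context of the dockerd binary can be checked like this:
$ getenforce
Enforcing
$ ls -Z /usr/bin/dockerd
system_u:object_r:container_runtime_exec_t:s0 /usr/bin/dockerd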
Steps to reproduce the issue:
1. Set up Fedora 32
2. Download moby-snapshot-fedora-32-x86_64-rpm.tbz from https://github.com/AkihiroSuda/moby-snapshot/releases/tag/snapshot-20200717
3. tar xjvf moby-snapshot-fedora-32-x86_64-rpm.tbz
4. sudo dnf install -y *.rpm
5. dockerd-rootless.sh
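The same steps as a single copy-pasteable session (a sketch; the asset URL is my assumption based on GitHub's standard releases/download layout for the tag above):
# Fetch, unpack, and install the snapshot RPMs, then start rootless dockerd
$ curl -fsSL -O https://github.com/AkihiroSuda/moby-snapshot/releases/download/snapshot-20200717/moby-snapshot-fedora-32-x86_64-rpm.tbz
$ tar xjvf moby-snapshot-fedora-32-x86_64-rpm.tbz
$ sudo dnf install -y *.rpm
$ dockerd-rootless.sh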
Describe the results you received:
[vagrant@localhost ~]$ dockerd-rootless.sh
+ '[' -w /run/user/1000 ']'
+ '[' -w /home/vagrant ']'
+ rootlesskit=
+ for f in docker-rootlesskit rootlesskit
+ which docker-rootlesskit
+ for f in docker-rootlesskit rootlesskit
+ which rootlesskit
+ rootlesskit=rootlesskit
+ break
+ '[' -z rootlesskit ']'
+ : ''
+ : ''
+ : builtin
+ : auto
+ : auto
+ net=
+ mtu=
+ '[' -z ']'
+ which slirp4netns
+ slirp4netns --help
+ grep -qw -- --netns-type
+ net=slirp4netns
+ '[' -z ']'
+ mtu=65520
+ '[' -z slirp4netns ']'
+ '[' -z 65520 ']'
+ '[' -z ']'
+ _DOCKERD_ROOTLESS_CHILD=1
+ export _DOCKERD_ROOTLESS_CHILD
+ exec rootlesskit --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run --propagation=rslave /usr/bin/dockerd-rootless.sh
+ '[' -w /run/user/1000 ']'
+ '[' -w /home/vagrant ']'
+ rootlesskit=
+ for f in docker-rootlesskit rootlesskit
+ which docker-rootlesskit
+ for f in docker-rootlesskit rootlesskit
+ which rootlesskit
+ rootlesskit=rootlesskit
+ break
+ '[' -z rootlesskit ']'
+ : ''
+ : ''
+ : builtin
+ : auto
+ : auto
+ net=
+ mtu=
+ '[' -z ']'
+ which slirp4netns
+ slirp4netns --help
+ grep -qw -- --netns-type
+ net=slirp4netns
+ '[' -z ']'
+ mtu=65520
+ '[' -z slirp4netns ']'
+ '[' -z 65520 ']'
+ '[' -z 1 ']'
+ '[' 1 = 1 ']'
+ rm -f /run/docker /run/xtables.lock
+ exec dockerd
INFO[2020-07-18T02:51:29.598987874Z] Starting up
WARN[2020-07-18T02:51:29.599044054Z] Running in rootless mode. This mode has feature limitations.
INFO[2020-07-18T02:51:29.599049153Z] Running with RootlessKit integration
INFO[2020-07-18T02:51:29.600299588Z] libcontainerd: started new containerd process pid=9337
INFO[2020-07-18T02:51:29.600571807Z] parsed scheme: "unix" module=grpc
INFO[2020-07-18T02:51:29.600703491Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-07-18T02:51:29.600829684Z] ccResolverWrapper: sending update to cc: {[{unix:///run/user/1000/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
INFO[2020-07-18T02:51:29.600998464Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2020-07-18T02:51:29.618626058Z] starting containerd revision=4feb8c462393ce6834dda9e3464c4fee8ee73232 version="0.20200717.014906~4feb8c4"
INFO[2020-07-18T02:51:29.638162035Z] loading plugin "io.containerd.content.v1.content"... type=io.containerd.content.v1
INFO[2020-07-18T02:51:29.638216642Z] loading plugin "io.containerd.snapshotter.v1.aufs"... type=io.containerd.snapshotter.v1
INFO[2020-07-18T02:51:29.639887838Z] skip loading plugin "io.containerd.snapshotter.v1.aufs"... error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.6.6-300.fc32.x86_64\\n\"): skip plugin" type=io.containerd.snapshotter.v1
INFO[2020-07-18T02:51:29.639969108Z] loading plugin "io.containerd.snapshotter.v1.devmapper"... type=io.containerd.snapshotter.v1
WARN[2020-07-18T02:51:29.640017186Z] failed to load plugin io.containerd.snapshotter.v1.devmapper error="devmapper not configured"
INFO[2020-07-18T02:51:29.640032709Z] loading plugin "io.containerd.snapshotter.v1.native"... type=io.containerd.snapshotter.v1
INFO[2020-07-18T02:51:29.640056787Z] loading plugin "io.containerd.snapshotter.v1.overlayfs"... type=io.containerd.snapshotter.v1
INFO[2020-07-18T02:51:29.640128945Z] loading plugin "io.containerd.snapshotter.v1.zfs"... type=io.containerd.snapshotter.v1
INFO[2020-07-18T02:51:29.640252839Z] skip loading plugin "io.containerd.snapshotter.v1.zfs"... error="path /home/vagrant/.local/share/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2020-07-18T02:51:29.640268957Z] loading plugin "io.containerd.metadata.v1.bolt"... type=io.containerd.metadata.v1
WARN[2020-07-18T02:51:29.640284172Z] could not use snapshotter devmapper in metadata plugin error="devmapper not configured"
INFO[2020-07-18T02:51:29.640291902Z] metadata content store policy set policy=shared
INFO[2020-07-18T02:51:29.640441355Z] loading plugin "io.containerd.differ.v1.walking"... type=io.containerd.differ.v1
INFO[2020-07-18T02:51:29.640672100Z] loading plugin "io.containerd.gc.v1.scheduler"... type=io.containerd.gc.v1
INFO[2020-07-18T02:51:29.640856727Z] loading plugin "io.containerd.service.v1.introspection-service"... type=io.containerd.service.v1
INFO[2020-07-18T02:51:29.640884286Z] loading plugin "io.containerd.service.v1.containers-service"... type=io.containerd.service.v1
INFO[2020-07-18T02:51:29.640894221Z] loading plugin "io.containerd.service.v1.content-service"... type=io.containerd.service.v1
INFO[2020-07-18T02:51:29.640906814Z] loading plugin "io.containerd.service.v1.diff-service"... type=io.containerd.service.v1
INFO[2020-07-18T02:51:29.640916268Z] loading plugin "io.containerd.service.v1.images-service"... type=io.containerd.service.v1
INFO[2020-07-18T02:51:29.640928865Z] loading plugin "io.containerd.service.v1.leases-service"... type=io.containerd.service.v1
INFO[2020-07-18T02:51:29.640938611Z] loading plugin "io.containerd.service.v1.namespaces-service"... type=io.containerd.service.v1
INFO[2020-07-18T02:51:29.640950067Z] loading plugin "io.containerd.service.v1.snapshots-service"... type=io.containerd.service.v1
INFO[2020-07-18T02:51:29.640972084Z] loading plugin "io.containerd.runtime.v1.linux"... type=io.containerd.runtime.v1
INFO[2020-07-18T02:51:29.641032736Z] loading plugin "io.containerd.runtime.v2.task"... type=io.containerd.runtime.v2
INFO[2020-07-18T02:51:29.641104258Z] loading plugin "io.containerd.monitor.v1.cgroups"... type=io.containerd.monitor.v1
INFO[2020-07-18T02:51:29.641486265Z] loading plugin "io.containerd.service.v1.tasks-service"... type=io.containerd.service.v1
INFO[2020-07-18T02:51:29.641536295Z] loading plugin "io.containerd.internal.v1.restart"... type=io.containerd.internal.v1
INFO[2020-07-18T02:51:29.641599367Z] loading plugin "io.containerd.grpc.v1.containers"... type=io.containerd.grpc.v1
INFO[2020-07-18T02:51:29.641619825Z] loading plugin "io.containerd.grpc.v1.content"... type=io.containerd.grpc.v1
INFO[2020-07-18T02:51:29.641634736Z] loading plugin "io.containerd.grpc.v1.diff"... type=io.containerd.grpc.v1
INFO[2020-07-18T02:51:29.641643317Z] loading plugin "io.containerd.grpc.v1.events"... type=io.containerd.grpc.v1
INFO[2020-07-18T02:51:29.641657485Z] loading plugin "io.containerd.grpc.v1.healthcheck"... type=io.containerd.grpc.v1
INFO[2020-07-18T02:51:29.641686643Z] loading plugin "io.containerd.grpc.v1.images"... type=io.containerd.grpc.v1
INFO[2020-07-18T02:51:29.641697750Z] loading plugin "io.containerd.grpc.v1.leases"... type=io.containerd.grpc.v1
INFO[2020-07-18T02:51:29.641709417Z] loading plugin "io.containerd.grpc.v1.namespaces"... type=io.containerd.grpc.v1
INFO[2020-07-18T02:51:29.641717946Z] loading plugin "io.containerd.internal.v1.opt"... type=io.containerd.internal.v1
INFO[2020-07-18T02:51:29.641744782Z] loading plugin "io.containerd.grpc.v1.snapshots"... type=io.containerd.grpc.v1
INFO[2020-07-18T02:51:29.641755111Z] loading plugin "io.containerd.grpc.v1.tasks"... type=io.containerd.grpc.v1
INFO[2020-07-18T02:51:29.641764533Z] loading plugin "io.containerd.grpc.v1.version"... type=io.containerd.grpc.v1
INFO[2020-07-18T02:51:29.641778388Z] loading plugin "io.containerd.grpc.v1.introspection"... type=io.containerd.grpc.v1
INFO[2020-07-18T02:51:29.642519423Z] serving... address=/run/user/1000/docker/containerd/containerd-debug.sock
INFO[2020-07-18T02:51:29.642649591Z] serving... address=/run/user/1000/docker/containerd/containerd.sock.ttrpc
INFO[2020-07-18T02:51:29.642724411Z] serving... address=/run/user/1000/docker/containerd/containerd.sock
INFO[2020-07-18T02:51:29.642740391Z] containerd successfully booted in 0.024729s
INFO[2020-07-18T02:51:29.648118109Z] parsed scheme: "unix" module=grpc
INFO[2020-07-18T02:51:29.648142231Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-07-18T02:51:29.648157728Z] ccResolverWrapper: sending update to cc: {[{unix:///run/user/1000/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
INFO[2020-07-18T02:51:29.648165737Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2020-07-18T02:51:29.649079074Z] parsed scheme: "unix" module=grpc
INFO[2020-07-18T02:51:29.649142022Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-07-18T02:51:29.649156372Z] ccResolverWrapper: sending update to cc: {[{unix:///run/user/1000/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
INFO[2020-07-18T02:51:29.649161973Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2020-07-18T02:51:29.649994007Z] [graphdriver] using prior storage driver: fuse-overlayfs
WARN[2020-07-18T02:51:29.652815432Z] Unable to find cpu controller
WARN[2020-07-18T02:51:29.653025241Z] Unable to find io controller
WARN[2020-07-18T02:51:29.653155342Z] Unable to find cpuset controller
INFO[2020-07-18T02:51:29.653690214Z] Loading containers: start.
WARN[2020-07-18T02:51:29.655152150Z] Running iptables --wait -t nat -L -n failed with message: `Fatal: can't open lock file /run/xtables.lock: Permission denied`, error: exit status 4
INFO[2020-07-18T02:51:29.672504396Z] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
INFO[2020-07-18T02:51:29.672563671Z] stopping healthcheck following graceful shutdown module=libcontainerd
INFO[2020-07-18T02:51:29.672613758Z] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=moby
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: Fatal: can't open lock file /run/xtables.lock: Permission denied
(exit status 4)
[rootlesskit:child ] error: command [/usr/bin/dockerd-rootless.sh] exited: exit status 1
[rootlesskit:parent] error: child exited: exit status 1
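The failure is iptables (running as iptables_t) being denied access to /run/xtables.lock. Not shown in the report, but the corresponding AVC denial can usually be confirmed from the audit log (assuming auditd is running, as it is by default on Fedora):
$ sudo ausearch -m avc -ts recent | grep xtables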
Describe the results you expected:
It should start the daemon successfully, as it does on CentOS 8.2 with SELinux enabled.
Additional information you deem important (e.g. issue happens only occasionally):
It works by allowing iptables_t to do everything (sudo semanage permissive -a iptables_t) or by just disabling SELinux (sudo setenforce 0).
dockerd-rootless.sh --iptables=false also works, but is not really useful.
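For completeness (my addition): once a fixed selinux-policy package becomes available, the permissive-domain workaround can be reverted with:
$ sudo semanage permissive -d iptables_t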
Output of docker version:
$ DOCKER_HOST=unix:///run/user/1000/docker.sock docker version
Client: Moby Engine
Version: 0.0.0-20200716165816-bece8cc41c
API version: 1.41
Go version: go1.13.10
Git commit: bece8cc41c
Built: Fri Jul 17 07:07:23 2020
OS/Arch: linux/amd64
Context: default
Experimental: false
Server: Moby Engine
Engine:
Version: 0.0.0-20200716165816-bece8cc41c
API version: 1.41 (minimum version 1.12)
Go version: go1.13.10
Git commit: 260c26b7be
Built: Fri Jul 17 08:29:21 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 0.20200717.014906~4feb8c4
GitCommit: 4feb8c462393ce6834dda9e3464c4fee8ee73232
runc:
Version: 1.0.0-rc91+dev
GitCommit: f9850afa9153b48b654b5c901ae20cabaa4089f8
docker-init:
Version: 0.18.0
GitCommit: fec3683
Output of docker info:
$ DOCKER_HOST=unix:///run/user/1000/docker.sock docker info
Client:
Context: default
Debug Mode: false
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 1
Server Version: 0.0.0-20200716165816-bece8cc41c
Storage Driver: fuse-overlayfs
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
Default Runtime: runc
Init Binary: docker-init
containerd version: 4feb8c462393ce6834dda9e3464c4fee8ee73232
runc version: f9850afa9153b48b654b5c901ae20cabaa4089f8
init version: fec3683
Security Options:
seccomp
Profile: default
rootless
cgroupns
Kernel Version: 5.6.6-300.fc32.x86_64
Operating System: Fedora 32 (Cloud Edition)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.933GiB
Name: localhost.localdomain
ID: RKTZ:KLY6:LPTN:TXJ2:3LWP:6YBW:KXQX:XSMG:OO7V:JXR5:TDM7:ZHNN
Docker Root Dir: /home/vagrant/.local/share/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No kernel memory limit support
WARNING: No kernel memory TCP limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
WARNING: No cpuset support
WARNING: Support for cgroup v2 is experimental
WARNING: No blkio weight support
WARNING: No blkio weight_device support
WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
Additional environment details (AWS, VirtualBox, physical, etc.):
- container-selinux-2.132.0-1.fc32.noarch
- kernel-core-5.6.6-300.fc32.x86_64
About this issue
- State: closed
- Created 4 years ago
- Reactions: 1
- Comments: 25 (25 by maintainers)
I’ve just emailed the package maintainer.
Perhaps it is something to do with iptables. This command will make iptables_t a permissive domain: sudo semanage permissive -a iptables_t
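A more targeted alternative to making the whole domain permissive (a sketch of mine, not proposed in the thread) is to generate a local policy module that allows only the operations recorded as denied in the audit log; audit2allow ships in the same policycoreutils-python-utils package:
# Build a local module from the recent AVC denials; the module name
# "rootless-xtables" is arbitrary.
$ sudo ausearch -m avc -ts recent | audit2allow -M rootless-xtables
$ sudo semodule -i rootless-xtables.pp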