podman: Rootless Podman WireGuard container fails to configure iptables

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I am attempting to set up WireGuard in a rootless Podman container on Fedora Silverblue 36 using https://hub.docker.com/r/linuxserver/wireguard. The container starts but then either fails with permission errors or hangs silently, depending on which capabilities are added. I am not sure how the container could modify iptables when my user on the host is not able to.

Steps to reproduce the issue:

  1. Install Fedora Silverblue
  2. Overlay wireguard-tools with rpm-ostree install wireguard-tools
  3. sudo modprobe wireguard to load the wireguard module
  4. Create a docker-compose file:
version: "3.3"

services:
  wireguard:
    image: lscr.io/linuxserver/wireguard
    container_name: wireguard
    hostname: wireguard
    restart: always
    env_file: ./.settings.env
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    ports:
      - 51820:51820/udp
    cap_add:
      - NET_ADMIN
#      - NET_RAW
      - SYS_MODULE
    volumes:
      - ./data/wireguard:/config:Z
      - /lib/modules:/lib/modules:ro
  5. Start the container with docker-compose up (see the socket note below)
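
Since this is rootless Podman rather than Docker, docker-compose here talks to the Podman API socket. A minimal sketch of that wiring, assuming the user socket path that podman info reports further down:

# Make sure the rootless Podman API service is running for this user
systemctl --user enable --now podman.socket

# Point docker-compose at the rootless Podman socket, then bring it up
export DOCKER_HOST=unix:///run/user/1000/podman/podman.sock
docker-compose up -d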

Describe the results you received:

Without the cap NET_RAW the container fails to start with the following error:

wireguard    | [#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -A FORWARD -o wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
wireguard    | iptables v1.8.4 (legacy): can't initialize iptables table `filter': Permission denied (you must be root)

With the cap NET_RAW added (which I found recommended in this repo), the container simply hangs on the iptables step:

iptables -A FORWARD -i wg0 -j ACCEPT; iptables -A FORWARD -o wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
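
To confirm which capabilities actually land in the container process, something like this can help (a sketch; capsh is part of the libcap package and may not be present in the image):

# Inside the container: effective capability mask of PID 1
grep CapEff /proc/1/status

# Decode the mask into capability names
# (the value below is only an example; substitute the mask printed above)
capsh --decode=00000000a80425fb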

My user on the host does not have permission to modify iptables:

iptables v1.8.7 (nf_tables): Could not fetch rule set generation id: Permission denied (you must be root)

I have a feeling it might be stalling with NET_RAW due to SELinux, but I am not too familiar with it or how I would debug it.
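
One way to check the SELinux theory on the host (a sketch; ausearch comes from the audit package):

# Look for recent SELinux (AVC) denials
sudo ausearch -m AVC -ts recent

# Temporarily switch to permissive mode and retry, to rule SELinux in or out
sudo setenforce 0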

Describe the results you expected:

I would expect that with the correct caps the container would start.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Client:       Podman Engine
Version:      4.1.1
API Version:  4.1.1
Go Version:   go1.18.4
Built:        Fri Jul 22 21:05:59 2022
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.26.1
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-2.fc36.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: '
  cpuUtilization:
    idlePercent: 96.46
    systemPercent: 1.51
    userPercent: 2.03
  cpus: 8
  distribution:
    distribution: fedora
    variant: silverblue
    version: "36"
  eventLogger: journald
  hostname: bones
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.18.13-200.fc36.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 3536007168
  memTotal: 15641862144
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.5-1.fc36.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.5
      commit: 54ebb8ca8bf7e6ddae2eb919f5b82d1d96863dea
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-0.2.beta.0.fc36.x86_64
    version: |-
      slirp4netns version 1.2.0-beta.0
      commit: 477db14a24ff1a3de3a705e51ca2c4c1fe3dda64
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 6h 9m 50.42s (Approximately 0.25 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /var/home/bones/.config/containers/storage.conf
  containerStore:
    number: 23
    paused: 0
    running: 18
    stopped: 5
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/bones/.local/share/containers/storage
  graphRootAllocated: 1998678130688
  graphRootUsed: 45631213568
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 31
  runRoot: /run/user/1000/containers
  volumePath: /var/home/bones/.local/share/containers/storage/volumes
version:
  APIVersion: 4.1.1
  Built: 1658516759
  BuiltTime: Fri Jul 22 21:05:59 2022
  GitCommit: ""
  GoVersion: go1.18.4
  Os: linux
  OsArch: linux/amd64
  Version: 4.1.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-4.1.1-3.fc36.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

Most upvoted comments

Final update: if anyone is trying rootless WireGuard on Fedora-based distros, the following kernel modules need to be loaded (see the drop-in sketch after the list):

ip_tables
iptable_filter
iptable_nat
wireguard
xt_MASQUERADE
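
To make these modules persist across reboots, a systemd modules-load.d drop-in should work (a sketch; the filename is arbitrary):

# /etc/modules-load.d/wireguard.conf
ip_tables
iptable_filter
iptable_nat
wireguard
xt_MASQUERADE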

Thanks to everyone who helped here, and sorry for hijacking the thread @scottsweb. I would like to keep the discussion in one place for future reference.

@Luap99 Ideas?

@Luap99 you have given me some hope… but I have yet to find a path forward.

On the host I ran sudo modprobe ip-tables, and checking with lsmod | grep ip shows that ip_tables, ip6_tables and nf_tables are all present:

iptable_nat            16384  0
iptable_filter         16384  0
ip6_udp_tunnel         16384  1 wireguard
nft_fib_ipv4           16384  1 nft_fib_inet
nft_fib_ipv6           16384  1 nft_fib_inet
nft_fib                16384  3 nft_fib_ipv6,nft_fib_ipv4,nft_fib_inet
nf_reject_ipv4         16384  1 nft_reject_inet
nf_reject_ipv6         20480  1 nft_reject_inet
nf_nat                 57344  4 xt_nat,nft_chain_nat,iptable_nat,xt_MASQUERADE
nf_defrag_ipv6         24576  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
ip_set                 61440  0
nf_tables             270336  695 nft_ct,nft_compat,nft_reject_inet,nft_fib_ipv6,nft_objref,nft_fib_ipv4,nft_chain_nat,nft_reject,nft_fib,nft_fib_inet
nfnetlink              20480  4 nft_compat,nf_tables,ip_set
ipmi_devintf           20480  0
ipmi_msghandler       122880  1 ipmi_devintf
ip6_tables             36864  0
ip_tables              36864  2 iptable_filter,iptable_nat

Running iptables --list, iptables-nft --list, or ip6tables --list all returns permission denied for my user on the host.
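
Worth noting: in rootless Podman, the rules the container sets live in the user's network namespace, not on the host. Assuming a Podman 4.x build where unshare supports the --rootless-netns flag, that namespace can be inspected from the host:

# Enter podman's user namespace plus its rootless network namespace
# and list the rules that exist there
podman unshare --rootless-netns iptables --list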

Switching into the container (with the NET_ADMIN, NET_RAW and SYS_MODULE caps added), I get the following from lsmod:

iptable_nat            16384  1
iptable_filter         16384  1
ip6_udp_tunnel         16384  1 wireguard
nft_fib_ipv4           16384  1 nft_fib_inet
nft_fib_ipv6           16384  1 nft_fib_inet
nft_fib                16384  3 nft_fib_ipv6,nft_fib_ipv4,nft_fib_inet
nf_reject_ipv4         16384  1 nft_reject_inet
nf_reject_ipv6         20480  1 nft_reject_inet
nf_nat                 57344  4 xt_nat,nft_chain_nat,iptable_nat,xt_MASQUERADE
nf_defrag_ipv6         24576  1 nf_conntrack
nf_defrag_ipv4         16384  1 nf_conntrack
ip_set                 61440  0
nf_tables             270336  734 nft_ct,nft_compat,nft_reject_inet,nft_fib_ipv6,nft_objref,nft_fib_ipv4,nft_chain_nat,nft_reject,nft_fib,nft_fib_inet
nfnetlink              20480  4 nft_compat,nf_tables,ip_set
ipmi_devintf           20480  0
ipmi_msghandler       122880  1 ipmi_devintf
ip6_tables             36864  0
ip_tables              36864  2 iptable_filter,iptable_nat

This looks the same as on the host. iptables --list and iptables-nft --list both work, with the output being:

root@wireguard:/# iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source 
root@wireguard:/# iptables-nft --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
# Warning: iptables-legacy tables present, use iptables-legacy to see them

That warning for iptables-nft is interesting to me.
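
The two version strings above also hint at a backend mismatch: the container uses iptables v1.8.4 (legacy) while the host uses v1.8.7 (nf_tables). Comparing the backends directly is straightforward (a sketch):

# On the host
iptables --version                          # e.g. iptables v1.8.7 (nf_tables)

# Inside the container
podman exec wireguard iptables --version    # e.g. iptables v1.8.4 (legacy)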

The container boot is still hanging, though, at this log line:

iptables -A FORWARD -i wg0 -j ACCEPT; iptables -A FORWARD -o wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Running this command manually within the container, I get no feedback. I also tried swapping iptables for iptables-nft. I also noticed that the iptables rules from WireGuard reference eth0, but the network interface in the container is actually shown as eth0@if136. Right now I feel like I need more verbose output or logs to find out why it stalls.
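
One way to get that verbosity would be to trace the hanging command (a sketch; strace would need to be installed in the image and may itself need extra privileges in a rootless container):

# Trace the hanging rule to see which syscall it blocks on
strace -f -o /tmp/iptables.trace \
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# If it is blocked on the xtables lock (/run/xtables.lock),
# -w makes iptables wait for the lock explicitly instead of hanging silently
iptables -w -t nat -A POSTROUTING -o eth0 -j MASQUERADE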

Thanks for the pointers.