podman: A container publishing/listening on an IPv6 interface leads to all connections timing out

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

A container publishing/listening on an IPv6 interface leads to all connections timing out.

Steps to reproduce the issue:

  1. Create, as root, a container publishing to IPv6, e.g. podman container run --name proxy-cnt --pod ncpod --volume certs:/etc/nginx/certs:ro,z --volume proxy-vhost:/etc/nginx/vhost.d:z --volume proxy-html:/usr/share/nginx/html:z --volume /var/lib/containers/sources/proxy.d:/etc/nginx/conf.d:ro,Z --volume /var/lib/containers/sources/proxy_dhparam.pem:/etc/nginx/dhparam/dhparam.pem:ro,Z --publish 80:80 --publish 443:443 --publish [::]:80:80 --publish [::]:443:443 --env com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true --network proxy-nw --detach nginx:stable-alpine

  2. Try to reach https://[deed:beaf:etc] (i.e. through the IPv6 address of the host); see the curl sketch below.
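
An illustration of step 2 from a client machine; 2001:db8::10 is only a documentation placeholder for the host's real, globally reachable IPv6 address:

# substitute the host's actual IPv6 address for the placeholder
curl -vk 'https://[2001:db8::10]/'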

Describe the results you received:

The HTTPS connection hangs until it times out:

curl: (28) Operation timed out after 300507 milliseconds with 0 out of 0 bytes received

And nothing appears in the logs of the container.

Describe the results you expected:

That the IPv6 connection works as well as the IPv4 connection.

Additional information you deem important (e.g. issue happens only occasionally):

  1. After one night with the pod running, the IPv4 access seemed to have stopped working as well. But I’m not sure and haven’t had the chance to reproduce this behaviour.
  2. On IRC #podman, I described only that I am publishing [::]:443:443, and that was enough for @Luap99 to reproduce the issue, so it does not seem to be caused by my particular setup.
  3. I have had a similar setup working with Docker for years. Only the external address is IPv6; the internal container networks are IPv4 (and the container started is a reverse proxy).
  4. ss -tulpen shows that the container is listening on the right IPs and ports (see the checks sketched after this list).
  5. firewalld is on but the ports are open, and stopping the firewalld service made no difference.
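
A minimal sketch of the checks behind points 4 and 5; the port numbers are taken from the reproduction command, everything else is standard tooling:

# point 4: confirm the published ports have listeners on both address families
ss -tulpen | grep -E ':(80|443)\b'
# point 5: rule out firewalld by listing what it allows, then stopping it entirely
firewall-cmd --list-ports
firewall-cmd --list-services
systemctl stop firewalld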

Output of podman version:

Version:      2.0.0-rc7
API Version:  1
Go Version:   go1.13.4
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.15.0
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.18-1.module_el8.3.0+432+2e9cbcd8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.18, commit: 4d7da6016270217928f56161842ad4367c88dbb6'
  cpus: 4
  distribution:
    distribution: '"centos"'
    version: "8"
  eventLogger: file
  hostname: pe140centos.home
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-227.el8.x86_64
  linkmode: dynamic
  memFree: 14331277312
  memTotal: 16439975936
  ociRuntime:
    name: runc
    package: runc-1.0.0-66.rc10.module_el8.3.0+401+88d810c7.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 25186791424
  swapTotal: 25186791424
  uptime: 16h 5m 16.77s (Approximately 0.67 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 6
    paused: 0
    running: 5
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 8
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 1
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.13.4
  OsArch: linux/amd64
  Version: 2.0.0-rc7

Package info (e.g. output of rpm -q podman or apt list podman):

podman-2.0.0-0.9.rc7.module_el8.3.0+432+2e9cbcd8.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes: on Fedora 32 with podman 2.0.4, using httpd instead of nginx and port 80 instead of 443, I get the same result:

podman run --publish 80:80 --publish '[::]:80:80' httpd
$ curl http://127.0.0.1
<html><body><h1>It works!</h1></body></html>
$ curl http://[::1]
(no response; I didn't wait for the full time-out, but it hung for at least a few minutes)
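
Another data point worth collecting on the host is whether any IPv6 NAT rules were created for the published port at all; a rough diagnostic sketch (the exact chain and rule names depend on the installed CNI portmap plugin, so treat the grep pattern as an assumption):

# compare the DNAT rules set up for the published port on IPv4 vs. IPv6;
# if the ip6tables side comes back empty, [::]:80 is never forwarded to the container
sudo iptables  -t nat -S | grep -E -- '--dport 80|CNI'
sudo ip6tables -t nat -S | grep -E -- '--dport 80|CNI'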

Additional environment details (AWS, VirtualBox, physical, etc.):

Physical box, CentOS 8 Stream

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 58 (14 by maintainers)

Most upvoted comments

@tdashton Not quite: do not use the fe80:: range, as it is reserved for link-local addresses. If you need a private IPv6 subnet, you need to use an fd00::/8 (unique local) subnet, see https://simpledns.plus/private-ipv6

Also, I think it has to be a two-dimensional array (https://www.cni.dev/plugins/current/ipam/host-local/):

"ranges": [
[
            {
              "subnet": "10.88.0.0/16",
              "gateway": "10.88.0.1"
            }
],
[
            {
              "subnet": "fe80::/10",
              "gateway": "fe80::1:1:1:1"
            }
]
]
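
Putting the two remarks above together, a dual-stack network definition would roughly take this shape. This is a sketch only: the file name, bridge name, subnets and gateways are hypothetical placeholders, and only the two-dimensional ranges layout comes from the host-local documentation (proxy-nw is the network name from the report):

# hypothetical file name and values; adjust to the actual proxy-nw network
cat > /etc/cni/net.d/proxy-nw.conflist <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "proxy-nw",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni-proxy-nw",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [
            { "subnet": "10.89.0.0/24", "gateway": "10.89.0.1" }
          ],
          [
            { "subnet": "fd00:abcd::/64", "gateway": "fd00:abcd::1" }
          ]
        ],
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    },
    {
      "type": "firewall"
    }
  ]
}
EOF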