podman: [v1.8] /kind bug Unable to access application publicly/outside after exposing port with podman

BUG REPORT

Description

This is essentially an exact copy of https://github.com/containers/libpod/issues/4715

I have 6 containers running, and podman ps confirms they are. However, netstat -ntlp on the host does not show the ports published by the containers. Each container can reach all of the others internally, but the published ports are not reachable from outside the containers. So if my API runs on port 8000, I cannot access it from the host, but I can if I go into any of the containers.
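
For reference, this is roughly how I check it; the container name api and port 8000 below are placeholders for one of my services:

# Podman itself still reports the port as published
$ podman port api

# From the host the published port is unreachable
$ curl -I localhost:8000
curl: (7) Failed to connect to localhost port 8000: Connection refused

# From inside the container the same port answers (assuming curl exists in the image)
$ podman exec -it api curl -I localhost:8000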

Steps to reproduce the issue:

podman-compose up
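
The compose file itself does not seem to be essential; publishing a single service by hand the same way would look roughly like this (the service name api and the image my-api-image are placeholders, not my actual stack):

# Run one service detached and publish container port 8000 on the host
$ podman run -d --name api -p 8000:8000 my-api-image

# Expected to be reachable from the host, but in my case it is not
$ curl -I localhost:8000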

Describe the results you received:

The containers are running, but their published ports are not accessible from outside the containers.

Describe the results you expected:

I expect the published ports to be accessible from outside the containers.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:            1.8.0
RemoteAPI Version:  1
Go Version:         go1.13.6
OS/Arch:            linux/amd64

Output of podman info --debug:

$ podman info --debug
debug:
  compiler: gc
  git commit: ""
  go version: go1.13.6
  podman version: 1.8.0
host:
  BuildahVersion: 1.13.1
  CgroupVersion: v2
  Conmon:
    package: conmon-2.0.10-2.fc31.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.10, commit: 6b526d9888abb86b9e7de7dfdeec0da98ad32ee0'
  Distribution:
    distribution: fedora
    version: "31"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 2328224
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 2328224
      size: 65536
  MemFree: 393715712
  MemTotal: 16372674560
  OCIRuntime:
    name: crun
    package: crun-0.12.1-1.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.12.1
      commit: df5f2b2369b3d9f36d175e1183b26e5cee55dd0a
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  SwapFree: 412708864
  SwapTotal: 943714304
  arch: amd64
  cpus: 8
  eventlogger: journald
  hostname: localhost.localdomain
  kernel: 5.4.17-200.fc31.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: slirp4netns-0.4.0-20.1.dev.gitbbd6f25.fc31.x86_64
    Version: |-
      slirp4netns version 0.4.0-beta.3+dev
      commit: bbd6f25c70d5db2a1cd3bfb0416a8db99a75ed7e
  uptime: 56m 40.69s
registries:
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
store:
  ConfigFile: /home/user.local/.config/containers/storage.conf
  ContainerStore:
    number: 20
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.5-2.fc31.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 0.7.5
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  GraphRoot: /home/user.local/.local/share/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 19
  RunRoot: /run/user/1001
  VolumePath: /home/user.local/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.8.0-2.fc31.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):

Fedora 31

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 35 (8 by maintainers)

Most upvoted comments

Since upgrading to Podman v1.8.0 I’ve also started having this issue on two different machines (both running Ubuntu 19.10), so I had to downgrade to v1.7.0.

I can consistently reproduce the issue like this:

> podman run --rm -d -ti -p 8000:8000 --userns=keep-id python:2.7-alpine python -m SimpleHTTPServer
2a8fc5c6b3917741e6d49207c2545cdc5d64773fa50c216642a356791852cc1e

> curl -I localhost:8000
curl: (7) Failed to connect to localhost port 8000: Connection refused
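
To rule out the server simply not coming up, something like the following (container ID from the run above, wget from the Alpine image's busybox) confirms it responds from inside its own network namespace; only the host-side access fails:

# Fetch the page from inside the container
> podman exec 2a8fc5c6b391 wget -qO- http://localhost:8000/ >/dev/null && echo "reachable from inside"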

However, if I remove either the -d or the --userns=keep-id flag, it works:

> podman run --rm -d -ti -p 8000:8000 python:2.7-alpine python -m SimpleHTTPServer
ec84418f635af5a0f9b07156b8537cf8ec7f868b9017d02a08c1fad569df5f4d

> curl -I localhost:8000
HTTP/1.0 200 OK
Server: SimpleHTTP/0.6 Python/2.7.17
Date: Wed, 12 Feb 2020 22:44:47 GMT
Content-type: text/html; charset=UTF-8
Content-Length: 666
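
A quick host-side comparison between the two cases is to check whether anything in the host network namespace is bound to the published port at all; in the working case above something shows up on :8000, while in the failing --userns=keep-id case nothing seems to:

# Host network namespace: who is listening on the published port?
> ss -ntlp | grep :8000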

The point is that I’m not convinced this is a genuine workaround, as opposed to just one point in a series of intermittent failures. I have also managed to reproduce the problem with a fresh container:

√ podman run --name tbw -v /tmp/bw-data:/data -p 7080:80 bitwardenrs/server:alpine
✗ curl http://localhost:7080/
curl: (7) Failed to connect to localhost port 7080: Connection refused
√ podman ps 
CONTAINER ID  IMAGE                                      COMMAND        CREATED         STATUS                 PORTS                    NAMES
9e9e0dcbac9b  docker.io/bitwardenrs/server:alpine        /bitwarden_rs  34 seconds ago  Up 33 seconds ago      0.0.0.0:7080->80/tcp     tbw
√ podman exec -it 9e9 /bin/sh
/ # netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:7080            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1/bitwarden_rs
/ # curl http://localhost:7080/
<!DOCTYPE html>
<html>

<head>
...

So in this case the port mapping has again been created inside the container rather than exposed on the host.
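
Since the mapping clearly exists inside the container, the missing piece appears to be on the host side. For a rootless setup like the one shown in podman info above (slirp4netns), the checks I use look roughly like this:

# Nothing is bound to the published port in the host network namespace
$ ss -ntlp | grep :7080

# Is the rootless networking helper running for the container at all?
$ pgrep -fa slirp4netns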