podman: Network and restart issues with Exited containers in named networks

/kind bug

Description

Exited containers attached to named networks cannot be restarted, and deleting them leaves orphaned iptables rules in place.

Steps to reproduce the issue:

  1. podman network create testing

  2. podman run -d --name test1 --network testing -p 8141:80 busybox echo Test

  3. podman ps -a

724f841d17dc  docker.io/library/busybox:latest         echo Test             2 seconds ago      Exited (0) 3 seconds ago  0.0.0.0:8141->80/tcp                                  test1
  4. podman restart test1
ERRO[0000] Error adding network: failed to allocate for range 0: 10.89.1.2 has been allocated to 724f841d17dc559fd4151a162cad478ef07987e6ba37a22341ee81c93f1eeaa3, duplicate allocation is not allowed 
ERRO[0000] Error while adding pod to CNI network "testing": failed to allocate for range 0: 10.89.1.2 has been allocated to 724f841d17dc559fd4151a162cad478ef07987e6ba37a22341ee81c93f1eeaa3, duplicate allocation is not allowed 
Error: error configuring network namespace for container 724f841d17dc559fd4151a162cad478ef07987e6ba37a22341ee81c93f1eeaa3: failed to allocate for range 0: 10.89.1.2 has been allocated to 724f841d17dc559fd4151a162cad478ef07987e6ba37a22341ee81c93f1eeaa3, duplicate allocation is not allowed
  5. podman rm test1

  6. iptables -L -v -n -t nat | grep 724f841d17dc

    0     0 CNI-0f5128dbe0b1899499f26d93  all  --  *      *       10.89.1.2            0.0.0.0/0            /* name: "testing" id: "724f841d17dc559fd4151a162cad478ef07987e6ba37a22341ee81c93f1eeaa3" */
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.89.1.0/24         /* name: "testing" id: "724f841d17dc559fd4151a162cad478ef07987e6ba37a22341ee81c93f1eeaa3" */
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4          /* name: "testing" id: "724f841d17dc559fd4151a162cad478ef07987e6ba37a22341ee81c93f1eeaa3" */
    0     0 CNI-DN-0f5128dbe0b1899499f26  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* dnat name: "testing" id: "724f841d17dc559fd4151a162cad478ef07987e6ba37a22341ee81c93f1eeaa3" */ multiport dports 8141
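
A manual workaround for the restart failure in step 4 is possible, assuming the network uses CNI's default host-local IPAM plugin: that plugin records each allocation as a file named after the IP under /var/lib/cni/networks/<network name>, and the lease of an exited container is never released. Deleting the stale lease file frees the address again (the path below matches this reproduction and may differ on other setups):

    # release the leaked host-local IPAM lease, then the restart succeeds
    rm /var/lib/cni/networks/testing/10.89.1.2
    podman restart test1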

Describe the results you received:

The container could not be restarted, and removing it left stale iptables rules behind that intercepted traffic meant for the next container publishing the same port.
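
Until the cleanup is fixed, the orphaned rules can be purged by filtering the saved nat ruleset on the removed container's ID. This is only a sketch: it assumes no unrelated rule carries that ID in its comment, and it leaves the now-unreferenced per-container CNI chains in place (harmless once nothing jumps to them). Back up the ruleset first, e.g. iptables-save > /root/nat-backup.rules.

    # drop every nat rule tagged with the dead container's ID, reload the table
    iptables-save -t nat \
      | grep -v 724f841d17dc559fd4151a162cad478ef07987e6ba37a22341ee81c93f1eeaa3 \
      | iptables-restore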

Describe the results you expected:

Restarting the container should work, and iptables entries should be cleaned up when the container that triggered their creation is removed.
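
For reference, a quick way to verify the expected cleanup (using the container ID from this reproduction):

    # should print nothing after "podman rm test1" once cleanup works
    iptables -t nat -S | grep 724f841d17dc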

Additional information you deem important (e.g. issue happens only occasionally):

Happens always.

Output of podman version:

Version:      3.1.2
API Version:  3.1.2
Go Version:   go1.15.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.20.1
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.27, commit: '
  cpus: 4
  distribution:
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: xyz
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.8.0-55-generic
  linkmode: dynamic
  memFree: 6839197696
  memTotal: 8347869184
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 0.20.1.5-925d-dirty
      commit: 0d42f1109fd73548f44b01b3e84d04a279e99d2e
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: true
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    selinuxEnabled: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 2147479552
  swapTotal: 2147479552
  uptime: 56m 17.23s
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 8
    paused: 0
    running: 8
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 18
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.1.2
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 3.1.2

Package info (e.g. output of rpm -q podman or apt list podman):

# apt list podman                          
Listing... Done
podman/unknown,now 100:3.1.2-1 amd64 [installed]
podman/unknown 100:3.1.2-1 arm64
podman/unknown 100:3.1.2-1 armhf
podman/unknown 100:3.1.2-1 s390x

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

This is the latest version available for Ubuntu 20.04 via apt.

Additional environment details (AWS, VirtualBox, physical, etc.):

VM on Contabo.

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 25 (14 by maintainers)

Most upvoted comments

I suspect that we’ll break something, but I don’t think it’ll be bad enough that we can’t fix whatever happens. Worth a shot IMO.