podman: "Address already in use" when restarting container

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Steps to reproduce the issue:

  1. sudo podman run --name=test_port -d -p 9999:9999 ubuntu:18.04 sleep infinity
  2. sudo podman --log-level debug restart test_port

Describe the results you received: The restart fails with “address already in use”. Stopping and then starting the container works, however (a sketch of that sequence appears after the expected-results section below). Debug output from the restart:

DEBU[0000] Initializing boltdb state at /home/docker-data/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/docker-data
DEBU[0000] Using run root /var/run/containers/storage
DEBU[0000] Using static dir /home/docker-data/libpod
DEBU[0000] Using tmp dir /var/run/libpod
DEBU[0000] Using volume path /home/docker-data/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] cached value indicated that native-diff is usable
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
DEBU[0000] Initializing event backend journald
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
DEBU[0000] Setting maximum workers to 2
DEBU[0000] Stopping ctr 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767 (timeout 10)
DEBU[0000] Stopping container 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767 (PID 1253)
DEBU[0000] Sending signal 15 to container 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767
DEBU[0010] container 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767 did not die within timeout 10000000000
WARN[0010] Timed out stopping container 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767, resorting to SIGKILL
DEBU[0010] Created root filesystem for container 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767 at /home/docker-data/overlay/f1399b213c963b07f07ccd091f8b3ce133d4aaacec12b859c99fd1b106c2a024/merged
DEBU[0010] Recreating container 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767 in OCI runtime
DEBU[0010] Successfully cleaned up container 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767
DEBU[0010] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0010] Setting CGroups for container 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767 to machine.slice:libpod:5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767
DEBU[0010] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0010] reading hooks from /etc/containers/oci/hooks.d
DEBU[0010] Created OCI spec for container 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767 at /home/docker-data/overlay-containers/5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767/userdata/config.json
DEBU[0010] /usr/libexec/podman/conmon messages will be logged to syslog
DEBU[0010] running conmon: /usr/libexec/podman/conmon    args="[-s -c 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767 -u 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767 -n test_port -r /usr/sbin/runc -b /home/docker-data/overlay-containers/5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767/userdata -p /var/run/containers/storage/overlay-containers/5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767/userdata/pidfile --exit-dir /var/run/libpod/exits --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/docker-data --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767 --socket-dir-path /var/run/libpod/socket -l k8s-file:/home/docker-data/overlay-containers/5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767/userdata/ctr.log --log-level debug --syslog]"
DEBU[0010] Cleaning up container 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767
DEBU[0010] Tearing down network namespace at /var/run/netns/cni-a36cd478-3857-e576-6e7d-face72a879d1 for container 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767
INFO[0010] Got pod network &{Name:test_port Namespace:test_port ID:5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767 NetNS:/var/run/netns/cni-a36cd478-3857-e576-6e7d-face72a879d1 PortMappings:[{HostPort:9999 ContainerPort:9999 Protocol:tcp HostIP:}] Networks:[] NetworkConfig:map[]}
INFO[0010] About to del CNI network podman (type=bridge)
DEBU[0010] unmounted container "5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767"
DEBU[0010] Failed to restart container 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767: cannot listen on the TCP port: listen tcp4 :9999: bind: address already in use
DEBU[0010] Worker#0 finished job [(*LocalRuntime) Restart func1]/5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767 (cannot listen on the TCP port: listen tcp4 :9999: bind: address already in use)
DEBU[0010] Pool[restart, 5871e240b73da769154be80cc039e9fdbd0bbe8167f0f3d02e3649cd0e7d5767: cannot listen on the TCP port: listen tcp4 :9999: bind: address already in use]
ERRO[0010] cannot listen on the TCP port: listen tcp4 :9999: bind: address already in use
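A hedged diagnostic sketch: after the failed restart, check which process is still bound to host port 9999. Given the maintainer comments at the end of this issue, the expectation is that the old conmon process is still holding the socket.

  # List the listener on port 9999 together with the owning process
  sudo ss -tlnp | grep 9999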

Describe the results you expected: The container should restart successfully.
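For reference, the stop/start sequence mentioned above does work where restart fails; a minimal sketch of that workaround:

  # Stop the container, then start it again instead of using restart
  sudo podman stop test_port
  sudo podman start test_port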

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 1.4.4

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.10.8
  podman version: 1.4.4
host:
  BuildahVersion: 1.9.0
  Conmon:
    package: podman-1.4.4-2.el7.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 0.3.0, commit: unknown'
  Distribution:
    distribution: '"rhel"'
    version: "7.6"
  MemFree: 176705536
  MemTotal: 1920000000
  OCIRuntime:
    package: containerd.io-1.2.5-3.1.el7.x86_64
    path: /usr/sbin/runc
    version: |-
      runc version 1.0.0-rc6+dev
      commit: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
      spec: 1.0.1-dev
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 1
  hostname: MASKED
  kernel: 3.10.0-957.21.3.el7.MASKED.20190617.34.x86_64
  os: linux
  rootless: false
  uptime: 114h 57m 18.7s (Approximately 4.75 days)
registries:
  blocked: null
  insecure: null
  search:
  - registry.access.redhat.com
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 8
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /home/docker-data
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 15
  RunRoot: /var/run/containers/storage
  VolumePath: /home/docker-data/volumes

Additional environment details (AWS, VirtualBox, physical, etc.): OpenStack VM

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 24 (17 by maintainers)

Most upvoted comments

We fixed this one by forcibly killing Conmon on restart, I believe

It’s a bug in conmon, preparing a fix.
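A hedged sketch of the manual recovery those comments imply, for anyone hitting this before the fix: kill the stale conmon process for the container, then start it again. The pkill pattern below is an assumption, based on the -n test_port argument visible in the conmon command line in the debug log above.

  # Assumption: match the stale conmon via the container-name argument
  # (-n test_port) from its command line, then start the container again
  sudo pkill -f 'conmon.*-n test_port'
  sudo podman start test_port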