# podman: Container restart fails with "address already in use"
**Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)**

/kind bug
**Description**

Restarting a rootless Podman container fails for me with an "address already in use" error. It does not happen every time (the way the image works might somehow contribute, but in my opinion Podman should handle that regardless).
**Steps to reproduce the issue:**

```
➜ ~ podman run -d -p 8000:8000 --name wiki crazymax/dokuwiki:edge
ba5b7ab5b59d3c84c05dcc795a3cb81247956aa7d31067538501daa5339d0cd9
➜ ~ podman restart wiki
Error: rootlessport listen tcp 0.0.0.0:8000: bind: address already in use
➜ ~ podman restart wiki
ba5b7ab5b59d3c84c05dcc795a3cb81247956aa7d31067538501daa5339d0cd9
➜ ~ podman restart wiki
ba5b7ab5b59d3c84c05dcc795a3cb81247956aa7d31067538501daa5339d0cd9
➜ ~ podman restart wiki
Error: rootlessport listen tcp 0.0.0.0:8000: bind: address already in use
```
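When a restart fails like this, it can help to check whether a stale `rootlessport` listener is still holding the published port. A minimal diagnostic sketch (assumptions: Linux with `ss` and `pgrep` available, port 8000 as in the reproduction above; `check_stale_port` is a hypothetical helper of my own, not a Podman command):

```shell
# Hypothetical helper: report whether anything still listens on a port,
# and if so, list any rootlessport processes that might be holding it.
check_stale_port() {
  port=$1
  if ss -tln 2>/dev/null | grep -q ":${port} "; then
    echo "port ${port} is still bound"
    # Show candidate stale port forwarders, if any (may print nothing).
    pgrep -af rootlessport || true
    return 1
  fi
  echo "port ${port} is free"
}
```

If this reports the port as still bound right after the container stops, the leftover listener (rather than the new container) is the likely source of the bind error.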
**Describe the results you expected:**

Successful restarts every time.
**Additional information you deem important (e.g. issue happens only occasionally):**

**Output of `podman version`:**
```
Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.15.5
Built:        Tue Dec 8 15:37:50 2020
OS/Arch:      linux/amd64
```
**Output of `podman info --debug`:**
```yaml
host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.26-1.fc33.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.26, commit: 777074ecdb5e883b9bec233f3630c5e7fa37d521'
  cpus: 8
  distribution:
    distribution: fedora
    version: "33"
  eventLogger: journald
  hostname: apollo13
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.10.15-200.fc33.x86_64
  linkmode: dynamic
  memFree: 8343269376
  memTotal: 16419074048
  ociRuntime:
    name: crun
    package: crun-0.17-1.fc33.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.17
      commit: 0e9229ae34caaebcb86f1fde18de3acaf18c6d9a
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.fc33.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 38654697472
  swapTotal: 38654697472
  uptime: 19m 6.72s
registries:
  search:
  - docker.io
store:
  configFile: /home/florian/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.4.0-1.fc33.x86_64
      Version: |-
        fusermount3 version: 3.9.3
        fuse-overlayfs: version 1.4
        FUSE library version 3.9.3
        using FUSE kernel interface version 7.31
  graphRoot: /home/florian/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 19
  runRoot: /run/user/1000/containers
  volumePath: /home/florian/.local/share/containers/storage/volumes
version:
  APIVersion: 2.1.0
  Built: 1607438270
  BuiltTime: Tue Dec 8 15:37:50 2020
  GitCommit: ""
  GoVersion: go1.15.5
  OsArch: linux/amd64
  Version: 2.2.1
```
**Package info (e.g. output of `rpm -q podman` or `apt list podman`):**

```
podman-2.2.1-1.fc33.x86_64
```
**Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?**

No; I can retest as soon as Podman 3 lands in Fedora 33.
**About this issue**

- Original URL
- State: closed
- Created 3 years ago
- Reactions: 1
- Comments: 20 (6 by maintainers)
For others stumbling into this: after I followed the instructions in https://github.com/containers/podman/issues/12983,

```
$ sudo apt install dbus-user-session
> wsl --shutdown
```

…I could no longer reproduce the `bind: address already in use` issue.
Workarounds that kind of work, but which I'm sure have significant drawbacks:

```
pkill containers-rootlessport
```
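Since the failure is intermittent, another stopgap with its own drawbacks is to simply retry the restart a few times rather than killing the port forwarder. This is purely a sketch of my own, not anything from the Podman docs; `restart_with_retry` and the container name `wiki` are assumptions for illustration:

```shell
# Hypothetical retry wrapper: attempt `podman restart` up to N times,
# sleeping briefly between attempts so a stale listener can go away.
restart_with_retry() {
  name=$1
  tries=${2:-3}
  i=1
  while [ "$i" -le "$tries" ]; do
    if podman restart "$name" >/dev/null 2>&1; then
      echo "restarted ${name} on attempt ${i}"
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "failed to restart ${name} after ${tries} attempts" >&2
  return 1
}
```

Usage would be e.g. `restart_with_retry wiki 5`; this only papers over the race and does nothing about the underlying stale bind.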
Any chance you can retest with conmon 2.0.27 (just released)? It fixed a similar issue for root Podman, so I think it might also fix this one.