podman: The status of the container is stuck in stopping.
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I was connected to one of my containers via SSH and was reading some code when I stepped away for lunch 😦. When I came back, the SSH session had been disconnected and the container was no longer accessible. I tried podman stop, podman restart, and podman kill, but nothing worked. I am certain that after connecting to the container over SSH I only read code and did not run any programs. My question is how to get this container out of the "stopping" state so that I can use it again. Thank you!
Steps to reproduce the issue:
1. Use ssh to connect to a container: ssh username@IP -p <container's port>
2. Go to lunch 😦
Describe the results you received:
The container's status remains stuck in "stopping".
xx represents my container ID.
podman stop xx
Error: can only stop created or running containers. xx is in state stopping: container state improper
podman restart xx
Error: unable to restart a container in a paused or unknown state: container state improper
podman kill xx
ERRO[0000] container not running
Error: error sending signal to container xx : /usr/bin/runc kill xx 9 failed: exit status 1
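For anyone hitting the same state, a hedged sketch of commands that can help diagnose a container wedged like this (the container ID xx is a placeholder; the exact inspection fields are assumptions based on standard podman inspect output, and podman rm -f is destructive):

```shell
# Check the state Podman has recorded for the container,
# including the PID it believes is still running.
podman inspect --format '{{.State.Status}} {{.State.Pid}}' xx

# See whether the conmon monitor or container processes are
# still alive; a leftover process can keep the state in "stopping".
ps -ef | grep -E 'conmon|xx'

# Last resort: force-remove the container (this destroys it).
podman rm -f xx
```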
Describe the results you expected:
The container's status is either "stopped" or "running".
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
podman version 4.0.0-dev
Output of podman info:
host:
  arch: amd64
  buildahVersion: 1.23.0
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.1.2-1.1.6.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.2, commit: '
  cpus: 80
  distribution:
    distribution: '"centos"'
    version: "8"
  eventLogger: journald
  hostname: localhost.localdomain
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-348.7.1.el8_5.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 56096075776
  memTotal: 134365310976
  ociRuntime:
    name: crun
    package: crun-1.0-1.module_el8.5.0+911+f19012f9.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.0
      commit: 139dc6971e2f1d931af520188763e984d6cdfbf8
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT,CAP_AUDIT_CONTROL,CAP_AUDIT_WRITE
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module_el8.5.0+890+6b136101.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 694501376
  swapTotal: 4294963200
  uptime: 330h 10m 25.02s (Approximately 13.75 days)
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 21
    paused: 0
    running: 4
    stopped: 17
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 23
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.0.0-dev
  Built: 1633015040
  BuiltTime: Thu Sep 30 23:17:20 2021
  GitCommit: ""
  GoVersion: go1.16.7
  OsArch: linux/amd64
  Version: 4.0.0-dev
Package info (e.g. output of rpm -q podman or apt list podman or brew info podman):
podman-4.0.0-0.10.module_el8.6.0+944+d413f95e.x86_64
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
No
Additional environment details (AWS, VirtualBox, physical, etc.):
About this issue
- State: open
- Created 2 years ago
- Comments: 16 (8 by maintainers)
OK, I thought I had seen a wrong check, but Podman allows stopping a container in the "stopping" state precisely to avoid such a situation.
@gbraad if you run into this situation again, please collect debug logs via
podman --log-level=debug. Access to the machine may help as well.
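For example (an illustrative invocation, not the maintainers' exact command; the log file name is arbitrary):

```shell
# Re-run the failing command with debug logging enabled and
# capture the debug output, which goes to stderr.
podman --log-level=debug stop xx 2> podman-stop-debug.log
```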