podman: podman in corrupted state after filesystem filled up to 100%
/kind bug
Hello,
Env: rootless containers in a user namespace; 6/6 containers were running fine, managed by systemd.
Crash: /home filled up to 100%.
Current state: 5 of 6 containers are working again; 1 has problems.
Infos:

```
/bin/podman run --rm --name test_service --image-volume=ignore --authfile /home/cadmin/.podman_creds.json registry.example/test/alpine:3.12.0
Error: error creating container storage: the container name "test_service" is already in use by "6e5d7bcf14a33187db1667493281a2a939859954b4a90c54de168243411fada9". You have to remove that container to be able to reuse that name.: that name is already in use
```
`/bin/podman ps -a` is not showing any container other than the 5 running ones. I would expect a stopped/exited/created one.
I also tried --sync.

```
/bin/podman rm -f --storage 6e5d7bcf14a33187db1667493281a2a939859954b4a90c54de168243411fada9
Error: error unmounting container "6e5d7bcf14a33187db1667493281a2a939859954b4a90c54de168243411fada9": layer not known
```

Debug level also didn't show any other errors. Where does podman search for these names?
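As far as I can tell, names are tracked in two places for a rootless Podman of this version: Podman's own database and the containers/storage records. A minimal sketch of where to look, assuming the default rootless paths that also appear in the `podman info` output below:

```
# Podman's libpod state database (assumed default rootless location under graphRoot)
ls -l ~/.local/share/containers/storage/libpod/bolt_state.db
# containers/storage's list of container layers and their names
cat ~/.local/share/containers/storage/overlay-containers/containers.json
```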
Output of podman version:
```
Version:      2.0.4
API Version:  1
Go Version:   go1.13.4
Built:        Thu Jan 1 01:00:00 1970
OS/Arch:      linux/amd64
```
Output of podman info --debug:
```
host:
  arch: amd64
  buildahVersion: 1.15.0
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.20-1.el8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.20, commit: 838d2c05b5b53eff3f1cd1a06dbd81d8153feea3'
  cpus: 4
  distribution:
    distribution: '"centos"'
    version: "8"
  eventLogger: file
  hostname: herewasahostname
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
  kernel: 4.18.0-193.6.3.el8_2.x86_64
  linkmode: dynamic
  memFree: 8551780352
  memTotal: 16644939776
  ociRuntime:
    name: runc
    package: runc-1.0.0-65.rc10.module_el8.2.0+305+5e198a41.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  os: linux
  remoteSocket:
    path: /run/user/1002/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-0.4.2-3.git21fdece.module_el8.2.0+305+5e198a41.x86_64
    version: |-
      slirp4netns version 0.4.2+dev
      commit: 21fdece2737dc24ffa3f01a341b8a6854f8b13b4
  swapFree: 5000392704
  swapTotal: 5003800576
  uptime: 7h 12m 54.6s (Approximately 0.29 days)
registries:
  search:
  - registry.example.de
store:
  configFile: /home/user/.config/containers/storage.conf
  containerStore:
    number: 8
    paused: 0
    running: 8
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.2-5.module_el8.2.0+305+5e198a41.x86_64
      Version: |-
        fuse-overlayfs: version 0.7.2
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/user/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 17
  runRoot: /tmp/run-1002
  volumePath: /home/user/.local/share/containers/storage/volumes
version:
  APIVersion: 1
  Built: 0
  BuiltTime: Thu Jan 1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.13.4
  OsArch: linux/amd64
  Version: 2.0.4
```
Package info (e.g. output of rpm -q podman or apt list podman):
```
podman-2.0.4-1.el8.x86_64
```
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Yes, I checked the Troubleshooting Guide.
(Note: I changed the names in the output above.)
About this issue
- State: closed
- Created 4 years ago
- Comments: 17 (11 by maintainers)
I'm noting a few issues immediately:

- `KillMode=none` on unit files launching Podman. We launch several processes after the container exits to clean up after it, and systemd has an annoying habit of shutting down these cleanup processes mid-execution when it wants to stop or restart a unit, which can lead to issues depending on when it was stopped.
- `Type=forking` and using PID files to manage Podman under systemd. The container is not actually a direct child of Podman (it's a child of a monitor process we launch called Conmon, which double-forks to daemonize before launching the container) and, as part of creating the container, we also leave the cgroup of the systemd unit, so systemd can't actually track the state of the container itself unless given a PID file.

You can use `podman generate systemd --new` to generate a sample unit file that shows our recommended format for these; a sketch of that shape follows.
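A minimal sketch of such a unit, using a hypothetical `container-test_service.service` built from the image in this report (illustrative, not verbatim generator output; the exact output of `podman generate systemd --new` varies by Podman version):

```
# container-test_service.service -- illustrative sketch, not verbatim generator output
[Unit]
Description=Podman container-test_service.service
Wants=network.target
After=network-online.target

[Service]
Restart=on-failure
# Remove stale PID/CID files before starting a fresh container
ExecStartPre=/bin/rm -f %t/container-test_service.pid %t/container-test_service.ctr-id
ExecStart=/usr/bin/podman run --conmon-pidfile %t/container-test_service.pid --cidfile %t/container-test_service.ctr-id --cgroups=no-conmon -d --name test_service registry.example/test/alpine:3.12.0
ExecStop=/usr/bin/podman stop --ignore --cidfile %t/container-test_service.ctr-id -t 10
ExecStopPost=/usr/bin/podman rm --ignore -f --cidfile %t/container-test_service.ctr-id
# The PID file lets systemd track Conmon, which double-forks away from Podman
PIDFile=%t/container-test_service.pid
Type=forking

[Install]
WantedBy=multi-user.target default.target
```

With a unit like this, systemd tracks Conmon through the PID file instead of assuming the container is a direct child of Podman.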
As of Podman v2.1.1, you can use `podman ps --storage` to see containers that are not in Podman's database but are present in the storage library. They can then be removed via `podman rm --storage` on the container ID.

I manually removed the entry from storage/overlay-containers/containers.json whose ID matches the one in the error message. Then it seems to work well.
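For the manual route, a cautious sketch (assuming `jq` is installed and the default rootless storage path from the info output above); stop the affected containers first and keep a backup:

```
cd ~/.local/share/containers/storage/overlay-containers
cp containers.json containers.json.bak
# Show the stale record; the file is a JSON array of {id, names, layer, ...} objects
jq '.[] | select(.id == "6e5d7bcf14a33187db1667493281a2a939859954b4a90c54de168243411fada9")' containers.json
# Write a copy with that record dropped, then move it into place
jq '[.[] | select(.id != "6e5d7bcf14a33187db1667493281a2a939859954b4a90c54de168243411fada9")]' containers.json > containers.json.new
mv containers.json.new containers.json
```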
`podman system reset`

should clean up all containers and images and reset you to the initial state.
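Note that this is the nuclear option; a sketch of the full cycle, with a hypothetical unit name:

```
# WARNING: this removes ALL of this user's containers, images, and volumes
podman system reset --force
# Then re-create the containers, e.g. via the systemd units
systemctl --user daemon-reload
systemctl --user start container-test_service.service   # hypothetical unit name
```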