podman: Podman may be leaking storage files after cleanup (rootless)

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I am unable to fully clean up rootless podman's container storage using podman commands. In an effort to clean my home directory I tried to have podman “clean up after itself”, but a number of fairly large directories are left behind.

Willing to be told this is PEBKAC and that I missed a command, but I couldn't find one in the docs that jumped out at me. I assumed that podman system prune -a would be the ultimate clean-up command, yet 18G of data are still left behind in ~/.local/share/containers/storage.
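For anyone comparing, a minimal sketch of how to see what podman itself still accounts for versus what is actually on disk (assuming a podman build that has podman system df and the default rootless storage path):

# What podman thinks it is using (images, containers, volumes)
podman system df

# What is actually on disk, viewed from inside the rootless user namespace
# so that files owned by mapped sub-UIDs are readable without sudo
podman unshare du -sh ~/.local/share/containers/storage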

Steps to reproduce the issue:

  1. Delete all running containers.
  2. Remove all old containers: podman container prune
  3. Remove most old storage: podman image prune
  4. Force-remove some aliased images: podman image rm --force ...
  5. Try to remove more: podman system prune -a (a consolidated sketch of these commands follows this list)
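For reference, the consolidated sketch mentioned in step 5, roughly the commands run in order (IMAGE_ID is only an illustrative placeholder for the aliased images in step 4):

# 1-2. stop and remove containers
podman stop -a           # stop anything still running
podman rm -a             # delete the stopped containers
podman container prune   # remove any remaining exited containers

# 3-4. remove images
podman image prune
podman image rm --force IMAGE_ID   # IMAGE_ID is a placeholder, not a real image

# 5. remove everything else podman knows about
podman system prune -a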

Describe the results you received: 18G still used in ~/.local/share/containers/storage

Describe the results you expected: Storage usage in the megabytes or below range.

Additional information you deem important (e.g. issue happens only occasionally):

du info and paths:

[jmulliga@popcorn ~/.local/share/containers/storage]$ sudo du -sh .
18G     .
[jmulliga@popcorn ~/.local/share/containers/storage]$ sudo du -sh vfs/dir/*
209M    vfs/dir/042f0e57cc358272ea11fb0d0f3caef848ee0bce9c3cea8f63269b80612da061
511M    vfs/dir/06507e0668ac7f050b4913208e2e0f2edca47a9489d4e02a78b68f1aaf664a78
511M    vfs/dir/0aafc6c55e06fb285d23d3df950c6a73c496fa803affb9d596083fa5a7aff88c
728M    vfs/dir/0d2713c3b786ff706733d8ea434594daa78737339e7fd138ca39c9c3ea80c366
150M    vfs/dir/15a244bb49eca26e25b5ffdb47b98c95eb6b2285913af8970f784502a93ed1d1
366M    vfs/dir/1a11e1453f3b00267b3db7110d69e4e875af7982efaa11a1a4f2e3d51ebba600
378M    vfs/dir/1cddfee79778381fee0cf250d168a766eb34b09e888abd765f698562d20d6941
324M    vfs/dir/28d6ab72867c6b41ec2241d255548cb1de326c1271055a68305cad218619ceea
461M    vfs/dir/2efe3d19536c9fa50ad12ab3a955e9894537a7599cb24d9e40d8dfdfc0dcf31d
517M    vfs/dir/328661f5ed5f53ed42342ef659a9c8412cde4ba475337549a58ae35fe182da73
155M    vfs/dir/42442b8f704229719018208e1209d59e8a1a0be56447589137795354324bf426
347M    vfs/dir/4478698b1e6885a99c86c55df8b3d8d9453b82a05becebfcf96dbf08e7cf561d
834M    vfs/dir/49366c0c9555a454a315ec4099727367cfcb02c2ebc51ec68e676381fa25e067
461M    vfs/dir/52b87403beb8b40af5b7293be169411ccc33fa636bc15d57cbad790504d2df43
238M    vfs/dir/57449bbe19ba1375b80cf4163251805f074229a6f2f26a6114185ef21229085f
761M    vfs/dir/5bda29dd8a0ae426fe1ac7b6227da0c9dd62bc6d71b5bd5389583fa0b773ae51
86M     vfs/dir/6063da5b498749d56c29c2bc0cc32b59454695b92a3766b036e27281511c3222
508M    vfs/dir/66921a25400861ca0b7c0dd1c5af0791bc811adc01c6a8f1aad6f2480b31d6d1
259M    vfs/dir/6c813be9b028415af6c54f27751b3af132a00a6a5add41e84ff8ced79d5a1499
511M    vfs/dir/730487ea007c1a0a651797fe3286e5ea762fa4db959f98f7058bb0e8063cf9ae
854M    vfs/dir/784a1a8d959e4bf2c80880c611a02f07136a0b60559ec56a11f44040f58a1460
581M    vfs/dir/7f74d141890e3f316cea7937bdf8581d9ac5dbbc1a57b7d174a98155fc0e0993
499M    vfs/dir/88e5a31ddaa5ddb5807a6365aa7876f3454b5f3cde6c37f3fe21973948b89630
128M    vfs/dir/8abae375a87f3385ee37b721e1076588403d3d730601eab1a425cab8094f73ee
727M    vfs/dir/8eadbdee0fb8cbdb48dba647703fb72cfe17c2d038b2c34cd92afeeea9c09283
508M    vfs/dir/96b67bd92d34ca03f64a186afe5c8fe2532a1f760f4d31986333045064f7a5ed
260M    vfs/dir/9a57692d1163a66e581bf8cbba7b697d4b51d2984facc48b4f0dd201cdb21938
362M    vfs/dir/9f7136a981c01276def81543093970c451fee09356aeb7ceee34e9cb8317b6f4
679M    vfs/dir/a1f54eff57f492124d9344d67776cf6351291eca315aad75eaca85c5cef37a87
378M    vfs/dir/a332da330995c7227dee07d33b62b0e7406c54e2ff71812c56cc1c6ff0e01fd8
328M    vfs/dir/ab6fad4ca0b902f1d4fb3d94e5d6fbba5bf9fd0922c97e4a63f7de7583679416
600M    vfs/dir/b3dd53d1377eee9c903feb5f651f763a067b9639edd1927ebf3572ab2bd2db73
326M    vfs/dir/be6479440c7e45990b8ee75105bc13a6a3a668cbc24249f72ce1f66a9cebe542
464M    vfs/dir/bf591d6d02a98c32d93a2bbdf923f322eb1c76a084b34ded04fa44fe2da8c02e
363M    vfs/dir/ccc27d324d4f59ee12143f0879a54a97bb77806818c6ed0e63e93ca920bad0c5
314M    vfs/dir/ccdb99cf27958e3489adea4060b630bb6f948b6807aa84a37b744c9f131de41c
92M     vfs/dir/d309c3b4543571f11e3787e8930ee4269eba201937e0b879ae5664e4298baf46
420M    vfs/dir/d83327b2e0431f627f28000477e11714b0305d26939b89fd3398de330b412177
505M    vfs/dir/d88e6eb48c802daed1b03df0712c90e792b5825c505b62f1fd444b7ee630c788
127M    vfs/dir/e70f0111d161001e0b708e21bb601aae74e4f7cf6a4fb5aeb7888233c9ac33c7
355M    vfs/dir/edcd13d9660aefcfaa844abcf5ae8355d7e01d0afa6e880a016e7de1c9fdffd6
348M    vfs/dir/edd4ffa57844615cb5ae2f4fb3919e931f467e5b95d078fa0e55b1e8ce665df0
452M    vfs/dir/f0194d0adcbc0dfd6bbe4e674eca9b98d1dff5b090aced01dfbb85f99d88fa1b
210M    vfs/dir/f2d1e4f0dc6a1fe8e9d2ac2f91449fafe22a5c55e7cbeed9e8887fc5405bd1a1
76M     vfs/dir/f737a0f5ebff523d44c123d3a67d0653a3f98d78b8f1e2fd9780d557f4e2db04

Output of podman version:

Version:            1.4.4
RemoteAPI Version:  1
Go Version:         go1.11.11
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.11.11
  podman version: 1.4.4
host:
  BuildahVersion: 1.9.0
  Conmon:
    package: podman-1.4.4-4.fc29.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.0.0-dev, commit: 130ae2bc326106e335a44fac012005a8654a76cc'
  Distribution:
    distribution: fedora
    version: "29"
  MemFree: 7546503168
  MemTotal: 32940535808
  OCIRuntime:
    package: runc-1.0.0-93.dev.gitb9b6cc6.fc29.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8+dev
      commit: 82f4855a8421018c9f4d74fbcf2da7f8ad1e11fa
      spec: 1.0.1-dev
  SwapFree: 16308228096
  SwapTotal: 16542330880
  arch: amd64
  cpus: 8
  hostname: popcorn
  kernel: 5.1.21-200.fc29.x86_64
  os: linux
  rootless: true
  uptime: 51h 54m 36.77s (Approximately 2.12 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /home/jmulliga/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions:
  - vfs.mount_program=/usr/bin/fuse-overlayfs
  GraphRoot: /home/jmulliga/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 0
  RunRoot: /run/user/1000
  VolumePath: /home/jmulliga/.local/share/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.): Fedora 29, physical host

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 6
  • Comments: 44 (17 by maintainers)

Most upvoted comments

We have added podman system reset in the upstream repo, which will clean up all changes in your system. If run as a user, it basically does the equivalent of podman unshare rm -rf ~/.local/share/containers ~/.config/containers.
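For anyone on a podman build that predates that command, a minimal sketch of the manual equivalent described above (this wipes all rootless podman state, so it assumes nothing in these directories needs to be kept):

# WARNING: removes all rootless podman images, containers, volumes and config
podman unshare rm -rf ~/.local/share/containers ~/.config/containers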

@rhatdan yes, still reproducible with fuse-overlayfs too.

Just thought I’d report that podman volume prune might be a good idea in cases where podman system prune does not give the desired result.

You need to enter the user namespace: podman unshare du -hs /home/USERNAME/.local/share/containers/storage/*
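To make the difference concrete, a short sketch contrasting the two views (USERNAME is a placeholder; paths are the rootless defaults):

# Outside the user namespace: layer files are owned by mapped sub-UIDs,
# so plain du may need sudo and attributes ownership to unfamiliar UIDs
sudo du -sh /home/USERNAME/.local/share/containers/storage/*

# Inside the rootless user namespace: the same files are readable without
# sudo and show up with the expected ownership
podman unshare du -hs /home/USERNAME/.local/share/containers/storage/*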

This is happening on Fedora Silverblue 30.

Is there any way to clean up these files, short of nuking the entire directory?

Edit:

Running podman volume prune inside of a podman unshare shell seems to have knocked it down.
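For anyone else trying this, a sketch of what that looked like, as described in the comment above (default rootless setup assumed):

podman unshare          # opens a shell inside the rootless user namespace
podman volume prune     # run this from inside that shell
exit                    # leave the namespace shell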

Could you attach du -a -m ~/.local/share/containers/ to see if there is anything interesting under there?

Are you saying podman system reset does not work for you?