podman: Compose not working after upgrade to 4.6.2

Issue Description

After upgrading to podman 4.6.2, I can no longer use docker-compose to launch any containers. Launching containers with podman run works fine, however.

Steps to reproduce the issue

  1. Create a simple compose file like this:

version: "3"
services:
  app:
    image: nginx:alpine

  2. Ensure the podman socket is exported and activated (a quick reachability check is sketched after these steps):

export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
systemctl --user enable --now podman.socket

  3. Run docker-compose up
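
Before running compose, you can sanity-check that the socket from step 2 actually answers the Docker-compatible ping endpoint (a minimal sketch, assuming the default rootless socket path):

# prints "OK" when the podman API service is listening
curl --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http://d/_ping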

Describe the results you received

The following error occurs:

Error response from daemon: crun: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

write to `/proc/self/oom_score_adj`: Permission denied: OCI permission denied
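
For context: the kernel lets an unprivileged process raise its own oom_score_adj, but lowering it below the saved minimum requires CAP_SYS_RESOURCE, which a rootless conmon does not have. The restriction is easy to demonstrate from any unprivileged shell (a sketch; the values are arbitrary):

cat /proc/self/oom_score_adj           # typically 0 for a login shell
echo 500 > /proc/self/oom_score_adj    # raising the score is always allowed
echo -500 > /proc/self/oom_score_adj   # lowering below the saved minimum: Permission denied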

Describe the results you expected

It should just spawn an nginx container. This still works when launched directly with podman run --rm nginx:alpine

podman info output

host:
  arch: amd64
  buildahVersion: 1.31.2
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc38.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 92.62
    systemPercent: 2.15
    userPercent: 5.23
  cpus: 8
  databaseBackend: boltdb
  distribution:
    distribution: fedora
    variant: silverblue
    version: "38"
  eventLogger: journald
  freeLocks: 2042
  hostname: spectre
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.4.14-200.fc38.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 322568192
  memTotal: 7916376064
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.7.0-1.fc38.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.7.0
    package: netavark-1.7.0-1.fc38.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: crun-1.9-1.fc38.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.9
      commit: a538ac4ea1ff319bcfe2bf81cb5c6f687e2dc9d3
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20230823.ga7e4bfb-1.fc38.x86_64
    version: |
      pasta 0^20230823.ga7e4bfb-1.fc38.x86_64
      Copyright Red Hat
      GNU Affero GPL version 3 or later <https://www.gnu.org/licenses/agpl-3.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.1-1.fc38.x86_64
    version: |-
      slirp4netns version 1.2.1
      commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 7821848576
  swapTotal: 7915696128
  uptime: 1h 0m 17.00s (Approximately 0.04 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /var/home/philipp/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/philipp/.local/share/containers/storage
  graphRootAllocated: 998483427328
  graphRootUsed: 230994952192
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /var/home/philipp/.local/share/containers/storage/volumes
version:
  APIVersion: 4.6.2
  Built: 1693251588
  BuiltTime: Mon Aug 28 21:39:48 2023
  GitCommit: ""
  GoVersion: go1.20.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.6.2


### Podman in a container

No

### Privileged Or Rootless

Rootless

### Upstream Latest Release

Yes

### Additional environment details

I already ran `podman system migrate`, `podman system reset`, and `rm -rf ~/.local/share/containers`; none of them had any impact.

Docker Compose version v2.17.2

### Additional information

_No response_

About this issue

  • State: closed
  • Created 10 months ago
  • Reactions: 12
  • Comments: 31 (14 by maintainers)


Most upvoted comments

Thanks @Luap99 @giuseppe, I rolled back to a previous ostree state using crun 1.8.7 which fixed the issue. Looking forward to podman 4.6.3 being released 😄

Found this:

https://fedoraproject.org/coreos/release-notes/?arch=x86_64&stream=stable

Had to manually download the old image using URL hacking because there don’t seem to be any links from the release notes pages.

curl -LO https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/38.20230819.3.0/aarch64/fedora-coreos-38.20230819.3.0-qemu.aarch64.qcow2.xz
podman machine init --image-path fedora-coreos-38.20230819.3.0-qemu.aarch64.qcow2.xz

That worked! 🎉
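
If you want to confirm the recreated machine really carries the older runtime before pointing compose at it, you can query it over ssh (a sketch; podman machine ssh forwards the trailing command to the VM):

podman machine ssh podman-machine-default crun --version   # expect: crun version 1.8.7
podman machine ssh podman-machine-default rpm -q crun podman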

Freaking love the simplicity of podman. Even with obscure error messages like the one thrown, it wasn’t so bad to dig into the issue and figure out how to fix it. Great work, team!

Maybe sometime I’ll learn rpm-ostree, but I’ll save that for a later date. 😃

~Hopefully I don’t have to worry about auto-updates?~. Of course it updated before I finished writing the comment…

Time to try again. I raced podman machine init with podman machine ssh, then ran this when I got a shell:

[core@localhost ~]$ sudo systemctl disable --now zincati.service

Seems to have worked 🤞.
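
A more durable form of the same workaround is the config snippet documented for Fedora CoreOS, which disables Zincati’s auto-updates without having to race the update agent (a sketch; the file name is just a convention):

sudo mkdir -p /etc/zincati/config.d
printf '[updates]\nenabled = false\n' | sudo tee /etc/zincati/config.d/90-disable-auto-updates.toml
sudo systemctl restart zincati.service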

[core@localhost ~]$ sudo systemctl status zincati
○ zincati.service - Zincati Update Agent
     Loaded: loaded (/usr/lib/systemd/system/zincati.service; disabled; preset: enabled)
    Drop-In: /usr/lib/systemd/system/service.d
             └─10-timeout-abort.conf
     Active: inactive (dead) since Fri 2023-09-22 18:14:57 PDT; 15s ago
   Duration: 1.821s
       Docs: https://github.com/coreos/zincati
    Process: 1949 ExecStart=/usr/libexec/zincati agent ${ZINCATI_VERBOSITY} (code=exited, status=0/SUCCESS)
   Main PID: 1949 (code=exited, status=0/SUCCESS)
        CPU: 97ms

Sep 22 18:14:55 localhost.localdomain zincati[1949]: [INFO  zincati::update_agent::actor] registering as the update driver for rpm-ostree
Sep 22 18:14:55 localhost.localdomain zincati[1949]: [INFO  zincati::update_agent::actor] initialization complete, auto-updates logic enabled
Sep 22 18:14:55 localhost.localdomain zincati[1949]: [INFO  zincati::strategy] update strategy: immediate
Sep 22 18:14:55 localhost.localdomain zincati[1949]: [INFO  zincati::update_agent::actor] reached steady state, periodically polling for updates
Sep 22 18:14:55 localhost.localdomain systemd[1]: Started zincati.service - Zincati Update Agent.
Sep 22 18:14:56 localhost.localdomain zincati[1949]: [INFO  zincati::cincinnati] current release detected as not a dead-end
Sep 22 18:14:56 localhost.localdomain zincati[1949]: [INFO  zincati::update_agent::actor] target release '38.20230902.3.0' selected, proceeding to stage it
Sep 22 18:14:57 localhost.localdomain systemd[1]: Stopping zincati.service - Zincati Update Agent...
Sep 22 18:14:57 localhost.localdomain systemd[1]: zincati.service: Deactivated successfully.
Sep 22 18:14:57 localhost.localdomain systemd[1]: Stopped zincati.service - Zincati Update Agent.

Podman 4.7.0 is releasing today with a fix allowing the most recent crun version to be used. Closing as such.

How can I downgrade crun to 1.8.7?

@Rtapy you can run sudo dnf downgrade crun
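
If dnf offers several older builds, you can name the version explicitly and pin it so the next upgrade doesn’t pull 1.9 back in (a sketch; the versionlock plugin ships as python3-dnf-plugin-versionlock on Fedora):

sudo dnf downgrade crun-1.8.7                      # pick the known-good build
sudo dnf install python3-dnf-plugin-versionlock    # if the plugin is missing
sudo dnf versionlock add crun                      # hold crun at the installed version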

FYI this is still an issue with podman 4.7 and fedora-coreos-38.20230918.2.0-qemu.x86_64.qcow2.xz.

$ docker run --rm -it busybox
unable to upgrade to tcp, received 409

Recreating a machine with stable works:

$ podman machine init --cpus 4 --memory 8192 --now --image-path stable
Downloading VM image: fedora-coreos-38.20230902.3.0-qemu.x86_64.qcow2.xz: done
...
Machine "podman-machine-default" started successfully

$ docker run --rm -it busybox
Unable to find image 'busybox:latest' locally
3f4d90098f5b: Download complete
a416a98b71e2: Download complete
/ #

FYI this is still an issue with podman 4.7 and fedora-coreos-38.20230918.2.0-qemu.x86_64.qcow2.xz.

I also upgraded to Podman 4.7.0 and am still receiving the crun: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied error.

As stated above, you need v4.7 on the server side; you can check with podman version.
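
To inspect just the server side, a Go template also works (a sketch; the field names mirror the sections of the full output below):

podman version --format '{{.Server.Version}}'   # should report 4.7.x once the fixed image is in place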

Hi! I’m using podman with podman machine on macOS and am trying to figure out the right way to roll back crun.

The default machine is Fedora CoreOS (maybe this can be swapped, but I like to keep things as vanilla as possible).

I haven’t used CoreOS much at all since I prefer for this stuff to be transparent. I looked at the docs for rolling back the image and found this command, but there’s nothing to roll back to:

[root@localhost core]# rpm-ostree rollback -r
error: No rollback deployment found

Found via: https://docs.fedoraproject.org/en-US/fedora-coreos/auto-updates/#_manual_rollbacks

Any recommendations for a workaround on macOS?

17:55:56 $ podman version
Client:       Podman Engine
Version:      4.6.1
API Version:  4.6.1
Go Version:   go1.20.7
Git Commit:   f3069b3ff48e30373c33b3f5976f15abf8cfee20
Built:        Thu Aug 10 11:13:43 2023
OS/Arch:      darwin/arm64

Server:       Podman Engine
Version:      4.6.2
API Version:  4.6.2
Go Version:   go1.20.7
Built:        Tue Sep 12 13:07:26 2023
OS/Arch:      linux/arm64

I also tried downgrading podman and resetting the machine, but the image appears to have the new engine installed anyway. Maybe there’s a way I can specify which image version I want to download with podman? I haven’t found that yet.

How can I downgrade crun to 1.8.7?

For Fedora 38, you can install crun 1.8.7 with:

# dnf install -y https://kojipkgs.fedoraproject.org//packages/crun/1.8.7/1.fc38/x86_64/crun-1.8.7-1.fc38.x86_64.rpm
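
On an ostree-based system (the reporter’s Silverblue, or a podman machine CoreOS guest) there is no dnf; the rough equivalent is an rpm-ostree override using the same koji URL, followed by a reboot into the new deployment (a sketch):

# rpm-ostree override replace https://kojipkgs.fedoraproject.org//packages/crun/1.8.7/1.fc38/x86_64/crun-1.8.7-1.fc38.x86_64.rpm
# systemctl reboot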

This is the debug log of docker-compose start for the given compose file. The only suspicious message I encountered (apart from the ones already mentioned) is the Received: -1 right after the netavark setup:

time="2023-09-11T22:43:51+02:00" level=debug msg="IdleTracker:new 0m+0h/0t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:51+02:00" level=debug msg="IdleTracker:active 0m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
@ - - [11/Sep/2023:22:43:51 +0200] "HEAD /_ping HTTP/1.1" 200 0 "" "Docker-Client/unknown-version (linux)"
time="2023-09-11T22:43:51+02:00" level=debug msg="IdleTracker:idle 1m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:51+02:00" level=debug msg="IdleTracker:active 1m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:51+02:00" level=debug msg="Looking up image \"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\" in local containers storage"
time="2023-09-11T22:43:51+02:00" level=debug msg="Trying \"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\" ..."
time="2023-09-11T22:43:51+02:00" level=debug msg="parsed reference into \"[overlay@/var/home/philipp/.local/share/containers/storage+/run/user/1000/containers]@433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\""
time="2023-09-11T22:43:51+02:00" level=debug msg="Found image \"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\" as \"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\" in local containers storage"
time="2023-09-11T22:43:51+02:00" level=debug msg="Found image \"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\" as \"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\" in local containers storage ([overlay@/var/home/philipp/.local/share/containers/storage+/run/user/1000/containers]@433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629)"
@ - - [11/Sep/2023:22:43:51 +0200] "GET /v1.41/containers/json?all=1&filters=%7B%22label%22%3A%7B%22com.docker.compose.oneoff%3DFalse%22%3Atrue%2C%22com.docker.compose.project%3Dtest%22%3Atrue%7D%7D HTTP/1.1" 200 1407 "" "Docker-Client/unknown-version (linux)"
time="2023-09-11T22:43:51+02:00" level=debug msg="IdleTracker:idle 1m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:51+02:00" level=debug msg="IdleTracker:active 1m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:51+02:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported"
time="2023-09-11T22:43:51+02:00" level=debug msg="Check for idmapped mounts support "
time="2023-09-11T22:43:51+02:00" level=debug msg="overlay: mount_data=lowerdir=/var/home/philipp/.local/share/containers/storage/overlay/l/6M6AHOY3JC2AMCQVUNFP354NV6:/var/home/philipp/.local/share/containers/storage/overlay/l/R4Z4QTOMMDQY4X4ARJ5XL53YXP:/var/home/philipp/.local/share/containers/storage/overlay/l/VTS64L4U7YNP6GCROTH2EOSF74:/var/home/philipp/.local/share/containers/storage/overlay/l/ZKRKA7L4SAHZ7CSDDRP4FZIV3N:/var/home/philipp/.local/share/containers/storage/overlay/l/KQGIW6IWVP2TZZX44HPGTGBHPF:/var/home/philipp/.local/share/containers/storage/overlay/l/JIYAT3EPP4ATZ73AOYCFTJZDU2:/var/home/philipp/.local/share/containers/storage/overlay/l/5XWI5U27TF43OYNGQJL7HYCTC7:/var/home/philipp/.local/share/containers/storage/overlay/l/OOI6XHAQCXHYNEVWWDX7C7JRM3,upperdir=/var/home/philipp/.local/share/containers/storage/overlay/078711c1cfa8eb9da2b57232154c6720295b161eb6d610aa9451c7efc823361b/diff,workdir=/var/home/philipp/.local/share/containers/storage/overlay/078711c1cfa8eb9da2b57232154c6720295b161eb6d610aa9451c7efc823361b/work,,userxattr,context=\"system_u:object_r:container_file_t:s0:c458,c539\""
time="2023-09-11T22:43:51+02:00" level=debug msg="Made network namespace at /run/user/1000/netns/netns-5a70bcc3-ce29-e06a-d6b5-c4be348954bf for container 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7"
time="2023-09-11T22:43:51+02:00" level=debug msg="creating rootless network namespace with name \"rootless-netns-366f197f86d21c9c17ef\""
time="2023-09-11T22:43:51+02:00" level=debug msg="Mounted container \"1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7\" at \"/var/home/philipp/.local/share/containers/storage/overlay/078711c1cfa8eb9da2b57232154c6720295b161eb6d610aa9451c7efc823361b/merged\""
time="2023-09-11T22:43:51+02:00" level=debug msg="Created root filesystem for container 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7 at /var/home/philipp/.local/share/containers/storage/overlay/078711c1cfa8eb9da2b57232154c6720295b161eb6d610aa9451c7efc823361b/merged"
time="2023-09-11T22:43:51+02:00" level=debug msg="slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp --enable-ipv6 -c -r 3 --netns-type=path /run/user/1000/netns/rootless-netns-366f197f86d21c9c17ef tap0"
time="2023-09-11T22:43:51+02:00" level=debug msg="found local resolver, using \"/run/systemd/resolve/resolv.conf\" to get the nameservers"
time="2023-09-11T22:43:51+02:00" level=debug msg="The path of /etc/resolv.conf in the mount ns is \"/run/systemd/resolve/stub-resolv.conf\""
time="2023-09-11T22:43:51+02:00" level=debug msg="Successfully loaded network test_default: &{test_default 69c15d74c57dd1952a1a25e266a700e54a3914ab8d776b5bd85639429ec6bad9 bridge podman1 2023-09-11 20:52:40.305670075 +0200 CEST [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] [] false false true [] map[com.docker.compose.network:default com.docker.compose.project:test com.docker.compose.version:2.17.2] map[isolate:true] map[driver:host-local]}"
time="2023-09-11T22:43:51+02:00" level=debug msg="Successfully loaded 2 networks"
[DEBUG netavark::network::validation] "Validating network namespace..."
[DEBUG netavark::commands::setup] "Setting up..."
[INFO  netavark::firewall] Using iptables firewall driver
[DEBUG netavark::network::bridge] Setup network test_default
[DEBUG netavark::network::bridge] Container interface name: eth0 with IP addresses [10.89.0.5/24]
[DEBUG netavark::network::bridge] Bridge name: podman1 with IP addresses [10.89.0.1/24]
[DEBUG netavark::network::core_utils] Setting sysctl value for net.ipv4.ip_forward to 1
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
[INFO  netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.89.0.1, metric 100)
[DEBUG netavark::firewall::varktables::types] Add extra isolate rules
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-CEDCA8372B348 created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_1 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_2 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_ISOLATION_3 created on table filter
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK_FORWARD created on table filter
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -j ACCEPT created on table nat and chain NETAVARK-CEDCA8372B348
[DEBUG netavark::firewall::varktables::helpers] rule ! -d 224.0.0.0/4 -j MASQUERADE created on table nat and chain NETAVARK-CEDCA8372B348
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j NETAVARK-CEDCA8372B348 created on table nat and chain POSTROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -d 10.89.0.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] rule -s 10.89.0.0/24 -j ACCEPT created on table filter and chain NETAVARK_FORWARD
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-SETMARK created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-MASQ created on table nat
[DEBUG netavark::firewall::varktables::helpers] chain NETAVARK-HOSTPORT-DNAT created on table nat
[DEBUG netavark::firewall::varktables::helpers] rule -j MARK  --set-xmark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-SETMARK
[DEBUG netavark::firewall::varktables::helpers] rule -j MASQUERADE -m comment --comment 'netavark portfw masq mark' -m mark --mark 0x2000/0x2000 created on table nat and chain NETAVARK-HOSTPORT-MASQ
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain PREROUTING
[DEBUG netavark::firewall::varktables::helpers] rule -j NETAVARK-HOSTPORT-DNAT -m addrtype --dst-type LOCAL created on table nat and chain OUTPUT
[DEBUG netavark::dns::aardvark] Spawning aardvark server
[DEBUG netavark::dns::aardvark] start aardvark-dns: ["systemd-run", "-q", "--scope", "--user", "/usr/libexec/podman/aardvark-dns", "--config", "/run/user/1000/containers/networks/aardvark-dns", "-p", "53", "run"]
[DEBUG netavark::commands::setup] {
        "test_default": StatusBlock {
            dns_search_domains: Some(
                [
                    "dns.podman",
                ],
            ),
            dns_server_ips: Some(
                [
                    10.89.0.1,
                ],
            ),
            interfaces: Some(
                {
                    "eth0": NetInterface {
                        mac_address: "86:d9:b2:39:3b:bc",
                        subnets: Some(
                            [
                                NetAddress {
                                    gateway: Some(
                                        10.89.0.1,
                                    ),
                                    ipnet: 10.89.0.5/24,
                                },
                            ],
                        ),
                    },
                },
            ),
        },
    }
[DEBUG netavark::commands::setup] "Setup complete"
time="2023-09-11T22:43:52+02:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription"
time="2023-09-11T22:43:52+02:00" level=debug msg="Setting Cgroups for container 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7 to user.slice:libpod:1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7"
time="2023-09-11T22:43:52+02:00" level=debug msg="reading hooks from /usr/share/containers/oci/hooks.d"
time="2023-09-11T22:43:52+02:00" level=debug msg="Workdir \"/\" resolved to host path \"/var/home/philipp/.local/share/containers/storage/overlay/078711c1cfa8eb9da2b57232154c6720295b161eb6d610aa9451c7efc823361b/merged\""
time="2023-09-11T22:43:52+02:00" level=debug msg="Created OCI spec for container 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7 at /var/home/philipp/.local/share/containers/storage/overlay-containers/1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7/userdata/config.json"
time="2023-09-11T22:43:52+02:00" level=debug msg="/usr/bin/conmon messages will be logged to syslog"
time="2023-09-11T22:43:52+02:00" level=debug msg="running conmon: /usr/bin/conmon" args="[--api-version 1 -c 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7 -u 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7 -r /usr/bin/crun -b /var/home/philipp/.local/share/containers/storage/overlay-containers/1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7/userdata -p /run/user/1000/containers/overlay-containers/1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7/userdata/pidfile -n test-app-1 --exit-dir /run/user/1000/libpod/tmp/exits --full-attach -s -l journald --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/overlay-containers/1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/home/philipp/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/home/philipp/.local/share/containers/storage/volumes --exit-command-arg --db-backend --exit-command-arg boltdb --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7]"
time="2023-09-11T22:43:52+02:00" level=info msg="Running conmon under slice user.slice and unitName libpod-conmon-1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7.scope"
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

time="2023-09-11T22:43:52+02:00" level=debug msg="Received: -1"
time="2023-09-11T22:43:52+02:00" level=debug msg="Cleaning up container 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7"
time="2023-09-11T22:43:52+02:00" level=debug msg="Tearing down network namespace at /run/user/1000/netns/netns-5a70bcc3-ce29-e06a-d6b5-c4be348954bf for container 1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7"
time="2023-09-11T22:43:52+02:00" level=debug msg="The path of /etc/resolv.conf in the mount ns is \"/run/systemd/resolve/stub-resolv.conf\""
[DEBUG netavark::commands::teardown] "Tearing down.."
[INFO  netavark::firewall] Using iptables firewall driver
[INFO  netavark::network::bridge] removing bridge podman1
[DEBUG netavark::firewall::varktables::types] Add extra isolate rules
[DEBUG netavark::commands::teardown] "Teardown complete"
time="2023-09-11T22:43:52+02:00" level=debug msg="Cleaning up rootless network namespace"
time="2023-09-11T22:43:52+02:00" level=debug msg="Unmounted container \"1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7\""
time="2023-09-11T22:43:52+02:00" level=info msg="Request Failed(Internal Server Error): crun: [conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied\n\nwrite to `/proc/self/oom_score_adj`: Permission denied: OCI permission denied"
@ - - [11/Sep/2023:22:43:51 +0200] "POST /v1.41/containers/1eefe3e8d6f0c121c4cc9d8c3a690129677469bc4adb4066c3175e39082ac5e7/start HTTP/1.1" 500 223 "" "Docker-Client/unknown-version (linux)"
time="2023-09-11T22:43:52+02:00" level=debug msg="IdleTracker:idle 1m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:52+02:00" level=debug msg="IdleTracker:closed 1m+0h/1t connection(s)" X-Reference-Id=0xc0007be008
time="2023-09-11T22:43:56+02:00" level=info msg="Received shutdown signal \"interrupt\", terminating!" PID=17007
time="2023-09-11T22:43:56+02:00" level=info msg="Invoking shutdown handler \"service\"" PID=17007
time="2023-09-11T22:43:56+02:00" level=debug msg="API service forced shutdown, ignoring timeout Duration"
time="2023-09-11T22:43:56+02:00" level=debug msg="API service shutdown, 0/1 connection(s)"
time="2023-09-11T22:43:56+02:00" level=debug msg="API service forced shutdown, ignoring timeout Duration"
time="2023-09-11T22:43:56+02:00" level=debug msg="Completed shutdown handler \"service\", duration 0s" PID=17007
time="2023-09-11T22:43:56+02:00" level=info msg="Invoking shutdown handler \"libpod\"" PID=17007

I can reproduce this on Fedora 38 with podman 4.6.2 and docker-compose 2.20.3.