podman: Array-like --entrypoint ["cmd"] syntax not supported by podman run (in a generated systemd unit file), but supported by podman create (via podman-compose)?

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Starting my podman-compose-created containers via generated systemd units fails.

Steps to reproduce the issue: I used podman-compose to create podman containers/a pod.

The thing is, with podman-compose everything starts fine, but as soon as I use podman to generate systemd units out of it and start it as rootless (i.e. --user ) services, it fails.

Specifically, I see issues with the Nextcloud Cron service.

Here is the YAML part:

  cron:
    image: nextcloud:22-fpm
    restart: unless-stopped
    volumes: *nextcloud_volumes
    entrypoint: /cron.sh
    networks:
    - redisnet
    - dbnet
    environment: *nextcloud_environment
    env_file: *nextcloud_env_file
    depends_on: *nextcloud_depends_on
    labels:
      - io.containers.autoupdate=registry
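
For reference, Compose accepts the entrypoint either as a string (as above) or as a list; the list form is what podman-compose translates into the array-style --entrypoint argument from the title. A sketch of the two equivalent notations:

```yaml
# string form (single path, as used above)
entrypoint: /cron.sh

# equivalent list (exec) form
entrypoint: ["/cron.sh"]
```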

The referenced YAML anchors are:

    volumes: &nextcloud_volumes
      - nc_data:/var/www/html
      - type: bind
        source: /var/mnt/******
        target: /var/mnt/******:Z
      - type: bind
        source: ${HOME:?HOME variable missing}/nextcloud_config
        target: /var/www/html/config:Z
    depends_on: &nextcloud_depends_on
      - db
      - redis
    env_file: &nextcloud_env_file
      - db-shared.env
    environment: &nextcloud_environment
      - REDIS_HOST=redis
      - REDIS_HOST_PASSWORD=${REDIS_HOST_PASSWORD:?REDIS_HOST_PASSWORD variable missing}
      - MYSQL_HOST=db

Here is the service generated with `podman generate systemd "$serviceName" --restart-policy=always --new --name --files`:

# /var/home/USERNAME/.config/systemd/user/container-nextcloud_cron_1.service
# container-nextcloud_cron_1.service
# autogenerated by Podman 3.4.1
# Thu Dec  2 01:49:03 CET 2021

[Unit]
Description=Podman container-nextcloud_cron_1.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
BindsTo=pod-nextcloud.service
After=pod-nextcloud.service

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=always
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --cgroups=no-conmon --rm --pod-id-file %t/pod-nextcloud.pod-id --sdnotify=conmon -d --replace --name=nextcloud_cron_1 --label io.containers.autoupdate=registry --label io.podman.compose.config-hash=123 --label io.podman.compose.project=nextcloud --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=nextcloud --label com.docker.compose.project.working_dir=/var/home/USERNAME/************** --label com.docker.compose.project.config_files=docker-compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=cron --env-file /var/home/USERNAME/*****/db-shared.env -e REDIS_HOST=redis -e REDIS_HOST_PASSWORD=****************************** -e MYSQL_HOST=db -v nextcloud_nc_data:/var/www/html -v /var/mnt/************:/var/mnt/************:Z -v /var/home/USERNAME/nextcloud_config:/var/www/html/config:Z --add-host caddy:127.0.0.1 --add-host nextcloud_caddy_1:127.0.0.1 --add-host nc:127.0.0.1 --add-host nextcloud_nc_1:127.0.0.1 --add-host redis:127.0.0.1 --add-host nextcloud_redis_1:127.0.0.1 --add-host cron:127.0.0.1 --add-host nextcloud_cron_1:127.0.0.1 --add-host db:127.0.0.1 --add-host nextcloud_db_1:127.0.0.1 --entrypoint ["/cron.sh"] nextcloud:22-fpm
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=multi-user.target default.target

Describe the results you received:

Specifically, I get this error:

Dec 02 02:18:06 minipure podman[79094]: Error: executable file `[/cron.sh]` not found in $PATH: No such file or directory: OCI runtime attempted to invoke a command that was not found

This indicates it cannot find the included cron script inside the container, which is very strange. To work around this, I tried manually editing the command line and replaced --entrypoint ["/cron.sh"] with --entrypoint ["sh", "/cron.sh"], which should behave the same.

Interestingly, I then get a completely unrelated CIDFile error from podman:

Dec 02 02:07:55 ******** podman[22006]: 2021-12-02 02:07:55.319712497 +0100 CET m=+0.068058468 image pull  nextcloud:22-fpm
Dec 02 02:07:55 ******** podman[22005]: 2021-12-02 02:07:55.400383065 +0100 CET m=+0.148812068 container create 552ea1d669618b0abcd9ab3a93a3cc44ab7b725ee3bc71ed1a3611f17833cdab (image=docker.io/library/mariadb:10.5, name=nextcloud_db_1, com.docker.compose.project.working_dir=/var/home/USERNAME/***************, com.docker.compose.container-number=1, io.podman.compose.config-hash=123, com.docker.compose.project.config_files=docker-compose.yml, io.podman.compose.project=nextcloud, com.docker.compose.project=nextcloud, io.containers.autoupdate=registry, io.podman.compose.version=0.0.1, PODMAN_SYSTEMD_UNIT=container-nextcloud_db_1.service, com.docker.compose.service=db)
Dec 02 02:07:55 ******** podman[22123]: Error: error reading CIDFile: open /run/user/1002/container-nextcloud_cron_1.service.ctr-id: no such file or directory
Dec 02 02:07:55 ******** systemd[1400]: container-nextcloud_cron_1.service: Control process exited, code=exited, status=125/n/a
Dec 02 02:07:55 ******** systemd[1400]: container-nextcloud_cron_1.service: Failed with result 'exit-code'.
Dec 02 02:07:55 ******** systemd[1400]: Failed to start Podman container-nextcloud_cron_1.service.

Describe the results you expected:

The container should start as usual; the entrypoint script should be found inside the image.

Additional information you deem important (e.g. issue happens only occasionally): N/A

Output of podman version:

Version:      3.4.1
API Version:  3.4.1
Go Version:   go1.16.8
Built:        Wed Oct 20 16:31:56 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.30-2.fc35.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.30, commit: '
  cpus: 4
  distribution:
    distribution: fedora
    variant: coreos
    version: "35"
  eventLogger: journald
  hostname: ********
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
  kernel: 5.14.14-300.fc35.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 60287238144
  memTotal: 67296378880
  ociRuntime:
    name: crun
    package: crun-1.2-1.fc35.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.2
      commit: 4f6c8e0583c679bfee6a899c05ac6b916022561b
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1002/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-2.fc35.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 19m 21.32s
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /var/home/USERNAME/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.7.1-2.fc35.x86_64
      Version: |-
        fusermount3 version: 3.10.5
        fuse-overlayfs: version 1.7.1
        FUSE library version 3.10.5
        using FUSE kernel interface version 7.31
  graphRoot: /var/home/USERNAME/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 6
  runRoot: /run/user/1002/containers
  volumePath: /var/home/USERNAME/.local/share/containers/storage/volumes
version:
  APIVersion: 3.4.1
  Built: 1634740316
  BuiltTime: Wed Oct 20 16:31:56 2021
  GitCommit: ""
  GoVersion: go1.16.8
  OsArch: linux/amd64
  Version: 3.4.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.4.1-1.fc35.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Not easily doable; however, only a minor release separates my version from the latest, and I checked the changelog and found nothing relevant.

Additional environment details (AWS, VirtualBox, physical, etc.): Physical, Fedora CoreOs Stable v35.20211029.3.0 (2021-11-17T23:45:08Z)

Starting via podman-compose:

$ podman-compose -p nextcloud up
['podman', '--version', '']
using podman version: 3.4.1
podman pod create --name=nextcloud --share net --infra-name=nextcloud_infra -p ****:80
12d82d3e74644c371aa3f1614873aba98b5511581ef23975ba8cf150d4b6d091
0
[…]
podman volume inspect nextcloud_nc_data || podman volume create nextcloud_nc_data
['podman', 'volume', 'inspect', 'nextcloud_nc_data']
podman create --name=nextcloud_cron_1 --pod=nextcloud --label io.containers.autoupdate=registry --label io.podman.compose.config-hash=123 --label io.podman.compose.project=nextcloud --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=nextcloud --label com.docker.compose.project.working_dir=/var/home/USERNAME/***************** --label com.docker.compose.project.config_files=docker-compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=cron --env-file /var/home/USERNAME/*****************/db-shared.env -e REDIS_HOST=redis -e REDIS_HOST_PASSWORD=******************************* -e MYSQL_HOST=db -v nextcloud_nc_data:/var/www/html -v /var/mnt/*****:/var/mnt/******:Z -v /var/home/USERNAME/nextcloud_config:/var/www/html/config:Z --add-host caddy:127.0.0.1 --add-host nextcloud_caddy_1:127.0.0.1 --add-host nc:127.0.0.1 --add-host nextcloud_nc_1:127.0.0.1 --add-host redis:127.0.0.1 --add-host nextcloud_redis_1:127.0.0.1 --add-host cron:127.0.0.1 --add-host nextcloud_cron_1:127.0.0.1 --add-host db:127.0.0.1 --add-host nextcloud_db_1:127.0.0.1 --restart unless-stopped --entrypoint ["/cron.sh"] nextcloud:22-fpm
9474b6f62b4a8967a9b1340f5819db706430e8e24e27eecc60c4ffc4bb61d05e
$ podman-compose -v
['podman', '--version', '']
using podman version: 3.4.1
podman-composer version  0.1.7dev
podman --version 
podman version 3.4.1

Also, the same YAML file has worked previously, and I am not sure what has changed. I also tried rebooting, unsuccessfully, in case the CIDFile error was caused by too many leftover files (since the healthcheck system keeps stopping and restarting the containers in an attempt to recover from this issue).

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 15 (15 by maintainers)

Most upvoted comments

Found the issue and prepared a fix in https://github.com/containers/podman/pull/12545.

If the specified entrypoint is an array/slice, then it must be specified as a JSON string. podman-compose does that but the quoting got lost. generate systemd is now preserving the quotes.
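
To illustrate the failure mode (this is a sketch of the documented behaviour, not podman's actual source): podman interprets an --entrypoint value that parses as a JSON array as the argv list, and anything else as a single executable name. Once systemd strips the double quotes from the ExecStart line, the value is no longer valid JSON and falls into the literal case, which produces exactly the `[/cron.sh]` error above:

```python
import json

def parse_entrypoint(value: str):
    """Sketch of how an --entrypoint value is interpreted: a valid
    JSON array is used as argv; anything else is treated as a single
    executable name (mirrors podman's documented behaviour)."""
    if value.startswith("["):
        try:
            return json.loads(value)
        except json.JSONDecodeError:
            pass  # not valid JSON; fall through to literal handling
    return [value]

# With quotes intact (as podman-compose passes them):
assert parse_entrypoint('["/cron.sh"]') == ["/cron.sh"]

# After systemd strips the double quotes, the value is no longer
# valid JSON and becomes a (nonexistent) literal executable name:
assert parse_entrypoint('[/cron.sh]') == ["[/cron.sh]"]
```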

Also, BTW, when you fix that, is there anything you can do to (also or at least) improve that confusing error message?

The error message comes from the runtime (e.g., runc or crun). I understand it’s confusing in this specific issue but I have no idea how we could improve the error message.

Thanks for tracking the issue down. I will have a close look and see what we can do.

Can you replace --entrypoint ["/cron.sh"] with --entrypoint /cron.sh and try again?
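
For anyone hitting this before the fix lands, a possible manual edit of the generated unit (an untested sketch that relies on systemd's quote handling in ExecStart) is to re-add quoting so podman receives valid JSON; the single quotes are consumed by systemd, preserving the inner double quotes:

```ini
# Hypothetical hand-edited ExecStart fragment; "..." stands for the
# unchanged remainder of the generated command line:
ExecStart=/usr/bin/podman run ... --entrypoint '["/cron.sh"]' nextcloud:22-fpm
```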