buildah: --pull=always does not work with local images

Issue Description

The --pull=always flag in $ podman build does not work with locally built images. Locally built images are automatically prefixed with localhost/, which makes --pull=always treat localhost as a container registry it must pull from.

--pull=newer does not seem to have this problem, as it does not complain about the network failure.

Related discussion: https://github.com/containers/podman/discussions/20121

Steps to reproduce the issue

  1. $ less Containerfile.base
    FROM archlinux:base
    
  2. $ podman build -f Containerfile.base -t mylocalimage --pull=always

    Successfully tagged localhost/mylocalimage:latest

  3. $ less Containerfile.extended
    FROM mylocalimage
    
  4. $ podman build -f Containerfile.extended --pull=always

    WARN[0000] Failed, retrying in 2s … (1/3). Error: initializing source docker://localhost/mylocalimage:latest: pinging container registry localhost: Get "https://localhost/v2/": dial tcp [::1]:443: connect: connection refused

  5. $ podman image ls
    REPOSITORY                       TAG         IMAGE ID      CREATED            SIZE
    localhost/mylocalimage           latest      74269d97bbd7  3 days ago         448 MB
    docker.io/library/archlinux      base        74269d97bbd7  3 days ago         448 MB
    
    (Not related to this issue, but I don't know why it shows the image as created 3 days ago when I just built it; $ date reports Mon Sep 25 09:13:50 AM UTC 2023.)

Describe the results you received

--pull=always tries to pull from a registry at localhost instead of using the locally stored image.

Describe the results you expected

--pull=always should recognize that the image is available locally instead of pinging localhost.

Understandably, the best workaround is to not specify --pull=always at all (Podman automatically uses the newest image for locally built images), but this breaks my CI/CD workflow, which builds multiple Containerfiles in a loop with the same set of flags.
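One way to keep a single build loop is to select the pull policy per image instead of hard-coding --pull=always. The helper below is a hypothetical sketch, not part of Podman; bare base names that resolve to localhost/ would additionally need a check such as $ podman image exists.

```shell
# Hypothetical helper: choose a pull policy based on where the base image
# lives. localhost/-prefixed (locally built) images get "newer" so the
# build does not try to contact a registry on localhost; everything else
# keeps "always".
pull_policy() {
  case "$1" in
    localhost/*) echo newer ;;
    *)           echo always ;;
  esac
}

# Illustrative CI loop (Containerfile names as in this report):
# for f in Containerfile.*; do
#   base=$(awk '/^FROM /{print $2; exit}' "$f")
#   podman build -f "$f" --pull="$(pull_policy "$base")"
# done
```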

podman info output

host:
  arch: amd64
  buildahVersion: 1.31.2
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.1.8-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.8, commit: 00e08f4a9ca5420de733bf542b930ad58e1a7e7d'
  cpuUtilization:
    idlePercent: 99.89
    systemPercent: 0.08
    userPercent: 0.03
  cpus: 32
  databaseBackend: boltdb
  distribution:
    distribution: arch
    version: 20230921.0.180222
  eventLogger: journald
  freeLocks: 2015
  hostname: ostree
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.5.4-arch2-1
  linkmode: dynamic
  logDriver: journald
  memFree: 27389308928
  memTotal: 33651609600
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: Unknown
    package: /usr/lib/podman/netavark is owned by netavark 1.7.0-1
    path: /usr/lib/podman/netavark
    version: netavark 1.7.0
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.9-1
    path: /usr/bin/crun
    version: |-
      crun version 1.9
      commit: a538ac4ea1ff319bcfe2bf81cb5c6f687e2dc9d3
      rundir: /run/user/0/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.2.2-1
    version: |-
      slirp4netns version 1.2.2
      commit: 0ee2d87523e906518d34a6b423271e4826f71faf
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 0
  swapTotal: 0
  uptime: 2h 52m 13.00s (Approximately 0.08 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries: {}
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 33
    paused: 0
    running: 0
    stopped: 33
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 26506952704
  graphRootUsed: 16536727552
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 47
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.6.2
  Built: 1693343961
  BuiltTime: Tue Aug 29 21:19:21 2023
  GitCommit: 5db42e86862ef42c59304c38aa583732fd80f178-dirty
  GoVersion: go1.21.0
  Os: linux
  OsArch: linux/amd64
  Version: 4.6.2

Podman in a container

No

Privileged Or Rootless

Privileged

Upstream Latest Release

Yes

Additional environment details

N/A

Additional information

N/A

About this issue

  • Original URL
  • State: open
  • Created 9 months ago
  • Reactions: 1
  • Comments: 17 (6 by maintainers)

Most upvoted comments

You would like a Debug message saying the image was not pulled?

--pull=always on an image with localhost is not strictly related to the docs issue. What it boils down to is: Do we want to “relax” the always pull policy when the reference points to localhost/?

I am rather against it, because --pull=always instructs Podman to always pull. Podman has no knowledge of whether the user is referencing a local-only image or not, so I prefer that users who choose --pull=always make a conscious decision about it.
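For clarity, the relaxation under discussion can be expressed as a predicate. This is an illustrative sketch only, not Podman's actual code: under --pull=always, skip the registry round-trip when the reference is under localhost/ and the image already exists in local storage.

```shell
# should_pull POLICY REF EXISTS_LOCALLY -> prints yes/no
# Hypothetical sketch of the proposed relaxation of the "always" policy.
should_pull() {
  policy="$1"; ref="$2"; exists_locally="$3"   # exists_locally: yes/no
  if [ "$policy" = never ]; then
    echo no; return
  fi
  if [ "$policy" = always ] && [ "$exists_locally" = yes ]; then
    case "$ref" in
      localhost/*) echo no; return ;;   # proposed relaxation: use local image
    esac
  fi
  echo yes   # always/newer: consult the registry, as today
}
```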

After reading the PR changes in https://github.com/containers/podman/pull/20124, I think the Buildah documentation may also be incorrect; let's wait for comments on the PR.