buildah: podman build tags images in the local registry with a leading localhost/ and docker does not

Description

Does podman build aim to be a drop-in replacement for docker? If not, feel free to close this issue. If so, the issue is that podman build -t bodhi-ci/f27 tags the resulting image into the local engine with a leading localhost/ in the repository name, while docker does not. As a result the image must be referenced differently later when running docker/podman run: docker run… bodhi-ci/f27 works, but podman run… bodhi-ci/f27 does not (and will try to find that image in a variety of external registries).

Steps to reproduce the issue:

  1. Build a Dockerfile with podman and docker, and use the -t flag.
  2. For example, I’ve built my Bodhi CI images with -t bodhi-ci/f27 in the examples shown below.
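The naming difference can be sketched without a container engine at all; the helper below is purely illustrative (the function and its output format are my own, mimicking how each engine appears to record a short name passed via -t):

```shell
# Illustrative only: mimic how each engine stores a short tag given via -t.
stored_name() {
  engine="$1"; tag="$2"
  case "$engine" in
    docker) printf '%s:latest\n' "$tag" ;;            # docker keeps the bare name
    podman) printf 'localhost/%s:latest\n' "$tag" ;;  # podman prepends localhost/
  esac
}

stored_name docker bodhi-ci/f27   # bodhi-ci/f27:latest
stored_name podman bodhi-ci/f27   # localhost/bodhi-ci/f27:latest
```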

Describe the results you received:

$ sudo docker images
REPOSITORY                          TAG                 IMAGE ID            CREATED             SIZE
bodhi-ci/f27                        latest              4668aacb1a2a        11 hours ago        1.29 GB
$ sudo podman images
REPOSITORY                          TAG      IMAGE ID       CREATED          SIZE
localhost/bodhi-ci/f27              latest   351ffa92bc74   2 minutes ago    1.34GB

Describe the results you expected:

$ sudo podman images
REPOSITORY                          TAG      IMAGE ID       CREATED          SIZE
bodhi-ci/f27                        latest   351ffa92bc74   2 minutes ago    1.34GB

Output of rpm -q buildah or apt list buildah:

Buildah is not installed, but the text at https://github.com/containers/libpod/issues/new said to file issues with podman build here.

Output of buildah version:

See above.

Output of cat /etc/*release:

$ cat /etc/redhat-release 
Fedora release 30 (Rawhide)

Output of uname -a:

Linux host.example.com 4.19.0-0.rc3.git3.1.fc30.x86_64 #1 SMP Fri Sep 14 18:31:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

Output of cat /etc/containers/storage.conf:

# This file is the configuration file for all tools
# that use the containers/storage library.
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]

# Default Storage Driver
driver = "overlay"

# Temporary storage location
runroot = "/var/run/containers/storage"

# Primary Read/Write location of container storage
graphroot = "/var/lib/containers/storage"

[storage.options]
# Storage options to be passed to underlying storage drivers

# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
additionalimagestores = [
]

# Size is used to set a maximum size of the container image.  Only supported by
# certain container storage drivers.
size = ""

# Path to a helper program to use for mounting the file system instead of mounting it
# directly.
#mount_program = "/usr/bin/fuse-overlayfs"

# OverrideKernelCheck tells the driver to ignore kernel checks based on kernel version
override_kernel_check = "true"

# mountopt specifies comma separated list of extra mount options
mountopt = "nodev"

# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of
# a container, to UIDs/GIDs as they should appear outside of the container, and
# the length of the range of UIDs/GIDs.  Additional mapped sets can be listed
# and will be heeded by libraries, but there are limits to the number of
# mappings which the kernel will allow when you later attempt to run a
# container.
#
# remap-uids = 0:1668442479:65536
# remap-gids = 0:1668442479:65536

# Remap-User/Group is a name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file.  Mappings are set up starting
# with an in-container ID of 0 and a host-level ID taken from the lowest
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped container-level ID,
# until all of the entries have been used for maps.
#
# remap-user = "storage"
# remap-group = "storage"

[storage.options.thinpool]
# Storage Options for thinpool

# autoextend_percent determines the amount by which pool needs to be
# grown. This is specified in terms of % of pool size. So a value of 20 means
# that when threshold is hit, pool will be grown by 20% of existing
# pool size.
# autoextend_percent = "20"

# autoextend_threshold determines the pool extension threshold in terms
# of percentage of pool size. For example, if threshold is 60, that means when
# pool is 60% full, threshold has been hit.
# autoextend_threshold = "80"

# basesize specifies the size to use when creating the base device, which
# limits the size of images and containers.
# basesize = "10G"

# blocksize specifies a custom blocksize to use for the thin pool.
# blocksize="64k"

# directlvm_device specifies a custom block storage device to use for the
# thin pool. Required if you setup devicemapper.
# directlvm_device = ""

# directlvm_device_force wipes device even if device already has a filesystem.
# directlvm_device_force = "True"

# fs specifies the filesystem type to use for the base device.
# fs="xfs"

# log_level sets the log level of devicemapper.
# 0: LogLevelSuppress 0 (Default)
# 2: LogLevelFatal
# 3: LogLevelErr
# 4: LogLevelWarn
# 5: LogLevelNotice
# 6: LogLevelInfo
# 7: LogLevelDebug
# log_level = "7"

# min_free_space specifies the min free space percent in a thin pool require for
# new device creation to succeed. Valid values are from 0% - 99%.
# Value 0% disables
# min_free_space = "10%"

# mkfsarg specifies extra mkfs arguments to be used when creating the base.
# device.
# mkfsarg = ""

# use_deferred_removal marks devicemapper block device for deferred removal.
# If the thinpool is in use when the driver attempts to remove it, the driver 
# tells the kernel to remove it as soon as possible. Note this does not free
# up the disk space, use deferred deletion to fully remove the thinpool.
# use_deferred_removal = "True"

# use_deferred_deletion marks thinpool device for deferred deletion.
# If the device is busy when the driver attempts to delete it, the driver
# will attempt to delete device every 30 seconds until successful.
# If the program using the driver exits, the driver will continue attempting
# to cleanup the next time the driver is used. Deferred deletion permanently
# deletes the device and all data stored in device will be lost.
# use_deferred_deletion = "True"

# xfs_nospace_max_retries specifies the maximum number of retries XFS should
# attempt to complete IO when ENOSPC (no space) error is returned by
# underlying storage device.
# xfs_nospace_max_retries = "0"

# If specified, use OSTree to deduplicate files with the overlay backend
ostree_repo = ""

# Set to skip a PRIVATE bind mount on the storage home directory.  Only supported by
# certain container storage drivers
skip_mount_home = "false"

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 11
  • Comments: 78 (40 by maintainers)

Most upvoted comments

I take this closing of the ticket to mean that podman does not want to be compatible with docker. This will make it very difficult for those of us advocating that people move from docker to podman, as they now have an incompatibility with no workaround. If podman had a command-line option to suppress this “localhost/” prefix, that would be sufficient. Then I could tell people that podman save works just like docker save, except that you need this extra flag.

I’d be interested in being able to remove the localhost/ prefix from image names with podman as well. As another use case, I think it is preventing me from remotely developing, from PyCharm on my Mac, on a Linux box running podman.

I want podman save foo to create a file that has the same manifest.json and repositories files that docker save foo has.

For example when I execute docker save foo I see this in manifest.json:

[
  {
    "Config": "cd998886946a19daf9b43f35476427b9e800999f343ca463eae89f4570c10cc9.json",
    "RepoTags": [
      "foo:latest"
    ],
    "Layers": [
...
    ]
  }
]

When I executed podman save foo I see this in manifest.json:

[
  {
    "Config": "cd998886946a19daf9b43f35476427b9e800999f343ca463eae89f4570c10cc9.json",
    "RepoTags": [
      "localhost/foo:latest"
    ],
    "Layers": [
...
    ]
  }
]

The key here is the prefix on the “RepoTags” property.

The repositories file has a similar issue: docker shows

{
  "foo": {
    "latest": "cdd290f400b43594611786fe015d8f78205421db0bd8afbbf3a12d55fffdd1ed"
  }
}

podman shows

{
  "localhost/frc": {
    "latest": "92b78c85f2b4b16c8ea4f6235af775f09d227062270fdd84bd7052c19d850528"
  }
}
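If an archive produced by podman save must be consumed by tooling that expects docker’s naming, one workaround is to strip the prefix from both metadata files after unpacking the tar. The sketch below demonstrates this on throwaway files it creates itself (the contents are illustrative; in practice they come from the unpacked archive):

```shell
# Demonstration on throwaway files; in practice these come from an unpacked
# `podman save` tarball.
dir=$(mktemp -d)
printf '%s' '[{"RepoTags":["localhost/foo:latest"]}]' > "$dir/manifest.json"
printf '%s' '{"localhost/foo":{"latest":"cdd290f4"}}' > "$dir/repositories"

# Drop every localhost/ prefix in place (GNU sed).
sed -i 's|localhost/||g' "$dir/manifest.json" "$dir/repositories"

cat "$dir/manifest.json"   # [{"RepoTags":["foo:latest"]}]
```

After re-tarring, docker load should see the same names that docker save would have written.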

+1 for removing the localhost/ prefix. How does this pain me? Kind doesn’t assume that localhost/myimage:v0.1 and myimage:v0.1 are the same. So now I need to modify all my k8s yaml files to add the localhost/ prefix.

The images from docker.io worked fine, it was the locally built ones that podman puts localhost/ in front of that are the ‘problem’.

It looks like this was a design decision by the podman team, so it’s not a bug. I put my comment in because google brought me to this issue and I thought it might be useful for others to see another option for getting around that when google brings them here.

I’m using podman 1.6.2 and still seeing this issue of images being prefixed with “localhost/”. Should this still be the case? This makes compatibility with docker scripts difficult as the image names aren’t the same.

Please open new issues or discussions.

Must there be a prefix at all?

Does this work for you?

# podman build -t myimage -f /tmp/test/Dockerfile /tmp/test/
STEP 1: FROM alpine
STEP 2: COMMIT myimage
--> 28f6e270574
28f6e27057430ed2a40dbdd50d2736a3f0a295924016e294938110eeb8439818
# podman images | grep myimage
localhost/myimage          latest    28f6e2705743  4 weeks ago  5.88 MB
# podman tag myimage docker.io/myimage
# podman images | grep myimage
localhost/myimage          latest    28f6e2705743  4 weeks ago  5.88 MB
docker.io/library/myimage  latest    28f6e2705743  4 weeks ago  5.88 MB
# podman save docker.io/library/myimage -o /tmp/myimage.tar
# docker load -i /tmp/myimage.tar 
cb381a32b229: Loading layer [==================================================>]   5.88MB/5.88MB
Loaded image: myimage:latest
# docker images | grep myimage
myimage      latest    28f6e2705743   4 weeks ago   5.61MB

A friendly reminder that this issue had no activity for 30 days.

@vrothberg Let’s talk about this next week, perhaps at Watercooler.

This sounds like an excellent weekend project for myself. Let me see if I can come up with a way to use a new option to remove localhost if the option is set to true. --nolocalhost perhaps?

I suggest waiting until we have consensus on the way forward.

IMHO a flag isn’t as user-friendly as I’d like it to be. If users wish to change the behaviour, they would have to specify the flag every time, which is a bit annoying and error-prone, as it can easily be forgotten.

I could imagine, however, a new option in the registries.conf.

This sounds like an excellent weekend project for myself. Let me see if I can come up with a way to use a new option to remove localhost if the option is set to true. --nolocalhost perhaps?

@rhatdan, as you reopened. What’s your take on it?

The localhost prefixing is incompatible with Docker. We’ve been discussing this issue a couple of times already but the conclusion was to keep the behaviour. I am no fan of it because if I tag something as foo I want it to be foo.

However, if we changed the behaviour, it would be a breaking change for both, Podman and Buildah.

Any plans to address this? This is a deal breaker for allowing both engines to be used in parallel even for the most basic CI/CD tasks, like:

ENGINE=$(command -v podman docker | head -n1)
$ENGINE build -t elastic-recheck .
# docker only:
$ENGINE run -it elastic-recheck /bin/bash
# podman only:
$ENGINE run -it localhost/elastic-recheck /bin/bash
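Until the behaviour changes, scripts like the one above can paper over the difference with a small helper that qualifies the reference only when the engine is podman. This is a sketch of my own, not an upstream feature; the function name and layout are illustrative:

```shell
# Pick whichever engine exists on the host.
ENGINE=$(command -v podman docker 2>/dev/null | head -n1)

# Qualify a locally built image reference for the engine in use.
# The engine is passed explicitly so the helper is easy to test.
local_ref() {
  case "$(basename "$1")" in
    podman) printf 'localhost/%s\n' "$2" ;;  # podman stores local builds under localhost/
    *)      printf '%s\n' "$2" ;;            # docker (and anything else) keeps the bare name
  esac
}

# Example usage (commented out so the sketch runs without either engine):
# $ENGINE run -it "$(local_ref "$ENGINE" elastic-recheck)" /bin/bash
```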

@TomSweeneyRedHat do you know of a workaround that I can use until this is fixed? Is there a way to tell podman not to prefix the tag with “localhost/”?

I believe this is fixed in the latest release.

Well it is intended to be a drop-in replacement for the most part. The localhost versus docker.io was intentional.

The registry component of an image should indicate where it came from. And we don’t want to hard code the default location of docker.io.

Now if podman cannot instantly run the image created by buildah or podman build, then that is a bug.
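For context, the external-registry search behaviour mentioned in this thread is governed by registries.conf; a minimal illustrative fragment is below (the registry list is an example, not a recommendation, and the exact key depends on your containers/image version):

```toml
# /etc/containers/registries.conf (illustrative)
# Registries consulted when a short name like bodhi-ci/f27 is not found locally.
unqualified-search-registries = ["docker.io"]
```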