podman: Stuck on Storing Signatures

/kind bug

Description: On Void Linux, the latest podman appears to get stuck at the "Storing signatures" step when pulling and running the sitespeed.io container.

Steps to reproduce the issue: podman run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:9.3.2 https://sitespeed.io

Describe the results you received:

[vlad@void-vm ~]$ podman run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:9.3.2 https://sitespeed.io
Trying to pull docker.io/sitespeedio/sitespeed.io:9.3.2...
Getting image source signatures
Copying blob edb2b29fa1e2 skipped: already exists
Copying blob c46b5fa4d940 skipped: already exists
Copying blob 6b1eed27cade skipped: already exists
Copying blob 93ae3df89c92 skipped: already exists
Copying blob 76d5ea985833 skipped: already exists
Copying blob 473ede7ed136 skipped: already exists
Copying blob cf82bd0b1aa3 skipped: already exists
Copying blob 7dc6cf341fb3 skipped: already exists
Copying blob 3c9757b8e6c7 skipped: already exists
Copying blob 275861923052 skipped: already exists
Copying blob 4c29465436d4 skipped: already exists
Copying blob 66700b5e3941 skipped: already exists
Copying blob 160f3f39f1b5 skipped: already exists
Copying blob a0507231acd7 skipped: already exists
Copying blob b965ed368ed7 skipped: already exists
Copying blob ad9103b58e2d skipped: already exists
Copying blob 946c4c8160b3 skipped: already exists
Copying blob df426434925b skipped: already exists
Copying blob 8791c156ea54 skipped: already exists
Copying blob b1ac729adf6d skipped: already exists
Copying blob 4d916c8de88f skipped: already exists
Copying blob 67578fe28a3d skipped: already exists
Copying blob 47f6b4d4a060 done
Copying blob 203d22208385 done
Copying blob 671ce6b824ea done
Copying config 681cf2afef done
Writing manifest to image destination
Storing signatures

Describe the results you expected: I would expect it not to freeze at the "Storing signatures" step and to actually run the container.

Output of podman version:

Version:            1.4.0
RemoteAPI Version:  1
Go Version:         go1.12.5
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.12.5
  podman version: 1.4.0
host:
  BuildahVersion: 1.8.3
  Conmon:
    package: Unknown
    path: /usr/libexec/podman/conmon
    version: 'conmon version 0.2.0, commit: '
  Distribution:
    distribution: '"void"'
    version: unknown
  MemFree: 135749632
  MemTotal: 2013794304
  OCIRuntime:
    package: Unknown
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  hostname: void-vm
  kernel: 4.19.50_1
  os: linux
  rootless: true
  uptime: 2h 52m 7.41s (Approximately 0.08 days)
registries:
  blocked: null
  insecure: null
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
store:
  ConfigFile: /home/vlad/.config/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: vfs
  GraphOptions: null
  GraphRoot: /home/vlad/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 1
  RunRoot: /tmp/1000
  VolumePath: /home/vlad/.local/share/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.): Hyper-V VM on Windows 10 Ent

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 26 (9 by maintainers)

Most upvoted comments

My bad: it turns out the image pull completed correctly. I had run podman container run, and since the container produces no output, I assumed it was stuck at the "Storing signatures" step.
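
For anyone hitting the same confusion, checking from a second terminal is a quick way to tell a silently running container from a hung pull. A minimal sketch using standard podman commands (the container ID placeholder is illustrative, not from this issue):

# list running containers to confirm the container actually started
podman ps

# follow whatever output the container does produce (use the ID or name from podman ps)
podman logs -f <container-id>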

I finally solved the problem by running sudo rm -rf ~/.config/containers/storage.conf and then running podman pull without the --storage-driver option.

Podman sees that I now have fuse-overlayfs and libfuse3-dev installed and creates new config files with overlay as the default storage driver.
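
For reference, the regenerated rootless storage.conf typically ends up looking roughly like the sketch below. The runroot and graphroot values are taken from the podman info output above; the fuse-overlayfs path is an assumption and varies by distro, so treat this as illustrative rather than the exact file podman wrote:

$ cat ~/.config/containers/storage.conf
[storage]
  driver = "overlay"
  runroot = "/tmp/1000"
  graphroot = "/home/vlad/.local/share/containers/storage"

[storage.options]
  # fuse-overlayfs is what makes the overlay driver usable for rootless podman
  mount_program = "/usr/bin/fuse-overlayfs"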

Specifying the --storage-driver overlay option results in the problem detailed here: https://github.com/containers/buildah/issues/1745, even though, if I understand correctly, it shouldn't change anything since overlay is now the default.

Also, for anyone else who ends up here from a search: --storage-driver fuse-overlayfs is (I think) incorrect syntax; the proper usage is --storage-driver overlay.
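
In other words, something like the following (using the image from this issue as the example; fuse-overlayfs is selected through the config's mount_program, not through this flag):

# not a valid driver name (at least in my understanding)
podman --storage-driver fuse-overlayfs pull docker.io/sitespeedio/sitespeed.io:9.3.2

# the driver is "overlay"; fuse-overlayfs acts as its mount program
podman --storage-driver overlay pull docker.io/sitespeedio/sitespeed.io:9.3.2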

It would be nice if the overlay packages were required dependencies of podman and overlay were used as the default driver from the beginning. That would save new users a lot of time, since they wouldn't have to deal with this difficult-to-diagnose issue.

@Dulani I believe fuse-overlay should work in 7.8.

@Jousboxx podman info should contain a field, GraphDriverName, that tells you what your current driver is.
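
For example, a quick way to check (the field appears under store in the podman info output earlier in this issue):

# print just the storage-driver line from podman info
podman info | grep GraphDriverName

On the reporter's system this prints GraphDriverName: vfs; after switching to fuse-overlayfs it should report overlay instead.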

After I logged this ticket I set up a bunch of other containers and worked with those. I just tried the container in question again and it actually worked this time around. I should have mentioned that this was a fresh install of podman: I ran a hello-world container first (which worked), and the next container I tried was the one mentioned here, which failed for me.

Since it works now we can close the ticket, but there might be something funky going on with a fresh install and a particular sequence of containers.

I'm still attaching the debug log for a successful run:

time="2019-06-13T16:17:58-04:00" level=info msg="running as rootless" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Initializing boltdb state at /home/vlad/.local/share/containers/storage/libpod/bolt_state.db" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using graph driver vfs" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using graph root /home/vlad/.local/share/containers/storage" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using run root /tmp/1000" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using static dir /home/vlad/.local/share/containers/storage/libpod" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using tmp dir /tmp/run-1000/libpod/tmp" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using volume path /home/vlad/.local/share/containers/storage/volumes" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Set libpod namespace to """ 
time="2019-06-13T16:17:58-04:00" level=debug msg="[graphdriver] trying provided driver "vfs"" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Initializing event backend file" 
time="2019-06-13T16:17:58-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]docker.io/sitespeedio/sitespeed.io:9.3.2"" 
time="2019-06-13T16:17:58-04:00" level=debug msg="reference "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]docker.io/sitespeedio/sitespeed.io:9.3.2" does not resolve to an image ID" 
time="2019-06-13T16:17:58-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]localhost/sitespeedio/sitespeed.io:9.3.2"" 
time="2019-06-13T16:17:58-04:00" level=debug msg="reference "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]localhost/sitespeedio/sitespeed.io:9.3.2" does not resolve to an image ID" 
time="2019-06-13T16:17:58-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]docker.io/sitespeedio/sitespeed.io:9.3.2"" 
time="2019-06-13T16:17:58-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]registry.fedoraproject.org/sitespeedio/sitespeed.io:9.3.2"" 
time="2019-06-13T16:17:58-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]registry.access.redhat.com/sitespeedio/sitespeed.io:9.3.2"" 
Trying to pull docker.io/sitespeedio/sitespeed.io:9.3.2...time="2019-06-13T16:17:58-04:00" level=debug msg="Trying to pull "docker.io/sitespeedio/sitespeed.io:9.3.2" from endpoint "docker.io"" 
time="2019-06-13T16:17:58-04:00" level=debug msg="reference rewritten from 'docker.io/sitespeedio/sitespeed.io:9.3.2' to 'docker.io/sitespeedio/sitespeed.io:9.3.2'" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Using registries.d directory /etc/containers/registries.d for sigstore configuration" 
time="2019-06-13T16:17:58-04:00" level=debug msg=" Using "default-docker" configuration" 
time="2019-06-13T16:17:58-04:00" level=debug msg=" No signature storage configuration found for docker.io/sitespeedio/sitespeed.io:9.3.2" 
time="2019-06-13T16:17:58-04:00" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" 
time="2019-06-13T16:17:58-04:00" level=debug msg="GET https://registry-1.docker.io/v2/" 
time="2019-06-13T16:17:59-04:00" level=debug msg="Ping https://registry-1.docker.io/v2/ status 401" 
time="2019-06-13T16:17:59-04:00" level=debug msg="GET https://auth.docker.io/token?scope=repository%3Asitespeedio%2Fsitespeed.io%3Apull&service=registry.docker.io" 
time="2019-06-13T16:17:59-04:00" level=debug msg="GET https://registry-1.docker.io/v2/sitespeedio/sitespeed.io/manifests/9.3.2" 
time="2019-06-13T16:17:59-04:00" level=debug msg="Using blob info cache at /home/vlad/.local/share/containers/cache/blob-info-cache-v1.boltdb" 
time="2019-06-13T16:17:59-04:00" level=debug msg="IsRunningImageAllowed for image docker:docker.io/sitespeedio/sitespeed.io:9.3.2" 
time="2019-06-13T16:17:59-04:00" level=debug msg=" Using default policy section" 
time="2019-06-13T16:17:59-04:00" level=debug msg=" Requirement 0: allowed" 
time="2019-06-13T16:17:59-04:00" level=debug msg="Overall: allowed" 
time="2019-06-13T16:17:59-04:00" level=debug msg="Downloading /v2/sitespeedio/sitespeed.io/blobs/sha256:681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7" 
time="2019-06-13T16:17:59-04:00" level=debug msg="GET https://registry-1.docker.io/v2/sitespeedio/sitespeed.io/blobs/sha256:681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7" 
Getting image source signatures
time="2019-06-13T16:18:00-04:00" level=debug msg="Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json]" 
time="2019-06-13T16:18:00-04:00" level=debug msg="... will first try using the original manifest unmodified" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:edb2b29fa1e2c4cc66a334b4ba3a332d57264934be5f89af8ef3be96eb888be9 (already present):" 
Copying blob sha256:edb2b29fa1e2c4cc66a334b4ba3a332d57264934be5f89af8ef3be96eb888be9
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:473ede7ed136b710ab2dd51579af038b7d00fbbf6a1790c6294c93666203c0a6 (already present):" 
Copying blob sha256:473ede7ed136b710ab2dd51579af038b7d00fbbf6a1790c6294c93666203c0a6
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:6b1eed27cadec5de8051d56697b0b67527e4076deedceefb41b7b2ea9b900459 (already present):" 
Copying blob sha256:6b1eed27cadec5de8051d56697b0b67527e4076deedceefb41b7b2ea9b900459
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:93ae3df89c92cb1d20e9c09f499e693d3a8a8cef161f7158f7a9a3b5d06e4ef2 (already present):" 
Copying blob sha256:93ae3df89c92cb1d20e9c09f499e693d3a8a8cef161f7158f7a9a3b5d06e4ef2
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:76d5ea985833d96e837de773e874939bb93d7100db1cfc95919a75cee4e75f4f (already present):" 
Copying blob sha256:76d5ea985833d96e837de773e874939bb93d7100db1cfc95919a75cee4e75f4f
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:275861923052b47d0ee66f64972e8a5b8bc16efb629234b611b7c1d8a07798f5 (already present):" 
Copying blob sha256:275861923052b47d0ee66f64972e8a5b8bc16efb629234b611b7c1d8a07798f5
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:c46b5fa4d940569e49988515c1ea0295f56d0a16228d8f854e27613f467ec892 (already present):" 
Copying blob sha256:c46b5fa4d940569e49988515c1ea0295f56d0a16228d8f854e27613f467ec892
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:cf82bd0b1aa395c69cddc81e6d3c32a17e63ae953a1d2b5018506acf788a9ec4 (already present):" 
Copying blob sha256:cf82bd0b1aa395c69cddc81e6d3c32a17e63ae953a1d2b5018506acf788a9ec4
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:3c9757b8e6c7a1e12a6dfe4a303dd3495b1225ad8a3058393dc86d74d641a73d (already present):" 
Copying blob sha256:3c9757b8e6c7a1e12a6dfe4a303dd3495b1225ad8a3058393dc86d74d641a73d
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:66700b5e394146d8b5f8b78b298b27ef8e34904cf730da408dc1a516299128d4 (already present):" 
Copying blob sha256:66700b5e394146d8b5f8b78b298b27ef8e34904cf730da408dc1a516299128d4
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:946c4c8160b3c6046ebecc50981bbda4e659dcde72c5ff3e397d762e218b93eb (already present):" 
Copying blob sha256:946c4c8160b3c6046ebecc50981bbda4e659dcde72c5ff3e397d762e218b93eb
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:4c29465436d4cc42a30ef90c0e365c781c1a44a6c8ffd2ee319782f5d406abe7 (already present):" 
Copying blob sha256:4c29465436d4cc42a30ef90c0e365c781c1a44a6c8ffd2ee319782f5d406abe7
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:7dc6cf341fb33d2c5ffc8a1e94b6c4fe58f29ba494e71dc3cf4cad63ed718f4e (already present):" 
Copying blob sha256:7dc6cf341fb33d2c5ffc8a1e94b6c4fe58f29ba494e71dc3cf4cad63ed718f4e
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:b1ac729adf6d626ba6af0e27d3b5b1c5558e6760d2760c87ef85a23e94d9cf1d (already present):" 
Copying blob sha256:b1ac729adf6d626ba6af0e27d3b5b1c5558e6760d2760c87ef85a23e94d9cf1d
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:a0507231acd737214180fef1cc28b96e7529dc870eb6c62af591be8fe60e27ec (already present):" 
Copying blob sha256:a0507231acd737214180fef1cc28b96e7529dc870eb6c62af591be8fe60e27ec
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:b965ed368ed7febaede36922a2e0944eda9e5c1b3375fe73fdcb83733f14eaac (already present):" 
Copying blob sha256:b965ed368ed7febaede36922a2e0944eda9e5c1b3375fe73fdcb83733f14eaac
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:160f3f39f1b553d652de2b6eb7061e378834db0bf6ceca96ffe8a512fd71cefc (already present):" 
Copying blob sha256:160f3f39f1b553d652de2b6eb7061e378834db0bf6ceca96ffe8a512fd71cefc
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:4d916c8de88f66ec0c301780b8311394711c1c0447223caf3484e29fe7b7a6ca (already present):" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:67578fe28a3d1ad8f4d81700e61b6fa8eabe8054b30bf7bc6f487ee30c1a8c22 (already present):" 
Copying blob sha256:67578fe28a3d1ad8f4d81700e61b6fa8eabe8054b30bf7bc6f487ee30c1a8c22
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:8791c156ea54ffe9242508667be84754b317611b2241d44d3b00cdc74b3c88c2 (already present):" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:ad9103b58e2da6cd2340bcaed5fc504a69f187b804169f7529de148017888479 (already present):" 
Copying blob sha256:4d916c8de88f66ec0c301780b8311394711c1c0447223caf3484e29fe7b7a6ca
Copying blob sha256:ad9103b58e2da6cd2340bcaed5fc504a69f187b804169f7529de148017888479
Copying blob sha256:8791c156ea54ffe9242508667be84754b317611b2241d44d3b00cdc74b3c88c2
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:203d222083851df55e2854d8e4517328785a58d716182e7275330d189296c9e1 (already present):" 
Copying blob sha256:203d222083851df55e2854d8e4517328785a58d716182e7275330d189296c9e1
time="2019-06-13T16:18:00-04:00" level=debug msg="Skipping blob sha256:df426434925b00d62438ab04437e4ef4960c388b97f6788de3dc7ee45e9996e7 (already present):" 
Copying blob sha256:df426434925b00d62438ab04437e4ef4960c388b97f6788de3dc7ee45e9996e7
time="2019-06-13T16:18:00-04:00" level=debug msg="Downloading /v2/sitespeedio/sitespeed.io/blobs/sha256:671ce6b824eaaa746e9108951838971528b1ac66b1128b2ca29dd85109ea4cd5" 
time="2019-06-13T16:18:00-04:00" level=debug msg="GET https://registry-1.docker.io/v2/sitespeedio/sitespeed.io/blobs/sha256:671ce6b824eaaa746e9108951838971528b1ac66b1128b2ca29dd85109ea4cd5" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Downloading /v2/sitespeedio/sitespeed.io/blobs/sha256:47f6b4d4a06098db5452efcbe091d1de1a93cd08483309389ef416998b924014" 
time="2019-06-13T16:18:00-04:00" level=debug msg="GET https://registry-1.docker.io/v2/sitespeedio/sitespeed.io/blobs/sha256:47f6b4d4a06098db5452efcbe091d1de1a93cd08483309389ef416998b924014" 
Copying blob sha256:47f6b4d4a06098db5452efcbe091d1de1a93cd08483309389ef416998b924014
time="2019-06-13T16:18:00-04:00" level=debug msg="Detected compression format gzip" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Using original blob without modification" 
Copying blob sha256:671ce6b824eaaa746e9108951838971528b1ac66b1128b2ca29dd85109ea4cd5
time="2019-06-13T16:18:00-04:00" level=debug msg="Detected compression format gzip" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Using original blob without modification" 
Copying config sha256:681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7
time="2019-06-13T16:18:00-04:00" level=debug msg="No compression detected" 
time="2019-06-13T16:18:00-04:00" level=debug msg="Using original blob without modification" 
Writing manifest to image destination
Storing signatures
time="2019-06-13T16:18:59-04:00" level=debug msg="Start untar layer" 
time="2019-06-13T16:18:59-04:00" level=debug msg="Untar time: 0.1234235s" 
time="2019-06-13T16:19:47-04:00" level=debug msg="Start untar layer" 
time="2019-06-13T16:19:47-04:00" level=debug msg="Untar time: 0.1500408s" 
time="2019-06-13T16:19:47-04:00" level=debug msg="setting image creation date to 2019-06-13 10:25:35.2120051 +0000 UTC" 
time="2019-06-13T16:19:47-04:00" level=debug msg="created new image ID "681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="set names of image "681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7" to [docker.io/sitespeedio/sitespeed.io:9.3.2]" 
time="2019-06-13T16:19:48-04:00" level=debug msg="saved image metadata "{}"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]docker.io/sitespeedio/sitespeed.io:9.3.2"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]@681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="exporting opaque data as blob "sha256:681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]@681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="exporting opaque data as blob "sha256:681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]@681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="User mount /home/vlad:/sitespeed.io options []" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Got mounts: [{/sitespeed.io bind /home/vlad []}]" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Got volumes: []" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Using slirp4netns netmode" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Adding mount /proc" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Adding mount /dev" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Adding mount /dev/pts" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Adding mount /dev/mqueue" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Adding mount /sys" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Adding mount /sys/fs/cgroup" 
time="2019-06-13T16:19:48-04:00" level=debug msg="created OCI spec and options for new container" 
time="2019-06-13T16:19:48-04:00" level=debug msg="Allocated lock 1 for container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" 
time="2019-06-13T16:19:48-04:00" level=debug msg="parsed reference into "[vfs@/home/vlad/.local/share/containers/storage+/tmp/1000]@681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:19:48-04:00" level=debug msg="exporting opaque data as blob "sha256:681cf2afef031c611e7f01d5adfdf99bba0570f499d27b562fd41cef2023ebc7"" 
time="2019-06-13T16:20:46-04:00" level=debug msg="created container "748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a"" 
time="2019-06-13T16:20:47-04:00" level=debug msg="container "748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" has work directory "/home/vlad/.local/share/containers/storage/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata"" 
time="2019-06-13T16:20:47-04:00" level=debug msg="container "748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" has run directory "/tmp/1000/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata"" 
time="2019-06-13T16:20:47-04:00" level=debug msg="New container created "748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a"" 
time="2019-06-13T16:20:47-04:00" level=debug msg="container "748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" has CgroupParent "/libpod_parent/libpod-748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a"" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Not attaching to stdin" 
time="2019-06-13T16:20:47-04:00" level=debug msg="mounted container "748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" at "/home/vlad/.local/share/containers/storage/vfs/dir/6d36569a39082a6a1f51980a939fb2b538343deeb9719edaf9b9720d0675d618"" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Created root filesystem for container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a at /home/vlad/.local/share/containers/storage/vfs/dir/6d36569a39082a6a1f51980a939fb2b538343deeb9719edaf9b9720d0675d618" 
time="2019-06-13T16:20:47-04:00" level=debug msg="skipping loading default AppArmor profile (rootless mode)" 
time="2019-06-13T16:20:47-04:00" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode secret" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Created OCI spec for container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a at /home/vlad/.local/share/containers/storage/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata/config.json" 
time="2019-06-13T16:20:47-04:00" level=debug msg="/usr/libexec/podman/conmon messages will be logged to syslog" 
time="2019-06-13T16:20:47-04:00" level=debug msg="running conmon: /usr/libexec/podman/conmon" args=[-c 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a -u 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a -n tender_kirch -r /usr/bin/runc -b /home/vlad/.local/share/containers/storage/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata -p /tmp/1000/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata/pidfile --exit-dir /tmp/run-1000/libpod/tmp/exits --conmon-pidfile /tmp/1000/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/vlad/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /tmp/run-1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a --socket-dir-path /tmp/run-1000/libpod/tmp/socket -l k8s-file:/home/vlad/.local/share/containers/storage/vfs-containers/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/userdata/ctr.log --log-level debug --syslog] 
time="2019-06-13T16:20:47-04:00" level=warning msg="Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/freezer/libpod_parent: permission denied" 
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

time="2019-06-13T16:20:47-04:00" level=debug msg="Received container pid: 9085" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Created container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a in OCI runtime" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Attaching to container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" 
time="2019-06-13T16:20:47-04:00" level=debug msg="connecting to socket /tmp/run-1000/libpod/tmp/socket/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a/attach" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Starting container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a with command [/start.sh https://sitespeed.io]" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Started container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a" 
time="2019-06-13T16:20:47-04:00" level=debug msg="Enabling signal proxying" 
Google Chrome 75.0.3770.80 
Mozilla Firefox 67.0.1
[2019-06-13 20:20:49] INFO: Versions OS: linux 4.19.50_1 nodejs: v10.13.0 sitespeed.io: 9.3.2 browsertime: 5.2.5 coach: 3.6.0
[2019-06-13 20:20:50] INFO: Running tests using Chrome - 3 iteration(s)
[2019-06-13 20:20:54] INFO: Testing url https://sitespeed.io iteration 1
[2019-06-13 20:21:07] INFO: https://sitespeed.io BackEndTime: 804 DomInteractiveTime: 850 DomContentLoadedTime: 1005 FirstPaint: 1026 PageLoadTime: 1044
[2019-06-13 20:21:07] INFO: VisualMetrics FirstVisualChange: 1034 SpeedIndex: 1040 PerceptualSpeedIndex: 1040 LastVisualChange: 1067
[2019-06-13 20:21:08] INFO: Testing url https://sitespeed.io iteration 2
[2019-06-13 20:21:19] INFO: https://sitespeed.io BackEndTime: 433 DomInteractiveTime: 477 DomContentLoadedTime: 638 FirstPaint: 648 PageLoadTime: 639
[2019-06-13 20:21:19] INFO: VisualMetrics FirstVisualChange: 666 SpeedIndex: 672 PerceptualSpeedIndex: 673 LastVisualChange: 700
[2019-06-13 20:21:20] INFO: Testing url https://sitespeed.io iteration 3
[2019-06-13 20:21:32] INFO: https://sitespeed.io BackEndTime: 537 DomInteractiveTime: 595 DomContentLoadedTime: 647 FirstPaint: 797 PageLoadTime: 766
[2019-06-13 20:21:32] INFO: VisualMetrics FirstVisualChange: 800 SpeedIndex: 800 PerceptualSpeedIndex: 800 LastVisualChange: 800
[2019-06-13 20:21:32] INFO: https://sitespeed.io 14 requests, 168.26 kb, backEndTime: 591ms (±90.22ms), firstPaint: 824ms (±89.76ms), firstVisualChange: 833ms (±87.80ms), DOMContentLoaded: 763ms (±98.68ms), Load: 816ms (±97.65ms), speedIndex: 837 (±88.07), visualComplete85: 856ms (±89.44ms), lastVisualChange: 856ms (±89.44ms), rumSpeedIndex: 824 (±89.76) (3 runs)
[2019-06-13 20:21:34] INFO: HTML stored in /sitespeed.io/sitespeed-result/sitespeed.io/2019-06-13-20-20-49
time="2019-06-13T16:21:38-04:00" level=debug msg="Checking container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a status..." 
time="2019-06-13T16:21:38-04:00" level=debug msg="Attempting to read container 748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a exit code from file /tmp/run-1000/libpod/tmp/exits/748d486246c5a121bb6b3c6158657bba8cb1c830bd1a7148b49314bc14b8fa4a-old" 
time="2019-06-13T16:21:38-04:00" level=debug msg="[graphdriver] trying provided driver "vfs""