cri-o: Pulling container images fails on CentOS Stream 8

What happened?

Pulling images fails due to an unknown key in /etc/containers/policy.json on CentOS Stream 8 with cri-o 1.24.2 installed.

[root@localhost ~]# crictl pull k8s.gcr.io/kube-apiserver:v1.24.4
FATA[0000] pulling image: rpc error: code = Unknown desc = invalid policy in "/etc/containers/policy.json": Unknown key "keyPaths" 

The same image can be pulled successfully using podman:

[root@localhost brian]# podman pull k8s.gcr.io/kube-apiserver:v1.24.4
Trying to pull k8s.gcr.io/kube-apiserver:v1.24.4...
Getting image source signatures
Copying blob f5bb0a2b916a done  
Copying blob b9f88661235d done  
Copying blob cca57b588e6e done  
Copying config 6cab9d1bed done  
Writing manifest to image destination
Storing signatures
6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d

This issue appears to have been introduced by changes in the latest containers-common RPM (containers-common-1-40.module_el8.7.0+1196+721f4eb0.x86_64).

Pulling images succeeds after downgrading containers-common:

[root@localhost ~]# rpm -qa | grep containers-common
containers-common-1-40.module_el8.7.0+1196+721f4eb0.x86_64

[root@localhost ~]# cat /etc/containers/policy.json | grep keyPath
		    "keyPaths": ["/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta"]
		    "keyPaths": ["/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta"]

[root@localhost ~]# crictl pull quay.io/kubevirtci/fedora:36-2208010931
FATA[0003] pulling image: rpc error: code = Unknown desc = invalid policy in "/etc/containers/policy.json": Unknown key "keyPaths"

[root@localhost ~]# dnf downgrade containers-common
....
...

[root@localhost ~]# rpm -qa | grep containers-common
containers-common-1-23.module_el8.7.0+1106+45480ee0.x86_64

[root@localhost ~]# cat /etc/containers/policy.json | grep keyPath
		    "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
		    "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"

[root@localhost ~]# crictl pull quay.io/kubevirtci/fedora:36-2208010931
Image is up to date for quay.io/kubevirtci/fedora@sha256:486fd5578f93fbc57a519e34ad4b7cac927c3f8a95409baedf0c19e9f287c207

What did you expect to happen?

Images are pulled successfully

How can we reproduce it (as minimally and precisely as possible)?

  • Install CentOS Stream 8
  • Run a full upgrade: dnf upgrade
  • Install cri-o following the documented install procedure for CentOS_8_Stream
  • Install cri-tools to get crictl: dnf install -y cri-tools
  • Enable and start crio: systemctl enable crio --now
  • Try to pull an image:
[root@localhost ~]# crictl pull quay.io/kubevirtci/fedora:36-2208010931
FATA[0003] pulling image: rpc error: code = Unknown desc = invalid policy in "/etc/containers/policy.json": Unknown key "keyPaths"

Anything else we need to know?

No response

CRI-O and Kubernetes version

$ crio --version
[root@localhost ~]# crio --version
WARN[0000] Failed to decode the keys ["network.network_backend"] from "/usr/share/containers/containers.conf". 
crio version 1.24.2
Version:          1.24.2
GitCommit:        bd548b04f78a30e1e9d7c17162714edd50edd6ca
GitTreeState:     clean
BuildDate:        2022-08-09T18:58:47Z
GoVersion:        go1.18.2
Compiler:         gc
Platform:         linux/amd64
Linkmode:         dynamic
BuildTags:        exclude_graphdriver_devicemapper, seccomp
SeccompEnabled:   true
AppArmorEnabled:  false
$ kubectl --version
NOT INSTALLED

OS version

# On Linux:
$ cat /etc/os-release
[root@localhost ~]# cat /etc/os-release 
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
$ uname -a
[root@localhost ~]# uname -a
Linux localhost.localdomain 4.18.0-408.el8.x86_64 #1 SMP Mon Jul 18 17:42:52 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Additional environment details (AWS, VirtualBox, physical, etc.)

Tested in a CentOS Stream 8 VM

Also seen in automation testing carried out by KubeVirt. For example: https://prow.ci.kubevirt.io/view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirtci/857/check-provision-k8s-1.24/1565236465217048576

About this issue

  • State: open
  • Created 2 years ago
  • Reactions: 12
  • Comments: 16 (6 by maintainers)

Most upvoted comments

Thank you for reporting this; I was about to, as I have been dealing with it as well.

To fix this for now, I created the following Ansible step:

- name: keyPaths Fix
  lineinfile:
    path: /etc/containers/policy.json
    regexp: '^.*keyPaths.*'
    line: '                    "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"'
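The lineinfile approach works for the stock policy file, but a JSON-aware rewrite is more robust if the policy has been customized. Here is a minimal sketch in Python; keeping only the first listed key is my own simplification, not the packaged fix:

```python
import json

def downgrade_key_paths(node):
    """Recursively replace the newer "keyPaths" list (containers-common >= 1-40)
    with the older "keyPath" string, keeping the first listed key, so that a
    cri-o built against an older containers/image can parse the policy."""
    if isinstance(node, dict):
        if "keyPaths" in node:
            paths = node.pop("keyPaths")
            if paths:
                node["keyPath"] = paths[0]
        for value in node.values():
            downgrade_key_paths(value)
    elif isinstance(node, list):
        for item in node:
            downgrade_key_paths(item)
    return node

# Demonstrated on a trimmed copy of the shipped policy; to fix the real file,
# load, rewrite, and dump /etc/containers/policy.json (as root) the same way.
policy = json.loads("""
{
  "transports": {
    "docker": {
      "registry.access.redhat.com": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPaths": ["/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
                       "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta"]
        }
      ]
    }
  }
}
""")
fixed = downgrade_key_paths(policy)
print(json.dumps(fixed["transports"]["docker"]["registry.access.redhat.com"][0],
                 indent=2))
```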

Similar issue on RHEL 8.6 with containers-common-2:1-43. Had to edit /etc/containers/policy.json and set keyPath as @ccravens instructed. The edited policy:

{
  "default": [
    {
      "type": "insecureAcceptAnything"
    }
  ],
  "transports": {
    "docker": {
      "registry.access.redhat.com": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
        }
      ],
      "registry.redhat.io": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
        }
      ]
    },
    "docker-daemon": {
      "": [
        {
          "type": "insecureAcceptAnything"
        }
      ]
    }
  }
}

Similar problem on Kubernetes v1.25.4:

# rpm -qa | grep containers-common
containers-common-1-19.el8.28.12.noarch

# kubectl describe pod kube-proxy-sfs92 -n kube-system

Name:                 kube-proxy-sfs92
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      kube-proxy
Node:                 rh91-worker2/10.10.90.101
Start Time:           Fri, 02 Dec 2022 16:30:04 +0800
Labels:               controller-revision-hash=b9c5d5dc4
                      k8s-app=kube-proxy
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   10.10.90.101
IPs:
  IP:           10.10.90.101
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  
    Image:         registry.k8s.io/kube-proxy:v1.25.3
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2q5b2 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kube-api-access-2q5b2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason                  Age                   From     Message
  ----     ------                  ----                  ----     -------
  Warning  FailedCreatePodSandBox  85s (x5062 over 18h)  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = error creating pod sandbox with name "k8s_kube-proxy-sfs92_kube-system_ce2a652d-ab3f-4647-b8d3-7aebddb9f990_0": invalid policy in "/etc/containers/policy.json": Unknown key "keyPaths"

Solution:

# sudo podman image trust set -f /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release registry.access.redhat.com
# sudo podman image trust set -f /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release registry.redhat.io
# cat <<EOF > /etc/containers/registries.d/registry.access.redhat.com.yaml
docker:
     registry.access.redhat.com:
         sigstore: https://access.redhat.com/webassets/docker/content/sigstore
EOF

# cat <<EOF > /etc/containers/registries.d/registry.redhat.io.yaml
docker:
     registry.redhat.io:
         sigstore: https://registry.redhat.io/containers/sigstore
EOF

That’s it. The kube-proxy pod was created successfully.
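For anyone hitting this on other hosts, a quick way to sanity-check a policy file before restarting cri-o is to scan its signedBy requirements for keys an older parser would reject. This is a rough sketch of the idea, not the real containers/image validator, and the set of accepted keys below is my own approximation:

```python
import json

# Keys accepted by signedBy requirements in older containers/image releases;
# "keyPaths" is deliberately absent, mirroring the parser cri-o 1.24 links.
KNOWN_SIGNED_BY_KEYS = {"type", "keyType", "keyPath", "keyData", "signedIdentity"}

def unknown_keys(policy):
    """Collect keys in signedBy requirements that an older parser would
    reject with an 'Unknown key "..."' error."""
    bad = set()
    for registries in policy.get("transports", {}).values():
        for requirements in registries.values():
            for req in requirements:
                if req.get("type") == "signedBy":
                    bad |= set(req) - KNOWN_SIGNED_BY_KEYS
    return sorted(bad)

# In practice, load /etc/containers/policy.json here instead.
policy = json.loads("""
{
  "default": [{"type": "insecureAcceptAnything"}],
  "transports": {
    "docker": {
      "registry.redhat.io": [
        {"type": "signedBy", "keyType": "GPGKeys",
         "keyPaths": ["/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"]}
      ]
    }
  }
}
""")
print(unknown_keys(policy))  # -> ['keyPaths']
```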

[root@rh91-worker1 crio]# kubectl get pod -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-565d847f94-2jklz               1/1     Running   1          26h
kube-system   coredns-565d847f94-ksg8n               1/1     Running   1          26h
kube-system   etcd-rh91-worker1                      1/1     Running   1          26h
kube-system   kube-apiserver-rh91-worker1            1/1     Running   1          26h
kube-system   kube-controller-manager-rh91-worker1   1/1     Running   1          26h
kube-system   kube-proxy-j69mk                       1/1     Running   1          26h
kube-system   kube-proxy-sfs92                       1/1     Running   0          26h
kube-system   kube-scheduler-rh91-worker1            1/1     Running   1          26h