kyverno: [Bug] spec.containers[0].image: Required value error when applying manifest in parallel

Kyverno Version

1.6.x

Description

This error can be reproduced as follows.

  1. Install Kyverno.
  2. Have a policy active that loops over all containers in a Pod, for example the policy below.
  3. After Kyverno is newly installed, run a parallel apply job to install some pods into the cluster and trigger the example policy, like so (a hypothetical example manifest is sketched after this list):
k apply -n test -f 1.yaml & k apply -n test -f 2.yaml & k apply -n test -f 3.yaml & k apply -n test -f 4.yaml & k apply -n test -f 5.yaml & k apply -n test -f 6.yaml
  4. The error should appear.
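
The actual manifests 1.yaml through 6.yaml are not included in the report; as an assumption, a minimal Pod manifest along these lines (hypothetical name and image) should be enough to exercise the mutation, since it declares no resource requests:

apiVersion: v1
kind: Pod
metadata:
  name: parallel-apply-test-1    # hypothetical name, stands in for 1.yaml
spec:
  containers:
  - name: app
    image: nginx:1.21            # any container without resource requests triggers the rule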

The following is also true about this issue.

  1. It only seems to happen on the first apply run after Kyverno is installed. If you apply the manifests a second time, the issue does not appear.
  2. So far the issue only seems to appear when running in HA mode. With a single replica the issue does not occur.
  3. I have also not been able to trigger the error when the manifests are not applied in parallel.
  4. After Kyverno is installed, performing a rollout restart of the Kyverno deployment and then running a parallel apply job again makes the issue show up once more (see the sketch after this list).
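
As a sketch of point 4, assuming Kyverno runs as a deployment named kyverno in the kyverno namespace (names may differ per install):

kubectl -n kyverno rollout restart deployment/kyverno
kubectl -n kyverno rollout status deployment/kyverno
# then re-run the parallel apply command from step 3 above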

Example policy

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-resources
  annotations:
    policies.kyverno.io/title: Add Default Resources
    policies.kyverno.io/category: Other
    policies.kyverno.io/severity: medium
    kyverno.io/kyverno-version: 1.6.0
    policies.kyverno.io/minversion: 1.6.0
    kyverno.io/kubernetes-version: "1.23"
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/description: >-
      Pods which don't specify at least resource requests are assigned a QoS class
      of BestEffort which can hog resources for other Pods on Nodes. At a minimum,
      all Pods should specify resource requests in order to be labeled as the QoS
      class Burstable. This sample mutates any container in a Pod which doesn't
      specify memory or cpu requests to apply some sane defaults.      
spec:
  background: false
  rules:
  - name: add-default-requests
    match:
      any:
      - resources:
          kinds:
          - Pod
    preconditions:
      any:
      - key: "{{request.operation}}"
        operator: In
        value:
        - CREATE
        - UPDATE
    mutate:
      patchStrategicMerge:
        spec:
          containers:
            - (name): "*"
              resources:
                requests:
                  +(memory): "100Mi"
                  +(cpu): "100m"
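
For context, the (name): "*" anchor matches every container and the +(memory)/+(cpu) anchors add the requests only when they are absent, so a container that omits requests should come back mutated roughly like this (a sketch, assuming the hypothetical Pod above):

spec:
  containers:
  - name: app
    image: nginx:1.21
    resources:
      requests:
        memory: "100Mi"
        cpu: "100m"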

Slack discussion

https://kubernetes.slack.com/archives/CLGR9BJU9/p1649331737310139

Troubleshooting

  • I have read and followed the documentation AND the troubleshooting guide.
  • I have searched other issues in this repository and mine is not recorded.

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Reactions: 3
  • Comments: 24 (13 by maintainers)

Most upvoted comments

@chipzoller @nickvanwegen I’m working with @eddycharly. Fixing the issue does not seem to be straightforward. We are working on it.

I think it was directed at @vyankyGH but possibly that comment got deleted.

@nickvanwegen Extremely sorry for the inconvenience; we are taking the issue on priority and will fix it ASAP.

I can confirm I can still reproduce this on 1.8.1 with 3 replicas of Kyverno. Using all the supplied manifests as well as the command, it returns the following:

$ k apply -n test -f 1.yaml & k apply -n test -f 2.yaml & k apply -n test -f 3.yaml & k apply -n test -f 4.yaml & k apply -n test -f 5.yaml & k apply -n test -f 6.yaml
[1] 5654
[2] 5655
[3] 5656
[4] 5657
[5] 5662
Error from server: error when creating "3.yaml": admission webhook "mutate.kyverno.svc-fail" denied the request: failed to add image information to the policy rule context: invalid value
pod/kyverno-require-run-as-non-root-user created
Error from server: error when creating "6.yaml": admission webhook "mutate.kyverno.svc-fail" denied the request: failed to add image information to the policy rule context: invalid value
[3]   Exit 1                  kubectl apply -n test -f 3.yaml
[5]+  Done                    kubectl apply -n test -f 5.yaml
Error from server: error when creating "2.yaml": admission webhook "mutate.kyverno.svc-fail" denied the request: failed to add image information to the policy rule context: invalid value
pod/kyverno-disallow-capabilities-strict created
pod/kyverno-restrict-volume-types created
[1]   Done                    kubectl apply -n test -f 1.yaml
[2]-  Exit 1                  kubectl apply -n test -f 2.yaml
[4]+  Done                    kubectl apply -n test -f 4.yaml

I did not see this error with just a single replica.