kyverno: [Bug] Pod restart doesn't trigger when two rules combined in a policy

Kyverno Version

1.8.1

Description

Hi,

We would like to define a single policy that updates a ConfigMap and then restarts the pod (example policy below).

The issue is that when the rules are kept in separate policies, creating or deleting the source ConfigMap triggers a pod restart as expected. But when the rules are combined into a single policy, we get an error: Kyverno tries to apply the restart annotation to the ConfigMap itself rather than to the pod definition.

Example Policy:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-cm-for-kube-state-metrics-crds
  annotations:
    policies.kyverno.io/description: >-
      This policy generates and synchronizes a configmap for custom resource kube-state-metrics.
spec:
  generateExistingOnPolicyUpdate: true
  mutateExistingOnPolicyUpdate: true
  rules:
    - name: generate-cm-for-kube-state-metrics-crds
      match:
        any:
          - resources:
              names:
                - "*"
              kinds:
                - ConfigMap
              namespaces:
                - "kube-state-metrics"
              selector:
                matchLabels:
                  kubestatemetrics.platform.example: source
      preconditions:
        all:
          - key: '{{ request.object.metadata.labels."kubestatemetrics.platform.example" || "" }}'
            operator: Equals
            value: source
      context:
        - name: configMapList
          apiCall:
            urlPath: "/api/v1/configmaps?labelSelector=kubestatemetrics.platform.example=source"
            jmesPath: "items[?metadata.name.contains(@, 'kube-state-metrics')]"
        - name: kubeStateMetricsCrds
          variable:
            value: |
              {{ configMapList | [].[
                  data."kube-state-metrics-crds.yaml" | parse_yaml(@).spec.resources[]][][]
              }}
            jmesPath: "to_string(@)"
      generate:
        synchronize: true
        apiVersion: v1
        kind: ConfigMap
        name: kube-state-metrics-crds
        namespace: kube-state-metrics
        data:
          metadata:
            labels:
              generatedBy: kyverno
          data:
            kube-state-metrics-crds.yaml: |
              kind: CustomResourceStateMetrics
              spec:
                resources:
                  {{ kubeStateMetricsCrds }}
    - name: restart-kube-state-metrics-on-cm-change
      match:
        any:
          - resources:
              kinds:
                - ConfigMap
              names:
                - "kube-state-metrics-crds"
              namespaces:
                - "kube-state-metrics"
      preconditions:
        all:
          - key: "{{ request.object.metadata.labels.\"kubestatemetrics.platform.example\" || '' }}"
            operator: NotEquals
            value: source
          - key: "{{request.operation || 'BACKGROUND'}}"
            operator: Equals
            value: UPDATE
      mutate:
        targets:
          - apiVersion: apps/v1
            kind: Deployment
            name: kube-state-metrics
            namespace: kube-state-metrics
        patchStrategicMerge:
          spec:
            template:
              metadata:
                annotations:
                  platform.cloud.allianz/triggerrestart: "{{request.object.metadata.resourceVersion}}"

The error looks like:

background "msg"="failed to apply generate rule" "error"="admission webhook \"mutate.kyverno.svc-fail\" denied the request: mutation policy generate-cm-for-kube-state-metrics-crds error: failed to validate resource mutated by policy generate-cm-for-kube-state-metrics-crds: ValidationError(io.k8s.api.core.v1.ConfigMap): unknown field \"spec\" in io.k8s.api.core.v1.ConfigMap" "apiVersion"="v1" "kind"="ConfigMap" "name"="kube-state-metrics-appserver" "namespace"="kube-state-metrics" "policy"="generate-cm-for-kube-state-metrics-crds" 

Slack discussion

No response

Troubleshooting

  • I have read and followed the documentation AND the troubleshooting guide.
  • I have searched other issues in this repository and mine is not recorded.

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Reactions: 1
  • Comments: 22 (18 by maintainers)

Most upvoted comments

We face a similar issue when we want to use a ClusterPolicy with two rules, one matching a Secret and one matching a ConfigMap.

If we apply the following example, only the first rule is applied:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: test-policy
spec:
  background: true
  generateExistingOnPolicyUpdate: true
  rules:
  - name: copy-secret
    generate:
      apiVersion: v1
      data:
        data:
          test: '{{request.object.data.test}}'
      kind: Secret
      name: newtestdata
      namespace: default
      synchronize: true
    match:
      any:
      - resources:
          kinds:
          - Secret
          names:
          - testdata
          namespaces:
          - default
  - name: copy-configmap
    generate:
      apiVersion: v1
      data:
        data:
          test: '{{request.object.data.test}}'
        type: kubernetes.io/tls
      kind: ConfigMap
      name: newtestdata
      namespace: default
      synchronize: true
    match:
      any:
      - resources:
          kinds:
          - ConfigMap
          names:
          - testdata
          namespaces:
          - default

If we instead separate it into two ClusterPolicies, it works as expected.

(Note that we have defined two different source objects called testdata. One is a secret and one is a configmap.)

We saw this in the tips and tricks section of the docs:

generate rules which trigger off the same source object should be organized in the same policy definition.

Does this imply that for different kinds of source objects multiple policies should be used?
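
For reference, the two-ClusterPolicy workaround mentioned above would look roughly like the following. This is only a sketch derived from the combined policy; the policy names (test-policy-secret, test-policy-configmap) are illustrative:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: test-policy-secret   # illustrative name
spec:
  background: true
  generateExistingOnPolicyUpdate: true
  rules:
  - name: copy-secret
    match:
      any:
      - resources:
          kinds:
          - Secret
          names:
          - testdata
          namespaces:
          - default
    generate:
      apiVersion: v1
      kind: Secret
      name: newtestdata
      namespace: default
      synchronize: true
      data:
        data:
          test: '{{request.object.data.test}}'
---
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: test-policy-configmap   # illustrative name
spec:
  background: true
  generateExistingOnPolicyUpdate: true
  rules:
  - name: copy-configmap
    match:
      any:
      - resources:
          kinds:
          - ConfigMap
          names:
          - testdata
          namespaces:
          - default
    generate:
      apiVersion: v1
      kind: ConfigMap
      name: newtestdata
      namespace: default
      synchronize: true
      data:
        data:
          test: '{{request.object.data.test}}'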

Hi @steadyk - I have tested your policy against the latest main and both rules were applied. Note that generateExistingOnPolicyUpdate has been replaced with generateExisting:

✗ k apply -f test.yaml
clusterpolicy.kyverno.io/test-policy created

✗ k get cm                 
NAME               DATA   AGE
kube-root-ca.crt   1      5d22h
testdata           1      5m28s
newtestdata        1      4s

✗ k get secret                 
NAME                  TYPE                                  DATA   AGE
default-token-sbwwr   kubernetes.io/service-account-token   3      5d22h
testdata              Opaque                                1      6m9s
newtestdata           Opaque                                1      7s

The fix will be available in Kyverno 1.10.0.
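
For clarity, a minimal sketch of the field rename mentioned above (in newer Kyverno releases the old field is deprecated):

spec:
  generateExisting: true   # replaces generateExistingOnPolicyUpdate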
