kyverno: [Bug] Pod restart doesn't trigger when two rules combined in a policy
Kyverno Version
1.8.1
Description
Hi,
We would like to define a policy that updates a ConfigMap and then restarts the pod (example policy below).
The issue is that with the rules in two separate policies, creating or deleting a source ConfigMap correctly triggers a pod restart. But when the rules are combined into one policy and applied, we get an error: Kyverno tries to apply the restart annotation to the ConfigMap itself rather than to the Deployment's pod template.
Example Policy:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-cm-for-kube-state-metrics-crds
  annotations:
    policies.kyverno.io/description: >-
      This policy generates and synchronizes a configmap for custom resource kube-state-metrics.
spec:
  generateExistingOnPolicyUpdate: true
  mutateExistingOnPolicyUpdate: true
  rules:
    - name: generate-cm-for-kube-state-metrics-crds
      match:
        any:
          - resources:
              names:
                - "*"
              kinds:
                - ConfigMap
              namespaces:
                - "kube-state-metrics"
              selector:
                matchLabels:
                  kubestatemetrics.platform.example: source
      preconditions:
        all:
          - key: '{{ request.object.metadata.labels."kubestatemetrics.platform.example" || "" }}'
            operator: Equals
            value: source
      context:
        - name: configMapList
          apiCall:
            urlPath: "/api/v1/configmaps?labelSelector=kubestatemetrics.platform.example=source"
            jmesPath: "items[?metadata.name.contains(@, 'kube-state-metrics')]"
        - name: kubeStateMetricsCrds
          variable:
            value: |
              {{ configMapList | [].[
              data."kube-state-metrics-crds.yaml" | parse_yaml(@).spec.resources[]][][]
              }}
            jmesPath: "to_string(@)"
      generate:
        synchronize: true
        apiVersion: v1
        kind: ConfigMap
        name: kube-state-metrics-crds
        namespace: kube-state-metrics
        data:
          metadata:
            labels:
              generatedBy: kyverno
          data:
            kube-state-metrics-crds.yaml: |
              kind: CustomResourceStateMetrics
              spec:
                resources:
                  {{ kubeStateMetricsCrds }}
    - name: restart-kube-state-metrics-on-cm-change
      match:
        any:
          - resources:
              kinds:
                - ConfigMap
              names:
                - "kube-state-metrics-crds"
              namespaces:
                - "kube-state-metrics"
      preconditions:
        all:
          - key: "{{ request.object.metadata.labels.\"kubestatemetrics.platform.example\" || '' }}"
            operator: NotEquals
            value: source
          - key: "{{request.operation || 'BACKGROUND'}}"
            operator: Equals
            value: UPDATE
      mutate:
        targets:
          - apiVersion: apps/v1
            kind: Deployment
            name: kube-state-metrics
            namespace: kube-state-metrics
        patchStrategicMerge:
          spec:
            template:
              metadata:
                annotations:
                  platform.cloud.allianz/triggerrestart: "{{request.object.metadata.resourceVersion}}"
The error looks like this:
background "msg"="failed to apply generate rule" "error"="admission webhook \"mutate.kyverno.svc-fail\" denied the request: mutation policy generate-cm-for-kube-state-metrics-crds error: failed to validate resource mutated by policy generate-cm-for-kube-state-metrics-crds: ValidationError(io.k8s.api.core.v1.ConfigMap): unknown field \"spec\" in io.k8s.api.core.v1.ConfigMap" "apiVersion"="v1" "kind"="ConfigMap" "name"="kube-state-metrics-appserver" "namespace"="kube-state-metrics" "policy"="generate-cm-for-kube-state-metrics-crds"
Slack discussion
No response
Troubleshooting
- I have read and followed the documentation AND the troubleshooting guide.
- I have searched other issues in this repository and mine is not recorded.
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Reactions: 1
- Comments: 22 (18 by maintainers)
Hi @steadyk - I have tested your policy against the latest main and both rules were applied. Note that generateExistingOnPolicyUpdate has been replaced with generateExisting. The fix will be available in Kyverno 1.10.0.
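For anyone adapting the policy above to 1.10+, the top of the spec would then read as follows (a minimal sketch reflecting only the rename mentioned above):

spec:
  generateExisting: true   # replaces the deprecated generateExistingOnPolicyUpdate
  rules:
    # ... unchanged ...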
We face a similar issue when we want to use a ClusterPolicy with two rules, one matching a Secret and one matching a ConfigMap.
If we apply the following example (sketched below), we only get the first rule applied.
If we separate it into two ClusterPolicies instead, it works as expected.
(Note that we have defined two different source objects called testdata: one is a Secret and one is a ConfigMap.)
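The example itself is not reproduced here, but in outline it is a single ClusterPolicy with two generate rules, roughly like this sketch (the name, match scope, and clone details are illustrative assumptions, not the original manifest):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: sync-testdata            # hypothetical name
spec:
  rules:
    - name: sync-testdata-secret
      match:
        any:
          - resources:
              kinds:
                - Secret         # rule 1 matches the Secret source
      generate:
        # clone/synchronize of the Secret named "testdata" (details elided)
    - name: sync-testdata-configmap
      match:
        any:
          - resources:
              kinds:
                - ConfigMap      # rule 2 matches the ConfigMap source
      generate:
        # clone/synchronize of the ConfigMap named "testdata" (details elided)

With a combined policy like this, only the first rule is applied; splitting the same two rules into separate ClusterPolicies makes both work.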
We saw this in the tips and tricks section:
Does this imply that multiple policies should be used for different kinds of source objects?