kyverno: [Bug] Policy reports are not removed once violating pods are deleted
Kyverno Version
1.7.3
Description
Hi,
I am experimenting with the policy reports and have noticed that some results are not removed from them once the violating resources are deleted.
I am using this example policy for the test:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
  annotations:
    policies.kyverno.io/title: Disallow Privileged Containers
    policies.kyverno.io/category: Pod Security Standards (Baseline)
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Pod
    kyverno.io/kyverno-version: 1.6.0
    kyverno.io/kubernetes-version: "1.22-1.23"
    policies.kyverno.io/description: >-
      Privileged mode disables most security mechanisms and must not be allowed. This policy
      ensures Pods do not call for privileged mode.
spec:
  validationFailureAction: audit
  background: true
  rules:
    - name: privileged-containers
      match:
        any:
        - resources:
            kinds:
              - Pod
      validate:
        message: >-
          Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged
          and spec.initContainers[*].securityContext.privileged must be unset or set to `false`.
        pattern:
          spec:
            =(ephemeralContainers):
              - =(securityContext):
                  =(privileged): "false"
            =(initContainers):
              - =(securityContext):
                  =(privileged): "false"
            containers:
              - =(securityContext):
                  =(privileged): "false"
And I am creating two pods and two deployments (one good and one bad of each) from a single resource.yaml, sketched below along with the full session:
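(The nginx image and the exact fields below are placeholders; only the resource names are taken from the session output.)

apiVersion: v1
kind: Pod
metadata:
  name: badpod
spec:
  containers:
    - name: nginx
      image: nginx
      securityContext:
        privileged: true
---
apiVersion: v1
kind: Pod
metadata:
  name: goodpod
spec:
  containers:
    - name: nginx
      image: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: baddeployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: baddeployment
  template:
    metadata:
      labels:
        app: baddeployment
    spec:
      containers:
        - name: nginx
          image: nginx
          securityContext:
            privileged: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gooddeployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gooddeployment
  template:
    metadata:
      labels:
        app: gooddeployment
    spec:
      containers:
        - name: nginx
          image: nginx

The full session: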
Ξ demo/disallow-privileged-containers git:(main) ▶ kubectl get pods
No resources found in default namespace.
Ξ demo/disallow-privileged-containers git:(main) ▶ kubectl get cpol
No resources found
Ξ demo/disallow-privileged-containers git:(main) ▶ kubectl get polr -A
NAMESPACE   NAME              PASS   FAIL   WARN   ERROR   SKIP   AGE
default     polr-ns-default   0      0      0      0       0      22d
Ξ demo/disallow-privileged-containers git:(main) ▶ kubectl apply -f disallow-privileged-containers.yaml
clusterpolicy.kyverno.io/disallow-privileged-containers created
Ξ demo/disallow-privileged-containers git:(main) ▶ kubectl get cpol
NAME                             BACKGROUND   ACTION   READY
disallow-privileged-containers   true         audit    true
Ξ demo/disallow-privileged-containers git:(main) ▶ kubectl get polr -A
NAMESPACE   NAME              PASS   FAIL   WARN   ERROR   SKIP   AGE
default     polr-ns-default   0      0      0      0       0      22d
Ξ demo/disallow-privileged-containers git:(main) ▶ kubectl apply -f resource.yaml
pod/badpod created
pod/goodpod created
deployment.apps/baddeployment created
deployment.apps/gooddeployment created
Ξ demo/disallow-privileged-containers git:(main) ▶ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
baddeployment-7ddc55f889-vn8k9    1/1     Running   0          10s
badpod                            1/1     Running   0          10s
gooddeployment-684fd58ff4-2cdx6   1/1     Running   0          10s
goodpod                           1/1     Running   0          10s
Ξ demo/disallow-privileged-containers git:(main) ▶ kubectl get polr -A
NAMESPACE   NAME              PASS   FAIL   WARN   ERROR   SKIP   AGE
default     polr-ns-default   2      2      0      0       0      22d
Ξ demo/disallow-privileged-containers git:(main) ▶ kubectl delete -f resource.yaml
pod "badpod" deleted
pod "goodpod" deleted
deployment.apps "baddeployment" deleted
deployment.apps "gooddeployment" deleted
Ξ demo/disallow-privileged-containers git:(main) ▶ kubectl get pods
No resources found in default namespace.
Ξ demo/disallow-privileged-containers git:(main) ▶ kubectl get deployments
No resources found in default namespace.
Ξ demo/disallow-privileged-containers git:(main) ▶ kubectl get polr -A
NAMESPACE   NAME              PASS   FAIL   WARN   ERROR   SKIP   AGE
default     polr-ns-default   1      1      0      0       0      22d
You can see that there is still 1 PASS and 1 FAIL which, as I understand it, shouldn't be there.
Looking at the details of that PASS and FAIL, the remaining results actually reference deployments that no longer exist.
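This is the full report, dumped with something like the following (exact command from memory):

kubectl get polr -n default -o json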
{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "wgpolicyk8s.io/v1alpha2",
      "kind": "PolicyReport",
      "metadata": {
        "creationTimestamp": "2022-09-07T12:18:13Z",
        "generation": 32,
        "labels": {
          "managed-by": "kyverno"
        },
        "name": "polr-ns-default",
        "namespace": "default",
        "ownerReferences": [
          {
            "apiVersion": "v1",
            "controller": true,
            "kind": "Namespace",
            "name": "kyverno",
            "uid": "de79196e-29ee-4a3b-8fa0-b10404b7d389"
          }
        ],
        "resourceVersion": "27158256",
        "uid": "3613d5f7-4562-42c0-b582-71aaf19e0eda"
      },
      "results": [
        {
          "category": "Pod Security Standards (Baseline)",
          "message": "validation error: Privileged mode is disallowed. The fields spec.containers[*].securityContext.privileged and spec.initContainers[*].securityContext.privileged must be unset or set to `false`. Rule autogen-privileged-containers failed at path /spec/template/spec/containers/0/securityContext/privileged/",
          "policy": "disallow-privileged-containers",
          "resources": [
            {
              "apiVersion": "apps/v1",
              "kind": "Deployment",
              "name": "baddeployment",
              "namespace": "default",
              "uid": "c4245d08-ffde-41c5-aba2-aca65c619eef"
            }
          ],
          "result": "fail",
          "rule": "autogen-privileged-containers",
          "scored": true,
          "severity": "medium",
          "source": "Kyverno",
          "timestamp": {
            "nanos": 0,
            "seconds": 1664538108
          }
        },
        {
          "category": "Pod Security Standards (Baseline)",
          "message": "validation rule 'autogen-privileged-containers' passed.",
          "policy": "disallow-privileged-containers",
          "resources": [
            {
              "apiVersion": "apps/v1",
              "kind": "Deployment",
              "name": "gooddeployment",
              "namespace": "default",
              "uid": "ec62de75-029f-4736-a290-fd60e8b31c67"
            }
          ],
          "result": "pass",
          "rule": "autogen-privileged-containers",
          "scored": true,
          "severity": "medium",
          "source": "Kyverno",
          "timestamp": {
            "nanos": 0,
            "seconds": 1664538108
          }
        }
      ],
      "summary": {
        "error": 0,
        "fail": 1,
        "pass": 1,
        "skip": 0,
        "warn": 0
      }
    }
  ],
  "kind": "List",
  "metadata": {
    "resourceVersion": ""
  }
}
Is this expected to happen? Thanks
Slack discussion
No response
Troubleshooting
- I have read and followed the documentation AND the troubleshooting guide.
- I have searched other issues in this repository and mine is not recorded.
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 15 (11 by maintainers)
Closing; the issue raised by @monotek has also been fixed and will be included in 1.8.1.
Thanks for bringing those issues to our attention, it helps a lot in improving our report system!
Here it is https://github.com/kyverno/website/issues/655
We still need to update the docs with the current state of Policy Reports. I'll create an issue to track that.
@pealtrufo yes this is different, it's a decision we took while creating the new system. It didn't make sense to us to exclude resources from the background scan, as the risk of ending up with an unresponsive cluster does not apply there (avoiding that risk is the primary reason for filtering admission requests).
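For reference, admission-request filtering is what the resourceFilters entry in the Kyverno ConfigMap controls; a trimmed, illustrative sketch (the exact default entries vary by version and install):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kyverno       # default install; name/namespace may differ in custom installs
  namespace: kyverno
data:
  # Admission requests matching these [kind,namespace,name] tuples are skipped at admission time.
  # With the new report system, such resources are intentionally NOT excluded from the background scan.
  resourceFilters: "[Event,*,*][*,kube-system,*][*,kube-public,*][*,kube-node-lease,*][Node,*,*]"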
@monotek if you're on a dev cluster you can try the `latest` image; it should contain most of the fixes. The 1.8.1 RC is coming soon, probably on Monday.
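If it helps, one way to switch a test install to that tag (assuming a default install with a Deployment named kyverno, container kyverno, in the kyverno namespace):

# assumes the default install layout; adjust names for Helm releases with custom values
kubectl -n kyverno set image deployment/kyverno kyverno=ghcr.io/kyverno/kyverno:latest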