kyverno: [BUG] Kyverno reports are becoming very large in case of many resources inside a namespace

Software version numbers

State the version numbers of applications involved in the bug.

  • Kubernetes version: 1.20.11
  • Kubernetes platform (if applicable; ex., EKS, GKE, OpenShift): Vanilla
  • Kyverno version: 1.5.0

Describe the bug

Kyverno generates reports on a regular basis. With only a few rules installed (in this case only those from upstream) but a large number of pods inside a single namespace, the policy report resource becomes very large and can no longer be updated in etcd:

PolicyReportGenerator "msg"="failed to process policy report" "error"="failed to update policy report: etcdserver: request is too large"  "key"="default"
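etcd rejects write requests larger than its configured maximum (1.5 MiB by default), so once a report grows past that size it can never be persisted. A quick way to gauge how close a report is to the limit, as a sketch using the default namespace from the error above:

# Approximate serialized size (in bytes) of the namespace's policy reports
kubectl get polr -n default -o json | wc -c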

We’re experiencing this behavior in namespaces with ~2000 pods running. I tried setting all policies to “enforce”; however, Kyverno still generates very large reports. Here is the output of the currently configured/installed policies (the command used is sketched after the list).

NAME                             BACKGROUND   ACTION
deny-privilege-escalation        false        enforce
disallow-add-capabilities        false        enforce
disallow-default-namespace       false        enforce
disallow-host-namespaces         true         enforce
disallow-host-path               false        enforce
disallow-host-ports              false        enforce
disallow-privileged-containers   true         enforce
disallow-selinux                 false        enforce
require-default-proc-mount       true         enforce
restrict-apparmor-profiles       false        enforce
restrict-image-registries        false        enforce
restrict-seccomp                 false        enforce
restrict-sysctls                 false        enforce
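For reference, a listing with these columns can presumably be reproduced with the following (assuming the policies are ClusterPolicies, as installed by the kyverno-policies chart):

# Lists the installed ClusterPolicies with their background-scan and failure-action settings
kubectl get clusterpolicy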

To Reproduce

Steps to reproduce the behavior:

  1. Install Kyverno
  2. Install kyverno-policies helm chart
  3. Create a namespace with 1800-2000 pods (see the sketch below)
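A minimal sketch of step 3, assuming a throwaway namespace and a pause image purely to drive up the pod count (names and image are illustrative, and node capacity permitting):

kubectl create namespace report-size-test
kubectl create deployment pause --image=registry.k8s.io/pause:3.9 -n report-size-test
kubectl scale deployment pause --replicas=2000 -n report-size-test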

Expected behavior

Kyverno should not generate huge reports when there are no “audit” policies.

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 1
  • Comments: 16 (8 by maintainers)

Most upvoted comments

Hi @damienleger - I verified with your policies and found that two of them, disallow-default-namespace and disallow-host-path, do not have auto-gen enabled (the pod-policies.kyverno.io/autogen-controllers annotation is set to none), which is why results were reported for stand-alone pods.
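For anyone hitting the same behavior: auto-gen is controlled by that annotation on the ClusterPolicy, so one way to re-enable it is to point the annotation back at the owning controllers. A sketch (the controller list is illustrative):

# Re-enable rule auto-generation so results are reported against the
# owning controllers rather than every individual pod
kubectl annotate clusterpolicy disallow-default-namespace \
  pod-policies.kyverno.io/autogen-controllers=Deployment,Job --overwrite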

With all your policies installed, here are the results for pods. You can see they were reported for those two policies only.

✗ k get polr -o yaml | grep -e "kind: \+Pod" -A2 -B3
    policy: disallow-default-namespace
    resources:
    - apiVersion: v1
      kind: Pod
      name: nginx-599964c558-ccx49
      namespace: default
--
    policy: disallow-default-namespace
    resources:
    - apiVersion: v1
      kind: Pod
      name: nginx-protected-599964c558-2wfhj
      namespace: default
--
    policy: disallow-default-namespace
    resources:
    - apiVersion: v1
      kind: Pod
      name: nginx-protected-599964c558-2wfhj
      namespace: default
--
    policy: disallow-host-path
    resources:
    - apiVersion: v1
      kind: Pod
      name: nginx-599964c558-ccx49
      namespace: default
--
    policy: disallow-host-path
    resources:
    - apiVersion: v1
      kind: Pod
      name: nginx-protected-599964c558-2wfhj
      namespace: default
--
    policy: disallow-default-namespace
    resources:
    - apiVersion: v1
      kind: Pod
      name: nginx-599964c558-ccx49
      namespace: default

We have an enhancement issue logged to reduce report size; please track it at https://github.com/kyverno/kyverno/issues/2981.

Closing.

$ kubectl describe polr polr-ns-company-features -n company-features | grep -c "Kind: +Deployment"
1356

$ kubectl describe polr polr-ns-company-features -n company-features | grep -c "Kind: +Job"
683

$ kubectl describe polr polr-ns-company-features -n company-features | grep -c "Kind: +Pod"
841

But the counts don’t add up; maybe that’s because the report exceeds the limit?

And the pods are managed by both jobs and deployments?
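One way to cross-check those numbers without grepping describe output is to count results per resource kind directly from the report JSON, for example with jq (a sketch, reusing the report name above):

kubectl get polr polr-ns-company-features -n company-features -o json \
  | jq '[.results[].resources[].kind] | group_by(.) | map({kind: .[0], count: length})'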

I’ll try again with your policies.