kyverno: [BUG] Kyverno implicitly skips the kyverno namespace

Software version numbers

  • Kubernetes version: n/a
  • Kubernetes platform (if applicable; ex., EKS, GKE, OpenShift): n/a
  • Kyverno version: 1.6.0

Describe the bug

I installed Kyverno with default settings and configured the sample policies in the quick start:

kubectl create -f- << EOF
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: enforce
  rules:
  - name: check-for-labels
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "label 'app.kubernetes.io/name' is required"
      pattern:
        metadata:
          labels:
            app.kubernetes.io/name: "?*"
EOF

Then I created a deployment in the kyverno namespace:

kubectl create deployment nginx --image=nginx -n kyverno

The deployment is admitted; it is not blocked by the policy, even though its Pod template carries no app.kubernetes.io/name label.

To Reproduce

See above.

Expected behavior

Policies should be applied to all namespaces by default. The user can then configure specific namespaces to exclude using the namespaceSelector (https://main.kyverno.io/docs/installation/#namespace-selectors) in the Kyverno configuration.
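As a sketch of what the linked docs describe, the webhook namespaceSelector can be set in the kyverno ConfigMap so that only explicitly chosen namespaces are skipped. The selector value below is illustrative, not Kyverno's shipped default:

```yaml
# Hypothetical excerpt of the kyverno ConfigMap.
# The "webhooks" key sets a namespaceSelector on Kyverno's admission
# webhooks; here only the kube-system namespace is excluded, so the
# kyverno namespace itself remains subject to policies.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kyverno
  namespace: kyverno
data:
  webhooks: >-
    [{"namespaceSelector": {"matchExpressions":
      [{"key": "kubernetes.io/metadata.name",
        "operator": "NotIn",
        "values": ["kube-system"]}]}}]
```

Check the docs linked above for the exact key and JSON shape supported by your Kyverno version.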

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 16 (11 by maintainers)

Most upvoted comments

Currently we have two options to exclude a namespace globally: 1) through the ConfigMap arg; 2) through resourceFilters. And I agree, resourceFilters adds additional value by allowing global exclusion by a Kind and name combination.
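For reference, resourceFilters entries take the form [Kind,namespace,name]. A minimal, illustrative ConfigMap excerpt follows; the filter list here is an assumption for demonstration, not Kyverno's exact default set:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kyverno
  namespace: kyverno
data:
  # Each [Kind,namespace,name] triple is matched against incoming
  # admission requests; matching resources are skipped entirely,
  # before any policy is evaluated.
  resourceFilters: >-
    [Event,*,*]
    [*,kube-system,*]
    [*,kyverno,*]
```

The [*,kyverno,*] entry is what produces the behaviour reported in this issue: everything in the kyverno namespace is filtered out globally.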

We can keep the resourceFilters but definitely need to re-visit the default filters we currently have.

Personally I believe it’s good to keep resourceFilters as it is, since it provides additional flexibility to globally exclude resources that should be filtered out even in the presence of dynamic webhook configurations. That said, we might need to perform additional tests to confirm the behaviour and to measure the performance difference with and without the resourceFilters implementation.

Another point to consider: once Kyverno is installed on a cluster without resourceFilters, the user may apply policies that match the resources created by Kyverno itself, such as the kyverno Deployment and Services. If any of these has to be updated or restarted, chances are that one or more of the existing policies would reject that admission request; in the case where the Kyverno Pod is restarted, it simply would not be allowed back due to policy violations. Such a scenario can only be handled by either excluding the resources created by Kyverno in the kyverno namespace from all existing as well as future policies, or by excluding them globally using resourceFilters.
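Excluding Kyverno's own resources at the policy level would mean adding an exclude block to every policy. For the quick-start policy above, it could look like the following sketch (shown for illustration; not a recommendation over the global resourceFilters approach):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: enforce
  rules:
  - name: check-for-labels
    match:
      resources:
        kinds:
        - Pod
    # Skip Pods in the kyverno namespace so Kyverno's own Pods
    # can restart without being rejected by this policy.
    exclude:
      resources:
        namespaces:
        - kyverno
    validate:
      message: "label 'app.kubernetes.io/name' is required"
      pattern:
        metadata:
          labels:
            app.kubernetes.io/name: "?*"
```

The drawback noted above is that this exclude must be repeated in every present and future policy, which is why a single global exclusion is attractive.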

There is a PR updating the kyverno-policies chart to bring it into alignment with the upstream Kubernetes Pod Security Standards (PSS) and to provide such an exclusion mechanism. You can view that PR here: https://github.com/kyverno/kyverno/pull/3126