kyverno: [BUG] Validation fails but Resource creates anyways

  • Kubernetes version: GKE v1.16

Describe the bug Validation fails in the logs, but the policy is not enforced.

To Reproduce

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: no-loadbalancers
spec:
  validationFailureAction: enforce
  rules:
  - name: no-LoadBalancer
    match:
      resources:
        kinds:
        - Service
    validate:
      message: "Service of type ClusterIP are not allowed."
      pattern:
        spec:
          type: "!ClusterIP"

Then create a Service of type ClusterIP; I get the following message in the events:

default     0s          Warning   PolicyViolation     service/pepinos                 policy 'no-loadbalancers' (Validation) rule 'no-LoadBalancer' failed. Validation error: Service of type ClusterIP are not allowed.; Validation rule no-LoadBalancer failed at path /spec/type/

But the Service gets created with no issues.
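For reference, a minimal ClusterIP Service that reproduces this might look like the following (the name `pepinos` matches the event above; the selector and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pepinos
spec:
  type: ClusterIP        # the type the policy is meant to block
  selector:
    app: pepinos         # illustrative selector
  ports:
  - port: 80
    targetPort: 8080
```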

Some logs from the kyverno pod:

E1129 15:17:55.712025       1 checker.go:99] LastReqTime "msg"="webhook check failed" "error"="admission control configuration error"  "deadline"=180000000000
I1129 15:17:55.716874       1 status.go:81] LastReqTime/StatusControl "msg"="updating deployment annotation" "name"="kyverno" "namespace"="kyverno" "key"="kyverno.io/webhookActive" "val"="false"
E1129 15:18:55.712082       1 checker.go:99] LastReqTime "msg"="webhook check failed" "error"="admission control configuration error"  "deadline"=180000000000
E1129 15:19:55.712014       1 checker.go:99] LastReqTime "msg"="webhook check failed" "error"="admission control configuration error"  "deadline"=180000000000
E1129 15:20:55.711996       1 checker.go:99] LastReqTime "msg"="webhook check failed" "error"="admission control configuration error"  "deadline"=180000000000
E1129 15:21:55.712008       1 checker.go:99] LastReqTime "msg"="webhook check failed" "error"="admission control configuration error"  "deadline"=180000000000
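When the webhook check keeps failing like this, it can help to confirm that the webhook is actually registered and that its Service is reachable. A few diagnostic commands (assuming a default Kyverno install in the `kyverno` namespace; names may differ per release):

```shell
# Is the validating webhook registered with the API server?
kubectl get validatingwebhookconfigurations

# Is the webhook Service present and does it have endpoints?
kubectl -n kyverno get svc,endpoints

# Any errors on the Kyverno side?
kubectl -n kyverno logs deploy/kyverno --tail=50
```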

I’m very new to Kyverno (in fact, only a few hours in), so I could be doing something wrong.

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 32 (12 by maintainers)

Most upvoted comments

Alright!! OK, in private (masters/nodes) GKE clusters, the /28 you assign to the masters is not automatically allowed to talk to all the nodes on all ports. The way I was able to confirm that the master might not be able to reach the webhook Service was:

~/Projects/si/kyvernoresources kubectl get svc kyverno-svc -n kyverno -o jsonpath='{.metadata.selfLink}'
/api/v1/namespaces/kyverno/services/kyverno-svc                                                                                                                              

~/Projects/si/kyvernoresources kubectl get --raw /api/v1/namespaces/kyverno/services/https:kyverno-svc:/proxy/
Error from server (NotFound): the server could not find the requested resource

That would time out; note the NotFound is really a 404 rendered by an exception. Once I added a firewall rule allowing the masters' /28 to reach the target_tags of the GKE nodes, it worked!

Even though the Service listens on clusterIP:443, allowing only tcp:443 doesn’t seem to work; I had to allow all TCP connections to the GKE nodes. It’s pretty odd, but that has nothing to do with Kyverno.
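The firewall fix described above can be expressed roughly as the following gcloud command (all values here are placeholders: substitute your actual master CIDR, VPC network, and node target tag):

```shell
# Hypothetical values; replace the CIDR, network, and node tag with your own.
gcloud compute firewall-rules create allow-gke-master-to-nodes \
  --network=my-shared-vpc \
  --direction=INGRESS \
  --source-ranges=172.16.0.0/28 \
  --target-tags=gke-mycluster-node \
  --allow=tcp
```

In a Shared VPC setup, note that the rule must be created in the host project that owns the network.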

For anyone reading: this is a fully private GKE cluster (nodes/API) running on a Shared VPC (in a spoke project). @chipzoller Thank you so much!