triggers: Removing Triggers from Cluster with kubectl delete -f Does Not Remove EventListeners

Expected Behavior

Running kubectl delete -f https://storage.googleapis.com/tekton-releases/triggers/previous/<TRIGGERS_VERSION>/release.yaml should remove the Triggers component and all associated Triggers resources from the cluster.

Actual Behavior

If an EventListener is installed on the cluster, the kubectl delete command hangs until it times out, as shown below:

kubectl delete -f https://storage.googleapis.com/tekton-releases/triggers/previous/v0.8.1/release.yaml
podsecuritypolicy.policy "tekton-triggers" deleted
clusterrole.rbac.authorization.k8s.io "tekton-triggers-admin" deleted
serviceaccount "tekton-triggers-controller" deleted
clusterrolebinding.rbac.authorization.k8s.io "tekton-triggers-controller-admin" deleted
customresourcedefinition.apiextensions.k8s.io "clustertriggerbindings.triggers.tekton.dev" deleted
customresourcedefinition.apiextensions.k8s.io "eventlisteners.triggers.tekton.dev" deleted
customresourcedefinition.apiextensions.k8s.io "triggers.triggers.tekton.dev" deleted
customresourcedefinition.apiextensions.k8s.io "triggerbindings.triggers.tekton.dev" deleted
customresourcedefinition.apiextensions.k8s.io "triggertemplates.triggers.tekton.dev" deleted
secret "triggers-webhook-certs" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "validation.webhook.triggers.tekton.dev" deleted
mutatingwebhookconfiguration.admissionregistration.k8s.io "webhook.triggers.tekton.dev" deleted
validatingwebhookconfiguration.admissionregistration.k8s.io "config.webhook.triggers.tekton.dev" deleted
clusterrole.rbac.authorization.k8s.io "tekton-triggers-aggregate-edit" deleted
clusterrole.rbac.authorization.k8s.io "tekton-triggers-aggregate-view" deleted
configmap "config-logging-triggers" deleted
configmap "config-observability-triggers" deleted
service "tekton-triggers-controller" deleted
deployment.apps "tekton-triggers-controller" deleted
service "tekton-triggers-webhook" deleted
deployment.apps "tekton-triggers-webhook" deleted

The EventListener itself is never deleted.
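While the command hangs, inspecting the stuck objects shows what is blocking. A minimal diagnostic sketch; the EventListener name and namespace below are assumptions rather than values from the linked example:

# The CRD delete was accepted but the object lingers in Terminating.
$ kubectl get crd eventlisteners.triggers.tekton.dev -o jsonpath='{.metadata.deletionTimestamp}'
# Any finalizers still listed on the EventListener prevent it (and therefore the CRD)
# from going away once the controller that would clear them has been deleted.
$ kubectl get eventlistener my-listener -n default -o jsonpath='{.metadata.finalizers}'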

Steps to Reproduce the Problem

  1. Install Triggers (using v0.8.1)
  2. Create an EventListener (kubectl apply -f https://raw.githubusercontent.com/tektoncd/triggers/b6d0aac0992f7cb0fb28c129d6f41b2bbcbbd2c7/examples/github/github-eventlistener-interceptor.yaml)
  3. Run kubectl delete -f https://storage.googleapis.com/tekton-releases/triggers/previous/v0.8.1/release.yaml

/kind bug

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 16 (13 by maintainers)

Most upvoted comments

To verify, I manually set the foregroundDeletion finalizer on the EventListener (referring to https://github.com/kubernetes/apimachinery/blob/master/pkg/apis/meta/v1/types.go#L315), and it works: the corresponding Deployment and Pods are deleted.
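For reference, a rough sketch of that manual step (the EventListener name and namespace are placeholders, and the JSON-patch append assumes metadata.finalizers already exists on the object):

# Append the foregroundDeletion finalizer described in types.go above.
$ kubectl patch eventlistener my-listener -n default --type=json \
    -p '[{"op":"add","path":"/metadata/finalizers/-","value":"foregroundDeletion"}]'
# With that finalizer set, the delete blocks until the dependents the
# EventListener owns (its Deployment and Pods) are garbage-collected first.
$ kubectl delete eventlistener my-listener -n default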

I just tested this with kind on a v1.20 cluster. Created a file:

# eventlistener.yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: eventlistener
  namespace: tekton-pipelines
spec:
  serviceAccountName: tekton-trigger-sa
  serviceType: ClusterIP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-trigger-sa
  namespace: tekton-pipelines
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tektoncd-triggers-github-binding
  namespace: tekton-pipelines
subjects:
- kind: ServiceAccount
  name: tekton-trigger-sa
  namespace: tekton-pipelines
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tekton-trigger
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tekton-triggers-clusterbinding
subjects:
- kind: ServiceAccount
  name: tekton-trigger-sa
  namespace: tekton-pipelines
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tekton-triggers-aggregate-view

Ran the following commands with kind 0.10 (Kubernetes v1.20.2) to install Triggers:

$ kind create cluster
$ kubectl create ns tekton-pipelines
$ kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/previous/v0.11.1/release.yaml
$ kubectl apply -f eventlistener.yaml

Validated that the pods were up and running:

$ kubectl get pods -n tekton-pipelines

Then, ran the following command to delete Triggers:

$ kubectl delete --cascade=foreground -f https://storage.googleapis.com/tekton-releases/triggers/previous/v0.11.1/release.yaml

The command completed successfully (after all pods were deleted) and the eventlistener was removed.
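As a quick check (nothing Triggers-specific, just confirming the namespace is empty of workloads):

# Expect "No resources found" once the cascading delete has finished.
$ kubectl get pods -n tekton-pipelines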

The other workaround would be to manually remove the finalizers on the CRD:

$ kubectl patch crd/eventlisteners.triggers.tekton.dev -p '{"metadata":{"finalizers":[]}}' --type=merge
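If only a single EventListener is stuck, the same idea works per object rather than at the CRD level; a sketch with an assumed name and namespace (clearing finalizers skips whatever cleanup they were guarding):

# Drop all finalizers so the pending delete on the object can complete.
$ kubectl patch eventlistener my-listener -n default --type=merge -p '{"metadata":{"finalizers":[]}}'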

Wonder if Knative has run into this at all? /cc @n3wscott