istio: Helm delete does not clean the custom resource definitions

Helm:

Client: &version.Version{SemVer:"v2.10.0-rc.2", GitCommit:"56154102a2f25ebf679c791907fd355bb0377f05", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.10.0-rc.2", GitCommit:"56154102a2f25ebf679c791907fd355bb0377f05", GitTreeState:"clean"}

Istio: 1.0.0

Kubectl:

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-19T00:05:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Following a deletion of Istio 1.0.0 using helm delete --purge, I noticed that it leaves the CRDs behind as residue, and a reinstall fails with:

Error: customresourcedefinitions.apiextensions.k8s.io "gateways.networking.istio.io" already exists

In Tiller logs, I can see,

[tiller] 2018/08/07 12:07:28 executing 55 post-delete hooks for is
[kube] 2018/08/07 12:07:28 building resources from manifest
[kube] 2018/08/07 12:07:28 creating 1 resource(s)

However, the resources remain,

k get customresourcedefinitions | grep istio
adapters.config.istio.io                      1h
apikeys.config.istio.io                       1h
attributemanifests.config.istio.io            1h
authorizations.config.istio.io                1h
bypasses.config.istio.io                      1h
checknothings.config.istio.io                 1h
circonuses.config.istio.io                    1h
deniers.config.istio.io                       1h
destinationrules.networking.istio.io          1h
edges.config.istio.io                         1h
envoyfilters.networking.istio.io              1h
fluentds.config.istio.io                      1h
gateways.networking.istio.io                  1h
handlers.config.istio.io                      1h
httpapispecbindings.config.istio.io           1h
httpapispecs.config.istio.io                  1h
instances.config.istio.io                     1h
....

Has anyone noticed this with Helm v2.10.0-rc.2?

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 12
  • Comments: 37 (20 by maintainers)

Most upvoted comments

That’s right. In 1.0.0 the CRDs were taken out of Helm management into their own YAML file, and we require users who install with Helm to first install that CRDs YAML.

Therefore, since they are unmanaged, Helm won’t delete them. As with installation, users are expected to delete them themselves by executing kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system.

And add this one

kubectl get customresourcedefinition  -n istio-system | grep 'istio'|awk '{print $1}'|xargs kubectl delete customresourcedefinition  -n istio-system
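The grep/awk part of that one-liner can be previewed without touching a cluster. The sketch below pipes made-up `kubectl get customresourcedefinition` output (the CRD names are illustrative) through the same filter, to show exactly which names xargs would hand to `kubectl delete`:

```shell
# Made-up CRD listing (NAME and AGE columns), filtered the same way as above.
printf '%s\n' \
  'gateways.networking.istio.io           1h' \
  'handlers.config.istio.io               1h' \
  'certificates.certmanager.k8s.io        2h' \
  | grep 'istio' | awk '{print $1}'
# prints:
#   gateways.networking.istio.io
#   handlers.config.istio.io
```

Only the istio lines survive the grep, and awk strips everything but the first column, so the non-Istio CRD is left alone.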

Since Helm versions > 2.10.0 no longer require installing Istio’s CRDs with a separate kubectl apply command, I think it’s fair to expect helm delete istio --purge to delete the CRDs without having to remove them explicitly.

Me too. I just tested this with Helm 2.10 and Istio 1.0, same error. Has this been fixed?

Here’s a much-simplified version of the above, for the curious:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: istio-custom-job-cc-sa
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "1"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-custom-job-cc-cr
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "1"
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istio-custom-job-cc-crb
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "2"
subjects:
- name: istio-custom-job-cc-sa
  kind: ServiceAccount
  namespace: {{ .Release.Namespace }}
roleRef:
  name: istio-custom-job-cc-cr
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: istio-custom-job-cc
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "3"
spec:
  template:
    metadata:
      name: istio-custom-job-cc
    spec:
      serviceAccountName: istio-custom-job-cc-sa
      restartPolicy: OnFailure
      containers:
        - name: istio-custom-job-cc-kubectl
          image: gcr.io/istio-release/kubectl:release-1.1-20181101-09-15
          imagePullPolicy: IfNotPresent
          command:
          - /bin/bash
          - -c
          - >
              kubectl get customresourcedefinitions | grep "istio.io" | while read -r entry; do
                name=$(echo $entry | awk '{print $1}');
                kubectl delete customresourcedefinitions $name;
              done
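The Job’s cleanup loop can be dry-run locally before wiring it into a post-delete hook: feed it canned listing output (the CRD names below are illustrative) and echo the delete command instead of calling kubectl, so no cluster is needed:

```shell
# Dry run of the Job's loop: same grep/read/awk logic, but echo the command
# that would be executed rather than actually deleting anything.
printf '%s\n' \
  'destinationrules.networking.istio.io   1h' \
  'virtualservices.networking.istio.io    1h' \
  | grep "istio.io" | while read -r entry; do
      name=$(echo $entry | awk '{print $1}')
      echo "kubectl delete customresourcedefinitions $name"
    done
```

Once the echoed commands look right, dropping the echo gives the loop used in the Job above.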

I am facing this issue even though I have done:

helm del --purge istio
kubectl delete -f .\install\kubernetes\istio-demo.yaml
kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system

But helm install install/kubernetes/helm/istio --name istio --namespace istio-system still gives me: Error: release istio failed: customresourcedefinitions.apiextensions.k8s.io "deniers.config.istio.io" already exists

I am on Windows 10, with istio-1.1.0-snapshot.3 and helm-v2.12.0-rc.1-windows-amd64.

@nixgadget it works, thank you very much!

Because my earlier istio deletion had failed (I have to use an internal hub’s kubectl image repository), I also had to clean up istio’s ServiceAccount, ClusterRole, and ClusterRoleBinding resources:

kubectl get serviceaccount | grep 'istio'|awk '{print $1}'|xargs kubectl delete serviceaccount
kubectl get clusterrole | grep 'istio'|awk '{print $1}'|xargs kubectl delete clusterrole
kubectl get clusterrolebindings | grep 'istio'|awk '{print $1}'|xargs kubectl delete clusterrolebindings

kubectl get serviceaccount -n istio-system | grep 'istio'|awk '{print $1}'|xargs kubectl delete serviceaccount  -n istio-system
kubectl get clusterrole  -n istio-system | grep 'istio'|awk '{print $1}'|xargs kubectl delete clusterrole  -n istio-system
kubectl get clusterrolebindings  -n istio-system | grep 'istio'|awk '{print $1}'|xargs kubectl delete clusterrolebindings  -n istio-system

Upgrading to helm/tiller 2.12.1 resolved the issue for me as well.

Hey folks,

The Helm upstream recommends letting CRDs leak via the crd-install hook policy. The reason the CRDs should leak is that the human operator should have full control over the deletion of the mesh configuration, which is stored in the CRDs. As a result, I am marking this closed, as it works as intended. For those who need to be able to clean up in their evaluations, the documentation is here: https://istio.io/docs/setup/kubernetes/helm-install/#uninstall

Note the last step:

If desired, delete the CRDs:

$ kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system

Cheers -steve

@ymesika what you have stated, which is reflected in the Istio installation steps (https://istio.io/docs/setup/kubernetes/helm-install/#installation-steps),

If using a Helm version prior to 2.10.0, install Istio’s Custom Resource Definitions via kubectl apply, and wait a few seconds for the CRDs to be committed in the kube-apiserver

applies to Helm versions < 2.10.0.

What about versions > 2.10.0? I was under the impression that with Helm > 2.10.0 CRDs can be injected with crd-install hooks, and I can see that this is already enabled in Istio 1.0.0.
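For reference, in Helm 2.10+ the crd-install behavior is driven by a hook annotation on the CRD manifest itself. A minimal sketch is below; the annotation name is Helm’s, but the Gateway CRD spec shown is illustrative rather than copied from the Istio chart:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: gateways.networking.istio.io
  annotations:
    # Tells Helm >= 2.10 to create this CRD before the rest of the chart is
    # rendered; hook resources like this are unmanaged, which is why they
    # survive helm delete --purge.
    "helm.sh/hook": crd-install
spec:
  group: networking.istio.io
  version: v1alpha3
  scope: Namespaced
  names:
    kind: Gateway
    plural: gateways
    singular: gateway
```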

I posted a question on Helm about this as well, https://github.com/helm/helm/issues/4440

Upgrading to 2.12.1 and doing helm reset --force fixed the issue.

@CodeJjang Reverting to helm/tiller v2.11.0 (both as the client and in the cluster) resolved this for me – thanks! Is there an issue open in the helm issue tracker? If there is, I’m having a hard time finding it. As of now, the latest release (v2.12.0) of helm/tiller seems quite broken.

I had to install helm v2.11.0, run helm reset --force, delete any remaining CRDs, and then install istio via helm as per the docs.

@nixgadget the upgrade issue is being worked here: https://github.com/istio/istio/issues/9884

Cheers -steve

Ok peeps, apologies for my silence so far on this thread. I have been sorting out a path for CRDs to work properly in a Helm upgrade scenario. That work is here: https://github.com/istio/istio/pull/10120

It is more important that helm upgrade work than that helm delete --purge work for the case of dangling CRDs. I have commented on several issues with Helm upstream, and the conclusion I am coming to is that crd-install is not a priority for the 2.y series. As Helm is being completely reworked for the 3.y.z series, crd-install may no longer be a solution.

In summary, crd-install does not work in a helm upgrade scenario. Depending on which versions you upgraded from and to while using crd-install, it has many negative side effects, which we are only finding out about now.

Since most people on this issue tracker are using Helm 2.10+ with Istio 1.0.z, I want to provide you with a smooth upgrade experience. I am unclear whether the CRDs can be removed in an automated way. They certainly can’t be removed via helm delete --purge, as they are unmanaged objects. The Helm community is well aware of this limitation and offers no solutions.

In the meantime, I’d encourage folks to use the two-step installation/removal process. I believe this causes crd-install to be a no-op.

Note:

The Helm community has indicated that leaving CRDs unmanaged is a conscious choice, so that people do not lose their custom resource data during a helm delete --purge. Instead, you have to work a little harder to remove them. This is logically sound, although clearly not ideal for many individuals, especially those doing evaluations.

Use caution:

If you are evaluating, the solution from @AbrahamAlcaina works well enough: https://github.com/istio/istio/issues/7688#issuecomment-440243447. Another option is kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml. Note that this will delete all of your existing custom resources, which you may not care about in an evaluation but certainly will care about in production.

Yeah, I ended up writing a custom Helm job to achieve this, using:

"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": hook-succeeded

I’d highly recommend against this solution. This will make upgrades very difficult for you in the future.

I get the error template "istio.customJob" not defined. How do I resolve it?

For anyone wanting to work around this until a permanent fix is released:

{{- if .Values.global.rbacEnabled }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "istio.customJob" . }}-cc-sa
  labels:
    app: {{ template "istio.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    istio: customjob-cc
    release: {{ .Release.Name }}
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "1"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ template "istio.customJob" . }}-cc-cr
  labels:
    app: {{ template "istio.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    istio: customjob-cc
    release: {{ .Release.Name }}
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "1"
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ template "istio.customJob" . }}-cc-crb
  labels:
    app: {{ template "istio.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    istio: customjob-cc
    release: {{ .Release.Name }}
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "2"
subjects:
- name: {{ template "istio.customJob" . }}-cc-sa
  kind: ServiceAccount
  namespace: {{ .Release.Namespace }}
roleRef:
  name: {{ template "istio.customJob" . }}-cc-cr
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
---
{{- end }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "istio.customJob" . }}-cc
  labels:
    app: {{ template "istio.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    istio: customjob-cc
    release: {{ .Release.Name }}
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "3"
spec:
  template:
    metadata:
      name: {{ template "istio.customJob" . }}-cc
      labels:
        app: {{ template "istio.name" . }}
        istio: customjob-cc
        release: {{ .Release.Name }}
    spec:
      {{- if .Values.global.rbacEnabled }}
      serviceAccountName: {{ template "istio.customJob" . }}-cc-sa
      {{- end }}
      restartPolicy: OnFailure
      affinity:
        {{- if .Values.global.kubectl.nodeAffinity }}
        nodeAffinity:
{{ toYaml .Values.global.kubectl.nodeAffinity | indent 10 }}
        {{- end }}
        {{- if eq .Values.global.kubectl.antiAffinity "hard" }}
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: {{ template "istio.name" . }}
                  istio: customjob-cc
                  release: {{ .Release.Name }}
        {{- else if eq .Values.global.kubectl.antiAffinity "soft" }}
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: {{ template "istio.name" . }}
                  istio: customjob-cc
                  release: {{ .Release.Name }}
        {{- end }}
      {{- if .Values.global.kubectl.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.global.kubectl.nodeSelector | indent 8 }}
      {{- end }}
      containers:
        - name: {{ template "istio.customJob" . }}-cc-kubectl
          image: {{ .Values.global.image.repo }}/{{ .Values.global.kubectl.image }}:{{ .Values.global.image.tag }}
          imagePullPolicy: {{ .Values.global.image.pullPolicy }}
          command:
          - /bin/bash
          - -c
          - >
              kubectl get customresourcedefinitions | grep "istio.io" | while read -r entry; do
                name=$(echo $entry | awk '{print $1}');
                kubectl delete customresourcedefinitions $name;
              done
