kubernetes: kubectl apply --prune --all fails to remove existing resources

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

I ran

kubectl apply --all --namespace default --prune -f base/ --recursive

It applied all my new yaml in base/ but did not remove existing resources from the cluster due to an error:

error: error pruning nonNamespaced object /v1, Kind=Namespace: namespaces "kube-system" is forbidden: this namespace may not be deleted

What you expected to happen:

I expected all the yaml in base/ to get applied to my cluster, and all other resources currently running on the cluster that were not represented in base/ to be removed.

From the error message it is unclear which resource is preventing this from succeeding, or why it was considered at all, given that I attempted to scope the command to the default namespace (not kube-system).
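
To see what --prune would touch without actually deleting anything, a dry run helps (a sketch; --dry-run=client is the spelling on newer clients, older ones use plain --dry-run):

kubectl apply --all --namespace default --prune -f base/ --recursive --dry-run=client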

How to reproduce it (as minimally and precisely as possible):

  1. Start a clean cluster

  2. Deploy nginx: kubectl run nginx --image=nginx

  3. Create a simple service

    # myservice.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: myservice
    spec:
      ports:
      - name: http
        port: 80
        targetPort: http
      selector:
        app: myservice
      type: ClusterIP
    
  4. Deploy myservice with the intent to prune nginx

    kubectl apply --all --namespace default --prune -f myservice.yaml
    
  5. See unexpected error

    error: error pruning nonNamespaced object /v1, Kind=Namespace: namespaces "kube-system" is forbidden: this namespace may not be deleted

Anything else we need to know?:

I really want a way to deploy a known set of configs to a cluster and delete everything else.

Environment:

  • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.7-gke.3", GitCommit:"9b5b719c5f295c99de68ffb5b63101b0e0175376", GitTreeState:"clean", BuildDate:"2018-05-31T18:32:23Z", GoVersion:"go1.9.3b4", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: Google Kubernetes Engine
  • OS (e.g. from /etc/os-release): macOS
  • Kernel (e.g. uname -a): Darwin Nicks-MacBook-Pro.local 17.6.0 Darwin Kernel Version 17.6.0: Tue May 8 15:22:16 PDT 2018; root:xnu-4570.61.1~1/RELEASE_X86_64 x86_64

About this issue

  • State: open
  • Created 6 years ago
  • Reactions: 19
  • Comments: 45 (4 by maintainers)

Most upvoted comments

I believe I read in another issue that there are (still?) plans to improve the whole apply/prune handling. For this specific issue, however, there is a workaround using --prune-whitelist.

The command from the initial description

kubectl apply --all --namespace default --prune -f myservice.yaml

could be rewritten like this:

kubectl apply --namespace default --prune --prune-whitelist 'core/v1/ConfigMap' \
  --prune-whitelist 'core/v1/Endpoints' --prune-whitelist 'core/v1/PersistentVolumeClaim' \
  --prune-whitelist 'core/v1/Pod' --prune-whitelist 'core/v1/ReplicationController' \
  --prune-whitelist 'core/v1/Secret' --prune-whitelist 'core/v1/Service' \
  --prune-whitelist 'batch/v1/Job' --prune-whitelist 'batch/v1beta1/CronJob' \
  --prune-whitelist 'extensions/v1beta1/DaemonSet' \
  --prune-whitelist 'extensions/v1beta1/Deployment' \
  --prune-whitelist 'extensions/v1beta1/Ingress' --prune-whitelist 'extensions/v1beta1/ReplicaSet' \
  --prune-whitelist 'apps/v1beta1/StatefulSet' --prune-whitelist 'apps/v1beta1/Deployment' \
  -f myservice.yaml

However, this approach has the following drawbacks:

  1. I don’t know if the list of object types is complete
  2. When Kubernetes adds new object types, this list needs to be updated
  3. It doesn’t consider custom resource definitions (the sketch below shows what such an entry would look like)

So if you really need to make it work right now, I think this workaround could be OK. For the long run, hopefully the apply/prune mechanism will be improved to better handle cases like this.
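
Regarding drawback 3: a custom resource needs its own whitelist entry in group/version/Kind form. A hypothetical sketch (example.com and MyWidget are made-up names):

kubectl apply --namespace default --prune \
  --prune-whitelist 'example.com/v1/MyWidget' \
  -f myservice.yaml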

It seems I have the same problem. Save this to a file named a.yaml:

apiVersion: v1
kind: Service
metadata:
  annotations:
    environment: sandbox
    provider: kustomize
  labels:
    app: nginx
  name: nginx
  namespace: sandbox
spec:
  ports:
  - port: 80
  selector:
    app: nginx
  type: NodePort

Then run kubectl apply --prune --dry-run=client --all -n sandbox -f a.yaml; the result is:

service/nginx created (dry run)
namespace/cattle-system pruned (dry run)

I have no idea why it wants to prune namespace/cattle-system.
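
One way to narrow what --prune even considers is a label selector instead of --all (a sketch; it assumes every manifest carries the app: nginx label, as the Service in a.yaml above does):

kubectl apply --prune --dry-run=client -l app=nginx -n sandbox -f a.yaml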

/remove-lifecycle stale

Unrelated to the original namespace problem, but this is one of the first issues that comes up on Google when searching for why only certain resource types are pruned (because of the default whitelist), so just in case it’s helpful for anyone else:

We came up with a one-liner to generate a --prune-whitelist argument list based on available resources in the cluster:

kubectl api-resources -o wide > /tmp/api-resources; grep 'delete' /tmp/api-resources | cut -c$(grep -b -o APIVERSION /tmp/api-resources | awk -F ':' '{ print $1 }')- | awk '{ print $1"/"$3 }' | sed 's#^v1/#core/v1/#' | xargs -I {} echo "--prune-whitelist="{} | xargs

Short explanation:

  1. save available API resources to a file
  2. grep only those that are deletable
  3. cut away the columns before APIVERSION (SHORTNAMES can be empty, so grep and awk are used to find the column offset)
  4. awk the APIVERSION and KIND columns together as APIVERSION/KIND
  5. sed: if APIVERSION is a bare v1, rewrite it as core/v1
  6. xargs: turn the lines into a single argument list

It’s not very performant and probably has some other issues, but for our use case it works fine.
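
For readability, here is the same pipeline as a multi-line script, under the same assumptions and caveats as the one-liner (an APIVERSION header column exists, GNU grep/cut, kubectl in PATH):

#!/usr/bin/env bash
set -euo pipefail

tmp=/tmp/api-resources
kubectl api-resources -o wide > "$tmp"

# 1-based column where the APIVERSION header starts;
# grep -b reports a 0-based byte offset, hence the +1.
col=$(( $(grep -b -o 'APIVERSION' "$tmp" | cut -d: -f1) + 1 ))

# keep deletable resources, drop the columns before APIVERSION,
# join APIVERSION and KIND, map a bare v1 to the core group,
# and emit one --prune-whitelist flag per resource type
grep 'delete' "$tmp" |
  cut -c"$col"- |
  awk '{ print $1 "/" $3 }' |
  sed 's#^v1/#core/v1/#' |
  xargs -I {} echo "--prune-whitelist={}" |
  xargs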

I get the same result with a server-side dry run:

 kubectl apply --prune --dry-run=server --all -n sandbox -f a.yaml
service/nginx created (server dry run)
namespace/cattle-system pruned (server dry run)

and my environment:

Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T21:51:49Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T20:55:23Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

It is probably because one of the resources in a.yaml references the cattle-system namespace. The namespace is then tracked and used for pruning:

https://github.com/kubernetes/kubernetes/blob/e22e9b4f836a401c61d1967300e853c9de0ffb36/staging/src/k8s.io/kubectl/pkg/cmd/apply/apply.go#L645
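
One thing worth checking: --prune only considers objects that carry the kubectl.kubernetes.io/last-applied-configuration annotation, so something (an earlier apply, or whatever created cattle-system) has presumably stamped it onto that namespace. A quick way to look (a sketch):

kubectl get namespace cattle-system \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'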

Alas, I’m out of ideas, sorry…