kubernetes: Multiple service delete with Foreground propagation policy results in hanging services

What happened: Observed on minikube 1.9.2 with Kubernetes 1.18. When I issue multiple service delete requests with propagationPolicy: Foreground, some of the services are never deleted; only the foregroundDeletion finalizer and a deletionTimestamp are applied.

What you expected to happen: All services that were requested to be deleted are actually deleted.

How to reproduce it (as minimally and precisely as possible): https://gist.github.com/sparkoo/21fc1ef85872d2919f73e9d03f06693e — this script creates 10 services, then sends a delete request with curl to each of them. It targets localhost:8088, so run kubectl proxy --port=8088 first and also create the test namespace. A rough sketch of what the script does is included below.
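
The following is only a minimal sketch of the reproducer, not the gist itself; the service spec mirrors the one shown further down in this issue, and the DeleteOptions body follows the documented curl form for foreground deletion.

# Prerequisites: the "test" namespace exists and `kubectl proxy --port=8088` is running.

# Create 10 services.
for i in $(seq 1 10); do
  cat <<EOF | kubectl -n test apply -f -
apiVersion: v1
kind: Service
metadata:
  name: test-service-$i
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
EOF
done

# Delete all of them with Foreground propagation.
for i in $(seq 1 10); do
  curl -s -X DELETE \
    -H "Content-Type: application/json" \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
    "http://localhost:8088/api/v1/namespaces/test/services/test-service-$i"
done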

Anything else we need to know?: I think this issue might be related to https://github.com/kubernetes/kubernetes/issues/87603. We discovered the issue in the Eclipse Che project (https://github.com/eclipse/che/issues/16610) and worked around it by changing the propagation policy to Background, which works fine.
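
For context, the workaround amounts to sending propagationPolicy: Background in the DeleteOptions instead of Foreground. Expressed with the same curl-based approach as the reproducer (the actual Eclipse Che fix goes through its Kubernetes client, so this is only an illustration):

curl -s -X DELETE \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Background"}' \
  "http://localhost:8088/api/v1/namespaces/test/services/test-service-1"

With Background propagation the service object itself is removed immediately and any dependents are garbage-collected afterwards, so nothing is left waiting on the foregroundDeletion finalizer.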

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others: minikube 1.9.2


Most upvoted comments

I checked now and can confirm that this issue is fixed by #91311 in the master branch (and v1.19.0); the release-1.18 cherry-pick was opened today: #94253.

I also hit this issue. Using the script to reproduce against a minikube cluster running Kubernetes v1.18.5, all services hang in deletion:

$ k get no
NAME         STATUS   ROLES    AGE   VERSION
test-v1.18   Ready    master   13m   v1.18.5

$ k create ns test
$ bash ~/reproduce-90512.sh

$ k -n test get svc
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
test-service-1    ClusterIP   10.107.255.134   <none>        80/TCP    8m37s
test-service-10   ClusterIP   10.105.118.202   <none>        80/TCP    8m35s
test-service-3    ClusterIP   10.111.87.65     <none>        80/TCP    8m36s
test-service-4    ClusterIP   10.99.8.106      <none>        80/TCP    8m36s
test-service-5    ClusterIP   10.108.54.255    <none>        80/TCP    8m36s
test-service-6    ClusterIP   10.106.79.239    <none>        80/TCP    8m36s
test-service-7    ClusterIP   10.102.81.223    <none>        80/TCP    8m36s
test-service-8    ClusterIP   10.105.229.57    <none>        80/TCP    8m35s
test-service-9    ClusterIP   10.97.29.131     <none>        80/TCP    8m35s

$ k -n test get svc test-service-1 -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"test-service-1","namespace":"test"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":9376}],"selector":{"app":"nginx"}}}
  creationTimestamp: "2020-08-26T08:39:28Z"
  deletionGracePeriodSeconds: 0
  deletionTimestamp: "2020-08-26T08:39:35Z"
  finalizers:
  - foregroundDeletion
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl
    operation: Update
    time: "2020-08-26T08:39:28Z"
  name: test-service-1
  namespace: test
  resourceVersion: "781"
  selfLink: /api/v1/namespaces/test/services/test-service-1
  uid: eaefbb83-a0a9-4ee0-a84a-170ada955544
spec:
  clusterIP: 10.107.255.134
  ports:
  - port: 80
    protocol: TCP
    targetPort: 9376
  selector:
    app: nginx
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
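
Note what the output shows: deletionTimestamp and deletionGracePeriodSeconds are set and the foregroundDeletion finalizer is present, but the garbage collector never clears it, so the service stays around indefinitely. As a manual cleanup (not part of this thread, just a common way to unstick such objects; it does not address the underlying bug), removing the finalizer lets the pending delete complete:

kubectl -n test patch service test-service-1 --type json \
  -p '[{"op":"remove","path":"/metadata/finalizers"}]'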