calico: With a non-Calico CNI, Calico network policy enforcement does not allow Terminating pods to gracefully shut down

Expected Behavior

When Calico is installed for network policy enforcement, pods in the Terminating state should keep their network access until their containers are actually removed by the container runtime.

Current Behavior

Pods in the Terminating state immediately lose all network connectivity. Applications that are still handling in-flight network connections, or that need to reach out over the network as part of a graceful shutdown, cannot do so.

In our case, this is causing at least the following issues:

  • Interrupted (Cron)Jobs cannot clean up after themselves
  • In-flight (HTTP) requests to Terminating Pods time out, because return traffic is silently dropped/rejected, and have to be retried

Possible Solution

Postpone iptables cleanup until pod/containers are actually removed.
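
To make the current behaviour visible on the node itself, a minimal sketch (run on the node hosting the debug pod; the eni fragment in the chain names matches FELIX_INTERFACEPREFIX=eni in the calico-node DaemonSet below):

# watch the per-workload Calico chains for eni* interfaces; today they vanish
# as soon as the pod enters Terminating rather than when it is actually removed
watch -n1 'sudo iptables-save -t filter | grep -E "cali-(tw|fw)-eni"'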

Steps to Reproduce (for bugs)

  1. Install an AWS EKS cluster with the AWS VPC CNI (the vpc-cni installation is listed below)
  2. Install Calico for network policy enforcement (our installation is listed below)
  3. No need to install any (Global)NetworkPolicies whatsoever, keeping the default allow-all behaviour
    • adding (Global)NetworkPolicies does not help
  4. Install the debug deployment (see below)
  5. Tail the logs from the debug pod
  6. Terminate the pod with kubectl delete pod <pod>
  7. Observe that the container immediately loses network access
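
For convenience, a rough sketch of steps 4–7 as shell commands (assuming the debug.yaml from the Context section, applied to the default namespace):

# 4. install the debug deployment
kubectl apply -f debug.yaml

# 5. tail the logs; the pod resolves kubernetes.default roughly every 250ms
kubectl logs -f deploy/debug

# 6. in a second terminal, delete the pod
kubectl delete pod -l app=debug

# 7. the DNS lookups in the log start failing immediately, even though the
#    preStop hook still has most of its 30 second grace period left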

Below are the logs of calico-node running on the same node that hosted one of my debug pods. Of particular interest is the following line, which suggests that all iptables rules for my pod are removed even though the pod is still in the Terminating state and cleaning up after itself.

calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/endpoint_mgr.go 544: Workload removed, deleting old state. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"default/debug-77df46cc65-nhdl4", EndpointId:"eth0"}
+ calico-node-8xbsr › calico-node

# immediately after running `kubectl delete pod <pod>`

calico-node-8xbsr calico-node 2021-04-08 11:31:32.158 [INFO][65] felix/calc_graph.go 411: Local endpoint deleted id=WorkloadEndpoint(node=ip-10-28-97-106.eu-west-1.compute.internal, orchestrator=k8s, workload=default/debug-77df46cc65-nhdl4, name=eth0)
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/int_dataplane.go 1430: Received *proto.WorkloadEndpointRemove update from calculation graph msg=id:<orchestrator_id:"k8s" workload_id:"default/debug-77df46cc65-nhdl4" endpoint_id:"eth0" > 
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/int_dataplane.go 1430: Received *proto.ActiveProfileRemove update from calculation graph msg=id:<name:"ksa.default.default" > 
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/table.go 537: Queuing deletion of chain. chainName="cali-pri-ksa.default.default" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/table.go 537: Queuing deletion of chain. chainName="cali-pro-ksa.default.default" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/table.go 537: Queuing deletion of chain. chainName="cali-pro-ksa.default.default" ipVersion=0x4 table="mangle"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/int_dataplane.go 1430: Received *proto.ActiveProfileRemove update from calculation graph msg=id:<name:"kns.default" > 
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/table.go 537: Queuing deletion of chain. chainName="cali-pri-kns.default" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/table.go 537: Queuing deletion of chain. chainName="cali-pro-kns.default" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/table.go 537: Queuing deletion of chain. chainName="cali-pro-kns.default" ipVersion=0x4 table="mangle"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/endpoint_mgr.go 667: Workload removed, deleting its chains. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"default/debug-77df46cc65-nhdl4", EndpointId:"eth0"}
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/table.go 537: Queuing deletion of chain. chainName="cali-tw-eni70482c87d9a" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/table.go 591: Chain no longer referenced, marking it for removal chainName="cali-pri-kns.default"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/table.go 591: Chain no longer referenced, marking it for removal chainName="cali-pri-ksa.default.default"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/table.go 537: Queuing deletion of chain. chainName="cali-fw-eni70482c87d9a" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/table.go 591: Chain no longer referenced, marking it for removal chainName="cali-pro-kns.default"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/table.go 591: Chain no longer referenced, marking it for removal chainName="cali-pro-ksa.default.default"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/table.go 537: Queuing deletion of chain. chainName="cali-sm-eni70482c87d9a" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.159 [INFO][65] felix/endpoint_mgr.go 544: Workload removed, deleting old state. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"default/debug-77df46cc65-nhdl4", EndpointId:"eth0"}
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/table.go 506: Queueing update of chain. chainName="cali-from-wl-dispatch" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/table.go 591: Chain no longer referenced, marking it for removal chainName="cali-from-wl-dispatch-7"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/table.go 506: Queueing update of chain. chainName="cali-to-wl-dispatch" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/table.go 591: Chain no longer referenced, marking it for removal chainName="cali-to-wl-dispatch-7"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/table.go 537: Queuing deletion of chain. chainName="cali-from-wl-dispatch-7" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/table.go 537: Queuing deletion of chain. chainName="cali-to-wl-dispatch-7" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/table.go 591: Chain no longer referenced, marking it for removal chainName="cali-tw-eni70482c87d9a"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/table.go 506: Queueing update of chain. chainName="cali-set-endpoint-mark" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/table.go 591: Chain no longer referenced, marking it for removal chainName="cali-set-endpoint-mark-7"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/table.go 506: Queueing update of chain. chainName="cali-from-endpoint-mark" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/table.go 591: Chain no longer referenced, marking it for removal chainName="cali-fw-eni70482c87d9a"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/table.go 537: Queuing deletion of chain. chainName="cali-set-endpoint-mark-7" ipVersion=0x4 table="filter"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/table.go 591: Chain no longer referenced, marking it for removal chainName="cali-sm-eni70482c87d9a"
calico-node-8xbsr calico-node 2021-04-08 11:31:32.160 [INFO][65] felix/endpoint_mgr.go 476: Re-evaluated workload endpoint status adminUp=false failed=false known=false operUp=false status="" workloadEndpointID=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"default/debug-77df46cc65-nhdl4", EndpointId:"eth0"}
calico-node-8xbsr calico-node 2021-04-08 11:31:32.161 [INFO][65] felix/status_combiner.go 58: Storing endpoint status update ipVersion=0x4 status="" workload=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"default/debug-77df46cc65-nhdl4", EndpointId:"eth0"}
calico-node-8xbsr calico-node 2021-04-08 11:31:32.161 [INFO][65] felix/conntrack.go 90: Removing conntrack flows ip=100.81.217.188
calico-node-8xbsr calico-node 2021-04-08 11:31:32.204 [INFO][65] felix/status_combiner.go 86: Reporting endpoint removed. id=proto.WorkloadEndpointID{OrchestratorId:"k8s", WorkloadId:"default/debug-77df46cc65-nhdl4", EndpointId:"eth0"}

# after the pod is actually removed, note the terminationGracePeriodSeconds: 30

calico-node-8xbsr calico-node 2021-04-08 11:32:03.106 [INFO][65] felix/iface_monitor.go 187: Netlink address update. addr="fe80::ec14:2bff:fee9:cce" exists=false ifIndex=78
calico-node-8xbsr calico-node 2021-04-08 11:32:03.106 [INFO][65] felix/int_dataplane.go 1036: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eni70482c87d9a"
calico-node-8xbsr calico-node 2021-04-08 11:32:03.106 [INFO][65] felix/int_dataplane.go 1001: Linux interface state changed. ifIndex=78 ifaceName="eni70482c87d9a" state="down"
calico-node-8xbsr calico-node 2021-04-08 11:32:03.106 [INFO][65] felix/int_dataplane.go 1463: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eni70482c87d9a", Addrs:set.mapSet{}}
calico-node-8xbsr calico-node 2021-04-08 11:32:03.107 [INFO][65] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eni70482c87d9a", Addrs:set.mapSet{}}
calico-node-8xbsr calico-node 2021-04-08 11:32:03.107 [INFO][65] felix/int_dataplane.go 1445: Received interface update msg=&intdataplane.ifaceUpdate{Name:"eni70482c87d9a", State:"down", Index:78}
calico-node-8xbsr calico-node 2021-04-08 11:32:03.108 [INFO][65] felix/int_dataplane.go 1036: Linux interface addrs changed. addrs=<nil> ifaceName="eni70482c87d9a"
calico-node-8xbsr calico-node 2021-04-08 11:32:03.108 [INFO][65] felix/int_dataplane.go 1463: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eni70482c87d9a", Addrs:set.Set(nil)}
calico-node-8xbsr calico-node 2021-04-08 11:32:03.108 [INFO][65] felix/hostip_mgr.go 84: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eni70482c87d9a", Addrs:set.Set(nil)}
calico-node-8xbsr calico-node 2021-04-08 11:32:12.545 [INFO][65] felix/summary.go 100: Summarising 18 dataplane reconciliation loops over 1m3.1s: avg=12ms longest=60ms (resync-ipsets-v4)

Your Environment

  • Calico version: v3.18.1
  • EKS version: 1.18
  • EKS AMI: v20210302
  • AWS VPC CNI version: v1.7.9
  • containerd version: 1.4.1

Context

debug.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: debug
  name: debug
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debug
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: debug
    spec:
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 1000
        runAsNonRoot: true
      containers:
        - image: krallin/ubuntu-tini:trusty
          name: debug
          resources: {}
          command:
            - /bin/sh
            - -c
            - |
              while true; do
                date
                timeout 1s getent hosts kubernetes.default
                sleep 0.25
              done
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - |
                    echo BEGIN preStop > /proc/1/fd/1;
                    sleep 10
                    echo END preStop > /proc/1/fd/1;
status: {}

kubectl -n kube-system get ds aws-node -o yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "2"
  labels:
    k8s-app: aws-node
  name: aws-node
  namespace: kube-system
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: aws-node
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: aws-node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/os
                operator: In
                values:
                - linux
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
              - key: eks.amazonaws.com/compute-type
                operator: NotIn
                values:
                - fargate
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
              - key: eks.amazonaws.com/compute-type
                operator: NotIn
                values:
                - fargate
      containers:
      - env:
        - name: ENI_CONFIG_LABEL_DEF
          value: failure-domain.beta.kubernetes.io/zone
        - name: ADDITIONAL_ENI_TAGS
          value: '{}'
        - name: AWS_VPC_CNI_NODE_PORT_SUPPORT
          value: "true"
        - name: AWS_VPC_ENI_MTU
          value: "9001"
        - name: AWS_VPC_K8S_CNI_CONFIGURE_RPFILTER
          value: "false"
        - name: AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
          value: "true"
        - name: AWS_VPC_K8S_CNI_EXTERNALSNAT
          value: "false"
        - name: AWS_VPC_K8S_CNI_LOGLEVEL
          value: DEBUG
        - name: AWS_VPC_K8S_CNI_LOG_FILE
          value: /host/var/log/aws-routed-eni/ipamd.log
        - name: AWS_VPC_K8S_CNI_RANDOMIZESNAT
          value: prng
        - name: AWS_VPC_K8S_CNI_VETHPREFIX
          value: eni
        - name: AWS_VPC_K8S_PLUGIN_LOG_FILE
          value: /var/log/aws-routed-eni/plugin.log
        - name: AWS_VPC_K8S_PLUGIN_LOG_LEVEL
          value: DEBUG
        - name: DISABLE_INTROSPECTION
          value: "false"
        - name: DISABLE_METRICS
          value: "false"
        - name: ENABLE_POD_ENI
          value: "false"
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: WARM_ENI_TARGET
          value: "1"
        image: 602401143452.dkr.ecr.eu-west-1.amazonaws.com/amazon-k8s-cni:v1.7.9
        imagePullPolicy: Always
        livenessProbe:
          exec:
            command:
            - /app/grpc-health-probe
            - -addr=:50051
          failureThreshold: 3
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: aws-node
        ports:
        - containerPort: 61678
          hostPort: 61678
          name: metrics
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - /app/grpc-health-probe
            - -addr=:50051
          failureThreshold: 3
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 10m
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /host/opt/cni/bin
          name: cni-bin-dir
        - mountPath: /host/etc/cni/net.d
          name: cni-net-dir
        - mountPath: /host/var/log/aws-routed-eni
          name: log-dir
        - mountPath: /var/run/aws-node
          name: run-dir
        - mountPath: /var/run/cri.sock
          name: cri-sock
        - mountPath: /run/xtables.lock
          name: xtables-lock
      dnsPolicy: ClusterFirst
      hostNetwork: true
      initContainers:
      - env:
        - name: DISABLE_TCP_EARLY_DEMUX
          value: "false"
        image: 602401143452.dkr.ecr.eu-west-1.amazonaws.com/amazon-k8s-cni-init:v1.7.9
        imagePullPolicy: Always
        name: aws-vpc-cni-init
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /host/opt/cni/bin
          name: cni-bin-dir
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccount: aws-node
      serviceAccountName: aws-node
      terminationGracePeriodSeconds: 10
      tolerations:
      - operator: Exists
      volumes:
      - hostPath:
          path: /opt/cni/bin
          type: ""
        name: cni-bin-dir
      - hostPath:
          path: /etc/cni/net.d
          type: ""
        name: cni-net-dir
      - hostPath:
          path: /run/containerd/containerd.sock
          type: ""
        name: cri-sock
      - hostPath:
          path: /run/xtables.lock
          type: ""
        name: xtables-lock
      - hostPath:
          path: /var/log/aws-routed-eni
          type: DirectoryOrCreate
        name: log-dir
      - hostPath:
          path: /var/run/aws-node
          type: DirectoryOrCreate
        name: run-dir
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 100%
    type: RollingUpdate

kubectl -n kube-system get ds calico-node -o yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "2"
    meta.helm.sh/release-name: aws-calico
    meta.helm.sh/release-namespace: kube-system
  labels:
    app.kubernetes.io/managed-by: Helm
    k8s-app: calico-node
  name: calico-node
  namespace: kube-system
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: calico-node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
              - key: eks.amazonaws.com/compute-type
                operator: NotIn
                values:
                - fargate
      containers:
      - env:
        - name: DATASTORE_TYPE
          value: kubernetes
        - name: FELIX_INTERFACEPREFIX
          value: eni
        - name: FELIX_LOGSEVERITYSCREEN
          value: info
        - name: CALICO_NETWORKING_BACKEND
          value: none
        - name: CLUSTER_TYPE
          value: k8s,ecs
        - name: CALICO_DISABLE_FILE_LOGGING
          value: "true"
        - name: FELIX_TYPHAK8SSERVICENAME
          value: calico-typha
        - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
          value: ACCEPT
        - name: FELIX_IPTABLESMANGLEALLOWACTION
          value: Return
        - name: FELIX_IPV6SUPPORT
          value: "false"
        - name: WAIT_FOR_DATASTORE
          value: "true"
        - name: FELIX_LOGSEVERITYSYS
          value: none
        - name: FELIX_PROMETHEUSMETRICSENABLED
          value: "true"
        - name: FELIX_ROUTESOURCE
          value: WorkloadIPs
        - name: NO_DEFAULT_POOLS
          value: "true"
        - name: NODENAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: IP
        - name: FELIX_HEALTHENABLED
          value: "true"
        image: quay.io/calico/node:v3.18.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - /bin/calico-node
            - -felix-live
          failureThreshold: 6
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: calico-node
        ports:
        - containerPort: 9091
          hostPort: 9091
          name: metrics
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - /bin/calico-node
            - -felix-ready
          failureThreshold: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
        - mountPath: /run/xtables.lock
          name: xtables-lock
        - mountPath: /var/run/calico
          name: var-run-calico
        - mountPath: /var/lib/calico
          name: var-lib-calico
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/os: linux
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccount: calico-node
      serviceAccountName: calico-node
      terminationGracePeriodSeconds: 0
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoExecute
        operator: Exists
      volumes:
      - hostPath:
          path: /lib/modules
          type: ""
        name: lib-modules
      - hostPath:
          path: /var/run/calico
          type: ""
        name: var-run-calico
      - hostPath:
          path: /var/lib/calico
          type: ""
        name: var-lib-calico
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 21
  • Comments: 34 (11 by maintainers)


Most upvoted comments

I would also be keen to know if there is an ETA.

This one is being worked on as a high-priority issue - if anyone needs guidance in the meantime, please feel free to get in touch with me.

Just to let you know what’s going on here: in v3.16 we fixed a different bug and inadvertently caused this one.

The other bug was that pods that terminate without being deleted, such as Jobs, would still appear to Calico to have an IP address even though the CNI plugin had been called to remove the pod from the network. That was a security problem because Calico would think that the IP still belonged to the old Job pod even though it was now free and could be re-used by a new pod with different permissions.

We closed that loophole by changing Calico CNI to add an annotation to the pod to mark whether it has an IP or not. However, you’re not using Calico CNI, so you don’t get that part of the fix; instead, with the EKS CNI, we check for the annotation, don’t find it, and treat that as “no IP”. It’s not easy to fix because the k8s API doesn’t cleanly expose the fact that a pod is “terminating but may be in its grace period” rather than “terminated and no longer owns its IP”.
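
A quick way to check for that annotation on a pod (this assumes it is cni.projectcalico.org/podIP, which Calico CNI writes and the AWS VPC CNI never does):

kubectl get pod <pod> -o jsonpath='{.metadata.annotations.cni\.projectcalico\.org/podIP}'

With a non-Calico CNI this prints nothing, which is what currently gets treated as “no IP”.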

To add an additional data point, we’re experiencing this issue as well.

  • Calico version: v3.17.3
  • AWS VPC CNI version: v1.7.9-rc1
  • containerd version: 1.4.3

Calico v3.19.2 has now been released, which also has the fix.

Hi @primeroz, yes v3.17.5 and v3.18.5 both have the pod termination fix backported.
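
For anyone checking which version their nodes are actually running, the image tag can be read straight from the DaemonSet shown above (assuming calico-node is the first container, as it is in that manifest):

kubectl -n kube-system get ds calico-node -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'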

@nikskiz sadly not, but we’ve got the fix in hand. We’re just going to restore the old behaviour for non-Calico CNIs. The old behaviour was to use the “Phase” of the Pod to decide if its IP was legit. That works almost always, but there was a bug in older k8s versions that meant a pod could get stuck in the wrong phase for too long when it was terminating.
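
As a quick sketch of what the API exposes for a pod in that state: a Terminating pod keeps its phase (usually Running) and gains a deletionTimestamp, e.g.

kubectl get pod <pod> -o jsonpath='{.status.phase}{"  "}{.metadata.deletionTimestamp}{"\n"}'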

We are also having this issue with v3.17.2 and v3.16.10. When pods are being gracefully shut down, their networking is terminated immediately because their iptables rules are removed instantly. As a result, open requests cannot be served and database connections are left open.

v3.13.2 seems to work as expected. We are currently trying to narrow down the version.

After a lot of investigation we came to the same observation: the route to a Pod that is switching to Terminating is deleted right away, without respecting the Pod lifecycle. Effects include the kubelet marking the Pod as Unhealthy right away, and, if the Pod is part of a Service/Endpoints, traffic being dropped.

It’s a significant impact … but I am glad that reverting to 3.15.1 works (3.15.5 seems to be okay too).

Thank you very much for giving this issue the attention it deserves, as ongoing support for EKS with the AWS VPC CNI is an important feature of the Calico project.

We have been testing various versions of Calico and it seems that v3.15.5 is the last one to work as expected; the next release, v3.16.0, is broken, i.e. iptables rules are immediately deleted when SIGTERM is received. We have also confirmed that the problem still persists on the latest release, v3.19.1.