kubernetes: Static pods cannot be started any more after kubelet is restarted

What happened?

We are deploying k8s v1.22.8. During cluster deployment, the control plane components are started by the kubelet, but after the static pod manifest files are changed and the kubelet is restarted, the control plane components no longer start. The component containers have been killed, but the sandbox containers are still alive. The kubelet log shows:

Apr 15 11:55:17 w71-qiurui-1330-12b kubelet[13249]: I0415 11:55:17.484952   13249 pod_workers.go:1213] "Pod worker has been requested for removal but is still not fully terminated" podUID=5bfc554f3aad946fbf22edab693ba6d3
Apr 15 11:55:17 w71-qiurui-1330-12b kubelet[13249]: I0415 11:55:17.484966   13249 pod_workers.go:1213] "Pod worker has been requested for removal but is still not fully terminated" podUID=e43c8c031472539371a5cbfce25d5a0d
Apr 15 11:55:17 w71-qiurui-1330-12b kubelet[13249]: I0415 11:55:17.484978   13249 pod_workers.go:1213] "Pod worker has been requested for removal but is still not fully terminated" podUID=6cb9b1d380102bffaa399a5147810cdd
Apr 15 11:55:17 w71-qiurui-1330-12b kubelet[13249]: I0415 11:55:17.484991   13249 pod_workers.go:1213] "Pod worker has been requested for removal but is still not fully terminated" podUID=5cdc8bc8da1745e6e46c7a1be804aa5c

The start of new static pods is blocked. Only after the kubelet is restarted again can the new pods be started successfully.

1.22.8 includes #106394 to fix the pod restart issue, but it does not seem to fix it completely. We did not see this issue on 1.22.3.

What did you expect to happen?

The sandbox containers of the old static pods should be killed, and the new pods should be started.

How can we reproduce it (as minimally and precisely as possible)?

It’s hard to reproduce with a standard k8s deployment. We can only reproduce it in our private environment with our deployment tools.

Possible reproduction steps:

  1. Install docker and kubelet.
  2. Create k8s control plane YAML files in /etc/kubernetes/manifests.
  3. Modify the control plane YAML files, e.g. by adding annotations.
  4. Restart the kubelet.
  5. Repeat step 3 a couple of times.

Anything else we need to know?

The kubelet log is attached: kubelet.tar.gz

Kubernetes version

1.22.8

Cloud provider

None

OS version

# On Linux:
$ cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
$ uname -a
Linux global-0 3.10.0-1160.53.1.el7.x86_64 #1 SMP Fri Jan 14 13:59:45 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, …) and versions (if applicable)

About this issue

  • Original URL
  • State: open
  • Created 2 years ago
  • Reactions: 2
  • Comments: 20 (13 by maintainers)

Most upvoted comments

Problem is similar to #105543 - when we don’t have apiserver pod info, the kubelet behaves incorrectly for static pods (which shouldn’t be impacted by the apiserver). I am starting to believe the “all sources ready” check in both housekeeping and delete pod is not sufficient - housekeeping needs to be able to detect whether a specific pod is truly missing or not, and that depends on knowing the source of the pod. I added some comments in https://github.com/kubernetes/kubernetes/pull/111901#issuecomment-1240963051 around something that could potentially address both issues.

The conditions of this case are as follows:

  1. The kubelet is restarting.
  2. The kubelet received the static pod kube-apiserver.
  3. The kubelet started to syncPod it, but had not finished this sync.
  4. The manifest file of the static pod was modified.
  5. The kubelet received a REMOVE event for the pod and removed the old pod from the podManager, but failed to delete the pod because the sources weren't ready yet.
  6. The kubelet received an ADD event for the new static pod and added it to the podManager, but could not start it because the old pod with the same name was still running in the pod worker.
  7. Because the old pod had been deleted from the podManager, the sync from step 3 failed with a mount-volumes timeout, and would keep failing forever.
  8. The old pod worker was still in the sync work state, so housekeeping could not kill the old pod either, and the new pod would never be started.
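The deadlock in steps 5–8 can be modeled with a small sketch. Assumption: the type and function names below are illustrative, not the real kubelet types; the real podWorkers/podManager are far more complex. The point being illustrated is that static pod workers are keyed by full name, the REMOVE is skipped while pod sources aren't ready, and so the new UID is blocked indefinitely behind the old, never-terminated worker.

```go
package main

import "fmt"

// worker models one pod worker: it holds a pod UID and a terminated flag.
type worker struct {
	uid        string
	terminated bool
}

// podWorkers keys static pods by full name (namespace/name), so a new UID
// for the same full name cannot start until the old worker has terminated.
type podWorkers struct {
	byFullName map[string]*worker
}

func newPodWorkers() *podWorkers {
	return &podWorkers{byFullName: map[string]*worker{}}
}

// UpdatePod reports whether the pod may start syncing. A different,
// not-yet-terminated UID holding the same full name blocks it
// ("Pod cannot start yet" in the logs above).
func (w *podWorkers) UpdatePod(fullName, uid string) bool {
	old, ok := w.byFullName[fullName]
	if ok && old.uid != uid && !old.terminated {
		return false // old worker still owns the name
	}
	w.byFullName[fullName] = &worker{uid: uid}
	return true
}

// RemovePod models "SyncLoop REMOVE": while the pod sources are not yet
// ready, the delete is skipped, so the old worker never terminates.
func (w *podWorkers) RemovePod(fullName string, sourcesReady bool) error {
	if !sourcesReady {
		return fmt.Errorf("skipping delete because sources aren't ready yet")
	}
	if old, ok := w.byFullName[fullName]; ok {
		old.terminated = true
	}
	return nil
}

func main() {
	pw := newPodWorkers()
	name := "kube-system/kube-apiserver"
	pw.UpdatePod(name, "old-uid") // step 3: old pod starts its first sync
	// step 5: REMOVE arrives before the file source is marked ready.
	fmt.Println("remove:", pw.RemovePod(name, false))
	// step 6: ADD for the rewritten manifest (new UID, same full name).
	fmt.Println("new pod can start:", pw.UpdatePod(name, "new-uid")) // false
}
```

Under this model the only way out is terminating the old worker, which is what restarting the kubelet effectively does.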

The relevant logs are as follows:

315: I0818 02:09:58.354946    2462 config.go:383] "Receiving a new pod" pod="kube-system/kube-apiserver-1111-2222--ab01"
	  459: I0818 02:09:58.416078    2462 manager.go:999] Added container: "/kubepods/burstable/pod17b60a8466d5776e928eb61d70f0ad95/49cd61b36c8a7b27dc18d74785238211b08548afc464e931e8a4a5c9ade7533f" (aliases: [k8s_kube-apiserver_kube-apiserver-1111-2222--ab01_kube-system_17b60a8466d5776e928eb61d70f0ad95_0 49cd61b36c8a7b27dc18d74785238211b08548afc464e931e8a4a5c9ade7533f], namespace: "docker")
	  839: I0818 02:09:58.475269    2462 manager.go:999] Added container: "/kubepods/burstable/pod17b60a8466d5776e928eb61d70f0ad95/451669bee6185a2c158bda56786e0d255426cd451602716e490940b01174c360" (aliases: [k8s_POD_kube-apiserver-1111-2222--ab01_kube-system_17b60a8466d5776e928eb61d70f0ad95_0 451669bee6185a2c158bda56786e0d255426cd451602716e490940b01174c360], namespace: "docker")
	  875: I0818 02:09:58.488433    2462 kuberuntime_manager.go:1143] "getSandboxIDByPodUID got sandbox IDs for pod" podSandboxID=[451669bee6185a2c158bda56786e0d255426cd451602716e490940b01174c360] pod="kube-system/kube-apiserver-1111-2222--ab01"
	  899: E0818 02:09:58.493728    2462 generic.go:411] "PLEG: Write status" pod="kube-system/kube-apiserver-1111-2222--ab01"
	 2233: I0818 02:09:58.746604    2462 kubelet.go:2100] "SyncLoop ADD" source="file" pods=[kube-system/kube-apiserver-1111-2222--ab01 kube-system/kube-controller-manager-1111-2222--ab01 kube-system/kube-proxy-1111-2222--ab01 kube-system/kube-scheduler-1111-2222--ab01]
	 2235: I0818 02:09:58.747053    2462 pod_workers.go:596] "Pod is being synced for the first time" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=17b60a8466d5776e928eb61d70f0ad95
	 2241: I0818 02:09:58.747445    2462 pod_workers.go:921] "Processing pod event" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=17b60a8466d5776e928eb61d70f0ad95 updateType=0

------  the old pod starts to sync at this time

	 2245: I0818 02:09:58.747639    2462 kubelet.go:1557] "syncPod enter" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=17b60a8466d5776e928eb61d70f0ad95
	 2250: I0818 02:09:58.747766    2462 kubelet_pods.go:1455] "Generating pod status" pod="kube-system/kube-apiserver-1111-2222--ab01"
	 2253: I0818 02:09:58.747815    2462 kubelet.go:2138] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-1111-2222--ab01" event=&{ID:17b60a8466d5776e928eb61d70f0ad95 Type:ContainerStarted Data:49cd61b36c8a7b27dc18d74785238211b08548afc464e931e8a4a5c9ade7533f}
	 2254: I0818 02:09:58.747864    2462 kubelet_pods.go:1467] "Got phase for pod" pod="kube-system/kube-apiserver-1111-2222--ab01" oldPhase=Pending phase=Running
	 2255: I0818 02:09:58.747864    2462 kubelet.go:2138] "SyncLoop (PLEG): event for pod" pod="kube-system/kube-apiserver-1111-2222--ab01" event=&{ID:17b60a8466d5776e928eb61d70f0ad95 Type:ContainerStarted Data:451669bee6185a2c158bda56786e0d255426cd451602716e490940b01174c360}
	 2285: I0818 02:09:58.748971    2462 kubelet.go:1734] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-1111-2222--ab01"
	 2322: I0818 02:09:58.866601    2462 config.go:383] "Receiving a new pod" pod="kube-system/kube-apiserver-1111-2222--ab01"


--------- received the old pod REMOVE event
	 2323: I0818 02:09:58.866787    2462 kubelet.go:2110] "SyncLoop REMOVE" source="file" pods=[kube-system/kube-apiserver-1111-2222--ab01]


---------- deleting the old pod failed
	 2324: I0818 02:09:58.866841    2462 kubelet.go:2308] "Failed to delete pod" pod="kube-system/kube-apiserver-1111-2222--ab01" err="skipping delete because sources aren't ready yet"
	
----------- received the new pod ADD event
	 2325: I0818 02:09:58.866873    2462 kubelet.go:2100] "SyncLoop ADD" source="file" pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 2327: I0818 02:09:58.867026    2462 pod_workers.go:596] "Pod is being synced for the first time" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c

----------- the new pod cannot start
	 2328: I0818 02:09:58.867139    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	 2329: I0818 02:09:58.909974    2462 status_manager.go:670] "Pod was deleted and then recreated, skipping status update" pod="kube-system/kube-apiserver-1111-2222--ab01" oldPodUID=17b60a8466d5776e928eb61d70f0ad95 podUID=233fb29f-89e9-432a-8aee-69f347c39fa4
	 2335: I0818 02:09:58.942828    2462 desired_state_of_world_populator.go:349] "Added volume to desired state" pod="kube-system/kube-apiserver-1111-2222--ab01" volumeName="local-time" volumeSpecName="local-time"
	 2336: I0818 02:09:58.942880    2462 desired_state_of_world_populator.go:349] "Added volume to desired state" pod="kube-system/kube-apiserver-1111-2222--ab01" volumeName="certs" volumeSpecName="certs"
	 2337: I0818 02:09:58.942929    2462 desired_state_of_world_populator.go:349] "Added volume to desired state" pod="kube-system/kube-apiserver-1111-2222--ab01" volumeName="admission-control" volumeSpecName="admission-control"
	 2338: I0818 02:09:58.942982    2462 desired_state_of_world_populator.go:349] "Added volume to desired state" pod="kube-system/kube-apiserver-1111-2222--ab01" volumeName="pki" volumeSpecName="pki"
	 2339: I0818 02:09:58.943033    2462 desired_state_of_world_populator.go:349] "Added volume to desired state" pod="kube-system/kube-apiserver-1111-2222--ab01" volumeName="log-dir" volumeSpecName="log-dir"
	 2340: I0818 02:09:58.943079    2462 desired_state_of_world_populator.go:349] "Added volume to desired state" pod="kube-system/kube-apiserver-1111-2222--ab01" volumeName="webhook" volumeSpecName="webhook"
	 2341: I0818 02:09:58.943132    2462 desired_state_of_world_populator.go:349] "Added volume to desired state" pod="kube-system/kube-apiserver-1111-2222--ab01" volumeName="audit" volumeSpecName="audit"
	 2342: I0818 02:09:58.943178    2462 desired_state_of_world_populator.go:349] "Added volume to desired state" pod="kube-system/kube-apiserver-1111-2222--ab01" volumeName="cluster-id" volumeSpecName="cluster-id"
	 2361: E0818 02:09:59.064605    2462 kubelet.go:1736] "Failed creating a mirror pod for" err="pods \"kube-apiserver-1111-2222--ab01\" already exists" pod="kube-system/kube-apiserver-1111-2222--ab01"
	 2362: I0818 02:09:59.064723    2462 volume_manager.go:410] "Waiting for volumes to attach and mount for pod" pod="kube-system/kube-apiserver-1111-2222--ab01"
	 2370: I0818 02:09:59.142571    2462 reconciler.go:227] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-time\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-local-time\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2371: I0818 02:09:59.142771    2462 reconciler.go:227] "operationExecutor.VerifyControllerAttachedVolume started for volume \"admission-control\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-admission-control\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2372: I0818 02:09:59.142839    2462 reconciler.go:227] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pki\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-pki\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2373: I0818 02:09:59.142886    2462 reconciler.go:227] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-log-dir\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2377: I0818 02:09:59.143118    2462 reconciler.go:227] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-audit\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2393: I0818 02:09:59.144117    2462 reconciler.go:227] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-certs\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2395: I0818 02:09:59.144224    2462 reconciler.go:227] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-webhook\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2397: I0818 02:09:59.144269    2462 reconciler.go:227] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-id\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-cluster-id\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2414: I0818 02:09:59.344353    2462 config.go:383] "Receiving a new pod" pod="kube-system/kube-apiserver-1111-2222--ab01"
	 2420: I0818 02:09:59.345661    2462 reconciler.go:257] "Starting operationExecutor.MountVolume for volume \"admission-control\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-admission-control\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2421: I0818 02:09:59.345782    2462 reconciler.go:272] "operationExecutor.MountVolume started for volume \"admission-control\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-admission-control\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2422: I0818 02:09:59.345857    2462 reconciler.go:257] "Starting operationExecutor.MountVolume for volume \"pki\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-pki\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2423: I0818 02:09:59.345912    2462 reconciler.go:272] "operationExecutor.MountVolume started for volume \"pki\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-pki\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2424: I0818 02:09:59.345949    2462 reconciler.go:257] "Starting operationExecutor.MountVolume for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-log-dir\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2425: I0818 02:09:59.345996    2462 reconciler.go:272] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-log-dir\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2426: I0818 02:09:59.346014    2462 kubelet.go:2100] "SyncLoop ADD" source="api" pods=[kube-system/calico-kube-controllers-6cdf7cf687-nbfmd default/caas-statefulset-0 kube-system/kube-proxy-1111-2222--ab01 kube-system/calico-node-pst2k default/caas-daemonset-hbw4m kube-system/kube-apiserver-1111-2222--ab01 kube-system/kube-controller-manager-1111-2222--ab01 kube-system/kube-scheduler-1111-2222--ab01 kube-system/op-containers-eviction-965648bbb-fvhwk default/caas-rs-pxv2t default/caas-statefulset-1]
	 2436: I0818 02:09:59.346368    2462 reconciler.go:257] "Starting operationExecutor.MountVolume for volume \"local-time\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-local-time\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2438: I0818 02:09:59.346442    2462 reconciler.go:272] "operationExecutor.MountVolume started for volume \"local-time\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-local-time\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2442: I0818 02:09:59.346485    2462 reconciler.go:257] "Starting operationExecutor.MountVolume for volume \"audit\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-audit\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2443: I0818 02:09:59.346540    2462 reconciler.go:272] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-audit\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2470: I0818 02:09:59.347348    2462 reconciler.go:257] "Starting operationExecutor.MountVolume for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-certs\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2471: I0818 02:09:59.347391    2462 reconciler.go:272] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-certs\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2472: I0818 02:09:59.347450    2462 reconciler.go:257] "Starting operationExecutor.MountVolume for volume \"webhook\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-webhook\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2474: I0818 02:09:59.347501    2462 reconciler.go:272] "operationExecutor.MountVolume started for volume \"webhook\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-webhook\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2476: I0818 02:09:59.347539    2462 reconciler.go:257] "Starting operationExecutor.MountVolume for volume \"cluster-id\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-cluster-id\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2478: I0818 02:09:59.347607    2462 reconciler.go:272] "operationExecutor.MountVolume started for volume \"cluster-id\" (UniqueName: \"kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-cluster-id\") pod \"kube-apiserver-1111-2222--ab01\" (UID: \"b7691e2ed5f0757e7c63d38cfe3e6e9c\") "
	 2493: I0818 02:09:59.348435    2462 operation_generator.go:712] MountVolume.SetUp succeeded for volume "admission-control" (UniqueName: "kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-admission-control") pod "kube-apiserver-1111-2222--ab01" (UID: "b7691e2ed5f0757e7c63d38cfe3e6e9c") 
	 2494: I0818 02:09:59.348530    2462 operation_generator.go:712] MountVolume.SetUp succeeded for volume "pki" (UniqueName: "kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-pki") pod "kube-apiserver-1111-2222--ab01" (UID: "b7691e2ed5f0757e7c63d38cfe3e6e9c") 
	 2495: I0818 02:09:59.348563    2462 operation_generator.go:712] MountVolume.SetUp succeeded for volume "log-dir" (UniqueName: "kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-log-dir") pod "kube-apiserver-1111-2222--ab01" (UID: "b7691e2ed5f0757e7c63d38cfe3e6e9c") 
	 2496: I0818 02:09:59.348594    2462 operation_generator.go:712] MountVolume.SetUp succeeded for volume "audit" (UniqueName: "kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-audit") pod "kube-apiserver-1111-2222--ab01" (UID: "b7691e2ed5f0757e7c63d38cfe3e6e9c") 
	 2506: I0818 02:09:59.348833    2462 operation_generator.go:712] MountVolume.SetUp succeeded for volume "local-time" (UniqueName: "kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-local-time") pod "kube-apiserver-1111-2222--ab01" (UID: "b7691e2ed5f0757e7c63d38cfe3e6e9c") 
	 2508: I0818 02:09:59.348878    2462 operation_generator.go:712] MountVolume.SetUp succeeded for volume "cluster-id" (UniqueName: "kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-cluster-id") pod "kube-apiserver-1111-2222--ab01" (UID: "b7691e2ed5f0757e7c63d38cfe3e6e9c") 
	 2512: I0818 02:09:59.348974    2462 operation_generator.go:712] MountVolume.SetUp succeeded for volume "certs" (UniqueName: "kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-certs") pod "kube-apiserver-1111-2222--ab01" (UID: "b7691e2ed5f0757e7c63d38cfe3e6e9c") 
	 2513: I0818 02:09:59.349005    2462 operation_generator.go:712] MountVolume.SetUp succeeded for volume "webhook" (UniqueName: "kubernetes.io/host-path/b7691e2ed5f0757e7c63d38cfe3e6e9c-webhook") pod "kube-apiserver-1111-2222--ab01" (UID: "b7691e2ed5f0757e7c63d38cfe3e6e9c") 
	 2540: I0818 02:09:59.349759    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	
------- housekeeping could not kill the old pod 17b60a8466d5776e928eb61d70f0ad95
	I0818 02:10:02.447225    2462 kubelet.go:2188] "SyncLoop (housekeeping)"
I0818 02:10:02.453290    2462 kubelet_pods.go:1095] "Clean up pod workers for terminated pods"
I0818 02:10:02.453363    2462 pod_workers.go:1337] "Pod worker has been requested for removal but is still not fully terminated" podUID=17b60a8466d5776e928eb61d70f0ad95
I0818 02:10:02.453400    2462 kubelet_pods.go:1124] "Clean up probes for terminated pods"
I0818 02:10:02.462697    2462 kubelet_pods.go:1161] "Clean up orphaned pod statuses"
I0818 02:10:02.471704    2462 kubelet_pods.go:1180] "Clean up orphaned pod directories"
I0818 02:10:02.471894    2462 kubelet_pods.go:1191] "Clean up orphaned mirror pods"
I0818 02:10:02.471919    2462 kubelet_pods.go:1198] "Clean up orphaned pod cgroups"
I0818 02:10:02.471954    2462 kubelet.go:2196] "SyncLoop (housekeeping) end"



	 2994: I0818 02:10:11.446238    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 2995: I0818 02:10:11.446370    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	 3131: I0818 02:10:23.446217    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 3132: I0818 02:10:23.446400    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	 3194: I0818 02:10:35.446955    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 3195: I0818 02:10:35.447092    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	 3255: I0818 02:10:46.446683    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 3257: I0818 02:10:46.446896    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	 3690: I0818 02:10:59.446719    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 3691: I0818 02:10:59.446813    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	 3849: I0818 02:11:14.446552    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 3851: I0818 02:11:14.446760    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	 4019: I0818 02:11:28.447619    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 4020: I0818 02:11:28.447679    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	 4122: I0818 02:11:42.446872    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 4124: I0818 02:11:42.448804    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	 4197: I0818 02:11:55.446548    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 4198: I0818 02:11:55.446644    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c

---------- the old pod's mount-volumes timed out because of its deletion from the podManager
	 4579: E0818 02:12:02.065186    2462 kubelet.go:1755] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[pki log-dir webhook audit cluster-id local-time certs admission-control], unattached volumes=[pki log-dir webhook audit cluster-id local-time certs admission-control]: timed out waiting for the condition" pod="kube-system/kube-apiserver-1111-2222--ab01"
------------ the first sync for the old pod exited
	 4580: I0818 02:12:02.065221    2462 kubelet.go:1559] "syncPod exit" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=17b60a8466d5776e928eb61d70f0ad95 isTerminal=false
	 4581: I0818 02:12:02.065470    2462 event.go:291] "Event occurred" object="kube-system/kube-apiserver-1111-2222--ab01" kind="Pod" apiVersion="v1" type="Warning" reason="FailedMount" message="Unable to attach or mount volumes: unmounted volumes=[pki log-dir webhook audit cluster-id local-time certs admission-control], unattached volumes=[pki log-dir webhook audit cluster-id local-time certs admission-control]: timed out waiting for the condition"
	 4582: E0818 02:12:02.073699    2462 pod_workers.go:1012] "Error syncing pod, skipping" err="unmounted volumes=[pki log-dir webhook audit cluster-id local-time certs admission-control], unattached volumes=[pki log-dir webhook audit cluster-id local-time certs admission-control]: timed out waiting for the condition" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=17b60a8466d5776e928eb61d70f0ad95
	 4583: I0818 02:12:02.073757    2462 pod_workers.go:1039] "Processing pod event done" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=17b60a8466d5776e928eb61d70f0ad95 updateType=0
	 4584: I0818 02:12:02.073781    2462 pod_workers.go:921] "Processing pod event" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=17b60a8466d5776e928eb61d70f0ad95 updateType=0
	 4614: I0818 02:12:03.437531    2462 kubelet.go:1557] "syncPod enter" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=17b60a8466d5776e928eb61d70f0ad95
	 4615: I0818 02:12:03.437573    2462 kubelet_pods.go:1455] "Generating pod status" pod="kube-system/kube-apiserver-1111-2222--ab01"
	 4616: I0818 02:12:03.437639    2462 kubelet_pods.go:1467] "Got phase for pod" pod="kube-system/kube-apiserver-1111-2222--ab01" oldPhase=Pending phase=Running
	 4617: I0818 02:12:03.438070    2462 kubelet.go:1734] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-1111-2222--ab01"
	 4618: I0818 02:12:03.460103    2462 status_manager.go:670] "Pod was deleted and then recreated, skipping status update" pod="kube-system/kube-apiserver-1111-2222--ab01" oldPodUID=17b60a8466d5776e928eb61d70f0ad95 podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	 4619: E0818 02:12:03.488008    2462 kubelet.go:1736] "Failed creating a mirror pod for" err="pods \"kube-apiserver-1111-2222--ab01\" already exists" pod="kube-system/kube-apiserver-1111-2222--ab01"
	 4620: I0818 02:12:03.488102    2462 volume_manager.go:410] "Waiting for volumes to attach and mount for pod" pod="kube-system/kube-apiserver-1111-2222--ab01"
	

	 4649: I0818 02:12:09.446281    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 4650: I0818 02:12:09.446465    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	 4781: I0818 02:12:24.446856    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 4782: I0818 02:12:24.446984    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	 4935: I0818 02:12:38.447005    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=3 pods=[default/mypod1 kube-system/kube-apiserver-1111-2222--ab01 kube-system/kube-scheduler-1111-2222--ab01]
	 4937: I0818 02:12:38.447202    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	
------- housekeeping could not kill the old pod 17b60a8466d5776e928eb61d70f0ad95
	I0818 02:12:42.446881    2462 kubelet.go:2188] "SyncLoop (housekeeping)"
I0818 02:12:42.450366    2462 kubelet_pods.go:1095] "Clean up pod workers for terminated pods"
I0818 02:12:42.450420    2462 pod_workers.go:1337] "Pod worker has been requested for removal but is still not fully terminated" podUID=17b60a8466d5776e928eb61d70f0ad95
I0818 02:12:42.450488    2462 kubelet_pods.go:1124] "Clean up probes for terminated pods"
I0818 02:12:42.461041    2462 kubelet_pods.go:1161] "Clean up orphaned pod statuses"
I0818 02:12:42.470069    2462 kubelet_pods.go:1180] "Clean up orphaned pod directories"
I0818 02:12:42.470206    2462 kubelet_pods.go:1191] "Clean up orphaned mirror pods"
I0818 02:12:42.470224    2462 kubelet_pods.go:1198] "Clean up orphaned pod cgroups"
I0818 02:12:42.470246    2462 kubelet.go:2196] "SyncLoop (housekeeping) end"


	 5076: I0818 02:12:53.447158    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 5077: I0818 02:12:53.447313    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
	 5490: I0818 02:13:05.446641    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 5491: I0818 02:13:05.446762    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c
--------------- the new pod still could not start
	 5548: I0818 02:13:17.447184    2462 kubelet.go:2157] "SyncLoop (SYNC) pods" total=1 pods=[kube-system/kube-apiserver-1111-2222--ab01]
	 5549: I0818 02:13:17.447332    2462 pod_workers.go:915] "Pod cannot start yet" pod="kube-system/kube-apiserver-1111-2222--ab01" podUID=b7691e2ed5f0757e7c63d38cfe3e6e9c