kubernetes: Pod Eviction Timeout not respected
What happened?
- I created a Deployment with 3 replicas.
- On one of the nodes running a replica, I added the following iptables rule (after first confirming that I could still SSH into the node with this rule in place):
sudo iptables -P OUTPUT DROP
The node transitions to NotReady in about 40 seconds, matching the default node-monitor-grace-period. However, the pod (replica) running on that node is terminated after only about 1 minute 10 seconds.
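A minimal sketch of the repro on the node. The conntrack ACCEPT rule is an assumption on my part (the original report only ran the DROP policy after verifying SSH access); it is the usual way to keep an established SSH session alive while all other outbound traffic is dropped:

```shell
# Run on the target node. Allow replies on already-established connections
# (keeps the current SSH session alive), then drop all other outbound
# traffic so the kubelet can no longer reach the API server.
sudo iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -P OUTPUT DROP

# To undo after the test:
#   sudo iptables -P OUTPUT ACCEPT
#   sudo iptables -D OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```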
What did you expect to happen?
Per the default --pod-eviction-timeout of 5 minutes, it should take at least 5 minutes for the pod to move to the Terminating state.
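Note that in v1.19, eviction on node failure is driven by taint-based eviction rather than the controller-manager's --pod-eviction-timeout: the node receives a node.kubernetes.io/unreachable:NoExecute taint, and the pod's tolerationSeconds (300s by default, injected by the DefaultTolerationSeconds admission plugin) controls when the pod is deleted. A shorter effective timeout usually means the pod carries a non-default toleration, or the API server was started with a smaller --default-unreachable-toleration-seconds. For reference, the upstream-default tolerations look like this (values shown are the defaults, not taken from this cluster):

```yaml
# Injected automatically by the DefaultTolerationSeconds admission plugin;
# the pod is evicted after ~300s of the node being not-ready/unreachable.
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 300
```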
How can we reproduce it (as minimally and precisely as possible)?
See the steps above.
Anything else we need to know?
No response
Kubernetes version
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:23:04Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider
GCP
OS version
# On Linux:
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.7 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.7 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
$ uname -a
Linux test-node-0 4.15.0-1080-gcp #90~16.04.1-Ubuntu SMP Fri Jul 10 19:11:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Install tools
Container runtime (CRI) and version (if applicable)
Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 17.06.2-ee-19
RuntimeApiVersion: 1.30.0
Related plugins (CNI, CSI, …) and versions (if applicable)
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 18 (10 by maintainers)
You are right, thanks for catching that. I only checked the kubelet and controller-manager configs for misconfiguration.
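As a follow-up for anyone hitting the same thing: the flags worth checking live on the API server as well, not just the kubelet and controller-manager. These inspection commands are a sketch (they assume the components run as local processes or static pods on a control-plane node, and a cluster to run kubectl against):

```shell
# On a control-plane node: show the eviction-related flags, if set.
ps -ef | grep -o -- '--pod-eviction-timeout=[^ ]*'
ps -ef | grep -o -- '--default-unreachable-toleration-seconds=[^ ]*'

# And check the tolerations actually present on the affected pod
# (<pod-name> is a placeholder):
kubectl get pod <pod-name> -o jsonpath='{.spec.tolerations}'
```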