kubernetes: Pod sandbox is killed and re-created when image pull fails
What happened: The pod sandbox is killed and re-created when the image pull fails.
root@k8s-master:~# kubectl describe po nginx
Name: nginx
Namespace: default
Priority: 0
Node: k8s-master/192.168.56.2
Start Time: Mon, 16 Sep 2019 23:45:44 -0400
Labels: <none>
Annotations: <none>
Status: Pending
IP: 110.244.0.185
Containers:
  app:
    Container ID:
    Image:          nginx:not-exist
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-72dkg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-72dkg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-72dkg
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason          Age                From                 Message
  ----     ------          ----               ----                 -------
  Normal   Scheduled       20s                default-scheduler    Successfully assigned default/nginx to k8s-master
  Normal   Pulling         19s                kubelet, k8s-master  Pulling image "nginx:not-exist"
  Warning  Failed          15s                kubelet, k8s-master  Failed to pull image "nginx:not-exist": rpc error: code = Unknown desc = Error response from daemon: manifest for nginx:not-exist not found
  Warning  Failed          15s                kubelet, k8s-master  Error: ErrImagePull
  Normal   SandboxChanged  11s (x2 over 14s)  kubelet, k8s-master  Pod sandbox changed, it will be killed and re-created.
  Normal   BackOff         8s (x4 over 12s)   kubelet, k8s-master  Back-off pulling image "nginx:not-exist"
  Warning  Failed          8s (x4 over 12s)   kubelet, k8s-master  Error: ImagePullBackOff
What you expected to happen: The pod sandbox should not be re-created; the kubelet only needs to reconcile the containers.
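For contrast, here is a minimal Go sketch of the expected decision (illustrative only, not the kubelet's actual code; the type PodStatus and the function sandboxNeedsRecreate are made-up names): only sandbox-level problems, such as a dead sandbox or a lost network namespace, should force a new sandbox, while a failed image pull is a container-level problem that should only lead to a pull retry.

package main

import "fmt"

// Illustrative stand-ins; these are simplified, made-up types, not kubelet API.
type SandboxState int

const (
	SandboxReady SandboxState = iota
	SandboxNotReady
)

type PodStatus struct {
	SandboxState SandboxState // state of the pause/sandbox container
	SandboxIP    string       // IP assigned to the sandbox's network namespace
	ImagePullOK  bool         // whether the app container image could be pulled
}

// sandboxNeedsRecreate captures the expected behaviour: recreate the sandbox
// only for sandbox-level failures; an image-pull failure is not one of them.
func sandboxNeedsRecreate(s PodStatus) bool {
	if s.SandboxState != SandboxReady {
		return true // the sandbox itself is dead or missing
	}
	if s.SandboxIP == "" {
		return true // the network namespace lost its IP
	}
	return false // keep the sandbox; only the container needs reconciling
}

func main() {
	// Situation from this report: sandbox up, IP assigned, image pull failing.
	status := PodStatus{SandboxState: SandboxReady, SandboxIP: "110.244.0.185", ImagePullOK: false}
	fmt.Println("recreate sandbox?", sandboxNeedsRecreate(status)) // expected: false
}

In the events above, however, the kubelet emits SandboxChanged and rebuilds the sandbox even though only the image pull failed.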
How to reproduce it (as minimally and precisely as possible):
kubectl create -f pod.yaml
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: default
spec:
  containers:
  - image: nginx:not-exist
    imagePullPolicy: IfNotPresent
    name: app
    ports:
    - containerPort: 80
      protocol: TCP
    resources: {}
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration:
- OS (e.g. cat /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
- Kernel (e.g. uname -a): Linux k8s-master 4.12.9-041209-generic #201708242344 SMP Fri Aug 25 03:47:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: kubeadm
- Network plugin and version (if this is a network-related bug): cilium
- Others:
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 17 (14 by maintainers)
I think I have found the problem. Thank you for your information @heartlock. pleg can only detect container state changes, excluding IP changes.
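To illustrate that point, a minimal Go sketch (illustrative only, not the real PLEG implementation; podRecord and generateEvents are made-up names): relisting derives events purely from container state transitions, so two relists that differ only in the pod IP produce no event.

package main

import "fmt"

// Simplified, made-up stand-ins for what a relist records per pod.
type podRecord struct {
	ContainerStates map[string]string // container name -> state ("running", "exited", ...)
	PodIP           string            // sandbox IP; note it is not compared below
}

// generateEvents compares two consecutive relists. Only container state
// transitions produce events; a PodIP change alone yields nothing, which is
// the limitation the comment above describes.
func generateEvents(prev, curr podRecord) []string {
	var events []string
	for name, state := range curr.ContainerStates {
		if prev.ContainerStates[name] != state {
			events = append(events, fmt.Sprintf("container %q: %q -> %q",
				name, prev.ContainerStates[name], state))
		}
	}
	return events
}

func main() {
	before := podRecord{ContainerStates: map[string]string{"app": "waiting"}, PodIP: "110.244.0.185"}
	after := podRecord{ContainerStates: map[string]string{"app": "waiting"}, PodIP: ""} // only the IP changed

	fmt.Println(generateEvents(before, after)) // prints [] -- the IP change is invisible here
}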