cilium: Observing ImagePullBackOff and CrashLoopBackOff in Kubernetes example with cluster built with kops running in AWS
When following the Getting Started docs for installing Cilium on Kubernetes, the pods created end up in either an ImagePullBackOff state or a CrashLoopBackOff state, as follows:
$ kubectl get pods --namespace kube-system
NAME           READY     STATUS             RESTARTS   AGE
cilium-gbz8v   0/1       Error              0          1m
cilium-pxfz7   0/1       CrashLoopBackOff   3          1m
cilium-z2cnm   0/1       ImagePullBackOff   0          1m
...
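To see at a glance which pods are unhealthy, the listing above can be filtered for any status other than Running. A minimal sketch using awk; the listing is captured in a variable here so the snippet is self-contained, but against a live cluster you would pipe `kubectl get pods --namespace kube-system` straight into awk:

```shell
# Sample output captured from the listing above.
pods='NAME           READY     STATUS             RESTARTS   AGE
cilium-gbz8v   0/1       Error              0          1m
cilium-pxfz7   0/1       CrashLoopBackOff   3          1m
cilium-z2cnm   0/1       ImagePullBackOff   0          1m'

# Skip the header row (NR > 1) and print the name and status of
# every pod whose STATUS column is not "Running".
echo "$pods" | awk 'NR > 1 && $3 != "Running" { print $1, $3 }'
```

From there, `kubectl describe pod <name> --namespace kube-system` shows the events explaining each state.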
General Information
- Cilium version: Docker image cilium/cilium:stable
- Kernel version: 4.4.111-k8s
- Orchestration system version in use: Kubernetes 1.8.6
- Link to relevant artifacts: https://github.com/cilium/cilium/blob/master/examples/kubernetes/cilium.yaml
How to reproduce the issue
Create a k8s cluster using:
kops create cluster \
--cloud=aws \
--name=<REDACTED> \
--state=s3://<REDACTED> \
--zones=us-west-2b \
--node-count=2 \
--node-size=t2.medium \
--master-size=m4.large \
--dns=private \
--dns-zone=<REDACTED> \
--yes
After about 15 minutes the nodes will be up and DNS records will have been propagated. Execute the command:
$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/HEAD/examples/kubernetes/cilium.yaml
and observe state of pods over a few minutes.
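For the ImagePullBackOff case, a useful first check is which image reference the manifest actually pins, since a tag that does not exist in the registry produces exactly this state. A sketch that extracts the image line from a DaemonSet fragment; the fragment below is an assumed illustration of the shape of cilium.yaml, not the actual file contents:

```shell
# Trimmed DaemonSet fragment of the shape found in cilium.yaml
# (illustrative only; fetch the real manifest to inspect it).
manifest='apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: cilium
spec:
  template:
    spec:
      containers:
      - name: cilium-agent
        image: cilium/cilium:stable'

# Print the image reference the pods will try to pull.
echo "$manifest" | awk '$1 == "image:" { print $2 }'
```

The printed reference can then be checked against the tags actually published on Docker Hub.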
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Comments: 27 (16 by maintainers)
@natemurthy
v1.0.0-rc7, which was released yesterday, contains the fix. The stable tag contains the fix as well.
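For a cluster still running the broken image, one way to pick up the fix is to repoint the manifest at the released tag. A minimal sketch rewriting the image tag with sed; the manifest line is an assumed illustration, and on a live cluster applying an updated cilium.yaml (or `kubectl set image`) achieves the same:

```shell
# Rewrite the image tag from `stable` to the release named in the
# comment above (v1.0.0-rc7). Illustrative single line, not the
# full cilium.yaml.
line='        image: cilium/cilium:stable'
echo "$line" | sed 's|cilium/cilium:stable|cilium/cilium:v1.0.0-rc7|'
```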