microk8s: Pods stuck in ContainerCreating status, Failed create pod sandbox
When running “microk8s.enable dns dashboard”, the pods will stay in ContainerCreating status:
$ sudo snap install microk8s --beta --classic
microk8s (beta) v1.10.3 from 'canonical' installed
$ microk8s.kubectl get all
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   25s
$ microk8s.enable dns dashboard
Applying DNS manifest
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment.extensions "kube-dns" created
Restarting kubelet
Done
deployment.extensions "kubernetes-dashboard" created
service "kubernetes-dashboard" created
service "monitoring-grafana" created
replicationcontroller "monitoring-influxdb-grafana-v4" created
service "monitoring-influxdb" created
$ microk8s.kubectl get all --all-namespaces
NAMESPACE     NAME                                        READY     STATUS              RESTARTS   AGE
kube-system   pod/kube-dns-598d7bf7d4-f8lbm               0/3       ContainerCreating   0          9s
kube-system   pod/kubernetes-dashboard-545868474d-ltkg8   0/1       Pending             0          4s
kube-system   pod/monitoring-influxdb-grafana-v4-5qxm6    0/2       Pending             0          4s

NAMESPACE     NAME                                                    DESIRED   CURRENT   READY     AGE
kube-system   replicationcontroller/monitoring-influxdb-grafana-v4   1         1         0         4s

NAMESPACE     NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
default       service/kubernetes             ClusterIP   10.152.183.1     <none>        443/TCP             1m
kube-system   service/kube-dns               ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP       9s
kube-system   service/kubernetes-dashboard   ClusterIP   10.152.183.204   <none>        80/TCP              4s
kube-system   service/monitoring-grafana     ClusterIP   10.152.183.115   <none>        80/TCP              4s
kube-system   service/monitoring-influxdb    ClusterIP   10.152.183.228   <none>        8083/TCP,8086/TCP   4s

NAMESPACE     NAME                                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/kube-dns               1         1         1            0           9s
kube-system   deployment.apps/kubernetes-dashboard   1         1         1            0           4s

NAMESPACE     NAME                                              DESIRED   CURRENT   READY     AGE
kube-system   replicaset.apps/kube-dns-598d7bf7d4               1         1         0         9s
kube-system   replicaset.apps/kubernetes-dashboard-545868474d   1         1         0         4s
The pods never progress past ContainerCreating. Describing the dashboard pod shows a FailedCreatePodSandBox event:
$ microk8s.kubectl describe pod/kubernetes-dashboard-545868474d-ltkg8 --namespace kube-system
Name:           kubernetes-dashboard-545868474d-ltkg8
Namespace:      kube-system
Node:           <hostname>/192.168.1.17
Start Time:     Tue, 12 Jun 2018 14:33:39 -0400
Labels:         k8s-app=kubernetes-dashboard
                pod-template-hash=1014240308
Annotations:    scheduler.alpha.kubernetes.io/critical-pod=
Status:         Pending
IP:
Controlled By:  ReplicaSet/kubernetes-dashboard-545868474d
Containers:
  kubernetes-dashboard:
    Container ID:
    Image:          gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0
    Image ID:
    Port:           9090/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:        100m
      memory:     50Mi
    Liveness:     http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vxq5n (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  default-token-vxq5n:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vxq5n
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
Events:
  Type     Reason                  Age                From                  Message
  ----     ------                  ----               ----                  -------
  Normal   Scheduled               13m                default-scheduler     Successfully assigned kubernetes-dashboard-545868474d-ltkg8 to <hostname>
  Normal   SuccessfulMountVolume   13m                kubelet, <hostname>   MountVolume.SetUp succeeded for volume "default-token-vxq5n"
  Warning  FailedCreatePodSandBox  13m                kubelet, <hostname>   Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin kubenet failed to set up pod "kubernetes-dashboard-545868474d-ltkg8_kube-system" network: Error adding container to network: failed to Statfs "/proc/6763/ns/net": permission denied
  Normal   SandboxChanged          3m (x40 over 13m)  kubelet, <hostname>   Pod sandbox changed, it will be killed and re-created.
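The failing call is Statfs on "/proc/6763/ns/net": the kubenet network plugin is denied access to the sandbox's network namespace on the host, which points at host permissions (e.g. AppArmor confinement on Ubuntu) rather than at the DNS or dashboard manifests. A minimal diagnostic sketch, assuming an Ubuntu host with AppArmor enabled; the journalctl service name below is an assumption, so check the real names with "snap services" first:
# Run microk8s' built-in troubleshooting report:
$ sudo microk8s.inspect
# Look for AppArmor denials logged around the sandbox failure;
# "permission denied" on /proc/<pid>/ns/net is consistent with confinement:
$ sudo dmesg | grep -i 'apparmor.*denied' | tail -n 20
# Pull the kubelet log for the full sandbox error (service name assumed):
$ sudo journalctl -u snap.microk8s.daemon-kubelet -n 100 --no-pager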
I am getting the same problem. How can I solve it?
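A hedged recovery sketch, not a confirmed fix for this exact report: cycling the snap restarts every microk8s daemon and forces the pod sandboxes to be recreated, which is a cheap first step before digging into host permissions.
$ sudo snap disable microk8s
$ sudo snap enable microk8s
# Check whether the pods now progress past ContainerCreating:
$ microk8s.kubectl get pods --all-namespaces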