kubeadm: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
- I’ve followed this guide.
- I’ve installed the master node on a 96 CPU ARM64 server.
- OS is Ubuntu 18.04 LTS, just after `apt-get update/upgrade`.
- Used `kubeadm init --pod-network-cidr=10.244.0.0/16`, and then executed the suggested commands.
- Selected the flannel pod network (a scripted version of these steps follows the pod listing below):
  - `sysctl net.bridge.bridge-nf-call-iptables=1`
  - `wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml`
  - `vim kube-flannel.yml`, replace `amd64` with `arm64`
  - `kubectl apply -f kube-flannel.yml`
- `kubectl get pods --all-namespaces`:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-ls44z 1/1 Running 0 20m
kube-system coredns-78fcdf6894-njnnt 1/1 Running 0 20m
kube-system etcd-devstats.team.io 1/1 Running 0 20m
kube-system kube-apiserver-devstats.team.io 1/1 Running 0 20m
kube-system kube-controller-manager-devstats.team.io 1/1 Running 0 20m
kube-system kube-flannel-ds-v4t8s 1/1 Running 0 13m
kube-system kube-proxy-5825g 1/1 Running 0 20m
kube-system kube-scheduler-devstats.team.io 1/1 Running 0 20m
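For reference, the flannel setup above can be scripted; a minimal sketch, assuming the v0.10.0 manifest layout (the `sed` substitution stands in for the manual `vim` edit):

```bash
# Let bridged traffic pass through iptables (required by flannel):
sysctl net.bridge.bridge-nf-call-iptables=1

# Fetch the flannel manifest and switch every amd64 reference to arm64,
# matching the manual edit described above:
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
sed -i 's/amd64/arm64/g' kube-flannel.yml

kubectl apply -f kube-flannel.yml
```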
Then joined two AMD64 nodes using the join command from the `kubeadm init` output:
1st node:
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0802 10:26:49.987467 16652 kernel_validator.go:81] Validating kernel version
I0802 10:26:49.987709 16652 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "cncftest.io" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
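(As an aside, the IPVS warning in the join output can be addressed by loading the listed modules; a sketch, assuming a 4.x kernel where the conntrack module is still named `nf_conntrack_ipv4`:)

```bash
# Load the kernel modules kube-proxy's IPVS check asks for:
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
  modprobe "$m"
done

# Persist the modules across reboots:
printf '%s\n' ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4 \
  > /etc/modules-load.d/ipvs.conf
```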
2nd node:
[preflight] running pre-flight checks
[WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh] or no builtin kernel ipvs support: map[ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{} ip_vs:{}]
you can solve this problem with following methods:
1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support
I0802 10:26:58.913060 38617 kernel_validator.go:81] Validating kernel version
I0802 10:26:58.913222 38617 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[discovery] Trying to connect to API Server "147.75.97.234:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://147.75.97.234:6443"
[discovery] Requesting info from "https://147.75.97.234:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "147.75.97.234:6443"
[discovery] Successfully established connection with API Server "147.75.97.234:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "devstats.cncf.io" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
But on the master, `kubectl get nodes` shows:
NAME STATUS ROLES AGE VERSION
cncftest.io NotReady <none> 7m v1.11.1
devstats.cncf.io NotReady <none> 7m v1.11.1
devstats.team.io Ready master 21m v1.11.1
And then `kubectl describe nodes` (master is `devstats.team.io`; nodes are `cncftest.io` and `devstats.cncf.io`):
Name: cncftest.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=cncftest.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 10:26:53 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:26:52 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.205.79
Hostname: cncftest.io
Capacity:
cpu: 48
ephemeral-storage: 459266000Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264047752Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 423259544900
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263945352Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 4C4C4544-0052-3310-804A-B7C04F4E4432
Boot ID: d87670d9-251e-42a5-90c5-5d63059f03ab
Kernel Version: 4.15.0-22-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.12.1-ce
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
PodCIDR: 10.244.1.0/24
Non-terminated Pods: (0 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 0 (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 8m kubelet, cncftest.io Starting kubelet.
Normal NodeHasSufficientDisk 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m (x2 over 8m) kubelet, cncftest.io Node cncftest.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m kubelet, cncftest.io Updated Node Allocatable limit across pods
Name: devstats.cncf.io
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.cncf.io
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 10:27:00 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 02 Aug 2018 10:34:51 +0000 Thu, 02 Aug 2018 10:27:00 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 147.75.78.47
Hostname: devstats.cncf.io
Capacity:
cpu: 48
ephemeral-storage: 142124052Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 264027220Ki
pods: 110
Allocatable:
cpu: 48
ephemeral-storage: 130981526107
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 263924820Ki
pods: 110
System Info:
Machine ID: d1c2fc94ee6d41ca967c4d43504af50c
System UUID: 00000000-0000-0000-0000-0CC47AF37CF2
Boot ID: f257b606-5da2-43fd-8782-0aa4484037f4
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.12.1-ce
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
PodCIDR: 10.244.2.0/24
Non-terminated Pods: (0 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 0 (0%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 7m kubelet, devstats.cncf.io Starting kubelet.
Normal NodeHasSufficientDisk 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m kubelet, devstats.cncf.io Node devstats.cncf.io status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 7m kubelet, devstats.cncf.io Updated Node Allocatable limit across pods
Name: devstats.team.io
Roles: master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=devstats.team.io
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data={"VtepMAC":"9a:7f:81:2c:4e:16"}
flannel.alpha.coreos.com/backend-type=vxlan
flannel.alpha.coreos.com/kube-subnet-manager=true
flannel.alpha.coreos.com/public-ip=147.75.97.234
kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 02 Aug 2018 10:12:56 +0000
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:12:56 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 02 Aug 2018 10:34:49 +0000 Thu, 02 Aug 2018 10:21:07 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 147.75.97.234
Hostname: devstats.team.io
Capacity:
cpu: 96
ephemeral-storage: 322988584Ki
hugepages-2Mi: 0
memory: 131731468Ki
pods: 110
Allocatable:
cpu: 96
ephemeral-storage: 297666278522
hugepages-2Mi: 0
memory: 131629068Ki
pods: 110
System Info:
Machine ID: 5eaa89a81ff348399284bb4cb016ffd7
System UUID: 10000000-FAC5-FFFF-A81D-FC15B4970493
Boot ID: 43b920e3-34e7-4de3-aa6c-8b5c525363ff
Kernel Version: 4.15.0-20-generic
OS Image: Ubuntu 18.04.1 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://17.12.1-ce
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system coredns-78fcdf6894-ls44z 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%)
kube-system coredns-78fcdf6894-njnnt 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%)
kube-system etcd-devstats.team.io 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-devstats.team.io 250m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-devstats.team.io 200m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-flannel-ds-v4t8s 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%)
kube-system kube-proxy-5825g 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-devstats.team.io 100m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (0%) 100m (0%)
memory 190Mi (0%) 390Mi (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 23m kubelet, devstats.team.io Starting kubelet.
Normal NodeAllocatableEnforced 23m kubelet, devstats.team.io Updated Node Allocatable limit across pods
Normal NodeHasSufficientPID 23m (x5 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientPID
Normal NodeHasSufficientDisk 23m (x6 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 23m (x6 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 23m (x6 over 23m) kubelet, devstats.team.io Node devstats.team.io status is now: NodeHasNoDiskPressure
Normal Starting 21m kube-proxy, devstats.team.io Starting kube-proxy.
Normal NodeReady 13m kubelet, devstats.team.io Node devstats.team.io status is now: NodeReady
Versions
- kubeadm version (use `kubeadm version`):
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
Environment:
- Kubernetes version (use `kubectl version`):
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/arm64"}
- Cloud provider or hardware configuration:
- Master: Bare metal server 96 cores, ARM64, 128G RAM, swap turned off.
- Nodes (2): bare metal servers, 48 cores, AMD64, 256G RAM, swap turned off.
- `uname -a`: Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
- OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
`lsb_release -a`:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.1 LTS
Release: 18.04
Codename: bionic
- Kernel (e.g. `uname -a`): Linux devstats.team.io 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:20 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux
- Others: `docker version`:
Client:
Version: 17.12.1-ce
API version: 1.35
Go version: go1.10.1
Git commit: 7390fc6
Built: Wed Apr 18 01:26:37 2018
OS/Arch: linux/arm64
Server:
Engine:
Version: 17.12.1-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.10.1
Git commit: 7390fc6
Built: Wed Feb 28 17:46:05 2018
OS/Arch: linux/arm64
Experimental: false
What happened?
The exact error seems to be:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
On the node, `cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf`:
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
From this thread (there is no `KUBELET_NETWORK_ARGS` there).
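(For illustration: the `/etc/default/kubelet` file referenced by the drop-in above is where last-resort kubelet flag overrides go; the flag and value below are hypothetical, not taken from this report:)

```bash
# /etc/default/kubelet -- sourced by the systemd drop-in via EnvironmentFile.
# Hypothetical example: pin the IP the kubelet advertises for this node.
KUBELET_EXTRA_ARGS=--node-ip=147.75.205.79
```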
`journalctl -xe` on the node:
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: W0802 10:44:51.040663 38796 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 02 10:44:51 devstats.cncf.io kubelet[38796]: E0802 10:44:51.040876 38796 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
The directory `/etc/cni/net.d` exists, but is empty.
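(A few checks that narrow down where this breaks, using the object names from the listings above; a sketch:)

```bash
# On the affected node: the CNI plugin is expected to write its config here
# once the flannel pod starts on that node.
ls -la /etc/cni/net.d

# On the master: was a flannel pod scheduled on the new node at all?
kubectl -n kube-system get pods -o wide | grep flannel

# Inspect the DaemonSet's nodeSelector and image; a manifest hand-edited from
# amd64 to arm64 pins the arm64 flannel image and node selector cluster-wide,
# so it can never run on the amd64 workers.
kubectl -n kube-system get ds kube-flannel-ds -o yaml | grep -B1 -A3 nodeSelector
kubectl -n kube-system get ds kube-flannel-ds \
  -o jsonpath='{.spec.template.spec.containers[*].image}'
```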
What you expected to happen?
All nodes in the `Ready` state.
How to reproduce it (as minimally and precisely as possible)?
Just follow the steps from the tutorial. I tried 3 times and it happens every time.
Anything else we need to know?
Master is ARM64; the 2 nodes are AMD64. Master and one node are in Amsterdam, and the 2nd node is in the USA.
I can use `kubectl taint nodes --all node-role.kubernetes.io/master-` to run pods on the master, but this is not a solution. I want to have a real multi-node cluster to work with.
About this issue
- State: closed
- Created 6 years ago
- Reactions: 33
- Comments: 72 (30 by maintainers)
@lukasredynk
yeah, so this is an arch issue after all, thanks for confirming. let’s focus on flannel here as the weave issue seems like a tangent one.
have a look at this by @luxas for context, if not seen it already: https://github.com/luxas/kubeadm-workshop
it should but the manifest you are downloading is not a “fat” one: https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
as far as i understand it the arch taints are propagated and you need to fix that with `kubectl` on each node (?).
looks like a “fat” manifest is in master and was added here: https://github.com/coreos/flannel/commit/c5d10c8b16d7c43bfa6e8408673c6d95404bd807#diff-7891b552b026259e99d479b5e30d31ca
related issue/pr: https://github.com/coreos/flannel/issues/663 https://github.com/coreos/flannel/pull/989
my assumption is that this is bleeding edge and you have to use: `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml`
so bring the cluster down and give that a try and hope it works. our CNI docs would need a bump, yet this needs to happen when `flannel-next` is released.

There is small progress. I’ve installed the master on amd64, then one node on amd64 too. All worked fine. I’ve added an arm64 node and now I have: master amd64: Ready, node amd64: Ready, node arm64: NotReady:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
The `flannel` net plugin cannot talk between different architectures, and arm64 cannot be used as a master at all (same "cni config uninitialized" error as above).
Any suggestions on what I should do? Where should I report this? I already have a 2-node cluster (amd64 master and amd64 node), but I want to help resolve this issue so one can use a master of any arch with nodes of any arch, just OOTB.
I had a similar case where I was creating the network plugin before linking the workers, which left /etc/cni/net.d missing. I re-executed the configuration after linking the worker nodes using:
`kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml`
As a result, the configuration in /etc/cni/net.d was created successfully and the node showed in a Ready state. Hope that helps anyone with the same issue.
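(Spelled out, the ordering this commenter describes, joining the workers first and then (re)applying the CNI manifest, looks roughly like this; the token and hash are placeholders:)

```bash
# On each worker, join using the command printed by `kubeadm init`:
kubeadm join 147.75.97.234:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# Then, on the master, (re)apply the flannel manifest so the DaemonSet
# rolls out to the now-registered workers:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```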
The file /etc/cni/net.d/10-flannel.conflist was missing the cniVersion key in its config.
Adding `"cniVersion": "0.2.0"` solved the issue.
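(A minimal sketch of the fixed file, assuming the stock flannel conflist contents; the `cniVersion` line is the only addition:)

```bash
cat <<'EOF' > /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.2.0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
EOF

# Restart the kubelet so it re-reads the CNI config directory:
systemctl restart kubelet
```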
I ran this command and it solved my issue: `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml`
This creates a file named 10-flannel.conflist in the /etc/cni/net.d directory. I believe that Kubernetes requires a network, which is set up by this package. My cluster is in the following state:
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   3h37m   v1.14.1
node001      Ready    <none>   3h6m    v1.14.1
node02       Ready    <none>   167m    v1.14.1
That just did it!
Faced the same issue here. `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml` worked for me.
So, I’ve installed the master (`kubeadm init`) on the amd64 host and tried `weave net`, and the result is exactly the same as when trying this on the arm64 host:
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
the flannel repository needed a fix. the kubeadm guide for installing flannel was just updated, see: https://github.com/kubernetes/website/pull/16575/files
I faced the issue when I updated to v1.16.0 from v1.15.
Sorry, I cannot help, I don’t have ARM64 nodes anymore, now I have a 4 node AMD64 bare-metal cluster.
Ok, I have the same error, running coredns instead of Flannel. I need to check the logs to see what is wrong. If I run `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml`, then both nodes become ready instantly. But I must use coredns; also, this is a pre-built image, so I need to troubleshoot it anyway.

Reinstall docker on the NotReady node. Worked for me.
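(For reference, that reinstall workaround might look like the following on Ubuntu; the `docker-ce` package name is an assumption based on the Docker version shown earlier:)

```bash
# Reinstall the container runtime on the NotReady node, then restart
# docker and the kubelet. Adjust the package name to whatever is installed.
apt-get install --reinstall -y docker-ce
systemctl restart docker kubelet
```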
ok, understood. in that case i’m closing the issue. thanks.
So, I’ve tried `weave net` and it is not working. On master:
- `kubectl get nodes`:
- `kubectl describe nodes` (the same CNI-related error, but also on the master node now):
- `journalctl -xe` on master:
- `kubectl get po --all-namespaces`:
- `kubectl describe po --all-namespaces`:
- `kubectl --v=8 logs --namespace=kube-system weave-net-2fsrf --all-containers=true`:
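(One more check that helps in the weave case; the exec path into the weave container is an assumption based on weave's standard troubleshooting docs:)

```bash
# Query weave's own status from inside the weave-net pod named above:
kubectl -n kube-system exec weave-net-2fsrf -c weave -- \
  /home/weave/weave --local status
```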