kubernetes: 1.16 upgrade causes nodes to never enter Ready state on arm64

What happened: Upgrading to 1.16.0 causes cluster nodes to never become Ready on arm64

What you expected to happen: The cluster nodes should become Ready

How to reproduce it (as minimally and precisely as possible):

1. kubeadm init --pod-network-cidr=10.244.0.0/16
2. Join from two other ARM boards using the command output by kubeadm init (the join command takes the form shown below)
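For reference, the join command printed by kubeadm init takes roughly the following form; the token and CA-cert hash below are placeholders, not values from this cluster:

$ kubeadm join 192.168.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>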

Anything else we need to know?: Docker version output below.

$ docker version
Client:
 Version:           18.09.1
 API version:       1.39
 Go version:        go1.11.6
 Git commit:        4c52b90
 Built:             Fri, 13 Sep 2019 10:45:43 +0100
 OS/Arch:           linux/arm
 Experimental:      false

Server:
 Engine:
  Version:          18.09.1
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.6
  Git commit:       4c52b90
  Built:            Fri Sep 13 09:45:43 2019
  OS/Arch:          linux/arm
  Experimental:     false
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-19T13:57:45Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/arm"}
$ kubectl describe node pi0
Name:               pi0
Roles:              master
Labels:             beta.kubernetes.io/arch=arm
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm
                    kubernetes.io/hostname=pi0
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:4b:79:7e:8f:18"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.0.10
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 22 Sep 2019 17:08:17 -0600
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 22 Sep 2019 17:33:20 -0600   Sun, 22 Sep 2019 17:08:17 -0600   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 22 Sep 2019 17:33:20 -0600   Sun, 22 Sep 2019 17:08:17 -0600   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 22 Sep 2019 17:33:20 -0600   Sun, 22 Sep 2019 17:08:17 -0600   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Sun, 22 Sep 2019 17:33:20 -0600   Sun, 22 Sep 2019 17:08:17 -0600   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized. WARNING: CPU hardcapping unsupported
Addresses:
  InternalIP:  192.168.0.10
  Hostname:    pi0
Capacity:
 cpu:                4
 ephemeral-storage:  30400236Ki
 memory:             3999784Ki
 pods:               110
Allocatable:
 cpu:                4
 ephemeral-storage:  28016857452
 memory:             3897384Ki
 pods:               110
System Info:
 Machine ID:                 76ffec47b3ac4c9e83794145c04ce02f
 System UUID:                76ffec47b3ac4c9e83794145c04ce02f
 Boot ID:                    20eed1f7-6a75-46af-8cff-c54e42b0896f
 Kernel Version:             4.19.66-v7l+
 OS Image:                   Raspbian GNU/Linux 10 (buster)
 Operating System:           linux
 Architecture:               arm
 Container Runtime Version:  docker://18.9.1
 Kubelet Version:            v1.16.0
 Kube-Proxy Version:         v1.16.0
PodCIDR:                     10.244.0.0/24
PodCIDRs:                    10.244.0.0/24
Non-terminated Pods:         (6 in total)
  Namespace                  Name                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                           ------------  ----------  ---------------  -------------  ---
  kube-system                etcd-pi0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
  kube-system                kube-apiserver-pi0             250m (6%)     0 (0%)      0 (0%)           0 (0%)         24m
  kube-system                kube-controller-manager-pi0    200m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
  kube-system                kube-flannel-ds-arm-d78ct      100m (2%)     100m (2%)   50Mi (1%)        50Mi (1%)      18m
  kube-system                kube-proxy-tmtm8               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
  kube-system                kube-scheduler-pi0             100m (2%)     0 (0%)      0 (0%)           0 (0%)         24m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (16%)  100m (2%)
  memory             50Mi (1%)   50Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From             Message
  ----    ------                   ----               ----             -------
  Normal  NodeAllocatableEnforced  26m                kubelet, pi0     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet, pi0     Node pi0 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet, pi0     Node pi0 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet, pi0     Node pi0 status is now: NodeHasSufficientPID
  Normal  Starting                 25m                kube-proxy, pi0  Starting kube-proxy.

Environment: Raspberry Pi 4 (4 cores, 4 GB of RAM)

  • Kubernetes version (use kubectl version): Included above

  • Cloud provider or hardware configuration: Hardware (Raspberry Pi 4, see above)

  • OS (e.g: cat /etc/os-release): Raspbian GNU/Linux 10 (buster)

  • Kernel (e.g. uname -a): Linux pi0 4.19.66-v7l+ #1253 SMP Thu Aug 15 12:02:08 BST 2019 armv7l GNU/Linux

  • Install tools: kubeadm

  • Network plugin and version (if this is a network-related bug): flannel. I tried adding flannel after the fact, but it does not change the node state (a typical install command is shown after this list).

  • Others: Sig - @kubernetes/network
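For context, flannel is typically installed on a kubeadm cluster by applying a single manifest; the URL below was the commonly referenced upstream location at the time and is an assumption, not taken from this report:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml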

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 1
  • Comments: 24 (7 by maintainers)

Most upvoted comments

Can confirm on both CentOS 7 x86_64 and Raspbian armv7l. After upgrading to v1.16.0, previously functional nodes all report:

KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

A downgrade to v1.15.4 and a reset restored functionality.
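For reference, one possible downgrade sequence on a Debian/Raspbian node looks roughly like this; it assumes the kubeadm/kubelet/kubectl packages come from the apt.kubernetes.io repository and that losing the node's state via kubeadm reset is acceptable:

$ sudo kubeadm reset
# if the packages were pinned with apt-mark hold, release them first
$ sudo apt-mark unhold kubeadm kubelet kubectl
$ sudo apt-get install -y --allow-downgrades kubeadm=1.15.4-00 kubelet=1.15.4-00 kubectl=1.15.4-00
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16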

@jeremyje sudo sed -i '/"name": "cbr0",/a"cniVersion": "0.2.0",' /etc/cni/net.d/10-flannel.conflist is a temporary fix; the change is lost if the flannel pod is restarted.
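After that sed runs, /etc/cni/net.d/10-flannel.conflist should look roughly like this (the plugins list below is taken from the stock flannel manifest and may differ slightly in your setup):

{
  "name": "cbr0",
  "cniVersion": "0.2.0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}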

I was able to resolve this by applying a fixed kube-flannel.yml (with cniVersion set) and then deleting the affected flannel pod so that it restarted; my upgraded test nodes now report Ready. There is a PR against kube-flannel.yml from lwr20 that should resolve this issue going forward.
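The relevant change lands in the kube-flannel-cfg ConfigMap inside kube-flannel.yml; a sketch of the patched cni-conf.json key and the commands to roll it out follows (the ConfigMap name and pod label come from the stock manifest and may differ if you have customized it):

data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.2.0",
      "plugins": [
        ...unchanged from the stock manifest...
      ]
    }

$ kubectl apply -f kube-flannel.yml
$ kubectl -n kube-system delete pod -l app=flannel

Deleting the pods makes the DaemonSet recreate them; in the stock manifest the install-cni init container then rewrites /etc/cni/net.d/10-flannel.conflist with the new cniVersion, after which the kubelet should report the node Ready.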

I can confirm the issue on Ubuntu 18.04 with all patches and kubernetes-cni 0.7.5. Is there a way to manually add a CNI version line to the config? What should it look like?