kubernetes: kube-proxy log shows Failed to retrieve node info: Unauthorized

/sig latest

What happened: kubeadm initialized successfully with the command below:

```
sudo kubeadm init --config kubeadm-config-external-etcd.yaml --upload-certs --v=6
```

Below is the kubeadm-config-external-etcd.yaml file:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "192.168.0.87:6443"
etcd:
  external:
    endpoints:
    - https://192.168.0.80:2379
    - https://192.168.0.84:2379
    - https://192.168.0.88:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
apiServer:
  certSANs:
  - "192.168.34.17"
  - "192.168.0.85"
  - "192.168.0.86"
  - "127.0.0.1"
  - "192.168.0.87"
  - "worker2.hadoop.com"
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
hostnameOverride: "ibhl-bcktst1"
```

What you expected to happen: Everything is fine except for the kube-proxy and weave-net pods. Below is the pod status:

```
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5644d7b6d9-dnz67               0/1     Pending   0          31h
coredns-5644d7b6d9-jtmkz               0/1     Pending   0          31h
kube-apiserver-ibhl-bcktst1            1/1     Running   0          6d
kube-controller-manager-ibhl-bcktst1   1/1     Running   0          8h
kube-proxy-9l6p9                       1/1     Running   3          52m
kube-scheduler-ibhl-bcktst1            1/1     Running   0          8h
weave-net-rr6rg                        1/2     Running   3          64m
```

How to reproduce it (as minimally and precisely as possible): I am using an HAProxy load balancer. Kubeadm initialization is fine, but why do we have a problem with kube-proxy authorization? Below are the logs of the failing kube-proxy and weave-net pods.

kube-proxy log:

```
hari@IBHL-BCKTST1:/$ kubectl logs kube-proxy-9l6p9 -n kube-system
W1023 15:20:56.676255       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
E1023 15:20:57.507015       1 node.go:124] Failed to retrieve node info: Unauthorized
E1023 15:20:58.777394       1 node.go:124] Failed to retrieve node info: Unauthorized
E1023 15:21:01.181747       1 node.go:124] Failed to retrieve node info: Unauthorized
E1023 15:21:06.090578       1 node.go:124] Failed to retrieve node info: Unauthorized
E1023 15:21:14.402591       1 node.go:124] Failed to retrieve node info: Unauthorized
F1023 15:21:14.402650       1 server.go:443] unable to get node IP for hostname ibhl-bcktst1
```

weave-net log:

```
hari@IBHL-BCKTST1:/$ kubectl logs weave-net-rr6rg -n kube-system -c weave
FATA: 2019/10/23 15:39:26.636111 [kube-peers] Could not get peers: Get https://10.96.0.1:443/api/v1/nodes: dial tcp 10.96.0.1:443: i/o timeout
Failed to get peers
```

Anything else we need to know?: kube-proxy running on the node communicates with the API server, which is on the same machine. Will the request go directly to the API server, or will it go through the configured load balancer? Why is there an Unauthorized error in the kube-proxy log? I think that once this proxy error is fixed, weave-net will be resolved automatically.
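For what it's worth, kubeadm renders the `controlPlaneEndpoint` from the ClusterConfiguration into the kubeconfig stored in the kube-proxy ConfigMap, so kube-proxy should reach the API server through the load balancer rather than directly. Assuming the default kubeadm object names, the endpoint it actually uses can be checked with:

```shell
# Show the API server address kube-proxy is configured with
# (kubeadm stores kube-proxy's kubeconfig in the kube-proxy ConfigMap in kube-system)
kubectl -n kube-system get configmap kube-proxy \
  -o jsonpath='{.data.kubeconfig\.conf}' | grep server
```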

Environment:

  • Kubernetes version (use kubectl version):

```
hari@IBHL-BCKTST1:/$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:15:39Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
```

  • Cloud provider or hardware configuration:

```
hari@IBHL-BCKTST1:/$ lspci -nnk
00:00.0 Host bridge [0600]: Intel Corporation 440FX - 82441FX PMC [Natoma] [8086:1237] (rev 02)
	Subsystem: Red Hat, Inc. Qemu virtual machine [1af4:1100]
00:01.0 ISA bridge [0601]: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] [8086:7000]
	Subsystem: Red Hat, Inc. Qemu virtual machine [1af4:1100]
00:01.1 IDE interface [0101]: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] [8086:7010]
	Subsystem: XenSource, Inc. 82371SB PIIX3 IDE [Natoma/Triton II] [5853:0001]
	Kernel driver in use: ata_piix
	Kernel modules: pata_acpi
00:01.2 USB controller [0c03]: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] [8086:7020] (rev 01)
	Subsystem: XenSource, Inc. 82371SB PIIX3 USB [Natoma/Triton II] [5853:0001]
	Kernel driver in use: uhci_hcd
00:01.3 Bridge [0680]: Intel Corporation 82371AB/EB/MB PIIX4 ACPI [8086:7113] (rev 01)
	Subsystem: Red Hat, Inc. Qemu virtual machine [1af4:1100]
	Kernel modules: i2c_piix4
00:02.0 VGA compatible controller [0300]: Cirrus Logic GD 5446 [1013:00b8]
	Subsystem: XenSource, Inc. GD 5446 [5853:0001]
	Kernel driver in use: cirrus
	Kernel modules: cirrusfb, cirrus
00:03.0 SCSI storage controller [0100]: XenSource, Inc. Xen Platform Device [5853:0001] (rev 01)
	Subsystem: XenSource, Inc. Xen Platform Device [5853:0001]
	Kernel driver in use: xen-platform-pci
```

  • OS (e.g: cat /etc/os-release):

```
hari@IBHL-BCKTST1:/$ cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.6 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.6 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
```

  • Kernel (e.g. uname -a):

```
hari@IBHL-BCKTST1:/$ uname -a
Linux IBHL-BCKTST1 4.15.0-66-generic #75~16.04.1-Ubuntu SMP Tue Oct 1 14:01:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```

  • Install tools:

  • Network plugin and version (if this is a network-related bug):

  • Others:

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Reactions: 3
  • Comments: 15 (5 by maintainers)

Most upvoted comments

Check the permissions assigned. They were expanded at some point:

```
kubectl get clusterrole system:node-proxier -o yaml
```

Look for a block like:

```yaml
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
```
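If the role looks right but access still fails, a quick way to test whether the kube-proxy service account is actually authorized (assuming the default kubeadm service account name) is:

```shell
# Ask the API server whether kube-proxy's service account may read node objects
kubectl auth can-i get nodes \
  --as=system:serviceaccount:kube-system:kube-proxy
```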

Check your kube-proxy-token-xxxxx secret; it might be using an expired token and/or cert. Try deleting it (it will be recreated), then delete the kube-proxy pods, and see if anything changes when those are recreated.
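A sketch of those steps, assuming the default kubeadm labels (the `xxxxx` suffix is a placeholder; look up the real secret name first):

```shell
# Find the service-account token secret for kube-proxy
kubectl -n kube-system get secrets | grep kube-proxy-token

# Delete it; the token controller recreates it with fresh credentials
kubectl -n kube-system delete secret kube-proxy-token-xxxxx

# Delete the kube-proxy pods so the DaemonSet recreates them
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
```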

+1 Same here with external etcd