kubernetes: kube-apiserver: failover on multi-member etcd cluster fails certificate check on DNS mismatch
What happened: kube-apiserver connects to etcd over HTTPS, but certificate verification fails because the client checks the certificate presented by one etcd member against the DNS name of a different member:
Sep 23 18:36:42 kube-control-plane-to6oho0e kube-apiserver[18881]: W0923 18:36:42.109767 18881 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://kube-control-plane-mo2phooj.k8s.lan:2379 0 <nil>}. Err :connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for localhost, kube-control-plane-mo2phooj.k8s.lan, not kube-control-plane-baeg4ahr.k8s.lan". Reconnecting...
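For what it's worth, the failure itself is plain x509 hostname verification. Below is a minimal Go sketch (the certificate path is a placeholder, not from this cluster) showing that the mo2phooj serving certificate is accepted for its own DNS name but rejected for another member's name, which is exactly the error in the log above:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Placeholder path: the serving certificate of the etcd member being dialed.
	pemBytes, err := os.ReadFile("/etc/etcd/pki/kube-control-plane-mo2phooj.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in certificate file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Passes: the SANs include the member's own DNS name (and localhost).
	fmt.Println(cert.VerifyHostname("kube-control-plane-mo2phooj.k8s.lan"))

	// Fails: the client-side check in the log used a different member's name,
	// which is not in this certificate's SANs.
	fmt.Println(cert.VerifyHostname("kube-control-plane-baeg4ahr.k8s.lan"))
}
```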
What you expected to happen: when kube-apiserver connects to kube-control-plane-mo2phooj, which presents a certificate valid for its own hostname, the handshake should not fail just because the client is looking for another etcd node's certificate.
How to reproduce it (as minimally and precisely as possible): set up an etcd 3.4 cluster over HTTPS with 3 nodes, each node with its own TLS certificate (valid only for that node's hostname), and point kube-apiserver at all three endpoints.
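For context, here is a minimal Go sketch of a client wired up the way this setup implies (endpoint names, file paths, and the pinned ServerName are assumptions for illustration, not kube-apiserver's actual internals). Three HTTPS endpoints share a single tls.Config, so whichever DNS name the client verifies against can only match one member's per-node certificate, and failover to any other member trips the hostname check:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"
	"time"

	"go.etcd.io/etcd/clientv3"
)

func main() {
	// Assumed paths; substitute the client cert/key and CA that kube-apiserver uses.
	clientCert, err := tls.LoadX509KeyPair(
		"/etc/kubernetes/pki/apiserver-etcd-client.crt",
		"/etc/kubernetes/pki/apiserver-etcd-client.key")
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := os.ReadFile("/etc/kubernetes/pki/etcd/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	roots := x509.NewCertPool()
	roots.AppendCertsFromPEM(caPEM)

	cli, err := clientv3.New(clientv3.Config{
		Endpoints: []string{
			"https://kube-control-plane-mo2phooj.k8s.lan:2379",
			"https://kube-control-plane-baeg4ahr.k8s.lan:2379",
			"https://kube-control-plane-to6oho0e.k8s.lan:2379",
		},
		DialTimeout: 5 * time.Second,
		TLS: &tls.Config{
			Certificates: []tls.Certificate{clientCert},
			RootCAs:      roots,
			// Pinning a single ServerName reproduces the failure mode: every
			// member's certificate is then checked against this one DNS name.
			ServerName: "kube-control-plane-baeg4ahr.k8s.lan",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()
	// Handshakes to members other than the pinned ServerName fail with the same
	// "certificate is valid for ... not ..." error as in the log above.
}
```

A common workaround while the client-side fix is unavailable is to include every member's DNS name (or a wildcard covering them) in each node's serving-certificate SANs, so any member's certificate verifies regardless of which name the client checks.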
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version): 1.16.0
- Cloud provider or hardware configuration: None
- OS (e.g: cat /etc/os-release): Debian 9/arm64
- Kernel (e.g. uname -a): 4.4.167-1213-rockchip-ayufan-g34ae07687fce
- Install tools: N/A
- Network plugin and version (if this is a network-related bug): N/A
- Others: N/A
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 7
- Comments: 30 (25 by maintainers)
etcd backports of the fix: