kubernetes: Incompatibility between kubelet and kubeadm/kubectl over which label prefix to use
What happened:
I am seeing the following warnings in kubelet logs:
Mar 18 22:41:05 ip-172-32-13-98 kubelet[1655]: W0318 22:41:05.009124 1655 options.go:265] unknown 'kubernetes.io' or 'k8s.io' labels specified with --node-labels: [node-role.kubernetes.io/master]
Mar 18 22:41:05 ip-172-32-13-98 kubelet[1655]: W0318 22:41:05.009610 1655 options.go:266] in 1.15, --node-labels in the 'kubernetes.io' namespace must begin with an allowed prefix (kubelet.kubernetes.io, node.kubernetes.io) or be in the specifically allowed set (beta.kubernetes.io/arch, beta.kubernetes.io/instance-type, beta.kubernetes.io/os, failure-domain.beta.kubernetes.io/region, failure-domain.beta.kubernetes.io/zone, failure-domain.kubernetes.io/region, failure-domain.kubernetes.io/zone, kubernetes.io/arch, kubernetes.io/hostname, kubernetes.io/instance-type, kubernetes.io/os)
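For reference, an illustrative breakdown of which `--node-labels` values this check accepts, based solely on the allowlist in the warning above (the flag lines are a sketch, not taken from a real unit file):

```sh
# Disallowed: unknown prefix inside the kubernetes.io/k8s.io namespaces.
# Warned here; later releases reject it outright (v1.16 per the commits below):
kubelet --node-labels=node-role.kubernetes.io/master=

# Allowed: one of the permitted prefixes listed in the warning:
kubelet --node-labels=node.kubernetes.io/my-role=worker

# Unrestricted: labels outside the kubernetes.io/k8s.io namespaces
# are not covered by this check at all:
kubelet --node-labels=example.com/role=worker
```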
What you expected to happen:
The kubelet warns that the label prefix node-role.kubernetes.io is not allowed, suggesting node.kubernetes.io instead. However, kubeadm uses the former prefix to label and taint control plane nodes, and kubectl displays node roles based on the former prefix too. So there is an incompatibility between the kubelet and the kubeadm/kubectl duo. Either the prefix node-role.kubernetes.io should be included in the allowed prefixes list, or the kubeadm/kubectl behavior should change.
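This matters because the ROLES column of `kubectl get nodes` is derived from `node-role.kubernetes.io/*` labels; roughly (the output below is an illustrative sketch, not captured from the cluster in this report):

```sh
kubectl get nodes
# NAME              STATUS   ROLES    AGE   VERSION
# ip-172-32-13-98   Ready    master   21m   v1.13.4
```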
How to reproduce it (as minimally and precisely as possible):
Set up a new cluster using kubeadm, then check the kubelet logs on the master node.
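A minimal sketch of those steps, assuming a systemd-managed kubelet (consistent with the journal-format log lines above); the grep pattern is illustrative:

```sh
kubeadm init                                  # labels/taints the control plane node with node-role.kubernetes.io/master
journalctl -u kubelet | grep 'node-labels'    # surfaces the options.go warnings shown above
```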
Environment:
- Kubernetes version (use `kubectl version`): 1.13.4
- Cloud provider or hardware configuration: AWS
- OS (e.g. `cat /etc/os-release`): Debian GNU/Linux 9 (stretch)
- Kernel (e.g. `uname -a`): Linux ip-172-32-74-112 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3 (2019-02-02) x86_64 GNU/Linux
- Install tools: kubeadm
- Others:
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 24
- Comments: 49 (34 by maintainers)
Commits related to this issue
- Update Kubernetes from v1.15.3 to v1.16.0 * Drop `node-role.kubernetes.io/master` and `node-role.kubernetes.io/node` node labels * Kubelet (v1.16) now rejects the node labels used in the kubectl get ... — committed to poseidon/typhoon by dghubble 5 years ago
- Update Kubernetes from v1.15.3 to v1.16.0 * Drop `node-role.kubernetes.io/master` and `node-role.kubernetes.io/node` node labels * Kubelet (v1.16) now rejects the node labels used in the kubectl get ... — committed to poseidon/terraform-onprem-kubernetes by dghubble 5 years ago
- Update Kubernetes from v1.15.3 to v1.16.0 * Drop `node-role.kubernetes.io/master` and `node-role.kubernetes.io/node` node labels * Kubelet (v1.16) now rejects the node labels used in the kubectl get ... — committed to poseidon/terraform-digitalocean-kubernetes by dghubble 5 years ago
- Update Kubernetes from v1.15.3 to v1.16.0 * Drop `node-role.kubernetes.io/master` and `node-role.kubernetes.io/node` node labels * Kubelet (v1.16) now rejects the node labels used in the kubectl get ... — committed to poseidon/terraform-aws-kubernetes by dghubble 5 years ago
- Update Kubernetes from v1.15.3 to v1.16.0 * Drop `node-role.kubernetes.io/master` and `node-role.kubernetes.io/node` node labels * Kubelet (v1.16) now rejects the node labels used in the kubectl get ... — committed to poseidon/terraform-google-kubernetes by dghubble 5 years ago
- k8s_fedora: Label master nodes with kubectl Due to [0], we can not label nodes with node-role.kubernetes.io/master="". We need to do it with the kubernetes API. [0] https://github.com/kubernetes/kub... — committed to openstack/magnum by deleted user 5 years ago
- Update Kubernetes from v1.15.3 to v1.16.0 * Drop `node-role.kubernetes.io/master` and `node-role.kubernetes.io/node` node labels * Kubelet (v1.16) now rejects the node labels used in the kubectl get ... — committed to poseidon/terraform-azure-kubernetes by dghubble 5 years ago
- Update Kubernetes from v1.15.3 to v1.16.0 * Drop `node-role.kubernetes.io/master` and `node-role.kubernetes.io/node` node labels * Kubelet (v1.16) now rejects the node labels used in the kubectl get ... — committed to aristanetworks/monsoon by dghubble 5 years ago
There has to be an “official” straightforward way to assign node roles via the kubeadm config file. If node-labels is not going to support this use case, some other mechanism/key should. Assume that you have something like an auto-scaling group for “worker” nodes. The most natural thing to do would be to provision them with a kubeadm config file that assigns the “worker” role to these nodes at join time, without jumping through any unnecessary hoops. IMO, a solution that first assigns a temporary taint and then labels the node later in the provisioning process via kubectl looks like a temporary workaround, not a permanent idiom.

A variety of use cases with autoscaling groups, managed instance groups, or scale sets rely on the node-role ability to create sets of nodes labeled beforehand for different purposes (see the join-time sketch after this comment). It seems unlikely to me that these cases will introduce operator-style components that label nodes just in time as they come and go. I’m not sure how the control plane initialization process could be enough: especially in preemptible worker clusters, nodes are added after control plane bringup, so it would need to be a longer-running process. I get what we’re trying to achieve here security-wise (an adversary takes over a kubelet and changes its role). But I suspect a great many folks will find less-than-intended workarounds that will ultimately undermine those efforts: they’ll run a process with credentials the kubelet lacks. Any solution involving temporary labels or taints seems to miss the original point: if we’re worried about an adversary changing the flag, that same adversary can change the temporary label/taint flag.
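For concreteness, this is the join-time pattern the comment above is defending; a minimal sketch assuming the v1beta2 kubeadm API and a hypothetical “worker” role (discovery/token fields omitted, so this is not a complete, validating config). It is exactly this pattern that the kubelet's label restriction breaks:

```sh
# Sketch: write a kubeadm JoinConfiguration that assigns the role at join time.
cat > join-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    # Forwarded to the kubelet as --node-labels; the node-role.kubernetes.io
    # prefix is what kubelet >= 1.16 rejects at startup.
    node-labels: "node-role.kubernetes.io/worker="
EOF
kubeadm join --config join-config.yaml
```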
This is going to be interesting, or I’m missing something. Hopefully the latter.
Doesn’t it make sense that kubectl should support a ROLE label having been set in one of the valid node domains, e.g. `node.kubernetes.io/role="CustomerA"`, so that customers can continue to define specialist node roles (except ‘master’) and not break current functionality? If I transition to using `node.kubernetes.io/role="xyz"`, I get a degraded kubectl experience: -
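A sketch of that degraded experience and the available stand-in: with the label under an allowed prefix, the built-in ROLES column shows `<none>`, and the label has to be requested explicitly via `--label-columns` (`-L`); the key name follows the commenter's example:

```sh
# ROLES is only populated from node-role.kubernetes.io/* labels, so a
# node.kubernetes.io/role label needs an explicit extra column instead:
kubectl get nodes -L node.kubernetes.io/role
```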
@liggitt: Question about the third option: if `--node-labels` functionality is not there, how would one label their nodes at setup time (not after join) via kubeadm? Today, the most convenient way I know of is to use `node-labels` in `kubeletExtraArgs` when using kubeadm. Note that this concern applies to all nodes, not just control plane nodes.

I also think `node-labels` in `kubeletExtraArgs` is a very useful feature from the perspective of Cluster API. The `node-role.kubernetes.io` label is very useful in that it feeds the default print column of `kubectl get node`.

I’m looking for a solution to this issue. I use a kube-aws setup with ASGs, and the kubelet fails to start because of the node-role label.
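The workaround that several of the related commits above converged on (e.g. openstack/magnum) is to apply the label through the API after join, from a client whose credentials the kubelet does not hold; a minimal sketch with a hypothetical node name:

```sh
# Run with admin credentials, not from the kubelet itself:
kubectl label node worker-0 node-role.kubernetes.io/worker=
```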
Use case:
I want to keep the same node labels for master and worker nodes as I’m using in v1.15. When I tried to upgrade the cluster to v1.16, `--node-labels` was restricted from using the `node-role` prefix. If I keep the label `node-role.kubernetes.io/master`, the kubelet fails to start after the upgrade; if I remove the label, the `kubectl get node` output shows `<none>` for the upgraded node.
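One way through that upgrade, assuming the label was passed via the kubelet's extra-args mechanism (file paths vary by distro and installer): drop the disallowed label from the flags so the kubelet starts, then restore it through the API:

```sh
# 1. Remove node-role.kubernetes.io/master from --node-labels in the kubelet
#    flags (e.g. KUBELET_EXTRA_ARGS in /etc/default/kubelet on Debian; the
#    exact file is installer-specific), then restart:
systemctl restart kubelet

# 2. Re-apply the label via the API so `kubectl get node` shows the role again:
kubectl label node ip-172-32-13-98 node-role.kubernetes.io/master= --overwrite
```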
Yes, there are indeed many places where the `node-role` prefix is used. It seems like a pragmatic solution could be to add this prefix to the allowed list (hence not breaking anyone) while disallowing certain label(s) when self-labeling, so that a node won’t be able to arbitrarily label itself as a master node.

The picture around the node-role label limitations on the kubelet side was explained above. TL;DR:
If you have arguments about the node-role limitations, this is a topic for sig/node (kubelet) and sig/architecture (see this).
/remove-sig cluster-lifecycle /remove-area kubeadm
Deployers like kubeadm, kops and kubespray will continue relying on the `node-role.kubernetes.io/master` label.
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/constants/constants.go#L206
@liggitt: How about the label prefix `node-role.kubernetes.io`, which is used across projects? Could you give some guidance on whether to add it to the allowed list or change it to `node.kubernetes.io`? I can work on this ticket under your guidance, thanks. /assign
Ref: https://github.com/kubernetes/kubernetes/pull/68267