kubeadm: kubeadm init does not configure RBAC for configmaps correctly

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version): "v1.12.0-alpha.0.957+1235adac3802fd-dirty"

What happened?

I created a control plane node with kubeadm init. I ran kubeadm join on a separate node and got this error message:

[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace                                                                        
configmaps "kubelet-config-1.12" is forbidden: User "system:bootstrap:4ipkdk" cannot get configmaps in the namespace "kube-system"                                                               

What you expected to happen?

I expected kubeadm join to finish successfully.

How to reproduce it (as minimally and precisely as possible)?

As far as I can tell, run kubeadm init and then kubeadm join on another node. I have a lot of extra code/YAML that shouldn't be influencing the ConfigMap (it is required for a happy AWS deployment), but if this turns out not to be reproducible I will provide more in-depth instructions.
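
A minimal sketch of those steps, with the AWS-specific extras omitted (the pod CIDR and the join parameters are placeholders, not values from this report):

# On the control plane node:
kubeadm init --pod-network-cidr=192.168.0.0/16

# On the other node, run the join command printed by kubeadm init,
# or regenerate it on the control plane with `kubeadm token create --print-join-command`:
kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>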

Anything else we need to know?

I think kubeadm join and kubeadm init also name the ConfigMap inconsistently. The init command uses the kubernetesVersion specified in the config file, while the join command uses the kubelet version for the name of the ConfigMap (e.g. kubelet-config-1.1). This is fine unless you have mismatched versions, in which case it is not.

The init command creates RBAC rules for anonymous access to ConfigMaps in the kube-public namespace, but it does not seem to put the kubelet config in that namespace, so the joining node has no access to it.
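
For the joining node to read the config where it actually lives, kube-system, a namespaced Role and RoleBinding are needed instead; a sketch of equivalent objects, using the names reported later in this thread (shown here for 1.12):

# Role granting read access to the versioned kubelet ConfigMap.
kubectl --namespace kube-system create role kubeadm:kubelet-config-1.12 \
  --verb=get --resource=configmaps --resource-name=kubelet-config-1.12

# Binding for joined nodes and for the bootstrap token group used during join.
kubectl --namespace kube-system create rolebinding kubeadm:kubelet-config-1.12 \
  --role=kubeadm:kubelet-config-1.12 \
  --group=system:nodes \
  --group=system:bootstrappers:kubeadm:default-node-token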

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 17 (6 by maintainers)

Most upvoted comments

This is fine unless you have mismatched versions, in which case it is not.

D’oh! What I didn’t yet know is that kubeadm always downloads the latest version of the Kubernetes control plane from gcr.io unless you tell it otherwise. So if I want to install 1.12.1 even when 1.13 is available, I need to do kubeadm init --kubernetes-version 1.12.1 --pod-network-cidr whatever/whatever
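
The same pin can go in the config file's kubernetesVersion field mentioned earlier; a sketch assuming the v1alpha2 MasterConfiguration format kubeadm accepted around these releases (the pod subnet is a placeholder):

# Pin the control plane version in a config file and pass it to init.
cat > kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.12.1
networking:
  podSubnet: 192.168.0.0/16
EOF
kubeadm init --config kubeadm.yaml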

@chuckha I completed join successfully on a cluster with all the components built from master + release number forced to v1.11.0:

  • kubelet-config-1.11 was created in kube-system
  • role kubeadm:kubelet-config-1.11 was created in kube-system with get permission on the config map
  • rolebinding kubeadm:kubelet-config-1.11 was created in kube-system for system:nodes and system:bootstrappers:kubeadm:default-node-token

So IMO:

  • the RBAC rules are created
  • if we are in a "version consistent" scenario, everything works

The part still to investigate is:

The init command uses the kubernetesVersion specified in the config file, while the join command uses the kubelet version for the name of the ConfigMap (e.g. kubelet-config-1.1)
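
A quick way to check whether that skew is present on a given cluster (a sketch; the first command runs against the control plane, the second on the joining node):

# The version kubeadm init deployed, i.e. the ConfigMap name it actually wrote:
kubectl get configmaps --namespace kube-system | grep kubelet-config

# The kubelet version on the joining node, i.e. the ConfigMap name kubeadm join will look for:
kubelet --version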