kubeadm: the v1alpha3 usage is unclear: Unable to init kubeadm 1.12 with config file

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug
/sig release
/sig cluster-lifecycle

reported in https://github.com/kubernetes/kubernetes/issues/69242

What happened:

According to the changelog:

  • v1alpha3 has split apart MasterConfiguration into separate components; InitConfiguration, ClusterConfiguration, JoinConfiguration, KubeletConfiguration, and KubeProxyConfiguration
  • Different configuration types can be supplied all in the same file separated by ---.

I printed the new config format (kubeadm config print-default) to see what changes I needed to make in order to spin up a new 1.12 cluster. To test the defaults, I saved them to a file (kubeadm config print-default > master.yml) and changed only the criSocket from docker to containerd. The problem is that I'm unable to init the cluster; it fails with the following output:

kubeadm init --config master.yml
invalid configuration: kinds [InitConfiguration MasterConfiguration JoinConfiguration NodeConfiguration] are mutually exclusive

What you expected to happen:

Cluster to init with the new config format

How to reproduce it (as minimally and precisely as possible):

Use the default config and try to init the cluster with it

Anything else we need to know?:

Running just kubeadm init works (or kubeadm init --cri-socket /var/run/containerd/containerd.sock in my case)
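For comparison, a minimal init-only file that avoids the mutual-exclusion error; this is only a sketch, assuming the fix is to drop the JoinConfiguration document from the generated defaults (full dump below):

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
apiEndpoint:
  advertiseAddress: 0.0.0.0
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
# note: no JoinConfiguration document in this file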

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T16:55:41Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:
    • on-prem, esxi host v6.0 managed by vcenter server 6.7
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.5 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.5 LTS"
VERSION_ID="16.04"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
  • Kernel (e.g. uname -a):
Linux master 4.15.18-041518-generic #201804190330 SMP Thu Apr 19 07:34:21 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: template built with Packer, Kubernetes installed using apt-get, and containerd downloaded with wget
  • Others:
master.yml:
apiEndpoint:
  advertiseAddress: 0.0.0.0
  bindPort: 6443
apiVersion: kubeadm.k8s.io/v1alpha3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1alpha3
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
etcd:
  local:
    dataDir: /var/lib/etcd
    image: ""
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
unifiedControlPlaneImage: ""
---
apiEndpoint:
  advertiseAddress: 0.0.0.0
  bindPort: 6443
apiVersion: kubeadm.k8s.io/v1alpha3
caCertPath: /etc/kubernetes/pki/ca.crt
clusterName: kubernetes
discoveryFile: ""
discoveryTimeout: 5m0s
discoveryToken: abcdef.0123456789abcdef
discoveryTokenAPIServers:
- kube-apiserver:6443
discoveryTokenUnsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
  name: master
tlsBootstrapToken: abcdef.0123456789abcdef
token: abcdef.0123456789abcdef
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 5
clusterCIDR: ""
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: ""
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms
---
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: cgroupfs
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kind: KubeletConfiguration
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 7
  • Comments: 32 (21 by maintainers)

Most upvoted comments

We have a full example here, @ulm0: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1alpha3

Search for kubeletExtraArgs. Both the init and join configurations have a sub-object called NodeRegistrationOptions.
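As a sketch of what that looks like in a config file (the node-ip value here is only illustrative):

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock
  kubeletExtraArgs:
    # keys are kubelet flag names without the leading dashes; the value is made up
    node-ip: 10.0.0.10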

Try passing at least --v=1 to kubeadm init and look for the string feature gates: … showing the list of enabled gates.
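For example, with the file from the report above:

kubeadm init --config master.yml --v=1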

@ReSearchITEng @neolit123 A little clarification here:

If you leave a field empty, kubeadm will keep the values in sync among the different components; if you specify a value in the kube-proxy config, that value will be preserved (it is up to you to keep things in sync).

relevant code here: https://github.com/kubernetes/kubernetes/blob/59957af12529c0cb84f43b2117f115387b49202d/cmd/kubeadm/app/componentconfigs/defaults.go#L44
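An illustrative sketch of that behavior (the podSubnet value here is made up):

apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16 # source of truth
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "" # left empty, so kubeadm fills it in from podSubnet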

Hello. While trying the new config YAMLs of kubeadm 1.12, I noticed:

  1. The generated yml:
    • keeps the dummy "1.2.3.4" address in apiEndpoint.advertiseAddress instead of defaulting to the machine's own address (e.g. the one returned by hostname -i)
    • has "cgroupDriver: cgroupfs" - probably a dummy value, not autodetected (in my case the actual driver is systemd)
  2. There are 5 kinds of configs, but it is not very clear which is for what:
    • kind: InitConfiguration
    • kind: JoinConfiguration
    • kind: ClusterConfiguration
    • kind: KubeProxyConfiguration
    • kind: KubeletConfiguration

The doc (https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/) does not say which sections are required on the master and which on the node. (While it sounds easy that Init is for the master and Join for the nodes, it is not perfectly clear for the other three.) From my tests, I could put everything but Join on the master:
    • kind: InitConfiguration
    • kind: ClusterConfiguration
    • kind: KubeProxyConfiguration
    • kind: KubeletConfiguration

and on the node (a minimal node-side file is sketched below):
    • kind: JoinConfiguration
    • kind: KubeProxyConfiguration
    • kind: KubeletConfiguration
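A minimal node-side sketch consistent with that split (the API server address is illustrative; the token is the default from the dump above):

apiVersion: kubeadm.k8s.io/v1alpha3
kind: JoinConfiguration
token: abcdef.0123456789abcdef
discoveryTokenAPIServers:
- 10.0.0.10:6443 # illustrative master endpoint
discoveryTokenUnsafeSkipCAVerification: true
nodeRegistration:
  criSocket: /var/run/containerd/containerd.sock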

When “kind: ClusterConfiguration” was added to the node, an interesting message came up:

converting (v1alpha3.ClusterConfiguration) to (kubeadm.JoinConfiguration): NodeRegistration not present in src

Can anyone help with details?

^ ClusterConfiguration

I’m not sure this is heading in the right direction, but the issue as stated captures my frustration exactly.

It’s hard to believe this passes for acceptable validation output in 2018:

[root@k8s-m1 admin]# kubeadm init --config kube-adm.yml
error converting YAML to JSON: yaml: line 4: found character that cannot start any token

Completely ambiguous; and in fact, in this case, the kube-adm.yml file did not even exist. It is just a raw parse error from the underlying library, reported for a file that doesn’t exist.

No matter what YAML error was in my config file, I would get similar output: a line number, even though multiple YAML documents are combined in one file, so the line number often pointed into a YAML segment that was never identified in the output.

But the most frustrating was when I got to more qualitative validations… like this one:

[root@k8s-m1 admin]# kubeadm init --config /root/kube-adm.yml
host must be a valid IP address or a valid RFC-1123 DNS subdomain

That is a lovely error, but would you mind telling me which host field you are referring to!

It was only by removing fields one by one that I found the issue… an unnecessary “http://” prefix in my template triggered this error at line 110 of https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/util/endpoint.go:

return "", "", fmt.Errorf("host must be a valid IP address or a valid RFC-1123 DNS subdomain")

Would it be so hard to include the offending host in the error output? Or the relevant field from a list of hundreds?
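Presumably not. As a sketch, assuming the parsed value is held in a variable named host at that point (I have not verified the surrounding code), it could be as small as:

return "", "", fmt.Errorf("host %q must be a valid IP address or a valid RFC-1123 DNS subdomain", host)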

If this had been my first experience with kubeadm, it would have been my last. I'm not sure how this represents progress; earlier configuration mechanisms seemed sane.