kubeadm: Invalid etcd config generated when using DNS discovery approach

Is this a BUG REPORT?

Versions

kubeadm version:

&version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:41:54Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: AWS
  • OS: Ubuntu 18.04
  • Kernel: Linux 4.15.0-1040-aws

What happened?

When using kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml with the config file containing the following extra argument:

    extraArgs:
      discovery-srv: "example.com"

an invalid configuration is generated and etcd fails to start with the following message:

    multiple discovery or bootstrap flags are set. Choose one of "initial-cluster", "discovery" or "discovery-srv"

See the etcd code that tests for this scenario.

This is because kubeadm adds the initial-cluster argument when generating the etcd configuration, so both flags end up set.
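
For reference, the check etcd performs at startup is roughly equivalent to the following simplified Go sketch. This is only an illustration of the logic, not etcd's actual source; see the linked code above for the real check:

    // Simplified illustration of etcd's mutual-exclusion check between the
    // bootstrap/discovery flags. This is not etcd's actual source code.
    package main

    import (
        "errors"
        "fmt"
    )

    // checkDiscoveryFlags fails when more than one of initial-cluster,
    // discovery, or discovery-srv has been set.
    func checkDiscoveryFlags(initialClusterSet, discoverySet, discoverySRVSet bool) error {
        set := 0
        for _, b := range []bool{initialClusterSet, discoverySet, discoverySRVSet} {
            if b {
                set++
            }
        }
        if set > 1 {
            return errors.New(`multiple discovery or bootstrap flags are set. Choose one of "initial-cluster", "discovery" or "discovery-srv"`)
        }
        return nil
    }

    func main() {
        // kubeadm always adds --initial-cluster, and the extraArgs above add
        // --discovery-srv, so the check fails and etcd exits.
        if err := checkDiscoveryFlags(true, false, true); err != nil {
            fmt.Println(err)
        }
    }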

What you expected to happen?

kubeadm could check whether discovery-srv is present in the extraArgs and, if so, not add the initial-cluster argument.
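
A minimal sketch of what that could look like on the kubeadm side is below. The function and variable names are hypothetical and do not match kubeadm's actual identifiers; the idea is simply to drop initial-cluster from the default arguments whenever the user requests a discovery mechanism through extraArgs:

    // Hypothetical sketch of the proposed behaviour; names do not match
    // kubeadm's actual code.
    package etcdargs

    // buildEtcdArgs merges kubeadm's default etcd arguments with user-supplied
    // extraArgs, skipping initial-cluster when a discovery mechanism is set.
    func buildEtcdArgs(defaults, extraArgs map[string]string) map[string]string {
        _, hasSRV := extraArgs["discovery-srv"]
        _, hasDiscovery := extraArgs["discovery"]

        merged := make(map[string]string, len(defaults)+len(extraArgs))
        for k, v := range defaults {
            if k == "initial-cluster" && (hasSRV || hasDiscovery) {
                continue // let etcd bootstrap via DNS SRV / discovery instead
            }
            merged[k] = v
        }
        for k, v := range extraArgs {
            merged[k] = v // user-supplied extraArgs override defaults
        }
        return merged
    }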

How to reproduce it (as minimally and precisely as possible)?

    cat <<EOF >"kubeadmcfg.yaml"
    apiVersion: "kubeadm.k8s.io/v1beta1"
    kind: ClusterConfiguration
    etcd:
      local:
        extraArgs:
          discovery-srv: "example.com"
    EOF

    kubeadm init phase etcd local --config=./kubeadmcfg.yaml

Check etcd logs.

Anything else we need to know?

I would be more than happy to contribute a PR if you think the “expected behavior” above is correct.

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 15 (7 by maintainers)

Most upvoted comments

Thanks a lot @fabriziopandini for the detailed explanation.

I’ll re-read those pages for sure and try to get the best out of them.

  1. create an external etcd cluster

Yes, I have etcd up and running at the moment, including the discovery part. I’ll play with it for a little while and then decide whether a static configuration would be a better option.

  2. use kubeadm to create a K8s cluster that uses the external etcd cluster

Yes, I’m setting up the master nodes at the moment, and in this case I’ve already generated the external config part with:

    etcd:
      external:
        endpoints:
          - https://peer-0..:2379
          - https://peer-1...:2379
          - https://peer-2...:2379

The stacked etcd mode was the first thing my team and I considered, but we then went with the external setup for multiple reasons 😉.