cluster-api: Umbrella: Breaking apart clusterctl

Describe the solution you’d like

As a Kubernetes old-timer, I find clusterctl to be weird… It performs operations that could simply be preconditions, and it also performs operations that IMO should be part of the .spec of the objects. This issue tries to break down some of the details, and feedback is solicited.

  1. Building a bootstrap cluster…

    • Why? We can / should have it as a precondition for folks to run kubectl apply
  2. Create / Delete (CRDs, Cluster, Machines, … )

    • Why? That’s just a kubectl apply or delete… possibly with a kubectl plugin to make it feel more first-class.
  3. Pivot

    • IMO, this should be a field in cluster.spec, driven by a state machine on the Cluster object.
  4. Kubeconfig

    • There are many ways to deliver an encrypted Secret containing the access kubeconfig, and I think designing a workflow for this might better serve the community.

What I’m really struggling with is: do we really need this tool? Portions of clusterctl could move into a client library of aggregate utility functions, common operations that providers could leverage. A kubectl plugin might also be generally useful for treating Cluster API objects as first-class resources. But other than that, what workflows are missing in a v1alpha2 world that clusterctl provides?

/kind feature
/cc @ncdc @vincepri @detiber

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 5
  • Comments: 23 (23 by maintainers)

Most upvoted comments

I’m totally cool with having a bootstrap cluster be a precondition.

Overall, I think kubectl plugins are a great way for us to move forward. When clusterctl was started, kubectl plugins either did not exist yet or had just been implemented with little to no documentation.

I don’t see a need for clusterctl. I’ve been using CAPA entirely without clusterctl. I create a management cluster with kind (one command) and then use kubectl to deploy the CAPI/CAPA bits, and the workload cluster(s).
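That workflow can be sketched as a short shell script. This is a hypothetical sketch, not a documented procedure: the manifest file names are illustrative, and DRY_RUN defaults to 1 so the script only prints the commands rather than running them.

```shell
#!/bin/sh
# Hypothetical sketch of the clusterctl-free workflow described above.
# Manifest file names are illustrative. DRY_RUN defaults to 1 (print only);
# set DRY_RUN=0 to actually execute the commands against your environment.
run() { if [ "${DRY_RUN:-1}" = 0 ]; then "$@"; else echo "+ $*"; fi; }

run kind create cluster --name management        # management cluster (one command)
run kubectl apply -f cluster-api-components.yaml # CAPI CRDs and controllers
run kubectl apply -f capa-components.yaml        # infra provider bits (e.g. CAPA)
run kubectl apply -f my-cluster.yaml             # workload Cluster, Machines, ...
```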

I do use a CAPA-maintained helper shell script to generate the manifests, though. With the infra/bootstrap provider split, I can see each infra and bootstrap provider shipping its own tool to help generate manifests. There might be room for a CAPI-maintained tool that helps generate the now provider-agnostic Cluster and Machine manifests.

Haven’t thought this through, but I suspect kubectl plugins using kustomize would do the job nicely.

I like the idea of a kubectl cluster bootstrap plugin. kubectl create cluster would be nice, but I don’t think a plugin can override a built-in verb like create (?).
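For context: kubectl discovers plugins as executables named kubectl-&lt;name&gt; on PATH, and built-in verbs like create indeed can’t be shadowed. A hypothetical sketch of such a plugin (the subcommand names are made up for illustration, not an existing tool):

```shell
# A kubectl plugin is just an executable named kubectl-<name> on PATH.
# Hypothetical "kubectl cluster" plugin wrapping the common operations:
cat > kubectl-cluster <<'EOF'
#!/bin/sh
case "$1" in
  bootstrap) kind create cluster --name "${2:-bootstrap}" ;;  # precondition cluster
  apply)     kubectl apply -f "${2:?manifest required}" ;;    # plain kubectl apply
  *)         echo "usage: kubectl cluster [bootstrap|apply] <arg>" ;;
esac
EOF
chmod +x kubectl-cluster
# Once this file is on PATH, `kubectl cluster bootstrap` dispatches to it;
# invoked with no subcommand, it prints the usage line:
./kubectl-cluster
```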

Same - kind is so easy to set up these days that it should be sufficient as a minimum requirement.

I definitely like the idea of an operator, but we also need to remember that deploying the operator presents its own chicken-and-egg situation: an existing cluster needs to be present to run it.

  4. Kubeconfig
  • There are many ways to deliver an encrypted Secret containing the access kubeconfig, and I think designing a workflow for this might better serve the community.

+1 to this approach going forward; it didn’t exist for the first iteration.
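Concretely, Cluster API (as of v1alpha2, if I recall the convention correctly) writes each workload cluster’s kubeconfig into a Secret named "&lt;cluster-name&gt;-kubeconfig" under the data key "value", so retrieval needs only kubectl and base64. A small helper as a sketch of that, assuming the naming convention above:

```shell
# Hedged sketch: assumes the Cluster API convention of a Secret named
# "<cluster-name>-kubeconfig" whose data key "value" holds the kubeconfig.
get_kubeconfig() {
  # usage: get_kubeconfig <cluster-name>  (writes the kubeconfig to stdout)
  kubectl get secret "$1-kubeconfig" -o jsonpath='{.data.value}' | base64 -d
}
```

e.g. `get_kubeconfig my-cluster > my-cluster.kubeconfig`.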

  1. Building a bootstrap cluster…
  • Why? We can / should have it as a precondition for folks to run kubectl apply

The big reason for this was to simplify the user experience and avoid needing to run manual pre-steps.