cluster-api-provider-aws: Cluster with reused name in different namespace fails to boot

/kind bug

What steps did you take and what happened:

  1. Boot a cluster with a given name in the default namespace
  2. Wait for that cluster to successfully come up
  3. Create a new namespace
  4. Boot a cluster with an identical name to the first in the new namespace
  5. The second cluster does not create a new VPC or any associated resources (these steps are sketched below)
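
For reference, a minimal scripted version of these steps using controller-runtime's unstructured client; the API group/version and the elided spec contents are assumptions and would need to match the cluster-api version in use:

```go
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

// newCluster builds a bare Cluster object; the real spec (networking,
// provider config, etc.) is elided and would be identical for both clusters.
func newCluster(name, namespace string) *unstructured.Unstructured {
	u := &unstructured.Unstructured{}
	u.SetGroupVersionKind(schema.GroupVersionKind{
		Group:   "cluster.x-k8s.io", // assumption: adjust to the CAPI version in use
		Version: "v1alpha2",
		Kind:    "Cluster",
	})
	u.SetName(name)
	u.SetNamespace(namespace)
	return u
}

func main() {
	ctx := context.Background()
	c, err := client.New(config.GetConfigOrDie(), client.Options{})
	if err != nil {
		panic(err)
	}
	// Steps 1-2: cluster "test-cluster" in the default namespace.
	if err := c.Create(ctx, newCluster("test-cluster", "default")); err != nil {
		panic(err)
	}
	// Steps 3-4: the same cluster name in a second (pre-created) namespace.
	// Both objects are accepted by the API server, but the provider derives
	// AWS names and tags from the cluster name alone, so the second cluster
	// ends up reusing the first cluster's VPC instead of getting its own.
	if err := c.Create(ctx, newCluster("test-cluster", "other")); err != nil {
		panic(err)
	}
}
```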

What did you expect to happen: A new VPC, set of machines, and associated resources should have been created for the second cluster

Anything else you would like to add:

Environment:

  • Cluster-api-provider-aws version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 17 (13 by maintainers)

Most upvoted comments

Currently we treat the cluster name as a unique value. We have a few options here:

  • Add some type of validation that the cluster name is indeed unique
    • This would still present issues when the controllers are scoped to a single namespace or are running under different management clusters
  • Prefix the cluster name with the namespace name everywhere we use it for naming, tagging, etc.
    • This would still present issues when multiple management clusters are in use
    • This could also lead to exceeding string length limits in various places, which would need to be accounted for.
  • Generate some type of unique identifier that can be prepended or appended to the cluster name (a sketch follows this list)
    • This unique identifier would need to be generated and saved to the spec so it persists across a pivot or a backup/restore (the resource UUID does not persist, nor does Status)
    • String length limits should also be accounted for
    • We may also need to tag resources with the namespace to make them easier to identify in AWS, or use some other method to easily differentiate between clusters running on a single management cluster.
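
A minimal sketch of what the third option could look like, assuming a hypothetical spec field to hold the identifier and an assumed overall length budget; this is not the provider's actual naming code:

```go
// Sketch of the "unique identifier" option; all names are hypothetical and
// the length budget is an assumption (real AWS limits vary per resource,
// e.g. 128 characters for tag keys and 255 for tag values).
package naming

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// maxNameLen is an assumed budget for generated names and tag values.
const maxNameLen = 128

// GenerateClusterID returns a short random identifier meant to be persisted
// in the spec so it survives pivot and backup/restore, unlike metadata.uid
// or anything stored in Status.
func GenerateClusterID() (string, error) {
	b := make([]byte, 4)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}

// ScopedName combines the cluster name with the persisted identifier for use
// in AWS naming and tagging, truncating the name (never the identifier) when
// the combination would exceed the length budget.
func ScopedName(clusterName, clusterID string) string {
	name := fmt.Sprintf("%s-%s", clusterName, clusterID)
	if len(name) <= maxNameLen {
		return name
	}
	keep := maxNameLen - len(clusterID) - 1
	return fmt.Sprintf("%s-%s", clusterName[:keep], clusterID)
}
```

The reconciler would call GenerateClusterID once, write the result to the spec, and then use ScopedName everywhere the bare cluster name is used today; resources could additionally be tagged with the namespace to make them easier to tell apart in the AWS console.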