cluster-api: clusterctl init fails to find core provider while running e2e tests

What steps did you take and what happened: While running e2e tests with the v1.1.0-beta.1 pre-release, `clusterctl init` fails with:

```
clusterctl init --core cluster-api --bootstrap kubeadm --control-plane kubeadm --infrastructure metal3
[AfterEach] Workload cluster creation
  /home/ubuntu/cluster-api-provider-metal3/test/e2e/e2e_test.go:53
STEP: Dumping all the Cluster API resources in the "metal3" namespace
STEP: Deleting cluster metal3/test1
STEP: Deleting cluster test1
INFO: Waiting for the Cluster metal3/test1 to be deleted
STEP: Waiting for cluster test1 to be deleted

• Failure [5183.383 seconds]
Workload cluster creation
/home/ubuntu/cluster-api-provider-metal3/test/e2e/e2e_test.go:34
  Creating a highly available control-plane cluster
  /home/ubuntu/cluster-api-provider-metal3/test/e2e/e2e_test.go:57
    Should create a cluster with 3 control-plane and 1 worker nodes [It]
    /home/ubuntu/cluster-api-provider-metal3/test/e2e/e2e_test.go:58

    failed to run clusterctl init
    Unexpected error:
        <*errors.withStack | 0xc000d832c0>: {
            error: <*errors.withMessage | 0xc000bf0f20>{
                cause: <*errors.withStack | 0xc000d83278>{
                    error: <*errors.withMessage | 0xc000bf0f00>{
                        cause: <*errors.withStack | 0xc000d83248>{
                            error: <*errors.withMessage | 0xc000bf0ee0>{
                                cause: <*errors.withStack | 0xc000d83200>{
                                    error: <*errors.withMessage | 0xc000bf0ec0>{
                                        cause: <*errors.fundamental | 0xc000d831d0>{
                                            msg: "failed to find releases tagged with a valid semantic version number",
                                            stack: [..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ..., ...],
                                        },
                                        msg: "failed to get latest version",
                                    },
                                    stack: [0x178aa69, 0x17823de, 0x1782145, 0x17db9d5, 0x17db9a6, 0x17dcded, 0x17e11ee, 0x17e0c25, 0x17e0330, 0x17e6f5c, 0x183f8be, 0x184c05c, 0x149613a, 0x1495b05, 0x14951fb, 0x149af89, 0x149a967, 0x14bb085, 0x14bada5, 0x14ba5e5, 0x14bc892, 0x14c5605, 0x14c542a, 0x18358d0, 0x5149e2, 0x46ae81],
                                },
                                msg: "error creating the local filesystem repository client",
                            },
                            stack: [0x17823fc, 0x1782145, 0x17db9d5, 0x17db9a6, 0x17dcded, 0x17e11ee, 0x17e0c25, 0x17e0330, 0x17e6f5c, 0x183f8be, 0x184c05c, 0x149613a, 0x1495b05, 0x14951fb, 0x149af89, 0x149a967, 0x14bb085, 0x14bada5, 0x14ba5e5, 0x14bc892, 0x14c5605, 0x14c542a, 0x18358d0, 0x5149e2, 0x46ae81],
                        },
                        msg: "failed to get repository client for the CoreProvider with name cluster-api",
                    },
                    stack: [0x178224a, 0x17db9d5, 0x17db9a6, 0x17dcded, 0x17e11ee, 0x17e0c25, 0x17e0330, 0x17e6f5c, 0x183f8be, 0x184c05c, 0x149613a, 0x1495b05, 0x14951fb, 0x149af89, 0x149a967, 0x14bb085, 0x14bada5, 0x14ba5e5, 0x14bc892, 0x14c5605, 0x14c542a, 0x18358d0, 0x5149e2, 0x46ae81],
                },
                msg: "failed to get provider components for the \"cluster-api\" provider",
            },
            stack: [0x17e1468, 0x17e0c25, 0x17e0330, 0x17e6f5c, 0x183f8be, 0x184c05c, 0x149613a, 0x1495b05, 0x14951fb, 0x149af89, 0x149a967, 0x14bb085, 0x14bada5, 0x14ba5e5, 0x14bc892, 0x14c5605, 0x14c542a, 0x18358d0, 0x5149e2, 0x46ae81],
        }
        failed to get provider components for the "cluster-api" provider: failed to get repository client for the CoreProvider with name cluster-api: error creating the local filesystem repository client: failed to get latest version: failed to find releases tagged with a valid semantic version number
    occurred

    /home/ubuntu/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.0-beta.1/framework/clusterctl/client.go:84

    Full Stack Trace
    sigs.k8s.io/cluster-api/test/framework/clusterctl.Init({0x0, 0x203000}, {{0xc000e5c050, 0x4b}, {0xc000e54439, 0x55}, {0xc0009373a0, 0x1d}, {0x1cebf91, 0xb}, ...})
    	/home/ubuntu/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.0-beta.1/framework/clusterctl/client.go:84 +0x45b
    github.com/metal3-io/cluster-api-provider-metal3/test/e2e.pivoting()
    	/home/ubuntu/cluster-api-provider-metal3/test/e2e/pivoting_test.go:51 +0x4fe
    github.com/metal3-io/cluster-api-provider-metal3/test/e2e.glob..func5.3.1()
    	/home/ubuntu/cluster-api-provider-metal3/test/e2e/e2e_test.go:85 +0x4bc
    github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00018ba00)
    	/home/ubuntu/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/leafnodes/runner.go:113 +0xba
    github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc002c41690)
    	/home/ubuntu/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/leafnodes/runner.go:64 +0x125
    github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc00018ba00)
    	/home/ubuntu/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/leafnodes/it_node.go:26 +0x7b
    github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0005140f0, 0xc00038da58, {0x1f4bf80, 0xc00047acc0})
    	/home/ubuntu/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/spec/spec.go:215 +0x2a9
    github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0005140f0, {0x1f4bf80, 0xc00047acc0})
    	/home/ubuntu/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/spec/spec.go:138 +0xe7
    github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0003cdb80, 0xc0005140f0)
    	/home/ubuntu/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/specrunner/spec_runner.go:200 +0xe5
    github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0003cdb80)
    	/home/ubuntu/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/specrunner/spec_runner.go:170 +0x1a5
    github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0003cdb80)
    	/home/ubuntu/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/specrunner/spec_runner.go:66 +0xc5
    github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00010fd50, {0x7f31700b7b30, 0xc00018b860}, {0x1ce86b4, 0x1}, {0xc0005a1c80, 0x1, 0x1}, {0x1f9e5b8, 0xc00047acc0}, ...)
    	/home/ubuntu/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/internal/suite/suite.go:79 +0x4d2
    github.com/onsi/ginkgo.runSpecsWithCustomReporters({0x1f4d880, 0xc00018b860}, {0x1ce86b4, 0x9}, {0xc00053ef20, 0x1, 0x240c5c2})
    	/home/ubuntu/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/ginkgo_dsl.go:245 +0x185
    github.com/onsi/ginkgo.RunSpecs({0x1f4d880, 0xc00018b860}, {0x1ce86b4, 0x9})
    	/home/ubuntu/go/pkg/mod/github.com/onsi/ginkgo@v1.16.5/ginkgo_dsl.go:220 +0x14a
    github.com/metal3-io/cluster-api-provider-metal3/test/e2e.TestE2e(0x0)
    	/home/ubuntu/cluster-api-provider-metal3/test/e2e/e2e_suite_test.go:79 +0x90
    testing.tRunner(0xc00018b860, 0x1dc5350)
    	/usr/local/go/src/testing/testing.go:1259 +0x102
    created by testing.(*T).Run
    	/usr/local/go/src/testing/testing.go:1306 +0x35a
------------------------------
STEP: Tearing down the management cluster


Summarizing 1 Failure:

[Fail] Workload cluster creation Creating a highly available control-plane cluster [It] Should create a cluster with 3 control-plane and 1 worker nodes 
/home/ubuntu/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.1.0-beta.1/framework/clusterctl/client.go:84

Ran 1 of 1 Specs in 5187.800 seconds
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 0 Skipped
--- FAIL: TestE2e (5187.80s)
```
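
Reading the wrapped error from the bottom up: clusterctl builds a repository client for each provider named in its config, and for a local filesystem repository it must find at least one release tagged with a valid semantic version (e.g. `v1.1.0-beta.1`); here that lookup fails for the `cluster-api` core provider. In the e2e suite this init step goes through the test framework's `clusterctl.Init` (the `client.go:84` frame above). A minimal sketch of that call, assuming the `InitInput` field names from the cluster-api test framework around v1.1.x and illustrative paths:

```go
// Sketch of how the test framework drives clusterctl init. The field names
// come from sigs.k8s.io/cluster-api/test framework around v1.1.x; the paths
// passed in by the caller are illustrative, not the metal3 suite's exact code.
package main

import (
	"context"

	"sigs.k8s.io/cluster-api/test/framework/clusterctl"
)

func initManagementCluster(ctx context.Context, clusterctlConfigPath, kubeconfigPath, logFolder string) {
	clusterctl.Init(ctx, clusterctl.InitInput{
		// The clusterctl config generated for this test run; the repository
		// paths inside it are what the error chain above complains about.
		ClusterctlConfigPath:    clusterctlConfigPath,
		KubeconfigPath:          kubeconfigPath,
		LogFolder:               logFolder,
		CoreProvider:            "cluster-api",
		BootstrapProviders:      []string{"kubeadm"},
		ControlPlaneProviders:   []string{"kubeadm"},
		InfrastructureProviders: []string{"metal3"},
	})
}
```

If `ClusterctlConfigPath` points at a config whose repository entries resolve to directories with no semver-named releases, `Init` fails exactly as in the log above.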

What did you expect to happen: the `clusterctl init` command to run successfully.

Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]

Environment:

  • Cluster-api version: v1.1.0-beta.1
  • Cluster-api test framework: test@v1.1.0-beta.1
  • Minikube/KIND version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

/kind bug

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 21 (21 by maintainers)

Most upvoted comments

@furkatgofurov7 any update on this issue?

@fabriziopandini sorry I missed this; there was a problem with passing the correct config, and we got this working once we realized it. Thanks for your help! I will close the issue.
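
For context on the fix: in provider e2e suites the clusterctl config consumed by `Init` is typically generated from the suite's e2e config file, whose `providers[].versions` entries must carry valid semver names. A sketch of that wiring, assuming the `LoadE2EConfig`/`CreateRepository` helpers from the cluster-api test framework and illustrative paths:

```go
// Sketch of turning the e2e config into a local clusterctl repository.
// LoadE2EConfig and CreateRepository are framework helpers; the paths are
// illustrative, not the metal3 suite's exact layout.
package main

import (
	"context"
	"path/filepath"

	"sigs.k8s.io/cluster-api/test/framework/clusterctl"
)

func setupRepository(ctx context.Context, configPath, artifactFolder string) string {
	// Load the e2e config; its providers[].versions entries must use valid
	// semver names (e.g. v1.1.0-beta.1), or the generated local repository
	// fails the "valid semantic version number" check seen in the log.
	e2eConfig := clusterctl.LoadE2EConfig(ctx, clusterctl.LoadE2EConfigInput{
		ConfigPath: configPath,
	})

	// CreateRepository writes the provider components into a local filesystem
	// repository and returns the clusterctl config path to hand to Init.
	return clusterctl.CreateRepository(ctx, clusterctl.CreateRepositoryInput{
		E2EConfig:        e2eConfig,
		RepositoryFolder: filepath.Join(artifactFolder, "repository"),
	})
}
```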

/close