cluster-api: Quick Start with Docker failing
What steps did you take and what happened: [A clear and concise description on how to REPRODUCE the bug.]
I apologise if I’ve missed something obvious; I’m still fairly new to all this.
I followed the quick start guide and this guide.
1. git clone https://github.com/kubernetes-sigs/cluster-api.git
2. cd cluster-api/
3. make clusterctl
4. cat > clusterctl-settings.json <<EOF
{
"providers": ["cluster-api","bootstrap-kubeadm","control-plane-kubeadm", "infrastructure-docker"],
"provider_repos": []
}
EOF
5. make -C test/infrastructure/docker docker-build REGISTRY=gcr.io/k8s-staging-capi-docker
6. make -C test/infrastructure/docker generate-manifests REGISTRY=gcr.io/k8s-staging-capi-docker
7. ./cmd/clusterctl/hack/local-overrides.py
8. cat > ~/.cluster-api/clusterctl.yaml <<EOF
providers:
  - name: docker
    url: $HOME/.cluster-api/overrides/infrastructure-docker/latest/infrastructure-components.yaml
    type: InfrastructureProvider
EOF
9. cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF
10. kind create cluster --config ./kind-cluster-with-extramounts.yaml --name clusterapi
11. kind load docker-image gcr.io/k8s-staging-capi-docker/capd-manager-amd64:dev --name clusterapi
12. clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm:v0.3.0 --control-plane kubeadm:v0.3.0 --infrastructure docker:v0.3.0
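A sanity check that is not part of the original report: before running step 12, the generated overrides can be listed to confirm the local-overrides script actually produced them (the directory names below are assumptions based on the override layout referenced in step 8):
ls ~/.cluster-api/overrides/
ls ~/.cluster-api/overrides/infrastructure-docker/latest/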
Everything worked as expected up until step 12. This was the output:
Fetching providers
Using Override="core-components.yaml" Provider="cluster-api" Version="v0.3.0"
Error: failed to get provider components for the "cluster-api:v0.3.0" provider: failed to detect default target namespace: Invalid manifest. There should be no more than one resource with Kind Namespace in the provider components yaml
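Not from the original report, but one way to cross-check what the error is complaining about is to count the Namespace resources in the override that clusterctl picked up (the path is an assumption based on the clusterctl override layout; adjust it to wherever local-overrides.py wrote the file):
grep -c "kind: Namespace" ~/.cluster-api/overrides/cluster-api/v0.3.0/core-components.yaml
A result greater than 1 would match the error: the generated core-components.yaml contains more than one Namespace resource.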
What did you expect to happen: Expected something similar to this:
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.0" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.0" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.0" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-docker" Version="v0.3.0" TargetNamespace="capd-system"
Your management cluster has been initialized successfully!
You can now create your first workload cluster by running the following:
clusterctl config cluster [name] --kubernetes-version [version] | kubectl apply -f -
Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]
I also tried kind v0.8.1 but had the same problem.
Environment:
- Cluster-api version: v0.3.7
- Minikube/KIND version: kind v0.7.0
- Kubernetes version: (use kubectl version): v1.18.6
- OS (e.g. from /etc/os-release): Ubuntu 20.04.1 & macOS v10.15.5
/kind bug [One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels]
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 20 (13 by maintainers)
There is something weird in your setup, but I can’t understand it.
The manifest generated for you contains:
While it should contain the same resources with the capi prefix in front of each name.
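The manifest snippets from that comment are not preserved above. As an illustration only, assuming the same override path as in the earlier sketch, the Namespace name that kustomize generated can be inspected with:
grep -A 5 "kind: Namespace" ~/.cluster-api/overrides/cluster-api/v0.3.0/core-components.yaml
Based on the expected output earlier in the report, the name should come out as capi-system, i.e. with the capi- prefix.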
Looks like this might depend on your kustomize version. Mine is 3.1.0 (if I remember correctly, any version > 3 should work).
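Not from this thread, but assuming local-overrides.py shells out to whatever kustomize binary is first on the PATH (as the next comment suggests), that is the version worth checking:
which kustomize
kustomize version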
This is what ./cmd/clusterctl/hack/local-overrides.py has you run. I don’t think the version is relevant here.

This looks very similar to my issue; I had to tweak the namespaces manually.
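The thread does not show the actual tweak. As a rough sketch only (the path and namespace name are assumptions based on the expected output above), one manual fix would be to edit the generated override by hand:
vi ~/.cluster-api/overrides/cluster-api/v0.3.0/core-components.yaml
# keep exactly one Namespace resource, named capi-system, and make sure the
# namespace: fields on the remaining resources match it
Alternatively, upgrading kustomize to a version > 3 and re-running ./cmd/clusterctl/hack/local-overrides.py may regenerate the overrides with the expected capi- prefixes.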