kops: [Hetzner] Generates the servers but does not set up the cluster
/kind bug
1. What kops version are you running? The command kops version will display
this information.
Client version: 1.27.0
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
Client Version: v1.27.4
3. What cloud provider are you using? Hetzner
4. What commands did you run? What is the simplest way to reproduce this issue?
export KOPS_STATE_STORE=s3://XXXXXXXXXXXXX
export HCLOUD_TOKEN=XXXXXXXXXXXXX
kops create cluster --name=test.example.k8s.local \
--ssh-public-key=~/.ssh/hetzner.pub --cloud=hetzner --zones=fsn1 \
--image=ubuntu-20.04 --networking=calico --network-cidr=10.10.0.0/16 --kubernetes-version 1.26.7
kops update cluster --name test.example.k8s.local --yes --admin
5. What happened after the commands executed?
I0820 19:11:37.839489    4034 executor.go:111] Tasks: 0 done / 47 total; 38 can run
W0820 19:11:38.145388    4034 vfs_keystorereader.go:143] CA private key was not found
I0820 19:11:38.181567    4034 keypair.go:226] Issuing new certificate: "etcd-manager-ca-main"
I0820 19:11:38.181938    4034 keypair.go:226] Issuing new certificate: "apiserver-aggregator-ca"
I0820 19:11:38.187196    4034 keypair.go:226] Issuing new certificate: "etcd-manager-ca-events"
I0820 19:11:38.193928    4034 keypair.go:226] Issuing new certificate: "etcd-peers-ca-events"
I0820 19:11:38.215820    4034 keypair.go:226] Issuing new certificate: "etcd-clients-ca"
I0820 19:11:38.218170    4034 keypair.go:226] Issuing new certificate: "etcd-peers-ca-main"
W0820 19:11:38.225427    4034 vfs_keystorereader.go:143] CA private key was not found
I0820 19:11:38.264225    4034 keypair.go:226] Issuing new certificate: "kubernetes-ca"
I0820 19:11:38.274655    4034 keypair.go:226] Issuing new certificate: "service-account"
I0820 19:11:39.038562    4034 executor.go:111] Tasks: 38 done / 47 total; 3 can run
I0820 19:11:40.312737    4034 executor.go:111] Tasks: 41 done / 47 total; 2 can run
I0820 19:11:40.787769    4034 executor.go:111] Tasks: 43 done / 47 total; 4 can run
I0820 19:11:41.810296    4034 executor.go:111] Tasks: 47 done / 47 total; 0 can run
I0820 19:11:41.834529    4034 update_cluster.go:323] Exporting kubeconfig for cluster
kOps has set your kubectl context to test.example.k8s.local
Cluster is starting. It should be ready in a few minutes.
Suggestions:
- validate cluster: kops validate cluster --wait 10m
- list nodes: kubectl get nodes --show-labels
- ssh to a control-plane node: ssh -i ~/.ssh/id_rsa ubuntu@
- the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
- read about installing addons at: https://kops.sigs.k8s.io/addons.
The resources are created in Hetzner: a control-plane server, a node, two unattached etcd volumes, a load balancer pointing to the control plane (but unhealthy), a network, and two firewalls, one for the nodes and one for the control plane. However, the cluster itself is never actually set up.
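For anyone trying to debug this, a minimal sketch of how one might inspect the control-plane server directly; it assumes the Ubuntu image from the commands above, a server IP taken from the Hetzner console (placeholder below), and the usual kops unit names, which may differ between versions:
ssh -i ~/.ssh/hetzner ubuntu@<control-plane-ip>      # placeholder IP from the Hetzner console
sudo tail -n 100 /var/log/cloud-init-output.log      # did cloud-init fetch and start nodeup?
sudo journalctl -u kops-configuration.service --no-pager | tail -n 50   # nodeup progress/errors
sudo journalctl -u kubelet.service --no-pager | tail -n 50              # kubelet, if it got that far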
6. What did you expect to happen? Create a Kubernetes cluster
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2023-08-20T17:35:49Z"
  name: test.example.k8s.local
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: hetzner
  configBase: s3://XXXXXXXXXXXXX/test.example.k8s.local
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: control-plane-fsn1
      name: etcd-1
    manager:
      backupRetentionDays: 90
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: control-plane-fsn1
      name: etcd-1
    manager:
      backupRetentionDays: 90
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  - ::/0
  kubernetesVersion: 1.26.7
  networkCIDR: 10.10.0.0/16
  networking:
    calico: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - name: fsn1
    type: Public
    zone: fsn1
  topology:
    dns:
      type: None
    masters: public
    nodes: public
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-08-20T17:35:49Z"
  labels:
    kops.k8s.io/cluster: test.example.k8s.local
  name: control-plane-fsn1
spec:
  image: ubuntu-20.04
  machineType: cx21
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - fsn1
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2023-08-20T17:35:50Z"
  labels:
    kops.k8s.io/cluster: test.example.k8s.local
  name: nodes-fsn1
spec:
  image: ubuntu-20.04
  machineType: cx21
  maxSize: 1
  minSize: 1
  role: Node
  subnets:
  - fsn1
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
https://gist.github.com/kespineira/33a1f984674ef86baa92db87fb7c4f77
9. Anything else do we need to know? No. Thanks in advance.
Hi @hakman, I have tested it with other IP addresses and it works fine. Thanks a lot.
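For reference, a minimal sketch of retrying without the custom CIDR; this assumes the conflict came from the --network-cidr in the original command and simply lets kops pick its default instead (not necessarily the exact addresses that were tested):
kops delete cluster --name test.example.k8s.local --yes        # remove the broken attempt first
kops create cluster --name=test.example.k8s.local \
  --ssh-public-key=~/.ssh/hetzner.pub --cloud=hetzner --zones=fsn1 \
  --image=ubuntu-20.04 --networking=calico --kubernetes-version 1.26.7
kops update cluster --name test.example.k8s.local --yes --admin
kops validate cluster --name test.example.k8s.local --wait 10m   # wait for the cluster to become ready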
@kespineira yeah, same on my end. @hakman I'll try it, thank you.
Thanks mate! I have managed to get the cluster up, but the node gets a 403 error when trying to download the kubelet.
I tried the download myself with wget from the node and got a 403 from the server, while locally it works with no problem.
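To narrow the 403 down, comparing the exact same URL from both places can help. A rough sketch, assuming the failing URL is copied from the nodeup/kops-configuration logs on the node (the value below is a placeholder, not the real URL):
KUBELET_URL="<url-from-nodeup-logs>"              # placeholder; take the real URL from the node's logs
# run once on the node and once on a local machine, then compare the HTTP status codes
curl -sSLo /dev/null -w "%{http_code}\n" "$KUBELET_URL"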