kops: sshKeyName throws Secret Error
------------- BUG REPORT -------------------
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: johnd-kops.k8s.local
  creationTimestamp: 2017-10-23T21:07:46Z
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    RBAC: {}
  channel: stable
  cloudLabels:
    Team: conductor-testing
  cloudProvider: aws
  configBase: s3://conductor-testing-kops-state/johnd-kops.k8s.local
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-east-1d
      name: d
    name: main
  - etcdMembers:
    - instanceGroup: master-us-east-1d
      name: d
    name: events
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.7.8
  masterInternalName: blue-johnd-kops.k8s.local
  masterPublicName: johnd-kops.k8s.local
  networkCIDR: 172.31.0.0/16
  networkID: omitted
  networking:
    calico: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshKeyName: john-testing-key
  sshAccess:
  - omitted
  subnets:
  - cidr: 172.31.100.0/24
    name: us-east-1d
    type: Private
    zone: us-east-1d
  - cidr: 172.31.100.0/24
    name: utility-us-east-1d
    type: Utility
    zone: us-east-1d
  topology:
    dns:
      type: Public
    masters: private
    nodes: private
  fileAssets:
  - name: bootstrap.yaml
    # Note: if no path is specified, the default path is /srv/kubernetes/assets/<name>
    path: /etc/kubernetes/manifests/boostrap.yml
    roles: [Master]
    content: |
      apiVersion: v1
      kind: Namespace
      metadata:
        name: something
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-10-23T19:15:02Z
  name: nodes
  labels:
    kops.k8s.io/cluster: johnd-kops.k8s.local
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20170721
  machineType: t2.medium
  maxSize: 3
  minSize: 3
  role: Node
  subnets:
  - us-east-1d
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-10-23T19:15:02Z
  name: master-us-east-1d
  labels:
    kops.k8s.io/cluster: johnd-kops.k8s.local
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20170721
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-east-1d
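(For context, `sshKeyName` is expected to reference an EC2 key pair that already exists in the account and region the cluster is built in. A quick, optional sanity check, not part of the original report; the key name comes from the manifest above and the region is inferred from the us-east-1d zone:)

# Confirm the key pair referenced by sshKeyName exists in the target region.
aws ec2 describe-key-pairs --key-names john-testing-key --region us-east-1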
- What kops version are you running (`kops version`)? 1.8.0
- What Kubernetes version are you running (`kubectl version`)? 1.8
- What cloud provider are you using? aws
- What commands did you execute (please provide the cluster manifest, `kops get --name my.example.com`, if available) and what happened after the commands executed? `kops update cluster $NAME --yes`
Output:
SSH public key must be specified when running with AWS (create with `kops create secret --name johnd-kops.k8s.local sshpublickey admin -i ~/.ssh/id_rsa.pub`)
6. What you expected to happen:
Cluster to be created
7. How can we reproduce it (as minimally and precisely as possible):
kops create -f [myspecfromabove.yml]
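A minimal reproduction sketch, assuming the manifest above is saved as cluster.yml (the file name and exported variables are illustrative, not from the original report):

# State store bucket taken from configBase in the manifest above.
export NAME=johnd-kops.k8s.local
export KOPS_STATE_STORE=s3://conductor-testing-kops-state

kops create -f cluster.yml         # registers the Cluster and InstanceGroup objects
kops update cluster $NAME --yes    # fails with "SSH public key must be specified when running with AWS"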
8. Anything else do we need to know:
I think it's just an exception that needs to be tuned. Happy to contribute any other information.
About this issue
- State: closed
- Created 7 years ago
- Comments: 20 (3 by maintainers)
This is still an issue. I thought this was going to get fixed in 1.9.0.
I can verify that if you run (using kops 1.8.0-beta.1) `kops create secret --name $NAME sshpublickey admin -i ~/.ssh/id_rsa.pub`, cluster creation will complete successfully and will not create a new key inside AWS, but will use the existing key pair defined in `sshKeyName`.
Can someone clarify whether the `kops create secret` command should be necessary, or if it's simply a remnant of previous behavior that will eventually be cleaned up?
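For anyone landing here, a sketch of the workaround described above, assuming the same cluster name and a local public key at ~/.ssh/id_rsa.pub (paths and names are illustrative):

# Register an SSH public key secret so validation passes; with sshKeyName set,
# kops uses the existing EC2 key pair instead of creating a new one.
kops create secret --name $NAME sshpublickey admin -i ~/.ssh/id_rsa.pub
kops update cluster $NAME --yes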