aws-iam-authenticator: Bad certificate error

Kubernetes Version: 1.8.4

I’m attempting to give this a try in a testing cluster I’ve spun up with kops 1.8.0. I think I’m quite close, but it looks like I’ve got a certificate problem somewhere. When I run the following, I get a few errors:

kubectl --kubeconfig=/path/to/my/kubeconfig --token="$(heptio-authenticator-aws token -i mycluster.local)" get nodes

The first appears immediately on my local workstation: error: You must be logged in to the server (Unauthorized)

The second is from the container itself: time="2017-12-05T20:14:47Z" level=info msg="http: TLS handshake error from 127.0.0.1:41980: remote error: tls: bad certificate" http=error

I’m assuming I have the wrong certificate configured somewhere, but from the docs it is not clear where that might be. Does the authenticator need to use the cluster certificates (in this case generated by kops) or are the certs it generates on its own correct?
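
For context, here is the kind of check I think should reveal the mismatch on a master node (a rough sketch; the paths assume the default hostPath mounts from the example manifests and may differ in a kops setup):

# Rough sketch (assumed paths): compare the CA the apiserver trusts in the
# webhook kubeconfig with the certificate the authenticator is actually serving.
grep certificate-authority-data /srv/kubernetes/heptio-authenticator-aws/kubeconfig.yaml \
  | awk '{print $2}' | base64 -d | openssl x509 -noout -fingerprint
openssl x509 -noout -fingerprint -in /srv/kubernetes/heptio-authenticator-aws/cert.pem
# A fingerprint mismatch would explain the "remote error: tls: bad certificate" line above.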

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 2
  • Comments: 24 (9 by maintainers)

Most upvoted comments

I get the following error:

aws-iam-authenticator-4v2rj aws-iam-authenticator time="2019-01-10T00:00:15Z" level=info msg="mapping IAM role" groups="[system:masters]" role="arn:aws:iam::xxxxxxxxxxxx:role/kubernetes-admin" username="kubernetes-admin:{{AccountID}}:{{SessionName}}"
aws-iam-authenticator-4v2rj aws-iam-authenticator time="2019-01-10T00:00:15Z" level=info msg="loaded existing keypair" certPath=/var/aws-iam-authenticator/cert.pem keyPath=/var/aws-iam-authenticator/key.pem
aws-iam-authenticator-4v2rj aws-iam-authenticator time="2019-01-10T00:00:15Z" level=info msg="listening on https://127.0.0.1:21362/authenticate"
aws-iam-authenticator-4v2rj aws-iam-authenticator time="2019-01-10T00:00:15Z" level=info msg="reconfigure your apiserver with `--authentication-token-webhook-config-file=/etc/kubernetes/heptio-authenticator-aws/kubeconfig.yaml` to enable (assuming default hostPath mounts)"
aws-iam-authenticator-4v2rj aws-iam-authenticator time="2019-01-10T00:26:21Z" level=info msg="http: TLS handshake error from 127.0.0.1:52350: remote error: tls: bad certificate" http=error

Running aws-iam-authenticator (gcr.io/heptio-images/authenticator:v0.3.0) with kops v1.11.0 and k8s 1.11.5.

The complete list of steps to get the authenticator working for a kops cluster (or any cluster) is at https://github.com/heptio/authenticator#kops-usage, which includes step 3: “If the cluster already exists, roll the cluster with kops rolling-update cluster ${CLUSTER_NAME} in order to recreate the master nodes.”
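
For reference, that roll looks something like this (a sketch; --force makes kops replace the masters even when it does not detect a spec change, so the hook runs and the apiserver restarts with the downloaded certs in place):

# Recreate the masters so the apiserver starts with the authenticator's
# webhook kubeconfig and certs already on disk.
kops rolling-update cluster ${CLUSTER_NAME} --yes --force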

@Raffo, this sounds like the masters don’t have the certs when they boot, so the authenticator generates its own when you apply the DaemonSet, and those certs would not yet have been loaded into the apiserver.
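
A quick way to check that theory on a master (a rough sketch; the flag is the standard apiserver one and the directory assumes the kops hook layout from the steps below):

# Confirm the apiserver was started with the webhook flag and that the S3
# hook actually downloaded the authenticator config before kubelet started.
ps aux | grep '[k]ube-apiserver' | tr ' ' '\n' | grep authentication-token-webhook-config-file
ls -l /srv/kubernetes/heptio-authenticator-aws/
# An empty directory or a missing flag means the apiserver never loaded the
# webhook config, so every token is rejected with Unauthorized.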

Here are the steps I just took, and it worked without restarting… Please tell me if you are doing something different. I think steps 6 and 9 are what is missing, but that’s just a hunch.

  1. export CLUSTER_NAME=auth.debug.weave.k8s.local
  2. kops create cluster ${CLUSTER_NAME} --zones us-west-1a --networking weave
  3. kops edit cluster ${CLUSTER_NAME}
  4. Add the following to the .spec
# ...
  kubeAPIServer:
    authenticationTokenWebhookConfigFile: /srv/kubernetes/heptio-authenticator-aws/kubeconfig.yaml
  hooks:
  - name: kops-hook-authenticator-config.service
    before:
      - kubelet.service
    roles: [Master]
    manifest: |
      [Unit]
      Description=Download Heptio AWS Authenticator configs from S3
      [Service]
      Type=oneshot
      ExecStart=/bin/mkdir -p /srv/kubernetes/heptio-authenticator-aws
      ExecStart=/usr/local/bin/aws s3 cp --recursive ${KOPS_STATE_STORE}/${CLUSTER_NAME}/addons/authenticator /srv/kubernetes/heptio-authenticator-aws/
  5. Make sure to replace ${KOPS_STATE_STORE} and ${CLUSTER_NAME} with your actual values.
  6. Generate and upload the authenticator assets:
heptio-authenticator-aws init -i $CLUSTER_NAME
aws s3 cp cert.pem ${KOPS_STATE_STORE}/${CLUSTER_NAME}/addons/authenticator/cert.pem
aws s3 cp key.pem ${KOPS_STATE_STORE}/${CLUSTER_NAME}/addons/authenticator/key.pem
aws s3 cp heptio-authenticator-aws.kubeconfig ${KOPS_STATE_STORE}/${CLUSTER_NAME}/addons/authenticator/kubeconfig.yaml
  7. kops update cluster ${CLUSTER_NAME} --yes
  8. watch kops validate cluster until it succeeds
  9. Set the output and state paths in example.yaml, and make sure to set the cluster name and a role to map (a rough sketch of the relevant ConfigMap section follows after this list).
  10. kubectl apply -f example.yaml
  11. kubectl logs -f -n kube-system [authenticator POD] in a new terminal window
  12. Create a new user in your kubeconfig with:
user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "CLUSTER_NAME"
        - "-r"
        - "ROLE_ARN"
  13. Set the context to use the new user.
  14. kubectl get nodes and watch the log output (a quick sanity-check sketch follows below).
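
For step 9, the part of example.yaml I mean is the authenticator ConfigMap. A rough sketch of how it ends up (the values are placeholders, and the exact key names come from the example.yaml in the release you downloaded):

# Sketch of the ConfigMap data in example.yaml (placeholder values).
# clusterID must match the -i flag passed to "heptio-authenticator-aws token".
config.yaml: |
  clusterID: auth.debug.weave.k8s.local
  server:
    mapRoles:
    # Maps an IAM role to a Kubernetes user and groups; this is what produces
    # the "mapping IAM role" log line shown in the comment above.
    - roleARN: arn:aws:iam::xxxxxxxxxxxx:role/kubernetes-admin
      username: kubernetes-admin:{{AccountID}}:{{SessionName}}
      groups:
      - system:masters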
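
And for steps 13 and 14, a quick sanity check before blaming the cluster (a sketch; CLUSTER_NAME, ROLE_ARN, and the context/user names are placeholders for whatever your kubeconfig actually uses):

# Generate a token locally first; if this fails, the problem is AWS
# credentials or the role trust policy, not the authenticator pod.
heptio-authenticator-aws token -i CLUSTER_NAME -r ROLE_ARN

# Then point kubectl at the new user and watch the authenticator logs
# from step 11 while running a simple request.
kubectl config set-context authenticator-test --cluster=CLUSTER_NAME --user=NEW_USER_NAME
kubectl config use-context authenticator-test
kubectl get nodes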