kubernetes-cd-plugin: EKS authentication failing
I’m trying to set up a deployment job against an EKS cluster that is already working when used manually, but whenever the kubernetesDeploy pipeline step runs it fails with an authentication error:
ERROR: io.fabric8.kubernetes.client.KubernetesClientException:
Failure executing: GET at: https://*****.
Message: Forbidden! User arn:aws:eks:us-east-******** doesn't have permission. deployments.extensions "*********" is forbidden: User "system:anonymous" cannot get deployments.extensions in the namespace "default".
To replicate manual cluster authentication, I’ve made sure the aws-iam-authenticator tool is available on every slave’s PATH, and my pre-deploy stage generates the ~/.aws/credentials file the authenticator needs to generate a token, appending the Jenkins IAM access key ID and secret access key under the required profile:
[jenkins]
aws_access_key_id = ****
aws_secret_access_key = *****
I’ve verified that it generates with the correct secrets and format.
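For reference, that pre-deploy step can be sketched as a shell snippet; the `JENKINS_AWS_KEY_ID` / `JENKINS_AWS_SECRET` variable names are hypothetical stand-ins for whatever the Credentials Binding plugin injects, but the profile name must stay `jenkins` to match the kubeconfig:

```shell
# Write the [jenkins] profile that aws-iam-authenticator will read.
# AWS_DIR defaults to the usual location; override it for testing.
AWS_DIR="${AWS_DIR:-$HOME/.aws}"
mkdir -p "$AWS_DIR"
cat > "$AWS_DIR/credentials" <<EOF
[jenkins]
aws_access_key_id = ${JENKINS_AWS_KEY_ID:-****}
aws_secret_access_key = ${JENKINS_AWS_SECRET:-****}
EOF
chmod 600 "$AWS_DIR/credentials"
```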
The kubeconfig stored in Jenkins is the one generated with the AWS CLI, as specified here, with one modification in the user block: the profile read from the credentials file generated above should be the jenkins one, as follows:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ******
    server: ***
  name: arn:aws:eks:us-east-1:****
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:****
    user: arn:aws:eks:us-east-1:****
  name: arn:aws:eks:us-east-1:****
- context:
    cluster: arn:aws:eks:us-east-1:****
    user: arn:aws:eks:us-east-1:****
  name: arn:aws:eks:us-east-1:****
current-context: arn:aws:eks:us-east-1:****
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:****
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - ****
      env:
      - name: "AWS_PROFILE"
        value: "jenkins"
      command: aws-iam-authenticator
I also edited the cluster permissions (the aws-auth ConfigMap) as described in the EKS docs, mapping the jenkins IAM user with the correct permissions block:
Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                ****

Data
====
mapRoles:
----
- rolearn: ****
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes

mapUsers:
----
- userarn: arn:aws:iam::****:user/jenkins
  username: jenkins
  groups:
    - system:masters

Events:  <none>
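The mapping above can be sanity-checked from a workstation with the same IAM identity. A sketch, assuming the cluster is named `my-cluster` in us-east-1 (both hypothetical):

```shell
# Build a kubeconfig as the jenkins IAM user, then ask the API server
# what that identity may do.
AWS_PROFILE=jenkins aws eks update-kubeconfig --name my-cluster --region us-east-1
kubectl auth can-i get deployments -n default   # expect "yes" for system:masters
```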
This works when using the jenkins profile to manually affect the cluster.
My expected behaviour is as follows:
- Job is run
- ~/.aws/credentials file is generated on the slave with access keys from the Credentials Binding plugin
- kubernetes-cd calls aws-iam-authenticator to generate a token for its API use; the tool generates the token with the credentials specified in ~/.aws/credentials, and the deployment proceeds as usual
However, despite the credentials being correct, the initial error log shows that the plugin is not authenticating to the cluster at all, so the system:anonymous permissions are applied and the deployment fails.
Have I just made a huge mistake in configuration somewhere along the line?
About this issue
- Original URL
- State: open
- Created 6 years ago
- Reactions: 6
- Comments: 15 (1 by maintainers)
Will this plugin support blue/green deployments to EKS?
Obligatory 6 month follow-up.
Any updates? I would love to use this plugin for EKS.
It looks like the plugin cannot handle working with aws-iam-authenticator; it just doesn’t seem to call it even when it is in the kubeconfig 😕 I have been installing kubectl and aws-iam-authenticator on my agents and using the kubernetes-cli plugin to run kubectl in sh blocks until this plugin is updated. Beware that 4.1.1 is broken in this regard.
Is this project still being maintained? cc: @ArieShout
For me, it works only with 4.1.2 (the latest version of kubernetes-client at the moment). I tried 4.1.0 and 4.1.1 and they didn’t work.