aws-cdk: (aws-eks): kubectl layer is not compatible with k8s v1.22.0
Describe the bug
Running an empty update on an empty EKS cluster fails while updating the resource EksClusterAwsAuthmanifest12345678 (Custom::AWSCDK-EKS-KubernetesResource).
Expected Behavior
The update should succeed.
Current Behavior
It fails with the following error:
Received response status [FAILED] from custom resource. Message returned: Error: b'configmap/aws-auth configured\nerror: error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "extensions/v1beta1"\n' Logs: /aws/lambda/InfraMainCluster-awscdkawseksKubec-Handler886CB40B-rDGV9O3CyH7n at invokeUserFunction (/var/task/framework.js:2:6) at processTicksAndRejections (internal/process/task_queues.js:97:5) at async onEvent (/var/task/framework.js:1:302) at async Runtime.handler (/var/task/cfn-response.js:1:1474) (RequestId: acd049fc-771c-4410-8e09-8ec4bec67813)
Reproduction Steps
This is what I did:
- Deploy an empty cluster:
import * as cdk from "aws-cdk-lib";
import { Construct } from "constructs";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as eks from "aws-cdk-lib/aws-eks";
import * as iam from "aws-cdk-lib/aws-iam";

export class EksClusterStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props);

    // Admin role mapped into the aws-auth ConfigMap via mastersRole below.
    const clusterAdminRole = new iam.Role(this, "ClusterAdminRole", {
      assumedBy: new iam.AccountRootPrincipal(),
    });

    const vpc = ec2.Vpc.fromLookup(this, "MainVpc", {
      vpcId: "vpc-1234567890123456789",
    });

    const cluster = new eks.Cluster(this, "EksCluster", {
      vpc: vpc,
      vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_NAT }],
      clusterName: `${id}`,
      mastersRole: clusterAdminRole,
      defaultCapacity: 0,
      version: eks.KubernetesVersion.V1_22,
    });

    cluster.addFargateProfile("DefaultProfile", {
      selectors: [{ namespace: "default" }],
    });
  }
}
- Add a new Fargate profile:
cluster.addFargateProfile("IstioProfile", {
  selectors: [{ namespace: "istio-system" }],
});
- Deploy the stack and wait for the failure.
Possible Solution
No response
Additional Information/Context
I checked the version of kubectl in the lambda handler and it’s 1.20.0, which AFAIK is not compatible with cluster version 1.22.0. I’m not entirely sure how the lambda is created; I thought it matched the kubectl version to whatever version the cluster has, but indeed it does not (#15736).
CDK CLI Version
2.20.0 (build 738ef49)
Framework Version
No response
Node.js Version
v16.13.0
OS
Darwin 21.3.0
Language
Typescript
Language Version
3.9.10
Other information
Similar to #15072?
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Reactions: 35
- Comments: 27 (7 by maintainers)
Commits related to this issue
- remove prune setting for eks cluster see https://github.com/aws/aws-cdk/issues/19843 — committed to dominodatalab/cdk-cf-eks by steved 2 years ago
- remove prune setting for eks cluster (#105) * remove prune setting for eks cluster see https://github.com/aws/aws-cdk/issues/19843 * empty commit to trigger CI — committed to dominodatalab/cdk-cf-eks by steved 2 years ago
@akefirad Yesterday I had the same issue. As a temporary solution, you can create your own lambda layer version and pass it as a parameter to the Cluster construct. Here is my solution in Python; it’s just a combination of AwsCliLayer and KubectlLayer.
My code builds layer.zip on every synth, but you can build it once when you need it and keep layer.zip in your repository. The files are listed below; a rough TypeScript sketch of the same idea follows the file list.
assets/kubectl-layer/build.sh
assets/kubectl-layer/Dockerfile
assets/kubectl-layer/requirements.txt
kubectl_layer.py
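A minimal sketch of that approach in TypeScript, assuming a pre-built assets/kubectl-layer/layer.zip containing a kubectl binary that matches the cluster version (the construct name and asset path here are illustrative, not from the original comment):

import * as lambda from "aws-cdk-lib/aws-lambda";

// Custom layer with kubectl (and the aws CLI) built from the Dockerfile
// above; the binaries must live under the layer's bin/ directory.
const kubectlLayer = new lambda.LayerVersion(this, "CustomKubectlLayer", {
  code: lambda.Code.fromAsset("assets/kubectl-layer/layer.zip"),
  description: "kubectl matching the EKS cluster version",
});

const cluster = new eks.Cluster(this, "EksCluster", {
  version: eks.KubernetesVersion.V1_22,
  kubectlLayer, // replaces the bundled kubectl 1.20 layer
  // ...remaining props as in the reproduction above...
});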
Thanks @jaredhancock31 ! This helped me a lot. ^_^
If anyone needs it, here is my example implementation in Go that I tweaked from the original cdk init file (go-cdk.go), with the suggestion above, to upgrade a cluster I was experimenting with. Complete code here: https://gist.github.com/andrewbulin/e23c313008372d4e5149899817bebe32
Snippet here:
There is a separate module you need to install, aws-cdk.lambda-layer-kubectl-v23; then you can import it with from aws_cdk import lambda_layer_kubectl_v23.
Hi, you need to add the package @aws-cdk/lambda-layer-kubectl-v23 to your (dev) dependencies and import the layer from that package.
Release this week should have a way to use an updated kubectl layer.
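For illustration, a minimal TypeScript sketch of wiring up the published layer package (assuming @aws-cdk/lambda-layer-kubectl-v23 is added to your dependencies):

import { KubectlV23Layer } from "@aws-cdk/lambda-layer-kubectl-v23";

// The versioned layer ships a kubectl that matches Kubernetes 1.23.
const cluster = new eks.Cluster(this, "EksCluster", {
  version: eks.KubernetesVersion.V1_23,
  kubectlLayer: new KubectlV23Layer(this, "KubectlLayer"),
});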
A solution is announced for mid-September; see this issue.
Same here. CDK version 2.37.0.
FYI, a workaround is to set prune to false. This of course has some side effects, but you can mitigate them by ensuring there’s only one Kubernetes object per manifest.
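As a sketch, this is the prune property on the Cluster construct (other props omitted):

// Disabling prune means kubectl apply runs without --prune, so objects
// removed from a manifest are no longer deleted automatically.
const cluster = new eks.Cluster(this, "EksCluster", {
  version: eks.KubernetesVersion.V1_22,
  prune: false,
});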
Any docs/guidance on how to proceed using Golang? Can’t find a proper module to import…
Update: the Go module is buried here for anyone else hunting: https://github.com/cdklabs/awscdk-kubectl-go/tree/kubectlv22/v2.0.3/kubectlv22
I am using aws-cdk-go and wasn’t able to find lambda-layer-kubectl-v23 in the Go package dependencies.
Hello
I see 1.23 support has been merged! 🎉 Thanks for the effort there.
Re: KubectlV23Layer - is this still an experimental feature? We’d like to implement a V1_23 kubectl layer using the Java edition; however, this doesn’t seem possible at this stage?
@cgarvis Thank you for the update. We are waiting impatiently for the release.
I thought this would get auto-closed once #20000 was merged 😅
No reason to keep this issue open with #20000 merged I think. Thanks for the ping