aws-cdk: (aws-eks): kubectl layer is not compatible with k8s v1.22.0

Describe the bug

Running an empty update on an empty EKS cluster fails while updating the resource EksClusterAwsAuthmanifest12345678 (Custom::AWSCDK-EKS-KubernetesResource).

Expected Behavior

The update should succeed.

Current Behavior

It fails with the following error:

Received response status [FAILED] from custom resource. Message returned: Error: b'configmap/aws-auth configured\nerror: error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "extensions/v1beta1"\n' Logs: /aws/lambda/InfraMainCluster-awscdkawseksKubec-Handler886CB40B-rDGV9O3CyH7n at invokeUserFunction (/var/task/framework.js:2:6) at processTicksAndRejections (internal/process/task_queues.js:97:5) at async onEvent (/var/task/framework.js:1:302) at async Runtime.handler (/var/task/cfn-response.js:1:1474) (RequestId: acd049fc-771c-4410-8e09-8ec4bec67813)

Reproduction Steps

This is what I did:

  1. Deploy an empty cluster:
import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as eks from "aws-cdk-lib/aws-eks";
import * as iam from "aws-cdk-lib/aws-iam";
import { Construct } from "constructs";

export class EksClusterStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props);

    const clusterAdminRole = new iam.Role(this, "ClusterAdminRole", {
      assumedBy: new iam.AccountRootPrincipal(),
    });

    const vpc = ec2.Vpc.fromLookup(this, "MainVpc", {
      vpcId: "vpc-1234567890123456789",
    });

    const cluster = new eks.Cluster(this, "EksCluster", {
      vpc: vpc,
      vpcSubnets: [{ subnetType: ec2.SubnetType.PRIVATE_WITH_NAT }],
      clusterName: `${id}`,
      mastersRole: clusterAdminRole,
      defaultCapacity: 0,
      version: eks.KubernetesVersion.V1_22,
    });

    cluster.addFargateProfile("DefaultProfile", {
      selectors: [{ namespace: "default" }],
    });
  }
}
  2. Add a new Fargate profile:
    cluster.addFargateProfile("IstioProfile", {
      selectors: [{ namespace: "istio-system" }],
    });
  3. Deploy the stack and wait for the failure.

Possible Solution

No response

Additional Information/Context

I checked the version of kubectl in the Lambda handler and it’s 1.20.0, which AFAIK is not compatible with cluster version 1.22.0. I’m not entirely sure how the Lambda is created; I thought it bundles a kubectl matching whatever version the cluster has. ~But it seems it’s not~ It is indeed not the case (#15736).

CDK CLI Version

2.20.0 (build 738ef49)

Framework Version

No response

Node.js Version

v16.13.0

OS

Darwin 21.3.0

Language

Typescript

Language Version

3.9.10

Other information

Similar to #15072?

About this issue

  • State: closed
  • Created 2 years ago
  • Reactions: 35
  • Comments: 27 (7 by maintainers)

Most upvoted comments

@akefirad Yesterday I had the same issue. As a temporary solution, you can create your own Lambda layer version and pass it as a parameter to the Cluster construct. Here is my solution in Python; it’s just a combination of AwsCliLayer and KubectlLayer.

My code builds layer.zip on every synth, but you can build it once when you need it and save layer.zip in your repository.

assets/kubectl-layer/build.sh

#!/bin/bash
set -euo pipefail

cd "$(dirname "$0")"

echo ">> Building AWS Lambda layer inside a docker image..."

TAG='kubectl-lambda-layer'

docker build -t ${TAG} .

echo ">> Extrating layer.zip from the build container..."
CONTAINER=$(docker run -d ${TAG} false)
docker cp ${CONTAINER}:/layer.zip layer.zip

echo ">> Stopping container..."
docker rm -f ${CONTAINER}
echo ">> layer.zip is ready"

assets/kubectl-layer/Dockerfile

# base lambda image
FROM public.ecr.aws/sam/build-python3.7

#
# versions
#

# The kubectl version bundled with the CDK cannot be changed at the moment, see https://github.com/aws/aws-cdk/issues/15736.
# kubectl 1.20 (and lower) is not compatible with a 1.22 server, hence this custom layer.
ARG KUBECTL_VERSION=1.22.0
ARG HELM_VERSION=3.8.1

USER root
RUN mkdir -p /opt
WORKDIR /tmp

#
# tools
#

RUN yum update -y \
    && yum install -y zip unzip wget tar gzip

#
# aws cli
#

COPY requirements.txt ./
RUN python -m pip install -r requirements.txt -t /opt/awscli

# organize for self-contained usage
RUN mv /opt/awscli/bin/aws /opt/awscli

# cleanup
RUN rm -rf \
    /opt/awscli/pip* \
    /opt/awscli/setuptools* \
    /opt/awscli/awscli/examples


#
# Test that the CLI works
#

RUN yum install -y groff
RUN /opt/awscli/aws help

#
# kubectl
#

RUN mkdir -p /opt/kubectl
RUN cd /opt/kubectl && curl -LO "https://storage.googleapis.com/kubernetes-release/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
RUN chmod +x /opt/kubectl/kubectl

#
# helm
#

RUN mkdir -p /tmp/helm && wget -qO- https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz | tar -xvz -C /tmp/helm
RUN mkdir -p /opt/helm && cp /tmp/helm/linux-amd64/helm /opt/helm/helm

#
# create the bundle
#

RUN cd /opt \
    && zip --symlinks -r ../layer.zip * \
    && echo "/layer.zip is ready" \
    && ls -alh /layer.zip;

WORKDIR /
ENTRYPOINT [ "/bin/bash" ]

assets/kubectl-layer/requirements.txt

awscli==1.22.92

kubectl_layer.py

import builtins
import typing
import subprocess

import aws_cdk as cdk

from aws_cdk import (
    aws_lambda as lambda_
)

from constructs import Construct

class KubectlLayer(lambda_.LayerVersion):

    def __init__(self, scope: Construct, construct_id: builtins.str, *,
        compatible_architectures: typing.Optional[typing.Sequence[lambda_.Architecture]] = None,
        compatible_runtimes: typing.Optional[typing.Sequence[lambda_.Runtime]] = None,
        layer_version_name: typing.Optional[builtins.str] = None,
        license: typing.Optional[builtins.str] = None,
        removal_policy: typing.Optional[cdk.RemovalPolicy] = None
    ) -> None:

        subprocess.check_call(["<path to assets/kubectl-layer/build.sh>"])  # build layer.zip on every run

        super().__init__(scope, construct_id,
            # Note: asset_file / asset_dir are not defined in this snippet;
            # replace them with your own path resolution (e.g. plain strings).
            code=lambda_.AssetCode(
                path=asset_file("<path to created assets/kubectl-layer/layer.zip>"),
                asset_hash=cdk.FileSystem.fingerprint(
                    file_or_directory=asset_dir("<path to assets/kubectl-layer/ dir>"),
                    exclude=["*.zip"]
                )
            ),
            description="/opt/awscli/aws, /opt/kubectl/kubectl and /opt/helm/helm",
            compatible_architectures=compatible_architectures,
            compatible_runtimes=compatible_runtimes,
            layer_version_name=layer_version_name,
            license=license,
            removal_policy=removal_policy
        )
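
For readers of the original (TypeScript) issue, here is a minimal sketch of the “pass it as a parameter to the Cluster construct” step, inside a stack constructor. It assumes layer.zip was built with the script above (the asset path is illustrative) and that your aws-cdk-lib version exposes the kubectlLayer cluster property:

import * as path from 'path';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as lambda from 'aws-cdk-lib/aws-lambda';

// Wrap the pre-built layer.zip (awscli + kubectl + helm) in a Lambda layer version.
const kubectlLayer = new lambda.LayerVersion(this, 'KubectlLayer', {
  code: lambda.Code.fromAsset(path.join(__dirname, 'assets', 'kubectl-layer', 'layer.zip')),
  description: '/opt/awscli/aws, /opt/kubectl/kubectl and /opt/helm/helm',
});

// Pass the custom layer to the cluster so the kubectl handler uses it
// instead of the kubectl 1.20 bundled with aws-cdk-lib.
const cluster = new eks.Cluster(this, 'EksCluster', {
  version: eks.KubernetesVersion.V1_22,
  kubectlLayer,
});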

I am using aws-cdk-go and couldn’t find lambda-layer-kubectl-v23 in the Go package dependencies.

Any docs/guidance on how to proceed using Golang? Can’t find a proper module to import…

Update: the Go module is buried here for anyone else hunting: https://github.com/cdklabs/awscdk-kubectl-go/tree/kubectlv22/v2.0.3/kubectlv22

Thanks @jaredhancock31 ! This helped me a lot. ^_^

If anyone needs it, here is my example implementation in Go, tweaked from the original cdk init file go-cdk.go and the suggestion above, which I used to upgrade a cluster I was experimenting with.

complete code here: https://gist.github.com/andrewbulin/e23c313008372d4e5149899817bebe32

snippet here:

	cluster := awseks.NewCluster(
		stack,
		jsii.String("UpgradeMe"),
		&awseks.ClusterProps{
			Version:      awseks.KubernetesVersion_V1_22(),
			KubectlLayer: kubectlv22.NewKubectlV22Layer(stack, jsii.String("kubectl")),
			ClusterName:  jsii.String("upgrade-me"),
			ClusterLogging: &[]awseks.ClusterLoggingTypes{
				awseks.ClusterLoggingTypes_AUDIT,
			},
		},
	)

I also couldn’t import lambda_layer_kubectl_v23 from the Python package (aws-cdk-lib==2.50.0).

There is a separate module you need to install, aws-cdk.lambda-layer-kubectl-v23; then you can import it with from aws_cdk import lambda_layer_kubectl_v23.

Hello,

Thank you for the new release to support EKS 1.23.

But when I deployed the stack to create EKS 1.23, I got this warning:

You created a cluster with Kubernetes Version 1.23 without specifying the kubectlLayer property. 
This may cause failures as the kubectl version provided with aws-cdk-lib is 1.20, 
which is only guaranteed to be compatible with Kubernetes versions 1.19-1.21. 
Please provide a kubectlLayer from @aws-cdk/lambda-layer-kubectl-v23.

Then I tried to follow the documentation:

import { KubectlV23Layer } from 'aws-cdk-lib/lambda-layer-kubectl-v23';

const cluster = new eks.Cluster(this, 'hello-eks', {
  version: eks.KubernetesVersion.V1_23,
  kubectlLayer: new KubectlV23Layer(this, 'kubectl'),
});

But there seems to be no lambda-layer-kubectl-v23 package under aws-cdk-lib v2.50.0. Is lambda-layer-kubectl-v23 available now?

Hi, you need to add the package @aws-cdk/lambda-layer-kubectl-v23 to your (dev) dependencies and import the layer from that package.
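
In other words, the layer is not part of aws-cdk-lib; the import in the snippet above should point at the separate package. A minimal sketch (after installing @aws-cdk/lambda-layer-kubectl-v23 as a dev dependency), inside a stack constructor:

import * as eks from 'aws-cdk-lib/aws-eks';
// The v1.23 kubectl layer ships in its own package, not under aws-cdk-lib.
import { KubectlV23Layer } from '@aws-cdk/lambda-layer-kubectl-v23';

const cluster = new eks.Cluster(this, 'hello-eks', {
  version: eks.KubernetesVersion.V1_23,
  // Supply the matching kubectl so the handler no longer falls back to kubectl 1.20.
  kubectlLayer: new KubectlV23Layer(this, 'kubectl'),
});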

This week’s release should have a way to use an updated kubectl layer.

A solution is announced for mid-September; see this issue.

Same here. CDK version 2.37.0.

FYI, a workaround is to set prune to false. This of course has some side effects, but you can mitigate them by ensuring there’s only one Kubernetes object per manifest.
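
A minimal sketch of that workaround, assuming the cluster-level prune option (it can also be set per KubernetesManifest). With pruning disabled, objects removed from a manifest are no longer deleted automatically:

import * as eks from 'aws-cdk-lib/aws-eks';

// Disabling pruning sidesteps the "error retrieving RESTMappings to prune" failure,
// because kubectl apply is no longer asked to prune.
const cluster = new eks.Cluster(this, 'EksCluster', {
  version: eks.KubernetesVersion.V1_22,
  prune: false,
});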

Hello

I see 1.23 support has been merged! 🎉 Thanks for the effort there.

Re: KubectlV23Layer: is this still an experimental feature? We’d like to implement a V1_23 kubectl layer using the Java edition, but this doesn’t seem possible at this stage.

@cgarvis Thank you for the update. We are waiting impatiently for the release.

I thought this would get auto closed once #20000 was merged 😅

No reason to keep this issue open with #20000 merged I think. Thanks for the ping