aws-cdk: aws_eks: error retrieving RESTMappings to prune

Describe the bug

After using aws_eks.Cluster.add_manifest to apply Kubernetes objects to my cluster for the first time, any subsequent attempt to update my application or any other manifest fails with the following error:

[INFO]	2022-12-19T18:03:17.565Z	4adf551c-1846-45a6-82bf-d16d68c20512	Running command: ['kubectl', 'apply', '--kubeconfig', '/tmp/kubeconfig', '-f', '/tmp/manifest.yaml', '--prune', '-l', 'aws.cdk.eks/prune-c890e450b1abcaee1ebedde5645f1196f5ae447ab8']
[ERROR] Exception: b'ingress.networking.k8s.io/gitlab configured\nerror: error retrieving RESTMappings to prune: invalid resource extensions/v1beta1, Kind=Ingress, Namespaced=true: no matches for kind "Ingress" in version "extensions/v1beta1"\n'
Traceback (most recent call last):
  File "/var/task/index.py", line 14, in handler
    return apply_handler(event, context)
  File "/var/task/apply/__init__.py", line 69, in apply_handler
    kubectl('apply', manifest_file, *kubectl_opts)
  File "/var/task/apply/__init__.py", line 91, in kubectl
    raise Exception(output)
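
For context, the exception is raised inside the cluster's kubectl provider Lambda. Reconstructing purely from the log line and traceback above (this is a sketch, not the actual aws-eks handler source), the failing call behaves roughly like this:

import subprocess

# Rough paraphrase of what the handler appears to do, based only on the log and
# traceback above -- not the real aws-eks kubectl provider implementation.
def kubectl(verb: str, manifest_file: str, *opts: str) -> bytes:
    cmd = ['kubectl', verb, '--kubeconfig', '/tmp/kubeconfig', '-f', manifest_file, *opts]
    try:
        return subprocess.check_output(cmd, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as exc:
        # kubectl's combined output (including the "error retrieving RESTMappings
        # to prune" message) is re-raised, which is what surfaces in CloudFormation.
        raise Exception(exc.output)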

I do not have any Ingress resources from the extensions/v1beta1 API group in my cluster, but I do have one from the networking.k8s.io group.


% kubectl get ingress --all-namespaces
NAMESPACE   NAME     CLASS    HOSTS                                          ADDRESS                                                           PORTS   AGE
gitlab      gitlab   <none>   gitlab.cdk-eks-fargate.cakalu.people.aws.dev   k8s-gitlab-xxx.elb.amazonaws.com   80      49m

% kubectl get ingress.networking.k8s.io -n gitlab
NAME     CLASS    HOSTS                                          ADDRESS                                                           PORTS   AGE
gitlab   <none>   gitlab.cdk-eks-fargate.example.com   k8s-gitlab-xxx.elb.amazonaws.com   80      49m

The cluster was created with the default prune=True, since I did not specify that field:

kubernetes_cluster = eks.Cluster(
    self,
    id=f"{prefix}-cluster",
    version=version,
    vpc=vpc,
    vpc_subnets=[
        ec2.SubnetSelection(
            subnet_group_name="private-subnet",
        ),
    ],
    cluster_logging=[
        eks.ClusterLoggingTypes.AUDIT,
    ],
    default_capacity=0,
    endpoint_access=eks.EndpointAccess.PUBLIC_AND_PRIVATE,
    kubectl_layer=kubectl_v24.KubectlV24Layer(self, id=f"{prefix}-kubectl"),
    masters_role=masters_role,
    output_masters_role_arn=False,
    place_cluster_handler_in_vpc=True,
    secrets_encryption_key=kms_key_data,
    output_cluster_name=False,
    output_config_command=False,
    tags=tags,
)
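
For reference, the prune behavior mentioned above maps to the prune property of ClusterProps. The sketch below only shows where that property would sit; prune=True is the default, and disabling pruning is not the fix discussed later in this thread (it would leave removed objects orphaned in the cluster):

# Illustrative sketch only; other props omitted. `prune` controls whether the
# kubectl handler runs `kubectl apply --prune -l aws.cdk.eks/prune-...` for
# manifests added via add_manifest(). It defaults to True.
kubernetes_cluster = eks.Cluster(
    self,
    id=f"{prefix}-cluster",
    version=version,
    vpc=vpc,
    prune=True,  # False skips the prune step entirely, with the orphaning caveat above
    kubectl_layer=kubectl_v24.KubectlV24Layer(self, id=f"{prefix}-kubectl"),
)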

As you can see, I am using the KubectlV24 layer, which should provide the kubectl version matching the cluster version I'm working with, 1.24.

I have seen this issue on Fargate EKS 1.22, 1.23 and 1.24

Related Issues

  • https://github.com/aws/aws-cdk/issues/19843
  • https://github.com/aws/aws-cdk/issues/15736
  • https://github.com/aws/aws-cdk/issues/15072

Expected Behavior

I should be able to update my application repeatedly without the deployment failing.

Current Behavior

Updating any part of the application always fails with the error above.

Reproduction Steps

  1. Create a new EKS Fargate 1.22+ cluster.
  2. Use the aws_eks.Cluster.add_manifest method to apply a manifest, e.g. a GitLab deployment (a minimal sketch follows these steps).
  3. Run cdk deploy.
  4. Update the GitLab deployment, for example by changing the image tag, adding an environment variable, or changing an environment variable's value.
  5. Run cdk deploy.
  6. The error occurs.
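
A minimal sketch of step 2, assuming a hypothetical GitLab Deployment (the name, namespace, image tag and labels below are placeholders, not the reporter's actual manifest):

# Illustrative only: a minimal Deployment applied through add_manifest.
app_label = {"app": "gitlab"}
kubernetes_cluster.add_manifest(
    "gitlab-deployment",
    {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "gitlab", "namespace": "gitlab"},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": app_label},
            "template": {
                "metadata": {"labels": app_label},
                "spec": {
                    "containers": [
                        # Changing this tag between `cdk deploy` runs is enough to trigger
                        # the second (failing) `kubectl apply --prune` invocation.
                        {"name": "gitlab", "image": "gitlab/gitlab-ce:15.6.0-ce.0"}
                    ]
                },
            },
        },
    },
)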

Possible Solution

N/A

Additional Information/Context

No response

CDK CLI Version

2.55.0

Framework Version

No response

Node.js Version

18.10.0

OS

Ubuntu 22.04

Language

Python

Language Version

3.9.14

Other information

No response

About this issue

  • State: closed
  • Created 2 years ago
  • Reactions: 8
  • Comments: 21 (8 by maintainers)

Most upvoted comments

For those using the Python CDK - I was able to get this running with the following:

npm install -s @aws-cdk/lambda-layer-kubectl-v24
pip install aws-cdk.lambda-layer-kubectl-v24
from aws_cdk.lambda_layer_kubectl_v24 import KubectlV24Layer

cluster = aws_eks.Cluster(self, 'cluster',
                          masters_role=self._eks_admin_role,
                          vpc=self._host_vpc,  # private_subnet_ids
                          vpc_subnets=[ec2.SubnetSelection(subnet_filters=[ec2.SubnetFilter.by_ids(private_subnet_ids)])],
                          default_capacity=0,
                          version=aws_eks.KubernetesVersion.V1_24,
                          output_cluster_name=True,
                          output_masters_role_arn=True,
                          role=self._eks_admin_role,
                          kubectl_layer=KubectlV24Layer(self, 'KubectlV24Layer'),
                          )

Not much different from the TypeScript version, but it took a bit of digging to figure out some of the namespacing, as it isn't explicitly listed in the Python package. Hopefully this helps someone else out. Thanks for the original solution @AlyIbrahim!

OK, digging deeper I found the solution!

First, you need to add the kubectl version that matches your current cluster to the CDK project dependencies. If you are using a TypeScript project and your Kubernetes version is 1.24, run the following command in your project directory: npm install -s @aws-cdk/lambda-layer-kubectl-v24. You can use other versions, though the highest I found was v24.

Now, in your stack, import the kubectl layer: import { KubectlV24Layer } from '@aws-cdk/lambda-layer-kubectl-v24';

And while creating the cluster, specify the kubectl layer as one of the ClusterProps: kubectlLayer: new KubectlV24Layer(this, 'Kubectlv24Layer')

As a complete example:

const myCluster = new eks.Cluster(this, 'my-cluster', {
  clusterName: 'my-cluster',
  version: eks.KubernetesVersion.V1_24,
  kubectlLayer: new KubectlV24Layer(this, 'Kubectlv24Layer'),
  vpc: myVPC,
  // ...
});

I tested this solution and it is working fine.

The documentation should be clearer, and this prop should arguably become mandatory, since leaving it out can cause breaking problems. It's also not clear whether AWS will provide versions beyond v24 or leave that to the community.

@AlyIbrahim

We have a feature request for aws-eks 1.25 support now - https://github.com/aws/aws-cdk/issues/24282

At the same time, if you need the kubectl layer 1.25 assets, please give an upvote on https://github.com/cdklabs/awscdk-asset-kubectl/issues/166

Thanks @AlyIbrahim, this is really very helpful.

Hello team. We are impacted as well, since kubectl is bundled at an old version in the Lambda assets. For now, I am importing the kubectl v24 Lambda layer and declaring it during the EKS cluster creation.

@peterwoodworth Thanks for your response …

I suggest the following:

  • Add the kubectlLayer prop to the main example, with the required import and a description of how to add the required layer (npm install -s @aws-cdk/lambda-layer-kubectl-v24)
  • Provide a link to the available lambda-layer-kubectl-vXX packages
  • Add a warning about the default behavior if kubectlLayer is not specified
  • Add a troubleshooting section for this kind of error

I see that you already cover part of this in the parameter documentation, but it's far down the page, and a regular user will not realize that this prop is required above v1.20. So unless they read the docs for every prop, this breaking change is easy to miss.

Another option is to make the prop mandatory so users are forced to learn about it, but proper documentation could avoid that.