cdk-eks-blueprints: Karpenter invalid code

Describe the documentation issue

const karpenterAddonProps = {
    requirements: [
        { key: 'node.kubernetes.io/instance-type', op: 'In', vals: ['m5.2xlarge'] },
        { key: 'topology.kubernetes.io/zone', op: 'NotIn', vals: ['us-west-2c']},
        { key: 'kubernetes.io/arch', op: 'In', vals: ['amd64','arm64']},
        { key: 'karpenter.sh/capacity-type', op: 'In', vals: ['spot','on-demand']},
    ],
    subnetTags: {
        "Name": "blueprint-construct-dev/blueprint-construct-dev-vpc/PrivateSubnet1",
    },
    securityGroupTags: {
        "kubernetes.io/cluster/blueprint-construct-dev": "owned",
    },
    taints: [{
        key: "workload",
        value: "test",
        effect: "NoSchedule",
    }],
    amiFamily: "AL2",
    amiSelector: {
        "karpenter.sh/discovery/MyClusterName": '*',
    },
    consolidation: { enabled: true },
    ttlSecondsUntilExpired: 2592000,
    weight: 20,
    interruptionHandling: true,
}

const karpenterAddOn = new blueprints.addons.KarpenterAddOn(karpenterAddonProps);

I pasted it verbatim and it doesn’t work:

TSError: ⨯ Unable to compile TypeScript:
bin/my-blueprints.ts:39:61 - error TS2345: Argument of type '{ requirements: { key: string; op: string; vals: string[]; }[]; subnetTags: { Name: string; }; securityGroupTags: { "kubernetes.io/cluster/blueprint-construct-dev": string; }; taints: { key: string; value: string; effect: string; }[]; ... 5 more ...; interruptionHandling: boolean; }' is not assignable to parameter of type 'KarpenterAddOnProps'.
  Types of property 'requirements' are incompatible.
    Type '{ key: string; op: string; vals: string[]; }[]' is not assignable to type '{ key: string; op: "In" | "NotIn"; vals: string[]; }[]'.
      Type '{ key: string; op: string; vals: string[]; }' is not assignable to type '{ key: string; op: "In" | "NotIn"; vals: string[]; }'.
        Types of property 'op' are incompatible.
          Type 'string' is not assignable to type '"In" | "NotIn"'.

39 const karpenterAddOn = new blueprints.addons.KarpenterAddOn(karpenterAddonProps);
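
The root cause is TypeScript literal widening: when the object is assigned to a plain const, op: 'In' is inferred as string, which no longer satisfies the 'In' | 'NotIn' union in KarpenterAddOnProps. Inlining the object into the constructor call (as in the suggested fix further down) avoids this, because the literal is then contextually typed against the parameter. A minimal sketch of two other ways around it, assuming KarpenterAddOnProps is exported from the addons namespace as the error message suggests:

import * as blueprints from '@aws-quickstart/eks-blueprints';

// Option 1: annotate the variable, so the object literal is contextually
// typed against KarpenterAddOnProps and 'In' keeps its literal type.
const typedProps: blueprints.addons.KarpenterAddOnProps = {
    requirements: [
        { key: 'karpenter.sh/capacity-type', op: 'In', vals: ['spot', 'on-demand'] },
    ],
};

// Option 2: keep the const unannotated but pin the literal with 'as const'.
const pinnedProps = {
    requirements: [
        { key: 'karpenter.sh/capacity-type', op: 'In' as const, vals: ['spot', 'on-demand'] },
    ],
};

new blueprints.addons.KarpenterAddOn(typedProps);
new blueprints.addons.KarpenterAddOn(pinnedProps);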

Links

https://aws-quickstart.github.io/cdk-eks-blueprints/addons/karpenter/

About this issue

  • State: closed
  • Created a year ago
  • Comments: 41

Most upvoted comments

I ran into this too. You need to do something like:

cluster.getClusterInfo().cluster as eks.Cluster

so that you can access awsAuth. (Since 1.10.0?)
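
For context, a minimal sketch of that cast, assuming a blueprint stack built with EksBlueprint.builder() and a hypothetical adminRole (an iam.IRole you define elsewhere):

import * as eks from 'aws-cdk-lib/aws-eks';

const stack = blueprints.EksBlueprint.builder().build(app, 'my-blueprint');

// getClusterInfo().cluster is typed as eks.ICluster; cast to eks.Cluster
// to reach awsAuth (only valid when the cluster is owned, not imported).
const eksCluster = stack.getClusterInfo().cluster as eks.Cluster;
eksCluster.awsAuth.addMastersRole(adminRole);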

What should the id be?

Anything you want. The stackID is the name of the CloudFormation stack that will be created in the target account/region once you deploy.

E.g. const id = 'my-karpenter'; or you can replace the stackID value, e.g. const stackID = 'my-awesome-blueprint';
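
A sketch of where the id goes, with 'my-awesome-blueprint' standing in for whatever name you prefer and karpenterAddOn being the add-on defined above:

import * as cdk from 'aws-cdk-lib';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();
const stackID = 'my-awesome-blueprint'; // becomes the CloudFormation stack name

blueprints.EksBlueprint.builder()
    .account(process.env.CDK_DEFAULT_ACCOUNT)
    .region(process.env.CDK_DEFAULT_REGION)
    .addOns(karpenterAddOn)
    .build(app, stackID);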

@NinoSkopac you can either copy-paste that code into your project, or, if you want to run the pattern as is, clone the patterns repo and run make pattern karpenter deploy.

I see what’s happening, and this is something I’ll need to update the doc for.

  1. You are seeing m5.large instead of the m5.2xlarge defined in Karpenter: By default, EKS Blueprints deploys a managed nodegroup with one m5.large instance (defined in our doc here, under desiredSize and instanceTypes). The Karpenter add-on uses a Helm chart to install the tool, and because the Karpenter pods need to be running before Karpenter can scale anything based on workload requirements, they are deployed onto that nodegroup node, as defined here.

After the Karpenter pods are installed and you run a workload (scaled out to a few pods), you should see nodes scale with Karpenter (and not with the nodegroup).

  2. You are seeing one of the Karpenter pods pending: By default, Karpenter deploys two replicas, one per node. Since we have only one node by default, one of the pods stays pending (see the nodegroup sizing sketch below).
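
If you want both Karpenter replicas schedulable out of the box, one option (a sketch, assuming the defaults described above; MngClusterProvider prop names may differ slightly across versions) is to size the bootstrap nodegroup explicitly:

import * as ec2 from 'aws-cdk-lib/aws-ec2';

// Two m5.large nodes so both Karpenter replicas have a node to land on.
const clusterProvider = new blueprints.MngClusterProvider({
    minSize: 2,
    desiredSize: 2,
    maxSize: 2,
    instanceTypes: [ec2.InstanceType.of(ec2.InstanceClass.M5, ec2.InstanceSize.LARGE)],
});

blueprints.EksBlueprint.builder()
    .clusterProvider(clusterProvider)
    .addOns(karpenterAddOn)
    .build(app, stackID);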

@NinoSkopac, can you give this a try?

const karpenterAddOn = new blueprints.addons.KarpenterAddOn({
  requirements: [
      { key: 'node.kubernetes.io/instance-type', op: 'In', vals: ['m5.2xlarge'] },
      { key: 'topology.kubernetes.io/zone', op: 'NotIn', vals: ['us-west-2c']},
      { key: 'kubernetes.io/arch', op: 'In', vals: ['amd64','arm64']},
      { key: 'karpenter.sh/capacity-type', op: 'In', vals: ['spot','on-demand']},
  ],
  subnetTags: {
    "Name": "blueprint-construct-dev/blueprint-construct-dev-vpc/PrivateSubnet1",
  },
  securityGroupTags: {
    "kubernetes.io/cluster/blueprint-construct-dev": "owned",
  },
  taints: [{
    key: "workload",
    value: "test",
    effect: "NoSchedule",
  }],
  amiFamily: "AL2",
  amiSelector: {
    "karpenter.sh/discovery/MyClusterName": '*',
  },
  consolidation: { enabled: true },
  ttlSecondsUntilExpired: 2592000,
  weight: 20,
  interruptionHandling: true,
});

The code above worked in our repo; however, for strict checking you can make it more explicit, as the message suggests:

const eksCluster = (b.getClusterInfo().cluster as any) as eks.Cluster;

@NinoSkopac the comment from @paulchambers is correct: since 1.10, when we added support for imported clusters, you need to cast to eks.Cluster. If the cluster is imported (pre-existing), CDK for some reason does not expose the awsAuth capability. See https://github.com/aws-quickstart/cdk-eks-blueprints/issues/766#issuecomment-1629210811 for the same issue.
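
Put differently, an imported cluster is only an eks.ICluster, so if your blueprint might consume a pre-existing cluster, guard the cast before touching awsAuth (a sketch; adminRole is again a hypothetical iam.IRole):

import * as eks from 'aws-cdk-lib/aws-eks';

const cluster = stack.getClusterInfo().cluster;
if (cluster instanceof eks.Cluster) {
    // Owned cluster: awsAuth is available.
    cluster.awsAuth.addMastersRole(adminRole);
} else {
    // Imported (pre-existing) cluster: ICluster does not expose awsAuth.
    throw new Error('awsAuth is not available on an imported cluster');
}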