redux-toolkit: [RED-16] Slow intellisense in VSCode

Hey šŸ‘‹

I’ve been trying to figure out what slows down our VSCode IntelliSense (TS-backed code auto-complete), and I may have found the main cause.

I’m not 100% sure RTK Query (createApi) is the cause, but it seemed like a solid enough finding to be worth sharing.

Where I work we have a pretty hefty React/RTK app (not OSS šŸ˜ž), and we’ve been dealing with slow (not unbearable, but annoying) response times from VSCode IntelliSense: it feels like >1s until the suggestions list shows up, though looking at the TS Server logs it’s probably closer to ~800ms.

I tried a few things, eventually landed on this:

If I any-fy the call to createApi, the TS Server logs show that completionInfo (the request that computes the list of suggested items shown in VSCode’s autocomplete) drops from 840ms to 122ms.

Here’s a video from before the change (note how long it takes from the moment I hit . to when the suggestions appear):

https://user-images.githubusercontent.com/927310/221375943-1547b820-7f19-40b8-933a-0269d4983faa.mp4

Here it is when I make the following change:

export const api = createApi({

To:

export const api = (createApi as any)({

https://user-images.githubusercontent.com/927310/221375968-ad185389-96b7-4146-98da-c4e072e283ca.mp4
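To make the trade-off of this workaround concrete, here’s a self-contained sketch. createApi is stubbed here (this is not RTK Query’s real implementation, and the option names are hypothetical); the point is that the any-cast skips the expensive return-type inference entirely, at the cost of the resulting api object also being any, with no type-checked endpoints or hooks.

```typescript
// Stub standing in for RTK Query's createApi, so the sketch runs on its own.
const createApi = (options: { reducerPath: string; endpoints: unknown }) =>
  options;

// The workaround from the issue: cast createApi to `any` before calling it.
// TypeScript no longer infers the (very large) return type, so IntelliSense
// on `api` is instant -- but `api` is now typed as `any` and unchecked.
const api = (createApi as any)({
  reducerPath: "api", // hypothetical options
  endpoints: {},
});

console.log(typeof api); // "object"
```

In other words, the cast trades away all of the generated-hook type safety to buy back editor responsiveness, which is why it works as a diagnostic but isn’t a real fix.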

RED-16

About this issue

  • Original URL
  • State: open
  • Created a year ago
  • Reactions: 5
  • Comments: 20 (3 by maintainers)

Most upvoted comments

We did some perf analysis last night, and confirmed that HooksWithUniqueNames seems to be the biggest time sink. We think it has to do with the way this is handled as a distributive conditional check, which ends up producing a union of N individual types (which later gets converted using UnionToIntersection):

export type HooksWithUniqueNames<Definitions extends EndpointDefinitions> =
  keyof Definitions extends infer Keys
    ? Keys extends string
      ? Definitions[Keys] extends { type: DefinitionType.query }
        ? {
            [K in Keys as `use${Capitalize<K>}Query`]: UseQuery<
              Extract<Definitions[K], QueryDefinition<any, any, any, any>>
            >
          } &
            {
              [K in Keys as `useLazy${Capitalize<K>}Query`]: UseLazyQuery<
                Extract<Definitions[K], QueryDefinition<any, any, any, any>>
              >
            }
        : Definitions[Keys] extends { type: DefinitionType.mutation }
        ? {
            [K in Keys as `use${Capitalize<K>}Mutation`]: UseMutation<
              Extract<Definitions[K], MutationDefinition<any, any, any, any>>
            >
          }
        : never
      : never
    : never
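To illustrate the distributive behavior described above, here’s a minimal, self-contained sketch (hypothetical endpoint names, not RTK Query’s actual types). When a conditional type’s checked type is a bare type parameter instantiated with a union, the conditional is evaluated once per union member, so N endpoint names fan out into a union of N results — which is the per-key work that adds up in the real HooksWithUniqueNames.

```typescript
// A union of endpoint names, as `keyof Definitions` would produce.
type Keys = "getUser" | "addUser";

// `K extends string ? ... : never` distributes over the union: each member
// is checked and transformed independently.
type HookNames<K extends string> = K extends string
  ? `use${Capitalize<K>}Query`
  : never;

// Result is "useGetUserQuery" | "useAddUserQuery" -- one branch per key.
type Result = HookNames<Keys>;

// Runtime check that the template-literal names line up as expected:
const names: Result[] = ["useGetUserQuery", "useAddUserQuery"];
console.log(names.join(","));
```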

Here’s a flame graph of the perf from dutzi’s example with 2.0-beta.2:

[image: flame graph of the type-checking perf]

We’ve got a PR up in https://github.com/reduxjs/redux-toolkit/pull/3767 that rewrites it as three mapped object types, one each for queries, lazy queries, and mutations:

export type HooksWithUniqueNames<Definitions extends EndpointDefinitions> = {
  [K in keyof Definitions as Definitions[K] extends {
    type: DefinitionType.query
  }
    ? `use${Capitalize<K & string>}Query`
    : never]: UseQuery<
    Extract<Definitions[K], QueryDefinition<any, any, any, any>>
  >
} &
  {
    [K in keyof Definitions as Definitions[K] extends {
      type: DefinitionType.query
    }
      ? `useLazy${Capitalize<K & string>}Query`
      : never]: UseLazyQuery<
      Extract<Definitions[K], QueryDefinition<any, any, any, any>>
    >
  } &
  {
    [K in keyof Definitions as Definitions[K] extends {
      type: DefinitionType.mutation
    }
      ? `use${Capitalize<K & string>}Mutation`
      : never]: UseMutation<
      Extract<Definitions[K], MutationDefinition<any, any, any, any>>
    >
  }
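For comparison, here’s a minimal sketch of that key-remapping approach (hook signatures elided and endpoint names hypothetical, not the PR’s real code): each mapped type walks the keys once, and non-matching endpoints are filtered out by remapping their keys to never, avoiding the distributive union-then-intersect round trip.

```typescript
// Hypothetical endpoint definitions.
type Defs = {
  getUser: { type: "query" };
  addUser: { type: "mutation" };
};

// One mapped type per hook family; keys that don't match the endpoint kind
// are remapped to `never`, which drops them from the resulting object type.
type QueryHooks<D> = {
  [K in keyof D as D[K] extends { type: "query" }
    ? `use${Capitalize<K & string>}Query`
    : never]: () => void; // real hook signature elided for the sketch
};

type MutationHooks<D> = {
  [K in keyof D as D[K] extends { type: "mutation" }
    ? `use${Capitalize<K & string>}Mutation`
    : never]: () => void;
};

// The intersection has exactly the remapped hook names:
const hooks: QueryHooks<Defs> & MutationHooks<Defs> = {
  useGetUserQuery: () => {},
  useAddUserMutation: () => {},
};
console.log(Object.keys(hooks).sort().join(","));
```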

This appears to be a major improvement! Running a perf check against that same example, the main blocking section drops from 2600ms to 1000ms (still a long time, but 60% better!):

[image: perf trace after the PR]

Could folks try out that PR and let us know how much of an improvement it feels like in practice? You can install it from the CodeSandbox CI build here:

Note that the PR is against our v2.0-integration branch, so it will involve an upgrade, but I’m happy to have us backport that to 1.9.x as well.

We’d like to use RTK Query with the generated react hooks, but with our roughly 400 endpoints (using a custom queryFn) the performance of TypeScript is so dramatically impacted that I’m afraid it’s not usable. In IntelliJ, the autocompletion on the ā€œapiā€ object will run for minutes without returning a suggestion list.

Maybe it’s not directly connected, but I also experienced performance degradation in type completion in a medium-sized project (using @rtk-query/codegen-openapi).

In our case we found the culprit to be multiple calls to .enhanceEndpoints(). After refactoring the code to use it only once in the whole application, performance was back to expected levels.
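A hedged sketch of the shape of that refactor (the api object is stubbed below, and the tag/endpoint names are hypothetical, not this commenter’s actual code): in real RTK Query each .enhanceEndpoints() call wraps the api’s type again, so scattering calls across modules multiplies the type-level work, while one merged call does it once.

```typescript
// Simplified stand-ins for RTK Query's enhancement options and api object.
type Enhancements = {
  addTagTypes?: string[];
  endpoints?: Record<string, object>;
};

interface StubApi {
  readonly enhanceCalls: number;
  enhanceEndpoints(e: Enhancements): StubApi;
}

function makeStubApi(): StubApi {
  let count = 0;
  const api: StubApi = {
    get enhanceCalls() {
      return count;
    },
    enhanceEndpoints(_e: Enhancements) {
      count += 1; // in real RTK Query, each call re-wraps the api's type
      return api;
    },
  };
  return api;
}

// Before (slower types): enhanceEndpoints called separately in many modules.
// After: one merged call for the whole application.
const api = makeStubApi().enhanceEndpoints({
  addTagTypes: ["User"], // hypothetical tag
  endpoints: { getUser: {} }, // hypothetical endpoint
});

console.log(api.enhanceCalls); // 1
```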

Here’s a version backported to v1.9.x: https://github.com/reduxjs/redux-toolkit/pull/3769

Given that we did just speed up the RTKQ hooks types, I’m going to say this is sufficiently improved for 2.0. We can do more perf testing post-2.0, but in the interest of getting 2.0 wrapped up I’m going to move this out of the 2.0 milestone and not spend any further time on this until 2.0 is out the door.

Hey, I think I made some progress here.

I created a small repo that reproduces the issue https://github.com/dutzi/rtk-ts-perf.

Check out this video that demos it https://cln.sh/bHBsDGGm.

I tried playing a bit with the typings and noticed that if I edit ./node_modules/@reduxjs/toolkit/dist/query/react/module.d.ts, removing HooksWithUniqueNames on line 23:

} & HooksWithUniqueNames<Definitions>;

Change to:

}

I get instant intellisense.

I didn’t have enough time to improve the utility type, but hope this helps move this forward.

@ConcernedHobbit: how did you generate that per-step percentage perf information?

I used tsc with the --generateTrace flag and manually looked at the output in the Perfetto.dev web app.
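For anyone wanting to reproduce that workflow, here’s a sketch of the commands (the output directory name ts-trace is arbitrary; --generateTrace is a real tsc flag, and @typescript/analyze-trace is the official companion package for summarizing the trace):

```shell
# Emit a compiler trace for the project; this writes trace.json and
# types.json into the given directory.
npx tsc -p tsconfig.json --generateTrace ts-trace

# Inspect trace.json interactively by loading it at ui.perfetto.dev,
# or get an automated hot-spot summary of the same trace:
npx @typescript/analyze-trace ts-trace
```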

Hi there & thanks for opening this issue @dutzi. I’m happy to have come across it, as it confirms my own testing.

We’re using RTK Query with the OpenAPI code generator, resulting in about 7000 lines of generated endpoint definitions. I can fully reproduce your observations with VSCode IntelliSense population being significantly slow (1-3s). Changing the API type to any as described above immediately ā€˜solves’ the issue.

Unfortunately, I’m lacking the knowledge to provide helpful input here, but I’ll be monitoring the issue and happy to help with triage.

I started examining this after reading this post.

I think it’s a good place to start (check out the comment section; it has some interesting discussion with useful links).

Or, tl;dr:

  • TS Wiki on Performance Tracing
  • A better tool to inspect TSC’s trace.json

Anyhow, I’ll try helping!