async-graphql: Slow compile and cargo check
We have a relatively small GraphQL schema, but it takes a long time (~20s) for `cargo check` to complete. Is this a common problem, or is there any known issue that can cause the long check times? Happy to provide more details if needed.
Sources are at: https://github.com/openmsupply/remote-server/tree/develop/graphql/src/schema
About this issue
- State: open
- Created 2 years ago
- Reactions: 1
- Comments: 32 (11 by maintainers)
It works! My crazy idea to have an “outer layer” macro that strips out all the async-graphql macros (when `cargo check` is running in rust-analyzer) actually works! (This maybe is not surprising to experienced Rustaceans, but as a newcomer from JS/TS, Rust’s macro system feels really empowering.)

The `cargo check` command is now down to 2.5 seconds, which is such a relief relative to the 6-second timings I was getting just a couple days ago.

What drawbacks does it have?
Not much! Basically, it just means that when you hover over the async-graphql macros in your IDE, you no longer see rust-doc descriptions and such. That’s really not a big deal though, because there are only six macros that get stripped (`graphql`, `Object`, `Subscription`, `SimpleObject`, `MergedObject`, `MergedSubscription`), so once you know what those six macros do, there is not much point to having their rust-doc info show up in your IDE like that.

How do you use it?
Pretty straightforward: for any structs where you use the async-graphql macros, simply wrap that area of code with the `wrap_async_graphql` macro, like so:

There is one other change you must make:
This `wrap_agql_schema_build` macro is necessary so that, when the rest of the async-graphql macros are stripped, the code that’s meant to call the schema-building still compiles. All it does is that, when the macro-stripping is enabled, it replaces the `Schema::build` expression with this:

(If a new macro seems overkill for this, it may be possible to use a more standard replacement approach, like the built-in `cfg!` macro or something…)
Anyway, while the output-token-caching approach in my previous post is more flexible, this “dumber” macro-stripping is probably better for async-graphql, since it gives even better speeds, avoids some complications with caching, and you don’t lose much. (The macros are almost completely just for outputting that GraphQL API, not for in-IDE introspection or the like – at least for the way I’m currently using it.)

While I will likely work more on the output-token-caching approach eventually, for now this solves my concerns well, and lets me resume work on my actual project without frustration.
Anyway, I hope to eventually make a separate crate for this system, so that other users of async-graphql can easily speed up their cargo-check times as well. Until then, you can find the initial source code for it here (in the `rust-macros` sub-crate of my larger project): https://github.com/debate-map/app/blob/d55d0043b4cdaea558adf70840ab6d902df04cf8/Packages/rust-macros/src/wrap_async_graphql.rs

Other than the instructions already mentioned, the only other thing needed is to have `STRIP_ASYNC_GRAPHQL=1` as an environment variable for your in-IDE `cargo check` executions. In VSCode, this is accomplished by adding the following to your project’s `.vscode/settings.json`:
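The JSON snippet itself wasn’t preserved in this copy of the thread; one plausible shape is below. The setting key `rust-analyzer.server.extraEnv` is an assumption here (it depends on your rust-analyzer version), not something confirmed from the thread:

```json
{
    // Assumed setting key; older rust-analyzer builds may expose this
    // under a different name (e.g. a checkOnSave-scoped option).
    "rust-analyzer.server.extraEnv": {
        "STRIP_ASYNC_GRAPHQL": "1"
    }
}
```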
(I might be able to remove this requirement later on, if I figure out a way to detect the rust-analyzer execution-context automatically.)
So today I thought:
Turns out, you absolutely can. I now have a combined `wrap_slow_macros` macro (source here; new `wrap_serde_macros` macro source here) that I wrap around all my structs (well, the ones that use `Serialize`, `Deserialize`, or any of the async-graphql macros). It strips out the async-graphql macros, and replaces `Serialize` and `Deserialize` with “stubs” that just produce this:

With the optimization above, my `cargo check` time is now down to a pleasant 1.5 seconds (it was 6.1s just a few days ago), without having to split my codebase into sub-crates or the like. 😄

I’m quite relieved! This was my biggest concern with Rust (slow macros for important crates like `serde` and `async-graphql`), but the flexibility of Rust’s macro system has enabled me to just skip/stub whatever macros I want at cargo-check time – retaining fast in-IDE type-checking.

I also have been having very slow `cargo check` times in my Rust project: https://github.com/debate-map/app/tree/master/Packages/app-server-rs/src/db
In Rust 1.58, merely changing the string in a `println` would take 6 seconds for the `cargo check` call to complete (as called by rust-analyzer automatically on file-save). In Rust 1.59 (just updated to see if it would help), this same thing now takes 17 seconds! (I cannot wait 17 seconds every time I change one letter just to see syntax/type errors!)
I eventually narrowed down one of the slowdowns to being due to the `SimpleObject` macro. Specifically, given these structs:

Code for database structs
If I make no changes except removing the `SimpleObject` macro from the above structures, it speeds up my `cargo check` times by ~4.5 seconds.

4.5 seconds doesn’t sound that bad? Well, it would be acceptable if the 4.5 seconds occurred only when I am running `cargo build` or the like. But that 4.5 seconds becomes painful when it’s slowing down every instance of basic syntax/type checking in my IDE.

Also keep in mind that the 4.5 seconds is only one part of the overhead – there is more overhead, e.g. in the `Object` macro used for adding query/subscription/mutation fields to the GraphQL API, but I’m ignoring those parts for now to focus on the `SimpleObject` slowdown. Altogether, the “db/XXX” files end up adding 12+ seconds to my `cargo check` time on Rust 1.59, which is just too much.

If more details are desired, here is a text log I wrote detailing my cargo-check timing process:
Log of slowdown investigation

(Timings were measured by editing a `println!("TestXXX");` line in main.rs, placed just before the “mod …” lines – this makes it easier to test the impact of commenting files out.)

Haha, I think the only way is to upgrade the computer. 😂 Or, as @clemens-msupply did, split a project into many sub-crates.
I tried updating from Rust 1.59 to Rust 1.60.0-beta.3 (as per a comment on the Discord saying 1.59 had incremental compilation broken), but the 4.5-second cargo-check cost from the `SimpleObject` macro was still present.

However, in a terminal I then ran `cargo clean`, followed by `cargo check`, and from that point on I was getting much better timings for the stripped-down code:

- `SimpleObject` macros in use
- `SimpleObject` macros removed

So on Rust 1.60, it seems the overhead of using `SimpleObject` is 1.8s, rather than 4.5s as seen on 1.59.

When I revert all the stripping-down I did, the overall time for `cargo check` (after the change of a println string) is now back to the ~6s I was getting on 1.59. For me, that still feels quite slow; is 6s considered acceptable for cargo-check for even smallish programs like this? (I’m thinking of TypeScript, for example, where syntax/type checking generally shows up in the IDE in under 1 second, even in much larger codebases.)
Anyway, I will see if I can speed it up somehow…
This definitely does not reduce check time. 😂 But this is possible: splitting into multiple crates can make full use of a multi-core CPU.
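The crate-splitting suggestion can be illustrated with a minimal workspace layout (the crate names here are made up for illustration):

```toml
# Hypothetical Cargo.toml at the repository root.
# Each member crate is checked separately, so cargo can use multiple
# cores and re-check only the crates whose sources actually changed.
[workspace]
members = [
    "app-server",      # binary crate: server wiring, main()
    "graphql-schema",  # the async-graphql types live here
]
```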
I don’t think it makes much sense to do this; in fact, I never bothered with this problem. Usually I only check it once with `cargo clippy` after I have written the code and tested it.