runtime: Huge performance regression during JIT code generation with ILGenerator on .NET 6.0 (Windows x64)
Description
Hi, after upgrading from .NET 5.0.16 to .NET 6.0.4, we encountered what appears to be a huge performance regression when using `System.Reflection.Emit.ILGenerator` (via a `DynamicMethod`). It occurs in Jurassic, a .NET JavaScript runtime that compiles JavaScript code to .NET IL as lightweight functions.
Note: The performance issue occurs while the JIT is generating the native code, not in the resulting native code generated by the JIT.
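For context, here is a minimal sketch of the API in question (illustrative only, not Jurassic's actual code): emitting IL into a `DynamicMethod` and baking it into a delegate.

```csharp
using System;
using System.Reflection.Emit;

// Minimal sketch of the pattern Jurassic relies on: emit IL into a
// DynamicMethod and turn it into a delegate. (Illustrative only,
// not Jurassic's actual code generation.)
var method = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
ILGenerator il = method.GetILGenerator();
il.Emit(OpCodes.Ldarg_0); // load first argument
il.Emit(OpCodes.Ldarg_1); // load second argument
il.Emit(OpCodes.Add);     // add them
il.Emit(OpCodes.Ret);     // return the result

// The JIT compiles the emitted IL on the first invocation of the delegate;
// that first call is where the reported regression shows up.
var add = (Func<int, int, int>)method.CreateDelegate(typeof(Func<int, int, int>));
Console.WriteLine(add(2, 3)); // prints 5
```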
With the reproduction project described below, when running with .NET 5.0.16 (on Windows 10 Version 21H2 x64), I get the following execution times depending on the build configuration:
- Debug: 60 ms
- Release: 130 ms
When executing it with .NET 6.0.4, I get the following times:
- Debug: 54 ms
- Release: 4300 ms
Notice that for Release, with .NET 6.0.4 the execution time is about 30x as high as with .NET 5.0.16. I also tested it on Ubuntu 20.04, and got similar results.
Reproduction Steps
I created a repro project that uses Jurassic to compile JavaScript code into .NET code with `System.Reflection.Emit.ILGenerator` using a `DynamicMethod` (unfortunately I'm not sure how to recreate the issue without Jurassic, as I'm not very experienced with `ILGenerator`).
The `DynamicMethod` is created in `MethodGenerator.cs`, line 228.
Jurassic will generate 3 `DynamicMethod`s for the three script functions in the JavaScript code (one for the global method, one for the function `doThings`, and one for the anonymous function expression in `partitions.sort`).
To reproduce the issue:
- Clone https://github.com/kpreisser/jurassic.git and switch to branch `reproJitPerformanceRegression`.
- `cd` into `ReproJitPerfRegression`, then run the following commands:

```
dotnet run -f net5.0 -c Release
dotnet run -f net6.0 -c Debug
dotnet run -f net6.0 -c Release
```
Expected behavior
When running `dotnet run -f net6.0 -c Release`, the time shown after the first `Executed script after:` should be similar to the time reported for `net5.0`, which is about 130 ms. E.g.:
```
Framework: .NET 5.0.16
Compiled script after: 00:00:00.0844951
BEFORE Function
START Function!
AFTER Function, execution took 133 ms
Executed script after: 00:00:00.1492869
BEFORE Function
START Function!
AFTER Function, execution took 0 ms
Executed script after: 00:00:00.0013050
```
Actual behavior
When running `dotnet run -f net6.0 -c Release`, the time shown after the first `Executed script after:` is about 30x as high, e.g. 4300 ms:
```
Framework: .NET 6.0.4
Compiled script after: 00:00:00.1040805
BEFORE Function
START Function!
AFTER Function, execution took 4234 ms
Executed script after: 00:00:04.2561898
BEFORE Function
START Function!
AFTER Function, execution took 0 ms
Executed script after: 00:00:00.0013538
```
Note: As you can see from the logging, the huge delay (4200 ms) only occurs the first time the script function is executed, so it seems to occur while the JIT generates the native code, not in the actual native code generated by the JIT.
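This first-call-only behavior can be demonstrated in isolation with a small, hypothetical sketch (not part of the repro project): time the first and second invocations of a freshly created delegate separately; the JIT cost lands entirely on the first call.

```csharp
using System;
using System.Diagnostics;
using System.Reflection.Emit;

// Build a trivial DynamicMethod (a hypothetical stand-in for Jurassic's
// generated script functions) and time the first vs. second invocation.
var method = new DynamicMethod("Noop", typeof(void), Type.EmptyTypes);
ILGenerator il = method.GetILGenerator();
il.Emit(OpCodes.Ret);
var action = (Action)method.CreateDelegate(typeof(Action));

var sw = Stopwatch.StartNew();
action(); // triggers JIT compilation of the emitted IL
Console.WriteLine($"First call:  {sw.Elapsed.TotalMilliseconds:F4} ms");

sw.Restart();
action(); // runs the already-compiled native code
Console.WriteLine($"Second call: {sw.Elapsed.TotalMilliseconds:F4} ms");
```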
Regression?
Yes, with .NET 5.0.16, as noted above, the execution time is reasonable (around 130 ms).
Known Workarounds
Compile the assembly that uses `ILGenerator` without code optimizations enabled (e.g. as `Debug` instead of `Release`), but that may affect the performance of the native code generated by the JIT.
Configuration
- .NET 6.0.4, .NET SDK 6.0.202
- OS: Windows 10 Pro Version 21H2 Build 19044, x64
- Arch: x64
- It probably also occurs on other OSes; I also reproduced it on Ubuntu 20.04 x64.
Other information
No response
About this issue
- Original URL
- State: closed
- Created 2 years ago
- Comments: 16 (13 by maintainers)
This is fixed in main / 7.0 and will be fixed shortly with 6.0.6.
@kpreisser the fix will be in 6.0.6, which I think should be released in June. I’ll ping you once it is out so you can give it a try.
There might be ways to alter your IL generation to avoid this but it might take a bit of experimentation to hit on the best approach. The pathological case is when your method contains a long chain of similar branches, eg repeated if-then-elses with the same predicate. If you can vary the predicate and/or reduce the length of the chain you should be able to mitigate the problem.
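To make that shape concrete, here is an untested illustration (not Jurassic's actual codegen) of emitted IL containing a long chain of branches that all test the same predicate, which is the pattern described above:

```csharp
using System;
using System.Reflection.Emit;

// Untested illustration of the pathological shape: many consecutive
// branches that all test the same predicate (x == 1). Varying the
// predicate or shortening the chain is the suggested mitigation.
var method = new DynamicMethod("ChainedBranches", typeof(int), new[] { typeof(int) });
ILGenerator il = method.GetILGenerator();
LocalBuilder result = il.DeclareLocal(typeof(int));
il.Emit(OpCodes.Ldc_I4_0);
il.Emit(OpCodes.Stloc, result);

for (int i = 0; i < 500; i++)
{
    Label skip = il.DefineLabel();
    il.Emit(OpCodes.Ldarg_0);      // load x
    il.Emit(OpCodes.Ldc_I4_1);
    il.Emit(OpCodes.Bne_Un, skip); // same predicate every iteration: x != 1
    il.Emit(OpCodes.Ldloc, result);
    il.Emit(OpCodes.Ldc_I4_1);
    il.Emit(OpCodes.Add);          // result += 1
    il.Emit(OpCodes.Stloc, result);
    il.MarkLabel(skip);
}

il.Emit(OpCodes.Ldloc, result);
il.Emit(OpCodes.Ret);

// The first invocation triggers the JIT, which is where the time goes.
var f = (Func<int, int>)method.CreateDelegate(typeof(Func<int, int>));
Console.WriteLine(f(1));
```

Per the comment above, hoisting the repeated test into a single branch, or varying the predicates across the chain, should avoid this worst case.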
Since this is the second case of this issue we’ve seen, I’ll also propose that we fix this in 6.0 servicing.
Yes, looks like the same issue, lots of time spent under `optRedundantBranch`.
I got things working by updating Microsoft.SourceLink.GitHub to version 1.1.1.
Might potentially be the same as #66076 and fixed in main already by #66259, though I haven’t verified.