runtime: JsonSerializer.Deserialize is intolerably slow in Blazor WebAssembly, but very fast in .NET Core integration test

In my Blazor app, I have a component that has a method like this. (I’ve replaced a call to GetFromJsonAsync with code from inside it, to narrow down the slow part.)

  private async Task GetData()
  {
      IsLoading = true;
      string url = $".../api/v1/Foo";  // will return a 1.5 MB JSON array
      var client = clientFactory.CreateClient("MyNamedClient");

      Console.WriteLine("starting");

      List<Foo> results;

      Task<HttpResponseMessage> taskResponse = client.GetAsync(url, HttpCompletionOption.ResponseContentRead, default);

      var sw = Stopwatch.StartNew();
      using (HttpResponseMessage response = await taskResponse)
      {
        
        response.EnsureSuccessStatusCode();
        var content = response.Content!;

        if (content == null)
        {
          throw new ArgumentNullException(nameof(content));
        }
        
        string contentString = await content.ReadAsStringAsync();

        sw.Stop();
        Console.WriteLine($"Read string: {sw.Elapsed}");
        sw.Restart();

        results = System.Text.Json.JsonSerializer.Deserialize<List<Foo>>(contentString)!;
        //results = Newtonsoft.Json.JsonConvert.DeserializeObject<List<Foo>>(contentString); // comparable

      }

      sw.Stop();
      Console.WriteLine($"Deserialize: {sw.Elapsed}");
      
      StateHasChanged();
      IsLoading = false;
  }

My download of 2-6 MB takes 1-6 seconds, but the rest of the operation (during which the UI is blocked) takes 10-30 seconds. Is this just slow deserialization in ReadFromJsonAsync (which calls System.Text.Json.JsonSerializer.Deserialize internally), or is there something else going on here? How can I improve the efficiency of getting this large set of data (though it isn’t all that big, I think)?

I have commented out anything bound to Results to simplify, and instead I just have an indicator bound to IsLoading. This tells me there’s no slowness in updating the DOM or rendering.

When I attempt the same set of code in an automated integration test, it only takes 3 seconds or so (the download time). Is WebAssembly really that slow at deserializing? If so, is the only solution to retrieve very small data sets everywhere on my site? This doesn’t seem right to me. Can this slowness be fixed?

Here’s the resulting browser console log from running the above code:

VM1131:1 Fetch finished loading: GET "https://localhost:5001/api/v1/Foo".
Read string: 00:00:05.5464300
Deserialize: 00:00:15.4109950
L: GC_MAJOR_SWEEP: major size: 3232K in use: 28547K
L: GC_MAJOR: (LOS overflow) time 18.49ms, stw 18.50ms los size: 2048K in use: 187K
L: GC_MINOR: (LOS overflow) time 0.33ms, stw 0.37ms promoted 0K major size: 3232K in use: 2014K los size: 2048K in use: 187K

Using Newtonsoft.Json (as in the commented-out line) instead of System.Text.Json gives very similar results.

For what it’s worth, here’s the Chrome performance graph. The green is the download and the orange is “perform microtasks”, which I assume means WebAssembly work.

(screenshot: Chrome performance profile)

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 3
  • Comments: 51 (17 by maintainers)

Most upvoted comments

I deployed with .NET 8 preview 2 today and deserialization is much faster. Jiterpreter helps a lot, from about 4-5 seconds with .NET 7 to 1-2 seconds. The rest of the application works too which is nice 😃

Any news or suggestions, @szalapski? We have the exact same problem, and as it stands we cannot build our application with Blazor.

You’ll want to avoid creating a string from the content and use a Stream instead.

Yes, this will allow the deserializer to start before all of the data has been read from the Stream, and it avoids the string allocation.
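A minimal sketch of the stream-based approach being suggested here (the `Foo` model, URL, and client setup are from the original post; the helper class name is illustrative):

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public class Foo { /* your model from the question */ }

public static class FooClient
{
    public static async Task<List<Foo>?> GetFoosAsync(HttpClient client, string url)
    {
        // ResponseHeadersRead returns as soon as headers arrive,
        // so the body can be consumed while it is still downloading.
        using HttpResponseMessage response = await client.GetAsync(
            url, HttpCompletionOption.ResponseHeadersRead);
        response.EnsureSuccessStatusCode();

        // Deserialize directly from the stream: no intermediate string allocation.
        using var stream = await response.Content.ReadAsStreamAsync();
        return await JsonSerializer.DeserializeAsync<List<Foo>>(stream);
    }
}
```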

System.Text.Json should be ~2x faster for deserialization than Newtonsoft so it would be good to see your object model to see if you hit an area that is slow on STJ.

In either case, since both Newtonsoft and STJ are slow, there is likely something else going on.

The Large object graph benchmark section in https://github.com/dotnet/runtime/discussions/40318 has deserialization perf of 372ms for a string of length 322K. This also includes a “polymorphic” mode due to using System.Object that causes deserialization to be much slower (almost 2x) than without it. Anyway, extrapolating 322K to your 1MB is a 3x factor, so I assume it would take about 372ms * 3 = ~1.1 seconds to deserialize (on my fast desktop in isolation).

Some thoughts:

  • Can you share your hardware specs (memory/CPU)?
  • Is the test running in isolation on dedicated hardware, or is it hosted?
  • Can you share your object model (your Foo type from the link)?
  • Is there also rendering going on (or other CPU tasks) that would affect perf significantly?
  • Change to async/Stream mode as mentioned earlier.

@Webreaper Thanks, looking forward to .NET 8. Also, I just compiled with AOT turned on in .NET 7 and the slowness disappeared, so I suggest trying that @pragmaeuge
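For reference, AOT compilation for Blazor WebAssembly is enabled with a single project-file property (it only takes effect on `dotnet publish`, not `dotnet run`, and requires the `wasm-tools` workload):

```xml
<PropertyGroup>
  <RunAOTCompilation>true</RunAOTCompilation>
</PropertyGroup>
```

Then publish with `dotnet publish -c Release`. Expect much longer build times and a larger download, traded for faster CPU-bound work at runtime.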

First, and most importantly, thanks to the team working on Blazor and web assembly. We think this technology has a really bright future!

I’ll add my support for @szalapski here. We have a .NET open-source library that is used heavily in back-end services run on AWS Lambda. We were excited by the possibility of running some of our code in our web application, but our initial attempts to compile and run web assembly from our library in .NET 6 preview 7 have been met with massive performance degradation.

I established a small benchmark that creates 1000 cubes using the library (the library is for creating 3d stuff with lots of Vector3 structs and Polygon), serializes them to JSON, then writes the resulting 3D model to glTF. I duplicated that code in a small Blazor app.

Running the Blazor code compiled using dotnet run -c release (non AOT) and viewing the console in Chrome shows:

00:00:07.2027000 for writing to gltf.

We found that AOT compilation (which takes nearly 15 minutes), increases the performance by 2x.

The benchmark containing the same code run on the desktop, shows the following for writing to gltf:

Method                      Mean      Error    StdDev   Gen 0       Gen 1      Gen 2      Allocated
‘Write all cubes to glb.’   105.8 ms  4.42 ms  2.31 ms  21000.0000  3000.0000  1000.0000  85.05 MB

It takes nearly 67x as long to run in web assembly. We have a similar performance degradation for serializing and deserializing JSON.

Some considerations as to what might be slow:

  • glTF creation involves the manipulation of List<byte>. We’ve seen guidance that suggests you shouldn’t use IList<T> and we’re not doing much of that. But perhaps reading and writing bytes is inherently slow?
  • JSON serialization uses Newtonsoft.Json (Json.NET) and a custom converter for deserializing to child classes. We’ve seen the recommendation to move to System.Text.Json, but it’s a hard pill to swallow because our code requires converters and makes liberal use of Json.NET attributes. We’d love to get this working as is. The fact that writing to glTF, and potentially many other operations, is so slow suggests that optimizing the JSON path may fix a small part of the problem, but it will not leave us confident that adopting web assembly is viable.
  • We use several third party dlls that we had to compile with .NET 6 as well to even get the publishing of the Blazor project to work.

You can find our example Blazor project that has no UI but runs the wasm and reports to the console here: https://github.com/hypar-io/Elements/tree/wasm-perf/Elements.Wasm. You can find the corresponding benchmark WasmComparison here: https://github.com/hypar-io/Elements/tree/wasm-perf/Elements.Benchmarks

We’re really excited for the effort to bring C# to web assembly and are happy to provide any further information necessary. It would be fantastic for these development efforts if there was a way to run a dotnet benchmark across the core CLR and web assembly to make an apples->apples comparison. For now we’ve had to build our own.

One more thing… This performance degradation is not everywhere. We can call methods in our library that do some pretty complicated geometry stuff and they run at near native speed. We have a couple of demos of interactive 3d geometry editing and display using Blazor wasm. It’s just serialization and reading/writing bytes that seem to be a big issue. Also looping in @gytaco who is doing some amazing work using c#->web assembly for geometry stuff.

I have had a similar journey recently, moving through different serialisers and finally arriving at MessagePack, which has been good enough in interpreted WASM for current users. Performance vs. System.Text.Json is impressive.

However, the scope of our WASM app is definitely expanding, and we have users looking to handle hundreds of thousands of objects to perform data manipulation/analysis in the browser, much as Excel would chomp through on a normal desktop. Wait times for data loads of this size (they really aren’t massive payloads delivered from the API) are at the point where it is difficult to satisfy users, and Server Side Blazor is becoming the only option.

The Blazor/WASM community has generally always claimed that code runs at native speed (until you learn that everything outside of the .NET libraries is interpreted), and I had hoped AOT would make an enormous difference here, allowing the MessagePack serialiser to run at native speed. Our initial benchmarks of rc1 show it to be slower in this area than interpreted mode.

Maybe it’s my misunderstanding of how serialisation works - is it object construction in .Net itself being slow here and I shouldn’t see any difference between AOT and interpreted builds? Either way, serialisation is painfully slow for what is really not that much data.

@tareqimbasher @szalapski A 2MB json file is taking about 7 seconds to deserialize, which is not acceptable. We moved to MessagePack; it takes about 2 seconds.
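For anyone evaluating the MessagePack route mentioned in this thread, the basic shape with the MessagePack-CSharp library looks like this (the `Record` type here is a made-up example; your API must also be changed to emit MessagePack instead of JSON):

```csharp
using MessagePack;

// Types must be annotated (or you can use the contractless resolver).
[MessagePackObject]
public class Record
{
    [Key(0)] public int Id { get; set; }
    [Key(1)] public string Name { get; set; } = "";
}

public static class Demo
{
    public static Record RoundTrip(Record input)
    {
        // Binary payloads are smaller and cheaper to parse than JSON text.
        byte[] bytes = MessagePackSerializer.Serialize(input);
        return MessagePackSerializer.Deserialize<Record>(bytes);
    }
}
```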

I just tried it with a 10MB json file and it’s unusably slow. 10MB isn’t that much; it’s tiny. It’s taking over 2 minutes to load the initial page, which doesn’t make sense IMO. I’m using the best performance tricks too:

    Assembly powerAssembly = typeof(PowerService).Assembly;
    await using Stream? stream = powerAssembly.GetManifestResourceStream("PowerShared.PowerDocuments.json");

    XMLJsonWrapper? powerXmlDocuments = await JsonSerializer.DeserializeAsync<XMLJsonWrapper>(stream!, new JsonSerializerOptions
    {
        DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
    });

It’s so slow and takes so long that I can’t even run a performance profiler. The profiler just bombs out and gets stuck.

I’m having other problems too, where external .NET 7 DLLs take forever to load.

There needs to be a way to quickly and efficiently load datasets into Blazor WASM.

This is on .NET 7 by the way

I see this is being targeted for .NET 7. Blazor WASM has been great for the most part, but this performance issue is making it really difficult to view Blazor as a viable option for some of the more data-intensive projects I have coming up. I’ll give MessagePack a try since it seems people have had some success with that.

I am not 100% sure, but it seems very likely that this is related: I just tried to deserialize a 2.6MB json file containing 10,000 simple POCOs. In Chrome the deserialization took ~4 seconds (that’s actually “good enough” for me, at least right now), but in Firefox the same deserialization took ~35 seconds! That is a serious problem for me…

FYI: I am using .NET 6 Preview 3 and System.Text.Json

Which version of Blazor are you using? You’ll want to avoid creating a string from the content and use a Stream instead. If you are targeting net5.0, you should look at the System.Net.Http.Json extensions.
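With the System.Net.Http.Json extensions, the whole fetch-and-deserialize step collapses into one call that reads from the response stream internally (a sketch, assuming the `Foo` model and relative URL from the original post):

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public class Foo { /* your model from the question */ }

public static class FooApi
{
    public static Task<List<Foo>?> GetFoosAsync(HttpClient client) =>
        // Streams the body into the deserializer; no string is ever materialized.
        client.GetFromJsonAsync<List<Foo>>("api/v1/Foo");
}
```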