runtime: .NET Core performance several times slower than .NET Framework for read & write to a file using BinaryFormatter with Mutex
Hardware specification: Windows Server 2016 Standard, .NET Core 2.1, 4 cores, 12 GB RAM
I noticed that the performance of .NET Core is much worse than .NET Framework when I want to achieve thread synchronization while reading from and writing to a file. The following code snippet demonstrates what I want to achieve: at any one time only one thread can read the file, change the value, and write it back to the file.
Full console-based code as requested
using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using System.Threading;
using System.Threading.Tasks;

namespace MutexTest
{
    class Program
    {
        static void Main(string[] args)
        {
            var fileName = @"D:\Personal\TestFile.txt";

            // Create the file with an initial payload if it does not exist yet.
            if (!File.Exists(fileName))
            {
                using (var fs = new FileStream(fileName, FileMode.OpenOrCreate, FileAccess.Write, FileShare.ReadWrite))
                {
                    var formatter = new BinaryFormatter();
                    formatter.Serialize(fs, new TestClass() { Value1 = "Somevalue", Value2 = 1, Value3 = 1, Value4 = 1, Value5 = 1, Value6 = 1, Value7 = 1 });
                }
            }

            Stopwatch stopwatch = new Stopwatch();
            stopwatch.Start();

            Parallel.For(0, 10000, a =>
            {
                TestMethod();
            });

            stopwatch.Stop();
            Console.WriteLine(stopwatch.Elapsed.TotalMilliseconds);
        }

        private static void TestMethod()
        {
            var mutex = default(Mutex);
            var isExistingMutex = false;

            // Open the named mutex if it already exists, otherwise create it.
            try
            {
                isExistingMutex = Mutex.TryOpenExisting(@"Global\somemutexname", out mutex);
            }
            catch (IOException)
            {
                // expected
            }

            if (!isExistingMutex)
            {
                var createdNew = false;
                mutex = new Mutex(false, @"Global\somemutexname", out createdNew);
            }

            try
            {
                mutex.WaitOne();
            }
            catch (AbandonedMutexException)
            {
                // expected
            }

            // Read the current value, modify it, and write it back while holding the mutex.
            var result = default(TestClass);
            using (var fs = new FileStream(@"D:\Personal\TestFile.txt", FileMode.OpenOrCreate, FileAccess.Read, FileShare.ReadWrite))
            {
                var formatter = new BinaryFormatter();
                result = (TestClass)formatter.Deserialize(fs);
            }

            result.Value2 = new Random().Next();

            using (var fs = new FileStream(@"D:\Personal\TestFile.txt", FileMode.OpenOrCreate, FileAccess.Write, FileShare.ReadWrite))
            {
                var formatter = new BinaryFormatter();
                formatter.Serialize(fs, result);
            }

            mutex.ReleaseMutex();
            mutex.Dispose();
        }
    }

    [Serializable]
    public class TestClass
    {
        public string Value1 { get; set; }
        public int Value2 { get; set; }
        public long Value3 { get; set; }
        public long Value4 { get; set; }
        public long Value5 { get; set; }
        public long Value6 { get; set; }
        public byte Value7 { get; set; }
    }
}
The same code is used on .NET Core and .NET Framework. These are the results I get for .NET Core and .NET Framework.

Console - tested using the Stopwatch class, Parallel loop of 2,000:
- [Framework] 2358 milliseconds (average)
- [Core] 7710 milliseconds (average)

Kestrel (Web API) vs .NET Framework (Web API) - tested using Visual Studio Load Test, run for 1 minute:
- [Framework] 2128 requests per second (average)
- [Core] 774 requests per second (average)
Based on the results, we can see that .NET Framework outperforms .NET Core. So I would like to know: is this the expected result in .NET Core? Do I have to write it differently compared to .NET Framework? Or is it due to serialization?
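One way to narrow down the cause (a sketch, not part of the original report) is to time the named-mutex handshake and the BinaryFormatter round trip separately. The class name IsolationBenchmark, the iteration count, and the use of a MemoryStream instead of the file are illustrative assumptions; TestClass is the type from the repro above.

using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using System.Threading;
using System.Threading.Tasks;

static class IsolationBenchmark
{
    public static void Run()
    {
        const int iterations = 10000;

        // 1) Named mutex only: create/open, acquire, release, dispose.
        var sw = Stopwatch.StartNew();
        Parallel.For(0, iterations, _ =>
        {
            using (var mutex = new Mutex(false, @"Global\somemutexname"))
            {
                mutex.WaitOne();
                mutex.ReleaseMutex();
            }
        });
        sw.Stop();
        Console.WriteLine($"Mutex only: {sw.Elapsed.TotalMilliseconds} ms");

        // 2) BinaryFormatter only: serialize and deserialize in memory, no file, no mutex.
        var payload = new TestClass { Value1 = "Somevalue", Value2 = 1 };
        sw.Restart();
        Parallel.For(0, iterations, _ =>
        {
            var formatter = new BinaryFormatter();
            using (var ms = new MemoryStream())
            {
                formatter.Serialize(ms, payload);
                ms.Position = 0;
                formatter.Deserialize(ms);
            }
        });
        sw.Stop();
        Console.WriteLine($"BinaryFormatter only: {sw.Elapsed.TotalMilliseconds} ms");
    }
}

If the mutex-only loop accounts for most of the gap between the two runtimes, the serializer can be ruled out; if the serializer-only loop does, the named mutex can.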
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Comments: 22 (17 by maintainers)
No, what I tried to say is that your change is not connected to the regression. It's probably a regression in the Mutex code or in the Mutex + BinaryFormatter path.
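One way to test the "Mutex + BinaryFormatter" hypothesis (a hedged sketch, not something posted in the thread) is to keep the mutex and the Parallel.For loop exactly as in the repro but replace BinaryFormatter with a hand-written BinaryReader/BinaryWriter round trip for TestClass; the PlainBinaryIo helper below is hypothetical.

using System.IO;

public static class PlainBinaryIo
{
    // Reads TestClass fields in a fixed order; assumes the file was written by Write below.
    public static TestClass Read(string path)
    {
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        using (var reader = new BinaryReader(fs))
        {
            return new TestClass
            {
                Value1 = reader.ReadString(),
                Value2 = reader.ReadInt32(),
                Value3 = reader.ReadInt64(),
                Value4 = reader.ReadInt64(),
                Value5 = reader.ReadInt64(),
                Value6 = reader.ReadInt64(),
                Value7 = reader.ReadByte()
            };
        }
    }

    // Writes the fields in the same fixed order, truncating any previous content.
    public static void Write(string path, TestClass value)
    {
        using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.ReadWrite))
        using (var writer = new BinaryWriter(fs))
        {
            writer.Write(value.Value1);
            writer.Write(value.Value2);
            writer.Write(value.Value3);
            writer.Write(value.Value4);
            writer.Write(value.Value5);
            writer.Write(value.Value6);
            writer.Write(value.Value7);
        }
    }
}

If the gap between Framework and Core shrinks with this variant, the slowdown points at BinaryFormatter; if it stays, it points at the named mutex.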
@Anipik @ViktorHofer This issue hasn’t had activity in 2 years. Is there anything for us to do here? It looks like the threading issue was asked and answered in https://github.com/dotnet/runtime/issues/29198, so I don’t think this issue is actionable any longer by us.