runtime: GC out of memory inside the docker container
Description
When running a .NET 7 console application inside a Docker container with a 400 MB memory limit, my application crashes with an OutOfMemoryException, even though dotMemory profiling shows that it uses just over 200 MB.
Reproduction Steps
This is my simple program to run:
using System;
using System.Collections.Generic;
using System.Text;

internal class Program
{
    private static void Main(string[] args)
    {
        // Print the GC configuration so we can see whether the container
        // memory limit was picked up.
        foreach (KeyValuePair<string, object> config in GC.GetConfigurationVariables())
        {
            Console.WriteLine($"{config.Key} {config.Value}");
        }

        StringBuilder sb = new StringBuilder();
        while (true)
        {
            for (int i = 0; i < 500_000; i++)
            {
                sb.Append(Guid.NewGuid());
                sb.Append(Guid.NewGuid());
                sb.Append(Guid.NewGuid());
            }

            string str = sb.ToString();
            int allocatedBytesByString = Encoding.UTF8.GetByteCount(str);
            Console.WriteLine($"Allocated on this stage megabytes by string: {allocatedBytesByString / 1024 / 1024}");
            sb.Clear();
            Console.WriteLine(str);
        }
    }
}
This is the memory profile when running without memory limits (screenshot):
This is the OutOfMemoryException when I start the Docker container with a 400 MB memory limit (screenshot). The collector configuration printed at startup shows that the GC noticed the memory limit and capped the heap size at 75% of it.
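For reference, that 75% container default can be overridden. A hedged example (System.GC.HeapHardLimitPercent is the documented GC setting; the value 90 is only an illustration) in runtimeconfig.json:

{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.HeapHardLimitPercent": 90
    }
  }
}

The same knob exists as the environment variable DOTNET_GCHeapHardLimitPercent; note that GC settings supplied through environment variables are parsed as hexadecimal, so 90% would be written as 5A.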
The following screenshot shows that one large object occupies 51 megabytes:
Expected behavior
I expected that the garbage collector would start collecting objects more often so as not to run into an OutOfMemoryException.
Actual behavior
In fact, when starting the application, we immediately get an OutOfMemoryException.
Regression?
No response
Known Workarounds
No response
Configuration
- .NET 7.0.100
- OS Windows 10 Pro 21H2
- x64
- Docker Server: Docker Desktop 4.20.0 (109717)
Other information
The reason is probably the following: dotMemory shows that in the first few seconds of operation the application clearly consumes more memory than allowed (about 600 MB), and only then does usage drop toward 200 MB.
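A quick way to check this hypothesis without a profiler is to log the GC's own counters once per outer-loop iteration. A minimal sketch, assuming it is placed inside the while loop of the repro (GC.GetTotalMemory and GCMemoryInfo are real APIs; the output format is ours):

GCMemoryInfo mi = GC.GetGCMemoryInfo();
Console.WriteLine(
    $"heap: {GC.GetTotalMemory(forceFullCollection: false) / (1024 * 1024)} MB, " +
    $"committed: {mi.TotalCommittedBytes / (1024 * 1024)} MB");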
About this issue
- State: closed
- Comments: 15 (9 by maintainers)
In my opinion, on the sb.Clear() line we have the StringBuilder buffer, in the form of a large number of char arrays with a total size of about 103 MB, plus a string of approximately the same size. The string cannot be collected because it is still needed on the next line, Console.WriteLine(str). sb.Clear() then tries to allocate a new buffer, also about 103 MB in size, and at this stage an OutOfMemoryException occurs because there is not enough memory to hold all three objects: https://github.com/dotnet/runtime/blob/03f2be66ea268a1ea285899f56f55b81e5a25044/src/libraries/System.Private.CoreLib/src/System/Text/StringBuilder.cs#L421-L425

If we swap the order of sb.Clear() and Console.WriteLine(str), the string is already collectible on the sb.Clear() line, so only the StringBuilder buffer remains in memory; there is then enough memory and no OutOfMemoryException occurs. Alternatively, you can specify Capacity on the StringBuilder, in which case sb.Clear() will not allocate a new buffer. Both workarounds are sketched after this comment.

In my opinion, diagnosing an OOM is very difficult or almost impossible without a memory dump of the process at the time of the crash, because an OOM can occur even on a line as trivial as object o = (object)1.

One vector for improvement could be optimizing memory allocation after an OOM occurs so that a correct stack trace is always produced. It might make sense to create a static OutOfMemoryException object and use it at crash sites instead of allocating the exception object each time.

It is also not entirely clear why StringBuilder.Clear() allocates a new memory buffer when there is more than one chunk in the StringBuilder chain. It may be possible to reuse the memory that was previously allocated.
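A sketch of the two workarounds described above, applied to the repro. The capacity value is an assumption tied to this particular loop: it is the exact number of chars appended per iteration (3 GUIDs × 36 chars × 500,000 iterations):

// Workaround 1: print before clearing. By the time sb.Clear() runs, the
// ~103 MB string is unreachable, so the GC can reclaim it if Clear()
// needs to allocate a replacement buffer.
Console.WriteLine(str);
sb.Clear();

// Workaround 2: pre-size the builder. With a single chunk, Clear() keeps
// the existing buffer instead of allocating a new one.
StringBuilder sb = new StringBuilder(capacity: 3 * 36 * 500_000);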
Thanks for the dump, @MMaximus111! We took a look and found that, based on your container size, the app simply ran out of memory and, after trying to do a full blocking GC, threw an OOM exception.
Details
We deduced this by opening the dump in WinDbg and entering !ao, the SOS command that displays details about OOMs. The allocation request that threw the exception asked for 108,002,488 bytes.
By the time the OOM was thrown, the total amount of committed memory was 229,957,632 bytes.
Adding the requested bytes to the total committed memory gives 337,960,120 bytes, which is greater than your heap hard limit of 314,572,800 bytes.
We also checked whether the GC did the right thing by trying one last full blocking GC, and found that despite the fact that this GC was compacting, it didn't help.
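Restating that arithmetic as a small self-contained check (all constants are copied from the !ao output above):

// Values taken from the crash dump:
const long committedBytes = 229_957_632; // committed when the OOM was thrown
const long requestedBytes = 108_002_488; // size of the failed allocation request
const long heapHardLimit  = 314_572_800; // 300 MB = 75% of the 400 MB container limit
// 229,957,632 + 108,002,488 = 337,960,120 > 314,572,800, so the request
// could not be satisfied even after a final full compacting GC.
Console.WriteLine(committedBytes + requestedBytes > heapHardLimit); // prints True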