runtime: Memory not being collected back
Description
Earlier this week my API was hit by a burst of requests. As expected, memory usage grew, BUT it did not return to the original value. Later the same week another burst of requests happened, and yet again memory inflated but did not return to the previous value (which was already above the original).
This behavior has been observed over a total of one week; take a look:

The memory peak, as mentioned before, was caused by a burst of requests. Our average request rate is around 35 req/s, and during that window (10 min) it got as high as 750 req/s.
This issue DOES NOT cause pods to restart.
Configuration
.NET 7,
Latest image from mcr.microsoft.com,
Hosted on AKS, Kubernetes version 1.23.8,
Pods are:
```yaml
limits:
  cpu: 500m
  memory: 1Gi
requests:
  cpu: 500m
  memory: 512Mi
```
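For reference with this kind of setup: when a container memory limit is set, the .NET GC defaults its heap hard limit to roughly 75% of that limit, and ASP.NET Core defaults to Server GC, which holds on to more committed memory between bursts. A minimal runtimeconfig.json sketch that makes those knobs explicit (the 50% figure is an illustrative assumption, not a recommendation):

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true,
      "System.GC.HeapHardLimitPercent": 50
    }
  }
}
```

The same settings can also be applied as `DOTNET_*` environment variables in the pod spec, but note that the GC environment variables are documented as hexadecimal values (e.g. `0x1E` for 30%), unlike the decimal values in runtimeconfig.json.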
@maffelbaffel I have written about this problem in the “I didn’t change my code at all, why am I seeing a regression in memory when I upgrade my .NET version?” section of mem-doc; please take a look and let me know if it’s helpful.
I think this is the kind of thing that leads to problems. How am I to determine whether that memory is indeed just being saved for a rainy day? Otherwise I would be triggering a non-trivial, blocking operation that might yield perhaps a few KB worth of memory…
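One way to check is from inside the process: `GC.GetGCMemoryInfo()` reports both the live heap size and the total memory the GC has committed, so a persistent gap between committed bytes and heap size (and between the OS working set and the heap) is memory being held for reuse rather than leaked. A minimal sketch; the `MemorySnapshot` helper name is hypothetical:

```csharp
using System;
using System.Diagnostics;

// Hypothetical helper: separates memory the GC is actually using
// from memory it has merely committed (the "rainy day" reserve).
static class MemorySnapshot
{
    public static void Log()
    {
        GCMemoryInfo info = GC.GetGCMemoryInfo();
        long workingSet = Process.GetCurrentProcess().WorkingSet64;

        Console.WriteLine($"OS working set   : {workingSet >> 20} MB");
        Console.WriteLine($"GC committed     : {info.TotalCommittedBytes >> 20} MB");
        Console.WriteLine($"GC heap size     : {info.HeapSizeBytes >> 20} MB");
        Console.WriteLine($"GC fragmentation : {info.FragmentedBytes >> 20} MB");
    }
}
```

If "GC committed" stays far above "GC heap size" long after a burst, the runtime is keeping those pages for the next burst rather than leaking them.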
Isn’t there a “decommission as soon as possible” flag or config?
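.NET 7 did add something close to that: `GCCollectionMode.Aggressive`, which performs a full blocking, compacting collection (including the large object heap) and returns as much committed memory to the OS as the GC can. A minimal sketch of a one-off trim; whether and when to trigger it is exactly the trade-off raised above:

```csharp
using System;

// One-off "give the memory back now" request (GCCollectionMode.Aggressive,
// new in .NET 7). Aggressive mode requires a blocking, compacting
// collection of the highest generation.
GC.Collect(GC.MaxGeneration, GCCollectionMode.Aggressive, blocking: true, compacting: true);

Console.WriteLine($"GC committed after trim: {GC.GetGCMemoryInfo().TotalCommittedBytes >> 20} MB");
```

For a standing configuration rather than an explicit call, the `System.GC.ConserveMemory` knob (0–9 in runtimeconfig.json) trades some throughput for a smaller committed heap, which may be the closest thing to a "decommission as soon as possible" config.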