quarkus: High memory utilization using Quarkus Native
Describe the bug
When running performance tests comparing Quarkus native to Spring native, Quarkus native uses significantly more memory than expected. Our results closely mirror https://www.baeldung.com/spring-boot-vs-quarkus, with Quarkus using on average 2.5x to 3x the RAM of Spring Boot, up to a peak of 5x. This applies to both K8s and Serverless testing.
Expected behavior
The RAM utilization to more closely mirror what Spring native does.
Actual behavior
As described above: Quarkus native uses on average 2.5x to 3x the RAM of Spring Boot native, with peaks of 5x, in both K8s and Serverless testing.
How to Reproduce?
Create a simple Spring Boot native project and a Quarkus native project.
Implement the bare minimum needed to create a “ping” endpoint that simply responds “pong”.
Begin pushing traffic against the relevant endpoints.
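For concreteness, the endpoint under test is trivial. Below is a framework-neutral sketch (not the actual Quarkus or Spring code from the benchmark) using only the JDK's built-in HTTP server; the port and path are assumptions:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A "ping" route that answers "pong". The real comparison apps would use
// Quarkus (JAX-RS) and Spring's REST layer respectively; this version uses
// only the JDK so it is runnable as-is.
public class PingServer {
    static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/ping", exchange -> {
            byte[] body = "pong".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        start(8080); // assumed port
    }
}
```

The handler does no work, so any memory difference observed under load comes from the runtime, not the application code.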
Output of `uname -a` or `ver`
No response
Output of `java -version`
No response
GraalVM version (if different from Java)
No response
Quarkus version or git rev
No response
Build tool (ie. output of `mvnw --version` or `gradlew --version`)
No response
Additional information
No response
About this issue
- Original URL
- State: closed
- Created a year ago
- Comments: 16 (13 by maintainers)
@JBailes and company, this is also why I have petitioned Baeldung to take down their article or make it accurate, as it's spreading misinformation about Spring Boot vs Quarkus. I have used both heavily; I dread working in Spring Boot and I enjoy every second of working in Quarkus. 😄
I cannot reproduce the behaviour. I used this sample-application of mine (github.com) with the following modifications:

- ~~30M~~ 35M of memory (limit and reservation)
- `animal-service` to `Containerfile.native`

Build, deploy, and observe:

```shell
./mvnw --define native --define quarkus.native.container-runtime=podman clean package
docker-compose --file local-deployment/docker-compose up -d postgres
./mvnw flyway:migrate
docker-compose --file local-deployment/docker-compose up -d --build
podman stats animal-service --interval 1
```

`wrk` tests: `wrk/run.sh 1 4 2m`

Condensed script to do all deployment-steps above in one go:

Throughput on my machine was around 4.9k requests/second. `podman stats` reports a consumption of ~25 MB of memory. When I took a look in the container itself:

```shell
echo $(podman exec animal-service '/bin/sh' '-c' 'cat /proc/1/status') | sed -e 's|.*\(..RSS: [[:digit:]]\+ ..\).*|\1|'
```

I saw that the output (`28141 kB`) is in line with what `podman stats` reported. Just for good measure, I ran the `wrk` tests for 10 minutes, to verify stability¹.

I experimented a little bit more with memory limits. Going to 30M no longer fits (see the EDIT below). There is some performance to be gained when we increase memory to 40M; there is no more performance gain with more than 40M of memory.

EDIT: I changed the application to include a timer job. It seems that this pushed the application "over the barrier", so it now needs 35M of RAM instead of 30M. I adjusted the text accordingly.

¹ This is a layman's way to verify. A real verification would run much longer.
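The RSS check above can be tried on any Linux host; here is a minimal sketch of the same `sed` extraction, reading our own process's status file as a stand-in for PID 1 inside the container:

```shell
# Pull the resident-set-size line out of a /proc status file.
# `echo $(...)` collapses the tabs in /proc output into single spaces,
# which is what lets the "..RSS: <digits> .." pattern match "VmRSS: <n> kB".
echo $(cat /proc/self/status) | sed -e 's|.*\(..RSS: [[:digit:]]\+ ..\).*|\1|'
```

The two leading dots in the pattern match the `Vm` prefix of `VmRSS:` and the trailing two match the `kB` unit, so the command prints something like `VmRSS: 28141 kB`.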
@melloware I think you've hit the nail on the head; that explains the results I was getting, as I see the same behavior. I also suspect Quarkus caches much more aggressively than Spring Boot, as I am seeing Quarkus use substantially (~30%) less CPU than Spring. This evening I'm planning on playing around more with the memory settings within Quarkus, as well as limiting memory, since I have a suspicion that allowing Quarkus unlimited RAM may have it consuming more RAM than expected.
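One way to test that suspicion is to cap the heap explicitly rather than letting the native image size it from available memory. A GraalVM native executable accepts `-Xmx` at run time; a hypothetical Compose fragment (service name, image, and values are all placeholders to tune, not from the benchmark) might look like:

```yaml
# Bound both the container memory and the native executable's Java heap,
# instead of letting the image derive its heap from host memory.
services:
  quarkus-app:
    image: quarkus-ping:native   # placeholder image name
    command: ["-Xmx24m"]         # runtime heap cap for the native binary
    mem_limit: 35m               # container-level ceiling
```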