quarkus: High memory utilization using Quarkus Native

Describe the bug

When running performance tests comparing Quarkus Native to Spring Native, Quarkus Native uses significantly more memory than expected. Our results closely mirror https://www.baeldung.com/spring-boot-vs-quarkus, with Quarkus utilizing on average 2.5x to 3x the RAM of Spring Boot, up to a peak of 5x. This applies to both Kubernetes and serverless testing.

Expected behavior

RAM utilization should more closely mirror that of Spring Native.

Actual behavior

Same as described above: Quarkus Native consumes on average 2.5x to 3x the RAM of Spring Boot Native, up to a peak of 5x, in both Kubernetes and serverless testing.

How to Reproduce?

Create a simple Spring Boot native project and a Quarkus native project.

Implement the bare minimum needed to create a “ping” endpoint that simply responds “pong” (a minimal sketch of both endpoints is given below).

Begin pushing traffic against the relevant endpoints.
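
For reference, a minimal sketch of what the two endpoints could look like. Package, class, and project names are hypothetical; the Quarkus side assumes a RESTEasy-based REST extension on a release using the jakarta namespace (older releases use javax.ws.rs), and the Spring side assumes spring-boot-starter-web.

// Quarkus project: hypothetical minimal JAX-RS resource.
package org.acme;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/ping")
public class PingResource {

    // Responds to GET /ping with a plain-text "pong".
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String ping() {
        return "pong";
    }
}

// Spring Boot project: hypothetical minimal controller.
package com.example.ping;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class PingController {

    // Responds to GET /ping with a plain-text "pong".
    @GetMapping("/ping")
    public String ping() {
        return "pong";
    }
}

With both projects built as native images, the same load generator can then be pointed at /ping on each service to compare memory consumption under identical traffic.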

Output of uname -a or ver

No response

Output of java -version

No response

GraalVM version (if different from Java)

No response

Quarkus version or git rev

No response

Build tool (i.e. output of mvnw --version or gradlew --version)

No response

Additional information

No response

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Comments: 16 (13 by maintainers)

Most upvoted comments

@JBailes and company, this is also why I have petitioned Baeldung to take down their article or make it accurate, as it's spreading misinformation about Spring Boot vs Quarkus. I have used both heavily, and I dread working in Spring Boot while I enjoy every second of working in Quarkus. 😄

I cannot reproduce the behaviour. I used this sample application of mine (github.com) with the following modifications:

  • limit the container to ~~30M~~ 35M of memory (limit and reservation)
  • build the application natively:
    • change the image of animal-service to Containerfile.native
    • build: ./mvnw --define native --define quarkus.native.container-runtime=podman clean package
  • deploy the database: docker-compose --file local-deployment/docker-compose.yml up -d postgres
  • create db-schema: ./mvnw flyway:migrate
  • deploy the application: docker-compose --file local-deployment/docker-compose.yml up -d --build
  • in a separate terminal, start watching the container stats: podman stats animal-service --interval 1
  • start wrk tests: wrk/run.sh 1 4 2m
  • let the tests finish, observe the container stats during test execution

Condensed script to do all deployment steps above in one go:

# Build the application natively:
./mvnw --define native --define quarkus.native.container-runtime=podman clean package
# Stop previous service (if any):
docker-compose --file local-deployment/docker-compose.yml down
# Start the database service:
docker-compose --file local-deployment/docker-compose.yml up -d postgres
# Create Database schema:
./mvnw flyway:migrate
# Start the application
docker-compose --file local-deployment/docker-compose.yml up -d --build
# Run wrk tests:
wrk/run.sh 1 4 2m

Throughput on my machine was around 4.9k requests/second. podman stats reported a consumption of ~25 MB of memory. When I took a look in the container itself (echo $(podman exec animal-service '/bin/sh' '-c' 'cat /proc/1/status') | sed -e 's|.*\(..RSS: [[:digit:]]\+ ..\).*|\1|'), I saw that the output (28141 kB) is in line with what podman stats reported. Just for good measure, I ran the wrk tests for 10 minutes to verify stability¹.

I experimented a little bit more with memory limits. Going down to 30M results in:

  • the CPU having a load of ~75%,
  • almost no load on the database container, i.e. almost nothing coming through, and
  • a miserable throughput of 10 requests/s.

There is some performance to be gained when we increase memory to 40M. There is no further performance gain beyond 40M of memory.


EDIT I changed the application to include a timer job. It seems that this pushed the application “over the barrier”, so it now needs 35M of RAM instead of 30M. I adjusted the text accordingly.


¹ This is a layman’s way to verify; a real verification would run much longer.

@melloware I think you’ve hit the nail on the head; that explains the results I was getting, as I see the same behavior. I also suspect Quarkus caches much more aggressively than Spring Boot, as I am seeing Quarkus use substantially (~30%) less CPU than Spring. This evening I’m planning on playing around more with the memory settings within Quarkus, as well as limiting memory, since I have a suspicion that allowing Quarkus unlimited RAM may have it consuming more RAM than expected.