spring-data-neo4j: Unable to acquire connection from pool within configured maximum time
Hi, I have a Reactive REST API using Spring Data Neo4j (Spring Boot v2.7.5).
I noticed lots of error logs in our service: “org.neo4j.driver.exceptions.ClientException: Unable to acquire connection from the pool within configured maximum time of 60000ms”. I use the default configuration provided by SDN.
I was able to reproduce the error. If I return Mono.error when the result is empty, all the connections in the pool end up in use and the service cannot acquire a new one, no matter how much time passes.
private fun findUserById(id: UUID): Mono<User> {
return userRepository.findById(id)
.switchIfEmpty(Mono.error(NotFoundException("User $id not found")))
}
With the property log-leaked-sessions: true enabled, there are “Neo4j Session object leaked, please ensure that your application fully consumes results in Sessions or explicitly calls close on Sessions before disposing of the objects.” logs.
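That property lives under Spring Boot's driver pool settings; in application.yml it looks roughly like this (a sketch, assuming the spring.neo4j.pool.* properties):

spring:
  neo4j:
    pool:
      # log a warning whenever a session is garbage-collected without being closed
      log-leaked-sessions: true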
Only after restarting the service does it return to a healthy state. Why are the connections from the pool not getting closed?
About this issue
- State: closed
- Created 2 years ago
- Comments: 15 (2 by maintainers)
Commits related to this issue
- GH-2632 - Upgrade Java driver to 4.4.10. Closes #2632 for 6.3 with the fixed driver 4.4.10. — committed to spring-projects/spring-data-neo4j by michael-simons 2 years ago
- GH-2632 - Upgrade Java driver to 5.3.1. Fixes #2632 on 7.0.x and main with the fixed driver 5.3.1. — committed to spring-projects/spring-data-neo4j by michael-simons 2 years ago
This is fixed with driver 4.4.10 and 5.3.1 now. Thanks to everyone involved in investigating and fixing it. Until there are new SDN releases, the following upgrade paths should work:
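For example, in a Gradle Kotlin DSL build that uses the io.spring.dependency-management plugin, overriding the managed driver version property should do it (a sketch, adjust to your own build):

// build.gradle.kts: pin the fixed Neo4j Java driver explicitly
extra["neo4j-java-driver.version"] = "4.4.10"   // use "5.3.1" on SDN 7.0.x / Spring Boot 3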
🎁 🎄
We have a fix ready to be implemented. This will be available in the 5.4 Java driver, which is planned for release in early January. It will also be backported to the LTS 4.4 driver as a patch release, which will be available sooner if all goes smoothly.
This has now been addressed with the upgrade to driver 4.4.10 in 6.3.x and 6.2.x. Waiting for a 5.3 patch or the aforementioned 5.4. Thanks.
Another workaround is using an explicit concatMap instead of flatMap, avoiding parallel transactions that way. We are investigating this.
Thanks to the awesome VMware folks we have this down to a reproducible case.
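A rough sketch of that workaround (assuming a reactive UserRepository whose findById returns Mono<User>, and the User/NotFoundException types from the snippet at the top of the issue; this is illustrative, not the reporter's actual code):

import java.util.UUID
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

class UserLookupService(private val userRepository: UserRepository) {
    // concatMap subscribes to one inner Mono at a time, so the lookups run as one
    // sequential chain instead of several parallel transactions.
    fun findUsersSequentially(ids: List<UUID>): Mono<List<User>> =
        Flux.fromIterable(ids)
            .concatMap { id ->
                userRepository.findById(id)
                    .switchIfEmpty(Mono.error(NotFoundException("User $id not found")))
            }
            .collectList()
}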
Hello @michael-simons, thank you for your reply. Yes, I see that a simple findById works fine. Let me add more details about my particular case: I have a more complex structure of Flux method invocations in my project, so I guess that is what makes the difference.
My method consumes a list of objects and creates a Flux to iterate over the list; then, in a flatMap, there is another repository call which returns an existing node, and after that I do a call to get a user which is not found, so an exception is thrown.
I can easily reproduce it with an example like this:
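(A sketch of that chain; Order, OrderRepository and the other names here are placeholders standing in for my real classes.)

import java.util.UUID
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

class NotFoundException(message: String) : RuntimeException(message)
data class Order(val id: UUID, val ownerId: UUID)
data class User(val id: UUID, val name: String)
interface OrderRepository { fun findById(id: UUID): Mono<Order> }
interface UserRepository { fun findById(id: UUID): Mono<User> }

class OrderService(
    private val orderRepository: OrderRepository,
    private val userRepository: UserRepository
) {
    // list -> Flux -> flatMap (repository call returns an existing node)
    //      -> nested lookup of a user that does not exist -> Mono.error -> collectList
    fun enrich(orders: List<Order>): Mono<List<Pair<Order, User>>> =
        Flux.fromIterable(orders)
            .flatMap { order ->
                orderRepository.findById(order.id)                  // node exists
                    .flatMap { existing ->
                        userRepository.findById(existing.ownerId)   // user is missing
                            .switchIfEmpty(
                                Mono.error(NotFoundException("User ${existing.ownerId} not found"))
                            )
                            .map { user -> existing to user }
                    }
            }
            .collectList()                                          // mapping happens after this
}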
collectList is important here, since I do some mapping in the project after that. As a result, the requests are stuck and I see “org.neo4j.driver.exceptions.ClientException: Unable to acquire connection from the pool within configured maximum time of 60000ms”.
I used your Apache Bench command to reproduce it:
ab -n 20000 -c 10 localhost:8080/6aeb4340-5712-4678-ba94-fd001a56b43d
Hello @RomanRomanenkov, thanks for using our module. I tried to recreate your scenario, please see the attached project:
sdn2632.zip
It basically recreates what you have, with minimal config. You’ll need the tx manager:
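Roughly like this (a sketch of such a bean definition; the attached project may differ in details):

import org.neo4j.driver.Driver
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.data.neo4j.core.ReactiveDatabaseSelectionProvider
import org.springframework.data.neo4j.core.transaction.ReactiveNeo4jTransactionManager

@Configuration
class Neo4jConfig {

    // Registers the reactive transaction manager so that reactive repository calls
    // run inside proper Neo4j transactions.
    @Bean
    fun reactiveTransactionManager(
        driver: Driver,
        databaseSelectionProvider: ReactiveDatabaseSelectionProvider
    ): ReactiveNeo4jTransactionManager =
        ReactiveNeo4jTransactionManager(driver, databaseSelectionProvider)
}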
With Apache Bench used as shown above, I see the connections (checked via call dbms.listConnections();) spike to 100, which is the driver’s default pool size. All requests succeed, regardless of whether I use an id that exists or not.
The connections will be there for an hour, which is also the default.
Configuring a shorter connection lifetime, you will see them going away in the database again.
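For example (a sketch using Spring Boot’s spring.neo4j.pool.* properties in application.yml; pick whatever lifetime suits you):

spring:
  neo4j:
    pool:
      # let pooled connections expire sooner than the one-hour default
      max-connection-lifetime: 5m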
No logs about leaked sessions.
Please share as many details as you have, thanks.