vertx-sql-client: When a pooled connection is closed, the actual connection should not be made available after it has been closed internally

After a scheduled upgrade of our running Postgres, our application ended up in a strange state: every request to the database threw io.vertx.core.VertxException: Connection not open CLOSED. After manually restarting every pod of our app it healed (during startup each instance was able to connect to the database as usual). We create the connection pool like this:

    PgClient.pool(vertx, PgPoolOptions().apply {
        database = config.database
        host = config.host
        port = config.port
        user = config.user
        password = config.password
        isSsl = true
        isTrustAll = true
        maxSize = 10
        cachePreparedStatements = true
    })
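For context, requests go through preparedQuery on that pool, which is the call path visible in the stack trace below. A minimal sketch, assuming the 0.x preparedQuery(sql, handler) signature and a hypothetical pool variable holding the pool created above:

    // Sketch only: "pool" is assumed to hold the PgPool built above.
    pool.preparedQuery("SELECT 1") { ar ->
        if (ar.succeeded()) {
            // use the rows in ar.result()
        } else {
            // handle a failure reported through the async result
            ar.cause().printStackTrace()
        }
    }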

Unfortunately I don't have a reproducer for this situation. Maybe you will be able to figure out the cause from the stack trace.

Pg-client version: 0.8.0
Postgres version: PostgreSQL 10.4 (Ubuntu 10.4-2.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0, 64-bit

Full stacktrace:

io.vertx.core.VertxException: Connection not open CLOSED
at io.reactiverse.pgclient.impl.SocketConnection.bilto(SocketConnection.java:193)
at io.reactiverse.pgclient.impl.CommandBase.foo(CommandBase.java:54)
at io.reactiverse.pgclient.impl.SocketConnection.schedule(SocketConnection.java:181)
at io.reactiverse.pgclient.impl.ConnectionPool$PooledConnection.schedule(ConnectionPool.java:81)
at io.reactiverse.pgclient.impl.PgConnectionImpl.schedule(PgConnectionImpl.java:60)
at io.reactiverse.pgclient.impl.PgClientBase.lambda$preparedQuery$0(PgClientBase.java:41)
at io.reactiverse.pgclient.impl.SocketConnection$CachedPreparedStatement.get(SocketConnection.java:112)
at io.reactiverse.pgclient.impl.PrepareStatementCommand.foo(PrepareStatementCommand.java:60)
at io.reactiverse.pgclient.impl.SocketConnection.schedule(SocketConnection.java:181)
at io.reactiverse.pgclient.impl.ConnectionPool$PooledConnection.schedule(ConnectionPool.java:81)
at io.reactiverse.pgclient.impl.PgConnectionImpl.schedule(PgConnectionImpl.java:60)
at io.reactiverse.pgclient.impl.PgConnectionImpl.lambda$schedule$0(PgConnectionImpl.java:64)
at io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:339)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:463)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
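To illustrate the behaviour the issue title asks for, here is a deliberately simplified, hypothetical pool sketch (not the library's actual implementation): connections that have been closed internally are evicted instead of being handed back to callers.

    // Purely illustrative: evict closed connections rather than reusing them.
    class NaivePool<C : AutoCloseable>(private val isOpen: (C) -> Boolean) {
        private val available = ArrayDeque<C>()

        // Only live connections go back into the idle queue.
        fun release(conn: C) {
            if (isOpen(conn)) available.addLast(conn) else conn.close()
        }

        // Skip (and drop) anything that was closed while sitting idle.
        fun acquire(): C? {
            while (available.isNotEmpty()) {
                val conn = available.removeFirst()
                if (isOpen(conn)) return conn
                conn.close()
            }
            return null // caller opens a fresh connection
        }
    }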

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 22 (13 by maintainers)

Most upvoted comments

@mkulak can you set an exception handler to know what is happening to the connection?

conn.exceptionHandler(err -> {
  // Log the error
});

so we know the reason that forced all the connections to be closed.

Can you also upgrade the client version to 0.9.0?
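In Kotlin, the suggestion above could look roughly like the sketch below. It assumes PgPool.getConnection plus the exceptionHandler and closeHandler callbacks on the returned connection, and it only logs what happens to that connection:

    import io.reactiverse.pgclient.PgPool

    // Minimal Kotlin sketch of the suggestion above; assumes PgPool.getConnection
    // and the exceptionHandler/closeHandler callbacks on the returned connection.
    fun queryWithLogging(pool: PgPool) {
        pool.getConnection { ar ->
            if (ar.succeeded()) {
                val conn = ar.result()
                conn.exceptionHandler { err ->
                    // log why the underlying connection broke
                    println("pg connection error: ${err.message}")
                }
                conn.closeHandler {
                    // log when the underlying connection gets closed
                    println("pg connection closed")
                }
                conn.preparedQuery("SELECT 1") { res ->
                    if (res.failed()) println("query failed: ${res.cause().message}")
                    conn.close() // give the connection back to the pool
                }
            } else {
                println("could not acquire a connection: ${ar.cause().message}")
            }
        }
    }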

@AlxGDev thanks, I've been able to reproduce and fix multiple bugs.

UPD: We have 2 different services running reactive-pg-client (each talks to its own Postgres). Initially both of them were running version 0.8.0. We ran into this bug 2 times; both times it hit both services simultaneously.

After that, in one service we bumped the version to 0.9.0, installed an exception handler with logging, and added a try-catch for this specific exception; in the catch block we simply recreate the pool. The other service was left untouched.
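A rough sketch of that workaround, assuming the failure surfaces as a VertxException whose message contains "Connection not open"; createPool here is a hypothetical factory around the PgClient.pool(...) call shown earlier:

    import io.reactiverse.pgclient.PgPool
    import io.vertx.core.VertxException

    // Hypothetical sketch: if a call fails with the "Connection not open"
    // VertxException, close the wedged pool, build a fresh one, and retry once.
    class RecreatingPoolHolder(private val createPool: () -> PgPool) {
        @Volatile private var pool: PgPool = createPool()

        fun <T> execute(block: (PgPool) -> T): T =
            try {
                block(pool)
            } catch (e: VertxException) {
                if (e.message?.contains("Connection not open") == true) {
                    pool.close()        // discard the broken pool
                    pool = createPool() // recreate it from scratch
                    block(pool)         // retry once against the fresh pool
                } else {
                    throw e
                }
            }
    }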

Today the bug reproduced (with the usual symptoms) in the second service (the unmodified one). The first service didn't encounter any problem (no exceptions in the logs at all). So either we were unlucky and the situation leading to the bug didn't occur for the first service, OR the bug was fixed in 0.9.0.

Now I'm going to bump the library version on the second service (without adding the custom pool-recreation logic) and see what happens next.