netty: Apparent memory leak when writing a TextWebSocketFrame on websocket connect.
Expected behavior
I’m trying to write a Netty-based WebSocket server that is expected to receive high load. The basic reproducible case involves simply opening a WebSocket connection and having the server send a welcome message back to the client.
Actual behavior
While load testing during development, I’m seeing excessive memory utilization by the process that cannot be attributed to any identifiable source (at least, not identifiable by me, hence this issue). Under heavy load, the process eventually consumes all available memory on the host before it is killed by a separate oom-reaper process.
This memory utilization only occurs when the server starts writing new, non-empty TextWebSocketFrames.
Steps to reproduce
Modify the example WebSocket server to send back any non-empty greeting on connect and apply heavy load. (Note: the definition of heavy load will vary based on the memory available to the service.)
Minimal yet complete reproducer code (or URL to code)
Sample addition to WebSocketFrameHandler
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
    super.userEventTriggered(ctx, evt);
    if (evt instanceof HandshakeComplete) {
        websocketConnected(ctx);
    }
}

private void websocketConnected(ChannelHandlerContext ctx) {
    ctx.channel().writeAndFlush(new TextWebSocketFrame("{}"));
}
For a fully functioning app that should exhibit the behavior in question, see this repo: https://github.com/jcohen/netty-ws-leak
Netty version
4.1.43.Final
JVM version (e.g. java -version)
openjdk version "1.8.0_212"
OpenJDK Runtime Environment (IcedTea 3.12.0) (Alpine 8.212.04-r0)
OpenJDK 64-Bit Server VM (build 25.212-b04, mixed mode)
OS version (e.g. uname -a)
Linux a1d94bdbf936 4.9.184-linuxkit #1 SMP Tue Jul 2 22:58:16 UTC 2019 x86_64 Linux
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 18 (9 by maintainers)
@jcohen alright, I did spend some time looking into it and sadly there is not much we can do on our side in netty…
The problem is that with compression enabled it will use permessage-deflate with server_no_context_takeover not used. For more info on what this actually means, see https://tools.ietf.org/html/rfc7692#section-7.1.1.1. So with this configuration we need to create a new Deflater (which is part of the JDK: https://docs.oracle.com/javase/8/docs/api/java/util/zip/Deflater.html) and keep it “alive” until the connection is closed. The problem here is that Deflater is implemented using JNI and so reserves native memory that will not be released until the connection is closed and the deflater.end() method can be called by us. So the only way to “guard” against such a problem is to limit the max number of concurrent connections which use the extension.
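The native-memory lifecycle described above can be sketched with the JDK Deflater directly. This is a minimal illustration, not Netty’s internal code; the compress helper and class name are hypothetical. The key point is that zlib state lives off-heap and is only freed by end() — in the no-context-takeover-disabled case, Netty must defer that call until the connection closes:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class DeflaterLifecycle {

    // Compress a payload with a JDK Deflater. The Deflater allocates
    // native (off-heap) zlib state that is NOT tracked by the Java heap;
    // it is released only when end() is called. With
    // server_no_context_takeover not in use, a per-connection Deflater
    // must stay alive for the whole connection, so end() — and the
    // native memory release — is deferred until the connection closes.
    static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(input);
            deflater.finish();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[64];
            while (!deflater.finished()) {
                int n = deflater.deflate(buf);
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } finally {
            // Releases the native zlib memory. Skipping this (or deferring
            // it across many long-lived connections) is exactly what makes
            // the process footprint grow beyond what the heap accounts for.
            deflater.end();
        }
    }
}
```

Under heavy load, many concurrent connections each holding one un-ended Deflater add up to substantial native memory that heap dumps and GC logs will not show.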
Ok thanks … looking into it now
Thanks so much everyone for the investigation and the details. I’m so glad to know I wasn’t going mad for the past week 😂. I’ll have control over both clients and the server, so I think I can live without globally enabled compression and instead rely on compressing individual messages on an as-needed basis.
Hi @jcohen, you can also try the selective compression approach from https://github.com/netty/netty/pull/8910 (e.g. compress only messages larger than a threshold); we use this feature in production. Or optimize the extension configuration as described in this old, but very interesting post.
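The selective-compression idea above can be sketched as a simple size gate. This is a hypothetical helper (the class, method, and threshold value are illustrative, not the PR’s actual API): only payloads large enough to benefit pay the CPU and native-memory cost of the deflate extension.

```java
import java.nio.charset.StandardCharsets;

public class SelectiveCompression {

    // Hypothetical threshold: payloads below this many UTF-8 bytes are
    // sent uncompressed, since small frames rarely repay deflate's cost.
    static final int THRESHOLD_BYTES = 1024;

    // Decide per message whether compression is worthwhile, based on the
    // encoded payload size rather than the String's char count.
    static boolean shouldCompress(String payload) {
        return payload.getBytes(StandardCharsets.UTF_8).length >= THRESHOLD_BYTES;
    }
}
```

In a handler, such a check would steer each outgoing frame toward the compressed or uncompressed path; the right threshold depends on typical message sizes and available native memory.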
I will have a look.