ingress-nginx: [bug] Nginx dropping websocket connections
NGINX Ingress controller version: 0.15.0
Kubernetes version (use `kubectl version`): 1.10.3
Environment:
- Cloud provider or hardware configuration: AWS
- Install tools: Kops
What happened: WebSocket connections are getting closed intermittently, after roughly 10 to 40 seconds. Full details of the issue are posted in the SignalR repo linked below:
https://github.com/aspnet/SignalR/issues/2411
What you expected to happen: The connection stays open
How to reproduce it (as minimally and precisely as possible):
This sample client application connects to our SignalR server: https://github.com/claylaut/signalr-websocket-connection-issue
@claylaut ok, two tests:
- increase `worker-shutdown-timeout` in the configuration ConfigMap
- add the flag `--enable-dynamic-configuration` to the ingress controller deployment (see the sketch after this list)

Reasons:
- after a reload, the nginx worker processes are replaced. That means the websocket and keepalive connections will eventually be terminated (nginx behavior we cannot change). The suggested timeout change avoids the termination after 10 seconds.
- the new flag removes nginx reloads for changes in endpoints (pods).
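A minimal sketch of what those two changes might look like, assuming the controller runs in an `ingress-nginx` namespace with a ConfigMap named `nginx-configuration` (the resource names and the 300s value are assumptions; adjust them to your kops install):

```yaml
# Assumed namespace, ConfigMap name, and timeout value; adjust to your deployment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # Give open websocket connections more time to drain when a reload does happen;
  # the short default is what produces the ~10s disconnects described above.
  worker-shutdown-timeout: "300s"
```

```yaml
# Fragment of the ingress controller Deployment (container args only).
args:
  - /nginx-ingress-controller
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
  - --enable-dynamic-configuration   # update endpoints in-place instead of reloading nginx
```

Apply the ConfigMap and add the flag to the controller's container args; the controller pods need to roll once for the flag to take effect.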
No, we're not using Prometheus yet, though we intend to. Yes, this can be closed. Thanks for your help.
Yep, `enable-dynamic-configuration` is no longer reloading the backend and now the connection stays open. Had to add:

since other namespaces that had the ingress controller not configured yet were reloading the backend for the properly configured namespace. (I think these settings should be enabled by default.)
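The snippet originally attached after "Had to add:" is not preserved in this thread. As an illustration only (an assumption, not necessarily the commenter's actual change), enabling dynamic configuration on the controllers in the remaining namespaces as well would be a fragment like this in each of their Deployments:

```yaml
# Assumed fragment of the controller Deployment in each remaining namespace;
# a controller still running without this flag reloads nginx on endpoint changes.
args:
  - /nginx-ingress-controller
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
  - --enable-dynamic-configuration
```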
Just to confirm: since the backend is not being reloaded anymore, it's safer if I set `worker-shutdown-timeout` back to its default value, right?
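Reverting would just mean deleting the key from the ConfigMap, or setting it back explicitly. Per the 10-second behavior described above, the default for this controller version appears to be 10s, but that value is an assumption worth checking against the docs:

```yaml
# Reverting: remove the key from the ConfigMap data, or set it explicitly.
data:
  worker-shutdown-timeout: "10s"  # assumed default for this controller version; verify before relying on it
```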