linkerd2: proxy memory leak
We can reliably reproduce a proxy memory leak with the following config:
:; kubectl create ns lifecycle
:; curl -s https://raw.githubusercontent.com/linkerd/linkerd-examples/master/lifecycle/lifecycle.yml |linkerd inject - |kubectl apply -f - -n lifecycle
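Before starting the measurement loop below, it's worth confirming that the pods came up with the sidecar injected and that the data plane is healthy. Plain kubectl plus the CLI's proxy check should be enough for that (the exact flags assume a reasonably recent linkerd CLI, so treat this as a sketch rather than part of the original repro):
:; kubectl get po -n lifecycle
:; linkerd check --proxy -n lifecycle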
Then, the leak can be observed by watching the linkerd-proxy container in the bb-broadcast pod, which slowly grows its RSS for as long as the process runs:
:; while true ; do date ; kubectl top po --containers -n lifecycle -l app=bb-broadcast |awk '$2 ~ /^linkerd-proxy$/ {print $0}' ; sleep 600 ; done
Wed May 29 21:38:17 UTC 2019
bb-broadcast-8768bbf55-p62l4 linkerd-proxy 196m 9Mi
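kubectl top only shows container-level usage; for a view from inside the proxy itself, its Prometheus endpoint can be dumped periodically and compared against the RSS samples. A sketch, assuming the linkerd CLI's metrics subcommand is available and reusing the pod name from the output above (exactly which metrics the proxy exports depends on the proxy version):
:; linkerd metrics -n lifecycle po/bb-broadcast-8768bbf55-p62l4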
About this issue
- State: closed
- Created 5 years ago
- Comments: 18 (17 by maintainers)
@olix0r I don’t believe that’s what’s happening here: it’s the connection attempt that’s timing out, and the error message is “request timed out” only because it’s coming from the tower-timeout middleware wrapping the connect service.

The above example only leaks when using kube-proxy for usermode proxying. In this mode, kube-proxy terminates the TCP connection. In iptables proxying, the connection never succeeds, so the request is never dispatched into hyper, etc.

Here’s an example TCP stream in the usermode case:
It seems likely we could reproduce this with any client that talks to a server that accepts and immediately closes a connection as soon as it reads some data…
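A minimal stand-in for such a server, for anyone who wants to test that theory outside of kube-proxy (a sketch: socat and the port number are my own choices, not part of the original setup) — it accepts each connection, reads a single byte, and exits, which closes the socket immediately:
:; socat TCP-LISTEN:9999,reuseaddr,fork SYSTEM:'head -c 1 >/dev/null'
A quick sanity check from another shell should show the connection being accepted and then dropped as soon as the request bytes start flowing:
:; curl -sv --max-time 2 http://127.0.0.1:9999/
Routing a meshed client at an endpoint that behaves like this should exercise the same connect-then-abrupt-close path that usermode kube-proxy produces.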