shadowsocks-rust: High memory consumption for UDP associations on Linux (OpenWRT)

Test equipment: R4S
Test system: OpenWrt

Artifact: https://github.com/shadowsocks/shadowsocks-rust/suites/4952439929/artifacts/143776137
Download: https://github.com/shadowsocks/shadowsocks-rust/actions/runs/1703977841

/etc/sysctl.d/11-nf-conntrack.conf

# Do not edit, changes to this file will be lost on upgrades
# /etc/sysctl.conf can be used to customize sysctl settings

net.netfilter.nf_conntrack_acct=1
net.netfilter.nf_conntrack_checksum=0
net.netfilter.nf_conntrack_max=1020000
net.netfilter.nf_conntrack_tcp_timeout_established=7440
net.netfilter.nf_conntrack_udp_timeout=60
net.netfilter.nf_conntrack_udp_timeout_stream=180
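
For reference, a minimal way to reload these settings and watch the conntrack table fill during the test (my addition, not part of the original report; it assumes the usual procfs paths and that BusyBox provides watch):

# Reload the conntrack sysctls after editing the file
sysctl -p /etc/sysctl.d/11-nf-conntrack.conf

# Watch current conntrack entries against the configured maximum
watch -n 1 'cat /proc/sys/net/netfilter/nf_conntrack_count /proc/sys/net/netfilter/nf_conntrack_max'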

/root/socks/sslocal --protocol tun -s "[::1]:8388" -m "aes-256-gcm" -k "hello-kitty" --outbound-bind-interface lo --tun-interface-name tun1 -U --udp-timeout 60 --udp-max-associations 65535

dns.exe -sr gddx.txt -at google.com -sl 6

dns-test.zip
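
For anyone without dns.exe, a rough stand-in (my sketch, not the original trigger; it assumes dig is installed, e.g. the bind-dig package, and queries a single upstream resolver instead of the server list in gddx.txt):

# Rough stand-in for dns.exe: 100 workers looping UDP DNS queries for 30 seconds.
# Each query from a fresh source port becomes a separate UDP association in sslocal.
for i in $(seq 1 100); do
  (
    end=$(( $(date +%s) + 30 ))
    while [ "$(date +%s)" -lt "$end" ]; do
      dig +time=1 +tries=1 @8.8.8.8 google.com >/dev/null 2>&1
    done
  ) &
done
wait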

Start the dns test tool, let it run for 30 seconds, then shut it down.

Then wait for 5 minutes, until the number of connections has dropped:

(screenshots)

It can then be seen that sslocal occupies more than 400 MB of memory and never releases it.
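
One simple way to watch this (my addition; it assumes a single sslocal process and the standard /proc layout):

# Sample sslocal's resident memory once per second
while true; do
  grep VmRSS /proc/$(pidof sslocal)/status
  sleep 1
done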

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 61

Commits related to this issue

Most upvoted comments

You are reporting bugs to an open-source project. I am not paid and I do not work on it full time. If you keep going like this, as if I must solve your problem or owe you my best service, the conversation ends here.

Remember, you are not a customer to me. We are just software developers.

I won’t follow your configuration, because you keep adding variables and your test method is just nonsense.

And I can confirm that I can see nearly 400 MiB RSS on the R4S after opening 100 UDP clients sending queries continuously (:

(screenshot)

But the memory consumption is stable and starts to drop (the clients are still running):

(screenshot)

The test is still running, and I cannot see any memory leak because memory usage is very stable.

So if you are willing to go on and look into why it takes 400 MiB to maintain 65536 UDP associations, we can go on. But I think I have already reached a conclusion: there is no memory leak.
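
For scale (my back-of-envelope using the numbers above, not part of the original thread): 400 MiB across 65536 associations is 400 * 1024 / 65536 = 6.25 KiB per association, i.e. a few kilobytes of buffers and bookkeeping each, a bounded per-association cost rather than unbounded growth.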

I have told you the test environment, and even given you the source code of the trigger. You won’t run it yourself, yet you still question me.

If you follow your own logic, you will not find the problem.

Are the screenshots I gave you fake?

I can trigger it but you can’t; is this my problem or your problem?