dns: too many goroutines
Hi, I’m using this library on Windows 10. When I run 200 threads of nslookup sending UDP packets to my DNS server, I see that a very large number of goroutines are created and they occupy a lot of heap memory.
So I reviewed the code and found that there is no limit on concurrency when processing UDP packets in server.go. I’m not sure whether this is intended. Could anybody help me?
My code:
func serveDNS(laddr string) error {
    serveMux := dns.NewServeMux()
    serveMux.HandleFunc(".", handleDnsRequest)
    glog.Errorf("serveDNS Begin...")
    e := make(chan error)
    // Start a UDP and a TCP server on the same address.
    for _, _net := range [...]string{"udp", "tcp"} {
        srv := &dns.Server{Addr: laddr, Net: _net, Handler: serveMux}
        go func(srv *dns.Server) {
            e <- srv.ListenAndServe()
        }(srv)
    }
    // Return the first error from either listener.
    return <-e
}
The dns library read loop, from github.com/miekg/dns/server.go:
for srv.isStarted() {
    m, s, err := reader.ReadUDP(l, rtimeout)
    if err != nil {
        if !srv.isStarted() {
            return nil
        }
        if netErr, ok := err.(net.Error); ok && netErr.Temporary() {
            continue
        }
        return err
    }
    if len(m) < headerSize {
        if cap(m) == srv.UDPSize {
            srv.udpPool.Put(m[:srv.UDPSize])
        }
        continue
    }
    wg.Add(1)
    go srv.serveUDPPacket(&wg, m, l, s)
}
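The loop above spawns one goroutine per packet with no upper bound. A minimal sketch of capping the number of in-flight handlers with a counting semaphore in such a read loop (this is not the library's code; the cap, the semaphore, and handlePacket are assumptions made for illustration):

// Sketch only, not miekg/dns code: cap concurrent UDP handlers with a
// counting semaphore. maxInflight and handlePacket are hypothetical.
package udplimit

import "net"

const maxInflight = 1000 // hypothetical cap on concurrent handler goroutines

var sem = make(chan struct{}, maxInflight)

func serveLoop(conn *net.UDPConn, handlePacket func([]byte, *net.UDPAddr)) error {
    buf := make([]byte, 65535)
    for {
        n, addr, err := conn.ReadFromUDP(buf)
        if err != nil {
            return err
        }
        pkt := make([]byte, n)
        copy(pkt, buf[:n])

        select {
        case sem <- struct{}{}: // a slot is free: handle the packet concurrently
            go func() {
                defer func() { <-sem }()
                handlePacket(pkt, addr)
            }()
        default: // over the cap: drop the packet; UDP clients will retry
        }
    }
}

The comments below discuss where such a cap should live and how its value should be chosen.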
About this issue
- State: closed
- Created 5 years ago
- Comments: 21 (14 by maintainers)
Commits related to this issue
- Add LimitReader to drop excess packets This includes code from #1052 and add a tests See for further discussion: #997 Signed-off-by: Miek Gieben <miek@miek.nl> — committed to miekg/dns by miekg 4 years ago
- Add LimitReader to drop excess packets Refuse packets when we're over a certain limit of goroutines. This includes code from #1052 and add a tests. See for further discussion: #997 Signed-off-by: M... — committed to miekg/dns by miekg 4 years ago
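The commits above describe refusing packets once a certain goroutine limit is exceeded. As a rough sketch of that idea (this is not the library's actual LimitReader; the PacketReader interface and the limit value are assumptions), a reader wrapper could return an empty message when the goroutine count is over the limit, so a read loop like the one quoted earlier drops it in its len(m) < headerSize branch:

// Rough sketch, not the actual LimitReader from miekg/dns: wrap a packet
// reader and return an empty message when the process already runs too many
// goroutines. PacketReader and the limit field are assumptions.
package limitreader

import (
    "net"
    "runtime"
)

type PacketReader interface {
    ReadPacket() (msg []byte, addr *net.UDPAddr, err error)
}

type limitReader struct {
    inner PacketReader
    limit int // allow up to this many goroutines before dropping
}

func (r *limitReader) ReadPacket() ([]byte, *net.UDPAddr, error) {
    msg, addr, err := r.inner.ReadPacket()
    if err != nil {
        return nil, nil, err
    }
    if runtime.NumGoroutine() > r.limit {
        // Too many goroutines: hand back an empty message so the caller's
        // "len(m) < headerSize" check drops the packet instead of spawning
        // another handler goroutine.
        return nil, addr, nil
    }
    return msg, addr, nil
}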
If we agree this is a sensible way forward, it will be a user-defined function with a default implementation, just like the AcceptFunc. But I’m also afraid of capping performance unnecessarily.
@szuecs getting the current memory allocated to a process is not portable (sadly). We could potentially make that the knob you need to tweak; i.e. “I have 2 GB, please figure out how many things I can do with that, and SERVFAIL if I hit it.”
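Go's runtime can at least report its own heap usage, so a hedged sketch of that knob (the 2 GB budget and the helper name are assumptions, not anything in miekg/dns) could look like:

// Hedged sketch of the "I have 2 GB, refuse work past that" idea, using Go's
// own heap statistics. The budget and function name are assumptions.
package memguard

import "runtime"

const heapBudget = 2 << 30 // hypothetical 2 GB budget

// overBudget reports whether the Go heap has grown past the budget, in which
// case a server could answer SERVFAIL instead of taking on more work.
func overBudget() bool {
    var ms runtime.MemStats
    runtime.ReadMemStats(&ms)
    return ms.HeapAlloc > heapBudget
}

Note that runtime.ReadMemStats briefly stops the world, so a check like this would have to be sampled periodically rather than run per query.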
@tmthrgd memory usage seems to be the overarching thing we can do something sensible about. We could start with the dumb thing of: 2k * #goroutines < X -> OK; >= X -> SERVFAIL. runtime.NumGoroutine would make this trivial.
Follow-up question: should this be a core “feature” or left to the application? I.e. even in the CoreDNS case, you don’t want to make a cache plugin suffer from a slow backend used in the forward plugin.
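As a minimal sketch of that check applied at the application level (the threshold and the wrapper are made up for illustration; this is not a miekg/dns feature):

// Sketch of the "#goroutines >= X -> SERVFAIL" check as a handler wrapper.
// maxGoroutines and withGoroutineLimit are illustrations, not part of the
// miekg/dns API.
package limitmw

import (
    "runtime"

    "github.com/miekg/dns"
)

const maxGoroutines = 10000 // the X above; the value here is a guess

// withGoroutineLimit answers SERVFAIL once the process runs too many
// goroutines and otherwise hands the query to the real handler.
func withGoroutineLimit(next dns.Handler) dns.Handler {
    return dns.HandlerFunc(func(w dns.ResponseWriter, r *dns.Msg) {
        if runtime.NumGoroutine() >= maxGoroutines {
            m := new(dns.Msg)
            m.SetRcode(r, dns.RcodeServerFailure)
            _ = w.WriteMsg(m)
            return
        }
        next.ServeDNS(w, r)
    })
}

This caps the work done per query, but by the time the handler runs the per-packet goroutine has already been spawned, so bounding the goroutine count itself needs a limit closer to the read loop, as in the commits above.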
That still pushes the problem downwards; I’d rather do something sensible here.
I’m also load testing CoreDNS (localhost <-> localhost, so take it with a slight grain of salt), with the backend being served by the erratic plugin (which can introduce delays and drops; so far only tested with delays). Doing this with dnsperf, which may be too smart. But I’m not seeing the problem: 300 goroutines, memory is sane, perf is 40K qps (the backend does 130K qps directly, so some optimization might be in order).
Actually, correctly reading your proposal: what’s that value going to be, and can it be dynamically determined? If not (I’ve seen this at Google) you’re just passing the problem down to the operator (SRE at Google), and they are left with the same question. To add to the difficulty, we compile for various platforms and CPU architectures.