go: proposal: runtime: garbage collect goroutines blocked forever

Hi everyone!

As of today, Go’s runtime does not garbage collect blocked goroutines. The most cited reason is that goroutines blocked forever are usually a bug, and that collecting them would hide that bug. I would like to show a few examples where garbage-collected goroutines would really be appreciated and would lead to much safer and less buggy code.

What does it mean?

package main

import "fmt"

func numbers() <-chan int {
	ch := make(chan int)
	go func() {
		for i := 0; ; i++ {
			ch <- i
		}
	}()
	return ch
}

func main() {
	for i := 0; i < 1000; i++ {
		for num := range numbers() {
			if num >= 1000 {
				break
			}
			fmt.Println(num)
		}
	}
}

This code has a memory leak in current Go. The function numbers returns a channel that generates an infinite sequence of natural numbers. We get this sequence 1000 times in the main function, print the first 1000 numbers of each sequence, and quit. Each call to numbers spawns a goroutine that feeds the channel. However, once we have printed the first 1000 numbers it produced, that goroutine stays in memory, blocked forever on its send. If the outer for-loop iterated forever, we would run out of memory very quickly.

How to collect goroutines?

I’d suggest the following algorithm:

  1. All non-blocked goroutines are marked active.
  2. All channels not reachable by active goroutines are marked inactive.
  3. All goroutines blocked on an inactive channel are marked dead and garbage collected.

Edit: Based on the discussion, I add one detail to the implementation. A goroutine marked dead would be collected in a way that’s identical to calling runtime.Goexit inside that goroutine (https://golang.org/pkg/runtime/#Goexit).

Edit 2: Based on the further discussion, runtime.Goexit behavior is debatable and maybe not right.

What are the benefits?

Once such goroutine garbage collection is enabled, we can solve a large variety of new problems.

  1. Infinite generator functions. A generator is a function that returns a channel that sends a stream of values. Heavily used in functional languages, they are currently hardly possible in Go (we’d have to send them a stopping signal).
  2. Finite generator functions. We can write them in Go today; however, we have to drain the channel if we want to avoid a memory leak.
  3. Manager goroutines. A nice way to construct a concurrency-safe object in Go is to, instead of guarding it with a mutex, create a manager goroutine that ranges over a channel of commands for the object. The manager goroutine then executes these commands as they come in. This is hard to do in Go today, because if the object goes out of scope, the manager goroutine stays in memory forever.

All of the described problems are solvable. Manual memory management is also solvable, and this is just like that: inconvenient, error-prone, and preventing us from doing some advanced things that would be really useful.

I’d like to point out that the whole talk “Advanced Go Concurrency Patterns” would be pointless if Go collected blocked goroutines.

What do you guys think?

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 21
  • Comments: 71 (29 by maintainers)

Most upvoted comments

I’m not sure about the details,

The proposal process is really about the details, though.

but it seems logical that the deferred statements would run and then the goroutine would be silently killed.

What about locks it’s currently holding? Would deferred statements be run?

What about the cases where it actually is a bug to have the leak and the goroutine shouldn’t be silently swept under the rug?

Maybe an alternate proposal would be that the sender goroutine should have to explicitly select on the receiver going away:

func numbers() <-chan int {
	ch := make(chan int)
	go func() {
		for i := 0; ; i++ {
			select {
			case ch <- i:
			unreachable: // language change, though.
				// choose: panic, return silently, etc.
			}
		}
	}()
	return ch
}

But really, now that we have context in the standard library, the answer should be to start the generator goroutine with a context.

This makes the garbage collector more expensive.

Also, how do you “mark dead” a goroutine? A goroutine can’t be killed by another goroutine. Do you run deferred statements? Does the channel operation panic?

This would require a lot of design & implementation work for marginal benefit.

I don’t agree with calling Goexit. If a goroutine is blocked forever, we can just garbage collect it without changing any semantics (and maybe sample it or something, for the bug case). It would never have run again, so there isn’t an observable difference except for memory footprint. Why would it call Goexit, run defers, etc.? That seems like an unexpected behavior while waiting on a channel. And it feels like finalizers - we can only provide nebulous guarantees about whether we can detect it and how quickly we do so.

Holding a lock while doing a channel operation should be heavily frowned upon 😦

I’m not sure about the details, but it seems logical that the deferred statements would run and then the goroutine would be silently killed. It would not be killed by another goroutine, it would be killed by the garbage collector.

I personally don’t think the benefits are marginal. It would make a lot of the obvious code work correctly.

If you watch Rob Pike’s “Go Concurrency Patterns” talk, you see that his code examples would not be correct if goroutines were not garbage collected like this. Also, at one point in the talk, when asked “What happens to that goroutine?”, he responds with something like, “Don’t worry, it’ll be collected”.

I was missing this for a long time too and still don’t quite understand why memory is considered a resource that a programmer should obviously not need to care about, while goroutines are (cf. “What about the cases where it actually is a bug to have the leak and the goroutine shouldn’t be silently swept under the rug?” - I could just as easily ask “what about cases where a leaked pointer is actually a bug and the GC is sweeping that under the rug?”).

That being said, I believe a) this proposal is so far too hand-wavy. I think there are a lot of questions to be answered about the details of which goroutines can and can’t be collected. And as this feature is only useful once it’s codified in the spec (otherwise a program can’t rely on the collection, so any actual use of it would still be incorrect), these questions would need good, complete, but also concise answers. And b) that indeed, context seems like a good enough solution to at least all the cases I care about. In 99% of code, I need to plumb through a context anyhow and rely on it being cancelled upstack.

So it would be cool if this could happen, but the edge cases should definitely be thought about hard.

It’s possible to detect when a goroutine is blocked for more than some time by examining the stack (proof-of-concept https://github.com/egonelbre/antifreeze/blob/master/monitor.go). As such, one of the approaches could be that you can specify a time limit for how long a goroutine can be blocked.

With regards to the example, this is faster, shorter and doesn’t require an additional goroutine:

package main

import (
	"fmt"
	"sync/atomic"
)

type Numbers struct{ next int64 }

func (n *Numbers) Next() int { return int(atomic.AddInt64(&n.next, 1)) }

func main() {
	for i := 0; i < 10; i++ {
		var numbers Numbers
		for {
			num := numbers.Next()
			if num >= 1000 {
				break
			}
			fmt.Println(num)
		}
	}
}

In fact, it might be a good idea to panic if the garbage collector wants to collect an unreachable goroutine but finds that it has pending defers (or more conservatively, is holding a lock), since that’s always a bug.

@golang101 This argument doesn’t seem very reasonable because:

  1. A blocked unreachable goroutine will not stop the main goroutine from exiting.
  2. A blocked unreachable goroutine does nothing and will never do anything. Collecting it makes no difference except for memory consumption.

@egonelbre Using a timeout would be horrible, as it will either be too long or too short for some purposes. Especially accidentally timing out a goroutine which is still reachable would cause weird unpredictable bugs.

Integrating goroutine-collecting with the GC would make sure that all the collecting is completely “unobservable” without dumping stack traces. And as @DocMerlin mentions, it does significantly increase the expressivity of Go compared to languages like Erlang with somewhat similar concepts of communicating lightweight processes.

@randall77, you could imagine doing exactly what you propose in the GC, but instead of doing anything else to kill the goroutine, just ~sync.Once a warning about a leaked goroutine to stderr. The runtime doesn’t complain to stderr often, but this seems justified enough.

For example, I often use goroutines to express a sequence of numbers, such as an allocator for object IDs, to avoid intertwining the logic for incrementing counters, etc., with the main loop where the business logic happens.

Far more frequently, I use goroutines to manage an object that requires complex synchronization. Essentially, in the function that creates the object (“NewWhateverStruct(…)” etc), a goroutine will be spun up in the background that communicates with the methods through channels and does all the actual work. This can include objects that do not manage external resources; a large in-memory thread-safe database for example. Currently, users of such an object must call a “Close()” method or something to kill the goroutines running in the background, which is annoying and easy to mess up, especially when the object may be referenced many times throughout many goroutines.

I think this could work, but it would require folding goroutines into the marking phase of the GC. Instead of treating all goroutines as roots, start with only active goroutines. (Active here means not blocked on a channel. Goroutines in syscalls are considered active for this.) Any time we mark a channel, mark all the goroutines waiting on that channel as active, and schedule their stacks for scanning. At the end of mark, any goroutines not marked active can be collected. As a bonus, we never need to scan their stack (and we can’t, because then we would find a reference to the channel they are waiting on).

There are lots of tricky details:

  • Channels have direction - if an active goroutine has a send-only reference to a channel, it could never wake up a goroutine receive-waiting on that channel. The GC has no way to tell different channel references apart (send/receive/bidirectional is only a compile-time thing).
  • The GC needs to know that channels (or possibly something the channel references, like the goroutine struct) are special objects. It needs that information to know that when marking the channel, it needs to do extra work it doesn’t need to do for normal objects. We’d need to allocate channels on segregated pages or something, or maybe mark them similarly to how we handle finalizers.
  • Goroutines could be selecting on several channels at once. Any one of those could cause it to be active.

On a more meta level, this proposal would encourage using forever-blocked goroutines during the normal course of business. Right now they are considered a bug. The proposal suggests this is just a defense against memory leaks, but this is a slippery slope people are surely going to drive a Mack truck down. Lazy-evaluating Haskell interpreter, anyone? Go isn’t really a good fit for that, as a goroutine is a really heavy future implementation. But people will try anyway, and get frustrated with the performance.

If this is implemented (and I think it indeed would be useful), deferred functions should not run. Essentially this should not change the observable behavior of any program except for memory usage.

I do feel like this feature would be very helpful, as it allows goroutines essentially to be used as continuations (from languages like Scheme) that contain some state that could be resumed or thrown away implicitly. A goroutine in the background managing some variables local to that goroutine could replace structs in cases where you really do not want to expose any internal structure, or you are doing some complex synchronization.

One common question among new Go programmers is why there isn’t a way to force kill a goroutine, like Unix kill -9, but for a single goroutine. The answer is that it would be too hard to program in that world, in fear that your goroutine might be killed at any moment. Each goroutine shares state with other goroutines.

What happens to the locks the goroutine holds? The invariants they protect may have been temporarily violated (that’s what locks are for) and not yet reestablished. Releasing the lock will break the invariants permanently; not releasing the lock will deadlock the program.

What happens to wait groups waiting for that goroutine? If there’s a deferred wg.Done, should it run? If the waitgroup just wants to know the goroutine is no longer exiting, maybe that’s OK. But if the waitgroup has a semantic meaning like “all the work I kicked off is done”, then it’s probably not OK to report that the goroutine is done when in fact it’s not really done, just killed.

What happens to the other goroutines that goroutine is expected to communicate with? Maybe the goroutine was about to send a result on a channel. The killer can possibly send the result on the killed goroutine’s behalf, but how can the killer know whether the goroutine completed its own send or not before being killed?

For all these reasons and more, Go requires that if you want a goroutine to stop executing, you communicate that fact to the goroutine and let it stop gracefully.

Note that even in Unix, where processes don’t share memory, you still end up with problems like in a pipeline p1 | p2 when p2 is killed and p1 keeps trying to write to it. At least in that case the write system call has a way to report errors (in contrast to operations like channel sends or mutex acquisitions), but all the complexity around SIGPIPE exists because too many programs still didn’t handle errors correctly in that limited context.

The proposal in this issue amounts to “have the GC kill permanently blocked goroutines”. But that raises all the same questions, and it’s a mistake for all the same reasons.

More fundamentally, the GC’s job is to provide the illusion of infinite memory by reclaiming and reusing memory in ways that do not break that illusion, so that the program behaves exactly as if it had infinite memory and never reused a byte. For all the reasons above, killing a goroutine would very likely break that illusion. If defers are run, or locks are released, now the program behaves differently. If the stack is reclaimed and that happens to cause finalizers to run, now the program behaves differently.

Perhaps worst of all, collecting blocked goroutines would mean that when your whole program deadlocks, there are no goroutines left! So instead of a very helpful snapshot of how all the goroutines got stuck, you get a print reporting a deadlock and no additional information. Deadlocks are today the best possible concurrency bug: when they happen, the program stops and sits there waiting for you to inspect it. Contrast that with race conditions, where the eventual crash happens maybe billions of instructions later and you have to find some way to reconstruct what might have gone wrong. If the GC discards information about deadlocked goroutines, even for partial deadlocks, this fantastic property of deadlocks - that they are easy to debug because everything is sitting right there waiting for you - goes out the window.

Debuggability is the same reason we don’t do tail call optimization: when something goes wrong we want to have the whole stack that identifies how we got to the code in question. The useful information discarded by tail call optimization is nothing compared to the useful information discarded by GC reclaiming blocked goroutines.

On top of all these problems, it’s actually very difficult in most cases to reliably identify goroutines that are blocked forever. So this optimization would very likely not fire often, making it a rare event. The last thing you want is for this super-subtle behavior that can make your program behave in mysterious ways only happen rarely.

I just don’t see collecting permanently blocked goroutines happening in any form. There is a vanishingly small intersection between the set of situations where you even identify the opportunity reliably and the set of situations where discarding the goroutines is safe and doesn’t harm debugging.

The GC could address both of these problems - correctness and debuggability - by reclaiming goroutines but being careful to keep around any state required so that it looks like they’re still there: don’t run defers, record all stack pointers to keep those objects and any finalizers reachable (now or in the future) from those objects live, record a text stack trace to show in the eventual program crash, and so on. But this is really just compression, not collection, since some information must still be preserved. Effort spent compressing leaked memory is probably wasted: better to make it easier for programmers to find and fix leaks instead.

We don’t know whether the channel will be read from later; the code only knows whether the channel should be closed. So exiting the goroutine explicitly (using context/cancel) or using a timeout is better, I think.

select {
case ch <- i:
case <-time.After(time.Second):
	close(ch)
	return // exit goroutine
}

If one happens to write incorrect code which creates deadlocks or goroutines which are “dead” for some reason, one cannot easily determine what went wrong since the evidence is now magically removed.

Weak pointers are easier: you have a pointer type that the GC explicitly ignores (except that it gets updated when an object gets moved, for systems with a moving GC, and it gets cleared when an object is freed.)

What we need for this is something different: given a goroutine G1 blocked on a channel, we need to run a GC from all roots except G1, and see whether we marked the channel. That is straightforward but too inefficient to actually use. I don’t know an efficient way to implement this. Perhaps there is one.

The GODEBUG flag wouldn’t break the idioms, it would just print “# of goroutines collected this cycle” as it already does for bytes etc. And you already can totally disable the GC using debugging options anyway.

Nobody is suggesting some weird heuristic to collect only goroutines that aren’t stuck due to bugs. It’s perfectly okay to collect ones that are stuck due to bugs, it only loses us some debugging info we can recover by simply disabling the GC. And a large amount of existing “buggy code” will simply be the correct way of doing things in the future.

The whole argument sounds suspiciously like “we shouldn’t use a GC because it hides bugs from Valgrind”.

True, but when finalizers run is already poorly defined and implementation specific. I wouldn’t imagine any “real code” breaking if goroutines are collected; indeed, right now infinitely blocked goroutines are a memory leak, so they are unlikely to occur in non-buggy code.

This proposal does cause finalizers to run that would never have otherwise.

Another reason to reject this proposal: often I deliberately want to block a goroutine forever, for example to keep the main goroutine from exiting. This proposal would make that impossible.

@Merovius There are other languages that support this feature, and “collecting infinitely blocking threads” is something that is already an established concept. For example, Racket does exactly this, with a well-defined specification:

A thread that has not terminated can be garbage collected (see Garbage Collection) if it is unreachable and suspended, or if it is unreachable and blocked on only unreachable events through functions such as semaphore-wait, semaphore-wait/enable-break, channel-put, channel-get, sync, sync/enable-break, or thread-wait. Beware, however, of a limitation on place-channel blocking; see the caveat in Places.

Racket has a much more complex system of channel-like constructs (“events”) than Go’s channel select: everything from file descriptors to channels to user-defined composite events can be put in a select equivalent. Yet that paragraph reasonably fully describes how the GC collects unreachable threads.

It is certainly not equivalent to solving the halting problem, and is similar in difficulty to memory GC.

This feature would be pretty useless and indeed “bug-hiding” if it were just used to fix up broken programs that leak goroutines. It is certainly possible to implement this as a strict guarantee with strict definitions of unreachability, and in fact that would be the only way I would support this proposal.

@bunsim I believe comparing cancelling contexts with using free is a false equivalence. The closest equivalence to memory management, I believe, would be having a pool allocator, passing that down-stack, and freeing it as a whole once the request is finished, precisely eliminating the need to call free. In general, your http/rpc/whatever server should call cancel for you at some point, eliminating the need for you to do it. Now, that doesn’t help with code that isn’t request oriented, but that’s why I emphasized the “I” in my post; personally, I just can’t think of any code I would write that a) would care about a goroutine leak because it’s long-running and b) isn’t some sort of server with context-scoped work.

In response to your definition of unreachable: waving your hands more vigorously isn’t really progress. There are selects to think about, read-only channels, write-only channels, sending channels over channels, changing the values of channel variables, interfaces… I’m not saying that there isn’t a good precise definition that covers all these cases. But I also wouldn’t be surprised if solving this problem precisely turned out to be equivalent to solving the halting problem. It’s just very hard to tell as long as the discussion stays this high-level.

It is still worthwhile to think about even partial heuristics that could be employed (they would probably be the first step anyway) without codifying them in the spec, just as a crutch to contain the effects of bugs. But as I said, any such heuristic couldn’t be relied upon, so for correctness you can’t assume it anyway, which makes it much less interesting as a feature for the average Go user, I think. At least for the next couple of releases, I, at least, am going to contain my excitement here.

@bunsim: not quite.

A goroutine is unreachable if it is waiting on a resource that is unreachable (in the GC sense) from any other goroutine stack.

… from any other reachable goroutine’s stack.

Which of course is a recursive definition. GC marking solves exactly this problem for objects.

@Merovius It seems like there’s a very simple way of defining “unreachable”:

  • A goroutine is unreachable if it is waiting on a resource that is unreachable (in the GC sense) from any other goroutine stack.

This would also allow some of the more common Erlang patterns to be useful in Go.

Okay, so your proposal is that a channel send or receive operation that can be dynamically proven to never complete turns into a call to runtime.Goexit (https://golang.org/pkg/runtime/#Goexit)?