Friday, July 8, 2011

Reference-counting just isn't enough and actually is harmful

I recently came across a blog entry titled "Why Garbage Collection is not Necessary and actually Harmful".

I shared that opinion in my C++ days, when I thought no better language could be written. A few years later, I have drastically changed my opinion on the subject. I am not going to address all the points made against garbage collection, and instead will focus on one issue: the claim that reference-counting is a viable alternative.

Reference-counting is an automatic memory management scheme where the memory allocated to an object is freed when the object is no longer referred to. C++ makes it relatively easy to replace raw pointers with wrappers that act like pointers, except that reference counts in objects are automatically increased and decreased as wrappers are created and destroyed.
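For illustration, here is a minimal sketch of such a wrapper (RefPtr is a made-up name; in practice one would use boost::shared_ptr or std::shared_ptr):

#include <cstddef>
#include <utility>

// A minimal sketch of a reference-counting wrapper, for illustration only.
// Production code would use boost::shared_ptr / std::shared_ptr instead.
template<typename T>
class RefPtr {
public:
   explicit RefPtr(T* p = nullptr)
      : ptr(p), count(p ? new std::size_t(1) : nullptr) {}

   RefPtr(const RefPtr& other) : ptr(other.ptr), count(other.count) {
      if (count) ++*count;   // one more wrapper refers to the object
   }

   RefPtr& operator=(RefPtr other) {   // copy-and-swap
      std::swap(ptr, other.ptr);
      std::swap(count, other.count);
      return *this;   // 'other' dies here and decreases the old count
   }

   ~RefPtr() {
      if (count && --*count == 0) {   // last wrapper gone: free everything
         delete ptr;
         delete count;
      }
   }

   T& operator*() const { return *ptr; }
   T* operator->() const { return ptr; }

private:
   T* ptr;
   std::size_t* count;
};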

Reference-counting by itself is flawed because it does not account for cycles. The aim of automatic memory management is to deallocate memory when it is no longer reachable from the program; reference-counting deallocates memory when it is no longer referred to by other reference-counted objects. The two coincide only in the absence of cycles (and provided a number of other conditions are met, e.g. that all reference-counted objects are referenced through wrappers).
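This is easy to demonstrate with shared_ptr (the classes A and B are of course made up for the example):

#include <memory>

struct B;   // forward declaration

struct A {
   std::shared_ptr<B> b;
};

struct B {
   std::shared_ptr<A> a;
};

int main() {
   std::shared_ptr<A> a = std::make_shared<A>();
   std::shared_ptr<B> b = std::make_shared<B>();
   a->b = b;
   b->a = a;   // cycle: both counts are now 2
}  // a and b go out of scope: counts drop to 1 and 1, never to 0.
   // Both objects are unreachable from the program, yet never freed.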

Edaqa says:

simple reference counting also covers the vast majority of memory management needs. Sweeping collection has a distinct feature of being able to resolve cyclic dependencies. That is when A points to B and B points to A. Reference counting cannot manage this in the general case.

Obviously if you have such a feature you will make use of it. I don’t see however that cyclic dependencies are a significant obstacle in most programs. Such object relationships can normally be resolved, or significantly mitigated, by one of a number of other techniques.

I strongly object to the claim that cyclic dependencies are not a significant obstacle in most programs. It may be so in version 1.0 of a program, but even that is stretching it. Reference-counting based memory reclamation can introduce enough chaos into a software product that its early retirement may come sooner than expected.

Potential cyclic dependencies are hard to detect.

It might seem at first that it should be easy to detect cycles by looking at the class diagram. In practice, detecting loops involving more than three steps is, in my experience, tricky. Maybe not when you first design your classes, but certainly when you or someone else alters them later. A developer who does not have the entire class diagram at his disposal, or in his mind, simply cannot know whether an addition introduces a cycle.

Actual cyclic dependencies are harder to detect.

Cycles at the class level are not necessarily problematic. They are, moreover, very common. Consider for instance the usual definition of list nodes:

template<typename T>
struct Node {
   T data;          // payload
   Node<T>* next;   // next node, or null at the end of the list
};

Although it is in theory possible to create a cyclic list, this should never happen in any run of the program. How does one convince oneself that this is indeed the case? For lists the problem is solved through the use of a safe API, but more complex home-made types seldom offer one.
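To make the point concrete, here is a sketch of what such a safe API could look like (Stack is an illustrative name, not a standard type). Nodes are hidden, and the only way to grow the list is push_front, which always links a brand-new node in front of older ones, so no run of the program can make next point back to a newer node:

#include <memory>

// Sketch of a cycle-free-by-construction list. Node is private, and
// push_front is the only way to create links, so 'next' always points
// to a strictly older node: a cycle is impossible in any run.
template<typename T>
class Stack {
   struct Node {
      T data;
      std::shared_ptr<Node> next;
      Node(const T& d, std::shared_ptr<Node> n) : data(d), next(n) {}
   };
   std::shared_ptr<Node> head;

public:
   void push_front(const T& value) {
      head = std::make_shared<Node>(value, head);
   }
   void pop_front() {
      if (head) head = head->next;
   }
   const T* front() const {
      return head ? &head->data : nullptr;
   }
};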

There is no easy way to deal with cycles.

In order to avoid memory leaks, cycles must be broken one way or another. This involves manually decreasing the count of an object, or simply not counting one of the references, i.e. using a weak reference.

The problem then is which link in the cycle to choose. Obviously, the link must be part of the cycle. Finding a candidate can be difficult in itself, for the reasons mentioned above, and there is a further restriction: the candidate link must never be part of a cycle-free path in any run of your program, or you may end up deallocating objects that are still in use.
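In C++ the usual way of not counting a link is a weak reference (boost::weak_ptr at the time, std::weak_ptr today). Here is a sketch with a made-up parent/child pair, where the back edge is the one demoted. Note that this is safe only if no run of the program ever needs the child to outlive the parent, which is precisely the dynamic property that is hard to verify:

#include <memory>

struct Parent;

struct Child {
   std::weak_ptr<Parent> parent;   // back edge: does not keep the parent alive
};

struct Parent {
   std::shared_ptr<Child> child;   // owning link
};

int main() {
   std::shared_ptr<Parent> p = std::make_shared<Parent>();
   p->child = std::make_shared<Child>();
   p->child->parent = p;   // no cycle in the counted graph

   // Using the back edge requires checking that the parent still exists:
   if (std::shared_ptr<Parent> q = p->child->parent.lock()) {
      // q keeps the parent alive within this block
   }
}  // p's count reaches 0: Parent is freed, then Child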

Update (May 2012): Jon Harrop points out in the comments that there are languages that prevent creating cycles, and that there are algorithms to detect and collect cycles. In this post I was mostly focused on C++ and the traditional way of doing ref-counting, namely using shared_ptr.

Conclusion

The real problem with reference-counting and avoiding cycles is that there is no good way to convince oneself of the absence of bugs through static analysis or inspection of source code. Absence of cycles is a dynamic property of program runs, and is therefore difficult to analyse. Testing might help, but to my knowledge, few write and maintain test cases specifically for the purpose of validating reference counting.

What makes matters worse is that it is very easy to mistake reference-counting pointer wrappers for drop-in substitutes for classic pointers. Using the wrappers systematically often ends up introducing memory leaks; fixing those leaks introduces dangling pointers.

Mark-and-sweep is no silver bullet, but at least it's a bullet aimed in the right direction.

6 comments:

Jarle Stabell said...

Perhaps you or I should give this guy a hint that functional programming is where the fun is these days, since it seems he doesn't include languages which pretty much require a full GC among 'general purpose languages'? :)

I agree totally with you, but his blog post arguing against full GC is amongst the nicer I've read in that "category". Years ago, this kind of blog post tended to insinuate that those in favour of full GC were point-and-drool newbie programmers not mentally capable of doing manual memory management...

edA-qa mort-ora-y said...

I wrote the referenced article.

For a moment, let's pretend that I agree with the opinion that cyclic dependencies are hard to detect and difficult to resolve. The main point of my article isn't about cyclic dependencies, it's about GC. I like the fact that GC can fix our cyclic dependency problem for us; that is clearly one of its advantages. The other problems and limitations of GC are the focus of my article.

I also don't want to claim that reference counting is the ultimate form of memory management. In C++ we now have this pointer soup which, despite basically achieving what I want, is a royal pain to use. I'd like a cleaner solution. The problem so far is that every new language I look at with GC has simply substituted this mess with a serious limitation.

My article is intended to rally against this trend of GC being *the* solution. There are too many programming domains where the current GC approach just can't work correctly. If nobody can find a solution to the problems I've listed in my article, then many systems will be stuck with C and C++ indefinitely. That is really depressing, but as long as an alternative doesn't exist, what can we do?

Johann Deneux said...

@edA-qa mort-ora-y: The title of your article is too strong. It implies mark-and-sweep is never a solution.

Michael said...

The original article is bizarre to say the least, and goes against the common-sense idea that the programmer should be more concerned with the domain than with coding overhead - the cognitive load of not having GC is simply *not* worth it for almost all mainstream programs. Very few programs (frame-rate based games being one class) dislike pre-emption - but most code gets pre-empted all the time anyway, just not by GC but by multi-tasking. And we certainly can (and do) combine deterministic/stack-based methods with GC when the compiler recognizes these situations.

Jon Harrop said...

A few points:

1. You should stress that scope-based reference counting hangs on to heap-allocated blocks until the last reference to them falls out of scope, which can be considerably longer than necessary and can even be longer than they are reachable. Consequently, tracing collectors can and sometimes do collect sooner than scope-based reference counting. The author of the original article is perpetuating myths in this context.

2. Naive reference counting is very slow due to cache misses when bumping counts. This has been solved by deferring decrements but that reintroduces the disadvantage of non-deterministic deallocation.

3. Deterministic deallocation is often touted as an advantage of reference counting, so it is worth noting that it only applies to single-threaded programs. In multithreaded programs, counter bumps must be atomic and threads race to decrement to zero.

4. When you say "detecting loops involving more than 3 steps is in my experience tricky" you are not seeing the forest for the trees. Objectively, it is quite easy to design languages that can only ever produce unidirectional heaps by design, e.g. Erlang and Mathematica.

5. You say "In order to avoid memory leaks, cycles must be broken in one way or another" but there are actually cycle collection algorithms (e.g. trial deletion) for reference counting collectors.

Johann Deneux said...

@Jon: Thanks for your comments.

Regarding 1: How do ref-counting and GC differ in that respect?

You write: "can even be longer than they are reachable"

How can something referenced in a scope that hasn't been left yet be unreachable? Shouldn't it be reachable through the scope?

Regarding 4 and 5: I was thinking quite specifically about C++ at this point. There is nothing in the language to enforce that, so it has to be done manually, which I think is unlikely to work well in practice.

I have lived through a project that had exactly this problem. It was an expression-manipulation framework written in C++, and newly produced expressions resulting from a translation had references back to the original expressions (in non-obvious ways). That caused major leaks, which once detected and "fixed" turned into dangling refs.

I don't know about efficient cycle detection algorithms, but I'm willing to accept they exist :) That being said, I don't think shared_ptr and its boost precursors provided that.