Yeah. The "almost entire" applies to std.algorithm. How much allocation goes on elsewhere depends on what module you're talking about. Some definitely avoid it, whereas others definitely require it. That will be (and has been) changing though. The main thing is probably going to be making much heavier use of output ranges rather than automatically allocating arrays/strings. We'll get there though, because there's clearly a lot of interest in doing so, and there's no fundamental reason why we can't.
There have been papers on how GCs can be more efficient than manual memory management in some situations, so it's not always the case that RAII is better from a performance standpoint (though I think that it frequently is). But regardless, a big reason for the GC in D is memory safety. With a GC, you can make memory safety guarantees that cannot be made with malloc and free (e.g. you can guarantee that a pointer will never be invalid, which you can't do with manual memory management), and D is big on making memory safety guarantees. Also, some features (such as array slices and closures) work very well when you have a GC and become much more difficult to pull off without one, both because the compiler loses automatic memory management it can rely on and because some of them have no clear owner (e.g. the runtime owns the memory backing array slices rather than any particular array owning it).
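For anyone who hasn't run into them, here's a small sketch of the two features I mean, and where the GC's ownership comes in:

    // A closure that outlives its scope: the compiler moves the
    // captured variable into a GC-owned heap frame.
    int delegate() counter()
    {
        int n = 0;
        return () => ++n;   // 'n' survives because the GC owns the frame
    }

    // A slice into the middle of a GC array: no single array "owns"
    // this memory, so it can only be freed once all slices are gone.
    int[] middle(int[] a)
    {
        return a[1 .. $ - 1];
    }

    void main()
    {
        auto next = counter();
        assert(next() == 1 && next() == 2);
        assert(middle([10, 20, 30, 40]) == [20, 30]);
    }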
That being said, D does not require the GC. It's easier if you use it for at least some of the features, but D makes it very easy to use RAII and manual memory management, which also helps the GC work better, because it's much easier to avoid creating a lot of garbage for it to collect when you don't need to. A lot of stuff in D ends up on the stack rather than the heap, so D's GC ends up with a lot less work to do than the GC in languages like C# or Java. The current GC implementation does need some work (and effort is definitely being put forth in that area), but if you don't want to use the GC, you can minimize its use or even avoid it outright (though _completely_ avoiding it can be a bit annoying, since that means giving up the few language features that require it; that list is short, though).
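As a rough sketch of what that looks like (the Buffer type here is made up for illustration), you can wrap malloc/free in an RAII struct and keep whole call trees away from the collector with @nogc:

    import core.stdc.stdlib : free, malloc;

    // RAII over malloc/free: deterministic cleanup, no GC involved.
    struct Buffer
    {
        private ubyte* ptr;
        private size_t len;

        @nogc nothrow:

        this(size_t n)
        {
            ptr = cast(ubyte*) malloc(n);
            len = ptr is null ? 0 : n;
        }

        ~this() { free(ptr); }    // runs deterministically at scope exit

        @disable this(this);      // no copies, so ownership stays unique

        ubyte[] opSlice() { return ptr[0 .. len]; }
    }

    @nogc nothrow void fill()
    {
        auto buf = Buffer(64);    // the struct itself lives on the stack
        foreach (ref b; buf[])
            b = 0xAB;
    }                             // memory freed here; the GC never ran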
So, while D definitely uses the GC and promotes its use where appropriate, you have full control over memory just like you would in C++. And the few features that are hampered by avoiding the GC don't even exist in C++ in the first place.
> A lot of stuff in D ends up on the stack rather than the heap, and D's GC ends up having a lot less work to do than the GC does in languages like C# or Java
This is true; however, there's a compromise involved. Because D also allows manual memory management and unsafe memory access, the GC is not free to move stuff in memory at will ... which really means that garbage collectors like the ones available for Java (precise, generational, non-blocking, fairly predictable, and compacting) are very hard to develop for D, probably next to impossible. This is the price you pay for using a lower-level language, and it's not a price that will go away easily.
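To illustrate, here's the kind of code D permits that ties a compacting collector's hands:

    void stash()
    {
        auto p = new int;
        *p = 42;                           // GC-allocated
        size_t hidden = cast(size_t) p;    // now just an integer
        // A precise moving GC couldn't rewrite 'hidden' when it
        // relocates the int, and a conservative GC has to treat any
        // word that looks like a pointer as one, pinning the object
        // instead of moving it.
        auto q = cast(int*) hidden;
        assert(*q == 42);
    }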
I've been using Scala a lot for the past two years to build high-traffic web services, and while Scala is pretty wasteful in terms of allocating short-lived objects, I've been surprised at how well the JVM handles it.
In terms of throughput, for example, Java GCs are much better than manual memory management. Allocating memory usually involves just incrementing a pointer, so it's basically as cheap as stack allocation. Deallocating short-lived objects is also fairly cheap, since it happens in bulk, so the amortized cost is pretty similar to deallocating stuff on the stack (!!!). The JVM can do some pretty neat things too: if escape analysis shows that certain references never leave their local context, it can decide to allocate those objects straight on the stack.
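To make the cost model concrete, a GC nursery is essentially a bump allocator. A toy sketch (names made up for illustration):

    // Roughly what a young-generation allocator does.
    struct BumpRegion
    {
        ubyte[] store;
        size_t top;

        void[] allocate(size_t n)
        {
            if (top + n > store.length)
                return null;   // a real runtime would trigger a minor GC here
            auto p = store[top .. top + n];
            top += n;          // the entire cost of an allocation
            return p;
        }

        void releaseAll()
        {
            top = 0;           // reclaims every dead short-lived object at once
        }
    }

    unittest
    {
        auto region = BumpRegion(new ubyte[1024]);
        assert(region.allocate(16) !is null);
        assert(region.allocate(32) !is null);
        region.releaseAll();   // amortized cost per object: nearly zero
    }

(A real minor collection evacuates the survivors before resetting, but dead short-lived objects really are reclaimed in bulk like this.)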
What really sucks about garbage collection is the unpredictability. Java's CMS, for example, awesome as it is, still stops the world from time to time, and when it does, you have no real control over how long it holds the process hostage. The new G1 in JDK7 is much better, and if you want the state of the art in near-real-time GCs, you can buy into Azul's pauseless GC. But they still suck for certain apps. Allocating objects on the heap also means you have to pay the price of boxing/unboxing the references involved. That sucks too.
On the other hand, by having a good GC at your disposal, it's much easier to build multi-threaded architectures. In C++, for example, it's so freaking painful to deal with non-blocking concurrent algorithms, or really, multi-threading of any kind.
> Because D also allows for manual memory management and unsafe memory access, it means that the GC is not free to move stuff in memory at will
Since I wrote a moving GC for Java years ago, and hence know how they work, I set the D semantics so it allows a moving GC. It's just that nobody has written one for D.
Having D used in production in a big company like Facebook is a big deal. No, it doesn't mean that D has taken over the programming world or anything like that, but it shows that the language is maturing and that real companies are now willing to use it in the real world - and it's a pretty major company at that.
So, this is just one step forward, but it's a big one and exciting to the folks who are fans of D - especially those who have put a lot of time and effort into it.
I think that a lot of confusion has been caused by Go using the term "systems language," because (as I understand it) Go doesn't mean it in the same way that C++ and D do. The Go folks seem to be thinking systems in the sense of large networks of computers and the like (the kind of stuff that Google typically does), whereas the C++ and D folks are thinking systems in terms of stuff like operating systems. What Go is trying to do does not necessarily require low-level primitives (though it can benefit from them), whereas what C++ and D are trying to do does require such primitives.
The problem is that deciding they can use "systems language" (a term they've since dropped) because "system" means "a set of connected things or parts forming a complex whole" results in every language falling under "systems language."
Others had been using it to distinguish languages suitable for writing operating systems and drivers for years before Go introduced this confusion.
When I hear "systems programmer," I think computer-to-computer. Therefore, when I hear "systems language," I think computer-to-computer, even absent Go's use of the term. I would not be disappointed if a systems language were not appropriate for creating operating systems.