As I can't update my comment, here is some info from the Turbo Pascal 5.5 marketing brochure:
> Fast! Compiles 34 000 lines of code per minute
This was measured on an IBM PS/2 Model 60.
So let's put this in perspective: Turbo Pascal 5.5 was released in 1989.
The IBM PS/2 Model 60 is from 1987: an 80286 running at 10 MHz, limited to 640 KB of RAM, which with luck one could expand to 1 MB and use the HMA, as far as MS-DOS was concerned. That works out to roughly 570 compiled lines per second on that hardware.
Now, projecting this to 2025, there is no reason compiled languages can't fly through compilation when using a limited set of optimizations, as TP 5.5 did, i.e. at their -O0. D and Delphi are good examples of expressive languages with rich type systems that manage exactly that.
Old versions of Turbo Pascal running in FreeDOS on the bare metal of a 21st century PC is how fast and responsive I wish all software could be, but never is. Press a key and before you have time to release it the operation you started has already completed.
A problem is that people have come to depend on the optimizations. "This tower of abstractions is fine, the optimizer will remove it all." The result is that some modern idioms run slow as molasses without optimization, so you can't really use -O0 at all.
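A minimal C++ sketch of the kind of idiom in question (C++20, function names mine): at -O2 both versions typically compile down to the same tight loop, but at -O0 the layered one pays a real function call for every iterator step and lambda invocation.

    #include <numeric>
    #include <ranges>
    #include <vector>

    // Direct loop: tolerable even in an unoptimized build.
    int sum_squares_plain(const std::vector<int>& v) {
        int total = 0;
        for (int x : v) total += x * x;
        return total;
    }

    // "Tower of abstractions": counts on the optimizer to flatten it.
    // At -O0, every ++it, *it and lambda call is a genuine function call.
    int sum_squares_layered(const std::vector<int>& v) {
        auto squared = v | std::views::transform([](int x) { return x * x; });
        return std::accumulate(squared.begin(), squared.end(), 0);
    }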
Turbo Pascal was an outlier in 1989 though. The funny thing is that I remember Turbo C++ being an outlier in the opposite direction.
In my computer science class (which used Turbo C++), people would try to get there early in order to get one of the two 486 machines, as the compilation times were a huge headache (and this was without STL, which was new at the time).
I recently saw an article here about someone improving an assembler's machine code generation time; I idly noticed that the scale matched the instruction budget we had for compiling whole lines of code (expressions and all) "back in the day". It was weird. Of course, we're fighting bandwidth laws, so in wall-clock terms the machine code generation time was very good in an absolute sense.
> be, before C and C++ took over the zeitgeist of compilation times.
I wouldn't put them together. C compilation is not the fastest, but it's fast enough not to be a big problem. C++ is a completely different story: not only is it orders of magnitude slower (10x slower is probably not the limit), on some codebases the compiler needs a few GB of RAM, so you have to set -j below the number of CPU cores to avoid OOM.
Back in 1999-2003, when I was working on a product mixing Tcl and C, the builds took one hour per platform, across AIX, Solaris, HP-UX, Red Hat Linux, and Windows 2000/NT build servers.
C++ builds can be very slow versus plain old C, yes, assuming people make every mistake that can be made.
Like overusing templates, not using binary libraries across modules, not using binary caches for object files (ClearMake-style, already available back in 2000), and not using incremental compilation and incremental linking.
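To make the template point concrete, here is a small sketch of the classic extern template trick (the Cell type and file split are invented): exactly one translation unit pays the instantiation cost, and every other includer stops re-instantiating the same template.

    // matrix.h -- a header included by many translation units.
    #include <vector>

    struct Cell { double value; };

    // Promise to includers: the instantiation lives in exactly one .cpp,
    // so the other N translation units stop re-instantiating it.
    extern template class std::vector<Cell>;

    // matrix.cpp -- the single translation unit that pays the cost.
    // #include "matrix.h"
    template class std::vector<Cell>;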
To this day, my toy GTK+/Gtkmm application, which I used for an article in The C/C++ Users Journal and have since ported to Rust, compiles faster in C++ than in Rust on a clean build, exactly because the C++ build doesn't need to start from world genesis and rebuild all dependencies.
I talked a bit about this at the Rust All Hands back in May.
A lot of Rust packages that people use are set up more like header-only libraries. We're starting to see more large libraries that better fit the model of binary libraries, like Bevy and Gitoxide. I'm laying down a vague direction for something more binary-library-like (calling them opaque dependencies) as part of the `build-std` effort (allowing custom builds of the standard library), as the standard library is special-cased as a binary library today.
You only build the world occasionally, like when cloning or starting a new project, or when upgrading your rustc version. It isn't your development loop.
I do think that dynamic libraries are needed for better plugin support, though.
Unless a shared dependency gets updated, RUSTFLAGS changes, a different feature gets activated in a shared dependency, etc.
If Cargo had something like binary packages, they would be opaque to the rest of your project, making them less sensitive to change.
It's also hard to share builds between projects because of that sensitivity to differences.
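For reference, the C++ analogue of such an opaque dependency is a stable header plus a prebuilt library (a sketch with invented names): dependents only parse the interface, so implementation changes mean a re-link, not a recompile.

    // geo.h -- the only file dependents ever parse. As long as this
    // interface is stable, edits inside the library don't invalidate
    // dependents' object files; they just re-link against libgeo.
    #pragma once

    namespace geo {
    // Great-circle distance in kilometres (illustrative API).
    double haversine_km(double lat1, double lon1, double lat2, double lon2);
    }

    // geo.cpp -- compiled once into libgeo.a / libgeo.so; changing this
    // body re-links dependents instead of re-compiling them.
    #include <cmath>

    namespace geo {
    namespace {
    constexpr double kDegToRad = 3.14159265358979323846 / 180.0;
    }

    double haversine_km(double lat1, double lon1, double lat2, double lon2) {
        const double dlat = (lat2 - lat1) * kDegToRad;
        const double dlon = (lon2 - lon1) * kDegToRad;
        const double a = std::sin(dlat / 2) * std::sin(dlat / 2) +
                         std::cos(lat1 * kDegToRad) * std::cos(lat2 * kDegToRad) *
                         std::sin(dlon / 2) * std::sin(dlon / 2);
        return 2 * 6371.0 * std::asin(std::sqrt(a));  // mean Earth radius
    }
    }  // namespace geo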
That was a clean build of the binary code used in Tcl scripts, every time someone synced their local development with latest or switched code branches.
Plenty of the code was Tcl scripting, and when re-compiling C code, only the affected set of files would be re-compiled; everything else was kept around in object files and binary libraries, and anything unaffected only required re-linking.
I have seen projects where ninja, instead of make (both generated by CMake), is able to cleverly schedule compiler invocations such that the CPU is saturated and RAM isn't exhausted, where make couldn't (it hit OOM).
I have first-hand experience of painfully slow C# compile times. Sprinkle in a few extra-slow things like EDMX-generated files (not C#, but part of the MS ecosystem) and it has no business being in a list of fast-compiling languages.
As much as I love F#, I wouldn't add it to a list of fast compilers; although I really appreciate that it supports actual file-scoped optimizations and features for compile-time inlining.
I wouldn't even put C# in there: minute-long builds are nothing unusual, and my only experience with Java is Android, but that was really bad too.
They aren't C++ levels of bad, but they are slow enough to be distracting and flow-breaking. Something like Dart/Flutter, or even TS and frontend work with hot reload, is much leaner. Comparing them to fully dynamic languages is kind of unfair in that regard.
I haven't tried Go yet, but from what I've read (and also from the language's design philosophy) I suspect it's faster than C#/Java.
Android used to have lightning-fast builds, even accounting for Google's quirky tooling, R.java generation, and binary XML processing. After the introduction of the Gradle build system and Kotlin, Android build times have become the laughingstock of the entire programming world.
This, however, has nothing to do with Java: the Kotlin compiler is written in Kotlin, and Gradle is written in an unholy mix of Kotlin, Java and Groovy (with the latter being especially notorious for being slow).
Right, the work I get paid to do is often C# and literally yesterday I was cursing a slow C# build. Why is it slow? Did one of the other engineers screw up something about the build, maybe use a feature that's known to take longer to compile, or whatever? I don't know or care.
This idea that it's all sunshine and lollipops for other languages is wrong.
> Both Go and OCaml have really, really fast compilers. They did the work from the get-go, and now won't have to pay the price.
People tend to forget that LLVM was pretty much that for the C/C++ world. Clang was worlds ahead of GCC when first released (both in speed and in quality of error messages), and Clang was explicitly built from the ground up to take advantage of LLVM.
I can always see myself working wherever the money leads me. I see languages as tools, not a "this, not that" or "ewww" thing. Sure, I have my preferences, but they all keep a Rivian over my head and a nice Italian leather couch to lay my head on.
I can't see myself ever again working on a system with compile times over a minute or so (prod builds not counting).
I wish more projects had their own "dev" compiler that skipped all the shit LLVM does, and only used LLVM for the final prod build. (Rust's experimental Cranelift backend for debug builds is a step in that direction.)