Both Go and OCaml have really, really fast compilers. They did the work from the get-go, and now won't have to pay the price.

I can't see myself ever again working on a system with compile times over a minute or so (not counting prod builds).

I wish more projects would have their own "dev" compiler that doesn't do all the shit LLVM does, and only use LLVM for the final prod build.



They show a return to how fast compilers used to be, before C and C++ took over the zeitgeist of compilation times.

Eiffel, Common Lisp, Java and .NET (C#, F#, VB) are other examples where we can enjoy fast development loops.

By combining JIT and AOT workflows, you can get the best of both worlds.

I think the main reason this isn't as common is the effort that it takes to keep everything going.

I am in the same camp regarding build times, as I keep looking for my Delphi experience when not using it.


As I can't update my comment, here is some info from the Turbo Pascal 5.5 marketing brochure:

> Fast! Compiles 34 000 lines of code per minute

This was measured on an IBM PS/2 Model 60.

So let's put this into perspective: Turbo Pascal 5.5 was released in 1989.

The IBM PS/2 Model 60 is from 1987, with an 80286 running at 10 MHz, limited to 640 KB of RAM; with luck one would expand it up to 1 MB and use the HMA, as far as MS-DOS was concerned.

Now projecting this to 2025, there is no reason that compiled languages, when using a limited set of optimizations like TP 5.5 did at their -O0, can't be flying through compilation, as seen in good examples like D and Delphi, to name two expressive languages with rich type systems.


Reminds me of this old series of posts on the Turbo Pascal compiler (they have been shared a few times on HN in the past):

A Personal History of Compilation Speed (2 parts): https://prog21.dadgum.com/45.html

"Full rebuilds were about as fast as saying the name of each file in the project aloud. And zero link time. Again, this was on an 8MHz 8088."

Things That Turbo Pascal is Smaller Than: https://prog21.dadgum.com/116.html

Old versions of Turbo Pascal running in FreeDOS on the bare metal of a 21st-century PC are as fast and responsive as I wish all software could be, but never is. Press a key and, before you have time to release it, the operation you started has already completed.


Our Delphi codebase at work is 1.7 million lines of code, takes about 40 seconds on my not very spicy laptop to do a full release build.

That's with optimizations turned on, including automatic inlining, as well as a lot of generics and all that jazz.


A problem is how people have started depending on the optimizations: "This tower of abstractions is fine, the optimizer will remove it all." The result is that some modern idioms run slow as molasses without optimization, and you can't really use -O0 at all.
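
As a minimal sketch of the kind of idiom in question (Rust here, to pick one language where this bites; the function is made up for illustration):

    // A "tower of abstractions": iterator adapters plus closures.
    // With optimizations on this compiles to a tight loop; at
    // opt-level 0 every .filter/.map/.sum survives as real nested
    // function calls, so it can run orders of magnitude slower.
    fn sum_of_even_squares(xs: &[i64]) -> i64 {
        xs.iter()
            .filter(|&&x| x % 2 == 0)
            .map(|&x| x * x)
            .sum()
    }

    fn main() {
        let xs: Vec<i64> = (0..1_000_000).collect();
        println!("{}", sum_of_even_squares(&xs));
    }

The source is written the way the language encourages; only the optimizer makes it cheap.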


Indeed. Also,

- Turbo Pascal was compiling at -O1, at best. For example, did it ever inline function calls?

- it's harder to generate halfway decent code for modern CPUs with deep pipelines, caches, and branch predictors than it was for the CPUs of the time.


> it's harder to generate halfway decent code for modern CPUs with deep pipelines

Shouldn't be the case for an -O0 build.


Can you give an example? AFAICT monomorphization takes the major portion of time, and it's not even a result of some complicated abstraction.
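
To make that claim concrete, a hedged sketch of what monomorphization does (the names are hypothetical):

    // The compiler emits a separate copy of a generic function for
    // every concrete type it is used with; each copy is type-checked,
    // optimized, and code-generated independently, which is where
    // much of the compile time goes.
    fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
        let mut max = items[0];
        for &item in &items[1..] {
            if item > max {
                max = item;
            }
        }
        max
    }

    fn main() {
        // Two instantiations end up in the binary:
        // largest::<i32> and largest::<f64>.
        println!("{}", largest(&[1, 5, 3]));
        println!("{}", largest(&[1.0, 5.0, 3.0]));
    }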


Turbo Pascal was an outlier in 1989 though. The funny thing is that I remember Turbo C++ being an outlier in the opposite direction.

In my computer science class (which used Turbo C++), people would try to get there early in order to get one of the two 486 machines, as the compilation times were a huge headache (and this was without STL, which was new at the time).


As someone that started C++ with Turbo C++ 1.0 for MS-DOS, I certainly don't remember having such a hard time on my 20 MHz 386 SX.


I recently saw an article here about someone improving the machine code generation time of an assembler; I idly noticed that the scale was the same number of instructions we had in the budget to compile whole lines of code (expressions and all) "back in the day". It was weird. Of course, we're fighting bandwidth laws, so if you look at the wall clock time, the machine code generation time was very good in an absolute sense.


Modern compilers do an insane amount of work compilers didn't do just a decade or two ago, let alone 35 years ago.

But I somewhat agree that for -O0 the current times are not satisfactory at all.


> a return to how fast compilers used to be, before C and C++ took over the zeitgeist of compilation times

I wouldn't put them together. C compilation is not the fastest, but fast enough not to be a big problem. C++ is a completely different story: not only is it orders of magnitude slower (10x slower is probably not the limit), on some codebases the compiler needs a few GB of RAM (so you have to set -j below the number of CPU cores to avoid OOM).


Back in 1999 - 2003, when I was working on a product mixing Tcl and C, the builds took one hour per platform, across AIX, Solaris, HP-UX, Red Hat Linux, and Windows 2000/NT build servers.

C++ builds can be very slow versus plain old C, yes, assuming people make every mistake that can be made.

Like overusing templates, not using binary libraries across modules, not using binary caches for object files (ClearMake style, already available back in 2000), and not using incremental compilation and incremental linking.

To this day, my toy GTK+/Gtkmm application that I used for a C/C++ Users Journal article, and have since ported to Rust, compiles faster in C++ than in Rust on a clean build, exactly because I don't need to start from world genesis for all dependencies.


That’s not really an apples to apples comparison, is it?


I dunno why; the missing apple on Rust's side is not embracing binary libraries the way C and C++ do.

Granted, there are ways around it for similar capabilities; however, they aren't the default, and defaults matter.
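
For context, the non-default route looks roughly like this (a sketch of Cargo configuration, not anyone's actual setup; by default Cargo builds every dependency from source as an rlib):

    # Cargo.toml (sketch): opting a library into a shared-library
    # artifact has to be done explicitly per crate.
    [lib]
    crate-type = ["dylib"]   # or "cdylib" for a C-compatible ABI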


I talked a bit about this at the Rust All Hands back in May.

A lot of Rust packages that people use are set up more like header-only libraries. We're starting to see more large libraries that better fit the model of binary libraries, like Bevy and Gitoxide. I'm laying down a vague direction for something more binary-library-like (calling them opaque dependencies) as part of the `build-std` effort (allowing custom builds of the standard library), as that is special-cased as a binary library today.


You only build the world occasionally, like when cloning or starting a new project, or when upgrading your rustc version. It isn't your development loop.

I do think that dynamic libraries are needed for better plugin support, though.


> You only build the world occasionally

Unless a shared dependency gets updated, RUSTFLAGS changes, a different feature gets activated in a shared dependency, etc.

If Cargo had something like binary packages, they would be opaque to the rest of your project, making them less sensitive to change. It's also hard to share builds between projects because of this sensitivity to differences.
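
A hypothetical illustration of that sensitivity: the same dependency built with different feature sets yields different artifacts, so these two projects share nothing between their builds.

    # project-a/Cargo.toml
    [dependencies]
    serde = { version = "1", features = ["derive"] }

    # project-b/Cargo.toml
    [dependencies]
    serde = "1"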


Except for RUSTFLAGS changes (which aren’t triggered by external changes), those only update the affected dependencies.


One hour? How did you develop anything?


That was a clean build for the binary code used in the Tcl scripts, every time someone would sync their local development with latest, or switch code branches.

Plenty of the code was Tcl scripting, and when re-compiling C code, only the affected set of files would be re-compiled; everything else was kept around in object files and binary libraries, and if not affected only required re-linking.


That's probably why Tcl is in there: you use uncompiled scripting to orchestrate the native code, which is the part that takes hours to compile.


I have seen projects where Ninja instead of Make (both generated by CMake) is able to cleverly invoke the compiler such that the CPU is saturated and RAM isn't exhausted, where Make couldn't (it reached OOM).


I have first-hand experience of painfully slow C# compile times. Sprinkle in a few extra slow things like EDMX generated files (not C# itself, but part of the MS ecosystem) and it has no business being in a list of fast-compiling languages.


Painfully slow? On a modern machine a reasonably sized solution will compile almost instantly.


I can do the same with some C code I worked on at enterprise scale.

Let's apply the same rules then.


This.

The C# compiler is brutally slow and the language idioms encourage enormous amounts of boilerplate garbage, which slows builds even further.


As much as I love F#, I wouldn't add it to a list of fast compilers; although I really appreciate that it supports actual file-scoped optimizations and features for compile-time inlining.


I wouldn't even put C# in there - minute-long builds are nothing unusual - and my only experience with Java is Android, but that was really bad too.

They aren't C++ levels of bad, but they are slow enough to be distracting/flow-breaking. Something like Dart/Flutter, or even TS and frontend work with hot reload, is much leaner. Comparing to fully dynamic languages is kind of unfair in that regard.

I haven't tried Go yet, but from what I've read (and also seeing the language design philosophy) I suspect it's faster than C#/Java.


Go is slower than Java (Java's (JIT) optimising compiler and GCs are much more advanced than Go's).


That's runtime performance; I'm talking about compile times.


About the same, if you compile many Java files with a single run of the compiler.


Android used to have lightning-fast builds, even when accounting for Google's quirky tooling, R.java generation, and binary XML processing. After the introduction of the Gradle build system and Kotlin, Android build times have become the laughingstock of the entire programming world.

This, however, has nothing to do with Java: the Kotlin compiler is written in Kotlin, and Gradle is written in an unholy mix of Kotlin, Java, and Groovy (with the latter being especially notorious for being slow).


Right, the work I get paid to do is often C# and literally yesterday I was cursing a slow C# build. Why is it slow? Did one of the other engineers screw up something about the build, maybe use a feature that's known to take longer to compile, or whatever? I don't know or care.

This idea that it's all sunshine and lollipops for other languages is wrong.


The fun of tracking down slow autoconf builds with spaghetti Makefiles in C, which take a full hour to do a clean build.

Let's not mix build tools with compilers.


Android is an anti-pattern in the Java world, and having a build system based on a relatively slow scripting language hardly helps.

Many things that Google uses to sell Java (the language) over Kotlin also stem from how badly they approach the whole infrastructure.

Try using Java on Eclipse with compilation on save.


It never took more than a few seconds for me.


> Both Go and OCaml have really, really fast compilers. They did the work from the get-go, and now won't have to pay the price.

People tend to forget that LLVM was pretty much that for the C/C++ world. Clang was worlds ahead of GCC when first released (both in speed and in quality of error messages), and Clang was explicitly built from the ground up to take advantage of LLVM.


Yet OCaml is unusable. Focus on user experience before focusing on speed. Golang did both, tbh.


I can always see myself working wherever the money leads me. I see languages as tools, not as a "this, not that" or "ewww". Sure, I have my preferences, but they all keep a Rivian over my head and a nice Italian leather couch to lay my head on.



