
As I can't update my comment, here is some info from the Turbo Pascal 5.5 marketing brochure:

> Fast! Compiles 34 000 lines of code per minute

This was measured on an IBM PS/2 Model 60.

So let's put this into perspective: Turbo Pascal 5.5 was released in 1989.

The IBM PS/2 Model 60 is from 1987, with an 80286 running at 10 MHz, limited to 640 KB of conventional memory; with luck one could expand it up to 1 MB and use the HMA, as far as MS-DOS was concerned.
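A quick back-of-the-envelope from those two figures: 34 000 lines per minute is roughly 570 lines per second, so at 10 MHz the compiler had on the order of 17 000 to 18 000 CPU cycles to spend on each source line, including parsing, code generation and I/O.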

Now projecting this to 2025, there is no reason why compiled languages, using a limited set of optimizations like TP 5.5 did (their -O0), can't be flying through compilation, as shown by good examples like D and Delphi, to name two expressive languages with rich type systems.



Reminds me of this old series of posts on the Turbo Pascal compiler (have been shared a few times on HN in the past):

A Personal History of Compilation Speed (2 parts): https://prog21.dadgum.com/45.html

"Full rebuilds were about as fast as saying the name of each file in the project aloud. And zero link time. Again, this was on an 8MHz 8088."

Things That Turbo Pascal is Smaller Than: https://prog21.dadgum.com/116.html

Old versions of Turbo Pascal running in FreeDOS on the bare metal of a 21st century PC is how fast and responsive I wish all software could be, but never is. Press a key and before you have time to release it the operation you started has already completed.


Our Delphi codebase at work is 1.7 million lines of code, takes about 40 seconds on my not very spicy laptop to do a full release build.

That's with optimizations turned on, including automatic inlining, as well as a lot of generics and such jazz.


A problem is how people have started depending on the optimizations: "this tower of abstractions is fine, the optimizer will remove it all". The result is that some modern idioms run slow as molasses without optimization, and you can't really use -O0 at all.
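To make that concrete, here is a small C++ sketch of my own (not from this thread): the same summation written through a layer of iterators and a lambda, and as a plain loop. With -O2 the two typically compile to near-identical machine code; at -O0 the abstracted version makes a real call for every iterator operation and lambda invocation.

    // Illustrative only: an "abstraction tower" that is free with the
    // optimizer on, and genuinely costly at -O0.
    #include <cstdio>
    #include <numeric>
    #include <vector>

    // Abstracted version: iterators, std::accumulate and a lambda.
    long sum_abstracted(const std::vector<int>& v) {
        return std::accumulate(v.begin(), v.end(), 0L,
                               [](long acc, int x) { return acc + x; });
    }

    // Plain version: an index loop with nothing for the optimizer to strip away.
    long sum_plain(const std::vector<int>& v) {
        long acc = 0;
        for (std::size_t i = 0; i < v.size(); ++i) acc += v[i];
        return acc;
    }

    int main() {
        std::vector<int> v(1000000, 1);
        std::printf("%ld %ld\n", sum_abstracted(v), sum_plain(v));
        return 0;
    }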


Indeed. Also,

- Turbo Pascal was compiling at -O1, at best. For example, did it ever inline function calls?

- it's harder to generate halfway decent code for modern CPUs with deep pipelines, caches, and branch predictors than it was for the CPUs of the time.


> it's harder to generate halfway decent code for modern CPUs with deep pipelines

Shouldn't be the case for an O0 build.


Can you give an example? AFAICT monomorphization takes up the major portion of compile time, and it's not even a result of some complicated abstraction.
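For what "monomorphization" costs, a hedged C++ illustration (my example, not the commenter's): every distinct type a template is used with makes the compiler generate, type-check, and optimize a fresh copy of the function, so compile time scales with the number of instantiations even though the source contains one short definition.

    #include <cstdio>
    #include <string>
    #include <vector>

    // One small generic definition...
    template <typename T>
    T sum(const std::vector<T>& v) {
        T total{};
        for (const T& x : v) total += x;   // re-compiled for each element type
        return total;
    }

    int main() {
        // ...but three uses mean three separate instantiations the compiler
        // must build: sum<int>, sum<double> and sum<std::string>.
        std::printf("%d\n", sum(std::vector<int>{1, 2, 3}));
        std::printf("%f\n", sum(std::vector<double>{1.5, 2.5}));
        std::printf("%s\n", sum(std::vector<std::string>{"a", "b"}).c_str());
        return 0;
    }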


Turbo Pascal was an outlier in 1989 though. The funny thing is that I remember Turbo C++ being an outlier in the opposite direction.

In my computer science class (which used Turbo C++), people would try to get there early in order to get one of the two 486 machines, as the compilation times were a huge headache (and this was without STL, which was new at the time).


As someone that started C++ with Turbo C++ 1.0 for MS-DOS, I certainly don't remember having such a hard time on my 20 MHz 386 SX.


I recently saw an article here about someone improving the machine code generation time of an assembler; I idly noticed that the scale was the same as the number of instructions we had in the budget to compile whole lines of code (expressions and all) "back in the day". It was weird. Of course, we're fighting bandwidth laws, so if you looked at the wall clock time, the machine code generation time was very good in an absolute sense.


Modern compilers do an insane amount of work compilers didn't do just a decade or two ago, let alone 35 years ago.

But I somewhat agree that for an -O0 build the current compile times are not satisfactory at all.



