Are you sure it's an extinct art, though? LLVM is flourishing, interesting IRs like MLIR keep coming to life, many ML-adjacent projects build their own compilers (PyTorch, Mojo, tinygrad), major players like Intel, AMD, Nvidia, and Apple contribute to multiple different compilers, and projects integrate with one another at different levels of abstraction (PyTorch -> Triton -> CUDA). There is a lot of compilation going on from one language to another.
Not to mention the languages now in the mainstream that weren't that popular 10 years ago - think Rust, Zig, Go.
Do you distinguish between writing a compiler and writing an optimizing compiler, and if so, how is writing an optimizing compiler an extinct art?
Equality saturation, dominator graphs, chordal register allocation, hardware-software co-design, etc. - there are many new avenues of research for compilers, and these are just the ones off the top of my head that are relevant to my work. Most optimization work is R&D, and much of it is left unimplemented at scale; problems like phase ordering and IR validation are hard to solve in practice, even given ample resources and time.
> Modern Compiler Implementation by Andrew W. Appel
It comes in three flavors: C, ML (Meta Language), and Java.
https://www.cs.princeton.edu/~appel/modern/
Writing a compiler in Standard ML is as natural as writing a grammar and denotational semantics.
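To illustrate why ML-family languages feel so natural here, a minimal sketch (in OCaml, a close cousin of Standard ML; all names are illustrative, not taken from Appel's book): the AST mirrors the grammar, the evaluator mirrors a denotational semantics almost clause for clause, and a compiler to a toy stack machine falls out of the same pattern matching.

```ocaml
(* A toy expression language: one AST constructor per grammar production. *)
type expr =
  | Num of int
  | Add of expr * expr
  | Mul of expr * expr

(* Denotational-style semantics: one equation per constructor. *)
let rec eval = function
  | Num n -> n
  | Add (a, b) -> eval a + eval b
  | Mul (a, b) -> eval a * eval b

(* A one-pass compiler to a tiny stack machine. *)
type instr = Push of int | IAdd | IMul

let rec compile = function
  | Num n -> [Push n]
  | Add (a, b) -> compile a @ compile b @ [IAdd]
  | Mul (a, b) -> compile a @ compile b @ [IMul]

(* Interpreter for the stack machine, via a fold over the program. *)
let run prog =
  let step stack = function
    | Push n -> n :: stack
    | IAdd -> (match stack with b :: a :: r -> (a + b) :: r | _ -> failwith "underflow")
    | IMul -> (match stack with b :: a :: r -> (a * b) :: r | _ -> failwith "underflow")
  in
  match List.fold_left step [] prog with
  | [result] -> result
  | _ -> failwith "ill-formed program"

let () =
  let e = Add (Num 1, Mul (Num 2, Num 3)) in   (* 1 + 2 * 3 *)
  assert (eval e = 7);
  assert (run (compile e) = eval e);
  print_endline "ok"
```

The compiler-correctness check at the end (`run (compile e) = eval e`) is exactly the property a semantics-first treatment asks you to prove; here it is just a unit test.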
Compiler writing is becoming an extinct art.