I went and learned Haskell because PG's "Blub Paradox" essay said that when people look up the power continuum at languages they don't understand, they merely think them "weird languages", and then he went and called Haskell a weird language in a comp.lang.lisp post.
He dismisses type-theory research as irrelevant because he doesn't think strong typing plays well with macros. This could probably be considered a jab at Haskell and similar languages.
I'm slowly getting to the point where I consider macros to be an admission of defeat on the part of the language designer. They are a Pandora's box leading to impenetrable, bespoke domain-specific languages. They also don't compose the way functions do, yet they usually masquerade as functions by using the same syntax.
By giving you more control over evaluation order, Haskell goes a long way toward giving you functions powerful enough to replace the macros you would've written in Lisp. That seems like the right approach to me.
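A trivial sketch of what I mean (the names are just illustrative): in Haskell a short-circuiting conditional can be an ordinary function, because the branches you pass in are only evaluated if they are actually used.

-- A user-defined conditional as a plain function. Because Haskell is
-- non-strict, only the branch that gets selected is ever evaluated,
-- so no macro is needed.
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

example :: Int
example = myIf (1 < 2) 42 (error "never evaluated")

In a strict Lisp you would reach for a macro (or explicit thunks) to get the same behaviour; here it falls out of the evaluation order.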
That's a good reflection. I used to think the same.
However, I am now truly convinced DSLs are the future of programming.
Alan Perlis said "Beware of the Turing tar pit in which everything is possible but nothing of interest is easy". With Turing completeness, proving things (formal methods) is really hard.
If you have custom DSLs with restricted and well-understood semantics, formal methods become tractable and practical. That's the opposite of the Turing tar pit Perlis warned us about.
I imagine in the future we will have languages similar to Racket, which allow creating DSLs, and lots of mature tooling to prove things on code written using said DSLs.
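To make that concrete with a toy sketch (my own illustrative names, not any real system): a DSL that deliberately leaves out recursion and unbounded loops has a total evaluator, so properties of its programs can be proved by plain structural induction.

-- A toy, deliberately non-Turing-complete expression DSL.
-- Evaluation always terminates, which is what makes properties
-- about its programs easy to state and prove.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving Show

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- Example property one could prove by structural induction:
-- eval (Add a b) == eval (Add b a)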
I suppose my comment came off as anti-DSL. That was not my intention. I am pro-DSL, I just don't think macros are the right way to build them. Functions are the universal building block of computing, why not use them?
Macros don't exist at runtime, functions do. You can't pass a macro to a function nor return one. You can't store a list of macros to apply to a value. Macros are nothing more than user-programmable syntactic sugar.
In a language like Haskell, you can do with plain functions most of the things that would have to be macros in Lisp.
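For instance (a small sketch, not tied to any particular library): functions are ordinary runtime values, so you can store them in a list, hand them to other functions and compose them, none of which you can do with a macro.

import Data.Char (toUpper)

-- Transformations as ordinary runtime values: they can be stored in a
-- list, passed around and composed, none of which works for macros.
transforms :: [String -> String]
transforms = [map toUpper, reverse, take 10]

applyAll :: [a -> a] -> a -> a
applyAll = foldr (.) id

-- applyAll transforms "hello world" == "LROW OLLEH"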
Not existing at runtime seems more like a performance consideration. The macro could very well have just deferred evaluation of its arguments, so you could manipulate the AST at runtime and return the code for immediate eval; that would provide the same syntactic sugar but with a runtime function (and it's probably how it works when an interpreter evaluates macros instead of a compiler).
But by moving that transformation to compile time, you get not only the power to create syntactic sugar for the programmer but also to better control what the compiler will generate (and cache it, so you'll be free from the asymptotic complexity of the runtime control flow). That's great for both AoT compilation, for which you can effectively remove parts of the algorithm that can be pre-computed from the output program, and JIT for which runtime and compile time can interact to more intelligently distribute the workload.
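To give a hedged Haskell-flavoured sketch of that idea (Template Haskell is Haskell's staged-macro mechanism; fibTable is just an illustrative name): the table below is computed while compiling, and only the resulting literal ends up in the output program.

{-# LANGUAGE TemplateHaskell #-}
module Precompute where

import Language.Haskell.TH.Syntax (Q, Exp, lift)

-- Runs at compile time; the generated code contains only the finished
-- list literal, not the algorithm that produced it.
fibTable :: Q Exp
fibTable = lift (take 20 fibs)
  where
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Used from another module as:  firstFibs = $(fibTable)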
You get that for free with a lazy language. You can write any sort of control flow mechanism to your heart's content and it's just a function in Haskell. People have written all manner of fancy coroutine systems, continuations, etc, without the need for macros.
> moving that transformation to compile time
Haskell gives you a great deal of control over this. Some of the best libraries make use of this functionality for things like stream fusion. Additionally, Haskell can lift constants to the top level so they're only evaluated once, on demand. This gives you free memoization without increasing code size (as you would if you precomputed a large constant).
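A small sketch of what that looks like (assuming GHC's usual treatment of top-level constants):

-- `squares` is a top-level constant (a CAF). GHC evaluates it at most
-- once, the first time it is demanded, and every later use shares the
-- already-computed result: memoization without any extra code.
squares :: [Integer]
squares = map (^ 2) [1 .. 1000000]

sumSquares :: Integer
sumSquares = sum squares   -- forces `squares` once; later uses reuse it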
I wasn't trying to argue that this was the only or even the best way of doing it, just why it's advantageous that you separate runtime and compile time functions, especially in languages with eager evaluation.
The advantage of macros is that they are simple to understand and use (for most of them you basically just have to understand tree data types, or just cons, especially for CL-style macros), simple to implement (since every (?) language already has an AST representation, and they are particularly powerful in Lisp for obvious reasons), and the language doesn't even really have to support them in its normal syntax (fully separating runtime programming syntax from compile-time programming syntax if it wants). And for that they give a lot of power by allowing easy language extensions, syntactic sugar and powerful optimizations.
Yet even Haskell has Template Haskell, and it is widely used (with the convenience syntax of Lens as an example). Haskellers seem to have conceded that, in spite of their language's power, multi-stage programming -- where higher-staged functions don't exist at runtime! -- has an established place in the ecosystem.
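For example, a typical use of the lens library's Template Haskell helpers looks like this (just a minimal sketch):

{-# LANGUAGE TemplateHaskell #-}
module Point where

import Control.Lens

data Point = Point { _x :: Double, _y :: Double }

-- A Template Haskell splice: generates the `x` and `y` lenses at
-- compile time instead of making you write the boilerplate by hand.
makeLenses ''Point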
In my opinion the only use case that is difficult for functions to replicate is meta-information about the code (line numbers for errors, source file names, etc.). Not saying that it should be a feature in every language, though.
But yeah, functions can pretty much describe DSLs; most of the time DSLs are written to build a declarative representation of some concept, and that can easily be achieved with factory-like functions.
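A small sketch of that pattern (the names are mine, not from any particular framework): plain functions build a declarative value, and a separate interpreter consumes it later.

-- Plain functions acting as a declarative "DSL": they only build a
-- data structure, which a separate function interprets later.
data Html = Text String | Element String [Html]

element :: String -> [Html] -> Html
element = Element

text :: String -> Html
text = Text

render :: Html -> String
render (Text s)       = s
render (Element t cs) = "<" ++ t ++ ">" ++ concatMap render cs ++ "</" ++ t ++ ">"

page :: Html
page = element "div" [element "p" [text "hello"]]

-- render page == "<div><p>hello</p></div>"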
I want a language that's essentially a "unified meta-DSL" that gives you different mini-languages for different purposes.
Today's languages don't scale up or down very well. For example, you can't write cache-optimized algorithms in Ruby and you can't write a sleek ORM in C.
Languages like Rust and C++ try to bridge the worlds by starting out being "near the metal" and then building high-level primitives on top, and that's a good start, but even with Rust's macro system — which does allow you to write some DSL-like stuff — you're always targeting Rust syntax.
Neither is ever going to let you operate in a region without type annotations, for example, even though it's more productive for the programmer when experimenting/sketching, prototyping, or in some contexts like shells or game scripting.
As an example, think about how much boilerplate there is around writing tests. Even though Go has a test framework built in, and tests can be nice and small, you still end up writing a lot of this:
func TestUppercase(t *testing.T) {
    v := Uppercase("hello")
    if v != "HELLO" {
        t.Errorf("expected HELLO, got %q", v)
    }
}
instead of something like:
test "Uppercase" {
Uppercase("hello") => "HELLO"
}
I write a lot of YAML-based table-driven tests that I wish were just actual code, because it is code.
I want something very similar. A language powerful enough to get almost everything done, with very few and very well-thought-through escape hatches in the form of DSLs. For example, Elm has a mechanism for writing shaders.
I can think of at least a few other DSLs that would be useful: some logic/query languages (SQL/Datalog/GraphQL) and maybe a highly mathematical visual language like gezira (vpri). It would also be very useful to be able to write things in a lower level language for performance optimization (where needed).
Language design is very hard. Harder than API design. Most people are not much good at it, which is why there are not many popular languages. Many people are good at designing languages for themselves, but that is much easier. On top of that, are you going to get time to document it? When the inevitable flaws are found, will you still be there to fix and extend it in a consistent way? Will there be Stack Overflow posts to help people work around it if not?
DSLs are not general reusable computing environments. The goal is to encode domain logic into the language in a way that simplifies common domain problems.
The problems may not look anything like the problems you can solve with a general purpose language. If they did, you wouldn't need a DSL.
Of course you need to understand your domain really really well to do this with any success.
There is a sliding scale between API design and DSL design, at least when we are talking about internal DSLs. At what point does an API become a DSL?
It is true that DSLs need to be documented and maintainable, but if you have the kind of problem where a DSL might be a solution, then not having a DSL would just express the inherent complexity in a different way.
To put it another way, a clear abstraction layer (component, API, framework, DSL etc.) needs to be well-designed, documented and maintainable, but this need is not avoided by having a system of the same complexity without such clear abstraction layers.
Btw, I disagree about the popularity of general-purpose languages. Some very popular languages (VB, JavaScript) are quite badly designed but are popular due to integration with a platform.
Monads enable DSLs just as well as macros do. The bind operator is nothing other than a standard interpreter interface.
In fact, monads have better affordances for domain-specific semantics, while macros usually afford specific syntax. That makes macro-based DSLs shallow and hard to understand compared to monadic ones.
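A minimal sketch of what I mean (illustrative names, no particular library): the DSL is just a data type with a Monad instance, and bind is literally the sequencing half of its interpreter.

-- A tiny "logging" DSL. (>>=) sequences domain operations; `run` is
-- the interpreter that gives them meaning.
data Cmd a = Done a | Say String (Cmd a)

instance Functor Cmd where
  fmap f (Done a)  = Done (f a)
  fmap f (Say s k) = Say s (fmap f k)

instance Applicative Cmd where
  pure = Done
  Done f  <*> c = fmap f c
  Say s k <*> c = Say s (k <*> c)

instance Monad Cmd where
  Done a  >>= f = f a
  Say s k >>= f = Say s (k >>= f)

say :: String -> Cmd ()
say s = Say s (Done ())

-- The interpreter: run the program and collect its output.
run :: Cmd a -> (a, [String])
run (Done a)  = (a, [])
run (Say s k) = let (a, ss) = run k in (a, s : ss)

greet :: Cmd ()
greet = do
  say "hello"
  say "world"

-- run greet == ((), ["hello", "world"])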
I agree that DSLs are the future for defining useful declarative code, and most languages/frameworks seem to be turning in this direction (React JSX, Flutter Widgets and the other hundred HTML-like frameworks, for example).
Also, DSLs can still be written without macros; a lot of frameworks use functions and builders to define a symbolic representation to be consumed later.
Any language that forces a particular programming paradigm is IMHO never going to be all that great. Java forced OOP on people in ways it shouldn't have. Functional languages force immutability in ways they shouldn't.
Any language, forcing concepts or not, will never be all that great. Just look at Perl and C++; in my opinion they have much worse language design than Java and Haskell.
Not being all that great is in the very nature of anything whose quality can be measured by humans. If measuring quality was or ever will be a thing.
I was getting my Master's in 2003 and Haskell was very hot amongst all the professors and students who were into functional programming.
I learned it because of group projects where we could pick the implementation language and that's what our group wanted.
At that time, Simon Peyton Jones was employed by Microsoft and, at least as I perceived it, was allowed to work on ghc full time because of the goodwill it generated amongst programmers for Microsoft to be the patron of Haskell.
Haskell use was strongly localized, with a heavy university bias. There were users outside academia, but not many, and I think it was around 2003-2005 that it became more common to use (I also suspect GHC was a significant factor in that).
Also, Microsoft is quite open in its support for more heterogeneous programming than a typical Unix environment, with significant use of "niche" programming languages in various places (including things as atypical as putting a custom Prolog implementation into Windows network configuration tools, just to run a Prolog program that handled said configuration).
With the advent of Learn You a Haskell, Real World Haskell, the Haskell Book, etc., I really feel it is a lot more popular now, but I don't have your experience to draw upon, so maybe it is just cognitive dissonance on my part.
I doubt Paul Graham was into the theoretical programming scene in that era. He might have been aware of it, but I doubt it was as production worthy as Common Lisp and thus probably not worthy of his time.
We had to use Haskell as part of our CS studies in university in 2006 and I believe it was already used there for some years. From my perception Haskell isn't more mainstream now than it was back then.