> The unifying aspect of new languages such as Rust, Nim, and Gleam is that they were designed from the beginning to be beyond paradigms.
I really don’t think this is correct at all. Rather, Rust/Nim/Gleam are first and foremost imperative languages. They may have some functional and Lispy features thrown in, but that doesn’t change the fact that programs in those languages involve writing statements to be executed one after another — the defining aspect of imperative languages.
If you really wanted to, I guess you could define this particular combination of ‘imperative+functional+macros’ as its own new paradigm. These languages are consistent enough with their design that that might actually make sense. But it’s certainly not ‘post-paradigm’ in any meaningful way.
Agreeing and amplifying: I believe multi-paradigm is a useful term, but you can always find a prioritization in the paradigms. Your language is going to privilege either mutable or immutable data. It can support both, but one is going to be considered the default. Even if the language itself doesn't, the standard library and the resulting influence it has on the 3rd party libraries will result in a preference. Your language will privilege one side or the other of the expression problem. Your language will privilege statements or functions. Your language will privilege static or dynamic types. Your language will prioritize compile time things or run time things. And so on for quite a few things. Even when someone finds an interesting way to split the difference in one of those categories, that does not defy this categorization, it adds a new one, a new way of prioritizing. But that new way still won't be a completely even 50% split.
There are many multiparadigm languages. They are even the norm now. But being multiparadigm doesn't mean they all support all paradigms equally. Programming X in Y is a problem for almost all combinations of X and Y for X != Y, because the language will always have a "grain" to it. Thus, it is still meaningful to argue about which paradigm preference is suitable to which tasks.
Even two of the most similar languages there are, at least in terms of language spec, Python and Ruby, demonstrate significant differences in the orientation of their library ecosystems and in some of their philosophies.
Mostly agree but Common Lisp goes out of its way to not prioritize paradigms.
Immutable or mutable? Depends how you write it. Lists don't mutate until you choose to.
Statements or functions? Both I guess. Loop is declarative, many things are imperative, some things are functional.
Dynamic or static typing? Depends whether you bothered to give the compiler type hints or not.
Compile time or runtime? CL is AOT by default but you can eval-when your way into running code whenever you want. Readtime too
To be honest one of the biggest problems with CL is the multiparadigmatic design - you end up with many styles of code, all of which are valid and none of which anyone can agree on.
I agree with all of this. But I’ll point out that ‘the paradigms can be prioritised’ wasn’t intended to be my main point there. I wanted rather to refute the argument that Rust etc. aren’t ‘post-paradigm’ at all: on the contrary, they fit in quite well with existing paradigms.
> Even two of the most similar languages there are, at least in terms of language spec, Python and Ruby, demonstrate significant differences in the orientation of their library ecosystems and in some of their philosophies.
Hmm… I feel you can get a lot more similar than that. Python and Ruby are actually rather different to my mind. Rather, you could take OCaml vs SML, or Scheme vs Racket. The differences are still there, but it’s much more subtle.
To add to my parent comment, I think there’s another point to be made here: ‘post-paradigm’ is, to some extent, a contradiction in terms. You can’t build a language without its basic structure implying some paradigm (or possibly more than one).
There’s a perspective I find useful here. I tend to think of different ‘programming paradigms’ as different approaches to solving problems. In imperative languages, you solve problems by modifying values in sequence until you get to the answer; in functional languages, you solve problems by building up a function which gives you back the answer; and so on.
The key thing here is, you do need to give yourself some way of solving problems. That translates directly to the paradigm which your programming language adopts. If you give yourself more than one way of solving problems, then the language becomes multi-paradigm. In rare cases, you might even end up inventing a totally novel problem-solving approach, and hence a new paradigm… but even that’s not ‘post-paradigm’, it’s just another paradigm.
I don’t disagree on any of your technical points. But I also think for practical purposes you’re missing the forest here. I agree with the sentiment of the article - I think the big trend in general purpose PLs is a blend of multiple classical paradigms. Perhaps we’re moving the goalpost and paradigms need to be rearranged - but that is intrinsically interesting - it’s literally the continents of knowledge drifting slowly into new configurations.
Single-paradigm languages like Prolog, CSS or SQL keep their restrictions not for lack of use-cases, but because the benefit of keeping the complex execution engines away from the end-user exceeds the minor wins in expressiveness.
I don’t think it’s a coincidence that the declarative languages are in this category. They are higher level, and opening up low-level customizations is really tricky: for instance, if you put imperative code inside your CSS, it needs complex “re-evaluation rules”, which results in a dilemma: either give full control to the programmer, which imposes specific execution-engine designs and complex API surfaces – or re-evaluate too often, which risks killing memoization and perf (cache invalidation). It could be even worse if the code has side-effects or dependency cycles.
On the contrary, imperative low level languages like Rust can easily come along and say things like: “this is not only a function, but a side-effect free function”. “This is not just a reference, but an immutable reference”. Then you can cleverly leverage those traits in your “execution engine” ie the compiler, to deliver low-level perf. There are even people who describe rust as a high-level language for these reasons, which is a bit provocative to me but in all honesty not completely outrageous.
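For illustration, a minimal Rust sketch of those guarantees (invented function names; Rust has no literal “side-effect free” annotation, so `const fn` stands in as the nearest approximation):

```rust
// Hedged sketch: encoding guarantees in types so the compiler can exploit them.

// A `const fn` is restricted enough that the compiler can evaluate it
// entirely at compile time (the closest analogue to "side-effect free").
const fn square(x: i32) -> i32 {
    x * x
}

// `&[i32]` is a shared, immutable reference: while it exists, no alias may
// mutate the slice, a guarantee the optimizer is free to rely on.
fn sum(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    const NINE: i32 = square(3); // forced compile-time evaluation
    println!("{} {}", NINE, sum(&[1, 2, 3]));
}
```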
An alternative take on the last 10-15 years:
- General purpose PLs typically have an imperative base, while integrating multiple classical paradigms:
- Only a few aspects of OOP are added to modern PLs, where inheritance has largely been superseded by simpler composition
- Features from FP have surged in popularity, being integrated and even retro-fitted into general purpose PLs, providing both perf- and DX improvements
- Structured meta-programming and/or codegen has been a strong focus for compiled languages, acknowledging that it’s preferable to limit the complexity of the core language at the expense of separate pre-compile phases
I don't think Prolog is a single paradigm language. It just has one dominant paradigm: Logic Programming. But in actual Prolog systems you'll find procedural programming, constraint programming, object-oriented programming, meta programming (programming on the language level).
One example is Visual Prolog, which was once Borland's Turbo Prolog. Quote: "It combines the best features of logical, functional, and object-oriented programming paradigms, offering a powerful, type-safe, high-level language. Visual Prolog is well-suited for developing applications for Microsoft Windows 32/64 platforms and supports advanced client-server and three-tier solutions."
There is a comparison of Prolog implementations on Wikipedia:
I haven't researched this, but there should be zillions of language extensions like this to Prolog. It is one of the symbolic AI languages, as such it has a long history of extensions.
> I don’t disagree on any of your technical points. But I also think for practical purposes you’re missing the forest here. I agree with the sentiment of the article - I think the big trend in general purpose PLs is a blend of multiple classical paradigms.
You’re not wrong. That may even be the sentiment of the article. But the article certainly doesn’t phrase it that way — it’s saying that modern programming languages are ‘beyond paradigms’. That’s clearly wrong, and that’s what I’m arguing against.
Fair enough! I agree with that and looks like I missed your main point.
It’s a bit naive and generally ahistorical to think we’ve “transcended paradigms”, whether within tech or outside. It’s similar to the bias of thinking that current year/western/majority perspectives are “enlightened” and free of bias. Usually just means we’re unable to see the bigger picture.
Gleam is one of the most thoroughly functional languages out there. There is no mutability, there are no statements, no loops. It has let bindings, function application, and conditionals through if/case, and that’s it. It has just tricked you because it has a very nice and friendly syntax.
It also has no macros so I don’t know what you mean in that last paragraph.
"Functional" has lost its meaning. Originally, functional programming was rather well-defined - now everything that has a `.map` is called functional. Gleam goes a longer way than, say, Python. But since it still has side-effects and lacks an effect system, it is very different from e.g. Haskell.
If Gleam is imperative by your logic, then Scheme and the whole ML family (SML, OCaml) would be imperative as well. That's a strange choice of semantics. Even Haskell has unsafePerformIO, which it might sadly need.
Imperative and (pure) functional are not opposites! They are actually orthogonal.
The opposite of imperative is declarative, and the opposite of (pure) functional is non (pure) functional.
Scheme and (to my limited knowledge) the ML family are in fact non (pure) functional. However, they share more functional aspects with pure functional languages than most other popular languages do.
> Even Haskell has unsafePerformIO, which it might sadly need.
Sure, it's almost never a black and white thing. But I think for the sake of the argument we can call Haskell a pure functional language, even if it is indeed possible to write non pure functional code in it. There are actual pure functional languages (like Idris) but I think for practical use in the discussion it's fine to call Haskell pure functional. Scheme and ML languages are very different from Haskell though, so I think they deserve to be called differently. They are however also different from, say, C. So to improve our communication I think it is worth to not put those 3 things into 2 boxes. We need 3 boxes here.
Disclaimer: I want to emphasize that none of any of that says that a language is inherentially better or worse! This is just terminology-talk.
It isn't clear what you're discussing. You mentioned that functional programming has lost its meaning and then went on to describe things as if your definition would exclude some of the most shining examples of functional programming.
Functional does not mean just pure functional. And a completely pure functional language would literally be worthless, as it wouldn't be able to do anything useful.
> then went on to describe things as if your defintion would exclude some of the most shining examples of functional programming
Maybe those examples are "most shining" from your perspective or definition, but not mine. Mind mentioning the specific languages?
That being said, to be precise, functional programming is a technique or style, it's not a property of a language anyhow. However, some languages make it very hard or even impossible to apply the style, either fully or partially. So practically speaking, that is how I classify a language as more or less functional.
> Functional does not mean just pure functional.
Yes, that's exactly what I said in my first sentence no?
Originally "functional programming" had the meaning of what we nowadays often call "pure functional programming" though. Language and definitions change over time. I was ranting a bit that the term "functional" is nowadays so unclear that there is little benefit in knowing that a language is "functional".
> And a purely pure functional language would literally be worthless as it wouldn't be able to do anything useful.
This is wrong. You haven't understood what functional programming is. In a nutshell, it means that you are not directly executing effects; rather, you build up a data structure that describes the effect execution. You then pass around and modify that data structure up to the point where you return it as the last thing your program does in the main function (or whatever your language/runtime calls that). From that point on, this data structure is processed and the effects (like writing to a file, showing something on the screen) are (most likely) executed by the runtime.
In other words, there are no restrictions on what you can do with a pure functional language. You can try it out with Idris, which is a 100% pure functional language without any escape hatches. And still you can make it do all kinds of effectful things like processing files, sending emails, ...
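A rough Rust sketch of that idea, with invented names (`Effect`, `pure_main`, `run`); real pure languages like Idris or Haskell do this with richer IO types, but the shape is the same: pure code returns a description, and only the runtime executes it.

```rust
// Effects as data: the "pure" part of the program never performs I/O,
// it only builds a description of the I/O to perform.
#[derive(Debug, PartialEq)]
enum Effect {
    Print(String),
    WriteFile { path: String, contents: String },
}

// A pure "main": no side effects happen here.
fn pure_main() -> Vec<Effect> {
    vec![
        Effect::Print("hello".to_string()),
        Effect::WriteFile {
            path: "out.txt".to_string(),
            contents: "data".to_string(),
        },
    ]
}

// The runtime walks the description and performs the real effects.
fn run(effects: Vec<Effect>) {
    for e in effects {
        match e {
            Effect::Print(s) => println!("{s}"),
            Effect::WriteFile { path, contents } => {
                // A real runtime would call std::fs::write(path, contents).
                println!("(would write {} bytes to {})", contents.len(), path);
            }
        }
    }
}

fn main() {
    run(pure_main());
}
```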
I guess I was confused, and still am, about what the point is here.
> You haven't understand what functional programming is.
I almost exclusively use functional languages, so I'm not sure about that.
> I was ranting a bit that the term "functional" is nowadays so unclear that there is little benefit in knowing that a language is "functional".
I'm actually not sure that is a problem. Any examples? And it's quite common to describe something as "functional-first" if it has a functional core but allows other paradigms such as imperative or OOP, such as OCaml.
And a lot of this is based upon the semantics and definitions of the words like "pure", "functional", etc. which are more like spectrums than binary.
> definitions of the words like "pure", "functional", etc. which are more like spectrums than binary.
That's exactly it. "functional programming" originally was not a spectrum. It was well defined. Functions meant "pure functions" or (which is the same) "mathematical functions" in that context. And they still do when people use the term "functional programming" in the original meaning. Though nowadays, I rather use "pure functional programming" since "functional programming" was taken over. ;-)
So on the other hand, can you provide a precise and meaningful definition for "functional" from your own perspective?
I suspect it is possible to make the same argument for the contrary position. Once we're talking about "some functional and Lispy features thrown in", we must start to question what a paradigm is.
If I write a program, fundamentally I have a blob in my head that I'm reifying into formal logic. It doesn't make sense to talk about the blob as having a paradigm; but I'm not sure that it is useful to talk about the reification as having a "paradigm" either. The appropriate techniques to be used are on a functional-imperative spectrum based on how stateful the problem is.
This whole idea of programming paradigms, I think, is a mis-take of a more fundamental question: how to classify programming problems. We're looking at the shadow in Plato's cave and getting nonsense because we expect the problem to take on aspects of the programming language, which is not a strategy for achieving success. We should be classifying problems based on state, not programming languages based on paradigms.
Most of the experts have figured that out: they pick their programming language based on the problem space they want to deal in. But the paradigm paradigm doesn't help in making those decisions, so its utility is low.
I disagree. In Rust, traits are used all over for dynamic dispatch, the defining feature of object-oriented programming. Moreover, all three languages support structural pattern matching (borrowed from functional programming) as a core control-structure.
Certainly all three languages can be used in a simple, imperative style. But that's not the only paradigm that can be used unlike in C or early versions of Python. Many programs in these languages look significantly different than just statements and procedures.
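For instance, a structural pattern match over an ADT in Rust reads quite differently from plain statement sequencing (hypothetical `Shape` example):

```rust
// Structural pattern matching as a core control structure over an ADT.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // The match destructures each variant and binds its fields.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    println!("{}", area(&Shape::Rect { w: 3.0, h: 4.0 }));
}
```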
> In Rust, traits are used all over for dynamic dispatch, the defining feature of object-oriented programming.
In my experience, Rust traits are used much more for static dispatch `fn foo<T: Trait>(T)` than dynamic dispatch `fn foo(Box<dyn Trait>)`. This, together with its ADTs and lack of inheritance, gives it a very different feel from most "object oriented" languages.
Also, what counts as a "defining feature" depends greatly on who's defining it. Though dynamic dispatch is certainly up there on most lists.
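A small sketch of the two dispatch styles mentioned above (invented `Greet` trait for illustration):

```rust
trait Greet {
    fn greet(&self) -> String;
}

struct English;

impl Greet for English {
    fn greet(&self) -> String {
        "hello".to_string()
    }
}

// Static dispatch: monomorphized into a separate copy per concrete type
// at compile time, so the call is direct.
fn greet_static<T: Greet>(g: &T) -> String {
    g.greet()
}

// Dynamic dispatch: the call goes through a vtable at run time.
fn greet_dynamic(g: &dyn Greet) -> String {
    g.greet()
}

fn main() {
    let e = English;
    println!("{} {}", greet_static(&e), greet_dynamic(&e));
}
```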
> Many programs in these languages look significantly different than just statements and procedures.
Like I said, they’ve certainly adopted features from other paradigms… but the basic, underlying structure of programs in these languages is still statements, sequenced one after another. It’s not like Haskell or Scheme where nearly every operation is ultimately done through function calls. And it’s certainly not ‘post-paradigm’ as the article claims.
> Moreover, all three languages support structural pattern matching (borrowed from functional programming) as a core control-structure.
As for this: people often associate functional programming with pattern-matching, but I’ve never really understood why. The only relationship is that it originated in the ML language family, which also happens to be functional. There’s many functional languages which lack pattern-matching (e.g. most Lisps), and there’s many non-functional languages which have it (Java, C#, Python).
I would think pattern matching as coming from string processing (https://en.wikipedia.org/wiki/COMIT , SNOBOL, ...) and pattern-directed programming coming from rule-based & logic programming (PLANNER, Prolog, ...).
Bro, Java and Python only just got it. Lisps don't include it because it's just another lib/extension. Even Emacs has one (see the recent pcase LWN article)
Yeah, I'm having trouble seeing how Rust/Nim/Gleam can be written in a logic programming style (like Prolog or Datalog).
At minimum, that means there are _two_ paradigms: logic programming and "beyond paradigms"....which sorta means that we're not really beyond paradigms now, are we?
I think the reality is that certain previously-niche paradigms became cool again, and folks started grabbing off some of the most-visible pieces of those paradigms (pattern-matching, ADTs, map/reduce/filter, maybe even a pipeline operator) and adapting them to other paradigms.
The closest thing I've seen to truly multi-paradigm was a programming language now lost to time called Metamine, which I kept a clone of[1]. Here's some previous discussion[2]
In Metamine
a := b is a normal assignment
c = d+1 means that c will ALWAYS be equal to d+1
z = 10-time results in a countdown timer, z
That magical equals is declarative programming... something that I've only seen mixed with imperative functioning that one time.
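A crude way to approximate that magical equals in an ordinary imperative language is to store the rule rather than the value (invented example; this sketch only recomputes on demand, whereas a true dataflow system propagates changes automatically):

```rust
// "Permanent assignment" emulated as a function: reading c always
// reflects the current d, because c is the rule, not a stored value.
fn c(d: i32) -> i32 {
    d + 1 // c "is always" d + 1
}

fn main() {
    let mut d = 5;
    println!("{}", c(d)); // 6
    d = 41;
    println!("{}", c(d)); // 42, without re-assigning c
}
```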
IIRC, this was called "permanent assignment" in CPL, a name I very much like.
It's also pretty much the same as a dataflow constraint (see: Amulet, Garnet, Spreadsheets, ...)
In Objective-S, I use the syntax |= for the unidirectional variant of a dataflow constraint, =|= for bidirectional (ObjS uses := or ← for assignment, = for equality). So the above would be:
a := b.
a ← b.
c |= d+1.
z |= 10-time. (if 'time' were the current time in ObjS)
'Multi-paradigm' languages are quite old, like Lisp.
Got closures? Then you've got objects.
Can you write code that's executed sequentially? Then you've got imperative programming.
Can you pass a function as a parameter to another? That's pretty functional.
Having this in a language has been common for decades. But there are more paradigms, notably 'macro-oriented' programming and logic programming. Are these well supported in Rust and Nim? If not, then they aren't as "beyond paradigms" as Lisp.
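The closures-give-you-objects point above can be sketched in Rust too (hypothetical `make_counter`):

```rust
// A closure capturing mutable state behaves like a one-method object:
// the captured variable is the "field", the call is the "method".
fn make_counter() -> impl FnMut() -> u32 {
    let mut count = 0; // captured state
    move || {
        count += 1;
        count
    }
}

fn main() {
    let mut counter = make_counter();
    println!("{}", counter()); // 1
    println!("{}", counter()); // 2
}
```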
There's arguably several forms of object oriented languages too, besides what might be called Java/C++/Python style there's message passing Smalltalk style and prototype style as in JavaScript and its parent. Would probably need to support these styles too, to be "beyond paradigms".
Assembler could also be considered a programming paradigm, so we'd need to support that too.
And if logic programming is a paradigm, constraint programming likely is another. Glue or shell programming yet another.
I think a lot of the paradigm frame comes from programming still being relatively new (although that's becoming a less and less viable meme). We're still very "one true way" in how we talk about software design.
Personally, I'd love to see us talking more in terms of problems/solutions.
To give an example. State leads to complexity and bugs, and it can be minimized by preferring pure functions where possible. That's a good principle that can be learnt from without embracing everything functional and rejecting everything that involves objects.
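As a tiny invented illustration of that principle, here is the same total computed against mutable state and as a pure function:

```rust
// Stateful style: callers can observe (and corrupt) the running total.
struct Stats {
    total: i32,
}

impl Stats {
    fn add(&mut self, x: i32) {
        self.total += x;
    }
}

// Pure style: the result depends only on the input, nothing to corrupt.
fn total(xs: &[i32]) -> i32 {
    xs.iter().sum()
}

fn main() {
    let mut s = Stats { total: 0 };
    for x in [1, 2, 3] {
        s.add(x);
    }
    println!("{} {}", s.total, total(&[1, 2, 3]));
}
```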
One version would be to actually leave the expression as is and temporarily switch the Lisp reader to Infix.
I'm loading an infix reader macro into LispWorks, the code is roughly 30 years old:
CL-USER 1 > (ql:quickload "INFIX")
To load "infix":
Load 1 ASDF system:
infix
; Loading "infix"
;;; *************************************************************************
;;; Infix notation for Common Lisp.
;;; Version 1.3 28-JUN-96.
;;; Written by Mark Kantrowitz, CMU School of Computer Science.
;;; Copyright (c) 1993-95. All rights reserved.
;;; May be freely redistributed, provided this notice is left intact.
;;; This software is made available AS IS, without any warranty.
;;; *************************************************************************
("INFIX")
Now we can write Infix expressions:
CL-USER 2 > '#I( a + b * c )
(+ A (* B C))
Let's set the variables a, b, c
CL-USER 3 > (setf a 10 b 20 c 30)
30
The Infix expression reader macro at work:
CL-USER 4 > #I( a + b * c)
610
Inside the Infix macro Lisp parses a sublanguage of Infix expressions into Lisp s-expressions. Generally this would be possible with a normal macro, but the reader also changes the tokenizing of elements, so we can also write:
CL-USER 5 > #I(a+b*c)
610
In "normal" Lisp syntax a+b*c would be a single symbol. The infix reader parses it into five symbols and a list according to operator priorities.
A better example is the Oz programming language and the Mozart programming system (https://en.wikipedia.org/wiki/Oz_(programming_language)) explained in the book Concepts, Techniques, and Models of Computer Programming by Peter Van Roy and Seif Haridi.
Not necessarily. Oz was created as a poster child for multi-paradigm language design. The book shows this: each paradigm gets its own chapter, and Oz examples for programming in each are shown. This is similar to the curriculum structure the author mentions. In contrast, the article makes a case for programming without paradigm distinctions, mixing elements across styles.
Since Oz's Wikipedia page was linked: Wikipedia apparently has a page (https://en.wikipedia.org/wiki/Comparison_of_multi-paradigm_p...) comparing multi-paradigm languages (though it can be argued all languages nowadays are multi-paradigm). A summary table for some of those (partial support isn't considered):
Interesting ones: Wolfram natively supports the most of the basic paradigms (ones not in the others), CLisp is the only one supporting all the basic paradigms once libraries are counted, and Julia alongside CLisp supports the most of the extra paradigms (ones found in other languages). Moreover, CLisp alongside C++ show that library support can greatly extend the list (doubling it in those two). To compare with the article, Haskell supports more paradigms (even counting only native ones) than Rust. Sadly Nim and Gleam aren't in the list.
The original article did not actually make much sense to me since there can be no "Programming beyond Paradigms" by virtue of the fact that all languages allow solving problems in at least one "default" Paradigm (the computation model embodied by its abstract machine). Thus the correct term is "Multi Paradigm" and current day languages do this to varying levels of ease by providing appropriate syntactic features mapping to abstract machines. In that regard Mozart/Oz is the poster child since unlike other languages that is its stated goal.
The author states "I believe that programming paradigms are now best understood as a style of programming, rather than as an exclusive set of features." which is not correct. A "Style of Programming" is not a "Paradigm" unless that style embodies a specific computation/abstract machine model. The relevant syntactic features could be used in a mix-and-match manner to increase the design expression space. A good example is the template features in C++; originally designed as an alternative to macros to implement containers until some smart people figured out that it was much more powerful and could be used for programming in the "Functional Paradigm" (with some additional extensions).
My impression is that the Wolfram language is not particularly object-oriented and that it does not support OOP. Am I wrong? It has certain features that are enablers for libraries supporting OOP.
Not sure either. It could exist, but I don't know about it, and sadly Wolfram lacks a formal specification. After a bit of searching, though, I found there are a few[0] packages to enable object-oriented programming. So we can probably assume there isn't native support for it. Seems the Wikipedia page needs more thorough research.
This has been true of Common Lisp almost since the beginning. Supported functional, imperative and object oriented paradigms out of the box. And macros allow supporting almost any other programming style in the language as well. Probably any programming paradigm you can think of at some point was added to Common Lisp as a library through macros.
From the content of the article it is not clear if the author really knows about the topic of programming paradigms (as per undergraduate courses).
As other commenters are saying, in 2024 we can combine paradigms more than before the 2000s, but paradigms continue to exist. It is very different to think in functional terms than in imperative ones, or even in purely logic terms as in Z3. There are combinations that are pretty natural (e.g. LINQ in .NET) while others go beyond specific languages: just interfacing components written under different paradigms.
My two cents is that programming languages will always favor a paradigm over others, because many paradigms are strongly connected to the compiler or interpreter and offer syntactic sugar to give the developer different strategies. The multiparadigm approach would be much along the lines of Microsoft .NET (F#, C#), where specific programming languages can use a unified framework.
In futuristic terms, I think the whole programming field will lean more toward reuse and developing based on specs than what we are doing now: reinventing the wheel across different organizations. For example, front-end development should be more visual and parametric than writing the code it currently requires.
"Think" being the operative word. Paradigms are mental models of computation and organization. The benefit of designing a language around a single model is ease of understanding. The downside is that this locks the developer into a single way of looking at things, which may not always be the best match for the domain. Multi-paradigm approaches (most modern langs) broaden the available options, but at the cost of greater cognitive load.
I don't agree about the cognitive load; people are different, and the idea that there is a "single developer mind" is completely wrong. For example, YMMV, but if you look at LINQ in .NET, it is a natural query language over different paradigms. I don't think it adds cognitive load.
And we should also think about the way that computer science and/or software engineering is studied. If there were a better teaching approach to multiparadigm thinking, there would be less cognitive load. My argument is that when you work professionally you don't face the same cognitive demands as when studying at university, because even unconsciously you assume that you know most of the material.
What I agree with is that you would not find many people who are proficient in Haskell/OCaml/Lisp and at the same time in imperative languages. You can know both, but it is rare to work interchangeably in both. Again, I think that part of this is how we learn computer science, and that new pedagogic approaches could help people be at least good in both.
My personal frustration with logic implementations like Prolog is that the promise of focusing on the "what" instead of the "how" is not fulfilled. I think that SMT solvers like Z3 are great in this regard, at the expense of narrowing the problem space.
LINQ is an interesting example because it exists in two forms, query syntax and method syntax. You might consider these "mini-paradigms". Most users know one or the other. I've rarely met people who are equally proficient at both. But almost anything you can accomplish in query form can also be done in method form, and the industry has settled on method syntax as the standard.
Why do you think that is? I think it's because method syntax fits more neatly into the larger C# paradigm of imperative code. Using both requires the developer to pivot to a different mental model (declarative coding), so we try to standardize on the one that fits best in the bigger picture. This is what I mean by cognitive load. Most devs can surely handle both, but why make them do extra mental work when that effort is better spent on the problem domain.
I think it is more about education, the way "we" study/teach programming, than an intrinsic problem with paradigms. I personally find the query syntax better, and I am not working as a developer "anymore". I would say it is a matter of taste and education. Related [1]. Personally, I look for simple and clear syntaxes and don't have a problem switching from one programming language to another while the focus on the solution is clear.
I understand what you're saying, and I agree people should be able to switch as needed. But look at one of the top comments from that link (I swear that wasn't me):
"I find query syntax nigh unreadable. It's like someone stuck some other language in the middle of my C#. The mental switch between that small snippet of code and the surrounding code is harsh and annoying." [1]
As the big name languages become a never-ending accretion of multi-paradigm features, then it just feels like a sort of Grey Goo landscape. There's a charm in things remaining true to themselves, even at the cost of maximum utility.
Paradigms aren’t just technical capabilities within a language, they’re also modelling approaches for real world problems.
Imperative programming lets you think about a series of steps. Structured programming lets you split and name series of steps into known procedures and structures. Object oriented programming (and object relational mapping) was wildly successful because a lot of enterprise programming is about interactions between durable real world entities. Functional programming remains somewhat niche because modelling in terms of pipelines of pure functions is hard to map to real world problems without great discipline.
None of these is right or wrong, nor is any subset or combination. But they’re not just regrettable constraints, they’re valuable ways of thinking. Any language seeking to grow beyond one or more paradigms still needs to offer a clear and consistent way for programmers to reason about the world.
I more or less agree with the author that "programming paradigms are becoming practically the same as programming styles", but even so, some languages have core features that cannot be replicated by mainstream "multi-paradigm" languages, and those I would call the Next Paradigm.
An example is dependent types. This is impossible in all the languages the author mentions in the article (yes, well, GHC Haskell comes close), but there is at least one "21st century general-purpose language" that uses it, called Idris.
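For illustration, a hedged sketch of the classic first example of dependent types (a length-indexed vector), written here in Lean rather than Idris; the point is that the length lives in the type, so the checker verifies that `append` returns a vector of exactly length m + n:

```lean
-- A vector whose length is part of its type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- The result type Vec α (m + n) is a compile-time-checked guarantee:
-- returning a vector of any other length is a type error.
def append : Vec α n → Vec α m → Vec α (m + n)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (append xs ys)
```

No mainstream "multi-paradigm" language can express this guarantee in its type system, which is what makes it a candidate for a genuinely new paradigm rather than a borrowed feature.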
I think this article is _closer_ to right than a lot of writing on programming paradigms is, but still misses a central idea.
Programming paradigms can't be defined intensionally, by listing their features, nor extensionally, by listing the programming languages that fit each paradigm. Either way results in unsatisfactory definitions, in which reasonable people will find errors: languages wrongly excluded.
This is because programming paradigms aren't sets, they're cognitive categories. Programming paradigms are subjective mappings of conceptual framings of program structure onto specific languages. A language fits a paradigm if that mapping is subjectively natural.
Of course, people tend not to like subjective definitions, but this is more accurate.
Hmm...I still don't think that the differences between imperative, OO and functional programming are sufficient for them to be called "paradigms" and for a language that supports variations to be called "multi-paradigm"