
That's already been happening for quite some time. Mainstream programming has done little else in recent years than converge toward functional programming.

Also, they wrote "functional," not "purely functional."

By the way, from a formal perspective, imperative programming is an extension of purely functional programming.



This is true, in that most of the scholarship builds up its proofs starting with the lambda calculus. But there are so many equivalent paradigms (Turing machines, SKI combinators, Excel spreadsheets) that I'm not at all convinced they had to start with the lambda calculus. They just happened to.

Out in the real world, the thing that all programming languages are actually built on top of looks much more like a Turing machine than a collection of composed anonymous functions. But of course if you want to make your programs go really fast, you can’t treat them like Turing machines either. You need to acknowledge that all of this theory goes out the window in the face of how important optimizing around memory access is.

Which isn't to say one perspective is right and one is wrong. These perspectives all exist and have spread because they can all be useful. But acting like one of them is "reality" isn't all that helpful.

P.S. Not that the parent actually said the formal perspective was reality. I just wanted to articulate a thought I've had bouncing around in my head for a while.


> Out in the real world, the thing that all programming languages are actually built on top of looks much more like a Turing machine than a collection of composed anonymous functions.

Hardware logic as described in an HDL is precisely a collection of "composed anonymous functions", including higher-order functions which are encoded as "instructions" or "control signals". We even build stateful logic from the ground up by tying these functions together in a "knot", with the statefulness being the outcome of feedback.
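
To make the "knot" concrete, here's a toy Lava/Clash-flavoured sketch in Haskell (the encoding and names are mine, purely illustrative): signals as infinite streams of per-cycle values, with state emerging only from a recursive feedback binding:

    -- A signal is an infinite stream of per-cycle values.
    type Signal a = [a]

    -- A "register" delays its input by one cycle, seeded with an
    -- initial value; all statefulness will come from this delay.
    register :: a -> Signal a -> Signal a
    register initial input = initial : input

    -- A mux is an ordinary combinational function over signals.
    mux :: Signal Bool -> Signal a -> Signal a -> Signal a
    mux = zipWith3 (\s a b -> if s then a else b)

    -- A toggle flip-flop: the output feeds back into its own
    -- definition. The recursive binding is the "knot".
    toggle :: Signal Bool -> Signal Bool
    toggle enable = out
      where
        out  = register False next
        next = mux enable (map not out) out

    -- take 4 (toggle (repeat True))  ==  [False, True, False, True]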


But it's hard to argue the machine at the end is stateless. We can do this endlessly. You can construct lambda calculus with Turing machines and Turing machines in lambda calculus.

There seems to be this weird idea in the functional community that the existence of some construction of one thing in another shows that one of those things is "more fundamental" than the other, when in reality this is often a circular exercise. e.g. Functions can be formalized as sets and sets can be formalized as functions.

Even worse in this specific case, the Church-Turing thesis tells us that they're equivalent, which is the only sensible answer to the question of which is more fundamental. There's an oft-quoted phrase about "deep and abiding equivalencies", and it bears pointing out how deep and abiding these equivalencies are: from a formal perspective they are the same. Yes, arguments could be made that typed lambda calculus and its relation to logic are important, and that's true, but it's not a formal argument at all, and I think it's best to be clear on that.


> You can construct lambda calculus with Turing machines and Turing machines in lambda calculus.

I realize that these models of computation are equivalent. My point was rather that, in practical programming, the imperative paradigm collapses into the functional paradigm when arbitrary side effects are disregarded.
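
As a toy illustration of that collapse (my own sketch, nothing from the thread): an imperative accumulator loop becomes pure state threading once each assignment is replaced by a fresh binding:

    -- "total = 0; for x in xs: total += x", written as pure state
    -- threading: the mutable accumulator becomes an argument.
    sumLoop :: [Int] -> Int
    sumLoop = go 0
      where
        go total []     = total              -- loop exit: return the state
        go total (x:xs) = go (total + x) xs  -- "total += x" as a new binding

    -- Equivalently: sumLoop = foldl (+) 0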

> e.g. Functions can be formalized as sets and sets can be formalized as functions

I can derive functions from set theory in my sleep, and I can kickstart set theory without functions, but I wouldn't know how to define the concept of a function without sets. And even if I did: I can't even specify the characteristic function of a set without resorting to the membership relation.

> But it's hard to argue the machine at the end is stateless.

I'm not really that interested in the relationship between the various paradigms and the machine. What interests me most is how well I, as a human being, can write non-trivial programs. To me, it is immediately obvious that the composition of purely functional program units is conceptually simple enough to be done by a child, while unrestricted side effects can very quickly make things very complicated. However, I don't want to get involved in the discussion on this topic. I have accepted that others see it differently, although I find that completely baffling. I don't want to take away anyone's for loops, etc. To each their own.


> I realize that these models of computation are equivalent. My point was rather that, in practical programming, the imperative paradigm collapses into the functional paradigm when arbitrary side effects are disregarded.

But in practical programming with imperative languages, arbitrary side effects can't be disregarded, so they don't collapse into the functional paradigm. In fact, from a physical perspective, every possible CPU has states, so the most physically fundamental model of computation (something like register machines, or GOTO programs) is imperative and more fundamental than functional models, like untyped lambda calculus. Functional models might be more mathematically elegant though.
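
For concreteness, here's a toy register-machine encoding (my own sketch; the instruction set is made up) in which computation is a sequence of transitions over explicit machine states:

    -- Toy register machine: a program counter plus registers,
    -- advanced one instruction at a time until Halt.
    data Instr = Inc Int      -- increment a register
               | Dec Int      -- decrement a register
               | Jz Int Int   -- jump to an address if a register is zero
               | Halt

    type MachineState = (Int, [Integer])   -- (program counter, registers)

    step :: [Instr] -> MachineState -> Maybe MachineState
    step prog (pc, regs) =
      let upd i f xs = [if j == i then f x else x | (j, x) <- zip [0 ..] xs]
      in case prog !! pc of
           Halt     -> Nothing
           Inc r    -> Just (pc + 1, upd r (+ 1) regs)
           Dec r    -> Just (pc + 1, upd r (subtract 1) regs)
           Jz r tgt -> Just (if regs !! r == 0 then tgt else pc + 1, regs)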

> I wouldn't know how to define the concept of a function without sets.

Whitehead and Russell showed how to define functions just in first-order logic with identity, without requiring any set theoretical axioms, by defining an n-ary function via an n+1 place relation. See here top right: https://mally.stanford.edu/Papers/rtt.pdf
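
Spelled out for the unary case (my paraphrase, not the paper's notation), a function is just a binary relation R that is total and single-valued:

    \forall x \,\exists y\; R(x, y)                                     % R is total
    \forall x \,\forall y \,\forall z\; (R(x,y) \land R(x,z) \to y=z)   % R is single-valued

"f(x) = y" is then mere notation for R(x, y), with no set-theoretic machinery involved.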

This is quite natural, because predicates (properties and relations) already occur in natural language, while sets do not; they are a mathematical abstraction. For example, sets can be empty, or arbitrarily nested, or both arbitrarily nested and otherwise empty, which has no analog in natural language.

> I can't even specify the characteristic function of a set without resorting to the membership relation.

If you define sets in terms of functions, then functions are in this context assumed to be more fundamental than sets, and you don't need to define them. The membership relation is then simply defined via the characteristic function; you don't need to define that function either, just as, in the reverse case, you don't need to define sets when defining functions via sets.
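
In code form (a toy Haskell sketch, with names of my own choosing), taking functions as primitive:

    -- A set *is* its characteristic function; membership is then
    -- just function application, so it needs no separate definition.
    type Set a = a -> Bool

    member :: a -> Set a -> Bool
    member x s = s x

    evens :: Set Int
    evens n = n `mod` 2 == 0

    union :: Set a -> Set a -> Set a
    union s t = \x -> s x || t x

    -- member 4 (union evens (\n -> n < 0))  ==  True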


> But in practical programming with imperative languages, arbitrary side effects can't be disregarded, so they don't collapse into the functional paradigm.

I'm sorry to have to say this so bluntly, but I think you understand as well as I do that in a language such as C#, it is entirely possible to write large amounts of purely functional yet useful code, just as you would in Haskell. That's why it's possible in SICP to wait until Chapter 3 to introduce the special form set!. That is the issue I was concerned with.

> from a physical perspective

I already mentioned that this is not the perspective that interests me. I don't care at all about the physical substrate for computation.

Thanks for the paper. I might take a look at it, although I've already been given a good tip elsewhere, namely map theory. I'm not convinced by the claim that properties and relations occur in natural language while sets supposedly do not.

The last paragraph isn't very helpful either. I'm not sure who is misunderstanding whom here, but we don't need to hash it out. This isn't a conversation I'm enjoying.


You're looking for Grue's map theory for formalizing sets in terms of functions.


Thanks! It seems that in the metatheory, one can resort to type theory in order to avoid having to fall back on set theory in a circular manner. Unfortunately, I don't know anything about that, but I'll take a closer look at it.


Good point on functional vs. purely functional. To the GP: what we're seeing is that more mainstream languages are taking the most popular features from functional languages and integrating them. Scala and Rust are perfect examples, given how ML-inspired they are. C# has a bevy of functional trappings.



