
If you're in eager-evaluation land, everything evaluates as you come to it, so you can handle all arguments at runtime with a stack and no problem ever comes up. Once we say "actually, we don't evaluate until it's used," ambiguity arises.

So the context of this is a practical problem with arguments in a lazy-evaluation environment: you don't know how many arguments you need to supply without doing the evaluation.
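
To make that concrete, here's a minimal Haskell sketch (the names are mine, not from the OP). Both functions have the same type, but at runtime one consumes two arguments at once while the other builds an intermediate closure; at a call site like (f 1 2), you can't tell which without evaluating f first:

    -- Same type, different runtime arity (hypothetical example):
    add2 :: Int -> Int -> Int
    add2 x y = x + y          -- consumes both arguments at once

    add1 :: Int -> Int -> Int
    add1 x = \y -> x + y      -- consumes one, then returns a closure

    main :: IO ()
    main = print (add2 1 2, add1 1 2)  -- prints (3,3) either way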



I’m very far from an expert in these styles of programming languages, but I’m pretty sure I’m missing something, and it’s not well explained in the OP.

The OP uses a Lisp-y syntax, fully parenthesized, for its code examples. But function arity isn't much of an issue in such languages: you can infer it from the parentheses, and there's no implicit currying. (The OP uses a Haskell-like syntax for its definitions, but I think it contains multiple mistakes, which makes it very hard to understand.)

A language with implicit currying and eager evaluation specifically of function arguments just seems overcomplicated to me. It sounds like a stack machine in which you pop an element off the stack, evaluate it enough to determine its arity, then pop that many more elements, evaluate them, and then apply the function. This sounds unpleasantly dynamic and quite hard to implement efficiently.
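
In Haskell-ish pseudocode (my own hypothetical types, nothing from the OP), that machine loop would look roughly like this, with the evaluation of the popped elements elided:

    data Value
      = Num Int
      | Fun Int ([Value] -> Value)   -- a function tagged with its arity

    -- Pop the head, read its arity, pop that many more, apply.
    apply :: [Value] -> Value
    apply (Fun arity f : rest)
      | length rest >= arity = f (take arity rest)
    apply (v : _)            = v     -- under-applied, or not a function
    apply []                 = error "empty stack"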

Am I missing something here?


I wouldn't read too much into the syntax - the Haskell-like definitions are just a succinct way to describe the AST representation of the toy call-by-push-value language the post is talking about. Similarly, the Lisp-like syntax is (I believe) just shorthand for how the language would be written in an extended lambda calculus with call-by-push-value semantics.

I think it's important to note that the post is clearly written for someone working on the implementation of compilers/type systems. For example, in a compiler IR, especially one based on an extended lambda calculus representation, functions/closures are often curried, i.e. they only take a single argument. As a result, it can be an issue at a given call site to know whether a function is "saturated", i.e. all of its arguments have been applied so it will do some work and return the actual result, or only partially saturated, in which case it returns a new function that needs one fewer argument than before. I gather that this interacts poorly with typing in some situations, but I'm not super familiar with the details, as I haven't gone down the rabbit hole of building a compiler with this kind of IR.
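
To make "curried IR" concrete, a hypothetical sketch (my names, not the post's):

    -- Every lambda and application takes exactly one argument:
    data Term
      = Var String
      | Lam String Term
      | App Term Term

    -- (f x y) becomes nested single-argument applications:
    call :: Term
    call = App (App (Var "f") (Var "x")) (Var "y")

Whether the outer App here is saturated depends entirely on what f turns out to be; the call site alone can't tell.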

That said, as someone who works as a compiler engineer, I do recognize the potential of call-by-push-value, namely that it reifies evaluation semantics in the IR rather than leaving them implicitly eager or lazy, so you don't have to hack around the lack of one or the other by implementing lazy evaluation on top of eager semantics or vice versa. Making evaluation order a first-class citizen enables optimizations that would otherwise be impossible or very difficult to implement. To be clear, you can obviously implement lazy on top of eager, or eager on top of lazy, without call-by-push-value; but if you anticipate needing to do that and you are building a new compiler, a call-by-push-value IR might be a worthwhile approach to take.
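
Schematically, the value/computation split looks something like this (my own minimal rendering of the standard call-by-push-value constructs, not the OP's definitions):

    data Val
      = VVar String
      | VThunk Comp             -- a suspended computation: laziness is explicit

    data Comp
      = CForce Val              -- run a thunk
      | CReturn Val             -- a computation that just yields a value
      | CBind Comp String Comp  -- run left, bind its result, continue right
      | CLam String Comp        -- consume a pushed argument
      | CApp Comp Val           -- push an argument, then run the computation

Eager and lazy orderings both become explicit sequences of thunk/force/bind, so the optimizer can actually see and rearrange them.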

There might be more to it that I'm not seeing in my brief reading, but the post made sense to me; it just assumes a lot about the reader (i.e. that you are deeply familiar with compiler implementation, type theory, etc.).


Is that what they mean by “calculating the arity”?


Yes, arity is "number of arguments". I don't have expertise in lazy evaluation myself; I've just been exposed to enough of it to explain the motivations.



