Laziness and TMC are fundamentally different from partial instantiation in that they are implementation details, mostly/wholly invisible to the language semantics themselves.
A key aspect of partial instantiation is that the "holes" may be filled in by a semantically unrelated piece of code, which is not the case for either laziness or TMC (where the contents of the data structure must be defined in one place, even if the implementation does not evaluate them immediately).
Yes, that is true! Koka's constructor contexts also allow you to do this. A constructor context is a data structure with exactly one hole which you can fill in with an arbitrary value (or even several arbitrary values, copying the constructor context in the process).
Also, you can do lazy initialization in any language that has functions by passing a getter function around instead of a value.
I recently had need for this to implement validation for recursive data structures in TypeScript. It depends on support for forward references in the body of a function. The tricky bit is ensuring that the getter isn't called until the cycle is completed by defining the forward reference's target. The type system doesn't help; you just have to write the implementation so it doesn't call the getter too early.
I don't think Haskell can do this (have a growable linked list, for example). 'last a' is 'last a', regardless of what happens in between (modulo shadowing and such).
And I suspect that Prolog's partial instantiation, while not mutating data, is mutating references somewhere.
Technically, Haskell's laziness is just mutability under the hood :)
And the "difference list" mentioned in the article is also in Haskell - although framed differently (more "functionally")
  import Prelude hiding (concat)   -- avoid a clash with Prelude's concat

  -- a difference list is a function that prepends its contents to whatever follows
  type DList a = [a] -> [a]

  concat :: DList a -> DList a -> DList a
  concat = (.)        -- appending is just function composition, O(1)

  toList :: DList a -> [a]
  toList d = d []     -- materialize by applying to the empty list

  fromList :: [a] -> DList a
  fromList = (++)     -- a plain list becomes "prepend these elements"
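A quick usage sketch with the definitions above (just an illustration, not from the original comment):

  -- build with cheap appends, then materialize the ordinary list once at the end
  example :: [Int]
  example = toList (fromList [1,2] `concat` fromList [3,4])   -- [1,2,3,4]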
Haskell has cyclic data structures, which also can’t be implemented without a mutable reference somewhere, though it may be buried in the implementation.
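For example, the classic "tying the knot" idiom (a minimal sketch, not from the original comment):

  -- xs and ys each reference the other, giving a genuinely cyclic structure on the heap
  xs, ys :: [Int]
  xs = 1 : ys
  ys = 2 : xs

  main :: IO ()
  main = print (take 6 xs)   -- [1,2,1,2,1,2]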
The difference is being able to define an incomplete data structure (with a forward reference) and then defining the target of the reference at runtime. Most languages will complain about an undefined reference if it’s not defined by the end of a module.
You could do it with soft references, though. Use a symbol or string to refer to something and define it later.
I rather think the fact that the same symbol/string always denotes the same thing is especially helpful for cyclic structures.
Anyway, I think I misunderstood the article: I thought they added things to a dictionary in the Prolog REPL, which would be impossible in Haskell/GHCi AFAIK.
Disclosure: I work on Koka's FBIP optimization (Option 4).
> The most efficient data structure to use here would be a mutable dynamic array and in an imperative language that's what pretty much everyone would use. But if you asked an OCaml programmer, they would almost certainly use a linked list instead.
I agree with this sentiment. However, OCaml does have mutable arrays that are both efficient and convenient to use. Why would a programmer prefer a list over them? In my opinion, the main benefit of lists in this context is that they allow pattern matching and inductive reasoning. To make functional programming languages more suited for array programming, we would thus need something like View Patterns for arrays.
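As a rough sketch of what that could look like today, GHC's ViewPatterns extension can already give arrays a list-like pattern-matching interface; the `uncons` view and the choice of Data.Vector here are just illustrative assumptions:

  {-# LANGUAGE ViewPatterns #-}
  import qualified Data.Vector as V

  -- a "view" exposing a vector as head/tail, so it can be matched like a list
  uncons :: V.Vector a -> Maybe (a, V.Vector a)
  uncons v
    | V.null v  = Nothing
    | otherwise = Just (V.head v, V.tail v)

  -- inductive-style definition over an array, via the view
  sumV :: Num a => V.Vector a -> a
  sumV (uncons -> Nothing)      = 0
  sumV (uncons -> Just (x, xs)) = x + sumV xs

  main :: IO ()
  main = print (sumV (V.fromList [1 .. 10 :: Int]))   -- 55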
A related issue is that mutation can actually be slower than fresh allocations in OCaml. The reason is that the garbage collector is optimized for immutable data structures: it has a very fast minor heap that makes allocations cheap, but expensive tracking for references that do not go from younger to older values. See: https://dev.realworldocaml.org/garbage-collector.html#scroll...
> Unfortunately, this makes it impossible to use any standard functions like map on linear values and either makes linearity nearly useless or inevitably creates a parallel, incomplete universe of functions that also work on linear values.
You can implement polymorphism over linearity: this is done in Frank Pfenning's SNAX language and planned for the uniqueness types in a branch of OCaml.
> This might sound a little dangerous since accidentally holding on to a reference could turn a linear time algorithm quadratic
No, the in-place reuse optimization does not affect the asymptotic time complexity. But it can indeed change the performance drastically depending on whether a value is shared, since copies are needed in the shared case.
> A tracing garbage collector just doesn't give you this sort of information.
> for now even these struggle to keep up with tracing garbage collectors even when factoring in automatic reuse analysis.
I investigated the linked benchmarks for a while. The gap between Koka and Haskell is smaller than described in that initial comment, but a tuned GHC is indeed a bit faster than Koka on that benchmark.
Perhaps contrary to most people in this thread, I think you should avoid learning lenses or category theory too early. These are great tools, but they take months or even years to master and are not required to write useful code in the language.
I find Haskell very useful for my projects, but to achieve this I restrict myself to the basic subset of the language (Haskell 2010, no fancy extensions such as type families or GADTs) and use few libraries aside from the core libraries. New features and libraries always carry a high learning curve in Haskell and less popular libraries can be buggy. Instead, you will often be more productive just writing the required functionality from scratch (and it will teach you more too!).
At Jane Street, I saw my coworkers learn functional programming in just one week (some still struggled with monads in the second week; if that is you, I can recommend Phil's paper: https://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/b...). If you are learning Haskell in your free time and with no one experienced to help, it will obviously take you longer. If you have questions, feel free to post on the Haskell IRC or Reddit. Just don't worry that you need to read another tutorial before getting started :)
As with anything Haskell, one could simply reply with "here's the formal definition."
But in easily digestible terms, it's a way to narrow your view over a collection to a subset that satisfies some property, and perhaps perform some operations on that subset.
This video presentation (with a very humorous audience dynamic, might I add) serves as a great introduction by example: https://youtube.com/watch?v=QZy4Yml3LTY
I'm going to watch the video, but I'm having trouble understanding how lenses are different from filter, and perhaps map. Is it just a combination of filter and map?
No, it's quite different. A super rough but somewhat accurate starting point is to think of a lens as being like the `.foo.bar.baz` in `myobject.foo.bar.baz`: a way to describe some "location" inside of a data structure, in a way that allows for getting and setting. But lenses are also data, so they can be stored in variables, manipulated, and so on. It's a very useful concept, and very much not required or helpful when learning Haskell. It's also not a core Haskell concept in any sense, just something that has been built on top of Haskell as a library.
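A tiny self-contained sketch of that analogy using the lens library (the record types and field names here are made up for illustration):

  {-# LANGUAGE TemplateHaskell #-}
  import Control.Lens

  data Baz = Baz { _qux :: Int }  deriving Show
  data Bar = Bar { _baz :: Baz }  deriving Show
  data Foo = Foo { _bar :: Bar }  deriving Show
  makeLenses ''Baz
  makeLenses ''Bar
  makeLenses ''Foo

  obj :: Foo
  obj = Foo (Bar (Baz 1))

  main :: IO ()
  main = do
    print (view (bar . baz . qux) obj)      -- 1, the "get"
    print (set  (bar . baz . qux) 42 obj)   -- a new Foo with qux = 42, the "set"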
Also helpful to note that the "location" can be virtual. For example, you could have a lens into the hours field of a string that contains an ISO-8601 formatted date, so you could do something like
let s = "20240722T193456.789-0800"
putStrLn $ 19840206T193456.789-0800 -- prints "20240722T203456.789-0800"
and they can be composed, so if you have some other lens that gets from your outer structure to an ISO-8601 formatted date string field (let's call that lens `fieldLens`), then `fieldLens . hours` will get you from your outer structure to the hours field of that date field.
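For the curious, one way such an `hours` lens could be written (a rough sketch assuming the basic ISO-8601 layout above; not from the original comment):

  import Control.Lens

  -- a lens into the hours field of a basic-format ISO-8601 string,
  -- e.g. "20240722T193456.789-0800" (the two characters after the 'T')
  hours :: Lens' String Int
  hours = lens getHours setHours
    where
      getHours s   = read (take 2 (drop 9 s))
      setHours s h = take 9 s ++ pad h ++ drop 11 s
      pad h        = let str = show h in replicate (2 - length str) '0' ++ str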
Zip trees are novel, but their performance (and therefore also that of skip lists, since they are isomorphic) lags behind other linked structures like Treaps and especially LBSTs. [1] I personally find skip lists to be overhyped binary search trees in disguise.
Absolutely trees in disguise; in fact, there are deterministic versions of skip lists that are equivalent to B-trees. A 1-2 skip list (where each pointer to next can skip from 1 to 2 pointers on the level below) is isomorphic to a 2-3 tree.
I always felt that Pugh was pointedly ignoring the cost of inconsistently sized data structures and that more transparency there was necessary. It's been a really long time since I dug into them, though, so I could be out of date.
I'd also like to see better investigations into Treaps balanced not for fairness but for average access time, so that more frequently used values are faster to look up than uncommon values.
I don't know anything about zip trees, but I'd think in a common scenario the memory addresses are reasonably consecutive. Would that be a problem? If not, why not just assign consecutive ranks in the first place?
You can drive more external displays with a dock and DisplayLink Manager, though. I got three extra monitors running on my M1 Air with no problems that way.
The headline "GPT-4 increased BCG consultants’ performance by over 40%" is misleading since it implies that they became more productive in their actual work, when this is a carefully controlled study that separates tasks by an "AI frontier". Only inside the frontier did the "quality" of work increase by 40%, while they completed 12% more tasks on average.
I am slooooowwly working on my own language that is planned to use work from both you and the Effekt team, so I may send you all an email in the future when I start having questions.
But I have really limited free time and only work on a language for really specific reasons. Understand: nothing else seemed to try to solve the problem I have at hand.
That said, if the offer is solid long-term employment at a good salary, I am open to negotiations. My email is on my blog and in my profile, IIRC.