I think I'm missing something. Slide 4 says (of the code) that it has 0 exceptions and is robust, but then slide 7 says (of the system) that it crashes without stack traces? Is that right?
What did the OP mean by "robust", when the code crashes unexpectedly?
The cynic in me says that unless Haskell has a magical "do the right thing in every situation, with automatic full knowledge of your application's business requirements" operator, then if there's no specific logic in the code that responds to errors, those errors are just being ignored (or else they're building a very bare-bones app: any nontrivial piece of software has potential error conditions outside its control that it has to react to in very specific ways that depend on the app itself).
In which case I could produce an equally "robust" Java program by wrapping everything in a try/catch and not logging anything. Except that in Java, or just about any other language, a single method call gets me that elusive stack trace the OP is sorely missing. In real applications (which, to be fair, I've never been masochistic enough to write in Haskell), I always do exactly that, and it significantly improves our ability to track down the remaining unanticipated conditions that our error-handling logic hasn't already dealt with in well-defined ways. That sort of work should consume most of anyone's development time, because the happy path is the easiest piece of the puzzle: engineers handle all code paths, hackers handle the most important ones, and hacks only handle the best-case scenarios.
I'm not really seeing how Haskell has helped here, apart from making the article more upvote-able.
That said, the slides don't give me even a vague sense of what the real thrust of the presentation was, so I'm probably missing the substance. I take issue with this being posted without any additional context, but I'll grant the benefit of the doubt and assume there may actually have been something worth listening to if we had more than just the slides to look at.
Haskell doesn't need a magical "do the right thing" operator. What it does instead is constantly nag you, saying, "but you haven't considered this case".
This causes you to think about your code in much more depth and generally leads to good results without the need to constantly debug.
I find that things written in Haskell that compile generally work the first time, more often than the law of averages would seem to allow.
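To make that concrete, here's a tiny sketch (the `Payment` type and its names are made up for illustration); with -Wincomplete-patterns (included in -Wall), GHC flags the case you forgot:

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

-- Hypothetical domain type, made up for illustration.
data Payment = Card | Cash | Voucher

-- GHC warns here that the Voucher case is never matched: the
-- compile-time "you haven't considered this case" nag.
describe :: Payment -> String
describe Card = "charged to card"
describe Cash = "paid in cash"
```

Add the Voucher equation and the warning goes away; that feedback loop is a big part of why things work the first time.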
"Exception" in Haskell is a more specific term than in other languages. There are many ways to model "unexpected circumstances", such as Maybe, Either, Error, MonadPlus, Alternative, and the transformer variants thereof. Exceptions refer explicitly to "asynchronous" exceptions: the kind found in most other languages, which imply global changes in control flow that are difficult to reason about.
People tend to avoid that kind of control flow except in rare circumstances. That said, it's easy to replicate locally with the Cont monad, and these local reflows are easier to reason about as well. You can even build it atop delimited continuations for more control. These are great for early stopping in searches, for instance.
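A minimal sketch of that early-stopping pattern with callCC (from Control.Monad.Cont in mtl); `findFirst` is a made-up name for illustration:

```haskell
import Control.Monad (when, void)
import Control.Monad.Cont (runCont, callCC)

-- Early stopping in a search: `exit` jumps out of the traversal as soon
-- as a match is found, but the control reflow stays local to this function.
findFirst :: (a -> Bool) -> [a] -> Maybe a
findFirst p xs = runCont (callCC search) id
  where
    search exit = do
      mapM_ (\x -> when (p x) (void (exit (Just x)))) xs
      pure Nothing
```

`findFirst even [1,3,5,4,7]` is `Just 4`; the escape via `exit` never leaks outside the function, which is what makes it easy to reason about.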
>Exceptions refer explicitly to "asynchronous" exceptions
This is not technically true. You can throw exceptions from pure code, either explicitly or with something like an incomplete function definition (which the compiler should warn you about). However, it is considered very bad practice to do so.
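For example (a sketch, with made-up function names), both of these are pure code, both compile, and both can blow up at runtime:

```haskell
import Control.Exception (ArithException (DivideByZero), throw)

-- Explicitly throwing from a pure function: possible, but bad practice.
unsafeDiv :: Int -> Int -> Int
unsafeDiv _ 0 = throw DivideByZero
unsafeDiv x y = x `div` y

-- An incomplete definition does the same thing implicitly: with
-- -Wincomplete-patterns GHC warns that the [] case is missing, and
-- `firstWord ""` fails with a pattern-match error at runtime.
firstWord :: String -> String
firstWord s = case words s of
  (w:_) -> w
```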
Sure, and if you treat them all as `undefined`/bottom then you get CPO semantics. Usually "Exception" still refers to `Control.Exception.SomeException` and `error` is bottom.
Catching "undefined" from pure code is the danger that leads to massively confusing semantics—it breaks down the "value" concept badly. The `spoon` library is a great example of this and it's scary to see it in places. That said, it's not terrible for wrapping up foreign code that isn't quite unsafe enough to need "IO" treatment.
The best usage of "error" is to mark completely impossible situations. These show up easily when you do dependently typed stuff with GADTs, but can also exist due to various algorithm invariants.
Well-written Haskell code pushes as much of the business rules as possible into the type system. It uses the type system as a tool to keep you from forgetting to check corner cases and to prevent you from creating states that make no sense in the context of the business rules.
The canonical example is using a distinct type, producible only by conversion functions, to ensure that user input is always escaped correctly on its way to the database or a web page. Code that fails to perform this escaping won't just send dangerous data somewhere; it will fail to compile. With care, the same can be done to ensure other business rules are observed. In many ways the distinct-types-with-conversion-functions pattern is just the tip of the iceberg here (a sketch follows below).
It's obviously possible to not do this, but that's a bit like using a database by shoving all of your data into one giant string then complaining that the database doesn't help you with anything.
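Here's a minimal sketch of that canonical example; the module and function names (`EscapedHtml`, `escapeHtml`, `render`) are made up for illustration:

```haskell
module Escaping (EscapedHtml, escapeHtml, render) where

-- The constructor stays inside this module, so the only way for outside
-- code to obtain an EscapedHtml is to go through escapeHtml.
newtype EscapedHtml = EscapedHtml String

escapeHtml :: String -> EscapedHtml
escapeHtml = EscapedHtml . concatMap escapeChar
  where
    escapeChar '<' = "&lt;"
    escapeChar '>' = "&gt;"
    escapeChar '&' = "&amp;"
    escapeChar '"' = "&quot;"
    escapeChar c   = [c]

-- The rendering layer only accepts the distinct type; passing a raw
-- String here simply fails to compile.
render :: EscapedHtml -> String
render (EscapedHtml s) = s
```

Because the constructor isn't exported, forgetting to escape becomes a type error instead of an injection bug.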
There is a big difference between static and dynamic checking. In a statically checked language like Haskell the error is guaranteed to be caught at compile time, rather than maybe being caught at runtime.
It can be, yes, but it's easier to do in Haskell. Some of the compiler extensions let you push truly crazy invariants through type checking (see the sketch below).
Also, for most dynamic languages, checking the type at runtime goes against the grain of the language. Python or Ruby code that is littered with type checks is often unnecessarily brittle.
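For a flavour of those invariants, here's a sketch using GADTs and DataKinds: a length-indexed vector whose head function can't even be applied to an empty vector, so the check happens at compile time rather than at runtime.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level naturals, used only as an index.
data Nat = Z | S Nat

-- A vector that carries its length in its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Only non-empty vectors are accepted; `vhead VNil` is a type error.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x
```

`vhead (VCons 'a' VNil)` is `'a'`, while `vhead VNil` is rejected by the type checker.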
In Haskell one typically eschews exceptions in favor of, for instance, the Error monad, as you don't get a typical stack trace in a lazily evaluated language.
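A minimal sketch of that style using Either (the old Error monad has since been superseded by Except/ExceptT in mtl, but the idea is the same): the failure is just a value, so you don't need a stack trace to see what went wrong. The parser and its names here are made up for illustration.

```haskell
-- Hypothetical error type: each failure mode is an ordinary value.
data ParseError = EmptyInput | NotAPositiveNumber String
  deriving Show

parsePositive :: String -> Either ParseError Int
parsePositive "" = Left EmptyInput
parsePositive s  = case reads s of
  [(n, "")] | n > 0 -> Right n
  _                 -> Left (NotAPositiveNumber s)
```

`parsePositive "42"` is `Right 42`, `parsePositive "x"` is `Left (NotAPositiveNumber "x")`, and the type forces the caller to handle both.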
Probably they experienced some exceptions during development, but have had no exceptions in production.
Likely that is referring to the fact that when it crashes, it crashes in a way that can be interesting to debug. I took it as a comment about development rather than one about production.
Lazy languages don't have call stacks equivalent to those of strict languages. However, GHC can give you a stack trace via profiling: build with -prof and run with the +RTS -xc flag to get the cost-centre stack printed when an exception is raised.
What strategies did you use to combat the lack of stack traces, especially in production?
I was happy to see that you are fellow Brisbanites. I'm a bit out of the "local loop"; is there much of a local Brisbane start-up scene?