Hacker News | tobz619's comments

Haskell also has GADTs: https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/gadt... First available as an extension, and now part of the default language edition in this year's compiler release.


You're misinterpreting 'GHC2024'. It's just a language edition, a shorthand for enabling a bunch of extensions. You have been able to enable GADTs for many years now with just a single pragma. It has been built into GHC all these years.
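For illustration, the single pragma in question, with a small GADT of my own invention (the `Expr` type is not from the thread):

```haskell
{-# LANGUAGE GADTs #-}

-- A classic GADT: a tiny expression language whose constructors
-- pin down the result type of evaluation
data Expr a where
  IntE  :: Int -> Expr Int
  BoolE :: Bool -> Expr Bool
  AddE  :: Expr Int -> Expr Int -> Expr Int

eval :: Expr a -> a
eval (IntE n)   = n
eval (BoolE b)  = b
eval (AddE x y) = eval x + eval y
```

This compiles on any GHC from the past decade or so; GHC2024 merely makes the pragma unnecessary.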


Yep, Scala is influenced by Haskell.


It can, using pure or return. If working with Maybe specifically, Maybe is defined like so:

    data Maybe a = Just a | Nothing

So to make an X a Maybe X, you'd put a Just before a value of type X.

For example:

    one :: Int
    one = 1

    mOne :: Maybe Int
    mOne = Just one  -- alternatively, pure one, since our type signature tells us what pure should resolve to

The reason we can do this is that Maybe is also an Applicative and a Monad, so it implements pure and return, which take an ordinary value and wrap it up into an instance of what we want.


Sounds similar to how you need to write Some(x) when passing x to something expecting an Option in Rust.

Swift interestingly doesn’t require this, but only because Optionals are granted lots of extra syntax sugar in the language. It’s really wrapping it in .some(x) for you behind the scenes, but the compiler can figure this out on its own.

This means that in Swift, changing a function from f(T) to f(T?) (i.e. f(Optional<T>)) is a source-compatible change, albeit not an ABI-compatible one.


Isn't that explicit casting? Implicit casting would be automatically performed by the compiler without the need to (re)write any explicit code.


> mOne = Just one

I'd call that explicit casting. Implicit casting would be

    mOne = one
The compiler already knows what "one" is; it could insert the "Just" itself, no? Possibly via an operator defined on Maybe that does this transformation?

That is, are there some technical reasons it doesn't?

Or is it just (no pun intended) a language choice?


Why would this be useful? Why do you want the types to change underneath you?


Better question: Why would you want your call site code to break when your type signature gets changed in a way that doesn't necessitate breaking anything?


Because what you're asking for precludes the concept of mathematical guarantees. I'm not taking your question at face value, because you could be asking why call site code should break when the type signature generalises (which is a useful thing), but that's not what you're asking.

It seems you're asking for code to be both null safe and not null safe simultaneously.

Having a language just decide that it would like to change the types of the values flowing through a system is wild. It's one of the reasons that JavaScript is a trash fire.


You are misunderstanding things.


I’m certainly misunderstanding why so many people in this thread insist on speaking authoritatively on a topic they clearly know very little about.


Because it otherwise forces the caller to have an extra explicit step that doesn't really contribute to anything. It's a trivial transform, and as such just gets in the way of what the code actually does.

Of course with great power comes great responsibility, so it's a tool that should be used sparingly and deliberately.

Now as mentioned I don't use Haskell, but that's why I like it in other languages.

I asked as I was curious if there was something that prevented this in Haskell, beyond a design choice.


A good reason to use Haskell is that it generally guides the programmer away from doing things like this.

If you make a breaking change to your API, then you should want your tools to tell you loud and clear that it’s a breaking change.

I also don’t agree that keeping the structure around values internally logically consistent “doesn’t really contribute to anything”. On the contrary, I think this idea is hugely important. How would your idea generalise? The compiler should just know that my `Int` should be a `Maybe Int` and cast it for me. Should the compiler also know that my `[a]` should be cast to a `(a, b)` because incidentally we’re fairly confident that list should always have two elements in it?

I think if this way of thinking is unfamiliar, then it’s a good reason to learn Haskell (or Elm, which is at least as good, or maybe better, for driving this point home).


> The compiler should just know that my `Int` should be a `Maybe Int` and cast it for me.

The compiler should not "just" know it. It would know it because we told it how.

Consider a function that takes a float and returns a complex number. I then change the function to take a complex number ("Complex Float" in Haskell, if I read the docs right) and return a float.

I could then tell the compiler, by implementing an implicit cast operator, how to cast float to complex. The implicit part is then that the compiler tries it without me telling it to use the cast explicitly.

Then any code that worked with the old function should work perfectly fine using the modified function without modifications, since per definition the reals are contained in the complex numbers.

This is how I do it in several languages I've used.


But now you’re talking about something else aren’t you? Now you’re talking about generalising. You can generalise, for example, from Float to Floating a => a. But I don’t understand how the original Int to Maybe Int change could be sensible. How does that work?
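As a sketch of that generalisation (the `square` function and its call sites are hypothetical, not from the thread):

```haskell
import Data.Complex (Complex((:+)))

-- Before generalising, this was: square :: Float -> Float
-- Afterwards, the typeclass constraint does the work:
square :: Floating a => a -> a
square x = x * x

-- Old call sites passing a Float still compile unchanged
atFloat :: Float
atFloat = square 2.0

-- And the very same function now accepts complex numbers
atComplex :: Complex Float
atComplex = square (1 :+ 1)
```

No implicit cast is inserted anywhere; the function simply became polymorphic, which is why existing callers are unaffected.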


The exact same way? Or perhaps you could tell me, in which cases can an Int not be turned into a Maybe Int?

Again, I don't know Haskell, so from the outside it looks like much the same as the float -> complex conversion.


It sounds like you need to learn about parametricity.

https://www.well-typed.com/blog/2015/05/parametricity/


It doesn't sound like that to me. Could you not just answer his question?


To whom shall I send the invoice?


If you have a value "aValue :: a" and a monadic function "mFunc :: a -> Maybe b", that's essentially just asking you to use `(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b` together with `pure :: Applicative f => a -> f a`, which will lift our regular `aValue` to a `Maybe a` in this instance.

Then to get the result "b" you can use the `maybe :: b -> (a -> b) -> Maybe a -> b` function to get your "b" back and do the weakening as you desire.

`Maybe` assumes a computation can fail, and the `maybe` function forces you to give a default value in the case that your computation fails (aka returns Nothing) or a transformation of the result that's of the same resultant type.

Overall, you'd end up with a function call that looks like:

    foo :: b
    foo = maybe someDefaultValueOnFailure someFuncOnResult (pure aValue >>= mFunc)

or, if you don't want to transform the result, you can use `fromMaybe :: a -> Maybe a -> a`:

    bar :: b
    bar = fromMaybe someOtherDefaultValueOnFailure (pure aValue >>= mFunc)
    -- if the computation succeeds, return its value at the computation's result type
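To make that concrete, a runnable sketch with hypothetical names (`safeHead` standing in for `mFunc`):

```haskell
import Data.Maybe (fromMaybe)

-- A computation that can fail: the head of a possibly empty list
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- maybe: a default for the Nothing case, a transform for the Just case
describeHead :: [Int] -> String
describeHead xs = maybe "empty" show (pure xs >>= safeHead)

-- fromMaybe: a default only, keeping the computation's result type
headOrZero :: [Int] -> Int
headOrZero xs = fromMaybe 0 (pure xs >>= safeHead)
```

(Here `pure xs >>= safeHead` could be written as just `safeHead xs`; the longer form mirrors the lifting step described above.)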


This is fine and understandable in theory, but a usability disaster in practice.

If function f returns b or nothing/error, and is then improved to return b always, client code that calls f should not require changes or become invalid, except perhaps for a dead code warning on the parts that deal with the now impossible case of a missing result from f.

You are suggesting not only a pile of monad-related ugly complications to deal with the mismatch between b and Maybe b, which are probably the best Haskell can do, but also introducing default values that can only have the practical effect of poisoning error handling.


> If function f returns b or nothing/error, and is then improved to return b always, client code that calls f should not require changes or become invalid

Why do you need to change the type signature at all? You "improved" [1] a function to make it impossible for the error case to occur, but it's used everywhere and the calling code must handle the error case (I mean, that's what static typing of this sort is for). So there you have it: the client code is not rendered invalid, it just has dead code for handling a case that will never happen (or, more usually, this just bubbles up to the error handler, not even requiring dead code at every call site).

As an aside, I don't see the problem with the "pile of monads" and it doesn't seem very complicated.

----

[1] which I assume means "I know I'll be calling this with values that make it impossible for the error to occur". If you are actually changing the code, then it goes without saying that if the assumptions you made when choosing the type changed when rewriting the function, the call sites breaking everywhere is a strength of static typing.
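A sketch of that point, with invented names: the signature keeps `Maybe` even after the function can no longer fail, so call sites compile untouched.

```haskell
-- After the "improvement" this can no longer return Nothing,
-- but the signature is deliberately left alone
lookupGreeting :: String -> Maybe String
lookupGreeting name = Just ("hello, " ++ name)

-- Caller: the Nothing branch is now dead code, yet nothing breaks
greetingOrDefault :: String -> String
greetingOrDefault name = case lookupGreeting name of
  Just g  -> g
  Nothing -> "hello!"  -- dead code after the change, harmless
```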


Changing the type signature (which, by the way, could be at least in easy cases implicitly deduced by the compiler rather than edited by hand) allows new client code to assume that the result is present.


Changing the type signature to relax/strengthen pre or post conditions is a fundamental change though. I would expect it to break call sites. That's a feature!


Strengthening postconditions and relaxing preconditions is harmless in theory, so it should be harmless in practice.

Haskell gets in the way by facilitating clashes of incompatible types: there are reasons to make breaking changes to type signatures that in more deliberately designed languages might remain unaltered or compatible, without breaking call sites.


> If function f returns b or nothing/error, and is then improved to return b always, client code that calls f should not require changes or become invalid, except perhaps for a dead code warning on the parts that deal with the now impossible case of a missing result from f.

You can achieve this by not changing the type and keeping the result as Maybe b. Dead code to handle `Nothing`, no harm done.

However, you clarify you don't want this because:

> Changing the type signature (which, by the way, could be at least in easy cases implicitly deduced by the compiler rather than edited by hand) allows new client code to assume that the result is present.

But this cuts both ways. If the old call site can assume there may be errors (even though the new type "b" doesn't specify them) then the new call site cannot assume there are no errors (what works for old must work for new).

I must say I see no real objection to the proposal at https://news.ycombinator.com/item?id=41519649 besides "I don't like it", which is not very compelling to me.


(or absent in the case of input parameters)


A function in which the input is needed for the computation is very different to one where it's not needed. I would expect the type signature to reflect this, why would you want it otherwise?


Say you have a function which expects objects of type Foo as an input and which returns objects of type Baz. One day, the function is improved by also accepting the type Bar, i.e. Foo|Bar. So Foo isn't needed for the computation, because Bar is also accepted.

Or you have a function which expects objects of type String as an input. But then you realize that in your case, null values can be handled just like empty strings. So the input type can be relaxed to String|Null.


There's a difference between empty strings and Null values imo.

Just "" != Nothing

If you want to handle empty strings as an input in Haskell, then you have a function of type `f :: String -> b` and you pattern match on your input:

  f "" = someResult
  f ...
Nothing assumes a proper null, in that there is genuinely nothing to work with. Still, you can write a function to handle it, or use `maybe`?
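A concrete sketch of both cases (the function names are mine, purely illustrative):

```haskell
-- Empty string handled by pattern matching on a plain String input
shout :: String -> String
shout ""  = "(silence)"
shout msg = msg ++ "!"

-- A genuinely absent value handled via maybe, reusing the same function
shoutMaybe :: Maybe String -> String
shoutMaybe = maybe "(silence)" shout
```

Note that `shoutMaybe (Just "")` and `shoutMaybe Nothing` take different paths to the same result, which is exactly the `Just "" /= Nothing` distinction above.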


Perhaps that theoretically solves the problem, but it sounds awfully complicated in practice.

