Hacker News | rastrian's comments

I really don’t like the argument that a stack lacks “industrial usage” just because no major company or FAANG is using it, when plenty of them are doing basically the same thing under the hood with internal tooling — stuff that should be language features, not a system-design library.

But your take about modern Java is correct, and they do adopt this style in internal projects for some workflows.


heisen-valves are a perfect comparison, thank you.

Mostly “boring” stuff where the type system pays rent fast:

- Domain/state machines (payments/fulfillment-style workflows): modeling states + transitions so “impossible” states literally can’t be represented.
- Parsers/DSLs & config tooling: log parsers, small interpreters, schema validation, migration planners.
- Internal CLIs / automation: batch jobs, release helpers, data shapers, anything you want to be correct and easy to refactor later.
- Small backend services when the domain is gnarly (Servant / Yesod style) rather than huge monoliths.

If you’re learning it beyond CS exposure, I’d start with a CLI + a parser (JSON/CSV/logs), then add property-based tests (QuickCheck). That combo teaches types, purity, effects, and testing in one project without needing to “go full web stack” on day 1.


You can get most of the “ADT/state-machine reliability” benefits in Python by combining static checking + tagged unions + boundary validation:

Model states as tagged unions (Union + Literal + dataclass(frozen=True)), use match (Py3.10+) and add assert_never so type checkers complain when you forget a case.

Run Pyright (strict) or mypy --strict in CI so “illegal states” show up as build failures, not incidents.

Validate/parsing at boundaries (HTTP/queues) with Pydantic discriminated unions (tagged unions at runtime), then keep internals typed.

For expected failures, prefer an explicit Result (e.g., returns) over exceptions-as-control-flow.

Use Ruff for lint/consistency (it’s not a type checker, but pairs well with one).

References here:

Pyright: https://microsoft.github.io/pyright/
mypy --strict: https://mypy.readthedocs.io/en/stable/getting_started.html
PEP 634 (match): https://peps.python.org/pep-0634/
assert_never & exhaustiveness guide: https://typing.python.org/en/latest/guides/unreachable.html
typing_extensions (backports): https://typing-extensions.readthedocs.io/
Pydantic discriminated unions: https://docs.pydantic.dev/latest/concepts/unions/
returns Result: https://returns.readthedocs.io/en/latest/pages/result.html
Ruff FAQ: https://docs.astral.sh/ruff/faq/


lmao

I mostly agree: for many businesses, a big SaaS outage and a payments outage can look similar in impact (lost revenue, interrupted operations). It’s not “life or death” most of the time.

The reason money-related systems often get singled out is the combination of irreversibility and auditability: a bad state transition can mean incorrect balances/settlement, messy reconciliation, regulatory reporting, and long-tail customer harm that persists after the outage is over.

That said, my point isn’t “finance is special, therefore FP.” It’s “build resilience and correctness by design early”: explicit state machines/invariants, idempotency/reconciliation, and making invalid states hard to represent. Doing this from the beginning also improves the developer experience: safer refactors, clearer reviews, fewer “tribal knowledge” bugs.


I think your Option/String example is a real-world tradeoff, but it’s not a slam-dunk “untagged > tagged.”

For API evolution, T | null can be a pragmatic “relax/strengthen contract” knob with less mechanical churn than Option<T> (because many call sites don’t care and just pass values through). That said, it also makes it easier to accidentally reintroduce nullability and harder to enforce handling consistently; the failure mode is “it compiles, but someone forgot the check.”

In practice, once the union has more than “nullable vs present”, people converge to discriminated unions ({ kind: "ok", ... } | { kind: "err", ... }) because the explicit tag buys exhaustiveness and avoids ambiguous narrowing. So I’d frame untagged unions as great for very narrow cases (nullability / simple widening), and tagged/discriminated unions as the reliability default for domain states.

For reliability, I’d rather pay the mechanical churn of Option<T> during API evolution than pay the ongoing risk tax of “nullable everywhere.”

My post argues for paying costs that are one-time and compiler-enforced (refactors) vs costs that are ongoing and human-enforced (remembering null checks).


I believe there is a misunderstanding. The compiler can check untagged unions just as much as it can check tagged unions. I don't think there is any problem with "ambiguous narrowing", or "reliability". There is also no risk of "nullable everywhere": If the type of x is Foo|Null, the compiler forces you to write a null check before you can access x.bar(). If the type of x is Foo, x is not nullable. So you don't have to remember null checks (or checks for other types): the compiler will remember them. There is no difference to tagged unions in this regard.

I think we mostly agree for the nullable case in a sound-enough type system: if Foo | null is tracked precisely and the compiler forces a check before x.bar, then yes, you’re not “remembering” checks manually; the compiler is.

Two places where I still see tagged/discriminated unions win in practice:

1. Scaling beyond nullability. Once the union has multiple variants with overlapping structure, “untagged” narrowing becomes either ambiguous or ends up reintroducing an implicit tag anyway (some sentinel field / predicate ladder). An explicit tag gives stable, intention-revealing narrowing + exhaustiveness.

2. Boundary reality. In languages like TypeScript (even with strictNullChecks), unions are routinely weakened by any, assertions, JSON boundaries, or library types. Tagged unions make the “which case is this?” explicit at the value level, so the invariant survives serialization/deserialization and cross-module boundaries more reliably.

So I’d summarize it as: T | null is a great ergonomic tool for one axis (presence/absence) when the type system is enforced end-to-end. For domain states, I still prefer explicit tags because they keep exhaustiveness and intent robust as the system grows.

If you’re thinking Scala 3 / a sound type system end-to-end, your point is stronger; my caution is mostly from TS-in-the-wild + messy boundaries.


I think the real promise of "set-theoretic type systems" comes when you don't just have (untagged) unions, but also intersections (Foo & Bar) and complements/negations (!Foo). Currently there is no such language with negations, but once you have them, the type system is "functionally complete", and you can represent arbitrary Boolean combinations of types, e.g. "Foo | (Bar & !Baz)". Which sounds pretty powerful, although the practical use is not yet quite clear.

Yep, in practice a lot of orgs treat reliability as a cost center until an outage becomes a headline or a regulatory incident. I’ve seen the same tension in payments/banking: product pressure wins until the risk is visible.

Part of why I like “make invalid states unrepresentable” approaches is exactly that: it’s one of the few reliability investments that can pay back during feature work (safer refactors, fewer regressions), not only during incidents.


I've seen reliability become incident level and then 3mo later execs are on our ass because we didn't fix another crisis fast enough.

and this company is hugely successful. so i've learned that the biggest competitive advantage in fintech is flagrant disregard for correctness and compliance.

i'm glad i have a csuite with the stones to execute that. i am way too principled.


I’ve worked in Brazilian banking stacks that were literally FTP + spreadsheets for years. So yes, the ecosystem is often messy and protocols can be flaky.

That’s exactly why I argue for stronger internal modeling: when the boundary is dirty, explicit state machines/ADTs + exhaustiveness + idempotency/reconciliation help ensure bad feeds don’t silently create invalid internal states.


I totally agree as a fellow fintech engineer. It was a battle getting approval for all that from Product for us. While we were battling for it, we rushed multiple projects without literally any of it. And then spent a year+ each time cleaning up the mess.

And your pockets were being filled in that year while you were just doing cleanups. Mission accomplished.

I get why it reads like FP evangelism, but I don’t think it’s “ignoring decades of prior art.” I’m not claiming these ideas are exclusive to FP. I’m claiming FP ecosystems systematized a bundle of practices (ADT/state machines, exhaustiveness, immutability, explicit effects) that consistently reduce a specific failure mode: invalid state transitions and refactor breakage.

Rust is actually aligned with the point: it delivers major reliability wins via making invalid states harder to represent (enums, ownership/borrowing, pattern matching). That’s not “FP-first,” but it’s very compatible with functional style and the same invariants story.

If the TS example came off as “types instead of validation,” that’s on me to phrase better. The point wasn’t “types eliminate validation”; it’s “types make the shape explicit so validation becomes harder to forget and easier to review.”


I would keep in mind how much the title communicates your intentions in future posts. The conversation about preventing invalid states has to be somewhat inferred when it could have been explicitly stated, and that would be really useful for comparing other approaches. E.g. the classic OOP style many people learned in school also avoids these problems, as would something like modern Python using Pydantic/msgspec, so it’d be useful to discuss the differences in practice, and especially with a larger scope, so people who don’t already agree with you can see how you came to that position.

For example, using the input parsing scenario, a Java 1.0 tutorial in 1995 would have said that you should create a TimeDuration class which parses the input and throws an exception when given an invalid value like “30s”. If you say that reliability requires FP, how would you respond when they point out that their code also prevents running with an invalid value? That discussion can be far more educational, especially because it might avoid derails around specific issues which are really just restating the given that JavaScript had lots of footgun opportunities for the unwary developer, even compared to some languages their grandmother might have used.

