
> the definitions are cloudy enough […]

This is one of the biggest traps I’ve seen in code review. Generally, everyone is coming from a good place of “I’m reviewing this code to maintain codebase quality. This technically could cause problems. Thus I’m obligated to mention it”. Since the line of “could cause problems (important enough to mention)” is subjective, you can (and will, in my experience) get good-natured pedants. They’ll block a 100-LOC patch for weeks because “well, if we name this variable x that COULD cause someone to think of it like y, so we can’t name it x” or “this pattern you used has <insert textbook downsides that generally aren’t relevant for the problem>. I would do it with this other pattern (which has its own downsides, but I won’t say them)”.


The “Stop at first level of type implementation” is where I see codebases fail at this. The example of “I’ll wrap this int as a struct and call it a UUID” is a really good start, and you should pretty much always start there, but inevitably someone will circumvent the safety. They’ll see a function that takes a UUID and they have an int, so they blindly wrap their int in a UUID and move on. There’s nothing stopping that UUID from not actually being universally unique, so suddenly code which relies on that assumption breaks.

This is where the concept of “Correct by construction” comes in. If any of your code has a precondition that a UUID is actually unique then it should be as hard as possible to make one that isn’t. Be it by constructors throwing exceptions, inits returning Err or whatever the idiom is in your language of choice, the only way someone should be able to get a UUID without that invariant being proven is if they really *really* know what they’re doing.

(Sub UUID and the uniqueness invariant for whatever type/invariants you want, it still holds)
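
A minimal sketch of what that can look like in Rust, for concreteness (the “nonzero” check and all names here are purely illustrative stand-ins for a real invariant):

```
mod uuid {
    /// The inner field is private, so code outside this module cannot wrap a
    /// raw integer without going through the validating constructor below.
    pub struct Uuid(u128);

    #[derive(Debug)]
    pub struct InvalidUuid;

    impl Uuid {
        /// The only public way in; the invariant gets checked here or not at all.
        /// ("Nonzero" is a stand-in for whatever the real invariant would be.)
        pub fn new(raw: u128) -> Result<Uuid, InvalidUuid> {
            if raw == 0 { Err(InvalidUuid) } else { Ok(Uuid(raw)) }
        }
    }
}

// Code with a "this must be a valid UUID" precondition can now trust the type,
// because there is no unchecked way to produce one.
fn store(_id: uuid::Uuid) {}

fn main() {
    match uuid::Uuid::new(42) {
        Ok(id) => store(id),
        Err(e) => eprintln!("refused to construct: {:?}", e),
    }
}
```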


> This is where the concept of “Correct by construction” comes in.

This is one of the basic features of object-oriented programming that a lot of people tend to overlook these days in their repetitive rants about how horrible OOP is.

One of the key things OO gives you is constructors. You can't get an instance of a class without having gone through a constructor that the class itself defines. That gives you a way to bundle up some data and wrap it in a layer of validation that can't be circumvented. If you have an instance of Foo, you have a firm guarantee that the author of Foo was able to ensure the Foo you have is a meaningful one.

Of course, writing good constructors is hard because data validation is hard. And there are plenty of classes out there with shitty constructors that let you get your hands on broken objects.

But the language itself gives you a direct mechanism to do a good job here if you care to take advantage of it.

Functional languages can do this too, of course, using some combination of abstract types, the module system, and factory functions as convention. But it's a pattern in those languages, whereas it's a language feature in OO languages. (And as any functional programmer will happily tell you, a design pattern is just a sign of a missing language feature.)


I find regular OOP language constructors too restrictive. You can't return something like Result<CorrectObject,ConstructorError> to handle the error gracefully or return a specific subtype; you need a static factory method to do anything more than guaranteed successful construction without an exception.

Does this count as a missing language feature by requiring a "factory pattern" to achieve that?


The natural solution for this is a private constructor with public static factory methods, so that the user can only obtain an instance (or the error result) by calling the factory methods. Constructors need to be constrained to return an instance of the class, otherwise they would just be normal methods.

Convention in OOP languages is (un?)fortunately to just throw an exception though.


In languages with generic types such as C++, you generally need free factory functions rather than static member functions so that type deduction can work.


> You can't return something like Result<CorrectObject,ConstructorError> to handle the error gracefully

Throwing an error is doing exactly that, though; it's the same thing in theory.

What you are asking for is just more syntactic sugar around error handling; otherwise all of that already exists in most languages. If you are talking about performance, that can easily be optimized at compile time for those short throw/catch blocks.

Java even forces you to handle those errors in code, so don't say these are silent; there is no reason they need to be.


This is why constructors are dumb IMO and the Rust way is the right way.

Nothing stops you from returning Result<CorrectObject, ConstructorError> from a CorrectObject::new(..) function, because it's just a regular function; struct field visibility takes care of you not being able to construct an incorrect CorrectObject.


I don't see this having much to do with OOP vs FP but maybe the ease in which a language lets you create nominal types and functions that can nicely fail.

What sucks about OOP is that it also holds your hand into antipatterns you don't necessarily want, like adding behavior to what you really just wanted to be a simple data type, because a class is an obvious junk drawer to put things in.

And, like your example of a problem in FP, you have to be eternally vigilant with your own patterns to avoid antipatterns, like when you accidentally create a system where you have to instantiate and wire together multiple classes to do what would otherwise be a simple `transform(a: ThingA, b: ThingB, c: ThingC): ThingZ`.

Finally, as "correct by construction" goes, doesn't it all boil down to `createUUID(string): Maybe<UUID>`? Even in an OOP language you probably want `UUID.from(string): Maybe<UUID>`, not `new UUID(string)` that throws.


> Even in an OOP language you probably want `UUID.from(string): Maybe<UUID>`, not `new UUID(string)` that throws.

One way to think about exceptions is that they are a pattern matching feature that privileges one arm of the sum type with regards to control flow and the type system (with both pros and cons to that choice). In that sense, every constructor is `UUID.from(string): MaybeWithThrownNone<UUID>`.


The best way to think about exceptions is to consider the term literally (as in: unusual; not typical) while remembering that programmers have an incredibly overinflated sense of ability.

In other words, exceptions are for cases where the programmer screwed up. While programmers screwing up isn't unusual at all, programmers like to think that they don't make mistakes, and thus in their eye it is unusual. That is what sets it apart from environmental failures, which are par for the course.

To put it another way, it is for signalling at runtime what would have been a compiler error if you had a more advanced compiler.


Unfortunately many languages treat exceptions as a primary control flow mechanism. That's part of why Rust calls its exceptions "panics" and provides the "panic=abort" compile-time option, which aborts the program instead of unwinding the stack with the possibility of catching the unwind. As a library author you can never guarantee that `catch_unwind` will ever get used, so preventing unwinding across an FFI boundary is about all it tends to get used for.
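
For illustration, this is the usual shape of that FFI-guard use of `catch_unwind` under the default unwind strategy; the function names and the error code are made up:

```
use std::panic;

// Work that might panic somewhere deep in the call stack.
fn do_work(value: i32) -> i32 {
    assert!(value >= 0, "negative input");
    value * 2
}

// A function handed to C code. Letting a panic unwind across this boundary
// would be undefined behavior, so catch it and hand back an error code instead.
pub extern "C" fn process(value: i32) -> i32 {
    match panic::catch_unwind(|| do_work(value)) {
        Ok(v) => v,
        Err(_) => -1, // tell the C caller something went wrong
    }
}

fn main() {
    println!("{}", process(21)); // 42
    println!("{}", process(-1)); // -1; the panic is contained (its message still hits stderr)
}
```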


> Unfortunately many languages

Just Java (and Javascript by extension, as it was trying to copy Java at the time), really. You do have a point that Java programmers have infected other languages with their bad habits. For example, Ruby was staunchly in the "return errors as values and leave exception handling for exceptions" camp before Rails started attracting Java developers, but these days all bets are off. But the "purists" don't advocate for it.


Python as well. E.g. FileNotFoundError is an exception instead of a returned value.


> Functional languages can do this too, of course, using some combination of abstract types, the module system, and factory functions as convention

In Haskell:

1. Create a module with some datatype

2. Don't export the datatype's constructors

3. Export factory functions that guarantee invariants

How is that more complicated than creating a class and adding a custom constructor? Especially if you have multiple datatypes in the same module (which in e.g. Java would force you to add multiple files, and if there's any shared logic, well, that will have to go into another extra file - thankfully some more modern OOP languages are more pragmatic here).

(Most) OOP languages treat a module (an importable, namespaced subunit of a program) and a type as the same thing, but why is this necessary? Languages like Haskell break this correspondence.

Now, what I'm missing from Haskell-type languages is parameterised modules. In OOP, we can instantiate classes with dependencies (via dependency injection) and then call methods on that instance without passing all the dependencies around, which is very practical. In Haskell, you can simulate that with currying, I guess, but it's just not as nice.
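
As a rough sketch of that "simulate it with currying" approach, here is dependency injection via a closure, written in Rust for concreteness (all names invented for the example):

```
// A dependency you would rather not thread through every call site.
struct Database {
    url: String,
}

impl Database {
    fn lookup(&self, key: &str) -> Option<String> {
        Some(format!("row for {} via {}", key, self.url)) // stand-in for a real query
    }
}

// Instead of an object holding the dependency, partially apply it: the returned
// closure captures the Database and exposes only the "method".
fn make_user_fetcher(db: Database) -> impl Fn(&str) -> Option<String> {
    move |id: &str| db.lookup(id)
}

fn main() {
    let fetch_user = make_user_fetcher(Database { url: "postgres://example".into() });
    println!("{:?}", fetch_user("42"));
}
```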


Indeed, OOP and FP both allow and encourage attaching invariants to data structures.

In my book, that's the most important difference from C, Zig, or Go-style languages, which consider data structures to be mostly descriptions of memory layout.


You have it backwards from where I'm standing.

'null' (and to a large extent mutability) drives a gigantic hole through whatever you're trying to prove with correct-by-construction.

You can sometimes annotate against mutability in OO, but even then you're probably not going to get given any persistent collections to work with.

The OO literature itself recommends against using constructors like that, opting for static factory pattern instead.


Nullability doesn't have anything to do with object-oriented programming.


Yes, yes, "No true OO language ..." and all that.

But I'm going to keep conflating the two until they release an OO language without nulls.


Funny enough, he has talked about this exact problem on his podcast “Two’s Complement”, specifically the episode “The Future of Compiler Explorer”. Commenters below are correct that it’s just about how heavily associated his name is with the tool. I just figured I’d also drop this source here because he has a lot of interesting things to say about his involvement with the project.


For anyone else wanting to listen to the episode, this site worked well for me:

https://podtail.com/en/podcast/two-s-complement/the-future-o...

It does have ads, but they were not too intrusive. If there’s an ad on first click, scroll down and there’s a play button that plays the episode.

For me the ads it showed were only text and images, not audio interrupting ads.

You can also listen to it on YouTube:

https://www.youtube.com/watch?v=2QXo5c7cUKQ

But since it’s audio only, I preferred listening to it via the aforementioned podcast website.


I’ve stumbled into this problem before while drafting a language I want to make*. A lot of the design philosophy is “symbols for language features”, and as such import/export is handled by `<~` and `~>`. An example of an exported function:

```
<~ foo := (a: int) { a - 1 }
```

Then at the import site:

```
~> foo
```

* some day it’ll totally for real make it off the page and into an interpreter I’m sure :,)


I think the key to understanding why people want this is that those people care about results more than the act of coding. The easy example for this is a corporation. If the software does what was said on the product pitch, it doesn’t matter if the developer had fun writing it. All that matters is that it was done in an efficient enough (either by money or time) manner.

A slightly less bleak example is data analysis. When I am analyzing some dataset for work or home, being able to skip over the “rote” parts of the work is invaluable. Examples off the top of my head being: when the data isn’t in quite the right structure, or I want to add a new element to a plot that’s not trivial. It still has to be done with discipline and in a way that you can be confident in the results. I’ll generally lock down each code generation to only doing small subproblems with clearly defined boundaries. That generally helps reduce hallucinations, makes it easier to write tests if applicable and makes it easier to audit the code myself.

All of that said, I want to make clear that I agree that your vision of software engineering becoming LLM code review hell sounds like… well, hell. I’m in no way advocating that the software engineering industry should become that. Just wanted to throw in my two cents.


If you care about the results you have to care about the craft, full stop.


Probably the most unfortunate thing is that the whole AI garbage trend exposes how little people care about the craft, leading to garbage results.

As a comparison point, I've gone through over 12,000 games on Steam. I've seen endless games where large portions are LLM generated: images, code, models, banner artwork, writing. None of it is worth engaging with, because every single one is a bunch of disjointed pieces shoved together.

Codebases are going to be exactly the same: a bunch of different components and services put together with zero design principle or cohesion in mind.


It really feels the same as weed/nicotine/alcohol/sex/other vices. If history has taught us anything, outright banning them only makes them into forbidden fruit. We need to explain (and frequently reinforce) these negative effects of modern phone use so kids can grow up understanding them. Right now, it seems like a lot of people really only start to understand the impacts of this kind of phone use long after they're addicted. Hopefully informing them before that happens would help.

Of course, this kind of thing is easy to do wrong. Programs like D.A.R.E. and THRIVE tried going the way of fear tactics which seems to really not work well. We need to have an open and honest discussion about "yes, this is fun. But it DOES have a bad side" instead.

The last sticking point there is that it assumes people will be rational and come to the conclusion of using with moderation. Hopefully people can be rational... Otherwise I think there's no hope for us in solving the brainrot epidemic.


"We need to explain..."

From my own experience and that of fellow parents that I talked to, explanations will be dismissed outright by the all-knowing teenagers, and any attempt to have a rational conversation on the topic will fail. Just like any addict, kids will deny that they are addicted. I had to act once the smartphone addiction reached a disaster level. What worked the best for me was "no you cannot bring your phone to school or use it before the homework is done, that's my decision and I don't have to provide you with any explanation." Did this generate some resentment and a few tantrums? You bet, but I got the result I wanted, peace of mind and homework done on time. I disagree with you.


> outright banning them only makes them into forbidden fruit

I think it should be fine to outright ban them in certain contexts, like classroom learning; just as they are outright banned (usually) in theaters or playhouses or places of worship.

And to cite your example, even in the most liberal jurisdictions I think it's not acceptable for students to take drugs in the classroom. Phones are basically the same thing.


Oop, I totally missed the "during the school day" part of the grandparent comment. I agree with banning them during the school day. My argument was against a point the grandparent wasn't actually making, which was banning phones for K-12 students both during and after the school day.


> If history has taught us anything, outright banning them only makes them into forbidden fruit.

They may be 'forbidden fruit', but does that mean it would lead to more use of them?

Do you think people drank more in 2020 or 1920 during prohibition?

Do you think people smoked more weed in 2025 or, say, 1985 when it was less legal?

Do you think there is more gambling in 2025, or in 1925 when the laws banning it were still fresh?

I think you'll reach the conclusion that outright banning does in fact reduce the usage of the vice.


OP didn't say ban. They said restrict. Moderation is what's needed here.


> A good starting point would be fully banning all phones for the entirety of the school day in K-12.

Is what I was responding to in the grandparent of your comment


“Banning” during a specific time at a specific location is not really a “ban”. It is a restriction.


Oh I just realized I missed the "during the school day" part of the comment I cited. That's totally my mistake. For what it's worth, I agree with banning during the school day but (although no one is making the point here) I would disagree with banning them from children everywhere always.


What is really needed is parents that teach their kids impulse control and how to prioritize, to know what is extracurricular and what is not. You can play video games, smoke weed, do whatever on your phone once your work is done, not before or during.


As a society we need to help parents to achieve that. It’s not helpful to just blame parents.


There was no mention of an outright ban, merely restrictions on use. Much as we have restrictions on where and when one can indulge in weed, nicotine, alcohol, and so forth.


You are correct. I absolutely missed the "during the school day" stipulation.


> It really feels the same as weed/nicotine/alcohol/sex/other vices ... banning them only makes them into forbidden fruit.

How many 10-year-olds smoke weed, have sex, and drink alcohol?

10-year-olds spending hours per day on their phones, on the other hand...


I'm curious about what you're working on. Do you have a repo for the project?

As for optimizations, I figure evolution-designed languages might come up with things that are hard to pattern match for more complex operations.


The core idea of what I am working on is to build a solution that can generate programs capable of converting arbitrary input to arbitrary output (bytes to bytes) based on a reasonable quantity of training data.

I'm trying to determine if a more symbolic approach may lend itself to broader generalization capabilities in lieu of massive amounts of training data. I am also trying to determine if dramatically simplified, CPU-only architectures could be feasible. I.e., ~8 interpreted instructions combined with clever search techniques (tournament selection & friends).
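
Not the project's actual code, but a toy sketch of the kind of loop being described: a handful of interpreted instructions, a byte-to-byte fitness function over training pairs, and tournament selection with point mutation. Everything here (the instruction set, the target function, the constants) is invented for illustration:

```
// Toy instruction set: a single u8 accumulator and four operations.
#[derive(Clone, Copy)]
enum Op {
    Add(u8),
    Xor(u8),
    Not,
    Shl,
}

fn run(program: &[Op], input: u8) -> u8 {
    let mut acc = input;
    for op in program {
        acc = match *op {
            Op::Add(k) => acc.wrapping_add(k),
            Op::Xor(k) => acc ^ k,
            Op::Not => !acc,
            Op::Shl => acc << 1,
        };
    }
    acc
}

// Fitness: total distance from the desired outputs (lower is better).
fn fitness(program: &[Op], data: &[(u8, u8)]) -> u32 {
    data.iter()
        .map(|&(x, y)| (run(program, x) as i32 - y as i32).unsigned_abs())
        .sum()
}

// Tiny xorshift PRNG so the sketch has no dependencies.
struct Rng(u64);
impl Rng {
    fn next(&mut self) -> u64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0
    }
    fn below(&mut self, n: usize) -> usize {
        (self.next() % n as u64) as usize
    }
}

fn random_op(rng: &mut Rng) -> Op {
    match rng.below(4) {
        0 => Op::Add((rng.next() % 256) as u8),
        1 => Op::Xor((rng.next() % 256) as u8),
        2 => Op::Not,
        _ => Op::Shl,
    }
}

fn main() {
    // Training pairs for a made-up target function: f(x) = (x ^ 0b1010) + 3.
    let data: Vec<(u8, u8)> = (0..=255u8)
        .map(|x| (x, (x ^ 0b1010).wrapping_add(3)))
        .collect();

    let mut rng = Rng(0x9E3779B97F4A7C15);
    let mut population: Vec<Vec<Op>> = Vec::new();
    for _ in 0..64 {
        let program: Vec<Op> = (0..4).map(|_| random_op(&mut rng)).collect();
        population.push(program);
    }

    for _generation in 0..10_000 {
        // Tournament of two: the better program survives; the loser's slot is
        // replaced by a point-mutated copy of the winner.
        let a = rng.below(population.len());
        let b = rng.below(population.len());
        let (winner, loser) = if fitness(&population[a], &data) <= fitness(&population[b], &data) {
            (a, b)
        } else {
            (b, a)
        };
        let mut child = population[winner].clone();
        let slot = rng.below(child.len());
        child[slot] = random_op(&mut rng);
        population[loser] = child;
    }

    let best = population.iter().map(|p| fitness(p, &data)).min().unwrap();
    println!("best fitness after search: {}", best);
}
```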

I don't have anything public yet. I am debating going wide open for better collaboration with others.

> I figure evolution-designed languages might come up with things that are hard to pattern match for more complex operations.

I think I agree with this - once you hit a certain level of complexity things would get really hard to anticipate. The chances you would hit good patterns would probably drop over time as the model improves.

I've been looking at an adjacent idea wherein a meta program is responsible for authoring the actual task program each time, but I haven't found traction here yet. Adding a 2nd layer really slows down the search. And, the fitness function for the meta program is a proxy at best unless you have a LOT of free time to critique random program sources.


I feel like a lot of people are forgetting how good LLMs are at small isolated tasks because of how much better they've gotten at larger tasks. The best experiences I've had with LLMs all involve sketching out the interfaces for components I need and letting the model fill in the implementation. That mentality also rewards choices that lead to good/maintainable code. You give functions good names so the AI knows what to implement. You make the code you ask it to generate as small as possible to minimize the chance of it hallucinating/going off the rails. You stub simple APIs for the same reason. And (unsurprisingly) small, well-defined functions are extremely testable! Which is a great trait to have for code that you know can very well be wrong.

In time the AI will be good enough to design whole applications in this vibe-code-y way... But all of the examples I've seen so far indicate that even the best publicly available models aren't there. It seems like every example I've seen has the developer bickering with the AI about something it just won't get right, often wasting more time than if they had been slightly more hands-on. Until the tech gets over that, I'll stick to treating it as the "junior developer I give a UML diagram to so they can figure out the messy parts".

