Yeah, no. None of those have built-in debuggers, for example. I also doubt Rust compilation is fast on slow computers.

You know things are bad when someone compares something to Spring and says: "this thing is more complicated!".

What's that?


Did they communicate this from the top or just turn a blind eye to it?

They had official trainings on how to use Copilot/ChatGPT and some other tools, plus security and safety trainings and so on. This is not some people deciding to use whatever feature was there from MS by default.

Weird, but FreePascal is fairly solid for its niche.

> If software is the commodity, what is the bespoke value-added service that can sit on top of all that?

Aggregation. Platforms that provide visibility, influence, reach.



> This pattern has already played out in chess and go. For a few years, a skilled Go player working in collaboration with a go AI could outcompete both computers and humans at go. But that era didn't last. Now computers can play Go at superhuman levels. Our skills are no longer required. I predict programming will follow the same trajectory.

Both of those are fixed, unchanging, closed, full-information games. The real world is very much not that.

Though geeks absolutely like raving about Go and especially chess.


> Both of those are fixed, unchanging, closed, full-information games. The real world is very much not that.

Yeah but, does that actually matter? Is that actually a reason to think LLMs won't be able to outpace humans at software development?

LLMs already deal with imperfect information in a stochastic world. They seem to keep getting better every year anyway.


This is like timing the stock market. Sure, share prices seem to go up over time, but we don't really know when they'll go up or down, or how long they'll stay at certain levels.

I don't buy the whole "LLMs will be magic in 6 months, look at how much they've progressed in the past 6 months". Maybe they will progress as fast, maybe they won't.


I’m not claiming I know the exact timing. I’m just seeing a trend line: GPT-3 to 3.5 to 4 to 5, Codex and now Claude. The models are getting better at programming much faster than I am. Their skill at programming doesn’t seem to be levelling out yet - at least not as far as I can see.

If this trend continues, the models will be better than me in less than a decade. Unless progress stops, but I don’t see any reason to think that would happen.


That would require accurate validation of said documents, which is extremely hard now. Pointing 1 million PDF LLM machine guns at current validation pipelines will not end well, especially since LLMs are inherently unreliable.

This is lost on people. A 98% accurate automation is useful if you can programmatically identify the 2% of cases that need human review. If you can’t, and it matters, then every case needs human review.

So you lose a lot of the benefit to the time sink, and since people’s eyes tend to glaze over when the correction rate is low, you may still miss the 2% anyway.

This is going to put a stop to a lot of ideas that sound reasonable on paper.
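To make that concrete, here's a minimal sketch of what "programmatically identify the 2%" could look like, assuming (and this is the big assumption) that the model exposes a calibrated confidence score. The names and the 0.98 threshold are made up for illustration:

    # Hypothetical triage: auto-accept only when the model's confidence
    # clears a threshold; everything else goes to a human queue.
    # This only works if `confidence` is calibrated. A model that is
    # confidently wrong 2% of the time routes nothing to review, and
    # then every case needs a human after all.
    REVIEW_THRESHOLD = 0.98

    def route(prediction: str, confidence: float) -> tuple[str, str]:
        if confidence >= REVIEW_THRESHOLD:
            return ("auto_accept", prediction)
        return ("human_review", prediction)

    # Example: three documents, one uncertain.
    for doc, conf in [("invoice_a", 0.999), ("invoice_b", 0.62), ("invoice_c", 0.98)]:
        print(route(doc, conf))

The whole argument hinges on that calibration assumption holding, which is exactly what LLMs don't reliably give you.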


> normoid

Do you work extra hard to be this arrogant or does it come naturally?

