Yep, this is a hell I've come to know over the past year. You get people trying to build things they have no business building, at least not without doing their own research and learning first. The LLM in their editor lets them spit out something that looks convincing enough to get shipped, unless someone with domain experience takes a closer look and realizes it's all wrong. It's not the same as bodged-together code from Stack Overflow: LLM code is better at getting past the sniff test, and magical thinking around generative AI leads people to put undue trust in it. I don't think we've collectively appreciated the negative impact this is going to have on software quality going forward.
