Also, the errors an LLM makes are not the errors a typical human would make, which makes reviewing its code particularly challenging.


I find the errors they make are often comically "human", like forgetting a premise.
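A hypothetical illustration of what "forgetting a premise" can look like in generated code (the prompt, function name, and bug below are all invented for the example):

    # Suppose the prompt was: "dedupe a list of user IDs,
    # preserving the original order". Plausible model output:

    def dedupe(ids):
        # Removes duplicates correctly, but silently drops the
        # "preserve original order" premise by round-tripping
        # through an unordered set.
        return list(set(ids))

    print(dedupe([3, 1, 3, 2, 1]))  # arbitrary order, e.g. [1, 2, 3]

The code is clean and plausible, which is exactly why a reviewer scanning for typical human slips (off-by-one errors, typos) can miss a dropped requirement.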