This may be highly dependent on problem domain or programming language (see the other article about GPT tending to hallucinate whenever it is given problems that don't exist in its training set). My experience has mostly been that the output (including simple requests like "test this function", though we generally avoid unit tests due to their low benefit and high cost) is consistently flawed enough that the time to fix it approaches the time to write it from scratch.
