Anthropic had a recent paper on why LLMs can't even get e.g. simple arithmetic consistently correct, much less generalize the concept of infinite series. The finding was that they don't learn to represent the mechanics of an operation; instead, they build chains of heuristics that sometimes happen to work.
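
A rough sketch of that idea in Python (purely illustrative, my own toy analogy rather than the paper's actual circuit analysis): combine a coarse magnitude heuristic with an exact last-digit heuristic. With no carry mechanism tying them together, the chain succeeds on some operand pairs and fails on others.

    # Toy analogy: add two numbers via two independent heuristic pathways
    # instead of an actual addition algorithm.
    def heuristic_add(a: int, b: int) -> int:
        # Pathway 1: coarse magnitude estimate (round each operand to the
        # nearest ten, then add the rounded values).
        magnitude = round(a, -1) + round(b, -1)
        # Pathway 2: exact last digit, computed independently of magnitude.
        last_digit = (a + b) % 10
        # Combine: overwrite the estimate's last digit with the exact one.
        # There is no carry handling, so results are only sometimes right.
        return magnitude - (magnitude % 10) + last_digit

    if __name__ == "__main__":
        for a, b in [(12, 34), (36, 59), (23, 41), (48, 76)]:
            got, want = heuristic_add(a, b), a + b
            print(f"{a} + {b} = {got} (exact: {want}) "
                  f"{'ok' if got == want else 'WRONG'}")

Running it shows the heuristic chain getting (12, 34) and (23, 41) right while botching (36, 59) and (48, 76), which is the failure mode the comment describes: locally plausible shortcuts with no underlying model of the operation.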

