
It’s not a very useful line of inquiry to ask “why” LLMs are good at something. You might be able to come up with a good guess, but often the answer just isn’t knowable. Understanding the mechanics of how LLMs train and how they perform inference often isn’t sufficient to explain their behavior.




