
> I keep hearing that LLMs are trained on "Internet crap" but is it true?

Karpathy repeated this in a recent interview [0]: if you looked at random samples from the pretraining set, you'd mostly see a lot of garbage text. And it's very surprising that it works at all.

The labs have focused a lot more on finetuning (post-training) and RL lately, and from my understanding that's where all the desirable properties of an LLM are trained into it. Pretraining just teaches the LLM the semantic relations it needs as the foundation for finetuning to work (a toy sketch of the two objectives is below).

[0]: https://www.dwarkesh.com/p/andrej-karpathy
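
To make that concrete, here's a toy sketch of my own (PyTorch, not anything from the interview): pretraining and SFT can share the same next-token cross-entropy objective; what changes is the data (raw web text vs. curated prompt/response pairs) and a loss mask that keeps the prompt tokens out of the SFT loss.

    # Toy sketch: pretraining and SFT share the next-token objective;
    # the data and the loss mask are what differ.
    import torch
    import torch.nn.functional as F

    vocab_size, d_model = 100, 32
    model = torch.nn.Sequential(        # stand-in for a real transformer
        torch.nn.Embedding(vocab_size, d_model),
        torch.nn.Linear(d_model, vocab_size),
    )

    def next_token_loss(tokens, loss_mask=None):
        # Cross-entropy of predicting token t+1 from tokens up to t.
        logits = model(tokens[:-1])     # (seq-1, vocab)
        targets = tokens[1:]            # (seq-1,)
        loss = F.cross_entropy(logits, targets, reduction="none")
        if loss_mask is not None:       # SFT: ignore loss on prompt tokens
            loss = loss * loss_mask[1:]
            return loss.sum() / loss_mask[1:].sum()
        return loss.mean()

    # Pretraining: raw web text, every position contributes to the loss.
    web_text = torch.randint(0, vocab_size, (16,))
    pretrain_loss = next_token_loss(web_text)

    # SFT: curated prompt+response pair, loss only on the response tokens.
    prompt_len = 6
    dialogue = torch.randint(0, vocab_size, (16,))
    mask = torch.cat([torch.zeros(prompt_len), torch.ones(16 - prompt_len)])
    sft_loss = next_token_loss(dialogue, loss_mask=mask)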



"Just" is the wrong way to put it.

Pretraining teaches LLMs everything. SFT and RL are about putting that "everything" into useful configurations and gluing it together so that it works better.
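
For the RL half, a similarly hedged toy sketch (REINFORCE-style, my own illustration, not any lab's actual recipe): the model already knows how to produce the tokens from pretraining; the update just nudges it toward continuations that a reward signal prefers.

    # Toy REINFORCE-style RL step: sample a continuation, score it,
    # and reinforce the sampled tokens in proportion to the reward.
    import torch

    vocab_size, d_model = 100, 32
    policy = torch.nn.Sequential(       # stand-in for a pretrained model
        torch.nn.Embedding(vocab_size, d_model),
        torch.nn.Linear(d_model, vocab_size),
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

    def reward_fn(tokens):
        # Placeholder reward: in practice a learned scorer of helpfulness.
        return float(tokens.float().mean() > vocab_size / 2)

    prompt = torch.randint(0, vocab_size, (8,))

    # Sample a continuation token by token, keeping log-probs for the update.
    tokens, log_probs = prompt.clone(), []
    for _ in range(8):
        logits = policy(tokens)[-1]
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
        tokens = torch.cat([tokens, tok.unsqueeze(0)])

    # Scale the log-probs of the sampled tokens by the reward and step.
    reward = reward_fn(tokens[len(prompt):])
    loss = -reward * torch.stack(log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()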



