
All Pythia models were trained on 300B tokens; LLaMA models were trained on 1T or 1.4T tokens, depending on model size.

