
On the contrary, I think it demonstrates an inherent limit to the kinds of tasks and datasets that human beings care about.

It's known that large neural networks can memorize even completely random labels. The number of possible random datasets is unfathomably large, and the weights of networks trained on random data would probably not lie in a low-dimensional subspace.

It's only the interesting-to-humans datasets, as far as I know, that drive the neural network weights into a low-dimensional subspace.




