
You could probably derive some smart initialization for the first layer of a NN based on domain knowledge (color spaces, Sobel filters, etc.). But since this is such a small part of what the NN has to learn, I expect it would yield at most a small improvement in training time and no effect on final accuracy, so it's unlikely to be worth the complexity of developing such a feature.
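For concreteness, seeding a 3x3 conv layer's weights with Sobel edge kernels might look like the sketch below (NumPy; the `sobel_init` helper, channel counts, and the even/odd channel split are all assumptions for illustration, not anything from a real framework):

```python
import numpy as np

def sobel_init(out_channels, in_channels):
    """Hypothetical helper: build a (out, in, 3, 3) conv weight tensor
    whose output channels alternate between horizontal and vertical
    Sobel edge filters, replicated across input channels."""
    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=np.float32)
    sobel_y = sobel_x.T  # vertical-edge variant
    w = np.zeros((out_channels, in_channels, 3, 3), dtype=np.float32)
    for o in range(out_channels):
        k = sobel_x if o % 2 == 0 else sobel_y
        w[o] = k / in_channels  # normalize so responses average over inputs
    return w

w = sobel_init(8, 3)  # e.g. 8 filters over an RGB input
```

A trained network typically discovers comparable edge detectors in its first layer on its own, which is exactly why hand-seeding them buys so little.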


Absolutely this.

Seems like on HN people are still learning 'the bitter lesson'.


Amdahl’s law?



Thank you!


Sorry - should have included a cite.

That said, Amdahl's law is probably also related to some degree - I would view YUV conversion as an unnecessary optimization.
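One way to see why it's unnecessary: RGB-to-YUV is just a fixed 3x3 linear map, so a learned first layer can absorb it for free. A sketch (coefficients assumed to be the common BT.601-style values):

```python
import numpy as np

# Assumed BT.601-style RGB -> YUV coefficients (full-range approximation).
M = np.array([[ 0.299,    0.587,    0.114  ],
              [-0.14713, -0.28886,  0.436  ],
              [ 0.615,   -0.51499, -0.10001]], dtype=np.float32)

def rgb_to_yuv(img):
    """img: array of shape (..., 3) with RGB in [0, 1].
    A single matrix multiply per pixel - i.e. a linear layer with no bias."""
    return img @ M.T
```

Since any linear transform composes into the first layer's weights, pre-converting to YUV can't give the network anything it couldn't learn in one gradient step's worth of extra capacity.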



