
That’s not the case. It is significantly faster, at least for deep learning workloads. Check out Lambda Labs’ benchmarks.


https://lambdalabs.com/gpu-benchmarks for those who are curious. The difference ends up being substantial no matter how you slice it, up to 60% better. Of course, this is probably due to the increased interconnect bandwidth and memory rather than raw compute horsepower, but for the workloads in question that's exactly what matters.
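The bandwidth explanation can be sanity-checked with a simple roofline estimate: for memory-bound kernels, achievable throughput scales with memory bandwidth, not peak FLOPS, so a ~60% speedup can fall out of the bandwidth ratio alone. The GPU specs and arithmetic intensity below are illustrative placeholders, not Lambda's measured numbers:

```python
def attainable_tflops(peak_tflops, bandwidth_gbs, flops_per_byte):
    # Roofline model: throughput is capped by either peak compute or
    # (arithmetic intensity x memory bandwidth), whichever binds first.
    return min(peak_tflops, flops_per_byte * bandwidth_gbs / 1000.0)

# Hypothetical "old" vs "new" GPU (made-up round numbers for illustration)
old = {"peak_tflops": 120.0, "bandwidth_gbs": 900.0}
new = {"peak_tflops": 150.0, "bandwidth_gbs": 1500.0}

# A memory-bound kernel: low arithmetic intensity, e.g. ~50 FLOPs/byte
intensity = 50.0
t_old = attainable_tflops(old["peak_tflops"], old["bandwidth_gbs"], intensity)
t_new = attainable_tflops(new["peak_tflops"], new["bandwidth_gbs"], intensity)
print(f"old: {t_old:.0f} TFLOPS, new: {t_new:.0f} TFLOPS, "
      f"speedup: {t_new / t_old:.2f}x")
# With these numbers the kernel is bandwidth-bound on both GPUs, so the
# speedup (~1.67x) matches the bandwidth ratio, not the 1.25x FLOPS ratio.
```

With these illustrative figures, a 25% bump in peak compute yields a 67% real speedup on a memory-bound workload, which is why bandwidth and memory capacity dominate these benchmark deltas.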



