
You need to consider this in the context of the relevant task. Nvidia GPUs have extremely high peak performance for GEMM, but for LLM inference, memory bandwidth (and RAM capacity) becomes the limiting factor: generating each token requires streaming essentially all of the model's weights from memory. There is a reason why ML-focused datacenter Nvidia GPUs come with much wider memory interfaces, at a much higher price point. The M2 Ultra might not have the raw compute, but it has a lot of RAM, high memory bandwidth, and large caches. A rough sketch of the arithmetic is below.
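To make the bandwidth argument concrete, here's a back-of-envelope estimate. The figures are assumptions for illustration (a ~70B-parameter model quantized to roughly 4 bits per weight, about 35 GB, and the M2 Ultra's quoted ~800 GB/s unified memory bandwidth), not measurements, but they show why decode speed is bounded by bandwidth rather than by peak GEMM throughput:

    # Back-of-envelope, bandwidth-bound estimate for single-stream LLM
    # token generation. Illustrative assumptions: ~70B params at ~4 bits
    # per weight (~35 GB), M2 Ultra unified memory at ~800 GB/s.

    def max_tokens_per_sec(model_bytes: float, bandwidth_bytes_per_sec: float) -> float:
        # Each generated token streams essentially all weights from RAM,
        # so bandwidth / model size is an upper bound on decode speed.
        return bandwidth_bytes_per_sec / model_bytes

    GB = 1e9
    weights = 35 * GB        # assumed: 70B params at ~4 bits/param
    m2_ultra_bw = 800 * GB   # Apple's quoted memory bandwidth
    print(f"Upper bound: {max_tokens_per_sec(weights, m2_ultra_bw):.0f} tok/s")

That works out to roughly 20-25 tokens/s no matter how much compute sits behind it, and a consumer Nvidia card with 24 GB of VRAM can't even hold those weights locally. That's the point: capacity and bandwidth, not peak FLOPS, set the ceiling here.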

