
"Everyone" at top is also developing their own chips for inference and providing APIs for customers to not worry about using CUDA.

It looks like the price-to-performance of inference workloads gives providers a big incentive to move away from Nvidia.
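
Roughly what that looks like from the customer side: everything stays at the HTTP layer, and the provider decides what hardware the request actually runs on. A minimal sketch in Python, assuming a hypothetical OpenAI-compatible endpoint URL and model name:

    import json
    import urllib.request

    # Hypothetical provider endpoint and model name; the customer never
    # touches CUDA -- the provider routes this to whatever accelerator it runs.
    req = urllib.request.Request(
        "https://api.example-provider.com/v1/chat/completions",
        data=json.dumps({
            "model": "example-model",
            "messages": [{"role": "user", "content": "Hello"}],
        }).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer YOUR_API_KEY",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))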



There are only about three companies building AI that have the technical capability and resources to afford that, and two of them either don't offer their chips to others or have gone back to Nvidia. The rest are manufacturers desperately trying to get a piece of the pie.



