Ping me when the software stack for the AMD hardware is as good as CUDA.


What exactly are you missing when hipifying your CUDA codebase? For most of the software I've looked at, this has been a breeze, mostly consisting of setting up toolchains.
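
For concreteness, here's a minimal sketch of what a hipified CUDA saxpy ends up looking like (assuming a working ROCm install and building with hipcc). Only the cuda* runtime calls get renamed; the kernel code and the <<<>>> launch syntax carry over unchanged:

    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    // Kernel body is identical to the CUDA version.
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        std::vector<float> hx(n, 1.0f), hy(n, 2.0f);

        float *dx, *dy;
        hipMalloc((void**)&dx, n * sizeof(float));   // was cudaMalloc
        hipMalloc((void**)&dy, n * sizeof(float));
        hipMemcpy(dx, hx.data(), n * sizeof(float), hipMemcpyHostToDevice);  // was cudaMemcpy
        hipMemcpy(dy, hy.data(), n * sizeof(float), hipMemcpyHostToDevice);

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);  // launch syntax unchanged
        hipDeviceSynchronize();                            // was cudaDeviceSynchronize

        hipMemcpy(hy.data(), dy, n * sizeof(float), hipMemcpyDeviceToHost);
        std::printf("y[0] = %f\n", hy[0]);                 // expect 4.000000

        hipFree(dx);                                       // was cudaFree
        hipFree(dy);
        return 0;
    }

The hipify-perl / hipify-clang tools do this rename mechanically (e.g. hipify-perl saxpy.cu > saxpy.cpp), which is why the remaining work is mostly toolchain setup.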

Or do you mean the profiler tooling?

I hear everyone say that AMD doesn't have the software, but I'm a little confused: have you tried HIP? And have you tried the automatic CUDA-to-HIP translation? What's missing?


I think they're referring to support in general. Technically HIP exists, but it's a pain to actually use: it has limited hardware support, is far less reliable when it comes to older hardware, needs a new binary for every new platform, and so on.

CUDA runs on pretty much every NVIDIA GPU; this year they dropped support for GPUs released 9 years ago, and older binaries are very likely to be forward compatible.

Meanwhile my Radeon VII is already unsupported despite still being pretty capable (especially for FP64), and my 5700 XT was never supported at all (I may be mixing this up with support for their math libraries); everyone was just led on with promises of upcoming support for two years. So "AMD has the software now" is not really convincing.


I suppose if you're talking about consumer cards, I agree, support is often missing.

But if we're talking datacenter GPUs, the software is there. Data centers are where most GPGPU computing happens, after all.

It's not ideal when it comes to hobby development, but if you're working in a professional capacity I'm assuming you're working with a modern HPC or AI cluster.


Well, to offer my own experience bringing GPU acceleration to scientific computation software: AMD got passed over because, even if the software had been there at the time (it wasn't), there was no way to justify spending a bunch of money on workstation AMD GPUs for everyone when we could just start by picking up basic consumer NVIDIA cards for anyone who didn't already have one, and then worry about buying more suitable cards if needed.

Of course the end goal was to run on a large HPC cluster we had access to, but for efficient development, support on personal machines was necessary. My personal dual-3090 setup has been invaluable for getting through debugging and testing before dealing with the queueing system on the cluster. (Side note: it also revealed another important benefit of consumer-side support for GPGPU: a 3090 easily matched the performance of a single CPU-only node of the cluster, massively bringing down the cost of entry to an otherwise computationally restrictive topic.)


This is a very valid point. AMD 100% needs better support for consumer cards to get researchers, universities, and everyday folks using their software. The high-end consumer 7900 XTX is supported now, and a lot of other cards not officially supported actually work (I have a colleague who tried it on his home computer and got it working, despite that card not being on the official list). Still, AMD needs to get more cards working ASAP.


There is a truly gigantic demand for this - I expect you won't be waiting too long.


Related (published yesterday): Intel CEO attacks Nvidia on AI: 'The entire industry is motivated to eliminate the CUDA market' https://www.tomshardware.com/tech-industry/artificial-intell...


The chip industry is, sure. But are the customers? The customers who cared are jaded by nearly 15 years of Intel and AMD utterly failing to make a compelling alternative, and they likely have a large existing investment in CUDA-based GPUs.


Yes, but no customer wants to give Nvidia monopoly money forever either. So like it or not they need alternatives.


> but no customer wants to give Nvidia monopoly money forever either.

From a consumer perspective, I agree. From a datacenter, edge, and industrial application perspective, though, I think those crowds are content funding an effective monopoly. Hell, even after CUDA gets dethroned for AI, it wouldn't surprise me if the demand for supporting older CUDA codebases continued. AI is just one facet of HPC.

We'll see where things go in the long-run, but unless someone resurrects OpenCL it feels unlikely that we'll be digging CUDA's grave anytime soon. In the world where GPGPU libraries are splintered and proprietary, the largest stack is king.


I wish I could buy ML cards with Monopoly money.


I support this product that uses off-the-shelf NVIDIA GPUs to do <thing> on the computer they use for <big machine>. I see a lot of IT departments and MSPs asking if they can use AMD GPUs, because apparently a lot of Dells come with them or something. I always have to tell them no and to just buy a P1000 or one of the other cheapo Quadros instead, and I hate it.


Whether it's Microsoft with Win32, AMD with x86-64, or Nvidia with CUDA, the winner is always the guy who enables people to do things with their computers.

Meanwhile, priests like Intel with Itanium, Microsoft with WinRT, FOSS nerds, or AMD with GPUs will continue failing because most people ain't got no time to be preached to about how they achieve something.



It feels like playing software catch-up in this fast-moving sector is very challenging. NVIDIA+CUDA is kind of a standard at this point. AMD's CPUs still ran Windows. AMD's GPUs still ran DirectX and OpenGL. This feels different.


I’ve been waiting sixteen years.


There has never been more money riding on eliminating the CUDA monopoly than now.


That’s been true for 16 years.


It's now a trillion dollar market (as measured by market cap). This has only been true for a few months.


Well, that's with the 1000% profit from Nvidia's monopoly.


Eh, 16 years ago CUDA was the cheap option, compared to other HPC offerings.

And there wasn't a parts shortage (modulo some cryptocurrency mining, but that impacted both GPU vendors)

And ML models weren't so large as to make 8GB of vram sound meagre.

And there weren't a bunch of venture capitalists throwing money at the work, because the state of the art models were doing uninspiring things. Like trying to tag your holiday photos, but doing it wrong because they couldn't tell a bicycle helmet and a bicycle apart.


HIP is a direct drop-in for CUDA. It's ready, and many folks using it ported their CUDA code with little to no effort.

The SW story has been bad for a long time, but right now it is perhaps better than you think.
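
To give a sense of scale: the HIP runtime API mirrors the CUDA runtime API almost name-for-name, so a typical port is a rename pass plus the usual error-checking boilerplate. A rough sketch (not an exhaustive mapping, and assuming hipcc is available):

    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    // Same shape as the usual CUDA_CHECK macro, just with the hip* names:
    //   cudaMalloc -> hipMalloc, cudaMemcpyAsync -> hipMemcpyAsync,
    //   cudaStreamCreate -> hipStreamCreate, cudaGetLastError -> hipGetLastError, ...
    #define HIP_CHECK(call)                                                     \
        do {                                                                    \
            hipError_t err = (call);                                            \
            if (err != hipSuccess) {                                            \
                std::fprintf(stderr, "HIP error: %s at %s:%d\n",                \
                             hipGetErrorString(err), __FILE__, __LINE__);       \
                std::exit(EXIT_FAILURE);                                        \
            }                                                                   \
        } while (0)

    int main() {
        int count = 0;
        HIP_CHECK(hipGetDeviceCount(&count));   // was cudaGetDeviceCount
        std::printf("HIP devices found: %d\n", count);
        return 0;
    }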



