
> If the voltage was higher (i believe 'low volt' classification tops out at 48v)

Yep, 48V through sensitive parts of the body could be unpleasant, but 24V is almost as safe as 12V. Why didn't they use 24V and 25A to achieve the required 600W of power instead of 12V and 50A?
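A rough back-of-envelope sketch of what that trade looks like (the 10 mΩ round-trip cable resistance is just an assumed illustrative figure, not a measured one):

    # Same 600 W delivered at 12 V vs 24 V, and the resulting resistive
    # loss in the cable. R_CABLE is an assumed illustrative value.
    def cable_loss(power_w, voltage_v, cable_resistance_ohm):
        current = power_w / voltage_v                 # I = P / V
        loss = current ** 2 * cable_resistance_ohm    # Joule heating in the wire
        return current, loss

    R_CABLE = 0.01  # ohms, assumed round-trip cable + connector resistance
    for volts in (12, 24):
        amps, lost = cable_loss(600, volts, R_CABLE)
        print(f"{volts} V rail: {amps:.0f} A, {lost:.2f} W lost in the cable")
    # 12 V rail: 50 A, 25.00 W lost in the cable
    # 24 V rail: 25 A, 6.25 W lost in the cable

Doubling the voltage halves the current, so I²R heating in the same conductor drops by a factor of four.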



Because no PC power supply has a 24V rail, and even though there's a fancy new connector, you can still use an adapter to feed it from the old-fashioned plugs.

After all you don't want to limit your market to people who can afford to buy both your most expensive GPU and a new power supply. In the PC market backwards compatibility is king.


>Because no PC power supply has a 24V rail

Servers with NVIDIA H200 GPUs (Supermicro ones, for example) have power supplies with a 54-volt rail, since that GPU requires it. I can easily imagine a premium ATX (non-mandatory, optional) variant that has a higher-voltage rail for people with powerful GPUs. Additional cost shouldn't be an issue considering the top-level GPUs that would need such a rail cost absurd money nowadays.


A server is not a personal computer. We are talking about enthusiast GPUs here, whose buyers will install them into their existing setups, whereas servers are usually sold as a unit including the power supply.

> Additional cost shouldn't be an issue considering top level GPUs that would need such rail cost absurd money nowadays.

Bold of you to assume that Nvidia would be willing to cut into its margin to provide an optional feature with no marketable benefit other than electrical safety.


>Nvidia would be willing to cut into its margin to provide

Why would that be optional on a top-of-the-line GPU that requires it? NVIDIA has nothing to do with it. I'm talking about defining an extended ATX standard that covers PSUs; it would be optional in the product lines of PSU manufacturers. The 12VHPWR connector support in PSUs is already a premium thing; they just didn't go far enough.


Electrical safety -> not destroying your GPU does seem like something sellable.

It could probably be spun into some performance pitch if you really want to.


A higher input voltage may eventually be used but a standard PC power supply only has 12V and lower (5V and 3.3V) available, so they'd need to use a new type of power supply or an external power supply, both of which are tough sells.

On the other hand, the voltages used inside a GPU are around 1V, and a higher input voltage means lower efficiency in the local down-conversion.
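To put a rough number on that: an ideal buck converter's duty cycle is Vout/Vin, so the higher the input rail, the more extreme the step-down to the ~1V the silicon actually runs at (a sketch with assumed round numbers, not measured figures):

    # Ideal buck converter duty cycle D = Vout / Vin for a ~1 V core rail.
    V_OUT = 1.0  # assumed approximate GPU/CPU core voltage
    for v_in in (12, 24, 48):
        print(f"{v_in:>2} V in -> {V_OUT} V out: ideal duty cycle {V_OUT / v_in:.1%}")
    # 12 V -> 8.3%, 24 V -> 4.2%, 48 V -> 2.1%

Narrower duty cycles and higher switch voltage stress are part of why on-board conversion from a higher input rail tends to be somewhat less efficient.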

12V is only really used because historically it was available with relatively high power capacity in order to supply 12V motors in disk drives and fans. If power supplies were designed from the ground up for power-hungry CPUs and GPUs, you could make an argument for a higher voltage, but you could also make an argument for a lower one. Or for keeping 12V, either because it's already a good compromise value, or because it's not worth going against the inertia of existing standards. FWIW there is a new standard for power supplies and it is 12V only, with no lower or higher voltage outputs.


Lower voltage would be OK, but then all of the cables and plugs would need to be redesigned. And there would still be a need for voltage switching unless we start adding a protocol and have the PSU switch voltage dynamically... which is also not efficient.

Since they went so far as to create a new cable which won't be available on old PSUs, they could have easily extended that slightly and introduced an entirely new PSU class with a new voltage as well. But they went the easy route and it failed, which is even worse, as they will have to redesign it now instead of getting it right the first time.


There are versions of the cables (and adapters) that work on older PSUs, although new PSUs are starting to come with the same connector that new GPUs have.

Anyway there are pros and cons to using 12V, or lower or higher, and anything except 12V would require a new PSU, so it's a hard sell. But even without that detail, I have a feeling 12V is a reasonable choice anyway, not too low or too high for conversion either in the PSU or in the GPU or another component.

In any case, at the end of the day sending 12V from the PSU to GPU is easy. The connector used here is bad, either by design or manufacturing quality, but surely the solution can be a better housing and/or sockets on the cable side connectors instead of a different voltage.


> Lower voltage would be OK, but then all of the cables and plugs would need to be redesigned. And there would still be a need for voltage switching unless we start adding a protocol and have the PSU switch voltage dynamically... which is also not efficient.

It's not like that. It's a design where the PSU only provides 12V to the motherboard and the motherboard provides the rest. Only the location of those connectors changes. It's called ATX12VO.

In a modern PC almost nothing draws from the 3.3V rail, not even RAM. I'm pretty sure nothing draws 3.3V directly from the PSU at all today.

The 5V rail directly from the PSU is only used for SATA drives.


Because nobody makes 24V power supplies for computers, they'd have to convince the whole industry to agree on new PSU standards.


> they'd have to convince the whole industry to agree on new PSU standards.

We already have a new PSU standard; it's called ATX12VO and drops all the lower voltages (5V, 3.3V), keeping only 12V. AFAIK it hasn't seen wide adoption.


It's also of no use for the problem at hand: PCIe already uses 12V, but that's way too low for the amount of power GPUs want.


It's not great. Dropping 5V makes power routing more complicated and needs big conversion blocks outside the PSU.

I would say it makes sense if you want to cut the PSU entirely, for racks of servers fed DC, but in that case it looks like 48V wins.


There are already huge conversion blocks outside the PSU. That's why they figured there's no need to keep an extra one inside the PSU and run more wiring everywhere.

Your CPU steps down 12 volts to 1 volt and a bit. So does your GPU. If you see the big bank of coils next to your CPU on your motherboard, maybe with a heatsink on top, probably on the opposite side from your RAM, that's the section where the voltage gets converted down.


Those are actually at the point of use and unavoidable. I mean extra ones that convert to 5V and then send the power back out elsewhere. All those drives and USB ports still need 5V and the best place to make it is the PSU.


Why is the PSU the best place to make 5 volts? In the distant past it made sense because it allowed some circuitry to be shared between all the different voltages. Now that is not a concern.


The motherboard is cramped, the PSU has a longer lifetime, and routing power from the PSU to the motherboard to a SATA drive is a mess.


Yup, exactly. The VRMs on my Threadripper board take up quite a bit of space.


24VDC is the most common supply for industrial electronics like PLCs, sensors, etc. It is used in almost every type of industrial automation system. 48VDC is also not uncommon for bigger power supplies, servos, etc.

https://www.siemens.com/global/en/products/automation/power-...


Cutting the ampacity in half from 50A to 25A only drops the minimum (single) conductor size from #8 to #10 AWG; also, there is no 24V rail in a PSU.
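Quick sanity check on those gauges using standard copper resistances (approximate values; the exact ampacity limits depend on the insulation temperature rating and which code table you read):

    # Dissipation per metre of cable at the two operating points, using
    # approximate resistances for solid copper AWG conductors.
    R_PER_M = {8: 2.061e-3, 10: 3.277e-3}  # ohms per metre
    for awg, amps in ((8, 50), (10, 25)):
        print(f"#{awg} AWG at {amps} A: ~{amps ** 2 * R_PER_M[awg]:.1f} W per metre")
    # #8 AWG at 50 A:  ~5.2 W per metre
    # #10 AWG at 25 A: ~2.0 W per metre

So halving the current drops you two gauge sizes and the thinner cable still dissipates less than half the heat per metre.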



