
This is because they are trying to parallel something like 50 amps (it's 12 volts, IIRC) over a few conductors to get to 600 watts.

If the load becomes unbalanced for any number of reasons, none of those individual conductors comes close to being able to handle it alone - they will generate enough heat to melt lots of things.

Conservatively, they'd have to be 8 AWG each to handle the load without melting if it ever ended up entirely on a single conductor.

That's the crappy part about low voltages.

If the voltage were higher (I believe the 'low voltage' classification tops out at 48V), it'd be more dangerous to deal with in some respects, but it'd be easier to use small cables that won't melt.
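
To make the arithmetic concrete (rough numbers, mine; assuming the 6 current-carrying pairs a 12VHPWR connector uses):

    # 600W delivered at 12V vs 48V, shared across 6 conductor pairs.
    power_w = 600
    for volts in (12, 48):
        total_amps = power_w / volts
        per_conductor = total_amps / 6        # ideal, perfectly balanced sharing
        print(volts, round(total_amps, 1), round(per_conductor, 1))
    # 12V: 50A total, ~8.3A per conductor when balanced - but the whole 50A
    # if the load ever collapses onto one wire, hence the 8 AWG figure above.
    # 48V: 12.5A total, ~2A per conductor, which small wires handle easily.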



Can we talk about how absolutely terrifying that 600W figure is? We're not transcoding or generating slop as the primary use case, we're playing computer games. What was wrong with previous-generation graphics that we still need to push for more raw performance rather than reducing power draw?


What was “wrong” is that enough people are willing to pay exorbitant prices for the highest-end gear that Nvidia can do most anything they want as long as their products have the best numbers.

Other companies do make products with lower power draw — Apple in particular has some good stuff in this space for people who need it for AI and not gaming. And even in the gaming space, you have many options for good products — but people who apparently have money to burn want the best at any cost.


We must be thinking about very different types of games, because even though I’m completely bought into the Apple ecosystem and love my M3 macbook pro and mac mini, I have a windows gaming PC sitting in the corner because very few titles I’d want to play are available on the mac.


Perhaps I phrased it poorly, but I was trying to separate out GPU workloads for AI and gaming. The Apple ecosystem is very poor for gaming overall, but for ML and LLM workloads it delivers very good performance at a fraction of the power draw of a modern Nvidia card.

So the point being, nvidia is optimizing for gamers who are willing to throw top dollar at the best gear, regardless of power draw. But it’s a choice, and other manufacturers can make different tradeoffs.


Is gaming even the primary use case for *090-series cards anymore? The 5070, which is probably the most popular gaming card, is 250W. If I recall correctly it can push 4K @ 60fps in most games.

But yes, I do agree that TDPs for GPUs are getting ridiculous.


4k 60Hz is still largely unachievable for even top of the line cards when testing recent games with effects like raytracing turned up. For example, an RTX 4090 can run Cyberpunk 2077 at 4k at over 60fps with the Ray Tracing Low preset, but not any of the higher presets.

However, it's easy to get misled into thinking that 4k60 gaming is easily achieved by more mainstream hardware, because games these days are usually cheating by default using upscaling and frame interpolation to artificially inflate the reported resolution and frame rate without actually achieving the image quality that those numbers imply.

Gaming is still a class of workloads where the demand for more GPU performance is effectively unlimited, and there's no nearby threshold of "good enough" beyond which further quality improvements would be imperceptible to humans. It's not like audio where we've long since passed the limits of human perception.


4K@60 isn't all that impressive today, and a 5070 can do it in modern games with reduced graphics settings.

x90 cards, IMO, are bought either by people who absolutely need them (yay market segmentation) or by people who simply can (affording them is another story) and want the best of the latest.


This generation seems to be getting its performance from more power and more cores. Not really an architectural change, just packing more things into the chip that require more power.


Too true. I've been looking to replace my 1080. It was a beast in 2016, but the only way I can get a more performant card these days is to double the power draw. That's not really progress.


Then get a modern GPU and limit its power to what your 1080 draws. It will still be significantly faster. GPU power is out of control these days; if you knock 10% off the power budget you generally only lose a few percent of performance.

Cutting the 5090 down from 575w to 400w is a 10% perf decrease.


Even if I knew how to do that, I'd still need double the power connectors I currently have.


The 5090 was just an example; the same process applies to lower-tier GPUs that don't require extra power cables, i.e. a 3080 with the same power budget as a 1080 would run circles around it (a 1080 at its default max power limit of 180W gets approx. 7000 in TimeSpy; a 3090 limited to 150W gets approx. 11500). Limiting the power budget is very simple with tools such as MSI Afterburner and others in the same space.
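
If you'd rather script it than click around in Afterburner, the stock nvidia-smi CLI can set the same cap; a minimal sketch (needs admin rights, and the 150W value is just the example from above):

    # Sketch: cap the GPU's board power limit by shelling out to nvidia-smi.
    import subprocess

    def set_power_limit(watts: int, gpu_index: int = 0) -> None:
        # Read back the current limit first (the query is harmless).
        subprocess.run(
            ["nvidia-smi", "-i", str(gpu_index),
             "--query-gpu=power.limit", "--format=csv"],
            check=True,
        )
        # Apply the new cap; the driver rejects values outside the card's range.
        subprocess.run(
            ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)],
            check=True,
        )

    set_power_limit(150)  # roughly the budget used in the 3090 comparison above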


That's because the 1080 and the whole 10xx generation were the pinnacle - the best GPUs Nvidia ever made. Nvidia won't make the same mistake any time soon.


Because previous-generation graphics didn't include ray/path tracing or DLSS. They used baked-in lighting and shaders that required much less compute. Now that games do use those features, they need more computing power, which (we assume) Nvidia hasn't managed to deliver through efficiency improvements but simply by pushing more power through the card.

It's what Intel has been grappling with too: their CPUs are drawing more and more wattage at the top end.


Take a step back for some perspective.

1. People want their desktop computers to be fast. These are not made to be portable battery sippers. Moar powa!!!

2. People have a power point at the wall to plug their appliances into.

Ergo, desktop computers will tend towards 2000w+ devices.

"Insane!" you may cry. But a look at the history of car manufacture suggests that the market will dictate the trend. And in similar fashion, you will be able to buy your overpowered beast of a machine, and idle it to do what you need day to day.


Well exactly my point. I'm "still" using an M1 Mac mini as my daily driver. 6W idle. In a desktop. It is crazy fast compared to the Intel Macs of the year before, but the writing was already on the wall: this is the new low-end, the entry level.

Still? It runs Baldur's Gate 3. Not smoothly, but it's playable. I don't have an M4 Pro Max Ultra Plus around to compare the apples to apples, but I'd expect both perf and perf per watt to be even better.

If one trillion dollar company can manage this, why not the other?


I imagine it's using more than 6W to play Baldur's Gate 3, but still, I get that it is far more efficient for the work being done. I'm a bit irked that my desktop idles at 35W. But then I recall growing up with 60W light bulbs as the default room lighting...

But other people will look at that and say "Not smooth = unplayable. If you can do so much with 100W or less, then let's dial that up to 2000W and make my eyes bleed!"

We're not the ones pushing the limits of the market it seems.


Is your argument that computer games don't merit better performance (e.g. pushing further into 4K), and/or that they shouldn't expand beyond the current crop, and that we should give up on better VR/AR?


Why should we reduce power draw? We live in an age of abundance.


Can you point me to the abundance? Because I sure can point you to the consequences of thinking we live in an age of abundance.


Disposable income has never been higher? One of the world's biggest health problems is everyone eating too much food?

Lack of electricity production is entirely a human choice at this point. There's no need to output carbon to make it happen.


If only we had connectors which could actually handle such currents. Maybe something along the lines of an XT90, but no, Nvidia somehow wants to save a bit of space or weight on their huge brick of a card. I don't get it.


The USB-C connectors on laptops and phones can deliver 240 watts [1] through an 8.4x2.7mm connector.

12VHPWR is 8.4x20.8mm so it's got 7.7x the cross-sectional area but transmits only 2.5x the power. And 12VHPWR also has the substantial advantage that GPUs have fans and airflow aplenty.

So I can see why someone looking at the product might have thought the connector could reasonably be shrunk.

Of course, the trick USB-C uses is to deliver 5A at 48V instead of 50A at 12V.

[1] https://en.wikipedia.org/wiki/USB-C#Power_delivery
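
Rough power-density math behind that comparison (my arithmetic, using the dimensions quoted above):

    # Connector face area vs. power carried, per the figures above.
    usb_c_area = 8.4 * 2.7     # mm^2
    hpwr_area  = 8.4 * 20.8    # mm^2
    print(round(hpwr_area / usb_c_area, 1))   # ~7.7x the area
    print(600 / 240)                          # 2.5x the power
    print(round(240 / usb_c_area, 1),         # ~10.6 W/mm^2 for 240W USB-C PD
          round(600 / hpwr_area, 1))          # ~3.4 W/mm^2 for 600W 12VHPWR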


Nobody thought that they could push 50A at 12V through half the connector. It's management wanting to push industrial design as opposed to safety. They made a new connector borrowing from an already existing design, pushed up the on-paper amperage by 3A, never changed the contact resistance, and made the parent connector push current near its limit (10.5A max vs 8.3A). And oh, the insertion force is so, so much higher than ever before. Previous PCIe connectors push about 4A through a connector designed for about 13A.

Worth also mentioning that the 12VHPWR connector was being market-tested during Ampere, the same generation where Nvidia doubled down on the industrial design of their 1st-party cards.

Also there's zero devices out there that actually deliver or take 240W over USB-C. Texas Instruments literally only released the datasheets for a pivotal supporting IC within the last 6 months.
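
Putting the per-contact numbers from the first paragraph side by side (taking the quoted ratings at face value):

    # Per-contact load vs. rating, using the figures quoted above.
    connectors = {
        "PCIe 8-pin": {"load_A": 4.0, "rating_A": 13.0},
        "12VHPWR":    {"load_A": 8.3, "rating_A": 10.5},
    }
    for name, c in connectors.items():
        print(name, round(c["rating_A"] / c["load_A"], 2))
    # ~3.3x headroom on the old connector vs ~1.3x on 12VHPWR.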


> Also there's zero devices out there that actually deliver or take 240W over USB-C. Texas Instruments literally only released the datasheets for a pivotal supporting IC within the last 6 months.

The 16-inch Framework laptop can take 240W power. For chargers, the Delta Electronics ADP-240KB is an option. Some Framework users have already tried the combination.


> Nobody thought that they could push 50A at 12V through half the connector.

If you're saying that the connector doesn't have a 2x safety factor then I'd agree, sure.

But I can see how the connector passed through the design reviews, for the 40x0 era cards. The cables are thick enough. The pins seem adequate, especially assuming any GPU that's drawing maximum power will have its fans producing lots of airflow; plenty of connectors get a little warm. There's no risk of partial insertion, because the connector is keyed, and there's a plastic latch that engages with a click, and there's four extra sense pins. I can see how that would have seemed like a belt-and-braces approach.

Obviously after the first round of melted connectors they should have fixed things properly.

I'm just saying to me this seems like regular negligence, rather than gross negligence.


The spec may say it, but I've never encountered a USB-C cable that claims to support 240 watts. I suspect if machines that tried to draw 240W over USB-C were widespread, we would see a lot of melted cables and fires. There are enough of them already with lower power draw charging.


Search Amazon for "240W USB" and you get multiple pages of results for cables.

A few years ago there was a recall of OnePlus cables that were melting and catching fire, I had 2 of them and both melted.

But yes 240W/48V/5A is insane for a spec that was originally designed for 0.5W/5V/100mA. I suspect this is the limit for USB charging as anything over 48V is considered a shock hazard by UL and 5A is already at the very top of the 3-5A limit of 20AWG for fire safety.


We've had a variety of 140W laptops for a few years already, so the original spec has been far away for a while now.

The advantage of USB-C is the power negotiation, so getting the higher rating only on circuits that actually support it should be doable and relatively safe.

The OnePlus cables melting gives me the same impression as when hair dryer power cables melt: it's a solved problem, the onus is on the maker.


240W cables are here, but at around a 10x price premium. Also, cables are chipped, so e.g. a 100W cable won't allow 240W in the first place.

Users needing the 240W have a whole chain of specialized devices, so buying a premium cable is also not much of an issue.


The connector could reasonably be shrunk. It just now has essentially no design margin so any minor issue immediately becomes major! 50A DC is serious current to be treated with respect. 5A DC is sanely manageable.


If only we had electrical and thermal fuses that could be used to protect the connectors and wires.


At these wattages just give it its own mains plug.


> At these wattages just give it its own mains plug.

You might think you're joking, but there are gamer cases with space for two PSUs, and motherboards which can control a secondary PSU (turning both PSUs on and off together). When using a computer built like that, you have two mains plugs, and the second PSU (thus the second mains plug) is usually dedicated to the graphics card(s).


I've done this, without a case, not because I actually used huge amounts of power, but because neither PSU had the right combination of connectors.

The second one was turned on with a paperclip, obviously.

Turns out graphics cards and hard drives are completely fine with receiving power but no data link. They just sit there (sometimes with fans at max speed by default!) until the rest of the PC comes online.


You can also hook up a little thingy that takes SATA power on one side and 24-pin on the other. As soon as there is power on the SATA side, the relay switches and the second PSU turns on.


This may not be fast enough for some add-in cards. It would be better to connect the PS_ON (green) cable from both ATX24 connectors together, so that the motherboard turns on both PSUs simultaneously.

This would still have the disadvantage that the PWROK (grey) cable from the second PSU would not be monitored by the motherboard, leaving the machine prone to partial reset quirks during brown-outs. Normally a motherboard will shut down when PWROK is deasserted, and refuse to come out of reset until it returns.


The joke actually removes this connector problem though, while a secondary PSU does not.


Server systems already work like this for redundancy.


No they don't. Server-grade redundant PSUs usually use a CRPS form factor, where individual PSU modules slot into a common multi-module PSU housing known as a PDB (power distribution board). Each module typically outputs only 12V and the PDB manages the down-conversion to 5V, 5VSB, and 3.3V. From there, there is only one set of power cables between the PDB and the system's components including the motherboard and any PCIe add-in cards. Additionally, there is a PMBus cable between the PDB and the motherboard so that the operating system and the motherboard's remote management interface (e.g. IPMI) can monitor the status of each individual module (AC power present, measured power input, measured power output, measured voltage input, measured frequency input, fan speeds, temperature, which module is currently powering the system, etc).

PSUs can be removed from the PDB and replaced and reconnected to a source of power without having to shut down the machine or even remove the case lid. You don't even need to slide the machine out of the rack if you can get to the rear.

Example:

https://www.fspgb.co.uk/_files/ugd/ea9ce5_d90a79af31f84cd59d...


You can have the machine draw twice the amount from the server PSUs. It kills the redundancy, but it is supposed to work.


But that still only happens over one set of power cables, from the PDB. The post you replied to described using a separate PSU with separate component power cables to power specific components. Current sharing in server PSUs is handled by every PSU equally powering all of the components.

Edit: For example, in a 3+1 redundant setting, 3 PSUs would be active and contributing toward 1/3 of the total load current each; 1 PSU would be in cold standby, ready to take over if 1 of the others fails or is taken offline.


Not without precedent: The Voodoo 5 6000 by 3dfx came with its own external PSU almost 25 years ago.

https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQqLXew...


Also put it in a separate case, and give it an OcuLink cable to attach to the main desktop tower. I suspect that's exactly where we're heading, to be fair.


I've built video rigs that did just that: an external expansion chassis that you could put additional PCIe cards in when the host only had 3 slots. The whole eGPU thing used to seem like a cute gimmick, but it might have been more foreshadowing than we realized.


Have you measured latency?

In modern(last 4 years approximately) GPUs, physical wiring distance is starting to contribute substantially to latency.


Latency due to wiring distances is far from being an issue in these scenarios. The signals travel at the speed of light: 186 miles per millisecond.

The problem you will encounter with pcie gen5 risers is signal integrity.


> The signals travel at the speed of light

It's about 75-90% the speed of light, but even that's too slow.

Modern hardware components are getting down to latencies of single-digit nanoseconds. Light travels about 30cm in a nanosecond, so extending a PCIe port to a different box is going to make a measurable difference.


A single round trip isn't going to register, but there are multiple in a frame, so it's not inconceivable that it could add up at some point. I would like to see it demonstrated, though.


Without one of these rigs, you would not be able to do much at all because of the limited PCIe slots in the host. "Not much" here means render times of hours per clip, or even longer. With the external chassis and additional cards, you could achieve enough bandwidth for realtime playback. A specific workflow would have been taking RED RAW camera footage, which takes heavy compute to debayer, running whatever color correction on the video, running any additional filters like noise removal, and finally writing the output back to something like ProRes. Without the chassis, not happening; with the chassis, you get realtime playback during the session and faster than realtime during rendering/exporting.

Also, these were vital to systems like the Mac Pro trashcan that had 0 PCIe slots. It was a horrible system, and everyone I know who had one reverted back to their 2012 cheese grater systems with the chassis.

Another guy I know was building his own 3D render rig for home experimental use when those render engines started using GPUs. He built a 220V system that he'd unplug the dryer to use. It had way more GPU cards than he had slots for, by using PCIe splitters. Again, these were not used to draw realtime graphics to a screen; they were solely compute nodes for the renderer. He was running circles around the CPU-only render farm nodes.

People think that the PCIe lanes are the limiting factor, but again, that's just for getting the GPU's data back to the screen. As compute nodes, you do not need full lanes to get the benefits. But for doubting-Thomas types like you, I'm sure my anecdote isn't worth much.


There were no latency concerns. These were video rigs, not realtime shoot-'em-ups. They were compute devices running color correction and other filters, not pushing a video signal to a monitor at 60fps / 240Hz-refresh nonsense. These did real work /s


Ah makes sense, the other kind of graphics!


We could also do it like we do in car audio: just two big fat power cables, positive and negative, 4 AWG or even bigger, with a nice crimped ferrule or lug bolted on.


True. At these prices they might as well include a power brick and take responsibility for the current-carrying path from the wall to the die.


> If only we had connectors which could actually handle such currents.

The problem isn't connectors; the problem (fundamentally) is sharing current across multiple parallel conductors.

Sure, you can run 16A over 1.5 mm² wires and 32A over 2.5 mm² (taken from [1] - yes, it's for 230V, but that doesn't matter; the current is what's important, not the voltage). Theoretically you could run 32A over 2x 1.5 mm² (you'd end up with 3 mm² of cross section), but it's not allowed by code. If, for any reason, either of the two legs disconnects entirely or develops increased resistance - e.g. due to corrosion or a loose screw / wire nut (hence, please always use Wago-style clamps; screws and wire nuts are not safe, even if torqued properly, which most people don't do) - suddenly the other leg has to carry (much) more current than it's designed for, and you risk anything from molten connectors to an outright fire. And that is what Nvidia is currently running into, together with bad connections (e.g. due to dirt ingress).

The correct solution would be for the GPU to not tie together the incoming individual 12VHPWR pins on a single plane right at the connector input but to use MOSFETs and current/voltage sense to detect stuff like different current availability (at least it used to be the case with older GPUs that there were multiple ways to supply them with power and only, say, one of two connectors on the GPU being used) or overcurrents due to something going bad. But that adds complexity and, at least for the overcurrent protection, yet another microcontroller plus one ADC for each incoming power pin.

Alternatively each 12VHPWR pair could get its own (!) DC-DC converter down to 1V2 or whatever the GPU chip actually needs, but again that also needs a bunch of associated circuitry.

Another and even more annoying issue by the way is grounding - because all the electricity that comes in also wants to go back to the PSU and it can take any number of paths - the PCIe connector, the metal backplate, the 12VHPWR extra connector, via the shield of a DP cable that goes to a Thunderbolt adapter card's video input to that card, via the SLI connector to the other GPU and its ground...

Electricity is fun!

[1] https://stex24.com/de/ratgeber/strombelastbarkeit
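
A toy current-divider model of that failure mode (made-up resistance values; only the ratios matter):

    # Six parallel 12V feed paths tied together at both ends, sharing 50A.
    # Each path is a small series resistance (wire + contact), values invented.
    def path_currents(total_amps, resistances):
        # Same voltage drop across every path, so current splits in
        # proportion to conductance (1/R).
        g = [1 / r for r in resistances]
        return [round(total_amps * gi / sum(g), 1) for gi in g]

    balanced = [0.005] * 6                    # 5 mOhm each
    corroded = [0.005] * 5 + [0.050]          # one contact degrades to 50 mOhm
    open_pin = [0.005] * 5 + [float("inf")]   # one leg disconnects entirely

    print(path_currents(50, balanced))   # ~8.3A everywhere
    print(path_currents(50, corroded))   # ~9.8A on the good pins, ~1A on the bad one
    print(path_currents(50, open_pin))   # 10A on each remaining pin, 0A on the open one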


> The correct solution would be for the GPU to not tie together the incoming individual 12VHPWR pins on a single plane right at the connector input but to use MOSFETs and current/voltage sense to detect stuff like different current availability (at least it used to be the case with older GPUs that there were multiple ways to supply them with power and only, say, one of two connectors on the GPU being used) or overcurrents due to something going bad. But that adds complexity and, at least for the overcurrent protection, yet another microcontroller plus one ADC for each incoming power pin.

> Alternatively each 12VHPWR pair could get its own (!) DC-DC converter down to 1V2 or whatever the GPU chip actually needs, but again that also needs a bunch of associated circuitry.

So as you say, monitoring multiple inputs happened on the older xx90s, and most cards still do it. It's not hard.

Multiple DC-DC converters are something every GPU has. That's the only way to get enough current. So all you have to do is connect them to specific pins.
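
Conceptually the per-pin supervision really is just a balance check; a pure sketch (made-up thresholds, and the input list stands in for whatever shunt/ADC readings a board actually has):

    # Sketch of a per-pin overcurrent / imbalance check; thresholds illustrative only.
    PER_PIN_LIMIT_A = 9.5     # assumed per-contact ceiling
    IMBALANCE_RATIO = 1.5     # flag any pin carrying 1.5x the average

    def check_pins(currents_a):
        avg = sum(currents_a) / len(currents_a)
        for i, amps in enumerate(currents_a):
            if amps > PER_PIN_LIMIT_A or amps > IMBALANCE_RATIO * avg:
                return f"fault: pin {i} at {amps:.1f}A - throttle or shut down"
        return "ok"

    print(check_pins([8.3] * 6))           # balanced -> ok
    print(check_pins([9.8] * 5 + [1.0]))   # one degraded contact -> fault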


> It's not hard

It still is because in the end you're dealing with dozens of amps on the "high" voltage side and hundreds of amps on the "low" (GPU chip) voltage side. The slightest fuck-up can and will have disastrous consequences.

GPUs these days are on the edge of physics when it comes to supplying them with power.


Let me rephrase.

Doing the power conversion is hard.

Realizing that you already have several DC converters sharing the load, and deciding to power specific converters with specific pins, is comparatively easy. And the 3090 did it.


This is the top-end halo product. What's wrong with pushing the envelope? Should we all play Tetris because "what's wrong with block graphics?"

I'm not defending the shitty design here, but I'm all for always pushing the boundaries.


Pushing the boundaries of a simple connector is not innovation, that's just reckless and a fire hazard.


> If the voltage was higher (i believe 'low volt' classification tops out at 48v)

Yep, 48V through sensitive parts of the body could be unpleasant, but 24V is almost as safe as 12V. Why didn't they use 24V and 25A to achieve the required 600W instead of 12V and 50A?


Because no PC power supply has a 24V rail and even though there's a fancy new connector you can still use an adapter to get the old-fashioned plugs.

After all you don't want to limit your market to people who can afford to buy both your most expensive GPU and a new power supply. In the PC market backwards compatibility is king.


>Because no PC power supply has a 24V rail

Servers with NVIDIA H200 GPUs (Supermicro ones, for example) have power supplies with a 54-volt rail, since that GPU requires it. I can easily imagine a premium ATX (non-mandatory, optional) variant that has a higher-voltage rail for people with powerful GPUs. The additional cost shouldn't be an issue considering that the top-level GPUs which would need such a rail cost absurd money nowadays.


A server is not a personal computer. We are talking about enthusiast GPU buyers here, who will install these components into their existing setups, whereas servers are usually sold as a unit including the power supply.

> Additional cost shouldn't be an issue considering top level GPUs that would need such rail cost absurd money nowadays.

Bold of you to assume that Nvidia would be willing to cut into its margin to provide an optional feature with no marketable benefit other than electrical safety.


>Nvidia would be willing to cut into its margin to provide

Why would that be optional on a top of the line GPU that requires it? NVIDIA has nothing to do with it. I'm talking about defining an extended ATX standard, that covers PSUs, and it would be optional in the product lines of PSU manufacturers. The 12VHPWR connector support in PSUs is already a premium thing, they just didn't go far enough.


Electrical safety -> not destroying your GPU does seem like something sellable.

It could probably be spun into some performance pitch if you really wanted to.


A higher input voltage may eventually be used but a standard PC power supply only has 12V and lower (5V and 3.3V) available, so they'd need to use a new type of power supply or an external power supply, both of which are tough sells.

On the other hand, the voltages used inside a GPU are around 1V, and a higher input voltage introduces lower efficiency in the local conversion.

12V is only really used because historically it was available with relatively high power capacity in order to supply 12V motors in disk drives and fans. If power supplies were designed from the ground-up for power-hungry CPUs and GPUs, you could make an argument for higher voltage, but you could also make an argument for lower voltage. Or for the 12V, either because it's already a good compromise value, or because it's not worth going against the inertia of existing standards. FWIW there is a new standard for power supplies and it is 12V only with no lower or higher voltage outputs.


Lower voltage would be OK, but then all of the cables and plugs would need to be redesigned. And there would still need to be some provision for voltage switching, unless we start adding a protocol and have the PSU switch voltage dynamically... which is also not efficient.

Since they went so far as to create a new cable which won't be available on old PSUs, they could easily have extended that slightly and introduced an entirely new PSU class with a new voltage as well. But they went the easy route and it failed, which is even worse: they will have to redesign it now instead of getting it done safely the first time.


There are versions of the cables (and adapters) that work on older PSUs, although new PSUs are starting to come with the same connector that new GPUs have.

Anyway there are pros and cons to using 12V, or lower or higher, and anything except 12V would require a new PSU so it's a hard sell. But even without that detail, I have a feeling 12V is a reasonable choice anyway, not too low or high for conversion either in the PSU or in the GPU or other component.

In any case, at the end of the day sending 12V from the PSU to GPU is easy. The connector used here is bad, either by design or manufacturing quality, but surely the solution can be a better housing and/or sockets on the cable side connectors instead of a different voltage.


> Lower voltage would be OK, but then all of the cables and plugs would need to be redesigned. And there would still need to be some provision for voltage switching, unless we start adding a protocol and have the PSU switch voltage dynamically... which is also not efficient.

It's not like that. It's a design where the PSU only provides 12V to the motherboard and the motherboard provides the rest. Only the location of those connectors changes. It's called ATX12VO.

In a modern PC almost nothing draws from the 3.3V rail, not even RAM. I'm pretty sure nothing draws 3.3V directly from the PSU at all today.

The 5V rail directly from the PSU is only used for SATA drives.


Because nobody makes 24V power supplies for computers, they'd have to convince the whole industry to agree on new PSU standards.


> they'd have to convince the whole industry to agree on new PSU standards.

We already have a new PSU standard, it's called ATX12VO and drops all lower voltages (5V, 3.3V), keeping only 12V. AFAIK, it's not seen wide adoption.


It's also of no use for the problem at hand, PCIe already uses 12V but that's way too low for the amount of power GPUs want.


It's not great. Dropping 5V makes power routing more complicated and needs big conversion blocks outside the PSU.

I would say it makes sense if you want to cut the PSU entirely, for racks of servers fed DC, but in that case it looks like 48V wins.


There are already huge conversion blocks outside the PSU. That's why they figured there's no need to keep an extra one inside the PSU and run more wiring everywhere.

Your CPU steps down 12 volts to 1 volt and a bit. So does your GPU. If you see the big bank of coils next to your CPU on your motherboard, maybe with a heatsink on top, probably on the opposite side from your RAM, that's the section where the voltage gets converted down.


Those are actually at the point of use and unavoidable. I mean extra ones that convert to 5V and then send the power back out elsewhere. All those drives and USB ports still need 5V and the best place to make it is the PSU.


Why is the PSU the best place to make 5 volts? In the distant past it made sense because it allowed some circuitry to be shared between all the different voltages. Now that is not a concern.


The motherboard is cramped, the PSU has a longer life time, and routing power from PSU to motherboard to SATA drive is a mess.


Yup, exactly. The VRMs on my Threadripper board take up quite a bit of space.


24VDC is the most common supply for industrial electronics like PLCs, sensors etc. It is used in almost every type of industrial automation systems. 48VDC is also not uncommon for bigger power supplies, servos, etc.

https://www.siemens.com/global/en/products/automation/power-...


Cutting the ampacity in half from 50A to 25A only drops the minimum (single) conductor size from #8 to #10; also, there is no 24V rail in a PSU.


But you would then need to bring it down to the low voltages required by the chips, and that would greatly increase the cost, volume, weight, electrical noise and heat of the device.


Nah, modern GPUs are already absolutely packed with buck converters to convert 12V down to 2V or so.

Look at the PCB of a 4090 GPU; you can find plenty of images of people removing the heatsink to fit water blocks. They literally have 24 separate transistors and inductors, all with thermal pads so they can be cooled by the heatsink.

The industry could change to 48v if they wanted to - although with ATX3.0 and the 16-pin 12VHPWR cable being so recent, I'd be surprised if they wanted to.


They could make a new spec for graphics cards and have a 24v/48v rail for them on a new unique connector.

I guess the problem is not only designing the cards to run on the higher voltages but also getting AMD and Intel on board because otherwise no manufacturer is going to make the new power supplies.


IIRC, the patchwork of laws, standards and regulations across the world for low-voltage wiring is what has restricted use of voltages in the 36V-52V range. Some locations treat it as low voltage, some as an intermediate class, and others treat it as high voltage.

It may be specific to the marine market, but several manufacturers limit themselves to 36V even for high-amperage motors because of it.

Obviously I = P/V will force this in the future though.


USB PD can go up to 48V so I'd assume that's fine from a regulatory standpoint.

Going from 12V to 48V means you can get 600W through an 8-pin with a 190% safety factor, as opposed to melting your 12VHPWR.
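
The per-contact arithmetic behind that (mine; assuming the classic 3 current-carrying pairs of an 8-pin):

    # 600W at 48V over an 8-pin's 3 pairs vs. the 150W it carries today at 12V.
    print(round(600 / 48 / 3, 1))   # ~4.2A per contact at 48V
    print(round(150 / 12 / 3, 1))   # ~4.2A per contact for a standard 150W 8-pin

In other words, per pin you'd be right back in the territory the old connector already handles comfortably.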


Not to mention the fact that the SXM-format Nvidia cards have been running on 48-52V power for a few years already.


Of course there is, same on motherboards and to a smaller extent hard drives.


The voltage step-down is already in place, from 12V to whatever 1V or 0.8V is needed. Doing the same thing starting from 48V instead of 12V does not change anything fundamentally, I guess.


It changes a lot. You are switching at different frequencies and although the currents are smaller, there is an increased cost if you want to do it efficiently and not have too many losses.

But anyway for consumer products this is unlikely to happen because it would force users to get new power supplies which would reduce their sales quite drastically at least for the first one they make like that.

The solution would maybe be to make a low-volume 48V card and slowly move people over to it, showing them it is better?

Anyway this is clearly not a case of "just use X" where X is 48V. It is much more subtle than that.


> a low volume 48V card

I wouldn't be shocked if someone told me that Nvidia already sells more 48V parts than consumer 12V parts.


48V would work with significantly cheaper wiring.


Yes. I'm not suggesting they increase the voltage; as I said, there are lots of tradeoffs.

But I'll also say: outside of heat, all of the things you listed are not safety concerns (obviously, electrical noise can be if it's truly bad enough, but let's put that one mostly aside).

Having a small, cost efficient, low weight device that has no electrical noise is still not safe if it starts fires.


When you work with normal AC power, it is considered unsafe practice to use parallel wires to share load in a circuit. Reason: one might get decoupled somehow, you don’t notice, and when fully loaded the heat causes a fire risk. This problem sounds similar. A single fat wire is the easiest, but I guess it’s not that simple.


> This problem sounds similar. A single fat wire is the easiest, but I guess it’s not that simple.

The problem is the 12V architecture, so the only way you can ramp power up is to increase amperage, and sending 50A over a single wire would probably require 8AWG. That's... really not reasonable for inside a PC case.

Then again, burning down your house is somewhat unreasonable too.


> When you work with normal AC power, it is considered unsafe practice to use parallel wires to share load in a circuit.

The NEC permits using conductors #1/0 AWG or larger for parallel runs; it doesn't forbid the practice entirely.


Yeah. I have 800 amp service which is basically always done with parallel 400 mcm or 500 mcm (depending on where it is coming from, since POCO doesn't have to follow NEC)

Within conduit, there is basically no other option. In free air there are options (750 mcm, etc).

Even if there were, you could not pay me to try to fish 750 mcm through conduit or bend it


Yeah, (2) sets of 4”C 4#500MCM #1/0G (copper) is typical for an 800A service. My electricians feel the same way as you do about anything over #500, usually for 400A we parallel #4/0 instead of one set of #500.


The 8 AWG wire would need a massive connector, or else it will still melt/desolder.


Would be trivial to add a fuse / resettable breaker inline.


That would be a novel failure mode: the GPU scheduler had an unbalanced workload across the cores and tripped a breaker. Can the OS reset it? Kill the offending process for being "out of power"?



