Hacker News

> Effectively every single H100 in existence now will be e-waste in 5 years or less.

This remains to be seen. The H100 is 3 years old now and is still the workhorse of all the major AI shops. When something obviously better for training comes along, these will still be used for inference.

If what you say is true, you could find an A100 for cheap/free right now. But check out the prices.



Yeah, I can rent an A100 server for roughly the same price as what the electricity would cost me.


Because they buy electricity in bulk, these things are not the same.


That is true for almost any cloud hardware.


Where?


~$1.25-1.75/hr at Runpod or vast.ai for an A100

Edit: https://getdeploying.com/reference/cloud-gpu/nvidia-a100


The A100 SXM4 has a TDP of 400 watts; call it about 800 watts including cooling and other overhead.

Industrial bulk pricing is about 8-9 cents per kWh, so roughly $0.07/hr in electricity. We're over an order of magnitude below the rental price here.

At $20k all-in per card (MSRP + datacenter costs) for the 80GB version, with a 4-year payoff schedule the card costs about 57 cents per hour (20,000 / (24 × 365 × 4)), assuming 100% utilization.
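A quick sketch of the arithmetic above (all inputs are the thread's assumptions — ~800 W with overhead, ~8.5 ¢/kWh industrial power, $20k all-in, 4-year payoff — not measured figures):

```python
# Back-of-the-envelope check of the numbers in this thread.

def electricity_cost_per_hour(watts, usd_per_kwh):
    """Hourly electricity cost for a constant power draw."""
    return watts / 1000 * usd_per_kwh

def amortized_cost_per_hour(total_usd, years, utilization=1.0):
    """Hourly hardware cost over a straight-line payoff period."""
    return total_usd / (years * 365 * 24 * utilization)

power = electricity_cost_per_hour(800, 0.085)  # ~800 W incl. cooling, 8.5 c/kWh
card = amortized_cost_per_hour(20_000, 4)      # $20k all-in, 4-year payoff

print(f"electricity: ${power:.2f}/hr, hardware: ${card:.2f}/hr")
# electricity ends up around $0.07/hr and hardware around $0.57/hr,
# both well under the ~$1.25-1.75/hr rental rates quoted above.
```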



