
Rail and fiber depreciate on multi-decade timescales; AI data centers are closer to tulips. Even assuming we manage to stretch a data center to 10 years, these assets won't be around long enough to support an ecosystem of new companies if the economics stop making sense. Ultimately the only durable thing is whatever power infrastructure gets built, vs rail and fiber, where the inheritance isn't just rail networks or fiber but thousands of kilometers of earthworks that built out massive physical networks.




Data centers last decades. Many of the current AI hosting vendors, such as CoreWeave, have crypto origins; their data centers were built out in the 2010s and early 2020s.

Many legacy systems still running today are IBM or Solaris servers that are 20 or 30 years old. There's no reason to believe GPUs won't still be in use in some capacity (e.g. inference) a decade from now.


The skeletons of data centers and some components (e.g. cooling) have a long shelf life, but they're also only ~10% of the investment. The plurality of fiber and rail spending went towards building out linear infrastructure, where improvements can be milked at the nodes to improve network efficiency (better switches etc).

Versus the plurality of AI investment, i.e. the trillions going towards fast-depreciating components that we can say with relative confidence will likely end up as net-negative stranded assets in amortization terms if current semiconductor manufacturing trends continue.

Keeping some mission-critical legacy systems around is different from having trillions on the books that make no financial sense to keep there. Post-bubble, new-gen hardware will likely not carry scarcity pricing and will have better compute efficiency (better capex and opex), so there is no reason to believe companies will keep legacy GPUs around at scale if every rack loses them money relative to new hardware. And depending on actual commercial compute demand, it can simply make more economic sense to retire them than to keep them going.
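To make the stranded-asset point concrete, here's a rough back-of-envelope sketch, assuming straight-line depreciation and made-up dollar figures: a rack bought at scarcity pricing can carry a book value above what an equivalent post-bubble rack would cost new, which is roughly what "stranded" means here.

    # Hypothetical numbers only: straight-line depreciation of a rack
    # bought at scarcity pricing vs. the assumed post-bubble cost of
    # replacing it with equivalent hardware.
    capex_scarcity = 3_000_000    # rack bought today at premium pricing ($)
    capex_commodity = 1_500_000   # assumed cost of an equivalent rack post-bubble ($)
    life_years = 5                # assumed accounting life

    depreciation_per_year = capex_scarcity / life_years
    for year in range(1, life_years + 1):
        book_value = capex_scarcity - depreciation_per_year * year
        # stranded: still carrying more on the books than a better
        # replacement would cost outright
        print(year, int(book_value), book_value > capex_commodity)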


They used to last decades; the world didn't move at this speed before.

Is this a question of the GPU chips dying due to being warm semiconductors, or of them becoming outdated relative to new chips?

Both. Semi vs concrete, depreciating vs durable assets. With durable linear assets you upgrade the switches to improve fiber/rail, which is where most of the investment went; with GPUs you replace the racks, which is where most of the investment is going. Either way it cannot be stretched the same way materially and, most importantly, economically: new chips with better power efficiency mean running old chips is literally losing money while squatting on a data center slot.

There is very little reason to believe new chips will cost more than legacy (current) chips, for the simple reason that much of the current hardware was acquired at scarcity pricing, i.e. Nvidia margins went from 50% to 70%. That 20 points is a massive capex premium that is not going to be competitive if the bubble pops and Nvidia has to sell new hardware at commodity pricing, new hardware that in all likelihood will also be more compute-efficient in terms of power (opex). Even if you stretch existing compute from 3-5 years to 10, it is still closer to tulips than to rail or fiber in terms of economically productive timescale.

TLDR: old durable infra tends to retain positive residual value because it's not easy to replace economically/frequently; old compute has negative residual value because it is easy to replace economically/frequently.
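A minimal sketch of the "squatting on a slot" point, with purely hypothetical numbers (the power envelope, $/PFLOP-hour rate and capex figures are assumptions, not real quotes): the slot's power budget is the scarce resource, so what matters is what that envelope earns with old vs new silicon in it.

    # All figures are illustrative assumptions; the point is the shape of
    # the comparison, not the values. The slot earns based on compute per
    # kW, so an already-paid-off old rack can still be the wrong thing to
    # leave in it.
    SLOT_KW = 100                   # assumed power envelope of one rack slot
    PRICE_PER_PFLOP_HOUR = 2.00     # assumed market rate ($)
    HOURS_PER_YEAR = 24 * 365

    def slot_profit(pflops_per_kw, capex_per_year):
        pflops = SLOT_KW * pflops_per_kw
        revenue = pflops * PRICE_PER_PFLOP_HOUR * HOURS_PER_YEAR
        return revenue - capex_per_year

    old = slot_profit(pflops_per_kw=0.5, capex_per_year=0)          # sunk cost, fully depreciated
    new = slot_profit(pflops_per_kw=1.5, capex_per_year=1_500_000)  # newer gen, amortized per year

    print(f"old rack in slot: ${old:,.0f}/yr   new rack in slot: ${new:,.0f}/yr")
    # if the new rack nets more even after its capex, the old rack is
    # forgone profit sitting in the slot, not free compute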



