
Fiber can easily hit 10x the installation cost of CAT6a/7, between the more expensive cabling, Ethernet conversion at the room terminals (OK, maybe you have one computer with a PCIe fiber adapter? Nothing else does), and the networking switch in a closet/basement (price out a switch with more than 4 SFP+/fiber ports; they approach five figures, so you'll probably have to convert back to Ethernet at the source as well).

And the benefit is tenuous. CAT6a/7 can hit 10Gbps as long as the run length isn't insane. Even the 11th-gen Intel NUCs ship with on-board 2.5Gbps Ethernet ports; outfitting your endpoint devices to exceed 1Gbps over copper is far cheaper, especially considering most devices will never exceed 1Gbps due to their own hardware limitations (PS5/Xbox? Ikea Tradfri Gateway?).

Even in the "local network upload/download" case; you've got a server, and you want 40Gbps to that server. Building a file server capable of sustained 40Gbps transfer rates is... insane. Its not easy, nor cheap. It requires multiple PCI-E attached NVME drives in RAID-0, on the latest-gen TR/EPYC platform (for their PCI-E lane count, maybe Xeon is good enough nowadays as well). In 2021, this is still in the realm of "something Linus Tech Tips does as a showcase, with all the parts donated by advertisers, and it still sucks to get going because Linux itself becomes the bottleneck". Remember: A Samsung 980 Pro NVME Gen4 ($200/TB) can sustain somewhere around 6Gbps read; you'd need 6-8 of them, in a single RAID-0. And, realistically, you'd want 12-16 of them in RAID 0+1. A server capable of this is easily in the mid-five-figures.

(And, fun fact, even after you build a server capable of this: Windows Explorer literally cannot transfer files that fast. You have to use a third-party program.)
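
To put rough numbers on that, here's a back-of-envelope sketch. The per-drive sustained figures are assumptions; Gen4 headline sequential ratings (~7GB/s) rarely hold up under real workloads, which is the whole problem:

    # Rough sizing for a 40Gbps file server. Per-drive sustained
    # throughput is an assumed range, not a datasheet number.
    TARGET_GBPS = 40
    target_gb_per_s = TARGET_GBPS / 8  # 40Gbps on the wire = 5GB/s of data

    for per_drive_gb_s in (1.0, 2.0, 4.0):  # assumed sustained GB/s per drive
        stripe = -(-target_gb_per_s // per_drive_gb_s)  # ceiling division
        print(f"{per_drive_gb_s}GB/s per drive -> {int(stripe)} drives "
              f"in RAID-0, {int(stripe) * 2} in RAID 0+1")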

If you're a millionaire outfitting your mansion, then sure, maybe fiber makes sense (due to both upfront cost and the length of the cable runs, where sustaining 10Gbps on CAT6a/7 is more tenuous). But I think the assertion that CAT6a/7 will be "obsolete" by 2030 is pretty crazy. Yes, technology will continue to get cheaper and more accessible, and I do think we'll see more fiber providers in tier 1 and 2 US metro areas offer 2Gbps and 5Gbps connections, but CAT6a/7 is perfectly capable of saturating those. Just ask yourself: do you really predict that the PlayStation 6, maybe in 2028, will have a duplex fiber port on the back instead of Ethernet? It's 2021, and Microsoft's Xbox download servers can't even serve game data at gigabit speeds; they rarely exceed 250-500Mbps.

Given the niche that fiber lives in, even the position that "it's just dual-channel light, one up, one down, nothing can travel faster than light, it's the perfect future-proof tech" is tenuous. Who's to say that, in the next twenty years, a consumer standard for fiber isn't developed which runs quadplex (2 up, 2 down)? Or simplex (because it's "good enough")? Or with totally different connectors (which would be the easiest switch, since it may not need new cable runs; maybe)?

Oh, also: PoE! PoE is freakin' fantastic for prosumer setups, and it's only available on copper. You can run copper to the places around your house where you want security cameras or other smart devices, and not have to worry about also running power.
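
For a sense of the power budgets involved, here's a quick sketch; the per-standard wattages are the usable power at the powered device, and the device draws are illustrative assumptions:

    # PoE budgets (watts available at the powered device) vs. some
    # assumed device draws, to show which standard each one needs.
    POE_PD_WATTS = {
        "802.3af (PoE)":   12.95,
        "802.3at (PoE+)":  25.5,
        "802.3bt (PoE++)": 71.3,  # Type 4
    }
    DEVICES = {  # assumed draws in watts
        "IP security camera":    7,
        "Wi-Fi 6 access point": 20,
        "PTZ camera w/ heater": 35,
    }

    for device, watts in DEVICES.items():
        fits = [std for std, budget in POE_PD_WATTS.items() if budget >= watts]
        print(f"{device} ({watts}W): needs at least {fits[0]}")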



I agree with pretty much 100% of what you've said there, but fiber doesn't need to have the mystique of being really expensive. Not for houses, but for commercial use: if you spend the money one time on a fusion splicer, a good tool kit, and some basic consumables, two-strand singlemode is actually a third the price per meter of cat6, because it's so cheap to manufacture and the cost of copper is high right now. Done correctly, you have a guaranteed hassle-free upgrade path as far as 100GbE and 400GbE on the same fiber, patch cables, patch panel, etc.
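
A sketch of how that math plays out on a commercial job; every price here is a made-up assumption (as is the run count), the point being that the splicer is a one-time cost amortized across runs:

    # Assumed prices: cat6 vs. 2-strand singlemode at ~1/3 the $/m,
    # plus a one-time fusion splicer + tool kit.
    CAT6_PER_M   = 0.60    # assumed $/m
    FIBER_PER_M  = 0.20    # assumed $/m
    SPLICER_KIT  = 1500.0  # assumed one-time cost
    RUN_LENGTH_M = 60
    NUM_RUNS     = 200

    copper = CAT6_PER_M * RUN_LENGTH_M * NUM_RUNS
    fiber  = FIBER_PER_M * RUN_LENGTH_M * NUM_RUNS + SPLICER_KIT
    print(f"copper: ${copper:,.0f}, fiber: ${fiber:,.0f}")

    # Break-even: total meters pulled before the splicer pays for itself.
    print(f"break-even at {SPLICER_KIT / (CAT6_PER_M - FIBER_PER_M):,.0f}m")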

But for residential use, one of the primary reasons to run an Ethernet connection to different places in the house is to feed an 802.11ac/ax (or whatever next-generation) AP, and fiber doesn't really solve that problem because you still need electrical power for the AP. Obviously one cable carrying 802.3af/at/bt PoE is a better idea than running fiber and powering each AP off AC wherever it's mounted. That's aside from the fact that APs, except for very, very expensive enterprise ones, don't come with SFP/SFP+ ports, and are generally designed around the concept of being powered by the switch they're connected to anyway.

One of the reasons I so strongly agree with your points is that in a residential environment it's going to be very, very difficult to push throughput through an 802.11ac/ax AP that gets anywhere near stressing a 2.5 or 5GBaseT connection. I'd be fairly confident in saying that a house wired today with cat6a at sub-50-meter lengths, that tests OK for 5GBaseT, will probably be good for the next 25-30 years.
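
To make that concrete, here's a rough sketch of delivered throughput versus PHY rate for a typical 2-spatial-stream client; the efficiency factor is an assumption, and real numbers vary with channel width, client mix, and interference:

    # Max PHY rates for a 2-spatial-stream client, times an assumed
    # real-world efficiency, compared against a 5GBaseT drop.
    AP_PHY_MBPS = {
        "802.11ac, 80MHz":   867,
        "802.11ax, 80MHz":  1201,
        "802.11ax, 160MHz": 2402,
    }
    EFFICIENCY = 0.55  # assumed fraction of PHY rate delivered as goodput

    for config, phy in AP_PHY_MBPS.items():
        goodput = phy * EFFICIENCY
        print(f"{config}: ~{goodput:,.0f}Mbps "
              f"({goodput / 5000:.0%} of a 5GBaseT link)")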


The big thing for me is, it's easy to say "oh, fiber is future-proofing". Alright, can't argue with that; just as it's impossible to predict the future and say fiber is the correct choice, it's also impossible for me to say that it isn't. But I strongly suspect it won't be necessary in our lifetimes.

The primary reason I suspect this lies at both ends of the internet delivery spectrum:

First: I think the broad resource-allocation focus over the next 10-15 years in the US will be getting "the bottom 80%" up to 100Mbps+ speeds, not getting the "top 20%" beyond 1Gbps. Many of the traditional wireline companies who would be doing this work (Comcast, Spectrum, etc.) are going to be under pressure from emerging wireless technology that can meet those speeds, with beyond-adequate latency, at a fair price, and with far less infrastructure work (Verizon/AT&T/T-Mobile 5G, Starlink). Starlink is a wild one: you're competing against the gravity well of the planet at that point; what can any of these companies who are "good at digging holes in the ground" do?

Sitting in my new apartment here, I have AT&T home internet. It averages ~50/10 @ 25ms. I was told on the phone it would be 200 down: "Well, the lines in this building are so long and very old; we ran some tests, and we can sell you the 100 plan, but you probably won't get those speeds reliably. You'd be better off on the cheaper 50 plan." OK, fine. Then I run a speed test on my phone: Verizon 5G, 125/50 @ 10ms. The cell companies can just put up a tower, cover hundreds of people with really freakin' good internet, and sell it as home internet; what are the cable companies supposed to do against that? Spend thousands of dollars re-wiring this old building to get "just as good" internet to six people, half of whom won't pay for it?

And the key thing is, these emerging wireless internet technologies won't exceed gigabit for decades; it's difficult enough getting them to gigabit.

Part of the reason they won't is on the other end: we're hitting the point, very quickly, where the old "640K ought to be enough" quote misattributed to Bill Gates is becoming true; just not for 640K of memory, but for something like "4K video" of bandwidth. Would having 5Gbps internet, instead of the 1Gbps I had just a few days ago, actually fundamentally change how I interface with content online? Not even close. Even 100Mbps doesn't; there's a point where internet just hits "yup, that's good enough". Cool, I can download Warzone in an hour instead of four hours; it's the same thing at the end of the day.
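
The arithmetic, with an assumed 100GB install size in Warzone's ballpark:

    # Download time for a large game at various link speeds.
    GAME_SIZE_GB = 100  # assumed install size

    for mbps in (100, 1000, 5000):
        minutes = GAME_SIZE_GB * 8000 / mbps / 60  # GB -> megabits -> minutes
        print(f"{mbps:>5}Mbps: {minutes:6.1f} minutes")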

An argument could be made that continuing to push internet speeds forward will open up more innovation in content delivery, whether that's game streaming, 8K video, actually-decent-quality 4K video, whatever. I think this is tenuous as well, because a big bottleneck for many content providers is networking costs on their end. So much money has been (rightly!) dumped into making our (mostly privatized) nationwide internet backbone "resilient" that it has gotten very expensive to egress data from most hosting providers (big cloud certainly, but even small cloud and colo providers). A high-quality 4K video stream can saturate a 100Mbps line; as an end user, that sounds great, I've got a 100Mbps line! But as a service provider, you multiply that 100Mbps by hundreds of thousands of users, and the numbers start looking really scary. That situation will not improve in the next 1-3 decades; the focus right now is on algorithms that deliver the same quality in less bandwidth, not on just pushing more bandwidth.
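
The provider-side math, with an assumed audience size and an egress rate in the range the big clouds charge:

    # One 100Mbps 4K stream, multiplied out across an assumed
    # concurrent audience, priced at an assumed $/GB egress rate.
    STREAM_MBPS   = 100
    VIEWERS       = 100_000   # assumed concurrent viewers
    EGRESS_PER_GB = 0.08      # assumed $/GB

    gb_per_hour = STREAM_MBPS * 3600 / 8 / 1000  # Mb -> GB over one hour
    hourly_cost = gb_per_hour * VIEWERS * EGRESS_PER_GB
    print(f"{gb_per_hour:.0f}GB/hr per viewer; "
          f"${hourly_cost:,.0f}/hour at {VIEWERS:,} viewers")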

Plus, applications like game streaming are both bandwidth-intensive and latency-sensitive. So it's a double-edged sword, and one that the emerging wireless home internet technologies won't solve well. Having whole-home 40Gbps fiber or a 5Gbps uplink won't help you with Stadia.
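
A latency-budget sketch of why a fatter pipe doesn't help; the stage timings are illustrative assumptions:

    # Input-to-photon budget for game streaming at 60fps. Extra
    # bandwidth shrinks none of these stages.
    FRAME_MS = 1000 / 60  # ~16.7ms per frame

    stages_ms = {  # assumed per-stage costs
        "capture + encode":   8,
        "network round trip": 20,
        "decode + display":   8,
    }
    print(f"frame budget: {FRAME_MS:.1f}ms, pipeline: {sum(stages_ms.values())}ms")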

Point being: I think, arguably for the rest of our lifetimes, the internet as a whole is going to enter a holding pattern while we catch everyone up to acceptable speeds, improve the width of the backbone (not just the "depth", e.g. failovers and resiliency), which includes 10-100x-ing edge distribution, and improve the underlying algorithms to reduce the size of content while maintaining quality. All of this will be prioritized above widespread 10Gbps to the home.



