Thus far in this thread nobody has offered any counterpoint to the speed argument.
The first PC we bought at home was a 133MHz Pentium.
My current box is a Ryzen 3700 at 16*3.6GHz.
It doesn't feel like I have that much more power at my disposal doing everyday things. Web browsers aren't 400 times faster today than Internet Explorer was in 1995.
They should be. Even if you account for all the extra stuff that's going on today things should be at least 50 times faster. Why aren't they?
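Roughly where those figures come from, as naive clock-times-cores arithmetic (the 8x allowance for the extra stuff is just a guess, not a measurement):

    # Back-of-the-envelope for the 400x and 50x figures above.
    old_hz = 133e6                 # one Pentium core at 133 MHz
    new_hz = 16 * 3.6e9            # 16 hardware threads at 3.6 GHz
    raw_ratio = new_hz / old_hz
    print(raw_ratio)               # ~433x raw cycles per second
    print(raw_ratio / 8)           # ~54x after a generous 8x allowance for extra work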
RAM bandwidth and speed, network latency, and the display sound like the most important factors. If that 133MHz Pentium rendered a web page, it did so at 640×400 pixels, right? 16 colours? Or just in text? So it had to process about 4k (if text) or 128k (if graphics). Your current display involves a little more data.
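Rough numbers, assuming an 80x25 text mode, 4 bits per pixel at 640×400, and a 1920×1080 32-bit desktop today:

    # Approximate amount of display data to process, then vs now.
    text_1995 = 80 * 25 * 2              # char + attribute byte   -> ~4 KB
    gfx_1995  = 640 * 400 * 4 // 8       # 16 colours = 4 bpp      -> ~128 KB
    gfx_now   = 1920 * 1080 * 32 // 8    # 32 bpp                  -> ~8.3 MB
    print(text_1995, gfx_1995, gfx_now)
    print(gfx_now / gfx_1995)            # roughly 65x more data per frame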
RAM access takes on the order of tens of nanoseconds now; it took longer back then, but not very much longer. Your sixteen cores can do an awful lot as long as they don't need to access RAM, and I doubt you need sixteen cores to render a web page. The cores are fast, but their speed just removes them further from being the bottleneck; it doesn't really speed up the display of text like this page.
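To put that in cycles, assuming a full cache miss costs somewhere around 70 ns in both eras (a ballpark figure, not a measurement):

    # Cycles spent stalled on one cache miss, then vs now.
    miss_ns = 70                      # assumed miss latency, similar in both eras
    print(133e6 * miss_ns * 1e-9)     # ~9 cycles stalled on the 133 MHz Pentium
    print(3.6e9 * miss_ns * 1e-9)     # ~252 cycles stalled on a 3.6 GHz core

The faster the core, the more of its potential work a memory stall throws away, which is why clock speed alone does so little for memory-bound jobs like laying out a page of text.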
And then there's the latency — ping times are a little closer to the speed of light, but haven't shrunk by anything close to a factor of 400.
It's also driven by a graphics card with more RAM than I had HDD space back in the day, with a dedicated processor that's also a whole lot faster than 133MHz.
Every piece of hardware is better, but software has bloated up to absorb those speed gains, except when it comes to things like AAA games, where they're still pushing the envelope. That's the only place you can actually tell you've got hot new hardware, because they're the only ones who care enough about performance.
The increase in video RAM requires more of the CPU and GPU. Downloading and displaying a JPEG at today's resolution requires more of everything, not just video RAM.
Anyway, if you come over to the server side you can see other code that performs very differently than it could have back in 1983. Sometimes unimaginably different: how would a service like tineye.com have been implemented a few decades ago?
The point I'm making is that my desktop PC sitting in my office now does have more of everything compared to my 133MHz PC from 1995. Not everything has scaled up at the same pace, sure, but literally every piece of hardware is better now.
People talk about the difference in resolution and color depth? 640x480x16 isn't that much less than 1920x1080x32. My current resolution has 13 times more data than my 1995 one, and my HW can handle refreshing it 120 times per second, filling it with millions of beautiful anti-aliased polygons all interacting with each other with simulated physics and dozens of shaders applied, while calculating AI behaviour, path finding and thousands of RNG rolls, streaming data to and from disk, and syncing everything over a network which is still limited by the speed of light. As long as I play Path of Exile, that is.
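Taking those pixel formats at face value, the 13x is just the ratio of raw framebuffer bits:

    # Where the "13 times more data" comes from.
    bits_1995 = 640 * 480 * 16
    bits_now  = 1920 * 1080 * 32
    print(bits_now / bits_1995)    # ~13.5x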
Opening desktop software is perceptually the same as in the 90s. From launch to usable state takes about the same amount of time, and current software isn't doing so much more that that would explain why it takes so long.
If I can play Path of Exile at 120fps it's obviously not an issue of HW scaling or not being able to achieve performance.
Who knows? My hunch is there are two main factors influencing this. The first is that constraints breed creativity. If you know you only have 133MHz on a single CPU you squeeze as much as possible out of every cycle; on modern CPUs, what's a few thousand cycles between friends?
The second is SDK/framework/etc. bloat, which is probably influenced by the first. With excess cycles you don't care if your tools start to bloat.
I think it's primarily an issue of attitude. If you want to write fast software you'll do it, regardless of the circumstances. It all starts with wanting it.
I worked on a framework in the nineties and did such things as render letters to pixels. Here are some of the optimisations we did then, compared to now:
We used much lower output resolution.
We used integer math instead of floating point, reducing legibility. How paragraphs were wrapped depended on whether we rendered them on this monitor or that, or printed them (there's a rough sketch of why below this list).
We used prescaled fonts instead of freely scalable fonts for the most important sizes, and font formats that were designed for quick scaling rather than high-quality results. When users bought a new, better monitor they could get worse text appearance, because no longer was there a hand-optimised prescaled font for their most-used font size.
We used fonts with small repertoires. No emoji, often not even € or —, and many users had to make up their minds whether they wanted the ability to type ö or ø long before they started writing.
Those optimisations (and others; that list is far from complete) cost a lot of time for the people writing code or manually scaling fonts, and led to worse results for the users.
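To make the integer-math point concrete, here's a minimal sketch (with a made-up glyph width and column size) of how rounding glyph advances to whole device pixels makes the same text wrap differently on a monitor and a printer:

    # Hypothetical illustration: integer-rounded glyph advances wrap lines
    # differently per device, which is why screen and print disagreed.
    def chars_per_line(dpi, integer_math):
        advance_pt = 6.4                       # one glyph's advance in points (made up)
        advance_px = advance_pt * dpi / 72.0   # convert points to device pixels
        if integer_math:
            advance_px = round(advance_px)     # the 90s shortcut
        column_px = int(6.5 * dpi)             # a 6.5-inch text column
        return int(column_px // advance_px)

    for dpi in (96, 300):                      # monitor vs laser printer
        print(dpi, chars_per_line(dpi, True), chars_per_line(dpi, False))

With floating point both devices fit the same number of characters per line; with integer math the monitor and the printer disagree, so the line breaks move around.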
I think you're the kind of person who wouldn't dream of actually using anything other than antialiased text with freely scalable fonts and subpixel interletter spacing. You just complain that today's frameworks don't provide the old fast code that you wouldn't use, and think developers are somehow to blame for not wanting to write that code.
Perfectly well? Really? Scrolling around the map or zooming causes you no rendering delays or artefacts at all? It feels consistently snappy no matter what you do?
Apparently Microsoft Autoroute was first released in 1988, covered several dozen countries, and could be obtained by ordering it. Thus using it for the first time would involve a delay of at least a day in order to order, receive and install the program. After that, starting it should be quick, but I can't tell whether it required inserting the CD into the drive. Even if the application is already installed on the PC and not copy-protected, looking something up doesn't sound obviously faster than opening a web page, typing the name of the location into the search box, and waiting for the result.
And you had to wait for new CDs to arrive by mail whenever roads changed. And I'm not talking "2-day Amazon delivery with UPS updates" here; I'm talking you put an envelope + check in the mail and maybe a month from now you get a CD back.
It didn't actually work that well. The internet is the only real way you can get a Google Maps-like autoroute feature with reasonable update times. Constantly buying new CDs and DVDs on a subscription basis is a no-go for sure, and I don't think anyone's 56k modem was fast enough to update the map data.
Even if you bought the CDs for the updated map data, it was only updated every year IIRC. So there were plenty of roads that simply were wrong. It's been a long time since I used it, but Google Maps is better at the actual core feature: having up-to-date maps and up-to-date route information.
Hint: Microsoft wasn't sending around Google cars to build up its database of maps in the 90s. Nor were there public satellite images released by the government to serve as a starting point for map data. (Satellite imagery was pure spycraft. We knew governments could do it, but normal people did NOT have access to that data yet). The maps were simply not as accurate as what we have today, not by a long shot.
--------
Has anyone here criticizing new software actually lived through the 90s? Like, there's a reason typical people didn't use Microsoft Autoroute and other map programs. Not only was it super expensive, it required some computer know-how that wasn't really common yet. And even when you got everything lined up just right, it still had warts.
The only things from the 90s that were unambiguously better than today's stuff were, like... Microsoft Encarta, Chip's Challenge and Space Cadet Pinball. Almost everything else is better today.
With browsers, network latency ultimately caps the speed, no matter how fast a CPU you have. Also, HDDs/SSDs are very slow compared to the CPU caches. Granted, PCs of the previous era had the same limitation, but their processors were not fast enough to have run a browser 400 times faster even if the HDD hadn't been there.
But other, simpler programs should be very much faster. That they perceptually aren't is because (IMO) code size and data size have increased almost exponentially while CPU caches and main memory speeds haven't kept up.
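Ballpark latency numbers (orders of magnitude only, not measurements) show how much a fast core sits idle once it has to leave its caches:

    # Rough latency hierarchy, expressed in 3.6 GHz clock cycles.
    latency_ns = {
        "L1 cache hit":              1,
        "main memory":             100,
        "SSD random read":     100_000,
        "HDD seek":         10_000_000,
        "network round trip": 30_000_000,
    }
    for name, ns in latency_ns.items():
        print(f"{name:>20}: ~{ns * 3.6:>13,.0f} cycles")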
Main memory speeds haven't increased like CPU speeds have, but they're nowhere near as slow as they were in 1995. You can get CPUs today with larger caches than I had RAM back then, as well.
I know that CPU speed isn't everything and so a 400x speedup is not reasonable to expect. That's why I hedged and said 50x.
Every part of my computer is a lot faster than it was back then, and I can barely tell unless I'm using software originating from that era; that code was written to run on constrained hardware, which means it's flying right now.
It's like we've all gone from driving 80 MPH in an old beater Datsun to driving 30 MPH in a Porsche, and we just shrug because this is what driving is like now.