Well, Valve got seriously concerned about the Windows Store, like, a decade ago, since that could have reduced the stranglehold of Steam on the gaming marketplace.
Turns out that the usual Microsoft incompetence-and-ADHD has kind-of eliminated that threat all by itself.
Also: turns out that, if you put enough effort into it, Linux is actually a quite-usable gaming platform.
Still: are consumers better off today than in the PS2 era? I sort-of doubt it, but, yeah, alternate universes and everything...
I believe Valve's concerns went (or maybe go?) beyond just the Windows Store, and into "We believe Microsoft may become unable to ship a good Operating System in the future".
In a 2013 interview with Gabe Newell: "Windows 8 was like this giant sadness. It just hurts everybody in the PC business. Rather than everybody being all excited to go buy a new PC, buying new software to run on it, we’ve had a 20+ percent decline in PC sales — it’s like 'holy cow that’s not what the new generation of the operating system is supposed to do.' There’s supposed to be a 40 percent uptake, not a 20 percent decline, so that’s what really scares me. When I started using it I was like 'oh my god...' I find [Windows 8] unusable." [0]
The Windows Store probably was a part of it, sure, but looking at that quote from 2025, after having your SSD broken, your recovery environment unusable and your Explorer laggy? It's quite bittersweet.
Outside of XBox, Minecraft, and journalists trying it out, I don't think I've heard of anyone using the Microsoft store.
The Wikipedia page has quite the description of the view from within Microsoft:
> Phil Spencer, head of Microsoft's gaming division, has also opined that Microsoft Store "sucks". As a result, Office was removed as an installable app from the store, and made to redirect to its website.
I actually use the Microsoft store before looking elsewhere for software. It’s basically a package manager with minimal jank. It’s there on a new install and it works. It sucks that they don’t let you add other sources though.
Having an app from an exe installer sucks because you either have to update it manually, or it uses resources to check for updates while you’re using it. With the Windows Store I can update everything at once and don’t need a million individual update checks on startup.
Yes. At least on 11 Pro installs, you can just say you'll be joining a domain, create a local admin account, and never actually join one. Then, to create other users, you can do it via the command line, or probably through the GUI after telling it you don't want a Microsoft account a couple of times.
I tried to use Microsoft's Game Pass and the Xbox store on a Windows machine with multiple users.
It was astoundingly unusable for sharing Microsoft's own game within my own household with my own family members. Completely broken user experience.
It's not hard to believe that Steam was able to thrive because Microsoft has just done an amazingly bad job with this. I've been in software dev for 20 years and it still baffles me that companies with tens of thousands of engineers can produce such shitty software experiences.
It's not the engineers at fault here but the C-suite: people who are out of touch, stuck up their own arse within their own world, believing in their own delusional vision even when it's based on nothing more than what their four-year-old toddler likes.
I'd say engineers are at fault for bugs and performance issues, as well as poor UX (not counting what's made to sell you something or collect your data)
It can be the engineers' issue, sure. Hire the wrong crew and you're sunk. However, and I might be biased here, what do you do when the higher-up execs don't give you the time / space to fix the bugs and performance issues? No one writes genius code from day one of a project.
I've had to aggressively pitch to execs who totally ignored the fact that $app was vulnerable (something that would result in fines) and that, if we optimized it, it could be pushed further to milk more money by offering X feature. I was denied because it was "a waste of time and cost" and "didn't provide anything for the company".
After finally persuading them, and getting the classic response of "Oh! Why didn't you say so?", I was fired three weeks later for making the company waste money. This wasn't a small company in an industrial park.
Ever since I've turned down jobs that smell like toxicity. You can sort of see the companies stink when you enter reception.
> "Also: turns out that, if you put enough effort into it, Linux is actually a quite-usable gaming platform."
Valve is the one putting in the effort and paying for it at their own expense. If they ever lose interest in paying for it, like GabeN retiring and Ebenezer Scrooge replacing him, then it's game over for Linux gaming (literally).
Valve would recoup the cost from a bigger customer base, as well as paying it as insurance against Windows/Microsoft targeting them as an existential threat.
It's cheap for what they're getting. And iirc, it being open source means the foundation could be built upon by others if they do decide to call it quits.
That would make very little sense business-wise. Steam “consoles” are not a big break just for Linux, but also for Valve.
What could easily happen though is locking down their consoles once they get profitable.
"Sense business wise" seems to vary quite a bit nowadays, at least every other day there's a headline of a company on here doing something almost exclusively for short-term value at the detriment to long-term health.
> Well, Valve got seriously concerned about the Windows Store, like, a decade ago, since that could have reduced the stranglehold of Steam on the gaming marketplace
Microsoft telegraphed its intention to kill Steam. The plan was a hermetically sealed ecosystem where only cryptographically signed code could run on Windows computers, from UEFI boot to application launch. This meant users would only run software Microsoft let them, and there was no room for the Steam store in Microsoft's vision of the future then.
They're in the gradual process of open-sourcing their driver stack by moving the bits they want to keep proprietary into the firmware and hardware, much like AMD did many years ago.
It takes a long time to become mature, but it's a good strategy. NVIDIA GPUs will probably have pretty usable open-source community drivers in 5 years or so.
And yet, not much has changed in that decade, right? Well, other than the Steam Deck, which is a well-defined set of hardware for a specific purpose, and which is the main driver for Linux game compatibility...
And that's great! But for a random owner of random hardware, the experience is, well... same as it ever was?
The experience on random hardware in 2025 is nowhere close to what it was in 2015. Have you tried it recently? In 2025 I can install pretty much any game from Steam on my Linux desktop with an nvidia gpu and it just works. The experience is identical to Windows.
The 2015 experience was nothing like this, you'd be lucky to get a game running crash-free after lots of manual setup and tweaking. Getting similar performance as Windows was just impossible.
> But for a random owner of random hardware, the experience is, well... same as it ever was?
Far from it... the only area where you tend to see much issue with a current Linux distro is a few wifi/bt and ethernet chips that don't have good Linux support. Most hardware works just fine. I've installed Pop on a number of laptops and desktops this past year and only had a couple of issues (wifi/bt, and ethernet); in those cases it's either installing a proprietary driver or swapping the card for one that works.
Steam has been pretty great this past year as well, especially since Kernel 6.16, it's just been solid AF. I know people with similar experience with Fedora variants.
I think the Steam Deck's success with Proton, and what that means for Linux all around, is probably responsible for at least half of those who have tried/converted to Linux in the past couple of years. By some metrics that's as much as 3-5% in some markets, which, while small, is still a massive number of people: 3-5 million regular users of desktop Linux in the US alone. That's massive potential. And with the groundwork that has been laid for Flatpak and Proton, there's definitely some opportunity for early movers in more productivity software categories, not just open source.
Gaming on Linux in 2015 was a giant PITA and most recent games didn't work properly, or didn't work at all, through Wine.
In 2025 I just buy games on steam blindly because I know they'll work, except for a handful of multiplayer titles that use unsupported kernel level anticheat.
>And yet, not much has changed in that decade, right?
the performance difference between SteamOS and Windows did
>Well, other than the Steam Deck, which is a well-defined set of hardware for a specific purpose, and which is the main driver for Linux game compatibility...
>And that's great! But for a random owner of random hardware, the experience is, well... same as it ever was?
the 2025 ars technica benchmark was performed on a Legion Go S, not on a steam deck
I'm all for MS bashing and laughing at their incompetence, but was there really any threat there? I don't know anyone on PC who was interested in buying a game anywhere other than Steam in 2015.
It was specifically the release of Windows RT (Windows 8 on ARM) in 2012 that had people nervous that Microsoft wanted to lock Windows down long-term in the manner of iOS and Apple. Windows RT only ran code signed by Microsoft and only installed programs from Microsoft's store. It failed, and Microsoft let off the gas locking down Windows, but that moment was probably the specific impetus for Gabe Newell to set Valve on a decade long course of building support for Steam and the games in its storefront on Linux. Windows being locked down to the degree of iOS was an existential risk to Valve as a company and Steam as a platform in 2012. It isn't anymore.
Windows RT also drew ire from people other than Newell at the time IIRC. It was widely perceived as a trial balloon for closing down Windows almost completely. The first Steam Machines a decade ago were Valve's answering trial balloon. Both failed, but Valve learned and Microsoft largely did not... They haven't locked down Windows 11 to the point of Windows RT, but they're abusing their users to the point of potentially sabotaging their own market dominance for consumer PCs.
Yes. We could have had Windows on ARM ten years earlier, but Microsoft tried to use the platform transition as an opportunity for lock-in. Fortunately this meant there were no apps and basically zero take-up of Windows RT.
People feared that MS would make installing things from outside the store harder, like what Apple is doing. It posed a serious potential threat, given that MS had complete control over Windows, DirectX and many other tools developers were using.
They still could be a threat. Microsoft is one of the biggest publishers, and they can lock everything from their studios into the Xbox app store or Game Pass if they feel like it.
I buy games on GoG when I can, Steam when I have to. I have nothing against Steam, but they do have a near monopoly position on PC. Unfortunately the non-GoG alternatives are from even worse actors.
> Well, Valve got seriously concerned about the Windows Store, like, a decade ago...
Yeah, I briefly addressed that concern in the article as a comparison to Facebook; probably could've expanded on it, but it was already quite long and didn't feel like it fit naturally into the topic at hand
That wasn't meant as a criticism, more like some additional context. With how irrelevant the Microsoft Store is these days, I can't blame anyone for skipping over it...
I'm impressed they even managed to create a game subscription that works on both PC and Xbox. It felt too much like Xbox was made by a different company than Windows for a long time. Remember Games for Windows Live?
They probably mean that with some projects, Microsoft builds something half-assed and then moves on to something else. Instead of sticking with the project, evaluating it critically, and committing to fixing whatever sucks about it until it's high quality.
One could get that impression from the Windows Store/Microsoft Store. And also the state of the Settings UI for at least the past 13 years - Windows 8 moved a small fraction of Settings to Metro design, but 13 years later there are still some pieces of Windows 7 UI left.
Or the Edge browser fiasco - how can a company as large as Microsoft conclude "eh, I guess we just can't have a browser that works well enough for enough of the web to be competitive, let's just give up and do a Chrome branch"
Or the Kin phone: "we launched this 4 weeks ago and I guess it sucks, let's just pull the plug and never mention this again"
Or Windows features like HomeGroup, Libraries, and Windows Home Server - they're around for a few years, then someone decides "we don't really care about this" and dumps them.
LLMs know nothing about Unpoly, and quite a bit about htmx. This requires you to actually learn Unpoly, because, well, even pointing your LLM-of-choice at the Unpoly docs (which are quite okay!) makes it regress-to-the-ugly-Javascript-workarounds-mean pretty much on try #1.
I'm not yet sure whether this is a good thing or not -- I'll let you know once my latest iteration of my web framework is finally working as I envisioned, he-said sort-of-jokingly, which should be Soon Now.
But yeah, either alternative still beats React by a country mile, since everything related to that descends into madness right away.
I don’t think there is anything in unpoly that a good llm couldn’t figure out with a look over the docs pretty quickly. It’s pretty simple and has some great functionality, especially if you are shooting for progressive enhancement.
Well, I actually use Unpoly, and I can assure you that LLMs don't get it, no matter how many pointers to the (excellent!) docs one includes.
Like, even just now, Claude Code with Opus 4-dot-latest, is absolutely convinced you need a bunch of fragile cascading Javascript listeners to dismiss a lower-level menu in case a dialog is opened, while the Unpoly docs, correctly and clearly, point out that 'shatter' exists for just that purpose.
And this is one of the use cases that I continue to highlight as the Achilles heel of LLMs. I'm not holding it wrong: they're not reading it right.
The 'recent graduates' quoted in this article all seem to be from (for lack of a better description) 'developing countries' hoping to get a (again, generalizing) 'high-paying FAANG job'.
My initial reaction would be that these people, unfortunately, got scammed, and that the scammers-promising-abundant-high-paying-jobs have now found a convenient scapegoat?
AI has done nothing so far to reduce the backlog of junior developer positions from where I can see, but, yeah, that's all in "Europoor" and "EU residency required" territory, so what do I know...
For the last few decades it's been offshoring that filled the management agenda in the way AI does today, so it doesn't seem surprising to me that the first gap would be in the places you might offshore a testing department to, etc.
Offshoring has the exact same benefits/problems that AI has (i.e: it's cheap, yet you have to specify everything in excruciating detail) and has not been a significant factor in junior hiring, like, ever, in my experience.
My experience is that it's not a reduction in work in the place doing the offshoring, but that it changes the shape of the labor market, certainly in the places being offshored to. Replace offshoring with something cheaper and a lot of juniors in top offshore locations are the quickest to feel it. Local juniors might be worth hiring again, even if they need a lot of oversight, once agents make them questionably productive.
Currently helping with hiring and can't help but reflect on how it has changed over the past couple of years. We are now filtering for much stronger candidates across all experience levels, but the junior side of the scale has been affected much more. Where previously we would take the top 5% of junior applicants that made it past the first phone screen, now it's below 2%.
"This article was amended on 26 June 2025 to clarify that the link between AI and the decline in graduate jobs is something suggested by analysts, rather than documented by statistics"
Plus, that decline seems specious anyway (as in: just-about visible when you only observe the top 5% of the chart), and the UK job market has always been very different from the EU they left behind.
Again, in my experience, that simply never happened, at least not with regard to junior positions.
During COVID we were struggling to retain good developers that just couldn't deal with the full-remote situation[1], and afterwards, there was a lull in recent graduates.
Again, this is from a EU perspective.
[1] While others absolutely thrived, and, yeah, we left them alone after the pandemic restrictions ended...
Huh. It sounds like your perspective isn't just EU focused but N=1, based solely on your company.
The post-pandemic tech hiring boom was well documented both at the time and retrospectively. Lots of resources on it available with a quick web search.
I never claimed a broad perspective. But I've yet to see a "post-pandemic hiring boom" anywhere in junior-level-IT jobs in the EU, and a quick trip to Google with those exact words turned up nothing either.
Malware in build scripts/dependencies. That's not exclusively credential/crypto-stealers, there's apparently also a healthy demand for various types of spam straight from corpo gateways...
Well, if Apple were really clever, they'd have introduced an 'EU DMA CAPTCHA' by now, requiring anyone EU-adjacent-resident to mark all the evil EU bureaucrats in a picture of a room before allowing them to resume their doomscrolling.
I mean, it absolutely worked for effectively sinking the GDPR, where pretty much everyone now equates that law with obnoxious 'cookie banners', to the point that these regulations are being relaxed, despite never requiring these banners in any way, shape or form in the first place.
But, yeah, despite that, I'd say they'll get away with this as well...
> I mean, it absolutely worked for effectively sinking the DMCA, where pretty much everyone now equites that law with obnoxious 'cookie banners', to the point that these regulations are being relaxed.
I don't think the DMCA has anything to do with that, though I do wish everyone hated it. You probably meant GDPR.
It doesn't require consent for cookies or similar data that are strictly necessary to do what the user has asked for - a token for logging in or the contents of a shopping cart are the two canonical examples.
It certainly does require informed consent in other situations though and the dreaded cookie banners were the industry's attempt to interpret that legal requirement.
No, it's entirely accurate. Nothing in the GDPR requires 'cookie banners', and your Wikipedia link doesn't 'dispel' that 'misconception', but nice try...
My point is that it was never the GDPR that required any sort of "cookie banner" in the first place.
The cookie banner requirement is itself a widespread misconception because the actual rule is neither specific to cookies (it would also cover other locally stored data) nor universal (for example it doesn't require specific consent for locally storing necessary data like session/login mechanics or the contents of a shopping basket).
The requirements for consent that do exist originate in the ePrivacy Directive. That directive was supposed to be superseded by a later ePrivacy Regulation that would have been lex specialis to the GDPR - possibly the only actual link between any of the EU law around cookies and the GDPR - but in the end that regulation was never passed and it was formally abandoned earlier this year.
So for now rules about user consent for local data storage in the EU - and largely still in the UK - do exist but they derive from the ePrivacy Directive and they are widely misunderstood. And while there has been a lot of talk about changes to EU law that might improve the situation with the banners so far talk is all it has been.
The alternatives are not so viable. Librewolf is nice, but is still just 99% Firefox. Same with the Chromium-based browsers. We need real alternatives in this space.
The capital intensity of browsers is so high that there won't be any alternatives unless standards evolve to be simpler and easier to implement. That won't happen while Google and co are driving.
> unless standards evolve to be simpler and easier to implement.
It's almost impossible for people to prevent themselves from saying "we could do just a _bit_ more here" until over time they've built something complex enough they cannot manage it.
Reasonable! Anyone who cares about AD security has been AES-only for at least a year now, and most likely much longer, and it's not like these mitigations are especially hard, unless you're still running some seriously obsolete software.
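For anyone auditing this: the relevant bits of AD's `msDS-SupportedEncryptionTypes` attribute are published by Microsoft, so checking an account is just decoding a bitmask. A minimal sketch (fetching the attribute via LDAP or PowerShell is left out, and `weak_etypes` is a made-up helper name):

```python
# Decode the msDS-SupportedEncryptionTypes bitmask to spot accounts still
# advertising legacy Kerberos encryption types. Bit values are from
# Microsoft's documentation; weak_etypes() is an illustrative helper.
ETYPE_BITS = {
    0x01: "DES-CBC-CRC",
    0x02: "DES-CBC-MD5",
    0x04: "RC4-HMAC",
    0x08: "AES128-CTS-HMAC-SHA1-96",
    0x10: "AES256-CTS-HMAC-SHA1-96",
}

def weak_etypes(value):
    """Names of pre-AES encryption types enabled in the bitmask."""
    return [name for bit, name in ETYPE_BITS.items()
            if value & bit and bit < 0x08]

print(weak_etypes(0x18))  # AES-only account: []
print(weak_etypes(0x1C))  # AES plus RC4 still enabled: ['RC4-HMAC']
```

An AES-only fleet is simply every account reporting an empty list here.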
Yeah, lovely... But can we please stop retconning obsolete technology into something to strive for? The Epson, Tandy, Psion and Nokia almost-like-a-laptop systems of the time were pretty neat, but not magic.
Really: you could lock me into a room with just a pencil and a ream of blank sheets, and nothing of value would come out, and that's not because of the technology or the distractions, but just... well...
I know a few people who would love a device that gave them only the things they need and none of the rest. A great keyboard, enough room for writing.
I use an iPad with a keyboard when I need this kind of “writing room” thing, but I know someone who uses an ancient electronic typewriter.
FWIW when my disorganisation is catastrophic, I go out for a walk, leave my phone at home if I can, sit on a bench, and try to organise my life in one side of A4. And then if there’s a task that I can start by writing, I do it there, with a pen.
I fairly frequently leave my phone in the office and take a clipboard full of lined paper and a ballpoint to a place where I can write without access to the internet - I've got a number of published CS papers and at least one funded grant where a significant amount of writing was done in longhand on paper.
Of course this would require a bit of software work and maybe a brain swap to make it into the sort of portable typewriter that I'm really looking for, but given this as a starting point it should be fairly easy.
One question I have - what is the finished weight?
To each their own. If there were a Psion that supported modern email, calendar, and task standards, with wifi sync, I would carry it most days. I basically never make phone calls anymore, and I always found the old greyscale LCDs to be very legible.
Caveat: such a device should not be infested with shitty spyware like everything else these days.
Those don’t have physical keyboards, and all run Android which rules them out. I also prefer old school greyscale LCDs to e-ink, as e-ink has issues with ghosting and slow refresh.
The closest modern device is the Planet Computers PDA, which can run Linux, but it can’t run mainline Linux and it has a modern color screen that uses too much power.
I was going to mention Planet before I saw your follow-up comment. I bought their Gemini, and it seemed interesting for a while, but being a phone with a keyboard, it was still effectively a phone: battery life wasn't great, and since it was a phone, my default was never to shut it down, so it would always be out of juice if not used for a while. Eventually it outdated itself sitting in a drawer; it just didn't feel right. The external notification screen seemed like a good idea but was too clunky for general use, and then there was the awkward fingerprint sensor position and accidentally touching things when opening / closing. I was actually considering it during my post-BlackBerry withdrawal period, but it just didn't cut it, and while it had the roots behind it and some seemingly nice productivity software, a Psion it just wasn't.
Pairing is a pain, charging is a nuisance, battery life is a constant worry, responsiveness is dodgy... there is nothing good about it. Give me something built-in, cabled, and always-on.
I think this article goes a bit overboard with the negative language ('lies', 'fools'), especially since (auto)VACUUM and indexes really don’t have that much to do with each other: the former is indeed critical on PostgreSQL to ensure availability, but something of a niche feature for most other databases, while index maintenance is important regardless of platform.
For a certain class of applications ('SQLite level'), there’s not even much of that, though, other than ensuring there are no missing or obsolete indexes, which you can take care of with 15 minutes of quality time with the EXPLAIN statement every now and then.
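That quality time can be as simple as the following sketch, here using SQLite's `EXPLAIN QUERY PLAN` with made-up table and index names; the same routine applies to PostgreSQL's `EXPLAIN`, just with different plan output:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
            " customer_id INTEGER, total REAL)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail);
    # join the detail column into one inspectable string.
    rows = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(row[3] for row in rows)

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan: no index on customer_id yet

con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # the planner now picks up the new index

print(before)
print(after)
```

Spotting a table scan on a hot query, adding the obvious index, and confirming the plan changed really is most of the job at this level.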
When using a database with persistent index statistics (like SQL Server and Oracle and, yeah, PostgreSQL), it’s important to at least ensure those get updated on a regular basis (but that’s almost always automatic and sufficient unless you're prone to not-usually-done bulk operations) and to optimize or rebuild the underlying tree on a semi-regular basis. This does require some additional non-default setup and monitoring, and can be surprising when you first encounter it.
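SQLite is actually the odd one out here in that its statistics are only updated when you ask; a quick sketch of what "persistent index statistics" means there (the other engines automate the equivalent of this step):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
con.execute("CREATE INDEX idx_t_a ON t (a)")

# Before ANALYZE the planner has no stats at all; running it populates
# the persistent sqlite_stat1 table with per-index row counts, which the
# planner then consults until the next ANALYZE.
con.execute("ANALYZE")
stats = con.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(stats)
```

The surprise mentioned above is exactly this: those stored numbers go stale after bulk operations until something refreshes them.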
But it’s not exactly an obscure-slash-secret bit of DBA lore either, unlike what's suggested here...
Author here - thank you for the comments. This article is indeed playing a lot on the verge of clickbait, and I did ask about that shortly after publishing.
No worries -- not publishing at all is worse than publishing disliked content (well, to a certain extent), so keep reading that feedback, but don't be too discouraged by it!
There is a bunch of AI slop in there ... It does seem like the author probably knows what he's talking about, since there is seemingly good info in the article [1], but there's still a lot of slop
Also, I think the end should be at the beginning:
> Know when your indexes are actually sick versus just breathing normally - and when to reach for REINDEX.
> VACUUM handles heap bloat. Index bloat is your problem.
The intro doesn't say that, and just goes on and on about "lies" and stupid stuff like that.
This part also feels like AI:
> Yes. But here's what it doesn't do - it doesn't restructure the B-tree.
> What VACUUM actually does
> What VACUUM cannot do
I don't necessarily think this is bad, since I know writing is hard for many programmers. But I think we should also encourage people to improve their writing skills.
[1] I'm not an SQL expert, but it seems like some of the concrete examples point to some human experience
Author here – it’s actually funny, as you pointed out parts that are my own (TM) attempts to make it a bit lighthearted.
LLM is indeed used for correction and improving some sentences, but the rest is my honest attempt at making writing approachable. If you’re willing to invest the time, you can see my fight with technical writing over time if you go through my blog.
(Writing this in the middle of a car wash on my iPhone keyboard ;-)
I'm not exactly sure what's going on with this article and whether it's just the language barrier or something else, but... it doesn't make an awful lot of sense?
'Tricks' like 'not including too many comments' were already well-known from day one of the ZX line (which started around 1980) because, well, you had 1K, 16K or 48K of RAM to work with, so every character counted!
Also, you were painfully aware of the performance of inner/outer loops, because, absolutely, a sub-3 MHz clock speed doesn't leave many other options. Other than to migrate to assembly coding, which was where most serious Sinclair coding took place.
The article is right about one thing, though: the Sinclair BASIC interpreter was a work of minimalist art, as was the hardware. "Sure, let's multiplex the audio-in line with the video sync signal, so we can save a pin on the ULA" is not something that gets a lot of consideration these days...
"You can poke new values into the start-of-program pointer to speed up jumps to the end of the program, and speed up loop execution" is pretty tricky. So's "you can poke new values into another pointer to jump to arbitrary statements within a multi-statement line, and here[1] is a 3d maze program that abuses the heck out of this to become a one-liner".
> 'Tricks' like 'not including too many comments' were already well-known from day one of the ZX line (which started around 1980) because, well, you had 1K, 16K or 48K of RAM to work with, so every character counted!
But it was new to me that the GO TO/GO SUB target's position in the program listing matters because the interpreter does a linear search (so the 'no comments' advice isn't only about wasting RAM). And I toyed with a ZX Spectrum as a child.
The key problem is that GO SUB and GO TO are not instantaneous like a CALL or JP instruction would be in Z80. The BASIC interpreter does a linear scan through the entire source code until it reaches the specified line number, every time you want to jump or call.
That's why this article is called "Efficient Basic Coding"... all the "tricks" are about moving the most frequently called code to the lowest line numbers, so the interpreter spends as little time as possible scanning the source code for destination line numbers.
The second article in the series is on the same theme: variables aren't indexed either, so the interpreter scans through each variable in turn until it finds the one with the name referenced in the source code... again, you want to define them in order of most frequently accessed.
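The cost model is easy to demonstrate; here is a toy simulation (not the actual ROM routine) of the linear scan GO TO performs:

```python
# Toy model of Sinclair BASIC's GO TO: every jump re-scans the program
# from the first line until the target line number is found.

def goto_cost(line_numbers, target):
    """How many line headers the interpreter examines to reach `target`."""
    for steps, line_no in enumerate(sorted(line_numbers), start=1):
        if line_no == target:
            return steps
    raise ValueError("line not found")

program = list(range(10, 10010, 10))  # a 1000-line program, numbered 10..10000

cheap = goto_cost(program, 10)      # hot subroutine at the top: 1 step
dear = goto_cost(program, 10000)    # same code at the bottom: 1000 steps, per call
```

A subroutine called inside an inner loop thus pays that full scan on every single iteration if it lives at the end of the listing, which is exactly why the article's "tricks" amount to sorting code by call frequency.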