Drivers are difficult if you need to support lots of them. If you pick just one or a few pieces of hardware, it should be fairly straightforward. Target only VMs, for example, and you probably cut away 99% of the driver complexity.
For my case, I am planning to re-implement them. I like doing this.
I sure am not going to be able to re-implement everything myself, though. I will concentrate on what I need, and I will consider implementing others if anyone other than me is willing to use the OS (which would be incredible if it happened).
I clearly understand nothing of this, but I've always felt confused about it. Why won't Linux aim for ABI stability? Wouldn't that be a win for everyone involved?
It's cyclic logic: you're wrong for wanting a stable kernel interface, because the kernel keeps changing, so the solution is to just get your code merged into mainline. As a tautology it's true, but it's also a cover for "because we don't want to".
See Windows or Android GKI for an existence proof that it can be done if so motivated.
From what I understood, I think the big difference here is the human factor: Windows and Android are maintained by employees, who have no choice but to work on things even if they don't like doing it. Linux, on the other hand, is a collective effort of people doing what they want to do in their free time.
They come from employees of various companies, not employees of some company that owns Linux itself. All those various companies have different goals, and are only contributing because doing so is in their economic interest. Having a stable ABI would require a lot more work, and these companies have zero incentive to invest in this effort: it doesn't help their profitability, it just makes things easier for others outside their companies. Arguably, it also makes the kernel code a lot more complicated and bug-prone.
It makes sense for MS, for example, to want a stable ABI to make things easier for 3rd-party devs so they'll target that platform, and for MS to shoulder the effort of maintaining that ABI cumbersomeness. It doesn't really make sense for Linux. You could argue that this hampers adoption of Linux as an alternative to Windows on desktop machines, but even if that's true, no one involved really has an economic incentive to change this. In the places where Linux is dominant (Android + servers + embedded), a stable ABI isn't really helpful or needed.
TL;DR: maintaining a stable driver ABI is more work because you have to deal with backwards compatibility, and it mainly benefits vendors that don't make their drivers open source.
So the Linux devs are really against it both from a lack of resources point of view, and from an ideological "we hate closed source" point of view.
Unfortunately, most vendors with closed source drivers don't give a shit about ideology and simply provide binaries for very specific Linux releases. That means users end up getting screwed because they are stuck on those old Linux versions and can't upgrade.
The Linux devs have this strange idea that vendors will see this situation as bad and decide that the best option is to open source their code, but that never happens. Even Google couldn't get them to do that. This is one of the main reasons that Android OS updates are not easy and universal.
> My understanding is that the userspace interface is extremely stable.
Assuming you compile a static binary that doesn't even rely on libc, then yes, something compiled 20 years ago will still run.
But in the real world you have to recompile your software constantly due to breakage in dependencies (including glibc). And a binary won't work across distros without extra effort, despite running on the same kernel (sure, that part isn't the kernel's fault).
It's often memed that the most stable interface on Linux is win32 (via wine), and that meme isn't entirely off-base.
I thought that unless you're relying on buggy behaviour, or other implementation-specific details which are explicitly and prominently documented as unsupported and/or subject to change[0], any binary compiled for Linux libc6/glibc2 (released in Jan '97, available in e.g. Debian "Hamm" in July '98) should still run with the most recent glibc today.[1] Is that not right?
[0] e.g. if your app has a use-after-free bug which happened to work 20 years ago, it may not work any more. Although SimCity famously had a bug like this on Windows, and Microsoft put in a SimCity-specific "shim" to ensure it would continue working when they changed Windows' allocator, if your app is not as popular as SimCity was then it probably won't be as lucky, even on actual Windows.
[1] for the same architecture, obviously. Your i386 app won't run with an s390x or amd64 glibc.
Aren't most drivers kernel modules? In theory, the goal to aim for is for Maestro to be able to compatibly load C Linux kernel modules. Then, whether the driver module is written in C or Rust is orthogonal to which kernel is used.
(Just bs'ing here, haven't written drivers in over a decade. What other complexity am I missing?)
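For reference, the shape of the thing being loaded — a minimal hello-world Linux module (a sketch; this is kernel-space code built against kernel headers via kbuild, not a standalone program):

```c
/* Sketch of a minimal loadable Linux kernel module. */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");

static int __init hello_init(void)
{
    pr_info("hello from a loadable module\n");
    return 0; /* nonzero would abort the load */
}

static void __exit hello_exit(void)
{
    pr_info("goodbye\n");
}

module_init(hello_init);
module_exit(hello_exit);
```

The complexity being missed is that loading modules compatibly means matching more than this entry-point convention: the module links against in-kernel symbols (`pr_info`, locking primitives, every subsystem API it touches), and that in-kernel interface is exactly the part Linux declares unstable — which is why modules are normally rebuilt per kernel version.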