Unfortunately, some packages that are free software can only be found in nonguix, because Guix wants to be able to build everything from source. So things like Gradle, which require a huge bootstrap chain that no one has yet bothered to build, are instead packaged in nonguix as a binary download.
The Google Pixel 10 can give you notifications when your location is tracked in this manner as well. I turned it on and have been notified a few times.
It is interesting that we let this happen. Modern phones are very useful devices, but they're not really mandatory for the vast majority of people to actually carry around everywhere they go; in many cases they merely add some convenience or entertainment, and act to consolidate various other kinds of personal devices into just one. If you wanted, you could more often than not avoid needing one. Yet, we pretty much all carry one around anyways, intentionally, and this fact is somewhat abused because it's convenient.
Having watched a fair bit of police interrogation videos recently (don't knock it, it can be addicting) I realized that police have come to rely on cell phone signals pretty heavily to place people near the scene of a crime. This is doubly interesting. For one, because criminals should really know better: phones have been doing this for a long time, and privacy issues with mobile phones are pretty well trodden by this point. But for another, it's just interesting because it works. It's very effective at screwing up the alibi of a criminal.
I've realized that serious privacy violations which actually do work to prevent crime are probably the most dangerous of all, because it's easy to say that because these features can help put criminals behind bars, we should disregard the insane surveillance state we've already built. It's easy to justify the risks this poses to a free society. It's easy to downplay the importance of personal freedoms and privacy.
Once these things become sufficiently normal, it will become very hard to go back, even after the system starts to be abused, and that's what I think about any time I see measures like chat control. We're building our own future hell to help catch a few more scumbags. Whoever thinks it's still worth it... I'd love to check back in in another decade.
The entire point of the free software movement is to promote free software principles and software rights. What I think many Linux distributions would prefer is a model where companies that do benefit from selling software and hardware fund them indirectly, so they can focus on continuing to promote free software in a more neutral way, without the pressures and potentially misaligned incentives that running a storefront can bring.
There are distributions like elementary OS which are happy to sell you things with this model, but I just don't think it's surprising that many distributions would actively prefer not to be in this position even if it leaves money on the table. This sort of principled approach is exactly why a lot of us really like Linux.
It's really unfortunate the term "free software" took off rather than e.g. "libre software", since it muddies discussions like this. The point of "free software" is not "you don't have to pay," it's that you have freedom in terms of what you do with the code running on your own machine. Selling free software is not incompatible with free software: it's free as in freedom, not as in free beer.
Nobody in this comments thread appears to be confused by or misusing the term "free software". We're talking about free software vs (commercial) proprietary software.
> I am still surprised most Linux Distros haven't changed their package managers to allow for selling of proprietary solutions directly
Free packages remain unaffected, but now there are optional commercial options you can pay for, which fund the free (as in free beer) infrastructure you already take advantage of so that these projects are fully sustainable. I imagine some open source projects could even set themselves up to receive donations directly via package managers.
I promise you, everybody understands the general idea, but adding a built-in store to your operating system is far from a neutral action that has no second- or third-order effects. It isn't that it somehow affects "free" packages. Incoming text wall, because I am not very good at being terse.
- It creates perverse incentives for the promotion of free software.
If development of the operating system is now funded by purchases of proprietary commercial software in the app store, it naturally incentivizes them to sell more software via the app store. This naturally gives an incentive to promote commercial software over free software, contrary to the very mission of free software. They can still try to avoid this, but I think the incentive gets worse due to the next part (because running a proper software store is much more expensive.)
Free software can be sold, too, but in most cases it just doesn't make very much sense. If you try to coerce people into paying for free software that can be obtained free of charge, it basically puts it on the same level as any commercial proprietary software. If the competing commercial software is "freemium", it basically incentivizes users to just go with the freemium proprietary option instead, which is not only not free software, but also often arguably outright manipulative to the user. I don't really think free software OS vendors want to encourage this kind of thing.
- It might break the balance that makes free software package repositories work.
Software that is free as in beer will naturally compete favorably against software that costs money, as the difference between $0 and $1 is the biggest leap. Instead of selling software you can own, many (most?) commercial software vendors have shifted to "freemium" models where users pay for subscriptions or "upsells" inside of apps.
In commercial app stores, strict rules and even unfair/likely to be outlawed practices are used to force vendors to go through a standardized IAP system. This has many downsides for competition, but it does act as a (weak) balance against abusive vendors who would institute even worse practices if left to their own devices. Worse, though, is that proprietary software is hard to vet; the most scalable way to analyze it is via blackbox analysis, which is easily defeated by a vendor who desires to do so. Android and iOS rely on a combination of OS-level sandboxing and authorization as well as many automated and ostensibly human tests too.
I am not trying to say that what commercial app stores do is actually effective or works well, but actually that only serves to help my point here. Free software app stores are not guaranteed to be free of malware more than anything else is, but they have a pretty decent track record, and part of the reason why is because the packaging is done by people who are essentially volunteers to work on the OS, and very often are third parties to the software itself. The packages themselves are often reviewed by multiple people to uphold standards, and many OSes take the opportunity to limit or disable unwanted anti-features like telemetry. Because the software is free, it is possible to look at the actual changes that go into each release if you so please, and in fact, I often do look at the commit logs and diffs from release to release when reviewing package updates in Nixpkgs, especially since it's a good way to catch new things that might need to be updated in the package that aren't immediately apparent (e.g.: in NixOS, a new dlopen dependency in a new feature wouldn't show up anywhere obvious.)
Proprietary software is a totally different ball game. Maintainers can't see what's going on, and more often than not, it is simply illegal for them to attempt to do so in any comprehensive way, depending on where they live.
If the distributions suddenly become app store vendors, they will wind up needing to employ more people full time to work on security and auditing. Volunteers doing stuff for free won't scale well to a proper, real software store. Which further means that they need to make sure they're actually getting enough revenue for it to be self-sustaining, which again pushes perverse incentives to sell software.
What they wanted to do is build a community-driven OS built on free software by volunteers and possibly non-profit employees, and what they got was a startup business. Does that not make the problem apparent yet?
- It makes the OS no longer neutral to software stores.
Today, Flatpak and Steam are totally neutral and have roughly equal footing to any other software store; they may be installed by default in some cases, but they are strictly vendor neutral (except, obviously, on SteamOS). If the OS itself ships one, it lives in a privileged position that other software stores don't. This winds up causing the exact same sorts of problems that occur with Windows, macOS, iOS and Android. You can, of course, try to behave in a benevolent manner, but what's even better than trying to behave in a benevolent manner is putting yourself in as few situations as possible where you would need to, in order to maintain the health of an ecosystem. :)
--
I think you could probably find some retorts to this if you wanted. It's not impossible to make this model work, and some distributions do make this model work, at least as far as they've gotten so far. But with that having been said, I will state again my strongly held belief that it isn't that projects like Debian or Arch Linux couldn't figure out how to sell software or don't know that they can.
I believe I have zero alkaline batteries left in my house and I'm relatively surprised that pretty much everything works fine. If anything, I suspect the only problem is that some devices have an inaccurate idea of how dead the batteries are. But I use Eneloops on everything, even things surely not designed at all to run on them. (And I reckon you could probably make more devices work if you really wanted to; adding an additional cell or two in series would surely give you a voltage that's in range, if you can figure out a good way to do it.)
Of course not all rechargeable batteries are the same; there are a few different rechargeable battery chemistries in the AA form factor. I like Eneloop Pros, though; they've been very reliable for me. I've been using them for years and I've never had to throw one out yet; supposedly they last over a thousand cycles with most of their capacity.
I think I have only one device that uses AA - my central heating's radio thermostat. This thing has caused me untold hassle, which is only partially down to the batteries, but still...
Totally OT, but does anyone have a good link on how the thermostat gets paired with the boiler? I'm thinking of getting it replaced and would like to talk to the gas fitter from a vaguely informed point of view.
Personally, I keep things simple. Got a new (pretty basic) Honeywell thermostat after a kitchen fire; thermostat was pretty old anyway. For wiring, you mostly have 2-wire and 3-wire although there are a lot of variations as you get fancier: https://nassaunationalcable.com/blogs/blog/a-full-guide-to-t...
The number of zones in the house may affect things, as may whether it's boiler-only or whether AC is in the picture as well.
Thermostats (aka space temperature sensors) can have between two and eight wires. A boiler will usually have three: 24V power, call for heat, and common.
If your boiler has inputs on the terminal block for a thermostat, I would highly recommend buying a wired one; the constant 24V power removes the need for batteries.
If you can provide a link to your boiler’s installation and operations manual, I can tell you.
> good link on how the thermostat gets paired with the boiler?
You should have a book with the boiler that says how your system is set up. They nearly always include schematics and are very helpful. Typically you can open a cover and see the wiring details as well.
Forget about web sites; there are too many different ways a system can be set up, so even if they are not slop they can still be inapplicable to you. Once you know what you are looking at you can sometimes get useful information from the web, but until then you can't sort out what is useful for you.
Yeah. Have your manuals handy if you get a furnace guy/electrician in. My electrician actually wired up my thermostat wrong when I got a new thermostat in.
Sounds like an awful idea in general. Think KISS concept. But you'd have to look at manuals. There's probably no single answer. (I was thinking of wireless control of the thermostat itself.)
I'd add that, where I live (New England), furnace failures can be basically catastrophic so any theoretical convenience advantages just aren't worth it.
Wireless thermostats are really common in the UK. I don't know about elsewhere. I'm interested how they pair (like Bluetooth) with the boiler.
Basically, the thermostat is in a living area and you set the temperature(s) to what you want, it senses them, and then talks to the boiler (in my case in the roof-space) via radio to heat up (or not) the water in the radiators to satisfy that. It's a feedback loop.
I'm maybe just a fuddy duddy but relying on wireless tech for critical systems unnecessarily seems like asking for trouble. Running wires to a boiler isn't generally that complicated and it's just one less point of failure.
That depends. Sometimes they're secure because they use encryption - it isn't hard to have shared keys of some sort. And if there is no internet connection on the link (which is possible - I've never seen a setup where both systems connect to WiFi, but if they do, worry!), you don't have to worry about malicious hackers, and in turn the encryption is good enough even if done somewhat wrong.
Sometimes they just send a radio signal and hope nobody else in range is using that frequency.
So again, there are too many different ways this is done to guess. Unfortunately they probably don't put this in the manual.
I have a weather station that takes two 1.2 V cells. The LCD screen is a bit dim compared to when used with fresh 1.5 V alkalines. Other than that, most things take the 1.2 V well. But they'd better, because alkalines drop to 1.2 V with >50% capacity left.
Yeah, most of the devices using 9V are smoke/CO detectors which only accept alkalines. I don't use the few remaining 9V devices enough to justify buying a new charger.
> Can the compositor replicate the input-to-pixel latency of uncomposited x11 on low-end devices or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
I think this is ultimately correct. The compositor will have to render a frame at some point after the VBlank signal, and it will need to compose into it the buffers that are on-screen as of that point, which will contain whatever was last rendered to them.
This can be somewhat alleviated, though. Both KDE and GNOME have been getting progressively more aggressive about "unredirecting" surfaces into hardware accelerated DRM planes in more circumstances. In this situation, the unredirected planes will not suffer compositing latency, as their buffers will be scanned out by the GPU at scanout time with the rest of the composited result. In modern Wayland, this is accomplished via both underlays and overlays.
There is also a slight penalty to the latency of mouse cursor movement that is imparted by using atomic DRM commits. Since using atomic DRM is very common in modern Wayland, it is normal for the cursor to have at least a fraction of a frame of added latency (depending on many factors.)
I'm of two minds about this. One, obviously it's sad. The old hardware worked perfectly and never had latency issues like this. Could it be possible to implement Wayland without full compositing? Maybe, actually. But I don't expect anyone to try, because let's face it, people have simply accepted that we now live with slightly more latency on the desktop. But then again, "old" hardware is now hardware that can more often than not, handle high refresh rates pretty well on desktop. An on-average increase of half a frame of latency is pretty bad with 60 Hz: it's, what, 8.3ms? But half a frame at 144 Hz is much less at somewhere around 3.5ms of added latency, which I think is more acceptable. Combined with aggressive underlay/overlay usage and dynamic triple buffering, I think this makes the compositing experience an acceptable tradeoff.
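For anyone who wants the arithmetic spelled out, here is a tiny back-of-the-envelope sketch (assuming the ~half-a-frame average figure above; the numbers are illustrative, not measurements):

    // Back-of-the-envelope: if compositing adds on average roughly half a
    // refresh period of latency, how much is that at common refresh rates?
    fn main() {
        for hz in [60.0_f64, 120.0, 144.0, 240.0] {
            let frame_ms = 1000.0 / hz;    // length of one refresh period in ms
            let added_ms = frame_ms / 2.0; // average added latency (~half a frame)
            println!("{hz:>3.0} Hz: frame = {frame_ms:5.2} ms, ~{added_ms:.2} ms added on average");
        }
    }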
What about computers that really can't handle something like 144 Hz or higher output? Well, tough call. I mean, I have some fairly old computers that can definitely handle at least 100 Hz very well on desktop. I'm talking Pentium 4 machines with old GeForce cards. Linux is certainly happy to go older (though the baseline has been inching up there; I think you need at least Pentium now?) but I do think there is a point where you cross a line where asking for things to work well is just too much. At that point, it's not a matter of asking developers to not waste resources for no reason, but asking them to optimize not just for reasonably recent machines but also to optimize for machines from 30 years ago. At a certain point it does feel like we have to let it go, not because the computers are necessarily completely obsolete, but because the range of machines to support is too wide.
Obviously, though, simply going for higher refresh rates can't fix everything. Plenty of laptops have screens that can't go above 60 Hz, and they are forever stuck with a few extra milliseconds of latency when using a compositor. It's not ideal, but what are you going to do? Compositors offer many advantages, and it seems straightforward to design for a future where they are always on.
Love your post. So, don’t take this as disagreement.
I’m always a little bewildered by frame rate discussions. Yes, I understand that more is better, but for non-gaming apps (e.g. “productivity” apps), do we really need much more than 60 Hz? Yes, you can get smoother fast scrolling with higher frame rate at 120 Hz or more, but how many people were complaining about that over the last decade?
I enjoy working on my computer more at 144Hz than 60Hz. Even on my phone, the switch from 60Hz to a higher frame rate is quite obvious. It makes the entire system feel more responsive and less glitchy. VRR also helps a lot in cases where the system is under load.
60Hz is actually a downgrade from what people were used to. Sure, games and such struggled to get that kind of performance, but CRT screens did 75Hz/85Hz/100Hz quite well (perhaps at lower resolutions, because full-res 1200p sometimes made text difficult to read on a 21 inch CRT, with little benefit from the added smoothness as CRTs have a natural fuzzy edge around their straight lines anyway).
There's nothing about programming or word processing that requires more than maybe 5 or 6 fps (very few people type more than 300 characters per minute anyway) but I feel much better working on a 60 fps screen than I do a 30 fps one.
Everyone has different preferences, though. You can extend your laptop's battery life by quite a bit by reducing the refresh rate to 30Hz. If you're someone who doesn't really mind the frame rate of their computer, it may be worth trying!
It isn't equivalent, in the sense that the progressive scanout on CRTs resulted in near-zero latency and minimal image persistence, versus flat panels, which refresh globally, adding latency and worsening motion clarity. So it isn't really a "but", it's a "made even better by being rendered only one pixel/dot at a time".
Motion clarity yes, but it's zero latency in the least useful way possible, only true when you're rendering the top and bottom of the screen at different points in time. And scanout like that isn't unique to CRTs, many flat panels can do it too.
When rendering a full frame at once and then displaying it, a modern screen is not only able to be more consistent in timing, it might be able to display the full frame faster than a CRT. Let's say 60Hz, and the frame is rendered just in time to start displaying. A CRT will take 16 milliseconds to do scanout. But if you get a screen that supports Quick Frame Transport, it might send over the frame data in only 3 milliseconds, and have the entire thing displayed by millisecond 4.
I never complained about 60, then I went to 144 and 60 feels painful now. The latency is noticeable in every interaction, not just gaming. It's immediately evident - the computer just feels more responsive, like you're in complete control.
Even phones have moved in this direction, and it's immediately noticeable when using it for the first time.
I'm now on 240hz and the effect is very diminished, especially outside of gaming. But even then I notice it, although stepping down to 144 isn't the worst. 60, though, feels like ice on your teeth.
Did you use the same computer at both 60 and 144? I have no doubt that 144 feels smoother for scrolling and things like that. It definitely should. But if you upgraded your system at the same time you upgraded your display, much of the responsiveness would be due to a faster system.
I have a projector that can project 4K at 60Hz or 1080p at 240, and I can really notice it by just moving the cursor around. I don’t need to render my games anywhere near 240 to notice that too. Same with phones - moving from the Pixel 3 to the Pixel 5, scrolling through settings or the home screen was a palpable difference. The Pixel 3 now feels broken. It is not, it just renders at 60 fps instead of 90.
Yes same system, then again at 240hz. Realistically I think just about any modern GPU can composite at 240 fps, although I see what you mean if I did an SSD upgrade or something, but I didn't.
> how many people were complaining about that over the last decade?
Quite a few. These articles tend to make the rounds when it comes up: https://danluu.com/input-lag/ and https://lwn.net/Articles/751763/ Perception varies from person to person, but going from my 144hz monitor to my old 60hz work laptop is so noticeable to me that I switched it from a composited wayland DE to an X11 WM.
Input lag is not the same as refresh rate. 60 Hz is 16.7 ms per frame. If it takes a long time for input to appear on screen it’s because of the layers and layers of bloat we have in our UI systems.
Refresh rate directly affects one of the components of total input lag, and increasing refresh rate is one of the most straightforward ways for an end user to chip away at that input lag problem.
If our mouse cursors are going to have half a frame of latency, I guess we will need 60Hz or 120Hz desktops, or whatever.
I dunno. It does seem a bit odd, because who was thinking about the framerates of, like, desktops running productivity software, for the last couple decades? I guess I assumed this would never be a problem.
Mouse cursor latency and window compositing latency are two separate things. I probably did not do a good enough job conveying this. In a typical Linux setup, the mouse cursor gets its own DRM plane, so it will be rendered on top of the desktop during scanout right as the video output goes to the screen.
There are two things that typically impact mouse cursor latency, especially with regards to Wayland:
- Software-rendering, which is sometimes used if hardware cursors are unavailable or buggy for driver/GPU reasons. In this case the cursor will be rendered onto the composited desktop frame and thus suffer compositor latency, which is tied to refresh rate.
- Atomic DRM commits. Using atomic DRM commits, even hardware-rendered cursors can suffer additional latency. In this case, the added latency is not necessarily tied to frame times or refresh rates. Instead, it's tied to when during the refresh cycle the atomic commit is sent; specifically, how close to the deadline. I think in most cases we're talking a couple milliseconds of latency. It has been measured before, but I cannot find the source.
Wayland compositors tend to use atomic DRM commits, hence a slightly more laggy mouse cursor. I honestly couldn't tell you if there is a specific reason why they must use atomic DRM, because I don't have knowledge that runs that deep, only that they seem to.
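To make the "how close to the deadline" point a bit more concrete, here is a toy model (not real DRM code, and the 2 ms commit lead time is purely an assumption for illustration): a legacy cursor update can be programmed almost right up to scanout, while with atomic commits the cursor position has to ride along with a commit the compositor assembles some lead time before vblank, so motion that arrives inside that window waits an extra refresh.

    // Toy model of the extra cursor latency from atomic commits, relative to a
    // legacy cursor update that could still land in the same refresh.
    // Assumptions (illustrative only): one atomic commit per refresh, assembled
    // `lead_ms` before vblank; events are otherwise identical in both paths.
    fn extra_latency_ms(ms_before_vblank: f64, lead_ms: f64, frame_ms: f64) -> f64 {
        if ms_before_vblank > lead_ms {
            0.0      // arrived before the commit was assembled: no extra delay
        } else {
            frame_ms // missed this commit: the new position waits one more refresh
        }
    }

    fn main() {
        let frame_ms = 1000.0 / 60.0; // 60 Hz refresh
        let lead_ms = 2.0;            // assumed commit lead time before vblank
        for ms_before_vblank in [10.0, 3.0, 1.5, 0.5] {
            println!(
                "event {ms_before_vblank:>4.1} ms before vblank -> +{:.1} ms vs. a legacy cursor update",
                extra_latency_ms(ms_before_vblank, lead_ms, frame_ms)
            );
        }
        // Averaged over uniformly timed events, the extra delay works out to
        // roughly lead_ms (a couple of milliseconds), matching the estimate above.
    }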
Mouse being jumpy shouldn’t be related to refresh rate. The mouse driver and windowing system should keep track of the mouse position regardless of the video frame rate. Yes, the mouse may jump more per frame with a lower frame rate, but that should only be happening when you move the mouse a long distance quickly. Typically, when you do that, you’re not looking at the mouse itself but at the target. Then, once you’re near it, you slow down the movement and use fine motor skills to move it onto the target. That’s typically much slower and frame rate won’t matter much because the motion is so much smaller.
Initially I wrote “input device”, but since mouse movements aren’t generally a problem, I narrowed it to “keyboard”. ;) Mouse clicks definitely fall into the same category, though.
Essentially, the only reason to go over 60 Hz for desktop is for a better "feel" and for lower latency. Compositing latency is mainly centered around frames, so the most obvious and simplest way to lower that latency is to shorten how long a frame is, hence higher frame rates.
However, I do think that high refresh rates feel very nice to use even if they are not strictly necessary. I consider it a nice luxury.
I couldn't find ready stats on what percentage of displays are 60 Hz, but outside of gaming and high-end machines I suspect 60 Hz is still the majority among machines used by actual users, meaning we should evaluate the latency as it is observed by most users.
The point is that we can improve latency of even old machines by simply attaching a display output that supports a higher refresh rate, or perhaps even variable refresh rate. This can negate most of the unavoidable latency of a compositor, while other techniques can be used to avoid compositor latency in more specific scenarios and try to improve performance and frame pacing.
A new display is usually going to be cheaper than a new computer. Displays which can actually deliver 240 Hz refresh rates can be had for under $200 on the lower end, whereas you can find 180 Hz displays for under $100, brand new. It's cheap enough that I don't think it's even terribly common to buy/sell the lower end ones second-hand.
For laptops, well, there is no great solution there; older laptops with 60 Hz panels are stuck with worse latency when using a compositor.
Plenty of brand new displays are still sold that only go up to 60hz, especially if you want high quality IPS panels.
They aren't as common now, but when making a list of screens to replace my current one, I am limiting myself to IPS panels and quite a few of the modern options are still 60hz.
Yeah, I personally still have a lot of 60 Hz panels. One of my favorites is a 43" 4K IPS. I don't think I will be able to get that at 120+ Hz any time soon.
Of course, this isn't a huge deal to me. The additional latency is not an unusable nightmare. I'm just saying that if you are particularly latency sensitive, it's something that you can affordably mitigate even when using a compositor. I think most people have been totally fine eating the compositor latency at 60 Hz.
I hope that XFCE remains a solid lightweight desktop option. I've become a huge fan of KDE over the past couple of years, but it certainly isn't what you would consider lightweight or minimal.
Personally, I'm a big proponent of Wayland and not a big Rust detractor, so I don't see any problem with this. I do, however, wonder how the long-time XFCE fans and the folks who donated the money funding this will feel about it. To me the reasoning is solid: Wayland appears to be the future, and Rust is a good way to help avoid many compositor crashes, which are a more severe issue in Wayland (though a compositor crash doesn't necessarily need to be fatal, FWIW.) Still, I perceive a lot of XFCE's userbase to be more "traditional" and conservative about technologies, and likely to be skeptical of both Wayland and Rust, seeing them as complex, bloated, and unnecessary.
Of course, if they made the right choice, it should be apparent in relatively short order, so I wish them luck.
> Still I perceive a lot of XFCE's userbase to be more "traditional" and conservative about technologies, and likely to be skeptical of both Wayland and Rust, seeing them as complex, bloated, and unnecessary.
Very long time (since 2007) XFCE user here. I don't think this is accurate. We want things to "just work" and not change for no good reason. Literally no user cares what language a project is implemented in, unless they are bored and enjoy arguing about random junk on some web forum. Wayland has the momentum behind it, and while there will be some justified grumbling because change is always annoying, the transition will happen and will be fairly painless as native support for it continues to grow. The X11 diehards will go the way of the SysV-init diehards; some weird minority that likes to scream about the good old days on web forums but really no one cares about.
There are good reasons to switch to Wayland, and I trust the XFCE team to handle the transition well. Great news from the XFCE team here, I'm excited for them to pull this off.
I used XFCE for a long time and I very much agree. It just works, and is lightweight. I use KDE these days but XFCE would be my second choice.
> The X11 diehards will go the way of the SysV-init diehard
I hope you are not conflating anti-systemd people with SysV init diehards? As far as I can see very few people want to keep SysV init, but there are lots who think systemd is the wrong replacement, and those primarily because it's a lot more than an init system.
In many ways the objections are opposite. People hate systemd for being more than an init system; people hate Wayland for doing less than X.
Not sure I agree here, assuming you mean "... than X11". With Wayland, you put your display code, input-handling code, compositor code, session-handling code, and window-management code all in the same process. (Though there is a Wayland protocol being worked on to allow moving the WM bits out-of-process.)
With X11, display and input-handling are in the X server, and all those other functions can be in other processes, communicating over standard interfaces.
> you put your display code, input-handling code, compositor code, session-handling code, and window-management code all in the same process
That's an implementation detail. You can absolutely separate one out from the other and do IPC - it just doesn't make much sense to do so for most of these.
The only one where I see it making sense is the window manager, which can simply be an extension/plugin either in a scripting language or in wasm or whatever.
It's not an implementation detail that X11 specifies interfaces between those separate components and Wayland does not - X11 is designed for the window manager being separate from the display server, while Wayland is designed for them being the same.
I do not have a strong opinion about Xorg vs Wayland. My only real concern is that it might make it harder for the BSDs, but that seems to be being dealt with. I do like being able to use X over the network but that is a problem that can be solved.
I do dislike systemd for two reasons. One is exactly because it is a monolith and, in effect, an extension of the OS. The other is the attitude of the developers, which becomes very evident if you browse the issues.
How is Wayland more modular? It conflates the window manager, the compositor, and the display server, all into a single component that must be replaced as a single unit. This kind of new conflation is exactly what people dislike about systemd.
It's less monolithic in the sense that instead of one creaky unmaintainable ancient mass of software doing the actual rendering gruntwork there are now five (and counting) somewhat incompatible slick untested new masses of software doing it in slightly different ways that application developers have to worry about. It's kind of a pick your poison situation.
IME it's always best to read any claims of "unmaintainable" as "not as fun as designing something new". Nothing is truly unmaintainable if the will is there.
I know OpenBSD's fork of it is being maintained just fine even though they've declared it feature-complete (which for some reason is anathema to a lot of people).
If Rust has one weakness right now, it's bindings to system and hardware libraries. There's a massive barrier in Rust communicating with the outside ecosystem that's written in C. The definitive choice to use Rust and an existing Wayland abstraction library narrows their options down to either creating bindings of their own, or using smithay, the brand new Rust/Wayland library written for the Cosmic desktop compositor. I won't go into details, but Cosmic is still very much in beta.
It would have been much easier and cost-effective to use wlroots, which has a solid base and has ironed out a lot of problems. On the other hand, Cosmic devs are actively working on it, and I can see it getting better gradually, so you get some indirect manpower for free.
I applaud the choice to not make another core Wayland implementation. We now have Gnome, Plasma, wlroots, weston, and smithay as completely separate entities. Dealing with low-level graphics is an extremely difficult topic, and every implementor encounters the same problems and has to come up with independent solutions. There's so much duplicated effort. I don't think people getting into it realize how deceptively complex and how many edge-cases low-level graphics entails.
> using smithay, the brand new Rust/Wayland library
Fun fact: smithay is older than wlroots, if you go by commit history (January 2017 vs. April 2017).
> It would have been much easier and cost-effective to use wlroots
As a 25+ year C developer, and a ~7-year Rust developer, I am very confident that any boost I'd get from using wlroots over smithay would be more than negated by debugging memory management and ownership issues. And while wlroots is more batteries-included than smithay, already I'm finding that not to be much of a problem, given that I decided to base xfwl4 on smithay's example compositor, and not write one completely from scratch.
Thanks for the extra info. I'm glad it hasn't turned out to be much of an issue. I've looked at your repository and it seems to be off to a great start.
Personally, I'm anxious to do some bigger rust projects, but I'm usually put off by the lack of decent bindings in my particular target area. It's getting better, and I'm sure with some time the options will fill out more.
There really isn't a "massive barrier" to FFI. Autogenerate the C bindings and you're done. You don't have to wrap it in a safe abstraction, and imo you shouldn't.
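For illustration, this is about all it takes (a minimal sketch; here I'm hand-declaring two libc functions instead of autogenerating with bindgen or using the libc crate, purely for demonstration):

    // Hand-written declarations mirroring the C prototypes from <unistd.h> and
    // <stdlib.h>. In a real project you'd usually autogenerate these; the point
    // is that the FFI surface itself is small.
    use std::ffi::{CStr, CString};
    use std::os::raw::{c_char, c_int};

    extern "C" {
        fn getpid() -> c_int; // pid_t is an int on common platforms
        fn getenv(name: *const c_char) -> *mut c_char;
    }

    fn main() {
        // SAFETY: getpid takes no arguments and cannot fail.
        let pid = unsafe { getpid() };
        println!("pid = {pid}");

        // SAFETY: `name` outlives the call; getenv returns NULL or a pointer
        // into the process environment, which we only read immediately.
        let name = CString::new("HOME").unwrap();
        let home = unsafe { getenv(name.as_ptr()) };
        if !home.is_null() {
            let home = unsafe { CStr::from_ptr(home) };
            println!("HOME = {}", home.to_string_lossy());
        }
    }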
This. It is somewhat disheartening to hear the whole interop-with-C with Rust being an insurmountable problem. Keeping the whole “it’s funded by the Government/Google etc” nonsense aside: I personally wish that at least a feeble attempt would be made to actually use the FFI capabilities that Rust and its ecosystem has before folks form an opinion. Personally - and I’m not ashamed to state that I’m an early adopter of the language - it’s very good. Please consider that the Linux kernel project, Google, Microsoft etc went down the Rust path not on a whim but after careful analysis of the pros and cons. The pros won out.
> This. It is somewhat disheartening to hear the whole interop-with-C with Rust being an insurmountable problem.
I have done it and it left a bad taste in my mouth. Once you're doing interop with C you're just writing C with Rust syntax topped off with a big "unsafe" dunce cap to shame you for being a naughty, lazy programmer. It's unergonomic and you lose the differentiating features of Rust. Writing safe bindings is painful, and using community written ones tends to pull in dozens of dependencies. If you're interfacing a C library and want some extra features there are many languages that care far more about the developer experience than Rust.
> a big "unsafe" dunce cap to shame you for being a naughty, lazy programmer
You just have to get over that. `unsafe` means "compiler cannot prove this to be safe." FFI is unsafe because the compiler can't see past it.
> Once you're doing interop with C you're just writing C with Rust syntax
Just like C++, or go, or anything else. You can choose to wrap it, but that's just indirection for no value imo. I honestly hate seeing C APIs wrapped with "high level" bindings in C++ for the same reason I hate seeing them in Rust. The docs/errors/usage are all in terms of the C API and in my code I want to see something that matches the docs, so it should be "C in syntax of $language".
> a big "unsafe" dunce cap to shame you for being a naughty, lazy programmer
That's bizarrely emotional. It's a language feature that allows you to do things the compiler would normally forbid you from doing. It's there because it's sometimes necessary or expedient to do those things.
My point is that using C FFI is "the things the compiler would normally forbid you from doing" so if that's a major portion of your program then you're better off picking a different language. I don't dislike rust, but it's not the right tool for any project that relies heavily on C libraries.
> The X11 diehards will go the way of the SysV-init diehards; some weird minority
I upvoted your general response but this line was uncalled for. No need to muddy the waters about X11 -> Wayland with the relentlessly debated, interminable, infernal init system comparison.
> Literally no user cares what language a project is implemented in
This is only true most of the time - some languages have properties which "leak" through to the user.
Like if it's a Java process, then sooner or later the user will have to mess with launchers and the -Xmx option.
Or if it's a process which has lots of code and must not crash, language matters. C or C++ would segfault on any sneeze. Python or Ruby or even Java would stay alive (unless they run out of memory, or hang due to a logic bug).
> The X11 diehards will go the way of the SysV-init diehards; some weird minority that likes to scream about the good old days on web forums but really no one cares about.
Being proud of how you are uncaring towards others is a sad state of affairs.
> Literally no user cares what language a project is implemented in
I think this is true but also maybe not true at the same time.
For one thing, programming languages definitely come with their own ecosystems and practices that are common.
Sometimes, programming languages can be applied in ways that basically break all of the "norms" and expectations of that programming language. You can absolutely build a bloated and slow C application, for example, so just using C doesn't make something minimal or fast. You can also write extremely reliable C code; sqlite is famously C after all, so it's clearly possible, it just requires a fairly large amount of discipline and technical effort.
Usually though, programs fall in line with the norms. Projects written in C are relatively minimal, have relatively fewer transitive dependencies, and are likely to contain some latent memory bugs. (You can dislike this conclusion, but if it really weren't true, there would've been a lot less avenues for rooting and jailbreaking phones and other devices.)
Humans are clearly really good at stereotyping, and pick up on stereotypes easily without instruction. Rust programs have a certain "feel" to them; this is not delusion IMO, it's likely a result of many things, like the behaviors of clap and anyhow/Rust error handling leaking through to the interface. Same with Go. Even with languages that don't have as much of a monoculture, like say Python or C, I think you can still find that there are clusters of stereotypes of sorts that can predict program behavior/error handling/interfaces surprisingly well, that likely line up with specific libraries/frameworks. It's totally possible to, for example, make a web page where there are zero directly visible artifacts of what frameworks or libraries were used to make it. Yet despite that, when people just naturally use those frameworks, there are little "tells" that you can pick up on a lot of the time. You ever get the feeling that you can "tell" some application uses Angular, or React? I know I have, and what stuns me is that I am usually right (not always; stereotypes are still only stereotypes, after all.)
So I think that's one major component of why people care about the programming language that something is implemented in, but there's also a few others:
- Resources required to compile it. Rust is famously very heavy in this regard; compile times are relatively slow. Some of this will be overcome with optimization, but it still stands to reason that the act of compiling Rust code itself is very computationally expensive compared to something as simple as C.
- Operational familiarity. This doesn't come into play too often, but it does come into play. You have to set a certain environment variable to get Rust to output full backtraces, for example. I don't think it is part of Rust itself, but the RUST_LOG environment variable is used by multiple libraries in the ecosystem. (A small sketch of what I mean follows after this list.)
- Ease of patching. Patching software written in Go or Python, I'd argue, is relatively easy. Rust, definitely can be a bit harder. Changes that might be possible to shoehorn in in other languages might be harder to do in Rust without more significant refactoring.
- Size of the resulting programs. Rust and Go both statically link almost all dependencies, and don't offer a stable ABI for dynamic linking, so each individual Rust binary will contain copies of all of their dependencies, even if those dependencies are common across a lot of Rust binaries on your system. Ignoring all else, this alone makes Rust binaries a lot larger than they could be. But outside of that, I think Rust winds up generating a lot of code, too; trying to trim down a Rust wasm binary tells you that the size cost of code that might panic is surprisingly high.
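On the "operational familiarity" point above, a small sketch of what I mean (assuming the common log + env_logger crate pairing that a lot of Rust programs use; the names here are just the usual conventions, nothing project-specific):

    // Verbosity is controlled by the RUST_LOG environment variable, e.g.
    //   RUST_LOG=debug ./myapp
    // and the standard panic handler only prints a full backtrace when
    // RUST_BACKTRACE=1 (or =full) is set. None of this is obvious unless you
    // already operate Rust software day to day.
    fn main() {
        env_logger::init(); // reads RUST_LOG to decide what gets printed
        log::info!("starting up");
        log::debug!("only visible with RUST_LOG=debug or more verbose");
    }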
So I think it's not 100% true to say that people don't care about this at all, or that only people who are bored and like to argue on forums ever care. (Although admittedly, I just spent a fairly long time typing this to argue about it on a forum, so maybe it really is true.)
Why does Wayland "feel like the future?" It feels like a regression to me and a lot of other people who have run into serious usability problems.
At best, it seems like a huge diversion of time and resources, given that we already had a working GUI. (Maybe that was the intention.) The arguments for it have boiled down to "yuck code older than me" from supposed professionals employed by commercial Linux vendors to support the system, and it doesn't have Android-like separation — a feature no one really wants.
The mantra of "it's a protocol" isn't very comforting when it lacks so many features that necessitate workarounds, leading to fragmentation and general incompatibility. There are plenty of complicated, bad protocols. The ones that survive are inherently "simple" (e.g., SMTP) or "trivial" (e.g., TFTP). Maybe there will be a successor to Wayland that will be the SMTP to its X400, but to me, Wayland seems like a past compromise (almost 16 years of development) rather than a future.
Wayland supports HDR, it's very easy to configure VRR, and its fractional scaling (if implemented properly) is far superior to anything X11 can offer.
Furthermore, all of these options can be enabled individually on multiple screens on the same system and still offer a good mixed-use environment. As someone who has been using HiDPI displays on Linux for the past 7 years, Wayland was such a game changer for how my system works.
Fractional scaling on Wayland is broken on a per-app basis, which feels strictly worse to me than it was before. LibreOffice is currently broken on Wayland and works on X11.
LibreOffice works for me on wayland lol. I don't know why you would wanna do fractional scaling on a per app basis whenever you got one screen. But, for your libreoffice woes, try using a different backend?
Which, because we're talking rendering and GPUs and drivers, is incredibly frustrating, because if we're here, it's because the system doesn't have working GPU drivers, at which point a misconfiguration means a crash, a power cycle, a "hope pstore managed to save something", and the hardware/software cursor settings getting lost somewhere along the way.
I'm not trying to do it on a per-app basis. I mean that some apps work and some don't. I should not be playing with rendering backends per app to get them working. If that's needed, it's broken.
People keep pushing KDE+Wayland to beginners either through recommendations or preconfigured stuff like bazzite. My experience is that the defaults in such a setup are broken and frustrating.
Even if you dislike Wayland, forwards-going development is clearly centred around it.
Development of X11 has largely ended and the major desktop environments and several mainstream Linux distributions are likewise ending support for it. There is one effort I know of to revive and modernize X11 but it’s both controversial and also highly niche.
You don’t have to like the future for it to be the future.
It's mostly because nobody really wants to improve X11. I don't think there are many Wayland features that would be impossible to implement in X11; it's just that nobody wants to dig into the crusty codebase to do it.
And sadly wayland decided to just not learn any lessons from X11 and it shows.
What do you mean nobody wants to improve X11? There were developers with dozens of open merge requests with numerous improvements to X11 that were being actively ignored/held back by IBM/Red Hat because they wanted Wayland, their corporate project, to succeed instead.
Reviewing PRs and merging them requires great effort, especially in case of a non-trivial behemoth like X. Surely if all these merge requests were of huge value, someone could have forked the project and be very happy with all the changes, right?
Not having enough maintainers, and some design issues that can't be solved are both reasons why X was left largely unmaintained.
> Surely if all these merge requests were of huge value
There were a lot of MRs with valuable changes however Red Hat wanted certain features to be exclusive to Wayland to make the alternative more appealing to people so they actively blocked these MRs from progressing.
> someone could have forked the project and be very happy with all the changes, right?
That's precisely what happened: one of the biggest contributors and maintainers got bullied out of the project by Red Hat for trying to make X11 work, and decided to create X11Libre (https://github.com/X11Libre/xserver), which is now getting all these fancy features that previously were not possible to get into X11 due to Red Hat actively sabotaging the project in their attempt to turn Linux into their own corporate equivalent of Windows/macOS.
We’re accustomed to "the future" connoting progress and improvement. Unfortunately, it isn’t always so (no matter how heavily implied). It's just that it’s literally expected to be the future state of matters.
Wayland was the first display system on Linux I've used that just worked perfectly right out of the box on a bog standard Intel iGPU across several machines. I think that is a big draw for a lot of people like myself who just want to get things done. For me X11 represents the past, through the experience I had when I had to tinker with the X11 config file to get basic stuff like video playback to work smoothly without tearing. My first Wayland install was literally a "wow this is the future of Linux" for me quite honestly when I realised everything just worked without even a single line of config. I would recommend a Wayland distro like Debian to the average computer user knowing Wayland just works -- prior to Wayland I'd be like "well Linux is great but if you like watching YouTube you'll need to add a line to your xorg config to turn on the thingy that smoothes out video playback on Intel iGPUs". Appreciate others have different perspectives -- I come from the POV of someone who likes to install an OS and have all the basic stuff working out of the box.
It has been many years, I guess close to a decade, since I needed to change the X config manually. I still find the odd rough edge in Wayland (the most recent was failing screenshots with KDE).
This argument is actually backwards: one of the goals of the wayland project is to draw development away from X. If wayland didn't exist, people would have worked on X11 a lot more.
This question sounds to me like you suspect some outright evil getting projected here. That would go too far. The wayland project tried to get the support of X developers early so that they could become a sort of "blessed" X successor early on. Plenty of earlier replacement attempts have failed because they couldn't get bigger community support, so this had to be part of a successful strategy. Any detrimental effects on X from that move were never a direct goal, as far as I am aware, just a consequence.
This isn't quite right? Wayland was literally created by an X11 developer who got two more main X11 developers in. It's a second system, not a competitor as such.
Yes, I do interpret your “draw development away from X” as suggesting an attempt to damage X (sorry if I misinterpreted your post, but I do think my interpretation was not really that unreasonable).
This “blessed successor” without any detrimental effects as a main goal: that’s pretty close to my understanding of the project. IIRC some X people were involved from the beginning, right?
Wanting developers to switch projects doesn't have to be malicious, in fact personally i doubt there were any bad intentions in place, the developers of Wayland most likely think they're doing the right thing.
That’s a fork, which is fine. But for example, users from most mainstream distros will have to compile it themselves.
I guess we’ll see if that development is ever applied to the main branch, or if it supplants the main X branch. At the moment, though… if that’s the future of X, then it is fair to be a little bit unsure if it is going to stick, right?
That seems pretty interesting. I guess it relies on BSD plumbing though?
Funnily enough, my first foray into these sorts of operating systems was BSD, but it was right when I was getting started. So I don’t really know which of my troubles were caused by BSD being tricky (few, probably), and which were caused by my incompetence at the time (most, probably). One of these days I’ll try it again…
Yup, "pledge" is one of my BSD envies. Namespaces and unshare are significantly more complex and we're still told not to use them as a security barrier (which is explicitly in scope for pledge).
I've been on and off Linux desktops since the advent of Wayland. Unsure of the actual issues people run into at this point outside of very niche workflows or applications, for which there are X11 fallbacks.
Also, by "commercial linux vendors", you do realize Wayland is directly supported (afaik, correct me if wrong) by the largest commercial Linux contributors, Red Hat and Canonical. They're not simply 'vendors'.
> Unsure of the actual issues people run into at this point outside of very niche workflows or applications, for which there are X11 fallbacks.
I don't know if others have experienced this but the biggest bug I see in Wayland right now is sometimes on an external monitor after waking the computer, a full-screen electron window will crash the display (ie the display disconnects).
I can usually fix this by switching to another desktop and then logging out and logging back in.
Such a strange bug because it only affects my external monitor and only affects electron apps (I notice it with VSCode the most but that's just cause I have it running virtually 24/7)
If anyone has encountered this issue and figured out a solution, I am all ears.
This is probably worth reporting. I don't think I've ever heard of or run into something like that before. Most issues I ran into during the early rollout of Wayland desktop environments were broken or missing functionality in existing apps.
> it doesn't have Android-like separation — a feature no one really wants.
It's certainly a feature I want. Pretty sure I'm not alone in wanting isolation between applications--even GUI ones. There's no reason that various applications from various vendors shouldn't be isolated into their own sandboxes (at least in the common case).
There is a big reason: It impedes usability, extensibility and composability. If you sandbox GUI applications then the sandbox needs to add support for any interaction between them or they will just not be possible - and to fully support many advanced interactions like automation you will essentially have to punch huge holes in the sandbox anyway.
Meanwhile the advantages of sandboxing are pretty much moot in an open source distro where individual applications are open and not developed by user hostile actors.
Yes, sandboxing impedes those things. But I assume you're not advocating against sandboxing in general, right?
Starting with a sandbox and poking holes/whitelisting as-needed is a good way to go. Whitelisting access on a per-application basis is a pragmatic way to do this, and Flatpak with Wayland gives a way to actually implement this. It's imperfect, but it's a good start.
Preventing keylogging is a good, concrete example here. There's no reason some random application should be able to see me type out the master password in my password manager.
Likewise, there is no reason that some other application should be able to read ~/.bash_history or ~/.ssh/. The browser should limit itself to ~/Downloads. Etc.
> Meanwhile the advantages of sandboxing are pretty much moot in an open source distro where individual applications are open and not developed by user hostile actors.
Defense in depth. Belt and suspenders. I do trust the software I run to some degree, and take great care in choosing the software. But it's not perfect. Likewise, I take care to use sandboxing features whenever I can, acknowledging that they sometimes must have holes poked in them. But the Swiss cheese model is generally a good lens: https://en.wikipedia.org/wiki/Swiss_cheese_model
If we weren't concerned with belt and suspenders and could rely on applications being developed by non-hostile actors, then we could all run as root all the time! But we don't do that--we try to operate according to least-privilege and isolate separate tasks as much as is practical. Accordingly, technologies which allow improved isolation with zero or minimal impact to functionality are strictly a good thing, and should be embraced as such.
> given that we already had a working GUI. (Maybe that was the intention.)
Neither X11 nor Wayland provides a GUI. Your GUI is provided by GTK or Qt or Tcl or whatever. X11 had primitive rendering instructions that allowed those GUIs to delegate drawing to a central system service, but very few things do that anymore anyway. Meaning X11 is already just a dumb compositor in practice, except it's badly designed to be a dumb compositor because that wasn't its original purpose. As such, Wayland is really just aligning the protocol with what clients actually want & do.
- Having a single X server that almost everyone used led to ossification. Having Wayland explicitly be only a protocol is helping to avoid that, though it comes with its own growing pains.
- Wayland-the-Protocol (sounds like a Sonic the Hedgehog character when you say it like that) is not free of cruft, but it has been forward-thinking. It's compositor-centric, unlike X11 which predates desktop compositing; that alone allows a lot of clean-up. It approaches features like DPI scaling, refresh rates, multi-head, and HDR from first principles. Native Wayland enables a much better laptop docking experience.
- Linux desktop security and privacy absolutely sucks, and X.org is part of that. I don't think there is a meaningful future in running all applications in their own nested X servers, but I also believe that trying to refactor X.org to shoehorn in namespaces is not worth the effort. Wayland goes pretty radical in the direction of isolating clients, but I think it is a good start.
I think a ton of the growing pains with Wayland come from just how radical the design really is. For example, there is deliberately no global coordinate space. Windows don't even know where they are on screen. When you drag a window, it doesn't know where it's going, how much it's moving, anything. There isn't even a coordinate space to express global positions, from a protocol PoV. This is crazy. Pretty much no other desktop windowing system works this way.
I'm not even bothered that people are skeptical that this could even work; it would be weird to not be. But what's really crazy, is that it does work. I'm using it right now. It doesn't only work, but it works very well, for all of the applications I use. If anything, KDE has never felt less buggy than it does now, nor has it ever felt more integrated than it does now. I basically have no problems at all with the current status quo, and it has greatly improved my experience as someone who likes to dock my laptop.
But you do raise a point:
> It feels like a regression to me and a lot of other people who have run into serious usability problems.
The real major downside of Wayland development is that it takes forever. It's design-by-committee. The results are actually pretty good (My go-to example is the color management protocol, which is probably one of the most solid color management APIs so far) but it really does take forever (My go-to example is the color management protocol, which took about 5 years from MR opening to merging.)
The developers of software like KiCad don't want to deal with this, they would greatly prefer if software just continued to work how it always did. And to be fair, for the most part XWayland should give this to you. (In KDE, XWayland can do almost everything it always could, including screen capture and controlling the mouse if you allow it to.) XWayland is not deprecated and not planned to be.
However, the Wayland developers have taken the stance of not exposing raw tools that applications could use to build arbitrary UI features, but instead implementing protocols for those specific UI features.
An example is how dragging a window works in Wayland: when a user clicks or interacts with a draggable client area, all the client does is signal that they have, and the compositor takes over from there and initiates a drag.
Another example would be how detachable tabs in Chrome work in Wayland: it uses a slightly augmented invocation of the drag'n'drop protocol that lets you attach a window drag to it as well. I think it's a pretty elegant solution.
But that's definitely where things are stuck. Some applications have UI features that they can't implement in Wayland. xdg-session-management for being able to save and restore window positions is still not merged, so there is no standard way to implement this in Wayland. ext-zones for positioning multi-window application windows relative to each other is still not merged, so there is no standard way to implement this in Wayland. Older techniques like directly embedding windows from other applications have some potential approaches: embedding a small Wayland compositor into an application seems to be one of the main approaches in large UI toolkits (sounds crazy, but Wayland compositors can be pretty small, so it's not as bad as it seems), whereas there is xdg-foreign, which is supported by many compositors (supported by GNOME, KDE and Sway, but missing in Mir, Hyprland and Weston. Fragmentation!) but doesn't cover everything you could do in X11 (like passing an xid to mpv to embed it in your application, for example.)
I don't think it's unreasonable that people are frustrated, especially about how long the progress can take sometimes, but when I read these MRs and see the resulting protocols, I can't exactly blame the developers of the protocols. It's a long and hard process for a reason, and screwing up a protocol is not a cheap mistake for the entire ecosystem.
But I don't think all of this time is wasted; I think Wayland will be easier to adapt and evolve into the future. Even if we wound up with a one-true-compositor situation, there'd be really no reason to entirely get rid of Wayland as a protocol for applications to speak. Wayland doesn't really need much to operate; as far as I know, pretty much just UNIX domain sockets and the driver infrastructure to implement a WSI for Vulkan/GL.
Thanks a lot for an actually constructive comment on Wayland! The information tends to be lost in all the hate.
I understand the frustration, but I see a lot of "it's completely useless" and "it's a regression", though to me it really sounds like Wayland is an improvement in terms of security. So there's that.
The fact that this post is downvoted into grayness while lazy hateful rants aren't shows just how rotten the HN community has gotten around open source these days :/
Do you know if global shortcuts are solved in a satisfactory way, and if there is an easy mechanism for one application to query Wayland about other applications?
One hack I made a while ago was to bind the win+t shortcut to a script that queried the active window in the current workspace and, based on that, opened up a terminal at the right filesystem location, with a preferred terminal profile.
All I get from LLMs is that D-Bus might be involved in GNOME for global shortcuts, and that when registering global shortcuts in something like Hyprland, app ids must be passed along instead of simple script paths.
Currently, the Wayland protocol itself doesn't have a standard solution for global shortcuts. Instead, it's being pushed to the XDG Desktop Portal API, under the org.freedesktop.portal.GlobalShortcuts service.
This should work with Hyprland provided that you are using xdg-desktop-portal-hyprland, as it does indeed have an implementation of GlobalShortcuts.
I'm not sure if this API is sufficient for your needs, or if it is too much of a pain to use. Like many Wayland things, it prescribes certain use cases and doesn't handle others. The "configure" call seems to rely on xdg-foreign-unstable-v2 support, but AFAIK Hyprland doesn't support this protocol, so I have no idea what you're supposed to do on Hyprland for this case.
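To make that concrete, here's a rough, untested sketch of what driving that portal from a script looks like. It assumes the dbus-python and PyGObject packages are installed; the method and signal names (CreateSession, BindShortcuts, Activated) come from the portal spec, but the option keys, the "open-terminal" id, and the trigger string are placeholders from my recollection of the spec and may need adjusting for your backend:
    # Untested sketch: register one global shortcut through the
    # org.freedesktop.portal.GlobalShortcuts portal over D-Bus.
    # Assumes dbus-python + PyGObject; "open-terminal" and the trigger are placeholders.
    import dbus
    from dbus.mainloop.glib import DBusGMainLoop
    from gi.repository import GLib

    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    shortcuts = dbus.Interface(
        bus.get_object("org.freedesktop.portal.Desktop",
                       "/org/freedesktop/portal/desktop"),
        "org.freedesktop.portal.GlobalShortcuts")

    def on_response(response, results):
        # Portal calls answer asynchronously via org.freedesktop.portal.Request;
        # the CreateSession response carries the session handle needed for binding.
        if response == 0 and "session_handle" in results:
            session = dbus.ObjectPath(str(results["session_handle"]))
            shortcuts.BindShortcuts(
                session,
                [("open-terminal", {"description": "Open a terminal here",
                                    "preferred_trigger": "LOGO+t"})],
                "",                              # no parent window handle
                {"handle_token": "bind1"})

    def on_activated(session_handle, shortcut_id, timestamp, options):
        print("activated:", shortcut_id)         # run your script here

    bus.add_signal_receiver(on_response,
                            dbus_interface="org.freedesktop.portal.Request",
                            signal_name="Response")
    bus.add_signal_receiver(on_activated,
                            dbus_interface="org.freedesktop.portal.GlobalShortcuts",
                            signal_name="Activated")
    shortcuts.CreateSession({"handle_token": "create1",
                             "session_handle_token": "myscript"})
    GLib.MainLoop().run()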
I am sorry to see developers have to deal with things in a relatively unfinished state, but such is the nature of the open source desktop.
Thanks for the insight, I really appreciate it. I don't use Hyprland (it just came up in brief research). Xfce generally has simple and legible code, so hopefully its Wayland compositor will be just as hackable and tweakable for my needs.
> xdg-session-management for being able to save and restore window positions
> is still not merged, so there is no standard way to implement this in Wayland
For me, this is a real reason not to want to be forced to use Wayland. I'm sure the implementation of Wayland in XFCE is a long time off, and the dropping of X11 even further off, so hopefully this problem will have been solved by then.
It's a downgrade that we have no choice but to accept in order to continue using our machines. Anyone familiar with Microsoft or Apple already knows that's the future.
They're trying to "nudge" everyone. Major desktop environments and entire distributions are removing X11 support to varying degrees. A lot of this is because they can't get their adoption rates above about half due to various broken workflows or simply user preference.
They intentionally don't want you to keep using X11, and they'll keep turning up the heat on the pot until we're all boiling.
Gnome just removed the middle-click paste option. Is that because they fixed the clipboard situation on Linux, and there's a universal, unambiguous way of cut and paste that works across every application? No. It's because middle-click to paste is an "X-ism." This is just demagoguery and unserious.
> Gnome just removed the middle-click paste option. Is that because they fixed the clipboard situation on Linux, and there's a universal, unambiguous way of cut and paste that works across every application? No. It's because middle-click to paste is an "X-ism." This is just demagoguery and unserious.
They disabled it by default. You can enable it if you want.
Once again, Gentoo Linux proves (somewhat regrettably) to be one of the best Linux distros out there. OpenRC and Xorg as defaults, with systemd and Wayland as supported options, is quite a lovely way to do things.
> Gnome just removed the middle-click paste option.
Gnome removes useful things all the time. "The Gnome folks do something user-hostile just because they feel like it" isn't news; that's been going on for decades. This habit of theirs is a big reason why I've been using KDE for a very long time.
Unfortunately I don't think Gentoo will keep X11 support in e.g. KDE once it's dropped upstream (which is already announced); they don't have the manpower for that.
And KDE itself is also not the bastion of user choice it once was, even if they haven't yet gone quite as hostile as Gnome.
> Unfortunately I don't think Gentoo will keep X11 support in e.g. KDE once its dropped upstream...
IIRC, the only part that's dropping X11 support is Plasma. From [0]:
> There are currently no plans to drop X11 support in KDE applications outside of Plasma.
> This change only concerns Plasma’s X11 login session, which is what’s going away.
I don't really care about Plasma; a taskbar to house a system tray and clock is nice, as is desktop wallpaper, but I don't particularly care about that stuff. I use very little of KDE: kwin, krunner, kmix, kcalc, okular, dolphin (rarely), and whatever handles the global keyboard shortcuts.
Hell, on my ~twenty-year-old computer I don't use Plasma because it's a resource hog, but I still use KDE.
That's fair, but I would also read it as a sign of things to come for the rest. If you can't run full KDE on X11 there will not be many KDE developers caring about X11 support. KWin for example has already gained many bugs on X11 that I expect to never be fixed. And now KWin for X11 is split into a separate project which will hopefully mean fewer further regressions but probably also not much further development which means bitrot as things around it change.
> That's fair, but I would also read it as a sign of things to come for the rest.
Given this statement from the announcement that I linked to previously
> The Plasma X11 session will be supported by KDE into early 2027.
> We cannot provide a specific date, as we’re exploring the possibility of shipping some extra bug-fix releases for Plasma 6.7. The exact timing of the last one will only be known when we get closer to its actual release, which we expect will be sometime in early 2027.
I expect that I will get at least a year's notice before they stop actively working on the rest of the parts of KDE that interact with X11... whenever that ends up being. A year is more than enough time to find replacements for things that might eventually stop working one day.
Were I sixteen, I'd be very excited to preemptively move to something else. Now? The folks who work on it say that they'll keep it working for the foreseeable future, and their behavior suggests that I'll get ample notice before they stop working on it.
The software in question works now (AFAICT) and will continue to work for quite a while. I am likely to get a significant amount of warning before they stop working on the software. I see no reason to switch. I have much better things to do with my time.
Yeah, I am a staunch proponent of "don't try to fix what is not broken". Current XFCE is fast, lightweight, usable, and works fine without major issues. While I don't fully understand the advantages/disadvantages of XFCE using Wayland instead of X, if, as someone else pointed out here on HN, running XFCE on Wayland is going to make it slower, it means these developers will be crippling one of XFCE's strongest features. In that case the other, minor advantages seem pointless to users like me.
> running XFCE on Wayland is going to make it slower
Citation needed. None of the other desktops have slowed down with Wayland, and gaming is as fast, if not marginally faster, on KDE/GNOME with Wayland vs LXDE on X.
Latency and throughput are very different things. However, it's worth noting that the comparison here is with and without compositing. If you were using compositing already on X11 (I believe XFCE offers it with "Desktop Effects" or something to that tune) then you've already been eating compositing latency, and you should actually get less latency in some situations.
But as far as it performing worse overall, I don't think that would be expected. Compositing itself does lean more on hardware acceleration to provide a good experience, though, so if you compare it, on a machine with no hardware-accelerated graphics, against X with compositing disabled, then it really would be worse, yeah.
A little misconception here (caveat: I'm using XLibre and am a casual user). On X11 there are two mechanisms that can be called a compositor:
1st: the "enable display compositing" option. This one increases latency, as every window draw needs to go through the compositor application (in a nutshell it exchanges OpenGL textures; only synchronization messages go over the "wire").
2nd: the X server's rendering-pipeline compositor. This one comes with the TearFree option of the modesetting (intel, amdgpu) drivers; almost everything inside the X11 server lives in OpenGL textures and the compositor performs direct blending to the screen (including direct scanout).
My point is that on modern X (there are merge requests for the Xorg server's modesetting driver; amdgpu already has this code), with TearFree enabled you get optimal hardware acceleration by default, and with it comes lower latency.
Long-time XFCE user here. We care that stuff works the same, we appreciate how much work it is to achieve that when the world is changing out from under you, and we appreciate that XFCE understands this and cares about it. Being in Rust is not a concern.
Wayland has lots of potential, but it's far from ready to replace X11, especially in multitasking environments. XFCE is taking its time, because its community is very concerned with stability.
I predict that XFCE will default to X11 until Wayland has reached broad feature parity, then default to Wayland but keep X11 support until the last vestiges of incompatibility are dealt with.
There's no reason that this wouldn't be accepted by their community, and it should be lighter weight, in the end.
I have been an XFCE user for many years, and am pretty decidedly in the "traditional and conservative about technologies" camp, and I think this is neat and just fine and dandy -- as long as they're not in a hurry to deprecate X11. Whenever I eventually have to go Wayland I would like to continue to use XFCE, so thumbs up for doing the work.
AFAIK there exist only X11 and Wayland, and X11 is dying if not dead. As for Rust, I don't see why a desktop user would be concerned about the language used, as long as it is good enough.
This would not be surprising at all! An impressive amount of work has gone into making the Linux VFS and filesystem code fast and scalable. I'm well aware that Linux didn't invent the RCU scheme, but it uses variations on RCU liberally to minimize contention in filesystem operations, and it caches aggressively. (I've also learned recently that the Linux VFS abstractions are quite different from BSD/UNIX, and they don't really map to each other. Linux has many structures, like dentries and generic inodes, that map to roughly one structure in BSD/UNIX, the vnode. I'm not positive that this has huge performance implications, but it does seem like Linux is aggressive at caching dentries, which may make a difference.)
That said, I'm certainly no expert on filesystems or OS kernels, so I wouldn't know if Linux would perform faster or slower... But it would be very interesting to see a comparison, possibly even with a hypervisor adding overhead.
Let's do a quick analysis of the amount of money put forth to push AI:
> OpenAI has raised a total of $57.9B over 9 funding rounds
> Groq has raised a total of $1.75 billion as of September, 2025
Well, we could go on, but I think that's probably a good enough start.
I looked into it, but I wasn't able to find information on funding rounds that David Bushell had undergone for his anti-AI agenda. So I would assume that he didn't get paid for it, so I guess it's about $0.
Meanwhile:
- My mobile phone keyboard has "AI"
- Gmail has "AI". Google docs has "AI". At one point every app was becoming a chat app, then a TikTok clone. Now every app is a ChatGPT or Gemini frontend.
- I'm using a fork of Firefox that removes most of the junk, and there's still some "AI" in the form of Link Preview summaries.
- Windows has "AI". Notepad has "AI". MS Paint has "AI".
- GitHub stuck an AI button in place of where the notifications button was, then, presumably after being called every single slur imaginable about 50000 times per day, moved it thirty or so pixels over and added about six more AI buttons to the UI. They have a mildly useful AI code review feature, but it's surprisingly half-baked considering how heavily it is marketed. And I'm not even talking about the actual models being limited; the integration itself is lame. I still consider it mildly useful for catching typos, but that is not worth several billion dollars of investment.
- Sometimes when I log into Hacker News, more than half of the posts are about AI. Sometimes you get bored of it, so you start trying to look at entries that are not overtly about AI, but find that most of those are actually also about AI, and those that aren't specifically about AI go on a 20-minute tangent about AI at some point.
- Every day every chat every TV program every person online has been talking about AI this AI that for literally the past couple of years. Literally.
- I find a new open source project. Looks good at first. Start to get excited. Dig deeper, things start to look "off". It's not as mature or finished as it looks. The README has a "Directory Structure" listing for some odd reason. There's a diagram of the architecture in a fixed-width font, but the whitespace is misaligned on some lines. There are comments in the code that reference things like "but the user requested..." as if the code wasn't written by the user. Because it wasn't, and worse, it wasn't read by them either. They posted it as if they wrote it, making no mention at all that it was prompts they didn't read, wasting everyone's time with half-baked crapware.
And you're tired of anti-AI sentiment? Well God damn, allow me to Stable Diffusion generate the world's smallest violin and synthesize a song to play on it using OpenAI Jukebox.
I'm not really strictly against AI entirely, but it is the most overhyped technology in human history.
> Sometimes when I log into Hacker News, more than half of the posts are about AI.
And I don't ever see it under a fifth, anymore. There is a Hell of a marketing push going on, and it's genuinely hard to tell the difference between the AI true believers and the marketing bots.
My Samsung TV from 2013 is a smart TV with AI voice control features.
My scanner from 2003 has OCR.
Gaming has a very rich history with AI innovations. 1996's Creatures is a standout example.
AI has always been everywhere around you. AI predates me and you. The reason you're hearing about it now is because of the capability increase brought about by the lower cost of greater scale. But it's still in the uncanny valley. It is still flawed. To paraphrase John McCarthy, that's what makes it AI [^1].
I know you're fatigued from hearing about AI for the last three years. But I have been hearing about it for decades with the same magnitude of excitement and dismissal from pro-AI and anti-AI critics. Alan Turing laid the foundations for the technological singularity in 1950. Discourse has accelerated since the early '80s and '90s by writers like Vernor Vinge and computer scientists like Ray Kurzweil.
I encourage you to pay attention. Not to recent hype, but to what is actually happening. Steady innovation as always. That you were blindsided by it is curious.
Sorry, but what you are talking about is AI (old). What I'm talking about is "AI" (new). It's different. Video games had AI (old). Notepad in 2026 has "AI" (new). Very different.
I could explain the difference but it's beyond my pay grade.
As a kid I thought this would be a great idea, and started implementing a PE binfmt. I actually did make a rudimentary one, though it started to occur to me how different Windows and Linux really were as I progressed.
For example, with ELF/UNIX, the basic ELF binfmt is barely any more complex than what you'd probably expect the a.out binfmt to be: it maps sections into memory and then executes. Dynamic linking isn't implemented; instead, similar to the interpreter of a shell script, an ELF binary can have an interpreter (PT_INTERP) which is loaded in lieu of the actual binary. This way, the PT_INTERP can be set to the well-known path of the dynamic linker of your libc, which itself is a static ELF binary. It is executed with the appropriate arguments loaded onto the stack and the dynamic linker starts loading the actual binary and its dependencies.
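To illustrate how little the base layer has to do, here's a toy sketch (64-bit little-endian ELF only, nothing official about it) that digs the PT_INTERP string out of a binary using just Python's struct module, roughly the lookup the kernel does before handing control to the interpreter:
    # Toy sketch: print the PT_INTERP string of a 64-bit little-endian ELF,
    # using only the standard struct module. Offsets are from the ELF64 spec.
    import struct, sys

    PT_INTERP = 3
    with open(sys.argv[1], "rb") as f:
        data = f.read()
    assert data[:4] == b"\x7fELF" and data[4] == 2   # ELFCLASS64 only, for brevity

    e_phoff = struct.unpack_from("<Q", data, 0x20)[0]     # program header table offset
    e_phentsize = struct.unpack_from("<H", data, 0x36)[0]
    e_phnum = struct.unpack_from("<H", data, 0x38)[0]

    for i in range(e_phnum):
        off = e_phoff + i * e_phentsize
        if struct.unpack_from("<I", data, off)[0] == PT_INTERP:
            # p_offset/p_filesz locate the NUL-terminated interpreter path.
            p_offset = struct.unpack_from("<Q", data, off + 0x08)[0]
            p_filesz = struct.unpack_from("<Q", data, off + 0x20)[0]
            print(data[p_offset:p_offset + p_filesz].rstrip(b"\x00").decode())
On a typical glibc x86-64 box, pointing it at /bin/ls should print /lib64/ld-linux-x86-64.so.2; a statically linked binary simply has no PT_INTERP at all.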
Windows is totally different here. I mean, as far as I know, the dynamic linker is still in userland, known as the Windows Loader. However, the barrier between userland and kernel land is not stable for Windows NT. Syscall numbers can change during major updates. And, sometimes, implementation details are split between the kernel and userland. Now, in order to be able to contribute to Wine and other projects, I've had to be very careful about how I discover how Windows internals work, often by reading others' writings and doing careful black-box analysis (for some of this I have work I can show that demonstrates how I figured it out.) But for example, the PEB/TIB structures that store information about processes/threads seem to be something that the userland and kernel components both read and modify. For dynamic linking in particular, there are some linked lists in the PEB that store the modules loaded into the process, and I believe these are used by both the Windows loader and the kernel in some cases.
The Windows NT kernel also just takes on a lot more responsibilities. For example, input. I can literally identify some of the syscalls that go into input handling and observe how they change behavior depending on the last result of PeekMessage. The kernel also appears to be the part of the system that handles event coalescing and priority. It's nothing absurd (the Wine project has already figured out how a lot of this works) but it is a huge difference from Linux, where there's no concept of "messages" and probably shouldn't be.
So the equivalent of the Windows NT kernel services would often be more appropriate to put in userland on Linux anyways, and Wine already does that.
It would still be interesting to attempt to get a Windows XP userland to boot directly on a Linux kernel, but I don't think you'd ever end up with anything that could ever be upstreamed :)
Maybe we should do the PE binfmt though. I am no longer a fan of ELF with its symbol conflicts and whatnot. Let's make Linux PE-based so we can finally get icons for binaries without needing to append a filesystem to the end of it :)
I mean something a bit different. I mean using PE binaries to store Linux programs, no Wine loader.
Of course, this is a little silly. It would require massively rethinking many aspects of the Linux userland, like the libc design. However, I honestly would be OK with this future. I don't really care that much for ELF or its consequences, and there are PE32+ binaries all over the place anyways, so may as well embrace it. Linux itself is often a PE32+ binary, for the sake of EFI stub/UKI.
(You could also implement this with binfmt_misc, sure, but then you'd still need at least an ELF binary for the init binary and/or a loader.)
(edit: But like I said, it's a little silly. It breaks all kinds of shit. Symbol interposition stops working. The libdl API breaks. You can't LD_PRELOAD. The libpthread trick to make errno a thread local breaks. Etc, etc.)
Wine has no problem loading Linux programs in PE format. It doesn't enforce that you actually call any Windows functions and it doesn't stop you making Linux system calls directly.
Well yes, but you'd be spawning a wineserver and running wineboot and all kinds of baggage on top, all for the very simple task of mapping and executing a PE binary, and of course you would still wind up needing ELF... for the Wine loader and all of the dependencies that it has (like a libc, though you could maybe use a statically-linked musl or something to try to minimize it.)
Meanwhile the actual process of loading a PE binary is relatively trivial. It's trivial enough that it has been implemented numerous times in different forms by many people. Hell, I've done it numerous times myself, once for game hacking and once in pure Go[1] as a stubborn workaround for another problem.
Importing an entire Wine install, or even putting the effort into stripping Wine down for this purpose, seems silly.
But I suppose the entire premise is a little silly to begin with, so I guess it's not that unreasonable, it's just not what I am imagining. I'm imagining a Linux userland with simply no ELF at all.
[1]: https://github.com/jchv/go-winloader - though it doesn't do linking recursively, since for this particular problem simply calling LoadLibrary is good enough.
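For a sense of how small the header-walking part really is, here's a toy sketch (Python, struct module only, no mapping, imports, or relocations, with field offsets taken from the PE/COFF spec) that finds the optional header and lists the sections, which is most of the bookkeeping a minimal mapper needs before it starts placing sections in memory:
    # Toy sketch: walk the PE headers of an EXE/DLL with only the struct module.
    import struct, sys

    with open(sys.argv[1], "rb") as f:
        data = f.read()
    assert data[:2] == b"MZ"
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]          # offset of "PE\0\0"
    assert data[e_lfanew:e_lfanew + 4] == b"PE\x00\x00"

    coff = e_lfanew + 4                                          # IMAGE_FILE_HEADER
    machine, nsections = struct.unpack_from("<HH", data, coff)
    opt_size = struct.unpack_from("<H", data, coff + 16)[0]      # SizeOfOptionalHeader
    opt = coff + 20                                              # IMAGE_OPTIONAL_HEADER
    magic = struct.unpack_from("<H", data, opt)[0]               # 0x20B = PE32+, 0x10B = PE32
    fmt, base_off = ("<Q", 24) if magic == 0x20B else ("<I", 28)
    image_base = struct.unpack_from(fmt, data, opt + base_off)[0]
    print(f"machine=0x{machine:04x} sections={nsections} image_base=0x{image_base:x}")

    sect = opt + opt_size                                        # section table, 40-byte entries
    for i in range(nsections):
        name, vsize, vaddr = struct.unpack_from("<8sII", data, sect + i * 40)
        print(name.rstrip(b"\x00").decode(errors="replace"), f"vaddr=0x{vaddr:x} vsize=0x{vsize:x}")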
I recently learned that Windows binaries contain metadata for what version they are (among other things, presumably). I was discussing in-progress work on making a mod manager for a popular game work on Linux with the author of the tool, and they mentioned that one of the things that surprised them was not being able to rely on inspection of a native library used by most mods to determine what version they had installed on Linux like they could on Windows. It had never occurred to them that this wasn't a first-class feature of Linux binary formats, and I was equally surprised to find out that it was a thing on Windows given that I haven't regularly used Windows since before I really had much of a concept of what "metadata in a binary format" would even mean.
Are you talking about the "Linux version" it targets or the version of the library? If it's the latter, then the answer is that versioning works per symbol instead of per library, so that a newer library can still contain the old symbols. If you want the latest version a library implements, you could search all its symbols and look for the newest symbol version.
If you want it the other way around, you could look at the newest symbol version that a binary wants from the library.
I probably could be more clear about what I'm trying to convey. Tool A is written to manage mods for game B, and lots of mods for that game utilize library C. Tool A does not directly load or link to library C, but it does inspect the version of library C that currently exists alongside game B so that it can detect if mods depend on a newer version of it and notify the user that it needs to be updated.
I'm realizing now that I forgot an important detail in all of this: the metadata of the library existed as part of the metadata that the filesystem itself tracked rather than something in the contents of the file itself. This metadata doesn't seem to exist on Linux (which library C only supports if running via Proton rather than any native Linux version of the game). I could imagine it might be possible for this to be set as some sort of extended attribute on a Unix filesystem, but in practice it seems that the library will have this extended filesystem metadata when downloading the DLL onto a Windows machine (presumably with an NTFS filesystem) but not a Linux one.
> so that it can detect if mods depend on a newer version of it and notify the user that it needs to be updated.
The dynamic linker will literally tell you this, if you ask it.
> the metadata of the library existed as part of the metadata that the filesystem itself tracked rather than something in the contents of the file itself.
So does this metadata have anything to do with the file at all, or could I also attach it to e.g. an MP4 file? If that is the case, then the difference is that the distributor for MS Windows did add the attribute and the distributor for GNU/Linux did not; it doesn't have anything to do with the platform.
EDIT:
> (which library C only supports if running via Proton rather than any native Linux version of the game
So library C isn't even an ELF shared object, but a PE/COFF DLL? Then that complaint makes even less sense.
I assume it's the ability to tag a .dll as version 0.0.0.1 or whatever (it shows up under the file name in Windows Explorer). I think company name is another one that Windows Explorer displays but there are probably a few other supported attributes as well.
Well, then the answer is that shared objects on GNU/Linux do not have a single finite version, they just implement versions. This is because they are expected to implement more than one version, so that you can just update the shared object and it will still work with programs that expect to call the old ABI. You can however get the latest version; this is also what is in the filename. Note that there are two versioning schemes, semver and libtool; they mean the same thing but are notated differently. This version does not necessarily equal the version of the project/package: APIs/ABIs can and do have their own version numbers.
I think this is just solving a different but related problem. Symbol versioning enables you to make observable changes to the behavior of some library like a libc while providing backwards compatibility by providing the old behaviors indefinitely. I've never gotten the impression that it is "prescribed" that shared objects on a Linux system don't have a specific/definite version, and I don't think that symbol versioning is necessarily appropriate for all programs, either; it just happens to be a mechanism used by glibc, and I don't think it is very common outside of glibc (could be wrong.)
On the other hand, the Windows NE/PE VERSIONINFO resource is mostly informational in nature. I think early on in the life of Windows, it was often used to determine whether or not the system library shipped by an installer was newer than the installed version, so that program installers could safely update the system libraries when necessary without accidentally overwriting new versions of things with older versions. That's just simply a problem that ELF shared objects don't try to solve, I reckon, because there aren't usually canonical binaries to install anyways, and the problem of distributing system libraries is usually left up to the OS vendor.
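For what it's worth, reading that fixed version block back out of a DLL is pretty easy from a script. Here's a hedged sketch that assumes the third-party pefile package (whose attribute layout has shifted a little between releases), so treat it as an illustration rather than anything canonical:
    # Hedged sketch: read FileVersion from a PE's VS_FIXEDFILEINFO block.
    # Requires the third-party "pefile" package; newer releases expose
    # VS_FIXEDFILEINFO as a list, older ones as a single structure.
    import sys
    import pefile

    pe = pefile.PE(sys.argv[1])
    info = getattr(pe, "VS_FIXEDFILEINFO", None)
    if not info:
        sys.exit("no VERSIONINFO resource in this file")
    ffi = info[0] if isinstance(info, list) else info
    ms, ls = ffi.FileVersionMS, ffi.FileVersionLS
    print("FileVersion: {}.{}.{}.{}".format(ms >> 16, ms & 0xFFFF, ls >> 16, ls & 0xFFFF))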
Actually though, there is another mechanism that Linux shared objects often use for versioning and ABI compatibility: the filename/soname. Where you might have a system library, libwacom.so.9.0.0, but then you also have symlinks to it from libwacom.so.9 and libwacom.so, and the soname is set to libwacom.so.9.
There is, of course, no specific reason why ELF binaries don't contain metadata, and you could certainly extend ELF to contain PE-style resources if you want. (I remember some attempts at this existed when I was still getting into Linux.) It may be considered "bad practice" to check the runtime version of a library or program before doing something, in many cases, but it is still relatively common practice; even though there's no standard way to do it, a lot of shared objects do export a library function that returns the runtime version number, like libpng's png_access_version_number. Since distributions patch libraries and maybe even use drop-in replacements, there is obviously no guarantee as to what this version number entails, but often it does entail a promise that the ABI should be compatible with the version that is returned. (There is obviously not necessarily a canonical "ABI" either, but in most cases if you are dealing with two completely incompatible system/compiler ABIs there isn't much that can be done anyways, so it's probably OK to ignore this case.)
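As a concrete sketch of that exported-symbol pattern, using the libpng function I mentioned (this assumes a libpng16 build is installed and that its soname is the usual libpng16.so.16, which is typical on Linux but not guaranteed):
    # Sketch: ask a shared object for its own runtime version via an exported symbol.
    import ctypes, ctypes.util

    soname = ctypes.util.find_library("png16") or "libpng16.so.16"  # resolves via the ld cache
    libpng = ctypes.CDLL(soname)
    libpng.png_access_version_number.restype = ctypes.c_uint32
    v = libpng.png_access_version_number()       # packs major*10000 + minor*100 + release
    print("libpng runtime version: {}.{}.{}".format(v // 10000, (v // 100) % 100, v % 100))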
So I think the most "standard" way to embed the "version" of a shared object on Linux is to export a symbol that either returns the version number or points to it, but there is no direct equivalent to the PE VERSIONINFO standard, which may be a pain when the version number would be useful for something, but the developers did not think to explicitly add it to the Linux port of their software, since it is often not used by the software itself.
But I personally wouldn't agree with the conclusion that "shared objects on GNU/Linux do not have a finite version" as a matter of course; I think the more conservative thing to say is that there is not necessarily a definite version, and there is not necessarily a "canonical" binary. In practice, though, if your copy of libSDL is an almost-unmodified build of the source tree at a given version 3.0.0, it is entirely reasonable to say that it is "definitely" libSDL 3.0.0, even though it may not refer to an exact byte-for-byte source tree or binary file.
The article says it seemed good. If you want a quantified look, Repology has some stats.
GNU Guix: https://repology.org/repository/gnuguix
Nixpkgs unstable: https://repology.org/repository/nix_unstable
However, note that the Guix entry doesn't include nonguix AFAICT. So, it's a bit higher if you are using nonguix.