I like Claude models, but crush and opencode are miles ahead of Claude Code. It's a pity Anthropic forces us to use inferior tooling (I'm on a "team" plan from work). I could use an API key instead, but then I'd blow past $25 in an hour.
For me, Wayland offers only downsides, without any upsides. I feel the general idea behind it (pushing all the complexity and work onto other layers) is broken.
I'll stick with Xorg and Openbox for many years to come.
Moving complexity from the layer you only have one of to the layers where there are many, many competing pieces of software was an absolutely bonkers decision.
It's hard to imagine a statement that could fly more in the face of open source.
It's absolutely an essential characteristic for long-term survival and long-term excellence: not being married to one specific implementation forever.
Especially in open source! What is the organizational model for this Authoritarian path? How are you going to get every display server person on board, as Wayland successfully has? Who would have the say on what goes into The Wayland Server? What would the rules be?
Wayland is the only thing that makes any sense at all. A group of peers, fellow implementers, each striving for better, who come together to define protocols. This is what made the internet amazing, what made the web the most successful media platform, and what creates the possibility for ongoing excellence. Not being bound to fixed decisions is something most smart companies lust after, but somehow when Wayland vs X comes up, everyone super wants there to be one and only one path, set forth three decades ago, that no one can ever really overhaul or redo.
It's so unclear to me how people can be so negative, so short, and so mean about Wayland. There's no viable alternative organizational model for Authoritarian display servers. And even if you somehow did get people signed up for this fantasy, there's such a load of pretense that it would have cured all ills. I don't get it.
I think a big part of it is maintenance: Xorg doesn't look likely to be maintained long into the future in the way Wayland will be, and a lot of the Xorg maintainers are now working on Wayland.
So, good idea or bad, Wayland is slowly shifting to being the default by virtue of being the most maintained, most up-to-date compositor.
Wayland is not a compositor. Being more maintained than Xorg doesn't mean anything, because Wayland doesn't do a tenth of the things Xorg did.
What used to be maintained in one codebase by Xorg devs is now duplicated across at least three major compositors, each with its own portal implementation and who knows what else. And those are primarily maintained by desktop environment devs who also have the whole rest of the DE to worry about.
This guy started that Xlibre fork after throwing a fit because he was told not to break Xorg with his contributions; he ranted that he just wants to be able to merge whatever he wants. I would not trust the stability of that fork at all.
Looks to me like he's a belligerent personality, but he's probably not wrong when he says Redhat has an agenda that involves suppressing progress on Xorg and forcing Wayland on users instead.
I was open-minded toward Wayland when the project was started... in 2008. We are 18 years down the road now. It has failed to deliver a usable piece of software for the desktop. That's long enough for me to consider it a failed project. If it still exists, it's probably for the wrong reasons (or at least for reasons unrelated to any version of desktop Linux I want to run; perhaps it has a use in the embedded space).
Taking the proposition as true, what goal does Redhat have in "forcing Wayland on users"? I am asking this in good faith; I literally, naively do not understand what the "bad" bit is.
Like, ok, it's 2030 and X11 is dead, no one works on it anymore, and 90% of Linux users use Wayland. What did they gain? I know they did employ Poettering, but not anymore, and AFAIK they contribute a non-trivial amount of code upstream to Linux, Gnome, KDE? If more users are on Wayland, they can pressure Gnome to ... what?
I sort of get an argument around systemd and this, in that they can push their target feature set into systemd and force the rest of the ecosystem to follow them. But, well, I guess I don't get that argument either, because they can already put any special sauce they want into Redhat's shipped systemd, and if it's good it will be picked up, and if it's bad it won't be?
I guess if Redhat maintains systemd and Wayland, then they could choke out community contributions by ignoring them or whatever, but wouldn't we just see forks? Arch would just ship with cooler-systemd or whatever?
- Maintaining X costs a lot of time, expertise and money; it's a hard codebase to work with, so deprecating X saves them money
- Wayland is simpler and achieves greater security by eliminating desktop features that most users value, but that Redhat's clients in security-conscious fields like healthcare, finance and government are perhaps willing to live without
So I suspect it comes down to saving money and delivering something they have more control over, something more tailored to their most lucrative enterprise scenarios; whereas X is an old mess of cranky Unix guys and their belligerent libre cruft.
There are some parallels to systemd, I guess, in that its design rejected the Unix philosophy, and this was a source of concern for a lot of people. Moreover, at the time systemd was under development, my impression of Poettering was that he was as incompetent as he was misguided and belligerent: he was also advocating for abandoning POSIX compatibility, and PulseAudio was the glitchiest shit on my desktop back then. But in the end systemd simply appeared on my computer one day and nothing got worse, and that is the ultimate standard. If they forced Wayland on me tomorrow, something on my machine would break (this is the main point of the OP), and they've had almost 20 years to fix that, but it may arguably never get fixed due to Wayland's design. So Wayland can go the way of the dodo as far as I'm concerned.
The statements "rejecting the Unix philosophy by being a huge monolith which does too many things" and "systemd is actually 69 different binaries, only one of which runs as PID 1" are mutually exclusive. The latter is provably true, so the former must not be.
> ...what goal does Redhat have in "forcing wayland on users"?
The same goal any group of savvy corporate employees has when their marquee project has proved to be far more difficult, taken way longer, and required far more resources than anticipated to get within artillery distance of its originally-stated goal?
I've personally seen this sort of thing play out several times during my tenure in the corporate environment.
I honestly don't know what that means; I've never worked in a big company/corporation. Try and disown it? How does that fit with Xlibre's anti-corporate-control stance? I guess if they push Wayland and then drop it, we're left with X11 ignored and a financially unsupported alternative?
I guess I just don't get how the third E in EEE plays out in an open source environment.
Can't edit: I mention Poettering above because I remember similar arguments against his stuff (which I also never really fully understood in terms of an "end game"), not because I have personal animosity against him or his projects, or want to hold him up as an "example of what can go wrong".
There is a massive class of things in open source you can look at from the perspective of: "Suppose a megacorp or private equity owns this entity and wants to cut costs as much as possible while contributing as little back to the community/ecosystem as possible... what happens next?" And boom, you can suddenly see the Matrix. So in the case of Redhat it's likely just IBM being IBM at the financial level, and all these little decisions trend a certain way in the long run because of that.
Ridiculous. Wayland all in all provides a far better experience than X11, and Wayland projects like Plasma, Hyprland, Sway, etc. are very much not failed.
I did some digging in the issues and PRs of pre-commit, and the guy seems to be a major douche. Too bad, because uv is amazing. I might look at an alternative to pre-commit in the future.
I recently found out I've been banned from all of their repositories on GitHub, even though, as far as I'm aware, our only interaction was a duplicate bug issue I created because I didn't manage to find the original with GitHub's search, like in the linked issue from the OP.
Between this and the resistance to implementing/merging useful things that basically everyone wants, I've been moving away from their tools.
You're falling into the false dichotomy that always comes up with these topics: as if the choice were between the cloud and renting rack space while applying your own thermal paste to the CPUs.
In reality, for most people, renting dedicated servers is the goldilocks solution (not colocation with your own hardware).
You get an incredible amount of power for a very reasonable price, but you don't need to drive to a datacenter to swap out a faulty PSU; the on-site engineers take care of that for you.
I ordered an extra server from Hetzner today. It was available 90 seconds later. Using their installer I had Ubuntu 24.04 LTS up and running, and with some Ansible playbooks to finish configuration, all in all from the moment of ordering to fully operational was about 10 minutes tops. If I no longer need the server, I just cancel it; the billing is per hour these days.
Bang for the buck is unmatched, and none of the endless layers of cloud abstraction getting in the way. A fixed price, predictable, unlimited bandwidth, blazing fast performance. Just you and the server, as it's meant to be.
I find it a blissful way to work.
I’d add this: servers used to be viewed as pets; the sysadmins spent a lot of time on snowflake configurations and on managing each one. When we started standing up tens of servers to host the nodes of our app (early 2000s), the sheer admin overhead was huge. One thing I have not seen mentioned here is how powerful Ansible and similar tools were at simplifying server management. IIRC, being able to provision and stand up servers simply, with known configurations, was a huge win AWS provided.
You were commonly given a network uplink and a list of public IP addresses that you were to set up on your box or boxes. IPMI/BMC were not a given on a server, so if you broke it, you needed remote hands and probably brains too.
Virtualisation was in the early days and most of the services were co-hosted on the server.
Software defined networks and Open vSwitch were also not a thing back then. There were switches with support for VLANs and you might have had a private network to link together frontend and backend boxes.
Servers today can be configured remotely. They have their own management interfaces, so you can access the console and install the OS remotely. The network switches can be reconfigured on the fly, making the network topology reconfigurable online. Even storage can be mapped via SAN. The only hands-on issue left is hardware malfunction.
If I were to compare with today, it was like having a wardrobe of Raspberry Pis on a dumb switch, plugging in cables when changes were needed.
Even if you don't go Ansible/Chef/Puppet/Salt, just having Git is good. You can put your configs in Git, use a Git action to change whatever variables, and deploy to the target (rough sketch below). No extra tools needed, and you get versioned configs.
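Just for illustration, a minimal Python sketch of that idea; the template name, variables file, and target host here are made up, not from any real setup:

    #!/usr/bin/env python3
    # Sketch: render a git-versioned config template and copy it to a target.
    # "app.conf.tmpl", "vars.json" and the host below are hypothetical.
    import json
    import subprocess
    from pathlib import Path
    from string import Template

    def render(template_file, vars_file):
        # Variables live in a small JSON file tracked in the same repo.
        variables = json.loads(Path(vars_file).read_text())
        return Template(Path(template_file).read_text()).substitute(variables)

    def deploy(rendered, target, dest):
        # Write the rendered config locally, then push it with scp.
        Path("app.conf").write_text(rendered)
        subprocess.run(["scp", "app.conf", f"{target}:{dest}"], check=True)

    if __name__ == "__main__":
        deploy(render("app.conf.tmpl", "vars.json"),
               "deploy@203.0.113.10", "/etc/app/app.conf")

The "Git action" part is just whatever runs this on push (a hook or CI job); the versioning comes for free from the repo history.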
> all in all from the moment of ordering to fully operational was about 10 minutes tops.
I think this is an important point. It's quick.
When cloud got popular, doing what you did could take upwards of 3 months in an organisation, with some being closer to 8 months. The organisational bureaucracy meant that any asset purchase was a long procedure.
So, yeah, the choices were:
1. Wait 6 months to spend out of capex budget
Or
2. Use the opex budget and get something going in 10 minutes.
We are no longer in that phase, so cloud services make very little sense now, because you can still use the opex budget to get a VPS and have it going in minutes with automation.
True, but I think you're touching on something important regarding value. Value is different depending on the consumer: you're willing and able to manage more of the infrastructure than someone with a narrower skillset.
Being able to move responsibility for parts of the service onto the provider is what we're paying for, and for some, paying more money to offload more of that responsibility actually results in more value for the organization/consumer.
Couldn't agree more! I've been using Python for almost 20 years, my whole career is built on it, and I never missed typing.
Code with type hints is so verbose and unpythonic, making it much harder to read. Quite an annoying evolution.
As the article says, type hints represent a fundamental change in the way Python is written. Most developers seem to prefer this new approach (especially those who’d rather be writing Java, but are stuck using Python because of its libraries).
However it is indeed annoying for those of us who liked writing Python 2.x-style dynamically-typed executable pseudocode. The community is now actively opposed to writing that style of code.
I don’t know if there’s another language community that’s more accepting of Python 2.x-style code? Maybe Ruby, or Lua?
There is nothing Python 2 about my dynamically typed Python 3 code. I'm pretty confident a majority of new Python code is still being written without type hints.
Hell, Python type annotations were only introduced in Python 3.5; the language was 24 years old by then! So no, the way I write Python is the way it was meant to be written. Type hints are a gadget that was bolted on when the language was already fully mature, and it's pretty ridiculous to paint code without type hints as unpythonic. That's the world upside down.
If I wanted to write very verbose typed code I would switch to Go or Rust. My Python stays nimble, clean and extremely readable without type hints.
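To make concrete the kind of difference I mean, a throwaway example (not from any real codebase):

    # Without hints: short, reads like pseudocode.
    def total_price(items, tax_rate):
        return sum(item["price"] for item in items) * (1 + tax_rate)

    # With hints: the same function, noticeably noisier
    # (built-in generics like list[...] need Python 3.9+).
    def total_price_typed(items: list[dict[str, float]], tax_rate: float) -> float:
        return sum(item["price"] for item in items) * (1 + tax_rate)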
I agree completely! To be clear, I don’t consider describing code as “Python 2-style” to be a bad thing. It’s how I describe my own Python code!
Overall, I have found very few Python 3 features are worth adopting (one notable exception is f-strings). IMO most of them don’t pull their weight, and many are just badly designed.
> Hell, python type annotations were only introduced in python 3.5
Mypy was introduced with support for both Python 2.x and 3.x (3.2 was current at the time), using type comments, before Python standardized the use of Python 3.0's annotation syntax for typing. Even when type annotations were added to Python proper in PEP 484/Python 3.5, some uses now covered by annotations were still left to mypy-style type comments, with annotations for variables only added in PEP 526/Python 3.6.
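Roughly what that progression looked like, with made-up names purely for illustration:

    # PEP 484-era type comments: what mypy consumed before (and alongside)
    # real annotation syntax; these also work on Python 2.x.
    retries = 3  # type: int

    def fetch(url, timeout=5.0):
        # type: (str, float) -> bytes
        return b""

    # Python 3.0 had annotation syntax for functions (PEP 3107);
    # PEP 484 (Python 3.5) standardized its use for types:
    def fetch_annotated(url: str, timeout: float = 5.0) -> bytes:
        return b""

    # Variable annotations only arrived with PEP 526 (Python 3.6):
    max_retries: int = 3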
I was using autojump for years (on Debian) until I lost my jump history several times in the past few months. It turns out it's a known race condition bug, fixed in a newer version: