It's a pity. It's also a step back from valuing the Unix philosophy, which has its merits, especially for those with a "learning the system from scratch" mindset. Sorry, but I have no sympathy for systemd.
SysVinit has been seen by some people in the post-systemd world as some sort of mystifying mashup concocted by sadists, yet I've found that when it is explained well, it is clear and human-friendly, with easy uptake by newcomers. I echo that this decision is a pity.
It’s not just about explaining it well but whether you have to support it on more than one distribution/version or handle edge cases. For a simple learning exercise it can be easier to start with, but even in the 90s it was notably behind, say, Windows NT 3 in a lot of ways that matter.
sysv is garbage though. If the Unix philosophy is "make it do one thing and do it well", it doesn't do the one thing it is supposed to do well.
I dislike overloading systemd with tools that are not related to running services, but systemd does the "run services" part (and auxiliary stuff like "make sure the mount a service uses is up before it starts" or "restart it if it dies", plus a hundred other things that are very service- or use-case-specific) very, very well, and I've used maybe four different alternatives over the last 20 years.
Clearly there are lots of people who don't want something that does what you say systemd does. Bravo that choice is out there, but what a pity that LFS does not seem to have the resources to test future versions for SysVinit.
I don't have a dog in this fight, but I find it funny that the anti-systemd crowd hates it because it doesn't "follow the Unix philosophy", yet they tend to also hate Wayland, which does follow it and moves away from a clunky monolith (Xorg).
Xorg itself (which isn't a monolith, BTW) provides more than the bare minimum, but so does the Linux kernel - or even the Unix/BSD kernels of old - yet programs that did follow the Unix philosophy were built on top of them.
In X11/Xorg's case, a common example would be environments built off different window managers, panels, launchers, etc. In theory nothing prevents Wayland from having something similar, but in practice, 17 years after its initial release, there isn't anything like that (or at least nothing that people actually use).
At least in my mind, the Unix philosophy isn't some sort of dogma, just something to strive for, and a base (like X11) that enables others to do that doesn't go against it from the perspective of the system as a whole.
I'm in the same boat. Systemd is an unprincipled mess and ships some quite shoddy replacements for pre-existing components. Wayland is super clean, it just takes forever to add the features that users (and developers) expect. It could seriously have been done over 10 years ago, not by heroic development effort, but by not being pathologically obstructive about features.
The two projects are complete opposites except in one way: they replace older stuff.
If you want to learn the system from scratch, the best way will be writing your own little init system from scratch, so you can understand how the boot sequence works. And as you make use of more and more of the advanced features of Linux, your init system will get more and more complex, and will start to resemble systemd.
If you only learn about sysvinit and stop there, you are missing large parts of how a modern Linux distro boots and manages services.
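To make that concrete, here is a toy sketch of the core loop such a homegrown init might grow: just starting services and restarting them, nothing close to a real PID 1 (the service paths are invented):

```python
#!/usr/bin/env python3
"""Toy init sketch: start a couple of services and restart them if they die.

Purely illustrative; the service paths are made up, and a real PID 1 also has
to reap orphaned children, handle signals, mount filesystems, and much more.
"""
import subprocess
import time

SERVICES = ["/usr/sbin/syslogd", "/usr/sbin/sshd"]   # hypothetical services

procs: dict[str, subprocess.Popen] = {}

def start(path: str) -> None:
    procs[path] = subprocess.Popen([path])

for path in SERVICES:
    start(path)

while True:                              # supervision loop
    for path, proc in procs.items():
        if proc.poll() is not None:      # the service exited
            start(path)                  # naive "Restart=always"
    time.sleep(1)
```

Every feature you add from there (ordering, dependencies, sockets, resource limits) pushes it closer to what the bigger init systems already do.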
That's the point on which people differ. Even if we take as given that rc/svinit/runit/etc is not good enough (and I don't think that's been established), there are lots of directions you can go from there, with systemd just one of them.
And on the other hand, I have no sympathy for the Unix philosophy. I value results, not dogma, and managing servers with systemd is far more pleasant than managing servers with sysvinit was. When a tool improves my sysadmin life as much as systemd has, I couldn't care less if it violates some purity rule to do so.
(author here) it's actually the module system of OCaml that's amazing for large-scale code, not the effects. I just find that after a certain scale, being able to manipulate module signatures independently makes refactoring of large projects a breeze.
Meanwhile, in Python, I just haven't figured out how to effectively do the same (even with uv, ruff, and other affordances) without writing a ton of tests. I'm sure it's possible, but OCaml's spoilt me enough that I don't want to have to learn it any more :-)
I recently realized that "pure functional" has two meanings: one is no side effects (functional programmers, especially of languages like Haskell, use it this way) and the other is that it doesn't have imperative fragments (the jump from ISWIM to SASL dropped the non-functional parts inherited from ALGOL 60). A question seems to be whether you want to view sequencing as syntax sugar for lambda expressions or not.
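For what it's worth, the nearest analogue I know of in Python is writing code against typing.Protocol interfaces and letting the checker enforce them, which covers only a fraction of what manipulating module signatures gives you. A minimal sketch with invented names:

```python
from typing import Protocol

class Store(Protocol):
    """Hypothetical storage interface, playing the role of a module signature."""
    def get(self, key: str) -> bytes | None: ...
    def put(self, key: str, value: bytes) -> None: ...

class MemoryStore:
    """One implementation; it satisfies Store structurally, no inheritance needed."""
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}
    def get(self, key: str) -> bytes | None:
        return self._data.get(key)
    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

def cache_result(store: Store, key: str, value: bytes) -> None:
    # Code written against the Protocol keeps type-checking when
    # implementations change, which makes refactoring somewhat safer.
    store.put(key, value)

cache_result(MemoryStore(), "answer", b"42")
```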
In my experience, "purely functional" always means "you can express pure functions on the type level" (thus guaranteeing that it is referentially transparent and has no side effects) -- see https://en.wikipedia.org/wiki/Pure_function
While Python isn't type-safe, you can use Pylance or similar in combination with type hinting to get your editor to yell at you if you do something bad type-wise. I've had it turned on for a while in a large web project and it's been very helpful, and it almost feels type-safe again.
It just isn't good enough. Anytime Pyright gives up in type checking, which is often, it simply decays the type into one involving Any/"Unknown":
Without strict settings, it will let you pass this value off as any other type and introduce a bug.
But with strict settings, it will prevent you from recovering the actual type dynamically with type guards, because it flags the existence of the untyped expression itself, even if used in a sound way, which defeats the point of using a gradual checker.
Gradual type systems can and should keep the typed fragment sound, not just give up or (figuratively) panic.
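A rough sketch of both halves of that (hypothetical values; exact diagnostics depend on the checker's settings):

```python
import json
from typing import Any, TypeGuard

def is_str_list(val: object) -> TypeGuard[list[str]]:
    """Runtime check that also narrows the type for the checker."""
    return isinstance(val, list) and all(isinstance(x, str) for x in val)

data: Any = json.loads('["a", "b"]')   # json.loads returns Any

# The checker accepts this assignment even though nothing guarantees it:
names: list[str] = data

# Sound recovery of the actual type via a type guard:
if is_str_list(data):
    checked: list[str] = data          # narrowed to list[str]
```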
Personally I’ve handled this by just ignoring the gradual part and keeping everything strictly typed. This sometimes requires some awkwardness, such as declaring a variable for an expression I would otherwise just write inline as part of another expression, because Pyright couldn’t infer the type and you need to declare a variable in order to explicitly specify a type. Still, I’ve been quite satisfied with the results. However, this is mostly in the context of new, small, mostly single-author Python codebases; I imagine it would be more annoying in other contexts.
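For instance, something like this contrived sketch, where `load()` stands in for any call whose result the checker can only see as Any:

```python
import json
from typing import Any

def load(raw: str) -> Any:   # stands in for a call the checker can't see through
    return json.loads(raw)

raw = '{"items": [1, 2, 3]}'

# Inline, the inferred type of load(raw)["items"] is just Any:
#     total = sum(load(raw)["items"])
# Pulling it into an annotated variable pins the type down explicitly:
items: list[int] = load(raw)["items"]
total = sum(items)
```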
> I've had it turned on for a while in a large web project and it's been very helpful, and almost feels type-safe again
In my experience "almost" is doing a lot of heavy lifting here. Typing in Python certainly helps, but you can never quite trust it (or that the checker detects things correctly). And you can't trust that another developer didn't just write `dict` instead of `dict[int, str]` somewhere, which thus defaults to Any for both key and value. That will type check (at least with mypy), and now you've lost safety.
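A minimal illustration of that bare-dict trap (checker behaviour varies with configuration):

```python
def count_bare(events: dict) -> int:
    # Unparameterised `dict` means dict[Any, Any]; strict mode would flag
    # the bare annotation, but a default configuration lets it through.
    return sum(events.values())

def count_typed(events: dict[int, int]) -> int:
    return sum(events.values())

count_bare({"oops": 1})     # accepted by the checker: the bug slips through
count_typed({"oops": 1})    # rejected: str keys are not int
```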
Using a statically typed language like C++ is way better, and moving to a language with an advanced type system like that of Rust is yet another massive improvement.
Yeah, if you're going to use static type checks, which you should, you really want to run the checker in strict mode to catch oversights such as generic container types without a qualifier.
Although I've found that much of the pain of static type checks in Python is really that a lot of popular modules expose incorrect type hints that need to be worked around, which really isn't a pleasant way to spend one's finite time on Earth.
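One common workaround, for what it's worth, is to confine the fix to a single thin wrapper instead of scattering ignores everywhere. A sketch with made-up names; the third-party call here is hypothetical:

```python
from typing import Any, cast

def fetch_config() -> Any:
    """Stand-in for a third-party call whose published hints are wrong or missing."""
    return {"retries": 3}

def fetch_config_typed() -> dict[str, int]:
    # One thin wrapper keeps the cast (and the risk) in a single place.
    return cast(dict[str, int], fetch_config())

retries: int = fetch_config_typed()["retries"]
```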
A bare dict types to dict[Unknown, Unknown], not dict[Any, Any]. I'm the main developer on this code right now, so I've been pretty much ensuring it's all typed reasonably well myself, but I don't just blindly trust it. I check what types it thinks things are, reason about why they aren't as narrow as they could be, and fix that to make them narrower, sometimes introducing extra classes so I don't have to type them as dict[dict[dict...]]. This is also an established codebase that does the kind of server-side processing Flask makes easy, and most of the developers working on it don't know C++ or Rust.
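For example, one of those extra classes might look something like this (made-up names, not from the actual codebase):

```python
from dataclasses import dataclass

# Instead of passing around dict[str, dict[str, int]] and deeper...
@dataclass
class PageStats:
    views: int
    unique_visitors: int

@dataclass
class SiteReport:
    site: str
    pages: dict[str, PageStats]   # page path -> stats

report = SiteReport(
    site="example.org",
    pages={"/": PageStats(views=120, unique_visitors=80)},
)
print(report.pages["/"].views)
```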
Cool, I like these kinds of projects. When it comes to embedding a scripting language in C, there are already some excellent options: Notable ones are Janet, Guile, and Lua. Tcl is also worth considering. My personal favorite is still Janet[0]. Others?
That list (or any similar list) would be so helpful if it had a health column, something that takes into account number of contributors, time since last commit, number of forks, number of commits, etc. So many projects are effectively dead but it's not obvious at first sight, and it takes 2 or 3 whole minutes to figure out. That seems short but it adds up when evaluating a project, causing people to just go to a well known solution like Lua (and why not? Lua is just fine; in fact it's great).
Thanks! I’m unfamiliar with Janet but I’ve looked into the others you listed.
One personal preference is that the scripting syntax be somewhat ‘C-like’, which might recommend a straight C embedded implementation, although I think that makes some compromises.
Yes, very C-like. One immediate difference is that in these C-like scripting languages there’s a split between definitions and executable commands. In Cicada there are only executable commands: definitions are done using a define operator. (That’s because everything is on the heap; Cicada functions don’t have access to the stack.) I personally think the latter method makes more sense for command-line interactivity, but that’s a matter of taste.
Yes I like this one. It’s similar and even more C-like, in that it discriminates between classes, class instances, functions, methods vs constructors, etc. (Cicada does not).
After over a decade of Debian, when I upgraded my PC, I tried every big systemd-based distro, including opensuse, which I wholly loathed. I finally decided on Void and feel at home as I did 20+ years ago when I began.
There are serious problems with the systemd paradigm, most of which I couldn't argue for or against. But at least in Void, I can remove NetworkManager altogether, use cron as I always have, and generally remain free to do as I please, until eventually every package there is has systemd dependencies, which seems frightfully plausible at this pace.
Void is as good as I could have wanted. If that ever goes, I guess it's either BSD or a cave somewhere.
I'm glad to see the terse questions here. They're well warranted.
Not stopping. Just clashing with that and a hundred other things that I never wanted managed by one guy. Systemd.timer, systemd.service, yes, trivial, but I don't catalog everything that bothers me about systemd - I just stay away from it. There are plenty of better examples. So wherever I wrote 'stop', it should read 'hinder'.
systemd parses your crontab and runs the jobs inside on its own terms
of course you can run Cron as well and run all your jobs twice in two different ways, but that's only pedantically possible as it's a completely useless way to do things.
> Void is as good as I could have wanted. If that ever goes, I guess it's either BSD or a cave somewhere.
If systemd-less Linux ever goes away, there are indeed still the BSDs. But I thought long and hard about this and already did some testing: I used to run Xen back in the early hardware-virt days, and nowadays I run Proxmox (still, sadly, systemd-based).
A hypervisor with a VM and GPU passthrough to the VM is at least something too: it's going to be a long, long while before people who want to take away our ability to control our machines can prevent us from running a minimal hypervisor and then the "real" OS in a VM controlled by it.
I did GPU passthrough tests and everything works just fine: be it Linux guests (which I use) or Windows guests (which I don't use).
My "path" to dodge the cave you're talking about is going to involved an hypervisor (atm I'm looking at the FreeBSD's bhyve hypervisor) and then a VM running systemd-less Linux.
And seen that, today, we can run just about every old system under the sun in a VM, I take we'll all be long dead before evil people manage to prevent us from running the Linux we want, the way we want.
You're not alone. And we're not alone.
I simply cannot stand the insufferable arrogance of Agent Poettering. Especially not given the kitchen sink that systemd is (systemd ain't exactly a home run, and many are realizing that fact now).
Gentoo doesn't "exist" because it is necessary to have an alternative to systemd. Gentoo is simply about choice and works with both OpenRC and systemd. It supported other inits to some degree as well in the past.
I use Fossil extensively, but only for personal projects. There are specific design conditions, such as no rebasing [0], and overall, it is simpler yet more useful to me. However, I think Fossil is better suited for projects governed under the cathedral model than the bazaar model. It's great for self-hosting, and the web UI is excellent not only for version control, but also for managing a software development project. However, if you want a low barrier to integrating contributions, Fossil is not as good as the various Git forges out there. You have to either receive patches or Fossil bundles via email or forum, or onboard/register contributors as developers with quite wide repo permissions.
It was developed primarily to replace SQLite's CVS repository, after all. They used CVSTrac as the forge and Fossil was designed to replace that component too.
Not all reading is the same. In other words, I wish this article had differentiated between different types of reading. For example, I read that many young adults have picked up reading books in the "new adult" genre. They enjoy the physical experience of an analog medium and consume one installment after another of popular series. This sounds fine at first, but the content is problematic. These books are not literature, and they may convey problematic views of behavior. For example, they may perpetuate outdated views of relationships between men and women, portraying them as unequal and reproducing clichéd stereotypes from the last millennium.
In short, the article focuses only on the amount of reading, but the content is also important. This should be part of the equation.
I see no reference to this in the article. Nor have you explained why these books are "not literature". This sounds like someone looking at a piece of art, and saying "that's not art".
As we're referencing young adults here, they already have a degree of understanding of the world today. Reading works of the past gives historical context to how the world is today, to why the world is as it is. I'd have hoped they'd been well exposed to such things in school, and you can be absolutely sure they've been exposed to such things in movies, or music (have you heard some rap music?), or... you know, this thing called the Internet.
In 12 seconds I can find more untoward content on the Internet, than I could in an entire library or book store.
When I was that age I read a lot of science fiction series. I had friends reading what they called “trashy romance”—they knew it was in no way realistic. This was also during peak Harry Potter, which is literary street food, and I say that as a compliment. Most of us read other stuff too, but realistically, dense English lit was confined to English class.
So this isn’t new and I don’t see the problem.
As for the “views,” by this standard kids shouldn’t read A Tale of Two Cities because it encourages beheadings.
A book portraying problematic behaviour doesn't mean it endorses it. Jesus, it seems like liberals and pseudo-progressives have adopted the mindset and vocabulary of leftists and actual progressives while clinging onto their reactionary puritan sensibilities, this time saying something is "problematic" instead of demonic.
> Every bad day for microsoft is yet another glorious day for linux.
Nah. If that were the case, Linux would dominate personal computer statistics. The reality is that most mainstream users just don't care. But, of course, that won't stop us.
I would also argue that _what_ personal computing means to most people has also evolved, even with younger generations. My Gen Z nephew the other day was flabbergasted when he learned I use my Documents, Videos, Desktop folders, etc. He literally asked, "What is the Documents folder even for?". To most people, stuff is just magically somewhere (the cloud), and when they get a new machine they just expect it all to be there and work. I feel like these cryptography and legality discussions here on Hacker News always miss the mark because we overestimate how much most people care. Speaking of younger generations, I also get the feeling that there isn't such a thing as "digital sovereignty" or "ownership" for them, at least not by the same definitions we Gen X and older millennials internalize.
Across the generations, there are always a few groups for whom cryptographic ownership really matters, such as journalists, protesters, and so on. Here on HN I feel like we tend to over-generalize these use cases to everybody, and then we are surprised when most people don't actually care.
I spent a long time tinkering with the tooling, which meant that writing always took a back seat or was put off. As a transition, I decided to use Bear Blog [0] for writing, and when I eventually find a self-hosted solution that works for me, I'll just switch over. And Bear Blog is in line with my values, unlike so many other platforms.
You make a good point. From a philosophical point of view, abstractions should hide complexity and make things easier for the human user. It should be like a pyramid: the bottom layer should be the most complex, and each subsequent layer should be simpler. The problem is that many of today's abstractions are built on past technology, which was often much better designed and simpler due to the constraints of that time. Due to the divergent complexity of today's abstractions and unavoidable leaks, we have a plethora of "modern" frameworks and tools that are difficult to use and create mental strain for developers. In short, I always avoid using such frameworks and prefer the old, boring basics wherever possible.
I'm struggling to form a definitive statement about my thoughts here, but I'll give it a try:
Every (useful) abstraction that aims to make an action easier will have to be more complex inside than doing the action itself.
Would love for someone to challenge this or find better words. But honestly, if that's not the case, you end up with something like leftPad. Libraries also almost always cover more than one use case, which also leads to them being more complex than a simple tailored solution.
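For instance (in Python terms), the classic left-pad wrapper is no more complex inside than the call it replaces:

```python
# The "abstraction" is barely bigger than the operation it wraps:
def left_pad(s: str, width: int, fill: str = " ") -> str:
    return s.rjust(width, fill)

print(left_pad("42", 5, "0"))   # 00042
print("42".rjust(5, "0"))       # ...which was no harder to write directly
```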
I think of it as: adding an abstraction relocates complexity away from what you want to make easy and moves it somewhere else. It does not eliminate complexity in total, it increases it. The best abstractions have a soft edge between using them and not using them. The worst are like black holes.