I can imagine situations where an emergency is noticed by people who are not near the location themselves, while the person whose location needs to be determined is unable to use their phone, as could be the case in many accidents.
I think it would be sufficient to just keep a log of this information being queried; cases where the location was pinged without a legitimate use case would then be investigated.
It seems, though, that not having systemd in it would be against "init freedom": https://www.devuan.org/os/init-freedom . Or is there some particular criterion an init system needs to satisfy to be included, one that systemd fails but the others meet?
A systemd distro tends to be locked to systemd, with many pieces of software requiring systemd to be running. An init-freedom distro avoids such dependencies. Presumably, you can still install systemd if you really want to.
* that once it was adopted, every single package started requiring it
* which meant that packages that previously could run everywhere, now could only run on systemd-based systems
* binary logs - a solution that solved nothing but created problems
* which locked out any system that wasn't linux
* which locked out any linux system that didn't want to use it
* which led to abominations like systemd-resolved
* "bUt yOu DoNt hAVe tO uSE it" - tell that to the remote attestation crowd, of which Poettering is a founding member. see https://news.ycombinator.com/item?id=46784572 - soon you'll have to use systemD because nothing else *can* be used.
literally everything the systemD crowd has done leads to lockout and loss of choice. All ramrodded through by IBM/RedHat.
The systemD developers don't care about any of this, of course. They've got a long history of breaking user space and of poor dev practices, because they're systemD. I mean, their attitude was so bad that one of their principal devs got kicked from the kernel: they overloaded the kernel boot parameter "debug", which flooded the console and broke literally every other system, refused to rename the option to something compatible like "systemd.debug", and then told everybody else "hey, we're not wrong, the rest of the world is wrong." And this has been their attitude ever since.
Look, if people want to use systemD, that's just fine. But it is a fact that the entire development process for systemD is predicated on making Linux incompatible with anything else, which is an entire inversion of how Linux and Free Software works.
I actually like unit files. But if systemD was just an init system, it would stop there.
I don't like unit files very much. Instead of these directives that are specific to systemd, and that are silently ignored if you run a too-old version of systemd (thus running your ftp server as root), you can prepend to the command line: `sudo -u nobody ftpd`. This composes much better, and you can use the same commands that work in the shell.
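A sketch of the failure mode being described (the service name and paths are illustrative, not from the thread):

```ini
# Unit-file approach: declarative, but a systemd old enough not to know
# the User= directive ignores the line (with only a log warning), and
# ftpd ends up running as root.
[Service]
User=nobody
ExecStart=/usr/sbin/ftpd
```

The composable alternative keeps the privilege drop in the command line itself, e.g. `ExecStart=/usr/bin/sudo -u nobody /usr/sbin/ftpd`, which behaves the same way when pasted into a shell.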
> * "bUt yOu DoNt hAVe tO uSE it" - tell that to the remote attestation crowd, of which Poettering is a founding member. see https://news.ycombinator.com/item?id=46784572 - soon you'll have to use systemD because nothing else can be used.
You're saying that because the person who made systemd now works on hardware attestation, all Linux distributions will eventually require remote hardware attestation, where users don't actually have the keys?
Maybe I'm naive, maybe I trust my distribution too much (Arch btw), but I don't see that happening. Probably Ubuntu and some other more commercial OSes might, but we'll still have choices in what OS/distribution to use, so just "vote with your partitions" or whatever.
If you build remote attestation into your product, corporate entities will require it. Just look at Android - what phones today give you unlimited root? If you have rooted, what applications have you broken? If you root, what e-fuses have you blown in your CPU, meaning it can never be un-rooted? Android, at the start, was open and freely modifiable - not so much anymore. Companies like Google can and have cut off access to users' data, without recourse. You can't modify your phone, so you don't own your phone. You just pay rent until they don't support it anymore.
I think phones are a completely different beast though (and already a lost cause); PCs seem a lot more resilient to that sort of lockdown.
But on the other hand, you might be right; you never know how the future looks. Personally, though, I'll wait until there is at least some signal that it's moving in that direction before I start prepping for it to actually happen.
* Literally every game console
* Literally every smartphone
* Microsoft, with their Win11 requirements, is moving there
* John Deere (read up on their hardware attestation efforts to block DIY repair)
* Car companies (require specialized tooling and software subscriptions to make certain repairs)
* Anything that requires a signed bootloader and signed software updates
* Snapdragon CPUs with e-fuses that blow when you run unsigned software, bricking the device
* Apple hardware, literally crypto-signed so you can't use aftermarket parts
* Google Chromecast
* Amazon Kindle, locked hardware
* IBM has locked down hardware on their laptops for *years*. Ever try upgrading a wifi card in an IBM laptop? They're already invested in this
And Linux probably predates most of those things, yet remains open and without forced attestation. Why would it suddenly be different today than in all those years you reference?
Companies can make Linux variants that are tivoized, but it's not standardized. They have to put effort into it. Soon it'll just be systemctl --tivoize-me
They are a different beast because of the culture surrounding them — nothing technologically different. Lennart wants to bring that same culture to desktops.
People have been saying this since day dot. It was very controversial for Debian to switch to systemd; the vote was close, and many of the arguments are still being played out.
This question sounds to me like you suspect some outright evil being projected here. That would go too far. The Wayland project tried to get the support of X developers early on, so that it could become a sort of "blessed" X successor. Plenty of earlier replacement attempts failed because they couldn't get broader community support, so this had to be part of any successful strategy. Any detrimental effects on X from that move were never a direct goal, as far as I am aware, just a consequence.
This isn't quite right? Wayland was literally created by an X11 developer, who brought in two more main X11 developers. It's a second system, not a competitor as such.
Yes, I do interpret your “draw development away from X” as suggesting an attempt to damage X (sorry if I misinterpreted your post, but I do think my interpretation was not really that unreasonable).
This “blessed successor” without any detrimental effects as a main goal: that’s pretty close to my understanding of the project. IIRC some X people were involved from the beginning, right?
Wanting developers to switch projects doesn't have to be malicious. In fact, personally I doubt there were any bad intentions; the developers of Wayland most likely think they're doing the right thing.
You're making the analogy work: the point of weightlifting as a sport or exercise is not to actually move the weights, but to condition your body so that it can move the weights.
Indeed, usually after weightlifting you return the weights to the place you originally took them from, so I suppose that means you did no work at all in the first place..
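In the strict physics sense the joke holds: gravity is a conservative force, so the net work done against it over a closed path depends only on the net change in height,

$$W = -\Delta U = -mg\,\Delta h = 0 \quad \text{when } \Delta h = 0,$$

which is exactly the "weights end up where they started" case.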
That's true of exercise in general. It's bullshit make-work we do to stay fit, because we've decoupled individual survival from hard physical labor, so it doesn't happen "by itself" anymore. A blessing and a curse.
The required-test-per-function is sort of interesting. But it's not enforced that the test does anything useful, is it?
So I wonder how exhausting it would be to write in a language that required, for all functions, that they be tested with 100% path coverage.
Of course, this by itself still wouldn't be equivalent to proving the code correct, but it would probably point people to the corner cases quite rapidly. Additionally, it would make it impossible to have code that cannot be tested with 100% path coverage due to static relationships within it that are not (or cannot be) expressed in the type system, e.g. `if (foo) { if (!foo) {..} }`.
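A minimal sketch of that unreachable-path situation (the function and its names are made up for illustration):

```python
def handle(flag: bool) -> str:
    """Toy function containing a statically dead path."""
    if flag:
        if not flag:  # dead branch: flag is already known to be True here
            return "impossible"
        return "on"
    return "off"

# Only two of the three return paths can ever execute, so a language that
# demanded 100% path coverage per function would reject this code as written.
print(handle(True), handle(False))
```

No test input can reach the inner branch, which is exactly the kind of code such a language would force you to delete or restructure.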
And would such a language need some kind of dynamic dependency-injection mechanism for mocking in the tests?
Amazing that these tools don't maintain a replayable log of everything they've done.
Although `git revert` is not a destructive operation, so it's surprising that it caused any loss of data. Maybe they meant `git reset --hard` or something like that. Wild if Codex would run that.
I have had Codex recover things for me from its history after Claude had done a `git reset --hard`; Codex is one of the more reliable models/harnesses when it comes to performing undo and redo operations, in my experience.
Claude (can’t remember if was 4.1 Opus, 4.5 Sonnet, or 4.5 Opus) once just started playing with git worktrees and royally f-d up the local repo and lost several hours of work. Since then, I watch it like a hawk.
`git reset --hard` doesn't remove unreferenced commits or rewrite the reflog so I don't think that would do it. Something like `git reset && git gc` would have to be done.
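For what it's worth, when only refs were moved (and not uncommitted files), the reflog route usually gets the commits back; a sketch:

```shell
# After an accidental `git reset --hard`, the previous commits are still
# reachable through the reflog (uncommitted working-tree changes are not).
git reflog                   # find where HEAD pointed before the reset
git reset --hard 'HEAD@{1}'  # move back to the pre-reset state
```

This is presumably how an agent with access to its shell history can undo another agent's reset.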
> Now please tell me what information the geolocation prompt actually provides to the website that cannot be taken from the IP address, which is already tracked and processed by google and every single website tracking tool.
Show me the bus schedule for the nearest bus stop, show me the nearest store, share my location in a chat..
The browser's IP-based geolocation (as per what https://mylocation.org/ can find out from my session) is kilometers away.
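For comparison, the permission-gated browser API the thread is discussing looks roughly like this (the helper function and its output format are illustrative):

```javascript
// The Geolocation API only fires after the user accepts the prompt, and can
// return GPS-grade coordinates; IP-based lookups are often kilometers off.
function describePosition(position) {
  const { latitude, longitude, accuracy } = position.coords;
  return `lat=${latitude} lon=${longitude} ±${accuracy}m`; // accuracy in meters
}

// Guarded so the snippet is inert outside a browser context.
if (typeof navigator !== "undefined" && navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(
    (pos) => console.log(describePosition(pos)),
    (err) => console.error("declined or unavailable:", err.message)
  );
}
```

The point of contention is precisely that `accuracy` here can be meters, not the kilometers an IP database gives you.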
Google is not exclusively using databases like MaxMind for geolocation, though. They fuse a lot of data together and can probably even discern which building you're in from the other local network devices, without precise geolocation sharing.
Like the Meta/Yandex apps were doing, just not strictly for position tracking, but centered more on pinpointing your unique ID.
As I understand it, this tag might at some point be supported by non-Google browsers as well, without access to Google-internal databases. At first probably the Chromium-derived ones, where this tag will likely land at some point.
Strava apps, FourSquare apps, proximity to friend alerts...
It's really disappointing how doubt and suspicion have rotted some people's brains. Having any and every work viewed with searing doubt is such an unfortunate fate, one that turns humanity away from progress and possibility.
As a developer of fun personal website toys for myself & friends, I want to do good things. I want a better web platform. I can download third-rate geo-IP databases to do a bad job of inaccurately spying on people who maybe don't want to be spied on, and that won't even work with VPNs/Tailscale? So what? That sounds infernal as heck. None of the post engages with the subject matter; it's all whinging about something else.
And I didn't stop at geolocation. getUserMedia is even more widely used, every day, by many many people. The ability to turn that on and off fast is crucial to users having control.
This moroseness is right up there with the right-wing batshittery that protestors are paid: people so far gone into a conspiracy world of vice that they literally cannot believe in good and hope, even when it's right ahead of them.
But, PS, Google, I still haven't seen any checks (actually I got a Summer of Code one long, long ago), so please, get those moving!