I'm guessing you're referencing my comment, that isn't what I said.
> But the team is not even willing to make promises as big as yours.
Be honest, look at the comment threads for this announcement. Do you honestly think a promise alone would be sufficient to satisfy all of the clamouring voices?
No, people would (rightfully!) ask for more and more proof -- the best proof is going to be to continue building what we are building, and then you can judge it on its merits. There are lots of justifiable concerns people have in this area, but most either don't really apply to what we are building or are much larger social problems that we are really not in a position to affect.
I would also prefer to be judged based on my actions, not on wild speculation about what I might theoretically do in the future.
I'm Aleksa, one of the founding engineers. We will share more about this in the coming months but this is not the direction nor intention of what we are working on. The models we have in mind for attestation are very much based on users having full control of their keys. This is not just a matter of user freedom, in practice being able to do this is far more preferable for enterprises with strict security controls.
I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.
Thanks for the clarification and to be clear, I don't doubt your personal intent or FOSS background. The concern isn't bad actors at the start, it's how projects evolve once they matter.
History is pretty consistent here:
WhatsApp: privacy-first, founders with principles, both left once monetization and policy pressure kicked in.
Google: 'Don’t be evil' didn’t disappear by accident — it became incompatible with scale, revenue, and government relationships.
Facebook/Meta: years of apologies and "we'll do better," yet incentives never changed.
Mobile OS attestation (iOS / Android): sold as security, later became enforcement and gatekeeping.
Ruby on Rails ecosystem: strong opinions, benevolent control, then repeated governance, security, and dependency chaos once it became critical infrastructure. Good intentions didn't prevent fragility, lock-in, or downstream breakage.
Common failure modes:
Enterprise customers demand guarantees - policy creeps in.
Liability enters the picture - defaults shift to "safe for the company."
Revenue depends on trust decisions - neutrality erodes.
Core maintainers lose leverage - architecture hardens around control.
Even if keys are user-controlled today, the key question is architectural:
Can this system resist those pressures long-term, or does it merely promise to?
Most systems that can become centralized eventually do, not because engineers change, but because incentives do. That’s why skepticism here isn't personal — it's based on pattern recognition.
I genuinely hope this breaks the cycle. History just suggests it's much harder than it looks.
Can you (or someone) please tell me what the point is, for a regular GNU/Linux user, of this thing you folks are working on?
I can understand the corporate use case - the person with access to the machine is not its owner, and the corporation may want to ensure their property works the way they expect it to. Not something I care about, personally.
But when it's a person using their own property, I don't quite get the practical value of attestation. It's not a security mechanism anymore (protecting a person from themselves is an odd goal), and it has significant abuse potential. That happened to mobile, and the outcome was that users were "protected" from themselves, that is - in less politically correct words - denied effective control over their personal property, as larger entities exercised their power and gated access to what became de-facto commonplace commodities by forcing users to surrender their rights. Paired with the awareness gap, the effects were disastrous, and not just for personal compute.
The value is being able to easily and robustly verify that my device hasn't been compromised. Binding disk encryption keys to the TPM such that I don't need to enter a password but an adversary still can't get at the contents without a zero day.
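For concreteness, a minimal sketch of what that looks like today with systemd-cryptenroll (the device path and PCR choice here are illustrative, adjust to your setup):

```sh
# Seal a LUKS2 unlock key into the TPM, bound to PCR 7 (Secure Boot state).
# If the boot chain is tampered with, the PCR values change and the TPM
# refuses to unseal, so the disk stays locked without the passphrase.
sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2

# Then let the initrd try the TPM at boot, via /etc/crypttab:
#   root  /dev/nvme0n1p2  -  tpm2-device=auto
```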
Of course you can already do the above with secure boot coupled with a CPU that implements an fTPM. So I can't speak to the value of this project specifically, only build and boot integrity in general. For example I have no idea what they mean by the bullet "runtime integrity".
> For example I have no idea what they mean by the bullet "runtime integrity".
This is, for example, dm-verity (e.g. `/usr/` is an erofs partition with a matching dm-verity hash tree). Lennart always talks about files being either RW (backed by encryption) or RX (backed by kernel signature verification).
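Roughly like this with veritysetup (device names invented for illustration; in practice tools like systemd-repart can automate the setup):

```sh
# Build the dm-verity hash tree for a read-only /usr partition.
# This prints a root hash, which is the one value you sign or measure.
veritysetup format /dev/vg0/usr /dev/vg0/usr-hashes

# At boot, open the device with that root hash; any block that was
# modified offline fails verification the moment it is read.
veritysetup open /dev/vg0/usr usr-verified /dev/vg0/usr-hashes <root-hash>
mount -o ro /dev/mapper/usr-verified /usr
```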
I don't think attestation can provide such guarantees. To the best of my understanding, it won't protect from any RCE, and it won't protect from malicious updates to configuration files. It won't let me run arbitrary binaries (putting a nail in the coffin of local development), or if it will - it would be a temporary security theater (as attackers would reuse the same processes to sign their malware). IDSes are sufficient for this purpose, without the negative side effects.
And that’s why I said “not a security mechanism”. Attestation is for protecting against actors with local hardware access. I have FDE and door locks for that already.
I think all of that comes down to a question of what precisely you're attesting? So I'm not actually clear what we're talking about here.
Given secure boot and a TPM you can remotely attest, using your own keys, that the system booted up to a known good state. What exactly that means though depends entirely on what you configured the image to contain.
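With tpm2-tools the flow looks roughly like this (file names are placeholders):

```sh
# On the machine being attested: create an attestation key under the
# endorsement key.
tpm2_createek -c ek.ctx -G rsa -u ek.pub
tpm2_createak -C ek.ctx -c ak.ctx -u ak.pub -n ak.name

# Produce a signed quote over selected PCRs. The nonce (-q) comes from
# the verifier to prevent replay; the verifier then checks the signature
# and compares the PCR values against the known-good measurements.
tpm2_quote -c ak.ctx -l sha256:0,4,7 -q abcd1234 \
    -m quote.msg -s quote.sig -o pcrs.bin
```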
> it won’t protect from malicious updates to configuration files
It will if you include the verified correct state of the relevant config file in a Merkle tree.
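In practice this is usually hash-chained into a PCR rather than a literal Merkle tree, but the effect is the same (the config path here is hypothetical):

```sh
# Measure the config file into PCR 14 before using it. The TPM computes
# PCR_new = SHA256(PCR_old || digest), so different file contents yield
# a different PCR value and the attestation quote no longer matches.
tpm2_pcrextend 14:sha256=$(sha256sum /etc/myapp/config.toml | cut -d' ' -f1)
```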
> It won't let me run arbitrary binaries (putting a nail in the coffin of local development), or if it will - it would be a temporary security theater (as attackers would reuse the same processes to sign their malware).
Shouldn't it permit running arbitrary binaries that you have signed? That places the root of trust with the build environment.
Now if you attempt to compile binaries and then sign them on the production system, yeah, that would open you up to attack (if we assume a process has been compromised at runtime). But wasn't that already the case? Ideally the production system should never be used to sign anything. (Some combination of SGX, TPM, and SEV might be an exception to that, but I don't know enough to say.)
> Attestation is for protecting against actors with local hardware access. I have FDE and door locks for that already.
If you remotely boot a box sitting in a rack on the other side of the world how can you be sure it hasn't been compromised? However you go about confirming it, isn't that what attestation is?
Well, maybe we're talking about different things, because I've asked from a regular GNU/Linux user perspective. That is, I have my computers and I'm concerned I would lose my freedoms to use them as I wish, because this attestation would be adopted and become de-facto mandatory if I ever want to do something online. Just like what happened to mobile, and what's currently slowly happening to other desktop OSes.
Production servers are a whole different story - it's usually not my hardware to begin with. But given how things are mostly immutable these days (shipped as images rather than installed the old-fashioned sysadmin way), I'm not really sure what to think of it...
You originally asked what the value proposition for a regular (non-corporate) user was. Then you raised some objections to my answer (or at least so I thought).
Granted, these technologies can also be abused. But that involves running third-party binaries that require SGX or other DRM measures before they will unlock or decrypt content, etc. Or querying a security element to learn who signed the image that was originally booted. Devices that support those things are already widespread. I don't think that's what this project is supposed to be. (Although I could always be wrong. There's almost no detail provided.)
The "founding engineers" behind Facebook and Twitter probably didn't set out to destroy civil discourse and democracy, yet here we are.
Anyway, "full control over your keys" isn't the issue, it's the way that normalization of this kind of attestation will enable corporations and governments to infringe on traditional freedoms and privacy. People in an autocratic state "have full control over" their identity papers, too.
> I've been a FOSS guy my entire adult life, I wouldn't put my name to something that would enable the kinds of issues you describe.
Until you get acquired, receive a golden parachute, and use it when you realize that the new direction no longer aligns with your views.
But, granted, if all you do is FOSS then you will anyway have a hard time keeping evil actors from using your tech for evil things. Might as well get some money out of it, if they actually dump money on you.
I am aware of that, my (personal) view is that DRM is a social issue caused by modes of behaviour and the existence or non-existence of technical measures cannot fix or avoid that problem.
A lot of the concerns in this thread center on TPMs, but TPMs are really more akin to very limited HSMs that are actually under the user's control (I gave a longer explanation in a sibling comment but TPMs fundamentally trust the data given to them when doing PCR extensions -- the way that consumer hardware is fundamentally built and the way TPMs are deployed is not useful for physical "attacks" by the device owner).
Yes, you can imagine DRM schemes that make use of them but you can also imagine equally bad DRM schemes that do not use them. DRM schemes have been deployed for decades (including "lovely" examples like the Sony rootkit from the 2000s[1], and all of the stuff going on even today with South Korean banks[2]). I think using TPMs (and other security measures) for something useful to users is a good thing -- the same goes for cryptography (which is also used for DRM but I posit most people wouldn't argue that we should eschew all cryptography because of the existence of DRM).
This whole discussion is a perfect example of what Upton Sinclair said, "It is difficult to get a man to understand something, when his salary depends on his not understanding it."
A rational and intelligent engineer cannot possibly believe that he'll be able to control what a technology is used for after he creates it, unless his salary depends on him not understanding it.
Insinuation? As a software dev, they don't have any agency over whether or by whom they get acquired. Their decision will be whether to leave if things change for the worse, and that's very much understandable (and arguably the ethical thing to do).
No, but I can promise to my current employer that me leaving my job won’t be a critical problem.
It’s less of an issue in the case of a normal job than in an open source project where often the commitment of particular founding individuals to the long-term future of the project is a big part of people’s decision to use or not use that tech in their solutions. Here, given that “Trusted computing” can potentially lock you out of devices you have bought, it’s important for people to be able to judge the risk of getting “legal ransomware”d if the trusted computing base ends up depending on a proprietary component that they can’t back out of.
That said, there is absolutely zero chance that I use this (systemd is already enough Poettering software for me in this lifetime) so I’m not personally affected either way.
So far, that's a slick way to say not really. You are vague where it counts, and surely you have a better idea of the direction than you say.
Attestation of what, to whom, for which purpose? What freedom does user control of keys actually provide, and how does it square with remote attestation and the wishes of enterprise users?
I'm really not trying to be slick, but I think it's quite difficult to convince people about anything concrete (such as precisely how this model is fundamentally different to models such as the Secure Boot PKI scheme and thus will not provide a mechanism to allow a non-owner of a device to restrict what runs on your machine) without providing a concrete implementation and design documents to back up what I'm saying. People are rightfully skeptical about this stuff, so any kind of explanation needs to be very thorough.
As an aside, it is a bit amusing to me that an initial announcement about a new company working on Linux systems caused the vast majority of people to discuss the impact on personal computers (and games!) rather than servers. I guess we finally have arrived at the fabled "Year of the Linux Desktop" in 2026, though this isn't quite how I expected to find out.
> Attestation of what, to whom, for which purpose? What freedom does user control of keys actually provide, and how does it square with remote attestation and the wishes of enterprise users?
We do have answers for these questions, and a lot of the necessary components exist already (lots of FOSS people have been working on problems in this space for a while). The problem is that there is still the missing ~20% (not an actual estimate) we are building now, and the whole story doesn't make sense without it. I don't like it when people announce vapourware, so I'm really just trying to not contribute to that problem by describing a system that is not yet fully built, though I do understand that it comes off as being evasive. It will be much easier to discuss all of this once we start releasing things, and I think that very theoretical technical discussions can often be quite unproductive.
In general, I will say that there are a lot of unfortunate misunderstandings about TPMs that lead people to assume their only use is as a mechanism for restricting users. This is really not the case: TPMs by themselves are actually more akin to very limited HSMs with a handful of features that can (cooperatively with firmware and operating systems) be used to attest to some aspects of the system state. They are also fundamentally under the user's control, completely unlike the PKI scheme used by Secure Boot and similar systems. In fact, TPMs are really not a useful mechanism for protecting against someone with physical access to the machine -- they have to trust that the hashes they are given to extend into PCRs are legitimate, and on most systems the data is even provided over an insecure data line. This is why the security of locked-down systems like the Xbox One[1] doesn't really depend on them directly and doesn't use them at all in the way they are used on consumer hardware. They are only really useful at protecting against third-party software-based attacks, which is something users actually want!
All of the comments about DRM obviously come from very legitimate concerns about user freedoms, but my views on this are a little too long to fit in a HN comment -- in short, I think that technological measures cannot fix a social problem and the history of DRM schemes shows that the absence of technological measures cannot prevent a social problem from forming either. It's also not as if TPMs haven't been around for decades at this point.
>I think that technological measures cannot fix a social problem
The absence of the technological measures used to implement societal problems totally does help, though. Just look at social media.
I fear the outlaw evil maid or other hypothetical attackers (good old scare-based sales tactics) much less than already powerful entities (enterprises, states) lawfully encroaching on my devices using your technology. So, I don't care about "misunderstandings" of the TPM or whatever other wall of text you are spewing to divert attention.
Thanks, this would be helpful. I will follow on by recommending that you always make it a point to note how user freedom will be preserved, without using obfuscating corpo-speak or assuming that users don’t know what they want, when planning or releasing products. If you can maintain this approach then you should be able to maintain a good working relationship with the community. If you fight the community you will burn a lot of goodwill and will have to spend resources on PR. And there is only so much that PR can do!
Better security is good in theory, as long as the user maintains control and the security is on the user end. The last thing we need is required ID linked attestation for accessing websites or something similar.
that's great that you'll let users have their own certificates and all, but the way this will be used is by corporations to lock us into approved Linux distributions. Linux will be effectively owned by Red Hat and Microsoft, the signing authorities.
it will be railroaded through in the same way that systemd was railroaded onto us.
> but the way this will be used is by corporations to lock us into approved Linux distributions. Linux will be effectively owned by Red Hat and Microsoft, the signing authorities.
This is basically true today with Secure Boot on modern hardware (at least in the default configuration -- Microsoft's soft-power policies for device manufacturers actually require that you can change this on modern machines). This is bad, but it is bad because platform vendors decide which default keys are trusted for Secure Boot by default and there is no clean automated mechanism to enroll your own keys programmatically (at least, without depending on the Microsoft key -- shim does let you do this programmatically with the MOK).
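For reference, the shim/MOK path looks like this (the key file name is illustrative):

```sh
# Queue your own certificate for enrollment in shim's MOK list.
mokutil --import MOK.der
# On the next boot, shim's MokManager asks for the password you set and
# enrolls the key; binaries signed with it will then boot under shim.
```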
The set of default keys ended up being only Microsoft (some argue this is because of direct pressure from Microsoft, but this would've happened for almost all hardware regardless and is a far more complicated story), but in order to permit people to run other operating systems on modern machines Microsoft signed up to being a CA for every EFI binary in the universe. Red Hat then controls which distro keys are trusted by the shim binary Microsoft signs[1].
This system ended up centralised because the platform vendor (not the device owner) fundamentally controls the default trusted key set and is what caused the whole nightmare of the Microsoft Secure Boot keys and rh-boot signing of shim. Getting into the business of being a CA for every binary in the world is a very bad idea, even if you are purely selfish and don't care about user freedoms (and it even makes Secure Boot less useful of a protection mechanism because it means that machines where users only want to trust Microsoft also necessarily trust Linux and every other EFI binary they sign -- there is no user-controlled segmentation of trust, which is the classic CA/PKI problem). I don't personally know how the Secure Boot / UEFI people at Microsoft feel about this, but I wouldn't be surprised if they also dislike the situation we are all in today.
Basically none of these issues actually apply to TPMs, which are more akin to limited HSMs where the keys and policies are all fundamentally user-controlled in a programmatic way. It also doesn't apply to what we are building either, but we need to finish building it before I can prove that to you.
Thanks for the reassurance, the first ray of sunshine in this otherwise rather alarming thread. Your words ring true.
It would be a lot more reassuring if we knew what the business model actually was, or indeed anything else at all about this. I remain somewhat confused as to the purpose of this announcement when no actual information seems to be forthcoming. The negative reactions seen here were quite predictable, given the sensitive topic and the little information we do have.
> The models we have in mind for attestation are very much based on users having full control of their keys.
If user control of keys becomes the linchpin for retaining full control over one's own computer, doesn't it become easy for a lobby or government to exert control by banning user-controlled keys? Today, such interest groups would need to ban Linux altogether to achieve such a result.
That's the thing. I can only provide a piece of software with the guarantee that it can run on my OS. It can trust my kernel to let it run, but shouldn't expect anything more. The vendor is free to run code whose integrity it wants to guarantee on its own infrastructure; but whatever reaches my machine _may_ at best run as the vendor intends.
> The models we have in mind for attestation are very much based on users having full control of their keys.
FOR NOW. Policies and laws always change. Corporations and governments somehow always find ways to work against their people, in ways which are not immediately obvious to the masses. Once they have a taste of this there's no going back.
Please have a hard and honest think on whether you should actually build this thing. Because once you do, the genie is out and there's no going back.
This WILL be used to infringe on individual freedoms.
The only question is WHEN?
And your answer to that appears to be 'Not for the time being'.
This is extremely bad logic. The technology of enforcing trusted software is without inherent value; it is good or ill depending entirely on expected usage. Anything that is substantially open will be used according to the values of its users, not according to your values, so we ought to consider their values instead of yours.
Suppose a fascist state wanted to identify potential agitators by scanning all communication for indications of dissent. One could require this technology in all trusted environments, and require such an environment to bank, connect to an ISP, or use Netflix.
One could even imagine a completely benign usage which only identified actual wrongdoing alongside another which profiled based almost entirely on anti-regime sentiment or reasonable discontent.
The good users would argue that the only problem with the technology is its misuse but without the underlying technology such misuse is impossible.
One can imagine two entirely different parallel universes: one in which a few great powers went the wrong way, in part enabled by trusted computing and the pervasive surveillance made possible by AI's capability to do the massive and boring task of analyzing a glut of ordinary behaviour and communication, plus the tech and law to ensure said surveillance is carried out.
Even those not misusing the tech may find themselves worse off in such a world.
Why again should we trust this technology just because you are a good person?
TLDR: We already know how this will be misused to take away people's freedom - not just the freedom to run their own software stack, but the freedom to dissent against fascism. It's immoral to build even with the best intentions.
You're providing mechanism, not policy. It's amazing how many people think they can forestall policies they dislike by trying to reject mechanisms that enable them. It's never, ever worked. I'm glad there are going to be more mechanisms in the world.
I was under the impression that the Mickey Mouse Protection Act 1998[1] extended the copyright protection for works retroactively (though already public domain works were excluded).
That being said, I guess the act had precautions to stop it from reducing the copyright protection for edge cases like these?
As someone who has caught DB a fair number of times over the years, I think DB is most hated by Germans (who love to complain) and long-term locals.
Maybe I've just been lucky so far, but as an Aussie it is hard to overstate how miraculous it is to me that you can travel almost anywhere within the country, and between several other countries, by train fairly cheaply. Yeah, I've run into a fair few issues and it was annoying, but that goes for every country I've been to (Japan had the least by far, but trains still get delayed there more often than people think, and I've also run into situations as in TFA where if I didn't speak Japanese things would've ended up worse).
I'm not sure I'd even put DB in my "bottom three" in terms of overall experience. Should it be much better? Of course. But if you listen to Germans it sounds like DB is the worst train network in the universe by a clear margin, and that's just obviously not true.
I appreciate you sharing the positive experience as a neighbor, but unfortunately, Deutsche Bahn is as bad as presented here. (I spent several years commuting to college via DB.) Once my train stopped in a small village with the announcement "the train ends here". Thankfully, kind people picked me up.
I used to complain about the French SNCF, then I discovered DB and stopped complaining. I've been a Bahncard 1. Klasse holder for a few years.
Last time I took the train in Germany I was 30h late and had to spend a Sunday between Cologne and Karlsruhe (not that I was really surprised).
The punctuality is a joke, the ICEs are impractical, and train management is comically incompetent (remember when the ICE cars would never arrive in the announced order and there was luggage room for maybe 15% of passengers?).
The cars are very dirty, especially in 1st class where eating a full meal at your seat is encouraged but the cars are cleaned once every two days.
However, the train attendants are usually very accommodating about every aspect of the trip on board.
Cheap tickets are cool, but they have been around for so long (the regional ones, at least) that Germans take them for granted.
As a French person, I feel that the SNCF is pretty good. We like to complain about it, but I have had only a few minor problems (a 3-hour pause due to a locomotive breakdown, or a 2-hour stop to allow cleaning after a suicide) - nothing too bad.
Trains are on time and rarely canceled.
The major issues are with pricing and lack of investment outside of TGV but it's not too bad.
I live in Berlin but grew up in the US. Yep, Germany has much more train coverage than where I'm from originally. And that's great. But to understand the complaints you really have to spend some years living with the uncertainty created by the DB.
It depends which route you take, but for a wide swath of the German population, your chance of an absolutely wretched experience seems to be around 1 in 4. That means that people are constantly weighing the desire for affordable, sustainable, comfortable transport that may go horribly wrong, against the (similarly unpredictable) endemic traffic jams and exhaustion of driving, and often choosing wrong. If you have no car, you're weighing more reliable but slow and uncomfortable and traffic-jam-prone buses, or simply avoiding the travel. Constantly making decisions on penalty of deeply unpleasant consequences without any way to actually reasonably judge your decision is a special form of miserable.
At least in the US, most of the time, there is no decision to make: you drive.
A lot of the issues are local, and some are time-constrained. There is a CCC talk on YouTube, "BahnMining - Pünktlichkeit ist eine Zier (David Kriesel)", which concludes that any train traveling through certain train stations will most likely end up significantly delayed. Then you have certain train models failing during summer. Or my recent favourite: planned construction work with no apparent plan for a reliable replacement service beyond "here is a train, it might leave at some point".
> See also "instagram is spying on you through your microphone". It's not, but I've seen people argue that it's OK for people to believe that because it supports their general (accurate) sentiment that targeted ads are creepy.
I used to be sceptical of this claim but I have found it increasingly difficult to be sceptical after we found out last year that Facebook was exploiting flaws in Android in order to track your browsing history (bypassing the permissions and privilege separation model of Android)[1].
Given they have shown a proclivity to use device exploits to improve their tracking of users, is it really that unbelievable that they would try to figure out a way to use audio data? Does stock Android even show you when an app is using its microphone permission? (GrapheneOS does.) Is it really that unbelievable that they would try to do this if they could?
If they are using the microphone to target ads, show me the sales pitch that their ad sales people use to get customers to pay more for the benefits of that targeting.
I get your point, but can you point to a sales pitch which included "exploit security flaws in Android to improve tracking"? Probably not, but we know for a fact they did that.
Also, your own blog lists a leak from 2024 about a Facebook partner bragging about this ability[1]. You don't find the claim credible (and you might be right about that, I haven't looked into it), but I find it strange that you are asking for an example that your own website provides?
I have already experienced the benefits of sending this to several family members, and I'm thankful for the hard work you put into laying everything out so clearly.
On paper, USDT probes are the best way for libraries (and binaries) to provide information for debugging, because they can be used programmatically and have no performance overhead until they are traced, but unfortunately they are not widely used.
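For anyone who hasn't seen them, a minimal sketch (the provider and probe names are invented; needs sys/sdt.h from systemtap-sdt-dev, and bpftrace to read the probe):

```sh
cat > demo.c <<'EOF'
#include <sys/sdt.h>

int main(void) {
    int items = 42;
    /* Compiles down to a single nop until a tracer attaches here. */
    DTRACE_PROBE1(demo, items_processed, items);
    return 0;
}
EOF
cc -o demo demo.c

# Attach from outside the process and read the probe's argument:
sudo bpftrace -e 'usdt:./demo:demo:items_processed { printf("items=%ld\n", arg0); }' \
    -c ./demo
```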
Yeah, I really have to wonder what the thought process is behind leaving such a comment. When people first started doing it I wondered if it was some kind of guerrilla outrage marketing campaign.
Maybe I'm getting too jaded but I'm struggling to be quite that charitable.
The entirety of the human-written text in that comment was "From ChatGPT:", and it was formatted as though it was a slam-dunk "you're wrong, the computer says so" (imagine it was "From Wikipedia" followed by a quote disagreeing with you instead).
I'm sure some people do what you describe but then I would expect at least a little bit more explanation as to why they felt the need to paste a paragraph of LLM output into their comment. (While I would still disagree that it is in any way valuable, I would at least understand a bit about what they are trying to communicate.)
My thought process was that the original comment was based on their personal experiences and since ChatGPT is trained on a large dataset, it may offer a different perspective derived from experiences of a lot more people.
> "you're wrong, the computer says so"
My thought: your knowledge may be limited; this is what a computer trained on a lot more data says.
SmartOS constructed a container-like environment using LX-branded zones; they didn't create an in-kernel equivalent to Linux's namespaces which was then nested in a zone. You're probably thinking of the KVM port to Solaris/illumos, which does run in a zone internally to provide additional protection.
While LX-branded zones were a really cool tech demo, maintaining compatibility with Linux long-term would be incredibly painful and you're bound to find all sorts of horrific bugs in production. I believe that Oxide uses KVM to run their Linux guests.
Linux has always supported nested namespaces and you can run Docker containers inside LXC (or Incus) fairly easily. Note that while it does add some additional protection (in particular, it transparently adds user namespaces which is a critical security feature most people still do not enable in Docker) it is still the same technology as containers and so kernel bugs still pose a similar risk.
As a maintainer of runc (the runtime Docker uses), if you aren't using user namespaces (which is the case for the vast majority of users) I would consider your setup insecure.
And a shocking number of tutorials recommend bind-mounting docker.sock into the container without any warning (some even tell you to mount it "ro" -- which is even funnier since that does nothing). I have a HN comment from ~8 years ago complaining about this.
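A quick demonstration of why "ro" is cosmetic: the mount is read-only, but connecting to a unix socket is not a filesystem write, so the full Docker API remains reachable (a sketch, assuming the official docker CLI image):

```sh
# The API is still fully usable through a read-only bind mount...
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock:ro docker:cli docker ps
# ...and "fully usable" includes launching a new privileged container
# that bind-mounts the host filesystem, i.e. root on the host.
```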
You really need to use user namespaces to get this kind of security protection -- running as root inside a container without user namespaces is not secure. Yes, breakouts often require some other bug or misconfiguration but the margin for error is non-existent (for instance, if you add CAP_SYS_PTRACE to your containers it is trivial to break out of them and container runtimes have no way of protecting against that). Almost all container breakouts in the past decade were blocked by user namespaces.
Unfortunately, user namespaces are still not the default configuration with Docker (even though the core issues that made using them painful have long since been resolved).
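For anyone who wants to turn it on daemon-wide, a sketch (careful: this overwrites any existing daemon.json, and remapped containers use a separate image/volume store):

```sh
# Enable user-namespace remapping for all containers:
echo '{ "userns-remap": "default" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

# Root inside a container is now an unprivileged uid range on the host:
docker run --rm alpine cat /proc/self/uid_map
# e.g.:  0     100000      65536
```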