On some major OSes (like Windows and Mac), there’s a “platform verifier” which can handle some of this, including the fetching and sharing of out of band data. It doesn’t have to be tied to a browser.
Linux should probably get one too, but I don’t know who will lead that effort.
In the meantime, browsers aren’t willing to wait on OSes to get their act together, and reasonably so. There’s regulation (and users, especially corporate/government) pushing for post-quantum solutions soon, so folks are trying to find solutions that can actually be deployed.
Browsers have always led in this space, all the way back to Netscape introducing SSL in the first place.
I think most folks involved are assuming the landmarks will be distributed by the browser/OS vendor, at least for end-user devices where privacy matters the most - Similar to how CRLSets/CRLite/etc are pushed today.
There's "full certificates" defined in the draft which include signatures for clients who don't have landmarks pre-distributed, too.
> If a new landmark is allocated every hour, signatureless certificate subtrees will span around 4,400,000 certificates, leading to 23 hashes in the inclusion proof, giving an inclusion proof size of 736 bytes, with no signatures.
That's assuming 4.4 million certs per landmark, a bit bigger than your estimate.
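The draft's arithmetic checks out: a Merkle inclusion proof needs one sibling hash per tree level. A quick sanity check (the 32-byte hash size is my assumption, matching a SHA-256-sized hash):

```python
import math

certs_per_landmark = 4_400_000   # figure quoted from the draft
hash_size = 32                   # bytes; assuming a SHA-256-sized hash

# One sibling hash per level of the Merkle tree.
depth = math.ceil(math.log2(certs_per_landmark))
proof_size = depth * hash_size

print(depth, proof_size)  # 23 levels, 736 bytes
```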
There's also a "full certificate" which includes signatures, for clients who don't have up-to-date landmarks. Those are big still, but if it's just for the occasional "curl" command, that's not the end of the world for many clients.
Capture-now, decrypt-later isn't really relevant to certificates, which mostly exist to defend against active MITM. The key exchange algorithms do need to be PQ-secure to resist CN-DL, but that has already happened if you have an up-to-date client and server.
Chrome and Cloudflare are doing a MTC experiment this year. We'll work on standardizing over the next year. Let's Encrypt may start adding support the year after that. Downstream software might start deploying MTC support the year after that. People using LTS Linux distros might not upgrade software for another 5 years after that. People run out-of-date client devices for another 5 years too.
So even in that timeline, which is about as fast as any internet-scale migration goes, it may be 10-15 years from today for MTC support to be fully widespread.
Yes, the rest of the cryptography needs to be PQ-secure as well.
But that's largely already true:
The key exchange is now typically done with X25519MLKEM768, a hybrid of the traditional x25519 and ML-KEM-768, which is post-quantum secure.
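A minimal sketch of the hybrid idea (placeholder byte strings, not the real TLS 1.3 key schedule): the two shared secrets are combined so the result stays secret as long as either component holds up.

```python
import hashlib

# Placeholder secrets standing in for the real outputs of the two
# component exchanges (ML-KEM-768 decapsulation and X25519 ECDH).
mlkem_secret = b"\x01" * 32
x25519_secret = b"\x02" * 32

# In X25519MLKEM768 the two shared secrets are concatenated and fed
# into the TLS 1.3 key schedule; a plain hash stands in for it here.
# An attacker must break BOTH components to recover the output.
hybrid_secret = hashlib.sha256(mlkem_secret + x25519_secret).digest()
assert len(hybrid_secret) == 32
```

The point of the hybrid construction is hedging: if ML-KEM turns out to have a classical weakness, x25519 still protects you; if a quantum computer breaks x25519, ML-KEM still protects you.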
The exchanged keys are then typically used with AES-128, AES-256, or ChaCha20. These are likely to remain much more secure against quantum computers as well (they may be weakened, but we likely have plenty of security margin left).
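The security-margin intuition comes from Grover's algorithm, which at best gives a quadratic speedup on brute-force key search, roughly halving the effective key length. A back-of-the-envelope check (my simplification, ignoring the large practical overheads of running Grover at scale):

```python
# Grover's algorithm searches an n-bit keyspace in ~2^(n/2) steps,
# so the effective key length is roughly halved.
def quantum_security_bits(key_bits: int) -> int:
    return key_bits // 2

print(quantum_security_bits(128))  # 64  -> tighter, but attacks stay impractical
print(quantum_security_bits(256))  # 128 -> comfortable margin
```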
Changing the key exchange or transport encryption protocols, however, is much, much easier, as they're negotiated and we can add new options right away.
Certificates are the trickiest piece to change and upgrade, so even though Q-day is likely years away still, we need to start working on this now.
Upgrading the key exchange has already happened because of the risk of capture-now, decrypt-later attacks, where you sniff traffic now and break it in the future.
> The key exchange is now typically done with X25519MLKEM768, a hybrid of the traditional x25519 and ML-KEM-768, which is post-quantum secure.
How "typical" are you suggesting this is? Honestly, it's the first I'd heard of this being done at all in the wild (not that I'm an expert). Peeking around a smattering of random websites in my browser, I'm not seeing it mentioned at all.
> Changing the key exchange or transport encryption protocols however is much, much easier, as it's negotiated and we can add new options right away.
So that’s a good point. We can quickly add new encryption protocols once they're negotiated within the connection, but adding to or entirely replacing the certificate system, or even just the underlying protocols, is a big deal.
Next week at IETF 124, there's a Birds-of-a-Feather session that will kick off the standardization process here.
I think Merkle Tree Certificates are a promising option. I'll be participating in the standardization efforts.
Chrome has signalled in multiple venues that they anticipate this will be their preferred (or only) option for post-quantum certificates, so it seems fairly likely we will deploy it in the coming years.
I work for Let's Encrypt, but this is not an official statement or promise to implement anything yet. For that you can subscribe to our newsletter :)
Is there a good documentation (or maybe code) reference for the protocols that get used here? Running readsb is fine enough by me, but I'm just interested in how these systems work. I see some mentions of a Beast format, and then there's the mlat-client too.
It appears to be where "you" (website visitors) have loaded page tiles. I was able to draw a little picture on the map by zooming in and panning around!
Excellent, thanks! Just took a look at "TCP/IP Illustrated: Volume 1" and it was exactly what I was looking for. Any book along those lines that is condensed/watered down (just to get started over a weekend)?