tomgag's comments | Hacker News

Given some of the comments in this thread, I would like to link this here:

https://gagliardoni.net/#20250714_ludd_grandpas

An excerpt:

> "but then WHAT is a good measure for QC progress?" [...] you should disregard quantum factorization records.

> The thing is: For cryptanalytic quantum algorithms (Shor, Grover, etc) you need logical/noiseless qubits, because otherwise your computation is constrained [...] With these constraints, you can only factorize numbers like 15, even if your QC becomes 1000x "better" under every other objective metric. So, we are in a situation where even if QC gets steadily better over time, you won't see any of these improvements if you only look at the "factorization record" metric: nothing will happen, until you hit a cliff (e.g., logical qubits become available) and then suddenly scaling up factorization power becomes easier. It's a typical example of non-linear progress in technology (a bit like what happened with LLMs in the last few years) and the risk is that everyone will be caught by surprise. Unfortunately, this paradigm is very different from the traditional, "old-style" cryptanalysis handbook, where people used to size keys according to how fast CPU power had been progressing in the last X years. It's a rooted mindset which is very difficult to change, especially among older-generation cryptography/cybersecurity experts. A better measure of progress (valid for cryptanalysis, which is, anyway, a very minor aspect of why QC are interesting IMHO) would be: how far are we from fully error-corrected and interconnected qubits? [...] in the last 10 or more years, all objective indicators in progress that point to that cliff have been steadily improving


I agree that measuring factorisation performance is not, at the moment, a good metric for assessing progress in QC. However, the idea that we reach a cliff once logical qubits become available is simply wishful thinking.

Have you ever wondered what will happen to those coaxial cables seen in every quantum computer setup, whose number scales approximately linearly with the number of physical qubits? Multiplexing is not really an option when the qubit waiting for its control signal decoheres in the meantime.


Oh, I didn't mean to imply that the "cliff" is for certain. What I'm saying is that articles like Gutmann's fail to acknowledge this possibility.

Regarding the coaxial cables: you seem to be an expert, so tell me if I'm wrong, but this seems to me a limitation of current designs (and of superconducting qubits in particular); I don't think there is any fundamental reason why this could not be replaced by a different technology in the future. Plus, the scaling need not be infinite, right? Even with current "coaxial cable tech", it "only" needs to scale up to the point of reaching one logical qubit.


> I don't think there is any fundamental reason why this could not be replaced by a different tech in the future.

QCs are designed with coaxial cables running from the physical qubits to outside the cryostat because the pulse measurement apparatus is most precise in large, bulky boxes. When you miniaturise it for placement next to the qubits, you lose precision, which increases the error rate.

I am not even sure whether logical components work at such low temperatures, since everything becomes superconducting.

> Even with current "coaxial cable tech", it "only" needs to scale up to the point of reaching one logical qubit.

Having a logical qubit sitting in a big box is insufficient. One needs multiple logical qubits that can interact and be put in superposition, for example. Each logical-qubit gate is implemented as a chain of gates between pairs of physical qubits, but that cannot be done directly all at once; hence, one effectively needs to solve a 15-puzzle in the fewest possible steps so that the qubits don't decohere in the meantime.


> I am not even sure whether logical components work at such low temperatures, since everything becomes superconducting.

I'm currently finishing a course whose final project is designing a semiconductor (quantum dot) based quantum computer. Obviously not mature tech yet, but it has been stressed throughout the course that you can build most of the control and readout circuits to work at cryogenic temperatures (2-4 K) using SLVT FETs. The theoretical limit for this quantum computing platform is, I believe, on the order of a million qubits in a single cryostat.


> you can build most of the control and readout circuits to work at cryogenic temperatures (2-4 K) using SLVT FETs

Given the magic that happens inside the high-precision control and readout boxes connected to qubits with coaxial cables, I would not equate being able to build such a control circuit with it ever reaching the same level of precision. I find it strange that I haven't seen this on the agenda for QC, where instead I see multiplexing being used.

> The theoretical limit for this quantum computing platform is, I believe, on the order of a million qubits in a single cryostat.

What are the constraints here?


Watch out, possibly similar to this patent: https://patentimages.storage.googleapis.com/e4/9b/4e/883a9df...

(disclaimer: I am co-inventor at a previous employer, I don't get royalties for it, just reporting)


Thanks for the heads up! Good to know what's out there. Interesting that I independently arrived at something possibly similar.


>Good to know what's out there.

It opens you up to legal risk for knowingly infringing patents. If possible, you should never look at patents.


That is somewhat outdated advice post-Seagate: https://www.stblaw.com/docs/default-source/cold-fusion-exist...


The 2025 elections for president and board members of the International Association for Cryptologic Research (IACR) have been botched: the tally from the super-secure cryptographic e-voting system cannot be retrieved due to the "accidental loss" of a decryption key.

https://iacr.org/news/item/27138

While human mistakes happen, this incident comes under very troubling circumstances.

Why does an e-voting system of an association like IACR not support t-out-of-n threshold decryption?

Why is a system where a single party can collude to invalidate the vote considered acceptable?

Wouldn't it be wiser to freeze voting-eligibility status as of November 20th, instead of "calling to arms" IACR members who had previously opted out of Helios emails?

Does the identity of some of the candidates for Director represent a problem for IACR?
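For readers unfamiliar with threshold decryption: the point is that no single trustee can lose (or withhold) the key. A minimal sketch with Shamir secret sharing over a prime field (illustrative parameters only; real systems such as threshold-ElGamal variants of Helios decrypt under the shared key without ever reconstructing it in one place):

```python
# Hypothetical t-out-of-n escrow of a decryption key via Shamir secret sharing.
# Losing up to n - t shares does not lose the key; fewer than t shares reveal
# nothing about it.
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def split(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = 123456789
shares = split(key, t=3, n=5)
# Any 3 of the 5 trustees suffice, even if 2 shares are "accidentally lost":
assert reconstruct(shares[:3]) == key
assert reconstruct(shares[2:]) == key
```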


Beautiful. Plays super smoothly on Firefox with NoScript, uBlock Origin and many other privacy extensions. But it lacks a player tutorial IMHO.


So does life. Just jump on in, try your best, make some discoveries, and have fun.


In other words: it's the journey, not the destination.


Personal opinion: Bluesky is "fedi-washing". Better Mastodon or Nostr.

https://gagliardoni.net/#20250818_battle_of_socials


I like the skepticism against Bluesky, and I agree that where VC money is involved things are mostly sketchy.

However, this post was about the AT Protocol, which you seem to have hand-waved away in one sentence:

> The AT Protocol used by Bluesky has some interesting features, although to be honest I don't know how many of these are just impossible to achieve on ActivityPub or are just WIP lagging behind due to funding constraints.

I don't think the debate between them is super useful because their architectures are very different.

You also mentioned an issue with the Bluesky relay, but others already exist, so it's not technically tied to Bluesky. Heck, I think the fact that multiple relays can exist at the same time, while it degrades the social aspect, still makes it decentralized.

As for the identity management issue, they announced just last week that it's getting branched to an independent entity: https://docs.bsky.app/blog/plc-directory-org


> I don't think the debate between them is super useful because their architectures are very different.

Sure, that's true, but I, personally, care mostly about one question: Who holds the keys to the kingdom? In this respect, I think the AT Protocol fails spectacularly, mainly due to the lack of a credible strategy to implement really self-custodian identities.

> You also mentioned an issue with the bluesky relay, but others already exist so it's not techincally tied to Bluesky. Heck, I think the fact multiple can exist at the same, while degrades the social aspect, still makes it decentralized.

Yes, but this is also true for Nostr, Diaspora, Mastodon, etc. The difference being that, last time I checked (and of course things might have changed in the meantime), with the AT Protocol it was only possible to self-host part of the infrastructure (and hosting the relay is insanely demanding).

> As for the identity management issue, they announced just last week that it's getting branched to an independent entity: https://docs.bsky.app/blog/plc-directory-org

This is another example of gaslighting from Bluesky that just makes me angry. How in the holiest of Hells does an "Identity directory controlled by a Swiss Association" make the whole thing better?

Sorry, not buying it. I don't have a horse in the race, but won't fall for the marketing.


I agree with the sentiment, and I wouldn't call Bluesky "open social" - I don't trust them either. But I still don't find these to be arguments against the protocol per se, which I find really interesting.

> Who holds the keys to the kingdom? In this respect, I think the AT Protocol fails spectacularly, mainly due to the lack of a credible strategy to implement really self-custodian identities

From what I've read, you can still own the entire stack from top to bottom; none of it is necessarily tied to Bluesky. Even the identity management being discussed only applies to Bluesky and whatever ecosystem subscribed to it; in theory, you could create your own social platform with a new one (you'd obviously lose that ecosystem). But then again, this would also apply to Mastodon, since whoever owns the instance could always nuke it, and if you own your own instance, you need to build a network that trusts you. There's always an authority involved.

> The difference being, last time I checked (and of course things might have changed in the meantime) with AT Protocol it was only possible to self-host part of the infrastructure (and hosting the relay is insanely demanding).

Well, it's definitely not the "50 TB" you mentioned; e.g., here is someone running a relay on a $34/month VPS who isn't going to accumulate more disk: https://whtwnd.com/bnewbold.net/3lo7a2a4qxg2l But its importance is overblown anyway; it's just a JSON transmitter for signed data. I think the PDS and identity management are the better concerns, and I hope there's a better way to decentralize those (if that makes sense).

EDIT: You're still correct that to fully spin up a new Bluesky on your own, you'd need an insane amount of storage to host all the data currently stored on Bluesky (especially the did:plc directory and PDSes). All good arguments against the company, but that's only because people are choosing to store their PDS repositories on Bluesky. You could just as well point your repo to your own server and use a different social platform. They could go under, and someone else could create a new AppView. I find that really cool; it still leaves the identity issue open.




I like to think I wrote a good analogy of what ChatControl/client-side scanning really is. They say "it's not a backdoor, it doesn't break E2E encryption", and they're right.

> It's like asking an alcoholic schizo with a history of corruption, who only speaks Russian, and whom you are forced by law to feed and host at your place at your own expense, to check your private letters before you're allowed to put them in an envelope.

https://gagliardoni.net/#20250916_clientside

https://infosec.exchange/@tomgag/115213723470901734


I didn't write this for the HN crowd, but here we go anyway: https://gagliardoni.net/#20250818_battle_of_socials

Happy to correct any factual inaccuracies.


I think your description of ATproto relays conflates the role of an AppView (or backend) in ATproto with that of a Nostr relay. Relays (by default) are not designed to be a permanent archive of content; they are really meant as content streams for backends to ingest and index appropriately. The storage cost is also overestimated, as people have begun to host third-party variants of the Bluesky AppView (which is partially open-source, due to its dependence on internal code for some functionality non-essential to microblogging): https://whtwnd.com/futur.blue/3ls7sbvpsqc2w

The note at the end about Bluesky being able to censor, verify and ban users from the protocol is also largely incorrect, with some asterisks, as is expected for a complex system. The Turkish accounts that were censored were hidden from the platform in Turkey via the app's labeler system, which allows for "composable moderation". You can use this system to implement geoblocking in Bluesky clients based on the user's IP address when the app is opened, which is what they did to ban those accounts from being seen in Turkey. The application of labelers (outside of Bluesky's main moderation service, which the Bluesky-hosted AppView follows) is client-side, and any client that doesn't want to respect the default geoblocking behaviour (or implement mod labels at all) can just ignore it.

The Politico columnist who was banned from Bluesky had their account taken down from the whole network because the account was hosted on a Bluesky PDS. This could be somewhat bypassed (again, because the default AppView follows a default labeler for displaying content through the AppView's API) by moving the account to another PDS not operated by Bluesky. If your account were banned from Bluesky while hosted on a non-Bluesky PDS, you would still have access to the ecosystem (and a half-working version of Bluesky that is basically a shadowban, due to the default client and AppView conflicting with the labeler's takedown action).

Speaking of PDSes, they also do quite a bit more than just store user data. As a user's identity depends on a PDS to exist as a proper account, most user actions have to be routed through it, to allow applications to store their data on-protocol and to authenticate the user.

The verification system is implemented through a record type (or "Lexicon") stored on an account, which basically confirms that the record owner has verified the target. The system is also odd in that there are two types of verified accounts: "trusted verifiers" (think Twitter's business verification system) and regular verified accounts. Trusted verifiers are chosen by the client and can verify their own set of accounts, giving them the regular checkmark. Clients that haven't implemented support for the checkmarks, or that allow users to choose their own trusted verifiers, can basically show whatever checkmarks they want, or just disable the system altogether (which is possible in the default client).

How Bluesky uses DIDs is... complicated. ATproto supports two DID methods for accounts: did:web and did:plc. Web DIDs are used mainly for services on the network, but can also be used for regular accounts. PLC is a more complicated system, which becomes quite obvious when you find out that the acronym originally meant "placeholder". PLC is (in regards to the general protocol) not a decentralized system, as its current iteration is a DID-document pastebin with authentication and version history. I do think the method's current centralized status can be mitigated somewhat (synchronization between various directories, then a consensus system for establishing the validity of the documents' current states), but the system could always be replaced at any point, either to incorporate new features or to choose a new model for how documents are publicized.
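For contrast, did:web is trivially decentralized because resolution is just a fixed mapping from the identifier to an HTTPS URL serving a DID document, per the did:web method spec. A sketch of that mapping (placeholder domains):

```python
# Sketch of did:web resolution: the identifier maps directly to the URL of a
# did.json document, so anyone who controls a domain controls the identity.
def did_web_to_url(did):
    """Map a did:web identifier to the URL of its DID document."""
    assert did.startswith("did:web:")
    parts = did[len("did:web:"):].split(":")
    host = parts[0].replace("%3A", ":")  # ports are percent-encoded in the DID
    if len(parts) == 1:
        # Bare domain: document lives at the well-known location.
        return f"https://{host}/.well-known/did.json"
    # Extra segments become path components.
    return "https://" + host + "/" + "/".join(parts[1:]) + "/did.json"

# did:web:example.com        -> https://example.com/.well-known/did.json
# did:web:example.com:alice  -> https://example.com/alice/did.json
```

did:plc has no such self-hosted anchor, which is exactly the centralization concern above.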

Sorry for the long read, but as you can see, I've sunk way too much time into reading developer posts and documentation, and had to unload it somehow.


Thank you for the detailed reply. Your points make sense, but many of them are, I think, too technical for the intended audience of my blog post, and they don't change my overall impression of Bluesky. I will see if I manage to incorporate some of your points in a more digestible way, but reading the blog post you linked (which I didn't know, thanks) confirms my fears: 18 TB and $200/month to run an instance that basically serves one user is... insane? And with a lot of features unsupported because of closed source. I knew about did:web and did:plc, and I agree that a future, better, fully decentralized implementation might be possible, but in its current state I don't think Bluesky stands up to its promises compared to, e.g., Mastodon.


You're welcome. I understand that a lot of what I've said is technical jargon and nonsense to the average *.bsky.social user, but a lot of it can simply be dumbed down to "the client can choose to ignore it" or "get off Bluesky servers, lol?".

At the risk of sounding like a shill, I would also say that the protocol is much less mature than ActivityPub or Nostr, but the rate of progress I've seen is pretty rapid (compared to APub at least; Nostr is also a rapidly developing protocol, but it's harder for me to track its progress, as there's no reliable source for protocol updates that is not on Nostr, afaik), and with the active developer community surrounding it I firmly believe that most of these issues will be solved within the next few years at worst. Zeppelin has also progressed on bringing back some of the missing features, as video processing and chat have been introduced to the AppView (albeit proxied through Bluesky's services, so it's a moot point).

There's an important distinction between AppViews and an APub instance: AppViews handle solely the application portion of the user experience, while APub instances typically manage the entirety of it. As a result, ATproto users can hop between AppViews without any lock-in to a specific provider, as their accounts aren't bound to an AppView's existence: anyone can switch from the Bluesky AppView to the Zeppelin AppView (or any other) with little difficulty. Users on the Fediverse cannot easily do the same; applications can authenticate with a Fediverse account to confirm their identity, but there are limits to what you can do, such as federating with the identity of that user. AppViews are also not designed to be closed/single-user instances, mainly because the PDS handles the role of user management and platforming users, and that's where most of that responsibility is placed.

In regards to active usage, enough moderation controversies have happened with Bluesky Social's policies that a small (at this moment) market has opened for a Bluesky with truly user-controllable moderation, and Zeppelin will be one of the main products serving that market. The costs also aren't that large compared to some of the larger Mastodon instances, so for the amount of content it's storing, it could be way worse (mstdn.social apparently saved 180 euros when moving to another server, but there are definitely other examples of Mastodon unnecessarily ballooning instance costs as it grows in scale, because it's bloatware compared to what's out there; mstdn.social is also a fraction of the activity size of the ATproto network's output in off-peak hours, so eh).

I will say that this isn't a core attribute of the Fediverse: the base protocol is only slightly less extensible and modifiable than Nostr, and projects like ActivityPods and "nomadic identities" (over a decade old!) exist which can perform a role similar to an ATproto PDS, but via the Solid protocol. They've seen little adoption due to the lack of focus on implementing "next-gen" features like these in the current set of APub server software.


I'm Italian. On my side, I did what I could do: I emailed Italian politicians explaining why they should reject the proposal. A drop in the ocean, and far from impactful, but if it can change the odds even by an epsilon, why not?

https://gagliardoni.net/#20250805_chatcontrol

Big politics is not my thing, so for me the big effort was: 1) understanding who, among the zillions of politicians we have, could have a direct role in the decision process, and how; 2) searching for and collecting the email addresses; and 3) funnily enough, picking the right honorifics (for example, I was not aware that "Onorevole" is reserved for only certain figures in Italian politics).

I shared the resulting effort on my website, in the hope of making life easier for fellow Italians who want to do the same.


Thank you for sharing this; it saved me quite some time, and I coincidentally found a great resource (your blog).


Interesting. Could something like this be done for Mastodon / ActivityPub?


People have been doing this with ActivityPub/Mastodon for years: https://carlschwan.eu/2020/12/29/adding-comments-to-your-sta...
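The linked approach boils down to making each blog post a reply target of a Mastodon status, then fetching its reply thread through Mastodon's public context endpoint at render time. A minimal sketch (the instance name and status ID below are placeholders, not real values):

```python
# Sketch of static-site comments via Mastodon: replies to a designated status
# become the post's comment section, fetched from the public context endpoint.
import json
import urllib.request

def context_url(instance, status_id):
    """Build the URL of Mastodon's public context endpoint for a status."""
    return f"https://{instance}/api/v1/statuses/{status_id}/context"

def fetch_comments(instance, status_id):
    """Return (author, HTML content) pairs for every reply in the thread."""
    with urllib.request.urlopen(context_url(instance, status_id)) as resp:
        context = json.load(resp)
    # `descendants` holds all replies below the status; `ancestors` the posts
    # above it. For blog comments, only the descendants matter.
    return [(s["account"]["acct"], s["content"]) for s in context["descendants"]]
```

Note that Mastodon returns `content` as HTML, so a real integration should sanitise it before embedding it in the page.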


That's cool, didn't know that!

