
I wonder if they're thinking of integrating this more deeply into the age binary. Currently the invocation looks extremely ugly:

    age -r $(go run filippo.io/torchwood/cmd/age-keylookup@main joe@example.com)

I assume once it's stabilized you'd swap the `go run` for just installing and using a binary, similar to what you're already doing with age.

Honestly not sure why I didn't do that once the tool had stabilized.

Switched to

    go install filippo.io/torchwood/cmd/age-keylookup@main
    age -r $(age-keylookup alice@example.com)
age is designed to be composable and very stable, and this shell combination works well enough, so it's unlikely we'll build it straight into age(1).

Off-topic, but I really appreciate Go, so I'm always on the lookout for modern alternatives, and I found age to be brilliant for what it's worth.

But I was discussing it with some techies once, and someone mentioned that it had less entropy (I think they said 256 bits), whereas they wanted 512 bits of entropy, which PGP supported.

I could be wrong about what exactly they said since it was a long time ago, so pardon me if that's the case, but are there any "issues" in age that you know about?

Another thing regarding the transparency servers: what actually happens if the servers go down? Do you have any thoughts on fediverse-like capabilities, perhaps? And are there any issues/limitations of the transparency keyserver that you'd like to discuss?

Also, your work on age has been phenomenal, so thank you for creating a tool like age!


> But I was discussing it with some techies once, and someone mentioned that it had less entropy (I think they said 256 bits), whereas they wanted 512 bits of entropy, which PGP supported.

> I could be wrong about what exactly they said since it was a long time ago, so pardon me if that's the case, but are there any "issues" in age that you know about?

Entropy bikeshedding is very popular with PGP/GnuPG enthusiasts, but it's silly.

age uses X25519, HKDF-SHA256, ChaCha20, and Poly1305. Soon it will also use ML-KEM-768 (post-quantum crypto!). This is all very secure crypto. If a quantum computer turns out to be infeasible to build on Earth, I predict none of these algorithms will be broken in our lifetime.
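
For reference, all of this sits behind a small API; the native X25519 flow via the Go module looks roughly like this (a minimal sketch; in practice the recipient would come from a key file or age-keylookup rather than being generated on the spot):

    // a minimal sketch using the filippo.io/age Go module: generate an
    // X25519 identity and encrypt to it
    package main

    import (
        "fmt"
        "os"

        "filippo.io/age"
    )

    func main() {
        identity, err := age.GenerateX25519Identity()
        if err != nil {
            panic(err)
        }
        fmt.Println("recipient:", identity.Recipient()) // the age1... public key

        out, err := os.Create("hello.age")
        if err != nil {
            panic(err)
        }
        defer out.Close()

        // X25519 key agreement, HKDF-SHA256, and ChaCha20-Poly1305 under the hood
        w, err := age.Encrypt(out, identity.Recipient())
        if err != nil {
            panic(err)
        }
        if _, err := w.Write([]byte("hello world\n")); err != nil {
            panic(err)
        }
        if err := w.Close(); err != nil {
            panic(err)
        }
    }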

PGP supports RSA. That's enough reason to avoid it.

https://blog.trailofbits.com/2019/07/08/fuck-rsa/

If you want more reasons:

https://www.latacora.com/blog/2019/07/16/the-pgp-problem/


> PGP supports RSA. That's enough reason to avoid it.

I hate to break the narrative but age also supports RSA, for SSH compat:

https://man.archlinux.org/man/age.1#SSH_keys


That's only because SSH supports RSA. Mainstream usage of age with age public keys only supports X25519.

Eh. You don't really get to do this sleight of hand. If you're gonna rag on RSA support as a shibboleth for bad design, it's bad for GPG and bad for age. If it's direct evidence of bad design, age shouldn't have permitted it via their SSH key support.

I agree in principle, but I'm not looking at "what SSH dragged in". I'm looking at age as a pure isolated thing, according to the spec: https://github.com/C2SP/C2SP/blob/main/age.md

This transparency keyserver actually gives us an excellent opportunity to measure how many people use Curve25519 vs RSA, even with SSH support.

We should contrast this with actively valid public keys on a PGP keyserver in 2026 and see which uses modern crypto more. The results probably won't be surprising ;)


Those goalposts are really agile.

We've moved from "PGP supports RSA. That's enough reason to avoid it." to "We should contrast this with actively valid public keys on a PGP keyserver in 2026 and see which uses modern crypto more".


> My biggest hurdle was getting it to export to a nice looking PDF that could be emailed or printed later.

If you can export to structured data such as JSON, I guess Typst would be a perfect fit for that job.
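
Something along these lines (a minimal sketch; report.json and its fields are made up):

    // load the exported JSON and lay it out; "customer" and "items" are
    // hypothetical field names
    #let data = json("report.json")

    = Invoice for #data.customer

    #for item in data.items [
      - #item.name: #item.price
    ]

Then `typst compile invoice.typ` produces the PDF.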


Exactly!

Bearer tokens should be replaced with schemes based on signing, and the private keys should never be directly exposed (if they are, there's no difference between them and a bearer token). Signing agents do just that. GitHub's API is based on HTTP, but mutual TLS authentication with a signing agent should be sufficient.


FWIW it's possible to run README examples automatically as part of the tests: https://github.com/parallaxsecond/rust-cryptoki/blob/main/cr...
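
One common way to do it (a minimal sketch; the linked crate may wire it up differently) is to attach the README as a doc comment, so `cargo test` runs its fenced Rust code blocks as doctests:

    // in lib.rs: the README's Rust code blocks become doctests for this
    // dummy item, and only when running doctests
    #[doc = include_str!("../README.md")]
    #[cfg(doctest)]
    pub struct ReadmeDoctests;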


You don't need any third-party modules and can proxy based on ALPN (https://wiki.xmpp.org/web/Tech_pages/XEP-0368#nginx), thus running everything on port 443. Note that ALPN is not encrypted AFAIK, but public wifi services don't care.
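
Roughly like this with the built-in stream and ssl_preread modules (a sketch; the backend ports are assumptions, and the actual HTTPS server has to move off 443, e.g. to 8443):

    stream {
        # pick a backend based on the ALPN protocols offered in the ClientHello
        map $ssl_preread_alpn_protocols $backend {
            ~\bxmpp-client\b  127.0.0.1:5223;  # XMPP over direct TLS
            default           127.0.0.1:8443;  # regular HTTPS server
        }

        server {
            listen 443;
            ssl_preread on;        # peek at the ClientHello without terminating TLS
            proxy_pass $backend;
        }
    }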


It's hard to answer your question without repeating the arguments made in the post itself.

Are you implying that djb blew the matter out of proportion?


Poor quality analogy: should ed25519 only have been incorporated into protocols in conjunction with another cryptographic primitive? Surely requiring a hybrid with ecdsa would be more secure? Why did djb not argue for everyone using ed25519 to use a hybrid? Was he trying to reduce security?

The reason this is a poor quality analogy is that fundamentally ecdsa and ed25519 are sufficiently similar that people had a high degree of confidence that there was no fundamental weakness in ed25519, and so it's fine - whereas for PQC the newer algorithms are meaningfully mathematically distinct, and the fact that SIKE turned out to be broken is evidence that we may not have enough experience and tooling to be confident that any of them are sufficiently secure in themselves and so a protocol using PQC should use a hybrid algorithm with something we have more confidence in. And the counter to that is that SIKE was meaningfully different in terms of what it is and does and cryptographers apparently have much more confidence in the security of Kyber, and hybrid algorithms are going to be more complicated to implement correctly, have worse performance, and so on.

And the short answer seems to be that a lot of experts, including several I know well and would absolutely attest are not under the control of the NSA, seem to feel that the security benefits of a hybrid approach don't justify the drawbacks. This is a decision where entirely reasonable people could disagree, and there are people other than djb who do disagree with it. But only djb has engaged in a campaign of insinuating that the NSA has been controlling the process with the goal of undermining security.


> seem to feel that the security benefits of a hybrid approach don't justify the drawbacks.

The problem with this statement, to me, is that we know at least one of the four finalists in the post-quantum cryptography competition is broken, so it's very hard to assign a high probability that the rest of the algorithms will stay secure through another decade of advances (this is not helped by the fact that, since the beginning of the contest, the lattice-based methods have lost a significant number of bits of security as better attacks have been discovered).


I've used dynamic pipelines. They work quite well, with two caveats: your build process is now two-step and slower, and there are implementation bugs on GitLab's side: https://gitlab.com/groups/gitlab-org/-/epics/8205
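
For reference, a dynamic child pipeline looks roughly like this (a sketch; generate-ci.sh is a hypothetical script that emits YAML):

    # .gitlab-ci.yml: one job generates the pipeline, another triggers it
    generate-pipeline:
      stage: build
      script:
        - ./generate-ci.sh > child-pipeline.yml
      artifacts:
        paths:
          - child-pipeline.yml

    run-child-pipeline:
      stage: test
      trigger:
        include:
          - artifact: child-pipeline.yml
            job: generate-pipeline
        strategy: depend   # mirror the child pipeline's status in the parent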

FWIW GitHub also allows creating CI definitions dynamically.


Interesting. I've never seen the import-with syntax, though, and it's hard to find any documentation on it. Is this a syntax extension?


It’s been introduced as part of ECMAScript 2026: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
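
For example (the module path is just a placeholder):

    // import attributes: load JSON as a module instead of fetching and parsing it
    import config from "./config.json" with { type: "json" };

    console.log(config.version);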


It first started as an `assert` clause (import assertions)[0], for those who may have seen that; these `type` attributes are an evolution of that proposal.

I do wonder if this makes importable GETs (via `type: json`) a reality, like `assert` was going to.

[0]: https://v8.dev/features/import-assertions


> I do wonder if this makes importable GETs (via `type: json`) a reality, like `assert` was going to.

Yes, the JSON modules proposal is finished.

https://github.com/tc39/proposal-json-modules

https://caniuse.com/mdn-javascript_statements_import_import_...


An entire class of fetch requests will go away with importable GETs. I am excited for this.


In node you could always require("food.json")


Not what I am talking about though.

I’m talking about how, in place of a fetch call, you could simply import a JSON response from an endpoint, thereby bypassing the need to call fetch; you'd get the response as if it were imported.

It certainly won’t replace all GET calls, but I can think of quite a few first-load ones that can simply be import statements once this happens.
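
Something like this (a sketch; the endpoint path is hypothetical):

    // the browser still performs the network request, it just happens as part
    // of module loading instead of an explicit fetch() call
    import settings from "/api/settings.json" with { type: "json" };
    // `settings` is already the parsed response body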


Ohh right. That makes sense.


Sadly, Rust proc macros operate on tokens and any serious macro implementation needs third-party crates.

Compile-time reflection with a good, built-in API, akin to C#'s Roslyn, would be a real boon.
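
For illustration, here's roughly what a derive macro looks like (a minimal sketch, in a crate with `proc-macro = true`): the compiler only hands you a raw TokenStream, and parsing it into something usable is almost always delegated to the third-party syn and quote crates.

    use proc_macro::TokenStream;
    use quote::quote;
    use syn::{parse_macro_input, DeriveInput};

    #[proc_macro_derive(Hello)]
    pub fn derive_hello(input: TokenStream) -> TokenStream {
        // syn turns the raw tokens into an AST we can inspect
        let ast = parse_macro_input!(input as DeriveInput);
        let name = &ast.ident;
        // quote turns Rust-looking templates back into tokens
        quote! {
            impl #name {
                pub fn hello() -> &'static str { stringify!(#name) }
            }
        }
        .into()
    }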


Any serious anything needs third-party crates. Coming from C++, this has been the most uncomfortable aspect of Rust for me, but I am acclimating.


"comment" may be relevant to the object. Maybe using "_" for the whole object comment would be safer?


It would also be consistent (everything beginning with "_" is a comment).
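
Something like this (a made-up example):

    {
        "_": "comment about the whole object",
        "timeout": 30,
        "_timeout": "comment about the timeout field"
    }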

