
It isn't common to encrypt or sign the full content using asymmetric encryption.

For encryption you usually pick a large random key for symmetric encryption, encrypt the body with a strong symmetric cipher using that key, and encrypt the symmetric key with your asymmetric method and the intended recipient's public key.

For signing, you use a suitably secure hash function to produce a digest of the content, then sign the digest using your private key. The choice of digest function is as important as the choice of signature scheme: an exploitable weakness that lets an attacker generate the same digest for different content means that the signature, while secure in itself, is worthless, because what has been signed no longer uniquely identifies the content.

tl;dr: So when they say “digest algorithm for repository signing” they don't mean that they are using the SHA256 result as a signature, but that they use SHA256 to generate the digest that is then signed.



Both ECDSA and EdDSA hash the message internally before signing. The only advantage of signing a pre-hash, other than convenience of computation (e.g. when a streaming implementation of the internal pre-hash is not available), would be to allow checking integrity without authenticity, which makes little sense.


Perhaps they are using the hash output for quick corruption tests as well as signing for authenticity?

And if they are computing a hash for that anyway, the computational saving from signing the hash rather than the whole content might be worthwhile?


Adding a pre-hash is the overhead here, because of the internal hash.

Granted, verifying only that hashes match for integrity would be cheaper than checking the signature. But since you need to do that anyway to authenticate the payload, why the intermediate step?


> Adding a pre-hash is the overhead here, because of the internal hash.

If it has no other purpose, then yes, but it is only a small amount of overhead when the content is large, because the internal hash then runs over a much smaller input. You have:

hash(<thousands/millions/more>bits) then hash(<hundreds>bits) & sign(<hundreds>bits)

instead of

hash(<thousands/millions/more>bits) & sign(<hundreds>bits)

The bit you save by skipping the pre-hash, the hash(<hundreds>bits), is only significant if the input content is itself only hundreds of bits and you are performing the operation many times in quick succession.

> since you need to do that anyway for authenticating the payload

That is true from the client's PoV. Server-side, you might want to integrity-check your data as part of regular proactive maintenance.

Of course I'm assuming here that the digest is stored so it can be used in this way – if not then I agree it is just overhead.



