Why not do 120 then? We can show that the chance of false negative of 5 rounds is cryptographically negligible, so 5, 60, and 120 are all the same. If the only argument for 60 is that it's more and this is a really important rare operation, doesn't it also apply to 120?
I'm not trying to be glib, there is no rational stopping point if we reject the objective threshold.
I don't pretend to understand all the math involved, but what I'm trying to say is that, as far as I understand, T rounds of Miller-Rabin give at most a 4^-T probability of accepting a "bad" prime (really a composite) in the worst case. Doing ~5 rounds has been shown to be enough to choose a good prime candidate under most (very probable!) conditions when the candidate is chosen at random, and thus the argument is that ~5 rounds is fine. Do we agree so far?
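For anyone following along, here is a minimal Python sketch of the test we're discussing; `rounds` corresponds to T above, and the function name and structure are just illustrative, not any particular library's implementation:

```python
import random

def miller_rabin(n: int, rounds: int) -> bool:
    """Probabilistic primality test. A composite n survives each round
    with probability at most 1/4 in the worst case, so `rounds`
    iterations give a false-positive bound of 4**-rounds."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True  # probably prime
```

The 4^-T figure is the worst-case bound over all odd composites; the point of contention is that for randomly chosen candidates the actual error is far smaller.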
I'm just asking: why not run the conservative 60-round test, rather than ~5, when you're doing a very rare, one-time key generation? I understand that it's very unlikely to reject any numbers, but at least BSI thinks it's worth it for important keys.
If I understand the recommendation right, you wouldn't do 60 rounds for a 2048-bit key and then 120 for 4096; rather, 61 rounds would be enough for 4096 if 120 is all right for 2048.
You're asking why not apply the formula for adversarially selected candidates even when we are randomly selecting candidates. There is simply no reason to, except "maybe we made a mistake" — but then why would we not also suspect a mistake in calculating the 1/4 value, or in any other part of the code?
Phrased another way: do you have an argument for running the conservative 60-round test, instead of asking for an argument for why not run it?
Again, you are "very unlikely" to win the Powerball jackpot. Rounds 6-60 have a cryptographically negligible chance of rejecting a composite, which is a different scale of unlikely; otherwise we'd also have to worry about the "very unlikely" chance of an attacker guessing an AES-128 key on the first try.
(I don't follow you on the key sizes, if you apply the 1/4 probability, the candidate size is irrelevant.)
Thanks, I understand what you mean. I probably botched the 1/4 probability thing: I was thinking 4^-60 gives 2^-120 bits of assurance (roughly, a security margin in my mind), and that an extra round would quadruple it, but I realize it doesn't work that way.
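To make the arithmetic concrete, here's an exact-arithmetic check of where my reasoning went wrong: 4^-T equals 2^-2T, so one extra round only adds 2 bits of assurance, while doubling the bits (2^-120 to 2^-240) requires doubling the rounds (60 to 120), not adding one:

```python
from fractions import Fraction

# 4^-T = 2^-2T exactly, so each round contributes 2 bits of assurance.
assert Fraction(1, 4) ** 60 == Fraction(1, 2) ** 120
assert Fraction(1, 4) ** 61 == Fraction(1, 2) ** 122   # +1 round = +2 bits
assert Fraction(1, 4) ** 120 == Fraction(1, 2) ** 240  # x2 rounds = x2 bits
```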
As I understand it, the number of rounds needed (for a given acceptable failure probability) goes down as the candidate gets larger. Note (very importantly) that this assumes you are testing a RANDOM integer for primality. If you are given an integer from a potentially malicious source, you need to do the full number of rounds for the given security level.
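The random-candidate result being alluded to is, I believe, the Damgård–Landrock–Pomerance bound (stated as Fact 4.48 in the Handbook of Applied Cryptography). A sketch that evaluates it, with the caveat that the exact constants here are my transcription of that bound and should be double-checked against the source:

```python
import math

def dlp_bound_bits(k: int, t: int) -> float:
    """Bits of assurance (-log2 of the error bound) when t rounds of
    Miller-Rabin accept a RANDOM k-bit odd integer, per the
    Damgard-Landrock-Pomerance bound p(k,t) < k^1.5 * 2^t * t^-0.5
    * 4^(2 - sqrt(t*k)). Constants transcribed from HAC Fact 4.48."""
    log2_p = (1.5 * math.log2(k) + t - 0.5 * math.log2(t)
              + 2 * (2 - math.sqrt(t * k)))
    return -log2_p
```

For a fixed t, the bound improves rapidly with k, which is why a handful of rounds on a random 2048-bit candidate already pushes the failure probability far below anything an attacker could exploit — and why none of this applies to adversarially supplied integers.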