
To put some numbers on it, assume a civilization of 1 trillion people, with each person using 1 000 things that need a large prime, and each of those things generating a new prime 1 000 times a second.

The most conservative bound on the chance that a single random Miller-Rabin test will say that a composite number is prime is 1/4.
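
For concreteness, a single test looks something like this (a rough Python sketch; the function names are mine, not from any particular library):

  import random

  def miller_rabin_round(n, a):
      # Write n - 1 as d * 2^s with d odd.
      d, s = n - 1, 0
      while d % 2 == 0:
          d //= 2
          s += 1
      # n passes the round if a^d = 1, or a^(d * 2^i) = -1 for some i < s (mod n).
      x = pow(a, d, n)
      if x == 1 or x == n - 1:
          return True
      for _ in range(s - 1):
          x = pow(x, 2, n)
          if x == n - 1:
              return True
      return False

  def is_probable_prime(n, t):
      # t independent rounds with random bases; a composite n survives each
      # round with probability at most 1/4, so at most 4^-t overall.
      if n < 4:
          return n in (2, 3)
      if n % 2 == 0:
          return False
      return all(miller_rabin_round(n, random.randrange(2, n - 1))
                 for _ in range(t))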

Using that bound, if that civilization ran 48 random Miller-Rabin tests on each prime it generated, it would get a composite falsely identified as a prime about once every 2 500 years.

If they used 64 tests they would have one false positive about every 11 trillion years. That's nearly 800 times the current age of the universe.

Now assume that every star in the universe has a similar civilization. If everyone uses 96 Miller-Rabin tests there will be a false positive about once per 24 billion years.
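
If you want to check that arithmetic, here's the back-of-the-envelope calculation as a small Python sketch (the generation rate is the one assumed above; the helper name is mine):

  SECONDS_PER_YEAR = 365.25 * 24 * 3600        # about 3.16e7

  def years_between_false_positives(tests, primes_per_second):
      # With the conservative bound, each generated prime is composite with
      # probability at most 4^-tests, so on average at least 4^tests primes
      # are generated per false positive.
      return 4 ** tests / primes_per_second / SECONDS_PER_YEAR

  # 10^12 people * 1 000 things * 1 000 primes per second = 10^18 primes/s.
  rate = 1e12 * 1e3 * 1e3

  print(years_between_false_positives(48, rate))   # ~2.5e3 years
  print(years_between_false_positives(64, rate))   # ~1.1e13 years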

As I said, that is using a false positive rate of 1/4 per test, which is very conservative. There's a paper [1], "Average Case Error Estimates for the Strong Probable Prime Test" by Damgård, Landrock, and Pomerance, that gives several much tighter bounds.

Their bounds are in terms of k, the number of bits of the odd number being tested, and t, the number of random Miller-Rabin tests. They give four bounds, valid for various ranges of k and t:

  (* Valid for t = 1, k >= 2 *)
  k^2 4^(2-Sqrt[k])

  (* Valid if t = 2, k >= 88 or if 3 <= t <= k/9, k >= 21 *)
  k^(3/2) 2^t t^(-1/2)  4^(2-Sqrt[t k])

  (* Valid for t >= k/9, k >= 21 *)
  7/20 k 2^(-5t) + 1/7 k^(15/4) 2^(-k/2-2t) + 12 k 2^(-k/4-3t)

  (* Valid for t >= k/4, k >= 21 *)
  1/7 k^(15/4) 2^(-k/2-2t)
The second one is the most relevant in most situations where you want a large prime and want to do as few tests as possible to get the false positive probability below your acceptable threshold.
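
That second bound is easy to evaluate directly. A quick Python sketch (dlp_bound_2 is just my name for it):

  from math import sqrt

  def dlp_bound_2(k, t):
      # Damgard-Landrock-Pomerance estimate of the probability that a random
      # odd k-bit number surviving t Miller-Rabin rounds is composite.
      # Valid for t = 2, k >= 88, or 3 <= t <= k/9, k >= 21.
      return k ** 1.5 * 2 ** t * t ** -0.5 * 4 ** (2 - sqrt(t * k))

  print(dlp_bound_2(1024, 5))   # ~6e-37, far below the worst-case 4^-5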

Using the hypothetical trillion-being civilization with each being needing a million primes a second, here's the expected number of years between false positives for that civilization using the second bound above, for k = 64 and t = 2 through 5:

  t = 2: 3.6 x 10^26
  t = 3: 7.6 x 10^27
  t = 4: 8.5 x 10^28
  t = 5: 6.5 x 10^29
The bound gets lower as k increases. If they needed 1024 bit primes that table becomes:

  t = 2: 1.5 x 10^45
  t = 3: 1.3 x 10^51
  t = 4: 1.1 x 10^56
  t = 5: 2.1 x 10^60
[1] https://math.dartmouth.edu/~carlp/PDF/paper88.pdf


Oh, no disagreement. "Practical" really means "practical for math things where you want to guarantee correctness, not just have extremely high confidence". I can't think of any normal use of primes that needs something past what you can do with a bunch of M-R tests.

But we _do_ use a lot of computational power to verify particularly interesting primes when we find them, so I maintain that it'd be a nice thing to have in our pocket. You never know when you'll find yourself on a deserted island with a powerful computer and need to deterministically prove the primality of a large number. ;)


Of course, if you end up with an incorrect result due to assuming the Riemann hypothesis, you'll get a Fields Medal anyway...


Note: the first section, using the conservative estimate, is correct. The second, using the tighter bounds, is not. I multiplied where I should have divided, leading to an error of a factor of roughly 10^50 in both tables. E.g., for generating 1024 bit primes with 5 tests, the trillion-person civilization generating a million primes a second per person would see one error around every 5 x 10^10 years, not one every 10^60 years.

For 64 bit primes they would need to do around 48 tests per prime to get down to a similar error rate.
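
To spell out the corrected calculation: the expected number of primes generated per error is the reciprocal of the bound, and you divide that by the generation rate. A small self-contained Python sketch (names are mine):

  from math import sqrt

  SECONDS_PER_YEAR = 365.25 * 24 * 3600
  PRIMES_PER_SECOND = 1e18    # 10^12 beings * 10^6 primes per second each

  def years_between_errors(k, t):
      # Second Damgard-Landrock-Pomerance bound on the per-prime error.
      p = k ** 1.5 * 2 ** t * t ** -0.5 * 4 ** (2 - sqrt(t * k))
      # Dividing, not multiplying: about 1/p primes are generated per error.
      return 1 / p / PRIMES_PER_SECOND / SECONDS_PER_YEAR

  print(years_between_errors(1024, 5))   # ~5e10 years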





