
Heh, I remember running into something similar professionally, using the .NET Framework to generate random numbers: when asked to generate random integers, it has a seemingly innocent conversion from integers to floating point and back that ends up causing significant bias on large ranges. Specifically, `Random.Next(b)` produces an integer between 0 and b − 1 by the following steps (sketched in code below the list):

1. taking a random integer between 0 and 2^31 − 2,

2. dividing that integer by 2^31 − 1 to get a double-precision floating-point number between 0 and 1,

3. multiplying this number by b to get a number between 0 and b,

4. taking the integer part of this new number to get an integer between 0 and b − 1.
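
In code, the round trip looks something like this (a minimal Python sketch of the steps above, not the actual .NET source; `next_int` is a made-up stand-in for `Random.Next`, and step 1 is modeled with a uniform draw rather than .NET's internal generator):

```python
import random

M = 2**31 - 1  # the generator's modulus, 2^31 - 1

def next_int(b: int) -> int:
    k = random.randrange(M)  # step 1: integer in [0, 2^31 - 2]
    x = k / M                # step 2: double in [0, 1)
    y = x * b                # step 3: double in [0, b)
    return int(y)            # step 4: truncate to an integer in [0, b - 1]
```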

Do this for b = 2^31 − 1, and 50.34% of outputs are odd numbers, where you'd expect (almost) as many evens as odds.
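
You can check that figure yourself: sampling a million draws gets close to it, and enumerating all 2^31 − 1 inputs reproduces it exactly (self-contained sketch, same round trip as above):

```python
import random

M = 2**31 - 1

# Monte Carlo check of the parity bias for b = 2^31 - 1, mimicking the
# int -> double -> int round trip: about 0.5034 of outputs come out odd.
trials = 1_000_000
odd = sum(int(random.randrange(M) / M * M) % 2 for _ in range(trials))
print(odd / trials)  # roughly 0.5034, not the expected ~0.5
```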

Ended up doing a bit of a write-up here: https://fuglede.dk/en/blog/bias-in-net-rng/

Another fun thing about their RNG: it was apparently supposed to follow Knuth's subtractive PRNG (which comes with provable guarantees on its period), where, given the n − 1 previously generated numbers, you generate the n'th by subtracting the (n − 24)'th from the (n − 55)'th modulo some specific number. Back when I looked, Microsoft's implementation for some reason used 34 instead of 24, probably just a typo, with the unfortunate side effect that Knuth's theoretical guarantees go out the window.
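
For comparison, here's what the generator looks like with the lags as Knuth specifies them (a sketch with simplified seeding, an assumption for illustration only; Knuth prescribes a specific initialization, and this is not the .NET code):

```python
M = 2**31 - 1

def knuth_subtractive(seed: int):
    # Circular buffer of the last 55 values. This seeding is a
    # simplification for illustration, not Knuth's initialization.
    state = [(seed * (i + 1) + 1) % M for i in range(55)]
    i = 0
    while True:
        # x_n = (x_{n-55} - x_{n-24}) mod M. state[i] holds x_{n-55};
        # x_{n-24} sits 31 slots ahead in the circular buffer.
        # (.NET's bug is equivalent to using lag 34 here instead of 24.)
        state[i] = (state[i] - state[(i + 31) % 55]) % M
        yield state[i]
        i = (i + 1) % 55

gen = knuth_subtractive(2024)
print([next(gen) for _ in range(3)])
```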


