That would be limited to West Oakland, which has always been extremely politically insular (tied up in racism and racial politics from the last century). The East Bay at large, especially the suburbs, is quite welcoming to tech and success.
The wife was mentioned because she was called out in the earlier article as a problem. We don't know if she was actually a problem or not. We, and specifically you, don't know enough to make the claims you're making.
Please, stop speculating. Have patience, wait for actual information.
There's nothing in the Wikipedia article about the USA's ability to track planes. We did have a radar station in Alaska, which may be what you're thinking of? According to Wikipedia it didn't produce useful information, and wasn't necessarily tracking the plane at all points anyway: http://en.wikipedia.org/wiki/Korean_Air_Lines_Flight_007#U.S...
I don't think we've ever had the ability to track all planes in the world.
It'll take me ages to go through all the episodes of Mayday, but I'm certain that in one of them the US released highly classified radar data to prevent a wider political catastrophe, and that the narrator described the source of the data as a new type of radar capable of tracking every plane in the world.
The series is exceptionally well researched and presented.
These aren't myths, they're platform guarantees. It just so happens that a few of the most common unixes (Linux, BSD) implement a very good /dev/urandom and the author is suggesting that we write non-portable software that depends on implementation details of these platforms.
There can be benefits from depending on non-portable implementation details but also significant drawbacks.
There's no standard for this - the Linux folks came up with it and implemented it first, and other platforms have taken that as a compatibility target. So "non-portable implementation details" are all there is, and they work fine for a CSPRNG.
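In practice the "non-portable detail" is already wrapped by most standard libraries; a minimal Python sketch (function name is mine, not from the thread) of leaning on the platform CSPRNG without opening the device file yourself:

```python
import os

def csprng_bytes(n: int) -> bytes:
    """Return n bytes from the platform CSPRNG.

    os.urandom() reads /dev/urandom on Linux/BSD (using getrandom(2)
    where available), so the platform-specific behavior discussed
    above is hidden behind one portable call.
    """
    return os.urandom(n)

key = csprng_bytes(32)  # e.g. a 256-bit symmetric key
```

The point is that "depend on /dev/urandom" usually means "use your language's wrapper for it", which keeps the source code itself portable even though the guarantee comes from the platform.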
> Applications on Linux should use urandom to the exclusion of any other CSPRNG
Applications, yes. Appliances built on it - now that's more open to interpretation.
Back in 2003/2004 I was building a centrally managed security appliance system. At the time I made the hard choice that first-boot operations (such as generating long-term device keys) MUST use /dev/random. It made the initial installs take longer, but I refused to take the chance that an attacker could install and instrument a few hundred nodes and find out possible problems with entropy sources.
Once the first-boot sequence was over, applications used /dev/urandom for everything. This included the ipsec daemons. Forcing everything to /dev/random during first boot made sure that on subsequent boots there would be (for all practical purposes) enough entropy available for urandom to work securely.
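The policy described above - blocking pool for first-boot long-term keys, /dev/urandom for everything afterwards - could be sketched like this (the `first_boot` flag and function names are illustrative, not from the original system):

```python
def entropy_source(first_boot: bool) -> str:
    # During first boot, insist on the blocking pool so long-term
    # device keys are never generated from a possibly-unseeded pool.
    # On every later boot, /dev/urandom is fine for everything.
    return "/dev/random" if first_boot else "/dev/urandom"

def random_bytes(n: int, first_boot: bool = False) -> bytes:
    with open(entropy_source(first_boot), "rb") as f:
        return f.read(n)
```

During first boot this can block for a long time on a quiet box, which matches the "initial installs take longer" trade-off described above.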
The first-boot problems were amplified by the fact that we were running our nodes inside virtualization. (At the time: UML, and we built our own on top of it. Xen wasn't nearly ready enough back then.)
It's fascinating to see that the problems we had to deal with 10 years ago are now becoming an issue again. To this day I choose to use /dev/random if I need to generate key material shortly after boot (which could be at install time), or for my own long-term use. Good thing personal GPG keys have a shelf-life of several years...
If you're building an appliance, why wouldn't you simply ensure urandom is seeded at first boot?
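A hedged sketch of "ensure urandom is seeded at first boot": persist a seed file at image-build or shutdown time and mix it back into the kernel pool early in boot. This is what standard random-seed init scripts (e.g. systemd-random-seed) do; the `SEED_FILE` path and function names here are assumptions for illustration:

```python
import os

SEED_FILE = "/var/lib/random-seed"  # assumed location

def load_seed(seed_file: str = SEED_FILE) -> None:
    # Writing to /dev/urandom mixes the bytes into the kernel pool.
    # (It does not credit entropy estimates; that needs an ioctl.)
    with open(seed_file, "rb") as src, open("/dev/urandom", "wb") as dst:
        dst.write(src.read())

def save_seed(seed_file: str = SEED_FILE, n: int = 512) -> None:
    # Persist fresh bytes for the next boot.
    with open(seed_file, "wb") as dst:
        dst.write(os.urandom(n))
```

For an appliance, the seed written at factory-provisioning time covers the very first boot, which is exactly the window the grandparent's /dev/random policy was defending against.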
I'm sympathetic to people's concerns about generating long-term keys. But my problem is, /dev/random isn't addressing the major risks there either. You should generate long-term keys on entirely separate hardware.
Lordy. Hyperbole much? There are cases where I am sure having access to a blocking source of entropy is interesting. Perhaps it has nothing to do with crypto. Maybe it's mathematical or scientific in nature. Who knows. But it's good to give developers an option and not cut off access to useful tools because you think they can't handle it.
Would you care to lay out a scenario in which a scientific application might care about the decision that the Linux kernel RNG makes about entropy estimation? Please make sure your answer takes into account how the Linux entropy estimator actually works.
"Except that we don't see CPUs being astonishingly creative in coming up with reasons why they shouldn't have to follow a clear, unambiguous program instruction."