From the article: "And while most types of software get more user-friendly over time, user-friendly cryptography seems to be intrinsically difficult. Experts are not much closer to solving the problem today than they were two decades ago."
I'm not sure I agree that user-friendly cryptography is "intrinsically difficult." It doesn't seem like it would be hard for email clients and even the Gmail frontend to pop up a message saying, "Your email is insecure. To let people send you private messages securely, set up your 'public key' now. It's easy." Then a short wizard would walk users through the process and automatically append the public key to all outgoing messages.
On the other side, if you were going to send a message to a friend, the email client would check if that person has published a public key and then ask, "The recipient allows secure messages. Would you like us to send this message securely?"
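The flow described above can be sketched in a few lines. This is a toy illustration using textbook RSA with tiny primes — a real client would use a vetted implementation (e.g. OpenPGP via GnuPG), and the `X-Public-Key` header name is made up for the example:

```python
# Toy sketch of the proposed flow: wizard generates a keypair, the public
# half rides along on outgoing mail, and senders' clients encrypt to it.
# Textbook RSA with tiny primes -- for illustration only, never for real use.

def egcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

# Step 1: the setup wizard generates the user's keypair.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = modinv(e, (p - 1) * (q - 1))   # private exponent, kept client-side

# Step 2: the public key is automatically appended to outgoing messages.
outgoing_headers = {"X-Public-Key": f"{n}:{e}"}

# Step 3: a friend's client notices the header and offers to send securely.
mod, exp = (int(x) for x in outgoing_headers["X-Public-Key"].split(":"))
plaintext = 42                     # stand-in for a symmetric session key
ciphertext = pow(plaintext, exp, mod)

# Step 4: only the holder of the private key can recover the message.
recovered = pow(ciphertext, d, mod)
assert recovered == plaintext
```

The point of the sketch is that nothing in the handshake itself is hard for the user: both the "publish" and the "encrypt to recipient" steps can be fully automated by the client.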
Google and Microsoft and other large companies are no strangers to implementing a feature and using their size and clout to quickly make it a de facto standard. The real reason we don't have easy end-user cryptography is that these companies would lose the ability to mine your data and build new services on top of it (and the article mentions this too).
"The real reason we don't have easy end-user cryptography is that these companies would lose access to mine your data"
Jeremy Kun recently wrote a good article summarizing some recent advances in encryption that make your statement somewhat less than entirely accurate (scan for "differential privacy"):
And where is the private key stored? On Google or Microsoft's server? What then would be the point? (I assume you'll answer that it'll be done client-side, but JavaScript cryptography is a whole mess of fail. But that's a separate issue.)
And if it is stored client-side, what happens when the user inevitably loses their key? You and I might have backups in multiple places, and on an encrypted USB stick in a bank vault, but my dad doesn't, and the next time he spills wine on his laptop, there goes literally all of his e-mail.
Issue the user two smartcards, one for daily use, one that can be used to create a new daily use smartcard. Tell the user to keep the backup smartcard in a safe place.
Yes, someone will inevitably lose both. You just need to ensure that that is a rare event, and that there are alternative systems in place (i.e. that losing access to one system does not prevent people from living their lives).
I'm familiar. There's a big difference between "optional key escrow with a service I have chosen to trust" and "mandatory key escrow" though. Most importantly with regard to the ease of mass surveillance.
This is the real reason why cryptography hasn't caught on. It's opt-in by nature: no matter how hard you try, you can't send someone an encrypted message if they don't have a public key for you to use.
Actually, yes you can. Check out identity-based encryption and Voltage Security. It's currently in use by Wells Fargo, ADP, and other large enterprise customers.
The catch there is that IBE requires a centralized, trusted key-issuing service where you need to enroll to receive your message. If that's compromised, then game over.
Of course, you would need to be judicious about which group of key issuers you are willing to trust, but this method will at least reduce the risk. The other nice thing about this is that even if some key issuing service is compromised, the sender can force the receiver to switch services (compare to the TLS model, where dropping a CA is basically a coordination game problem).
Client-side, with a passphrase. A backup could be stored on the server or in your dropbox or wherever you'd like. It would be important, as part of the onboarding process, to communicate the need to keep the key safe and what happens to your old emails should you lose it.
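A minimal sketch of what "client-side, with a passphrase" could look like before the backup copy leaves the machine. Assumptions: a real client would wrap the key with a vetted AEAD such as AES-GCM; the SHA-256 counter-mode keystream below is purely illustrative, and the key material is a placeholder:

```python
import hashlib
import os

# Illustrative-only passphrase wrapping of a private key before it is
# backed up server-side. Real clients should use an AEAD (e.g. AES-GCM);
# the hash-based keystream here just keeps the sketch stdlib-only.

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def wrap(private_key: bytes, passphrase: str, salt: bytes) -> bytes:
    # A slow KDF makes offline guessing of the passphrase expensive.
    k = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    ks = keystream(k, len(private_key))
    return bytes(a ^ b for a, b in zip(private_key, ks))

salt = os.urandom(16)
secret = b"-----BEGIN TOY PRIVATE KEY-----"  # placeholder key material
blob = wrap(secret, "correct horse battery staple", salt)

# The wrapped blob is what gets backed up to the server or Dropbox.
# Unwrapping is the same XOR with the same derived keystream:
assert wrap(blob, "correct horse battery staple", salt) == secret
assert wrap(blob, "wrong passphrase", salt) != secret
```

This is also where the onboarding copy matters: the salt and wrapped blob are safe to store remotely, but forgetting the passphrase means the backup is as good as gone.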
As for using js for public key encryption, I've implemented it for a client and didn't have much trouble. There are libraries that work around the usual problems. What have you seen out there that would cause a problem?
If you're doing public key crypto on the client side in javascript, then the client side JS must necessarily have access to the private key (unless you have a TPM _and_ browser hooks to use it). This means that suddenly the private key is vulnerable to any XSS attacker that can inject itself into the same origin as your javascript crypto code.
Fair point. XSS likely wouldn't be a problem in the case of a desktop email client. But in the case of a Gmail or Outlook.com frontend, I can see how you would be concerned about something in the js served up by Google or MS capturing the private key and sending it to the server.
That said, couldn't this be mitigated by having a strong passphrase on the private key? How hard is the wrapper to attack?
Also, couldn't security researchers easily monitor the packets on this process and sound the alarm should they find that the js served up by Google or Microsoft suddenly starts sending private keys to the server?
AFAIK a strong key passphrase would be effective at protecting the private key while it's at rest (stolen laptop / hard drive). However, as soon as the private key is pulled into memory for a signing or encryption operation the passphrase doesn't matter, as the raw key is needed at that point.
As for your second question, there are techniques that perform static and dynamic analysis on javascript to try and detect illegal flows or taint propagation (without having to resort to monitoring the outbound network traffic). See [1] and [2] if you're interested in that topic.
Also, this isn't a hypothetical attack. Basically the same setup is used for client-side bitcoin wallets, and there have been reports of thefts (stolen keys).
Not so. Nothing stops Linux distros defaulting to mail and file systems that encrypt everything by default, but the reality is that most people can't be bothered. I certainly can't, and I don't feel like putting myself out to encrypt everything in order to make it popular. As has been pointed out, the metadata of who you email and phone, while not probative in the same fashion as the contents of calls and emails, is nevertheless a significant source of data, and encryption won't alter that without major changes to the architecture of mail.