Mailpile plans to secure communications with PGP:
PGP encryption and verification of emails and recipients
But PGP's security is at odds with the kind of security appropriate for casual conversation. This is discussed eloquently in the Cypherpunks' paper Why Not To Use PGP (they go on to introduce a new cryptosystem with more desirable properties).
The specific problems with PGP: Suppose your key or your correspondent's key is compromised, say by carelessness, hacking, or court order. Then all your past messages can be decrypted (as well as future ones), and your signatures can be verified by a third party (say, a court of law).
Again, in the words of Moxie Marlinspike:
And the problem is that if at any point in the future, Bob’s public key is compromised, all previous traffic is compromised as well; that someone could easily just record all the traffic, and that’s totally not unrealistic today, and then at any point try and compromise Bob’s public key and go back and decrypt all of the previous traffic. So the first thing they notice is that one key compromise affects all previous correspondence. The second thing that seems weird is that the secrecy of what I write is a function of your security practices. I mean, I feel like I’m somewhat paranoid and I have reasonable security practices, but I don’t know about the people that I am communicating with. I would like for what I write to somehow be a function of my security practices. And the third thing that they note is that the PGP model gives you authenticity, but not deniability. If I sign my email “Hey Bob, today I was thinking that Eve is a real jerk” and at some point this email is compromised and discovered, there’s no way for me to deny that I wrote this. So the nice thing is that Bob knows that I wrote it, but there’s no way for me to deny to everyone else that I wrote it – you have this ‘undeniable’ problem.
To name these problems, PGP doesn't have forward secrecy, and it doesn't have plausible deniability. To be clear, this isn't a break of PGP—it never claimed to have these properties.
It's not immediately obvious that these properties are even possible. Nevertheless, in 2004 the Cypherpunks exhibited a cryptosystem (Off-the-Record messaging, or OTR) that fails gracefully:
You could say that Bob losing control of his private key was the problem. But with today’s easily-compromised personal computers, this is an all-too-likely occurrence. We would really prefer to be able to handle such failures gracefully, and not simply give away the farm.
There were two main problems:
- The compromise of Bob’s secrets allowed Eve to read not only future messages protected with that key, but past messages as well.
- When Alice wanted to prove to Bob that she was the author of the message, she used a digital signature, which also proves it to Eve, and any other third party.
When we think about private messages in the context of social conversation, we really want a system with different properties: we want only Bob to be able to read the message, and Bob should be assured that Alice was the author; however, no one else should be able to do either. Further, after Alice and Bob have exchanged their message, it should be impossible for anyone (including Alice and Bob themselves) to subsequently read or verify the authenticity of the encrypted message, even if they kept a copy of it. It is clear that PGP does not provide these desirable properties.
This paper introduces a protocol for private social communication which we call “off-the-record messaging”. The notion of an off-the-record conversation well-captures the semantics one intuitively wants from private communication: only the two parties involved are privy to the contents of the conversation; after the conversation is over, no one (not even the parties involved) can produce a [cryptographically verifiable] transcript; and although the participants are assured of each other’s identities, neither they nor anyone else can prove this information to a third party. Using this protocol, Alice and Bob can enjoy the same privacy in their online conversations that they do when they speak in person.
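The deniability property quoted above can be made concrete with a little code. OTR authenticates messages with a MAC under a shared key rather than a digital signature; the sketch below (a minimal illustration in Python, with a made-up random session key, not OTR's actual protocol) shows why a MAC convinces Bob but proves nothing to anyone else:

```python
import hashlib
import hmac
import os

# Illustrative shared session key; in OTR it would come from a
# Diffie-Hellman exchange between Alice and Bob.
session_key = os.urandom(32)

message = b"Hey Bob, today I was thinking that Eve is a real jerk"

# Alice authenticates the message with an HMAC, not a signature.
tag = hmac.new(session_key, message, hashlib.sha256).digest()

# Bob verifies the tag, so he knows Alice wrote the message...
expected = hmac.new(session_key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)

# ...but because Bob holds the very same key, he could have forged
# an identical tag himself. To Eve, or to a court, the tag proves
# nothing about who wrote the message: that is deniability.
forgery = hmac.new(session_key, b"a message Alice never sent",
                   hashlib.sha256).digest()
```

Contrast this with a PGP signature, which only Alice's private key could have produced, and which therefore convinces everyone.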
Ok, so that explains what forward secrecy and deniability are, and why they are desirable. But are they—as I claim—necessary? And if so, why doesn't PGP have them?
To answer the second question: PGP was written a world ago. Not so much was known then! In 1991 strong cryptography was illegal in the US. The government instead pushed 'key escrow'—encryption to which the government would always have a backdoor. Phil Zimmermann's stated aims for PGP were to liberate cryptography and to protect the privacy of ordinary conversation from surveillance. As is apparent, the legal battle was won (for the history of how, including why the PGP source code was printed in a paper book, listen to Moxie's talk). Today, crypto underpins the online shopping industry. Alas, the second aim hasn't been achieved. Almost all digital communication (text messages, emails, phone calls) remains unencrypted. Security agencies engage in total mass surveillance.
Now, why is forward secrecy vital? Because key compromises happen. Eventual key compromise should be expected, and it might come sooner than you expect, or without your knowing.
In my country, the UK, it's a criminal offence (RIPA) to refuse to surrender keys to the police. My dad and I used to send PGP encrypted messages to each other—about nothing more than family stuff. Yet at any time in the future, the police may demand my keys. If I yield, they can decrypt all the messages I ever sent. If I refuse, I can be gaoled. Had I used a forward secure protocol, I could safely reveal my private key to the police without compromising past messages (the ephemeral keys would be long lost). On my walk home, I could simply tell my friends to stop using the compromised key.
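The "ephemeral keys would be long lost" point can be illustrated with a toy key ratchet (a sketch of the general idea, not OTR's actual protocol): each message is encrypted under a key derived from a chain key, the chain key is then advanced through a one-way hash, and the old values are deleted.

```python
import hashlib
import os

def ratchet(chain_key):
    """Derive a per-message key, then advance the chain one way."""
    message_key = hashlib.sha256(chain_key + b"message").digest()
    next_chain_key = hashlib.sha256(chain_key + b"chain").digest()
    return message_key, next_chain_key

chain_key = os.urandom(32)          # initial secret, e.g. from a DH handshake

k1, chain_key = ratchet(chain_key)  # encrypt message 1 with k1, then delete k1
k2, chain_key = ratchet(chain_key)  # encrypt message 2 with k2, then delete k2

# Someone who seizes the current chain_key cannot invert SHA-256 to
# recover k1 or k2: messages sent before the compromise stay safe.
assert k1 != k2
```

With a long-term PGP key, by contrast, there is nothing to delete: the one key that decrypts tomorrow's mail decrypts yesterday's too.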
Contrary to the PGP FAQ, people in other jurisdictions are not safe. Your government may introduce a similar law in the future, and use it to decrypt old messages. Your government may have already introduced a similar law in secret.
Even without draconian laws, intelligence agencies are covertly pressuring providers for their keys. Lavabit cracked under such pressure, but responsibly informed the public and shut down.
This is speculation (until the next Snowden reveal), but we should assume intelligence agencies are also trying to acquire secret keys by hacking. This is worse than legal pressure, because you might not detect compromise. Forward security doesn't solve the problem, but it is some protection—messages sent before the compromise remain safe.
To protect against today's threats, any cryptosystem worth its salt should be forward secure. Security agencies are recording communications en masse (e.g. Prism, Tempora). Key compromises happen regularly.
There is communication software today (OTR for instant messaging http://www.cypherpunks.ca/otr/ and RedPhone for voice calls https://whispersystems.org/) that does forward secure encryption, authentication, and plausible deniability. That's the standard we want for secure communication. If it's possible for email, we should do it.
If it's not possible for email, maybe we should ditch email. The network effect makes any change hard, but it's the same cost to convince your friends "let's all install pidgin-otr" as "let's all install PGP". The former is a better investment. Encrypting all your messages (and signing them) for five years with the same key, as with PGP, is shooting yourself in the foot.
It's worth noting Google turned on forward security for their HTTPS servers http://googleonlinesecurity.blogspot.co.uk/2011/11/protecting-data-for-long-term-with.html . I doubt this protects against the NSA--they will have specific backdoors into Google. But it remains an instructive example for other service providers--use forward secure crypto, and make the NSA go out of their way to attack you.
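As a quick check of what your own TLS stack offers, Python's ssl module can list the cipher suites a default client context will negotiate. Suites whose names begin with ECDHE or DHE use ephemeral Diffie-Hellman key exchange, and all TLS 1.3 suites (named TLS_...) are forward secure by construction. (A sketch; the exact suite names depend on the OpenSSL build Python is linked against.)

```python
import ssl

ctx = ssl.create_default_context()

# Cipher suites offering ephemeral (forward-secure) key exchange:
# ECDHE/DHE suites, plus all TLS 1.3 suites (named TLS_...).
forward_secure = [c["name"] for c in ctx.get_ciphers()
                  if c["name"].startswith(("ECDHE", "DHE", "TLS_"))]

# Modern OpenSSL builds enable these by default.
assert forward_secure
print(forward_secure[:3])
```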