Public Key Encryption: Temporary Privacy, Temporary Security
Privacy is a perennial national preoccupation, entering the zeitgeist for a few weeks every time a major breach or revelation of malpractice hits the news, then gradually fading from view. As interest waxes and wanes, facts on the ground have also been mixed: various states increase their espionage programs while others enshrine data ownership; companies increasingly take data security seriously while collecting ever more invasive data from users; end-to-end encrypted chat apps explode in popularity while security flaws (or, some allege, intentional backdoors) are found with regularity.
These are the natural growing pains of our increasingly online, connected society: when so much of life is lived by transmitting sensitive information over hundreds or thousands of miles, through dozens of intermediaries, to dozens of different individuals or companies, it is only logical that an arms race between those who want that information and those who want it kept private would result.
The key weapon on the side of the information-hiders, be they individuals wary of corporate tracking or criminals hiding illicit activity, has always been encryption, particularly of the public-key kind — that magical set of algorithms that allows two individuals to share no prior secret, have all of their communication heard by eavesdroppers, and yet still communicate something that no one but the two of them can understand. No less than Edward Snowden (who, whether you believe him a hero or traitor, is certainly well-versed in data privacy) says this about encryption in his book Permanent Record:
Deletion is a dream for the surveillant and a nightmare for the surveilled, but encryption is, or should be, a reality for all. It is the only true protection against surveillance. If the whole of your storage drive is encrypted to begin with, your adversaries can’t rummage through it for deleted files, or for anything else — unless they have the encryption key. If all the emails in your inbox are encrypted, Google can’t read them to profile you — unless they have the encryption key. If all your communications that pass through hostile Australian or British or American or Chinese or Russian networks are encrypted, spies can’t read them — unless they have the encryption key. This is the ordering principle of encryption: all power to the key holder.
Privacy advocates the world over imagine a world where cryptography replaces trust, where you can be certain of your data’s privacy because you can mathematically prove that only the intended recipients can discern anything at all from it. Even without such lofty crypto-utopian ambitions, it is hard to deny that the modern internet, and hence much of the modern economy, is an edifice largely built on encryption: without it, secure and private connections over such a shared network would be impossible.
Unfortunately, though, most contemporary encryption doesn’t quite give us the guarantees we think it does: the confluence of cheap storage and nonlinear advances in computing means that the only thing you can be sure of is that your data is secure for now.
Quantum Computing + Cheap Storage = Limited Privacy
The general public is increasingly aware of the threat that quantum computing poses to the current internet: each stride forward in quantum computing brings us a little closer to the day when all public key algorithms — Diffie-Hellman, RSA, elliptic curve — become effectively useless against an adversary with such a computer. The standard response to this is usually fairly blasé: whenever such machines are within a few years of being feasible, we are assured, standards will shift to a new, quantum-resistant public-key algorithm; users will not even notice the change except maybe through higher data usage fees and CPU utilization as they move to less efficient algorithms. This response is entirely valid for a whole range of use cases: there’s little to worry about when it comes to banking or online purchases because these will move to quantum-resistant algorithms before any damage can be done.
However, accessible quantum computing means that anything public-key encrypted with today’s state of the art will become readable to anyone who holds a copy of that data. The implicit assumption that encrypted data in an adversary’s hands is useless (the reason we feel comfortable sending personal communications and browsing the web on our phones or over wifi) is therefore incomplete: it holds, but only for a limited, uncertain time. Everything you send encrypted over a public network is a time capsule: anyone willing to hold onto it for long enough will eventually be able to see its contents.
The attack this implies is very simple: Alice and Bob are communicating “securely” via public key encryption algorithm X. Eve (the “eavesdropper”) holds onto all of the traffic that passes between them. At some later date, when algorithm X is broken by quantum computing or some similar technological advance, Eve can go back, decrypt all of the traffic, and learn everything that Alice and Bob wanted kept secure.
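To make this concrete, here is a minimal Python sketch of that record-now, decrypt-later pattern. It is an illustration under assumptions, not an attack implementation: it uses the pyca/cryptography package, an X25519 (elliptic-curve) key exchange as a stand-in for “algorithm X,” and it fakes the eventual break at the end by simply handing Eve Alice’s private key, since a real quantum attacker would instead recover that key from the public key she recorded.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def derive_key(shared_secret: bytes) -> bytes:
    # Turn the raw key-exchange output into a 32-byte AES key.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo handshake").derive(shared_secret)


# --- Today: Alice and Bob talk over a network Eve can see ---
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

key = derive_key(alice_private.exchange(bob_private.public_key()))
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"meet at the usual place", None)

# Eve passively records only what actually crossed the wire:
# both public keys and the ciphertext.
recording = {"alice_public": alice_private.public_key(),
             "bob_public": bob_private.public_key(),
             "nonce": nonce,
             "ciphertext": ciphertext}

# --- Years later: algorithm X falls ---
# Stand-in for the break: we hand Eve Alice's private key, which a
# quantum attacker would instead derive from the recorded public key.
recovered_private = alice_private
eve_key = derive_key(recovered_private.exchange(recording["bob_public"]))
print(AESGCM(eve_key).decrypt(recording["nonce"], recording["ciphertext"], None))
```

Note that Eve never interferes with the conversation; everything she needs just sits on a hard drive until the algorithm falls, which is why cheap storage is the other half of the problem.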
This is why cheap long-term storage is the second prerequisite for the attack: since public key encryption only hides data temporarily, an adversary targeting someone cannot yet tell which traffic will turn out to be valuable, and so must collect all of it. Luckily for these adversaries (and unfortunately for the rest of us), storage is cheap and getting cheaper: as of today, a retail customer can easily buy a hard drive for less than $20 per terabyte. I, an avid user of my cell phone, use much less than 20GB a month, or <240GB a year. If the ratio between data usage and storage cost stays this low, a relatively unsophisticated adversary could store as much of my data as he could sniff, indefinitely, for around $5/year. The “sniffing” portion is more of a barrier for such a targeted attack against cell phones (e.g., the attacker would need something like a StingRay, which is expensive and of dubious legality), but sniffing all traffic on a wifi network requires no special hardware and is difficult to detect. From where I sit typing this in my apartment, my laptop can detect over 50 wifi networks.
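A quick back-of-the-envelope, using the figures above (retail prices and my own usage, so estimates rather than measurements):

```python
# Rough cost for an adversary to archive one target's traffic indefinitely,
# using the figures quoted above (assumptions, not measurements).
COST_PER_TB_USD = 20.0     # retail hard drive price per terabyte
MONTHLY_USAGE_GB = 20.0    # generous estimate of my cell-phone data usage

yearly_tb = MONTHLY_USAGE_GB * 12 / 1000    # ~0.24 TB per year
yearly_cost = yearly_tb * COST_PER_TB_USD   # ~$4.80 per year
print(f"~{yearly_tb:.2f} TB/year, roughly ${yearly_cost:.2f}/year to store it all")
```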
This reality is scary: you could be subject to surveillance not only by companies or by government agencies, both of which can be reined in by law, but also by any private individual with malicious intent, a few hundred dollars of equipment, and a long time horizon.
What can we do about it?
As individuals, basically the only thing that can be done is to treat privacy on the internet as something temporary and unreliable. This obviously leads to a chilling effect, but it’s the reality of the situation: if you don’t want your web history, private emails, or nude photos leaked, then do not trust the current internet to protect them. These problems are surmountable, but not in a way that any individual can accomplish alone. From my (non-expert) viewpoint, solutions include:
- Adoption of quantum-resistant public-key encryption algorithms as soon as technically feasible. This is the quickest way to address the most obvious attack described above, though any algorithm has the danger of being made obsolete by further breakthroughs.
- More widespread options for symmetric (i.e., non-public) key encryption (as the NSA has used since 2015). This requires sharing a secret key over a secure channel, e.g., physically, but you could imagine an app making it easy to do this with the handful of people with whom you actually need to communicate with true secrecy. In the most extreme case, a “one-time pad” (a key as long as the message, used only once) can be used; see the sketch after this list. This is absurdly memory intensive, but it makes the message impossible to decrypt without the key, no matter what breakthroughs are made in computing.
- More options for obfuscating traffic. If an adversary cannot differentiate between the important and unimportant encrypted data, then you could use a service that creates bogus traffic to hide what you’re actually doing and, ideally, make it less feasible for the adversary to store all of your data.
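To illustrate the one-time-pad idea from the second bullet, here is a toy Python sketch. The function names are mine, and it deliberately ignores the hard part: distributing a key as long as all your future messages over a secure channel and never reusing it.

```python
import secrets


def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    # One-time pad: a fresh random key as long as the message, used exactly once.
    key = secrets.token_bytes(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return key, ciphertext


def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))


key, ciphertext = otp_encrypt(b"meet at noon")
assert otp_decrypt(key, ciphertext) == b"meet at noon"
# Without the key, every plaintext of the same length is equally consistent
# with the ciphertext, so no computing breakthrough can recover the message.
```

Storage-wise this is as bad as it sounds (a gigabyte of messages needs a gigabyte of pre-shared key), which is why it only makes sense for a handful of truly sensitive contacts.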
The COVID crisis, in particular, has shown us how deeply we now rely on the internet and how enmeshed it is in every part of society. This is not a genie that we can, or should, put back into its bottle. But neither can we keep our heads in the sand and ignore the serious danger posed by using algorithms with only short-term privacy guarantees to protect deeply personal information online.
I’m sure I’ve missed a lot of nuances to this topic and other ways of resolving this problem, not being an expert in cryptography — please feel free to explain what I’ve missed or link to further readings below!