Discretion is an increasingly difficult thing to come by. As we have moved the entire contents of our lives online, we have accepted the risk of exposure in exchange for the speed and convenience of information transfer. The battle for privacy in a digital world is like the battle against entropy in the physical world. It’s constant.
Just as the computer, a one-time military-grade technology, has descended into the laps and pockets of every person, so too has cryptography. Once, systems of encrypting information were considered top government secrets. Now, they are the invisible protocols we interact with any time we open a web browser, place a phone call, or connect to a WiFi network.
The strength of cryptographic algorithms, and the protection they offer for securing individual liberty against everything from online attackers to meddling governments, has raised the status of this field from an esoteric branch of computer science to a foundational requirement for a free society.
Entirely new systems of financial exchange and coordination have been designed based on these principles alone. By some estimates, cryptocurrencies now represent over 2% of the global economy by market value. However, encryption isn’t just useful for shielding static data from prying third parties. Novel encryption schemes are demonstrating that we can perform all kinds of useful computations on fully encrypted data, pointing to a world where no third party, not even a cloud provider, needs access to private information in order to do work on it.
And while more powerful computing tools threaten the security of many existing encryption methods, brilliant mathematicians are even now devising cryptographic designs that can withstand attacks from even the most powerful quantum computers of the future. The tireless work of cryptographers shows that privacy need not be a standard we are forced to give up.
Magic Math & Eternal Vigilance
Digital secrecy is made possible by the existence of strong cryptographic algorithms – mathematical functions that, in combination with a key, can turn legible information into inscrutable ciphertext. In general, the more time and computational resources it takes to turn the ciphertext back into readable information, the stronger the algorithm. To give a sense of just how effective cryptographic algorithms have become, it is believed that even with all the computing resources available on Earth, breaking some of today’s cutting-edge algorithms would take longer than the age of the universe.
The strength of such algorithms is ultimately derived from the wonderful properties of a handful of mathematical objects.
Imagine if generating a ciphertext worked like smashing a glass. There’s a clear operation that can shatter the glass into pieces, but there’s no easy way to reverse the operation and put the glass back together. Mathematicians have been collecting and analyzing functions that work like this for hundreds of years.
One clear example with such properties is multiplication. Given a set of numbers, I can easily multiply them to get their product. Performing the reverse operation, however, is far more difficult: no known algorithm can efficiently factor a sufficiently large non-prime number. The problem of integer factorization, and the lack of an efficient solution, has fascinated (if not greatly frustrated) mathematicians for centuries. But it’s also problems like this – problems with no easy answers – that allow for the existence of cryptography in the first place.
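To make the asymmetry concrete, here is a small Python sketch. The primes here are tiny by cryptographic standards; real moduli are hundreds of digits long, at which point trial division (and every other known classical method) becomes hopeless:

```python
# The easy direction: multiplying two primes takes one operation.
p, q = 1_000_003, 1_000_033
n = p * q  # 1,000,036,000,099, computed instantly

# The hard direction: recovering p and q from n by brute-force trial division.
def factor(n):
    d = 3
    while d * d <= n:          # ~500,000 candidates even for this tiny n
        if n % d == 0:
            return d, n // d
        d += 2
    return None

print(factor(n))  # (1000003, 1000033)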
Of course, there is no proof that an algorithm to factor integers doesn’t exist. We just have strong suspicions that it doesn’t.
That doesn’t mean there isn’t constant, ongoing work devoted to proving these assumptions wrong or finding clever workarounds. It is said that the price of liberty is eternal vigilance. The situation is no different with cryptography.
If cryptographers are on the blue team, there is an army of people on the red team, i.e. cryptanalysts, who are constantly working to break the strongest cryptographic algorithms on Earth. Cryptanalysts aren’t merely nefarious antagonists; they are mathematicians and researchers who walk the halls of security and intelligence agencies around the world.
Many of their efforts have seen astounding success. By 2012, rumors were circulating that the NSA had achieved a “computing breakthrough” that afforded it the ability to “crack current public encryption.” At the time, protocols like HTTPS, SSH, and SSL, along with most VPNs, relied on a Diffie-Hellman key exchange to establish a secure connection between two parties. In the Diffie-Hellman protocol, the client and server first agree on a large prime, then each combines this number with a private key of their own to generate a common secret key.
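As a rough sketch with toy numbers (real exchanges use primes thousands of bits long, along with authentication steps this toy omits):

```python
import random

# Toy Diffie-Hellman key exchange; numbers far too small to be secure.
p = 0xFFFFFFFB  # publicly agreed prime (here, the largest prime below 2**32)
g = 5           # publicly agreed generator

a = random.randrange(2, p - 1)   # Alice's private key, never transmitted
b = random.randrange(2, p - 1)   # Bob's private key, never transmitted

A = pow(g, a, p)  # Alice sends g^a mod p over the open channel
B = pow(g, b, p)  # Bob sends g^b mod p over the open channel

# Each side combines the other's public value with its own private key.
shared_alice = pow(B, a, p)        # (g^b)^a mod p
shared_bob = pow(A, b, p)          # (g^a)^b mod p
assert shared_alice == shared_bob  # both arrive at the same secret
```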
As it turned out, most implementations of Diffie-Hellman used the same hard-coded primes. Documents from Edward Snowden’s 2013 leaks suggested that the NSA had spent hundreds of millions of dollars on computing infrastructure designed to perform a massive one-time precomputation against a single widely used 1024-bit prime, after which individual connections using that prime could be broken cheaply. Breaking a single commonly used prime would allow the NSA to decrypt connections to two-thirds of VPNs and a quarter of all SSH servers globally. Cracking a second 1024-bit prime would open up access to 20% of the top million HTTPS websites. The revelations shocked the world and contributed to many commonly accepted protocols, like SSL, being deprecated.
The Snowden leaks demonstrated a key lesson of cryptography: even if the math is unbreakable, implementation choices and human error, like never changing your keys, can bring the entire system crashing down. Vigilance must be unceasing.
A Brief History of Cryptography
The most classic example of cryptography is the Caesar cipher. This is the kind of cipher school children might use today to pass clandestine notes to their friends, but at the height of the Roman Republic, it was what Julius Caesar used to convey his most important correspondence. The historian Suetonius described it thus: “if he had anything confidential to say, he wrote it in cipher, that is, by changing the order of the letters of the alphabet, that not a word could be made out. If anyone wishes to decipher these and get at their meaning, he must substitute the fourth letter of the alphabet, namely D, for A, and so with the others.”
[Image: the Caesar cipher. Source: IBM]
At the time, this substitution cipher was a reasonably effective system of encryption. Even if a message was intercepted, only a select few people could read at all, so the odds that the code would be broken were low. However, anyone truly determined to crack the system could surely have succeeded at doing so. One need only look at the sorry history of Mary, Queen of Scots.
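Following Suetonius’s description, here is a minimal sketch of the scheme, shifting each letter three places forward to encrypt and three back to decrypt:

```python
# Caesar cipher: shift each letter three places forward, wrapping at Z.
def caesar(text, shift=3):
    result = []
    for ch in text.upper():
        if ch.isalpha():
            result.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return "".join(result)

ciphertext = caesar("ATTACK AT DAWN")     # -> "DWWDFN DW GDZQ"
plaintext = caesar(ciphertext, shift=-3)  # decryption is the reverse shift
```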
At the age of 25, Mary was imprisoned by her cousin, Queen Elizabeth I, under suspicion of involvement in the murder of her husband, Lord Darnley. For years, she had been sending correspondence to her supporters and her son, all of which was intercepted and stored at the French embassy in London.
Gilbert Gifford, a courier at the embassy who presented himself as loyal to Mary, vowed to deliver her messages to their intended recipients. In truth, Gifford was a double agent working for Elizabeth’s spymaster, Francis Walsingham, and every letter he carried was copied along the way. Meanwhile, a new adherent to Mary’s cause had manifested out of the blue: Anthony Babington. He claimed to have devised a plot to assassinate Queen Elizabeth I, all but granting Mary a path to the throne. Babington communicated with Mary via a nomenclator cipher, which substituted designated symbols for letters.
[Image: Babington's nomenclator cipher. Source: Utah State University]
The correspondence, however, was compromised from the start. Babington sent an encrypted message to Mary asking for her consent to set the plot against the sitting queen in motion, and when Mary responded with an encrypted affirmative, she effectively incriminated herself. Walsingham’s codebreakers had worked out the system of her ciphered text; the deciphered reply confirmed her treason and led to her beheading.
This historical episode became a core lesson in cryptography, one articulated by Auguste Kerckhoffs in 1883: “the system must not require secrecy and can be stolen by the enemy without causing trouble.” In other words, a cipher’s security should rest in its key, not in the secrecy of its design. Neither the Caesar cipher nor Mary’s system used a separate key, so once the scheme itself was worked out, every message could be read.
An improvement was devised in the form of the Vigenère cipher, which required both an encryption scheme and a key. This cipher encrypts a message by shifting each letter according to a repeating keyword. So, if you want to encrypt "HELLO" with the keyword "KEY," you first align the keyword with the plaintext to get "KEYKE." Then, you shift each letter of "HELLO" by the alphabet position of the corresponding letter in "KEYKE," resulting in the encrypted message "RIJVS." To decrypt, you reverse the process, shifting letters back by the positions of the keyword letters.
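A minimal sketch that reproduces the example above (letters only, for brevity):

```python
# Vigenère cipher: shift each plaintext letter by the alphabet position
# (A=0, B=1, ...) of the corresponding keyword letter.
def vigenere(text, key, decrypt=False):
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text.upper()):
        k = ord(key[i % len(key)].upper()) - ord("A")
        out.append(chr((ord(ch) - ord("A") + sign * k) % 26 + ord("A")))
    return "".join(out)

assert vigenere("HELLO", "KEY") == "RIJVS"
assert vigenere("RIJVS", "KEY", decrypt=True) == "HELLO"
```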
The world got a lot of mileage out of the Vigenère cipher. It was used by Confederate forces in the American Civil War, and even by the Swiss in World War I. However, it was ultimately cracked in the 19th century, first by Charles Babbage and later, independently, by Friedrich Kasiski.
By the time World War II began, the Germans, having realized that all previous encryption schemes had eventually been cracked, had turned to Arthur Scherbius, an engineer who had spent years building mechanical cipher machines he called Enigmas.
The Enigma looked like a typewriter, but when a user typed a character, the machine would light up the corresponding encrypted character on a lampboard. Three rotors switched the “key” of the machine, which determined the encryption scheme; together, they could configure letter assignments in more than 17,000 possible ways. Additionally, a plugboard allowed operators to manually swap letter assignments, which exploded the difficulty of unscrambling a message: using ten plugs created nearly 151 trillion possible combinations.
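Both figures follow from simple counting: three rotors with 26 positions each give 26³ settings, and pairing up 20 of the 26 letters with ten plugboard cables gives 26! / (6! × 10! × 2¹⁰) arrangements. A quick check:

```python
from math import factorial

rotor_settings = 26 ** 3   # three rotors, 26 positions each
print(rotor_settings)      # 17576

# Choose 10 unordered, disjoint letter pairs out of 26 letters:
plugboard_settings = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)
print(plugboard_settings)  # 150738274937250, i.e. ~151 trillion
```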
[Image: an Enigma machine. Source: Alessandro Nasiri]
Messages encrypted with an Enigma machine would be sent via Morse code over radio, while each month, machine operators across the war effort were issued a printed key sheet listing the daily settings for their Enigma machines.
Cracking Enigma took years of work from Polish cryptanalyst Marian Rejewski, who produced the first mechanical Enigma decrypter, called the bomba, as well as from Alan Turing and Gordon Welchman at Bletchley Park, who built on his efforts. One of the greatest aids to the decryption effort, however, was simply improper operation. Messages would often contain repeating phrases like “Weather Report,” which cryptanalysts could guess and use as a foothold to crack the encoding scheme. As seen with the reuse of hard-coded primes in Diffie-Hellman, errors like these are still made today.
All of the above are examples of symmetric encryption: schemes in which the same key both encrypts and decrypts messages. Their chief benefit is efficiency, and sophisticated digital versions are still in use today. The Data Encryption Standard (DES), for instance, was introduced in 1975 by the US National Bureau of Standards (NIST’s predecessor) with involvement from the NSA, and was readily adopted across public telephone lines and the banking industry. Companies like International Flavors and Fragrances used DES to protect formulas transmitted over the phone. Ironically, just two years later, Diffie and Hellman argued that DES could feasibly be broken by brute-force attack. Some suspect that the NSA, given its involvement in the approval of DES, was well aware of this.
By 1997, the first independent team had broken a DES-encrypted message using brute force, leading the scheme to be deprecated and ultimately replaced by AES, the Advanced Encryption Standard, in 2001. AES is still widely used today to encrypt everything from files on local hard drives to database records. It’s even the standard the NSA endorses for encrypting Top Secret government files.
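To show how routine this has become, here is a short sketch using the widely used third-party Python `cryptography` package (an assumption of this example, not something named in the text) to encrypt and decrypt with AES-256 in GCM mode:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # a random 256-bit AES key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # never reuse a nonce with the same key

ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)  # raises if tampered with
assert plaintext == b"attack at dawn"
```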
The chief benefit of symmetric encryption schemes like AES is their speed. If the secret key doesn’t need to travel far or change hands often, the system works exceedingly well. But as the number of parties the secret key must be shared with grows, so too does the vulnerability of the encrypted system. When the key itself has to be communicated across publicly accessible channels, symmetric encryption schemes can fall prey to interception.
A solution to this problem was conceived in 1976 by Whitfield Diffie and Martin Hellman, who developed the idea of public key cryptography, a form of asymmetric encryption. Asymmetric encryption relies on a public key, which can encrypt data, and a corresponding private key, which can decrypt it. The magic of this system is that no secret keys ever need to be shared over public channels. Before asymmetric encryption, when keys had to be relayed to convey secret messages, only powerful institutions with the means to ensure secure key distribution could actually keep secrets. With the dawn of public key cryptography, the ability to guard a secret became available to everyone.
One of the first public key encryption systems developed was RSA, named for its inventors Ron Rivest, Adi Shamir, and Leonard Adleman. RSA’s security rests on the integer factorization problem. In a nutshell, each user picks two large prime numbers, which they keep secret; their product becomes the user’s public key. Anyone can use this public key to encrypt a message, but only someone who knows both prime factors can decrypt it. The extreme difficulty of factoring large integers makes recovering the secret key tremendously difficult.
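A textbook-RSA sketch with tiny primes (real keys use primes hundreds of digits long, and practical RSA adds padding that this toy omits):

```python
# Textbook RSA with toy numbers; never use this for real data.
p, q = 61, 53                  # the secret primes
n = p * q                      # 3233, the public modulus
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent, 2753 (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message
```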
The Future of Encryption
As time goes on, we increasingly realize there’s a lot more we can do with encryption. Encryption creates gates around data that prevent third parties from accessing it, but it turns out that being unable to see data doesn’t mean we can’t do anything with it.
Historically, being able to see the actual data was critical to understanding or proving something about it. For instance, if a borrower wanted to prove to a bank that they had a particular income, they would need to disclose all of their financial documents to prove that this was the case.
Imagine if there was an algorithm that could convince the bank that your income fell within a particular range without you ever revealing any sensitive documents. This seems like it should be impossible, but remarkably, such an algorithm has already been around for nearly forty years. It’s called a zero-knowledge proof.
Devised in 1985 by Shafi Goldwasser, Silvio Micali, and Charles Rackoff, the zero-knowledge proof is best explained through a classic example. Suppose a colorblind verifier holds a red ball and a green ball. The verifier cannot distinguish between the balls, but a prover claims he can, without wanting to reveal which ball is which color. To test the claim, the verifier hides the balls from the prover’s view and randomly decides whether or not to switch them between his hands. Then the verifier presents the balls and asks the prover whether they were switched. The interaction is repeated many times, and with each correct response from the prover, the verifier grows increasingly sure that the prover can in fact distinguish the balls.
In other words, the prover could show the verifier that he knew the correct arrangement of the balls without ever revealing how he knew the correct arrangement. In short, no new knowledge was transferred to the verifier, except for the fact that the prover could distinguish the balls.
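The statistics here are easy to simulate. In the sketch below, a genuine prover answers by looking at the balls, while an impostor can only guess; after 20 rounds the impostor survives with probability of about one in a million (2⁻²⁰):

```python
import random

def run_protocol(prover_can_see, rounds=20):
    """One verifier-prover interaction; returns True if the prover survives."""
    balls = ["red", "green"]                       # verifier's left and right hands
    for _ in range(rounds):
        before = list(balls)
        if random.choice([True, False]):           # verifier secretly flips a coin
            balls.reverse()
        was_switched = balls != before
        if prover_can_see:
            answer = was_switched                  # a real prover simply looks
        else:
            answer = random.choice([True, False])  # an impostor must guess
        if answer != was_switched:
            return False                           # one wrong answer and it's over
    return True

print(run_protocol(prover_can_see=True))   # always True
print(run_protocol(prover_can_see=False))  # True only with probability 2**-20
```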
As consumers become more sensitive about third-party services viewing their private information, you can begin to outline a future where zero-knowledge proofs step in to mediate such scenarios. Imagine logging in to one of your accounts without ever sharing your actual password with the service provider. This is the kind of thing zero-knowledge proofs could enable.
Cryptocurrencies have been particularly eager to adopt zero-knowledge proofs. Historically, blockchains like Bitcoin or Ethereum required the amounts and addresses involved in every transaction to be posted publicly, even if those addresses were only pseudonymous. A solution to this somewhat uncomfortable level of openness was offered by Zcash, launched in 2016. With Zcash, the sender, receiver, and amount transferred in a transaction can remain totally secret, while the network can still prove that the transaction was valid.
An even more recent achievement in cryptography is the introduction of homomorphic encryption. Zero-knowledge proofs have shown that we can verify something about data that we haven’t seen — but homomorphic encryption allows us to do computation with data that is fully encrypted.
In 2009, computer scientist Craig Gentry designed a scheme that could perform addition and multiplication on ciphertext in a way that corresponds exactly to performing those operations on the underlying data, all while keeping that data entirely illegible to the entity doing the computation.
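Gentry’s construction is far too involved to sketch here, but the flavor of homomorphic arithmetic already shows up in an older, partially homomorphic scheme: Paillier encryption (1999), in which multiplying two ciphertexts yields an encryption of the sum of their plaintexts. A toy version, with primes far too small for real use:

```python
import random
from math import gcd

# Toy Paillier cryptosystem: additively homomorphic public-key encryption.
p, q = 10_007, 10_009              # secret primes (tiny, for illustration only)
n = p * q
n2 = n * n
g = n + 1                          # standard choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:          # randomness must be coprime with n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 1234, 5678
# Multiplying ciphertexts adds the hidden plaintexts, without decrypting:
assert decrypt((encrypt(a) * encrypt(b)) % n2) == a + b
```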
Gentry’s result was an astounding achievement, especially since the addition and multiplication operations can be combined to build up more complex computations. The problem with homomorphic encryption is that it’s painfully slow. Some estimates put it at something like a million times slower than equivalent operations on unencrypted data. The IEEE reports that in 2016, one major tech company implemented a fully homomorphic encryption scheme that performed one hundred trillion times slower than a regular operation on plaintext.
Given that a modern CPU can add two numbers in about a billionth of a second, the homomorphic variant would take roughly 27 hours to do the same addition. The company reportedly improved its implementation three years later, making it 75 times faster, meaning that adding two numbers would take only around 20 minutes.
While clearly still a nascent technology, the theoretical potential of homomorphic encryption is immense. It points to a world where computation can be performed on private data, even by adversarial actors, without users ever having their secrets compromised.
One interesting property of homomorphic encryption schemes is that they rely on lattice-based cryptography, an area of mathematics whose problems are not known to be efficiently solvable even by quantum computers.
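To give a taste of why, lattice schemes hide messages inside noisy linear equations. Below is a toy, secret-key variant of learning-with-errors (LWE) encryption for a single bit; the parameters are purely illustrative, and real schemes use far larger dimensions and carefully chosen error distributions:

```python
import random

q = 3329                                 # modulus (borrowed from Kyber, for flavor)
dim = 16                                 # secret dimension (real schemes use hundreds)
s = [random.randrange(q) for _ in range(dim)]   # the secret key

def encrypt(bit):
    a = [random.randrange(q) for _ in range(dim)]      # random public vector
    e = random.randint(-2, 2)                          # small noise term
    b = (sum(x * y for x, y in zip(a, s)) + e + bit * (q // 2)) % q
    return a, b

def decrypt(a, b):
    # Strip off <a, s>; what's left is the noise, plus q/2 if the bit was 1.
    v = (b - sum(x * y for x, y in zip(a, s))) % q
    return 1 if q // 4 < v < 3 * q // 4 else 0

for bit in (0, 1):
    assert decrypt(*encrypt(bit)) == bit
```

Without the secret vector s, each ciphertext looks like a random equation blurred by noise; the noise is exactly what defeats the algebraic shortcuts quantum algorithms exploit against factoring.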
While quantum computing is itself a nascent technology, should it one day mature sufficiently, it could present substantial problems for most existing encryption algorithms that rely on integer factorization and related problems, like RSA. For example, Shor’s algorithm, a quantum algorithm developed by Peter Shor at Bell Labs in the 1990s (he is now a professor at MIT), can factor large integers much faster than the fastest known algorithm on a classical computer, given a sufficiently powerful quantum machine.
For now, however, even the most sophisticated quantum computers have been unable to factor the number 35 using Shor’s algorithm. In 2012, the factorization of 15 was successfully demonstrated. At that rate, we might only be able to factor numbers near one million by the end of the century. Still, should quantum computing mature in our lifetimes, we can take comfort that even as current encryption schemes are phased out, the new schemes being developed today, which are too onerous for current computing technology to run practically, will become feasible in their own right.
Disclosure: Nothing presented within this article is intended to constitute legal, business, investment or tax advice, and under no circumstances should any information provided herein be used or considered as an offer to sell or a solicitation of an offer to buy an interest in any investment fund managed by Contrary LLC (“Contrary”) nor does such information constitute an offer to provide investment advisory services. Information provided reflects Contrary’s views as of a time, whereby such views are subject to change at any point and Contrary shall not be obligated to provide notice of any change. Companies mentioned in this article may be a representative sample of portfolio companies in which Contrary has invested in which the author believes such companies fit the objective criteria stated in commentary, which do not reflect all investments made by Contrary. No assumptions should be made that investments listed above were or will be profitable. Due to various risks and uncertainties, actual events, results or the actual experience may differ materially from those reflected or contemplated in these statements. Nothing contained in this article may be relied upon as a guarantee or assurance as to the future success of any particular company. Past performance is not indicative of future results. A list of investments made by Contrary (excluding investments for which the issuer has not provided permission for Contrary to disclose publicly, Fund of Fund investments and investments in which total invested capital is no more than $50,000) is available at www.contrary.com/investments.
Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by Contrary. While taken from sources believed to be reliable, Contrary has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Please see www.contrary.com/legal for additional important information.