What is Crypto? Part 4: How Good Crypto Dies

Previous: What Is Crypto? Part 3: Crypto Asymmetry

The previous posts detailed cryptographic primitives like block ciphers, asymmetric cryptography, and key exchange algorithms. However, even if you use only strong cryptographic primitives, it is still possible to make mistakes when combining and using them. In most situations, Transport Layer Security (TLS) is a good choice. (TLS is the basis of modern secure websites and server-to-server email connections.) There are more considerations and design properties than we can discuss here, but this final post of the crypto series will cover a few examples:

VERSIONING

A good protocol will prepare for the possibility that an underlying cryptographic primitive may become vulnerable to attack. Any cryptographic protocol should include a version number in its initial connection that prohibits the use of ciphers or protocol elements that are too old. It is also critically important that Alice and Bob verify this version number using each side’s public keys, so that Mallory, the man-in-the-middle attacker, cannot downgrade the protocol to a version she can attack. This is difficult to do correctly, and early versions of SSL (e.g. SSLv2) were vulnerable to downgrade attacks.
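To make the idea concrete, here is a toy sketch (not real TLS) of transcript-based downgrade detection, assuming the third-party Python `cryptography` package; the version strings and message format are invented for illustration. Bob signs the negotiation as he saw it, and Alice verifies that signature against her own record of what she offered.

```python
# Toy downgrade detection: the signed handshake covers every version that was
# offered and the version that was chosen. Uses the third-party "cryptography"
# package; everything else here is illustrative, not a real protocol.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def transcript(offered, chosen):
    return b"|".join(offered) + b"->" + chosen

# Alice offered TLS 1.3 and 1.2; Mallory stripped 1.3 before Bob saw the offer.
alice_view = transcript([b"TLS1.3", b"TLS1.2"], b"TLS1.2")
bob_view = transcript([b"TLS1.2"], b"TLS1.2")

bob_key = Ed25519PrivateKey.generate()
signature = bob_key.sign(bob_view)  # Bob signs the handshake *he* saw

try:
    # Alice verifies against her own record of the handshake, not Mallory's.
    bob_key.public_key().verify(signature, alice_view)
    print("versions agree; no downgrade detected")
except InvalidSignature:
    print("transcript mismatch: possible downgrade attack")
```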

KEY/ALGORITHM ROTATION

One source of error when putting together cryptographic protocols is deciding how to change keys. In the past, attacks against block ciphers such as the Data Encryption Standard (DES) have frequently required large volumes of data and/or an extended amount of time to crack. In case such a vulnerability is ever found in a cipher, protocols like TLS that may be used for a long period of time usually build in some way to change keys. In TLS, this is called renegotiation and may be initiated by either end. For many years it worked just like the original key exchange, which opened a new vulnerability. An attacker could intercept a TLS connection, send some malicious request, then start a renegotiation and pass through a legitimate connection from an unsuspecting user. The website would then assume the second negotiation came from the same client that had asked for the renegotiation, when it was really a different user.

In that scenario, thanks to TLS, Mallory could not read or change any data sent over Alice’s secure channel once she had it set up. But it was already too late: Mallory had sent her data over the first connection, and the website assumed the second negotiated connection was all from the same end user. This vulnerability was fixed in server configurations and more recent versions of TLS by requiring the challenge/responses of a renegotiation to happen over the existing secured and authenticated tunnel, instead of in the clear as they are for the initial connection.

Note that unlike the symmetric *keys* that TLS may occasionally rotate, it is normally a very bad idea to change *algorithms* in the middle of a connection. Rather than making your communications more secure, it usually makes them less secure. For example, say you go to Gmail, log in with your password, and then browse your email. Internally, your login gives you an HTTP cookie, which you then supply with each request to fetch another email. Suppose you were using a version of TLS that switched between the AES and RC4 symmetric encryption algorithms. If an attacker, Eve, had an attack against RC4, then when you switched to RC4, Eve would be able to decrypt maybe one or two HTTP requests in the session before you switched back. Those requests would contain your cookie, and Eve could go into Gmail directly with that cookie and dump all your email. Rather than making the attack harder, switching algorithms made it easier.

STACKING DIFFERENT ENCRYPTION ALGORITHMS

It is also considered bad form to stack encryption algorithms on top of each other (e.g. encrypting first with AES and then with 3DES). If you use the same key for both, you’re probably not gaining protection against any attack. If you derive multiple keys from the same seed, you’re not increasing your real key size or making anything more resilient either. And even if you use independent keys and effectively increase your *key size*, the *block size* of each algorithm remains the same, so you’re no stronger against attacks like Sweet32.

Sweet32 is an attack that works on the Triple-DES (3DES) cipher because it has a key size of 168 bits but a block size of only 64 bits. So even though there are 2^168 possible keys, roughly as many as there are atoms on Earth, there are only 2^64 possible ciphertext blocks, far fewer. If you do the math, thanks to the “birthday paradox” you can expect a collision after about 2^32, or just over 4 billion, ciphertext blocks, a number frequently reached when transferring multi-gigabyte files. A collision means that two ciphertext blocks are identical. When that happens and you know one of the corresponding plaintext blocks, in many block cipher modes of operation you can immediately calculate the other plaintext block. This can reveal secrets like authentication tokens, breaking the entire system. As a result, do not use 3DES anymore, and do not repeat its mistake by stacking ciphers.
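If you want to sanity-check those numbers, here is a quick back-of-the-envelope calculation in plain Python; nothing in it is specific to 3DES beyond the 64-bit block size.

```python
# Birthday-bound arithmetic for a 64-bit block cipher such as 3DES.
import math

block_bits = 64
possible_blocks = 2 ** block_bits
blocks_sent = 2 ** 32                      # just over 4 billion blocks
data_sent_gib = blocks_sent * 8 / 2 ** 30  # each block is 8 bytes

# Approximate collision probability after k blocks: 1 - exp(-k*(k-1)/(2*N)).
p_collision = 1 - math.exp(-blocks_sent * (blocks_sent - 1) / (2 * possible_blocks))

print(f"data transferred: {data_sent_gib:.0f} GiB")  # ~32 GiB
print(f"collision probability: {p_collision:.2f}")   # ~0.39
```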

The one exception to the no-stacking rule is using a post-quantum algorithm in conjunction with a traditional algorithm for key exchange and public-key authentication, since the two defend against different threats.
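A minimal sketch of that hybrid idea, using only the Python standard library: derive the session key from both shared secrets, so an attacker has to break both exchanges to recover it. The two random values below are placeholders standing in for, say, an ECDH output and a post-quantum KEM output, and the KDF label is invented.

```python
# Hybrid key derivation sketch: the session key depends on both secrets.
import hashlib
import os

classical_secret = os.urandom(32)  # placeholder for e.g. an X25519 shared secret
pq_secret = os.urandom(32)         # placeholder for a post-quantum KEM shared secret

session_key = hashlib.sha256(
    b"hybrid-kdf-v1" + classical_secret + pq_secret
).digest()

print(session_key.hex())
```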

RANDOM ACCESS

TLS generally doesn’t work in real-time environments like streaming video, or in random-access environments like full disk encryption, where you want to ignore lost packets or jump straight to one byte without reading everything that comes before it, and still be able to decrypt the packet or sector you want. For those, we use DTLS and block cipher modes of operation like CTR or XTS. By sacrificing reliability and, in the case of some full disk encryption schemes, a MAC, they are naturally weaker against some attacks. This means the programs that run on top of such encryption must be designed at the application layer to gracefully and securely handle unexpected losses, replays, and corruption at the block or message level.
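To show why a mode like CTR permits random access, here is a small sketch using the third-party Python `cryptography` package: the keystream for block *i* depends only on the key and the initial counter plus *i*, so one block in the middle can be decrypted without touching anything before it. This is an illustration only, not a full disk-encryption design (real schemes such as XTS also bind the sector number and handle integrity elsewhere).

```python
# Random access with AES-CTR: decrypt one 16-byte block at an arbitrary offset.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
initial_counter = os.urandom(16)
plaintext = os.urandom(4096)  # pretend this is a file or disk region

def ctr_at_block(index):
    # Counter block for the index-th 16-byte block, wrapping modulo 2**128.
    value = (int.from_bytes(initial_counter, "big") + index) % (1 << 128)
    return modes.CTR(value.to_bytes(16, "big"))

# Encrypt the whole region once, starting from block 0.
ciphertext = Cipher(algorithms.AES(key), ctr_at_block(0)).encryptor().update(plaintext)

# Decrypt only the 16 bytes at offset 2048 (block 128) without reading the rest.
offset = 2048
chunk = Cipher(algorithms.AES(key), ctr_at_block(offset // 16)).decryptor().update(
    ciphertext[offset:offset + 16]
)
assert chunk == plaintext[offset:offset + 16]
print("random-access decryption matches the original plaintext")
```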

SIDE CHANNEL ATTACKS INCLUDING HEIST

Having said all that, many of these protocols are still vulnerable to chosen-plaintext attacks, which are not hard to launch from a compromised router or a coffee-shop network. Attacks such as CRIME and BREACH leak information through packet sizes and compression.

The HEIST attack (https://www.blackhat.com/docs/us-16/materials/us-16-VanGoethem-HEIST-HTT…) is the current leading attack against HTTPS, as it does not require a man-in-the-middle attack or even network traffic sniffing ability and can be very difficult to defend against. HEIST combines knowledge of TCP congestion control algorithms, TLS, HTTPS, and JavaScript to determine the contents of a webpage through timings.

Browsers allow websites to trigger requests to other sites and see when those complete, so that, for example, they can load pictures hosted on a content distribution network or embed videos or tweets from another site. However, one site is not allowed to see the data that the other website returns without permission. This principle, known as the “same-origin policy” (SOP), is why your favorite blog cannot read your Gmail messages, and it is the basis of much of the web’s security. HEIST can work around that in some circumstances by deducing the length of the response using knowledge of TCP congestion control algorithms and time measurements. Once the attackers can determine the length of the responses from the server, they are able to infer information about the responses’ contents.

Most of the time, to increase speed and reduce bandwidth and data costs, websites enable HTTP compression. Compression algorithms like gzip work by identifying commonly repeated byte sequences and replacing them with shorter ones. If a sequence of 11 bytes such as the email subject line “hello world” shows up more than once in the message plaintext, the resulting compressed data stream does not get 11 bytes longer; it may only get one or two bytes longer. By ensuring that certain chosen bytes get embedded into the page the second site is serving, the attacking site can then determine whether those bytes were present elsewhere in the document. For example, an email search page will typically include the terms that were searched for. This can be used to accelerate the attack.
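Here is a tiny, self-contained illustration of that compression side channel, using Python’s standard `zlib` module; the page contents and token are made up. The attacker only reflects a guess into the page and observes the compressed size: a guess that matches the secret compresses to fewer bytes than one that doesn’t.

```python
# Compression length leak (the principle behind CRIME/BREACH-style attacks).
import zlib

secret_page = b"<html>...<div>session_token=XKCD1234</div>...</html>"

def response_length(reflected):
    # The attacker controls `reflected` (e.g. a URL parameter echoed into the
    # page) and can only observe the size of the compressed response.
    return len(zlib.compress(secret_page + reflected))

right = response_length(b"session_token=XKCD1234")  # matches the secret exactly
wrong = response_length(b"session_token=QWERTY98")  # same length, no match

print(right, wrong)  # the correct guess compresses to fewer bytes
assert right < wrong
```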

Most of these attacks (BEAST, CRIME, TIME, BREACH, HEIST, etc.) use the ability to manipulate the application layer to mess with timings or packet sizes, using them as side channels to learn packet contents without having the key or being able to decrypt anything directly. For most of them, the only solutions are also at the application layer. The underlying protocols cannot protect you against yourself.

One approach often suggested to defend against these side channels is to simply add random amounts of padding to the end of messages or to introduce random delays. These tend not to work very well, not only because of the performance they cost, but also because the attacker simply needs to run the attack a few times and average the measurements to make the added padding irrelevant.
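A small simulation (plain Python, with made-up sizes and padding ranges) shows how cheaply averaging defeats random padding:

```python
# Averaging away random padding: repeated measurements expose a 40-byte
# difference even when each response carries up to 512 bytes of random padding.
import random
import statistics

def observed_size(true_size):
    return true_size + random.randint(0, 512)  # server adds random padding

def average_size(true_size, trials=2000):
    return statistics.mean(observed_size(true_size) for _ in range(trials))

print(average_size(1000))  # ~1256 on average
print(average_size(1040))  # ~1296 on average -- still clearly distinguishable
```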

QUANTUM CRYPTOGRAPHY

So far, none of this has touched quantum cryptography or quantum cryptanalysis with quantum computers. To summarize a lot of information: quantum cryptography is mostly useless in practice and does not live up to the hype many have ascribed to it. First, it demands a physically impossible absence of side channels in its physics-based implementations, and second, it only replaces symmetric cryptography, which is the strongest and least troublesome part of any ciphersuite. (See https://sidechannels.cr.yp.to/qkd/holographic-20160326.pdf.)

QUANTUM CRYPTANALYSIS

It is important to understand the difference between quantum cryptography (ciphers using quantum effects) and quantum cryptanalysis (breaking classical ciphers using quantum effects). Quantum-computer-powered cryptanalysis may one day practically attack the asymmetric algorithms RSA and ECC. However, in the quarter century since Shor’s algorithm for quantum factoring was described, scientists have still not been able to build a quantum computer that factors numbers faster than a classical computer, or factors anything beyond trivially small sizes and special cases (such as experiments designed with knowledge of the solution). Nevertheless, many charlatans claim to have quantum computers with large numbers of “qubits,” counting as qubits any particles that display some kind of quantum effect, regardless of whether they are useful for anything productive, like running Shor’s algorithm. So keep in mind: nearly anybody who claims to have a quantum computer now, implying it can do crypto calculations faster than a classical computer, is a charlatan, especially if they’re advertising dozens or hundreds of qubits.

POST-QUANTUM (PQ) ALGORITHMS

With the specter of quantum computers hanging over asymmetric algorithms like RSA and ECC, cryptographers are actively researching different algorithms for asymmetric signing, encryption, and key exchange that would not be vulnerable to quantum computers. There are many different PQ algorithms, including the New Hope algorithm based on ring learning with errors (Ring-LWE) and Supersingular Isogeny Diffie-Hellman (SIDH). PQ algorithms have not been studied as widely as classical algorithms, are in many cases significantly slower, and often require more network traffic, all of which has slowed their adoption.

CONCLUSION

Hopefully this series has helped you understand the basic concepts of applied cryptology, the problems it can solve, and the limitations of what crypto can do for you. You should also be familiar with how to spot many kinds of cryptographic flaws and snake oil, and you should be able to avoid making many of the same mistakes yourself.