Score:38

Is AES-128 quantum safe?

cn flag

I've lately been reading some contradictory messages regarding the quantum resistance of AES-128. First, there are blog posts by Ericsson people like these:

Can quantum attackers break AES-128?

No. NIST estimates that a quantum computer breaking RSA-2048 in a matter of hours could be built by 2030 for about a billion dollars. This means that NIST estimates early quantum computers to have a clock rate of a few MHz. Such a quantum computer (a single 20 MHz quantum core) running Grover’s algorithm would need $10^{11}$ years (a hundred billion years) to break AES-128. Even a cluster of $10^9$ quantum cores (the world's largest public classical supercomputer has $10^7$ cores) with a clock rate of 2 THz would need 106 years (a million years) to break AES-128.

Considering all this, Grover’s algorithm does not pose any apparent threat to symmetric cryptography. Some years ago, there was a common conception that Grover’s algorithm required symmetric key sizes to be doubled – requiring use of AES-256 instead of AES-128. This is today considered a misconception – NIST, for example, now states that AES-128 will likely remain secure for decades to come, despite Grover’s algorithm [5].

In fact, one of the security levels in the NIST PQC standardization is equivalent to that of AES-128. This means that NIST thinks it is relevant to standardize parameters for PQC that are as strong under quantum attacks as AES-128. There could, of course, be other reasons why a longer key is needed, such as compliance, and using a longer key only has a marginal effect on performance.

In summary, our most important symmetric cryptographic tools (AES, SNOW 3G, SHA2, SHA3 and so on) remain secure against quantum computers as they are. This also applies to the authentication, key generation, encryption and integrity in 3G, 4G and 5G that rely purely on symmetric cryptography.

Sources: here and here

Also, some folks at Cloudflare mention:

This all is a long-winded way of saying that security level 1 seems solid for the foreseeable future.

source here

And finally, NIST also mentions regarding the evaluation criteria for PQC:

NIST will base its classification on the range of security strengths offered by the existing NIST standards in symmetric cryptography, which NIST expects to offer significant resistance to quantum cryptanalysis. In particular, NIST will define a separate category for each of the following security requirements (listed in order of increasing strength):

  • Any attack that breaks the relevant security definition must require computational resources comparable to or greater than those required for key search on a block cipher with a 128-bit key (e.g. AES128)

It is worth noting that the security categories based on these reference primitives provide substantially more quantum security than a naïve analysis might suggest. For example, categories 1, 3 and 5 are defined in terms of block ciphers, which can be broken using Grover’s algorithm, with a quadratic quantum speedup. But Grover’s algorithm requires a long-running serial computation, which is difficult to implement in practice. In a realistic attack, one has to run many smaller instances of the algorithm in parallel, which makes the quantum speedup less dramatic.

source: here

So, where is the truth? For many years I had the impression that we would definitely need to double the key sizes in symmetric crypto to remain quantum safe. Is this really a misconception?

kelalaka avatar
in flag
The most dangerous attack against AES-128 is [the multi-target attack](https://crypto.stackexchange.com/q/75880/18298). Also, Squamish [gave a calculation about the sequential calls of Grover's algorithm](https://crypto.stackexchange.com/a/67677/18298), with an indication that the setup time is not clear for sequential calls of Grover's algorithm. As many have indicated, use 256-bit ciphers; see [Has AES-128 been fully broken?](https://crypto.stackexchange.com/a/76746/18298).
Score:47
ru flag

All of the above sources are correct; there is no realistic threat to AES from Grover's algorithm.

The headline statement of $2^{64}$ quantum operations is often misinterpreted because people think of $2^{64}$ operations as computationally feasible. What they do not realise is that whereas $2^{64}$ operations performed in parallel are feasible for modern classical computers, $2^{64}$ operations performed in serial are not. The other thing to know is that Grover's algorithm is highly non-parallelisable. If we deploy $2^c$ computational units in parallel to search using Grover's algorithm, it completes in time proportional to $2^{(128-c)/2}$, so that using 256 quantum computers only reduces the runtime by a factor of 16, using 1024 quantum computers only reduces it by a factor of 32, and so forth.
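
As a rough back-of-envelope sketch (my own illustration only; the helper below just evaluates $\sqrt{2^{128}/P}$ and ignores all constant factors and the considerable cost of each individual Grover iteration), here is what that limit on parallel speedup looks like:

```python
import math

def serial_grover_iterations(key_bits: int, machines: int) -> float:
    """Approximate serial Grover iterations per machine: sqrt(2**key_bits / machines).

    Illustrative only: constant factors and per-iteration circuit depth are ignored.
    """
    return math.sqrt(2 ** key_bits / machines)

baseline = serial_grover_iterations(128, 1)  # about 2**64 iterations on a single machine
for machines in (1, 256, 1024):
    per_machine = serial_grover_iterations(128, machines)
    speedup = baseline / per_machine  # equals sqrt(machines)
    print(f"{machines:>5} machines: ~2^{math.log2(per_machine):.0f} serial iterations each "
          f"(speedup over a single machine: {speedup:.0f}x)")
```

Even a billion machines only buys a speedup of roughly $\sqrt{10^9} \approx 32{,}000$, leaving each machine with about $2^{49}$ serial iterations to perform.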

Now consider that quantum computers currently operate at kHz clock rates, compared to classical computers that might run at GHz clock rates, and we see there is a huge gulf to overcome. See the Ericsson numbers for examples (note that the Ericsson site has a typo: 106 years should read $10^6$ years).
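
To get a feel for the wall-clock implications, here is another deliberately crude sketch of my own. It assumes, very generously, one full Grover iteration per clock tick; in reality each iteration is a deep serial AES circuit plus error-correction overhead, which is why careful estimates such as Ericsson's come out many orders of magnitude larger still:

```python
import math

SECONDS_PER_YEAR = 3.156e7

def years_to_search(key_bits: int, machines: int, iterations_per_second: float) -> float:
    """Wall-clock years for parallel Grover at the stated (wildly optimistic) iteration rate."""
    serial_iterations = math.sqrt(2 ** key_bits / machines)
    return serial_iterations / iterations_per_second / SECONDS_PER_YEAR

# Rates are illustrative only; one Grover iteration per tick is far beyond anything realistic.
for machines, rate, label in [(1, 1e3, "one machine at 1 kHz"),
                              (1, 2e7, "one machine at 20 MHz")]:
    print(f"{label}: ~{years_to_search(128, machines, rate):.0e} years")
```

Even under this absurdly generous model a kHz-rate machine needs hundreds of millions of years, and the gap between the naive 20 MHz figure here (tens of thousands of years) and Ericsson's $10^{11}$ years shows how much the per-iteration circuit depth and error-correction overhead, ignored above, contribute in their estimate.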

One might ask whether an improved algorithm could outperform Grover's algorithm. However, Christof Zalka has shown that Grover's algorithm (and in particular its non-parallel nature) achieves the best possible complexity for unstructured search.

Jimakos avatar
cn flag
Great answer, many thanks. How come, then, that in so many sources, websites, and blogs we often see this misconception? Pure lack of knowledge?
Daniel S avatar
ru flag
@Jimakos Pretty much so, but TBH quantum information and practical quantum engineering are difficult topics. The headline statement is very easy to quote and the community is a bit too used to measuring security purely in terms of the number of operations required for an attack.
samuel-lucas6 avatar
bs flag
What about [batch attacks](https://blog.cr.yp.to/20151120-batchattacks.html)?
Daniel S avatar
ru flag
@samuel-lucas6 See https://arxiv.org/pdf/2005.05911.pdf
poncho avatar
my flag
Actually, if you make *extremely* optimistic (but not totally implausible) assumptions about the eventual speed/scalability of Quantum Computers, AES-128 becomes breakable. For example, if we assumed a QC that can evaluate an AES computation in 10 nsec, and we have 30 million such QCs that we can dedicate for a year, we can search the entire search space in a year. Is this plausible in the next 10 years? No. In the next 50? Not likely, but just possible. Of course, this assumes we can devote this massive amount of computation to recovering a single key; the average person need not worry.
Daniel S avatar
ru flag
@poncho I don't want to put a limit on what the engineers might achieve, but your extreme optimism extends along at least 4 axes: clock rate, economic affordability, decoherence time and gate fidelity (and I do think that the assumptions are extreme for all four). I'll watch the tech, but I think that this is beyond my bounds of foreseeability.
poncho avatar
my flag
@DanielS: I did say it was **extremely** optimistic, and wouldn't happen for the next 50 years...
gs flag
I would say "not totally impossible within known physics" would describe the situation better than "extremely optimistic".
cn flag
I'd be curious to hear your take on my question at https://quantumcomputing.stackexchange.com/questions/28968/why-do-people-say-that-grovers-algorithm-does-not-parallelize-well
Fabian Röling avatar
cn flag
I'm surprised that there is such a lower bound on the time requirement. Couldn't you, in the worst case, let a really big quantum computer just simulate a regular computer doing a password check? Even if password checking takes 1 second normally and the speed difference is kHz vs. GHz, it should only take 2 weeks. And because of superpositions, you could check all possibilities at once, right? I guess I have a misunderstanding of quantum computers at some step of my logic, but I don't know where.
Daniel S avatar
ru flag
@FabianRöling It's very quick to put a QC into a superposition of all keys and generate all of the outcomes associated with those keys. However, when we try to get an output from that state, we can only measure and obtain a key/output at random, which is almost surely not the one we're looking for. Instead, we have to iterate an amplification process that makes it overwhelmingly likely that we get the key/output we're looking for. It's that iterated amplification process that takes all the time.
Fabian Röling avatar
cn flag
@DanielS OK, but why, if a regular computer can do it so fast?
forest avatar
vn flag
@FabianRöling Regular computers use extremely small MOSFETs, which can switch reliably at billions of times per second. They've been improved over many decades from larger transistors that could only switch in the low MHz range, which themselves replaced vacuum tubes that could switch at mere kHz, which themselves replaced electromechanical relays that could only switch at hundreds of Hz. Quantum computers are much newer, and we simply don't know how to make qubits that can operate reliably at high speeds. Regular computers have been getting optimized for a very, very long time.
Score:0
bz flag

I am NOT an expert on computer science or quantum computing.

@Fabian Röling

Regular computers are deterministic. You know you'll be able to brute-force the key and get it right eventually, even if it takes a gazillion years to do so.

With QCs, the situation is rather different. Thanks to the superposition of quantum states, you can put all possible keys into superposition at once along with the corresponding outputs.

HOWEVER!

You will NOT be able to select the right key/output combination at will; instead, you will end up with a random one. The reason is that measuring the quantum state causes it to collapse randomly into one of the possible outcomes.
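
As a toy illustration (a purely classical stand-in, not a real quantum simulation; the names below are just for the demo), measuring a uniform superposition over all $n$-bit keys is no better than picking a key uniformly at random:

```python
import random

# Toy classical stand-in: "measuring" a uniform superposition over all n-bit keys
# just yields one key uniformly at random, so the chance of hitting the right key
# in a single measurement is 1 / 2**n.
n = 20                                 # tiny n so the demo is instant; AES uses n = 128 or 256
secret_key = random.getrandbits(n)
measured_key = random.getrandbits(n)   # models one measurement of the uniform state
print("hit the secret key?", measured_key == secret_key)
print("success probability per measurement:", 1 / 2 ** n)
```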

Also check out Shor's algorithm and the way it uses QCs to eliminate wrong guesses when breaking RSA keys.

The absence of a similar algorithm for AES is what makes it secure in the post-quantum setting, I think.

Edit: It seems my answer was not quite right. I have posted a better version below.

Implementing Grover's algorithm (and possible alternatives to it) is pretty much non-parallelisable on QCs.

Thus, with clock rates at the kHz level, a large-scale implementation seems like a scenario that lies very far in the future...

I believe that exploiting a weakness in the algorithm itself is more of a possibility than breaking the keys by brute-force search.

TL;DR: AES-128 will hold its security for quite some time.

poncho avatar
my flag
Actually, Grover's algorithm will find the right key (with high probability); the reason we believe AES-128 is safe is that Grover's algorithm requires a huge number of sequential steps - too many to be performed in any sort of timely manner