Score:12

Current Consensus on Security of Lattice Based Cryptography?

ca flag

In an edit to an answer by user forest, it was mentioned that a new attack has been developed for lattice-based cryptography. I thought lattice-based cryptography was a fairly well-established way of providing quantum-computing-proof security, and that the only thing left to do was to develop a standardised system at NIST.

But the current attack leads me to my question: Is there a current consensus on how secure lattice-based cryptography is? That is, how confident are we that it is a reasonable alternative to typical RSA systems against both quantum and classical methods of attack?

forest avatar
vn flag
The attacks are against LWE schemes, not all lattice-based schemes. Keep an eye on [this thread](https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/Fm4cDfsx65s).
Mark avatar
ng flag
@forest the IDF paper doesn't make it sound like they are confident their technique only works against LWE-based schemes; rather, they looked into Kyber in particular, and it was relatively straightforward to extend the attack to the other LPR schemes.
Daniel S avatar
ru flag
Consensus is a tricky word. I know of people with significantly different views on the topic and many more who would hesitate to venture an opinion due to lack of deep understanding. I'm not sure how best to answer the question while respecting guidelines on [subjectivity](https://stackoverflow.blog/2010/09/29/good-subjective-bad-subjective/).
Score:14
ng flag

The claimed attack does not "break" lattice-based cryptography; it merely further improves known attacks. I'll try to briefly describe the situation.

Asymptotics:

Asymptotically, our best indication is that LWE takes time $2^{cn}$ to solve for some constant $c$. This is to be contrasted with RSA at $2^{(\log N)^{1/3}}$ (this is very imprecise, and should properly use L-notation), and elliptic-curve DLog at $2^{\frac{1}{2}\log_2|\mathbb{G}|}$. This is to say that, even with the improvements to attacks against LWE variants, we believe it to have "better scaling" than assumptions like RSA (which you specifically mention), although scaling similar to ECDlog.

Current estimates of $c$ can vary, but it is $< 1/2$ (I believe perhaps $\approx 0.28$ is current, but haven't checked), so ECDlog is the best on this metric, and then LWE, and then (far in the distance) RSA/finite field DH. It's also not clear how useful this notion is practically --- it says that some parameter set may be secure, but doesn't help pick which one.
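To put rough numbers on these scalings, here is a back-of-the-envelope sketch in Python. It just plugs parameters into the scaling laws above (with $c \approx 0.28$ for LWE, Pollard rho for ECDlog, and the standard GNFS L-notation heuristic with the $o(1)$ term dropped), so treat the outputs as illustrative only, not as a parameter-selection tool:

```python
# Rough, illustrative comparison of the scalings above (not a parameter-
# selection tool). Assumptions: LWE costs ~2^(c*n) with c ~ 0.28, ECDlog
# costs ~sqrt(|G|) via Pollard rho, and RSA costs follow the standard GNFS
# L-notation heuristic with the o(1) term dropped.
from math import exp, log, log2

def lwe_bits(n, c=0.28):
    """Estimated classical security (bits) for LWE in dimension n."""
    return c * n

def ecdlog_bits(group_bits):
    """Pollard rho: about sqrt(|G|) group operations."""
    return group_bits / 2

def rsa_bits(modulus_bits):
    """GNFS heuristic: exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3))."""
    ln_n = modulus_bits * log(2)
    work = exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3))
    return log2(work)

print(f"LWE, n = 700          : ~{lwe_bits(700):.0f} bits")
print(f"EC group of size 2^256: ~{ecdlog_bits(256):.0f} bits")
print(f"RSA-3072              : ~{rsa_bits(3072):.0f} bits")
```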

Concretely:

Even if LWE does take $2^{cn}$ time to solve, this doesn't help us if we don't know $c$. Improved attacks have (theoretically) improved $c$ over time. This can lead to various parameter sets being "broken" (although not in the sense that we can mount practical attacks against them --- in the sense that they no longer meet NIST's minimum standards to be in a certain "security level").
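As an illustration of how a shrinking $c$ interacts with a fixed parameter set, here is a hedged sketch; the dimension and the values of $c$ below are made up purely for the arithmetic, and are not taken from any real scheme or attack:

```python
# Illustrative only: how a (theoretical) improvement in the exponent constant
# c moves the estimated security of a *fixed* dimension relative to a target
# security level. The dimension and the values of c are hypothetical.
def estimated_bits(n, c):
    return c * n

n = 450          # hypothetical LWE dimension
target = 128     # e.g. a NIST "level 1"-style floor

for c in (0.292, 0.280, 0.265):   # attacks improving (c shrinking) over time
    bits = estimated_bits(n, c)
    verdict = "meets" if bits >= target else "falls below"
    print(f"c = {c:.3f}: ~{bits:.0f} bits, {verdict} the {target}-bit target")
```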

This situation is arguably better than the situation that factoring went through with the development of "index calculus" techniques. For ECDlog, the best attacks are still (variants of) the Pollard Rho algorithm, and it has been this way for a long time.

This is all to say that ECDlog concretely has the best-understood security, although I would say that LWE still beats out RSA/other things vulnerable to index calculus techniques. While the NFS was created ~30 years ago, it is still plausible that it could be improved. For example, the road of (small characteristic) finite field DLog went from roughly the same complexity as RSA, to (roughly) $2^{(\log p^n)^{1/4}}$, and then to quasipolynomial time (cf Nadia Heninger, via Matthew Green). This doesn't mean something similar will happen to factoring (it hasn't for ~30 years), but it is perhaps still an uncomfortable possibility.

Practically:

The final way to understand the security of a hardness assumption is to solve really big instances of it. This leads to the questions:

  1. what is the largest set of parameters that have been broken in a real computation?
  2. how much time was invested in this computation?

For RSA, things like RSA-240 (a 795-bit semiprime) have been broken in roughly 1000 core-years (or 6 wall-clock months iirc). In this sense RSA (and finite-field DH for non-small characteristic) is our best-tested (and perhaps best-understood) hardness assumption. Elliptic curves have also had long computations. The records page lists a number of breaks in the range of groups of size $2^{110}$ to $2^{120}$, and wall-clock times are again up to 6 months on the high end.

On this metric, LWE really does lag behind. There are some well-known "lattice challenges" (a la the RSA challenges), for example the TU Darmstadt challenges. These are for plain LWE (Peikert has made some for RLWE, but I don't see a public leaderboard for them). Anyway, for the plain LWE challenges, the highest dimension anyone has ever solved is 85, while dimensions of 500+ (700+ is more common) are used in practical schemes. All the records I looked at used less than 200 wall-clock hours (usually on a handful of processors), so the computations are really much smaller scale than RSA/ECDlog computations.
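A quick back-of-the-envelope comparison of the computational scale involved, using the rough figures quoted above (the core count for the LWE records is an assumption on my part, standing in for "a handful of processors"):

```python
# Back-of-the-envelope scale comparison, using the rough figures above.
# The LWE core count is an assumption ("a handful of processors").
CORE_YEAR_HOURS = 365 * 24            # core-hours in one core-year

rsa240_core_years = 1000              # ~1000 core-years for RSA-240
lwe_wall_hours = 200                  # < 200 wall-clock hours per record
lwe_cores = 8                         # assumed

lwe_core_years = lwe_wall_hours * lwe_cores / CORE_YEAR_HOURS
print(f"RSA-240 effort    : ~{rsa240_core_years} core-years")
print(f"LWE record effort : ~{lwe_core_years:.2f} core-years")
print(f"ratio             : ~{rsa240_core_years / lwe_core_years:,.0f}x")
```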

This is to say that "big attacks" against LWE are at least an order of magnitude smaller than similar attacks against RSA/ECDlog, although this likely massively undersells the size of modern academic attacks against RSA. So both the biggest instance of LWE that has been broken is small (which is a good security indication), and it hasn't had "truly big attacks" mounted against it yet (which is a bad one).


The above ignores many subtleties about hardness assumptions, such as

  1. leakage resilience,
  2. ease of implementation,
  3. ease of side-channel mitigation.

LWE actually does pretty well on all these metrics (as long as you aren't doing signatures), and definitely better than RSA; I don't personally know how it compares to ECDlog though.

That all being said, I personally view LWE as

  1. a massive improvement over RSA/(finite field)DLog, and
  2. perhaps worse than ECDlog.

Definitely if quantum computers weren't a risk, there would be no reason to move away from ECDlog (except for specialty applications like FHE). But they are, so we must.

From this perspective, comparisons shouldn't be between LWE and RSA/ECDlog, as unfortunately the world in which those comparisons make sense is rapidly leaving us. Instead, comparisons should be to other post-quantum assumptions. But among post-quantum assumptions, LWE is really toward the top. There are assumptions that have better security (such as McEliece), but their efficiency is too bad to be usable except in special cases. The main other feasible assumptions are

  1. NTRU, which is also lattice-based, and therefore not really "independent" from attacks on LWE, and
  2. SIDH.

SIDH itself is still an order of magnitude slower than lattice-based constructions iirc (although it is more compact). There are some reasons to argue that its security has "stood the test of time" better than lattice-based assumptions (cf The Case for SIKE), but it's also only ~10 years old iirc, so is itself still a new assumption (and was even more so in 2016, when NIST started its selection process). It is also rather mathematically complex, which may make implementations difficult (but I haven't tried).

This is somewhat long, and it ignores many technical topics (say the hardness of "small moduli" LWR, or claims about the impact of Galois group sizes on RLWE hardness, which have been unresolved for years; these are the topics where "big breaks" seem most plausible, but nothing has happened yet), but I hope it begins to answer your question at least.

Chris Peikert avatar
in flag
Bigger experiments have been run, taking > 1200 wall-clock hours across multiple GPUs, to attack lattice problems in dimension up to 180: https://eprint.iacr.org/2021/141 .
ca flag
Thanks for the answer. Maybe this is just my lack of expertise and you made this clear already, but what about quantum security? How is it known that LWE can't be defeated by some quantum algorithm? Is it just that $2^{cn}$ is too much for quantum speedups? I thought Shor's provided an exponential speedup, so I would have naively thought $2^{cn}$ isn't necessarily sufficient.
Mark avatar
ng flag
@ChrisPeikert do you know if people have run similarly high-dimensional challenges for LWE? Given the recurring claims for how non-tight the SVP reduction is, I would think concrete comparisons to cryptanalysis of LWE would be most appropriate.
Mark avatar
ng flag
@StevenSagona Shor's algorithm (and generalizations thereof) is only known to provide exponential speedups for a *very* limited class of problems (the only "notable" ones I know of within this class are the discrete logarithm problem and factoring). There has been some work extending Shor's to "non-standard lattice problems" (for example [this](https://arxiv.org/abs/2108.11015) and [this](https://dl.acm.org/doi/10.5555/2884435.2884499)), but so far people do not know how to use quantum algorithms to get exponential speedups for lattice problems of cryptographic interest.
Mark avatar
ng flag
@StevenSagona it is worth mentioning that even if we extend the class of problems with exponential quantum speedups past those of cryptographic interest, the class is still quite limited. In particular, my understanding is that they broadly either "break crypto primitives" (usually meaning factoring or dlog) or "simulate quantum systems" (generally with applications in chemistry). I've heard vaguely of some other things (some quantum ML algo maybe?), but also that some of these "new" algorithms have been dequantized. The ones that break crypto can also be used for some
Mark avatar
ng flag
... math computations, namely "computing class groups of number fields". A quantum algorithm for LWE would be exciting for coding theorists though --- it would give efficient (quantum) decoding for random lattices, which are good "error correcting codes" in certain noise channels (AWGN). This is contrasted with the quantum factoring/dlog algos, which would interact with broader society mostly via breaking crypto (unless there is some industrial application of class group computations I do not know).
Chris Peikert avatar
in flag
@Mark Those experiments attack SIS lattices, and hence could be used to attack LWE in various corresponding dimensions, in the usual ways. The tightness of worst-case hardness theorems isn’t relevant to any of this.