
A smaller modulus-to-noise ratio means more security in LWE


Let $\text{Adv}^{\text{DLWE}}_{n,m,q,\sigma}$ be the advantage of an attacker in distinguishing LWE samples from uniform ones, where $m$ is the number of samples, $q$ is the modulus, and $\sigma$ is the standard deviation of the error distribution.
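For concreteness, here is a toy sketch of the two distributions the distinguisher has to tell apart (all parameters and names are illustrative only, and far too small to be secure):

```python
# Toy sketch of the DLWE distinguishing game; illustrative parameters only.
import numpy as np

rng = np.random.default_rng(0)
n, m, q, sigma = 16, 32, 3329, 3.2  # far too small to be secure

def lwe_samples():
    """(A, b) with b = A s + e mod q, for a uniform secret s and Gaussian e."""
    A = rng.integers(0, q, size=(m, n))
    s = rng.integers(0, q, size=n)
    e = np.rint(rng.normal(0, sigma, size=m)).astype(int)
    return A, (A @ s + e) % q

def uniform_samples():
    """(A, b) with b uniform and independent of A."""
    A = rng.integers(0, q, size=(m, n))
    return A, rng.integers(0, q, size=m)

# A distinguisher D receives one of the two; its advantage is
# |Pr[D(lwe_samples()) = 1] - Pr[D(uniform_samples()) = 1]|.
```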

I can't find an explicit expression for this advantage.

Does reducing $q$ and increasing $\sigma$ imply a smaller advantage (and hence better security)?


> I can't find an explicit expression for this advantage.

There isn't one. This is because it is consistent with the state of the art of complexity theory that $\mathsf{P} = \mathsf{NP}$, and therefore $\mathsf{Adv}_{n,m,q,\sigma}^{\mathsf{DLWE}}$ is some polynomial in the sizes of the relevant parameters. It is also consistent with current cryptographic thought that this isn't the case, and that more aggressive things are true, namely that $\mathsf{Adv}^{\mathsf{DLWE}}_{n,m,q,\sigma}$ is almost entirely controlled by $n\log q$, and in particular:

  1. $m$ can be quite large without impacting security, and
  2. $\sigma$ can be quite small (theoretically $\sigma = \Omega(\sqrt{n})$ is generally required, although practically $\sigma = O(1) \approx 8$ is common).

So how does one concretely evaluate this advantage? Generally by (concretely) evaluating the state-of-the-art of known attacks. To this end, there are two main resources:

  1. The "LWE Estimator" of Albrecht et al. is incredibly popular. You can see the initial paper here, and the (more up to date) Sage module here; a usage sketch follows this list.

  2. Existing concrete proposals of lattice-based primitives. For example, NIST PQC finalists Kyber, Saber, and NTRUPrime all include (concrete) analysis justifying their parameter choices. For heavier primitives, the Homomorphic Encryption Standard contains tables of suggested parameters, as well as summaries of attacks that guided the constructions of these tables.
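As an example of point 1, a typical call to the lattice-estimator Sage module looks roughly like the following. The API evolves, so the parameter and function names below are an assumption based on the repository's interface at the time of writing and may have changed:

```python
# Rough sketch of invoking the LWE Estimator (github.com/malb/lattice-estimator),
# run inside Sage. Names are assumptions based on the current API; check the
# repository's documentation for the up-to-date interface.
from estimator import LWE, ND

params = LWE.Parameters(
    n=1024,                       # secret dimension
    q=2**26,                      # modulus
    Xs=ND.UniformMod(3),          # ternary secret distribution
    Xe=ND.DiscreteGaussian(3.19), # error with standard deviation ~3.19
)
LWE.estimate(params)  # prints cost estimates for the known attack families
```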

That all being said...

> Does reducing $q$ and increasing $\sigma$ imply a smaller advantage (and hence better security)?

All else being equal, the answer is yes. Given an LWE instance $(\mathbf{A}, \vec b)$, one can modulus switch from $q \mapsto q'$ for $q' < q$ (the analysis is cleaner when $q' \mid q$). This roughly maps the standard deviation of the error from $\sigma \mapsto \frac{q'}{q}\sigma < \sigma$. One could then increase this error to some standard deviation $\sigma' > \sigma > \frac{q'}{q}\sigma$ by adding an appropriate Gaussian.
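As a sanity check, here is a toy numeric version of that modulus switch. It is a sketch under the usual assumption that the secret is short (ternary here); for a uniform secret one would first move to a short-secret form, since rounding $\mathbf{A}$ introduces a term involving the secret. All parameters are illustrative:

```python
# Toy modulus switch q -> q' with q' | q; illustrative parameters only.
import numpy as np

rng = np.random.default_rng(1)
n, m, q, qp, sigma = 16, 10_000, 2**20, 2**10, 4096.0  # qp divides q

A = rng.integers(0, q, size=(m, n))
s = rng.integers(-1, 2, size=n)  # short (ternary) secret; needed below
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)
b = (A @ s + e) % q

# Scale by q'/q and round. Rounding A adds a term <round-off, s>, which is
# why the secret must be short for the switch to behave as described.
Ap = np.rint(A * qp / q).astype(np.int64) % qp
bp = np.rint(b * qp / q).astype(np.int64) % qp

# Residual error of the switched instance, centered mod q'.
ep = (bp - Ap @ s) % qp
ep = np.where(ep > qp // 2, ep - qp, ep)
print(ep.std(), qp / q * sigma)  # empirical std vs. (q'/q)*sigma

# To land at a target sigma' > (q'/q)*sigma, add fresh Gaussian noise of
# standard deviation sqrt(sigma'**2 - (qp/q * sigma)**2) to bp.
```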

This is to say that there is a relatively simple reduction $\mathsf{DLWE}_{n, m, q, \sigma}\leq \mathsf{DLWE}_{n, m, q', \sigma'}$ for $\sigma' > \sigma$ and $q' \mid q$ (the case $q' \nmid q$ is not much harder, but you have to deal with some "rounding error"), so the advantage against the $(q', \sigma')$ parameters is at most the advantage against the $(q, \sigma)$ parameters.

C.S.: Thank you @Mark. Is there any paper where I can find a reduction between DLWE for different parameters?
Mark: They are spread across a few papers. The two fundamental ideas are: first, given a (fixed) number of LWE samples, one can cheaply generate more (of slightly larger error), which justifies the claim that $m$ does not impact $\mathsf{Adv}$ much; see section 5 of [this](https://eprint.iacr.org/2020/337.pdf) for some details (the result predates that paper, though). The second is the analysis of modulus switching, in [this](https://web.eecs.umich.edu/~cpeikert/pubs/lwehard-old.pdf).
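As a toy illustration of the first idea (a caricature only, not the cited papers' exact procedure; the actual argument also needs $r^T \mathbf{A}$ to be statistically close to uniform, via a leftover-hash-lemma-style argument), one can recombine the given samples with short random coefficients:

```python
# Caricature of "more samples from few": combine the m given LWE samples
# with a short random vector r; the new error r.e is only ||r|| times larger.
# Illustrative parameters and names only.
import numpy as np

rng = np.random.default_rng(2)
n, m, q, sigma = 32, 256, 2**15, 3.2

A = rng.integers(0, q, size=(m, n))
s = rng.integers(0, q, size=n)
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)
b = (A @ s + e) % q

def fresh_sample():
    """One recombined sample (a', b') with b' = <a', s> + r.e mod q."""
    r = rng.integers(-1, 2, size=m)  # short ternary combining vector
    return (r @ A) % q, (r @ b) % q

ap, bp = fresh_sample()
err = (bp - ap @ s) % q
err = err - q if err > q // 2 else err
print(err)  # distributed with standard deviation roughly sqrt(2m/3)*sigma
```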