Score:2

How is it legal to use a rounded Gaussian for LWE?


As far as I understand, in Regev's initial paper the error distribution was first constructed as follows:

For $\alpha \in \mathbb{R}^+$, the distribution $\Psi_\alpha$ on $\mathbb{T} = \mathbb{R}/\mathbb{Z}$ is obtained by sampling a normal variable with mean $0$ and standard deviation $\frac{\alpha}{\sqrt{2\pi}}$ and reducing it modulo $1$:
$$\forall r \in [0,1): \quad \Psi_\alpha(r) := \sum_{k=-\infty}^{\infty} \frac{1}{\alpha} \exp\left(-\pi \left(\frac{r-k}{\alpha}\right)^2\right).$$

Then rounded in the following way:

$$\forall i \in \{0, 1, \dots, p-1\}: \quad \bar{\Psi}_\alpha(i) := \int_{(i-1/2)/p}^{(i+1/2)/p} \Psi_\alpha(x)\,dx,$$
i.e., sample $x \leftarrow \Psi_\alpha$ and output $\lfloor p \cdot x \rceil \bmod p$.

Using this distribution, the reduction in the theorem below can be achieved:

Theorem (Regev). Let $n, p$ be integers and $\alpha \in (0,1)$ be such that $\alpha p > 2\sqrt{n}$. If there exists an efficient algorithm that solves $\mathsf{LWE}_{p, \bar{\Psi}_\alpha}$, then there exists an efficient quantum algorithm that approximates the decision version of the shortest vector problem ($\mathsf{GapSVP}$) and the shortest independent vectors problem ($\mathsf{SIVP}$) to within $\tilde{O}(n/\alpha)$ in the worst case.

I don't understand how so many applications in the literature can use the LWE problem with the usual rounded error distribution:

$$\Psi_s := \lfloor D_s \rceil, \quad \text{i.e., sample } \mathbf{x} \leftarrow D_s \text{ over } \mathbb{R}^n \text{ and round each coordinate to the nearest integer,}$$

and still ensure security (i.e., a solid reduction), where $D_s(\mathbf{x}) = \frac{1}{s^n} \exp \left( \frac{-\pi \|\mathbf{x}\|^2}{s^2} \right)$ is the continuous Gaussian distribution.

Chris Peikert:
The “usual rounded distribution” $\Psi_s$, reduced modulo $p$ (which happens when creating LWE samples), is *exactly* $n$ independent samples from the corresponding “scaled up and rounded” distribution $\bar{\Psi}_{s/p}$ over $\mathbb{Z}_p$. This can be seen by just working through the definitions, and noting that $D_s$ is a product distribution over all its $n$ coordinates.
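
To see this concretely, here is a quick numerical sanity check (my own sketch, not from the thread; the parameters $p = 17$, $s = 4$ are arbitrary). It compares a histogram of $\lfloor D_s \rceil \bmod p$ against one of the scaled-up-and-rounded Gaussian of width $s/p$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, s, N = 17, 4.0, 10**6
# D_s(x) = exp(-pi x^2 / s^2) / s is a normal density with std dev s / sqrt(2*pi)
sigma = s / np.sqrt(2 * np.pi)

# Psi_s reduced mod p: sample the continuous Gaussian over R, round, reduce.
a = np.round(rng.normal(0.0, sigma, N)).astype(np.int64) % p

# "Scaled up and rounded": width-(s/p) Gaussian reduced mod 1, scaled by p,
# rounded, then reduced mod p.
x = rng.normal(0.0, sigma / p, N) % 1.0
b = np.round(p * x).astype(np.int64) % p

ha = np.bincount(a, minlength=p) / N
hb = np.bincount(b, minlength=p) / N
print(np.abs(ha - hb).max())  # small (~1e-3): the histograms agree up to sampling noise
```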
C.S.:
@ChrisPeikert Thanks! But how do we deal with the sum over $k$ going from $-\infty$ to $+\infty$ inside the integral?
Chris Peikert:
The infinite sum just captures the fact that the (continuous) normal distribution is reduced “mod 1”; this corresponds to reduction mod p after the scaling is applied. The bounded integral together with the infinite sum yields an infinite integral over all the reals.
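
Spelling out that last step (my own elaboration, using only the definitions above): swapping the sum and the integral and substituting $u = x - k$ gives
$$\bar{\Psi}_\alpha(i) = \int_{(i-1/2)/p}^{(i+1/2)/p} \sum_{k=-\infty}^{\infty} \frac{1}{\alpha} \exp\left(-\pi \left(\frac{x-k}{\alpha}\right)^2\right) dx = \sum_{k=-\infty}^{\infty} \int_{(i-1/2)/p-k}^{(i+1/2)/p-k} \frac{1}{\alpha} \exp\left(-\pi \left(\frac{u}{\alpha}\right)^2\right) du,$$
which is the probability that a single unreduced Gaussian of width $\alpha$ lands in $\bigcup_{k \in \mathbb{Z}} \left[\frac{i-1/2}{p}+k, \frac{i+1/2}{p}+k\right]$, exactly the preimage of the residue $i$ under scaling by $p$, rounding, and reducing mod $p$. As $i$ ranges over $\mathbb{Z}_p$ these preimages tile all of $\mathbb{R}$, which is the sense in which the bounded integral and the infinite sum combine into one integral over the reals.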
Score:1

I think this is just an artifact of Regev representing values in $\mathbb{T}$, via the subset $\{0, 1/q, \dots, (q-1)/q\} \cong \mathbb{Z} / q\mathbb{Z}$, rather than in $\{0, 1, \dots, q-1\}$ directly. There are still a few things to mention though:

First, the modern consensus is that for the hardness of $\mathsf{LWE}$ [1], the particular error distribution you use does not matter much. In particular, common choices (due to ease of sampling) are "centered binomials", or even the uniform distribution over a certain range. For examples, see the NIST PQC finalists.
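
As an illustration (my own sketch, not from the answer): a centered binomial sample with parameter $\eta$ is just the difference of two sums of $\eta$ fair bits, so it can be drawn from a handful of uniformly random bits. Kyber, for instance, uses $\eta \in \{2, 3\}$:

```python
import secrets

def sample_cbd(eta: int) -> int:
    """Centered binomial sample: difference of two sums of eta fair bits.

    The output lies in [-eta, eta], with mean 0 and variance eta / 2.
    """
    a = sum(secrets.randbits(1) for _ in range(eta))
    b = sum(secrets.randbits(1) for _ in range(eta))
    return a - b

# e.g. an error vector of length 256 with eta = 2, as in Kyber-style schemes
errors = [sample_cbd(2) for _ in range(256)]
```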

Of course, the distributions we use aren't always justified by the worst-case to average-case reductions. So what gives? We (roughly speaking) have two choices in how to choose parameters:

  1. Cryptanalyze $\mathsf{SIVP}_\gamma$ directly, find safe parameters, and then shove them through the $\mathsf{SIVP}_\gamma \leq \mathsf{LWE}$ reduction, or

  2. Cryptanalyze $\mathsf{LWE}$ directly, and choose those parameters.

The second is more popular by far. There are a few reasons for this:

  1. The reduction $\mathsf{SIVP}_\gamma \leq \mathsf{LWE}$ is (concretely) not very "tight". In particular, it leads to a fairly large inflation of parameters. This is mentioned in a few places, for example in *Another Look at Tightness II*. Of course, this often happens when one concretely analyzes a proof that was written with no care for constants. There has been some follow-up work, for example this, this, and this, but nothing too mainstream. Of these works, I have only really looked into the first; IIRC it found that by inflating $(n, \log_2 q, \sigma)$ by a factor of $\approx 2$ compared to what people use in practice, you can get concrete security from the worst-case to average-case reduction.

  2. It's not clear that cryptanalyzing $\mathsf{SIVP}_\gamma$ (in the worst case) is any easier than cryptanalyzing $\mathsf{LWE}$ (in the average case). This is simply because there isn't an obvious candidate for a worst-case instance of $\mathsf{SIVP}_\gamma$! Of course, one could always use an average-case analysis as a lower bound on a worst-case analysis, but then why average-case analyze a different problem than the one you actually care about?

This is all to say that the $\mathsf{SIVP}_\gamma \leq \mathsf{LWE}$ reduction is (mostly) seen as evidence that $\mathsf{LWE}$ is (qualitatively) the "right distribution" to be looking at. There are many distributions of random instances one could look at, and some might be "structurally weak" (for examples of this, see the work on "weak Poly-LWE instances", or note that RSA defined using a random $n$-bit integer, rather than a random $n$-bit semiprime, would be the "wrong distribution" to use for structural reasons). The $\mathsf{SIVP}_\gamma \leq \mathsf{LWE}$ reduction can be interpreted as pinning down a particular distribution for $\mathsf{LWE}$, which we then quantitatively parameterize via (direct) cryptanalysis.

[1] Note that for signatures, the distribution does matter, but this is because of particulars of the constructions/security proofs of the signatures, and not because we abstractly think $\mathsf{LWE}$ is hard only when you use discrete Gaussians rather than rounded Gaussians/uniform random variables or whatever.
