LWE and extended trapdoor claw free functions

Let $q \geq 2$ be a prime integer. Consider two functions, given by:

$$f(b, x) = Ax + b \cdot u + e~~~(\text{mod}~q),$$ $$g(b, x) = Ax + b \cdot (As + e') + e~~~(\text{mod}~q),$$

where we have: \begin{align} b &\in \{0, 1\}, \\ x &\in \mathbb{Z}_{q}^{n}, \\ s &\in \mathbb{Z}_{q}^{n}, \\ A &\in \mathbb{Z}_{q}^{m \times n}, \\ e' &\in \mathbb{Z}_{q}^{m}, \\ u &\in \mathbb{Z}_{q}^{m}, \end{align} so that $Ax$, $b \cdot u$, $b \cdot (As + e')$, and $e$ all lie in $\mathbb{Z}_{q}^{m}$.

$e$ is sampled from the distribution:

\begin{equation} D_{\mathbb{Z}_q^m, B'}(e) = \frac{e^{-\pi \|e\|^{2} / B'^{2}}}{\sum_{v \in \mathbb{Z}_q^{m},\, \|v\| \leq B'} e^{-\pi \|v\|^{2} / B'^{2}}}, \end{equation} supported on $\|e\| \leq B'$, where $$B' = \frac{q}{C_{T} \sqrt{mn \log q}},$$ and $C_{T}$ is a fixed constant.
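To make the objects concrete, here is a small toy sketch in Python of sampling from $f$ and $g$. The parameter sizes, the value of $C_T$, and the rejection sampler are my own illustrative assumptions, not the paper's instantiation:

```python
import math
import random

# Toy parameters -- illustrative only; real instantiations use much larger n, m, q.
n, m, q = 4, 8, 97
C_T = 1.0                                      # stand-in for the fixed constant C_T
B_prime = q / (C_T * math.sqrt(m * n * math.log(q)))

def sample_e():
    """Rejection-sample e with density proportional to exp(-pi*||e||^2 / B'^2),
    truncated to ||e|| <= B' (a crude sketch of D_{Z_q^m, B'})."""
    while True:
        # per-coordinate Gaussian with variance B'^2 / (2*pi), then truncate by norm
        e = [round(random.gauss(0.0, B_prime / math.sqrt(2 * math.pi)))
             for _ in range(m)]
        if math.sqrt(sum(c * c for c in e)) <= B_prime:
            return e

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) % q for row in A]

# One random instance: A in Z_q^{m x n}, secret s, masks u and e'.
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
s = [random.randrange(q) for _ in range(n)]
u = [random.randrange(q) for _ in range(m)]
e_prime = sample_e()

def f(b, x):
    e = sample_e()                              # fresh noise per evaluation
    return [(ax + b * ui + ei) % q for ax, ui, ei in zip(matvec(A, x), u, e)]

def g(b, x):
    lwe = [(asi + epi) % q for asi, epi in zip(matvec(A, s), e_prime)]  # As + e'
    e = sample_e()
    return [(ax + b * yi + ei) % q for ax, yi, ei in zip(matvec(A, x), lwe, e)]
```

Note that for $b = 0$ the two functions agree; they differ only in whether the mask added for $b = 1$ is uniform ($u$) or an LWE vector ($As + e'$).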


The functions above are defined in equation 29 of this paper (arXiv:2002.12814). The paper states that, in the worst case over $A$, $u$, and $e'$, and assuming LWE is hard for polynomial-time classical algorithms, it is hard to distinguish $f$ from $g$ given $A$ and polynomially many "samples" from either function (outputs are samples because $e$ is drawn from a distribution, so $f$ and $g$ are randomized).

The LWE reduction is also claimed to hold in the average case.

The paper also states that these two functions form a family of "extended trapdoor claw-free functions" (Definitions 5, 6, and 7).

As a reference for the proofs of the facts above, the paper cites a second paper (Lemma 9.3). However, while proving Lemma 9.3, that second paper in turn references other papers. The proof was not clear to me in any of them.


I wanted to ask how to reduce the task of distinguishing between $f$ and $g$ to LWE. I also wanted to ask about the importance of the function being "extended trap-door claw free" in that reduction.

From my understanding, the hardness of LWE says that, given $A$, it is hard to distinguish uniformly random vectors from samples of the form $As + e'$. I am not sure how $b$, $x$, or the other parameters factor into this. Is that where the extended-trapdoor-claw-free property is needed?
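The decision-LWE game described here can be sketched as follows; the sizes and the bounded-uniform toy noise are my assumptions, standing in for a proper discrete Gaussian:

```python
import random

q, n, m = 97, 4, 8  # toy sizes, illustrative only

def lwe_world(secret, noise_bound=2):
    """One sample batch (A, b') with b' = As + e' for small e'."""
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.randrange(-noise_bound, noise_bound + 1) for _ in range(m)]
    b = [(sum(a * si for a, si in zip(row, secret)) + ei) % q
         for row, ei in zip(A, e)]
    return A, b

def uniform_world():
    """One sample batch (A, b') with b' uniformly random."""
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    b = [random.randrange(q) for _ in range(m)]
    return A, b
```

Decision LWE asserts that no efficient algorithm can tell, given $(A, b')$, which of these two worlds produced it.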


I think the following reduction is intended:

Let $D$ be a distinguisher for $f, g$. Build a distinguisher $D'$ for LWE that:

  1. takes as input an LWE instance $(A, b')$,
  2. creates the oracle $h_{A, b'}(b, x) = Ax + b \cdot b' + e \bmod q$ (with fresh $e$ drawn per query),
  3. simulates $D$ with oracle access to $h_{A, b'}$, and returns whatever $D$ returns.

It seems relatively straightforward that this adversary should work, as:

  1. when $b'$ is uniformly random, $h_{A,b'}$ is distributed exactly as $f$ (with $u = b'$),
  2. when $b' = As + e'$, $h_{A,b'}$ is distributed exactly as $g$.
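In code, the wrapper $D'$ from the steps above might look like the following sketch; the sizes, the bounded-uniform toy noise (standing in for the paper's $D_{\mathbb{Z}_q^m, B'}$), and the dummy distinguisher are all my assumptions:

```python
import random

q, n, m = 97, 4, 8  # toy sizes, illustrative only

def D_prime(A, b_prime, D, noise_bound=2):
    """LWE distinguisher built from a distinguisher D for f vs g.
    D gets oracle access to h_{A,b'}(b, x) = Ax + b*b' + e (mod q)."""
    def h(b, x):
        # fresh toy noise per query, standing in for the truncated Gaussian
        e = [random.randrange(-noise_bound, noise_bound + 1) for _ in range(m)]
        return [(sum(a * xi for a, xi in zip(row, x)) + b * bp + ei) % q
                for row, bp, ei in zip(A, b_prime, e)]
    return D(h)  # whatever D guesses (f-world vs g-world) is our LWE guess

# Usage with a dummy D that only checks the oracle's output shape:
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
b_prime = [random.randrange(q) for _ in range(m)]
guess = D_prime(A, b_prime, lambda h: len(h(1, [0] * n)) == m)
```

A real $D$ would query $h$ polynomially many times and output its guess for which function it is talking to; $D'$ simply forwards that guess.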

The advantage bound you get should end up being independent of the particular LWE instance $(A, b')$ used in the reduction. I imagine this is where the "worst case over $A, u, e'$" comes from, but I haven't really heard that phrasing before, so I can't say for certain this is what the authors intend.

It's worth mentioning that I see nowhere that you need $f, g$ to be extended trapdoor claw-free functions (or anything of this sort). Instead, all that is happening is that $f$ and $g$ each arise from efficiently post-processing a sample $(A, b')$ (one in the world where $b'$ comes from the LWE distribution, one in the world where it is uniformly random). If this post-processing could yield distinguishable functions, it would immediately imply an attack on LWE.

BlackHat18:
When $b' = As + e$, $h(b, x) = Ax + b(As + e) + e'$. But, for equation 29 of this paper (https://arxiv.org/pdf/2002.12814.pdf), $g(b, x) = Ax + b(As + e') + e$. How are these the same then?
Mark:
I swapped $e'\leftrightarrow e$, which was a typographic issue. Still though, the main point is that if any two distributions $D_0\approx_c D_1$ are computationally indistinguishable, so must be $h(D_0)\approx_c h(D_1)$ for any efficiently computable function $h$. The result follows by choosing the right $h$.
BlackHat18:
I am a bit confused about something. Let $(A, b')$ be an input LWE instance. In one case $b'$ is uniformly random. But in the other case, isn't $b' = As + e$ for some Gaussian error $e$? However, here, in the second case, $b' = As + e'$, and $e'$ is not required to be Gaussian; it is an arbitrary $e' \in \mathbb{Z}_{q}^{m}$.
Mark:
@BlackHat18 That doesn't matter. You're right that the construction of $f, g$ works for arbitrary $e'$, but when used in the reduction to LWE you have that $e'$ is Gaussian.