Score:1

How to prove the reduction from decision to search LWE?


I am new to cryptography and am trying to understand the concepts of LWE (learning with errors) formally. I will state my understanding of the definitions, which might be incorrect.

Definitions According to My Understanding

Let $R$ be a finite unital commutative ring equipped with a probability measure $\mu$ (whose $\sigma$-algebra is discrete). $R$ is said to satisfy the worst-case search LWE assumption if for every polynomial $n$ and every polynomial-time randomized algorithm $S$, the map \begin{equation} m\mapsto \min_{s\in R^m} p(m,s)\text{,} \end{equation} where \begin{equation} p(m,s) = \Pr\{(A,e)\in R^{m\times n(m)} \times R^{n(m)}: S(-sA+e,A)=s\}\text{,} \end{equation} is negligible. In the above equation, $A$ is sampled from the uniform probability measure, and $e$ is sampled from the product measure $\mu^{n(m)}$.

$R$ is said to satisfy the worst-case decision LWE assumption if for every polynomial $n$ and every polynomial-time randomized algorithm $D$, the map \begin{equation} m\mapsto \min_{s\in R^m} |p_1(m,s)-p_2(m)|\text{,} \end{equation} where \begin{equation} \begin{split} p_1(m,s) &= \Pr\{(A,e)\in R^{m\times n(m)}\times R^{n(m)}: D(-sA+e,A)=1\},\\ p_2(m) &= \Pr\{(b,A)\in R^{n(m)}\times R^{m\times n(m)}: D(b,A)=1\}\text{,} \end{split} \end{equation} is negligible. In the above equations, $A$ and $b$ are sampled from the uniform probability measures, and $e$ is sampled from the product measure $\mu^{n(m)}$.
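To check that I am parsing the definitions correctly, here is how I picture sampling the two distributions. This is purely illustrative: I instantiate $R=\mathbb{Z}_q$ for a prime $q$ and take $\mu$ to be a rounded Gaussian; the parameters and function names are stand-ins of my own.

```python
# Illustrative sampling of the two distributions in the definitions,
# with R = Z_q (q prime) and mu a rounded Gaussian -- my own choices.
import numpy as np

q = 97          # hypothetical modulus, so R = Z_q is a finite field
m, n = 8, 32    # secret length m and number of samples n(m)
rng = np.random.default_rng()

def sample_error(size):
    # mu^{n(m)}: i.i.d. coordinates concentrated near 0 (rounded Gaussian)
    return np.rint(rng.normal(0.0, 2.0, size)).astype(int) % q

def lwe_sample(s):
    # the distribution of (-sA + e, A) appearing in p(m, s) and p_1(m, s)
    A = rng.integers(0, q, size=(m, n))
    e = sample_error(n)
    return (-(s @ A) + e) % q, A

def uniform_sample():
    # the uniform distribution of (b, A) appearing in p_2(m)
    return rng.integers(0, q, size=n), rng.integers(0, q, size=(m, n))

s = rng.integers(0, q, size=m)   # a secret s in R^m
b, A = lwe_sample(s)
```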

Question

I proved that if $R$ is a finite field equipped with a probability measure $\mu$ (whose $\sigma$-algebra is discrete) and if $R$ satisfies the worst-case search LWE assumption, then $R$ also satisfies the worst-case decision LWE assumption. But how does one prove the converse? All the literature I have seen so far just says that it is trivial, but I could not prove it. More explicitly, I need a proof of the following statement:

If $R$ is a finite unital commutative ring equipped with a probability measure $\mu$ (whose $\sigma$-algebra is discrete) and if $R$ satisfies the worst-case decision LWE assumption, then $R$ also satisfies the worst-case search LWE assumption.

My Attempt

Assume that $R$ satisfies the worst-case decision LWE assumption. Let $n$ be a polynomial and let $S$ be a polynomial-time randomized algorithm (whose inputs are elements in $R^{n(m)}\times R^{m\times n(m)}$ and whose outputs are elements in $R^m$). We need to show that the map \begin{equation} m\mapsto \min_{s\in R^m} p(m,s)\text{,} \end{equation} where $p(m,s)$ is defined above, is negligible. Given an input $(b,A)\in R^{n(m)}\times R^{m\times n(m)}$ where $b=-sA+e$, $S$ will return some guess $g\in R^m$ that might or might not equal $s$. One can compute $b+gA$ and denote it by $e'$. If $g=s$, then $e'=e$. If $g\neq s$, then $e'=e$ might or might not hold. I do not know what to do now.
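For concreteness, here is the shape of my attempt as code (a sketch only: the instantiation $R=\mathbb{Z}_q$, the solver $S$, and the modulus $q$ are stand-ins):

```python
# The shape of my attempt: run the search solver S on the input and
# form e' = b + gA. (q and S are illustrative stand-ins.)
import numpy as np

q = 97  # hypothetical modulus, so R = Z_q

def attempt(b, A, S):
    g = S(b, A)                  # S's guess for the secret s
    e_prime = (b + g @ A) % q    # if g == s, this equals the error e
    # If g != s, then e_prime = (g - s)A + e, and I do not know how to
    # decide from e_prime whether the input was an LWE sample.
    return g, e_prime
```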

Score:1

As you are quite close to the correct answer, I won't give you the right answer outright, but will instead give you a qualitative description of the final step you need to do.

Depending on the precise error distribution one chooses $e$ from, the LWE distribution $(A, sA + e)$ can be:

  1. exactly the uniform distribution (say, if $e$ is uniformly random),
  2. computationally distinguishable from uniform (say, if $e$ is supported on $\{0\}$), or
  3. (thought to be) computationally indistinguishable from uniform (if $e$ is "bounded" --- concretely, you can take this to mean having i.i.d. Gaussian coordinates of parameter $\approx n$).

These three cases are useful to keep in mind when considering the random variable $(A, sA+e)$. Note that (for fixed $A$) the map $s\mapsto sA$ defines a lattice. The decisional LWE problem is (roughly) about detecting this lattice structure. For example:

  1. in the first case, $sA+e$ is uniform over $R^{n(m)}$, i.e. the error $e$ "washes out" any of the lattice structure (information-theoretically),
  2. in the second case, $sA$ is exactly a point of the lattice, and there exist efficient ways to test membership of a candidate point $b = sA$ in the lattice (see the sketch after this list), and
  3. in the third case, $sA+e$ is a perturbed lattice point, and it is plausibly hard to decide whether you are "close" to the lattice.
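For instance (illustrative only: I take $R=\mathbb{Z}_q$ for a prime $q$, so the lattice is the row space of $A$ and the test is linear algebra over a field; the function name is mine), the membership test in the second case can look like this:

```python
# A hedged sketch of the membership test in case 2 (zero error):
# with R = Z_q a field, "b = sA for some s" is exactly membership of b
# in the row space of A, decidable by Gaussian elimination mod q.
import numpy as np

q = 97  # hypothetical prime modulus

def in_row_space(b, A):
    # Row-reduce the augmented system [A^T | b] over F_q; "b = sA" has a
    # solution s iff the system A^T s^T = b^T is consistent.
    M = np.concatenate([A.T % q, (b % q).reshape(-1, 1)], axis=1)
    rows, cols = M.shape
    r = 0
    for c in range(cols - 1):
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]                  # swap pivot into place
        M[r] = (M[r] * pow(int(M[r, c]), -1, q)) % q   # normalize pivot row
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] = (M[i] - M[i, c] * M[r]) % q     # eliminate column c
        r += 1
    # inconsistent iff some row reads (0, ..., 0 | nonzero)
    return not any((row[:-1] == 0).all() and row[-1] for row in M)
```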

All of this is to say that your question has very different answers depending on the precise characteristics of the error distribution, so the error distribution has to factor into your answer somehow. For a hint at how it will, you say:

If $g\neq s$, then $e'=e$ might or might not hold. I do not know what to do now.

When $g\neq s$, $b + gA = (g-s)A+e$. Roughly speaking, what you do next is argue that it is very unlikely for $(g-s)A+e$ to look like a draw from the error distribution (see the sketch after the list below). This is because

  1. the error distribution is concentrated around zero (or perhaps even bounded), i.e. any draw from the error distribution will have $|e|$ "small", and
  2. when $g\neq s$, $\|(g-s)A\|$ will be at least $\lambda_1(\mathcal{L}(A))$, the length of a shortest nonzero vector of the lattice $\mathcal{L}(A)$. For random $A$, this is typically "large".
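Concretely, and purely as an illustration (with $R=\mathbb{Z}_q$, an error bounded by some $B$, and a hypothetical search solver $S$, all my own stand-ins), the completed reduction looks something like this:

```python
# A hedged sketch of the final step: turn the search solver's guess g
# into a decision by testing whether e' = b + gA is plausibly an error.
# The modulus q, the bound B, and the solver S are illustrative stand-ins.
import numpy as np

q, B = 97, 4   # hypothetical modulus and error bound

def is_small(e_prime):
    # every coordinate within distance B of 0 modulo q
    centered = (e_prime % q + q // 2) % q - q // 2
    return bool(np.all(np.abs(centered) <= B))

def distinguisher(b, A, S):
    g = S(b, A)                  # run the search solver on the input
    e_prime = (b + g @ A) % q    # the e' from the question's attempt
    # On an LWE input where S succeeds, e' = e passes the test. On a
    # uniform input, facts 1 and 2 above make passing very unlikely.
    return 1 if is_small(e_prime) else 0
```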

It should seem plausible that, using quantitative versions of the above two statements, you can parameterize things such that it is very unlikely for a wrong guess $g\neq s$ to pass this test.
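To indicate how the quantification might go (a back-of-the-envelope sketch only, assuming for illustration that $R=\mathbb{Z}_q$ is a field and that the error is bounded, say $\|e\|_\infty\le B$): for any fixed $d = g-s\neq 0$, each coordinate of $dA$ is uniform over $\mathbb{Z}_q$ when $A$ is uniform, so \begin{equation} \Pr_A\{\|dA+e\|_\infty\le B\} \le \left(\frac{2B+1}{q}\right)^{n(m)}\text{,} \end{equation} and a union bound over all $q^m$ possible differences $d$ gives \begin{equation} \Pr_A\{\exists d\neq 0:\|dA+e\|_\infty\le B\} \le q^m\left(\frac{2B+1}{q}\right)^{n(m)}\text{,} \end{equation} which is at most $q^{-m}$ (hence negligible) once, e.g., $2B+1\le\sqrt{q}$ and $n(m)\ge 4m$. In other words, over the choice of $A$, a bounded error almost never masquerades as $(g-s)A+e$ for a wrong guess $g$.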
