On page 3 of Regev's survey "The Learning with Errors Problem", a naive algorithm for attacking the LWE problem is described. The statement reads:
Another, even more naive algorithm is the following: keep asking for LWE samples until seeing $\mathrm{poly}(n)$ equations of the form $s_1 \approx \cdots$ (i.e., a pair $(\mathbf{a}, b)$ where $\mathbf{a} = (1, 0, \ldots, 0)$), at which point we can recover the value of $s_1$. We then repeat this for all $s_i$. The probability of seeing such an equation is $q^{-n}$, leading to an algorithm requiring $2^{O(n \log n)}$ equations.
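To make sure I am reading the procedure correctly, here is my own toy sketch of it. The parameters $n$, $q$ and the error distribution are my guesses, not from the paper, and I recover each $s_i$ by a majority vote over the noisy values (which assumes the error is $0$ most of the time):

```python
import random
from collections import Counter

random.seed(0)
n, q = 3, 11
secret = [random.randrange(q) for _ in range(n)]

def lwe_sample():
    """One sample (a, b = <a, s> + e mod q); toy error biased toward 0."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([0, 0, 0, 1, -1])  # my assumption: Pr[e = 0] = 3/5
    b = (sum(x * y for x, y in zip(a, secret)) + e) % q
    return a, b

def recover_coordinate(i, votes=51):
    """Wait for samples whose a is the i-th unit vector; each such sample
    gives b = s_i + e, so the most frequent b value should be s_i."""
    unit = [1 if j == i else 0 for j in range(n)]
    seen = []
    while len(seen) < votes:
        a, b = lwe_sample()
        if a == unit:
            seen.append(b)
    return Counter(seen).most_common(1)[0][0]

recovered = [recover_coordinate(i) for i in range(n)]
```

With these tiny parameters, each coordinate already waits through roughly $q^n$ samples per useful equation, which matches the $q^{-n}$ probability in the quote.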
I have a few basic questions:
- How exactly can one obtain $s_1$? If I understand correctly, we are only given samples $(\mathbf{a}, b = \langle \mathbf{a}, \mathbf{s} \rangle + e)$; if $\mathbf{a}$ is of the form $(1, 0, \ldots, 0)$, then $b = 1 \cdot s_1 + e$, and the error term $e$ keeps us from reading off $s_1$ directly. I can then only hope that among the $\mathrm{poly}(n)$ equations one happens to be of the form $b = 1 \cdot s_1 + 0$. So the question is: how do we actually get $s_1$?
- Finally, how do we get that $2^{O(n \log n)}$ equations are needed to recover the entire secret $\mathbf{s}$?
- A more general question: If I think of LWE as a noisy linear system of equations, what would an application of Gaussian elimination achieve?
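To make the last question concrete, here is a small check I wrote (parameters are my own choices): any linear combination of LWE equations, of the kind a Gaussian elimination step would form, combines the hidden errors as well, so the resulting equation still holds but its error is no longer small:

```python
import random

random.seed(0)
n, q = 3, 17
s = [random.randrange(q) for _ in range(n)]

def sample():
    """One LWE sample; the error is returned too so we can track it."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])  # toy error distribution, my assumption
    b = (sum(x * y for x, y in zip(a, s)) + e) % q
    return a, b, e

# Add ten equations together, as an elimination step might.
rows = [sample() for _ in range(10)]
a_sum = [sum(r[0][j] for r in rows) % q for j in range(n)]
b_sum = sum(r[1] for r in rows) % q
e_sum = sum(r[2] for r in rows)

# The combined equation b_sum = <a_sum, s> + e_sum still holds mod q,
# but its error is now a sum of ten individual errors.
assert (b_sum - sum(x * y for x, y in zip(a_sum, s))) % q == e_sum % q
```

Is this error growth the reason elimination does not simply break LWE?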