
Computational indistinguishability of two LWE-type samples


Consider the problem of distinguishing between polynomially many samples of either \begin{equation} (x, b, As + e) ~~\text{or}~~\left(x, b, ~Ax + b\cdot(As + e) + e'\right). \end{equation}

Here, $A$ is a public matrix and $s$ is a secret vector chosen uniformly at random; both are fixed across samples. For each sample, $e$ and $e'$ are fresh Gaussian error vectors, and $x$ and $b$ are sampled uniformly at random.

The dimensions of different objects are:

\begin{align} b &\in \{0, 1\}, \\ x &\in \mathbb{Z}_{q}^{n}, \\ s &\in \mathbb{Z}_{q}^{n}, \\ A &\in \mathbb{Z}_{q}^{m \times n}, \\ e, e' &\in \mathbb{Z}_{q}^{m}. \end{align}

$q \geq 2$ is a prime integer.
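
For concreteness, here is a minimal sketch of how one sample from each distribution could be generated. Everything below (the parameter values, the rounded normal standing in for a discrete Gaussian, and the function names) is an illustrative assumption, not part of the problem statement.

```python
import numpy as np

# Toy parameters -- illustrative only.
q = 12289            # prime modulus
n, m = 16, 32        # x, s in Z_q^n; A in Z_q^{m x n}; e, e' in Z_q^m
sigma = 3.2          # error width

rng = np.random.default_rng(1)

A = rng.integers(0, q, (m, n))   # public matrix, fixed
s = rng.integers(0, q, n)        # secret vector, fixed across samples

def gauss(size):
    """Rounded normal as a cheap stand-in for a discrete Gaussian over Z_q."""
    return np.rint(rng.normal(0, sigma, size)).astype(np.int64) % q

def sample_first():
    """One sample of the form (x, b, A s + e)."""
    x = rng.integers(0, q, n)
    b = int(rng.integers(0, 2))
    return x, b, (A @ s + gauss(m)) % q

def sample_second():
    """One sample of the form (x, b, A x + b*(A s + e) + e')."""
    x = rng.integers(0, q, n)
    b = int(rng.integers(0, 2))
    return x, b, (A @ x + b * (A @ s + gauss(m)) + gauss(m)) % q
```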


Are these two cases (computationally) indistinguishable when we are given polynomially many samples? I think they are, but I could not tie this to a standard hardness conjecture.

Note that by the LWE assumption,

\begin{equation} (x, b, As + e) ~~\text{and}~~\left(x, b, u\right), \end{equation} are computationally indistinguishable and so are \begin{equation} (x, b, ~Ax + b\cdot(As + e) + e') ~~\text{and}~~\left(x, b, ~Ax + b\cdot u + e'\right). \end{equation}

Here, $u$ is a uniformly random vector. However, I could not reduce my case to LWE.

Answer (Daniel S):

One can trivially distinguish $(x,0,u)$ from $(x,0,Ax+e')$ by subtracting $Ax$ from the third entry and checking whether the resulting entries look uniform or small (Gaussian).
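
A minimal sketch of that distinguisher, with toy parameters and a rounded normal standing in for the discrete Gaussian (the names, sizes, and the $6\sigma$ threshold are illustrative assumptions, not from the answer): subtract $Ax$, lift to centered representatives, and accept if every entry is small.

```python
import numpy as np

# Toy parameters -- illustrative only.
q = 12289
n, m = 16, 32
sigma = 3.2
rng = np.random.default_rng(0)

def gauss(size):
    """Rounded normal as a cheap stand-in for a discrete Gaussian over Z_q."""
    return np.rint(rng.normal(0, sigma, size)).astype(np.int64) % q

def centered(v):
    """Lift Z_q values to representatives in (-q/2, q/2]."""
    v = v % q
    return np.where(v > q // 2, v - q, v)

def looks_like_second_world(A, x, c, threshold=6 * sigma):
    """For a b = 0 sample (x, 0, c): in the second distribution c - A x is a
    small error vector e', while in the first it is (pseudo)random mod q."""
    residual = centered(c - A @ x)
    return bool(np.all(np.abs(residual) < threshold))

# Quick check on one b = 0 sample from each world.
A = rng.integers(0, q, (m, n))
s = rng.integers(0, q, n)
x = rng.integers(0, q, n)
c_first  = (A @ s + gauss(m)) % q    # first distribution (b = 0)
c_second = (A @ x + gauss(m)) % q    # second distribution (b = 0)
print(looks_like_second_world(A, x, c_first))   # almost certainly False
print(looks_like_second_world(A, x, c_second))  # True with overwhelming probability
```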

Distinguishing $(x,1,u)$ from $(x,1,A(x+s)+(e+e'))$ is a standard LWE problem (noting that the variance of $e+e'$ is the sum of the variances of $e$ and $e'$).
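
Written out, the $b=1$ case of the second distribution rearranges to
\begin{equation} Ax + 1\cdot(As + e) + e' = A(x + s) + (e + e'), \end{equation}
and since $x$ is public, subtracting $Ax$ from the third entry leaves $As + (e + e')$, i.e. a decision-LWE instance with secret $s$ and noise $e + e'$.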

Thus samples with $b=0$ are trivial to distinguish and samples with $b=1$ presumably are not. Taking polynomially many samples is virtually certain to give at least one with $b=0$ (each sample independently has $b=0$ with probability $1/2$, so all $N$ samples have $b=1$ with probability only $2^{-N}$), and so allows us to distinguish trivially.

Morbius: Just to sanity check, if there were polynomially many samples from either \begin{equation} (x, b_1, b_2, \ldots, b_k, ~Ax + b_1\cdot(As_1 + e_1) + b_2\cdot(As_2 + e_2) + \cdots + b_k\cdot(As_k + e_k) + e') ~~\text{or}~~\left(x, b_1, b_2, \ldots, b_k, u \right), \end{equation} for $b_i \in \{0, 1\}$, for a polynomially large $k$ and for secret vectors $s_1, \ldots, s_k$, then these would be indistinguishable, is that right?
Morbius: The argument is just that when every $b_i$ is $0$, they are easily distinguishable, but such a case is exponentially unlikely. For every other case, for any choice of $k$, we can reduce it to LWE.
Daniel S: No, the argument is that if at least one $b_i$ is $0$, the set is easily distinguishable.
Morbius: How can that be true? Let's say only $b_1 = 0$. Then, we are essentially distinguishing between $A(x + s_2 + s_3 + \cdots + s_k) + (e_1 + e_2 + \cdots + e_k) + e'$ and $u$. Isn't that just a variant of LWE? So, do we not need every $b_i$ to be $0$ for the samples to be distinguishable, and not just at least one $b_i$ to be $0$?
Morbius: *We are distinguishing between $A(x + s_2 + \cdots + s_k) + (e_2 + \cdots + e_k + e')$ and $u$....
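
Spelling out the case in the comment above (only $b_1 = 0$, all other $b_i = 1$), the third entry is
\begin{equation} Ax + \sum_{i=2}^{k}\left(As_i + e_i\right) + e' = A\left(x + \sum_{i=2}^{k} s_i\right) + \left(\sum_{i=2}^{k} e_i + e'\right), \end{equation}
which, after subtracting the known $Ax$, has the form of an LWE sample with secret $\sum_{i=2}^{k} s_i$ and a wider error.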
Daniel S: I suggest we move discussion [to this chat room](https://chat.stackexchange.com/rooms/135597/discussion-on-computational-indistinguiability).