Score:2

Regev's learning with errors


Concerning the security of public-key encryption in Regev's LWE system: suppose an attacker knows the ciphertext $(\mathbf u,\mathbf u')$, where $\mathbf u=A\mathbf r$, $\mathbf u'=\mathbf b^t\mathbf r + \mu q/2$ and $\mathbf r\in\{0,1\}^m$, as well as $\mathbf b$ and $A$. Then the value of $\mathbf r$ used in $\mathbf u'$ can be guessed easily from the knowledge of $\mathbf u$ and $A$, and hence the message $\mu$ recovered from $\mathbf u'$. So how is this system considered secure?

Score:0

One can say something stronger than Daniel's answer. Let $D_1$ be the joint distribution of $(A, Ar)$, where

  1. $A$ is sampled uniformly at random, and
  2. $r$ is random (technically it suffices for it to have "high enough min-entropy", i.e. it doesn't have to be uniform, but it might as well be chosen this way).

Let $D_2$ be the joint distribution of $(A, u)$, where

  1. $A$ is sampled uniformly at random, and
  2. $u$ is uniformly random, independently of $A$.

Then for large enough $m$, the distributions $D_1, D_2$ are statistically indistinguishable. As a result (for large enough $m$, which is used in cryptography) it is impossible to efficiently distinguish between samples from the two distributions.

This result is typically known as the "Leftover Hash Lemma". It's worth mentioning that it's only known to hold for algebraically unstructured LWE. For Ring Learning with errors, there are some things you can do, but they have many more caveats.

As a result, you cannot hope to map from $(A, Ar)$ to $r$. If you could, you could distinguish the two distributions. As the distributions are statistically close (by the LHL), this is impossible.
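A toy brute-force experiment illustrates why such a map cannot exist: when $2^m \gg q^n$, each value of $\mathbf u$ has many binary preimages $\mathbf r$ on average, so $\mathbf u = A\mathbf r$ does not pin down $\mathbf r$ even information-theoretically. (The parameters below are hypothetical and tiny, chosen only so that exhaustive enumeration is feasible; real schemes use $n$ in the hundreds.)

```python
import itertools
import random
from collections import Counter

# Toy parameters for illustration only: m is large relative to n*log2(q),
# so the system u = A r (mod q) is heavily underdetermined for binary r.
n, m, q = 2, 8, 3

random.seed(1)
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

def mat_vec(A, r):
    """Compute A r mod q as a tuple."""
    return tuple(sum(a * x for a, x in zip(row, r)) % q for row in A)

# Count how many binary vectors r map to each syndrome u = A r (mod q).
counts = Counter(mat_vec(A, r) for r in itertools.product((0, 1), repeat=m))

# On average there are 2^m / q^n = 256/9 ~ 28 binary preimages per value of u,
# so an "inverter" has no way to know which r the encryptor actually used.
print(min(counts.values()), max(counts.values()), len(counts))
```

With these toy numbers every observed $\mathbf u$ is consistent with dozens of candidate $\mathbf r$'s; Regev's parameter choice makes the analogous gap astronomically large.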

Note that these LHL-type arguments do have a downside though --- they require the parameter $m \geq \Omega(n\log q)$ to be somewhat large. There are other ways of building public-key encryption for lattices that avoid this argument (they "use the LWE assumption again" instead), allowing one to choose $m$ smaller. This is to say that LHL arguments aren't necessarily optimal, and in fact were rarely used in the NIST PQC competition.

Daniel S:
To be a little nit-picky, I cannot hope to map $(A,Ar)$ to $r$ with an algorithm that does not also provide ostensibly valid outputs $r$ for (almost all) inputs from $(A,u)$. I can for example exhaustively try all values of $r$ and not break distinguishability. However any $r\in\{0,1\}^m$ that satisfies $Ar=0$ would (with high probability) allow message recovery.
Mark:
My point is that under appropriately-chosen parameters, the distribution of $(A, Ar)$ is indistinguishable from the distribution of $(A, u)$, where $u$ is uniform. In this setting, in a **statistical** (not computational) sense $\mathbf{u} = A\mathbf{r}$ provides no meaningful information to attackers. The presence of solely $A$ may be leveraged for attacks, or plenty of other things.
Mark:
But the original statement in the question "the value of $\mathbf{r}$ in $\mathbf{u}'$ can easily be guessed by the knowledge of $\mathbf{u}$ and $A$" is wrong, again in a *statistical* sense, i.e. even for computationally-unbounded attackers. Note that the mapping $u\mapsto r$ such that $u = Ar$ you suggest is not even well-defined --- with high probability there are many candidate preimages.
Score:0

It is not known how to efficiently recover $\mathbf r$ from $\mathbf u=A\mathbf r$ when the number of entries in $\mathbf u$ is considerably smaller than the number of entries in $\mathbf r$ (i.e. when $A$ is significantly wider than it is tall).

In Regev's scheme $\mathbf u$ has $n$ entries and $m$ is chosen so that $m>(1+\epsilon)(n+1)\log q$. Since $m$ exceeds $n$ by a factor of at least $\log q$, recovering $\mathbf r$ from $\mathbf u$ is believed to be hard and the system is considered secure.
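To get a feel for the sizes involved, here is a quick back-of-the-envelope computation of the bound $m>(1+\epsilon)(n+1)\log q$ (the values of $n$, $q$ and $\epsilon$ below are hypothetical, chosen only to plug into the formula, and are not taken from any standardized scheme):

```python
import math

# Hypothetical illustrative parameters (not from a standardized scheme):
n, q, eps = 256, 4093, 0.1

# Regev's constraint: m must exceed (1+eps)*(n+1)*log2(q).
m_min = (1 + eps) * (n + 1) * math.log2(q)
print(math.ceil(m_min))
```

So with these numbers $\mathbf r$ has thousands of binary entries while $\mathbf u$ has only $n=256$ entries mod $q$; the system $\mathbf u = A\mathbf r$ is far too underdetermined to single out $\mathbf r$.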


