Score:1

LWE assumption in cryptographic applications


The LWE assumption states that it is hard to distinguish LWE samples from uniformly distributed samples. That is, the distribution $(A,b)$ with $A$ and $b$ uniformly distributed is hard to distinguish from $(A,b=As+e)$, where $A$ and $s$ are uniformly distributed and $e$ follows a narrow error distribution (e.g., a discrete Gaussian).
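To make the two distributions in the assumption concrete, here is a toy sketch in Python that produces one sample of each kind. The parameters and function names are illustrative only; the modulus and dimension are far too small for any security.

```python
import random

q, n = 97, 8          # toy modulus and dimension (insecure, for illustration)

def lwe_sample(s):
    """One LWE sample (a, <a,s> + e mod q) with a small error e."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])            # toy narrow error distribution
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def uniform_sample():
    """One uniform sample (a, b)."""
    return [random.randrange(q) for _ in range(n)], random.randrange(q)

s = [random.randrange(q) for _ in range(n)]  # uniformly random secret
print(lwe_sample(s))
print(uniform_sample())
```

The LWE assumption says that no efficient algorithm, given many such pairs, can tell which of the two samplers produced them.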

However, in cryptographic applications we have an additional message $m$ that we want to hide within the $As+e$ construction, so we end up with something like $As + e + m$.

My question is: why can we still use the LWE assumption here and say that this is also hard to distinguish from a uniform distribution? Is there perhaps a rationale for this, namely that adding a constant (treating the message $m$ as a constant) does not significantly change the distribution?

user1035648:
Could you cite some references for adding messages to the LWE term? I've seen some schemes that add a term like $[\frac{q}{2}]\cdot m$ to $A\vec{s}+\vec{e}$, where $q$ is the modulus and $m$ is a bit, i.e., $0$ or $1$.
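The $[\frac{q}{2}]\cdot m$ encoding mentioned in this comment can be sketched as follows (a toy, hedged illustration with hypothetical function names and insecure parameters): the bit is scaled so that decryption can strip away the small error by thresholding.

```python
import random

q, n = 257, 16        # toy parameters (insecure, for illustration)

def encrypt_bit(s, m):
    """Encrypt a bit m in {0,1} as (a, <a,s> + e + (q//2)*m mod q)."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-2, -1, 0, 1, 2])     # small error
    b = (sum(ai * si for ai, si in zip(a, s)) + e + (q // 2) * m) % q
    return a, b

def decrypt_bit(s, ct):
    """Recover e + (q//2)*m, then decide the bit by thresholding."""
    a, b = ct
    x = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return 1 if q // 4 <= x <= 3 * q // 4 else 0

s = [random.randrange(q) for _ in range(n)]
assert all(decrypt_bit(s, encrypt_bit(s, m)) == m for m in (0, 1))
```

Decryption works whenever the error magnitude stays below $q/4$, which is why $e$ must come from a narrow distribution.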
Score:2

The following is a good general result to keep in mind, and a good exercise to prove if you don't see why it might be true.

Let $G$ be a finite group. Let $\mathcal{D}$ be a distribution on $G$ that is "shift-invariant", meaning

$$\forall m,k\in G: \Pr_{h\sim \mathcal{D}}[h = k] = \Pr_{h\sim\mathcal{D}}[h +m = k].$$

Define a private-key cryptosystem where

  • $\mathsf{KeyGen}$ samples $sk\sim\mathcal{D}$
  • $\mathsf{Enc}_{sk}(m) = sk + m$
  • $\mathsf{Dec}_{sk}(c) = (-sk) + c$

Then $(\mathsf{KeyGen}, \mathsf{Enc}, \mathsf{Dec})$ is a perfectly-secure encryption scheme.

Typically you see this for the group $(\mathbb{F}_2^n, \oplus)$, where $\oplus$ is XOR. The resulting scheme is the one-time pad, and the security proof follows from Shannon's initial argument regarding the security of the one-time pad.
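A minimal sketch of this group one-time pad, instantiated for $G = (\mathbb{Z}_q)^n$ with componentwise addition (the names are illustrative, not from any particular library):

```python
import random

q, n = 251, 4

def keygen():
    """sk ~ U(G): a uniform element of (Z_q)^n."""
    return [random.randrange(q) for _ in range(n)]

def enc(sk, m):
    """Enc_sk(m) = sk + m, componentwise mod q."""
    return [(k + x) % q for k, x in zip(sk, m)]

def dec(sk, c):
    """Dec_sk(c) = (-sk) + c, componentwise mod q."""
    return [(x - k) % q for k, x in zip(sk, c)]

sk = keygen()
m = [random.randrange(q) for _ in range(n)]
assert dec(sk, enc(sk, m)) == m     # correctness
```

With $q = 2$ and $\oplus$ in place of modular addition this is exactly the classical one-time pad.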

Note that for a finite group, the only shift-invariant distribution is the uniform distribution on $G$. The more general property is $\mathcal{D}$ being a "Haar measure" on $G$, but the infinite-group case isn't cryptographically useful --- you should only ever use the above for finite groups.

Anyway, in terms of the above, the way to show the construction you discuss is secure is the following.

  1. Consider an alternative cryptosystem where you encrypt with $(A, u + m)$, for $u$ uniformly random, rather than $(A, As + e + m)$.
  2. Show that encrypting with $(A, u + m)$ is an instantiation of the above result for the group $\mathbb{Z}_q^n\times \mathbb{Z}_q$ (technically you are only encrypting messages in the subgroup $\{0\}^n\times \mathbb{Z}_q$ of this group, but that doesn't hurt security).

The adversary cannot attack the second scheme (it's information-theoretically secure). You can then bound the advantage of an adversary attacking your cryptosystem by the advantage of an adversary distinguishing your cryptosystem from the cryptosystem of step 1. It should be routine to verify that this advantage is precisely the advantage of the adversary in solving the LWE problem.

With respect to your comment to the other answer, the point isn't to compare the distributions $N(\mu,\sigma^2)$ and $N(\mu+c,\sigma^2)$, as they may be distinguishable for $\sigma^2$ small. Instead, you compare the distributions $U(G)$ and $U(G)+m$, which are exactly the same distribution for any $m\in G$.

Zpeed78:
I would like to believe your last sentence. I'll try a "proof" that also goes in the direction of what Daniel wanted to say (?). Let $U$ be the uniform distribution on $\{0,1,2,3,4\}$, i.e., each element of $\mathbb{Z}_5$ is equally likely. When we add, say, $2$ to $U$, we obtain $U + 2$ supported on $\{2,3,4,0,1\}$, and this "keeps" the probabilities: it is still the uniform distribution on $\mathbb{Z}_5$. My "issue" was relating this to continuous distributions (there we would have a "shift" of the uniform distribution).
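This $\mathbb{Z}_5$ example is small enough to check exhaustively; the following sketch verifies that shifting the uniform distribution by any constant gives back the identical distribution, not merely one of the same shape.

```python
from fractions import Fraction

q = 5
uniform = {x: Fraction(1, q) for x in range(q)}   # U(Z_5)

for shift in range(q):
    # shift every outcome by a constant, mod q
    shifted = {(x + shift) % q: p for x, p in uniform.items()}
    assert shifted == uniform    # exactly the same distribution
```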
Mark:
@Zpeed78 First, no continuous distributions exist in cryptography. We are computing things and need to represent everything in finite space. We could (in principle) represent continuous distributions using a variable-length encoding, but this would induce side-channels based on the message length. Everything is finite, though.
Mark:
Second, you have seen that for any fixed value $g$, $g + U(G) = U(G)$, i.e., shifts of the uniform distribution preserve the uniform distribution. Next, for any second distribution $D$ on $G$ (independent of $U(G)$ --- this is key), we can write $\Pr[D+U(G) = x] = \sum_{g\in G}\Pr[g + U(G) = x\mid D = g]\Pr[D = g]$. By our aforementioned shift-invariance, we can replace $\Pr[g + U(G) = x]$ with $\Pr[U(G) = x]$, to get that $\Pr[D + U(G) = x] = \Pr[U(G) = x]$, even when the "message" we are applying a uniform shift to is from an arbitrary distribution (independent of our shift).
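This computation can be checked numerically. The sketch below picks an arbitrary distribution $D$ on $\mathbb{Z}_7$ and verifies that convolving it with the uniform distribution (i.e., the distribution of $D + U(\mathbb{Z}_7)$ for independent draws) is exactly uniform; exact rational arithmetic avoids floating-point noise.

```python
from fractions import Fraction

q = 7
D = {0: Fraction(1, 2), 3: Fraction(1, 3), 5: Fraction(1, 6)}  # arbitrary distribution on Z_7
U = {x: Fraction(1, q) for x in range(q)}                      # U(Z_7)

# Law of total probability over the value of D:
# Pr[D + U = x] = sum_g Pr[D = g] * Pr[U = x - g]
conv = {x: Fraction(0) for x in range(q)}
for g, pg in D.items():
    for u, pu in U.items():
        conv[(g + u) % q] += pg * pu

assert conv == U    # the sum is exactly uniform
```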
Zpeed78:
Hi Mark, and thank you very much. 1) Do you have a reference for your quote? It looks like it comes from a book. 2) Do you have a "formal" proof of the statement (about probability) in the quote? Otherwise I could only argue with my example and the knowledge that it concerns finite groups with modular arithmetic (a rather less "formal" proof). 3) In your calculation in the comment, you use the law of total probability together with independence, concretely $\Pr[g + U(G) = x\mid D = g] = \Pr[g + U(G) = x]$, right?
Mark:
1. No, it's something I derived a few years ago for fun, but it doesn't really have any applications (and, once you know the appropriate math, is "obvious"), so it isn't really worth trying to write down anywhere. 2. Do you mean the "shift invariant" statement? I'm just writing down a definition of [Haar measure](https://en.wikipedia.org/wiki/Haar_measure) for finite groups. 3. I believe so.
Mark:
Searching "one-time pad for groups" though, you see writing like [section 2.2 of this](https://n.ethz.ch/~saethrej/download/Summaries/6.%20Semester/InfSec_pt1.pdf), or [section 3 of this](https://www.ccs.neu.edu/home/wichs/class/crypto-fall17/lecture1.pdf). The first one says it works for arbitrary groups though, the second one works for abelian groups. Both should include the implicit assumption that the groups are finite (one can get the OTP argument to work in the case of infinite groups, but the variable-length group encodings are a side channel that make it not worth it).
Zpeed78:
Hi Mark, and once again thanks! Regarding 2): yes, I'm interested in a more "formal" proof of your statement $\forall m,k\in G: \Pr_{h\sim \mathcal{D}}[h = k] = \Pr_{h\sim\mathcal{D}}[h +m = k]$. I can only imagine that this is true because the group is closed under addition, but I still don't quite see why the probabilities are then equal for a distribution on the group. I would be interested in the mathematical background.
Mark:
@zpeed78 The simple proof follows from noting that for any $g\in G$, the function $f_g:G\to G$ given by $f_g(h)=g+h$ is always a bijection, and bijections preserve the uniform distribution on finite sets (they map elements of probability $p$ to elements of probability $p$, since all elements have probability $p$).
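The bijection claim is easy to verify mechanically for a small group; the following sketch (illustrative, for $\mathbb{Z}_{12}$) checks that $f_g(h)=g+h \bmod q$ permutes the group for every $g$, which is exactly why it carries the uniform distribution to itself.

```python
q = 12

for g in range(q):
    # image of f_g over all of Z_q
    image = sorted((g + h) % q for h in range(q))
    # f_g hits every element exactly once, so it is a bijection on Z_q
    assert image == list(range(q))
```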
Score:1

Adding a constant with wrap-around (or, more generally, applying any bijection to a finite set) preserves the collection of probabilities of any discrete distribution on that set. Naturally, this includes both the uniform distribution and any distribution that approximates it. Hardness of distinguishing after the addition of message data is therefore straightforwardly equivalent to the original formulation; the original formulation involves fewer variables and is therefore to be preferred.
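The claim above can be sketched in a few lines: applying a bijection (here, adding a constant mod $q$) permutes which outcomes carry which probabilities, but leaves the multiset of probability values unchanged. The distribution $D$ below is an arbitrary illustrative example.

```python
from fractions import Fraction

q = 6
D = {0: Fraction(2, 6), 1: Fraction(1, 6), 4: Fraction(3, 6)}  # arbitrary discrete distribution

# add a constant with wrap-around: outcomes move, probabilities travel with them
shifted = {(x + 2) % q: p for x, p in D.items()}

# the collection (multiset) of probabilities is preserved
assert sorted(D.values()) == sorted(shifted.values())
```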

Zpeed78:
Thank you for your reply. Can you perhaps elaborate a bit on why adding the constant does not change the distribution? I know the context: if $X \sim N(\mu,\sigma^2)$ is normally distributed and $Y = X + c$, then $Y \sim N(\mu+c,\sigma^2)$. But these are two different distributions of the same type. You can also apply this to the uniform distribution; then you "shift" it, so to speak.