Score:2

In Learning With Errors, what is the relation between the size (or standard deviation) of errors and security?


I want to understand how the hardness of the Learning With Errors problem varies as the size of the error term changes.

For example, assuming that the other parameters are the same,

  1. LWE with errors sampled from the centered binomial distribution on $\{-1, 0, 1\}$
  2. LWE with errors sampled from the centered binomial distribution on $\{-2, -1, 0, 1, 2\}$

Which is the more difficult problem, in terms of complexity?

In addition, is there any trade-off relation (e.g., a security loss or gain from using larger errors versus an efficiency gain or loss)?

Thank you in advance.

Mark
By centered binomial distribution of $S$, do you mean a distribution of the form $B = \sum_i B_i - \sum_i B_i'$ for $B_i, B_i'$ independent 0-1 coinflips such that $|B| \in S$? Or do you mean something like $B_i, B_i'$ uniform random variables in $S$?
Lee Seungwoo
@Mark I mean the former one. But I want to know in general how the security level changes as the distribution of errors varies (especially the standard deviation of the error distribution).
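To make the confirmed definition concrete, here is a small sketch (the helper names `cbd_pmf` and `variance` are introduced here purely for illustration) that computes the exact distribution of a centered binomial with parameter $\eta$ by convolution, and checks that its variance is $\eta/2$, i.e. standard deviation $\sqrt{\eta/2}$:

```python
from fractions import Fraction

def cbd_pmf(eta):
    # exact pmf of B = sum_{i < eta} (b_i - b_i') for independent fair coinflips;
    # each summand b_i - b_i' takes values -1, 0, 1 with probabilities 1/4, 1/2, 1/4
    single = {-1: Fraction(1, 4), 0: Fraction(1, 2), 1: Fraction(1, 4)}
    pmf = {0: Fraction(1)}
    for _ in range(eta):
        new = {}
        for v, p in pmf.items():
            for dv, dp in single.items():
                new[v + dv] = new.get(v + dv, Fraction(0)) + p * dp
        pmf = new
    return pmf

def variance(pmf):
    mean = sum(v * p for v, p in pmf.items())
    return sum((v - mean) ** 2 * p for v, p in pmf.items())

# eta = 1 has support {-1, 0, 1}; eta = 2 has support {-2, ..., 2}
assert sorted(cbd_pmf(1)) == [-1, 0, 1]
assert sorted(cbd_pmf(2)) == [-2, -1, 0, 1, 2]
assert variance(cbd_pmf(1)) == Fraction(1, 2)
assert variance(cbd_pmf(2)) == Fraction(1)
```

So the two example distributions in the question have standard deviations $\sqrt{1/2} \approx 0.71$ and $1$ respectively.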
Score:2

This is a pretty hard topic to concisely summarize. In general, one way to answer your question is

The hardness of LWE is precisely what the Lattice Estimator says it is.

This is reductive, and almost certainly not true. Still, it is perhaps the most useful thing to say. Roughly speaking, if we define $T_{\mathsf{LWE}}(n,m,q,\sigma)$ as the time to break LWE with

  • dimension $n$,
  • number of samples $m$,
  • ciphertext modulus $q$, and
  • Gaussian parameter $\sigma$,

then $T_{\mathsf{LWE}}(n,m,q,\sigma)$ doesn't have any particularly nice closed-form expression. Other problems $P$ in cryptography often have one "best known attack" for general parameters, and you can summarize $T_P$ in terms of this best-known attack. For example:

  • RSA has the general number field sieve
  • Discrete log has index calculus (in the finite field setting) or any $O(\sqrt{|G|})$ generic group algorithm (such as Pohlig-Hellman),
  • Code-based crypto has information-set decoding type algorithms (I believe the classic one is called "Prange's algorithm", but I'm not a code-based person).

LWE has at least three relevant attacks, namely

  • the primal attack,
  • the dual attack, and
  • Arora-Ge.

This is to say that $T_{\mathsf{LWE}} = \min(T_{\mathsf{Primal}}, T_{\mathsf{Dual}}, T_{\mathsf{Arora-Ge}})$, and therefore doesn't have the nicest closed-form solution. This (and that LWE has so many non-trivial parameters, i.e. 4 compared to 1 for things like RSA) is (roughly) why people use things like the lattice estimator instead.

You might find this prior answer to be useful (in particular, the links to some dissertations on algorithms for solving LWE in various parameter ranges). In short, I'll point out that your assumption that errors are contained within $\{-1,0,1\}$ is very strong, and opens you up to attacks like Arora-Ge (though it needs $O(n^3)$ LWE samples, which is on the high end for certain applications).
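To see where that $O(n^3)$ figure comes from: if every error lies in $\{-1,0,1\}$, each sample satisfies the degree-3 equation $(b - \langle a, s\rangle)\big((b - \langle a, s\rangle)^2 - 1\big) \equiv 0 \pmod q$, and Arora-Ge linearizes these equations by treating every monomial of degree at most 3 in the $n$ secret coordinates as a fresh unknown. A small sketch (the helper name is hypothetical) counting those unknowns, and hence the number of samples needed:

```python
from math import comb

def arora_ge_unknowns(n, d=3):
    # monomials of degree <= d in n variables: C(n + d, d);
    # linearization needs roughly this many samples/equations
    return comb(n + d, d)

for n in (256, 512, 1024):
    m = arora_ge_unknowns(n)
    print(n, m, m / n**3)  # the ratio approaches 1/6, so m = Theta(n^3)
```

Since $\binom{n+3}{3} = \frac{(n+1)(n+2)(n+3)}{6} \approx n^3/6$, the sample requirement grows cubically in the dimension.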

The general answer I would give though is to just use the lattice estimator, and perhaps read the dissertations linked in the other answer if you want more background on the general theory behind it.

Lee Seungwoo
I'll check the link. Thank you so much.
Score:1

In your example, we can definitively say that the second instance is harder (or at least no easier) than the first.

This is because the sum of two errors sampled from the first distribution is distributed according to the second. Thus we can convert any instance of learning with errors in the first case to the second by adding an additional centred binomial $\{-1,0,1\}$ onto the samples. We can argue similarly in the case of Gaussian errors (though discretised Gaussians are slightly different).
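This reduction is easy to see in code. A toy sketch (dimension, sample count, and modulus are arbitrary illustrative choices, not a secure parameter set): starting from samples whose errors come from the centered binomial on $\{-1,0,1\}$, adding one more independent such term per sample yields an instance whose errors follow the centered binomial on $\{-2,\dots,2\}$:

```python
import random

rng = random.Random(1)
n, m, q = 8, 64, 3329

def cbd1():
    # centered binomial on {-1, 0, 1}: difference of two fair coinflips
    return rng.randint(0, 1) - rng.randint(0, 1)

# a toy LWE instance of the "first" kind
s = [rng.randrange(q) for _ in range(n)]
A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
b = [(sum(a * x for a, x in zip(row, s)) + cbd1()) % q for row in A]

# the reduction: add an independent {-1, 0, 1} centered binomial per sample
b2 = [(bi + cbd1()) % q for bi in b]  # now an instance of the "second" kind

# verification only (the reduction itself never needs the secret s):
# the accumulated error lies in {-2, ..., 2}
def recenter(x):
    return x - q if x > q // 2 else x

total = [recenter((bi - sum(a * x for a, x in zip(row, s))) % q)
         for bi, row in zip(b2, A)]
assert all(-2 <= t <= 2 for t in total)
```

Note that the reduction only touches the public samples $(A, b)$, which is what makes the hardness comparison valid: any solver for the second problem can be run on the transformed instance.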

This assumes that there is a "unique" solution, as introducing larger errors can increase the number of possible solutions. Whether the security of your scheme rests on finding the solution used to generate the parameters, or whether any solution will do, varies from scheme to scheme. In general, increasing noise will make it harder to find the causal solution, but may introduce easier-to-find non-causal answers.

In cryptography, the danger in increasing the noise is usually reliability. Lattice-based schemes typically have decryption failure rates (or signatures that need resampling), and these increase with noise. Designers usually try to make such failures incredibly rare, as there is cryptanalysis that extracts information from failures. The unreliability caused by higher noise can be offset with reconciliation schemes, where users error-correct approximate shared secrets, but doing this without interaction can be inefficient.

Lee Seungwoo
So, as far as I understand naively: if LWE is used in some PKE or KEM scheme, increasing noise will make it harder (because an adversary needs to find the causal solution to decrypt), and if LWE is used in some signature scheme, increasing noise might make it easier to produce a forgery (because an adversary only needs to find any solution). Is that correct?
Daniel S
@LeeSeungwoo There are a lot of different lattice-based signature schemes, so it's hard to talk in full generality. However, if the scheme rests on exhibiting a vector close to (a hash of) the message, then, yes, forgery can become easier as the system becomes noisier (equivalently the notion of close becomes looser) and again this is due to an increase in the number of possible solutions.
Lee Seungwoo
I see what you mean. Thank you so much.