# Questions tagged as ['lwe']

Learning with Errors (LWE) is a lattice problem used in the design of cryptographic primitives. An LWE instance can be viewed as an average-case instance of Bounded Distance Decoding, a variant of the Closest Vector Problem (CVP).
Score: 1 The relationship between root Hermite factor and bit-security? The root Hermite factor corresponds to a bit-security level; for example, 1.0045 corresponds to 128-bit security. What is the root Hermite factor corresponding to 100-bit, 160-bit, and 180-bit security?

| root Hermite factor | 1.0045 | ?   | ?   | ?   |
|---------------------|--------|-----|-----|-----|
| bit security        | 128    | 100 | 160 | 180 |
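A common heuristic (due to Chen) relates the BKZ block size β to the root Hermite factor δ it achieves; a minimal sketch follows. The function name is illustrative, the formula is only meaningful for β above roughly 50, and translating δ into a bit-security level still requires a separate cost model for BKZ, so this does not by itself answer the question:

```python
import math

def root_hermite_factor(beta):
    """Heuristic root Hermite factor achieved by BKZ with block size beta
    (asymptotic estimate; only meaningful for beta >~ 50)."""
    base = (beta / (2 * math.pi * math.e)) * (math.pi * beta) ** (1.0 / beta)
    return base ** (1.0 / (2 * (beta - 1)))

# larger block size -> smaller (better) root Hermite factor
for beta in (100, 200, 300, 400):
    print(beta, round(root_hermite_factor(beta), 5))
```

Tools such as the lwe-estimator then map the cost of running BKZ at the required block size to a bit-security estimate.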

Score: 0 How to estimate the parameters of a lattice signature scheme with a lossy reduction? The parameters of a lattice signature scheme DAZ19 with a tight reduction can be chosen to make the underlying hardness problem intractable. How does one estimate the parameters of a lattice signature scheme ESLL19 with a lossy reduction? Is there any relation between the reduction and the parameters?

Score: 3 Parameter estimation in a lattice signature scheme When reading [BDLOP18], I ran the lwe-estimator with the recommended parameters in Table 2, but the resulting Hermite factor is 1.007, which is bigger than the recommended Hermite factor of 1.0035.

Score: 2 *-LWE equivalent of Diffie-Hellman $g^{x^2}$ vulnerability In Is Diffie-Hellman less secure when A and B select the same random number?, the possibility of a Diffie-Hellman key exchange producing identical peer keys, and its vulnerability against passive attacks, was brought up, again - as a duplicate.

But is there an equivalent in the *-LWE family of lattice-based key exchanges? My question being, without considering CCA-hardening such as Fujisaki-Okamoto t ...

Score: 3 Is the scheme in LWE also valid in R-LWE? One way of interpreting matrices in RLWE is that they are a subset of standard integer matrices with special structure. For example, rather than using a random matrix $$A\in\mathbb{Z}_q^{n\times n}$$ (as we might in LWE-based constructions), we can replace this matrix with one whose first column (or row) is random and whose remaining columns are cyclic rotations of it:

$$\begin{pmatrix} a_1 & \cdots \end{pmatrix}$$ ...
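The rotation structure above can be made concrete with a small sketch. Assuming the ring $\mathbb{Z}_q[X]/(X^n+1)$ (common in RLWE), the rotations are negacyclic: entries that wrap past the top pick up a minus sign. The helper name `negacyclic_matrix` is hypothetical:

```python
import numpy as np

def negacyclic_matrix(a, q):
    """Matrix of multiplication by a(X) in Z_q[X]/(X^n + 1).
    Column j holds the coefficients of X^j * a(X); coefficients that
    wrap around X^n pick up a minus sign (negacyclic rotation)."""
    n = len(a)
    A = np.zeros((n, n), dtype=np.int64)
    for j in range(n):
        for i in range(n):
            k = i - j
            if k >= 0:
                A[i, j] = a[k] % q
            else:
                A[i, j] = (-a[k + n]) % q
    return A
```

Multiplying this matrix by a coefficient vector agrees with polynomial multiplication modulo $X^n+1$ and $q$, which is why storing only the first column suffices.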

Score: 1 Equivalence between search-LWE and decision-LWE Are there any constraints when it comes to proving that search-LWE and decision-LWE are equivalent? Should we assume that the modulus $$q$$ is prime when switching from one version to the other?

Please give a good reference where proof exists.

Score: 1 Why $q$ in LWE must be polynomial in $n$ I am wondering why the modulus $$q$$ in the LWE problem has to be polynomial in $$n$$.

Another question is whether one can take it to be an arbitrary integer instead of a prime number.

Score: 0 NTL: Solve the closest vector problem for a non-square matrix using LLL/Nearest Plane Algorithm Assume I have a matrix $$A \in \mathbb{Z}^{m \times n}$$, $$m > n$$, which forms a basis of a lattice. Given a target vector $$t = Ax + e$$, with $$t, e \in \mathbb{Z}^m$$ and $$x \in \mathbb{Z}^n$$, I want to find the (approximate) closest vector in the lattice $$\mathcal{L}(A)$$ to $$t$$.

I wanted to use Babai's nearest plane algorithm, in particular the NTL implementation NTL::NearVector to solve this problem (a ...
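For reference, the algorithm the question mentions can be sketched in plain numpy. This is not the NTL implementation being asked about, just a minimal version of Babai's nearest-plane procedure; for good approximation quality the basis should first be LLL-reduced, which this sketch omits:

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the columns of B (no normalization)."""
    Bs = B.astype(float).copy()
    for i in range(B.shape[1]):
        for j in range(i):
            mu = Bs[:, i] @ Bs[:, j] / (Bs[:, j] @ Bs[:, j])
            Bs[:, i] -= mu * Bs[:, j]
    return Bs

def babai_nearest_plane(B, t):
    """Return a lattice vector of L(B) close to target t.
    B is m x n with the basis vectors as columns, m >= n."""
    Bs = gram_schmidt(B)
    b = t.astype(float).copy()
    v = np.zeros(B.shape[0])
    for i in reversed(range(B.shape[1])):
        # project the residual onto the i-th Gram-Schmidt vector and round
        c = round(b @ Bs[:, i] / (Bs[:, i] @ Bs[:, i]))
        b -= c * B[:, i]
        v += c * B[:, i]
    return v
```

Note that nothing here requires the basis to be square: the Gram-Schmidt vectors live in $\mathbb{R}^m$ while there are only $n$ of them.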

Score: 2 SIS vs LWE Problem Ajtai's one-way function is defined by

$$f_A(x) = Ax \bmod q,$$ where $$x \in \{0,1\}^m$$ and $$A \in \mathbb{Z}_q^{n \times m}$$. $$f_A(x)$$ is a one-way function (Ajtai 96).

Regev's one-way function (Regev 05), on the other hand, is defined over $$x \in \mathbb{Z}_q^k$$, $$e \in \mathscr{E}^m$$ and $$A \in \mathbb{Z}_q^{m \times k}$$. The one-way function is defined as

$$g_A(x,e) = Ax + e \bmod q \quad (\text{LWE})$$
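The two functions can be contrasted with a toy instantiation. The parameters below are illustrative only (far too small for any security), and the small uniform error is a stand-in for the discrete Gaussian usually used in LWE:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, q = 8, 24, 8, 97  # toy parameters, not secure

# Ajtai / SIS: f_A(x) = A x mod q, with A wide (n x m) and x binary.
A = rng.integers(0, q, size=(n, m))
x = rng.integers(0, 2, size=m)
f_Ax = (A @ x) % q

# Regev / LWE: g_A(s, e) = A s + e mod q, with A tall (m x k),
# s uniform mod q, and e a small error vector.
B = rng.integers(0, q, size=(m, k))
s = rng.integers(0, q, size=k)
e = rng.integers(-2, 3, size=m)  # small error, stand-in for a discrete Gaussian
g_Bse = (B @ s + e) % q
```

The shape difference is the point: SIS compresses a long binary input (collisions exist, finding one is hard), while LWE expands a short secret and hides it with noise.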

Score: 0 Is there a way to combine a fuzzy extractor (or SS + Ext) with a state-of-the-art deep learning model? I recently reviewed biometric authentication with deep learning models, and I found that, in cryptography, fuzzy extractors (FE), or a secure sketch (SS) plus a strong extractor, already solve this problem well enough based on error-correcting codes. Is there any research that combines the two? For deep learning models, it is natural to give a good representation for the input biometric object (e.g. face image ...

Score: 4 Prove that a small Ring-LWE secret is unique I just want to know whether my proof, which shows that if a Ring-LWE secret is small then it is unique, is correct. Before giving my proof, here is a fact:

Fact 1: $$\Pr\left[\Vert r \Vert_\infty \leq \beta : r \xleftarrow{\$} R_q\right] \leq \left(\dfrac{2\beta+1}{q}\right)^n$$, where $$R_q=\mathbb{Z}_q[X]/(X^n+1)$$, $$n$$ is a power of two, $$q$$ is a prime and $$\beta$$ is some positive real number ...
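Fact 1 can be sanity-checked empirically for toy parameters. Each coefficient of a uniform $r \in R_q$ lies in $[-\beta, \beta]$ with probability exactly $(2\beta+1)/q$ for integer $\beta < q/2$, so the bound holds with equality in this setting; a Monte Carlo sketch (parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, beta = 4, 17, 3       # toy parameters
trials = 200_000

# sample coefficient vectors uniformly from the centered range [-(q-1)/2, (q-1)/2]
coeffs = rng.integers(-(q - 1) // 2, (q - 1) // 2 + 1, size=(trials, n))

# empirical probability that the infinity norm is at most beta
hits = np.all(np.abs(coeffs) <= beta, axis=1).mean()
bound = ((2 * beta + 1) / q) ** n
print(hits, bound)  # the two values should be close
```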

Score: 0 Is my proof about uniqueness of the ring-LWE secret correct? Suppose that $$n$$ is a power of two, $$q \equiv 3 \pmod 8$$ is prime, and $$R=\mathbb{Z}[X]/(X^n+1)$$. Denote by $$\Vert\cdot\Vert$$ the infinity norm on the coefficients of elements of $$R_q=R/qR$$. The coefficients are assumed to lie in $$[-\frac{q-1}{2},\frac{q-1}{2}]$$. I'll just cite some facts that I will use in my proof:

1. $$X^n+1$$ factors into two irreducible factors modulo $$q$$, where each factor is of de ...

Score: 2 Proof that (ring-)LWE secret is unique I read Regev's 2005 paper about Learning with Errors, where he mentions that the secret of an LWE sample is unique, but I have not seen a proof of this claim. Can someone point me to a paper proving it? Also, in the ring-LWE case, in particular for power-of-two cyclotomics, is the secret always unique?

Score: 0 Small modulus-to-noise ratio in LWE implies better security I don't quite understand why a smaller quotient between the modulus $$q$$ and the noise's standard deviation implies better security against known attacks.

Score: 2 Parameters in RLWE Let $$n, q, \sigma$$ be the polynomial degree (of $$x^n+1$$), the coefficient modulus, and the standard deviation, respectively. For RLWE, we can use the CRT to decompose $$\text{RLWE}_{q}$$ into some $$\text{RLWE}_{q_i}$$ for $$1\leq i\leq l$$, where $$q = q_1 q_2\cdots q_l$$. When we consider the security of RLWE, should we take $$\log q$$ or $$\log q_i$$ into account?
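The CRT decomposition mentioned in the question can be sketched for a single coefficient. The moduli below are hypothetical toy choices; this only illustrates the residue-number-system view used in RLWE implementations, not an answer to the security question:

```python
# CRT: an element of Z_q with q = q1 * q2 (q1, q2 coprime) is equivalent
# to its pair of residues (x mod q1, x mod q2); RNS implementations of
# RLWE store each coefficient in this decomposed form.
q1, q2 = 97, 193          # toy coprime moduli (hypothetical choices)
q = q1 * q2

def to_crt(x):
    return (x % q1, x % q2)

def from_crt(x1, x2):
    # Garner-style reconstruction: x = x1 + q1 * t with t = (x2 - x1)/q1 mod q2
    inv = pow(q1, -1, q2)  # q1^{-1} mod q2 (Python 3.8+)
    return (x1 + ((x2 - x1) * inv % q2) * q1) % q

x = 12345 % q
assert from_crt(*to_crt(x)) == x
```

Arithmetic mod $q$ then reduces to independent, cheaper arithmetic mod each $q_i$.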

Score: 2 The error distribution in LWE $$\textbf{Continuous LWE}$$: $$(\overrightarrow{a}, b)\in \mathbb{Z}_q^n\times \mathbb{T}$$, where $$\mathbb{T}=\mathbb{R}/\mathbb{Z}$$ and $$b = \langle \overrightarrow{a},\overrightarrow{s}\rangle/q + e \bmod 1$$, where the error $$e$$ is sampled from $$\Psi_\alpha(x) := \sum_{k=-\infty}^{\infty}\frac{1}{\alpha}\cdot \exp\left(-\pi\left(\frac{x-k}{\alpha}\right)^2\right),\ x\in [0,1)$$ over the torus $$\mathbb{T}$$. The density function
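Sampling from $\Psi_\alpha$ can be sketched as follows: $\Psi_\alpha$ is the Gaussian of width $\alpha$ (density $\frac{1}{\alpha}\exp(-\pi x^2/\alpha^2)$, standard deviation $\alpha/\sqrt{2\pi}$) wrapped around the torus $\mathbb{R}/\mathbb{Z}$, so one draws from the continuous Gaussian and reduces modulo 1. A minimal sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_psi(alpha, size):
    """Sample from Psi_alpha: a Gaussian with density (1/alpha)exp(-pi x^2/alpha^2),
    i.e. standard deviation alpha/sqrt(2*pi), wrapped onto [0, 1)."""
    return rng.normal(0.0, alpha / np.sqrt(2 * np.pi), size=size) % 1.0

alpha = 0.01
e = sample_psi(alpha, 100_000)

# fold back to (-1/2, 1/2] to view the error as a signed quantity
signed = np.where(e > 0.5, e - 1.0, e)
print(signed.std())  # should be close to alpha/sqrt(2*pi)
```

For small $\alpha$ the wrapping is negligible, so the empirical standard deviation matches that of the unwrapped Gaussian.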