
What is the relationship between the ciphertext quotient and the polynomial degree in RLWE?


In the Ring Learning with Errors problem, the size of the ciphertext quotient $q$ determines the size of the polynomial degree $n$, or vice versa. In other words, the RLWE problem is hard only when the polynomial degree is set appropriately relative to the quotient. However, I am not sure what the relationship between the values of these two parameters is.

Can someone explain to me or point me to some resource?

Mark:
Can you elaborate what you mean? It seems to me you might be talking about *module* LWE, over a ring $R$ that has some non-trivial rank as an abelian group. But I'm not entirely sure.
muhammad haris:
Hey @Mark, I edited the question. I am talking about the polynomial degree, which is also called the ciphertext degree in some literature. Sorry for the confusion.
Answer:

There's not really a "relationship" between these two parameters, due to what is known as modulus switching. Roughly, given an LWE instance $\bmod q$, one can change it into an LWE instance $\bmod p$ at relatively little cost, for a wide variety of $p$. There are many results along these lines, but I will describe one from *Worst-case to Average-case Reductions for Module Lattices*.

Let $\psi$ be some probability distribution on $\mathbb{T}_{R^\vee}$, and $s\in(R^\vee_q)^d$ be a vector. We define $A^{(M)}_{q,s,\psi}$ as the distribution on $(R_q)^d \times \mathbb{T}_{R^\vee}$ obtained by choosing a vector $a\in(R_q)^d$ uniformly at random and $e \in \mathbb{T}_{R^\vee}$ according to $\psi$, and returning $(a, \frac{1}{q}\langle a, s\rangle + e)$.

MLWE: For an integer $q \geq 2$ and a distribution $\Psi$ over a family of distributions over $K_\mathbb{R}$, the decision version of the Module Learning With Errors problem $M\text{-}LWE_{q, \Psi}$ is as follows: let $s \in (R^\vee_q)^d$ be uniformly random and $\psi$ be sampled from $\Psi$; the goal is to distinguish between arbitrarily many independent samples from $A^{(M)}_{q, s, \psi}$ and the same number of independent samples from $U(R^d_q \times \mathbb{T}_{R^\vee})$.

This is more general than RLWE, and reduces to RLWE when $d = 1$. The family of distributions $\Psi_\alpha$ is a certain elliptical Gaussian distribution; see section 2.3 of the paper.
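To make the sample distribution concrete, here is a toy Python sketch with $R_q = \mathbb{Z}_q[X]/(X^n+1)$ and the torus component $\mathbb{T}_{R^\vee}$ approximated by floating-point coefficients in $[0,1)$. All parameter values and the error width are illustrative placeholders, not secure choices:

```python
import random

# Toy sketch of sampling from A_{q,s,psi} for MLWE over
# R_q = Z_q[X]/(X^n + 1); the torus is approximated by floats in [0, 1).
# n, q, d, and sigma below are illustrative only, not secure parameters.
n, q, d = 8, 97, 2

def poly_mul(f, g):
    """Negacyclic product of two length-n coefficient vectors mod X^n + 1."""
    res = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < n:
                res[i + j] += fi * gj
            else:
                res[i + j - n] -= fi * gj  # uses X^n = -1
    return res

def mlwe_sample(s, sigma=0.01):
    """Return one sample (a, b) with b ~ <a, s>/q + e on the torus."""
    a = [[random.randrange(q) for _ in range(n)] for _ in range(d)]
    b = [0] * n
    for ai, si in zip(a, s):            # inner product <a, s> in R_q
        b = [x + y for x, y in zip(b, poly_mul(ai, si))]
    e = [random.gauss(0, sigma) for _ in range(n)]
    return a, [((bi % q) / q + ei) % 1.0 for bi, ei in zip(b, e)]

s = [[random.randrange(q) for _ in range(n)] for _ in range(d)]
a, b = mlwe_sample(s)
```

Distinguishing many such samples from uniform pairs is exactly the decision problem stated above.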

Anyway, the modulus switching result is theorem 4.8. Here, $N = nd$ is the "total dimension" of the MLWE instance. Setting $d = 1$ recovers the case of RLWE that you are interested in.

> Let $p, q \in [2, 2^{N^{O(1)}}]$ and $\alpha, \beta \in (0, 1)$ such that $\beta \geq \alpha \max(1, \frac{q}{p})n^{1/4}N^{1/2}\omega(\log_2 N)$ and $\alpha q \geq \omega(\sqrt{\log(N)/n})$. Then there exists a polynomial-time reduction from $M\text{-}LWE_{q,\Psi_\alpha}$ to $M\text{-}LWE_{p,\Psi_\beta}$.

This is all to say that you can reduce from an arbitrary modulus $q$ to another arbitrary modulus $p$, at the cost of inflating the noise rate $\alpha \mapsto \max(1, \frac{q}{p})\alpha\sqrt{N}\omega(\log_2 N)$. This isn't totally free (there is an additional $\sqrt{N}$ factor), but given that the moduli $q, p$ are typically small polynomials in $N$, the cost you pay is relatively small compared to the parameter sizes.
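The scaling step underlying modulus switching can be sketched coefficient-wise (toy values, not a security claim): each component mod $q$ is mapped to $\lfloor \frac{p}{q}x \rceil \bmod p$, and the rounding is exactly where the extra noise comes from.

```python
# Toy sketch of the scaling behind modulus switching: each value mod q
# is mapped to round(p*x/q) mod p.  The rounding error per coordinate is
# at most 1/2, which is the source of the extra noise term above.

def mod_switch(x: int, q: int, p: int) -> int:
    """Map x in Z_q to Z_p by scaling and rounding to the nearest integer."""
    return round(p * x / q) % p

q, p = 2**15, 2**10          # illustrative moduli
coeffs = [12345, 30000, 7]
switched = [mod_switch(x, q, p) for x in coeffs]

# The relative position in the ring is approximately preserved:
for x, y in zip(coeffs, switched):
    assert abs(y - p * x / q) <= 0.5
```

The same map applied to every coefficient of an (R)LWE sample yields the switched instance mod $p$.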

As a result, there really isn't a relationship between solely the ciphertext modulus (as it is typically called, not the ciphertext quotient) and the dimension, as any relationship also needs to take into account the size of the error distribution.

As for how to actually set all of these things, people typically feed their parameters into the LWE Estimator, which gives some bit security estimate for each particular parameter set.

muhammad haris:
Thanks for the response. Yes, I understand that you can switch from mod $q$ to mod $p$ when $p<q$ with no reduction in security, but is that also true when $p>q$? According to my understanding and practice, that would reduce the security for a fixed $N$.
muhammad haris:
In other words, if I take two RLWE samples with the same $N$ and errors sampled from the same distribution, but the ciphertext modulus of the first sample is $q$ and that of the other is $p$, where $p>q$, will they have the same security level? I believe the RLWE sample with modulus $q$ will have a higher security level, and that's what I want to understand: why is that?
Mark:
When changing moduli, it is best to think in terms of the ratio $\sigma/q$, i.e. how big $\sigma$ is *relative* to $q$. It doesn't sound like you are doing this, which can cause issues. In particular, if you increase $q$ to some $q' > q$ without simultaneously changing $\sigma$, the ratio $\sigma/q'$ shrinks, reducing security. If you decrease $q$ to some $q' < q$ without changing $\sigma$, the ratio $\sigma/q'$ grows, increasing security. To determine precisely how much the security is (estimated to) change, use the LWE estimator.
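Numerically, the point is easy to see. The values below are purely illustrative ($\sigma = 3.2$ is a common choice in practice but is used here only as an example):

```python
# Illustration: sigma/q, not sigma or q alone, is the noise *rate*.
sigma = 3.2                      # example error standard deviation
q, q_bigger = 2**30, 2**60       # two hypothetical ciphertext moduli

# Growing the modulus at fixed sigma shrinks the noise rate,
# which reduces security for a fixed dimension N.
assert sigma / q_bigger < sigma / q
```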
muhammad haris:
Okay, got it. So there is no closed-form equation that gives a relationship between $N$, $q$, $\sigma$, and the security level?
Mark:
There is, but it is easiest to use the LWE estimator. Roughly, the equation is $\mathsf{seclevel}(N, q, \sigma) = \min_i f_i(N, q, \sigma)$, where each $f_i(N, q, \sigma)$ describes the cost of a particular attack on (R)LWE. For more on each $f_i$, you can read the [NewHope specification, section 4.2](https://www.newhopecrypto.org/data/NewHope_2020_04_10.pdf). NewHope was the most popular RLWE-based cryptosystem submitted to NIST's PQC standardization. It did not advance to the final round, as NIST preferred MLWE and "plain LWE" over RLWE.
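The shape of that equation can be sketched as follows. The cost functions here are hypothetical placeholders, not the actual attack models from the NewHope specification or the LWE estimator:

```python
import math

# Toy shape of seclevel(N, q, sigma) = min_i f_i(N, q, sigma).
# Both f_i below are hypothetical stand-ins; real cost models for the
# lattice attacks on (R)LWE live in the LWE estimator.

def f_attack_one(N, q, sigma):
    return 0.30 * N / math.log2(q / sigma)   # placeholder cost model

def f_attack_two(N, q, sigma):
    return 0.27 * N / math.log2(q / sigma)   # placeholder cost model

def seclevel(N, q, sigma):
    """Security is bounded by the cheapest known attack."""
    return min(f(N, q, sigma) for f in (f_attack_one, f_attack_two))
```

Even with these placeholders, the qualitative behavior discussed above appears: growing $q$ at fixed $N$ and $\sigma$ lowers the estimate.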