Score:2

Avoid CKKS Bootstrapping


CKKS is a levelled scheme because the rescale operation $\lfloor\frac{x}{\Delta}\rceil$ requires dropping a modulus to be evaluated efficiently, and rescale is (usually) needed after every multiplication to control noise growth.

But I don't understand why rescale has to decrease the ciphertext modulus. In a residue number system it is probably hard to divide without removing a modulus, but would it be right that under a high-precision (big-integer) arithmetic system, we could just naively divide each coefficient of the ciphertext by $\Delta$ (with integer division), hence achieving FHE without the need for bootstrapping?

Thanks.
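To make the question's premise concrete, here is a minimal plaintext fixed-point sketch (no encryption and no polynomial ring; `DELTA`, `encode`, and `mul_and_rescale` are illustrative names, not CKKS API): after a multiplication the scale becomes $\Delta^2$, and rescale is exactly the rounded integer division by $\Delta$, which Python's arbitrary-precision integers handle natively.

```python
# Toy fixed-point sketch (plaintext only, NOT real CKKS):
# encode reals as round(x * DELTA), multiply, rescale by dividing by DELTA.
DELTA = 1 << 30  # scaling factor; an illustrative choice

def encode(x):
    return round(x * DELTA)

def decode(c):
    return c / DELTA

def mul_and_rescale(a, b):
    # After multiplication the scale is DELTA**2; rounded integer
    # division by DELTA brings the scale back down to DELTA.
    prod = a * b
    return (prod + DELTA // 2) // DELTA

a, b = encode(1.5), encode(2.25)
c = mul_and_rescale(a, b)
print(decode(c))  # → 3.375
```

Note that the division here acts only on the scale; the question is why the *ciphertext modulus* must shrink as well, which the answers below address.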

Score:0

If your question is why we don't implement the CKKS scheme in the ring $\mathbb R[X]/(X^n+1)$ rather than in $R/qR$ where $R=\mathbb Z[X]/(X^n+1)$, it is because the learning with errors problem can be solved over $\mathbb R$, e.g. by the linear least-squares method.

xade93:
No, the scheme still works with integers modulo $q$. By high-precision arithmetic I mean that rather than storing coefficients in a residue number system (via the Chinese remainder theorem), we store each coefficient naively as a single big integer (e.g. `BigInteger` in Java) and perform rescale by naive integer division.
Mark:
@xade93 you'll run out of precision eventually. All FHE operations have some "precision loss cost" (ish). The only way known to regain precision is bootstrapping. If you have some fixed computation you want to do though, setting parameters large enough so you can tolerate the precision loss without bootstrapping can be a decent efficiency optimization.
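A plaintext toy illustrating this comment (the starting value and the repeated-squaring circuit are arbitrary choices for illustration, not part of CKKS): the small error introduced when encoding is amplified by every multiplication, so precision is lost at each level whether or not the rescale truncates low bits.

```python
# Precision loss per multiplication level (plaintext sketch, NOT real CKKS).
DELTA = 1 << 30
x = round(1.1 * DELTA)  # encoding already carries a tiny rounding error
exact = 1.1
errs = []
for depth in range(5):
    # Square, then rescale with rounded integer division (truncates LSBs).
    x = (x * x + DELTA // 2) // DELTA
    exact = exact * exact
    errs.append(abs(x / DELTA - exact))

# The absolute error grows with every level of the circuit.
for depth, err in enumerate(errs):
    print(depth, err)
```

The error keeps growing no matter how the low bits are handled, which is the "precision loss cost" above; only bootstrapping can reset it.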
xade93:
@Mark Oh, that makes sense! So is it right to say that since the error grows by at least 1 bit each time, regardless of whether we truncate the LSBs, we can only evaluate a circuit of depth linear in the modulus size without bootstrapping? Then we might as well truncate, to prevent the ciphertext size from growing exponentially with depth. That would mean the modulus-shrinking step is not necessary for correctness; it exists merely to prevent growth of the ciphertext.
Mark:
Not exactly. Modulus switching was actually first introduced to control noise growth during repeated multiplications. Roughly, noise often grows as $(e, e') \mapsto ee'$. If we first scale both of these down by some intermediate value $p$, this growth becomes $(e/p, e'/p)\mapsto (ee')/p^2$, i.e. *linearly* scaling down moduli incurs *quadratic* savings in noise growth. This is somewhat imprecise of course, but it shows the main idea of how it can do more than just give better efficiency. This is also because you can't "infinitely scale up $q$": larger $q\implies$ less secure LWE $\implies$ a larger $n$ is required. For any such $q$ you can probably find an $n$ that works, but it is a little complex.
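The quadratic-savings arithmetic above can be checked numerically (the noise magnitudes `e`, `e_prime` and the scaling factor `p` below are arbitrary illustrative powers of two):

```python
# Numeric sketch of the modulus-switching intuition: scaling both noise
# terms down by p before multiplying shrinks the product noise by p**2.
e, e_prime = 1 << 20, 1 << 20  # illustrative noise magnitudes
p = 1 << 10                    # intermediate scaling factor

naive = e * e_prime                   # noise after multiplying directly
switched = (e // p) * (e_prime // p)  # noise after first scaling down by p

print(naive // switched)  # → 1048576, i.e. p**2
```

So spending a linear factor $p$ of the modulus budget buys a $p^2$ reduction in product noise, which is why the modulus chain is consumed deliberately rather than kept as large as possible.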