Score:2

Why does CKKS decryption have approximate correctness?


I'm looking at the following text:

[screenshot of the quoted text, not transcribed]

Why does CKKS decryption have an approximate correctness requiring that $\|u + e\| < {q / 2}$?

I mean, if $\|u + e\| \ge q/2$, how can I show that CKKS decryption fails to be approximately correct?

Score:2

All of lattice-based cryptography has restrictions similar to this. The easiest way to understand it is that lattice-based cryptography is implicitly made up of two parts:

  • the encryption part, for example (in secret-key encryption, i.e. the simplest setting) $\mathsf{Enc}_s(m) = (A, As + e + m)$

  • the error-correction part: in that same secret-key setting, we encrypt $(q/2)m$ rather than $m$.

In this simplified example, we include the error-correction step because without it decryption fails. In particular,

$$\mathsf{Dec}_s(\mathsf{Enc}_s(m)) = \mathsf{Dec}_s(A, b := As + m + e) = b - As = m + e \neq m.$$

We can't decrypt precisely because of the presence of the error $e$. So we need a method of removing this error. In particular, encoding $m\mapsto (q/2)m$ lets us decode $c :=(q/2)m+e$ by rounding $c\mapsto \lfloor c/(q/2)\rceil$. This is equivalent to treating $c$ as an expression over $\mathbb{Z}$ (not $\mathbb{Z}_q$), and using standard techniques from coding theory (namely solving CVP on the lattice $(q/2)\mathbb{Z}^m$).
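The encode-then-round recipe above can be sketched as a short script. This is a toy illustration only: the modulus is an assumed insecure toy value, and the error is sampled directly rather than arising from an actual LWE ciphertext.

```python
import random

q = 256                          # toy modulus (assumed, not secure)

def encode(m):                   # m in {0, 1}
    return (q // 2) * m

def add_error(c, bound):
    # stand-in for the LWE error e, with |e| <= bound
    return (c + random.randint(-bound, bound)) % q

def decode(c):
    # round to the nearest multiple of q/2 (CVP on the lattice (q/2)Z),
    # then read off the bit
    return round(c / (q // 2)) % 2

for m in (0, 1):
    c = add_error(encode(m), bound=q // 4 - 1)   # error below the q/4 threshold
    assert decode(c) == m                        # decryption succeeds
```

As long as the error stays strictly below the $q/4$ rounding threshold, the bit is always recovered correctly.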

CKKS can be viewed as essentially the same construction, except that

  • you do FHE things (this doesn't change much for the purposes of this question), and
  • the encoding $m\mapsto (q/2)m$ (or more generally $m\mapsto (q/2^k)m$) isn't injective, but is instead "lossy".

By this, I mean that there are multiple values $m \neq m'$ that are encoded to the same value by the error-correction step. This means that the final result (after error-correction) may be "wrong", but wrong in a particular way: it is the right answer, but for a computation done on a wrong (lower-precision) input. I'm pretty sure this is related to the notion of backward numerical stability, but I'm not really an expert in that.
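The lossiness can be sketched numerically. The scale `Delta` below is an assumed toy stand-in for CKKS's fixed-point encoding scale; distinct real inputs collapse to the same integer encoding, so decoding returns the exact answer for a slightly perturbed (lower-precision) input.

```python
Delta = 2 ** 10                  # toy encoding scale (assumed)

def encode(x):
    return round(x * Delta)      # many nearby x map to the same integer

def decode(c):
    return c / Delta

x1, x2 = 0.12345678, 0.12345999  # distinct inputs...
assert encode(x1) == encode(x2)  # ...with the same (lossy) encoding

# the decoded value is exact for an input within 1/(2*Delta) of x1
assert abs(decode(encode(x1)) - x1) <= 1 / (2 * Delta)
```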

Score:0

Here's a simple example (in Chinese): https://zhuanlan.zhihu.com/p/77478956

In summary, the decoder only recovers the result modulo 2 (its odd/even part), so once the error overflows the rounding threshold you get the wrong answer/message.
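This can be made concrete with the rounding decoder $c\mapsto\lfloor c/(q/2)\rceil$ from the answer above; a minimal sketch with a toy modulus (parameters assumed for illustration):

```python
q = 256                          # toy modulus (assumed, not secure)

def decode(c):
    # the rounding decoder: round(c / (q/2)), reduced mod 2
    return round(c / (q // 2)) % 2

m, e = 1, 70                     # error past the q/4 rounding threshold
c = ((q // 2) * m + e) % q       # c = 198
assert decode(c) != m            # the recovered bit is wrong
```

With a small error such as $e = 10$ the same decoder would return the correct bit; only the overflow past $q/4$ breaks it.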

Mark:
Given that this is an English-language site, it might be useful to mention that this link is in Chinese.