Homomorphic Encryption CKKS scheme, Encoding: canonical embedding discretization


I'm trying to understand CKKS (without bootstrapping) and I'm struggling with the encoding part. I'm using the original paper, "Homomorphic Encryption for Arithmetic of Approximate Numbers", the notes "Lattices, Homomorphic Encryption, and CKKS", and a blog post.

I have four questions (they will make sense after the context below):

  1. Is my understanding of the projection correct?
  2. Are the final steps trivial?
  3. Why does rounding the coordinates give the coefficients of the polynomial?
  4. Why do they use a "special" rounding algorithm that is random?

Context: Given a vector $z\in \mathbb{C}^{N/2}$, they use the inverse canonical embedding $\sigma^{-1}$ to map it to $\mathcal{R} =\mathbb{Z}[X]/(X^N+1)$; basically $\sigma^{-1}:\mathbb{C}^{N/2}\to\mathcal{R}$. First they expand $z$ by mirroring it with the complex conjugates, so $\pi^{-1}(z)\in\mathbb{H}\subset \mathbb{C}^{N}$, where $\mathbb{H}$ is the set of all complex vectors with this conjugate-mirror property. Because $\pi^{-1}(z)$ may not lie in $\sigma(\mathcal{R})$, they discretize it by projecting it onto the basis of $\sigma(\mathcal{R})$. Up to here, all good.
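As a concrete illustration (a minimal NumPy sketch, not from the paper; the choice $N=8$ and the ordering of the roots are my own assumptions), the conjugate-mirror expansion $\pi^{-1}$ and the canonical embedding $\sigma$ can be checked numerically:

```python
import numpy as np

N = 8  # ring dimension (power of two); number of message slots is N // 2

# X^N + 1 vanishes exactly at the primitive 2N-th roots of unity
# zeta_j = exp(i*pi*(2j+1)/N); note that zeta_{N-1-j} = conj(zeta_j).
roots = np.exp(1j * np.pi * (2 * np.arange(N) + 1) / N)

def pi_inverse(z):
    """Expand z in C^{N/2} to H inside C^N by mirroring with complex conjugates."""
    return np.concatenate([z, np.conj(z[::-1])])

def sigma(coeffs):
    """Canonical embedding: evaluate the polynomial at every root of X^N + 1."""
    return np.array([np.polyval(coeffs[::-1], r) for r in roots])

z = np.array([1 + 2j, 3 - 1j, 0.5j, -2.0])
w = pi_inverse(z)
assert np.allclose(w, np.conj(w[::-1]))  # w has the conjugate-mirror property

m = np.array([1.0, 0.0, 2.0, -1.0, 0.0, 0.0, 3.0, 0.0])  # real coefficients
assert np.allclose(sigma(m), np.conj(sigma(m)[::-1]))    # sigma(R) lies inside H
```

The second assertion is the reason $\mathbb{H}$ shows up at all: any polynomial with real (in particular integer) coefficients lands in $\mathbb{H}$ under $\sigma$.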

For this they take the basis vectors $\alpha_i$ and project $z$ onto them. First question: is this correct?

\begin{equation} z_{proj} = \sum_{i=0}^{N-1} \frac{\langle z,\alpha_i\rangle}{\langle\alpha_i,\alpha_i\rangle}\alpha_i \end{equation}

where $\langle a,b\rangle$ is the Hermitian inner product. However, they apply a rounding algorithm ("coordinate-wise random rounding") to each $\frac{\langle z,\alpha_i\rangle}{\langle\alpha_i,\alpha_i\rangle}$.
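To make the projection formula concrete, here is a hedged NumPy sketch (my construction, for illustration): I take the basis vectors to be $\alpha_i = \sigma(X^i)$, i.e. the columns of the Vandermonde matrix of the roots of $X^N+1$. These $\alpha_i$ turn out to be pairwise orthogonal with $\langle\alpha_i,\alpha_i\rangle = N$, which is why the quotients in the formula act as an orthogonal projection:

```python
import numpy as np

N = 8
roots = np.exp(1j * np.pi * (2 * np.arange(N) + 1) / N)
A = np.vander(roots, N, increasing=True)  # columns are alpha_i = sigma(X^i)

# The basis is orthogonal: A^H A = N * I
assert np.allclose(A.conj().T @ A, N * np.eye(N))

def project_coords(w):
    """Coordinates <w, alpha_i> / <alpha_i, alpha_i> of w in the basis of sigma(R)."""
    return np.array([np.vdot(A[:, i], w) / np.vdot(A[:, i], A[:, i]).real
                     for i in range(N)])

# If w already lies in sigma(R), the coordinates are the polynomial coefficients
coeffs = np.array([1.0, -2.0, 0.0, 3.0, 0.0, 1.0, 0.0, -1.0])
w = A @ coeffs
assert np.allclose(project_coords(w), coeffs)
```

The last assertion previews the third question: for a point of $\sigma(\mathcal{R})$, the projection coordinates are exactly the polynomial's coefficients.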

Finally, if we call $A$ the matrix whose columns are the basis vectors $\alpha_i$, write $\lfloor x \rceil$ for $x$ rounded with that algorithm, and consider the vector $z' = (\lfloor\langle z, \alpha_0\rangle/\langle\alpha_0,\alpha_0\rangle\rceil,\dots, \lfloor\langle z, \alpha_{N-1}\rangle/\langle\alpha_{N-1},\alpha_{N-1}\rangle\rceil)$, we can rewrite the projection as:

\begin{equation} z_{roundProj} = Az' \end{equation}

So if we then want to apply $\sigma^{-1}$ (i.e. recover the coefficients $\gamma_i$ of the polynomial), it amounts to solving the system of linear equations:

\begin{equation} A\gamma=z_{roundProj} \rightarrow \gamma=A^{-1}z_{roundProj} = A^{-1}Az'=z' \rightarrow \gamma=z' \end{equation}

So the step of multiplying $A$ with $z'$ to "complete" the projection, only to then solve the system of linear equations, is not necessary. Second question: is this correct?
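The claim that the detour through $A$ is redundant ($\gamma = A^{-1}Az' = z'$) can be verified numerically; a minimal sketch, with $A$ again the Vandermonde matrix of the roots of $X^N+1$ (my choice of basis, for illustration):

```python
import numpy as np

N = 8
roots = np.exp(1j * np.pi * (2 * np.arange(N) + 1) / N)
A = np.vander(roots, N, increasing=True)  # columns are the basis vectors alpha_i

rng = np.random.default_rng(0)
z_prime = rng.integers(-5, 6, size=N).astype(float)  # some rounded coordinates

z_round_proj = A @ z_prime                # "complete" the projection ...
gamma = np.linalg.solve(A, z_round_proj)  # ... then solve A * gamma = z_roundProj
assert np.allclose(gamma, z_prime)        # gamma == z': the round trip is redundant
```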

Third question: if this is correct, why does rounding each $\frac{\langle z,\alpha_i\rangle}{\langle\alpha_i,\alpha_i\rangle}$ give each coefficient of the polynomial?

Fourth question: why do we need a special rounding algorithm instead of the usual one (round to the nearest integer), if at this stage there is no security issue?

Answer (by Mark):

This is only an answer to the 4th question.

While we could apply the (deterministic) map $x\mapsto \lfloor x\rceil$, it has some issues. In particular, the error $e := \lfloor x\rceil - x$ from applying it depends on $x$, which is not great.

Coordinate-wise random rounding fixes this. In particular, it satisfies the correctness property that

$$\mathbb{E}[\mathsf{randround}(x)] = x$$

While there is non-trivial variance (i.e. the rounding algorithm is not deterministic), this is perhaps a slightly better situation to be in (and additionally, the variance may be independent of $x$ --- I would have to check though).
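A minimal sketch of such a rounding (I'm assuming the standard unbiased construction — round down with probability $1 - \mathrm{frac}(x)$, up with probability $\mathrm{frac}(x)$ — which matches coordinate-wise random rounding up to distribution details), checking the unbiasedness $\mathbb{E}[\mathsf{randround}(x)] = x$ empirically:

```python
import numpy as np

rng = np.random.default_rng(42)

def randround(x):
    """Randomized rounding: round x down with probability 1 - frac(x)
    and up with probability frac(x), so that E[randround(x)] = x."""
    floor = np.floor(x)
    frac = x - floor
    return floor + (rng.random(np.shape(x)) < frac)

x = 2.3
samples = randround(np.full(200_000, x))
assert set(np.unique(samples)) <= {2.0, 3.0}  # only the two neighbouring integers
assert abs(samples.mean() - x) < 0.01         # empirically unbiased
```

Note that a single sample has variance $\mathrm{frac}(x)(1-\mathrm{frac}(x))$, so the error is zero-mean but its spread does depend on the fractional part of $x$.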

Comment (mmazz): Obviously I'm new to crypto, but why is it not great that the error depends on $x$? I thought the encoding part was not where security comes into play.
Comment (Mark): It doesn't, but when analyzing error growth you then either need to apply worst-case bounds on $x$ (needlessly lossy) or appeal to heuristics. Randomized rounding lets you do a provable analysis. I think the provable bounds are typically larger than experimentally observed errors, so in practice people just use the heuristics. But if you want fully provable statements, randomized rounding can be useful.