Score:2

FHE Relinearization


I don't understand why relinearization is so significant. I understand the equations in the paper (in this post I'll be using notation from BV, but I believe it applies to BGV and BFV as well), but if anything it seems like it would be less efficient. We go from: $$h_0+\sum_ih_ix_i+\sum_{i,j}h_{i,j}x_ix_j$$ to utilizing $t$ and getting: $$h_0+\sum_ih_i(b_i-\langle a_i,t\rangle) + \sum_{i,j}h_{i,j}(b_{i,j}-\langle a_{i,j},t\rangle)$$ The paper says that this is linear in $t$, which is clear to me. However, we still have the quadratic coefficients, and now we actually have a triple sum because we have to compute the inner product $\langle a_{i,j}, t\rangle$. It looks to me like this linearity is a trick of notation rather than an actual result (which can't be right). All this leads me to believe that we were better off not doing this in the first place!

Can someone help clarify this? Thanks!

fgrieu comments:
Based on this [earlier question](https://crypto.stackexchange.com/q/104353/555), "the paper" is Zvika Brakerski and Vinod Vaikuntanathan's _Efficient Fully Homomorphic Encryption from (Standard) LWE_, in [2011 IEEE 52nd Annual Symposium on Foundations of Computer Science](https://doi.org/10.1109/FOCS.2011.12) (paywalled), also [there](https://eprint.iacr.org/2011/344).
Score:0

Relinearization is important to build a compact FHE scheme.

An FHE scheme is said to be compact if

  1. The running time of decryption, and
  2. The size of ciphertexts

are both independent of the homomorphic computation which has taken place.

It is (roughly) trivial to build non-compact FHE schemes. Take some standard encryption scheme with ciphertexts $c := \mathsf{Enc}_k(m)$. To homomorphically evaluate a circuit $C$, "compute" the ciphertext $c' := (C, c)$. Then modify decryption to compute

$$\mathsf{Dec}_k'(C, c) = C(\mathsf{Dec}_k(c)) = C(m).$$

I.e. we defer the homomorphic computation until decryption, at the cost of increasing both the size of ciphertexts and the running time of decryption.
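This deferred-evaluation construction can be sketched in a few lines of Python. The XOR "cipher" and the function names below are hypothetical stand-ins for any standard symmetric scheme, purely for illustration:

```python
# Toy non-compact "FHE": defer the circuit into the ciphertext.
# The XOR one-time-pad Enc/Dec below is a stand-in for any standard
# encryption scheme (illustration only, not a secure construction).

def enc(key, m):
    return m ^ key

def dec(key, c):
    return c ^ key

def eval_hom(circuit, c):
    # "Evaluate" C homomorphically by just attaching its description.
    return (circuit, c)

def dec_prime(key, cprime):
    # Dec'_k(C, c) = C(Dec_k(c)) = C(m): the real work happens here,
    # so decryption time and ciphertext size grow with the circuit.
    circuit, c = cprime
    return circuit(dec(key, c))

key = 0b1011
c = enc(key, 0b0110)
cprime = eval_hom(lambda m: m & 0b0011, c)   # the circuit C(m) = m AND 0b0011
assert dec_prime(key, cprime) == (0b0110 & 0b0011)
```

Note that `cprime` carries the whole circuit description, which is exactly the failure of compactness the definition above rules out.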

Why is this relevant?

For multiplication in tensor-based FHE schemes (things like BGV, B/FV, and even things like CKKS), the (rough) outline is the following.

  1. Start with ciphertexts $c_0, c_1$ such that $\langle c_i,k\rangle = \Delta m_i + e_i$, where $c_i$ are the ciphertexts, $k$ is a key, $\Delta$ is some scaling factor, and $e_i$ is the error of the computation.

  2. Compute $\langle c_0, k\rangle \langle c_1,k\rangle = \langle c_0\otimes c_1, k\otimes k\rangle = (\Delta m_0+e_0)(\Delta m_1 + e_1) = \Delta^2m_0m_1 + \Delta(e_0m_1+m_0e_1)+e_0e_1$

  3. Scale this down to $\Delta^{-1}\langle c_0\otimes c_1, k\otimes k\rangle = \Delta m_0m_1 + (e_0m_1+m_0e_1)+\Delta^{-1}e_0e_1$.

This yields a ciphertext similar (ish) to the initial ones: taking an inner product between it and the (tensored) secret key gives something like $\Delta m'+ e'$ for $m' = m_0m_1$, and $e'$ the more complex error expression one can read off directly above.
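Step 2 above is just the algebraic identity $\langle c_0\otimes c_1, k\otimes k\rangle = \langle c_0,k\rangle\langle c_1,k\rangle$, which can be sanity-checked numerically. The modulus and dimension below are arbitrary toy assumptions, not parameters of a real scheme:

```python
import random

random.seed(0)
q = 2**20                      # toy modulus (assumption)
n = 4                          # toy ciphertext dimension (assumption)

k  = [random.randrange(q) for _ in range(n)]   # toy key vector
c0 = [random.randrange(q) for _ in range(n)]   # toy ciphertext vectors
c1 = [random.randrange(q) for _ in range(n)]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def tensor(u, v):
    return [a * b for a in u for b in v]       # n^2 entries: the growth

# <c0 (x) c1, k (x) k>  ==  <c0, k> * <c1, k>   (mod q)
lhs = inner(tensor(c0, c1), tensor(k, k)) % q
rhs = (inner(c0, k) * inner(c1, k)) % q
assert lhs == rhs
print(len(c0), len(tensor(c0, c1)))            # dimension grows from 4 to 16
```

The assertion confirms that the tensored ciphertext decrypts (under the tensored key) to the product of the two decryptions, and the final line makes the dimension growth of the next paragraph concrete.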

This ciphertext has one significant issue though --- $c_0\otimes c_1$ is in a formal sense "larger" than $c_0, c_1$. This means that while we have defined a nice homomorphic operation, the ciphertext has grown as a result of this operation, and therefore we are not yet building a compact FHE scheme.

Relinearization fixes precisely this problem, and defines a way to convert these "large" $c_0\otimes c_1$ ciphertexts into "small" ciphertexts like $c_0, c_1$. One doesn't have to relinearize immediately, but the growth of these large ciphertexts is quite fast (if $c_i$ have "dimension $n$", then $c_0\otimes c_1$ has "dimension" $n^2$), so one really should relinearize fairly quickly so things don't blow up in size too much.
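The blow-up without relinearization is easy to see just by counting dimensions: each tensor-product multiplication squares the ciphertext length, whereas relinearizing after each multiplication maps it back down to $n$. A toy count (not scheme code):

```python
n = 4                       # toy base ciphertext dimension (assumption)

# Without relinearization: dimension squares at every multiplication.
dims = [n]
for _ in range(3):
    dims.append(dims[-1] ** 2)
print(dims)                 # [4, 16, 256, 65536]

# With relinearization after each multiplication: the dimension briefly
# reaches n^2 during the product, then returns to n, staying bounded.
relinearized = [n] * 4
print(relinearized)         # [4, 4, 4, 4]
```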
