Score:1

What arithmetic operations are supported by fully homomorphic encryption (FHE)?


I'm wondering what arithmetic operations are supported by FHE.

I want to know for 2nd gen (BGV, BFV), 3rd gen (GSW, CGGI), and 4th gen (CKKS)!

  • Can 3rd gen support more than AND/OR/NOT? I heard it only works on bits.

Thanks

Score:2

First, it's worth mentioning that which schemes are called "3rd gen" and which "4th gen" isn't completely standard.

2nd Gen

Computations are modeled as arithmetic circuits. Equivalently, one can compute polynomial arithmetic (modulo $p$, for some "plaintext modulus" $p$) on the (plaintext) inputs to the cryptosystem. For these schemes bootstrapping is expensive, i.e. things can be more efficient if you can bound the number of multiplications beforehand. Moreover, these schemes benefit greatly from SIMD-like techniques --- if you have one operation you want to perform (in parallel) across ~1-10k data "streams", you can make the per-stream cost of the computation much smaller.
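The SIMD behavior can be sketched at the plaintext level: each ciphertext packs a vector of "slots", and homomorphic addition/multiplication act slot-wise modulo $p$. A minimal sketch of these plaintext semantics (no actual encryption; the modulus below is an arbitrary illustrative choice):

```python
# Toy model of BGV/BFV-style SIMD batching at the plaintext level.
# Each "ciphertext" packs a vector of slots; homomorphic add/mul act
# slot-wise modulo the plaintext modulus p. No encryption is performed.
p = 257  # illustrative plaintext modulus

def simd_add(a, b):
    return [(x + y) % p for x, y in zip(a, b)]

def simd_mul(a, b):
    return [(x * y) % p for x, y in zip(a, b)]

# One homomorphic multiplication evaluates x*y in every slot at once:
xs = [1, 2, 3, 4]
ys = [10, 20, 30, 40]
print(simd_mul(xs, ys))  # [10, 40, 90, 160]
print(simd_add(xs, ys))  # [11, 22, 33, 44]
```

The per-stream cost savings come from the fact that one real homomorphic operation produces all slot results simultaneously.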

3rd Gen: GSW

GSW (in principle) supports precisely the same operations as 2nd-gen schemes. The error growth of GSW during multiplications is somewhat difficult to deal with, though --- the operation $(a,b)\mapsto a\times b$ has error that depends on the size of one of the operands. This is to say that if both operands are large, the error will be large, so it can be difficult to do general multiplications where both operands are large.
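The asymmetry can be illustrated with a toy noise model. The constant below stands in for the polynomial factor in the real bound; only the *shape* of the estimate, where the product's error scales with the message of one operand, is taken from the usual GSW analysis:

```python
# Toy model of GSW multiplicative noise growth (illustrative only;
# POLY stands in for the poly(n, log q) factor in the real bound).
POLY = 10

def mul_noise(m1, e1, e2):
    # Error of the product C1 x C2: it scales with the *message*
    # of one operand, not just the errors.
    return POLY * e1 + abs(m1) * e2

# A chain of multiplications by bits (messages in {0,1}) keeps noise small:
e_small = 1
for m in [1, 1, 1, 1]:
    e_small = mul_noise(m, 1, e_small)

# The same chain with large messages compounds the noise multiplicatively:
e_big = 1
for m in [100, 100, 100, 100]:
    e_big = mul_noise(m, 1, e_big)

print(e_small, e_big)
```

The small-message chain grows additively, while the large-message chain grows by a factor of roughly the message size per multiplication.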

This is typically fixed by appealing to

3rd Gen: DM (FHEW) / CGGI (TFHE):

Essentially GSW where operands are of the form $x^d$ for data $d$. These are always small (regardless of $d$), so the aforementioned issue with GSW is fixed. But one can only naively support arithmetic modulo the degree of the underlying cyclotomic extension (roughly). For example, if one wanted computations with 32 bits of precision, one would have to work with cyclotomics of degree $\approx 2^{32}$, which is quite large.
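The "data in the exponent" encoding can be sketched directly: a digit $d$ becomes the monomial $x^d$ in $\mathbb{Z}[x]/(x^N+1)$, and multiplication just adds exponents (negacyclically), so both operands stay small no matter what $d$ is. A minimal sketch, with $N$ chosen tiny for illustration (real parameters are in the thousands, which is what limits the digit precision):

```python
N = 16  # cyclotomic degree; tiny for illustration

def mono(d):
    # Represent digit d as the monomial x^d in Z[x]/(x^N + 1), stored as
    # a dense coefficient vector. Coefficients stay in {-1, 0, 1}
    # regardless of how large d is (exponents live mod 2N).
    c = [0] * N
    q, r = divmod(d % (2 * N), N)
    c[r] = -1 if q else 1  # x^N = -1 in this ring
    return c

def negacyclic_mul(a, b):
    # Schoolbook multiplication in Z[x]/(x^N + 1).
    out = [0] * N
    for i, ai in enumerate(a):
        if ai == 0:
            continue
        for j, bj in enumerate(b):
            if bj == 0:
                continue
            k = i + j
            if k < N:
                out[k] += ai * bj
            else:
                out[k - N] -= ai * bj  # wrap around with a sign flip
    return out

# Multiplying x^3 by x^5 gives x^8: digits add in the exponent, mod 2N.
print(negacyclic_mul(mono(3), mono(5)) == mono(8))  # True
```

This is why arithmetic is naively limited to (roughly) the degree of the extension: exponents only have $2N$ distinct values to live in.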

In practice, people can currently do computations with between 8 and 16 bits of precision, typically. Not all implementations of DM/CGGI necessarily prioritize computations at this precision. So theoretically things aren't limited to bit operations, but it is much more difficult to do high-precision computations than with 2nd-gen schemes.

4th Gen: CKKS

CKKS is really quite similar to 2nd-gen schemes in terms of which basic operations it supports. The main difference is that since it doesn't try to be exactly correct, it can take more general operations than solely polynomial ones and approximate these computations via polynomials. So one can hope to compute things like $\sin(x)$ and $\cos(x)$, at least in some approximate sense.
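The "approximate via polynomials" step needs no encryption to illustrate: pick a low-degree polynomial close to $\sin(x)$ on an interval, then evaluate it using only additions and multiplications --- the operations CKKS supports natively. The degree and interval below are arbitrary illustrative choices:

```python
import math

# Degree-7 Taylor polynomial of sin(x) around 0. Evaluating it requires
# only + and *, which is all a CKKS-style scheme provides natively.
coeffs = [0.0, 1.0, 0.0, -1/6, 0.0, 1/120, 0.0, -1/5040]

def poly_sin(x):
    # Horner evaluation: additions and multiplications only.
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

for x in [0.0, 0.5, 1.0]:
    print(x, poly_sin(x), math.sin(x))
```

Since CKKS is itself approximate, the polynomial-approximation error simply adds to the scheme's intrinsic noise, which is why this style of computation fits it naturally.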


This is all a discussion of theoretically which operations are supported. Practice can be quite a bit different --- as you seem to have noticed, initial implementations of DM/CGGI didn't focus on much more than bit arithmetic (or "boolean circuits").

One example is Zama.ai's Concrete library. See for example Concrete ShortInt and Concrete Integer. The first implements 8-bit-precision integer arithmetic (in line with what I said before), while the latter claims arbitrary precision, essentially by reducing arbitrary integer arithmetic to big-integer arithmetic over 8-bit underlying arithmetic.
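The reduction can be sketched in plain Python (no encryption, and no claim about Concrete's actual internals --- just the general radix-decomposition idea): split a big integer into base-$2^8$ limbs and operate limb-by-limb with carries, so every elementary step fits in the small precision the scheme handles natively.

```python
BASE = 2 ** 8  # each limb fits the ~8-bit arithmetic handled natively

def to_limbs(n, k):
    # Little-endian base-256 decomposition into k limbs.
    return [(n >> (8 * i)) & 0xFF for i in range(k)]

def from_limbs(limbs):
    return sum(l << (8 * i) for i, l in enumerate(limbs))

def limb_add(a, b):
    # Add limb-by-limb; every intermediate value stays below 2*BASE,
    # so each step is a small-precision operation plus a carry.
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s % BASE)
        carry = s // BASE
    return out  # carry out of the top limb is dropped (fixed width)

a, b = 123456, 654321
print(from_limbs(limb_add(to_limbs(a, 4), to_limbs(b, 4))) == a + b)  # True
```

Multiplication works the same way in principle (schoolbook limb products plus carry propagation), at the cost of many more small operations per big-integer operation.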

There is likely similar support in other libraries (I know PALISADE/OpenFHE has a DM/CGGI implementation). I don't know if they support more than binary operations (the directory structure at least implies they may still be limited to binary operations, but this might not be the best way to check). There's no theoretical barrier, though. If you want to check with someone, you could ask on their Discourse group.
