Score:3

Functional and security model for SEAL


What's the functional and security model for SEAL?

From this I get that it

allows additions and multiplications to be performed on encrypted integers or reals.

But what are the limitations, such as range and precision, on inputs and outputs? What operations can be performed? Are there limitations beyond range/precision?

What is the security model an application designer using SEAL as a black box should assume?

I'm especially puzzled by the following stated limitation (source):

decryptions of Microsoft SEAL ciphertexts should be treated as private information only available to the secret key owner, as sharing decryptions of ciphertexts may in some cases lead to leaking the secret key.

Does that mean an application using SEAL to e.g. compute the mean and variance of an encrypted data set can't publish the (decrypted) results? At least the first (say) 4 decimal digits of the mean and first 2 of the variance must be OK to publish, right? If so, what is the limit of what can safely be revealed, and/or is there some built-in API to sanitize output so that it can safely be published?

Score:3

decryptions of Microsoft SEAL ciphertexts should be treated as private information only available to the secret key owner, as sharing decryptions of ciphertexts may in some cases lead to leaking the secret key.

This was put in place in response to the Li–Micciancio attack on CKKS. The model Li and Micciancio [LM] work in is traditional IND-CPA security augmented with a decryption oracle. This decryption oracle only decrypts if the "ideal" results in the left and right worlds match, so for correct FHE schemes (where the ideal computation is the computation that actually occurs) the notion is equivalent to IND-CPA security (any adversary could trivially simulate the oracle).
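To make the game concrete, here is a toy Python sketch of that augmented notion. All class and function names are my own illustrative stand-ins (the "scheme" is a trivially exact mock, not real encryption, and nothing here is SEAL's API); the point is only the oracle's rule of answering when the ideal left/right results coincide.

```python
import secrets

class ExactToyScheme:
    """Trivially exact 'encryption' (identity under a wrapper).
    Stands in for a *correct* FHE scheme purely to exercise the game:
    decrypt(evaluate(f, encrypt(m))) equals the ideal result f(m)."""
    def encrypt(self, m):
        return {"ct": m}
    def evaluate(self, f, ct):
        return {"ct": f(ct["ct"])}
    def decrypt(self, ct):
        return ct["ct"]

class INDCPAPlusGame:
    """IND-CPA augmented with a restricted decryption oracle."""
    def __init__(self, scheme):
        self.scheme = scheme
        self.b = secrets.randbits(1)   # hidden challenge bit
        self.log = []                  # (m_left, m_right, ciphertext)

    def challenge(self, m0, m1):
        ct = self.scheme.encrypt((m0, m1)[self.b])
        self.log.append((m0, m1, ct))
        return ct

    def eval_and_decrypt(self, f, idx):
        m0, m1, ct = self.log[idx]
        # Answer only if the *ideal* results in both worlds agree;
        # otherwise the decryption would trivially reveal the bit.
        if f(m0) != f(m1):
            return None
        return self.scheme.decrypt(self.scheme.evaluate(f, ct))

game = INDCPAPlusGame(ExactToyScheme())
game.challenge(3, 5)
# Ideal results differ (6 vs 10): the oracle refuses.
assert game.eval_and_decrypt(lambda x: 2 * x, 0) is None
# Ideal results agree (both 0): the oracle answers, revealing nothing.
assert game.eval_and_decrypt(lambda x: 0 * x, 0) == 0
```

For an exactly correct scheme the answer is always the ideal value both worlds share, which is why the oracle adds no power over plain IND-CPA; for an approximate scheme like CKKS, the decryption carries scheme-dependent error, and that is the gap LM exploit.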

For schemes that may be incorrect, the equivalence no longer holds, and LM can break this augmented notion of security (and even extract the secret key). Several libraries have incorporated countermeasures as a result; you can read a summary here. I quote from that document:

SEAL. Currently, a modification for IND-CPA+ security on algorithms or API does not appear in SEAL [18]. Instead, they noted in SECURITY.md that the decryption results of SEAL ciphertexts should be treated as private information only available to the secret key owner.

So the answer to:

Does that mean an application using SEAL to e.g. compute the mean and variance on an encrypted data set can't publish the (decrypted) results? At least, the first (say) 4 decimal digits of mean and 2 first of variance must be OK to publish, right?

is "it depends". As SEAL does not contain any countermeasures, you are (in principle) vulnerable to the LM attack. You could post-process the mean and variance (as you suggest) by decreasing the precision, and it may be fine: this "deterministic rounding" is roughly the same as adding random noise to the lower-order bits, although I think there are some mild benefits to adding random noise over deterministic rounding. But particular parameters for such post-processing have not been settled on yet.
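As a rough illustration of the noise-flooding variant of that post-processing: the function below is a hypothetical sketch operating on plain floats (it is not a SEAL API, and the noise magnitude is a free parameter, not a vetted recommendation, since, as noted above, no standard choice has been settled on).

```python
import random

def sanitize(value, noise_bits, scale=1.0):
    """Add fresh uniform noise of magnitude ~2**noise_bits * scale to a
    decrypted approximate result before publishing it. The magnitude must
    be chosen large enough to drown the scheme's approximation error;
    choosing it is exactly the unsettled question discussed above."""
    bound = (2.0 ** noise_bits) * scale
    return value + random.uniform(-bound, bound)

# Hypothetical decrypted statistics (stand-ins, not real SEAL output):
mean, variance = 12.345678, 4.2
published_mean = sanitize(mean, noise_bits=-14)      # ~4 decimals survive
published_var = sanitize(variance, noise_bits=-7)    # ~2 decimals survive
```

A Gaussian could be used instead of a uniform distribution; the structure of the argument is the same, and the distribution choice matters less than getting the magnitude right relative to the approximation error.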

It is worth mentioning the caveat that while LM manage to extract the secret key, it is less obvious how to do so for computations of more complex circuits, although the indistinguishability attack still seems straightforward.

fgrieu:
Oh my! Having to keep results of decryption secret is a very serious limitation. If I had a use for FHE, I'd do my best to avoid a library with that constraint. On the other hand, like most people, I have no application for FHE.
Mark:
I agree. Unfortunately, the amount of noise that has to be added is rather large. IIRC it is something like $\sqrt{D}\,2^{n/2}t$, where $D$ is a bound on the number of decryption queries you allow, $n$ is the security parameter, and $t$ is the size of the underlying (approximation) error. So in the best case (say $D = 1$, $n = 80$) you inflate the size of the approximation error by $\approx 2^{40}$, i.e. you lose an additional 40 bits of precision.
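For a quick back-of-the-envelope check of those numbers (assuming the $\sqrt{D}\,2^{n/2}t$ bound quoted in the comment above; the function name is just illustrative):

```python
import math

def extra_noise_bits(D, n):
    """Bits of precision lost by flooding with noise of size
    sqrt(D) * 2**(n/2) * t, measured relative to the
    approximation error t itself."""
    return math.log2(math.sqrt(D)) + n / 2

# Best case from the comment: D = 1 query, n = 80 -> 40 extra bits lost.
assert extra_noise_bits(D=1, n=80) == 40.0
# Each doubling of the query budget D costs only half a bit more:
assert extra_noise_bits(D=4, n=80) == 41.0
```

Note that the security parameter $n$ dominates: the $2^{n/2}$ factor alone fixes the bulk of the loss, while the dependence on the query budget $D$ is comparatively mild.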
Mark:
This comes from a somewhat naive "noise flooding"-type argument though. It is plausible that people will develop better arguments --- there has not been much follow-up work yet (that I am aware of).