Score:1

Binary Secret Sharing vs Garbled Circuits


In privacy-preserving machine learning, GC is usually used for non-linear operations such as ReLU(x), where sign(x) needs to be known. However, binary secret sharing also supports such computation via comparators ($[x]_{encrypted} > 0$) (this paper). Comparing the performance, binary secret sharing is usually much faster than garbled circuits. So why are garbled circuits still used in so many related works?
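For context, a minimal sketch of the binary (XOR) secret sharing meant here, assuming 16-bit two's-complement values: it shows why sign(x) is just the most significant bit, which the parties already hold XOR shares of. This is illustrative only, not a comparison protocol; revealing or further processing the sign bit still requires communication.

```python
import secrets

BITS = 16
MASK = (1 << BITS) - 1

def share(x):
    """Split a 16-bit two's-complement value into two XOR shares."""
    r = secrets.randbits(BITS)
    return r, (x ^ r) & MASK

def reconstruct(s0, s1):
    return s0 ^ s1

# sign(x) is the most significant bit, and XOR sharing is bit-wise,
# so the two parties already hold XOR shares of the sign bit.
# Revealing it (or comparing securely) is what needs interaction.
x = -5 & MASK                      # a negative value, sign bit = 1
s0, s1 = share(x)
sign_share0 = s0 >> (BITS - 1)
sign_share1 = s1 >> (BITS - 1)
assert sign_share0 ^ sign_share1 == 1
```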

Score:2
Mikero

Secret-sharing-based MPC has low total communication and low computation, but high round complexity: it needs a round of interaction for every layer in the circuit. If you are trying to evaluate a neural network with 100 layers, then you will need 100 rounds of interaction.
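To make the per-gate interaction concrete, here is a toy two-party AND gate over XOR shares using a Beaver triple (both parties and a trusted dealer are simulated in one process; exchanging d and e is exactly the round of interaction, while XOR gates would be purely local). A sketch only: a real protocol needs a fresh triple per AND gate.

```python
import secrets

def rand_bit():
    return secrets.randbits(1)

# Trusted dealer: a random multiplication triple c = a AND b, XOR-shared.
a, b = rand_bit(), rand_bit()
c = a & b
a0, b0, c0 = rand_bit(), rand_bit(), rand_bit()
a1, b1, c1 = a ^ a0, b ^ b0, c ^ c0

def and_gate(x0, y0, x1, y1):
    """One AND gate on XOR-shared bits x and y.

    The parties must exchange their shares of d = x ^ a and e = y ^ b;
    that exchange is the round of interaction. Everything else is local.
    (Insecure toy: the same triple is reused across calls.)
    """
    d = (x0 ^ a0) ^ (x1 ^ a1)                  # both parties learn d
    e = (y0 ^ b0) ^ (y1 ^ b1)                  # both parties learn e
    z0 = c0 ^ (d & b0) ^ (e & a0) ^ (d & e)    # party 0's output share
    z1 = c1 ^ (d & b1) ^ (e & a1)              # party 1's output share
    return z0, z1

# Sanity check over all inputs: shares of z reconstruct to x AND y.
for x in (0, 1):
    for y in (0, 1):
        x0 = rand_bit(); x1 = x ^ x0
        y0 = rand_bit(); y1 = y ^ y0
        z0, z1 = and_gate(x0, y0, x1, y1)
        assert z0 ^ z1 == (x & y)
```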

Garbled circuits have higher total communication and computation, but need just 1 or 2 rounds of interaction. The number of rounds does not depend on the structure of the circuit.

So in some settings, where the circuit is deep and/or the network has high latency, garbled circuits may result in faster MPC.
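A back-of-the-envelope model makes the tradeoff visible (all numbers below are illustrative assumptions, not benchmarks):

```python
def mpc_time_ms(rounds, rtt_ms, comm_mb, bandwidth_mbps):
    """Crude cost model: latency paid once per round, plus time to push the bytes."""
    return rounds * rtt_ms + comm_mb * 8 / bandwidth_mbps * 1000

depth = 100   # circuit depth, e.g. layers that each need a round in SS-MPC

# WAN: 100 ms round trips, 100 Mbps.
ss_wan = mpc_time_ms(rounds=depth, rtt_ms=100, comm_mb=1,   bandwidth_mbps=100)
gc_wan = mpc_time_ms(rounds=2,     rtt_ms=100, comm_mb=100, bandwidth_mbps=100)

# LAN: 0.5 ms round trips, 1 Gbps.
ss_lan = mpc_time_ms(rounds=depth, rtt_ms=0.5, comm_mb=1,   bandwidth_mbps=1000)
gc_lan = mpc_time_ms(rounds=2,     rtt_ms=0.5, comm_mb=100, bandwidth_mbps=1000)

print(f"WAN: SS {ss_wan:.0f} ms vs GC {gc_wan:.0f} ms")   # GC wins on WAN
print(f"LAN: SS {ss_lan:.0f} ms vs GC {gc_lan:.0f} ms")   # SS wins on LAN
```

With these (made-up) parameters, GC's two rounds beat SS's 100 rounds on the high-latency link, and the ordering flips on the LAN, matching the answer's point.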

Mumon
Hi Mikero, thanks a lot for your explanation. What if there is arithmetic secret sharing between every two garbled circuits: do you still need only 1 or 2 rounds for the entire network? I ask because, from what I saw in Delphi (https://github.com/mc2-project/delphi), they create a circuit for each ReLU layer.
Mikero
I don't know about that specific protocol, but I know that some protocols convert between GC and secret sharing to take advantage of things that GC does well and things that secret sharing does well. Every conversion requires another round of interaction.
JAAAY
@Mikero I have a few questions about your answer. In a protocol like GMW, some gates (NOT, XOR, XNOR) can be evaluated non-interactively; only AND and OR gates need interaction and OT. I understand that the asymptotic communication complexity doesn't change, but I think it should be mentioned. As for GC, IIRC Free-XOR also exists, so there are improvements there too. The part I cannot understand is how a garbled circuit (not a garbled truth table) can be evaluated in a constant number of rounds when almost every gate of the circuit needs OT.
Mikero
In both GC and SS-based MPC, linear gates are free (free in communication and rounds). In GC, only the input wires need OT, to allow the GC evaluator to learn the garbled input. Once the evaluator has the garbled input, they can evaluate the entire circuit non-interactively.
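A toy garbled AND gate shows why evaluation needs no interaction once the evaluator holds one label per input wire. This sketch uses SHA-256 trial decryption with a zero tag instead of the usual point-and-permute optimization; it is illustrative only, not a secure implementation.

```python
import hashlib, secrets

ZERO = bytes(32)

def H(*parts):
    """Hash of the concatenated inputs, used as a one-time pad."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Garbler: two random labels per wire, one for 0 and one for 1.
wa = [secrets.token_bytes(32) for _ in range(2)]   # input wire a
wb = [secrets.token_bytes(32) for _ in range(2)]   # input wire b
wc = [secrets.token_bytes(32) for _ in range(2)]   # output wire c = a AND b

# Garbled table: each row encrypts the correct output label (plus a
# 32-byte zero tag) under a pad derived from the two input labels.
table = []
for i in (0, 1):
    for j in (0, 1):
        pad = H(wa[i], wb[j], b"0") + H(wa[i], wb[j], b"1")  # 64-byte pad
        table.append(xor(pad, wc[i & j] + ZERO))
secrets.SystemRandom().shuffle(table)

def evaluate(la, lb):
    """Evaluator: holds exactly one label per input wire (its own input
    labels obtained via OT, the garbler's sent directly). Decrypting the
    single row whose zero tag verifies is entirely local -- no interaction."""
    pad = H(la, lb, b"0") + H(la, lb, b"1")
    for row in table:
        out = xor(row, pad)
        if out[32:] == ZERO:
            return out[:32]

assert evaluate(wa[1], wb[1]) == wc[1]   # 1 AND 1 -> label for 1
assert evaluate(wa[0], wb[1]) == wc[0]   # 0 AND 1 -> label for 0
```

The output label `wc[i & j]` could then feed the next gate's table, which is why the whole circuit evaluates with no further rounds.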