Score:4

How can Garbled Circuits be utilized to reduce the round complexity of GMW?


I've been reading this set of notes on some topics in MPC and am having difficulty understanding the transformation the authors make to reduce the round complexity of the GMW protocol by applying garbled circuits. In particular, the goal is to make the round complexity of GMW independent of the depth of the circuit being evaluated, and instead dependent only on $\kappa$, the security parameter. The approach seems to be as follows:

  1. Use the $\mathsf{Garble}$ algorithm from Yao's protocol to compute the circuit $D(\cdot, \cdot)=\mathsf{Garble}_C(\cdot, \cdot)$.
  2. Run GMW on $D$ instead of $C$ (sketched below).
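
In other words, here is how I picture the two steps (a toy sketch of my own; `garble_and_encode` and the placeholders are hypothetical names, not from the notes):

```python
import secrets

def garble_and_encode(C, x, r):
    """Stub for Yao's Garble: would return the garbled circuit G and the
    encodings Kx of the joint input x (here just placeholders)."""
    G = ("garbled", C)
    Kx = [secrets.token_bytes(16) for _ in x]
    return G, Kx

def D(C, x, r):
    # Step 1: D garbles C (and encodes x) instead of evaluating C directly.
    return garble_and_encode(C, x, r)

# Step 2: run GMW on D instead of C, i.e. the parties jointly compute
# shares of D(C, x, r) rather than shares of C(x).
```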

I have numerous doubts related to this:

  • First, regarding GMW: at any given point, the parties only hold shares of each wire value. How do they obtain the final output? Won't they simply have shares of it?
  • Why does $D$ need $x$ as an input? Doesn't that destroy the whole point of garbling, if you need $P_2$'s input to garble the circuit?
  • Why is the depth of $D$ only dependent on $\kappa$? Won't it also be dependent on the number of gates in $C$?
Lt. Commander. Data:
@JAAAY Yeah, I'm not at all certain what they're aiming to gain here. It's not as if they get malicious security or anything. But I'm still confused about where the round reduction comes from.
JAAAY:
I'm deleting my previous comment since, after reading the notes, I found that they describe a somewhat different approach.
JAAAY:
I think that for a given circuit $C$, he garbles it to obtain $D$, thus getting constant rounds (again we bump into the OT problem), and then executes $D$ using the GMW protocol. GMW can then be run largely in parallel, since the topological sort of $D$ has only small (constant) depth but HUGE width, i.e., $D$ has a huge number of gates at every level of its constant depth. I think he is explaining how to use the tradeoff between communication and computation complexity.
Score:1

First question: at the end of GMW, the parties can reconstruct the outputs by broadcasting their shares to everyone.
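
For concreteness, a minimal sketch in Python (assuming single-bit outputs and XOR secret sharing, as in GMW; toy code, not from the notes):

```python
import functools, operator, secrets

def share_bit(bit, n):
    """Split one output bit into n XOR-shares."""
    shares = [secrets.randbits(1) for _ in range(n - 1)]
    shares.append(bit ^ functools.reduce(operator.xor, shares, 0))
    return shares

def reconstruct(shares):
    """Each party broadcasts its share; XOR of all shares recovers the bit."""
    return functools.reduce(operator.xor, shares, 0)

assert reconstruct(share_bit(1, 3)) == 1
```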

Second question: a garbled circuit cannot be evaluated on plain inputs; we need to encode the inputs as well. In slightly more detail, a garbling scheme produces a garbled circuit $(G, \vec K_0, \vec K_1) \gets \mathsf{Garble}(C)$. Here, I view $G$ as "the garbled circuit itself", and $\vec K_0, \vec K_1$ as the vectors of input encodings. That is, if the $i$-th input is equal to a bit $b$, then it is encoded by $K^{(i)}_b$ (the $i$-th entry of $\vec K_b$).
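
In toy code, the interface and the input encoding might look like this (a sketch matching the notation above, not a real garbling scheme):

```python
import secrets

KAPPA = 16  # label length in bytes (toy stand-in for the security parameter)

def sample_input_labels(n_inputs):
    """For each input wire i, sample the label pair (K0[i], K1[i])."""
    K0 = [secrets.token_bytes(KAPPA) for _ in range(n_inputs)]
    K1 = [secrets.token_bytes(KAPPA) for _ in range(n_inputs)]
    return K0, K1

def encode(x, K0, K1):
    """The i-th input bit b is encoded by K_b[i]."""
    return [(K1 if b else K0)[i] for i, b in enumerate(x)]
```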

The general idea behind a round-reduced GMW is the following: call $\vec x$ the joint input of all parties, and $C$ the target circuit. The parties use GMW to compute shares of $(G, \vec K_{\vec x})$, where I slightly abuse notation and write $\vec K_{\vec x}$ for the vector $(K^{(i)}_{x_i})_i$. Here, the input is needed in the computation because the parties need to obtain the encodings of the input in order to evaluate the garbled circuit! And of course, the other encodings (all keys $K^{(i)}_{1-x_i}$) must remain hidden for security.
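
To see how $D$ uses $\vec x$: inside the circuit, selecting the right encoding is just a multiplexer, $K^{(i)}_{x_i} = K^{(i)}_0 \oplus x_i \cdot (K^{(i)}_0 \oplus K^{(i)}_1)$, which GMW evaluates on the shared bit $x_i$ without revealing it. A toy sketch (labels as byte strings, as above):

```python
def xor_bytes(a, b):
    return bytes(u ^ v for u, v in zip(a, b))

def mux_label(x_i, K0_i, K1_i):
    """K_{x_i} = K0 XOR x_i*(K0 XOR K1): AND/XOR gates only, constant depth."""
    delta = xor_bytes(K0_i, K1_i)
    return xor_bytes(K0_i, bytes(x_i * d for d in delta))  # x_i in {0, 1}
```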

Given shares of $(G, \vec K_{\vec x})$, all parties broadcast their shares, reconstruct $(G, \vec K_{\vec x})$, and can get the output $C(\vec x)$ using the garbled circuit evaluation procedure.
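
Continuing the toy sketch, this last step costs one broadcast round and is otherwise local (`eval_garbled` is a hypothetical stand-in for the standard Yao evaluation procedure):

```python
def reconstruct_bytes(shares):
    """XOR byte-string shares together (same reconstruction as for bits)."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

# G  = reconstruct_bytes(G_shares)
# Kx = [reconstruct_bytes(s) for s in Kx_shares]
# y  = eval_garbled(G, Kx)   # local Yao evaluation; no further interaction
```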

Third question: a core feature of Yao's garbled circuit construction is that all the gates can be garbled in parallel. That is, you don't need to garble a gate's parents before the gate itself: you define the labels for all wires in the circuit, and then compute all the tables of encryptions in parallel (for Yao, you have 4 double-encryptions per gate). Therefore, the size of the circuit $D(C,\vec x)$ which outputs $(G, \vec K_{\vec x})$ is proportional to $|C| \cdot \kappa$, but its depth is only the depth of computing a single gate (since they can all be computed in parallel), which is independent of $|C|$ and will typically be $O(\kappa)$ using standard symmetric-key encryption.
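
As a toy illustration of the parallelism, here is one AND gate garbled with a hash-based "double encryption" (no point-and-permute or other optimizations); note that the table depends only on this gate's own three label pairs:

```python
import hashlib, secrets

KAPPA = 16  # label length in bytes (toy value)

def fresh_pair():
    """A (K0, K1) label pair for one wire."""
    return secrets.token_bytes(KAPPA), secrets.token_bytes(KAPPA)

def enc(ka, kb, msg):
    """Toy 'double encryption': a one-time pad derived from both keys."""
    pad = hashlib.sha256(ka + kb).digest()[:len(msg)]
    return bytes(p ^ m for p, m in zip(pad, msg))

def garble_and_gate(a_pair, b_pair, out_pair):
    """4 double-encryptions, one per row of the AND truth table; uses only
    this gate's own labels, so all gates can be garbled in parallel."""
    rows = [enc(a_pair[a], b_pair[b], out_pair[a & b])
            for a in (0, 1) for b in (0, 1)]
    secrets.SystemRandom().shuffle(rows)  # hide which row is which
    return rows

# Each gate's table is independent given the wire labels:
wires = {w: fresh_pair() for w in ("a", "b", "c")}
table = garble_and_gate(wires["a"], wires["b"], wires["c"])
```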

Hope that helped, I'll be happy to add further clarification if needed!
