Score:2

Question about malicious security in a protocol using OT

cn flag

I was studying a protocol that uses OT, and I suddenly realized that I fail to see how a protocol using OT could be maliciously secure.

Suppose we have a protocol P that uses OT as a subprotocol, and suppose that the OT is invoked $N$ times. The $i$-th instance of the OT, for $i$ from 1 to $N$, has sender inputs $x_{0,i}$, $x_{1,i}$. It is reasonable to suppose that, in every instance, the receiver chooses bit $0$ or $1$ according to its own input (this is, for example, the case in Yao's protocol, where for each of its input bits the receiver asks for 0 if that bit is 0, and for 1 otherwise).

Now suppose that the OT sender is malicious and decides, a priori, to replace $x_{0,N}$ with something else. It could be a random value, the value $x_{1,N}$, or whatever. There are four possibilities:

  1. The protocol aborts. Then the sender knows that in the last OT the receiver asked for 0.
  2. The protocol does not abort and the value obtained is indeed correct. The sender now knows that in the last OT the receiver asked for 1.
  3. The protocol does not abort, the value is wrong, and the parties are able to detect it. The sender now knows that in the last OT the receiver asked for 0.
  4. The protocol does not abort, the value is wrong, but the parties are not able to detect it. Then the sender can arbitrarily make the protocol output values different from the prescribed ones.

If the behaviour is one of the first three, the sender can tamper with every OT in this way, and with probability $\frac{1}{2^N}$ it learns all the requests without being detected; since $N$ is a fixed parameter, this is not negligible (even more so if $N$ is small). In the other cases the receiver learns that the sender is malicious, but this does not prevent the sender from learning something about the receiver's choices.
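To put a number on that, here is a toy Python simulation (not any specific protocol, and it assumes the receiver's choice bits are uniformly random) of this selective-failure strategy: the sender corrupts $x_{0,i}$ in every one of the $N$ OTs and goes completely undetected only when the receiver happened to choose 1 every time.

```python
import random

def selective_failure_trial(n):
    """One toy run: a cheating sender corrupts x_{0,i} in every OT.

    The cheat goes unnoticed only if the receiver's choice bit is 1 in
    every instance; otherwise the receiver obtains a corrupted value and
    (in cases 1-3 above) the cheating surfaces, but each outcome still
    tells the sender something about the choice bits.
    """
    choices = [random.randint(0, 1) for _ in range(n)]
    return all(c == 1 for c in choices)

N = 8            # a small, fixed number of OTs
TRIALS = 100_000
hits = sum(selective_failure_trial(N) for _ in range(TRIALS))
print(f"undetected fraction ~ {hits / TRIALS:.4f}  (expected 2^-{N} = {2**-N:.4f})")
```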

If the behaviour is the last one, I fail to understand how such a protocol could be considered "secure", since it is very easy for an attacker to mount a kind of "DoS" attack where every output is meaningless.

In the case of Yao's protocol, suppose that the sender sets the inputs of the last OT to $(X_{0,N}, X_{0,N})$. How can we prevent that? If the protocol "goes wrong", then the sender knows that the last bit of Bob's input is 0. That is huge, no?

And I'm not even considering the symmetric case: what happens if the receiver, instead of choosing according to the rule, chooses randomly? As far as I can tell, this behaviour is not considered in the security proofs. Am I wrong?

Am I missing something? Are these considerations out of scope, so that we simply allow this kind of behaviour? Or do we assume that the sender always sets $x_0$ and $x_1$ in the right way?

Score:1
us flag

There are a lot of questions wrapped up here -- answering all of them fully would require a complete tutorial on malicious secure MPC. I'll try to just focus on the big picture.

In general, if there is a "correct value" that the OT sender is "supposed to" use as OT input, then yes, the protocol has to be very careful. Usually there is a "correct value" because any other value would cause the receiver to abort. So "incorrect" values in the OT will cause aborts to happen based on the receiver's OT input -- leaking information about those OT inputs!

There are a few ways to deal with this. Here are the easiest to understand:

  • The classic garbled-circuit 2PC protocol of Lindell-Pinkas 2007 does the following. Suppose we really want to do an OT to select one of two wire labels $x_0, x_1$ based on the receiver's circuit input $b$. Suppose the receiver can check whether a wire label is valid or not. When the receiver aborts due to seeing an invalid wire label, it leaks their circuit input! Lindell & Pinkas address this by modifying the garbled circuit so that the receiver should input a secret sharing $b = b_1 \oplus \cdots \oplus b_\lambda$ of its true input $b$. The garbled circuit will then XOR these shares to recover $b$ and then proceed as usual. So now instead of one OT for this circuit input wire, there are $\lambda$ OTs because we have artificially increased the number of circuit inputs. Suppose the OT sender cheats in the first $k < \lambda$ OTs. The receiver's inputs to these first $k$ OTs are a set of secret shares, but fewer than the threshold number of shares, so they are distributed independently of the private input $b$! In this case the receiver will simply abort with probability $1 - 2^{-k}$, independent of their input. If the OT sender cheats in all $\lambda$ OTs, then the receiver aborts with probability either 1 or $1-2^{-\lambda}$, with the probability depending on their private input! -- but since these are negligibly close, it does not actually constitute a leakage of information. (A small sketch of this share-splitting step appears after this list.)

  • The ZK protocol of Jawurek, Kerschbaum, Orlandi is also based on garbled circuits. They do a typical OT of wire labels, where the OT receiver's input is their private ZK witness. They carefully structure the protocol so that the OT sender can publicly reveal both of its OT inputs at some later point --- this requires a variant of OT called "committed OT." Basically, the OT receiver (the prover) computes some "final answer" about the garbled circuit, which leaks information about their private input if there was any cheating in the OTs. So instead of just sending this final answer, it commits to it. Then the OT sender reveals all of its OT inputs to prove that there was no cheating. If the OT receiver is convinced, it can then open its own commitment to the "final answer."

  • Sometimes you can design the protocol so there's no such thing as an incorrect OT input. Here is a simple way to check whether two private strings $x = x_1 \cdots x_n$ and $y = y_1 \cdots y_n$ are equal (inspired by the PSI protocol of Pinkas-Schneider-Zohner). Run $n$ OTs of random values, so that the $i$-th OT has inputs $r_{i,0}$ and $r_{i,1}$. The OT receiver uses the bits of their input $x$ as OT choice bits, and learns values of the form $r_{i, x_i}$. They can compute the value $R_x = H(x, r_{1,x_1}, r_{2,x_2}, \ldots, r_{n,x_n} )$, where $H$ is a random oracle. The OT sender has their own private input $y$, and knows all of the $r_{i,b}$ values. So they can compute a value $R_y = H( y, r_{1,y_1}, r_{2,y_2}, \ldots, r_{n,y_n} )$ and send it to the OT receiver. If $R_x = R_y$ then $x$ and $y$ were equal; otherwise $x$ and $y$ were different (and $R_y$ looks random to the receiver). How can a corrupt OT sender cheat? They could send $H$ of some other stuff (i.e., include something not equal to any $r_{i,b}$ as input to $H$). But that would simply guarantee that the receiver will conclude "the strings didn't match" -- no matter what $x$ they hold. This is easy for the simulator to handle.
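To make the input-sharing trick from the first bullet concrete, here is a minimal Python sketch of just the share-splitting step (the garbled circuit and the OTs themselves are abstracted away, and $\lambda = 40$ is an arbitrary choice for this sketch): the receiver splits its true input bit $b$ into $\lambda$ random XOR-shares and uses those shares, rather than $b$, as its OT choice bits, so any proper subset of the choice bits is uniform and independent of $b$.

```python
import secrets

LAMBDA = 40  # statistical security parameter (arbitrary choice for this sketch)

def share_input_bit(b: int, lam: int = LAMBDA) -> list[int]:
    """Split bit b into lam XOR-shares b_1 ... b_lam with b = b_1 ^ ... ^ b_lam.

    The receiver uses these shares (not b itself) as its OT choice bits;
    the garbled circuit XORs them back together before using the input.
    """
    shares = [secrets.randbits(1) for _ in range(lam - 1)]
    last = b
    for s in shares:
        last ^= s
    shares.append(last)
    return shares

def reconstruct(shares: list[int]) -> int:
    out = 0
    for s in shares:
        out ^= s
    return out

# Any k < LAMBDA of the shares are uniform and independent of b, so a sender
# who cheats in only k of the OTs causes an abort with a probability that
# does not depend on b.  Quick sanity check of the reconstruction:
b = 1
shares = share_input_bit(b)
assert reconstruct(shares) == b
print("choice bits seen by the OTs:", shares[:8], "...")
```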
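And here is a sketch of the equality test from the third bullet, with the $n$ OTs of random values modeled as an ideal dealer (each party simply sees the $r_{i,b}$ values it is entitled to) and SHA-256 standing in for the random oracle $H$; the point is only to show how $R_x$ and $R_y$ are formed and compared.

```python
import hashlib
import secrets

def H(s: str, values: list[bytes]) -> bytes:
    """Random-oracle stand-in: hash the string together with the chosen r values."""
    h = hashlib.sha256()
    h.update(s.encode())
    for v in values:
        h.update(v)
    return h.digest()

def equality_test(x: str, y: str) -> bool:
    """Toy equality test between bit-strings x and y of equal length n.

    The n OTs of random values are modeled as an ideal dealer: the sender
    learns all r[i][b], the receiver learns only r[i][x_i].
    """
    n = len(x)
    r = [[secrets.token_bytes(16), secrets.token_bytes(16)] for _ in range(n)]

    # Receiver's view: one value per OT, selected by the bits of x.
    R_x = H(x, [r[i][int(x[i])] for i in range(n)])

    # Sender's view: its own input y plus all the r values.
    R_y = H(y, [r[i][int(y[i])] for i in range(n)])

    # The sender sends R_y; the receiver outputs "equal" iff it matches R_x.
    return R_x == R_y

print(equality_test("0110", "0110"))  # True
print(equality_test("0110", "0111"))  # False (and R_y looks random to the receiver)
```

Note that even if a cheating sender sets $r_{i,0} = r_{i,1}$ everywhere or hashes garbage, the worst it can force is the answer "not equal", which it could equally well cause in the ideal world by choosing a different input $y$.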

So we have three different high-level approaches to the problem you asked about:

  • The first protocol arranges for the receiver's OT choice bits to be effectively independent of their private data --- so if cheating leads to an abort, the probability of that abort does not depend on the OT receiver's private input.

  • The OT receiver is given a way to detect the cheating so it can stop the protocol before any of the bad consequences of the cheating can be seen.

  • The other parts of the protocol just don't have any bad consequences if the OT sender chooses to act inconsistently with their OT inputs.

There are surely other clever approaches as well, but these are the first that came to mind.

what happens if the receiver instead of asking according to the rule, asks randomly? As far as I can understand this behaviour is not considered during security proof, am I wrong?

In most OT-based protocols, the receiver's OT choice bits are a direct encoding of their input to the main protocol. That is surely the case in the 3 examples above. Using the "wrong" OT choice bits is exactly equivalent to choosing a different input for the main protocol, and this is not considered an attack for malicious security.

miky avatar
cn flag
Oh, thank you very much. The first and the second bullet points seem to answer my question. I don't understand the third, however; it seems to have the exact same problem. The OT sender can set the OT inputs as $r_{i,0} = r_{i,1}$ (or something more clever), so it can force the protocol to output false, falling into the fourth case of my original message. And it seems there is no way for the receiver to check whether both $r_i$ values are correct or not. I will look into the first and the second solutions! Thank you very much
us flag
In the equality-test protocol I include the string $y$ in the hash, so that even if all $r_{i,0} = r_{i,1}$, different $y$ values still lead to different $R_y$ values. If $n$ (the length of the strings) is sufficiently large, then it is not an attack to "force the protocol to output false", since you can also achieve that in the ideal world (by choosing a random input).
miky avatar
cn flag
Yes, that is what I said. It seems strange to me that in such a protocol the security definition does not "require" the adversary to use its own input and not something else. I guess it is impossible, but it still seems strange to me. Thank you
us flag
There is no such thing as a "correct input" for a malicious party. In the ideal world, a malicious party can send anything as input to the ideal functionality. That is just part of the standard MPC definition for malicious adversaries.