Score:1

Independence of the inputs in multi-party computations


My Main Question:

Must we always ensure that the inputs of the parties in a secure multi-party computation are independent, i.e., that one party's input does not depend on another party's input? (This paper claims that the inputs must be independent.)


Explanation and subquestions:

If this is the case, isn't it in conflict with the power of malicious adversaries, who can select arbitrary inputs? Can malicious adversaries select their inputs adaptively? That is, can the adversary select $x_1$, receive $y_1$ from the other party, and then, based on this answer, select $x_2$, instead of selecting $x_1$ and $x_2$ at the beginning of the protocol?

Page 11 of Cramer and Damgård's book states that we cannot assume that the parties select their inputs honestly.

Cramer's Book page 11

But Hazay and Lindell say: "Specifically, the adversary in the ideal model for malicious adversaries is allowed to change its input, whereas the adversary in the ideal model for semi-honest adversaries is not."

  1. How can we justify the contradiction between Cramer's and Hazay's assumptions?

  2. So far, are the inputs independent? Can the adversary select them step-by-step based on each message it receives from the other party?

  3. Can we always derive an efficient protocol secure against malicious adversaries from a protocol that has been proven secure in the semi-honest model (I am not sure whether this always holds or only under some circumstances, but I have seen such a deduction in some protocols)? If so, since in the semi-honest model the inputs are fixed beforehand and independently, doesn't it mean that we always have to make such an assumption for malicious adversaries too?

Score:0

If we're talking about standard secure function evaluation, then the adversary's inputs are independent of the honest parties'. It's easy to see this from the ideal functionality: it doesn't give out any information until collecting inputs from everyone. In the ideal world, the adversary has no information about honest parties' inputs at the time it selects its input.
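To make this concrete, here is a toy sketch (not from any real MPC framework; all names are made up for illustration) of an ideal functionality that collects every party's input before releasing any output, so no party's input can depend on another's:

```python
# Toy model of an ideal functionality for secure function evaluation.
# Inputs are committed first; output is only available once everyone
# has submitted, so the adversary sees nothing when choosing its input.

def ideal_functionality(f, num_parties):
    inputs = {}

    def submit(party_id, x):
        # A party commits its input; nothing is revealed at this point.
        inputs[party_id] = x

    def output():
        # Output is released only after all parties have committed.
        assert len(inputs) == num_parties, "waiting for all inputs"
        return f(*(inputs[i] for i in range(num_parties)))

    return submit, output

# Usage: two parties compute the sum of their inputs.
submit, output = ideal_functionality(lambda a, b: a + b, 2)
submit(0, 3)   # honest party commits 3
submit(1, 4)   # adversary commits 4 -- it has learned nothing so far
print(output())  # 7
```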

If the MPC protocol is secure, then the same can be said about the real world interaction. In the protocol execution, there is going to be some round at which the simulator sends an input to the functionality on behalf of the adversary. At that round, the adversary could not have learned anything about the honest parties' inputs, and the adversary is now bound to that input, and can't force the honest parties to output anything that is inconsistent with that choice of input.

(If more than one party is corrupt, then those parties' inputs don't need to be independent of each other. We always imagine a single adversary controlling all of these parties. But collectively their choice of inputs will be independent of the honest parties'.)

That is, can the adversary select $x_1$, receive $y_1$ from the other party, and then, based on this answer, select $x_2$, instead of selecting $x_1$ and $x_2$ at the beginning of the protocol?

This is only legal if the ideal functionality explicitly reveals (something about) $y_1$ before allowing a corrupt party to choose $x_2$. In a typical secure function evaluation, this would not be the case.
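A hypothetical *reactive* functionality of this kind, which explicitly releases an intermediate output $y_1$ before the second input $x_2$ is chosen, might be sketched as follows (again a made-up illustration, not a standard definition):

```python
# Sketch of a reactive functionality: y1 is revealed by design after
# round 1, so x2 may legally depend on it. Standard (non-reactive)
# secure function evaluation has no such intermediate release.

def reactive_functionality(f1, f2):
    state = {}

    def round1(x1, w1):
        # x1 from the corrupt party, w1 from the honest party.
        state["y1"] = f1(x1, w1)
        return state["y1"]       # intermediate output, revealed on purpose

    def round2(x2):
        # x2 may now depend on the revealed y1.
        return f2(state["y1"], x2)

    return round1, round2

round1, round2 = reactive_functionality(lambda x, w: x + w,
                                        lambda y1, x2: y1 * x2)
y1 = round1(2, 5)     # adversary learns y1 = 7
y2 = round2(y1 + 1)   # and picks x2 = 8 based on that answer
```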

But Hazay and Lindell say: "Specifically, the adversary in the ideal model for malicious adversaries is allowed to change its input, whereas the adversary in the ideal model for semi-honest adversaries is not."

They are making a comparison to the semi-honest model, where both honest parties and corrupt parties are provided (by the environment) with their inputs when they start. In this quote I believe they are saying that the environment could choose an input for a corrupt party, but the corrupt party is free to ignore that input and behave however it wants in the protocol. That is the "change" they refer to. It is not a change that happens after learning something about the honest parties' inputs.

I dislike this way of describing the malicious model. I think it's much better to say that there is no such thing as "the malicious adversary's input" while it runs the protocol. The only thing you could rightfully call "the adversary's input" is whatever the simulator extracts in the ideal world.

How can we justify the contradiction between Cramer's and Hazay's assumptions?

I don't see a contradiction. In the highlighted passage, Cramer et al. are focusing on the adversary's ability to choose any input. And indeed, a malicious party is allowed to provide any input it wants, but it must do so before it learns anything about the honest parties' inputs.

So far, are the inputs independent? Can the adversary select them step-by-step based on each message it receives from the other party?

Only if the ideal functionality explicitly allows this; usually it doesn't.

Can we always derive an efficient protocol secure against malicious adversaries from a protocol that has been proven secure in the semi-honest model (I am not sure whether this always holds or only under some circumstances, but I have seen such a deduction in some protocols)?

No, a semi-honest secure protocol can fail spectacularly in the presence of malicious adversaries.

You might be able to use a semi-honest protocol as part of a larger protocol, but the larger protocol will probably have to work hard to make up for the deficiencies of the semi-honest subprotocol. It's all on a case-by-case basis.
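For a toy illustration of such a failure, consider naive XOR coin-flipping: each party announces a bit and the output is the XOR of the two bits. If both bits are chosen independently at random, the output is uniform; but a malicious party who waits ("rushes") can pick its bit after seeing the honest party's message, making its effective input depend on the honest input and fixing the result. A minimal sketch:

```python
# Naive coin-flip: output = b1 XOR b2. Fine if both bits are chosen
# independently, but a rushing malicious party fixes the outcome by
# choosing its bit AFTER seeing the honest party's bit.

import random

def honest_bit():
    # Honest party samples its bit uniformly and sends it first.
    return random.randrange(2)

def malicious_bit(seen_honest_bit, desired_output):
    # Malicious party picks its bit as a function of the honest bit,
    # forcing the XOR to equal desired_output.
    return seen_honest_bit ^ desired_output

b1 = honest_bit()
b2 = malicious_bit(b1, desired_output=1)
print(b1 ^ b2)  # always 1 -- not a fair coin
```

This is why coin-flipping protocols secure against malicious adversaries (e.g., Blum's protocol) have the parties commit to their bits before revealing them.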
