Score:0

Security against malicious adversaries in MPC

cn flag

I have a question in the context of two-party computation and the proof of security of an MPC protocol. I have looked at the beginning parts of this, this chapter 7, and this, but I couldn't find my answer.

I want to understand the difference between a malicious adversary and an honest-but-curious one, and what that difference implies.

Does security against a malicious adversary mean exactly that if the adversary changes any part of the data he must send to the other party, he still cannot learn anything about the other party's input to the protocol (beyond what he already knows)? Does it imply anything else?

Score:4
mo flag

I think your question is not well formulated. There are two orthogonal parts to a security definition: the security goal and the threat model. The security goal is what security you want to achieve (privacy? correctness? something else?), while the threat model is who you want security to hold against. The difference between a malicious and a semi-honest adversary concerns the threat model you are considering. Other aspects of the threat model include how many parties the adversary corrupts.

So "security" against a malicious adversary means that whatever properties you specified when defining what "security" means should still hold when the adversary can misbehave arbitrarily.

m123 avatar
cn flag
Yes, you are right; I was confused about this. Additionally, my main problem was the meaning of arbitrary misbehavior. Does the 'correctness' goal against a 'malicious' adversary imply that he cannot establish his own two-party computation with the trusted party? Is it like security against a man-in-the-middle attack? I mean, if the protocol lets the final result be f(Alice, adversary) instead of f(Alice, Bob), while the adversary cannot infer anything about Alice's private data, is it still considered _secure_?
m123 avatar
cn flag
(When 'privacy' and 'correctness' are the security goals, does such a protocol achieve 'correctness'?)
Score:0
cm flag

A malicious adversary has full freedom to deviate from the protocol given the information available to it. Examples of possible behavior are sending erroneous information or sending nothing at all.

An honest-but-curious adversary is one that attempts to infer all information it can from the data it receives but behaves correctly otherwise.

An example of an attack that a malicious adversary could mount is sending a circuit that exposes the inputs of the honest party instead of a circuit that computes the intended objective.
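To make that attack concrete, here is a deliberately simplified toy model (not a real garbled-circuit protocol, and all names are illustrative): Bob sends Alice a "circuit" that she evaluates on her private input. A semi-honest Bob sends the agreed-upon function; a malicious Bob can instead send a circuit whose output is Alice's input itself.

```python
def intended_circuit(alice_input: int, bob_input: int) -> int:
    # The agreed-upon objective, e.g. a comparison of the two inputs.
    return int(alice_input >= bob_input)

def malicious_circuit(alice_input: int, bob_input: int) -> int:
    # A malicious Bob sends a circuit whose output *is* Alice's
    # private input, leaking it entirely while still "running".
    return alice_input

def run_protocol(circuit, alice_input: int, bob_input: int) -> int:
    # Alice evaluates whatever circuit Bob sent. The semi-honest
    # model assumes this is intended_circuit; the malicious model
    # makes no such assumption.
    return circuit(alice_input, bob_input)

alice_secret = 42
print(run_protocol(intended_circuit, alice_secret, 10))   # honest run
print(run_protocol(malicious_circuit, alice_secret, 10))  # leaks 42
```

Real protocols rule this attack out with techniques such as cut-and-choose or zero-knowledge proofs, which force the sender to demonstrate that the circuit actually computes the intended function.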
