
MPC Definitions: UC-Security vs. Real-Ideal Simulation?


I consider the "standard" definition of maliciously secure 2PC to be the simulation-based, ideal–real-world indistinguishability definition from, e.g., Lindell's *How to Simulate It* [Lin17, Definition 6.1].

How does this definition differ—or does it—from what is sometimes called "UC Security"? For example, in this 2013 paper, § 2, Lindell references an "environment machine" $\mathcal{Z}$. No such machine (obviously) appears in [Lin17, Definition 6.1]. Is the "standard" definition less secure than the UC definition? If so, what is an example of a protocol which is secure under one definition but not the other? If they are equivalent, then how is that (roughly) shown?


Lindell's "How to Simulate It" tutorial uses what is known as the standalone security model. See Section 10.1 for a discussion.

The standalone model analyzes the security of a single protocol instance, in isolation. The UC model analyzes security in the presence of arbitrary "other things going on in the world" concurrent with the protocol instance. Those "other things" are captured by the environment machine $\mathcal{Z}$ in the UC definition.

In the standalone model, security is defined as: "for every adversary attacking the protocol, there is a simulator such that real $\approx$ ideal." Note that the simulator can depend on the adversary in a totally arbitrary way. In particular, one thing a simulator can do is run the adversary program many times, frequently rewinding the adversary program to a previous state.

In the UC model, the security definition is extended: "for every adversary attacking the protocol, there is a simulator such that for all environments, real $\approx$ ideal." Now the same simulator has to work for all environments, which significantly restricts its capabilities. An environment may expect to talk with the adversary program during the execution of the protocol. Rewinding the adversary in this case will cause the ideal world to look very different from the real world, since the simulator can't rewind the environment as well. It is possible to show that the simulator must execute the adversary program in a "straight line" (no rewinding).
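Written out side by side, the two definitions differ mainly in where the environment is quantified (the notation here is a loose sketch, not the exact formalism of either framework):

$$ \begin{aligned} \text{standalone:}\quad & \forall \mathcal{A}\ \exists \mathcal{S}:\ \mathrm{REAL}_{\pi,\mathcal{A}} \approx \mathrm{IDEAL}_{f,\mathcal{S}} \\ \text{UC:}\quad & \forall \mathcal{A}\ \exists \mathcal{S}\ \forall \mathcal{Z}:\ \mathrm{EXEC}_{\pi,\mathcal{A},\mathcal{Z}} \approx \mathrm{EXEC}_{f,\mathcal{S},\mathcal{Z}} \end{aligned} $$

Because $\mathcal{S}$ is fixed before $\mathcal{Z}$ is quantified, a single simulator must fool every environment, and it cannot rewind $\mathcal{Z}$; this is what forces straight-line simulation.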

Here is an example of a function that can be securely computed in the standalone model but not in the UC model. The example is from this paper. One party chooses a row and the other party chooses a column, and the output is determined by the following table:

$$ \begin{array}{cccc} 0 & 0 & 1 & 1 \\ 2 & 3 & 2 & 3 \end{array} $$

The standalone protocol is very simple: the row-player announces their input, and then the column-player (now knowing both parties' inputs) computes the output and announces it. Why is it secure? The simulator for a corrupt row-player is trivial: the first thing that happens in the protocol is the row-player announcing their input, so the simulator can easily extract it. The simulation for a corrupt column-player is a little trickier, because their choice of protocol message can depend on what the row-player said in the first message. However, the simulator can run the adversary twice (i.e., rewind it) and see how it would respond to the first message "top row" (responding with either 0 or 1) and to the first message "bottom row" (responding with either 2 or 3). There are 4 possible response pairs in total, and, because of the structure of this function, each pair corresponds to exactly one valid column-player input. So the simulator can again extract the column-player's input.
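The rewinding extraction above can be sketched in a few lines of Python (a hypothetical illustration, not code from the paper; the corrupt column-player is modeled simply as a function from the row-player's announcement to an output announcement):

```python
# The 2x4 function table from the example.
TABLE = [
    [0, 0, 1, 1],  # row-player announces "top"    (row 0)
    [2, 3, 2, 3],  # row-player announces "bottom"  (row 1)
]

def extract_column_input(adversary):
    """Rewinding extractor for a corrupt column-player.

    `adversary` maps the row-player's first message (0 or 1) to the
    output the column-player announces. Running it on both possible
    first messages (rewinding in between) pins down a unique column.
    """
    a = adversary(0)  # response to first message "top row"
    b = adversary(1)  # rewind, then response to "bottom row"
    matches = [c for c in range(4) if TABLE[0][c] == a and TABLE[1][c] == b]
    # For this particular function, every (a, b) pair with a in {0,1}
    # and b in {2,3} is consistent with exactly one column.
    assert len(matches) == 1
    return matches[0]

# Sanity check: an honest column-player holding input c is extracted as c.
for c in range(4):
    assert extract_column_input(lambda row, c=c: TABLE[row][c]) == c
```

A UC simulator gets no such second chance: the environment fixes the adversary's behavior interactively, so the extractor above has nothing to rewind.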

However, it is possible to prove (though it is non-trivial) that there is no secure protocol for this function in the UC model. The rewinding strategy described above is, in a sense, unavoidable for this function.


