Score:4

Are there different definitions of secure two-party computation?

Distinguishable Llama:

While reading tutorials on two-party computation I encountered two (at least formally) different definitions of security (with semi-honest adversaries). What I want to know is whether these definitions are actually different or can be shown to be equivalent. I suspect that they are different, but I might be missing something, considering that I have not read anywhere about different definitions.

In Lindell (2016), secure two-party computation is defined as follows: for each party, the joint distribution of the simulation and the ideal result must be computationally indistinguishable from the joint distribution of the adversarial view and the computed output. Formally, for each $i \in \{1, 2\}$, there must exist a p.p.t. algorithm $S_i$ such that $$ {\lbrace (S_i(1^n, w_i, f_i(x, y)), f(x, y)) \rbrace}_{x, y, n} \stackrel{c}{\equiv} {\lbrace (\operatorname{view}^\pi_i(x, y, n), \operatorname{output}^\pi(x, y, n)) \rbrace}_{x, y, n} \,\text{,} $$ where $w_1 = x$ and $w_2 = y$ denote the parties' respective inputs. This definition makes sense to me, since the author defines computational indistinguishability over probability ensembles indexed by both the security parameter and the input:

Two probability ensembles $X = \{X(a, n)\}_{a \in {\{0, 1\}}^*; n \in \mathbb{N}}$ and $Y = \{Y(a, n)\}_{a \in {\{0, 1\}}^*; n \in \mathbb{N}}$ are said to be computationally indistinguishable, denoted by $X \stackrel{c}{\equiv} Y$, if for every non-uniform polynomial-time algorithm $D$ there exists a negligible function $\mu(\cdot)$ such that for every $a \in \{0, 1\}^*$ and every $n \in \mathbb{N}$, $$ \lvert \Pr[D(X(a, n)) = 1] - \Pr[D(Y(a, n)) = 1] \rvert \leq \mu(n) \,\text{.} $$

This means that a single negligible function $\mu$ must govern security uniformly over all inputs.

In contrast, Evans et al. (2018) define computational indistinguishability for probability ensembles indexed only by the security parameter; I have seen this style of definition elsewhere as well. When defining security, they then require that the joint distributions be computationally indistinguishable for all inputs. At least formally, this suggests to me that here the negligible function may depend on the input.
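
Written out with explicit quantifiers (my paraphrase, writing $\Delta_D(x, y, n)$ for the distinguishing advantage of a non-uniform polynomial-time $D$ on input $(x, y)$ at security parameter $n$), the two readings differ only in the position of the input:

```latex
% Lindell-style: one negligible function, uniform over all inputs
\forall D \;\exists \mu \;\forall (x, y) \;\forall n:
  \quad \Delta_D(x, y, n) \le \mu(n)

% Evans et al.-style (as I read it): the negligible function may depend on the input
\forall (x, y) \;\forall D \;\exists \mu \;\forall n:
  \quad \Delta_D(x, y, n) \le \mu(n)
```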

Answers to the following questions would be very much appreciated:

  1. Am I missing something or misunderstanding the definitions? Can the definitions be shown to be equivalent by exploiting the non-uniformity of the adversary?
  2. If not, is it the case that the latter definition does not require a single negligible function that "works" for all inputs? If I am not mistaken, that would imply that the two definitions are in fact different.
  3. In case they are different: Which of the definitions should be preferred?
Geoffroy Couteau:
For every polytime adversary A, we can limit the size of $(x,y)$ to $p(n)$ for some fixed polynomial $p$ (larger than A's runtime) in the definition, since A cannot read more than $p(n)$ bits of their tape anyway. Then, once we have a finite number of $x$'s and $y$'s, what's wrong with extracting a single universal negligible function by taking the maximum of all functions we get for each pair $(x,y)$?
Distinguishable Llama:
@GeoffroyCouteau Maybe I misunderstand your comment, but I don't see how to bound the size of the input: the security parameter $n$ is not fixed. In the second definition, for every input there exists a negligible function $\mu$ such that security holds for all $n$. So it may be that, e.g., for every polynomial $q$, $\mu(n) < 1/q(n)$ holds only for $n \geq |(x, y)|$ (since $\mu$ may depend on the input). Then it would be impossible to find a single $n_0$ such that $\mu(n) < 1/q(n)$ holds for all $n \geq n_0$ and every $\mu$, since we can always find a bigger input for which it fails. But can such an input exist?
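
A concrete (hypothetical) family illustrating this comment: for each input, take the bound

```latex
\mu_{x,y}(n) =
\begin{cases}
1      & \text{if } n < |(x, y)|, \\
2^{-n} & \text{if } n \ge |(x, y)|.
\end{cases}
```

Each $\mu_{x,y}$ is negligible on its own, yet for every $n$ there is an input with $|(x, y)| > n$ and hence $\mu_{x,y}(n) = 1$, so no single negligible function upper-bounds the whole family.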
Geoffroy Couteau:
Ok, that makes sense. Then I'll have to wrap my head around it, it seems... Somewhat tedious :)
Distinguishable Llama:
@GeoffroyCouteau Yes, tedious it is. Thanks for the input anyway :)
Yehuda Lindell:
@GeoffroyCouteau See my answer. I don't think that it's equivalent actually.
Geoffroy Couteau:
Yes, this is actually what I was now believing, after pondering for a moment. Thanks for the clarifying answer!
Score:4
Yehuda Lindell:

Defining indistinguishability is very tricky. I actually think that the definition in the book by Evans et al. is too weak, but maybe Mike Rosulek will weigh in. If you define security by saying that for every input, the distributions of REAL/IDEAL are indistinguishable, then what you are actually saying is as follows: for every input and every distinguisher there exists a negligible function $\mu$ so that the distinguisher succeeds with probability at most $\mu(n)$. This means that you may need a different negligible function for each input.

To be more concrete, if we open this up further, what this definition says is that for every input, every distinguisher $D$ and every polynomial $p$ there exists a value $n_0$ such that for every $n > n_0$ the distinguisher succeeds with probability less than $1/p(n)$. This means that $n_0$ can depend on the input, and in particular there is no single $n_0$ beyond which there is indistinguishability for all inputs. Stated differently, you would potentially have to choose a different security parameter for different inputs. That is not something you would want to do (and it wouldn't even be possible: how would you agree on the security parameter without knowing the input, and how would you determine what it should be?).

In contrast, in the definition where the input is part of the ensemble, there is one $n_0$ for all inputs. The question of how we determine that $n_0$ is simple in practice: it is whatever we need for all of the primitives we use to be secure. Needless to say, Evans et al. are not doing anything different in their constructions. However, the definition is flawed, to the best of my understanding.
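
Putting the two unfolded definitions side by side (writing $\Delta_D(x, y, n)$ for the distinguishing advantage), the only difference is whether $(x, y)$ is fixed before or after $n_0$:

```latex
% Input inside the ensemble (Lindell): one n_0 works for every input
\forall D \;\forall p \;\exists n_0 \;\forall (x, y) \;\forall n > n_0:
  \quad \Delta_D(x, y, n) < 1/p(n)

% Input quantified first (Evans et al.): n_0 may depend on the input
\forall (x, y) \;\forall D \;\forall p \;\exists n_0 \;\forall n > n_0:
  \quad \Delta_D(x, y, n) < 1/p(n)
```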

[On a side note, there is a short paper by Mihir Bellare called *A Note on Negligible Functions* that enables you to reverse the quantifiers on the adversary and negligible function. However, to the best of my knowledge, this doesn't work for inputs.]

Distinguishable Llama:
That answers my question :) Thanks also for suggesting the paper on negligible functions.
Co-author of Evans et al. (us flag):
This assessment sounds right to me. I will fix the definition in our next errata update. Thanks to everyone here for the collective bug report!
Distinguishable Llama:
Just a note on the paper by Bellare: I think his argument for reversing the quantifiers might work for inputs as well, but I'm not entirely sure (the argument is somewhat more complicated for non-uniform adversaries, but it should at least work for uniform adversaries). The two definitions stated here are still different, because Bellare uses yet another notion of a single negligible function $\mu$: he only requires that the expression is **eventually** less than $\mu$, which still allows $n_0$ to depend on the input.
Yehuda Lindell avatar
us flag
In that case, it wouldn't suffice for a good definition of security here (having $n_0$ depend on the input). However, there are big differences between quantifying over inputs and over adversaries, so I would be careful to verify that.