Security proof regarding a zero-knowledge counterexample that is secure in the stand-alone model but insecure in the UC model

Background

The following zero-knowledge (ZK) counterexample is described in Canetti's work [Security and Composition of Cryptographic Protocols: A Tutorial, page 26] to show that there exists some protocol that is secure in the stand-alone model but insecure in the UC model:

Assume there is a "puzzle system" in which both the prover and the verifier can efficiently generate puzzles, and where

  • The prover can efficiently solve any given puzzle, and
  • The verifier cannot efficiently (i) solve puzzles or (ii) distinguish a valid solution from a random invalid one for a given puzzle, even if the puzzle was generated by the verifier itself. (A minimal interface sketch follows this list.)
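To fix notation for the sketches below, here is a minimal, purely illustrative Python interface for such a puzzle system. The class `PuzzleSystem`, the `trapdoor` argument, and the keyed-hash "solutions" are my own assumptions used to model the prover's asymmetric power; they are not part of Canetti's construction.

```python
import hashlib
import secrets

class PuzzleSystem:
    """Toy stand-in for the puzzle system; `trapdoor` models the prover's
    special ability to solve (and hence recognize) solutions."""

    def generate(self) -> bytes:
        # Both the prover and the verifier can efficiently sample puzzles.
        return secrets.token_bytes(32)

    def solve(self, puzzle: bytes, trapdoor: bytes) -> bytes:
        # Only a party holding the trapdoor (the prover) can solve; the keyed
        # hash is a placeholder, not a real hardness assumption.
        return hashlib.sha256(trapdoor + puzzle).digest()

    def verify(self, puzzle: bytes, solution: bytes, trapdoor: bytes) -> bool:
        # Verification also requires the trapdoor, matching assumption (ii):
        # the verifier cannot even recognize a valid solution.
        return secrets.compare_digest(solution, self.solve(puzzle, trapdoor))
```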

Now, given a ZK protocol $\pi$ for some NP relation $R$, we can construct another protocol $\pi'$ as follows (a message-flow sketch of the prover's side appears after the list):

  1. The prover and the verifier run $\pi$,
  2. The prover sends a random puzzle $p$ to the verifier,
  3. The verifier responds with a "purported" solution $s$ for $p$, plus a puzzle $p'$,
  4. If $s$ is a valid solution for $p$, the prover reveals its secret witness $w$ (in the ZK protocol $\pi$) to the verifier. Otherwise, the prover sends a solution $s'$ for $p'$ to the verifier.
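Under the assumed `PuzzleSystem` interface above, the prover's side of $\pi'$ can be sketched as follows. The `channel` object (with `send`/`recv`) and the `run_pi` callback abstracting step 1 are hypothetical; this is a sketch of the message flow, not a definitive implementation.

```python
def prover_pi_prime(channel, x, w, run_pi, ps, trapdoor):
    # Step 1: run the underlying ZK protocol pi (abstracted by run_pi).
    run_pi(channel, x, w)
    # Step 2: send a fresh random puzzle p to the verifier.
    p = ps.generate()
    channel.send(p)
    # Step 3: receive a purported solution s and a puzzle p'.
    s, p_prime = channel.recv()
    if ps.verify(p, s, trapdoor):
        # Step 4a: s is valid, so the prover reveals its witness w.
        channel.send(("witness", w))
    else:
        # Step 4b: otherwise, answer the verifier's puzzle p'.
        channel.send(("solution", ps.solve(p_prime, trapdoor)))
```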

My questions regarding the security in the stand-alone model

As claimed in Canetti's work, if $\pi$ is ZK in the stand-alone setting, then so is $\pi'$. To see this, I assume that $\pi$ securely realizes the ZK functionality $\mathcal{F}_{\textsf{zk}}^R: ((x, w), x) \rightarrow (\lambda, R(x, w))$ in the stand-alone setting and try to show that $\pi'$ also securely realizes $\mathcal{F}_{\textsf{zk}}^R$ in that setting. In particular, if the verifier is malicious in $\pi'$, I can construct the following simulator $\mathcal{S}_V$ that internally runs the real-world adversary $\mathcal{A}$ (a code sketch follows the list):

  • $\mathcal{S}_V$ invokes the simulator for $\pi$ to generate the $\pi$-part of the transcript between the honest prover and the malicious verifier (equivalently, we can rephrase this in the $\mathcal{F}_{\textsf{zk}}^R$-hybrid model),
  • $\mathcal{S}_V$ sends a random puzzle $p$ to $\mathcal{A}$,
  • $\mathcal{S}_V$ receives a pair $(s, p')$ from $\mathcal{A}$,
  • If $s$ is a valid solution for $p$, $\mathcal{S}_V$ aborts. Otherwise, $\mathcal{S}_V$ sends a solution $s'$ for $p'$ to $\mathcal{A}$.
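Continuing with the assumed interface, $\mathcal{S}_V$ can be sketched as below. The `adversary` object (with `send`/`recv`) and the `simulate_pi` callback standing in for $\pi$'s stand-alone simulator are hypothetical; note that passing `trapdoor` presumes the simulator shares the prover's puzzle-solving ability.

```python
class SimulationAbort(Exception):
    """Raised when the simulation cannot continue (here: A solved p)."""

def simulator_SV(adversary, simulate_pi, ps, trapdoor):
    # Generate the pi-part of the transcript via pi's stand-alone simulator.
    simulate_pi(adversary)
    # Send a fresh random puzzle p, exactly as the honest prover would.
    p = ps.generate()
    adversary.send(p)
    # Receive the adversary's purported solution s and its puzzle p'.
    s, p_prime = adversary.recv()
    if ps.verify(p, s, trapdoor):
        # By assumption (i), this branch occurs with negligible probability.
        raise SimulationAbort()
    # Answer p' with a valid solution s'.
    adversary.send(ps.solve(p_prime, trapdoor))
```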

It is clear that, unless $\mathcal{S}_V$ aborts, the ideal execution in the presence of $\mathcal{S}_V$ is indistinguishable from the real execution in the presence of $\mathcal{A}$. By assumption (i), the abort happens with negligible probability. Therefore, $\pi'$ is secure against a malicious verifier and is thus ZK.

Question 1:

What confuses me is that the above proof does not require assumption (ii). Indeed, Canetti argued that

Furthermore, the fact that $P$ (i.e., prover) provides $V$ (i.e., verifier) with a solution $s'$ to the puzzle $p'$ is not really a problem in a stand-alone setting, since $V$ cannot distinguish $s'$ from a random value (which $V$ could have generated by itself).

How does this assumption help to establish the ZK property of $\pi'$ in the stand-alone setting?

My understanding:

For me, sequential executions of $\pi'$ (even for the same inputs) remain ZK, since the honest prover samples a fresh random puzzle $p$ each time. With overwhelming probability (for a sufficiently large puzzle space), these puzzles differ from any puzzles whose solutions are already known to the malicious verifier. Hence, the verifier cannot use previously solved puzzles to force the prover to reveal the witness with non-negligible probability across sequential executions of $\pi'$.

Of course, to incorporate assumption (ii) into the proof, it seems that we can modify $\mathcal{S}_V$ so that in the last step it sends a random solution $s'$ to $\mathcal{A}$ (rather than a valid solution for $p'$), as sketched below. Without loss of generality, assume $\mathcal{A}$ outputs its view, so the solution $s'$ appears in that view. For malicious security in the stand-alone model, the random $s'$ in the simulation must be indistinguishable from the valid one in the real execution. However, the assumption that the verifier cannot distinguish solutions CANNOT be translated into the claim that the distinguisher (which distinguishes between ideal and real executions) cannot distinguish a real solution from a random one. Hence, assumption (ii) seems unhelpful, since the distinguisher could still efficiently separate the two worlds by inspecting $s'$ in $\mathcal{A}$'s view.
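Concretely (and still within the same hypothetical interface), the modification amounts to replacing the final `adversary.send(...)` in `simulator_SV` above:

```python
import secrets

def simulator_SV_modified_last_step(adversary):
    # Replaces the final adversary.send(...) in simulator_SV: send a random
    # string instead of a valid solution for p'. Assumption (ii) only says
    # the *verifier* cannot tell the difference; the real/ideal distinguisher
    # is not so constrained, which is the crux of the concern above.
    adversary.send(secrets.token_bytes(32))
```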

In my opinion, this problem may stem from the fact that there is a reduction between assumptions (i) and (ii), analogous to the reduction between the CDH and DDH assumptions, which makes one of the two assumptions redundant. Is this a correct understanding?

My questions regarding the security in the UC model

As claimed in Canetti's work, $\pi'$ is insecure in the UC model since two concurrent executions of $\pi'$ with the same input $(x, w)$ can leak the witness $w$ to the malicious verifier. More specifically, the attack works as follows:

$V$ first waits to receive the puzzles $p_1$ and $p_2$ from $P$ in the two sessions. It then sends $(s, p_2)$ to $P$ in the first session, for some arbitrary $s$. In response, $V$ gets from $P$ a solution $s_2$ to $p_2$, which it returns to $P$ in the second session. Since $s_2$ is a correct solution, $P$ will now disclose $w$.
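Written against the same assumed channel and message conventions as the prover sketch above, the relay looks like this; `session1` and `session2` are the malicious verifier's channels in two concurrent runs of $\pi'$ on the same input $(x, w)$:

```python
def relay_attack(session1, session2):
    # Wait for the puzzles p1 and p2 sent by the honest prover in each session.
    p1 = session1.recv()  # p1 itself is never solved
    p2 = session2.recv()
    # Session 1: send an arbitrary (almost surely invalid) solution together
    # with p2 playing the role of the verifier's puzzle p'.
    session1.send((b"arbitrary", p2))
    # The prover's step 4b answers with a valid solution s2 to p2 ...
    _tag, s2 = session1.recv()
    # ... which is relayed as the "purported" solution in session 2.
    session2.send((s2, b"any puzzle"))
    # s2 verifies, so the prover's step 4a now reveals the witness w.
    _tag, w = session2.recv()
    return w
```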

Let us assume that $\pi$ securely realizes $\mathcal{F}_{\textsf{zk}}^R$ in the UC model and revisit this insecurity from the perspective of a security proof. We may consider a simulator $\mathcal{S}_V'$ in the UC model that is identical to $\mathcal{S}_V$ except that it interacts externally with the environment machine $\mathcal{Z}$, which plays the malicious verifier (rather than interacting internally with $\mathcal{A}$). Since $\pi'$ is NOT UC-secure, $\mathcal{S}_V'$ must abort in the last step with non-negligible probability. In other words, $\mathcal{Z}$ must be able to efficiently solve the puzzle $p$ sampled by the honest prover. $\mathcal{Z}$ is more informed than $\mathcal{A}$ in the stand-alone model, since the solution $s$ picked by $\mathcal{Z}$ can depend not only on its view in the current session but also on other concurrent sessions. So there seem to be two possible ways for $\mathcal{Z}$ to solve $p$: (a) $\mathcal{Z}$ solves $p$ by itself, or (b) $\mathcal{Z}$ cannot efficiently solve $p$ itself but uses other sessions to solve it. The attack above belongs to case (b).

Question 2:

Even if we assume that $\mathcal{Z}$ cannot launch the case (b) attack, is $\pi'$ still insecure in the UC model? It seems to me that $\mathcal{Z}$ could simply follow the code of the honest prover to solve $p$, since assumption (i) constrains only the verifier and does not apply to $\mathcal{Z}$. In that case, $\mathcal{Z}$ could use the case (a) attack to distinguish the two worlds.

More generally, if we make some assumptions on the corrupted parties in the UC model, do these assumptions apply to the environment machine $\mathcal{Z}$ that controls them? Note that in the stand-alone model, the distinguisher and the adversary are distinct entities, whereas in the UC model both roles are played by the environment machine $\mathcal{Z}$.

Question 3:

In order to formally argue that a protocol is secure in the UC model, should we explicitly enumerate all possible ways for $\mathcal{Z}$ to construct its outgoing messages based on its views of arbitrary asynchronous executions of (a possibly unlimited number of) sessions?

I have read some UC proofs, but most do not explicitly mention asynchronous executions of sessions. Since the protocols in other sessions can be arbitrary, it seems impossible to enumerate $\mathcal{Z}$'s messages based on its views of all sessions. So, when writing a security proof in the UC model, how can we make sure that the messages $\mathcal{Z}$ forwards to the simulator will not cause the simulation to fail?
