Score:2

batch Fiat-Shamir


The prover has $n$ group elements $g_1, \dots, g_n$ and wishes to demonstrate knowledge of the discrete logarithm to base $g$ of each of them, i.e., for each $i \in [1,n]$ she knows some $e_i$ such that $g_i = g^{e_i}$. We know that by applying Fiat-Shamir to Schnorr's protocol we can get $n$ non-interactive proofs of the form $(R_i, c_i, s_i)$, where $c_i$ is the hash of $(g_i, R_i)$.
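To make the baseline concrete, here is a minimal sketch of these $n$ independent Fiat-Shamir'd Schnorr proofs. The group is a toy choice for illustration only (the order-11 subgroup of $\mathbb{Z}_{23}^*$ with generator 4); a real deployment would use a cryptographically large group, and all parameter names here are assumptions of the sketch:

```python
# Sketch: n independent Fiat-Shamir'd Schnorr proofs (R_i, c_i, s_i),
# with c_i = hash(g_i, R_i). Toy group: order-11 subgroup of Z_23^*.
import hashlib
import secrets

p, q, g = 23, 11, 4  # illustrative parameters; q is the order of g mod p

def H(*vals):
    # Hash-to-challenge, reduced into Z_q (fine for a toy q).
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(e):
    """Prove knowledge of e with g_i = g^e mod p; returns (R, c, s)."""
    r = secrets.randbelow(q)
    R = pow(g, r, p)
    c = H(pow(g, e, p), R)   # c_i = hash(g_i, R_i)
    s = (r + c * e) % q
    return R, c, s

def verify(gi, R, c, s):
    # Accept iff the challenge is recomputed correctly and g^s == R * g_i^c.
    return c == H(gi, R) and pow(g, s, p) == (R * pow(gi, c, p)) % p

es = [3, 7, 5]                   # the prover's secrets e_i
gs = [pow(g, e, p) for e in es]  # public g_i = g^{e_i}
proofs = [prove(e) for e in es]
assert all(verify(gi, *pi) for gi, pi in zip(gs, proofs))
```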

The question is: can we instead use the same challenge $c$ for all proofs, defined as $c = \mathrm{hash}((R_1,\dots, R_n), (g_1, \dots, g_n))$? Each proof would then look like $(R_i, c, s_i)$, sharing the same $c$. Would that change the soundness of the scheme? I vaguely feel that this may work via correlation intractability.

Score:2

One way to answer your question is to check if the following proof system is sound:

  • Prover sends $R_1,\dots,R_n$
  • Verifier sends challenge $c$
  • Prover responds with $s_1,\dots,s_n$
  • Verifier checks for each $i$ that $(R_i,c,s_i)$ is an accepting transcript of the Schnorr protocol

Note that this proves a different language, namely $L=\{(g_1,\dots,g_n)\,|\, \forall i\exists w_i:\,g_i=g^{w_i}\}$. Now, if the original protocol was special sound, which is the case for Schnorr's protocol, then this is also special sound. Given two accepting transcripts $(R_{1..n},c,s_{1..n})$ and $(R_{1..n},c',s'_{1..n})$ with $c\neq c'$, you can use the witness extractor of Schnorr's protocol for each $i$ on input $(R_i,c,c',s_i,s'_i)$ to recover every $w_i$.
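Concretely, the extractor applies the standard Schnorr extraction $w_i = (s_i - s'_i)(c - c')^{-1} \bmod q$ independently for each $i$. A sketch with the same toy order-11 subgroup of $\mathbb{Z}_{23}^*$ as an assumed illustrative group:

```python
# Sketch: special-soundness extractor for one index i, given two accepting
# transcripts (R, c, s) and (R, c', s') with the same R and c != c'.
# Toy parameters (order-11 subgroup of Z_23^*); illustration only.
p, q, g = 23, 11, 4

def extract(c, c_prime, s, s_prime):
    """Recover w from s = r + c*w and s' = r + c'*w (all mod q)."""
    assert c != c_prime
    inv = pow((c - c_prime) % q, -1, q)  # modular inverse (Python 3.8+)
    return ((s - s_prime) * inv) % q

# Forge two transcripts by hand for witness w = 7 and nonce r = 4:
w, r = 7, 4
c, c2 = 3, 9
s, s2 = (r + c * w) % q, (r + c2 * w) % q
assert extract(c, c2, s, s2) == w
```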

So soundness is preserved. What about zero-knowledge? You can just invoke the special HVZK simulator for Schnorr's protocol $n$ times on input $c$ to get $n$ accepting transcripts of the form $(R_i,c,s_i)$. Together these have the same distribution as a real transcript of the batched protocol.
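The simulator works backwards: sample $s_i$ uniformly, then solve the verification equation for $R_i = g^{s_i} g_i^{-c}$. A sketch, again over an assumed toy group (order-11 subgroup of $\mathbb{Z}_{23}^*$):

```python
# Sketch: special HVZK simulator run n times on the same challenge c.
# Toy parameters (order-11 subgroup of Z_23^*); illustration only.
import secrets

p, q, g = 23, 11, 4

def simulate(gi, c):
    """Sample s first, then solve for R so that (R, c, s) verifies."""
    s = secrets.randbelow(q)
    R = (pow(g, s, p) * pow(gi, -c, p)) % p  # R = g^s / gi^c (Python 3.8+)
    return R, c, s

gs = [pow(g, e, p) for e in (3, 7, 5)]  # statements g_i = g^{e_i}
c = 6                                    # any fixed challenge in Z_q
transcripts = [simulate(gi, c) for gi in gs]
# Every simulated transcript satisfies g^s == R * gi^c:
assert all(pow(g, s, p) == (R * pow(gi, c, p)) % p
           for gi, (R, c, s) in zip(gs, transcripts))
```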

In short, the new protocol is also a $\Sigma$-protocol, i.e. it satisfies special soundness (it's a proof of knowledge), special HVZK and public randomness for the verifier. So applying the Fiat-Shamir transform to it should get you a non-interactive zero-knowledge proof of knowledge with provable security in the random oracle model.
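Putting it together, the batched Fiat-Shamir scheme the question proposes looks like this sketch, with one shared challenge $c = \mathrm{hash}(R_1,\dots,R_n,g_1,\dots,g_n)$. The group and hash-to-challenge construction are illustrative assumptions (toy order-11 subgroup of $\mathbb{Z}_{23}^*$, SHA-256 reduced mod $q$):

```python
# Sketch: batched Fiat-Shamir over Schnorr -- a single shared challenge c,
# proofs (R_i, c, s_i). Toy group (order-11 subgroup of Z_23^*).
import hashlib
import secrets

p, q, g = 23, 11, 4  # illustrative parameters

def H(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def batch_prove(es):
    """Prove knowledge of all e_i at once; returns (R_1..R_n, c, s_1..s_n)."""
    rs = [secrets.randbelow(q) for _ in es]
    Rs = [pow(g, r, p) for r in rs]
    gs = [pow(g, e, p) for e in es]
    c = H(*Rs, *gs)                    # one challenge binds all commitments
    ss = [(r + c * e) % q for r, e in zip(rs, es)]
    return Rs, c, ss

def batch_verify(gs, Rs, c, ss):
    return (c == H(*Rs, *gs) and
            all(pow(g, s, p) == (R * pow(gi, c, p)) % p
                for gi, R, s in zip(gs, Rs, ss)))

es = [3, 7, 5]
gs = [pow(g, e, p) for e in es]
assert batch_verify(gs, *batch_prove(es))
```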

Sean
Many thanks for the insights. I'm sitting in a Tesla and translated this thread with AI.
