Verification in Bulletproof commitment scheme


I am reviewing the ZKP course presented by UC Berkeley (https://zk-learning.org/). On page 44 of lecture 6, attached below (https://zk-learning.org/assets/lecture6.pdf), the instructor explains the polynomial commitment scheme based on Bulletproofs.

I am a little confused about why the verifier computes com', g', and v' when it only checks v = v_L + v_R u^(d/2). Does the verifier need com' and g' for the last round? If so, how does it use them, and what extra verification happens in the last round?



Observe that it is not enough for the verifier to check only that $v = v_L + v_Ru^{d/2}$. The prover provides $v$, $v_L$, and $v_R$, and furthermore knows $u$. So the prover could cheat by claiming that some $v$ is the evaluation $f(u)$ even if $com_f$ is not a commitment to $f$ at all.

The reason for this extra computation is found in the note "recurse log d times". At a high level, the PCS built on Bulletproofs is a recursive procedure: given a commitment $com_f$ to a polynomial $f$ and a claimed evaluation $v = f(u)$, the problem is recursively transformed into a similar problem of half the size. This yields a new polynomial $f'$ with a commitment $com_f'$ and a claimed evaluation $v'$. These values are related to the initial ones in a way that guarantees soundness.
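As a rough illustration, the split-and-check relation at the heart of each round can be sketched in a few lines of Python. This is not the slides' actual protocol: plain integers stand in for field elements, and the commitment side (group elements, generators) is omitted entirely.

```python
# Minimal sketch of one halving round's consistency check: f is the
# coefficient vector of a polynomial of degree d-1 (d a power of two),
# split into a low half f_L and a high half f_R, so that
#   f(u) = f_L(u) + u^(d/2) * f_R(u).

def evaluate(coeffs, u):
    """Evaluate the polynomial with these coefficients at the point u."""
    return sum(c * u**i for i, c in enumerate(coeffs))

def split_round(f, u):
    half = len(f) // 2
    f_L, f_R = f[:half], f[half:]          # low / high coefficient halves
    v_L, v_R = evaluate(f_L, u), evaluate(f_R, u)
    # The consistency check the verifier performs on the claimed halves:
    assert evaluate(f, u) == v_L + v_R * u**half
    return f_L, f_R, v_L, v_R

f = [3, 1, 4, 1]            # f(x) = 3 + x + 4x^2 + x^3, so d = 4
print(split_round(f, 2))    # -> ([3, 1], [4, 1], 5, 6)
```

The point of the answer above is that this check alone is not sound when $v$, $v_L$, $v_R$ all come from the prover; it only becomes meaningful combined with the commitment checks.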

Finally, the verifier can compute the inputs to the new (smaller) problem itself, rather than simply accepting the values $com_f', gp', v'$ from the prover. This saves bandwidth, but more importantly it lets the verifier enforce the consistency checks necessary for soundness.

tesoke:
Thanks for the explanation. 1) Does "receiver" mean "verifier" in your explanation? 2) What extra verification is done in the last round?
Marc Ilunga:
1) I changed "receiver" to "verifier" for clarity. 2) Since we halve the problem in each round, in the final round the polynomial has only one coefficient, which the prover can send directly. The verifier can recompute it and check that it matches what it derived from the previous round.
tesoke:
Thanks. I know the algorithm for halving in each round, and that it generates left and right parts such as L, R and v_L, v_R. But I do not see how, in the final round, the algorithm generates just one coefficient. Would you please elaborate on what happens in that round? I appreciate it.
Marc Ilunga:
The final round only has one coefficient because we have been cutting the polynomial's size in half at each step, so it ends up being a constant polynomial in the final round. See slide 42 for how the size of the polynomial is reduced.
tesoke:
Thanks. I have read the pdf, but it does not mention how to reduce a degree-one polynomial to a constant polynomial in the final round. The general algorithm is clear for the reduction from degree 3 to degree 1, but not for the reduction from degree 1 to a constant. Suppose we have f0' and f1' and general parameters g0' and g1' in the second round on page 42. How should we generate the f0'' and g0'' for the final round? I would appreciate an explanation.
Marc Ilunga:
From the slides, we go from $f = f_0,\cdots , f_3$ to $f_0', f_1'$ computed as $(rf_0 + f_2, rf_1 + f_3)$. What this means is that we look at $f$ as a vector and divide it into two halves, $f_l, f_r$. To reduce the vector length by two, we compute $f' = rf_l + f_r$ (this adds the two vectors together; the result is then interpreted as the coefficients of a polynomial). So if we had $f = (f_0, f_1)$, then $f' = rf_0 + f_1$. In general, you'll actually find this computed as $rf_0 + r^{-1}f_1$.
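For concreteness, the folding step described above can be sketched in Python (plain integers stand in for field elements, and the commitment side is ignored; the names are illustrative):

```python
# Folding a coefficient vector f in half with a challenge r:
# view f as two halves (low half f_l, high half f_r) and
# compute f' = r * f_l + f_r componentwise.

def fold(f, r):
    half = len(f) // 2
    f_l, f_r = f[:half], f[half:]
    return [r * a + b for a, b in zip(f_l, f_r)]

f = [7, 2, 5, 3]               # (f_0, f_1, f_2, f_3)
r = 10
print(fold(f, r))              # [r*f_0 + f_2, r*f_1 + f_3] -> [75, 23]
print(fold(fold(f, r), r))     # a second round leaves one coefficient: [773]
```

Applied repeatedly, a length-2 vector $(f_0, f_1)$ folds to the single value $rf_0 + f_1$, which is the constant polynomial of the final round.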