Score:1

zkSnarks: Ensuring that a variable has a single value across all the operations it is used in

et flag

I am reading this explanation of zkSnarks written by Maksym Petkus - http://www.petkus.info/papers/WhyAndHowZkSnarkWorks.pdf

In section 4.5, the pdf explains how to represent the following operations

$a \times b = r_1$
$a \times c \cdot$ … wait—

as $l(x)r(x) - o(x)$ where $l(x)$ is the left operand polynomial, $r(x)$ is the right operand polynomial & $o(x)$ is the output polynomial.

Note that $a$ appears on the LHS of only the first operation (i.e. $a \times b = r_1$); it doesn't feature in the second one.

In section 4.6, he moves on to how to do the same thing when $a$ repeats, i.e.

$a \times b = r_1$
$a \times c = r_2$

Here $a$ is present in both operations.

So he says

> Nevertheless, because our protocol allows prover to set any coefficients to a polynomial, he is not restricted from setting different values of $a$ for different operations (i.e., represented by some $x$).

> This freedom breaks consistency and allows prover to prove the execution of some other program which is not what verifier is interested in. Therefore we must ensure that any variable can only have a single value across every operation it is used in.

Further ahead in 4.6.1 he says

> Consequently, if a verifier needs to enforce the prover to set the same value in all operations, then it should only be possible to modify the proportion and not the individual coefficients.

I am unable to understand this. The coefficients of $l(x)$ come from all the operations (you find $l(x)$ by applying something like Lagrange polynomial interpolation across all the operations). So how can the prover use different coefficients for different operations? Can someone explain with an example?

Score:1
gb flag

Suppose you want to use $a$ in two constraints as you've written. You want $l(x_1) = a$ and $l(x_2) = a$, where $x_1$ and $x_2$ are the indices of the two constraints. But $l(x)$ is created by the prover, and they could certainly have chosen an $l(x)$ which evaluates to two different values at $x_1$ and $x_2$. Then you effectively have two independent variables instead of $a$ being used twice. This is visualised at the top of page 36 in the linked PDF.

Up until that point, every constraint is verified independently. By this, I mean that $l(x_i) \cdot r(x_i) - o(x_i) = 0$ at each index $x_i$, which is checked by ensuring $(x - x_i)$ divides the polynomial $l(x)r(x) - o(x)$. Now, we need a way to verify equality of variables between two different constraints; in other words, to somehow establish things like $l(x_1) = l(x_2)$ between different indices.
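To make "verified independently" concrete, here is a toy numeric sketch (my own illustration, with made-up values $a = 2$, $b = 3$, $c = 4$ and constraint indices $1$ and $2$ for the two operations $a \times b = r_1$, $r_1 \times c = r_2$). Each constraint check only ever looks at one index at a time:

```python
# Toy values (an assumption for illustration, not from the paper):
a, b, c = 2, 3, 4
r1 = a * b        # first operation:  a * b  = r1
r2 = r1 * c       # second operation: r1 * c = r2

# Evaluations of the operand polynomials at the constraint indices x1=1, x2=2.
# (A real prover would interpolate polynomials through these points.)
l = {1: a,  2: r1}   # left operands
r = {1: b,  2: c}    # right operands
o = {1: r1, 2: r2}   # outputs

# Each constraint is verified independently of the others:
for xi in (1, 2):
    assert l[xi] * r[xi] - o[xi] == 0
```

Nothing in this per-index check relates the value at index 1 to the value at index 2, which is exactly the gap the variable-consistency mechanism has to close.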

The way this is done is by further limiting how the prover can construct the polynomials (e.g. $l(x)$), so they can't just interpolate whatever values they like. One way to do this is by giving the prover a polynomial $l_a(x)$ which evaluates to 1 wherever $a$ is used and 0 everywhere else - for example, a polynomial with $l_a(x_1) = l_a(x_2) = 1$ that is zero at all other indices. The prover can then simply multiply this by $a$ to set the same value of $a$ in all locations it is used. To force the prover to use this, we again encrypt it and provide the $\alpha$-shifted version: $$g^{l_a(x)}, g^{\alpha l_a(x)}$$ (as was done many times previously). The prover can then raise each of these to the power of $a$ to set that value in all places.
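A small sketch of the key point (my own toy example; indices $1$ and $2$ are where $a$ is used, index $3$ stands in for "everywhere else"): because the prover can only scale $l_a(x)$ by a single factor, the same value of $a$ necessarily appears at every index where the variable occurs.

```python
from fractions import Fraction

def lagrange(points):
    """Evaluator for the unique polynomial through the given (x, y) points."""
    def p(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return p

# Variable polynomial for `a`: 1 at both indices where `a` is used, 0 elsewhere.
l_a = lagrange([(1, 1), (2, 1), (3, 0)])

a = 7
# Scaling l_a(x) by the single factor `a` sets the same value at every
# index where the variable occurs - the prover can't pick them separately:
assert a * l_a(1) == 7
assert a * l_a(2) == 7
assert a * l_a(3) == 0
```

In the actual protocol the prover never sees $l_a(x)$ in the clear; they only get the encrypted pair $g^{l_a(x)}, g^{\alpha l_a(x)}$ and exponentiate by $a$, which is the multiplicative-group analogue of the scaling shown here.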

If we have another such pair for another variable $d$: $$g^{l_d(x)}, g^{\alpha l_d(x)}$$ Then the prover can set both $a$ and $d$ and then multiply the encrypted polynomials together (which corresponds to adding the polynomials in the exponents together).
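The "multiplying encryptions adds the exponents" step can be checked with a toy group (the modulus, generator, and evaluation values below are all made up for illustration):

```python
p = 101              # toy prime modulus (assumption, for illustration only)
g = 2                # toy generator
a, d = 7, 5          # the two variable values
la_s, ld_s = 3, 4    # hypothetical values l_a(s), l_d(s) at the secret point s

# g^{a*l_a(s)} * g^{d*l_d(s)}  ==  g^{a*l_a(s) + d*l_d(s)}
lhs = pow(g, a * la_s, p) * pow(g, d * ld_s, p) % p
rhs = pow(g, a * la_s + d * ld_s, p)
assert lhs == rhs
```

So multiplying the two encrypted, scaled variable polynomials yields the encryption of $a \cdot l_a(x) + d \cdot l_d(x)$, i.e. the combined left-operand polynomial.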

The same is done for $r(x)$ and $o(x)$.

There is one more catch, addressed in section 4.9.3: the prover can still add extra terms to their polynomials by multiplying in additional factors of $g^1$ and $g^{\alpha}$. This is fixed by introducing another secret shift by $\gamma$.

meshcollider
gb flag
> "that's the case even in the earlier case. even in the earlier case, the prover could choose whatever they want" - yes, and that's why we slowly need to limit what the prover is able to choose throughout the PDF, so by the time we are done, we are convinced that they can't have cheated.
et flag
Is there an example of how the prover could exploit this in the case where the prover is not restricted?
meshcollider
gb flag
Absolutely. Let's say you're trying to prove something which boils down to some constraints including that $b = a + 1$ and $c = a - 1$. Without the assurance that the value of $a$ used in both of these constraints is the same, $b$ and $c$ can be given arbitrary values independently, and the check will still pass. Let's say a malicious prover uses a value $b = 8$ and $c = 0$. Obviously there doesn't exist an $a$ for which these both can be true. But using $a = 7$ in the first constraint and $a = 1$ in the second, both constraints will verify independently.
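The cheat in that comment can be written out directly (a toy sketch of my own, checking each constraint in isolation with nothing tying the two local values of $a$ together):

```python
# Each constraint checked in isolation, with its own local value of `a`.
def check_first(a, b):   # constraint: b = a + 1
    return b == a + 1

def check_second(a, c):  # constraint: c = a - 1
    return c == a - 1

b, c = 8, 0                # the malicious prover's chosen outputs
assert check_first(7, b)   # passes using a = 7 in the first constraint
assert check_second(1, c)  # passes using a = 1 in the second constraint
# Yet no single a satisfies both: a + 1 = 8 gives a = 7, a - 1 = 0 gives a = 1.
```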
et flag
When the malicious prover forms $l(x)$, $r(x)$ & $o(x)$ with values different from the correct ones, how will Lagrange interpolation work? Won't the prover be tripped up at that point?
meshcollider
gb flag
Lagrange interpolation works for any values you like. For example, if $l(x)$ is supposed to equal $a$ at $x_1$ and $x_2$, there is nothing stopping the prover from interpolating $l(x_1) = a$ and $l(x_2) = b$, obtaining a different $l(x)$ than the one you would obtain by honestly setting both values to $a$. All you're doing is finding a polynomial that goes through the points $(x_1, a)$ and $(x_2, b)$ instead of $(x_1, a)$ and $(x_2, a)$.
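This is easy to see by running the interpolation both ways (a toy sketch of my own, with made-up indices $1, 2$ and values $3, 9$):

```python
from fractions import Fraction

def lagrange(points):
    """Evaluator for the unique polynomial through the given (x, y) points."""
    def p(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return p

honest = lagrange([(1, 3), (2, 3)])   # a = 3 used at both indices
cheat  = lagrange([(1, 3), (2, 9)])   # "a" = 3 in one constraint, 9 in the other

assert honest(1) == honest(2) == 3
assert cheat(1) == 3 and cheat(2) == 9   # interpolation happily obliges
```

Interpolation itself enforces nothing; it just fits a polynomial through whatever points it is given, which is why the consistency polynomials $l_a(x)$ from the answer above are needed.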
et flag
I have created a chat room for this - https://chat.stackexchange.com/rooms/134216/temp-zksnark - I have put up my question there - could you please look at it