Score:3

STARKs for arbitrary computation


I have been reading Vitalik's series on STARKs recently (Part 1, 2 and 3). It is a nice and very understandable read for a layman like me.

Brutal summary of my current understanding

Vitalik outlines the following technique to prove the correctness of some arbitrary computation:

  • Encode the computation trace in the values of a polynomial P(x).

  • Define a constraint-checking polynomial C(z) such that, if the computation is correct, we will have C(P(x)) = 0 for all values of x in the computation range. I will use Q to denote the degree of the composed polynomial C(P(x)).

    (In general, C can take multiple arguments, e.g., C(z1, z2, z3) = z3 - z2 - z1 checks the correct computation of a Fibonacci sequence when we require C(P(x), P(x + 1), P(x + 2)) = 0.)

  • Because C(P(x)) must be 0 for all x in 1..N, it must be divisible by Z(x) = (x - 1) (x - 2) .. (x - N): that is, C(P(x)) = Z(x) D(x) for some polynomial D(x). D is relatively easy to compute from C and P.

  • The prover commits to a table of the first M values of both P and D, with M >> Q (e.g., M = 100 Q).

  • The verifier probes the table at multiple points x, checking that C(P(x)) = Z(x) D(x), to be convinced that the computation was executed correctly.
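To make the divisibility claim concrete, here is a toy end-to-end sketch for the Fibonacci constraint C(z1, z2, z3) = z3 - z2 - z1 over a small prime field. All names (`MOD`, `lagrange_eval`, ...) and sizes are my own illustrative choices, not a real STARK:

```python
import random

MOD = 10007  # a small prime field; real systems use much larger fields

def inv(a):
    return pow(a, MOD - 2, MOD)  # modular inverse via Fermat's little theorem

def lagrange_eval(xs, ys, x):
    """Evaluate the unique polynomial of degree < len(xs) through (xs, ys) at x."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (x - xj) % MOD
                den = den * (xi - xj) % MOD
        total = (total + yi * num * inv(den)) % MOD
    return total

N = 8
trace = [1, 1]
while len(trace) < N:
    trace.append((trace[-1] + trace[-2]) % MOD)
xs = list(range(1, N + 1))  # the computation range 1..N

def P(x):   # trace polynomial: P(i) = trace[i-1] for i in 1..N
    return lagrange_eval(xs, trace, x)

def Z(x):   # vanishes on 1..N-2, where the Fibonacci constraint must hold
    r = 1
    for k in range(1, N - 1):
        r = r * (x - k) % MOD
    return r

def CP(x):  # C(P(x), P(x+1), P(x+2)) with C(z1, z2, z3) = z3 - z2 - z1
    return (P(x + 2) - P(x + 1) - P(x)) % MOD

# deg CP <= N-1 and deg Z = N-2, so D = CP / Z has degree <= 1:
# recover D by interpolating CP(x) / Z(x) at two points outside the domain.
dxs = [N + 10, N + 11]
dys = [CP(x) * inv(Z(x)) % MOD for x in dxs]

def D(x):
    return lagrange_eval(dxs, dys, x)

# Verifier-style spot check at a random point:
r = random.randrange(2 * N, MOD)
assert CP(r) == Z(r) * D(r) % MOD
```

A real prover would of course commit to the evaluation tables (e.g., via Merkle roots) and the verifier would sample from those commitments; here we just evaluate everything directly.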

So far so good. I think I understand. Vitalik goes into detail explaining two specific checks that can be done with constraint-checking polynomials:

  • Range-checking (0 <= P(x) <= H for all x in 1..N), with C(z) = z (z - 1) (z - 2) .. (z - H).
  • Fibonacci computation, with C(z1, z2, z3) = z3 - z2 - z1.
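The range-check constraint can be tried pointwise on a claimed trace; a minimal sketch (the helper name and toy values are mine):

```python
# C(z) = z (z - 1) ... (z - H) is zero exactly when z is one of 0..H,
# so C(P(x)) = 0 on the whole range certifies 0 <= P(x) <= H.
def range_constraint(z, H):
    prod = 1
    for k in range(H + 1):
        prod *= (z - k)
    return prod

trace = [3, 0, 7, 2, 5]  # claimed values of P(1)..P(5)
H = 7
assert all(range_constraint(v, H) == 0 for v in trace)
assert range_constraint(8, H) != 0  # an out-of-range value fails the check
```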

My random attempt at generalization

Sadly, Vitalik's explanation ends there, short of explaining how STARKs can be generalized to prove arbitrary computation. I have an intuition of how it could be done, but I don't see how it could be done efficiently.

For example, I thought we could define two "operand polynomials" L(x) and R(x), then:

  • Require P(x) to be either 0 or 1, for all x in 1..N (this is easily done by range checking, which Vitalik already explains how to do).
  • Additionally require (1 - P(L(x)) P(R(x))) - P(x) = 0 for all x in 1..N. In doing so, we require each P(x) to equal P(L(x)) NAND P(R(x)). Because any logic circuit can be built out of NANDs, a suitable choice of L and R should make any computation provable.
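Pointwise, the NAND idea looks like this; the wiring tables and the trace are toy values of my own:

```python
# A boolean trace where each new cell is the NAND of two earlier cells
# selected by wiring functions L and R (here: small lookup tables).
trace = [0, 1]                 # two input bits: P(1), P(2)
L = {3: 1, 4: 2, 5: 3}         # L(x): index of the left input of cell x
R = {3: 2, 4: 2, 5: 4}         # R(x): index of the right input of cell x

def nand(a, b):
    return 1 - a * b

for x in sorted(L):
    trace.append(nand(trace[L[x] - 1], trace[R[x] - 1]))

# The constraint (1 - P(L(x)) P(R(x))) - P(x) = 0 holds at every gate:
p = lambda i: trace[i - 1]
for x in L:
    assert (1 - p(L[x]) * p(R[x])) - p(x) == 0
```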

The big issue with this idea, however, is that the degree Q of the composition between P and the constraint-checking polynomial here is huge: composing P (degree about N) with L or R (also degree about N) yields a degree on the order of N^2. Complexity for the prover would become unmanageable quite quickly.

Where do I read next?

I am wondering where I can read up on how to define a constraint-checking polynomial that can actually verify any computation. The whole "accessing the memory at random locations" bit seems very tricky to me, and I'm eager to learn more.

ckamath:
[This](https://eprint.iacr.org/2018/046) paper by Ben-Sasson et al. has a primer on arithmetisation (see Sections 2 and B). Does this help?
Score:2

Your approach certainly works by Shannon's theorem. Perhaps a less artificial model for universal computation is a Turing machine with an alphabet of $N$ symbols. This lets us dispense with $P(x)$, as all values of $x$ can naturally fall within our alphabet. We can then have values $x_{p,t}$ representing the symbol at position $p$ of the Turing machine's tape at step $t$.

To have a verifiable execution of a rule $r$ at tape position $p$ at time $t$ where the rule tells us to write $y_r(x)$ when the tape reads $x$ and then move the tape by $s_r(x)$ and execute rule $r'_r(x)$, we can construct an interpolative polynomial for $y_r(x)$ and then use indicator polynomials $\delta_i(x)$ to create the verification polynomial $$f_{r,t}(\mathbf x)=y_r(x_{p,t})-x_{p,t+1}+\sum_{i=1}^N\delta_i(x_{p,t})f_{i,t+1}(\mathbf x).$$
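The indicator polynomials $\delta_i$ above can be built as Lagrange basis polynomials over the alphabet: $\delta_i(i)=1$ and $\delta_i(j)=0$ for $j\neq i$, so a sum weighted by $\delta_i(x)$ selects the branch matching the tape symbol. A toy construction with exact rationals (names mine):

```python
from fractions import Fraction

def delta(i, N):
    """Return the indicator polynomial for symbol i over the alphabet 0..N-1."""
    def d(x):
        val = Fraction(1)
        for j in range(N):
            if j != i:
                val *= Fraction(x - j, i - j)
        return val
    return d

N = 4
d2 = delta(2, N)
assert [d2(x) for x in range(N)] == [0, 0, 1, 0]
```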

However, for most computations the verification quickly becomes unwieldy in both the Shannon and Turing models: we are trying to write down a family of polynomials that verifies an enormous range of possible computational processes, so the family itself must be enormous.

Practically speaking, STARKs are best used for verifying traces where the verification polynomial is wieldy, which naturally leads us towards integer addition, subtraction and multiplication, or discrete set membership (with some tricks to cope with recursive computations). Even in these representation-friendly situations, the verification overhead can still be a significant multiple of the underlying operation.
