
Are there batched vector/polynomial commitment proofs with sublinear proof size for Verkle Trees?


High level goal: a Verkle tree (Merkle tree using algebraic vector commitments at each level rather than hashes) with depth d where I can prove the existence of n key/value pairs in the tree. Assuming the verifier already has the tree root commitment as well as the key/value pairs, I would like the additional proof size to be sublinear in either d or n, or ideally both. Zero-knowledge is not required.

I have reviewed Vitalik's and Dankrad's posts on the Bulletproofs-style inner product argument and on KZG polynomial commitment batching: https://vitalik.ca/general/2021/06/18/verkle.html and https://dankradfeist.de/ethereum/2021/06/18/pcs-multiproofs.html.

If I understand correctly, to prove n relations of the form f_i(x_i) = y_i, assuming the verifier already has each x_i/y_i pair, the proof consists of one commitment per polynomial f_i plus a single batched opening proof whose size is constant with respect to n. The per-node commitments are the dominant cost here, so bandwidth only improves by roughly the depth ratio (depth of Merkle tree / depth of Verkle tree ≈ 8x).
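To make the accounting concrete, here is a back-of-envelope sketch. All parameters are assumptions for illustration: 32-byte hashes, 48-byte compressed commitments (e.g. a BLS12-381 G1 point), 2^32 keys, tree widths 2 and 256, and a batched opening proof of two group elements. Note the raw byte ratio comes out below the 8x depth ratio because commitments are larger than hashes:

```python
# Back-of-envelope single-key proof sizes; every constant below is an
# assumption, not a spec.
HASH_BYTES = 32      # Merkle sibling hash
COMMIT_BYTES = 48    # compressed G1 point, e.g. BLS12-381 (assumed)

def levels(leaves: int, arity: int) -> int:
    """Number of levels needed to cover `leaves` with branching `arity`."""
    depth, capacity = 0, 1
    while capacity < leaves:
        capacity *= arity
        depth += 1
    return depth

def merkle_proof_bytes(leaves: int, arity: int = 2) -> int:
    """(arity - 1) sibling hashes per level, one level per tree layer."""
    return levels(leaves, arity) * (arity - 1) * HASH_BYTES

def verkle_proof_bytes(leaves: int, width: int = 256,
                       batched_proof: int = 2 * COMMIT_BYTES) -> int:
    """One commitment per path node plus one constant-size batched
    opening proof (its size here is an assumption)."""
    return levels(leaves, width) * COMMIT_BYTES + batched_proof

print(merkle_proof_bytes(2**32))  # 32 levels * 32 bytes = 1024
print(verkle_proof_bytes(2**32))  # 4 levels * 48 bytes + 96 = 288
```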

Note that along each path these relations chain together: f_i(x_i) = F_{i+1}, where F_{i+1} is the commitment to the next node on the path. This structure seems like it might help compress the proof for each path, but I don't have any concrete ideas.
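For readers unfamiliar with the chaining, here is a toy illustration of what f_i(x_i) = F_{i+1} means along a two-level path. This is not a real commitment scheme: the "commitment" below is just an out-of-domain polynomial evaluation with no binding property, standing in for a Pedersen/KZG commitment mapped to a field element:

```python
# Toy chained relations along a Verkle path; "commit" is a hypothetical
# stand-in, NOT a cryptographic commitment.
P = 2**31 - 1  # small prime field for the toy

def interpolate_eval(values, x):
    """Evaluate the Lagrange interpolation of points (i, values[i]) at x."""
    total = 0
    n = len(values)
    for i, yi in enumerate(values):
        num, den = 1, 1
        for j in range(n):
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

WIDTH = 4  # toy branching factor

def commit(values):
    """Stand-in 'commitment': evaluate the node polynomial out of domain."""
    return interpolate_eval(values, WIDTH + 1)

# Leaf node holds raw values; its parent holds child commitments, so
# evaluating the parent polynomial at the child's index (here index 0)
# yields the child's commitment: f_i(x_i) = F_{i+1}.
leaves = [7, 11, 13, 17]
F_leaf = commit(leaves)
parent = [F_leaf, 0, 0, 0]
assert interpolate_eval(parent, 0) == F_leaf
```

The point of the question is whether this dependency between adjacent levels can be exploited to avoid sending every intermediate F_{i+1} explicitly.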

Any references/relevant papers would be helpful. Thanks!

