Why do Lattice-based Proof Systems not use the $\ell_2$ norm and canonical embedding?

I was recently reading the paper A non-PCP Approach to Succinct Quantum-Safe Zero-Knowledge. Among other things, it discusses an adaptation of the "folding" technique (from Bulletproofs) to SIS-based proofs.

The paper measures distances in the $\ell_\infty$ norm (rather than $\ell_2$), and is vague on which choice of embedding it uses (although I imagine it is the coefficient embedding). These choices mean that they pick up extra factors of $n$, in particular when bounding the norms of products.

For example, at one point (on page 20) they want to establish a bound

$$\lVert 8\lambda_i\rVert_\infty \leq 2n^2$$

where $\lambda_i$ is of the form $$\lambda_i = \pm\frac{f}{(X^u-X^v)(X^v-X^w)(X^w-X^u)},$$ $\lVert f\rVert_1 \leq 2$, and it is known that $2/(X^i-X^j)$ has coefficients in $\{-1,0,1\}$ for $i\neq j$. I can establish the desired bound as follows

  1. Write $$8\lambda_i = \pm f \frac{2}{X^u-X^v}\frac{2}{X^v-X^w}\frac{2}{X^w-X^u}$$

  2. For each multiplication, use a result of the form $\lVert rs\rVert_\infty \leq \lVert r\rVert_\infty\lVert s\rVert_\infty \min(\lVert r\rVert_0,\lVert s\rVert_0)$. In particular, for non-sparse operands $r, s$ we only have $\min(\lVert r\rVert_0,\lVert s\rVert_0) \leq n$, losing us a factor of $n$ on each of the multiplications not involving $f$, and a factor of $2$ on the multiplication involving $f$ (since $\lVert f\rVert_1 \leq 2$ with integer coefficients implies $\lVert f\rVert_0 \leq 2$). A small numeric check of the resulting bound is sketched right after this list.
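
For concreteness, here is a small numeric sanity check of this derivation (my own sketch, not from the paper; the dimension $n = 64$, the exponents $u, v, w$ and the choice $f = X^a - X^b$ with $\lVert f\rVert_1 = 2$ are arbitrary illustrative values). It recovers coefficients of ring elements by evaluating at the roots of $X^n + 1$ and solving the resulting (well-conditioned) linear system:

```python
import numpy as np

# Illustrative parameters, not taken from the paper: a power-of-two ring
# dimension and arbitrary distinct exponents.
n = 64
u, v, w = 3, 17, 40
a, b = 5, 29                      # f = X^a - X^b, so ||f||_1 = 2

# The roots of X^n + 1 are the n primitive 2n-th roots of unity.
zetas = np.exp(1j * np.pi * (2 * np.arange(n) + 1) / n)
V = np.vander(zetas, n, increasing=True)        # V[k, t] = zetas[k] ** t

def coeffs(evals):
    """Recover the integer coefficients of an element of Z[X]/(X^n + 1)
    from its evaluations at the roots of X^n + 1."""
    return np.rint(np.linalg.solve(V, evals).real).astype(np.int64)

def mono(e):
    """Evaluations of the monomial X^e at the roots of X^n + 1."""
    return zetas ** e

# The cited fact: 2/(X^u - X^v) has coefficients in {-1, 0, 1}.
assert set(coeffs(2 / (mono(u) - mono(v))).tolist()) <= {-1, 0, 1}

# 8*lambda_i = f * (2/(X^u - X^v)) * (2/(X^v - X^w)) * (2/(X^w - X^u))
evals = 8 * (mono(a) - mono(b)) / (
    (mono(u) - mono(v)) * (mono(v) - mono(w)) * (mono(w) - mono(u)))
eight_lambda = coeffs(evals)
print(np.abs(eight_lambda).max(), "<=", 2 * n ** 2)
assert np.abs(eight_lambda).max() <= 2 * n ** 2
```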

The inequality $\lVert rs\rVert_\infty \leq \lVert r\rVert_\infty \lVert s\rVert_\infty \min(\lVert r\rVert_0,\lVert s\rVert_0)$ (or something very close to it --- perhaps missing a constant factor of 2) should hold, as each coefficient of the product $rs$ is a (negacyclic) convolution of the coefficients of $r$ and $s$. This convolution has at most $\min(\lVert r\rVert_0, \lVert s\rVert_0)$ non-zero terms, and each such term has absolute value at most $\lVert r\rVert_\infty \lVert s\rVert_\infty$.
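
For what it's worth, this is easy to test empirically with a schoolbook negacyclic multiplication (again a sketch of mine; the dimension, coefficient range and sparsity below are arbitrary). Since each coefficient of $rs$ is a sum of at most $\min(\lVert r\rVert_0,\lVert s\rVert_0)$ signed products, the assertion should never fire:

```python
import numpy as np

def negacyclic_mul(r, s):
    """Schoolbook multiplication in Z[X]/(X^n + 1): a negacyclic convolution,
    where wrap-around past X^(n-1) contributes with a sign flip (X^n = -1)."""
    n = len(r)
    out = np.zeros(n, dtype=np.int64)
    for i, ri in enumerate(r):
        for j, sj in enumerate(s):
            if i + j < n:
                out[i + j] += ri * sj
            else:
                out[i + j - n] -= ri * sj
    return out

rng = np.random.default_rng(0)
n = 64
for _ in range(200):
    r = rng.integers(-5, 6, n)
    s = rng.integers(-5, 6, n) * (rng.random(n) < 0.1)   # sparse factor
    lhs = np.abs(negacyclic_mul(r, s)).max()
    rhs = np.abs(r).max() * np.abs(s).max() * min(
        np.count_nonzero(r), np.count_nonzero(s))
    assert lhs <= rhs
```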

In particular, this means that when analyzing $\lVert rs\rVert_\infty$, they often (implicitly) bound this as $\lVert rs\rVert_\infty \leq n\lVert r\rVert_\infty\lVert s\rVert_\infty$, picking up an additional factor of $n$ for each multiplication (except for multiplication by $f$, which is sparse).

This is to be contrasted with an analysis in the canonical embedding (and the $\ell_2$ norm), where sub-multiplicativity would give

$$\lVert 8\lambda_i\rVert_2^{can} \leq \lVert f\rVert_2^{can}(\lVert 2/(X^i-X^j)\rVert_2^{can})^3$$

picking up no extra factors of $n$ along the way (although I believe $\lVert r\rVert_2^{can} = \sqrt{n}\lVert r\rVert_2$ for power-of-two cyclotomics, so there may be some implicit factors of $n$ picked up).
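
To make the comparison concrete (once more my own sketch, for the power-of-two cyclotomic ring $\mathbb{Z}[X]/(X^n+1)$ with an arbitrary small $n$): the canonical embedding is evaluation at the roots of $X^n+1$, under which ring multiplication becomes coordinate-wise, so both the $\sqrt{n}$ scaling and sub-multiplicativity of $\lVert\cdot\rVert_2^{can}$ can be checked numerically.

```python
import numpy as np

n = 64                                                    # power of two, illustrative
zetas = np.exp(1j * np.pi * (2 * np.arange(n) + 1) / n)   # roots of X^n + 1
V = np.vander(zetas, n, increasing=True)                  # canonical embedding matrix

def canon(r):
    """Canonical embedding: evaluate r at all roots of X^n + 1."""
    return V @ r

rng = np.random.default_rng(1)
r = rng.integers(-5, 6, n)
s = rng.integers(-5, 6, n)

# ||r||_2^can = sqrt(n) * ||r||_2: the embedding matrix is sqrt(n) times a unitary.
assert np.isclose(np.linalg.norm(canon(r)), np.sqrt(n) * np.linalg.norm(r))

# Multiplication in the ring is coordinate-wise in the embedding, so the
# canonical l2 norm is sub-multiplicative.
rs_can = canon(r) * canon(s)
assert np.linalg.norm(rs_can) <= np.linalg.norm(canon(r)) * np.linalg.norm(canon(s)) + 1e-6
```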

I guess my overall question is

Is there some conceptual reason why the canonical embedding seems less popular in work on lattice-based proof systems?

I had been under the impression that when one wants to optimize bounds (especially bounds involving multiplications!), the canonical embedding is generally preferable due to sub-multiplicativity. But in my cursory reading, the coefficient embedding seems more popular in works on proof systems, and I am interested in knowing why.
