
On the spectral norm in lattice-based cryptography


In the preliminaries section of a paper$^\color{magenta}{\star}$ on lattice-based cryptography, the matrix norm $\| \cdot \|_{2}$ is used. Why do we define such a norm? What is its purpose? Is it common in matrix mathematics?




$\color{magenta}{\star}$ Dan Boneh, Craig Gentry, Sergey Gorbunov, Shai Halevi, Valeria Nikolaenko, Gil Segev, Vinod Vaikuntanathan, Dhinakaran Vinayagamurthy, Fully key-homomorphic encryption, arithmetic circuit ABE, and compact garbled circuits, EUROCRYPT 2014.

Rodrigo de Azevedo comments:
Is it common in matrix mathematics? [Kind of](https://math.stackexchange.com/questions/tagged/spectral-norm)

Matrix norms are common in mathematics. It is worth mentioning that "matrix norm" can mean (at least) two things:

  1. a norm on the space of matrices, or
  2. a sub-multiplicative norm on the space of matrices.

What does sub-multiplicative mean? First recall that a norm must satisfy a few properties, namely

  1. non-negativity: $\lVert x\rVert \geq 0$,
  2. positive definiteness: $\lVert x\rVert = 0 \iff x$ is the "zero element",
  3. absolute homogeneity (of degree 1): $\lVert cx\rVert = |c| \lVert x\rVert$ for any "scalar" $c$, and
  4. the triangle inequality: $\lVert a+b \rVert \leq \lVert a\rVert + \lVert b\rVert$.

This last property could analogously be called "sub-additivity" (or "convexity" --- they're roughly the same). Essentially, given a complicated expression ($a+b$), to take its norm one can reduce to the norms of the simpler expressions ($a$ and $b$ separately).
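These properties are easy to check numerically. A quick sketch with NumPy (the vectors and the scalar are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
c = -3.7

# Absolute homogeneity: ||c x|| = |c| ||x||
assert np.isclose(np.linalg.norm(c * x), abs(c) * np.linalg.norm(x))

# Triangle inequality (sub-additivity): ||x + y|| <= ||x|| + ||y||
assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y)
```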

Anyway, one can define a norm on the space of matrices by taking any vector norm $\lVert \cdot\rVert$, considering it on $\mathbb{R}^{n^2}\cong \mathbb{R}^{n\times n}$, i.e. by viewing matrices as "square vectors", and working entry-wise. This is somewhat boring though.
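Concretely, the entry-wise construction with the $\ell_2$ vector norm is exactly the Frobenius norm. A small NumPy check (the matrix entries are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

# Viewing the matrix as a "square vector" in R^{n^2} and taking its
# l2 norm gives exactly the Frobenius norm.
entrywise = np.linalg.norm(A.reshape(-1), ord=2)
frobenius = np.linalg.norm(A, ord='fro')
assert np.isclose(entrywise, frobenius)
```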

Instead, one can try to work with sub-multiplicative matrix norms. The operator norm is one such norm. It allows you to get bounds such as

$$\lVert Ax\rVert_2 \leq \lVert A\rVert_{\mathsf{op}}\lVert x\rVert_2$$

Here, $Ax$ and $x$ are vectors and $A$ is a matrix. To bound the norm of the vector $Ax$, it suffices to know the vector norm $\lVert x\rVert_2$ and the matrix norm $\lVert A\rVert_{\mathsf{op}}$. Note that the operator norm is also sub-multiplicative, i.e. $\lVert AB\rVert_{\mathsf{op}} \leq \lVert A\rVert_{\mathsf{op}} \lVert B\rVert_{\mathsf{op}}$. So one can start with some complicated expression

$$(A + BC + DEF)x$$

and bound its $\ell_2$ norm in terms of $\lVert x\rVert_2$ and the operator norms of $A, B, C, D, E, F$.
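A quick NumPy sketch of these bounds (`np.linalg.norm(M, ord=2)` computes the operator/spectral norm of a matrix; the matrices here are arbitrary random examples):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A, B, C, D, E, F = (rng.standard_normal((n, n)) for _ in range(6))
x = rng.standard_normal(n)

def op(M):
    # Operator (spectral) norm: largest singular value of M.
    return np.linalg.norm(M, ord=2)

eps = 1e-9  # slack for floating-point rounding

# ||Ax||_2 <= ||A||_op ||x||_2
assert np.linalg.norm(A @ x) <= op(A) * np.linalg.norm(x) + eps

# Sub-multiplicativity: ||AB||_op <= ||A||_op ||B||_op
assert op(A @ B) <= op(A) * op(B) + eps

# Bounding the complicated expression via the triangle inequality
# plus sub-multiplicativity:
lhs = np.linalg.norm((A + B @ C + D @ E @ F) @ x)
rhs = (op(A) + op(B) * op(C) + op(D) * op(E) * op(F)) * np.linalg.norm(x)
assert lhs <= rhs + eps
```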

As for why this shows up in lattice-based cryptography, often

  1. one ends up with situations where error terms multiply (i.e. $e_1e_2$ for ring elements $e_1, e_2$), and
  2. one can rewrite ring multiplication $e_1e_2$ as matrix-vector multiplication $M_{e_1}v_{e_2}$, for an appropriate matrix $M_{e_1}$ and vector $v_{e_2}$.

So if you want a bound on $\lVert e_1e_2\rVert_\infty$, you can

  1. rewrite in terms of matrices and vectors, and
  2. appeal to appropriate norms (often at least one matrix norm)

to bound the multiplied error.
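As a concrete sketch of step 1, suppose the ring is the common power-of-two cyclotomic $R = \mathbb{Z}[x]/(x^n+1)$ (an assumption for illustration; the paper at hand may use a different ring). Then multiplication by $e_1$ is the negacyclic matrix $M_{e_1}$ acting on the coefficient vector of $e_2$, and the $\ell_\infty$ bound follows from the matrix $\infty$-norm:

```python
import numpy as np

n = 4
rng = np.random.default_rng(3)
e1 = rng.integers(-3, 4, size=n)  # small "error" polynomials, coefficient vectors
e2 = rng.integers(-3, 4, size=n)

def negacyclic_matrix(e):
    # M_{e}: multiplication-by-e matrix in Z[x]/(x^n + 1).
    # Wrap-around picks up a sign flip since x^n = -1.
    M = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            k = i - j
            M[i, j] = e[k % n] * (1 if k >= 0 else -1)
    return M

def ring_mul(a, b):
    # Schoolbook polynomial multiplication reduced mod x^n + 1.
    c = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i + j < n:
                c[i + j] += a[i] * b[j]
            else:
                c[i + j - n] -= a[i] * b[j]
    return c

M = negacyclic_matrix(e1)
assert np.array_equal(M @ e2, ring_mul(e1, e2))

# Bound ||e1*e2||_inf via the matrix infinity-norm (max absolute row sum):
# ||M v||_inf <= ||M||_inf * ||v||_inf
assert np.max(np.abs(M @ e2)) <= np.linalg.norm(M, ord=np.inf) * np.max(np.abs(e2))
```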
