Matrix norms are common in mathematics.
It is worth mentioning that "matrix norm" can mean (at least) two things:
- a norm on the space of matrices, or
- a sub-multiplicative norm on the space of matrices.
What does sub-multiplicative mean? We will get to that; first, recall that a norm must satisfy a few properties, namely:
- non-negative
- $\lVert x\rVert = 0$ $\iff$ $x$ is the "zero element"
- homogeneous of degree 1: $\lVert cx\rVert = |c| \lVert x\rVert$ for any scalar $c$, and
- triangle inequality: $\lVert a+b \rVert \leq \lVert a\rVert + \lVert b\rVert$.
This last property could analogously be called "sub-additivity" (or even "convexity"; for degree-1 homogeneous functions the two are equivalent). Essentially, given a complicated expression ($a+b$), to take its norm one can reduce to the norms of simpler expressions ($a$ and $b$ separately).
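As a quick sanity check of these axioms, here is a minimal numpy sketch (the $\ell_2$ norm on $\mathbb{R}^4$ is an arbitrary choice; any vector norm would do):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), rng.standard_normal(4)
c = -3.7

assert np.linalg.norm(x) >= 0                            # non-negative
assert np.isclose(np.linalg.norm(np.zeros(4)), 0)        # zero only at the zero element
assert np.isclose(np.linalg.norm(c * x),
                  abs(c) * np.linalg.norm(x))            # homogeneous of degree 1
assert (np.linalg.norm(x + y)
        <= np.linalg.norm(x) + np.linalg.norm(y))        # triangle inequality
```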
Anyway, one can define a norm on the space of matrices by taking any vector norm $\lVert \cdot\rVert$, considering it on $\mathbb{R}^{n^2}\cong \mathbb{R}^{n\times n}$, i.e. by viewing matrices as "square vectors", and working entry-wise. This is somewhat boring though.
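Concretely (a small numpy sketch; the entrywise $\ell_2$ norm obtained this way is exactly what numpy calls the Frobenius norm):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

# View the 3x3 matrix as a vector in R^9 and take a vector norm of it.
entrywise_l2 = np.linalg.norm(A.flatten())

# numpy exposes the same quantity directly as the Frobenius norm.
assert np.isclose(entrywise_l2, np.linalg.norm(A, 'fro'))
```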
Instead, one can try to work with sub-multiplicative matrix norms.
The operator norm is one such norm: it is defined by $\lVert A\rVert_{\mathsf{op}} = \sup_{x \neq 0} \lVert Ax\rVert_2 / \lVert x\rVert_2$ (equivalently, the largest singular value of $A$).
Directly from this definition, it gives bounds such as
$$\lVert Ax\rVert_2 \leq \lVert A\rVert_{\mathsf{op}}\lVert x\rVert_2$$
Here, $x$ and $Ax$ are vectors, while $A$ is a matrix.
So, to bound the norm of the vector $Ax$, it suffices to know the vector norm $\lVert x\rVert_2$ and the matrix norm $\lVert A\rVert_{\mathsf{op}}$.
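Numerically (numpy computes this operator norm, the largest singular value, via `np.linalg.norm(A, 2)`):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

op = np.linalg.norm(A, 2)  # operator norm = largest singular value of A

# ||Ax||_2 <= ||A||_op * ||x||_2 (small slack for floating point)
assert np.linalg.norm(A @ x) <= op * np.linalg.norm(x) + 1e-12
```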
Note that the operator norm is also sub-multiplicative, i.e. $\lVert AB\rVert_{\mathsf{op}} \leq \lVert A\rVert_{\mathsf{op}} \lVert B\rVert_{\mathsf{op}}$.
So one can start with some complicated expression
$$(A + BC + DEF)x$$
and bound its $\ell_2$ norm in terms of $\lVert x\rVert_2$ and the operator norms of $A, B, C, D, E, F$.
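A minimal numpy sketch of exactly this bound, $\lVert (A+BC+DEF)x\rVert_2 \leq (\lVert A\rVert_{\mathsf{op}} + \lVert B\rVert_{\mathsf{op}}\lVert C\rVert_{\mathsf{op}} + \lVert D\rVert_{\mathsf{op}}\lVert E\rVert_{\mathsf{op}}\lVert F\rVert_{\mathsf{op}})\lVert x\rVert_2$:

```python
import numpy as np

rng = np.random.default_rng(3)
A, B, C, D, E, F = (rng.standard_normal((3, 3)) for _ in range(6))
x = rng.standard_normal(3)

def op(M):
    return np.linalg.norm(M, 2)  # operator norm (largest singular value)

# The triangle inequality splits the sum; sub-multiplicativity splits the products.
lhs = np.linalg.norm((A + B @ C + D @ E @ F) @ x)
rhs = (op(A) + op(B) * op(C) + op(D) * op(E) * op(F)) * np.linalg.norm(x)
assert lhs <= rhs + 1e-12
```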
As for why this shows up in lattice-based cryptography: often
- one ends up with situations where error terms multiply (i.e. $e_1e_2$ for ring elements $e_1, e_2$), and
- one can rewrite ring multiplication $e_1e_2$ as a matrix-vector multiplication $M_{e_1}v_{e_2}$, for an appropriate matrix $M_{e_1}$ and vector $v_{e_2}$.
So if you want a bound on $\lVert e_1e_2\rVert_\infty$, you can
- rewrite in terms of matrices and vectors, and
- appeal to appropriate norms (often at least one matrix norm)
to bound the multiplied error, as in the sketch below.
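To make this concrete, here is a minimal numpy sketch. The choice of ring $\mathbb{Z}[x]/(x^n+1)$ and the helper `negacyclic_matrix` are assumptions for illustration (the argument is the same for other rings); the matrix norm used is the $\ell_\infty$-induced norm, i.e. the maximum absolute row sum.

```python
import numpy as np

def negacyclic_matrix(a):
    """M_a such that M_a @ b gives the coefficients of a*b in Z[x]/(x^n + 1).

    (Hypothetical helper, for illustration.) Column j holds the coefficients
    of x^j * a(x): multiplying by x shifts the coefficients up one slot and
    negates the coefficient that wraps around, since x^n = -1.
    """
    n = len(a)
    M = np.zeros((n, n), dtype=np.int64)
    col = np.asarray(a, dtype=np.int64)
    for j in range(n):
        M[:, j] = col
        col = np.concatenate(([-col[-1]], col[:-1]))  # multiply the column by x
    return M

rng = np.random.default_rng(4)
n = 8
e1 = rng.integers(-2, 3, size=n)  # small "error" polynomials
e2 = rng.integers(-2, 3, size=n)

M1 = negacyclic_matrix(e1)
prod = M1 @ e2  # coefficients of e1 * e2 in the ring

# The infinity-induced matrix norm is the maximum absolute row sum, and
# ||M1 @ v||_inf <= ||M1||_inf * ||v||_inf.
matrix_inf_norm = np.abs(M1).sum(axis=1).max()
assert np.abs(prod).max() <= matrix_inf_norm * np.abs(e2).max()
```

Note that each row of $M_{e_1}$ is a signed permutation of the coefficients of $e_1$, so the row-sum norm above equals $\lVert e_1\rVert_1$, recovering the familiar bound $\lVert e_1 e_2\rVert_\infty \leq \lVert e_1\rVert_1 \lVert e_2\rVert_\infty$.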