Sigma parameter from Trapdoors for Lattices


In the paper Trapdoors for Lattices, Section 5.4 (Gaussian Sampling), the authors introduce the parameter $\sqrt{\Sigma_{\mathbf{G}}}$, which is related to the lattice $\Lambda^\perp(\mathbf{G})$. They use it as a bound on the smoothing parameter of this lattice, which suggests $\sqrt{\Sigma_{\mathbf{G}}}\in\mathbb{R}$. But later on they calculate with it as if it were a matrix, writing $s_1(\sqrt{\Sigma_{\mathbf{G}}})$, where $s_1(\cdot)$ denotes the largest singular value of its argument.

To add to the confusion, in Section 2.1 they use the $\Sigma$ notation extensively to refer to matrices, even when defining the square root of a matrix.

I feel like there is a small detail I'm missing, so if anyone could help me understand what's going on here that'd be great.


While I haven't verified this, I think the more natural interpretation is that $\sqrt{\Sigma_{\mathbf{G}}}$ is simply a matrix square root (in the sense of the Cholesky decomposition) of the positive semidefinite matrix $\Sigma_{\mathbf{G}}$. Your main concern seems to be the bound $$\sqrt{\Sigma_{\mathbf{G}}} \geq \eta_\epsilon(\Lambda^\perp(\mathbf{G})).$$ While it is natural to read this as an inequality between real numbers (which would force $\sqrt{\Sigma_{\mathbf{G}}}$ to be real, and would indeed be confusing), there is another interpretation. Namely, one can write it (perhaps less ambiguously) as

$$\sqrt{\Sigma_{\mathbf{G}}} \succeq \eta_\epsilon(\Lambda^\perp(\mathbf{G}))\cdot I,$$ where $\succeq$ denotes the Loewner order on symmetric matrices, and $I$ is an appropriately-sized identity matrix.
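Concretely, this matrix inequality can be checked numerically. Here is a minimal numpy sketch (the matrix `Sigma` and the bound `eta` are made-up illustrative values, not taken from the paper): it forms a symmetric square root of $\Sigma$ and tests $\sqrt{\Sigma} \succeq \eta I$ by checking that every eigenvalue of $\sqrt{\Sigma} - \eta I$ is nonnegative.

```python
import numpy as np

# Made-up positive semidefinite covariance matrix and smoothing bound;
# illustrative values only, not taken from the paper.
Sigma = np.array([[4.0, 1.0],
                  [1.0, 3.0]])
eta = 1.2

# Symmetric square root of Sigma via its eigendecomposition.
# Any other root R with Sigma = R @ R.T (e.g. np.linalg.cholesky(Sigma))
# has the same singular values, namely the square roots of Sigma's eigenvalues.
evals, evecs = np.linalg.eigh(Sigma)
sqrt_Sigma = evecs @ np.diag(np.sqrt(evals)) @ evecs.T

# sqrt(Sigma) >= eta * I in the Loewner order iff every eigenvalue of
# sqrt(Sigma) - eta * I is nonnegative.
print(np.linalg.eigvalsh(sqrt_Sigma - eta * np.eye(2)).min() >= 0)  # True here
```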

This interpretation is consistent with the paper's (claimed) chosen notation. I quote from the first paragraph of Section 2:

> For convenience, we sometimes use a scalar $s$ to refer to the scaled identity matrix $sI$, where the dimension will be clear from context.

For the notation chosen for the Loewner order, I quote the fourth paragraph of Section 2.1:

> A symmetric matrix $\Sigma\in\mathbb{R}^{n\times n}$ is positive definite (respectively, positive semidefinite), written $\Sigma > 0$ (resp., $\Sigma\geq 0$), if $x^t\Sigma x > 0$ (resp., $x^t\Sigma x \geq 0$) for all nonzero $x \in\mathbb{R}^n$. We have $\Sigma > 0$ if and only if $\Sigma$ is invertible and $\Sigma^{-1} > 0$, and $\Sigma \geq 0$ if and only if $\Sigma^+ \geq 0$. Positive (semi)definiteness defines a partial ordering on symmetric matrices: we say that $\Sigma_1 > \Sigma_2$ if $(\Sigma_1 - \Sigma_2) > 0$, and similarly for $\Sigma_1 \geq \Sigma_2$ ...
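As a quick illustration of the quoted partial ordering (again with made-up matrices; this is just a sketch of the definition, not anything from the paper): $\Sigma_1 \geq \Sigma_2$ holds exactly when $\Sigma_1 - \Sigma_2$ is positive semidefinite, which one can test via the eigenvalues of the difference.

```python
import numpy as np

def loewner_geq(S1, S2, tol=1e-12):
    """S1 >= S2 in the Loewner order iff S1 - S2 is positive semidefinite,
    i.e. all eigenvalues of the symmetric difference are (numerically) >= 0."""
    return np.linalg.eigvalsh(S1 - S2).min() >= -tol

# Made-up symmetric matrices for illustration.
S1 = np.array([[3.0, 0.5],
               [0.5, 2.0]])
S2 = np.eye(2)

print(loewner_geq(S1, S2))  # True: S1 - S2 is positive semidefinite
print(loewner_geq(S2, S1))  # False: the order is only partial
```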

More generally, this is the natural interpretation simply because $\sqrt{\Sigma_{\mathbf{G}}}$ is given as the parameter of a Gaussian. That should be a strong hint that you are in the multi-dimensional setting, where it is a square root (e.g. the Cholesky factor) of a covariance matrix; such a matrix is always positive semidefinite, so a root always exists. In one dimension this would be horrible notation: one would simplify $\sqrt{\sigma^2} = \sigma$ to get a much simpler expression in terms of more familiar parameters. While overloading $\geq$ and writing $s$ rather than $sI$ is somewhat ambiguous, it is much less of a notational crime than writing $\sqrt{\Sigma_{\mathbf{G}}}$ for $\sigma$, so the matrix reading is the natural interpretation.
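This also resolves the $s_1(\sqrt{\Sigma_{\mathbf{G}}})$ expression from the question: once $\sqrt{\Sigma_{\mathbf{G}}}$ is a matrix, taking its largest singular value is perfectly well-defined. A minimal sketch with a made-up $\Sigma$: for positive semidefinite $\Sigma$, the singular values of any root $R$ with $\Sigma = RR^t$ are the square roots of $\Sigma$'s eigenvalues, so $s_1(\sqrt{\Sigma}) = \sqrt{s_1(\Sigma)}$, which in one dimension collapses to $\sigma$.

```python
import numpy as np

# Made-up positive semidefinite covariance; illustrative only.
Sigma = np.array([[4.0, 1.0],
                  [1.0, 3.0]])

# Cholesky factor R with Sigma = R @ R.T.
R = np.linalg.cholesky(Sigma)

# s_1 is the largest singular value. Since Sigma = R R^t, the singular
# values of R are the square roots of Sigma's eigenvalues, hence
# s_1(sqrt(Sigma)) = sqrt(s_1(Sigma)).
s1_R = np.linalg.svd(R, compute_uv=False)[0]
s1_Sigma = np.linalg.svd(Sigma, compute_uv=False)[0]
print(np.isclose(s1_R, np.sqrt(s1_Sigma)))  # True
```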

Cristian Baeza:
Well that makes a lot of sense, thanks! I knew there was something fishy :)