While I haven't verified this, I think a more natural interpretation is that $\sqrt{\Sigma_{\mathbf{G}}}$ is simply a matrix square root (in the sense of the Cholesky decomposition) of the positive semidefinite matrix $\Sigma_{\mathbf{G}}$.
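For concreteness, here is a small numpy sketch of a square root in this sense; the matrix `Sigma` is entirely made up and only stands in for $\Sigma_{\mathbf{G}}$, nothing here comes from the paper:

    import numpy as np

    # Purely illustrative: a positive semidefinite "covariance" matrix Sigma
    # and a square root of it in the sense above, i.e. a matrix L with
    # L @ L.T == Sigma.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    Sigma = A @ A.T  # symmetric positive semidefinite (here, in fact, positive definite)

    L = np.linalg.cholesky(Sigma)       # lower-triangular Cholesky factor
    assert np.allclose(L @ L.T, Sigma)  # L is a valid matrix square root of Sigma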
Your main concern seems to be the bound
$$\sqrt{\Sigma_{\mathbf{G}}} \geq \eta_\epsilon(\Lambda^\perp(\mathbf{G})).$$
While it is natural to read this as an inequality between real numbers (which would require $\Sigma_{\mathbf{G}}$ itself to be a real number, a confusing choice), there is another interpretation.
Namely, one can write this (perhaps less ambiguously) as
$$\sqrt{\Sigma_{\mathbf{G}}} \succeq \eta_\epsilon(\Lambda^\perp(\mathbf{G}))\cdot I,$$
where $\succeq$ is the inequality in the Loewner order on matrices, and $I$ is an appropriately-sized identity matrix.
This interpretation is consistent with the paper's (claimed) chosen notation.
I quote from the first paragraph of Section 2:
For convenience, we sometimes use a scalar $s$ to refer to the scaled identity
matrix $sI$, where the dimension will be clear from context.
For the notation chosen for the Loewner order, I quote the fourth paragraph of Section 2.1:
A symmetric matrix $\Sigma\in\mathbb{R}^{n\times n}$ is positive definite (respectively, positive semidefinite), written $\Sigma > 0$ (resp., $\Sigma\geq 0$), if $x^t\Sigma x > 0$ (resp., $x^t\Sigma x \geq 0$) for all nonzero $x \in\mathbb{R}^n$. We have $\Sigma > 0$ if and only if $\Sigma$
is invertible and $\Sigma^{-1} > 0$, and $\Sigma \geq 0$ if and only if $\Sigma^+ \geq 0$. Positive (semi)definiteness defines a partial ordering on symmetric matrices: we say that $\Sigma_1 > \Sigma_2$ if $(\Sigma_1 - \Sigma_2) > 0$, and similarly for $\Sigma_1 \geq \Sigma_2$ ...
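Putting the two conventions together, the bound just says that $\sqrt{\Sigma_{\mathbf{G}}} - \eta_\epsilon(\Lambda^\perp(\mathbf{G}))\cdot I$ is positive semidefinite. A rough numpy sketch of such a check (with a made-up matrix and a made-up value standing in for $\eta_\epsilon$, and using the symmetric square root so that the comparison is between symmetric matrices):

    import numpy as np

    # Toy sketch (made-up Sigma, made-up eta; nothing here is from the paper):
    # check sqrt(Sigma) >= eta * I in the Loewner order, i.e. check that
    # sqrt(Sigma) - eta * I is positive semidefinite.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    Sigma = A @ A.T          # stand-in for Sigma_G
    eta = 0.1                # stand-in for eta_eps(Lambda^perp(G))

    # Symmetric PSD square root of Sigma via its eigendecomposition.
    w, V = np.linalg.eigh(Sigma)
    sqrt_Sigma = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

    # "A scalar s refers to the scaled identity sI": eta means eta * I here.
    diff = sqrt_Sigma - eta * np.eye(3)
    # True iff the bound holds (up to a small numerical tolerance).
    print(np.all(np.linalg.eigvalsh(diff) >= -1e-9))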
In general, though, this is the natural interpretation simply because $\sqrt{\Sigma_{\mathbf{G}}}$ appears as the parameter of a Gaussian.
This should be a strong hint that you are in the multi-dimensional setting (and that this is a Cholesky-type square root of a covariance matrix; covariance matrices are always positive semidefinite, so such a decomposition always exists).
This is because, in one dimension, it would be horrible notation: one would simplify $\sqrt{\sigma^2} = \sigma$ and get a much simpler expression in terms of the more familiar parameter.
While overloading $\geq$ and writing $s$ rather than $sI$ is somewhat ambiguous, it is much less of a notational crime than writing $\sqrt{\Sigma_{\mathbf{G}}}$ for $\sigma$, so the matrix reading is the natural interpretation.