Score:1

How "unorthogonal" can a LLL-reduced basis be?

va flag

I have recently been studying LLL reduction. From the size condition and the Lovász condition, I understand that the basis vectors are guaranteed to be somewhat orthogonal. But I couldn't figure out how orthogonal an LLL-reduced basis has to be in a geometric or intuitive sense. For example,

  1. How small can the angle between two vectors of an LLL-reduced basis be?
  2. How long can the "diagonal" of the fundamental parallelepiped of an LLL-reduced basis be, compared to its Gram-Schmidt counterpart?
Score:2
ng flag

While this is a different measure than the two you mention, a typical measure of "orthogonality" for lattice basis reduction is the orthogonality defect. The motivation for defining it is what is known as Hadamard's inequality.

Let $\mathbf{B}$ be a matrix with columns $\mathbf{b}_1,\dots,\mathbf{b}_n$. Then $\lvert\det \mathbf{B}\rvert\leq \prod_{i=1}^n \lVert \mathbf{b}_i\rVert_2$, with equality if and only if the $\mathbf{b}_i$ are mutually orthogonal.

Recall that the determinant is unchanged by Gram-Schmidt orthogonalization (it amounts to multiplying $\mathbf{B}$ on the right by a unit upper-triangular matrix, which has determinant $1$). So we can freely switch $\det \mathbf{B} = \det \widehat{\mathbf{B}}$, where $\widehat{\mathbf{B}}$ is the Gram-Schmidt orthogonalization of $\mathbf{B}$.

Anyway, this suggests that the ratio

$$\frac{\prod_{i = 1}^n \lVert \mathbf{b}_i\rVert_2}{\lvert\det \widehat{\mathbf{B}}\rvert}$$

may be a useful (numerical) measure of how orthogonal a given basis $\mathbf{B}$ is. This quantity is known as the orthogonality defect, and (by Hadamard's inequality) is lower-bounded by 1. A natural interpretation of your question is then

How bad can the orthogonality defect of an LLL-reduced basis be?
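
For concreteness, the defect is easy to compute numerically. Below is a minimal numpy sketch (my own illustration, not from any particular reference), taking the columns of $\mathbf{B}$ as the basis vectors as above and using $\det \widehat{\mathbf{B}} = \pm\det \mathbf{B}$ so that Gram-Schmidt never has to be run explicitly.

```python
import numpy as np

def orthogonality_defect(B):
    """Orthogonality defect of a full-rank square basis B (columns = basis vectors).

    Since det(B-hat) equals det(B) up to sign, the denominator is just |det B|,
    so Gram-Schmidt never has to be run explicitly.
    """
    prod_norms = np.prod(np.linalg.norm(B, axis=0))  # product of column norms
    return prod_norms / abs(np.linalg.det(B))

# An orthogonal basis has defect exactly 1; a skewed basis has defect > 1.
print(orthogonality_defect(np.array([[2.0, 0.0], [0.0, 3.0]])))   # 1.0
print(orthogonality_defect(np.array([[1.0, 10.0], [0.0, 1.0]])))  # about 10.05
```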

There are some bounds known, but I do not recall how tight they are (I remember someone's thesis --- perhaps Phong Nguyen's or Damien Stehlé's --- containing some discussion of this). Still, the following upper bound is known.

If $\mathbf{B}$ is an LLL-reduced basis of an $n$-dimensional lattice, then its orthogonality defect is at most $2^{n^2/2}$.

See for example Lemma 1.18, but this is a fairly well-known result (once you know the term "orthogonality defect" to search on).
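
To get a feel for how this plays out, here is a rough, self-contained experiment (my own sketch, unrelated to the construction behind the cited lemma): a schoolbook floating-point LLL over numpy applied to a random integer basis, with the resulting defect compared against the $2^{n^2/2}$ bound. Note that here the rows of the matrix are the basis vectors, which is the usual convention in LLL code.

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the rows of B (no normalization)."""
    n = B.shape[0]
    Bs = B.astype(float).copy()
    mu = np.eye(n)
    for i in range(n):
        for j in range(i):
            mu[i, j] = np.dot(B[i], Bs[j]) / np.dot(Bs[j], Bs[j])
            Bs[i] = Bs[i] - mu[i, j] * Bs[j]
    return Bs, mu

def lll_reduce(B, delta=0.75):
    """Schoolbook LLL on the rows of B (floating-point Gram-Schmidt).
    A pedagogical sketch only; use fplll/fpylll for anything serious."""
    B = B.astype(float).copy()
    n = B.shape[0]
    Bs, mu = gram_schmidt(B)
    k = 1
    while k < n:
        # Size-reduce b_k against b_{k-1}, ..., b_1 (largest index first).
        for j in range(k - 1, -1, -1):
            q = round(np.dot(B[k], Bs[j]) / np.dot(Bs[j], Bs[j]))
            if q != 0:
                B[k] = B[k] - q * B[j]
        Bs, mu = gram_schmidt(B)
        # Lovász condition between b_{k-1} and b_k.
        if np.dot(Bs[k], Bs[k]) >= (delta - mu[k, k - 1] ** 2) * np.dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[[k, k - 1]] = B[[k - 1, k]]          # swap b_k and b_{k-1}
            Bs, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B

def orthogonality_defect(B):
    """Product of the row norms divided by the covolume |det B|."""
    return np.prod(np.linalg.norm(B, axis=1)) / abs(np.linalg.det(B))

rng = np.random.default_rng(1)
n = 6
B = rng.integers(-50, 51, size=(n, n))            # random integer basis
assert abs(np.linalg.det(B.astype(float))) > 0.5  # almost surely full rank
R = lll_reduce(B)

print("defect before LLL:", orthogonality_defect(B.astype(float)))
print("defect after  LLL:", orthogonality_defect(R))
print("2^(n^2/2) bound  :", 2.0 ** (n * n / 2))
```

On random instances the defect after reduction typically lands far below the worst-case bound; the bound describes how bad things can get, not typical behaviour.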

haoyu avatar
va flag
I've read from the [wiki page of LLL-reduction](https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%93Lov%C3%A1sz_lattice_basis_reduction_algorithm) that $\prod \|\mathbf{b}_i\| \leq 2^{n(n-1)/4} \, \det(\mathcal{L})$. Is this a tighter bound than the one you gave or am I missing something?
Mark avatar
ng flag
@haoyu it is, but both of them are $2^{O(n^2)}$, i.e. there isn't that big of a difference between them practically.
Score:2
ch flag

An LLL-reduced basis won't be perfectly orthogonal, but it can't be arbitrarily far from orthogonal either. The size condition bounds every Gram-Schmidt coefficient by $|\mu_{i,j}| \le \frac{1}{2}$, so no basis vector leans too heavily onto the earlier Gram-Schmidt directions, and the Lovász condition keeps consecutive Gram-Schmidt norms from shrinking by more than a constant factor (a factor of $\sqrt{2}$ for the usual $\delta = 3/4$). In dimension two, for example, if the shorter vector is listed first, size reduction alone forces the angle between the two vectors to lie between $60^\circ$ and $120^\circ$; in higher dimensions the pairwise angles can get smaller, but only in a controlled way, quantified for instance by the orthogonality defect bound in the other answer.

Similarly, the "diagonal" $\mathbf{b}_1 + \dots + \mathbf{b}_n$ of the fundamental parallelepiped can't be too long compared to its Gram-Schmidt counterpart. For an LLL-reduced basis (with $\delta = 3/4$) one has $\lVert \mathbf{b}_i \rVert \le 2^{(i-1)/2} \lVert \mathbf{b}_i^* \rVert$, so by the triangle inequality the diagonal of the reduced parallelepiped exceeds the diagonal $\mathbf{b}_1^* + \dots + \mathbf{b}_n^*$ of the Gram-Schmidt one by at most a factor on the order of $\sqrt{n}\,2^{(n-1)/2}$ (exponential in the dimension, but never unbounded).

Overall, an LLL-reduced basis is good enough for many applications. If you need vectors that are shorter or closer to orthogonal than that, stronger (and slower) reductions such as BKZ or HKZ reduction give better guarantees.
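
If you want to eyeball the first question (pairwise angles) on a concrete basis, a few lines of numpy suffice. This is just an illustrative sketch, and the example matrix is an arbitrary 2-dimensional reduced basis I picked, nothing canonical.

```python
import numpy as np

def pairwise_angles_deg(B):
    """Angles (in degrees) between every pair of rows of B,
    a quick way to see how far from orthogonal a (reduced) basis is."""
    U = B / np.linalg.norm(B, axis=1, keepdims=True)   # normalize the rows
    G = np.clip(U @ U.T, -1.0, 1.0)                    # cosines of pairwise angles
    n = B.shape[0]
    return [(i, j, float(np.degrees(np.arccos(G[i, j]))))
            for i in range(n) for j in range(i + 1, n)]

# A 2-dimensional LLL-reduced basis; the angle comes out to about 63.4 degrees.
print(pairwise_angles_deg(np.array([[2.0, 0.0], [1.0, 2.0]])))
```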
