Score:2

Example of bad basis for lattices (worst-case for LLL)


Summary. Given some dimension $n$ (say $n=50$), is it possible to describe explicitly a lattice $L$ and a basis $B$ of $L$ such that $$ \frac{ \| LLL(B)_1 \| }{ \lambda_1(L) } > 1.02^n, $$ where $LLL(B)_1$ is the first vector of the LLL-reduced basis of $B$ (for $\delta=1$, say)? The constant $1.02$ is the one given in "LLL on the Average" by Nguyen and Stehlé. Or at least, how can I produce (deterministically) such a basis $B$?


I am trying to understand how to produce (deterministically) a lattice $L \subset \mathbb R^n$ and a "bad basis" of $L$, in any given dimension, say $n=50$. By a "bad basis" $(b_1, \dots, b_n)$, I mean one for which applying the LLL algorithm (say with $\delta = 1$) yields a basis $(v_1, \dots, v_n)$ such that $\| v_1 \|$ is as far as possible from $\lambda_1(L)$. The provable upper bound $\| v_1 \| \leq (\delta - 1/4)^{-(n-1)/2} \lambda_1(L)$ is typically not sharp; in practice one usually observes $\| v_1 \| \leq 1.02^n \lambda_1(L)$.

I am interested in explicit examples where the $b_i$'s can be described easily (or where one can define explicitly a unimodular matrix $U$ giving the bad basis $B$ from a "good basis" $G$, the latter being usable to compute $\lambda_1(L)$). I have seen that one can sometimes use $B = \mathrm{HNF}(G)$, but this does not always yield a bad basis, so it is not an explicit construction (nor very clear to me...).
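For concreteness, here is a small sketch of how one could measure this quality ratio experimentally, e.g. with fpylll (here `rows` is a hypothetical placeholder for an $n \times n$ integer basis, and the exact SVP call is only feasible in moderate dimensions):

```python
import math
from fpylll import IntegerMatrix, LLL, SVP

B = IntegerMatrix.from_matrix(rows)   # `rows`: an n x n integer basis (placeholder)
R = LLL.reduction(B, delta=0.999)     # delta close to 1 (fpylll's default is 0.99)
n = R.nrows
v1 = math.sqrt(sum(R[0, j] ** 2 for j in range(n)))  # norm of first reduced vector
s = SVP.shortest_vector(R)            # exact shortest vector (exponential time)
lam1 = math.sqrt(sum(x * x for x in s))
print(v1 / lam1, 1.02 ** n)           # the quality ratio vs. the 1.02^n benchmark
```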

Watson:
In Nguyen and Stehlé, "LLL on the Average", they say: "It is easy to prove that these bounds are tight in the worst case: both are reached for some reduced basis of some particular lattices." Basically, my question is that I would like to know precisely (exactly, explicitly) what those particular lattices are.
Score:3

I finally found an answer! It is explained in the thesis of N. Gama (page 36), available here. I copy the matrix whose rows form a basis $B$ of a lattice for which the upper bound $$\| LLL(B)_1 \| \leq (4/3)^{(n-2)/2} \lambda_1(L)$$ is actually tight (sharp) for all $n$. (Note the exponent $n-2$, not $n-1$.)

[figure: the basis matrix from Gama's thesis, p. 36, displayed together with the triangular matrix of its Gram-Schmidt coefficients]

where $\gamma_2 = \sqrt{ 4/3 }$ is Hermite's constant in dimension 2. Note that the triangular matrix on the right is the matrix of Gram-Schmidt coefficients; the off-diagonal entries are all $1/2$, which is the maximum allowed in the definition of an LLL-reduced basis.
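Assuming the construction just described (Gram-Schmidt norms shrinking by a factor $\gamma_2^{-1}$ at each step, and all off-diagonal Gram-Schmidt coefficients equal to $1/2$), a sketch of such a basis can be written down as a lower-triangular matrix whose diagonal carries the Gram-Schmidt norms; this is an illustration of the construction, not the exact matrix from the thesis:

```python
import numpy as np

n = 50
g2 = (4 / 3) ** 0.5  # Hermite's constant gamma_2 = sqrt(4/3)

# Rows of B are the basis vectors. For a lower-triangular matrix, the
# Gram-Schmidt vectors are b*_i = B[i,i] * e_i and mu_{i,j} = B[i,j] / B[j,j].
B = np.zeros((n, n))
for i in range(n):
    B[i, i] = g2 ** (-i)       # ||b*_i|| shrinks by gamma_2^{-1} at each step
    for j in range(i):
        B[i, j] = B[j, j] / 2  # mu_{i,j} = 1/2, the maximum allowed

# With delta = 1 the Lovasz condition holds with equality at every index:
# ||b*_{i+1}||^2 + (1/2)^2 ||b*_i||^2 = ||b*_i||^2, so LLL leaves B unchanged.
assert np.allclose(B[1, 1] ** 2 + 0.25 * B[0, 0] ** 2, B[0, 0] ** 2)
```

(The floating-point version is only illustrative; to feed it to an actual LLL implementation one would scale and round to an integer matrix, which slightly perturbs the exactly tight Lovász conditions.)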

Score:2

> I have seen that one can sometimes use $B = \mathrm{HNF}(G)$, but this does not always yield a bad basis, so it is not an explicit construction (nor very clear to me...).

Let $\mu : \mathbb{R}^{n\times n}\to \mathbb{R}_{\geq 0}$ be a measure of the quality of a basis. For example, one could take $\mu(B) = \lVert LLL(B)_1\rVert$, the norm of the first vector of the LLL reduction of $B$ (as you suggest), but many other quality measures are possible.

The point of using the HNF is not that it leads to a basis that is "the worst" abstractly (for most quality measures, multiplying by a random unimodular matrix of large operator norm would likely produce a very low-quality basis), but that it is the worst basis an adversary can plausibly be made to work with. This is because of the following simple result.

Let $\mu$ be efficiently computable. Let $A$ be an efficient adversary that, on input $B$, computes some unimodular $U$ such that $\mu(BU) \leq f(B)$ for some bounding function $f$. Then there exists an efficient adversary $A'$ that computes a unimodular $U$ such that $\mu(BU) \leq \min(f(B), f(HNF(B)))$.

Notably, $HNF(B)$ is an invariant of the lattice, not of the particular basis used. Therefore, the above result says that the worst basis quality an adversary has to settle for is at most $f(HNF(B))$; giving the adversary a non-HNF basis can only lead to them computing a better-quality basis than they would have obtained from the HNF basis.

The adversary $A'$ is easy to describe: it simply runs $A$ on both $B$ and $HNF(B)$, and returns whichever of the two resulting bases minimizes the quality measure. This does require that the quality measure be efficiently computable, but in most cases this is true.
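A minimal sketch of $A'$, assuming a hypothetical `hnf` routine that returns both $H = HNF(B)$ and the unimodular transform $V$ with $H = BV$ (HNF implementations in, e.g., Sage or PARI can provide such a transform), with matrices represented as numpy integer arrays:

```python
def A_prime(B, A, mu, hnf):
    """Run A on both B and HNF(B); return the unimodular transform whose
    resulting basis has the smaller (better) quality measure mu."""
    H, V = hnf(B)                      # assumed interface: H = B @ V, V unimodular
    U1, U2 = A(B), A(H)
    # H @ U2 == B @ (V @ U2), so V @ U2 is also a valid transform for B
    return U1 if mu(B @ U1) <= mu(H @ U2) else V @ U2
```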


Edit: In response to your clarification in the comments, I'll point you to a partial answer, which is "in principle yes, but I do not have an explicit matrix on hand". The following is from Chapter 3 of Seungki Kim's thesis.

> Theorem 3.4. Let $n$ be sufficiently large, $p\in (0,100)$, and suppose $k < \frac{n}{6(\log n + K(p))}$ is a positive integer, where $K(p)$ is a constant depending only on $p$. Then at least $p\%$ of $n$-dimensional $(1, 1/2)$-LLL bases have
> $$\forall\, 1\leq i \leq n-1 : \quad \frac{k-1}{k}\cdot\frac{1}{2}\leq |\mu_{i+1,i}|\leq \frac{1}{2}.$$

Here "most" requires technical care to get right (see Kim's thesis for details). Kim goes on how to discuss this result is counterintuitive, in the sense that it implies that for "most LLL bases $B$", $\lVert LLL(B)_1\rVert \approx (4/3)^{n/4}\approx (1.0745)^n$, the worst-case bound for LLL. But it is also known that, for most distributions over lattice bases $B$, $\lVert LLL(B)_1\rVert \approx 1.02^n$ --- what gives?

The claim is that $LLL$ is somehow biased, in the sense that it does not transport the aforementioned distributions over lattice bases to the uniform distribution over LLL bases.

Therefore, to find bad LLL bases, it makes sense to directly sample matrices that satisfy the LLL conditions (rather than attempting to use LLL to compute such bases experimentally). I haven't thought about whether the natural way to do this (one vector at a time, uniformly at random from a suitable set, conditioned on preserving the LLL conditions) yields the desired result; there may be a more intelligent way to sample an LLL basis uniformly (which, by Kim's result, is likely a bad basis). But, in light of Kim's thesis, it is a reasonable thing to attempt; a naive sketch follows below.
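Here is a naive sketch of the "one vector at a time" idea: it samples Gram-Schmidt data satisfying the $(\delta, 1/2)$-LLL conditions by construction, with an arbitrary upper cut-off on $\lVert b_i^*\rVert$, and makes no claim of uniformity over LLL bases:

```python
import numpy as np

def random_lll_like_basis(n, delta=0.999, rng=None):
    """Sample a lower-triangular basis whose rows satisfy the (delta, 1/2)-LLL
    conditions by construction. NOT a uniform sample over LLL bases."""
    rng = rng or np.random.default_rng()
    B = np.zeros((n, n))
    B[0, 0] = 1.0
    for i in range(1, n):
        mu = rng.uniform(-0.5, 0.5, size=i)       # size reduction: |mu_{i,j}| <= 1/2
        # Lovasz condition: ||b*_i||^2 >= (delta - mu_{i,i-1}^2) * ||b*_{i-1}||^2
        lo = max(0.0, delta - mu[-1] ** 2) ** 0.5 * B[i - 1, i - 1]
        B[i, i] = rng.uniform(lo, 2.0 * B[i - 1, i - 1])  # arbitrary cut-off
        B[i, :i] = mu * B.diagonal()[:i]          # gives mu_{i,j} = B[i,j] / B[j,j]
    return B
```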

It's worth mentioning that Kim's result is (at minimum) an existence result. One might have guessed that the reason LLL behaves well on average is simply that the worst-case bound is loose; Kim's work shows that this is not the case.

Watson:
Thank you for the input. But I would like something more explicit, and my measure of quality would rather be $$ \mu(B) := \frac{ \| LLL(B)_1 \| }{ \lambda_1(L) } . $$ Is it possible to explicitly find a basis $B$ (in dimension $n=50$, say) such that $\mu(B) > 1.02^n$?
Mark:
@Watson I've updated the answer. In principle the answer is yes, by sampling uniformly random LLL bases, but I do not know how to do this off-hand.
Watson:
Thanks for the update; I did not know about Kim's thesis. I actually found an answer; it is not obvious (even though many references state that the LLL upper bound is sharp... without explanation!).