
The asymptotic form of Hermite's constant in lattices


There are some linear upper bounds on Hermite's constant $\gamma_d$, such as $\gamma_d \leq 2d/3$ and $\gamma_d \leq d/4+1$, so we can claim that $\gamma_d = O(d)$. There is also a rather tight asymptotic bound for $\gamma_d$: $\frac{d}{2\pi e}+\frac{\log(\pi d)}{2\pi e}+o(1) \leq \gamma_d \leq \frac{1.744d}{2 \pi e}(1+o(1))$ (see page 34 of The LLL Algorithm, edited by Phong Q. Nguyen et al.). After this tight asymptotic bound, the book states (also on page 34): "Thus, $\gamma_d$ is essentially linear in $d$." My question is: can we express $\gamma_d$ in the asymptotic form $\Theta(d)$? I have searched some related papers, but I can't find the corresponding expression.

Mark:
What you want seems to follow from the inequalities you quoted. The first inequality (the lower bound) implies that $\gamma_d = \Omega(d)$. The second inequality (the upper bound) implies that $\gamma_d = O(d)$. Combined, you have that $\gamma_d = \Theta(d)$.
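
To spell the step out (a sketch using only the bounds quoted in the question), the two sides of the tight bound pin $\gamma_d/d$ between two positive constants:

$$\gamma_d \;\geq\; \frac{d}{2\pi e}+\frac{\log(\pi d)}{2\pi e}+o(1) \;\geq\; \frac{d}{2\pi e}\,(1+o(1)) \quad\Longrightarrow\quad \gamma_d = \Omega(d),$$

$$\gamma_d \;\leq\; \frac{1.744\,d}{2\pi e}\,(1+o(1)) \quad\Longrightarrow\quad \gamma_d = O(d),$$

so

$$\frac{1}{2\pi e} \;\leq\; \liminf_{d\to\infty}\frac{\gamma_d}{d} \;\leq\; \limsup_{d\to\infty}\frac{\gamma_d}{d} \;\leq\; \frac{1.744}{2\pi e},$$

i.e. $\gamma_d = \Theta(d)$, with implied constants roughly $0.0585$ and $0.102$.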


