
Why is the output of G-lattice sampling spherical in the paper GM18?


In the paper GM18, they say that the sampling algorithm, SampleG, is shown in Figure 2. It takes as input a modulus $q$, an integer variance $s$, a coset $u$ of $\Lambda^{\perp}(g^T)$, and outputs a sample statistically close to $D_{\Lambda^{\perp}_u(g^T),s}$. SampleG relies on the subroutines Perturb and SampleD, where Perturb($\sigma$) returns a perturbation, $p$, statistically close to $D_{L(\Sigma_2),\sigma \cdot \sqrt{\Sigma_2}}$, and SampleD($\sigma, c$) returns a sample $z$ such that $Dz$ is statistically close to $D_{L(D),-c,\sigma}$.

[Figure 2 from GM18: the SampleG algorithm]
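For context, my rough understanding of where the sphericity should come from is the usual perturbation/convolution idea (this is my own reading, not something I can point to in Figure 2, and the decomposition below may not be exactly the one used in the paper): if the perturbation $p$ has covariance $\sigma^2 \Sigma_2$ and the part of the output built from the SampleD samples has covariance $\sigma^2 \Sigma_1$ with

$$\sigma^2(\Sigma_1 + \Sigma_2) = s^2 I,$$

then, up to the usual smoothing-parameter conditions, the sum of the two independent parts should have spherical covariance $s^2 I$.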

I can't really understand why SampleG outputs a sample statistically close to $D_{\Lambda^{\perp}_u(g^T),s}$, i.e., why the distribution is spherical (I think its covariance should be $\sqrt{\Sigma_G}$, not $I$).
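To check the covariance-addition part of my intuition, I ran a tiny numerical experiment. This uses only continuous Gaussians with a made-up $\Sigma_1$ (the names `s`, `Sigma1`, `Sigma2`, `k` are placeholders, not the paper's actual matrices), so it says nothing about the discrete/lattice part of SampleG:

```python
import numpy as np

# Toy check of the convolution idea (continuous Gaussians only, NOT the
# discrete G-lattice sampler from GM18): if an independent perturbation with
# covariance Sigma2 is added to a sample with covariance Sigma1, and
# Sigma1 + Sigma2 = s^2 * I, then the sum has spherical covariance s^2 * I.
rng = np.random.default_rng(0)

k = 3          # dimension (placeholder, just for the demo)
s = 10.0       # target spherical parameter

# An arbitrary non-spherical covariance playing the role of Sigma1
# (in the paper this would come from the G-lattice basis); Sigma2 is
# whatever is left over so that the two add up to s^2 * I.
A = rng.normal(size=(k, k))
Sigma1 = A @ A.T
Sigma1 *= (0.5 * s**2) / np.max(np.linalg.eigvalsh(Sigma1))  # keep Sigma2 positive definite
Sigma2 = s**2 * np.eye(k) - Sigma1

n = 200_000
x1 = rng.multivariate_normal(np.zeros(k), Sigma1, size=n)
x2 = rng.multivariate_normal(np.zeros(k), Sigma2, size=n)

emp_cov = np.cov((x1 + x2).T)
print(np.round(emp_cov, 1))   # approximately s^2 * I = 100 * I
```

The empirical covariance does come out close to $s^2 I$, so the addition of covariances itself is clear to me; what I don't see is where in the pseudocode of SampleG this decomposition happens and how $\Sigma_G$ enters it.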

Also, I can't understand the meaning of $\beta$ in the subroutine Perturb($\sigma$).

In addition, the perturbation $p$ is statistically close to $D_{L(\Sigma_2),\sigma \cdot \sqrt{\Sigma_2}}$: why is the lattice $L(\Sigma_2)$, in other words, why is the lattice basis $\Sigma_2$?

I understand the framework of the SampleG algorithm at a high level, but I am confused by the pseudocode of the algorithm. Can anyone help or discuss this with me? Thanks a lot.
