
Non-Gaussian distribution in continuous learning with errors


The CLWE (Continuous Learning With Errors) problem and its relatives concern the hardness of recovering the secret $\vec{s}$ given polynomially many samples $(\vec{a}, t)$, where $\vec{a}$ is drawn from the standard multivariate normal distribution and $t = \gamma\,\vec{a}\cdot\vec{s} + e \pmod{1}$, with the noise $e$ also drawn from a (narrow) normal distribution.

Is the distribution of $\vec{a}$ essential to the hardness of CLWE? For example, what if I choose $\vec{a}$ from a Laplacian distribution, or a Poisson distribution? Can any statements be made about the hardness of the resulting CLWE variant?
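To make the question concrete, here is a minimal sketch of the sampling process being asked about. The parameters `n`, `gamma`, and `beta` are illustrative choices, not values from the problem statement; the only change between the standard problem and the proposed variant is the distribution that `a` is drawn from.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8          # dimension (illustrative)
gamma = 2.0    # scaling factor (illustrative)
beta = 0.01    # noise width (illustrative)

# Secret on the unit sphere, as in the CLWE formulation.
s = rng.standard_normal(n)
s /= np.linalg.norm(s)

def clwe_sample(sample_a):
    """Return one CLWE-style sample (a, t), with a drawn by sample_a."""
    a = sample_a(n)
    e = beta * rng.standard_normal()       # Gaussian noise term
    t = (gamma * (a @ s) + e) % 1.0        # t = gamma * <a, s> + e  (mod 1)
    return a, t

# Standard CLWE: a ~ N(0, I_n)
a_gauss, t_gauss = clwe_sample(rng.standard_normal)

# Variant from the question: a with i.i.d. Laplace coordinates
a_lap, t_lap = clwe_sample(lambda k: rng.laplace(size=k))
```

The hardness question is then whether swapping `sample_a` in this way preserves the reduction-based guarantees that hold for the Gaussian case.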

Related:

  1. LWE versus neural nets
  2. LWE with a binary matrix A
  3. Is LWE easy when the matrix $A$ is sparse?
