It is worth mentioning that there are two answers to this, depending on whether you are working with
- encryption, where everything is easy, or
- signatures, where things are less easy.
Encryption:
For encryption, the practical consensus at this point is that the particular distribution you sample $e$ from doesn't matter much, provided it has high enough entropy.
Common distributions are things like
- discrete Gaussians: these are the maximum-entropy distribution on $\mathbb{Z}$ for a given (fixed) mean and variance, i.e. they are somewhat natural from this entropy point of view,
- uniform distributions, especially on sets of the form $[-2^k+1,\dots, 2^k]$: these are extremely simple to sample from, requiring only $k+1$ uniform bits,
- (centered) binomial distributions: if you sample $2k$ bits and output the difference of the two halves' sums, you get a more concentrated distribution (allowing a smaller ciphertext modulus $q$) that still has reasonably high entropy.
There are plenty of other things you can do, though. It doesn't matter too much, as long as the support of the error distribution isn't too small (which is where things like the Arora-Ge attack start becoming an issue). A sketch of the three samplers above is given below.
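To make the three options concrete, here is a minimal Python sketch of each sampler. This is purely illustrative: a real implementation would need constant-time code, and the discrete Gaussian sampler in particular is a naive truncated inversion sampler, not production-quality.

```python
# Illustrative samplers for the three error distributions above.
import math
import secrets

def uniform_error(k):
    # Uniform on [-2^k + 1, ..., 2^k]: exactly k+1 uniform bits.
    return secrets.randbelow(2 ** (k + 1)) - (2 ** k - 1)

def centered_binomial(k):
    # Difference of two sums of k uniform bits: mean 0, variance k/2.
    a = sum(secrets.randbelow(2) for _ in range(k))
    b = sum(secrets.randbelow(2) for _ in range(k))
    return a - b

def discrete_gaussian(sigma, tail=12):
    # Naive inversion sampling on the truncated support
    # [-tail*sigma, ..., tail*sigma]; fine for illustration only.
    bound = math.ceil(tail * sigma)
    weights = [math.exp(-x * x / (2 * sigma * sigma))
               for x in range(-bound, bound + 1)]
    u = secrets.randbelow(2 ** 53) / 2 ** 53 * sum(weights)
    acc = 0.0
    for x, w in zip(range(-bound, bound + 1), weights):
        acc += w
        if u <= acc:
            return x
    return bound
```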
Signatures:
For signatures, things can matter quite a bit more.
This is because you often end up in a situation where some random quantity $r$ is distributed according to a distribution that depends on the secret key (for example $r\sim \mathcal{D}_{\mathsf{sk}, \sigma^2}$, or something similar).
If an adversary gets enough samples of $r$, they can plausibly reconstruct the secret key, which is bad.
This can be fixed in a variety of ways.
A very common one is known as "rejection sampling".
You again
- generate $r\sim\mathcal{D}_{\mathsf{sk}, \sigma^2}$, and
- "reject" $r$ (i.e. regenerate an independent value $r'$) unless some condition $\mathsf{Cond}(r)$ holds.
The condition is chosen such that the conditional distribution $\mathcal{D}_{\mathsf{sk}, \sigma^2}\mid \mathsf{Cond}(r)$ is independent of the secret key, i.e. has no leakage issues.
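Here is a rough sketch of this idea in the style of Lyubashevsky's "Fiat-Shamir with aborts" rejection sampling. The names, the continuous Gaussian stand-in, and the constant `M` are illustrative assumptions, not any concrete scheme:

```python
# Sketch: rejection sampling so the accepted value is (close to) a centered
# Gaussian, independent of the secret-dependent shift v.
import math
import random

def rho(x, sigma, center=0.0):
    # Unnormalized Gaussian weight exp(-(x - center)^2 / (2 sigma^2)).
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def sample_rejected(v, sigma, M):
    # v is the secret-dependent shift. M must be chosen so that
    # rho(z, sigma) <= M * rho(z, sigma, v) except with negligible
    # probability; then the accepted z is statistically close to a
    # centered Gaussian, independent of v.
    while True:
        z = random.gauss(v, sigma)  # candidate: its distribution depends on v
        accept = min(1.0, rho(z, sigma) / (M * rho(z, sigma, center=v)))
        if random.random() < accept:  # this acceptance test is Cond(r)
            return z
```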
In this setting, Gaussians often perform better on a variety of metrics, for example
- shortness (which leads to smaller parameters for the scheme), and
- the probability that $\mathsf{Cond}(r)$ occurs (so you have to generate fewer candidate signatures before one "passes").
That being said, they are not obviously optimal.
In practice, rejection sampling with a Gaussian target distribution seems to require computing transcendental functions to relatively high precision, which can be problematic (it is less efficient, requires precomputed constants for the transcendental functions, and is hard to make side-channel secure).
For this reason, the signature scheme Dilithium chose the target distribution to be uniform on a box.
There is a newer signature scheme in NIST's call for additional post-quantum signatures that chooses it to be uniform over a (hyper)ball (HAETAE).
In both cases, Gaussians are not used in practice, though theoretically there are larger gains to be had here than in the (standard) encryption setting.
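For completeness, here is a rough sketch of the "uniform on a box" rejection used by Dilithium-style schemes, reduced to a single coefficient. The parameter names (`gamma1`, `beta`) loosely follow the Dilithium spec, but this is a simplified illustration, not the actual scheme:

```python
# Sketch: box rejection for one coefficient of z = y + c*s. The accepted z is
# uniform on a fixed interval, independent of the secret-dependent term cs.
import secrets

def sample_z_coefficient(cs, gamma1, beta):
    # Assumes |cs| <= beta (guaranteed by the scheme's parameter choices).
    while True:
        # y uniform on [-gamma1 + 1, ..., gamma1]
        y = secrets.randbelow(2 * gamma1) - (gamma1 - 1)
        z = y + cs
        if abs(z) <= gamma1 - beta - 1:
            # Inside the safe sub-box: for every |cs| <= beta, each such z is
            # reachable, so the conditional distribution doesn't depend on cs.
            return z
```

Note that no transcendental functions are needed here: the entire rejection step is a single comparison, which is easy to implement in constant time. This is essentially the efficiency argument for choosing a box over a Gaussian.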