I think this is just an artifact of Regev representing $\mathbb{Z}/q\mathbb{Z}$ inside the torus $\mathbb{T} = \mathbb{R}/\mathbb{Z}$ as the subgroup $\{0, 1/q, \dots, (q-1)/q\}$, rather than working with $\{0,1,\dots,q-1\}$ directly.
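For concreteness, here is a tiny Python sketch (with made-up toy values of $n$ and $q$, nothing taken from Regev's paper) showing that the two representations differ only by a scaling of $1/q$ and carry the same information:

```python
# Toy sketch (hypothetical parameters): representing a value of Z/qZ as k/q in
# the torus T = R/Z carries the same information as representing it as
# k in {0, 1, ..., q-1}.
import numpy as np

rng = np.random.default_rng(0)
n, q = 8, 97                      # illustrative toy sizes, not secure parameters
s = rng.integers(0, q, size=n)    # secret vector over Z/qZ
a = rng.integers(0, q, size=n)    # public vector over Z/qZ
e = int(rng.integers(-2, 3))      # small illustrative error term

b_zq = int(a @ s + e) % q         # LWE sample value in {0, 1, ..., q-1}
b_torus = b_zq / q                # the same value in {0, 1/q, ..., (q-1)/q} inside T

assert round(b_torus * q) % q == b_zq   # the two representations are interchangeable
```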
There are still a few things to mention though:
First, the modern consensus is that for the hardness of $\mathsf{LWE}$ [1], the particular error distribution you use does not matter much. In particular, common choices (due to ease of sampling) are "centered binomial" distributions, or even the uniform distribution over a certain range. For examples, see the NIST PQC finalists.
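As a hedged illustration of those two samplers (the parameters `eta` and `B` below are placeholders, not values mandated by any particular candidate), a minimal Python sketch:

```python
# Sketch of two "easy to sample" error distributions: centered binomial and
# uniform over a small range. Parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def centered_binomial(eta, size):
    """Sample sum_{i=1}^{eta} (a_i - b_i) for uniform bits a_i, b_i (Kyber-style noise)."""
    a = rng.integers(0, 2, size=(size, eta))
    b = rng.integers(0, 2, size=(size, eta))
    return (a - b).sum(axis=1)          # values in [-eta, eta], mean 0

def uniform_error(B, size):
    """Sample uniformly from the integer range {-B, ..., B}."""
    return rng.integers(-B, B + 1, size=size)

print(centered_binomial(eta=2, size=10))   # e.g. eta = 2, as in some schemes
print(uniform_error(B=2, size=10))
```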
Of course, the distributions we use aren't always justified by the worst-case to average-case reductions. So what gives? We (roughly speaking) have two choices for how to choose parameters:
Cryptanalyze $\mathsf{SIVP}_\gamma$ directly, find safe parameters, and then shove them through the $\mathsf{SIVP}_\gamma \leq \mathsf{LWE}$ reduction, or
Cryptanalyze $\mathsf{LWE}$ directly, and choose those parameters.
The second is more popular by far. There are a few reasons for this:
The reduction $\mathsf{SIVP}_\gamma \leq \mathsf{LWE}$ is (concretely) not very "tight". In particular, it leads to a decently large inflation of parameters. This is mentioned in a few places, for example in Another Look at Tightness II. Of course, this often happens when one does a concrete analysis of a proof that was never written with constants in mind. There has been some follow-up work, for example this, this, and this, but nothing too mainstream. Of that work, I have only really looked into the first --- iirc it found that by inflating $(n, \log_2 q, \sigma)$ by a factor of $\approx 2$ compared to what people use in practice, you can get concrete security from the worst-case to average-case reduction (a rough illustrative sketch of this inflation follows this list).
It's not clear that (worst-case) cryptanalyzing $\mathsf{SIVP}_\gamma$ is any easier than (average-case) cryptanalyzing $\mathsf{LWE}$. This is simply because there isn't an obvious candidate for a worst-case instance of $\mathsf{SIVP}_\gamma$! Of course, one could always use an average-case analysis as a lower bound on a worst-case analysis, but then why average-case analyze a different problem than the one you actually care about?
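To make the "factor of $\approx 2$" mentioned above concrete, here is a purely illustrative Python sketch; the baseline numbers are made up, and $n \log_2 q$ is used only as a rough proxy for key/ciphertext size:

```python
# Purely illustrative: what "inflate (n, log2 q, sigma) by a factor of ~2" means.
# The baseline values are placeholders, NOT parameters from any real scheme or
# from the cited paper; n * log2(q) is only a rough proxy for size.
practice = {"n": 700, "log2_q": 13, "sigma": 3.2}
inflated = {key: 2 * val for key, val in practice.items()}

size_blowup = (inflated["n"] * inflated["log2_q"]) / (practice["n"] * practice["log2_q"])
print(inflated)       # {'n': 1400, 'log2_q': 26, 'sigma': 6.4}
print(size_blowup)    # roughly 4x more bits per LWE vector
```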
This is all to say that the $\mathsf{SIVP}_\gamma \leq \mathsf{LWE}$ reduction is (mostly) seen as stating that the $\mathsf{LWE}$ distribution is (qualitatively) the "right" one to be looking at. There are many distributions of random instances you could look at, and some might be "structurally weak" (for examples of this, see the work on "Weak Poly-LWE instances", or note that RSA defined using a random $n$-bit integer, rather than a random $n$-bit semiprime, would be the "wrong distribution" to use for structural reasons). The $\mathsf{SIVP}_\gamma \leq \mathsf{LWE}$ reduction can be interpreted as pinning down a particular distribution for $\mathsf{LWE}$, which we then quantitatively parameterize via (direct) cryptanalysis.
[1] Note that for signatures, the distribution does matter, but this is because of particulars of the constructions/security proofs of those signatures, and not because we abstractly think $\mathsf{LWE}$ is hard only when you use discrete Gaussians rather than rounded Gaussians/uniform random variables or whatever.