What follows is essentially a comment, though perhaps too long for one.
There are a number of ways in which this problem currently seems underspecified.
For example, does an attacker need to obtain $\Sigma$ exactly, or is obtaining it approximately also a problem?
One method to approximately obtain it is
$$\frac{\sum_i a_i}{n} = \Sigma + \frac{\sum_i (v_i\bmod\mathcal{L})}{n}.$$
Here, $x\bmod\mathcal{L} := x - \lfloor x\rceil$.
If the $v_i$ are randomly sampled then, under suitable assumptions on the underlying distribution (which are somewhat common), we will have $\mathbb{E}[v_i\bmod\mathcal{L}] = 0$; moreover, $v_i\bmod\mathcal{L}$ will be (at least close to) uniform on $\mathcal{V} = \{x\mid \lfloor x\rceil = 0\} = \mathbb{R}^m\bmod \mathcal{L}$.
Then $\frac{\sum_i (v_i\bmod\mathcal{L})}{n}$ can be seen as an empirical/sample mean.
By things like the Central Limit Theorem, this will be (approximately) distributed as $\mathcal{N}(0, \sigma^2/n)$ for large enough $n$, where $\sigma^2 = \mathsf{Var}[v_i\bmod\mathcal{L}]$.
Therefore, once $n\gg \sigma^2$, one starts to expect significant issues, even in the setting where $\Sigma$ must be obtained exactly.
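To sanity-check the averaging heuristic, here is a small numerical sketch in Python/numpy. Everything in it is an assumption of mine rather than something from your question: it takes $\mathcal{L} = \mathbb{Z}^m$ (so $\lfloor\cdot\rceil$ is coordinate-wise rounding and $\mathcal{V} = [-1/2,1/2)^m$), takes the $v_i$ wide enough that $v_i\bmod\mathcal{L}$ is essentially uniform on $\mathcal{V}$, and takes the samples to be of the form $a_i = \Sigma + (v_i\bmod\mathcal{L})$.

```python
import numpy as np

# A toy instantiation (my assumption, not from the question): L = Z^m,
# so rounding is coordinate-wise, V = [-1/2, 1/2)^m, and each sample
# is of the form a_i = Sigma + (v_i mod L).
rng = np.random.default_rng(0)
m, n = 8, 100_000
Sigma = rng.normal(size=m)                 # hidden quantity to recover

def mod_L(x):
    """x mod L = x - (nearest lattice point), here with L = Z^m."""
    return x - np.round(x)

v = rng.normal(scale=10.0, size=(n, m))    # wide enough that v mod L is ~uniform on V
a = Sigma + mod_L(v)

est = a.mean(axis=0)                       # the averaging "attack"
print("max coordinate error :", np.abs(est - Sigma).max())
print("predicted error scale:", np.sqrt(1 / (12 * n)))   # sigma^2 = 1/12 for Unif[-1/2, 1/2)
```

In this toy setting $\sigma^2 = 1/12$ per coordinate, so for $n = 10^5$ the averaging error is already at the $10^{-3}$ scale, matching the $\mathcal{N}(0, \sigma^2/n)$ heuristic above.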
If one only needs to obtain $\Sigma$ approximately, the problem of course becomes significantly easier.
This is to say that the hardness of your problem seems closely related to the ratio $\sigma^2/n$.
This makes sense --- when this quantity is small because $\sigma^2$ itself is small, $\lfloor x\rceil\approx x$ and each $a_i\approx \Sigma$ already; when it is small because $n$ is large, the averaging above approximately recovers $\Sigma$ instead.
When this quantity is large, neither of these is true.
Note that there are other potential issues as well, namely that the pairwise differences $a_i - a_j = (v_i\bmod\mathcal{L}) - (v_j\bmod\mathcal{L})$ are efficiently computable.
This doesn't seem to cause issues directly, but it is uncomfortably close to doing so.
If one can get many samples of $x\mapsto x\bmod \mathcal{L}$, one can
- use this to extract a description of $\mathcal{V}$, and then
- use that description to construct an oracle for $\lfloor \cdot\rceil$.
I believe this is (roughly) the content of the various "learning a hidden basis" papers attacking things like GGH and NTRUSign.
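To make the first bullet slightly more concrete: if one assumes (again purely for illustration) that $\lfloor\cdot\rceil$ is Babai-style rounding with respect to a secret basis $B$, so that $\mathcal{V} = B\cdot[-1/2,1/2)^m$, then already the second moment of samples of $x\bmod\mathcal{L}$ leaks the Gram matrix $BB^T$. The papers mentioned above go further and recover $B$ itself via higher moments, which I won't reproduce; the sketch below only shows the second-moment step.

```python
import numpy as np

# Sketch of the "second moment leaks the shape of V" step, assuming
# (purely for illustration) that rounding is Babai rounding w.r.t. a
# secret basis B, so that V = B * [-1/2, 1/2)^m.
rng = np.random.default_rng(1)
m, n = 4, 200_000
B = rng.normal(size=(m, m))                # hypothetical secret basis of L

# Samples of x mod L, i.e. points uniform on the fundamental parallelepiped V.
u = rng.uniform(-0.5, 0.5, size=(n, m))
samples = u @ B.T                          # each row is B @ u_i

# For u uniform on [-1/2, 1/2)^m we have Cov(B u) = B B^T / 12,
# so 12 times the empirical covariance estimates the Gram matrix B B^T.
gram_est = 12 * np.cov(samples, rowvar=False)
rel_err = np.linalg.norm(gram_est - B @ B.T) / np.linalg.norm(B @ B.T)
print("relative error in recovered B B^T:", rel_err)
```

Recovering $BB^T$ only pins down $\mathcal{V}$ up to an orthogonal transformation, but (if I recall correctly) this is essentially the first step of those attacks, which then use higher moments to finish.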
Here, we don't precisely get samples of the form $x\mapsto x\bmod \mathcal{L}$.
Instead, we get the weaker samples $(v_i, v_j)\mapsto (v_i\bmod\mathcal{L}) - (v_j\bmod\mathcal{L})$.
Perhaps this thwarts the previously mentioned attacks, but it is not clear to me that it does, and the setting is adjacent to one that is known to be vulnerable.
Concretely, given enough samples (and under some assumptions), I expect an attacker to be able to learn $\mathcal{V} + \mathcal{V}$, where $A+B = \{a+b\mid a\in A, b\in B\}$ (the difference samples live in $\mathcal{V}-\mathcal{V}$, which equals $\mathcal{V}+\mathcal{V}$ when $\mathcal{V}$ is symmetric about the origin).
If somehow they can "divide by two", i.e. go from $2\mathcal{V} := \mathcal{V}+\mathcal{V}$ to $\mathcal{V}$, I expect there to be a straightforward attack on the proposal via constructing an oracle for $\lfloor \cdot \rceil$, i.e. a CVP-type oracle for $\mathcal{L}$.
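For what it's worth, at the level of second moments the difference samples leak the same information as direct samples. Continuing the same toy instantiation (secret basis $B$, $\mathcal{V} = B\cdot[-1/2,1/2)^m$; again my assumption, not anything in your question): the covariance of a difference of two independent uniform points of $\mathcal{V}$ is just twice the covariance of a single point, so the estimator from the previous sketch still recovers $BB^T$ up to a known factor.

```python
import numpy as np

# Same toy instantiation: secret basis B, V = B * [-1/2, 1/2)^m, but now
# the attacker only sees differences (v_i mod L) - (v_j mod L).
rng = np.random.default_rng(2)
m, n = 4, 200_000
B = rng.normal(size=(m, m))

u1 = rng.uniform(-0.5, 0.5, size=(n, m))
u2 = rng.uniform(-0.5, 0.5, size=(n, m))
diffs = (u1 - u2) @ B.T                    # differences of independent points of V

# Cov(difference) = 2 * Cov(single sample) = B B^T / 6, so the same
# second-moment estimator still recovers the Gram matrix B B^T.
gram_est = 6 * np.cov(diffs, rowvar=False)
rel_err = np.linalg.norm(gram_est - B @ B.T) / np.linalg.norm(B @ B.T)
print("relative error in recovered B B^T:", rel_err)
```

This is not an attack by itself, but it suggests the difference samples are not dramatically weaker than direct samples of $x\bmod\mathcal{L}$.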
I won't look into this further myself, but it is a somewhat concerning potential way to attack your proposed problem.