You've stumbled on a particularly thorny/polarizing topic in lattice-based cryptography, namely what is known as the tightness of the worst-case to average-case reduction.
In short, a statement of the form
Any algorithm $A$ against problem $P_A$ that runs in time $T_A$ with advantage $\epsilon_A$ implies an algorithm $B$ against problem $P_B$ that runs in time $T_B$ with advantage $\epsilon_B$.
is a simplistic way to summarize a cryptographic reduction.
Ideally, the reduction is tight, meaning that both:
- $T_A\approx T_B$ (the best is when $T_B = T_A$ plus some small overhead to run the reduction), and
- $\epsilon_A\approx \epsilon_B$.
There are (roughly) two notions of tightness: asymptotic (where, say, $T_B = O(T_A)$) and concrete.
Note that a reduction with running time $T_B = 2^{512}\, T_A$ is asymptotically tight, but not concretely tight.
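To see the difference concretely, here is some illustrative arithmetic (all numbers are made up for the example): suppose a reduction loses a multiplicative factor $L = 2^{512}$ in running time and nothing in advantage, i.e. $T_B = L \cdot T_A$ and $\epsilon_B = \epsilon_A$. Then an attack on $P_A$ running in time $T_A = 2^{100}$ only yields an algorithm for $P_B$ running in time

$$T_B = 2^{512} \cdot 2^{100} = 2^{612},$$

so to rule out $2^{100}$-time attacks on $P_A$ via the reduction, one must believe $P_B$ is hard for *all* $2^{612}$-time algorithms. The guarantee is shifted by $\log_2 L = 512$ bits of security.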
Regev's reduction is decidedly "concretely not tight". It is technically intricate to specify precisely what this means, so I will simply link to the relevant literature:
1. *Another Look at Tightness II*, which first raised the issue of the tightness of the reduction (in Section 6 or 7, if I remember correctly).
2. A master's thesis that claimed to give a tighter reduction (and to parameterize a cryptosystem based on SVP directly, roughly as you are discussing), but:
3. (A subset of) the authors of the first paper put out another paper this month, claiming that the thesis's computations are wrong and that the reduction is still non-tight. There are other interesting claims there as well: Regev's reduction is a BQP algorithm, and they notice that it is non-trivial to run; in particular, Shor's algorithm is (concretely) much easier to execute, which is perhaps not great.
There has also been some discussion on the NIST-PQC google group that may be of interest.
Roughly, Dan Bernstein finds the tightness gap to be problematic.
Chris Peikert has sketched a distinction between "advantage tightness" ($\epsilon_A\approx \epsilon_B$) and "running time tightness" ($T_A\approx T_B$).
While it sounds like one could attempt to formalize this distinction, I am unaware of any writing on this matter beyond the linked email thread.
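To illustrate the distinction in the notation above (my own gloss, not something formalized in the thread): a common single-number measure of attack cost is the work factor $T/\epsilon$. A reduction with $T_B \approx T_A$ but $\epsilon_B \approx \epsilon_A / 2^{128}$ has running-time tightness, yet under this measure it still loses

$$\log_2\left(\frac{T_B/\epsilon_B}{T_A/\epsilon_A}\right) \approx 128$$

bits of security, so advantage tightness matters just as much as running-time tightness.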
This is all to say that many authors have suggested that the concrete implications of the worst-case to average-case reduction from SVP do not lead to meaningful notions of security, unless one chooses incredibly large parameters that are not used in practice.
The "smallest parameters" people have managed to get with this strategy are still large (see the master's thesis), and recently some authors have claimed that even these "large, but only by a factor of 2x to 3x" parameters are incorrect.
While all of these works contain descriptions of the SVP-to-LWE reduction, I'm unsure whether any of them will be particularly useful from an expository perspective.
Still, your proposal makes the most sense under the assumption that the reduction is tight. Unfortunately, this is not the case (assuming the correctness of the first and third papers above, which I have not carefully read), so your proposal may be more difficult than you expect.
Note that the existence of a tightness gap does not mean that LWE is insecure, just that (concretely) basing the security of LWE on SVP via this reduction is not possible.
Most proponents of lattice-based cryptography have therefore proceeded as follows:
- Use the existence of the worst-case to average-case reduction to argue qualitatively that the "LWE distribution" is not "structurally flawed". As an example of a "structurally flawed" distribution for a problem, RSA is known to be easy if $N = pq$ is such that $|p-q|$ is small, via Fermat's factoring algorithm (sketched below, after this list). This should never happen if keys are properly generated, but the prior link found some improperly generated keys.
- Concretely set parameters based on direct cryptanalysis of LWE, say using the LWE estimator.
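To make the "structurally flawed" example concrete, here is a minimal sketch of Fermat's factoring method in plain Python (the toy primes at the bottom are invented for illustration):

```python
from math import isqrt

def fermat_factor(N: int):
    """Factor N = p*q by writing N = a^2 - b^2 = (a - b)(a + b).

    The search starts at a = ceil(sqrt(N)) and walks upward, so it
    terminates quickly precisely when |p - q| is small.
    """
    a = isqrt(N)
    if a * a < N:
        a += 1
    while True:
        b2 = a * a - N
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b  # p, q
        a += 1

# Toy example: p and q are close together, so the loop above
# succeeds on its very first iteration.
p, q = 1_000_003, 1_000_033
print(fermat_factor(p * q))  # -> (1000003, 1000033)
```

Against a properly generated modulus, where $|p-q|$ is astronomically large, this loop would never terminate in practice; the point of the worst-case to average-case reduction is to argue that the LWE distribution avoids analogous structural weaknesses.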
If your main goal is simply to attack an LWE instance, I would suggest reading the LWE estimator's documentation.
Roughly, the estimator is used to parameterize LWE instances by estimating the (concrete) cost of various known attacks against LWE.
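For a feel of how this works, here is a sketch of invoking the lattice-estimator (https://github.com/malb/lattice-estimator). The parameter values are invented for illustration, and the API names (`LWE.Parameters`, `ND.DiscreteGaussian`, `LWE.estimate.rough`) are my recollection of its interface, so check them against the current documentation:

```python
# Sketch of costing an LWE instance with the lattice-estimator
# (github.com/malb/lattice-estimator). Assumes the estimator package
# is importable, e.g. from within Sage; verify API names against the
# current docs before relying on this.
from estimator import LWE, ND

# A hypothetical LWE instance: dimension n = 1024, modulus q = 2^26,
# discrete Gaussian secret and error of standard deviation 3.19
# (all values illustrative, not a recommendation).
params = LWE.Parameters(
    n=1024,
    q=2**26,
    Xs=ND.DiscreteGaussian(3.19),
    Xe=ND.DiscreteGaussian(3.19),
)

# Produce rough cost estimates for the main known attacks
# (primal uSVP, dual, etc.), printed as a table of attack costs.
LWE.estimate.rough(params)
```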
It probably makes the most sense for you to look into these known attacks.
There are a variety of good resources on known attacks against LWE, though the LWE estimator documentation is probably the most up-to-date.