The claimed attack does not "break" lattice-based cryptography; it merely further improves known attacks.
I'll try to briefly describe the situation.
Asymptotics:
Asymptotically, our best indication is that LWE takes time $2^{cn}$ to solve, where $n$ is the LWE dimension and $c$ is some constant.
This is to be contrasted with RSA at roughly $2^{(\log N)^{1/3}}$ (this is very imprecise, and should properly use L-notation), and elliptic curve DLog at $2^{\frac{1}{2}\log_2|\mathbb{G}|} = \sqrt{|\mathbb{G}|}$.
This is to say that, even with the improvements to attacks against LWE variants, we believe it to have "better scaling" than assumptions like RSA (which you specifically mention), although similar to ECDlog.
Current estimates of $c$ vary, but it is $< 1/2$ (I believe $\approx 0.28$ is current, but I haven't checked), so ECDlog is the best on this metric, then LWE, and then (far in the distance) RSA/finite field DH.
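To make this scaling concrete, here is a quick back-of-the-envelope sketch in Python that plugs typical parameter sizes into the three cost formulas above. The constants (the GNFS $(64/9)^{1/3}$, and $c \approx 0.28$ for LWE) are the usual rough values, and all $o(1)$ terms and polynomial factors are dropped, so treat the outputs as orders of magnitude only, not as real security estimates.

```python
from math import log

def rsa_cost_bits(modulus_bits):
    # GNFS heuristic cost L_N[1/3, (64/9)^(1/3)], ignoring o(1) terms.
    ln_n = modulus_bits * log(2)
    return (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3) / log(2)

def ecdlog_cost_bits(group_bits):
    # Pollard rho: ~sqrt(|G|) group operations.
    return group_bits / 2

def lwe_cost_bits(n, c=0.28):
    # 2^(c*n), using the rough c ~ 0.28 mentioned above (assumed, not exact).
    return c * n

print(f"RSA-3072:         ~2^{rsa_cost_bits(3072):.0f}")
print(f"256-bit EC group: ~2^{ecdlog_cost_bits(256):.0f}")
print(f"LWE, n = 700:     ~2^{lwe_cost_bits(700):.0f}")
```

The takeaway is the shape of the curves: doubling the EC group size or the LWE dimension roughly doubles the exponent, while doubling the RSA modulus size only multiplies it by roughly $2^{1/3}$ (up to the $\log\log$ factor).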
It's also not clear how useful this notion is practically --- it says that some parameter set may be secure, but doesn't help pick which one.
Concretely:
Even if LWE does take $2^{cn}$ time to solve, this doesn't help us if we don't know $c$.
Improved attacks have (theoretically) lowered $c$ over time.
This can lead to various parameter sets being "broken" (not in the sense that we can mount practical attacks against them, but in the sense that they no longer meet NIST's minimum requirements for a claimed "security level").
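As a toy illustration of what "broken" means in this sense (all numbers below are hypothetical, and 128/192/256 bits are crude stand-ins for NIST security levels 1/3/5), a modest improvement in $c$ can push a fixed parameter set below a claimed level without coming anywhere near a practical attack:

```python
def nist_level(security_bits):
    # 128/192/256 bits as crude stand-ins for NIST levels 1/3/5.
    for level, threshold in [(5, 256), (3, 192), (1, 128)]:
        if security_bits >= threshold:
            return level
    return 0  # below level 1

n = 700                 # a dimension in the range practical schemes use
for c in (0.30, 0.27):  # hypothetical "before" and "after" attack constants
    bits = c * n
    print(f"c = {c:.2f}: ~{bits:.0f} bits -> NIST level {nist_level(bits)}")
```

Here the same $n = 700$ drops from level 3 to level 1 purely because the attack constant improved.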
This situation is arguably better than the situation that factoring went through with the development of "index calculus" techniques.
For ECDlog, the best attacks are still (variants of) the Pollard Rho algorithm, and it has been this way for a long time.
This is all to say that ECDlog concretely has the best-understood security, although I would say that LWE still beats out RSA/other things vulnerable to index calculus techniques.
While the NFS was created ~30 years ago, it is still plausible that it will be improved further.
For example, (small characteristic) finite field DLog went from roughly the same complexity as RSA, to (roughly) $2^{(\log p^n)^{1/4}}$, and then to quasipolynomial time (cf. Nadia Heninger, via Matthew Green).
This doesn't mean something similar will happen to factoring (it hasn't for ~30 years), but it is perhaps still an uncomfortable possibility.
Practically:
The final way to understand security of a hardness assumption is to solve really big instances of it.
This leads to two questions:
- what is the largest set of parameters that have been broken in a real computation?
- how much time was invested in this computation?
For RSA, things like RSA-240 (a 795-bit semiprime) have been broken in roughly 1000 core-years (or 6 wall-clock months, iirc).
In this sense RSA (and finite-field DH for non-small characteristic) is our best tested (and perhaps best understood) hardness assumption.
Elliptic curves have also had long computations.
The records page lists a number of breaks for groups of size $2^{110}$ to $2^{120}$, with wall-clock times again up to 6 months on the high end.
On this metric, LWE really does lag behind.
There are some well-known "lattice challenges" (à la the RSA challenges), for example the TU Darmstadt challenges.
These are for plain LWE (Peikert has made some for RLWE, but I don't see a public leaderboard for them).
Anyway, for the plain LWE challenges, the highest dimension anyone has ever solved is 85, while dimensions of 500+ (700+ is more common) are used in practical schemes.
All the records I looked at used less than 200 wall-clock hours (usually on a handful of processors), so the computations are really much smaller scale than RSA/ECDlog computations.
This is to say that "big attacks" against LWE are at least an order of magnitude smaller than similar attacks against RSA/ECDlog, although this likely massively undersells the size of modern academic attacks against RSA.
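For a rough sense of the scale gap, here is the arithmetic (the "8 cores" is my guess standing in for "a handful of processors"):

```python
# Crude scale comparison using the figures quoted above.
rsa240_core_hours = 1000 * 365 * 24  # ~1000 core-years for RSA-240
lwe_core_hours = 200 * 8             # ~200 wall-hours on ~8 cores (a guess)
print(f"RSA-240 computation: ~{rsa240_core_hours:.1e} core-hours")
print(f"LWE record:          ~{lwe_core_hours:.1e} core-hours")
print(f"ratio:               ~{rsa240_core_hours / lwe_core_hours:.0f}x")
```

Under those assumptions the gap in invested compute is really several orders of magnitude, consistent with the "at least" above.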
So the biggest instances of LWE that have been broken are small (which is a good security indication), but LWE also hasn't had "truly big attacks" mounted against it yet (which is a bad one).
The above ignores many subtleties about hardness assumptions, such as
- leakage resilience,
- ease of implementation,
- ease of side-channel mitigation.
LWE actually does pretty well on all of these metrics (as long as you aren't doing signatures); definitely better than RSA, though I can't personally speak for ECDlog.
That all being said, I personally view LWE as
- a massive improvement over RSA/(finite field)DLog, and
- perhaps worse than ECDlog.
Definitely if quantum computers weren't a risk, there would be no reason to move away from ECDlog (except for specialty applications like FHE).
But they are, so we must.
From this perspective, comparisons shouldn't be between LWE and RSA/ECDlog, as the world in which those comparisons make sense is unfortunately rapidly leaving us.
Instead, comparisons should be to other post-quantum assumptions.
But among post-quantum assumptions, LWE is really toward the top.
There are assumptions that have better security (such as McEliece), but their efficiency is too poor for them to be usable except in special cases.
The main other feasible assumptions are
- NTRU, which is also lattice-based, and therefore not really "independent" of attacks on LWE, and
- SIDH.
SIDH itself is still an order of magnitude slower than lattice-based constructions iirc (although it is more compact).
There are some reasons to argue its security has "stood the test of time better" than lattice-based assumptions (cf. The Case for SIKE), but it's also only ~10 years old iirc, so it is itself still a new assumption (and was even more so in 2016, when NIST started its selection process).
It also is rather mathematically complex, which may make implementations difficult (but I haven't tried).
This is somewhat long, and it ignores many technical topics, such as the hardness of "small moduli" LWR, or claims about the impact of Galois group sizes on RLWE hardness, which have been unresolved for years. These are the topics in which "big breaks" seem most likely to be possible, but nothing has happened yet. Still, I hope this begins to answer your question at least.