There is no public paper available yet, so this answer is preliminary and based on what was presented in the talk and the follow-up discussion. A full understanding cannot be reached until there is a paper to verify, evaluate, and compare to prior work and known results. However, a good understanding of the situation already seems to be emerging.
The tl;dr is: the special problem the authors address is classically easy to solve using standard lattice algorithms (no quantum needed), as shown in this note. Moreover, the core new quantum step can also be implemented classically, and much more simply and efficiently. So, the work doesn’t show any quantum advantage over what we already knew how to do classically, nor anything new about what we can do classically. Details follow.
The clause “on a class of integer lattices” is a very important qualifier. The BDD problem the authors address is one where the lattice is “$q$-ary” and generated by a single $n$-dimensional mod-$q$ vector (or a small number of them), the modulus $q \gg 2^{\sqrt{n}}$, and the target vector is within a $\ll 2^{-\sqrt{n}}$ factor of the minimum distance of the lattice. This setting is far from anything that has ever been used in lattice cryptography (to my knowledge), so the result would not have any direct effect on proposed lattice systems. Of course the broader question is whether the techniques could lead to stronger results that do affect lattice crypto.
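To make the setting concrete (in standard notation, which may differ from the authors’), the $q$-ary lattice generated by a single mod-$q$ vector $\mathbf{a} \in \mathbb{Z}_q^n$ is
$$\mathcal{L}_q(\mathbf{a}) = \{\mathbf{x} \in \mathbb{Z}^n : \mathbf{x} \equiv z\,\mathbf{a} \pmod{q} \text{ for some } z \in \mathbb{Z}\} = \mathbf{a}\,\mathbb{Z} + q\,\mathbb{Z}^n,$$
and the BDD instance is a target $\mathbf{t} = \mathbf{v} + \mathbf{e}$ for some lattice vector $\mathbf{v} \in \mathcal{L}_q(\mathbf{a})$ and error satisfying $\|\mathbf{e}\| \leq \alpha \cdot \lambda_1(\mathcal{L}_q(\mathbf{a}))$, where here the relative distance is $\alpha \ll 2^{-\sqrt{n}}$ and the modulus is $q \gg 2^{\sqrt{n}}$.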
Based on the description given in the talk, several expert attendees believe it’s very likely that the special lattice problem the authors address is already easily solvable using known classical techniques (no quantum needed). UPDATE: this has turned out to be the case, and is substantiated in this note. In other words, the particular form of the BDD problem makes it easy to solve in known and unsurprising ways. The algorithm is merely the standard sequence of LLL basis reduction followed by Babai nearest-plane decoding, but showing that this actually works relies on some deeper (but previously known) properties of LLL than the ones that are usually invoked.
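For concreteness, here is a minimal NumPy sketch of the Babai nearest-plane step (my own illustration, not code from the talk or the note). In the actual attack the basis would first be LLL-reduced (e.g., with fpylll or Sage) to get the usual decoding guarantee; in the tiny toy instance below the error is so small that even the unreduced basis happens to work.

```python
import numpy as np

def babai_nearest_plane(B, t):
    """Babai's nearest-plane algorithm.

    B: rows form a lattice basis (ideally LLL-reduced first);
    t: the target vector.  Returns a nearby lattice vector.
    """
    B = np.array(B, dtype=float)
    t = np.array(t, dtype=float)
    # Gram-Schmidt orthogonalization of the basis rows.
    Bstar = B.copy()
    for i in range(1, len(B)):
        for j in range(i):
            mu = np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])
            Bstar[i] -= mu * Bstar[j]
    # Walk the basis from last to first, rounding the coefficient
    # of the residual along each Gram-Schmidt direction.
    v = np.zeros_like(t)
    r = t.copy()
    for i in reversed(range(len(B))):
        c = round(np.dot(r, Bstar[i]) / np.dot(Bstar[i], Bstar[i]))
        v += c * B[i]
        r -= c * B[i]
    return v

# Toy q-ary example: the lattice a*Z + q*Z^2, with basis rows a and (0, q).
q = 101
a = np.array([1, 37])
B = np.vstack([a, np.array([0, q])])
t = 5 * a + np.array([0.3, -0.4])   # lattice point 5*a plus a tiny error
print(babai_nearest_plane(B, t))    # recovers [5. 185.], i.e., 5*a
```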
What about the broader question: could the main techniques lead to stronger results that we can’t currently obtain classically? It turns out that what the core quantum step accomplishes, the “worst-case to average-case” transformation, can be done classically (and more simply and efficiently) using a well-known randomization technique: the “LWE self-reduction,” also known as the “($q$-ary) BDD-to-LWE reduction.” See Section 5 and Theorem 5.3 of this paper, and the earlier works cited therein, for details.
More precisely, $n$-dimensional $q$-ary BDD for relative distance $\alpha$ (the problem considered by the authors) classically reduces to LWE with error rate $\alpha \cdot O(\sqrt{n})$. While this reduction is not needed to solve the original BDD problem (which, as noted above, is already easy classically), it shows that the core new quantum step can be replaced by a classical one that performs at least as well, and likely better in terms of parameters. This indicates that the main quantum technique probably does not hold any novel or surprising power in this context.
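To illustrate the flavor of that classical step, here is a toy sketch under my own simplifying assumptions (it is not the precise reduction from the cited paper, which re-randomizes the secret and works with proper discrete Gaussian distributions). Given the coordinates of a $q$-ary BDD instance $\mathbf{b} = z\,\mathbf{a} + \mathbf{e} \bmod q$, one can manufacture LWE-style samples by taking small random integer combinations of the coordinates; each combination inflates the error by roughly a $\sqrt{n}$ factor, which is where the $\alpha \cdot O(\sqrt{n})$ error rate above comes from.

```python
import numpy as np

rng = np.random.default_rng(0)

def bdd_to_lwe_samples(a, b, q, m, sigma=1.0):
    """Toy sketch: from a q-ary BDD instance b = z*a + e (mod q), with
    unknown secret z and small worst-case error e, produce m LWE-style
    samples (a'_j, b'_j) satisfying b'_j = z*a'_j + <x_j, e> (mod q),
    where each x_j is a rounded-Gaussian integer combination of the
    coordinates.  The new error <x_j, e> exceeds the entries of e by
    roughly a sqrt(n) factor.  (The real reduction also re-randomizes
    the secret and controls the error distribution carefully.)
    """
    n = len(a)
    samples = []
    for _ in range(m):
        x = np.rint(rng.normal(0, sigma, n)).astype(int)
        a_new = int(np.dot(x, a)) % q
        b_new = int(np.dot(x, b)) % q
        samples.append((a_new, b_new))
    return samples

# Toy usage on a 16-coordinate instance.
n, q = 16, 2**20
a = rng.integers(0, q, n)
z = int(rng.integers(0, q))        # unknown secret
e = rng.integers(-2, 3, n)         # small worst-case error
b = (z * a + e) % q
print(bdd_to_lwe_samples(a, b, q, m=3))
```

With appropriately chosen parameters (much wider combinations than in this toy), the new sample values $a'_j$ become essentially uniform mod $q$, which is what makes the output an honest average-case LWE instance; the point is simply that nothing quantum is needed for this worst-case to average-case step.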