
Can reinforcement learning speed up modular multiplication?


In *Discovering faster matrix multiplication algorithms with reinforcement learning* (Nature, 2022; lightweight intro), the authors used reinforcement learning (an artificial intelligence technique) to devise a new, slightly faster matrix multiplication algorithm.

Could a similar technique work towards a better multiple-precision modular multiplication algorithm, such as lies at the core of RSA and of ECC over prime fields, for practical parameters (say, a 256-bit to 4096-bit modulus)?


The question is focused on computing $a\times b\bmod n$ (or, as a special case, $a\times a\bmod n$) for arbitrary $a,b\in[0,n)$ with $n$ of $\ell$ bits (possibly restricted to odd $n$), using standard CPU instructions operating on $w$-bit words. I'm looking for a practical competitor to Karatsuba, Toom-Cook, or perhaps Schönhage-Strassen multiplication (but AFAIK the latter is not competitive until much larger $\ell/w$ than used in RSA).
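For reference, here is a minimal, purely didactic Python sketch of the Karatsuba splitting mentioned above (the word size $w$, the recursion threshold, and the use of Python integers are illustrative assumptions; real multiple-precision code works on arrays of $w$-bit limbs):

```python
def karatsuba(a: int, b: int, w: int = 64) -> int:
    """Illustrative Karatsuba multiplication of non-negative integers.

    Splits each operand into high/low halves and uses three recursive
    multiplications instead of four, falling back to the native multiply
    once an operand fits in one machine word.
    """
    if a < (1 << w) or b < (1 << w):          # small enough: use hardware multiply
        return a * b
    m = max(a.bit_length(), b.bit_length()) // 2
    a_hi, a_lo = a >> m, a & ((1 << m) - 1)   # a = a_hi*2^m + a_lo
    b_hi, b_lo = b >> m, b & ((1 << m) - 1)   # b = b_hi*2^m + b_lo
    z2 = karatsuba(a_hi, b_hi, w)             # "evaluate at infinity"
    z0 = karatsuba(a_lo, b_lo, w)             # "evaluate at 0"
    z1 = karatsuba(a_hi + a_lo, b_hi + b_lo, w) - z2 - z0   # middle term, from "evaluate at 1"
    return (z2 << (2 * m)) + (z1 << m) + z0

# A modular multiplication a*b mod n would then reduce the product,
# e.g. karatsuba(a, b) % n, or better via Montgomery or Barrett reduction.
```

The three recursive multiplications instead of four are the whole gain; Toom-Cook generalizes this by splitting into more pieces and evaluating at more points.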

Optimizing modular exponentiation or ECC arithmetic might also be possible, but the question is not about that.

kelalaka: I think the problem is that one needs good insight to find better equations, as Strassen did. Similarly, there are analogous cases with ECC operations where we could expect comparable achievements.
Mark: Is there as nice a theory of modular multiplication as there is of matrix multiplication? In that setting, the problem reduces to finding tensor rank decompositions, IIRC. Has something similar been found for modular multiplication (or polynomial multiplication more generally)?
Maarten Bodewes: Hmm, I don't know how we can square that with side channels and such, at least for private/secret key operations. Surprising results nonetheless, and an interesting question.
Samuel Neves: Pretty much all fast multiplication algorithms can be understood as evaluation-multiplication-interpolation, treating the integers to multiply as polynomials. For example, Karatsuba evaluates at $0, 1, \infty$; another variant of Karatsuba at $0, -1, \infty$. It can get pretty complicated to find the optimal set of points to evaluate, particularly with Toom-Cook; see for example [this](http://marco.bodrato.it/papers/Bodrato2007-OptimalToomCookMultiplicationForBinaryFieldAndIntegers.pdf).
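A worked degree-1 instance of that evaluation-multiplication-interpolation view (notation chosen here purely for illustration): write the operands as linear polynomials in $x = 2^m$,

$$
\begin{aligned}
a &= a_1 x + a_0,\qquad b = b_1 x + b_0,\\
a\,b &= a_1 b_1\, x^2 + (a_1 b_0 + a_0 b_1)\, x + a_0 b_0 .
\end{aligned}
$$

Evaluating at $0$, $1$, $\infty$ gives the three products $a_0 b_0$, $(a_0+a_1)(b_0+b_1)$, $a_1 b_1$, which determine all three coefficients since $a_1 b_0 + a_0 b_1 = (a_0+a_1)(b_0+b_1) - a_0 b_0 - a_1 b_1$. The $0, -1, \infty$ variant uses $(a_0-a_1)(b_0-b_1)$ instead, recovering the middle coefficient as $a_0 b_0 + a_1 b_1 - (a_0-a_1)(b_0-b_1)$.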
fgrieu: @Samuel Neves: if reinforcement learning could uncover better parameters for a Toom-Cook variant, that would be what I'm dreaming about in this Q.