Score:2

Why can't RSA signatures be forged algebraically?

Compute $n = pq$ where $p$ and $q$ are prime. Fix $e$ to be coprime to $\phi(n)$. Compute $d = e^{-1} \pmod n$ and verify $ed \equiv \phi(n) \pmod n$. We sign the hash $h$ of a message with $s = h^{d}$. A verifier checks that $s^e \equiv h \pmod n$. Why can't an attacker fix $h'$ and solve ${s'}^e \equiv h' \pmod n$ to forge a signature $s'$ for a given $(n, e)$? What assumption tells us this computation is hard? If $e$ were unknown, this would obviously be the discrete logarithm problem, but in this case, $s'$ is the unknown.
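
For concreteness, here is a minimal toy sketch of the textbook scheme in Python (tiny illustrative primes, no padding, and using the usual $d = e^{-1} \bmod \phi(n)$; see the first comment below on the key-generation details):

```python
# Toy textbook RSA signing/verification; parameters are far too small for real use.
p, q = 61, 53                 # distinct primes
n = p * q                     # modulus, 3233
phi = (p - 1) * (q - 1)       # phi(n) = 3120
e = 17                        # public exponent, coprime to phi(n)
d = pow(e, -1, phi)           # private exponent: e*d = 1 (mod phi(n)); Python 3.8+

h = 1234                      # stand-in for the hash of a message, 0 < h < n
s = pow(h, d, n)              # sign: s = h^d mod n
assert pow(s, e, n) == h      # verify: s^e = h (mod n)
```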

This question, "1024 RSA Hash Signature attack. Forge a valid signature", notes that $h'$ has a "tiny probability of being a perfect cube" (assuming $e=3$). If this is the underlying answer to the question above, why is a random element of $\mathbb{Z}/n\mathbb{Z}$ (for an $n$ of sufficiently many bits) unlikely to be a perfect power?

fgrieu
$p$ and $q$ need to be _distinct_ primes for RSA to work over the full $[0,n)$. Exponent $e$ needs to be coprime to both $p-1$ and $q-1$, or equivalently to $ϕ(n)$. The condition $d=e^{-1}\pmod n$ (no matter if it's considered to imply that $0\le d<n$ or not, which the notation does not allow one to discern) is contradictory with $ed \equiv \phi(n) \pmod n$, and neither is relevant to RSA. Both can be and often are replaced with $d=e^{-1}\bmod ϕ(n)$, or (rather) $d=e^{-1}\bmod\operatorname{lcm}(p-1,q-1)$, which yields the smallest positive valid $d$ for given valid $p,q,e$.
Score:2

Solving $s'^e = h' \mod n$ for $s'$ means calculating the $e$th root of $h'$ in $\mathbb{Z}/n\mathbb{Z}$. There is no efficient generic method to do that.

The holder of the private key solves this equation by computing $s' = h'^d$. But for the attacker, this only reduces the problem to finding $d$, and that's the private key, which we assume the attacker doesn't have.

If you know how to factor $n$ to recover the primes $p$ and $q$, then you can calculate the $e$th root modulo $p$ and modulo $q$ and combine the two results with the Chinese remainder theorem. But we believe that factoring a product of two large primes is infeasible.
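
A minimal sketch of that reduction, assuming the factors $p, q$ are known and $\gcd(e, p-1) = \gcd(e, q-1) = 1$ as in RSA key generation (`eth_root_with_factors` is just an illustrative name, toy parameters):

```python
# Taking an e-th root mod n = p*q when the factorization is known:
# take the root mod p and mod q, then combine with the Chinese remainder theorem.
def eth_root_with_factors(h, e, p, q):
    n = p * q
    rp = pow(h, pow(e, -1, p - 1), p)    # e-th root of h mod p
    rq = pow(h, pow(e, -1, q - 1), q)    # e-th root of h mod q
    t = ((rp - rq) * pow(q, -1, p)) % p  # CRT lifting coefficient
    r = rq + q * t                       # r = rp (mod p) and r = rq (mod q)
    assert pow(r, e, n) == h % n
    return r

print(eth_root_with_factors(1234, 17, 61, 53))   # matches pow(1234, d, 3233) above
```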

Knowing how to factor an RSA modulus $n$ is equivalent to finding the private exponent. From $p$ and $q$, you can obviously find $d$: that's how the private key is generated. Conversely, given $n$, $e$ and $d$, there is an efficient procedure to factor $n$. So factoring the modulus is equivalent to finding the “magic number” needed to calculate $e$th roots.
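
A sketch of one standard way to do that, under the usual assumptions (`factor_from_ed` is an illustrative name; toy parameters from the sketches above): write $ed - 1 = 2^s t$ with $t$ odd and look for a nontrivial square root of $1$ modulo $n$, which reveals a factor.

```python
# Factor n given (e, d): each random base g succeeds with probability about 1/2,
# because e*d - 1 is a multiple of lcm(p-1, q-1), so g^(e*d - 1) = 1 (mod n).
import math, random

def factor_from_ed(n, e, d):
    t, s = e * d - 1, 0
    while t % 2 == 0:                       # write e*d - 1 = 2^s * t with t odd
        t //= 2
        s += 1
    while True:
        g = random.randrange(2, n - 1)
        if math.gcd(g, n) != 1:             # lucky hit: g already shares a factor
            p = math.gcd(g, n)
            return p, n // p
        x = pow(g, t, n)
        for _ in range(s):
            y = pow(x, 2, n)
            if y == 1 and x not in (1, n - 1):
                p = math.gcd(x - 1, n)      # nontrivial square root of 1 found
                return p, n // p
            x = y

print(factor_from_ed(3233, 17, 2753))       # recovers (61, 53) for the toy key
```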

There are special cases where it's easy to find an $e$th root without knowing the private key. One such case is when $h'$ has an $e$th root in the usual integers, because that root is also a root modulo $n$. But only $\lfloor\sqrt[e]{n}\rfloor$ integers between $1$ and $n$ have an integer $e$th root, so a “random” integer in that range usually doesn't. Still, avoiding this small risk is one of the reasons why RSA signature and encryption methods, if they are based on applying the RSA operation to a value derived from the message, use padding to make sure that the input to the exponentiation is larger than $\lceil\sqrt[e]{n}\rceil$. (In practice, they all arrange to have an input that's larger than $n/256$.)
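
For a sense of scale: with $e = 3$ and a 1024-bit $n$, only about $2^{341}$ of the roughly $2^{1024}$ residues are perfect cubes, a fraction of about $2^{-683}$. A small sketch of the check (the `icbrt` helper is just an illustrative integer cube root):

```python
# A uniformly random 1024-bit value is essentially never a perfect cube.
import secrets

def icbrt(x):
    """Integer cube root by binary search (floats lack the needed precision)."""
    lo, hi = 0, 1 << (x.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

h = secrets.randbits(1024)
print(icbrt(h) ** 3 == h)   # True only if h is a perfect cube: overwhelmingly unlikely
```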

Another case where it's easy to find an $e$th root is if you already know some $e$th roots. For example, if you already know that $s_1^e = h_1 \mod n$ and $s_2^e = h_2 \mod n$ then you can forge $(s_1 \cdot s_2)^e = (h_1 \cdot h_2) \mod n$, i.e. a signature for the RSA input $h_1 \cdot h_2$. Once again, padding arranges for $h_1 \cdot h_2$ to not be a valid padded hash, so that what the adversary can forge is never a valid signature. (For encryption: so what the adversary can decrypt is not a message that will ever be sent.)
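
A toy demonstration of that multiplicative property on textbook (unpadded) RSA, reusing small illustrative parameters:

```python
# Given two honest signatures, their product is a valid signature on the product
# of the two signed values -- without ever touching the private exponent d.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))            # only used to make the honest signatures

h1, h2 = 111, 222
s1, s2 = pow(h1, d, n), pow(h2, d, n)        # signatures obtained legitimately
s_forged = (s1 * s2) % n
assert pow(s_forged, e, n) == (h1 * h2) % n  # verifies for h1*h2, with no key needed
```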

fgrieu
_«factoring the modulus is equivalent to finding the “magic number” needed to calculate $e^\text{th}$ roots»_ has issues: there are _several_ magic numbers $d$ that work equally well to calculate $e^\text{th}$ roots (if $d$ is one, then $d'=d+ϕ(n)/2\bmodϕ(n)$ is another); so much for _the_ magic number. Plus, no magic number is _needed_ to compute $e^\text{th}$ roots: if efficiency is not an issue, alternate methods include repeatedly raising to the $e^\text{th}$ power modulo $n$ until cycling. And we have no proof that a much faster general method does not exist.
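
A toy sketch of the cycling method mentioned here, for illustration only (it works because $x \mapsto x^e$ is a permutation of $\mathbb{Z}/n\mathbb{Z}$ when $e$ is coprime to $\operatorname{lcm}(p-1,q-1)$, but it is exponentially slow in general; `eth_root_by_cycling` is just an illustrative name):

```python
# Iterate x -> x^e mod n: the sequence h, h^e, h^(e^2), ... eventually returns
# to h, and the value seen just before h is an e-th root of h.
def eth_root_by_cycling(h, e, n):
    prev, cur = h, pow(h, e, n)
    while cur != h:
        prev, cur = cur, pow(cur, e, n)
    return prev

n, e = 61 * 53, 17
s = eth_root_by_cycling(1234, e, n)
assert pow(s, e, n) == 1234
```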
Jeffrey
This is the crux of what I'm getting at. Under what conditions can we say computing $e^{th}$ roots is hard, putting aside $d$ for a second? It isn't obvious that it should be, or that it is reducible to known hard problems like factorization or discrete logarithms. A quick literature search shows a lot of material about finding polynomial roots in finite fields. I can't easily find literature that says computing $e^{th}$ roots is hard.
Gilles 'SO- stop being evil'
@Jeffrey As far as I know, it's an open question whether there is a general method that's more efficient than finding $d$ (or as fgrieu notes, a $d$ equivalent). An informal argument is that the function that calculates $e$th roots in $\mathbb{Z}/n\mathbb{Z}$ is of the form $x \mapsto x^d$ for some $d$, or at least (since there are multiple $e$th roots) one such function has this form. If you knew how to compute this function efficiently in the general case, then surely you'd be able to find the $d$. But this is just an informal argument.