
Random sampling vs incrementing randomness in cryptographic protocols


As an example for my question, I quote the ECDSA signing algorithm (from Wikipedia) for signing a message $m$:

  1. Calculate $e = H ( m )$, and let $z$ be the $L_n$ leftmost bits of $e$, where $L_n$ is the bit length of the group order $n$.
  2. Select a random integer $k \in [ 1 , n − 1 ]$.
  3. Calculate the curve point $( x_1 , y_1 ) = k × G $
  4. Calculate $r = x_1$ mod $n$. If $r = 0$ , go back to step 2.
  5. Calculate $s = k ^{− 1} ( z + r d_A )$ mod $n$. If $s = 0$, go back to step 2.
  6. The signature is the pair $( r , s )$.
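The loop in steps 2–5 can be sketched in Python. This is a toy illustration only: the curve parameters ($y^2 = x^3 + 2x + 2$ over $GF(17)$, generator $(5,1)$ of order $n = 19$) and all function names are chosen here for demonstration, and real implementations use standardized curves and constant-time arithmetic:

```python
import hashlib
import random

# Toy curve y^2 = x^3 + 2x + 2 over GF(17); G = (5, 1) has order n = 19.
p, a = 17, 2
G, n = (5, 1), 19

def ec_add(P, Q):
    """Add two curve points (None represents the point at infinity)."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

def sign(m, d):
    """ECDSA signing: resample a fresh random k whenever r == 0 or s == 0."""
    z = int.from_bytes(hashlib.sha256(m).digest(), 'big') % n   # step 1 (truncated hash)
    while True:
        k = random.randrange(1, n)            # step 2: fresh random k
        x1, _ = ec_mul(k, G)                  # step 3
        r = x1 % n                            # step 4
        if r == 0:
            continue                          # go back to step 2
        s = pow(k, -1, n) * (z + r * d) % n   # step 5
        if s == 0:
            continue                          # go back to step 2
        return r, s                           # step 6

def verify(m, sig, Q):
    """Standard ECDSA verification against the public key Q = d*G."""
    r, s = sig
    if not (1 <= r < n and 1 <= s < n):
        return False
    z = int.from_bytes(hashlib.sha256(m).digest(), 'big') % n
    w = pow(s, -1, n)
    P = ec_add(ec_mul(z * w % n, G), ec_mul(r * w % n, Q))
    return P is not None and P[0] % n == r
```

For example, with private key $d = 7$ and public key `Q = ec_mul(7, G)`, a signature produced by `sign(b"hello", 7)` passes `verify`. Note that the failure branches are just `continue`: the whole iteration reruns with a fresh $k$, which is the behaviour the question is about.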

My question is as follows. When step 4 or 5 fails, i.e. $r=0$ or $s=0$, the algorithm loops back and samples a new random $k$ (though the probability of that is low, since the order $n$ of the group generated by $G$ is very large). But in that case, why does the algorithm need to do a fresh random sampling? Wouldn't it be more efficient to increment $k$ instead? I brought this question up for ECDSA as an example, but it applies to other cryptographic protocols as well; I see resampling of randomness everywhere instead of incrementing. And from a security standpoint, incrementing the randomness should not make a difference.


> And from a security standpoint, incrementing the randomness should not make a difference.

There are two reasons to 'go back' rather than do some special logic.

The first reason is to reduce the amount of hard-to-test special-purpose code. Any way of 'incrementing $k$' would involve code that is run extremely rarely (and for which it would be hard to devise unit tests). Any such rarely-exercised code is a fertile location for undetected coding errors, and so should be avoided if at all possible. In contrast, going back and rerunning essentially the same procedure is considerably less error-prone.

The other reason is that reusing the data that was considered 'unacceptable' can leak information.

As an example, suppose that the adversary is able to detect when such an increment occurs (e.g. by closely monitoring the time taken to generate such a signature), and it does in fact happen because $s=0$. If so, we've just leaked the private key.

Here is how that happens: on the first iteration, we compute $s = k^{-1}( z + rd_A )$ and find that it is 0. So, we use $r' = ((k+1)G)_x$ instead and output that (and go on to compute $s'$, which this attack doesn't use).

What the attacker can do is reconstruct the point $(k+1)G$ from the x-coordinate we just gave him (actually, it's one of two points; that just means he tries both); that then allows him to recompute $kG$, and thus the original $r$.

Now he knows that $k^{-1}(z + rd_A) = 0$. Since $k^{-1} \ne 0$ (inverses are never zero), it must be that $z + rd_A = 0$, and so $d_A = -r^{-1}z$. He knows $z$ (from the message that was signed) and $r$, so that gives him the private key $d_A$.
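To make this concrete, here is a small demonstration on a toy curve ($y^2 = x^3 + 2x + 2$ over $GF(17)$, $G = (5,1)$, $n = 19$; all parameters are chosen purely for illustration). The instance is rigged so that $s = 0$ on the first try; a hypothetical 'incrementing' signer then publishes $r' = ((k+1)G)_x$, and the attacker recovers $d_A$ exactly as described above:

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17); G = (5, 1) has order n = 19.
p, a, b = 17, 2, 2
G, n = (5, 1), 19

def ec_add(P, Q):
    """Add two curve points (None represents the point at infinity)."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P = ec_add(P, P); k >>= 1
    return R

# Rigged instance: with k = 3 we get r = 10, and choosing z = -r*d mod n
# forces s = k^{-1}(z + r*d) = 0 on the first iteration.
d, k = 7, 3
r = ec_mul(k, G)[0] % n                     # r = 10
z = (-r * d) % n                            # z = 6
assert pow(k, -1, n) * (z + r * d) % n == 0  # first iteration fails: s = 0

# A signer that increments k instead of resampling would now publish:
r_prime = ec_mul(k + 1, G)[0] % n  # on this tiny curve, r' equals the x-coordinate

# --- attacker's side: knows r', z, the public key Q, and (via timing)
# --- that the retry was caused by s = 0.
Q = ec_mul(d, G)
recovered = None
for y in range(p):                           # reconstruct candidates for (k+1)G
    if y * y % p == (r_prime ** 3 + a * r_prime + b) % p:
        cand = (r_prime, y)
        kG = ec_add(cand, (G[0], (-G[1]) % p))   # subtract G to get kG
        r_cand = kG[0] % n
        d_cand = (-pow(r_cand, -1, n) * z) % n   # from z + r*d = 0
        if ec_mul(d_cand, G) == Q:               # confirm against the public key
            recovered = d_cand
assert recovered == d   # private key fully recovered
```

Both square roots of $r'$ yield a candidate point; only one candidate produces a $d$ consistent with the public key, which is how the attacker picks between them. (On a real curve, $r'$ is the x-coordinate reduced mod $n$, so the attacker may have a couple more candidates to try; the principle is the same.)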

Now, if we select a completely random $k$ each time, we don't have to worry about nonobvious attacks like this.

Now, in practice, this $s=0$ condition essentially never happens (it happens with the same probability that a random guess at the private key just happens to be correct), so it might seem like we don't need to worry about it. I would still trust the less-error-prone and more-secure method, even if it takes more time (and if it hardly ever happens, the time it takes is usually irrelevant).
