Score:1

Why is implementation relevant to timing attacks?


Discussions from highly respected sources (details below) emphasize that the implementation of cryptographic software matters to the effective security it provides, one particular case being sensitivity to timing attacks.


Clarification - Context is cryptographic signing of non-secret messages

@poncho's initial answer notes that attackers don't always have the luxury of determining the "user's" implementation.

In fact, I'm interested in this question precisely in the very real situation where the attacker is the user, and has the ability to choose their own implementation - in cryptographic signing of non-secret messages. My citation below is about NaCl, which includes an implementation of ed25519. In the use cases I'm working on, you can assume that the "attacker" / user has the plaintext message, the signature, and the public key. All three are necessary for the signature to even be useful. The only thing they lack is the private key.


Is this not irrelevant? Logically, a cryptographic algorithm is inherently susceptible to timing attacks if the algorithm itself can identify a validation failure in fewer operations than are required to confirm a validation success. The more distinct points at which validation failure can be mathematically determined, the more susceptible the algorithm is.

E.g.

  • If an algorithm cannot mathematically determine the validity of a set of inputs until the final step, then it is inherently immune to timing attacks. It is my understanding that all NIST-approved SHA algorithms as of FIPS 180-4 are in this group.
  • If an algorithm can mathematically identify invalid input at every incremental byte/word/block of a stream it processes, it is extremely sensitive to timing attacks (see the comparison sketch after this list).
  • In the worst case, the minimum number of operations required to determine success or failure depends on each incremental bit of a secret key.
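
To make the second bullet concrete, here is a minimal sketch (Python of my own, not from any particular library) contrasting an early-exit check with one that cannot reach a verdict until the final step:

# Early-exit comparison: the running time reveals the position of the
# first mismatching byte, so a forgery can be guessed byte by byte.
def compare_early_exit(expected: bytes, received: bytes) -> bool:
    if len(expected) != len(received):
        return False
    for a, b in zip(expected, received):
        if a != b:
            return False  # verdict reached as soon as a mismatch appears
    return True

# Constant-time comparison: every byte is always examined, and the
# verdict is computed only at the end, so timing is independent of
# where (or whether) a mismatch occurs.
def compare_constant_time(expected: bytes, received: bytes) -> bool:
    if len(expected) != len(received):
        return False
    diff = 0
    for a, b in zip(expected, received):
        diff |= a ^ b  # accumulate differences without branching
    return diff == 0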

The ability of any given implementation to avoid or ignore the algorithm's underlying susceptibility to a timing attack is logically irrelevant, because a rational attacker will simply choose to use an implementation that is intentionally sensitive to timing attacks. Even if most attackers lack the skill or motivation to write their own implementation, it only takes one to create and then distribute an implementation that accentuates timing attacks.

Likewise, if an algorithm is inherently immune to timing attacks (e.g. it cannot determine validity or invalidity until the final step), then it is by definition not possible to create an implementation that is susceptible to timing attacks, whether on purpose or by accident.

An author can be proud of any number of qualities of their implementation of an algorithm, but it does not seem that "resistance to timing attacks" is a logically meaningful quality.

Am I missing something here?

Reference: Daniel Bernstein / NaCl

The features section of the docs for Daniel Bernstein's original NaCl library, the reference implementation of the Ed25519 signature algorithm that he and his co-authors invented, makes multiple claims about qualities of the implementation that make it resistant to timing attacks (emphasis mine):

No data-dependent branches

... The literature has many examples of successful timing attacks that extracted secret keys from these parts of the CPU. NaCl systematically avoids all data flow from secret information to the instruction pointer and the branch predictor.

No data-dependent array indices

... The literature has several examples of successful cache-timing attacks that used secret information leaked through addresses. NaCl systematically avoids all data flow from secret information to the addresses used in load instructions and store instructions.
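
A common way to get the second property, sketched here in Python purely for illustration (NaCl itself is written in C and assembly, and Python offers no real timing guarantees), is to read every table entry and mask out all but the secretly indexed one, so the sequence of addresses loaded never depends on the secret:

def ct_table_lookup(table, secret_index):
    """Select table[secret_index] while touching every entry, so the
    memory addresses accessed are independent of the secret index."""
    result = 0
    for i, entry in enumerate(table):
        mask = -(1 if i == secret_index else 0)  # all ones or all zeros
        result |= entry & mask
    return result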

fgrieu:
An attack model where the attacker can choose the implementation is not realistic. It leaves too many avenues for the attacker to obtain the key.
Score:3

This should be a (long) comment, but I do not have space. It is meant to explain why the idea of letting the attacker choose the underlying implementation is too strong --- it "trivially" breaks any primitive.


For any cryptographic primitive, if an attacker wants to extract a key $k\in \{0,1\}^n$ for some $n$, and can:

  1. Feed arbitrary messages (at least the messages in $\{0,\dots,n-1\}$) into whatever primitive is under consideration,

  2. Arbitrarily modify the implementation of the algorithm (but not the input-output behavior of the mathematical function the algorithm implements)

  3. Measure timing with a method of any quality at all,

it is quite simple to modify the implementation of the algorithm to entirely leak the secret key in $n$ queries. If $\mathcal{O}(k, m)$ is the "old" primitive, define a "new" primitive via:

O'(k, m):
    # n, T, wait(), and the original primitive O are as in the surrounding text
    if m in {0, ..., n-1} and k[m] == 1:
        wait(T)    # stall long enough to measure iff key bit m is 1
    return O(k, m)

Here, $T$ is some unspecified constant that is "large enough" that the attacker can reliably distinguish the algorithm taking $\ll T$ time from it taking $\approx T$ time to execute. $\mathcal{O}'$ clearly has precisely the same input-output behavior as $\mathcal{O}$.
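
To make the $n$-query extraction explicit, here is a sketch (Python; the oracle is assumed to be queried with the key already bound inside it, and `threshold` is a hypothetical cutoff between $\ll T$ and $\approx T$):

import time

def extract_key(oracle, n, threshold):
    """Recover all n key bits with one timed query per bit index."""
    bits = []
    for m in range(n):
        start = time.perf_counter()
        oracle(m)                       # runs O'(k, m) for the hidden k
        elapsed = time.perf_counter() - start
        bits.append(1 if elapsed >= threshold else 0)
    return bits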


The above example should show that letting the attacker choose the implementation is a massive security issue, even if one restricts the implementation to have exactly the same input/output behavior as the desired function. As a result, being "mathematically susceptible to timing attacks" is not well-defined, as any algorithm that relies on "secret" data is susceptible to the above attack.

This is not really an issue though, as letting the attacker pick the algorithm that is used is not seen as a large issue in practice (the only time it may really occur is in "backdooring standards committee" type attacks, say what happened with Dual_EC_DRBG, but timing backdoors seem worse than "I know a secret key" backdoors, as it may be easier for others to observe the timing backdoor).

While being "susceptible to timing attacks" is not well-defined, there are primitives that are harder to implement in a constant-time way. This generally means one of two things:

  1. The overhead in transforming from non-constant time to constant-time is large
  2. It is easy to make mistakes in the transformation from non-constant time to constant-time.

These are not formal properties of problems, though; instead they are empirical observations of practitioners. The first might be formalizable (specifically, as a large gap between the minimum-sized circuit computing something and the minimum-sized program in the RAM model computing it), but I don't normally see people try to do this (it does not seem like it would be interesting to implementers).

Score:1 (poncho's answer)

Is this not irrelevant? Logically, a cryptographic algorithm is inherently susceptible to timing attacks if the algorithm itself is capable of identifying a validation failure in a fewer number of operations than is required to confirm a validation success.

The implementation is quite relevant; any algorithm that completes in bounded time can be implemented in a constant-time/constant-memory-access manner; hence any such algorithm could be implemented safely against these side channel attacks. Of course, some algorithms make such an implementation easy, and others make it quite costly.
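
The standard building block behind such implementations is branch-free selection; a minimal sketch in Python (illustrative only, since Python gives no genuine timing guarantees):

def ct_select(bit, a, b):
    """Return a if bit == 1 else b, with no data-dependent branch."""
    mask = -bit                    # bit must be 0 or 1: mask is -1 or 0
    return (a & mask) | (b & ~mask)

Chaining selections like this lets an implementation compute both sides of every secret-dependent decision and keep only one result.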

If an algorithm cannot mathematically determine the validity of a set of inputs until the final step, then it is inherently immune to timing attacks.

Actually, the implementation may still take time that depends on the secret key or on a secret intermediate value.

If an algorithm can mathematically identify invalid input at every incremental byte/word/block of some stream processed, it is extremely sensitive to timing attacks.

No; the implementation may decide to abort early, or it may set an internal 'failed' flag and continue on (and at the end, notice that the failed flag is set and handle the failure at that point); and yes, the setting of the failed flag can be done in constant time. Using a failed flag that is checked at the end is quite common in constant-time implementations.
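
A sketch of that failed-flag pattern (Python; `check_block` stands in for whatever per-block validity test the algorithm defines):

def verify_stream(blocks, check_block):
    failed = 0
    for block in blocks:
        # record failures without aborting or branching on the outcome
        failed |= 1 - int(check_block(block))
    # act on the accumulated flag only after every block is processed
    return failed == 0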

a rational attacker will simply choose to use an implementation that is intentionally sensitive to timing attacks.

If the attacker is attacking a user's system (which has access to the secret value), he cannot ask the user to change to a more convenient (for the attacker) implementation; he has to use what the user has.

if an algorithm is inherently immune to timing attacks (e.g. it cannot determine validity or invalidity until the final step), then it is by definition not possible to create an implementation that is susceptible to timing attacks, whether on purpose or on accident.

No, this is not true; you could easily introduce timing variations that have nothing to do with validity or invalidity.

If the attacker could specify the implementation the user has, he could (say) specify one that takes an extra second if bit 0 of the secret value was a 1 - that is easily detectable. And, by specifying $n$ different implementations, he could read off all $n$ bits of the secret value - hence, in this attack model, there is no security possible (which means that users probably don't want to blindly trust implementations provided by the attacker).

Of course, if the attacker is working on his own system, he can run whatever he wants; but since his system doesn't have the secret value, any side channels in that implementation don't tell him anything he doesn't already know.

JoshuaHonig (the asker):
These are good points for situations where computations are executed on a machine outside an attacker's control. I've added a clarification to my question: I'm most interested in cryptographic signing of non-secret messages. In that case the "attacker" is the user, they have complete control of the implementation, and the only thing they lack is the private key. The same is true for any PKI system, such as the SSL certs deployed on the public internet.
poncho:
@JoshuaHonig: if the attacker can run an arbitrary implementation that has access to the private key, why can't they run an 'implementation' that reads the private key, and outputs that as the 'signature'? That gives them immediate access to the private key without having to bother with measuring any side channel timings...
poncho:
@JoshuaHonig: as for TLS certs on the internet, only the signer (typically the server) has access to the private key; the verifier (the client) has access only to the public key (and so there's no secret values to leak)
JoshuaHonig:
As with any SSL cert, the attacker doesn't have the private key. They want to obtain the key using a timing attack: Try various private key candidates and see if they produce the correct signature. If the signing algorithm is sensitive to timing attacks, then the attacker can use an implementation to discover the private key, thus fundamentally breaking PKI, as now they can produce their own fake certs. If the signing algorithms are not susceptible to timing attacks, then implementation is irrelevant, no? (this is my original question)
poncho:
"If the signing algorithm is sensitive to timing attacks"; again, all algorithms (with a secret value) have implementations that are sensitive to timing attacks. So, if you attack is 'hack into the TLS server and replace the signature implementation with your own', yes, you've broken security (actually, there are probably easier ways if you can hack in); is that your point?
JoshuaHonig:
To be specific, if `secp256k1` is mathematically susceptible to timing attacks, then I can take my sweet time to calculate the private key of a high-value BTC wallet and steal its balance. If either `rsaEncryption` or `secp384r1` is mathematically susceptible to timing attacks, then I can calculate the private key of a widely-trusted root CA and implement man-in-the-middle attacks. These are all possible without hacking anything, that's the point. Further, the private keys in these cases are extremely long-lived.

