
Confusion about the meaning of reduction


I'm learning provable security, and I'm a bit confused with the concept of reduction.

So, here's my understanding:

To prove that a protocol/scheme/generic construction achieves some level of security, there are three components: the scheme itself ---> a security proof ---> the security property the scheme achieves.

A security reduction is a way of carrying out such a proof: a procedure showing that attacking a scheme/protocol/construction $\mathsf{S}$ is at least as difficult as solving the underlying hard problem $\mathsf{P}$; we write $\mathsf{P} \leq \mathsf{S}$. Equivalently, if an adversary can attack the scheme successfully, then there exists a solver that can solve the underlying hard problem successfully by using the adversary's attack as a subroutine/black box/oracle.

We also have a tightness gap defined as:

  1. adversary $\mathcal{A}$ attacks scheme $\mathsf{S}$ with time $\leq T_{\mathcal{A}}$ and advantage (success probability) $\geq \epsilon_{\mathcal{A}}$;
  2. solver $\mathcal{B}$ solves problem $\mathsf{P}$ with time $\leq T_{\mathcal{B}}$ and advantage $\geq \epsilon_{\mathcal{B}}$;
  3. (informal) tightness gap: $\frac{T_{\mathcal{B}}\cdot \epsilon_{\mathcal{A}}}{T_{\mathcal{A}}\cdot \epsilon_{\mathcal{B}}}$ [1]
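
To check that I am reading this formula correctly, here is a toy computation in Python (all numbers invented for illustration):

```python
import math

# Invented numbers, purely to exercise the formula from [1].
T_A, eps_A = 2**40, 2**-10   # adversary A: time <= T_A, advantage >= eps_A
T_B, eps_B = 2**60, 2**-20   # solver B: time <= T_B, advantage >= eps_B

gap = (T_B * eps_A) / (T_A * eps_B)
print(f"tightness gap = 2^{math.log2(gap):.0f}")  # 2^30: the reduction "loses" 30 bits
```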

My questions are:

  1. There could be other vulnerabilities in the scheme/protocol/construction, for example an unsafe protocol design. How can we be sure that attacking the scheme is at least as difficult as solving the hard problem? Or, if the scheme/protocol has this kind of vulnerability, should it not be considered to have a reduction to the problem, even though the scheme does intend to use the problem as its underlying hard problem?

  2. Is there a situation where solving problem $\mathsf{P}$ is used as a subroutine for breaking scheme $\mathsf{S}$? That direction looks more natural to me, but all I ever see is the reverse.

  3. Why, in the tightness gap, is the definition stated with a "greater than or equal to" advantage? I mean, greater advantage and longer time both contribute to breaking the scheme/problem, so why is time "at most" and advantage "at least"? Does it have anything to do with average/worst case, or with computational complexity?

Many thanks in advance!

[1] Chatterjee, S., Menezes, A., Sarkar, P. (2012). Another Look at Tightness. In: Miri, A., Vaudenay, S. (eds) Selected Areas in Cryptography. SAC 2011. Lecture Notes in Computer Science, vol 7118. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-28496-0_18

  1. There could be other vulnerabilities in the scheme/protocol/construction, for example an unsafe protocol design. How can we be sure that attacking the scheme is at least as difficult as solving the hard problem? Or, if the scheme/protocol has this kind of vulnerability, should it not be considered to have a reduction to the problem, even though the scheme does intend to use the problem as its underlying hard problem?

Having a reduction to a problem is a formal mathematical theorem. It is either valid or it isn't. Reductions are used as a guard against the design flaws that you mention --- a protocol with a reduction has a guarantee that if the protocol is broken, then, provided the reduction (and its proof!) is valid, you have an algorithm to attack the underlying hard problem.

A scheme "intending" to do anything doesn't really matter. A reduction is a formal procedure to convert attacks on the scheme into (roughly, up to things like the aforementioned tightness gap) equally effective attacks on the problem. None of this has anything to do with intent.
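To make that "formal procedure" concrete, here is a minimal toy sketch (my own illustration, not a proof from the literature): a key-recovery adversary against an ElGamal-style public key is used, as a black box, to solve discrete logs.

```python
# Toy reduction sketch (the P <= S direction): an adversary A that
# recovers the secret key x from a public key y = g^x mod p is used,
# as a black box, to build a discrete-log solver B.

p, g = 467, 2  # tiny invented prime and generator, illustration only

def solver_B(h, adversary_A):
    """Solve the DL instance (g, h): a public key *is* a DL instance,
    so B simply forwards it to A and returns A's answer."""
    return adversary_A(public_key=h)

# Stand-in adversary (brute force here, but B never looks inside it):
def adversary_A(public_key):
    for x in range(p - 1):
        if pow(g, x, p) == public_key:
            return x

assert solver_B(pow(g, 123, p), adversary_A) == 123
```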

  2. Is there a situation where solving problem $\mathsf{P}$ is used as a subroutine for breaking scheme $\mathsf{S}$? That direction looks more natural to me, but all I ever see is the reverse.

Yes, this almost always holds, but is the opposite of what you are talking about. Namely, for most cryptographic schemes $\mathsf{S}$ based on problem $\mathsf{P}$, we get that $\mathsf{S} \leq \mathsf{P}$. As an example, if we can take discrete logs, discrete-log based crypto is broken. The interesting statement is the reversal of this, $\mathsf{P} \leq \mathsf{S}$, or if discrete-log based crypto is broken, we at least get an algorithm for taking discrete logs as a consolation prize.
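Here is the same toy setting showing the easy $\mathsf{S} \leq \mathsf{P}$ direction: one subroutine call to a discrete-log solver breaks the scheme (invented parameters again).

```python
# The easy direction S <= P: given an oracle for discrete log, the
# scheme is broken in one call. Tiny invented parameters.

p, g = 467, 2

def dl_oracle(h):
    """Brute-force stand-in for a discrete-log solver."""
    for x in range(p - 1):
        if pow(g, x, p) == h:
            return x

secret_x = 321
public_y = pow(g, secret_x, p)

# Breaking the scheme is a single subroutine call into the P-solver:
assert dl_oracle(public_y) == secret_x
```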

Note that there are often formal gaps between the problem $\mathsf{P}$ a cryptosystem is based on in high-level conversation (discrete logs, factoring) and the problem that shows up in the formal reduction (DDH, the RSA assumption, i.e. taking modular roots).

  3. Why, in the tightness gap, is the definition stated with a "greater than or equal to" advantage? I mean, greater advantage and longer time both contribute to breaking the scheme/problem, so why is time "at most" and advantage "at least"? Does it have anything to do with average/worst case, or with computational complexity?

It doesn't. It is really because the informal way of measuring the complexity of an attack is something like $\log_2\frac{\mathsf{time}}{\mathsf{advantage}}$. Often this isn't precisely what you want, e.g. maybe you use $\mathsf{advantage}^2$ rather than $\mathsf{advantage}$. Anyway, with an expression of this form, if we want to upper bound the complexity of an attack (i.e. show the attack is not too bad), we need

  • an upper bound on time, and
  • an upper bound on $\frac{1}{\mathsf{advantage}}$, or a lower bound on $\mathsf{advantage}$.
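
Numerically (invented numbers), those two bounds combine into an upper bound on the whole expression:

```python
import math

# Invented numbers: time <= T_max and advantage >= eps_min together
# upper-bound log2(time/advantage).
T_max, eps_min = 2**50, 2**-14

print(math.log2(T_max / eps_min))  # 64.0: the attack's cost measure is at most 2^64
```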
DDD
Thanks for your explanation! It's much clearer to me now :)
