Score:1

What is a differential attack on a hash function? How would one attack a SHA algorithm, and what would it achieve?


Currently, I have been assigned to attack a reduced version of SHA-1. What are we trying to achieve? How do we attack it?

kelalaka: A little searching turns up [Higher-Order Differential Attack on Reduced SHA-256](https://eprint.iacr.org/2011/037.pdf), and the definitions are there...
@kelalaka I did read this paper. Unfortunately it was too difficult for me to understand. Thought someone could explain it briefly. Thanks for your help!
Score:1

The answer is not easy; however, a good starting point is Florent Chabaud and Antoine Joux's CRYPTO '98 paper on SHA-0 and SHA-1, *Differential Collisions in SHA-0*.

Simple Introduction

In MD4, SHA-0, SHA-1, and SHA-2 there is a compression function $$C:\{0,1\}^n \times \{0,1\}^\ell \to \{0,1\}^n$$ where $n$ is the chaining/output size and $\ell$ is the length of the message block to process.
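To see how $C$ is used, here is a minimal Python sketch of the Merkle-Damgård iteration that these functions follow; `compress` is a hypothetical stand-in for $C$, not a real SHA implementation:

    # Minimal Merkle-Damgard iteration sketch; `compress` stands in
    # for the compression function C, not a real SHA round function.
    def merkle_damgard(message, iv, compress, block_size=64):
        # SHA-style padding: a 1 bit, zeros, then the 64-bit message length.
        length = (8 * len(message)).to_bytes(8, "big")
        padded = message + b"\x80"
        padded += b"\x00" * ((-len(padded) - 8) % block_size)
        padded += length
        # Chain the compression function over the message blocks.
        h = iv
        for i in range(0, len(padded), block_size):
            h = compress(h, padded[i:i + block_size])
        return h

The digest is simply the final chaining value $h$ after the last block.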

The compression function $C$ can be viewed as a block cipher where the message block plays the role of the key; this is meaningful since the message is the only input the attacker controls. SHACAL is the name given to the block cipher inside SHA-1, and similarly SHACAL-2 for SHA-2.
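Concretely, these compression functions follow the Davies-Meyer construction: the chaining value is encrypted with the message block as the key and then fed forward,

$$C(H_{i-1}, M_i) = E_{M_i}(H_{i-1}) \boxplus H_{i-1},$$

where $E$ is SHACAL (resp. SHACAL-2) and $\boxplus$ denotes word-wise addition modulo $2^{32}$.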

Now we have a block cipher: we know the input (the initial values $H_i$) and the output value (the hash, i.e. the digest), and we need to find the key. If we can find the key, we have pre-image attacks and, from those, collisions. This might lead one to think that we attack it exactly as in differential cryptanalysis of block ciphers; not exactly, but in a similar way.

The authors first started with a weakened SHA-1 (SHI1) where the input expansion is not considered and the ADDs are converted to XORs, so that there is no non-linear part. With single-bit perturbations and corrections on the inputs, they showed how to find collisions with a mask.
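The point of removing all non-linearity is that output differences then depend only on the input difference: if $F$ is linear over $\mathbb{F}_2$, then $F(m) \oplus F(m \oplus \mathcal{M}) = F(\mathcal{M})$ for every $m$, so any non-zero mask with $F(\mathcal{M}) = 0$ gives a collision for free. A toy Python illustration (the function `toy_linear_compress` is made up for this example, it is not SHI1):

    import random

    def rotl32(x, r):
        # 32-bit left rotation.
        return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

    def toy_linear_compress(m):
        # XOR of rotated copies of the input word: linear over GF(2).
        return m ^ rotl32(m, 5) ^ rotl32(m, 13) ^ rotl32(m, 27)

    # For a linear F, the output difference is independent of the message.
    mask = 0xDEADBEEF
    for _ in range(5):
        m = random.getrandbits(32)
        diff = toy_linear_compress(m) ^ toy_linear_compress(m ^ mask)
        assert diff == toy_linear_compress(mask)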


Later, they converted only the ADDs to XORs (SHI2), keeping the non-linear Boolean functions, to extend the attack. They find a differential mask $\mathcal{M}$ that can be applied to the plaintext (input) to find collisions.

    def lookRandomCollision(hash_fn, block_bits):
        m = random_message(block_bits)   # pick a random one-block message
        h = hash_fn(m)
        m_prime = mask(m)                # apply the differential mask M
        h_prime = hash_fn(m_prime)
        if h == h_prime:
            return (m, m_prime)          # collision found
        return None

    while True:
        result = lookRandomCollision(SHA1, 512)
        if result is not None:
            m, m_prime = result
            print("The collision pair is", m, m_prime)
            break

The success of the attack is determined by the probability that the mask $\mathcal{M}$ introduces a collision. If the success probability of the mask is $1/t$, then after around $c\cdot t$ random pairs one will find a collision.
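To make the count explicit: each random pair succeeds independently with probability $p = 1/t$, so the number of trials until the first collision is geometrically distributed with

$$\mathbb{E}[\text{trials}] = \frac{1}{p} = t, \qquad \Pr[\text{success within } c\cdot t \text{ trials}] = 1-\left(1-\tfrac{1}{t}\right)^{c\,t} \approx 1 - e^{-c}.$$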

After this, they look at the real SHA-0 and SHA-1 to find such a mask. They find one for SHA-0 with time complexity $2^{61}$. The mask they found for SHA-1 does not have better complexity than the generic collision attack, i.e. it has complexity $> 2^{80}$.

The article is well written, building up the attack from the low-level, fully linearized compression function (no non-linear part) to the real function that includes the non-linearity.

And, with high probability, the NSA was aware of this, so they modified SHA-0's input expansion in SHA-1 by introducing a rotation.

Why is it dangerous?

Consider the case where SHA-1 is broken: we have prefix block(s) $P$ and suffix block(s) $S$, and we are looking for a collision

$$Hash(P\mathbin\| m \mathbin\|S) = Hash(P\mathbin\| m' \mathbin\|S)$$

If we can find a good differential mask $\mathcal{M}$ with good probability, then we can execute the attack on the block $m$. If that block falls into a redundant part of the file format, as in PDF files, then one can create two different PDFs that have the same hash value. A real danger, of course, requires more than this: the change in $m'$ should give an advantage to one of the parties, e.g. the signer. This is doable, too.
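The reason the collision survives the common prefix and suffix is the iterated structure: writing $h^*$ for the Merkle-Damgård chaining over a sequence of blocks, if the colliding blocks $m$ and $m'$ have the same length and collide on the chaining value reached after $P$, then

$$h^*(IV, P\mathbin\| m) = h^*(IV, P\mathbin\| m') \implies h^*(IV, P\mathbin\| m \mathbin\| S) = h^*(IV, P\mathbin\| m' \mathbin\| S),$$

because the blocks of $S$ (and the final padding) are processed identically starting from equal chaining values.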
