Score:1

How is Argon2's Blake2b different than normal Blake2b?


This post says that Argon2's Blake2b is a reduced variant, which agrees with Argon2's specification, where it states that only a 2-round Blake2b is used.

But, on the other hand, page 15 of Argon2's specification states that it modifies Blake2b by adding 32-bit multiplications in order to increase latency (I guess they mean forcing the CPU to spend extra cycles?).
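
From what I can tell from the spec, the modification sits in the addition steps of the G mixing function: Blake2b adds two 64-bit words (plus a message word), while Argon2's permutation drops the message words and adds an extra 32×32→64-bit product on top of the sum. A minimal Python sketch of that single step (variable names are mine, and this assumes I am reading the spec correctly):

```python
MASK64 = (1 << 64) - 1
MASK32 = (1 << 32) - 1


def mix_blake2b(a: int, b: int) -> int:
    # Standard Blake2b addition step (message word omitted here):
    # a plain 64-bit modular addition.
    return (a + b) & MASK64


def mix_argon2(a: int, b: int) -> int:
    # Argon2's modified step: additionally add 2 * lo32(a) * lo32(b), which
    # puts a 32x32 -> 64-bit multiplication on the critical path and so
    # raises the latency of each round.
    return (a + b + 2 * (a & MASK32) * (b & MASK32)) & MASK64
```

The real G applies a step like this four times per call, interleaved with XORs and rotations, so the multiplication cost is paid repeatedly per round.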

My questions are:

  1. If Argon2 wants to make Blake2b harder, why would it reduce its 12 rounds to only 2?
  2. Are there any other differences that I didn't mention here?
  3. In which ways do these differences affect the security of Argon2's Blake2b compared to standard Blake2b found in, say, libsodium?

My thoughts

I think Argon2's use of a hash function (Blake2b) for filling the memory pad is not the best choice, because no compression is involved in filling the memory: a 1024-byte input becomes another 1024-byte output. Since nothing is being compressed, all the aggressive rounds of a hash function, which try to preserve maximum input entropy while squeezing it into far fewer bytes, are simply not needed.

I think this is why Argon2 went on to create its own reduced variant of Blake2b with only 2 rounds instead of 12: because it is obvious that full-strength hashing is not needed here.

Effectively, by modifying Blake2b, Argon2 created a variation on a symmetric block cipher, and went on to use it like one too (a fixed-size input becoming a fixed-size output of equal length).
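
To make that concrete, here is a rough structural sketch (in Python) of the per-block step as I understand it from the spec: the compression function XORs two 1024-byte blocks (in the real scheme, the previous block and a referenced earlier block), scrambles the result with the 2-round permutation P applied in two passes, and XORs the scramble back in, so everything stays 1024 bytes. The `toy_permute` stand-in is my own and exists only to keep the sketch runnable; the real P is the reduced Blake2b round.

```python
import hashlib

BLOCK_BYTES = 1024  # one Argon2 memory block


def toy_permute(chunk: bytes) -> bytes:
    # Stand-in for Argon2's permutation P (the 2-round Blake2b-style round
    # over 128 bytes). Only here so the sketch runs; it is NOT the real P.
    out = b""
    for i in range(0, len(chunk), 64):
        out += hashlib.blake2b(chunk[i:i + 64], digest_size=64).digest()
    return out


def compress(x: bytes, y: bytes) -> bytes:
    # Shape of Argon2's compression function G(X, Y):
    #   R = X xor Y, scramble R, xor R back in at the end.
    # 1024 bytes in, 1024 bytes out: a length-preserving block transform,
    # not a compressing hash.
    r = bytes(a ^ b for a, b in zip(x, y))

    # First pass: P over each 128-byte row of the 8x8 matrix of registers.
    q = b"".join(toy_permute(r[i:i + 128]) for i in range(0, BLOCK_BYTES, 128))

    # Second pass: the real G regroups the 16-byte registers into columns
    # here; this sketch just runs the stand-in permutation again.
    z = b"".join(toy_permute(q[i:i + 128]) for i in range(0, BLOCK_BYTES, 128))

    # Feed-forward xor, as in the spec.
    return bytes(a ^ b for a, b in zip(z, r))


out = compress(bytes(BLOCK_BYTES), bytes(range(256)) * 4)
print(len(out))  # 1024
```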

I think a better approach than Argon2's would be, instead of re-inventing a symmetric cipher out of a hash function (by modifying Blake2b), to cut to the chase and use an existing symmetric cipher like ChaCha20.

Using a symmetric cipher like ChaCha20 would be about as fast as Argon2's re-invented cipher (reduced Blake2b), even though it is the full 20-round ChaCha20. In my tests, ChaCha20 is only very slightly slower than Argon2 at doing the same job. There are other benefits too: taking advantage of existing libraries, and of the cryptanalysis that has already gone into established ciphers.
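
For the record, this is the kind of thing I have in mind, as a toy Python sketch. The chaining layout, the per-block nonce scheme, and all names here are my own invention, not from Argon2 or any other spec; the snippet only illustrates "use an existing stream cipher as the filling primitive" and does none of Argon2's indexing.

```python
# Assumes the 'cryptography' package (pip install cryptography).
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

BLOCK_BYTES = 1024   # same block size Argon2 uses
N_BLOCKS = 1024      # 1 MiB of "memory" for the toy example


def fill_memory_chacha(key: bytes):
    # Fill a vector of 1024-byte blocks from ChaCha20, chaining each block to
    # the previous one so the fill stays sequential.
    memory = []
    prev = bytes(BLOCK_BYTES)
    for i in range(N_BLOCKS):
        nonce = i.to_bytes(16, "little")  # hypothetical per-block nonce layout
        ks = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
        block = ks.update(prev)           # keystream xor previous block
        memory.append(block)
        prev = block
    return memory


memory = fill_memory_chacha(os.urandom(32))
print(len(memory), len(memory[0]))        # 1024 blocks of 1024 bytes each
```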

Score:1

You are correct when you say that in this case no compression is required and that a symmetric block cipher could also be used for this purpose. Several considerations probably played a role in the selection of Blake2b.

  1. The reduced version is used to fill a large vector as fast as possible, and that vector is then accessed to make the scheme memory-hard. The thought behind this was probably not only to fill the vector quickly, but also to allow the memory hardness to be set independently of the time hardness. This means the filling algorithm has to be fast; hence the reduced number of rounds.

  2. The algorithm should not run much faster on custom hardware, e.g. ASICs, than on a typical user device. This means that not only raw performance is crucial, but also the performance gap between different kinds of hardware. Blake2b is optimized precisely for typical user devices.

  3. The algorithm should not be vulnerable to cache-timing attacks. Therefore, an ARX design was preferred; the quarter-round sketch right after this list shows what that looks like.
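
For illustration (this is not part of Argon2 itself), here is a Python sketch of ChaCha's quarter-round, the canonical ARX building block; Blake2b's G function is made of the same kind of operations, just on 64-bit words. Everything is additions, rotations and XORs on whole words, with no secret-dependent branches or table lookups, which is why this style of design resists cache-timing attacks.

```python
MASK32 = 0xFFFFFFFF


def rotl32(x: int, n: int) -> int:
    # 32-bit left rotation.
    return ((x << n) | (x >> (32 - n))) & MASK32


def chacha_quarter_round(a: int, b: int, c: int, d: int):
    # Add-rotate-xor only: no data-dependent memory accesses, so the cache
    # access pattern reveals nothing about the values being mixed.
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d


# Can be checked against the quarter-round test vector in RFC 8439, Section 2.1.1.
print([hex(w) for w in chacha_quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)])
```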

ChaCha20 also fulfills these criteria in principle; Blake2b probably only performs better than a reduced form of ChaCha20 with regard to point 2. Perhaps it also played a role that, at the time, hash functions were receiving more attention due to the SHA-3 competition that had just been completed, and that new hash algorithms such as Blake2b were therefore better studied than new block ciphers.

An indication that Blake2b is suitable for this application: In addition to Argon2, the Catena team also developed a (different) reduced form of Blake2b for the same purpose.

And an indication that your assumption about using a cipher is also correct: in contrast, scrypt, developed in 2010, uses a reduced form of Salsa20, a predecessor of ChaCha20, for the same purpose.
