Score:4

Cache-hard or memory-hard password hashing algorithms?

bs flag

bscrypt is a cache-hard password hashing algorithm/KDF from Steve Thomas (aka Sc00bz/TobTu), who was on the Password Hashing Competition (PHC) panel. He argues that it is better than the alternative algorithms on his minimum password hashing parameters page. There are two talks with further information, but the algorithm is not thoroughly documented at present.

Pufferfish2 is another cache-hard, but not memory-hard, password hashing algorithm by Jeremi Gosney (aka epixoip), another member of the PHC panel. It's designed to be an improvement on bcrypt and is based on Pufferfish, a finalist in the PHC.

The idea is that cache-hard algorithms are better than memory-hard algorithms at short run times, since memory bandwidth and small CPU caches are more of a bottleneck for an attacker than the total amount of memory required.
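
To make the 'short run time' trade-off concrete, here is a minimal timing sketch in Python, assuming the third-party bcrypt and argon2-cffi packages are installed; the cost settings are illustrative placeholders, not recommendations. It times one hash with each algorithm so you can see how the work is spent within a fixed budget of, say, 50-100 ms.

    # Rough timing sketch, not a benchmark: one bcrypt hash vs. one Argon2id hash
    # at illustrative parameters. Assumes the 'bcrypt' and 'argon2-cffi' packages.
    import time
    import bcrypt
    from argon2 import PasswordHasher

    password = b"correct horse battery staple"

    # bcrypt: 2^12 key-setup rounds over a Blowfish state of roughly 4 KiB,
    # which stays resident in L1/L2 cache.
    start = time.perf_counter()
    bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
    print(f"bcrypt, cost 12:       {time.perf_counter() - start:.3f} s")

    # Argon2id: 64 MiB of memory and 3 passes -- a memory-hard setting.
    ph = PasswordHasher(time_cost=3, memory_cost=64 * 1024, parallelism=1)
    start = time.perf_counter()
    ph.hash("correct horse battery staple")
    print(f"Argon2id, 64 MiB, t=3: {time.perf_counter() - start:.3f} s")

The point of the comparison is that, under a tight latency budget, the memory-hard setting may have to shrink its memory parameter, while bcrypt keeps its cache-bound working set regardless of the cost factor.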

Searching for 'cache-hard' on the Cryptology ePrint Archive returns no results, compared to 36 results for 'memory-hard' at the time of writing, which suggests cache-hardness has not been studied academically. However, the idea does seem to be supported by claims that bcrypt, a 'minimally' cache-hard algorithm, is better than Argon2 for short run times, a claim that is now on the Wikipedia page. There was also apparently a cache-hard Argon2 variant called Argon2ds, although I have no idea where information about it can be found unless it's hidden in the PHC mailing list archive.

This information leads to several interesting questions worthy of discussion:

  1. What are the pros and cons of cache-hardness vs memory-hardness?
  2. Should we recommend algorithms like bcrypt/hmac-bcrypt (and eventually newer algorithms like bscrypt after further analysis) for password hashing instead of memory-hard algorithms like Argon2?
  3. Should cache-hard algorithms be used for key derivation or only password hashing?
  4. If cache-hard algorithms are so good, why were they not more of a thing/more successful in the PHC and why are they still not being researched? The PHC final report is quite vague.

Thanks to @fgrieu for inspiring this question; sorry it took so long to ask.

Score:1
ca flag

There's a lot to unpack in that question, but basically memory-hard is the way to go.

From a hardware point of view, there are a few things you need to consider. I'm going to assume that you have commodity hardware. In the x86 model, you generally have cache and memory, with the cache there so that you do not have to wait for the memory, which is slower. What Apple did with their silicon was put the memory on the package right next to the CPU, which is what people who make custom hardware have done forever in the non-commodity space. On some of these custom CPUs, the cache and the memory run at the same speed, which means there is no advantage to cache over memory.

In my opinion, this is also why the cache-vs-memory discussion seemed to disappear: the through-die vias that you need for this architecture became, or are becoming, a commodity item, although very few people use them yet (Apple via TSMC, and POWER via GF). I expect we'll see more of this at 2 nm with buried power rails and the like, so we'll have more ICs with memory attached to the CPU die by bonding or as chiplets.
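
To put rough numbers on the cache-versus-DRAM gap on commodity hardware, here is a crude sketch in Python, assuming numpy is installed and about 1 GiB of RAM is free; the buffer sizes are arbitrary and interpreter overhead blunts the difference, but random reads from the DRAM-resident buffer should still come out noticeably slower per access.

    # Crude illustration: time random reads from a buffer that fits in cache
    # versus one that spills to DRAM. Sizes and read counts are arbitrary.
    import time
    import numpy as np

    def random_read_time(n_bytes, n_reads=2_000_000):
        buf = np.random.randint(0, 256, size=n_bytes, dtype=np.uint8)
        idx = np.random.randint(0, n_bytes, size=n_reads)  # random access pattern
        start = time.perf_counter()
        buf[idx].sum()  # the gather forces the reads; sum() keeps them from being skipped
        return time.perf_counter() - start

    small = random_read_time(256 * 1024)          # ~256 KiB: fits in L2 on most CPUs
    large = random_read_time(1024 * 1024 * 1024)  # ~1 GiB: mostly cache misses to DRAM
    print(f"cache-resident: {small:.3f} s   DRAM-resident: {large:.3f} s")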

samuel-lucas6 avatar
bs flag
So there's not a case for switching to/recommending cache-hard algorithms like bcrypt for short run times?
b degnan avatar
ca flag
@samuel-lucas6 in my opinion, no, but I'm a hardware designer and not a proper cryptographer.
samuel-lucas6 avatar
bs flag
Regardless, I appreciate your input. Argon2 certainly has fewer pitfalls than bcrypt, and it doesn't seem like these new cache-hard schemes will gain traction anytime soon unless they get implemented in a library like libsodium.