I remember reading a previous Stack Exchange post (unfortunately I was unable to find the link; if someone knows it, that would be great!) about a method for making the average password-checking time lower for the server than for an attacker. Essentially, when the password is first created, a random number between, say, 1 and 100 is appended to the salt and plain-text password before hashing. This number is not saved. Then, when checking a password, the server tries all numbers from 1 to 100 and sees whether any of them produces a hash matching the stored record. For a correct password, this takes 50 tries on average. An attacker, on the other hand, has to try all 100 numbers before rejecting every single wrong password, making each check roughly 2x more expensive on average.
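To make sure I'm describing the scheme correctly, here is a rough sketch of what I have in mind (PBKDF2 is just a placeholder for whatever slow hash would actually be used, and the function names are purely illustrative):

```python
import os
import hashlib
import secrets

N = 100  # size of the hidden-number range (1..N)
ITERATIONS = 100_000

def create_record(password: str) -> tuple[bytes, bytes]:
    """On registration: hash salt + password + a random hidden number.
    The hidden number itself is NOT stored anywhere."""
    salt = os.urandom(16)
    hidden = secrets.randbelow(N) + 1  # uniform in 1..N, then discarded
    digest = hashlib.pbkdf2_hmac(
        "sha256", f"{password}{hidden}".encode(), salt, ITERATIONS)
    return salt, digest  # only the salt and digest are saved

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    """On login: try every possible hidden number until one matches.
    A correct password matches after ~N/2 hashes on average;
    a wrong password always costs all N hashes."""
    for hidden in range(1, N + 1):
        candidate = hashlib.pbkdf2_hmac(
            "sha256", f"{password}{hidden}".encode(), salt, ITERATIONS)
        if secrets.compare_digest(candidate, stored):
            return True
    return False
```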
I was wondering whether it makes any sense to use a nonuniform distribution at creation time in order to tune this cost ratio. For instance, one could weight the first 10 numbers more heavily, reducing the entropy and thus the average number of checks for the server, while an attacker would still need to check all 100 numbers to be sure the password is wrong.
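For concreteness, here is one way creation could pick the hidden number from a skewed distribution (the specific weights are made up; verification stays the same as above, ideally trying candidates in descending order of weight):

```python
import secrets

N = 100
# Hypothetical weighting: the first 10 numbers are 5x as likely as the rest,
# so the server's expected search depth drops, while an attacker still has
# to exhaust all N values to reject a wrong password.
weights = [5] * 10 + [1] * (N - 10)

def pick_hidden_number() -> int:
    """Sample a hidden number in 1..N according to `weights`, using a CSPRNG."""
    total = sum(weights)
    r = secrets.randbelow(total)
    for number, w in enumerate(weights, start=1):
        if r < w:
            return number
        r -= w
    return N  # unreachable; kept for completeness
```

The `create_record` sketch above would then call `pick_hidden_number()` instead of the uniform draw.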
- Would there be any problems with this?
- Would this have any advantages compared to a uniform distribution?
The one thing I think could undermine my reasoning is that an attacker might just check the likely guesses and move on to other passwords if none match, only coming back to this password and the less likely numbers much later on.
Anyway, my overall thinking is that the fraction of correct passwords the server checks is very different from the fraction an attacker checks, and we could potentially use that to create a wedge in the average hashing cost.
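To make the wedge concrete, here is a back-of-the-envelope calculation under the same hypothetical 5x weighting on the first 10 numbers, assuming the server checks candidates in order 1 to 100:

```python
# Expected hash evaluations per check under the hypothetical weights above.
N = 100
weights = [5] * 10 + [1] * (N - 10)
total = sum(weights)

# Server checking a *correct* password stops at the hidden number,
# so its expected cost is the weighted average stopping position.
server_cost = sum(pos * w / total for pos, w in zip(range(1, N + 1), weights))

# An attacker checking a *wrong* password must always exhaust all N numbers.
attacker_cost = N

print(f"server ~= {server_cost:.1f} hashes, attacker = {attacker_cost} hashes")
# With these weights: server ~= 37.6 vs attacker 100
# (a uniform distribution would give roughly 50 vs 100).
```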
EDIT: I also realized that someone could potentially use the response time to get an idea of what the hidden number is.