Score:1

Could I Use a Hash Function to Help Generate a Cryptographic Key?


If I had a true random number generator that had a fault where 10-20% of the bits never change (these bits always produced the same value every time the TRNG was called), could I feed the result of the TRNG through a hash function (let's say SHA-256) to generate a unique and secure key?

SAI Peregrinus (comment):
Look up Key Derivation Functions, e.g. HKDF.
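To make the suggestion concrete, here is a minimal HKDF-SHA256 sketch following RFC 5869, using only the Python standard library. The salt, info string, and TRNG placeholder below are illustrative, not part of any real API:

```python
import hmac
import hashlib

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # Extract step: concentrate the input keying material into a
    # fixed-size pseudorandom key (PRK). An empty salt defaults to
    # a string of zeros of hash length, per RFC 5869.
    return hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    # Expand step: stretch the PRK into `length` bytes of output
    # keying material, bound to the context string `info`.
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

# Derive a 256-bit key from (imperfect) TRNG output.
trng_output = b"\x13\x37" * 32            # placeholder for real TRNG bytes
prk = hkdf_extract(b"example-salt", trng_output)
key = hkdf_expand(prk, b"example-context", 32)
```

In practice you would use a vetted implementation (e.g. the `cryptography` package's HKDF) rather than rolling your own.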
Answer (score 2):

> could I feed the result of the TRNG through a hash function (let's say SHA-256) to generate a unique and secure key?

Yes, and in fact, that is very common practice.

Real TRNGs often don't produce precisely uniform and independent bits, due to device defects or subtle interactions with the environment. To compensate for this reality, we generally pass the TRNG output through a conditioner, which takes good (but not perfect) entropy and converts it into a more uniform output.

SHA-256 would be one example of a good conditioner: estimate how much TRNG output is needed to contain at least 256 bits of entropy (worst case), feed that much through SHA-256, and the output will have approximately 256 bits of entropy (assuming SHA-256 does not have an unsuspected weakness).

In your example, if for each 8 bits of TRNG output 2 bits are fixed and the other 6 are uniform and independent, then you'd want at least $\lceil 8/6 \cdot 256 \rceil = 342$ bits of TRNG output per SHA-256 hash (and if you have doubts about the other 6 bits, feel free to bump that up; the only downsides of too much input are practical ones).

heduncook (comment):
Forgive my naivety, but if I use a good hash function that outputs a 256-bit value and I feed it a 256-bit input with only 200 bits of entropy, why doesn't the output of the hash have 256 bits of entropy? Doesn't every output bit of a good hash function have a 50% chance of being a one or a zero, regardless of the input?
poncho (comment):
@heduncook: remember what entropy is; it's not 'does it look random', but rather 'how much information is in there that the attacker doesn't know'. If the attacker knows that the input is one of $2^{200}$ values (the simplest way of getting 200 bits of entropy), then he knows that the output is one of (at most) $2^{200}$ values. More generally, applying a deterministic function *never* increases entropy.
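This point is easy to demonstrate: if the attacker knows the input is one of $2^8$ values, the digests look random but only $2^8$ of them are possible. A small sketch (the prefix is an illustrative stand-in for the bits the attacker already knows):

```python
import hashlib

# The attacker knows the input is one of 2**8 = 256 values:
# a fixed, publicly known prefix followed by a single unknown byte.
prefix = b"known-to-the-attacker-"
candidates = {hashlib.sha256(prefix + bytes([b])).digest() for b in range(256)}

# Each digest individually "looks random", yet the attacker can enumerate
# all 256 possibilities, so the output carries at most 8 bits of entropy.
```

Hashing hides structure; it does not create uncertainty the attacker doesn't already have.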
heduncook (comment):
Ok, that makes sense. The TRNG I'm using is a PUF, so I was assuming that an attacker wouldn't be able to figure out which 200 bits have entropy, but I can see how that assumption may not hold up if an attacker was motivated enough. Thanks for the help!