Score:1

What is a good and bulletproof private key for ECC curves?

it flag

I am quite new to the low-level mathematical details of cryptography, though I worked in the crypto area for 2.5 years. So if I am wrong about any part below, please correct me without a facepalm gesture ;)

There are already some discussions about this, along with questions about what a valid private key is per se, but none of those answers is convincing. At least I don't agree with their argument that any valid private key is a good key.

e.g.

EdDSA (Ed25519): is any random number sufficient for a good private key? Yes ... – answered Oct 28 '17 at 9:03, Frank Denis

Are all possible EC private keys valid? @Fozi: 1 is neither better nor worse than 0x3b6ddba1f4b325cee4505084bc507d2019e86539f8d4be027004b69f9aa0bc74, as long as they have equal probability of occurring, namely 1/n or about ∼1/2^256, so that the adversary has no better probability of success by guessing one or the other first. Note that the probability of getting either one of them individually is negligible, just like any other possible secret scalar. – Squeamish Ossifrage Sep 11 '17 at 18:28

Here is my argument:

AFAIK, a public key is calculated by multiplying the generator point G by n, where n is a random number generated as the private key.

The notation is pub = n * G

Then for any known curve, I can calculate a million public keys as a rainbow-table-like lookup table, where n ranges from 1 to 1,000,000. That would take only 32 MiB of space in the case of a 256-bit curve. If somebody uses any value in that range to generate a public key, I can easily nail it by comparing against the table (see the sketch below). In practice, test vectors for a given curve normally start from n = 1, 2, 3, ... e.g. https://crypto.stackexchange.com/a/21206/95843
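To make the idea concrete, here is a minimal sketch in pure Python (no external libraries). The curve constants are the standard secp256k1 parameters; the table size is shrunk so the demo runs quickly, but nothing else changes for a million entries:

```python
# Build a lookup table mapping pub = n*G -> n for small n on secp256k1,
# then "crack" a public key generated from a small private key.

P  = 2**256 - 2**32 - 977   # field prime of secp256k1
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G  = (Gx, Gy)

def ec_add(p1, p2):
    # Affine point addition; None represents the point at infinity.
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P    # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    # Double-and-add scalar multiplication.
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

# Build the table incrementally: each step costs one point addition.
TABLE_SIZE = 10_000          # use 1_000_000 for the table in the question
table, pub = {}, None
for n in range(1, TABLE_SIZE + 1):
    pub = ec_add(pub, G)
    table[pub] = n

# Anyone whose public key came from a small n is found instantly:
weak_pub = ec_mul(1234, G)
print(table.get(weak_pub))   # -> 1234
```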

On a wider scale, someone could store a 1 TiB or even a 1 PiB (oh, money!) table of this type. So in that sense, any private key less than 2^20 or even 2^50 is not good!

Now, from the above deduction, the lower bound on a private key is settled. But is there an upper bound? Or is any valid number above the lower bound good and bulletproof enough?

jp flag
0x3b6ddba1f4b325cee4505084bc507d2019e86539f8d4be027004b69f9aa0bc74 is a pretty good and bulletproof private key. You should use that one. (this is a joke)
jjj
Why would someone create such a table? One would never crack any randomly generated key with it, because it is so unlikely that the key falls in that range. That said, keys in that range are as good as any other random key.
jjj
When you rule out the first million, the second million becomes the new first million. By induction, that would mean that no key is good.
Match Man
Thanks a lot for everyone's answers and comments on this topic. I am closing the conversation as I have already got what I was looking for.
Score:7
us flag

What's so special about the first 1 million keys? Isn't there also a small rainbow table containing keys in the range 1M to 2M, and a small rainbow table containing keys in the range 2M to 3M, and a small rainbow table containing keys in the range 842347283958M to 842347283959M, etc etc?

The same logic that makes you concerned about small keys (less than 1M) can also make you equally concerned about keys in any range! No matter which key I have, there is indeed a small rainbow table out there somewhere that could allow someone to easily break my key. But an attacker would have to know which of the $2^{236}$ possible rainbow tables (ranges of keys) to try!

I think it's a mistake to think of a particular range of keys (from 0 to 1M, for example) as being problematic in a special way. If the lowest range of keys is especially bad, then the key generation process should avoid it. But then the next lowest range of keys becomes bad by the same logic, so you should avoid it too. This chain of logic concludes with no safe keys to choose from.

cn flag
The issue might be that if it's a human manually picking `n`, they have a strong tendency to go with very low values. I know some people who can recognize the md5 hash of an empty string because they have seen it so often. I wouldn't be surprised if some people who work with crypto a lot can recognize some keys that are used in examples all the time.
Match Man
Because it is easy to figure out. And cheap. Also, thanks to some security-area "expert" claims that 0 or 1 is as safe as 0x3b6ddba1f4b325cee4505084bc507d2019e86539f8d4be027004b69f9aa0bc74, someone will use them as a bulletproof "random" number to generate their public key. And a capable party can scan all public keys to see if someone "luckily" used a too-small random number to generate one. The same rule also applies to why `123456` or your birthday is not a good password: they are all so easy to try.
us flag
Security is not an intrinsic property of a *particular* key. It comes from the *process* of choosing the key. If you have the right process, then all of these concerns go away.
Match Man
>This chain of logic concludes with no safe keys to choose from.< I can't agree with this conclusion. The first 1 million is just an example; my point is that using a pre-computed table is quite cheap, so somebody will eventually do it to try their luck. The same applies to lottery ticket buyers: they know the chance is very low, but eventually some of them may be lucky enough to win a Powerball jackpot!
Match Man
Second, the chain doesn't recur infinitely, due to the limits of storage. Referring to https://theconversation.com/the-worlds-data-explained-how-much-were-producing-and-where-its-all-stored-159964#:~:text=In%202018%2C%20the%20total%20amount,One%20zettabyte%20is%208%2C000%2C000%2C000%2C000%2C000%2C000%2C000%20bits., as of 2020 there is about 59 ZB of storage in the world. It is then *almost* impossible to store a 590 ZB table, even with the help of compression. In that sense, a random number whose table would need more than 590 ZB is treated as bulletproof safe. Which is over 2^82.
Match Man
Which is over 2^82... but still relatively small, compared to the whole key space of 2^256 or 2^512.
us flag
So any key above $2^{82}$ is "bulletproof safe". Then do you propose a key generation process that completely excludes keys < $2^{82}$? If so, then no one has an incentive to store rainbow tables for keys $\{0, ..., 2^{82}\}$. Won't they now store rainbow tables for the *new lowest* $2^{82}$ keys? Did these keys now become unsafe, when previously they were "bulletproof"?
Match Man
No, the point is that anyone can choose a range that they think is safe enough. And a capable party, or say a predator, only chases the easiest targets.
Score:2
ng flag

The question is correct that any private key less than $2^{50}$ can be found from the public key, and the method it describes would be practicable. However:

  • There are much better methods that can find a key less than $k^2$ with effort proportional to $k$ and little memory, with sizable probability (e.g. Pollard's rho; see the sketch after this list). Thus $2^{50}$ operations are enough to find a key less than $2^{100}$ with sizable probability.
  • $2^{50}$ is a small number of operations for breaking crypto. In the 1970s, the NSA agreed that DES has $2^{56}$ keys, because they knew they could break that if necessary. By 2000, the baseline for security was $2^{80}$. The modern baseline might be $2^{96}$, and practice for new systems is on the order of $2^{128}$ or more. That's before squaring to account for the attacks above.
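To illustrate the square-root speedup the first bullet refers to, here is a toy baby-step giant-step search, which has the same $O(\sqrt{k})$ running time as Pollard's rho (rho additionally needs almost no memory). For simplicity it works in a multiplicative group modulo a prime rather than on an elliptic curve; the idea carries over unchanged:

```python
# Baby-step giant-step: recover a discrete log known to be below `bound`
# in about sqrt(bound) group operations (and sqrt(bound) table entries).
from math import isqrt

def bsgs(g, h, p, bound):
    """Return x with pow(g, x, p) == h and 0 <= x < bound, else None."""
    m = isqrt(bound) + 1
    # Baby steps: tabulate g^j for j in [0, m).
    baby = {pow(g, j, p): j for j in range(m)}
    # Giant steps: walk h, h*g^-m, h*g^-2m, ... until we hit the table.
    step = pow(g, -m, p)          # modular inverse power (Python 3.8+)
    gamma = h
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]   # x = i*m + j
        gamma = gamma * step % p
    return None

# Demo: a secret below 2^20 falls in about 2^10 steps.
p, g = 1_000_000_007, 5
secret = 123_456
h = pow(g, secret, p)
print(bsgs(g, h, p, bound=2**20))   # -> 123456
```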

> Is there an upper bound?

Yes. For every elliptic-curve-based signature method and curve, there is a prescribed set of private keys. For ECDSA, it's the interval [1, n-1], where n is the order of the generator (not the n in the question). The value of n depends on the curve. For curve secp256k1, n is a little below $2^{256}$. Thus the expected effort to find the private key from the public key using Pollard's rho (or any other known method) is on the order of $2^{128}$ operations (additions on the elliptic curve), or more.

If the upper bound were exceeded, the private key would be rejected by compliant software. If it were not rejected, and the software worked correctly from a mathematical standpoint, then for signature systems where the private key directly multiplies the generator to form the public key (as in the question and in ECDSA), the key would work as if it were reduced modulo n (the order of the generator), both for generating the public key and for signing.
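For concreteness, the reduction happens because $n\,G = \mathcal{O}$ (the point at infinity) when $n$ is the order of the generator $G$:

$$(k + n)\,G = k\,G + n\,G = k\,G + \mathcal{O} = k\,G$$

so the scalars $k$ and $k + n$ yield the same public key, and hence the same signatures.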

Rules for private key generation depend on the signature system; e.g. Ed25519 specifies a space for the private key that's much larger than n, in order to improve resistance to some attacks (but not to finding a private key that generates valid signatures).


Removing possible keys in a manner known to the adversary decreases security, because the adversary has fewer keys to test. It's tolerable to remove a few possible keys, but it's not desirable in any way.

The usual/recommended/best way to select a private key is uniformly at random from the set of valid private key values. It's tolerable to exclude some values (like the small values in the question), but only as long as just a small fraction of the values is rejected. E.g., for ECDSA on secp256k1 it would be OK to reject private keys less than $2^{192}$, because that removes only a minuscule proportion, about $2^{-64}$ (0.000000000000000005%), of the keyspace. But it's also pointless, because it's so improbable that one of the excluded keys is generated. And increasing the lower bound so that this proportion becomes non-negligible would non-negligibly reduce our choice of keys, allowing a specially designed key search algorithm to find a key more easily than for a uniformly random choice.

An ECDSA private key on secp256k1 can be generated by rolling a hex die 64 times (if the first 23 throws all yield the same value, question the die and stop). Faithfully use the 64 results, in big-endian order. The outcome is a valid private key (the test we did ensures the outcome is in the appropriate interval); a software sketch of the same idea follows.
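As a software counterpart to the dice procedure, here is a minimal sketch of the same rejection-sampling idea in Python, using the standard-library `secrets` CSPRNG (the constant is the published secp256k1 group order):

```python
# Generate a uniform ECDSA private key in [1, n-1] for secp256k1.
import secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def generate_private_key():
    while True:
        k = secrets.randbits(256)   # uniform in [0, 2^256)
        if 1 <= k < N:              # reject 0 and the sliver >= n
            return k                # uniform in [1, n-1]
```

The rejection loop almost never iterates: the probability that a 256-bit value falls outside $[1, n-1]$ is about $2^{-128}$.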

Match Man
Thanks a lot for sharing your insights on this topic. Yes, $2^{128}$ is a fair number if we consider a more "capable" party than an individual civilian. But talking about the upper bound of a "good" key, my question is whether any big enough valid key is also a good key, meaning there is no upper bound between the biggest possible good key and the biggest valid key. E.g., the whole key space would look like: [least valid private key, 2^128] ∪ [2^128+1, (good key range), biggest good key] ∪ [gap, biggest valid key] ∪ [invalid keys...]
fgrieu
@MatchMan: removing possible keys in a manner known to the adversary _decreases_ security, because the adversary has fewer keys to test. It's _tolerable_ to remove a few possible keys. It's not desirable in any way.
Match Man
Thanks, it's really clear to me now.
id flag
@fgrieu: If one is generating many keys, and e.g. 1% of them could be brute-force tested 10% more cheaply (cost *per candidate*) than the rest, then an attacker would be able to search that 1% of the key space with only 0.9% as much effort as would be needed to search all of it. If one generated enough keys that at least some would likely fall within that 1%, this would reduce an attacker's effort to find at least one key by about 10%. By contrast, excluding that 1% of keys would only reduce an attacker's required effort by 1%.
fgrieu
@supercat: Key search in ECC (or DSA, or RSA) is not done by scanning the keyspace. Ed25519 has $>2^{252}$ keys, and scanning even 0.00…01% of that (where … replaces 40 zeroes) is beyond reach with all the effort wasted on bitcoin mining. Attacks that are less far from feasible exist, which directly attack public keys. Unlike in symmetric crypto (sometimes), and to a much lesser degree RSA, attacking many ECC keys simultaneously does not seem to make breaking one easier. And I fail to find sizable classes of keys (1%) that could be attacked much faster (-10%) than others.
id flag
@fgrieu: My response was related to the general concept that one cannot define the set of "problematic" keys broadly enough that one might plausibly hit one by accident, without the set being so broad that excluding it would meaningfully shrink the brute-force key space. That claim is vacuously true for cryptosystems where the probability of hitting a "bad" key is practically indistinguishable from zero, but I don't think it would hold for cryptosystems where it isn't vacuously true.