The question is correct that any private key less than 2^50 can be found from the public key, and the method that it describes would be practicable. However:
- There are much better methods that can find a key less than k^2 with effort proportional to k and little memory, with sizable probability (e.g. Pollard's rho); see the sketch after this list. Thus 2^50 operations are enough to find a key less than 2^100 with sizable probability.
- 2^50 is a small number of operations when it comes to breaking crypto. In the 1970s, the NSA agreed to DES having 2^56 keys because they knew they could break that if necessary. By 2000, the baseline for security was 2^80. The modern baseline might be 2^96, and practice for new systems is on the order of 2^128 or more. That's before squaring to account for the attacks above.
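To make the square-root cost concrete, here is a small self-contained Python sketch. It uses baby-step giant-step rather than Pollard's rho (same square-root running time, but easier to show; rho's advantage is needing almost no memory). The curve constants are secp256k1's; the 20-bit "weak" key `d_secret` and the function names are just illustrative choices for this sketch.

```python
# Baby-step giant-step on secp256k1: recovers a private key known to be below
# 2^20 in roughly 2^10 curve operations. Same square-root cost as Pollard's
# rho, but simpler to show (rho would need almost no memory, BSGS needs a table).
# Throw-away sketch, not production code.

# secp256k1 domain parameters (SEC 2)
p = 0xFFFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFE_FFFFFC2F
G = (0x79BE667E_F9DCBBAC_55A06295_CE870B07_029BFCDB_2DCE28D9_59F2815B_16F81798,
     0x483ADA77_26A3C465_5DA4FBFC_0E1108A8_FD17B448_A6855419_9C47D08F_FB10D4B8)

def ec_add(P, Q):
    """Add two points on y^2 = x^3 + 7 over GF(p); None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P, k = ec_add(P, P), k >> 1
    return R

def bsgs(pub, bound):
    """Find d in [1, bound) with d*G == pub, in about sqrt(bound) group operations."""
    m = int(bound ** 0.5) + 1
    baby, P = {}, None
    for j in range(1, m + 1):             # baby steps: store j*G for j = 1..m
        P = ec_add(P, G)
        baby[P] = j
    step = ec_mul(m, G)
    neg_step = (step[0], p - step[1])     # -m*G
    Q = pub
    for i in range(m + 1):                # giant steps: Q = pub - i*m*G
        if Q in baby:
            return i * m + baby[Q]
        Q = ec_add(Q, neg_step)
    return None

d_secret = 0xABCDE                        # a "weak" key, below 2^20
pub = ec_mul(d_secret, G)
print(hex(bsgs(pub, 1 << 20)))            # prints 0xabcde after ~2^10 steps
```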
> Is there an upper bound?
Yes. For every Elliptic Curve-based signature method and curve, there is a prescribed set of private keys. For ECDSA, it's the interval [1, n-1], where n is the order of the generator (not the n in the question). The value of n depends on the curve. For curve secp256k1, n is a little below 2^256. Thus the expected effort to find the private key from the public key using Pollard's rho (or any other known method) is on the order of 2^128 operations (additions on the Elliptic Curve), or more.
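For concreteness, the exact n for secp256k1 (the value published in SEC 2) and the square-root figure it implies can be checked in a couple of lines of Python:

```python
# The order n of secp256k1's generator (from SEC 2) and the square-root
# attack cost it implies.
import math

n = 0xFFFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFE_BAAEDCE6_AF48A03B_BFD25E8C_D0364141

print(math.log2(n))                # ~256.0: n is a little below 2^256
print(math.log2(math.isqrt(n)))    # ~128.0: expected Pollard's rho cost, in curve additions
```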
If the upper bound was exceeded, the private key would be rejected by compliant software. If it was not rejected, and the software was otherwise working correctly from a mathematical standpoint, then for signature systems where the private key is directly what multiplies the generator to form the public key (as in the question and in ECDSA), the key would work as if it had been reduced modulo n (the order of the generator), both for generating the public key and for signing.
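Here is a minimal Python sketch of that "reduced modulo n" behaviour on secp256k1 (same throw-away curve arithmetic as in the earlier sketch; not constant-time, not for production use):

```python
# On secp256k1, a scalar d >= n yields exactly the same public key as d mod n,
# because n*G is the point at infinity.

p = 0xFFFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFE_FFFFFC2F
n = 0xFFFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFE_BAAEDCE6_AF48A03B_BFD25E8C_D0364141
G = (0x79BE667E_F9DCBBAC_55A06295_CE870B07_029BFCDB_2DCE28D9_59F2815B_16F81798,
     0x483ADA77_26A3C465_5DA4FBFC_0E1108A8_FD17B448_A6855419_9C47D08F_FB10D4B8)

def ec_add(P, Q):
    """Add two points on y^2 = x^3 + 7 over GF(p); None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def ec_mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P, k = ec_add(P, P), k >> 1
    return R

d_big = n + 123456789                              # "private key" above the upper bound
print(ec_mul(d_big, G) == ec_mul(d_big % n, G))    # True: identical public keys
```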
Rules for private key generation depend on the signature system; e.g. Ed25519 specifies a space for the private key that's much larger than n, in order to improve resistance to some attacks (but not to finding a private key that generates valid signatures).
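For illustration, this is roughly how RFC 8032 turns an Ed25519 private key into the actual signing scalar; the key itself is any 32-byte string (a space of size 2^256), while the group order is only about 2^252:

```python
# Ed25519 (RFC 8032) scalar derivation: hash the 32-byte private key with
# SHA-512, keep the lower half, and clamp a few bits.
import hashlib, secrets

sk = secrets.token_bytes(32)             # the Ed25519 private key: 32 uniformly random bytes
h  = hashlib.sha512(sk).digest()
a  = bytearray(h[:32])                   # lower 32 bytes become the scalar (upper 32: nonce prefix)
a[0]  &= 0b1111_1000                     # clear the 3 low bits (multiple of the cofactor 8)
a[31] &= 0b0111_1111                     # clear the top bit
a[31] |= 0b0100_0000                     # set bit 254
scalar = int.from_bytes(a, "little")
print(scalar.bit_length())               # always 255, since bit 254 is forced to 1
```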
Removing possible keys in a manner known to the adversary decreases security, because the adversary has fewer keys to test. It's tolerable to remove a few possible keys, but it's not desirable in any way.
The usual/recommended/best way to select a private key is uniformly at random in the set of valid private key values. It's tolerable to exclude some values (like small values, as in the question), but only as long as a small fraction of the values are rejected. E.g. for ECDSA on secp256k1, it would be OK to reject private keys less than 2^192, because that removes only a minuscule proportion, about 2^-64 (0.000000000000000005%), of the keyspace. But that's also pointless, because it's so improbable that one of the excluded keys is generated. And increasing the lower bound so that this proportion becomes non-negligible would non-negligibly reduce our choice of keys, allowing a specially designed key search algorithm to find the key more easily than for a uniformly random choice.
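A minimal sketch of that recommended uniform generation for secp256k1, using rejection sampling so that no valid value is excluded or favoured (the function name `gen_private_key` is just for this sketch):

```python
# secp256k1 private key drawn uniformly at random in [1, n-1].
import secrets

n = 0xFFFFFFFF_FFFFFFFF_FFFFFFFF_FFFFFFFE_BAAEDCE6_AF48A03B_BFD25E8C_D0364141

def gen_private_key():
    while True:
        d = secrets.randbits(256)        # 256 uniform bits from the OS CSPRNG
        if 1 <= d <= n - 1:              # reject 0 and the ~2^128 values at or above n
            return d

print(hex(gen_private_key()))
```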
An ECDSA private key on secp256k1 can be generated by rolling a hexadecimal (16-sided) die 64 times (if the first 23 throws all yield the same value, question the die and stop). Faithfully use the 64 results, in big-endian order. The outcome is a valid private key: the test we did ensures it's at least 1 and below n, since n written in hexadecimal starts with 31 digits F.
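For those who'd rather let a computer stand in for the physical die, here is the same procedure as a short Python sketch (`secrets.randbelow` plays the role of the rolls):

```python
# The dice procedure above, simulated: 64 rolls of a 16-sided ("hexadecimal")
# die, assembled big-endian, with the sanity check on the first 23 rolls.
import secrets

rolls = [secrets.randbelow(16) for _ in range(64)]

if len(set(rolls[:23])) == 1:
    raise RuntimeError("first 23 rolls identical: question the die and stop")

d = int("".join("0123456789ABCDEF"[r] for r in rolls), 16)   # big-endian assembly
print(hex(d))   # in [1, n-1] for secp256k1, by the check above
```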