A few months ago I decided to write my own custom prime number generator. One of the tests I use is a modified Miller–Rabin test that first tests the number against base 2 and then tests only random odd bases, instead of completely random ones. I figured this might give a better error probability than (1/4)^k for k rounds, since factoring all the powers of 2 out of a random base leaves an odd remainder. Does this actually give better probability bounds, or would it be about the same, and how would one prove that? Also, even if the probability bounds were the same, would this test at least be faster on average for random prime generation, since the base-2 test weeds out almost all composites and the computer doesn't have to generate as many random numbers?
This is my code. I previously defined the functions `MR(p, a, s, d)` and `extract(a, b)`, and a class `PrimeList` that uses trial division to generate a list of prime numbers; `small_primes` is the one instance defined in the program.
```python
import random

def probable_prime(p, k=5):
    """Quick custom-made primality test for large primes."""
    # Test small cases
    if p <= 5000:
        return small_primes.is_prime(p)
    # Find s, d with p - 1 == 2**s * d, d odd
    (s, d) = extract(2, p - 1)
    # Test for divisibility by 2
    if s == 0:
        return False
    # Quick trial division by small primes
    for i in small_primes.odd_primes(p.bit_length() // 2):
        if p % i == 0:
            return False
    # Miller-Rabin test for base 2
    if not MR(p, 2, s, d):
        return False
    # Miller-Rabin tests with random odd bases only
    return all(MR(p, random.randint(2, p // 2) * 2 - 1, s, d) for _ in range(k))
```
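For reference, here is a minimal sketch of what I assume the two helpers do (my actual definitions behave the same way): `extract(b, n)` returns `(s, d)` with `n == b**s * d` and `b` not dividing `d`, and `MR(p, a, s, d)` is one standard strong-pseudoprime round to base `a`, given `p - 1 == 2**s * d`.

```python
def extract(b, n):
    """Return (s, d) such that n == b**s * d, with b not dividing d."""
    s = 0
    while n % b == 0:
        n //= b
        s += 1
    return (s, n)

def MR(p, a, s, d):
    """One Miller-Rabin round: True if p is a strong probable prime
    to base a, where p - 1 == 2**s * d with d odd."""
    x = pow(a, d, p)          # a**d mod p via built-in modular exponentiation
    if x == 1 or x == p - 1:
        return True
    for _ in range(s - 1):    # square up to s-1 more times, looking for -1
        x = (x * x) % p
        if x == p - 1:
            return True
    return False
```

As a sanity check, 2047 = 23 · 89 is the smallest strong pseudoprime to base 2, so `MR(2047, 2, 1, 1023)` returns `True` even though 2047 is composite; that is exactly the kind of number the later random-base rounds are meant to catch.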