Message boards :
Project Staging Area :
Chance of a Factorial Prime
Roger (Volunteer developer, Volunteer tester)
Joined: 27 Nov 11 · Posts: 1137 · ID: 120786 · Credit: 267,684,352 · RAC: 4,733

Prime Number Theory says that the chance of a random integer x being prime is about 1/log(x).
For x = n!+/-1, the chance is about w/log(n!), where w is a weight factor.
To find the expected number of primes over a given range we integrate this density from n = A to n = B.
You can use Stirling's approximation, n! ~= n^n * e^-n (ignoring lower-order terms), and some log identities, i.e.
1/log(n^n * e^-n)
= 1/(log(n^n) + log(e^-n))
= 1/(n*log(n) - n*log(e))
= 1/(n*log(n) - n)
Integrating 1/(n*log(n) - n) = 1/(n*(log(n) - 1)) gives log(log(n) - 1), so:
Number of Primes = w * (log(log(B) - 1) - log(log(A) - 1))
Weight is scaled so w=1.0 has the same density of primes as a randomly chosen set of odd numbers of the same magnitude.
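The estimate above can be evaluated in a couple of lines of Python (the function name is mine; this just computes the closed form from the integration):

```python
import math

def expected_primes(A, B, w=1.0):
    """Expected number of primes among n!+1 (or n!-1) for A <= n <= B,
    from integrating w/(n*log(n) - n), i.e. w*(log(log(B)-1) - log(log(A)-1))."""
    return w * (math.log(math.log(B) - 1) - math.log(math.log(A) - 1))

# With w=1.0 the range n=100001..110000 gives about 0.009 expected primes;
# the real weight w is larger, which is what the rest of the thread is about.
print(expected_primes(100001, 110000))
```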
You can use fpsieve to find the number of candidates remaining after sieving the range n=100001 to 110000 for factors up to P=1e6:
>fpsieve -n 100001 -N 110000 -P 1e6 -x
Running this I found:
2528 terms with factors for n!-1, so 7472 remain, and
2651 terms with factors for n!+1, so 7349 remain.
That's a good start. I'll finish trying to crack it later, unless someone else wants to take on the challenge.  

JimB (Honorary cruncher)
Joined: 4 Aug 11 · Posts: 916 · ID: 107307 · Credit: 974,494,172 · RAC: 418

Rather than use fpsieve, why not just grab the sieve file that we use to generate work? It contains all n < 1M and has been sieved to p=5.505T.  

rogue (Volunteer developer)
Joined: 8 Sep 07 · Posts: 1219 · ID: 12001 · Credit: 18,565,548 · RAC: 0

Sieving for min n > 2 would require some optimizations to fsievecl and fpsieve.  

Yves Gallot (Volunteer developer, Project scientist)
Joined: 19 Aug 12 · Posts: 651 · ID: 164101 · Credit: 305,042,960 · RAC: 2,038

> Weight is scaled so w=1.0 has the same density of primes as a randomly chosen set of odd numbers of the same magnitude.
w > 1 because if p | n!+/-1 then p > n.
See http://www.ams.org/journals/mcom/2002-71-237/S0025-5718-01-01315-1/home.html.

Roger (Volunteer developer, Volunteer tester)
Joined: 27 Nov 11 · Posts: 1137 · ID: 120786 · Credit: 267,684,352 · RAC: 4,733

> Rather than use fpsieve, why not just grab the sieve file that we use to generate work? It contains all n < 1M and has been sieved to p=5.505T.
Looking at the sieve file fsieve_5505G.txt, there are 3474 terms remaining for n!-1 and 3495 for n!+1 in the range n=100001 to 110000 (versus 7472 and 7349 after sieving only to 1 million).
Using a deeper-sieved file will give a more accurate result for the chance value.
> Sieving for min n > 2 would require some optimizations to fsievecl and fpsieve.
Not sure what you mean by this. Please elaborate.
> w > 1 because if p | n!+/-1 then p > n.
> See http://www.ams.org/journals/mcom/2002-71-237/S0025-5718-01-01315-1/home.html.
Cool. I didn't know about this paper. I'll have to study it. Has anything been done on Compositorial? As far as I can tell it's only been searched to 14,000. I searched to 11,000 with 2 cores in only 2 days.

rogue (Volunteer developer)
Joined: 8 Sep 07 · Posts: 1219 · ID: 12001 · Credit: 18,565,548 · RAC: 0

> Sieving for min n > 2 would require some optimizations to fsievecl and fpsieve.
> Not sure what you mean by this. Please elaborate.
The current software starts at n=2 and iteratively multiplies terms up to minn. That's fine for the project, but if someone wants to go above n=1e6 (the maxn of the current range being sieved), then there are some optimizations that could be made to compute 1000001! % p where n=1000001 would be the minn of the next range.  
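The startup cost rogue describes can be seen in a short sketch (my own toy code, not the actual fsievecl/fpsieve internals): to sieve n in [minn, maxn] against a prime p, the residue minn! % p is built by multiplying from 2, and each subsequent n only costs one multiply.

```python
def factorial_mod(n, p):
    # O(n) startup: multiply 2*3*...*n, reducing mod p at each step.
    # This is the loop an optimized siever would want to shortcut
    # when minn is large (e.g. minn = 1000001 for the next range).
    r = 1
    for k in range(2, n + 1):
        r = (r * k) % p
    return r

def sieve_range(minn, maxn, p):
    """Yield n in [minn, maxn] such that p divides n!-1 or n!+1."""
    r = factorial_mod(minn, p)    # the expensive part when minn is large
    for n in range(minn, maxn + 1):
        if r == 1 or r == p - 1:  # n! is +1 or -1 (mod p)
            yield n
        r = (r * (n + 1)) % p     # extend to (n+1)! with one multiply
```

For example, list(sieve_range(2, 6, 7)) finds n = 3, 5, 6, since 7 divides 3!+1, 5!-1, and 6!+1.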

Roger (Volunteer developer, Volunteer tester)
Joined: 27 Nov 11 · Posts: 1137 · ID: 120786 · Credit: 267,684,352 · RAC: 4,733

> w > 1 because if p | n!+/-1 then p > n.
> See http://www.ams.org/journals/mcom/2002-71-237/S0025-5718-01-01315-1/home.html.
I didn't think w=1.0, I just hadn't put a formula to it yet.
Caldwell and Yves' paper starts off with Stirling's Formula like I did, but then says n!+/-1 does not behave like a random variable, so it has weight e^γ * ln(n),
where γ is the Euler–Mascheroni constant = 0.577215664901533...
(Don't think I would have figured that out in a hurry.)
With that weight the density becomes e^γ * ln(n) / (n*ln(n) - n) ~= e^γ / n, which integrates to e^γ * (ln(B) - ln(A)).
Before the start of the challenge, the leading edge of the Factorial prime search was up to n=165415 on both the plus and minus sides.
At the end of the challenge the leading edge was up to about n=171716 (that's the last one I completed crunching, anyway).
From the paper, the number of Factorial primes expected in the range A to B = e^γ * (ln(B) - ln(A)) = 0.0665844691.
So we had a 6.6% chance of finding a prime!
In the paper, Primorial ends up with the same formula.
Before the start of the challenge, the leading edge of the Primorial prime search was up to n=1,790,070 on both the plus and minus sides.
At the end of the challenge the leading edge was up to about n=1,865,579 (that's the last one I completed crunching, anyway).
From the paper, the number of Primorial primes expected in the range A to B = e^γ * (ln(B) - ln(A)) = 0.0735880979.
So we had a 7.3% chance of finding a prime#
Can this formula be extended to Compositorial? The formula expects 17 primes for n <= 14,000.
Ignoring n's yielding the same primes, we see 19 primes found on the plus side and 22 on the minus side for n <= 14,000.
This is similar to the gap between expected and actual counts for Factorial/Primorial, so I'll adopt the formula.
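The expected counts quoted above can be checked directly (a quick sketch; the compositorial figure assumes the cumulative count is taken as e^γ * ln(B), which is what reproduces the quoted 17):

```python
import math

E_GAMMA = math.exp(0.577215664901533)  # e^gamma, gamma = Euler-Mascheroni constant

def expected(A, B):
    # Expected primes on one side (plus or minus) for A < n <= B,
    # per the Caldwell & Gallot formula e^gamma * (ln(B) - ln(A)).
    return E_GAMMA * (math.log(B) - math.log(A))

print(expected(165415, 171716))     # Factorial challenge range, ~0.0666
print(expected(1790070, 1865579))   # Primorial challenge range, ~0.0736
print(E_GAMMA * math.log(14000))    # Compositorial to n=14,000, ~17.0
```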
> The current software starts at n=2 and iteratively multiplies terms up to minn. That's fine for the project, but if someone wants to go above n=1e6 (the maxn of the current range being sieved), then there are some optimizations that could be made to compute 1000001! % p where n=1000001 would be the minn of the next range.
I don't think we're going to need minn > 2 for a long time.  
