

Message boards : Project Staging Area : Chance of a Factorial Prime

Message 80804 - Posted: 16 Nov 2014 | 5:09:37 UTC

The Prime Number Theorem says that the chance of a random integer x being prime is about 1/log x (natural log).
For x = n!+/-1, the chance is about w/log(n!), where w is a weight factor discussed below.
To find the expected number of primes over a given range, we integrate from n = A to n = B.
We can use Stirling's approximation, n! ~= n^n * e^-n, and some log identities, i.e.

1/log(n^n * e^-n)
= 1/(log(n^n) + log(e^-n))
= 1/(n*log(n) - n*log(e))
= 1/(n*log(n) - n)

Integrating 1/(n*log(n) - n) gives log(log(n) - 1), so:
Expected number of primes = w * (log(log(B) - 1) - log(log(A) - 1))

The weight w is scaled so that w=1.0 gives the same density of primes as a randomly chosen set of odd numbers of the same magnitude.
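The unweighted estimate is easy to check numerically. A quick sketch of my own (the function name is mine, not from any project code), using the integral above with w = 1.0:

```python
import math

def expected_factorial_primes(A, B, w=1.0):
    """Expected count of primes among n!+1 (or n!-1) for A <= n <= B,
    from integrating w / (n*log(n) - n), i.e. w * log(log(n) - 1)."""
    return w * (math.log(math.log(B) - 1) - math.log(math.log(A) - 1))

# With no weighting, a 10,000-term range near n = 100,000 is expected
# to contain very few primes on each side:
print(expected_factorial_primes(100001, 110000))  # ~0.009
```

So if n!+/-1 behaved like random odd numbers, a range like this would almost never yield a prime; the weight w does the heavy lifting.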
You can use fpsieve to find the number of candidates remaining after sieving the range n=100001 to 110000 for factors up to P=1e6 (1 million):
>fpsieve -n 100001 -N 110000 -P 1e6 -x

Running this I found:
2528 terms with factors for n!-1, so 7472 remain and
2651 terms with factors for n!+1, so 7349 remain.
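One way to turn those survivor counts into an expectation (my own back-of-envelope sketch, not from the thread): by Mertens' theorem, a candidate that survives sieving to depth P has its chance of primality boosted by a factor of roughly e^γ·log(P), so each survivor has chance about e^γ·log(P)/log(n!), with log(n!) approximated by Stirling at the midpoint of the range:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def expected_primes_after_sieve(remaining, n_mid, sieve_depth):
    """Rough expected prime count among `remaining` sieve survivors near
    n = n_mid, each with chance ~ e^gamma * log(P) / log(n!),
    where log(n!) ~ n*log(n) - n (Stirling)."""
    log_factorial = n_mid * math.log(n_mid) - n_mid
    chance = math.exp(GAMMA) * math.log(sieve_depth) / log_factorial
    return remaining * chance

# Survivor counts from the fpsieve run above (n = 100001..110000, P = 1e6):
print(expected_primes_after_sieve(7472, 105000, 1e6))  # n!-1 side, ~0.17
print(expected_primes_after_sieve(7349, 105000, 1e6))  # n!+1 side, ~0.16
```

Reassuringly, this lands close to the e^γ·(ln B − ln A) ≈ 0.17 per side that the weighted integral gives for the same range.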

That's a good start. I'll finish trying to crack it later, unless someone else wants to take on the challenge.

Message 80805 - Posted: 16 Nov 2014 | 10:13:09 UTC

Rather than use fpsieve, why not just grab the sieve file that we use to generate work? It contains all n < 1M and has been sieved to p=5.505T.

rogue
Volunteer developer

Message 80810 - Posted: 16 Nov 2014 | 14:25:22 UTC

Sieving for min n > 2 would require some optimizations to fsievecl and fpsieve.

Yves Gallot
Volunteer developer
Project scientist

Message 80811 - Posted: 16 Nov 2014 | 15:00:31 UTC - in response to Message 80804.

Weight is scaled so w=1.0 has the same density of primes as a randomly chosen set of odd numbers of the same magnitude.

w > 1 because if p | n!+/-1 then p > n.
See http://www.ams.org/journals/mcom/2002-71-237/S0025-5718-01-01315-1/home.html.

Message 80819 - Posted: 17 Nov 2014 | 12:01:50 UTC - in response to Message 80805.

Rather than use fpsieve, why not just grab the sieve file that we use to generate work? It contains all n < 1M and has been sieved to p=5.505T.

Looking at the sieve file fsieve_5505G.txt, there are 3474 terms remaining for n!-1 and 3495 for n!+1 in the range n=100001 to 110000.
Using a more deeply sieved file will give a more accurate chance estimate.

Sieving for min n > 2 would require some optimizations to fsievecl and fpsieve.

Not sure what you mean by this. Please elaborate.

w > 1 because if p | n!+/-1 then p > n.
See http://www.ams.org/journals/mcom/2002-71-237/S0025-5718-01-01315-1/home.html.

Cool. I didn't know about this paper; I'll have to study it. Has anything been done on compositorial primes? As far as I can tell it's only been searched to 14,000. I searched to 11,000 with 2 cores in only 2 days.

rogue
Volunteer developer

Message 80828 - Posted: 17 Nov 2014 | 22:40:43 UTC - in response to Message 80819.

Sieving for min n > 2 would require some optimizations to fsievecl and fpsieve.

Not sure what you mean by this. Please elaborate.

The current software starts at n=2 and iteratively multiplies terms up to minn. That's fine for the project, but if someone wants to go above n=1e6 (the maxn of the current range being sieved), then there are some optimizations that could be made to compute 1000001! % p where n=1000001 would be the minn of the next range.
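The iterative approach described here is easy to sketch (hypothetical illustration of mine, not the actual fsieve/fpsieve source): keep n! mod p as a running residue, and flag any n where p divides n!-1 (residue 1) or n!+1 (residue p-1):

```python
def factorial_residues(min_n, max_n, p):
    """Yield (n, n! mod p) for min_n <= n <= max_n, building the residue
    iteratively from n=2 -- the approach the current software uses."""
    r = 1
    for n in range(2, max_n + 1):
        r = r * n % p
        if n >= min_n:
            yield n, r

# A term n!-1 has factor p when n! = 1 (mod p);
# a term n!+1 has factor p when n! = -1 (mod p).
p = 13
for n, r in factorial_residues(2, 12, p):
    if r == 1:
        print(f"{p} divides {n}!-1")   # fires at n=11
    if r == p - 1:
        print(f"{p} divides {n}!+1")   # fires at n=12 (Wilson's theorem)
```

The point of the proposed optimization is to replace the cheap but O(minn) warm-up loop (everything below min_n) with a direct computation of minn! mod p, so that a future range starting at n=1000001 doesn't pay for the first million multiplications per prime.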

Message 80894 - Posted: 22 Nov 2014 | 9:04:11 UTC - in response to Message 80811.

w > 1 because if p | n!+/-1 then p > n.
See http://www.ams.org/journals/mcom/2002-71-237/S0025-5718-01-01315-1/home.html.

I didn't think w was 1.0; I just hadn't put a formula to it yet.

Caldwell and Gallot's paper starts off with Stirling's formula like I did, but then says n!+/-1 does not behave like a random integer, so it carries weight e^γ ln(n),
where γ is the Euler–Mascheroni constant = 0.577215664901533...
(I don't think I would have figured that out in a hurry.)
Before the start of the challenge, the leading edge of the Factorial prime search was at 165,415 on both the plus and minus sides.
At the end of the challenge the leading edge was at about 171,716 (that's the last one I completed crunching, anyway).
From the paper, the number of factorial primes expected in the range A to B is e^γ (ln(B) - ln(A)) = 0.0665844691 (the weighted chance e^γ ln(n)/ln(n!) is roughly e^γ/n, which integrates to this).
So we had a 6.6% chance of finding a prime!

In the paper, primorial primes end up with the same formula.
Before the start of the challenge, the leading edge of the Primorial prime search was at 1,790,070 on both the plus and minus sides.
At the end of the challenge the leading edge was at about 1,865,579 (that's the last one I completed crunching, anyway).
From the paper, the number of primorial primes expected in the range A to B is e^γ (ln(B) - ln(A)) = 0.0735880979.
So we had a 7.4% chance of finding a prime#

Can this formula be extended to compositorial? The formula expects e^γ ln(14000) ≈ 17 primes for n <= 14,000.
Ignoring n's yielding the same primes, we see 19 primes found on the plus side and 22 on the minus side for n <= 14,000.
This is similar to the difference between expected and actual for factorial/primorial, so I'll adopt the formula.
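The three expectations above are quick to recompute; a small sanity-check sketch of my own (function name is mine):

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def expected_primes(A, B):
    """Caldwell-Gallot expectation for one side (n!+1 or n!-1, and
    likewise for primorials): e^gamma * (ln(B) - ln(A))."""
    return math.exp(GAMMA) * (math.log(B) - math.log(A))

print(expected_primes(165415, 171716))    # factorial challenge range, ~0.0666
print(expected_primes(1790070, 1865579))  # primorial challenge range, ~0.0736
print(math.exp(GAMMA) * math.log(14000))  # compositorial to n=14,000, ~17
```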

The current software starts at n=2 and iteratively multiplies terms up to minn. That's fine for the project, but if someone wants to go above n=1e6 (the maxn of the current range being sieved), then there are some optimizations that could be made to compute 1000001! % p where n=1000001 would be the minn of the next range.

I don't think we're going to need minn > 2 for a long time.
