Message boards : Sophie Germain Prime Search : Possible interest in searching n=1.7M and/or n=3.322M once n=1.29M is complete?
I've done some quad-sieving on n=1.7M and on n=3.322M and was wondering whether anyone here is interested in eventually testing those candidates. Nothing's been set in stone yet, but TPS may soon start testing n=1.7M and/or n=3.322M, either on a PRPNet server or as manual reservations on the TPS site.
However, unless TPS gets exceptionally lucky, there won't be a twin or sophie found for either candidate by the time our n=1.29M file wraps up a few years from now. At that time, if the admins agree, we could then work on the n=1.7M and n=3.322M candidates that haven't yet been searched by TPS. It would also be up to the admins, but another thing that could be considered is reviving the old TPS badge for those candidates.
In case anyone's curious, those two n's were chosen primarily for efficiency, as they were near an FFT length and are just over 500,000 digits and 1,000,000 digits long (more details are available at: https://www.mersenneforum.org/showthread.php?t=28336). As an added bonus, quad-sieving yields a greater chance of finding sophies than triple-sieving, which was used for n=1.29M (this comparison assumes that you're quad-sieving and triple-sieving the same n-value).
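The quad-versus-triple comparison can be illustrated with a toy sieve. This is a from-scratch sketch, not the actual sieve software, and the exact form combinations the real sieves track may differ; the point is only the subset relation: requiring a fourth form to be free of small factors can only shrink the candidate file, so each surviving candidate carries a higher chance of yielding a pair.

```python
# Toy sketch (not the real sieve software): compare a triple sieve that keeps
# k when k*2^n-1, k*2^n+1 and k*2^(n+1)-1 all lack small factors with a quad
# sieve that additionally requires k*2^(n+1)+1 to lack small factors.
# The real sieves use vastly larger bounds; the subset relation is the point.

SMALL_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]

def has_small_factor(x):
    return any(x % p == 0 for p in SMALL_PRIMES)

def survivors(n, kmax, forms):
    out = []
    for k in range(1, kmax + 1, 2):          # odd k only
        if not any(has_small_factor(f(k, n)) for f in forms):
            out.append(k)
    return out

TRIPLE = [lambda k, n: k * 2**n - 1,
          lambda k, n: k * 2**n + 1,
          lambda k, n: k * 2**(n + 1) - 1]
QUAD = TRIPLE + [lambda k, n: k * 2**(n + 1) + 1]

t = survivors(25, 5000, TRIPLE)
q = survivors(25, 5000, QUAD)
print(len(t), len(q))       # the quad file is smaller
print(set(q) <= set(t))     # and is a subset of the triple file
```

Every quad survivor is by construction also a triple survivor, so a random candidate drawn from the smaller quad file is at least as likely to produce a twin or SG pair.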
Feedback is greatly appreciated, so please let me know your thoughts. Thanks!
____________
My largest prime: 342924651 · 2^3394939-1 (1,021,988 digits)
My largest twin prime: 12599682117 · 2^211088+/-1 (63,554 digits)
My pictures: https://www.flickr.com/people/michael-kwok/
Yes, it is a shame that our SGS project (at n=1.29M) is not based on a proper quad-sieve.
But how much sieving work does it take to reach a point where it is reasonable to start primality tests of numbers with the new exponent? You say you already sieved for two new n. How deeply? How many candidates survived the sieving?
Personally, I would not mind if PrimeGrid abandoned the current SGS region (n=1.29M) and went to new territory already. But as I understand, the current plan (and promise?) is to stay there until the old sieve-file is exhausted.
Of course, a new bigger n should be run with LLR2 with the Gerbicz hardware error check and the Pietrzak fast verification enabled.
/JeppeSN
Michael Goetz (Volunteer moderator, Project administrator)
Joined: 21 Jan 10  Posts: 13956  ID: 53948  Credit: 393,160,197  RAC: 187,115
Personally, I would not mind if PrimeGrid abandoned the current SGS region (n=1.29M) and went to new territory already. But as I understand, the current plan (and promise?) is to stay there until the old sieve-file is exhausted.
Of course, a new bigger n should be run with LLR2 with the Gerbicz hardware error check and the Pietrzak fast verification enabled.
/JeppeSN
Also, there are higher-priority projects that need to get done sooner, so even if we wanted to start a new SGS range, there would be things ahead of it in the queue.
____________
My lucky number is 75898^524288+1
Yes, it is a shame that our SGS project (at n=1.29M) is not based on a proper quad-sieve.
But how much sieving work does it take to reach a point where it is reasonable to start primality tests of numbers with the new exponent? You say you already sieved for two new n. How deeply? How many candidates survived the sieving?
Personally, I would not mind if PrimeGrid abandoned the current SGS region (n=1.29M) and went to new territory already. But as I understand, the current plan (and promise?) is to stay there until the old sieve-file is exhausted.
Of course, a new bigger n should be run with LLR2 with the Gerbicz hardware error check and the Pietrzak fast verification enabled.
/JeppeSN
Thanks for your input!
Calculating the ideal sieve depth for triple and quad sieves is a bit tricky since the number of remaining candidates drops much faster than the average number of tests per prime. For example, let's suppose you sieved k=1-1T for n=3.322M. At a sieve depth of p=10G, you'd expect to find an average of one mega prime for every 56,150 completed tests, and there would be approximately 1,468,000 candidates in the file. If you sieved the same file to p=10T instead, you'd expect to find an average of one mega prime for every 43,192 tests, but there would only be about 514,000 candidates left in the file.
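The diminishing-returns arithmetic in that example can be spelled out directly (same numbers as above):

```python
# Numbers from the n=3.322M, k=1-1T example above: deepening the sieve from
# p=10G to p=10T removes far more candidates than it improves the per-prime cost.
p10G = {"candidates": 1_468_000, "tests_per_prime": 56_150}
p10T = {"candidates": 514_000, "tests_per_prime": 43_192}

cand_shrink = p10G["candidates"] / p10T["candidates"]          # ~2.86x fewer tests to run
cost_gain = p10G["tests_per_prime"] / p10T["tests_per_prime"]  # only ~1.30x cheaper per prime
print(f"candidates shrink {cand_shrink:.2f}x, tests-per-prime improves {cost_gain:.2f}x")
```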
In case anyone's interested, the stats for n=1.7M are below. Note that the number of candidates in the sieve file is the same for both n=1.7M and n=3.322M; it's the number of tests per prime that's different.
p=1G: 2,237,000 candidates per 1T; 31,928 tests per prime
p=10G: 1,468,000 candidates per 1T; 28,735 tests per prime
p=100G: 1,003,000 candidates per 1T; 26,123 tests per prime
p=1T: 708,000 candidates per 1T; 23,946 tests per prime
p=10T: 514,000 candidates per 1T; 22,104 tests per prime
p=100T: 382,000 candidates per 1T; 20,525 tests per prime
p=1P: 290,000 candidates per 1T; 19,157 tests per prime
p=10P: 224,000 candidates per 1T; 17,960 tests per prime
The optimal sieve depth is therefore generally much lower than it would be for a twin-only sieve. IIRC, n=333333 was twin-sieved to p=25000T, n=666666 was quad-sieved to p=200T, and n=1.29M was triple-sieved to p=1500T.
I completely agree with you on promoting quad sieving and doing the faster Gerbicz/Pietrzak verification instead of running two separate LLR tests and hoping that you're the first one to finish. I also ran the calcs and found that even if n=1.29M were quad-sieved to only p=3T (!), a random candidate from that file would have a greater chance of being either a twin or a sophie than if it were triple-sieved to the actual sieve depth of p=1500T.
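For readers unfamiliar with the Gerbicz check mentioned here, the core identity is easy to demonstrate on a plain squaring chain. This is a toy sketch of the idea, not LLR2's implementation; real programs run the comparison only occasionally, since each check costs L extra squarings.

```python
# Toy Gerbicz error check for the squaring chain u -> u^2 mod N.
# Checkpoints c_j = u0^(2^(j*L)) satisfy c_{j+1} = c_j^(2^L), so the running
# product d_j = c_0*c_1*...*c_j satisfies d_{j+1} = u0 * d_j^(2^L) (mod N).
# A hardware fault in the chain almost surely breaks this identity.

def checked_squarings(u0, steps, L, N):
    u = u0 % N
    d = u            # running product of checkpoints, starts at c_0 = u0
    for i in range(1, steps + 1):
        u = u * u % N
        if i % L == 0:
            d_prev, d = d, d * u % N
            # the Gerbicz identity; a mismatch means a computation error
            assert d == u0 * pow(d_prev, 1 << L, N) % N, "error detected"
    return u

N = (1 << 127) - 1
assert checked_squarings(3, 1000, 10, N) == pow(3, 1 << 1000, N)
```

The check catches a flipped bit anywhere in the last L squarings at the cost of one multiply per L iterations plus a rare verification, which is why it is so much cheaper than running two full LLR tests.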
AFAIK, the admins have wanted to finish n=1.29M before starting a new n, but if they change their mind, I can provide the file earlier (it's sieved to a few hundred T). But while I'm confident about those n-values, I think it would still be a good idea to confirm on many different CPUs that n=1.7M and n=3.322M are indeed the best choices (see https://www.primegrid.com/forum_thread.php?id=4284 for reference). If interested crunchers could post their CPU specs and their times on the following k/n values, it would be a great help.
1651608592:Z:3:2:46
100000000311945 1660000
100000000695105 1680000
100000000418475 1700000
100000000045455 1720000
100000000734975 3282000
100000001333085 3302000
100000000362885 3322000
100000000678245 3342000
____________
If you or someone else posts the right command line arguments to use to get the desired information, I’d be happy to check FFT sizes on the various computers I own.
____________
Yves Gallot (Volunteer developer, Project scientist)
Joined: 19 Aug 12  Posts: 803  ID: 164101  Credit: 305,700,039  RAC: 5,444
Calculating the ideal sieve depth for triple and quad sieves...
See Quad Sieve for twin and Sophie Germain primes and the qsieve project.
For one mega-twin or mega-SG, more than a billion candidates must be tested and more than 35,000 mega-primes are needed.
Then the sieve must be a distributed project. It should run concurrently with the twin/SG search.
I wrote on Discord:
The successor of WW could be a Quad Sieve for the search for twin and SG primes.
The two programs are not so different: (1) a fast prime number generator, (2) the evaluation of a relation modulo p, and, new for qsieve, (3) filling a bitmap.
mfl0p already wrote (1) and (2) for WW.
qsieve defines a possible method. If the target is 600,000+ digits, we can start at n_0 = 2M. Each task checks 3 <= k < 2^32 and n_i <= n < n_i + 64. A first task can test p < 10^13, then 141k candidates are generated (a small file). With 100 or 1000 tasks per block, the depth of the sieve can be p_max = 10^15 or 10^16 and the number of remaining candidates is 80k or 60k. We can set k < 2^32 and the blocks are n in [2M, 2M+64[, [2M+64, 2M+128[, ...
The size of the bitmap is 1 gigabyte. If it was a problem 10 years ago, it is not anymore. The sieve can run on GPU or CPU and be a distributed search.
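The "relation modulo p, fill a bitmap" step he describes can be sketched for a single form (a from-scratch toy, not qsieve itself): for each sieving prime p, one modular inverse locates the entire residue class of k that p eliminates.

```python
# Toy of the bitmap-sieve step for one form, k*2^n + 1: an odd prime p
# divides k*2^n + 1 exactly when k ≡ -2^(-n) (mod p), so a single modular
# inverse replaces trial division across the whole k range.

def sieve_form(n, kmax, primes):
    alive = bytearray([1]) * (kmax + 1)   # bitmap over k = 0..kmax
    for p in primes:
        # pow with a negative exponent (modular inverse) needs Python 3.8+
        k0 = (-pow(2, -n, p)) % p         # k ≡ k0 (mod p) gives a multiple of p
        for k in range(k0 if k0 else p, kmax + 1, p):
            alive[k] = 0
    return [k for k in range(1, kmax + 1) if alive[k]]

survivors = sieve_form(1000, 10_000, [3, 5, 7, 11, 13, 17, 19])
# spot check against trial division
assert all((k * 2**1000 + 1) % p for k in survivors for p in [3, 5, 7, 11, 13, 17, 19])
```

A real quad sieve does this for all four forms at once and for p up to 10^15 or more, with the bitmap shared across tasks; the per-prime cost is the same single inverse per form.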
If someone is interested: qsieve is a proof of concept, but a real program for a distributed search still has to be written. And for now, there are other developments I would like to work on.
If you or someone else posts the right command line arguments to use to get the desired information, I’d be happy to check FFT sizes on the various computers I own.
Thanks for your offer!
Some time ago, Michael Goetz posted a helpful guide at: https://www.primegrid.com/forum_thread.php?id=4284&nowrap=true#53322
[Noob mode on]
What's the command I should give to llr?
[Noob mode off]
Copy the numbers from Lennart's post (including the header line) into a file, e.g., "lennart.in", and then use the following command line:
llr -d lennart.in
The "-d" causes llr to print out information as it crunches.
-- break --
Lennart, I just started running that bunch on my Core2Quad Q6600 @2.4GHz. The first one is running at about 2.243 ms per iteration. I'll post the full results once it's done.
Another option would be to use the GUI version of LLR, which is available for download at: http://jpenne.free.fr/index2.html . But if you're still having difficulties, please let us know.
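Putting the pieces together, a minimal command-line version of the above (the file name and binary location are just examples; the k/n list is the one posted earlier in this thread):

```shell
# Create the input file: header line plus the eight k/n pairs from this thread.
cat > timings.txt <<'EOF'
1651608592:Z:3:2:46
100000000311945 1660000
100000000695105 1680000
100000000418475 1700000
100000000045455 1720000
100000000734975 3282000
100000001333085 3302000
100000000362885 3322000
100000000678245 3342000
EOF

# Then run LLR in timing mode (assumes the llr binary is in the current directory):
# ./llr -d timings.txt
wc -l timings.txt    # sanity check: should report 9 lines
```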
____________
Personally, I would not mind if PrimeGrid abandoned the current SGS region (n=1.29M) and went to new territory already. But as I understand, the current plan (and promise?) is to stay there until the old sieve-file is exhausted.
Of course, a new bigger n should be run with LLR2 with the Gerbicz hardware error check and the Pietrzak fast verification enabled.
/JeppeSN
Also, there's higher priority projects that need to get done sooner, so even if we wanted to start a new SGS range there would be things ahead of it in the queue.
Yeah, an n=3.322M search would currently compete with PPS and GFN17 since they all have a similar number of digits.
There's plenty of time to decide, but once we're done with n=1.29M, I wonder whether it's better to run both n=1.7M and n=3.322M at the same time, or whether we should continue to do one n-value at a time (this assumes that the timings for both exponents are confirmed to be good).
____________
Michael Goetz (Volunteer moderator, Project administrator)
Some time ago, Michael Goetz posted a helpful guide at: https://www.primegrid.com/forum_thread.php?id=4284&nowrap=true#53322
The only important words there are the first three.
____________
Some time ago, Michael Goetz posted a helpful guide at: https://www.primegrid.com/forum_thread.php?id=4284&nowrap=true#53322
The only important words there are the first three.
I just tried it on the latest LLR version (4.0.3), and fortunately, those instructions still seem to work. The process is pretty much the same - first, create a new file called "timings.txt" containing the following:
1651608592:Z:3:2:46
100000000311945 1660000
100000000695105 1680000
100000000418475 1700000
100000000045455 1720000
100000000734975 3282000
100000001333085 3302000
100000000362885 3322000
100000000678245 3342000
Then, put the timings.txt file in the same folder as the LLR executable and use the following command: cllr64 -d timings.txt
The test should begin; I've also taken a screenshot in case it helps: https://imgur.com/NotyJtJ
____________
Yes, it is a shame that our SGS project (at n=1.29M) is not based on a proper quad-sieve.
But how much sieving work does it take to reach a point where it is reasonable to start primality tests of numbers with the new exponent? You say you already sieved for two new n. How deeply? How many candidates survived the sieving?
Personally, I would not mind if PrimeGrid abandoned the current SGS region (n=1.29M) and went to new territory already. But as I understand, the current plan (and promise?) is to stay there until the old sieve-file is exhausted.
AFAIK, the admins have wanted to finish n=1.29M before starting a new n, but if they change their mind, I can provide the file earlier (it's sieved to a few hundred T).
n=1.7M has been sieved to p=540T for k=1-750T. It's now available for testing on PRPNet (port 12000) at TPS:
http://www.noprimeleftbehind.net/tps/ .
Number of candidates that survived the sieving: 233.6M
Probability of finding at least one significant prime pair (either twin or Sophie Germain) in that range: 84%
Probability of finding at least one Sophie: 71%
Probability of finding at least one twin: 46%
For reference, here were the stats for PrimeGrid's n=666666 quad sieve:
Range: k=1-41T
Sieve depth: p=200T
Candidates remaining: 34.2M
Probability of one or more significant pair = 80%
Probability of one or more SG = 67%
Probability of one or more Twin = 42%
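As a sanity check on those percentages (assuming, as a simplification, that finding at least one twin and finding at least one SG are independent events), combining the twin and SG probabilities reproduces the "significant pair" figures up to rounding of the inputs:

```python
# Numbers from the two stat blocks above (n=1.7M and n=666666).
def pair_prob(p_sg, p_twin):
    # P(at least one twin or SG) under an independence assumption
    return 1 - (1 - p_sg) * (1 - p_twin)

print(f"n=1.7M:   {pair_prob(0.71, 0.46):.1%}")   # posted value: 84%
print(f"n=666666: {pair_prob(0.67, 0.42):.1%}")   # posted value: 80%
```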
Note that since PrimeGrid is only doing n=1.29M now, there won't be any PSA or SGS credits awarded for n=1.7M. You get bragging points on TPS, though ;)
On a related note, quad-sieving for n=3.322M is currently a bit over p=700T and will be available for testing at TPS in the near future. Happy prime hunting!
____________
See Quad Sieve for twin and Sophie Germain primes and the qsieve project.
For one mega-twin or mega-SG, more than a billion candidates must be tested and more than 35,000 mega-primes are needed.
Then the sieve must be a distributed project. It should run concurrently with the twin/SG search.
I wrote on Discord:
The successor of WW could be a Quad Sieve for the search for twin and SG primes.
The two programs are not so different: (1) a fast prime number generator, (2) the evaluation of a relation modulo p, and, new for qsieve, (3) filling a bitmap.
mfl0p already wrote (1) and (2) for WW.
qsieve defines a possible method. If the target is 600,000+ digits, we can start at n_0 = 2M. Each task checks 3 <= k < 2^32 and n_i <= n < n_i + 64. A first task can test p < 10^13, then 141k candidates are generated (a small file). With 100 or 1000 tasks per block, the depth of the sieve can be p_max = 10^15 or 10^16 and the number of remaining candidates is 80k or 60k. We can set k < 2^32 and the blocks are n in [2M, 2M+64[, [2M+64, 2M+128[, ...
The size of the bitmap is 1 gigabyte. If it was a problem 10 years ago, it is not anymore. The sieve can run on GPU or CPU and be a distributed search.
If someone is interested: qsieve is a proof of concept, but a real program for a distributed search still has to be written. And for now, there are other developments I would like to work on.
I'm excited to announce that n=3.322M has been quad-sieved to p=1130T (a bit over 2^50) and is now available for testing on TPS port 13000. Let's see if we can find that mega-twin or Sophie before 2033 ;)
____________