Message boards :
321 Prime Search :
Contents of _cmd files
Author 
Message 
Bur
Volunteer tester
Send message
Joined: 25 Feb 20 Posts: 515 ID: 1241833 Credit: 415,429,336 RAC: 21,060

To find out which numbers are actually checked on my computer, I had a look into the _cmd files. They contain something like sr2sieve -p 52830560e9 -P 52830570e9. These numbers appear a bit large to be the n in the 321 primes.
There is also a large .sieveinput file where the first line reads ABCD 3*2^$a-1 [25000034]. The remainder of the file is many lines of 1- to 3-digit numbers.
Does anyone here know the meaning of these two types of files?


Are you aware of the difference between "321 Prime Search (Sieve)" and "321 Prime Search (LLR)"? You are doing the former, the sieve.
That means you are not primality testing any numbers. You are instead checking whether the primes in the interval from 52830560e9 to 52830570e9, i.e. near 52*10^15 or 52 quadrillion ("52 peta"), divide any 3*2^n - 1 candidates, I think.
If you want to check a particular number, choose the 321 LLR subproject instead. Tasks take much longer.
/JeppeSN  
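[Editor's aside: the divisibility check described above can be sketched in a few lines of Python. This is purely illustrative; sr2sieve itself uses much more efficient discrete-log techniques, not a per-candidate loop like this.]

```python
# Illustrative sketch only: how a sieve decides that a prime p
# "crosses out" a candidate 3*2^n + c (with c = +1 or -1).
# Not sr2sieve's actual algorithm.

def divides_321(p, n, c):
    """Return True if p divides 3*2^n + c (c is +1 or -1)."""
    # pow(2, n, p) is modular exponentiation, so this stays fast
    # even for the multi-million-digit n used by the project.
    return (3 * pow(2, n, p) + c) % p == 0

# Tiny examples with small numbers:
print(divides_321(47, 4, -1))  # True: 3*2^4 - 1 = 47
print(divides_321(7, 1, 1))    # True: 3*2^1 + 1 = 7
print(divides_321(5, 4, -1))   # False: 47 is not divisible by 5
```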

Ravi Fernando
Project administrator Volunteer tester Project scientist
Send message
Joined: 21 Mar 19 Posts: 211 ID: 1108183 Credit: 14,457,450 RAC: 2,357

To add to JeppeSN's correct answer: the second file is a sieve file following the "ABCD" format which is used (for example) by the program OpenPFGW. The full documentation for ABCD (as well as two related formats, ABC and ABC2) is included with OpenPFGW; you can find a download link via google.
The short explanation is that it encodes a very long list of numbers of the form 3*2^a-1, where "a" starts at 25,000,034 and then increments by each of the numbers you see below. (There are gaps because we already have factors for some of these numbers.) The 321 sieve program will determine whether any of these numbers are divisible by a prime between 52,830,560,000,000,000 and 52,830,570,000,000,000; if it finds any, it will report the factors back to the server. We will later get to skip the primality test for 3*2^a-1 for that value of a, since we will already know it's composite.
If you are exceptionally observant, you may notice that the ABCD file actually has another line that looks like the first, a little over halfway down. This is for numbers of the form 3*2^a+1 (instead of -1). We are currently sieving both forms for 25,000,000 <= a < 50,000,000.  
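[Editor's aside: the delta encoding described above can be sketched as a small parser. The field layout here is an assumption based only on this thread, not on the full OpenPFGW documentation.]

```python
# Hedged sketch: decoding the delta-encoded exponent list in an
# ABCD-format sieve file, as described in the post above. The
# bracketed number in the header is the starting exponent; each
# following line is an increment to the previous exponent.

def parse_abcd(lines):
    """Yield (form, exponent) pairs; handles multiple ABCD headers."""
    form, a = None, None
    for line in lines:
        line = line.strip()
        if line.startswith("ABCD"):
            form = line.split()[1]                   # e.g. "3*2^$a-1"
            a = int(line.split("[")[1].rstrip("]"))  # starting exponent
            yield form, a
        elif line:
            a += int(line)                           # delta to next exponent
            yield form, a

sample = ["ABCD 3*2^$a-1 [25000034]", "3", "12"]
print(list(parse_abcd(sample)))
# [('3*2^$a-1', 25000034), ('3*2^$a-1', 25000037), ('3*2^$a-1', 25000049)]
```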

Bur
Volunteer tester
Send message
Joined: 25 Feb 20 Posts: 515 ID: 1241833 Credit: 415,429,336 RAC: 21,060

Thanks for your answers, that helped a lot.
I looked up sieving and found the sieve of Eratosthenes, so I thought 321 Sieve meant the computer was doing that, i.e. crossing out numbers in a given range until only the prime remained.  


I looked up sieving and found the sieve of Eratosthenes, so I thought 321 Sieve meant the computer was doing that, i.e. crossing out numbers in a given range until only the prime remained.
It is crossing out numbers!
The difference is, it is absolutely impossible to continue until only primes remain. It would take forever because the numbers are so big.
Instead we cross out numbers as long as the time/effort it takes to remove one additional number by "crossing out" is still lower or comparable to the time it would take to remove one number by running a full primality test on such a number.
/JeppeSN  

Bur
Volunteer tester
Send message
Joined: 25 Feb 20 Posts: 515 ID: 1241833 Credit: 415,429,336 RAC: 21,060

On my computer the 321 sieving finds a factor in roughly every 40th to 45th WU, which is probably about the general average. According to preferences, 321 LLR WUs take 2515 mins on average and 321 sieve WUs take 81 mins on average.
At first glance this seems a waste of time, since 42.5 times 81 mins is 3442.5 mins. But I think I read somewhere that each successful sieve WU removes 2 LLR WUs. In that case sieving saves a lot of CPU time.
Why is it 2 prime candidates? Because it sieves for both 3*2^a + 1 and - 1?
And if that is so, wouldn't it make sense to at first offer 321 sieving only, until it no longer makes sense with respect to CPU time, and only then begin to hand out LLR prime test WUs?  

Ravi Fernando
Project administrator Volunteer tester Project scientist
Send message
Joined: 21 Mar 19 Posts: 211 ID: 1108183 Credit: 14,457,450 RAC: 2,357

Why is it 2 prime candidates? Because it sieves for both 3*2^a + 1 and - 1?
It's not two candidates. It's just that two people have to test each candidate, and sieving avoids both of those tests.
And if that is so, wouldn't it make sense to at first offer 321 sieving only, until it doesn't make sense anymore in respect to CPU time and only then begin to hand out LLR prime test WUs?
In a sense, that's what we're doing. The current 321 sieve is for values of n between 25 million and 50 million. The 321 LLR project is currently a little short of 16 million, and progressing by about a million per year. So the sieving we're currently doing won't be needed for several years. (And the numbers we're currently LLR testing were sieved years ago.)
Edit: My understanding is that the admins added n's between 16M and 25M back into the sieve file just a few days ago, as described in this post. So we are actually sieving some "present-day" candidates as well. But those candidates have already been sieved far enough (to 61P) that LLR testing is roughly as efficient as sieving at this point.  

Bur
Volunteer tester
Send message
Joined: 25 Feb 20 Posts: 515 ID: 1241833 Credit: 415,429,336 RAC: 21,060

It's not two candidates. It's just that two people have to test each candidate, and sieving avoids both of those tests. But the sieve WU is also double-checked. So that shouldn't matter?
Sieving WUs check both the +1 and -1 candidates; do the LLR WUs, too?
My average went way down to 1 in 59, which means recently it's even lower. And if it's a 1:2 sieve:llr ratio, then the recent days probably have not been productive.  


The sieve WU is also double-checked, to prevent eliminating one candidate too many or one too few.
LLR WUs check one candidate at a time. Sieving eliminates the obviously composite candidates so that the LLR WUs don't include them.
About the average: the deeper sieving goes, the fewer factors are found. But 321 SV isn't going deeper quickly, so your situation is just random luck.
____________
My lucky number is 6219*2^3374198+1
 

Bur
Volunteer tester
Send message
Joined: 25 Feb 20 Posts: 515 ID: 1241833 Credit: 415,429,336 RAC: 21,060

To be honest, I still don't understand why one sieving WU eliminates two LLR WUs. I thought all WUs were double-checked to rule out errors in calculation; isn't that true for sieving?
But 321 SV isn't going deeper quickly, so your situation is just random luck. Interesting, it's decreasing further and is now down to 1:65. Do you have stats for the whole subproject?  


To be honest, I still don't understand why one sieving WU eliminates two LLR WUs.
It is not quite accurate. If you go to Your account, under 321 Prime Search tasks (Sieve) you will see something called Factors found. If one sieving WU (both 1st task and 2nd task) finds n factors, then that can potentially eliminate the need for n LLR WUs (each with a 1st and a 2nd task).
But maybe this is what you mean by 1:65? Do you mean we are near 0.0154 factors found per WU in 321 sieve?
As far as I know, most 321 sieve WUs, nowadays, find no factors. But a WU that does find one or more factors, will mean we can save one or more huge LLR WUs.
Regarding +1 and -1: the 321 subproject considers both forms. For the LLR WUs, it means some of them are a prime candidate of the +1 form, and others are of the -1 form. From a search, you can see that historically LLR has had success with both the +1 and -1 forms.
/JeppeSN  


So here's the tl;dr:
A good and deep sieve is crucial for a prime search. But how deep is too deep? That question isn't pressing for 321 SV yet, but we'll get close in around two years.
321 SV is slowly getting deeper with every task, but the rate is slow and unnoticeable; your case was pure luck. Really. I have about 41 tasks per factor, which constantly moves up and down. Possibly your next task will contain a factor, reducing the 1:65 rate to around 1:56 (just a random number, don't take this seriously).
321 SV was last closed around 2012, and the admins decided to reopen it in Mar 2019. (Could it be that they wanted to fill the gap left by GCW SV closing?) The new sieve covers everything from the current leading edge up to n = 50M. If a factor is found for a specific candidate, it's automatically removed from the candidates.
To be honest, I still don't understand why one sieving WU eliminates two LLR WUs. I thought all WUs were double-checked to rule out errors in calculation; isn't that true for sieving?
All WUs are double-checked on the PrimeGrid BOINC server. PRPNet projects 27 (-1 form), 121, FPS, and PRS aren't double-checked. One sieving WU, if it finds a factor, eliminates two tasks, not two WUs. This is because one candidate gets turned into two tasks.
As far as I know, most 321 sieve WUs, nowadays, find no factors. But a WU that does find one or more factors, will mean we can save one or more huge LLR WUs.
About the huge WUs, I'd always wanted to ask why 321 WUs are significantly shorter than ESP and GCW ones, even though the numbers tested are bigger.
Do you have stats for the whole subproject?
Yes sir, the stats are here.
____________
My lucky number is 6219*2^3374198+1
 


One sieving WU, if it finds a factor, eliminates two tasks, instead of two WUs. This is because one candidate gets turned into two tasks.
One sieving WU (two sieving tasks), finding a single factor, eliminates one LLR WU (two LLR tasks). So in that case, it takes two sieving tasks to eliminate two LLR tasks. The ratio is two-to-two, 2:2. /JeppeSN  


Quoting the original post again.
To find out which numbers are actually checked on my computer, I had a look into the _cmd files. They contain something like sr2sieve -p 52830560e9 -P 52830570e9. These numbers appear a bit large to be the n in the 321 primes.
There is also a large .sieveinput file where the first line reads ABCD 3*2^$a-1 [25000034]. The remainder of the file is many lines of 1- to 3-digit numbers.
Does anyone here know the meaning of these two types of files?
The range p from 52 peta to 53 peta is now done, and we see in stats_321_sieve.php that 1049 factors were found. Is that reliable?
In the task quoted, a range in p from 52.83056 peta to 52.83057 peta was covered. That is one 100'000th of the length of the range 52 peta to 53 peta. In the ABCD file (sieveinput), the minus form (3*2^$a-1) was given. The exponent n (called $a in that file) started from 25'000'034 and was incremented by the "delta n" values given on each of the following lines.
Is there more than one _cmd file per task? Is there more than one ABCD/sieveinput file per task? For example, one with the +1 form? What was the total number of WUs in the interval p = 52 peta ... 53 peta?
/JeppeSN  

Ravi Fernando
Project administrator Volunteer tester Project scientist
Send message
Joined: 21 Mar 19 Posts: 211 ID: 1108183 Credit: 14,457,450 RAC: 2,357

The range p from 52 peta to 53 peta is now done, and we see in stats_321_sieve.php that 1049 factors were found. Is that reliable?
In the task quoted, a range in p from 52.83056 peta to 52.83057 peta was covered. That is one 100'000th of the length of the range 52 peta to 53 peta. In the ABCD file (sieveinput), the minus form (3*2^$a-1) was given. The exponent n (called $a in that file) started from 25'000'034 and was incremented by the "delta n" values given on each of the following lines.
Is there more than one _cmd file per task? Is there more than one ABCD/sieveinput file per task? For example, one with the +1 form? What was the total number of WUs in the interval p = 52 peta ... 53 peta?
As I said above, there is a second ABCD header in the same file, a little over halfway down, for c = +1. (If you don't want to scroll through half a million lines of text, you can Ctrl-F for "+".) Yes, there are 100,000 WUs per P. And I have no reason to doubt the subproject stats. If you check the stderr file for a current task, you'll see the line "Expecting to find factors for about 0.01 terms in this range." That's consistent with the numbers you quoted.
Of course, people who started running 321 sieve in a lower p range will likely see that they've found more than 1 factor per 100 tasks. (I for one started early and took a long break when DIV started, so I'm at 1/23.) If you're worried that we've passed the optimal sieving depth, keep in mind that we're sieving for n up to 50M. At the high end of that range, the 321 tasks will take ~10 times longer than they currently do.  
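[Editor's aside: the figures above can be checked with a few lines of arithmetic, using only numbers quoted in this thread.]

```python
# Each sieve WU covers a p-range of 10e9 (e.g. 52830560e9 to
# 52830570e9), so one 1-peta interval holds 100,000 WUs; 1049
# factors found in such an interval is about 0.01 factors per WU,
# matching the "Expecting to find factors for about 0.01 terms" line.

peta = 10**15
wu_width = 10 * 10**9                 # p-range covered by one WU
wus_per_peta = peta // wu_width

print(wus_per_peta)                   # 100000
print(round(1049 / wus_per_peta, 5))  # 0.01049
```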


Ah, yes, I must read Ravi's posts much more carefully!
100'000 WUs for each 1-peta interval, that is easy. And so, for the newest ranges, just under 0.01 factors found per WU.
Current 321 LLR has "Recent average CPU time: 42:01:39". Future tasks (near n = 50 mega) may be ten times longer (Ravi). Current 321 sieve has "Recent average CPU time: 1:18:05".
/JeppeSN  

Bur
Volunteer tester
Send message
Joined: 25 Feb 20 Posts: 515 ID: 1241833 Credit: 415,429,336 RAC: 21,060

To clarify:
If one factor is found every n sieve tasks, then it takes n sieve tasks to eliminate 1 LLR task?
Is that correct?
Average CPU time for LLR tasks is 2575 minutes. Average CPU time for sieving tasks is 79 minutes. An LLR task takes 32 times as long.
So it would seem that any n < 32 would make sieving inefficient. But it isn't, as Ravi explained, because we are sieving for LLR tasks that will take 10 times as long.
Though I feel you'd have to take into account that computational power will increase, and once LLR reaches those ranges, it won't be a factor of 10 anymore.
Possibly your next task will contain a factor, reducing the 1:65 rate to around 1:56 (just a random number, don't take this seriously). I'm at 3100 tasks with 50 factors found or one in 62. It doesn't change quickly anymore.
Yes sir, the stats are here. Thanks, I didn't know such stats existed. It also explains why I am seeing my factors/task rate decreasing further.  


To clarify:
If one factor is found every n sieve tasks, then it takes n sieve tasks to eliminate 1 LLR task?
Is that correct?
Average CPU time for LLR tasks is 2575 minutes. Average CPU time for sieving tasks is 79 minutes. An LLR task takes 32 times as long.
So it would seem that any n < 32 would make sieving inefficient. But it isn't, as Ravi explained, because we are sieving for LLR tasks that will take 10 times as long.
Though I feel you'd have to take into account that computational power will increase, and once LLR reaches those ranges, it won't be a factor of 10 anymore.
All of that seems correct to me. One thing we expect in the future, is running primality tests of 321 candidates on GPU (rather than CPU). /JeppeSN  
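[Editor's aside: the break-even reasoning above can be written out explicitly, using only the thread's own numbers. Each found factor removes one LLR WU, and both WU kinds consist of two tasks, so the per-task comparison is fair (the 2:2 ratio mentioned earlier).]

```python
# Sketch of the break-even arithmetic, with figures from this thread:
# 2575 min per LLR task, 79 min per sieve task, roughly 1 factor
# per 62 sieve tasks.

llr_min = 2575            # current average LLR task, minutes
sieve_min = 79            # average sieve task, minutes
tasks_per_factor = 62     # observed sieve tasks per factor found

sieve_cost_per_factor = sieve_min * tasks_per_factor  # 4898 minutes

print(sieve_cost_per_factor > llr_min)       # True: looks inefficient today
print(sieve_cost_per_factor < llr_min * 10)  # True: pays off near n = 50M
```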


To clarify:
If one factor is found every n sieve tasks, then it takes n sieve tasks to eliminate 1 LLR task?
Is that correct?
Average CPU time for LLR tasks is 2575 minutes. Average CPU time for sieving tasks is 79 minutes. An LLR task takes 32 times as long.
So it would seem that any n < 32 would make sieving inefficient. But it isn't, as Ravi explained, because we are sieving for LLR tasks that will take 10 times as long.
Though I feel you'd have to take into account that computational power will increase, and once LLR reaches those ranges, it won't be a factor of 10 anymore.
All of that seems correct to me. One thing we expect in the future, is running primality tests of 321 candidates on GPU (rather than CPU). /JeppeSN
Ah yes, about that, isn't Proth20 only for +1 candidates?
____________
My lucky number is 6219*2^3374198+1
 


All of that seems correct to me. One thing we expect in the future, is running primality tests of 321 candidates on GPU (rather than CPU). /JeppeSN
Ah yes, about that, isn't Proth20 only for +1 candidates?
Possibly, yes. Yves should confirm. See github.com/galloty/proth20. /JeppeSN  
