John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
As many of you have noticed, PrimeGrid is now testing another prime search...Cullen & Woodall. The implementation has progressed faster than estimated. The initial test ranges were quickly completed.
We now need help in preparing more work. The sieve depths of the current work files are OK but they could be much, much better. We currently have the following:
Cullen 1<n<2M p=5.5G
Cullen 2M<n<10M p=5.14G
Woodall 1<n<2M p=15G
Woodall 2M<n<10M p=304M
If anyone would like to assist in the sieve effort, please write to me, jmblazek, through the forum (look upper right below the google ad in the forum). Sieving is a manual but very simple process. You won't find any primes but you'll narrow down the search efficiently so PrimeGrid can focus on fewer candidates.
A sieving reservation thread will be created soon but until then, we need all the help we can get.
Thanks!
p.s. If everyone with multiple cores/multiple computers running PrimeGrid just volunteered 1 core for a week or so, the sieve files would be in excellent, maybe even optimal condition.
EDIT: Sieving is done outside of PrimeGrid. Running Cullen and Woodall searches in the PrimeGrid Preference section does not contribute to sieving. However, it does help test for primes.
Sieving is done before the WU's enter PrimeGrid.
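Sieving, in short, removes candidates n that have a small prime factor, so LLR never spends hours testing them. A toy Python sketch of what this accomplishes for Cullen numbers C_n = n*2^n + 1 (this is not gcwsieve's algorithm, which uses far faster techniques; the candidate range and prime list here are purely illustrative):

```python
def survives(n, primes):
    """True if no prime in `primes` divides the Cullen number n*2^n + 1.

    pow(2, n, p) keeps the arithmetic small instead of building 2^n.
    """
    return all((n * pow(2, n, p) + 1) % p != 0 for p in primes)

# Illustrative run: sieve a small block of exponents with a few tiny primes.
candidates = range(2, 1000)
small_primes = [3, 5, 7, 11, 13]
remaining = [n for n in candidates if survives(n, small_primes)]
print(len(remaining), "of", len(candidates), "candidates left for LLR")
```

Each surviving n still needs a primality test; everything removed is work LLR never has to do.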
Have shut down SETI, Einstein & PrimeGrid (have enabled "Cullen & Woodall" only). Hope this helps.
Mike
If not, please email L13oki@aol.com.
Mike
If anyone would like to assist in the sieve effort, please write to me, jmblazek, through the forum (...)
You've got mail :)
____________
?SYNTAX ERROR
READY.
At 10.21% already. :)
At 10.21% already. :)
where can I learn more about this specific prime search?
thanks
John Honorary cruncher
where can I learn more about this specific prime search?
thanks
More info to come later. Until then, you can read up here:
http://www.primegrid.com/orig/forum_thread.php?id=608&nowrap=true#5932
If you're interested in the sieve, please write me at jmblazek in the forum private messages.
Hi,
what is the reason why the sieve is not done within PrimeGrid as well?
As far as I understood, for Riesel Sieve sieving was the first goal, and now they have LLR as well, because they sieved enough to start testing for primes?
John Honorary cruncher
Hi,
what is the reason why the sieve is not done within PrimeGrid as well?
As far as I understood, for Riesel Sieve sieving was the first goal, and now they have LLR as well, because they sieved enough to start testing for primes?
Good question. Basically it's time and development. PG already has an LLR app, so it was "easier" to add another LLR project.
Also, it's easier to coordinate a manual sieving effort than a manual LLR effort. PrimeGrid's power is in primality testing right now. Can it also be used to sieve? Absolutely. And in time, it probably will be.
The Cullen/Woodall search is very well suited to a BOINC sieving effort. It is similar to RS in that it has a relatively small sieve file...~18 MB. TPS, on the other hand, will probably never be BOINC sieved, as it still has a sieve file of over 500 MB at p=5300T.
RS still sieves along with LLR. It's a nice combination they have. Something that PG will probably have in the future.
BTW, Rytis, PrimeGrid's lone volunteer developer, has already shown us SUPER HUMAN talents. It just takes time.
John Honorary cruncher
Sieve Update:
This past weekend was extremely productive. Current sieve depths are as follows:
Cullen 1<n<2M p=105G
Cullen 2M<n<10M p=105G
Woodall 1<n<2M p=106G
Woodall 2M<n<10M p=121G
We will continue again when all factor files have been returned and sieve files updated.
Hopefully by then we'll have a reservation thread up and running...shooting for sometime this week.
Thanks again to everyone who contributed.
I'm interested in helping... I created a thread in the C&W forum before I saw this.
I suppose that credit is going to be split the same way as in the TPS project? i.e. top siever, finder, top LLR tester...
I'm interested in helping... I created a thread in the C&W forum before I saw this.
I suppose that credit is going to be split the same way as in the TPS project? i.e. top siever, finder, top LLR tester...
Giving credit to the top siever, top LLR tester, etc. is kind of an odd idea. I do not believe I have ever seen this in other projects, except TPS of course.
Well, I think it is a good one. The only reason I sieve is to get the credit. If they didn't give credit, the largest chunk of sieving would be undone, and the second largest too, at least (skiglumnd, I'm pretty sure, wouldn't have done it either). It would not be nearly as well sieved, and thousands of candidates would have had to be tested by LLR, which takes much longer.
I could possibly see giving credit to the person that sieved the particular range where a prime was found, but to give someone credit just because they sieved more ranges, not necessarily the range where the prime was found?
I don't buy the argument that the sieving would not get done. It has been getting done on multiple projects for many years. Sure, some of the smaller, less well known projects may have struggled with sieving, but it has been done.
I would suggest the same holds true for LLR credit too.
Well, I think it is a good one. The only reason I sieve is to get the credit. If they didn't give credit, the largest chunk of sieving would be undone, and the second largest too, at least (skiglumnd, I'm pretty sure, wouldn't have done it either). It would not be nearly as well sieved, and thousands of candidates would have had to be tested by LLR, which takes much longer.
I worked on the sieving for the current Cullen & Woodall WUs for this project, and I for one am happy to do so without credit.
As for giving credit to someone for simply sieving more than anyone else, and possibly not even in the right range: that is absurd, not to mention extremely unfair to the people who sieved ranges with a prime and got no credit.
____________
m4rtyn
As for giving credit to someone for simply sieving more than anyone else, and possibly not even in the right range: that is absurd, not to mention extremely unfair to the people who sieved ranges with a prime and got no credit.
There is no right range for sieving. All sievers sieve the entire range, so your argument falls apart. The top siever just removes the most factors, and thus saves the most LLR work. This is similar to giving someone credit for speeding up the LLR application. So does Rytis, or whoever is in charge, declare that there will be credit for the top siever of this project? Because if not, I'm going to stick to for-credit stuff.
It would be a nice gesture to credit *ALL* of those who have been sieving with some manual credit, even if the credit given is less than what is deserved.
John Honorary cruncher
It would be a nice gesture to credit *ALL* of those who have been sieving with some manual credit, even if the credit given is less than what is deserved.
Thank you for bringing the topic up. While initially it was not planned, manual credit may be given once we have established a checks and balances system.
PrimeGrid sieving is still in its infancy and will be going through many changes so please be patient. All records will be kept in the forum so when the time comes we'll be able to grant credit from the beginning.
Thanks again for your suggestion...and for sieving!
It would be a nice gesture to credit *ALL* of those who have been sieving with some manual credit, even if the credit given is less than what is deserved.
It may actually be better to give higher credit than the regular LLR and primegen work. This way more people would be willing to put in the minimal effort to sieve, meaning less LLR work and more chance of finding a prime.
~BoB
It would be a nice gesture to credit *ALL* of those who have been sieving with some manual credit, even if the credit given is less than what is deserved.
It may actually be better to give higher credit than the regular LLR and primegen work. This way more people would be willing to put in the minimal effort to sieve, meaning less LLR work and more chance of finding a prime.
~BoB
In my experience, BOINC community anger over too much credit is always far *far* more destructive than over too little credit.
It would be a nice gesture to credit *ALL* of those who have been sieving with some manual credit, even if the credit given is less than what is deserved.
It may actually be better to give higher credit than the regular LLR and primegen work. This way more people would be willing to put in the minimal effort to sieve, meaning less LLR work and more chance of finding a prime.
~BoB
In my experience, BOINC community anger over too much credit is always far *far* more destructive than over too little credit.
This is why you keep it within reason... a little higher than LLR/PGen, but still within the range of the other projects.
~BoB
What is the sieving depth for 2M?
I've noticed that the name of the sieve file changes each day. Does that mean that it's a whole new sieve file, or just that it's renamed?
I kind of liked the previous naming scheme for the sieve file--it tells you right in the name what the current sieving depth is.
John Honorary cruncher
I've noticed that the name of the sieve file changes each day. Does that mean that it's a whole new sieve file, or just that it's renamed?
I kind of liked the previous naming scheme for the sieve file--it tells you right in the name what the current sieving depth is.
Not daily...but it has been frequent these past few days. This file will last at least a week or so.
Yes, it's a whole new sieve file...minus the returned factored n to date. Since sieve time is related to the # of n remaining, it makes sense to remove n from the sieve file to speed the sieve up.
I agree, the previous naming convention was nice and I liked it. However, it wasn't completely accurate. Outstanding ranges prevented the naming convention from including all completed ranges.
For example, the highest the sieve could currently be called is p=320G for the Cullen 2M range. However, all the ranges between 330G-505G are also completed. Calling the new sieve file cullen2M_p320G.txt would not be completely accurate...and leaving out the factors from the completed ranges would not be efficient.
So the current naming convention is sieve file, date created, and # of candidates remaining. I'll add a "mostly sieved to" section to the post...maybe that will help.
Any suggestions on how to improve this are greatly appreciated.
John Honorary cruncher
What is the sieving depth for 2M?
Currently being researched...initial goal was p=1500G. However, since the Cullen search still is not active in PG, that goal may be modified.
Okay, now it makes more sense. :-)
BTW, your mentioning that the Cullen search hasn't been touched yet in the BOINC LLR department of PrimeGrid prompted a question: when is the Cullen LLR search expected to actually start sending out work?
John Honorary cruncher
Okay, now it makes more sense. :-)
BTW, your mentioning about the Cullen search not being touched yet in the BOINC LLR department of PrimeGrid, prompted a question: When is the Cullen LLR search expected to actually start sending out work?
Minimum 2 weeks out...
I don't know, is it possible to reserve ranges < 1G? Wouldn't that mess things up big time?
John Honorary cruncher
I don't know, is it possible to reserve ranges < 1G? Wouldn't that mess things up big time?
No, not really. While whole G's are preferred, all work is accepted. :)
The only change is to the e9. Repaxan's command line file would look like this:
gcwsieve -p1121e9 -P11215e8 -icullen2M_20070730_46874.txt -fcfactors2M_1121G-1121.5G.txt
btw, a .5G range takes about 1.5 hours on an Athlon 64 3400+ 2.19GHz.
CORRECTION: 3.25 hours instead of 1.5 hours
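The -p/-P bounds are written as mantissa-times-power-of-ten, so a fractional .5G bound drops from e9 to e8 units: 1121.5G becomes 11215e8. A tiny helper sketch (purely illustrative, not part of gcwsieve) that formats a bound given in G:

```python
def p_arg(g):
    """Format a sieve bound given in G (1 G = 1e9) for gcwsieve's -p/-P flags.

    Whole-G values use e9; fractional tenth-G values drop to e8,
    e.g. 1121 -> "1121e9", 1121.5 -> "11215e8".
    """
    if g == int(g):
        return f"{int(g)}e9"
    return f"{int(round(g * 10))}e8"

print(p_arg(1121), p_arg(1121.5))  # 1121e9 11215e8
```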
I don't know, is it possible to reserve ranges < 1G? Wouldn't that mess things up big time?
No, not really. While whole G's are preferred, all work is accepted. :)
The only change is to the e9. Repaxan's command line file would look like this:
gcwsieve -p1121e9 -P11215e8 -icullen2M_20070730_46874.txt -fcfactors2M_1121G-1121.5G.txt
btw, a .5G range takes about 1.5 hours on an Athlon 64 3400+ 2.19GHz.
Oh, I see. Thanks for clearing that up--it may be useful for me if we get back into doing 10M or bigger ranges!
I noticed that running gcwsieve in Linux on a Core 2 gives tremendously faster timings. For this reason, but only if it's really needed, I'm willing to boot into Damn Small Linux on my quad-core and sieve on all four cores for 2-3 weeks.
But only if it's really necessary. If it's a super-emergency, I can be found on the freenode IRC network in Riesel Sieve.
I've noticed that the name of the sieve file changes each day. Does that mean that it's a whole new sieve file, or just that it's renamed?
I kind of liked the previous naming scheme for the sieve file--it tells you right in the name what the current sieving depth is.
Not daily...but it has been frequent these past few days. This file will last at least a week or so.
Yes, it's a whole new sieve file...minus the returned factored n to date. Since sieve time is related to the # of n remaining, it makes sense to remove n from the sieve file to speed the sieve up.
I agree, the previous naming convention was nice and I liked it. However, it wasn't completely accurate. Outstanding ranges prevented the naming convention from including all completed ranges.
For example, currently the highest the sieve could be right now is p=320G for the Cullen 2M range. However, all the ranges between 330G-505G are completed. Calling the new sieve file cullen2M_p320G.txt would not be completely accurate...and leaving out the factors in the completed ranges would not be efficient.
So the current naming convention is sieve file, date created, and # of candidates remaining. I'll add a "mostly sieved to" section to the post...maybe that will help.
Any suggestions on how to improve this are greatly appreciated.
I think that using the date for the sieve file name is much preferable to the original nomenclature, as sieving may be completed and returned out of sequence.
I've noticed that the name of the sieve file changes each day. Does that mean that it's a whole new sieve file, or just that it's renamed?
I kind of liked the previous naming scheme for the sieve file--it tells you right in the name what the current sieving depth is.
Not daily...but it has been frequent these past few days. This file will last at least a week or so.
Yes, it's a whole new sieve file...minus the returned factored n to date. Since sieve time is related to the # of n remaining, it makes sense to remove n from the sieve file to speed the sieve up.
I agree, the previous naming convention was nice and I liked it. However, it wasn't completely accurate. Outstanding ranges prevented the naming convention from including all completed ranges.
For example, currently the highest the sieve could be right now is p=320G for the Cullen 2M range. However, all the ranges between 330G-505G are completed. Calling the new sieve file cullen2M_p320G.txt would not be completely accurate...and leaving out the factors in the completed ranges would not be efficient.
So the current naming convention is sieve file, date created, and # of candidates remaining. I'll add a "mostly sieved to" section to the post...maybe that will help.
Any suggestions on how to improve this are greatly appreciated.
I think that using the date for the sieve file name is much preferable to the original nomenclature, as sieving may be completed and returned out of sequence.
Now that it's explained more clearly in the reservation thread, I totally agree with you--and there's no harm in releasing a new sieve file more frequently (as is now possible with the new naming convention), since it will only make the sieving faster.
Will the sieving depth become deeper as the range of n increases?
John Honorary cruncher
Will the sieving depth become deeper as the range of n increases?
Yes, as n increases, we'll sieve deeper.
geoff Volunteer developer
Joined: 3 Aug 07 Posts: 99 ID: 10427 Credit: 343,437 RAC: 0
If you run gcwsieve with the -v switch (add -v to the gcwsieve-command-line.txt file) it will report the estimated number of factors it expects to find in the range being sieved.
The formula N*(1-log(P0)/log(P1)) is used for the expected number of factors in the range P0 <= p <= P1 for a file containing N candidate terms.
When sieving Cullen ranges this estimate will usually be too high, because some sieving on the prime exponents has already been done which isn't accounted for in the formula.
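The estimate is easy to evaluate directly. A quick sketch of the formula (the 46,874-term count and the 1121G-1122G range below are illustrative numbers, borrowed from the file name quoted earlier in the thread):

```python
from math import log

def expected_factors(N, P0, P1):
    """Expected number of factors when sieving N candidate terms over
    P0 <= p <= P1, per the estimate N * (1 - log(P0)/log(P1))."""
    return N * (1 - log(P0) / log(P1))

# Illustrative: with ~46874 remaining terms, sieving one G near p = 1121G
# yields only a factor or two, which is why depth matters so much.
print(expected_factors(46874, 1121e9, 1122e9))
```

As geoff notes, for Cullen ranges the real count comes in below this estimate, since the prime exponents have already been partially sieved.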
I'm wondering, is it possible to switch sieve files in the middle of a range, by simply stopping gcwsieve, replacing the sieve file (and updating the references in gcwsieve-command-line.txt), then restarting gcwsieve? Or would that mess things up?
John Honorary cruncher
I'm wondering, is it possible to switch sieve files in the middle of a range, by simply stopping gcwsieve, replacing the sieve file (and updating the references in gcwsieve-command-line.txt), then restarting gcwsieve? Or would that mess things up?
Yes, it's just like you say.
All you have to change in the gcwsieve-command-line.txt file is the input file -i.
When you start gcwsieve up again, it will read the checkpoint file to pick up where it left off.
x86-64 Linux: gcwsieve-1.0.20 (~2X faster than 32 bit)
x86 Linux: gcwsieve-1.0.18
x86 Windows: gcwsieve-1.0.18
version 1.0.20: 64 bit only
Why no 64-bit Windows app?
geoff Volunteer developer
Why no 64-bit Windows app?
It is being worked on. The main problems are that I don't know much about Windows, and while there is a good Linux to Win32 cross-compiler (MinGW), there is no Linux to Win64 cross-compiler yet.
The program source code is a mix of C and assembly. The C source needs some work to get the Microsoft compiler to accept it, or it could probably be compiled with the Intel compiler as is. However much of the assembly is in a GCC-specific inline format and requires a lot of tedious hand-editing to get it to a form that other compilers can work with.
Progress so far:
1. Rytis has managed to compile the C part with MSVC.
2. I have converted all the GCC-specific inline assembly into standard unix external format, and am testing a program (Agner Fog's objconv) that should allow me to do the final unix to Windows conversion.
geoff Volunteer developer
If anyone has experience with compiling C programs for 64-bit Windows, and has time to fiddle with the source code a bit where necessary, it should in principle be possible to compile gcwsieve version 1.0.21 with a Microsoft C compiler.
There are some rough instructions in the file msc/README, along with my email address. Any feedback, even just a report on how it failed to compile, would be welcome.
http://www.geocities.com/g_w_reynolds/gcwsieve/
For some strange reason, when I reduce the sieve file to fewer than 4 terms, it doesn't allow me to do that. Why?
geoff Volunteer developer
For some strange reason, when I reduce the sieve file to fewer than 4 terms, it doesn't allow me to do that. Why?
The algorithm processes 4 terms in parallel (6 in parallel on Athlon 64), so at least 4 (6) are needed for it to produce correct results.
It would be possible to add dummy terms to bring the number up to the minimum, but the algorithm is _very_ inefficient when there are only a few terms in the sieve anyway. Better to use a general purpose trial factoring program instead.
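For the handful-of-terms case, the trial check is simple with modular exponentiation: p divides C_n = n*2^n + 1 exactly when n*2^n ≡ -1 (mod p). A minimal sketch of that underlying test (not a substitute for a real trial-factoring program):

```python
def divides_cullen(p, n):
    """True if p divides the Cullen number n*2^n + 1.

    pow(2, n, p) avoids ever constructing the full multi-megabit number.
    For Woodall numbers n*2^n - 1, subtract 1 instead of adding.
    """
    return (n * pow(2, n, p) + 1) % p == 0

print(divides_cullen(3, 1))  # True: C_1 = 1*2^1 + 1 = 3
```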
The --intel switch doesn't work... it tries to open input file "ntel".
The --amd switch doesn't work either,
at least it doesn't work for me.
version 1.1.5
Windows x86
~BoB
geoff Volunteer developer
The --intel switch doesn't work... it tries to open input file "ntel".
The --amd switch doesn't work either,
at least it doesn't work for me.
version 1.1.5
Windows x86
~BoB
There are currently no --intel or --amd options for the 32-bit version of gcwsieve.
The --intel switch doesn't work... it tries to open input file "ntel".
The --amd switch doesn't work either,
at least it doesn't work for me.
version 1.1.5
Windows x86
~BoB
There are currently no --intel or --amd options for the 32-bit version of gcwsieve.
Ok, that's fine... I saw it in the readme and thought I would test it out. Maybe add a note to the readme file about that? (Unless it will be fixed soon?)
~BoB
I'm wondering, what's the optimal sieving depth for Cullen/Woodall 3M-4M, 4M-5M, and 5M-10M? (I know that 5M-10M is being done separately right now, but I would think that wouldn't affect the optimal depth.)
John Honorary cruncher
I'm wondering, what's the optimal sieving depth for Cullen/Woodall 3M-4M, 4M-5M, and 5M-10M? (I know that 5M-10M is being done separately right now, but I would think that wouldn't affect the optimal depth.)
Unknown at the moment. 64 bit sieving is still finding factors faster than 32 bit LLR can test for primality.
I'm wondering, what's the optimal sieving depth for Cullen/Woodall 3M-4M, 4M-5M, and 5M-10M? (I know that 5M-10M is being done separately right now, but I would think that wouldn't affect the optimal depth.)
Unknown at the moment. 64 bit sieving is still finding factors faster than 32 bit LLR can test for primality.
Hmm...is the 32 bit sieving finding factors in less time than an LLR test, too?
John Honorary cruncher
Hmm...is the 32 bit sieving finding factors in less time than an LLR test, too?
No, not at the leading edge of LLR.
Hmm...is the 32 bit sieving finding factors in less time than an LLR test, too?
No, not at the leading edge of LLR.
Ah, I see. So, 32 bit sieving would be better done in the 4M-5M range?
Also, how's the Woodall 5M-10M range doing, with BOINC working on it and all? Is it anywhere near optimal depth? And, when it reaches optimal depth, is BOINC sieving going to be switched to a different target, maybe Cullen 5M-10M or one of the combined ranges?
John Honorary cruncher
No, not at the leading edge of LLR.
Ah, I see. So, 32 bit sieving would be better done in the 4M-5M range?
Yes
Also, how's the Woodall 5M-10M range doing, with BOINC working on it and all? Is it anywhere near optimal depth? And, when it reaches optimal depth, is BOINC sieving going to be switched to a different target, maybe Cullen 5M-10M or one of the combined ranges?
Woodall 5M-10M is nowhere close to optimal at the moment. We'll switch to Cullen 5M-10M soon and bring them even...then continue on with combined.
No, not at the leading edge of LLR.
Ah, I see. So, 32 bit sieving would be better done in the 4M-5M range?
Yes
Also, how's the Woodall 5M-10M range doing, with BOINC working on it and all? Is it anywhere near optimal depth? And, when it reaches optimal depth, is BOINC sieving going to be switched to a different target, maybe Cullen 5M-10M or one of the combined ranges?
Woodall 5M-10M is nowhere close to optimal at the moment. We'll switch to Cullen 5M-10M soon and bring them even...then continue on with combined.
Ah, I see now. Thanks!
Hmm...is the 32 bit sieving finding factors in less time than an LLR test, too?
No, not at the leading edge of LLR.
I just had a thought...shouldn't the Cullen/Woodall 3M-4M sieving be marked as "64 bit only" then? Otherwise we might have people who didn't see this little discussion putting 32 bit computers on the 3M-4M sieving when they would be much better utilized on 4M-5M.
14400-14401G
Testing for newpgen...
scrap the newpgen idea...
NewPGen doesn't work for Cullen/Woodall numbers in general, it only works for a single number at a time. If you try to sieve more than one number at a time it will seem to work OK, but in fact it will be doing fixed-n sieving and will not produce the results you expect.
I noticed that...