Message boards : Project Staging Area : That guy JimB ....
Jim,
A very sincere and heartfelt thanks for being so quick in confirming and assigning pending credit with PPR12M each and every time we return our results!!!
JimB Honorary cruncher
Joined: 4 Aug 11 Posts: 918 ID: 107307 Credit: 977,945,376 RAC: 61
You're welcome.
Jim,
A very sincere and heartfelt thanks for being so quick in confirming and assigning pending credit with PPR12M each and every time we return our results!!!
a huge +1 as well.
Just wondering, are you still having to do the check manually or have you automated the process any?
Thanks for getting this whole area moved and working so smoothly!
Cheers Rick
Jim,
Do you have access to any stats in relation to GPU and performance (minimum turnaround time)?
I am looking at a custom build and I would like to begin with the GPU and proceed accordingly.
Rafael Volunteer tester
Joined: 22 Oct 14 Posts: 906 ID: 370496 Credit: 478,773,166 RAC: 255,635
Also, I know Michael said it isn't a priority, but is it much of a chore to up the sieving limit on GFN to b=400,000,000?
JimB Honorary cruncher
Joined: 4 Aug 11 Posts: 918 ID: 107307 Credit: 977,945,376 RAC: 61
Just wondering, are you still having to do the check manually or have you automated the process any?
The procedure hasn't changed. I download the factor files via an SSH connection to my computer. I run one program that sorts each factor file, removes duplicates and checks every factor for validity. I run a second program that makes sure the sieving data matches the filename, makes sure the start and end are close to where they should be and checks for gaps in the sieving. That same program also looks for too many +1 or -1 factors in a row, such as would happen if the sieving was started with the wrong parameters. It also picks out the highest and lowest k values and highest and lowest n values found. We're getting to the point on PPR12M where on 1T reservations sometimes the lowest k is not 3 or highest is not 9999. Every p, k, n and c value is tested to make sure it's within the proper range. Both programs are extremely fast with factor files the size we see today. Even a 10T file doesn't take more than a second.
So that's two command lines that each run on all files I've downloaded. It's the same if I have one file or 100. I then run another one that puts those files into their permanent location and also copies them to my local server. I run a similar program on the server which does the same tests but also stores the results in a database table. I then run one more program that generates the sieving data table and copies it up to the primegrid server. Before granting credit, I look at that table to make sure the factor density matches adjacent ranges. It takes a lot more to type out the procedure than it does to do it.
I didn't use to check the factor density until there were some problems that I noticed afterwards. I'd much rather address the problem with a sieve range before I grant credit on it.
GFN checking is similar except that I have to run the same program once for each n value represented in the uploads. So if it's only 65536 that's one run, but 65536 and 4194304 together are two runs.
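For readers curious what those checks might look like in practice, here is a minimal sketch in the same spirit. It is not JimB's actual program; the "p k n c" line format, the filename convention, and the thresholds are assumptions made purely for illustration.

```python
#!/usr/bin/env python3
"""Illustrative sketch of PPR12M-style factor-file checks (not the real tools).

Assumed line format: "p k n c", meaning p divides k*2^n + c.
Assumed filename convention: something like "ppr12m_5000T-5001T.txt",
with the reserved p-range given in units of 1T = 10**12.
"""
import re
import sys

K_MIN, K_MAX = 3, 9999           # PPR12M k range
N_MIN, N_MAX = 0, 12_000_000     # n range for the 0-12M sieve
MAX_SAME_C_RUN = 1000            # arbitrary "too many +1/-1 in a row" threshold

def check_file(path):
    m = re.search(r"(\d+)T-(\d+)T", path)
    if not m:
        sys.exit(f"{path}: can't read the p-range from the filename")
    p_lo, p_hi = (int(x) * 10**12 for x in m.groups())

    factors = set()
    prev_c, run_len = 0, 0
    with open(path) as fh:
        for line in fh:
            p, k, n, c = map(int, line.split())
            # 1. every reported factor must actually divide its candidate
            if (k * pow(2, n, p) + c) % p != 0:
                sys.exit(f"{path}: bogus factor {p} | {k}*2^{n}{c:+d}")
            # 2. k, n and c must be in range for this sieve
            if not (K_MIN <= k <= K_MAX and N_MIN <= n <= N_MAX and c in (1, -1)):
                sys.exit(f"{path}: k/n/c out of range: {line.strip()}")
            # 3. p must lie inside the range encoded in the filename
            if not (p_lo <= p <= p_hi):
                sys.exit(f"{path}: p={p} outside {p_lo}-{p_hi}")
            # 4. a long run of identical c values hints at wrong sieve parameters
            run_len = run_len + 1 if c == prev_c else 1
            prev_c = c
            if run_len > MAX_SAME_C_RUN:
                sys.exit(f"{path}: {run_len} consecutive c={c:+d} factors")
            factors.add((p, k, n, c))   # the set silently drops duplicate lines

    ks = sorted(k for _, k, _, _ in factors)
    ns = sorted(n for _, _, n, _ in factors)
    print(f"{path}: {len(factors)} unique factors, "
          f"k in [{ks[0]}, {ks[-1]}], n in [{ns[0]}, {ns[-1]}]")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        check_file(path)
```

The gap and factor-density checks JimB describes would additionally need the expected density for adjacent p-ranges, which lives in his server-side database, so they are omitted from this sketch.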
JimB Honorary cruncher
Joined: 4 Aug 11 Posts: 918 ID: 107307 Credit: 977,945,376 RAC: 61
Jim,
Do you have access to any stats in relation to GPU and performance (minimum turnaround time)?
I am looking at a custom build and I would like to begin with the GPU and proceed accordingly.
I don't record (or even want to know) which hardware is doing the sieving, sorry.
JimB Honorary cruncher
Joined: 4 Aug 11 Posts: 918 ID: 107307 Credit: 977,945,376 RAC: 61
Also, I know Michael said it isn't a priority, but is it much of a chore to up the sieving limit on GFN to b=400,000,000?
We're discussing it internally.
Rafael Volunteer tester
Joined: 22 Oct 14 Posts: 906 ID: 370496 Credit: 478,773,166 RAC: 255,635
Also, I know Michael said it isn't a priority, but is it much of a chore to up the sieving limit on GFN to b=400,000,000?
We're discussing it internally.
I was actually asking that purely from a coding perspective.
JimB Honorary cruncher
Joined: 4 Aug 11 Posts: 918 ID: 107307 Credit: 977,945,376 RAC: 61
I'm not the author of any of that code. I believe a simple change would make the GFNSvCUDA report factors for up to b=400M, but I'm not certain. The only n's it could possibly apply to are 15 and 16. We won't ever get high enough in the others. Even for n=15 we're talking about a few years before we run out of candidates. We're not currently sieving n=15 and given that it's only sieved to 1100P it wouldn't take long to redo for b=0-400M.
GDB
Joined: 15 Nov 11 Posts: 280 ID: 119185 Credit: 3,375,347,102 RAC: 3,722,992
I'm not the author of any of that code. I believe a simple change would make the GFNSvCUDA report factors for up to b=400M, but I'm not certain. The only n's it could possibly apply to are 15 and 16. We won't ever get high enough in the others. Even for n=15 we're talking about a few years before we run out of candidates. We're not currently sieving n=15 and given that it's only sieved to 1100P it wouldn't take long to redo for b=0-400M.
Would it even be necessary to redo anything? Couldn't you just add in sieve entries 100 - 400M, and then start sieving from the current P for b = 0-400M? It would seem a waste to redo anything since factors are usually found multiple times during sieving.
JimB Honorary cruncher
Joined: 4 Aug 11 Posts: 918 ID: 107307 Credit: 977,945,376 RAC: 61
Would it even be necessary to redo anything? Couldn't you just add in sieve entries 100 - 400M, and then start sieving from the current P for b = 0-400M? It would seem a waste to redo anything since factors are usually found multiple times during sieving.
The factor density at the beginning of sieving is very very high. Have a look at http://www.primegrid.com/sieving/gfn and see just how high. Sieving would have to be redone starting at p=0. In fact between p=0-1P the GPU programs can't be used. I'll have to check the older Windows code from David Underbakke to make sure his b limits go that high.
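To see why the low end matters so much, here is a rough illustration using the standard Mertens-style sieve heuristic: after sieving to depth P, roughly a C/ln(P) fraction of candidates survives. This is a textbook approximation, not PrimeGrid's own accounting, and the depth boundaries below are arbitrary.

```python
from math import log

# Heuristic only: after sieving to depth P, roughly C/ln(P) of the candidates
# survive, so the share of all factors found in a p-range (lo, hi] scales
# like 1/ln(lo) - 1/ln(hi).  The low ranges dominate overwhelmingly.
depths = [10**3, 10**6, 10**9, 10**12, 10**15, 11 * 10**17]   # last one ~1100P
surv = [1 / log(p) for p in depths]
total = surv[0] - surv[-1]    # everything removed between p=1e3 and ~1100P
for lo, hi, a, b in zip(depths, depths[1:], surv, surv[1:]):
    share = (a - b) / total
    print(f"p in ({lo:.0e}, {hi:.0e}]: ~{share:.0%} of the factors")
```

On this rough model the vast majority of the factors for the new b values would come from p below 1P, which is exactly the region where the GPU programs can't run, hence redoing the sieve from p=0.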
And back to:
Thank you for everything you've done.
And now back to our regularly scheduled program:
Another big thanks for answering even more questions in a "Thank You" thread. LOL
____________
Largest Primes to Date:
As Double Checker: SR5 109208*5^1816285+1 Dgts-1,269,534
As Initial Finder: SR5 243944*5^1258576-1 Dgts-879,713
GDB
Joined: 15 Nov 11 Posts: 280 ID: 119185 Credit: 3,375,347,102 RAC: 3,722,992
Would it even be necessary to redo anything? Couldn't you just add in sieve entries 100 - 400M, and then start sieving from the current P for b = 0-400M? It would seem a waste to redo anything since factors are usually found multiple times during sieving.
The factor density at the beginning of sieving is very very high. Have a look at http://www.primegrid.com/sieving/gfn and see just how high. Sieving would have to be redone starting at p=0. In fact between p=0-1P the GPU programs can't be used. I'll have to check the older Windows code from David Underbakke to make sure his b limits go that high.
OK, so you might redo 0-1P. But to redo everything seems such a waste. Currently, we're almost at 12% of the optimal sieving range. That means we still have 7 times as much work remaining to do. Since each factor is likely to be found multiple times over the sieving range, it just seems like a lot of rerun work to eliminate a few factors that MIGHT not be eliminated in the remaining 88% of the work to do.
Current GPU work is low in the OCL/OCL3 range. Any sieving we do is better focused on NEW sieving ranges to eliminate new OCL/OCL3/OCL2 factors than to redo everything, and find ZERO new OCL/OCL3/OCL2 factors.
JimB Honorary cruncher
Joined: 4 Aug 11 Posts: 918 ID: 107307 Credit: 977,945,376 RAC: 61
OK, so you might redo 0-1P. But to redo everything seems such a waste. Currently, we're almost at 12% of the optimal sieving range. That means we still have 7 times as much work remaining to do. Since each factor is likely to be found multiple times over the sieving range, it just seems like a lot of rerun work to eliminate a few factors that MIGHT not be eliminated in the remaining 88% of the work to do.
Current GPU work is low in the OCL/OCL3 range. Any sieving we do is better focused on NEW sieving ranges to eliminate new OCL/OCL3/OCL2 factors than to redo everything, and find ZERO new OCL/OCL3/OCL2 factors.
You're kidding, right? We're talking about sieving up to a higher b-limit rather than stopping at b=100M like now. A quick-and-dirty calculation suggests that resieving 0-1100P to b=1G will result in 388 million additional candidates being sieved out. That's because there will be ten times the potential candidates that there are now. Candidates that were not found the first time because anything over 100M was not reported. Candidates that we'll start primality-testing within two years.
Look at the graph here and think about it going from b=100,000,000 in height (which it is now) to ten times that high or b=1,000,000,000. That's where the additional candidates come from. So we stand to discover 9x (1G - 100M = 900M) as many new factors for any sieving range. In fact we might as well sieve to the limit of the program, which is 2^32-2 or 4,294,967,294. It'll take exactly the same time as sieving to 100M did, the factor files will just be that much larger. The sieve files will also be larger, but that's something that only exists on my computer. A 5GB sieve file (uncompressed) is not all that large. I have 500GB+ of various generations of PPS/RSP sieves here for n=0-12M, with two full backups on other computers.
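The scaling part of that argument fits in a few lines. This is only the back-of-the-envelope ratio, under the assumption that eliminated candidates are spread roughly evenly in b; the 388 million figure above comes from the actual sieve data, not from this snippet.

```python
# Back-of-the-envelope scaling only (assumes factors are spread roughly
# uniformly in b); the 388M estimate in the post comes from real sieve data.
old_limit = 100_000_000        # current b limit (100M)
new_limit = 1_000_000_000      # proposed b limit (1G)
program_limit = 2**32 - 2      # hard limit of the sieve program: 4,294,967,294

extra_vs_now = (new_limit - old_limit) / old_limit        # -> 9.0
extra_at_max = (program_limit - old_limit) / old_limit    # -> ~42.0
print(f"sieving to 1G finds ~{extra_vs_now:.0f}x more new factors per p-range")
print(f"sieving to 2^32-2 finds ~{extra_at_max:.0f}x more")
```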
GDB
Joined: 15 Nov 11 Posts: 280 ID: 119185 Credit: 3,375,347,102 RAC: 3,722,992
OK, so you might redo 0-1P. But to redo everything seems such a waste. Currently, we're almost at 12% of the optimal sieving range. That means we still have 7 times as much work remaining to do. Since each factor is likely to be found multiple times over the sieving range, it just seems like a lot of rerun work to eliminate a few factors that MIGHT not be eliminated in the remaining 88% of the work to do.
Current GPU work is low in the OCL/OCL3 range. Any sieving we do is better focused on NEW sieving ranges to eliminate new OCL/OCL3/OCL2 factors than to redo everything, and find ZERO new OCL/OCL3/OCL2 factors.
You're kidding, right? We're talking about sieving up to a higher b-limit rather than stopping at b=100M like now. A quick-and-dirty calculation suggests that resieving 0-1100P to b=1G will result in 388 million additional candidates being sieved out. That's because there will be ten times the potential candidates that there are now. Candidates that were not found the first time because anything over 100M was not reported. Candidates that we'll start primality-testing within two years.
Look at the graph here and think about it going from b=100,000,000 in height (which it is now) to ten times that high or b=1,000,000,000. That's where the additional candidates come from. So we stand to discover 9x (1G - 100M = 900M) as many new factors for any sieving range. In fact we might as well sieve to the limit of the program, which is 2^32-2 or 4,294,967,294. It'll take exactly the same time as sieving to 100M did, the factor files will just be that much larger. The sieve files will also be larger, but that's something that only exists on my computer. A 5GB sieve file (uncompressed) is not all that large. I have 500GB+ of various generations of PPS/RSP sieves here for n=0-12M, with two full backups on other computers.
I was just talking about the redoing of n=22. I don't see us getting to testing 100M+ n=22 candidates for 10+ years. Unless we're going to skip a lot of them to get to WR territory. My question is: How many additional 100M+ factors are you going to find sieving 0-1Z vs. 120,000P-1Z? And if we find it necessary to redo 0-120,000P, wouldn't it be better to continue now from 120,000P-1Z, and then redo 0-120,000P? Any OCL/OCL3/OCL2 factors we find now doing 120,000P-1Z are of IMMEDIATE benefit, while any additional 100M+ factors we find sieving 0-120,000P are of FUTURE (10+ years) benefit.
JimB Honorary cruncher
Joined: 4 Aug 11 Posts: 918 ID: 107307 Credit: 977,945,376 RAC: 61
We're not talking about redoing n=22. We're talking about redoing n=15.
GDB
Joined: 15 Nov 11 Posts: 280 ID: 119185 Credit: 3,375,347,102 RAC: 3,722,992
We're not talking about redoing n=22. We're talking about redoing n=15.
Thanks! Never mind.