Message boards :
Number crunching :
Operational LLR-GPU application already running here?
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13526 ID: 53948 Credit: 245,062,730 RAC: 273,562
I'm not going to provide links, but I noticed that the wingman on one of my Cullen LLR WUs had already returned the result -- in one-fifth the time it takes my computer to do it. Looking at the wingman's computer, it has returned many of them that quickly, and they all validated correctly.
My computer is a C2Q; his is an i5. No CPU is currently fast enough to do these calculations that quickly; with current architectures you would need to be running above 10 GHz.
The only conclusion is that he's running a GPU version of LLR under an anonymous platform. He's running Cullen and Woodall using the anonymous platform mechanism, both with exceptionally fast run times. Other LLR WUs, which are not using an anonymous platform, are showing normal run times on the same computer.
So, does anyone know anything about the status of GPU-LLR programming right now? It seems like someone is making some serious progress.
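The back-of-envelope behind that clock-speed claim can be sketched as follows; the 2.5 GHz Core 2 Quad clock is an illustrative assumption, not a figure from the post:

```python
# If per-clock throughput were similar across the two CPUs, matching a 5x
# runtime improvement purely with clock speed would need ~5x the clock.
# The 2.5 GHz value is an assumed, typical Core 2 Quad clock (assumption).
c2q_clock_ghz = 2.5
speedup = 5.0                       # wingman's tasks finish in 1/5 the time
required_clock_ghz = c2q_clock_ghz * speedup
print(required_clock_ghz)           # → 12.5, i.e. well above 10 GHz
```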
____________
My lucky number is 75898524288+1
Honza Volunteer moderator Volunteer tester Project scientist
Joined: 15 Aug 05 Posts: 1893 ID: 352 Credit: 3,278,241,147 RAC: 5,211,959
Michael Goetz wrote: So, does anyone know anything about the status of GPU-LLR programming right now? It seems like someone is making some serious progress.
llrCUDA at the Mersenne Forum is worth watching.
(I haven't seen a Windows build to test it myself.)
____________
My stats
Badge score: 1*1 + 5*1 + 8*3 + 9*11 + 10*1 + 11*1 + 12*3 = 186
Ken_g6 Volunteer developer
Joined: 4 Jul 06 Posts: 917 ID: 3110 Credit: 187,492,568 RAC: 103,864
I think I'm in a position to get llrCUDA working with BOINC relatively quickly, but I haven't done so yet. And as far as I know, no Windows builds are working right now.
Were you going by elapsed time, or by date and time of return minus date and time of send? I've noticed that in some cases the wrapper can forget the elapsed time previously spent on a WU when it restarts.
____________
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13526 ID: 53948 Credit: 245,062,730 RAC: 273,562
Honza wrote: llrCUDA at the Mersenne Forum is worth watching. (I haven't seen a Windows build to test it myself.)
Actually, since you're the wingman, might I ask how you're getting Cullen and Woodall LLRs completed successfully in the 50,000-second range?
____________
My lucky number is 75898524288+1
Honza Volunteer moderator Volunteer tester Project scientist
Joined: 15 Aug 05 Posts: 1893 ID: 352 Credit: 3,278,241,147 RAC: 5,211,959
When running the anonymous platform with the old wrapper, a client restart caused the wrong run time to be reported (i.e., it reset to zero on restart, even if the task was 75% finished and continued from a checkpoint). I even have results showing 15k, 10k, or 5k seconds on C/W LLRs.
It results in an unbelievably low run time, and even lower credit, although the real run time was in the expected ~150k-second range.
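The effect can be illustrated with a tiny sketch (hypothetical code, not the actual wrapper source): on restart the wrapper resumes the task from its checkpoint but forgets the elapsed time already accumulated, so the reported run time only covers the tail of the computation.

```python
# Hypothetical sketch of why a ~150k-second task can be reported as
# ~37.5k seconds: the old wrapper zeroed its elapsed-time counter on
# restart instead of restoring the value saved with the checkpoint.
def reported_runtime(total_secs, restart_fraction, restores_elapsed):
    """Run time the wrapper reports for a task that restarts once."""
    before = total_secs * restart_fraction   # work done before the restart
    after = total_secs - before              # work done after resuming
    return after + (before if restores_elapsed else 0.0)

print(reported_runtime(150_000, 0.75, restores_elapsed=False))  # → 37500.0 (bug)
print(reported_runtime(150_000, 0.75, restores_elapsed=True))   # → 150000.0 (correct)
```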
____________
My stats
Badge score: 1*1 + 5*1 + 8*3 + 9*11 + 10*1 + 11*1 + 12*3 = 186
pschoefer Volunteer developer Volunteer tester
Joined: 20 Sep 05 Posts: 663 ID: 845 Credit: 2,250,752,061 RAC: 1,019,651
Honza wrote: It results in an unbelievably low run time, and even lower credit, although the real run time was in the expected ~150k-second range.
This will also be a problem as soon as we have an llrCUDA ready for BOINC... with the current credit system, we'll have high credits (if two CPU results validate against each other), medium credits (CPU vs. GPU), and low credits (two GPUs) for similar WUs.
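A minimal sketch of that three-tier outcome, assuming a simplified version of the old claim-based BOINC scheme in which granted credit is the mean of the two validated claims (the claim values are illustrative, not real PrimeGrid numbers):

```python
# Simplified model of runtime-based credit granting (assumption: the
# grant is the mean of the two hosts' claims; real BOINC is more involved).
def granted(claim_a, claim_b):
    return (claim_a + claim_b) / 2

cpu_claim, gpu_claim = 3000.0, 600.0    # a faster GPU claims less credit
print(granted(cpu_claim, cpu_claim))    # → 3000.0, high (CPU vs CPU)
print(granted(cpu_claim, gpu_claim))    # → 1800.0, medium (CPU vs GPU)
print(granted(gpu_claim, gpu_claim))    # → 600.0, low (GPU vs GPU)
```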
____________
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13526 ID: 53948 Credit: 245,062,730 RAC: 273,562
Honza wrote: It results in an unbelievably low run time, and even lower credit, although the real run time was in the expected ~150k-second range.
pschoefer wrote: This will also be a problem as soon as we have an llrCUDA ready for BOINC... with the current credit system, we'll have high credits (if two CPU results validate against each other), medium credits (CPU vs. GPU), and low credits (two GPUs) for similar WUs.
I'm pretty sure SETI@home has a similar scenario -- credits based upon run times and CPU vs. GPU validations -- and I don't remember having a problem there with credits being too low on GPU apps. It's been over a year since I crunched there, however, so I might be remembering some of the details incorrectly.
____________
My lucky number is 75898524288+1
Vato Volunteer tester
Joined: 2 Feb 08 Posts: 788 ID: 18447 Credit: 295,822,861 RAC: 937,830
LLR should use an algorithmic server-side formula based on n, k, digit count, FFT size, etc. that takes the client's claim out of the picture. However, I'm not sure how easy it is to come up with a single formula that copes equally well with, for instance, PPS and SoB WUs. The aim would be to reward the work done rather than how it was done -- the same formula would apply regardless of CPU versus GPU.
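A hypothetical sketch of such a server-side formula, in the spirit of that suggestion: the cost model (roughly n squaring iterations, each an FFT-based multiplication costing about fftlen * log2(fftlen) work) and the scaling constant are assumptions for illustration, not PrimeGrid's actual formula.

```python
# Illustrative server-side credit formula (assumption, not PrimeGrid's).
import math

def credit_estimate(n, fftlen, scale=1e-9):
    """Estimated credit for an LLR test of k*2^n+/-1 at a given FFT length."""
    # ~n squaring iterations, each costing ~fftlen * log2(fftlen) work.
    work = n * fftlen * math.log2(fftlen)
    return scale * work

# Same formula for every host -- the client's claim never enters.
print(round(credit_estimate(2_000_000, 131_072)))  # → 4456 (big Cullen-sized test)
print(round(credit_estimate(500_000, 32_768)))     # → 246 (smaller PPS-sized test)
```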
____________
pschoefer Volunteer developer Volunteer tester
Joined: 20 Sep 05 Posts: 663 ID: 845 Credit: 2,250,752,061 RAC: 1,019,651
Michael Goetz wrote: I'm pretty sure SETI@home has a similar scenario -- credits based upon run times and CPU vs. GPU validations -- and I don't remember having a problem there with credits being too low on GPU apps. It's been over a year since I crunched there, however, so I might be remembering some of the details incorrectly.
S@h was granting credit based on FLOPs (counted by the app itself) when I last crunched there. But that was over two years ago, so it might have changed.
____________
Honza Volunteer moderator Volunteer tester Project scientist
Joined: 15 Aug 05 Posts: 1893 ID: 352 Credit: 3,278,241,147 RAC: 5,211,959
Vato wrote: LLR should use an algorithmic server-side formula based on n, k, digits, FFT size, etc. that takes the client claim out of the picture.
I believe a similar formula is used on PRPNet.
____________
My stats
Badge score: 1*1 + 5*1 + 8*3 + 9*11 + 10*1 + 11*1 + 12*3 = 186
John Honorary cruncher
Joined: 21 Feb 06 Posts: 2875 ID: 2449 Credit: 2,681,934 RAC: 0
Michael Goetz wrote: So, does anyone know anything about the status of GPU-LLR programming right now? It seems like someone is making some serious progress.
Good progress is being made on llrCUDA, but not to the extent you're describing. Currently, it's only slightly faster than a single CPU core. Honza's post gives the more likely explanation, although I wish yours were correct and llrCUDA really were that much faster. :)
Vato wrote: LLR should use an algorithmic server-side formula based on n, k, digits, FFT size, etc. that takes the client claim out of the picture. However, I'm not sure how easy it is to come up with a single formula that copes equally well with PPS and SoB WUs, for instance. The aim would be to reward work done rather than how it was done -- the same formula would apply regardless of CPU versus GPU.
The holy grail of LLR credit. :D
____________
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13526 ID: 53948 Credit: 245,062,730 RAC: 273,562
Honza wrote: When running the anonymous platform with the old wrapper, a client restart caused the wrong run time to be reported (i.e., it reset to zero on restart, even if the task was 75% finished and continued from a checkpoint). I even have results showing 15k, 10k, or 5k seconds on C/W LLRs. It results in an unbelievably low run time, and even lower credit, although the real run time was in the expected ~150k-second range.
::sigh:: And I was hoping you were testing what would have been a major breakthrough. :(
____________
My lucky number is 75898524288+1