Author |
Message |
Roger Volunteer developer Volunteer tester
 Send message
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,621,444 RAC: 0
                    
|
Wall-Sun-Sun - "The Sun is Hidden" - "The Dark Side of the Moon" - "Floyd Challenge" - 08-20 October
A challenge has been announced at WSS (Wall-Sun-Sun prime search), running from the 8th to the 20th of October and celebrating the Total Lunar Eclipse. I suggest we start at the first penumbral contact at 08:15:33 UTC and end at 08:15:33 UTC twelve days later.
Zhi-Wei Sun follows PrimeGrid's search efforts with great interest. His opinion is that the first WSS prime will be found by PrimeGrid: "I encourage you and those working with the PrimeGrid project to continue your search for WSS primes. It is interesting to see who will be the lucky one to find the first WSS prime."
We're currently at 5.3e16. The two-day May 2013 WSS challenge saw 41,880 WUs completed. Based on that rate we should complete 251,280 WUs in twelve days, taking the leading edge to 5.6e16.
More about the Wall-Sun-Sun prime search can be found here. News and information about the PRPNet client can be found here.
To take part, you have to activate the following lines in prpclient.ini:
server=WALLSUNSUN:100:2:prpnet.primegrid.com:13001
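For reference, a hedged annotation of that line, based on the <suffix>:<pct>:<workunits>:<server IP>:<port> breakdown quoted later in this thread (the per-batch meaning of the third field is confirmed further down; the rest is my own reading):
// server=<suffix>:<pct>:<workunits>:<server IP>:<port>
//   WALLSUNSUN           - project suffix
//   100                  - <pct>: share of your PRPNet work requested from this server
//   2                    - <workunits>: WUs downloaded per batch (GPU users often raise this; see the discussion below)
//   prpnet.primegrid.com - PRPNet server address
//   13001                - WSS port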
Stats will be available at the well-known place here.
All previous PRPNet challenge stats can be found here.
Good luck!
____________
|
|
|
|
For those of you using GP/PARI, you can easily check the finds and near-finds we encounter with this function:
checkWSS(p) = {
  \\ a Wall-Sun-Sun prime p satisfies F(p - (p|5)) == 0 (mod p^2), where F is the
  \\ Fibonacci sequence and (p|5) is the Legendre symbol
  if(!isprime(p) || p<=5, print("warning, p is not a prime greater than 5"); return);
  my(idx, A);
  idx = p - kronecker(p,5);
  \\ [1,1;1,0]^n = [F(n+1),F(n);F(n),F(n-1)], so the [1,1] entry of M^(idx-1) is F(idx)
  A = Mod([1,1;1,0], p^2)^(idx-1);
  A = lift(A[1,1])/p;      \\ F(idx) == A*p (mod p^2)
  if(A > p/2, A -= p);     \\ normalize A into the range (-p/2, p/2]
  [0, A]                   \\ matches the "(0 +A p)" notation used in the finds lists
}
For example, checkWSS(52803883808536447) should give [0, 502], because this is a near-Wall-Sūn-Sūn with A = +502, while checkWSS(52803883808536459) has an |A| far too large to be a near-hit: [0, -19227206626185725].
If A=0 (so [0,0]), we have found the first Wall-Sūn-Sūn prime ever discovered!
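As a small companion sketch (my own illustrative helper, not part of the official tools; the name scanWSS and the |A| <= 1000 cut-off are just assumptions for the example), you can scan an interval and print anything that would count as a near-hit:
\\ print every prime in [lo, hi] whose A value satisfies |A| <= bound
scanWSS(lo, hi, bound=1000) = {
  forprime(p = max(lo, 7), hi,
    my(A = checkWSS(p)[2]);
    if(abs(A) <= bound, print(p, " (0 ", if(A >= 0, "+", ""), A, " p)")));
}
\\ e.g. scanWSS(52803883808536000, 52803883808537000) should print the near-hit mentioned
\\ above, 52803883808536447 (0 +502 p)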
(This is similar to the GP/PARI function for Wieferichs that I posted in the September challenge thread.)
/JeppeSN |
|
|
Honza Volunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1931 ID: 352 Credit: 5,712,393,943 RAC: 1,064,047
                                   
|
We're currently at 5.2e16. The two day May 2013 WSS challenge saw 41,880 WUs completed. Based on this rate we should complete 251,280 WUs in twelve days taking the leading edge to 5.5e16.
During the latest Wieferich challenge, we were doing ~60,000 WUs a day at peak performance.
With the shorter duration of WSS WUs and the latest fix for AMD GPUs, we can do even more WUs if the participation rate stays the same.
____________
My stats
Badge score: 1*1 + 5*1 + 8*3 + 9*11 + 10*1 + 11*1 + 12*3 = 186 |
|
|
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
 Send message
Joined: 17 Oct 05 Posts: 2329 ID: 1178 Credit: 15,638,668,234 RAC: 10,105,288
                                           
|
We're currently at 5.2e16. The two day May 2013 WSS challenge saw 41,880 WUs completed. Based on this rate we should complete 251,280 WUs in twelve days taking the leading edge to 5.5e16.
During latest Wieferich challenge, we were doing ~60,000 WUs a day in peak performance.
With shorter duration of WSS WU and with latest fix for AMD GPUs, we can do even more WUs if participation rate stays the same.
There is actually one more speed-up for some machines. WSS uses far fewer CPU cores and less memory than WFS. So for machines like my dual GTX-660 box that was bottlenecked by a Core2 quad with only 8MB cache, the results are MUCH faster. WFS would run around 350 sec/unit due to the bottleneck; WSS runs at about 55 sec/unit... or almost 7 times faster!
Note, however, that WSS more efficiently uses the GPU, and this will result in some screen lag in many cases (especially on entry-level and lower mid-tier GPUs).
|
|
|
Honza Volunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1931 ID: 352 Credit: 5,712,393,943 RAC: 1,064,047
                                   
|
Note, however, that WSS more efficiently uses the GPU, and this will result in some screen lag in many cases (especially on entry-level and lower mid-tier GPUs).
Yes. Fortunately, this can be fine-tuned via the -t and -b values to fit your needs.
____________
My stats
Badge score: 1*1 + 5*1 + 8*3 + 9*11 + 10*1 + 11*1 + 12*3 = 186 |
|
|
|
Will the statistics page feature a section "(near) finds during the challenge" just like in the September challenge? I found it really useful. /JeppeSN |
|
|
Sysadm@Nbg Volunteer moderator Volunteer tester Project scientist
 Send message
Joined: 5 Feb 08 Posts: 1212 ID: 18646 Credit: 816,613,648 RAC: 185,781
                      
|
Will the statistics page feature a section "(near) finds during the challenge" just like in the September challenge? I found it really useful. /JeppeSN
I am working on it, but it isn't as easy as in September :)
____________
Sysadm@Nbg
my current lucky number: 113856050^65536 + 1
PSA-PRPNet-Stats-URL: http://u-g-f.de/PRPNet/
|
|
|
|
There is actually one more speed-up for some machines. WSS uses far fewer CPU cores and less memory than WFS. So for machines like my dual GTX-660 box that was bottlenecked by a Core2 quad with only 8MB cache, the results are MUCH faster. WFS would run around 350 sec/unit due to the bottleneck; WSS runs at about 55 sec/unit... or almost 7 times faster!
Note, however, that WSS more efficiently uses the GPU, and this will result in some screen lag in many cases (especially on entry-level and lower mid-tier GPUs).
"Some" machines is indeed an apt description. Linux users (a few different versions of ubuntu, at least) will see their CPUs max-out due to the OpenCL driver "CPU hog" misbehavior... it's the same problem discussed in one of the Genefer (OCL) threads a few months ago.
On the Wieferich challenge, on my 770 box, I ran 3 instances with threads=2 and blocks=8000 on each. This worked pretty well. Re-tuning will be needed for WSS, I suspect. Looking forward to the challenge (at least after we get TRP LLR out of the way!)
--Gary |
|
|
|
I'm trying to get my computer ready for the WSS challenge and I'm seeing some very curious behavior when I run against the WSS server: the client randomly shuts down, anywhere from 5 seconds to 2 minutes after starting. Digging into the log, I found the message "Could not find completion line in log file [wwww.log]. Assuming user stopped with ^C". Here is a snippet from the end of the log showing the error both without a work unit completion and with one:
[2014-10-01 09:32:30 PDT] PRPNet Client application v5.3.1 started
[2014-10-01 09:32:30 PDT] User name Grebuloner at email address is -removed-
[2014-10-01 09:32:35 PDT] : Could not find completion line in log file [wwww.log]. Assuming user stopped with ^C
[2014-10-01 09:32:35 PDT] Total Time: 0:00:05 Total Work Units: 0 Special Results Found: 0
[2014-10-01 09:32:35 PDT] Client shutdown complete
[2014-10-01 09:33:26 PDT] PRPNet Client application v5.3.1 started
[2014-10-01 09:33:26 PDT] User name Grebuloner at email address is -removed-
[2014-10-01 09:34:04 PDT] WALLSUNSUN: Range 52947610000000000 to 52947620000000000 completed
[2014-10-01 09:34:15 PDT] : Could not find completion line in log file [wwww.log]. Assuming user stopped with ^C
[2014-10-01 09:34:15 PDT] Total Time: 0:00:49 Total Work Units: 1 Special Results Found: 0
[2014-10-01 09:34:15 PDT] Client shutdown complete
Here's some background and info.
The server line for WSS in the prpclient.ini is:
server=WALLSUNSUN:100:2:prpnet.primegrid.com:13001
I'm using wwwwcl64.exe on a GTX580 at stock clocks with the following wwww.ini settings:
blocks=3000
threads=8
I tested with threads=16 to match the number of SMs and was rewarded in the middle of my second WU with a driver reset (344.11). Performance on the first was the same as with 8 threads. threads=4 results in lower performance and the same log file error. Playing around with the block size either results in a crash (6000) or lower performance with the same error (1000).
When it completes work, there is no reporting message like with Wieferich (which runs fine at the same settings), and my p/sec is about 1/7th of it as well: 6.8M for WSS vs 50M for Wieferich, while compute times are more than half those of Wieferich (38 vs 55 sec). GPU-Z shows utilization generally hovering at 95%-99%, and temps are no more than 70C.
So, three things here. Most importantly, how do I get WSS to keep running, and is it reporting results? And what settings can you recommend so that it actually runs 7 times faster (or nearly so), as suggested in the challenge thread?
____________
Eating more cheese on Thursdays. |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13804 ID: 53948 Credit: 345,369,032 RAC: 1,967
                              
|
It sounds like WSS is crashing. The most obvious question is whether the GPU is overclocked (including factory overclocked). If it is, I'd try running at stock clock rates, or even below, to see if that makes the problem go away.
____________
My lucky number is 75898^524288+1 |
|
|
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
 Send message
Joined: 17 Oct 05 Posts: 2329 ID: 1178 Credit: 15,638,668,234 RAC: 10,105,288
                                           
|
...Digging into the log, I found the message "Could not find completion line in log file [wwww.log]. Assuming user stopped with ^C". Here is a snippet from the end of the log showing error without workunit completion and with it...
There should be a blank text file named wwww.log in your prpnet directory where you are running WSS. If it is not there, create one using a plain text editor and save it there.
my p/sec is about 1/7th of it as well. 6.8M for WSS vs 50M for Wieferich, compute times are more than half that of Wieferich (38 vs 55 sec). GPU-Z has utilization generally hovering 95%-99% and temps are no more than 70C.
This is normal. WSS p/sec rates as reported are substantially lower than they are for WFS. For example, my GTX 645 runs at 17M p/sec on WFS (roughly 150 sec/unit) and at 2.7M p/sec on WSS (roughly 100 sec/unit).
|
|
|
|
I had it at stock previously (772 MHz); downclocked to 700, same error; at 600 it's now working. I will slowly raise the clocks until I find the unstable point, as this is substantially lower performance. Thanks for the suggestion!
One other problem solved: it is reporting work... after a morning of completing, restarting and errors, it is now reporting completed WUs after each task, and it shot off all the original never-reported results.
____________
Eating more cheese on Thursdays. |
|
|
Honza Volunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1931 ID: 352 Credit: 5,712,393,943 RAC: 1,064,047
                                   
|
I'm using wwwwcl64.exe on a GTX580 at stock clocks with the following wwww.ini settings:
blocks=3000
threads=8
GPU-Z has utilization generally hovering 95%-99% and temps are no more than 70C.
-t4, -b1024, stock speed at 772MHz, 36secs - which is all expected.
Temps are ~89-90C, fans 75% with ambient ~25C.
Stable so far but temps are high. I might need to lower t or b to have lower GPU utilization and temps...
____________
My stats
Badge score: 1*1 + 5*1 + 8*3 + 9*11 + 10*1 + 11*1 + 12*3 = 186 |
|
|
|
Temps are ~89-90C, fans 75% with ambient ~25C.
Stable so far but temps are high. I might need to lower t or b to have lower GPU utilization and temps...
Toasty! After some further optimization testing, -t8 -b1024 seems to be my best stable performance mark at around 37-39 sec/WU. I had to go all the way down to -t1 to get a temperature reduction from 71C to 67C (ambient 12C), with a WU time of 56s. Good luck with those temps on yours; at least they're in the safe range.
____________
Eating more cheese on Thursdays. |
|
|
|
I am moving my laptop GPU (nVidia 540M) from Collatz and PPS_sieve to WSS for the upcoming challenge. What are the default thread and block values for WSS (that is, without changing wwww.ini, i.e., with those lines commented out in the file)? And does anyone have recommendations for thread and block sizes for a 540M GPU? |
|
|
|
blocks=8000
threads=8
On a 660Ti this looks fine: GPU load up to 100%, about 40 secs/WU, 63°, ...
But it is not working (at all) on the GTX 970.
Does anyone have WSS experience with the 780Ti GV-N78TWF3-3GD?
If yes, I might use these cards instead of the 970s for the challenge.
Best,
Philippe |
|
|
|
To take part, you have to activate the following lines in prpclient.ini:
server=WALLSUNSUN:100:2:prpnet.primegrid.com:13001
Any reason why you set "workunits" in <suffix>:<pct>:<workunits>:<server IP>:<port> to 2? In earlier challenges (non-WSS PRPNet challenges) you suggested to use 1.
Also, why are the work units in WSS so small? Each work unit covers an interval of length 1e10. If you made longer intervals (for example of length 1e11), less internet traffic would be needed, and maybe less load on your PRPNet server as well.
/JeppeSN |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13804 ID: 53948 Credit: 345,369,032 RAC: 1,967
                              
|
Also, why are the work units in WSS so small? Each work unit covers an interval of length 1e10. If you made longer intervals (for example of length 1e11), less internet traffic would be needed, and maybe less load on your PRPNet server as well.
/JeppeSN
Server load isn't (usually) a problem, and we just did a massive upgrade to the PRPNet server, doubling the memory and putting the database on SSDs.
Also, remember that people run this on CPUs too. While this may take less than a minute on a fast GPU, it takes quite a while on a CPU core.
____________
My lucky number is 75898^524288+1 |
|
|
|
Do you know if WSS can run on 970/980 ? |
|
|
|
Do you know if WSS can run on 970/980 ?
If you run...
wwwwcl64.exe -l
from a command prompt in the directory that the exe is in, does it see the card?
There is no reason I know of that those cards would not work.
Send a couple my way and I can test them for you. ;)
____________
|
|
|
|
;)
Will test them again and post the error message.
EDIT : It's working - My mistake.
27 seconds - GPU @ 60° -
wallsunsun_threads=8
wallsunsun_blocks=3000
However, it runs on 1 card only, although
// platform and device specify the platform and device to run on
platform=2
device=0
device=1
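One possible workaround, sketched from the multi-instance setup described elsewhere in this thread (it is an assumption on my part that each wwwwcl instance drives a single device= entry): keep device=0 here, and run a second prpclient instance from its own folder whose wwww.ini selects the other card, e.g.:
// wwww.ini in the second prpclient folder (assumption: one device per client instance)
platform=2
device=1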
|
|
|
|
A lot of work has been done in the time leading up to the challenge start, yet no near-hits have been found since September 29. Surely we will find some near-hits during the challenge...
Less than half an hour to the start.
/JeppeSN |
|
|
|
User composite seems to be way faster than everybody else in the beginning of this race. However, his victory is not a given yet.
(Has he been crunching these work units before the start of the competition and held back the results in order to return a bunch of "old" results just after the challenge opened?)
/JeppeSN |
|
|
|
tera3@EAnewsplus found the near-hit 54681487403236027 (0 +853 p). Clearly it is 7 (mod 10), so 2 (mod 5).
/JeppeSN
Addition: Sysadm@Nbg, the work you described above seems successful; tera3's near-find is on the challenge stats page already.
New addition: This find was returned during the totality of the lunar eclipse, and is now known to be the only find made during the total phase of this astronomical event. |
|
|
composite Volunteer tester Send message
Joined: 16 Feb 10 Posts: 1022 ID: 55391 Credit: 888,938,802 RAC: 133,359
                       
|
Not holding back results, just scheduling to start earlier. It was a lot more work to check completion rates and calculate starting times. The lead will be contested by a few people currently focussed on the BOINC challenge, in a day or two. |
|
|
|
Hello !
Just to check if I need to amend the settings, what are your average WU's lengths ?
Here : About 48 seconds on 660Ti and about 28 seconds on 970 G1
Thank You
Philippe |
|
|
|
Hello !
Just to check if I need to amend the settings, what are your average WU's lengths ?
Here : About 48 seconds on 660Ti and about 28 seconds on 970 G1
Thank You
Philippe
I think you should be able to do a bit better on the 660Ti.
Try these settings in the wwww.ini file:
wallsunsun_threads=7
wallsunsun_blocks=2048
____________
676754^262144+1 is prime |
|
|
|
Thank You
I will try with these settings. |
|
|
|
[AF%3EAmis_des_Lapins]_Phil1966, you found a near-Wall-Sūn-Sūn: 54794048128499083 (0 +289 p). It is 3 (mod 5). /JeppeSN |
|
|
Sysadm@Nbg Volunteer moderator Volunteer tester Project scientist
 Send message
Joined: 5 Feb 08 Posts: 1212 ID: 18646 Credit: 816,613,648 RAC: 185,781
                      
|
A late welcome to the challenge!
Sadly, I was offline and will be again soon,
but as far as I can tell my half-automatic scripting and the well-prepared PHP pages work as they should (thanks for the feedback, JeppeSN).
Have fun, and I hope to see you later in the challenge, too.
____________
Sysadm@Nbg
my current lucky number: 113856050^65536 + 1
PSA-PRPNet-Stats-URL: http://u-g-f.de/PRPNet/
|
|
|
|
[AF%3EAmis_des_Lapins]_Phil1966, you found a near-Wall-Sūn-Sūn: 54794048128499083 (0 +289 p). It is 3 (mod 5). /JeppeSN
Coooooooooooooool !
Thank You for advising me :)
I am happy :D |
|
|
composite Volunteer tester Send message
Joined: 16 Feb 10 Posts: 1022 ID: 55391 Credit: 888,938,802 RAC: 133,359
                       
|
zunewantan takes the lead |
|
|
Roger Volunteer developer Volunteer tester
 Send message
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,621,444 RAC: 0
                    
|
To take part, you have to activate the following lines in prpclient.ini:
server=WALLSUNSUN:100:2:prpnet.primegrid.com:13001
Any reason why you set "workunits" in <suffix>:<pct>:<workunits>:<server IP>:<port> to 2? In earlier challenges (non-WSS PRPNet challenges) you suggested to use 1.
The 2 in this example is the number of WUs to download per batch. For a GPU project like WSS I set this to 10 myself. The max is ~20 before WUs start getting dropped from each downloaded batch. If running on a CPU you may want to consider a lower number; it's up to the user what they pick.
I did a cut and paste from here. Some of the projects list 0; I'm not sure what the behaviour is then. In any case, 2 will mean lower server load than 1. Remember that in the last challenge there were too many connections at a time, and this forced an upgrade. |
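For example, a GPU cruncher following that advice would change only the third field of the server line from the first post (everything else stays the same):
server=WALLSUNSUN:100:10:prpnet.primegrid.com:13001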
|
|
|
zunewantan takes the lead
And now he found the third near-hit of this challenge, 55082473174467043 (0 +38 p). It is 3 (mod 5).
/JeppeSN |
|
|
composite Volunteer tester Send message
Joined: 16 Feb 10 Posts: 1022 ID: 55391 Credit: 888,938,802 RAC: 133,359
                       
|
To take part, you have to activate the following lines in prpclient.ini:
server=WALLSUNSUN:100:2:prpnet.primegrid.com:13001
Any reason why you set "workunits" in <suffix>:<pct>:<workunits>:<server IP>:<port> to 2? In earlier challenges (non-WSS PRPNet challenges) you suggested to use 1.
The 2 in this example is the number of WUs to download per batch. For GPU project like WSS I set this to 10 myself. Max is ~20 before WUs start getting dropped in each batch downloaded. If running on CPU you may want to consider a lower number, up to the user what they pick.
I did a cut and paste from here. Some of the projects list 0, not sure what the behaviour is then. In any case 2 will be lower Server load than 1. Remember last challenge there were too many connections at a time and this forced an upgrade.
I think what you are saying by "dropped" is that your computer doesn't have enough time to process the whole batch before the 120 hour time limit runs out. The real limit on batch size allowed by the server is 100. You can download more than this to a particular computer if you distribute them into multiple client directories. So the practical maximum is what your computer can handle in 120 hours, and depends on what else is running on it. This practical maximum typically exceeds 1000 work units for a system with 8 hyperthreads. If you do this well, you can get a head start on the challenge, but you will be quickly overtaken by more powerful systems. It does however give you a significant edge over similar speed challengers. You must cut back the batch size soon after the challenge begins, or manually intervene to force reporting of completed work units just before the end of the challenge, for the head start to have effect in the final standings. |
|
|
|
Run the "#-install-prpclient.sh" (.bat for Windows) (.command for Mac) file to build
the required folders.
1-single install
2-dual install
4-quad install
6-hex install
8-oct install
12-dodeca install
16-hexadeca install
I presume I would use '4' for my quad-core CPU?
I ask only because this particular PRPNet challenge will not run correctly for me.
Thanks
____________
PrimeSearch Team |
|
|
|
Run the "#-install-prpclient.sh" (.bat for Windows) (.command for Mac) file to build
the required folders.
1-single install
2-dual install
4-quad install
6-hex install
8-oct install
12-dodeca install
16-hexadeca install
I presume I would use '4' for my quad core cpu?
I ask only because this particular PRPNet challenge will not run correctly for me.
Thanks
Well, if memory serves me right, the "number" install is the number of instances you want to run, not the number of cores you have. E.g., I run 2 instances on a single PC, so I have 2 prpclient folders and did the 2-dual install. It will basically set up 2 folders and a batch file that will run prpclient.exe from those 2 folders. You will see 2 command windows. I've also done a single install, copied the folders, modified the prpclient.ini and wwww.ini files, and let it rip.
As for your issue: since you didn't say what was wrong or what it isn't doing correctly, it's hard to say more. |
|
|
|
I had uncommented wwwwexe=wwwwcl64.exe based on my 64-bit OS.
It didn't like that, but the batch file closed too quickly for me to see the error.
I uncommented wwwwexe=wwww.exe instead, and it is running fine now.
Thank you for the heads up on the # of instances question, Rick.
____________
PrimeSearch Team |
|
|
Roger Volunteer developer Volunteer tester
 Send message
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,621,444 RAC: 0
                    
|
I upgraded from Catalyst 14.4 to 14.9 and my average WSS WU time decreased from 90 sec to 75 sec!
Using 1 thread and 4096 blocks on an AMD 7970 GHz Edition.
See this thread:
http://www.primegrid.com/forum_thread.php?id=5157
I was requesting 25 WUs at a time and returning just 24 of them due to a sockets error in the PRPNet code. You can see if this is happening to you by looking at the Pending Test list.
It has nothing to do with expiry times. The developer of PRPNet, rogue, recommends reducing the request to 20 work units max so that you don't lose any work.
Users that currently might be having this problem on WSS:
Aillas
TrueBlue
composite
Doesn't cause problems in the long run as WUs expire and get put back in the pool. |
|
|
|
I had uncommented the wwwwexe=wwwwcl64.exe based on my 64 bit OS.
It didn't like that, but the batch file closed too quickly to see the error.
I uncommented wwwwexe=wwww.exe instead, and it is running fine now.
Thank you for the heads up on the # of instances question, Rick.
The wwwwcl64.exe is for GPUs; neither of your computers has a GPU. For CPUs you need to run wwww.exe. |
|
|
|
I lent my GPU to a friend to try a crossfire setup, and it didn't occur to me when editing the ini.
I shall reinstall it, fine sir. Thank you
____________
PrimeSearch Team |
|
|
|
I lent my GPU to a friend to try a crossfire setup, and it didn't occur to me when editing the ini.
I shall reinstall it, fine sir. Thank you
Next time there is a challenge, ask to borrow your friend's GPU. ;-)
|
|
|
|
Reinstalled GPU, upgraded to Catalyst 14.9 as Roger did, and uncommented wwwwcl64.exe
8 instances running, and chugging along nicely.
Thanks for the help guys
____________
PrimeSearch Team |
|
|
|
Stojag found 55820727418183613 (0 -778 p). Again it is 3 (mod 5).
/JeppeSN |
|
|
Roger Volunteer developer Volunteer tester
 Send message
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,621,444 RAC: 0
                    
|
We're currently at 5.3e16. The two day May 2013 WSS challenge saw 41,880 WUs completed. Based on this rate we should complete 251,280 WUs in twelve days taking the leading edge to 5.6e16.
We've completed 132,471 WUs in slightly less than two days, which is easily triple the work rate of the May 2013 WSS challenge.
There was also the matching Total Lunar Eclipse WSS challenge in April this year, when we did 815,355 WUs (with 20 near-hits). That's about the same WU rate as we're pulling now.
At this rate we should hit 6.3e16 by the end of the challenge.
I did a little research on implementing the Wilson prime search. Apparently you would need 256 GB of memory to run efficiently:
http://www.primegrid.com/forum_thread.php?id=3609&nowrap=true#40411
http://www.mersenneforum.org/showthread.php?t=16028&page=13
Wolstenholme might be possible:
http://en.wikipedia.org/wiki/Wolstenholme_prime
You can evaluate the binomial with large numbers using the natural logarithm of the gamma function. It is implemented in a lot of maths libraries. |
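Regarding the binomial evaluation mentioned just above, here is a rough GP/PARI illustration of both points (my own sketch, not PrimeGrid code; the helper names are made up, and a real search would compute the residue far more cleverly than by building the full binomial):
\\ Wolstenholme test: a prime p is a Wolstenholme prime when binomial(2p,p) == 2 (mod p^4)
isWolstenholme(p) = lift(Mod(binomial(2*p, p), p^4)) == 2;
isWolstenholme(16843)   \\ 1: 16843 is one of the two known Wolstenholme primes
\\ the log-gamma function gives the size of the binomial without computing it:
\\ log C(2n,n) = lngamma(2n+1) - 2*lngamma(n+1)   (magnitude only, not the residue mod p^4)
logbinom(n) = lngamma(2*n + 1) - 2*lngamma(n + 1);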
|
|
Roger Volunteer developer Volunteer tester
 Send message
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,621,444 RAC: 0
                    
|
The last of the TRP CPU tasks from the BOINC challenge have been finishing on my PC.
Running WSS on the 7970 GPU, set to use one thread only, 4096 blocks. 6-core CPU, running Catalyst 14.9.
4 CPU cores on TRP: average 71 sec per WSS WU
3 CPU cores on TRP: average 64 sec per WSS WU!
2 CPU cores on TRP: average 58 sec per WSS WU!!
0 CPU cores on TRP: average 52 sec per WSS WU OMG!
Has anyone else seen this unexpectedly awesome speedup when they stop all other work on all their CPU cores? |
|
|
|
We're currently at 5.3e16. The two day May 2013 WSS challenge saw 41,880 WUs completed. Based on this rate we should complete 251,280 WUs in twelve days taking the leading edge to 5.6e16.
We've completed 132,471 WUs in slightly less than two days [...]
By the way, the leading edge has already passed the 5.6e16 estimate from the post before the challenge start. /JeppeSN |
|
|
|
See this thread:
http://www.primegrid.com/forum_thread.php?id=5157
I was requesting 25 WUs at a time and returning just 24 of them due to sockets error in the PRPNet code. You can see if this is happening to you by looking at the Pending Test list.
Nothing to do with expiry times. Developer of PRPNet, rogue, recommends reducing it to 20 workunits max so that you don't lose any work.
I am requesting 100 WUs at a time but I haven't seen a single error yet. |
|
|
|
We're currently at 5.3e16. The two day May 2013 WSS challenge saw 41,880 WUs completed. Based on this rate we should complete 251,280 WUs in twelve days taking the leading edge to 5.6e16.
We've completed 132,471 WUs in slightly less than two days [...]
By the way, leading edge has already passed the 5.6e16 from the post from before the challenge start. /JeppeSN
Absolutely fantastic!
____________
|
|
|
|
False positive: 56901713453025043 (0 -113571097284905 p)
/JeppeSN |
|
|
|
Today has also seen no fewer than three genuine near-Wall-Sūn-Sūns:
zunewantan: 57481225651950193 (0 -379 p)
pschoefer: 57491949281466541 (0 -490 p)
zunewantan: 57550542208424521 (0 -871 p)
The first one of these has the same "client" ID as the false positive earlier today, so may come from the same machine. Of course I have verified (with my GP/PARI function) that the three near-hits here are real.
/JeppeSN |
|
|
|
New false positive: 57847000068372089 (0 +17069604295500148 p)
/JeppeSN |
|
|
composite Volunteer tester Send message
Joined: 16 Feb 10 Posts: 1022 ID: 55391 Credit: 888,938,802 RAC: 133,359
                       
|
There is definitely something wonky with the challenge stats. I previously noted a stats decrease for another user, and this time I saw it happen to me. At the end of this challenge we need to compare client logs with the challenge stats to find out what is going on. |
|
|
Sysadm@Nbg Volunteer moderator Volunteer tester Project scientist
 Send message
Joined: 5 Feb 08 Posts: 1212 ID: 18646 Credit: 816,613,648 RAC: 185,781
                      
|
There is definitely something wonky with challenge stats. I previously noted stats decrease for another user, and this time I saw it happen to me. At the end of this challege we need to compare client logs with challenge stats to find out what is going on.
you can find some diagnostic output here
format is:
line 1: test-now ./. test-at-start ./. 0 (formerly used for some tests with pending work)
line 2: points-now ./. points-at-start ./. 0 (formerly used for some tests with pending work)
line 3: generated SQL-Statement for the stats table
____________
Sysadm@Nbg
my current lucky number: 113856050^65536 + 1
PSA-PRPNet-Stats-URL: http://u-g-f.de/PRPNet/
|
|
|
composite Volunteer tester Send message
Joined: 16 Feb 10 Posts: 1022 ID: 55391 Credit: 888,938,802 RAC: 133,359
                       
|
There is definitely something wonky with challenge stats. I previously noted stats decrease for another user, and this time I saw it happen to me. At the end of this challege we need to compare client logs with challenge stats to find out what is going on.
you can find some diagnostic output here
format is:
line 1: test-now ./. test-at-start ./. 0 (formerly used for some tests with pending work)
line 2: points-now ./. points-at-start ./. 0 (formerly used for some tests with pending work)
line 3: generated SQL-Statement for the stats table
The query doesn't look incorrect, given the numbers. I theorize that it happens when someone overtakes another in the PrimeGrid user stats (you are scraping that page for numbers, right?), so that an array index changes for a user. That would be hard to verify with real-time numbers, so you would have to test it with a dummy stats page. Or just inspect the code. |
|
|
Honza Volunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1931 ID: 352 Credit: 5,712,393,943 RAC: 1,064,047
                                   
|
Latest found by brinktastee 59946565017892219 (0 +442 p)
Leading edge is at 6.0e16
____________
My stats
Badge score: 1*1 + 5*1 + 8*3 + 9*11 + 10*1 + 11*1 + 12*3 = 186 |
|
|
|
Latest found by brinktastee 59946565017892219 (0 +442 p)
Leading edge is at 6.0e16
My bad luck streak may be over!
____________
|
|
|
|
We're currently at 5.3e16. The two day May 2013 WSS challenge saw 41,880 WUs completed. Based on this rate we should complete 251,280 WUs in twelve days taking the leading edge to 5.6e16.
After 8 days, the leading edge is at 6.0e16. 6 days left. We should finish at 6.5e16!!
The 5.6e16 target has been smashed...
The only thing missing is finding a WSS prime to round this challenge off perfectly.
____________
Badge Score: 1*2 + 1*5 + 12*6 + 4*7 + 2*8 + 1*9 = 132 |
|
|
|
Finds 60627446670169549 and 60959756688913409 are false positives (see GP/PARI function in my first post above). /JeppeSN |
|
|
|
I was wondering if it is possible to reduce the priority of the program on a GPU.
I am running one GPU (NVIDIA) and using wwwwcl64.exe with
wallsunsun_threads=7
wallsunsun_blocks=2048
While it is running well, it really slows my PC down and I am unable to do useful work, so I have to shut down the client and run it during the night. I have an 8-core CPU and am running only one instance, so CPU utilization is not an issue. The GPU is dragging even when I just try to open a web browser.
I was hoping it would run at low priority so that my PC stays responsive: it could use the spare cycles, and when I am not using the PC (which is quite a lot), it can take over.
Thanks
|
|
|
|
The only way I can think of to do that is to "de-tune" the values in wwww.ini. That is, reduce the values of blocks (try that first) and threads. Of course you won't be crunching WSS as fast, but maybe you can find an acceptable trade-off between crunching and screen lag by playing with the numbers.
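A minimal sketch of what that de-tuning might look like in wwww.ini, using the key names that appear elsewhere in this thread (the values are only illustrative starting points and are very machine-dependent):
// de-tuned for a responsive desktop; raise these again for maximum throughput
wallsunsun_threads=4
wallsunsun_blocks=512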
--Gary |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13804 ID: 53948 Credit: 345,369,032 RAC: 1,967
                              
|
There's no such thing as "priority" on a GPU, so you can't directly tell the computer to let it run your GUI instead of crunching the way you can with a CPU program. However, as Gary said, you can intentionally "de-tune" the WSS parameters to let the GPU pause more often and therefore be more responsive to the user.
Almost all GPU programs have some sort of tuning parameters, although some of them don't permit the user to modify the parameters. With most programs you can use these parameters to vary the behavior between "maximum performance" and "no screen lag".
____________
My lucky number is 75898^524288+1 |
|
|
|
Thanks Gary & Michael.
That worked. I dropped the blocks to 512 and the screen lag has become very manageable and hardly noticeable.
The speed decreased from 1.3M p/s to about 1.15M p/s -- which is not too bad.
|
|
|
composite Volunteer tester Send message
Joined: 16 Feb 10 Posts: 1022 ID: 55391 Credit: 888,938,802 RAC: 133,359
                       
|
See this thread:
http://www.primegrid.com/forum_thread.php?id=5157
I was requesting 25 WUs at a time and returning just 24 of them due to sockets error in the PRPNet code. You can see if this is happening to you by looking at the Pending Test list.
Nothing to do with expiry times. Developer of PRPNet, rogue, recommends reducing it to 20 workunits max so that you don't lose any work.
Users that currently might be having this problem on WSS:
Aillas
TrueBlue
composite
Doesn't cause problems in the long run as WUs expire and get put back in the pool.
I just saw a problem in a client's log, but not exactly the problem above. A client was set to request 100 WUs, and it was working fine. Then in one batch, only 25 WUs were received by the client, which processed and returned those 25 WUs. The server thinks the client received 100 units, so 75 units are still left on the server's pending list. No client processing time was lost. Here is the relevant portion of the client log.
[2014-10-15 17:45:58 CDT] WALLSUNSUN: INFO: All 100 test results were accepted
[2014-10-15 17:45:59 CDT] WALLSUNSUN: Getting work from server prpnet.primegrid.com at port 13001
[2014-10-15 17:45:59 CDT] WALLSUNSUN: PRPNet server is version 5.2.8
[2014-10-15 17:47:22 CDT] WALLSUNSUN: Range 61498910000000000 to 61498920000000000 completed
[2014-10-15 17:48:43 CDT] WALLSUNSUN: Range 61498920000000000 to 61498930000000000 completed
[2014-10-15 17:50:06 CDT] WALLSUNSUN: Range 61498930000000000 to 61498940000000000 completed
[2014-10-15 17:51:28 CDT] WALLSUNSUN: Range 61498940000000000 to 61498950000000000 completed
[2014-10-15 17:52:50 CDT] WALLSUNSUN: Range 61498950000000000 to 61498960000000000 completed
[2014-10-15 17:54:12 CDT] WALLSUNSUN: Range 61498960000000000 to 61498970000000000 completed
[2014-10-15 17:55:35 CDT] WALLSUNSUN: Range 61498970000000000 to 61498980000000000 completed
[2014-10-15 17:56:57 CDT] WALLSUNSUN: Range 61498980000000000 to 61498990000000000 completed
[2014-10-15 17:58:19 CDT] WALLSUNSUN: Range 61498990000000000 to 61499000000000000 completed
[2014-10-15 17:59:39 CDT] WALLSUNSUN: Range 61499000000000000 to 61499010000000000 completed
[2014-10-15 18:01:00 CDT] WALLSUNSUN: Range 61499010000000000 to 61499020000000000 completed
[2014-10-15 18:02:21 CDT] WALLSUNSUN: Range 61499020000000000 to 61499030000000000 completed
[2014-10-15 18:03:44 CDT] WALLSUNSUN: Range 61499030000000000 to 61499040000000000 completed
[2014-10-15 18:05:06 CDT] WALLSUNSUN: Range 61499040000000000 to 61499050000000000 completed
[2014-10-15 18:06:28 CDT] WALLSUNSUN: Range 61499050000000000 to 61499060000000000 completed
[2014-10-15 18:07:47 CDT] WALLSUNSUN: Range 61499060000000000 to 61499070000000000 completed
[2014-10-15 18:09:10 CDT] WALLSUNSUN: Range 61499070000000000 to 61499080000000000 completed
[2014-10-15 18:10:31 CDT] WALLSUNSUN: Range 61499080000000000 to 61499090000000000 completed
[2014-10-15 18:11:53 CDT] WALLSUNSUN: Range 61499090000000000 to 61499100000000000 completed
[2014-10-15 18:13:15 CDT] WALLSUNSUN: Range 61499100000000000 to 61499110000000000 completed
[2014-10-15 18:14:35 CDT] WALLSUNSUN: Range 61499110000000000 to 61499120000000000 completed
[2014-10-15 18:15:56 CDT] WALLSUNSUN: Range 61499120000000000 to 61499130000000000 completed
[2014-10-15 18:17:19 CDT] WALLSUNSUN: Range 61499130000000000 to 61499140000000000 completed
[2014-10-15 18:18:43 CDT] WALLSUNSUN: Range 61499140000000000 to 61499150000000000 completed
[2014-10-15 18:20:05 CDT] WALLSUNSUN: Range 61499150000000000 to 61499160000000000 completed
[2014-10-15 18:20:05 CDT] Total Time: 41:09:24 Total Work Units: 1763 Special Results Found: 0
[2014-10-15 18:20:05 CDT] WALLSUNSUN: Returning work to server prpnet.primegrid.com at port 13001
[2014-10-15 18:20:06 CDT] WALLSUNSUN: INFO: Test for range 61498910000000000:61498920000000000 was accepted
[2014-10-15 18:20:06 CDT] WALLSUNSUN: INFO: Test for range 61498920000000000:61498930000000000 was accepted
[2014-10-15 18:20:06 CDT] WALLSUNSUN: INFO: Test for range 61498930000000000:61498940000000000 was accepted
[2014-10-15 18:20:06 CDT] WALLSUNSUN: INFO: Test for range 61498940000000000:61498950000000000 was accepted
[2014-10-15 18:20:06 CDT] WALLSUNSUN: INFO: Test for range 61498950000000000:61498960000000000 was accepted
[2014-10-15 18:20:06 CDT] WALLSUNSUN: INFO: Test for range 61498960000000000:61498970000000000 was accepted
[2014-10-15 18:20:07 CDT] WALLSUNSUN: INFO: Test for range 61498970000000000:61498980000000000 was accepted
[2014-10-15 18:20:07 CDT] WALLSUNSUN: INFO: Test for range 61498980000000000:61498990000000000 was accepted
[2014-10-15 18:20:07 CDT] WALLSUNSUN: INFO: Test for range 61498990000000000:61499000000000000 was accepted
[2014-10-15 18:20:07 CDT] WALLSUNSUN: INFO: Test for range 61499000000000000:61499010000000000 was accepted
[2014-10-15 18:20:07 CDT] WALLSUNSUN: INFO: Test for range 61499010000000000:61499020000000000 was accepted
[2014-10-15 18:20:07 CDT] WALLSUNSUN: INFO: Test for range 61499020000000000:61499030000000000 was accepted
[2014-10-15 18:20:08 CDT] WALLSUNSUN: INFO: Test for range 61499030000000000:61499040000000000 was accepted
[2014-10-15 18:20:08 CDT] WALLSUNSUN: INFO: Test for range 61499040000000000:61499050000000000 was accepted
[2014-10-15 18:20:08 CDT] WALLSUNSUN: INFO: Test for range 61499050000000000:61499060000000000 was accepted
[2014-10-15 18:20:08 CDT] WALLSUNSUN: INFO: Test for range 61499060000000000:61499070000000000 was accepted
[2014-10-15 18:20:08 CDT] WALLSUNSUN: INFO: Test for range 61499070000000000:61499080000000000 was accepted
[2014-10-15 18:20:08 CDT] WALLSUNSUN: INFO: Test for range 61499080000000000:61499090000000000 was accepted
[2014-10-15 18:20:09 CDT] WALLSUNSUN: INFO: Test for range 61499090000000000:61499100000000000 was accepted
[2014-10-15 18:20:09 CDT] WALLSUNSUN: INFO: Test for range 61499100000000000:61499110000000000 was accepted
[2014-10-15 18:20:09 CDT] WALLSUNSUN: INFO: Test for range 61499110000000000:61499120000000000 was accepted
[2014-10-15 18:20:09 CDT] WALLSUNSUN: INFO: Test for range 61499120000000000:61499130000000000 was accepted
[2014-10-15 18:20:09 CDT] WALLSUNSUN: INFO: Test for range 61499130000000000:61499140000000000 was accepted
[2014-10-15 18:20:09 CDT] WALLSUNSUN: INFO: Test for range 61499140000000000:61499150000000000 was accepted
[2014-10-15 18:20:10 CDT] WALLSUNSUN: INFO: Test for range 61499150000000000:61499160000000000 was accepted
[2014-10-15 18:20:10 CDT] WALLSUNSUN: INFO: All 25 test results were accepted
[2014-10-15 18:20:10 CDT] WALLSUNSUN: Getting work from server prpnet.primegrid.com at port 13001
|
|
|
|
Finds 60627446670169549 and 60959756688913409 are false positives (see GP/PARI function in my first post above). /JeppeSN
And 61618050087323003 is false as well (should be 61618050087323003 (0 +7487522768260222 p)).
Could we somehow stop these bad results from being treated as true WSS finds? Port 13001 currently claims we have found 7 Wall-Sūn-Sūns. In fact, none are known.
/JeppeSN |
|
|
|
Keep the number of work units requested to something like 20 at the most. The server imposes a limit (might be 25), and at one point there was a bug that caused clients that grabbed exactly that number of units to lose credit for one of them. They get recycled, but still a waste of time. Not sure if this bug has been fixed or not.
Just play it safe - take 10 or 20 at a time; the network overhead will be minuscule.
--Gary |
|
|
JimB Honorary cruncher Send message
Joined: 4 Aug 11 Posts: 918 ID: 107307 Credit: 977,945,376 RAC: 19
                     
|
Finds 60627446670169549 and 60959756688913409 are false positives (see GP/PARI function in my first post above). /JeppeSN
And 61618050087323003 is false as well (should be 61618050087323003 (0 +7487522768260222 p)).
Could we somehow stop these bad results from being treated as true WSS finds? Port 13001 currently claims we have found 7 Wall-Sūn-Sūns. In fact, none are known.
/JeppeSN
I'll be (carefully) looking into erasing those results and causing those candidates to be retested. It's easy to make things worse, and having lived through that experience once, I have no wish to repeat it. I'll do the same on port 13000. |
|
|
|
Finds 60627446670169549 and 60959756688913409 are false positives (see GP/PARI function in my first post above). /JeppeSN
And 61618050087323003 is false as well (should be 61618050087323003 (0 +7487522768260222 p)).
Could we somehow stop these bad results from being treated as true WSS finds? Port 13001 currently claims we have found 7 Wall-Sūn-Sūns. In fact, none are known.
/JeppeSN
I'll be (carefully) looking into erasing those results and causing those candidates to be retested. It's easy to make things worse and having lived through that experience once I have no wish to repeat it. I'll do the same on port 13000.
Of course, testing an entire WU range, like 6161805e10 - 6161806e10, takes a short while whereas testing one single number, like 61618050087323003, is instantaneous.
All 7 false finds are by zunewantan. He has 2 from an earlier event which are "hidden", and 5 from the current challenge (October 2014), see the bottom of the user finds page.
The same user does not appear to produce false near-hits (he has found many valid near-WSS), only false full-hits.
Here on WSS, the false finds appear without parenthesis, for example:
61618050087323003
where the correct would be:
61618050087323003 (0 +7487522768260222 p)
On the other hand, on WFS, the false finds (which have not been by that same user) appear with impossible parenthesis:
214714110964439699 (+16384 -29001 p)
(the first number in the parenthesis must be ±1 in WFS, and 0 in WSS) instead of the correct:
214714110964439699 (-1 +12113703224047600 p)
/JeppeSN |
|
|
|
If there are "false positive" hits as we have seen, is it not also probable that there are "false negative" misses?
I'm also curious as to the mathematical significance of a "near find" on either Wieferich or WSS. We're looking for primes with a certain characteristic, not primes that "almost" make it, right? Seems just like a consolation prize. That being said, I continue to crunch WSS full steam ahead. :-)
--Gary |
|
|
|
Hello !
It looks like some crunchers have turned on the "afterburner" ;) :D
Have Fun
Philippe
NB : http://scholar.google.fr/scholar?oe=utf-8&rls=org.mozilla:fr:official&client=firefox-a&channel=sb&gfe_rd=cr&um=1&ie=UTF-8&lr=&q=related:vp-9i1HsRwoU8M:scholar.google.com/
Kuo, J., & Fu, H. L. (2010). ON NEAR RELATIVE PRIME NUMBER IN A SEQUENCE OF POSITIVE INTEGERS. Taiwanese Journal of Mathematics, 14(1), pp-123.
https://www.princeton.edu/~achaney/tmve/wiki100k/docs/Prime_number_theorem.html
Maybe these links might help to understand "near prime numbers"... but I'm not sure, as I am not a mathematician :/ |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13804 ID: 53948 Credit: 345,369,032 RAC: 1,967
                              
|
If there are "false positive" hits as we have seen, is it not also probable that there are "false negative" misses?
That is true, but the same can also be said about LLR, PFGW, and Genefer.
____________
My lucky number is 75898^524288+1 |
|
|
|
I think I got another near!
____________
|
|
|
Roger Volunteer developer Volunteer tester
 Send message
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,621,444 RAC: 0
                    
|
The leading edge has now passed 6.4e16. We were at 5.3e16 before the start of the challenge, so the leading edge has advanced 20% during the last 10 days. Impressive effort! |
|
|
Sysadm@Nbg Volunteer moderator Volunteer tester Project scientist
 Send message
Joined: 5 Feb 08 Posts: 1212 ID: 18646 Credit: 816,613,648 RAC: 185,781
                      
|
Announcement:
I will eliminate the false positives from the challenge stats after the challenge.
They will stay in the find list until they are hidden server-side.
____________
Sysadm@Nbg
my current lucky number: 113856050^65536 + 1
PSA-PRPNet-Stats-URL: http://u-g-f.de/PRPNet/
|
|
|
|
Some new finds have been made! I have checked all that appear with a (0 ±XXX p) parenthesis, and they are all valid near-WSS. And I have checked all that appear with no parenthesis (seemingly claiming to be true WSS), and they are all false finds (i.e. they are not even near-WSS).
However, on our challenge stats page the prime 29387058234405803 (0 -558 p) is included. But this prime was reported back in April, half a year before the current challenge. Certainly the "Completed Thru" measure had passed 3e16 a long time before October. So there seems to be a bug somewhere?
/JeppeSN |
|
|
composite Volunteer tester Send message
Joined: 16 Feb 10 Posts: 1022 ID: 55391 Credit: 888,938,802 RAC: 133,359
                       
|
It looks like some crunchers have turned on the "afterburner" ;) :D
I should be able to hold onto 5th place, unless Gary can pull a couple of thousand units from a rat's ass.
|
|
|
|
The challenge is over with 8 hours remaining... what's up? |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13804 ID: 53948 Credit: 345,369,032 RAC: 1,967
                              
|
The challenge is over with 8 hours remaining...whats up?
Time zone.
____________
My lucky number is 75898^524288+1 |
|
|
|
From the challenge stats page:
PRPNet Wall Sun Sun Search Project Challenge (Oct 2014)
The Dark Side of the Moon
the begin is at 10/08/2014 08:15:33 UTC, the end is at 10/20/2014 08:15:33 UTC
now it is 10/20/2014 02:17 UTC
sorry, challenge is over more info in the challenge thread at primegrid.com
not a time zone issue...
edit: and from Roger
I suggest we start at the first Penumbral contact at 08:15:33 UTC and end at 08:15:33 UTC twelve days later. |
|
|
composite Volunteer tester Send message
Joined: 16 Feb 10 Posts: 1022 ID: 55391 Credit: 888,938,802 RAC: 133,359
                       
|
from the challenge statspage:
PRPNet Wall Sun Sun Search Project Challenge (Oct 2014)
The Dark Side of the Moon
the begin is at 10/08/2014 08:15:33 UTC, the end is at 10/20/2014 08:15:33 UTC
now it is 10/20/2014 02:17 UTC
sorry, challenge is over more info in the challenge thread at primegrid.com
not a time zone issue...
edit: and from Roger
I suggest we start at the first Penumbral contact at 08:15:33 UTC and end at 08:15:33 UTC twelve days later.
The challenge isn't over until the computer says so. It's still going on.
the begin is at 10/08/2014 08:15:33 UTC, the end is at 10/20/2014 08:15:33 UTC
now it is 10/20/2014 03:14 UTC
sorry, challenge is over more info in the challenge thread at primegrid.com
...
rank UID tests done Score
1 zunewantan 135,098 135,098,000
the begin is at 10/08/2014 08:15:33 UTC, the end is at 10/20/2014 08:15:33 UTC
now it is 10/20/2014 03:19 UTC
sorry, challenge is over more info in the challenge thread at primegrid.com
...
rank UID tests done Score
1 zunewantan 135,228 135,228,000
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13804 ID: 53948 Credit: 345,369,032 RAC: 1,967
                              
|
Ah, you're correct. I assumed (my error) that the challenge started and ended at midnight local time (which is UTC +0200). Chances are, so did the u-g-f.de server, given when it seemed to end.
____________
My lucky number is 75898^524288+1 |
|
|
|
It looks like some crunchers have turned on the "afterburner" ;) :D
I should be able to hold onto 5th place, unless Gary can pull a couple of thousand units from a rat's ass.
I'm still running, but not going to catch you here. I got a late start after the BOINC challenge time-overlap at the start. "Well done", and I'll see you at the next challenge! :-)
-Gary |
|
|
composite Volunteer tester Send message
Joined: 16 Feb 10 Posts: 1022 ID: 55391 Credit: 888,938,802 RAC: 133,359
                       
|
It looks like some crunchers have turned on the "afterburner" ;) :D
I should be able to hold onto 5th place, unless Gary can pull a couple of thousand units from a rat's ass.
I'm still running, but not going to catch you here. I got a late start after the BOINC challenge time-overlap at the start. "Well done", and I'll see you at the next challenge! :-)
-Gary
Thanks! Unless the rules change, I might press for 3rd or 4th place next time.
My natural position is around 8th place, but I had a head start. Not sure if I lost some credit early on due to wayward units - I thought I saw my tally drop a couple of thousand units in one 15-minute interval.
I also suspect the 30-second delay to kill prpclient may be insufficient when the system is severely bogged down with GPU saturation (and output going to the terminal), which could account for reports of a lost unit when returning about 25 of them.
|
|
|
Sysadm@Nbg Volunteer moderator Volunteer tester Project scientist
 Send message
Joined: 5 Feb 08 Posts: 1212 ID: 18646 Credit: 816,613,648 RAC: 185,781
                      
|
from the challenge statspage:
PRPNet Wall Sun Sun Search Project Challenge (Oct 2014)
The Dark Side of the Moon
the begin is at 10/08/2014 08:15:33 UTC, the end is at 10/20/2014 08:15:33 UTC
now it is 10/20/2014 02:17 UTC
sorry, challenge is over more info in the challenge thread at primegrid.com
not a time zone issue...
edit: and from Roger
I suggest we start at the first Penumbral contact at 08:15:33 UTC and end at 08:15:33 UTC twelve days later.
The challenge isn't over until the computer says so. It's still going on.
the begin is at 10/08/2014 08:15:33 UTC, the end is at 10/20/2014 08:15:33 UTC
now it is 10/20/2014 03:14 UTC
sorry, challenge is over more info in the challenge thread at primegrid.com
...
rank UID tests done Score
1 zunewantan 135,098 135,098,000
the begin is at 10/08/2014 08:15:33 UTC, the end is at 10/20/2014 08:15:33 UTC
now it is 10/20/2014 03:19 UTC
sorry, challenge is over more info in the challenge thread at primegrid.com
...
rank UID tests done Score
1 zunewantan 135,228 135,228,000
Sorry for the confusion.
There was a bug generating that statement (it declared the challenge over too soon, even though the stats were still being updated):
elseif ( time() >= mktime(10,15,33,10,8,14) AND time() < mktime(2,0,0,10,20,14))
The crontab works as announced until 08:15 UTC; the last update of the challenge stats is from that timestamp.
____________
Sysadm@Nbg
my current lucky number: 113856050^65536 + 1
PSA-PRPNet-Stats-URL: http://u-g-f.de/PRPNet/
|
|
|
Roger Volunteer developer Volunteer tester
 Send message
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,621,444 RAC: 0
                    
|
Stats are final. Congratulations to zunewantan and Aggie_The_Pew.
The top of the challenge rankings is as follows:
rank userid tests score
1 zunewantan 137,859 137,859,000
2 Scott_Brown 125,906 125,906,000
3 288larsson 125,571 125,571,000
4 brinktastee 74,412 74,412,000
5 composite 64,934 64,934,000
rank teamid tests score
1 Aggie_The_Pew 336,625 336,625,000
2 Sicituradastra. 157,051 157,051,000
3 SETI.Germany 151,191 151,191,000
Congratulations also go to tera3@EAnewsplus, [AF%3EAmis_des_Lapins]_Phil1966, zunewantan, Stojag, pschoefer, brinktastee, Scott_Brown and SysadmAtNbg for their near-finds. With 12 near-hits during the challenge, we had about a 0.6% chance of finding a true Wall-Sun-Sun prime.
See you at the next challenge!
____________
|
|
|
Sysadm@Nbg Volunteer moderator Volunteer tester Project scientist
 Send message
Joined: 5 Feb 08 Posts: 1212 ID: 18646 Credit: 816,613,648 RAC: 185,781
                      
|
Some new finds have been made! I have checked all that appear with a (0 ±XXX p) parenthesis, and they are all valid near-WSS. And I have checked all that appear with no parenthesis (seemingly claiming to be true WSS), and they are all false finds (i.e. they are not even near-WSS).
However, on our challenge stats page the prime 29387058234405803 (0 -558 p) is included. But this prime was reported back in April, half a year before the current challenge. Certainly the "Completed Thru" measure had passed 3e16 a long time before October. So there seems to be a bug somewhere?
/JeppeSN
Challenge stats are now available here
As announced, I eliminated the false positives and the earlier-reported find.
____________
Sysadm@Nbg
my current lucky number: 113856050^65536 + 1
PSA-PRPNet-Stats-URL: http://u-g-f.de/PRPNet/
|
|
|
|
This challenge has been called the October challenge, from the Roman/Christian calendar month name.
The "months" in this calendar are rather artificial, not representing actual lunations (as found by observing the sky). So we give here the name of the current month in some popular calendars that respect the phases of the moon.
In the Chinese Han calendar, this is the 9th month, Jǐuyuè (first-frost month), in the year of the (wooden) horse. A leap month will be inserted from next new moon (sort of duplicating this 9th month). In lunisolar calendars, approximately 3% of the months will need to be leap months.
In the Hebrew calendar, we currently have the 7th month, Tishri, in the year 5775 anno mundi (years since the world's creation). The year number increments at the beginning of every month of Tishri. The year 5775 is a common year (meaning that the 12th month will not be duplicated, as happens in leap years such as 5774 or 5776).
In the Islamic calendar (lunar Hijri calendar), we are in the 12th month, Dhu al-Hijjah, in the year 1435 anno Hegirae (year after the Hijra, Muhammad's migration from Mecca to Medina). We will go to 1436 when the new moon is seen in the sky in a couple of days (Islamic new year). This calendar is purely lunar, meaning that it never introduces leap months.
The PRPNet Wall-Sūn-Sūn challenge of this thread started at the middle of this lunation, the full moon (with lunar eclipse), at October 8 (UTC). The next lunation (next new moon) will start October 23, and there will be a solar eclipse at that time. It will be a partial eclipse, visible from e.g. North America. The partial solar eclipse will be at its greatest at 21:44:32 UTC this Thursday.
/JeppeSN
|
|
|
|
While the message board thread title says "October Challenge", I believe the challenge itself was styled "dark side of the moon", and was timed with the lunar eclipse at the start. While that name is of course something of a misnomer, there was at some point in the early discussion a connection with the Pink Floyd album of the same name... maybe anniversary of its release or something... I forget.
I'm just sad I didn't get even one near find :-) ... but glad brinktastee broke his bad luck streak.
--Gary |
|
|