Welcome to Summer Solstice Challenge
The longest day of the year is drawing near and to mark this turning point in the year, PrimeGrid is offering a 5 day challenge on Proth MEGA Prime Search (LLR). Only PPS MEGA is part of the challenge!
To participate in the Challenge, please select only the Proth MEGA Prime Search (LLR) project in your PrimeGrid preferences section. The challenge will begin 20 June 2016 22:34 UTC and end 25 June 2016 22:34 UTC. Application builds are available for Linux, Windows and MacIntel, in 32-bit and 64-bit versions. CPUs with AVX capabilities will be significantly faster than those without, as this instruction set allows for more computing power.
Please note the atypical start and stop times of this challenge!
ATTENTION: The primality program LLR is CPU intensive, so it is vital to have a stable system with good cooling. It does not tolerate "even the slightest of errors." Please see this post for more details on how you can "stress test" your computer. WU's will take just under 2 hours on the fastest/newest computers and 3(+) hours on slower/older computers. If your computer is highly overclocked, please consider "stress testing" it. Sieving is an excellent alternative for computers that are not able to LLR. :)
Highly overclocked Haswell (i.e., Intel Core i7, i5, and i3 4xxx) computers running the application will see the fastest times. Note that PPS is now running the latest, brand new FMA3 version of LLR, which takes full advantage of the new Haswell features. It's faster than the previous LLR app, but it draws more power and produces more heat. The new FMA3 LLR app is version 6.24. If you have a Haswell CPU, especially if it's overclocked or has overclocked memory, and you haven't run the new FMA3 LLR before, we strongly suggest running it before the challenge while monitoring the temperatures.
Please, please, please make sure your machines are up to the task.
Time zone converter:
The World Clock - Time Zone Converter
NOTE: The countdown clock on the front page uses the host computer's time. Therefore, if your computer's clock is off, the countdown clock will be off as well. For precise timing, use the UTC Time in the data section to the left of the countdown clock.
Scoring Information
Scores will be kept for individuals and teams. Only work units issued AFTER 20 June 2016 22:34 UTC and received BEFORE 23 April 2014 2014 16:16 UTC will be considered for credit. We will use the same scoring method as for BOINC credit. The only difference is that the primary and double checker of a WU will receive the same score.
Therefore, each completed WU will earn a unique score based on its n value. The higher the n, the higher the score. This is different than BOINC cobblestones! A quorum of 2 is NOT needed to award Challenge score - i.e. no double checker. Therefore, each returned result will earn a Challenge score. Please note that if the result is eventually declared invalid, the score will be removed.
For details on how the score is calculated, please see this thread.
At the Conclusion of the Challenge
We kindly ask users "moving on" to ABORT their WU's instead of DETACHING, RESETTING, or PAUSING.
ABORTING WU's allows them to be recycled immediately; thus a much faster "clean up" to the end of an LLR Challenge. DETACHING, RESETTING, and PAUSING WU's causes them to remain in limbo until they EXPIRE. Therefore, we must wait until WU's expire to send them out to be completed.
Please consider either completing what's in the queue or ABORTING them. Thank you. :)
About the Proth Prime Search
The Proth Prime Search is done in collaboration with the Proth Search project. This search looks for primes of the form k*2^n+1. With the condition 2^n > k, these are often called Proth primes. This project also has the added bonus of possibly finding factors of "classical" Fermat numbers or Generalized Fermat numbers. Because this requires PrimeFormGW (PFGW), a primality-testing program, once PrimeGrid finds a prime it is then tested for divisibility on PrimeGrid's servers.
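For the curious, the underlying "+1" test is Proth's theorem: N = k*2^n+1 (odd k < 2^n) is prime exactly when a^((N-1)/2) ≡ -1 (mod N) for some base a. A minimal Python sketch follows; it is purely illustrative (the real searches run Jean Penné's LLR with FFT multiplication), and the example values at the bottom are mine, not PrimeGrid search candidates.

# Minimal sketch of Proth's theorem for N = k*2^n + 1 (odd k < 2^n).
# N is prime iff a^((N-1)/2) ≡ -1 (mod N) for some base a; if N is prime,
# any a with Jacobi(a, N) = -1 is such a witness.  Illustrative only --
# real searches use LLR/PFGW with FFT multiplication, not Python integers.

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def is_proth_prime(k, n):
    """Proth test for N = k*2^n + 1, assuming odd k < 2^n."""
    N = k * (1 << n) + 1
    for a in (3, 5, 7, 11, 13, 17, 19, 23):      # small candidate bases
        if jacobi(a, N) == -1:
            return pow(a, (N - 1) // 2, N) == N - 1
    return False   # no witness among the small bases (rare for a true prime)

# Example: 5*2^7 + 1 = 641 is prime, 3*2^7 + 1 = 385 = 5*7*11 is not.
print(is_proth_prime(5, 7), is_proth_prime(3, 7))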
Proth Search only searches for k<1200. PrimeGrid created an extension to that which includes all candidates 1200<k<10000 for n<5M. It is this extension which we call PPSE that the Challenge will be on.
Initially, PrimeGrid's PPS project's goal was to double check all previous work up to n=500K for odd k<1200 and to fill in any gaps that were missed. We have accomplished that now and have increased it to n=2M. PG's LLRNet searched up to n=200,000 and found several missed primes in previously searched ranges. Although primes that small did not make it into the Top 5000 Primes database, the work was still important as it may have led to new factors for "classical" Fermat numbers or Generalized Fermat numbers. While there are many GFN factors, currently there are only 293 "classical" Fermat number factors known. Current primes found in PPS definitely make it into the Top 5000 Primes database.
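To illustrate the Fermat-divisor bonus mentioned above: a Proth prime p = k*2^n+1 divides the Fermat number F(m) = 2^(2^m)+1 exactly when 2^(2^m) ≡ -1 (mod p), and since every prime factor of F(m) (for m ≥ 2) has the form j*2^(m+2)+1, only exponents m < n need to be checked. The short Python sketch below shows that congruence check only; it is not PrimeGrid's actual PFGW workflow, and the example values are mine.

# Sketch: does the Proth prime p = k*2^n + 1 divide a Fermat number
# F(m) = 2^(2^m) + 1?  We never write F(m) down; we only square 2
# repeatedly modulo p and look for -1.

def fermat_divisor_of(k, n):
    """Return m if p = k*2^n + 1 divides F(m) for some m < n, else None."""
    p = k * (1 << n) + 1
    r = 2 % p                # r = 2^(2^0) mod p
    for m in range(n):
        if r == p - 1:       # 2^(2^m) ≡ -1 (mod p), so p divides F(m)
            return m
        r = r * r % p        # advance to 2^(2^(m+1)) mod p
    return None

# Classic example: 641 = 5*2^7 + 1 divides F(5) = 4294967297 (Euler, 1732).
print(fermat_divisor_of(5, 7))   # -> 5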
For more information about "Proth" primes, please visit these links:
About Proth Search
The Proth Search project was established in 1998 by Ray Ballinger and Wilfrid Keller to coordinate a distributed effort to find Proth primes (primes of the form k*2^n+1) for k < 300. Ray was interested in finding primes while Wilfrid was interested in finding divisors of Fermat numbers. Since that time it has expanded to include k < 1200. Mark Rodenkirch (aka rogue) has been helping Ray keep the website up to date for the past few years.
Early in 2008, PrimeGrid and Proth Search teamed up to provide a software-managed distributed effort to the search. Although it might appear that PrimeGrid is duplicating some of the Proth Search effort by re-doing some ranges, few ranges on Proth Search were ever double-checked. This has resulted in PrimeGrid finding primes that were missed by previous searchers. By the end of 2008, all new primes found by PrimeGrid were eligible for inclusion in Chris Caldwell's Prime Pages Top 5000. Sometime in 2009, over 90% of the tests handed out by PrimeGrid were for numbers that had never been tested before.
PrimeGrid intends to continue the search indefinitely for Proth primes.
What is LLR?
The Lucas-Lehmer-Riesel (LLR) test is a primality test for numbers of the form N = k*2^n − 1, with 2^n > k. LLR is also the name of a program, developed by Jean Penné, that can run these tests. It includes the Proth test to perform +1 tests and PRP to test non-base-2 numbers. See also:
(Edouard Lucas: 1842-1891, Derrick H. Lehmer: 1905-1991, Hans Riesel: 1929-2014).
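For the mathematically curious, here is a toy Python rendering of the textbook LLR recipe for N = k*2^n − 1 (purely illustrative; Jean Penné's LLR program uses George Woltman's gwnum FFT library and is vastly faster). Pick P with Jacobi(P−2, N) = 1 and Jacobi(P+2, N) = −1, seed u_0 = V_k(P) mod N using the Lucas V-sequence, then square-and-subtract-two n−2 times; N is prime exactly when the final term is 0. Edge cases (very small n, k not less than 2^n, N with tiny factors) are ignored, and the example values are mine.

# Toy Lucas-Lehmer-Riesel test for N = k*2^n - 1 (odd k < 2^n, n >= 2).
# Python bignums are only practical for tiny exponents; this is just to
# show the shape of the algorithm.

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0 (same helper as in the Proth sketch)."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def lucas_v(k, P, N):
    """V_k of the Lucas sequence V_0 = 2, V_1 = P, V_j = P*V_{j-1} - V_{j-2}, mod N."""
    v, w = 2, P                        # the pair (V_m, V_{m+1}), starting at m = 0
    for bit in bin(k)[2:]:             # binary ladder over k's bits, MSB first
        if bit == '1':
            v, w = (v * w - P) % N, (w * w - 2) % N
        else:
            v, w = (v * v - 2) % N, (v * w - P) % N
    return v

def is_llr_prime(k, n):
    """LLR test for N = k*2^n - 1 (toy version, no edge-case handling)."""
    N = k * (1 << n) - 1
    # Find a seed P; raises StopIteration if none is found below 1000.
    P = next(P for P in range(3, 1000)
             if jacobi(P - 2, N) == 1 and jacobi(P + 2, N) == -1)
    u = lucas_v(k, P, N)               # u_0 = V_k(P) mod N
    for _ in range(n - 2):
        u = (u * u - 2) % N            # u_{i+1} = u_i^2 - 2 (mod N)
    return u == 0                      # N is prime iff u_{n-2} == 0

# 3*2^3 - 1 = 23 is prime; 3*2^5 - 1 = 95 = 5*19 is not.
print(is_llr_prime(3, 3), is_llr_prime(3, 5))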
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|
Dave  Send message
Joined: 13 Feb 12 Posts: 3028 ID: 130544 Credit: 2,027,225,853 RAC: 948,853
                      
|
Now I kan haz internetz, I'm in.
A few errors noted:
Only work units issued AFTER 20 June 2016 22:34 UTC and received BEFORE 23 April 2014 2014 16:16 UTC will be considered for credit.
- oopsy
This is different than BOINC cobblestones!
- "to" not "than"
Proth Search only searches for k<1200. PrimeGrid created an extension to that which includes all candidates 1200<k<10000 for n<5M. It is this extension which we call PPSE that the Challenge will be on.
- this challenge is Mega not PPSE :).
Best of luck to all! |
|
|
RafaelVolunteer tester
 Send message
Joined: 22 Oct 14 Posts: 905 ID: 370496 Credit: 459,403,918 RAC: 159,669
                   
|
WU's will take just under 2 hours on the fastest/newest computers and 3(+) hours on slower/older computers.
*Just under 1 hour on newer systems.
Let's show Skylake a bit of love. |
|
|
mackerel Volunteer tester
 Send message
Joined: 2 Oct 08 Posts: 2577 ID: 29980 Credit: 548,740,837 RAC: 20,745
                             
|
That link isn't working for me, but my main i7-6700k at 4.2 GHz does 4 MEGA units every 44 minutes. I'm going to have a go at overclocking my 2nd 6700k next weekend as I just fitted a bigger cooler last weekend... :) |
|
|
RafaelVolunteer tester
 Send message
Joined: 22 Oct 14 Posts: 905 ID: 370496 Credit: 459,403,918 RAC: 159,669
                   
|
That link isn't working for me, but my main i7-6700k at 4.2 GHz does 4 MEGA units every 44 minutes. I'm going to have a go at overclocking my 2nd 6700k next weekend as I just fitted a bigger cooler last weekend... :)
Maybe now it's working...
But yeah, around 46min here. |
|
|
mackerel Volunteer tester
 Send message
Joined: 2 Oct 08 Posts: 2577 ID: 29980 Credit: 548,740,837 RAC: 20,745
                             
|
6600k OC much? |
|
|
RafaelVolunteer tester
 Send message
Joined: 22 Oct 14 Posts: 905 ID: 370496 Credit: 459,403,918 RAC: 159,669
                   
|
6600k OC much?
4332.25 core, 4231.5 cache, 3021 RAM (yay baseclock).
There's some variance on the WU speed, though. Half of them are taking just over 43min, while the other half is getting shy of 47. |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
6600k OC much?
4332.25 core, 4231.5 cache, 3021 RAM (yay baseclock).
There's some variance on the WU speed, though. Half of them are taking just over 43min, while the other half is getting shy of 47.
Rafael,
You've got several inconclusives and invalids on the machine over the last couple of months, but none in the last week. Is that a problem that you've corrected, or is it an ongoing problem where a small percentage of tasks have errors?
I'm wondering if Skylakes are not quite totally stable yet.
____________
My lucky number is 75898524288+1 |
|
|
RafaelVolunteer tester
 Send message
Joined: 22 Oct 14 Posts: 905 ID: 370496 Credit: 459,403,918 RAC: 159,669
                   
|
6600k OC much?
4332.25 core, 4231.5 cache, 3021 RAM (yay baseclock).
There's some variance on the WU speed, though. Half of them are taking just over 43min, while the other half is getting shy of 47.
Rafael,
You've got several inconclusives and invalids on the machine over the last couple of months, but none in the last week. Is that a problem that you've corrected, or is it an ongoing problem where a small percentage of tasks have errors?
I'm wondering if Skylakes are not quite totally stable yet.
It's my fault, don't worry. I'm trying to learn DDR4 tertiary timings OC, and that's where all of the invalids come from. Those tasks are the results of settings passing my Prime95 stress test, but actually being unstable for 24/7 operation.
No invalids this last week because I'm back at more stable settings, in preparation for the challenge. And there was a BIOS update which nuked my previous OC profiles, which I'm too lazy to reapply.
But yeah, the takeaway is: this is not Skylake's fault. It's the fault of a defective piece of hardware located between the keyboard and the chair =P |
|
|
mackerel Volunteer tester
 Send message
Joined: 2 Oct 08 Posts: 2577 ID: 29980 Credit: 548,740,837 RAC: 20,745
                             
|
To add to that, pretty much all the bad WUs I've ever produced on Skylake were due to running aggressive settings on ram. Even if the ram is rated for higher speeds, it is still a system level overclock. For bigger units, it can provide quite a big boost to performance, so the temptation is there to push it. I've not really OC'd the CPU as much since there is a bigger power/performance tradeoff there which is secondary to ram. For units which fit inside the processor cache (MEGA is borderline depending on exact CPU) ram isn't as important and core clock can play a bigger role. |
|
|
|
Thanks Charley!
The team CRUNCHERS SANS FRONTIERES will participate in this challenge! :)
____________
Founder of CRUNCHERS SANS FRONTIERES
www.crunchersansfrontieres.org
CSF lucky number 22872882^65536+1 |
|
|
|
time check!!! I'm seeing 30 more minutes from Puerto Rico.
____________
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
time check!!! I'm seeing 30 more minutes from Puerto Rico.
Please turn off your flux capacitor. You're a bit early.
The challenge will begin 20 June 2016 22:34 UTC and end 25 June 2016 22:34 UTC.
That's 6:34 PM, EDT. Four hours from now.
____________
My lucky number is 75898524288+1 |
|
|
|
jajaja, just checking to see if you are alert!!. 4 hours more to go.
____________
|
|
|
|
We at Alien Prime Cult are ready to crunch! ;-) |
|
|
|
We are on the go!!
____________
|
|
|
|
Already submitted a couple of tasks from my (So-Last-Season!) 4690k ... and just popped up in 10th place! ... but not likely to stay there long... wondering whether it was worth staying up till 2am...? U Bet it Was!! :-)
____________
|
|
|
|
Haswell @ 4.4 with 2133 DDR3 RAM does them in ~53 minutes, with all 4 cores running. |
|
|
tng Send message
Joined: 29 Aug 10 Posts: 448 ID: 66603 Credit: 39,006,337,135 RAC: 24,571,508
                                                
|
Hmm...
http://primes.utm.edu/primes/page.php?id=121815
Jumping the gun a bit?
____________
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Hmm...
http://primes.utm.edu/primes/page.php?id=121815
Jumping the gun a bit?
Yup :)
Doesn't it always work this way? We have a challenge, and we find one of the challenge's primes either right before or right after the challenge.
Of course, the more important "coincidence" is that, once again, Scott's on vacation, and not only do we find a mega prime, but he's the one who found it!
____________
My lucky number is 75898524288+1 |
|
|
|
Hmm...
http://primes.utm.edu/primes/page.php?id=121815
Jumping the gun a bit?
Wow !
Congratz tng* =)
|
|
|
|
E5-2623 v3 taking 1.5 hours per WU. :(
All eight cores (2 CPUs) running and Task Manager showing only 55% being used. Do you think I should enable hyperthreading?
It's an HP DL360 Gen9 with only 16 GB of DDR4 memory, of which only 1.8 GB is being used. No overclocking here on the CPU or memory.
Suggestions on getting better performance?
____________
|
|
|
mackerel Volunteer tester
 Send message
Joined: 2 Oct 08 Posts: 2577 ID: 29980 Credit: 548,740,837 RAC: 20,745
                             
|
To make sure I understand, you have a dual socket system with two of those Xeons. Looking at the task results, note the run time and CPU time are very different.
Is HT on or off? If HT is on, turning it off would seem the easiest way to prevent performance problems. If HT is already off, I'm not sure what to do. It does seem like the system might be putting the work onto one CPU and not spreading it evenly. I have rarely seen something similar (tasks are put on the same core while another is idle) and restarting usually sorts it. RAM performance shouldn't matter for MEGA as the Xeons have plenty of cache. |
|
|
|
Thanks for the info!
Hyper is off. Yes, it's a dual Xeon. Restarted BOINC and no change.
____________
|
|
|
Monkeydee Volunteer tester
 Send message
Joined: 8 Dec 13 Posts: 496 ID: 284516 Credit: 985,777,296 RAC: 1,599,964
                         
|
It looks like demand for MEGA is so high that I ended up getting an ESP unit... I noticed I had accidentally left "Send work from any subproject if selected projects have no work" set to "Yes". That has since been changed to "No".
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*9 + 8*3 + 10*1 + 11*3 = 149
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
It looks like demand for MEGA is so high that I ended up getting an ESP unit... I noticed I had accidentally left "Send work from any subproject if selected projects have no work" set to "Yes". That has since been changed to "No".
There's plenty of tasks, and lots of capacity. I can't think of any reason why you wouldn't have been sent MEGA. I'll do some digging and let you know if I find anything.
EDIT: Sorry, but I don't have a definitive explanation for this. There are several things that can cause this, and none of the potential causes appear to have happened:
* The instantaneous demand for MEGA tasks could deplete the server's in-memory storage of tasks. Since this cache gets refilled every second (or sooner), this would require sending out more than 400 tasks in a single second. During the period in question, the most I see is 53 tasks being sent out in a single second. So that's not the problem.
* 10,000 ready to send tasks are kept in the database. It's possible that all of the tasks in the database can be depleted if more than 10 thousand tasks are sent out before the work generator and transitioner can replenish them. The work generator runs once a minute. The transitioner runs approximately every 15 seconds. To run out of tasks in the database, we'd need to send out more than 10 thousand tasks within a span of about 75 seconds. During the period in question, there's never more than 2000 tasks sent out in a single minute, so we could not have run out of tasks in the database.
* If a single user runs so many tasks that he's already the wingman on all of the tasks kept in memory, the server won't find any tasks that can be sent to that particular user. With a single machine, running no cache, this certainly is not the problem.
* Your settings could have been wrong. They're certainly correct right now, and I have no reason to think they were wrong before (other than having "send other tasks" set, of course.) This doesn't appear to be the problem either.
* If, for some reason, the database got congested and slowed down, that could keep the feeder from pumping new MEGA tasks into memory. Even 426 slots isn't a lot when you're sending out 50 per second. If the database slowed down/hung/paused for about 10 seconds, and there was a heavy demand for MEGA tasks, it's possible that all the tasks in memory were depleted and the server had no more MEGA tasks to send out. This is my best guess for what may have happened, but there's no way to retroactively investigate whether this actually occurred. It's pure speculation.
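For what it's worth, that last scenario is easy to sanity-check with rough arithmetic. Using the figures quoted above (a 426-slot in-memory cache, refilled roughly once per second, and a peak of about 50 MEGA tasks handed out per second), a stall of only 8 to 9 seconds would drain the cache. A tiny back-of-the-envelope sketch, with the stall length as a purely hypothetical input:

# Back-of-the-envelope model of the "database stall" scenario above.
# Figures taken from the post: 426 in-memory MEGA slots, refilled about
# once per second, peak demand ~50 tasks/second.  The stall length is a
# hypothetical input, not a measured value.

CACHE_SLOTS = 426
DEMAND_PER_SEC = 50
STALL_SECONDS = 10

seconds_until_empty = CACHE_SLOTS / DEMAND_PER_SEC
shortfall = max(0, STALL_SECONDS * DEMAND_PER_SEC - CACHE_SLOTS)

print(f"cache drains in ~{seconds_until_empty:.1f} s with no refills")
print(f"a {STALL_SECONDS} s stall leaves ~{shortfall} requests unservable")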
____________
My lucky number is 75898524288+1 |
|
|
Monkeydee Volunteer tester
 Send message
Joined: 8 Dec 13 Posts: 496 ID: 284516 Credit: 985,777,296 RAC: 1,599,964
                         
|
There's plenty of tasks, and lots of capacity. I can't think of any reason why you wouldn't have been sent MEGA. I'll do some digging and let you know if I find anything.
Much appreciated.
Thank you!
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*9 + 8*3 + 10*1 + 11*3 = 149
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
See my updated response, two posts back.
____________
My lucky number is 75898524288+1 |
|
|
Monkeydee Volunteer tester
 Send message
Joined: 8 Dec 13 Posts: 496 ID: 284516 Credit: 985,777,296 RAC: 1,599,964
                         
|
See my updated response, two posts back.
Thanks for the detailed checking and explanations.
I will take this as a random one-off as I have since resumed getting MEGA units without issue.
Hopefully nobody else experiences this during the challenge.
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*9 + 8*3 + 10*1 + 11*3 = 149
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
I think I might know what caused the problem. Just a guess, but the timing seems about right.
During challenges, I reduce the number of tasks the server will send out to each host. Normally, we allow computers to have a cache of 320 tasks per core. Before a challenge starts, I reduce this to 8 tasks per core. Especially with challenges with short tasks, we don't want a few computers sucking out all the tasks faster than we can replenish them.
After the challenge starts, I usually gradually increase this limit back to the normal 320.
Last night, I forgot to restore the limit to its normal value, so this morning I corrected my mistake by setting the limit to 320.
All at once.
I didn't do it gradually, and I think that's the problem.
The load on the server from computers wanting to fill up their caches may have backed things up enough to cause temporary outages in memory.
I've left myself a note for next time to be more careful with this. Thanks for bringing this to my attention!
EDIT: There's in fact ample supporting evidence that this is indeed what happened. So mystery solved, and I know how to prevent this in the future.
____________
My lucky number is 75898524288+1 |
|
|
mackerel Volunteer tester
 Send message
Joined: 2 Oct 08 Posts: 2577 ID: 29980 Credit: 548,740,837 RAC: 20,745
                             
|
Thanks for the info!
Hyper is off. Yes, it's a dual Xeon. Restarted BOINC and no change.
So you have 8 cores and 8 tasks, but it sounds like the system isn't spreading them around evenly. Did you ever do anything relating to CPU affinity? If you look in Task Manager, you would ideally see each task using 12.5% CPU (8 x 12.5% = 100%), but I'm guessing some of yours are lower than that?
I don't have recent experience with multi-socket systems, so I'm not sure if there could be something else going on there. Do both CPUs have their own dedicated RAM? Not sure if it is possible or valid for everything to be routed through a single one. Just wondering if something NUMA-related might be happening, but again I have zero experience of it, so this is guessing... |
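If it does turn out to be an affinity/scheduling issue, one quick way to inspect and experiment from Python is the third-party psutil package; a rough diagnostic sketch is below. The 'llr' name filter is a guess at the worker executable's name (check Task Manager for the real one), and cpu_affinity() is only available on Windows and Linux, so treat this purely as a starting point rather than a recommended fix.

# Diagnostic sketch: list LLR-like worker processes and pin one per core.
# Requires the third-party psutil package (pip install psutil); the name
# filter "llr" is a guess -- use whatever the worker executable is really
# called on your machine.  cpu_affinity() works on Windows and Linux.

import psutil

workers = [p for p in psutil.process_iter(['name'])
           if 'llr' in (p.info['name'] or '').lower()]

cores = list(range(psutil.cpu_count(logical=False) or psutil.cpu_count()))

for i, proc in enumerate(workers):
    try:
        print(proc.pid, proc.info['name'], 'current affinity:', proc.cpu_affinity())
        proc.cpu_affinity([cores[i % len(cores)]])    # pin task i to core i
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        pass   # may need admin rights, or the task finished while we looked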
|
|
Monkeydee Volunteer tester
 Send message
Joined: 8 Dec 13 Posts: 496 ID: 284516 Credit: 985,777,296 RAC: 1,599,964
                         
|
I've left myself a note for next time to be more careful with this. Thanks for bringing this to my attention!
EDIT: There's in fact ample supporting evidence that this is indeed what happened. So mystery solved, and I know how to prevent this in the future.
Happy to be of assistance!
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*9 + 8*3 + 10*1 + 11*3 = 149
|
|
|
streamVolunteer moderator Project administrator Volunteer developer Volunteer tester Send message
Joined: 1 Mar 14 Posts: 981 ID: 301928 Credit: 543,185,506 RAC: 36,711
                        
|
I think I might know what caused the problem. Just a guess, but the timing seems about right.
I don't quite understand when it happened according to your scenario, but I had the same issue as the OP, and it happened right at the start of the challenge on one of my PCs. No problem for me (I had "Send any work" disabled from the beginning); I just hit "Update" again when I noticed. But it happened.
21-Jun-2016 01:34:43 [PrimeGrid] work fetch resumed by user
21-Jun-2016 01:34:45 [PrimeGrid] Sending scheduler request: To fetch work.
21-Jun-2016 01:34:45 [PrimeGrid] Requesting new tasks for CPU
21-Jun-2016 01:34:47 [PrimeGrid] Scheduler request completed: got 0 new tasks
21-Jun-2016 01:34:47 [PrimeGrid] No tasks sent
21-Jun-2016 01:34:47 [PrimeGrid] No tasks are available for PPS-Mega (LLR)
21-Jun-2016 01:34:47 [PrimeGrid] No tasks are available for Genefer 131072 Low
21-Jun-2016 01:34:47 [PrimeGrid] No tasks are available for the applications you have selected.
21-Jun-2016 01:38:52 [PrimeGrid] update requested by user
21-Jun-2016 01:38:57 [PrimeGrid] Sending scheduler request: Requested by user.
21-Jun-2016 01:38:57 [PrimeGrid] Requesting new tasks for CPU
21-Jun-2016 01:38:59 [PrimeGrid] Scheduler request completed: got 4 new tasks
UTC is +3, computer ID 494604
|
|
|
|
Each socket has its own RAM. Task Manager is now showing 77%, with each task at about 7.9%.
____________
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
I think I might know what caused the problem. Just a guess, but the timing seems about right.
I don't quite understand when it happened according to your scenario, but I had the same issue as the OP, and it happened right at the start of the challenge on one of my PCs. No problem for me (I had "Send any work" disabled from the beginning); I just hit "Update" again when I noticed. But it happened.
No explanation for that. Guesswork follows. Even if the database bogged down at the start, causing the feeder to have trouble keeping the memory cache full, it seems unlikely to have been a problem. The feeder runs once a second (or continuously when it's behind), and 426 MEGA tasks are kept in memory. In the first minute of the challenge, only 1373 tasks were sent out. It's hard to imagine the feeder falling so far behind that we ran out of tasks. But let's assume that this is what happened anyway. This is the number of MEGA tasks sent out at the beginning of the challenge, by minute:
+------+--------+----------+
| hour | minute | count(*) |
+------+--------+----------+
| 22 | 30 | 31 |
| 22 | 31 | 62 |
| 22 | 32 | 97 |
| 22 | 33 | 138 |
| 22 | 34 | 1373 |
| 22 | 35 | 1405 |
| 22 | 36 | 1237 |
| 22 | 37 | 896 |
| 22 | 38 | 656 |
| 22 | 39 | 428 |
About 1200 to 1400 per minute.
Now, this is when I set the task limit to 320:
| 12 | 5 | 94 |
| 12 | 6 | 42 |
| 12 | 7 | 37 |
| 12 | 8 | 951 |
| 12 | 9 | 1306 |
| 12 | 10 | 1451 |
| 12 | 11 | 1238 |
| 12 | 12 | 1467 |
| 12 | 13 | 1341 |
| 12 | 14 | 1241 |
| 12 | 15 | 1139 |
| 12 | 16 | 808 |
| 12 | 17 | 955 |
| 12 | 18 | 1022 |
| 12 | 19 | 1292 |
| 12 | 20 | 1085 |
| 12 | 21 | 746 |
| 12 | 22 | 806 |
| 12 | 23 | 998 |
| 12 | 24 | 916 |
It might be a coincidence, but notice that even though the surge from raising the limit was much larger than the surge at the challenge start, both surges peaked at about the same 1200-1400 tasks per minute.
It's possible that the system as a whole isn't allowing the feeder to operate as quickly as it needs to. This is something that I can compensate for, by lowering the task limit even further. This will reduce the number of tasks that are sent out, reducing the load on the database, and presumably speeding everything up. The limit can then be slowly raised back to its normal value.
____________
My lucky number is 75898524288+1 |
|
|
streamVolunteer moderator Project administrator Volunteer developer Volunteer tester Send message
Joined: 1 Mar 14 Posts: 981 ID: 301928 Credit: 543,185,506 RAC: 36,711
                        
|
This is the number of MEGA tasks sent out at the beginning of the challenge, by minute:
Thanks for the info; at least it's clear that my issue really happened during peak server load:
21-Jun-2016 01:34:47 [PrimeGrid] No tasks sent
With UTC +3 it'll be 22:34:47 UTC, and...
+------+--------+----------+
| hour | minute | count(*) |
+------+--------+----------+
| 22 | 33 | 138 |
| 22 | 34 | 1373 |
| 22 | 35 | 1405 |
| 22 | 36 | 1237 |
| 22 | 37 | 896 |
| 22 | 38 | 656 |
| 22 | 39 | 428 |
1373 at 22:34 and 1405 in the next minute. The retry was at 22:38:52 UTC, and there was no such huge load at that time.
As far as I remember, BOINC will retry the fetch request after some time, so this problem, if it ever appears again, will only affect people who either fight for every second of challenge time but don't monitor their clients (quite a strange combination), or still have the "Send any work" option on (which by itself is too dangerous for my taste; it's too easy to get a TRP or SOB task on a slow or unstable system).
|
|
|
Dave  Send message
Joined: 13 Feb 12 Posts: 3028 ID: 130544 Credit: 2,027,225,853 RAC: 948,853
                      
|
I got no tasks available at 22:34 - it was 14 minutes until it chose to reattempt (& was successful). |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Challenge: Summer Solstice
App: 21 (PPS-Mega)
(As of 2016-06-22 00:28:56 UTC)
137331 tasks have been sent out. [CPU/GPU/anonymous_platform: 136827 (100%) / 0 (0%) / 504 (0%)]
Of those tasks that have been sent out:
1285 (1%) came back with some kind of an error. [1285 (1%) / 0 (0%) / 0 (0%)]
56184 (41%) have returned a successful result. [55752 (41%) / 0 (0%) / 433 (0%)]
79862 (58%) are still in progress. [79791 (58%) / 0 (0%) / 71 (0%)]
Of the tasks that have been returned successfully:
21049 (37%) are pending validation. [20867 (37%) / 0 (0%) / 183 (0%)]
34994 (62%) have been successfully validated. [34745 (62%) / 0 (0%) / 249 (0%)]
31 (0%) were invalid. [31 (0%) / 0 (0%) / 0 (0%)]
110 (0%) are inconclusive. [109 (0%) / 0 (0%) / 1 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=3534185. The leading edge was at n=3485907 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 1.38% as much as it had prior to the challenge!
____________
My lucky number is 75898524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Two primes have been found so far during the challenge. This does not include the prime found by Scott Brown hours before the challenge started.
This is the first one: 329*2^3518451+1, found by 288larsson.
The second one will be announced later.
____________
My lucky number is 75898524288+1 |
|
|
|
It's mine!!!!!!!
____________
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
It's mine!!!!!!!
345*2^3532957+1
Congratulations!
____________
My lucky number is 75898524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
This is turning out to be quite a lucky challenge. After a bit more than a day and a half, it looks like we've found three mega primes. (The third one hasn't been made public yet. It was just discovered and I'm in the process of verifying it as well as testing for XGFN divisors.)
____________
My lucky number is 75898524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Congratulations to Van Zimmerman for discovering the third mega prime of the challenge!
351*2^3545752+1
____________
My lucky number is 75898524288+1 |
|
|
|
Me and the Number Crunchers Team have been running the PPSMega but forgot to sign up here. I have already found 2 numbers this week that appear to be prime because they've taken over a day to validate. Can I still enter?
____________
Nathan Hood, Amateur Mathematician
Favorite Number-53
1146966*79^50005-1 is prime! (94897 decimal digits, P = 3) Time : 2930.129 sec.
PRIMES AS DOUBLE CHECKER:
29320766^8192+1 is prime!
24526702^8192+1 is prime! |
|
|
|
Thanks!!! Just got another server today, DL360e Gen8. Getting crazy with this!!
____________
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Me and the Number Crunchers Team have been running the PPSMega but forgot to sign up here. I have already found 2 numbers this week that appear to be prime because they've taken over a day to validate. Can I still enter?
There is no "entry" or "signup". Everyone who correctly completes tasks (both starting and returning to the server) during the challenge is scored in the challenge.
As for the numbers being prime, unfortunately that's not at all related to the amount of time it takes to validate. Validation involves matching your result (a 64-bit hash) against the result returned in an identical task sent to a second computer. How long the validation takes is entirely dependent on how long it takes the other computer to return the task.
EDIT: Please note that for a task to count towards the challenge, it must be sent to your computer during the challenge, and returned to the server during the challenge. You have several PPS-Mega tasks that will not count for the challenge because they were sent to your computers before the challenge started.
____________
My lucky number is 75898524288+1 |
|
|
|
The ones sent to computer 518793 are not in the contest. I dropped that computer off my workload as soon as it started. |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Challenge: Summer Solstice
App: 21 (PPS-Mega)
(As of 2016-06-22 22:41:15 UTC)
204283 tasks have been sent out. [CPU/GPU/anonymous_platform: 203362 (100%) / 0 (0%) / 921 (0%)]
Of those tasks that have been sent out:
2557 (1%) came back with some kind of an error. [2557 (1%) / 0 (0%) / 0 (0%)]
114154 (56%) have returned a successful result. [113298 (55%) / 0 (0%) / 856 (0%)]
87572 (43%) are still in progress. [87507 (43%) / 0 (0%) / 65 (0%)]
Of the tasks that have been returned successfully:
39240 (34%) are pending validation. [38926 (34%) / 0 (0%) / 314 (0%)]
74677 (65%) have been successfully validated. [74136 (65%) / 0 (0%) / 541 (0%)]
98 (0%) were invalid. [98 (0%) / 0 (0%) / 0 (0%)]
139 (0%) are inconclusive. [138 (0%) / 0 (0%) / 1 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=3557811. The leading edge was at n=3485907 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 2.06% as much as it had prior to the challenge!
____________
My lucky number is 75898524288+1 |
|
|
|
This is turning out to be quite a lucky challenge. After a bit more than a day and a half, it looks like we've found three mega primes. (The third one hasn't been made public yet. It was just discovered and I'm in the process of verifying it as well as testing for XGFN divisors.)
Links:
323*2^3482789+1 (before challenge start)
329*2^3518451+1
345*2^3532957+1
351*2^3545752+1
409*2^2360166+1 (not mega)
If any of those new primes turn out to divide some F(m), GF(m,b), or xGF(m,a,b), will you submit a comment to the entry linked (on Caldwell's site), or will we have to wait for the official PrimeGrid announcement?
/JeppeSN |
|
|
|
On all three that I have discovered, there is no official comment.
____________
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
If any of those new primes turn out to divide some F(m), GF(m,b), or xGF(m,a,b), will you submit a comment to the entry linked (on Caldwell's site), or will we have to wait for the official PrimeGrid announcement?
Divisor information would be put on T5K as soon as we have the information. None of those 4 primes are divisors. There's a fifth one (including Scott's prime from before the challenge), which was just found. The XGFN testing on numbers of this size takes about 6 hours on my computer.
____________
My lucky number is 75898524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Including the as yet unannounced fifth mega prime, we've found as many mega primes this week (actually, the last three days) as we did in all of 2013.
It's also more mega primes than we found in all of 2010, or any single year before that.
____________
My lucky number is 75898524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Congratulations to RNDr. MF1 for discovering 381*2^3563676+1, our fourth prime of the challenge and the fifth PPS-MEGA prime this week!
____________
My lucky number is 75898524288+1 |
|
|
RafaelVolunteer tester
 Send message
Joined: 22 Oct 14 Posts: 905 ID: 370496 Credit: 459,403,918 RAC: 159,669
                   
|
Including the as yet unannounced fifth mega prime, we've found as many mega primes this week (actually, the last three days) as we did in all of 2013.
Kinda wish we did this during Tour de Primes.... would have made it even more fun. |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Another prime:
309*2^3577339+1, discovered by Russ. Congratulations!
Our total number of mega primes since Monday (6) now ties the most mega primes in a single month, set this past February.
____________
My lucky number is 75898524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Challenge: Summer Solstice
App: 21 (PPS-Mega)
(As of 2016-06-24 03:35:03 UTC)
286357 tasks have been sent out. [CPU/GPU/anonymous_platform: 284774 (99%) / 0 (0%) / 1583 (1%)]
Of those tasks that have been sent out:
4418 (2%) came back with some kind of an error. [4418 (2%) / 0 (0%) / 0 (0%)]
190466 (67%) have returned a successful result. [189072 (66%) / 0 (0%) / 1396 (0%)]
91473 (32%) are still in progress. [91285 (32%) / 0 (0%) / 187 (0%)]
Of the tasks that have been returned successfully:
53558 (28%) are pending validation. [53143 (28%) / 0 (0%) / 417 (0%)]
136450 (72%) have been successfully validated. [135473 (71%) / 0 (0%) / 977 (1%)]
218 (0%) were invalid. [218 (0%) / 0 (0%) / 0 (0%)]
240 (0%) are inconclusive. [238 (0%) / 0 (0%) / 2 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=3586341. The leading edge was at n=3485907 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 2.88% as much as it had prior to the challenge!
____________
My lucky number is 75898524288+1 |
|
|
|
Still no results back from one of my numbers from 3 days ago. Wonder why? All of my others were valid almost immediately. |
|
|
RafaelVolunteer tester
 Send message
Joined: 22 Oct 14 Posts: 905 ID: 370496 Credit: 459,403,918 RAC: 159,669
                   
|
Still no results back from one of my numbers from 3 days ago. Wonder why? All of my others were valid almost immediately.
Each WU is sent to 2 separate people. Even if you were to return your result immediately, it won't validate until the second person (whom we call the wingman) sends theirs. And often, your wingman is going to take longer than you.
Don't feel bad for waiting 3 days and not having your result. That's not really long at all: one of my Megas has been sitting for 10 days already. And that's because MEGA is relatively fast; some of the bigger projects take months to validate at times. |
|
|
|
Still no results back from one of my numbers from 3 days ago. Wonder why? All of my others were valid almost immediately.
Each WU is sent to 2 separate people. Even if you were to return your result immediately, it won't validate until the second person (whom we call the wingman) sends theirs. And often, your wingman is going to take longer than you.
Don't feel bad for waiting 3 days and not having your result. That's not really long at all: one of my Megas has been sitting for 10 days already. And that's because MEGA is relatively fast; some of the bigger projects take months to validate at times.
...and if you end up with an Inconclusive one (yours & your wingman's results are different), it will be sent out to a third person to check, and so on. But in a challenge like this, all results returned (that are downloaded after the start & returned before the end) will give you points; they will only be removed afterwards (during the cleanup) if they turn out to be invalid. The final score will be announced once all work units that affect the challenge have been verified... Sorry, English is not my strongest language, but I tried to make it clear =) |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Challenge: Summer Solstice
App: 21 (PPS-Mega)
(As of 2016-06-24 22:12:28 UTC)
339415 tasks have been sent out. [CPU/GPU/anonymous_platform: 337215 (99%) / 0 (0%) / 2200 (1%)]
Of those tasks that have been sent out:
8771 (3%) came back with some kind of an error. [8770 (3%) / 0 (0%) / 1 (0%)]
240126 (71%) have returned a successful result. [238070 (70%) / 0 (0%) / 2056 (1%)]
90519 (27%) are still in progress. [90376 (27%) / 0 (0%) / 143 (0%)]
Of the tasks that have been returned successfully:
57273 (24%) are pending validation. [56690 (24%) / 0 (0%) / 583 (0%)]
182230 (76%) have been successfully validated. [180758 (75%) / 0 (0%) / 1472 (1%)]
314 (0%) were invalid. [313 (0%) / 0 (0%) / 1 (0%)]
309 (0%) are inconclusive. [309 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=3599999. The leading edge was at n=3485907 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 3.27% as much as it had prior to the challenge!
Note: These status reports are (obviously) automatically generated. That last paragraph that shows the increase in "n" (or "b" or "k", as appropriate) isn't always as meaningful for PPS-MEGA as it is for other sub-projects. For PPS-MEGA, we have been running "n" up to 3599999 and then dropping it back down with a new set of "k"s to keep the size close to one million digits. Tracking the progress by looking at the size of "n" isn't very meaningful.
What is very meaningful, however, is the fact that we're doing about 10 times the normal number of PPS-MEGA and that we've found five -- FIVE!!! -- primes so far during the challenge. I would have been thrilled if we found one during the challenge. Five is incredible.
____________
My lucky number is 75898524288+1 |
|
|
|
Challenge: Summer Solstice
App: 21 (PPS-Mega)
(As of 2016-06-24 22:12:28 UTC)
339415 tasks have been sent out. [CPU/GPU/anonymous_platform: 337215 (99%) / 0 (0%) / 2200 (1%)]
Of those tasks that have been sent out:
8771 (3%) came back with some kind of an error. [8770 (3%) / 0 (0%) / 1 (0%)]
240126 (71%) have returned a successful result. [238070 (70%) / 0 (0%) / 2056 (1%)]
90519 (27%) are still in progress. [90376 (27%) / 0 (0%) / 143 (0%)]
Of the tasks that have been returned successfully:
57273 (24%) are pending validation. [56690 (24%) / 0 (0%) / 583 (0%)]
182230 (76%) have been successfully validated. [180758 (75%) / 0 (0%) / 1472 (1%)]
314 (0%) were invalid. [313 (0%) / 0 (0%) / 1 (0%)]
309 (0%) are inconclusive. [309 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=3599999. The leading edge was at n=3485907 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 3.27% as much as it had prior to the challenge!
Note: These status reports are (obviously) automatically generated. That last paragraph that shows the increase in "n" (or "b" or "k", as appropriate) isn't always as meaningful for PPS-MEGA as it is for other sub-projects. For PPS-MEGA, we have been running "n" up to 3599999 and then dropping it back down with a new set of "k"s to keep the size close to one million digits. Tracking the progress by looking at the size of "n" isn't very meaningful.
What is very meaningful, however, is the fact that we're doing about 10 times the normal number of PPS-MEGA and that we've found five -- FIVE!!! -- primes so far during the challenge. I would have been thrilled if we found one during the challenge. Five is incredible.
What is very meaningful, however, is the fact that we're doing about 10 times the normal number of PPS-MEGA and that we've found five -- FIVE!!! -- primes so far during the challenge. I would have been thrilled if we found one during the challenge. Five is incredible>>>are you desperate and also hope happy to someone from abroad to prove..!"
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
With about 10 hours to go, it's time for the usual end of challenge reminder:
At the Conclusion of the Challenge
We would prefer users "moving on" to finish the tasks they have downloaded; if not, then please ABORT the WU's instead of DETACHING, RESETTING, or PAUSING.
ABORTING WU's allows them to be recycled immediately; thus a much faster "clean up" to the end of a Challenge. DETACHING, RESETTING, and PAUSING WU's causes them to remain in limbo until they EXPIRE. Therefore, we must wait until WU's expire to send them out to be completed.
____________
My lucky number is 75898524288+1 |
|
|
Crun-chi Volunteer tester
 Send message
Joined: 25 Nov 09 Posts: 3101 ID: 50683 Credit: 72,155,825 RAC: 519
                       
|
It looks like there is six primes found :)
____________
92*10^1439761-1 NEAR-REPDIGIT PRIME :) :) :)
4 * 650^498101-1 CRUS PRIME
314187728^131072+1 GENERALIZED FERMAT
Proud member of team Aggie The Pew. Go Aggie! |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
It looks like there is six primes found :)
There are. :)
403*2^3334410+1 was found by stream!
That's the 7th PPS-MEGA prime this week, which sets a new record for the number of mega primes found in any single month. It is:
The sixth PPS-MEGA prime of the challenge
The seventh mega prime and PPS-MEGA prime this week (almost certainly a record)
The seventh mega prime and PPS-MEGA prime of the month (new record)
The 15th PPS-MEGA prime of the year (most mega primes of a single type in any year)
The 25th mega prime of 2016 (the record is 29 primes in 2014, and this year isn't even half over)
____________
My lucky number is 75898524288+1 |
|
|
Crun-chi Volunteer tester
 Send message
Joined: 25 Nov 09 Posts: 3101 ID: 50683 Credit: 72,155,825 RAC: 519
                       
|
Can you tell me the exact starting point of the MEGA project?
Thanks
____________
92*10^1439761-1 NEAR-REPDIGIT PRIME :) :) :)
4 * 650^498101-1 CRUS PRIME
314187728^131072+1 GENERALIZED FERMAT
Proud member of team Aggie The Pew. Go Aggie! |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Can you tell me the exact starting point of the MEGA project?
Thanks
I believe it's n=3322000. That's just above 1 million digits for any k. (floor(log10(2) * 3322000) = 1000021)
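For anyone checking the digit arithmetic: the decimal length of k*2^n + 1 is floor(n*log10(2) + log10(k)) + 1, so at n = 3322000 even k = 3 is just over a million digits. A minimal sketch, with the k values picked purely as examples:

# Digit count of k*2^n + 1 is floor(n*log10(2) + log10(k)) + 1.
from math import log10, floor

def digits(k, n):
    return floor(n * log10(2) + log10(k)) + 1

for k in (3, 323, 1199):          # example k values only
    print(k, digits(k, 3322000))  # all comfortably over 1,000,000 digits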
____________
My lucky number is 75898524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Final statistics:
Challenge: Summer Solstice
App: 21 (PPS-Mega)
(As of 2016-06-25 22:38:01 UTC)
423586 tasks have been sent out. [CPU/GPU/anonymous_platform: 420076 (99%) / 0 (0%) / 3510 (1%)]
Of those tasks that have been sent out:
40590 (10%) came back with some kind of an error. [40552 (10%) / 0 (0%) / 38 (0%)]
307945 (73%) have returned a successful result. [304599 (72%) / 0 (0%) / 3346 (1%)]
74714 (18%) are still in progress. [74598 (18%) / 0 (0%) / 116 (0%)]
Of the tasks that have been returned successfully:
55287 (18%) are pending validation. [54521 (18%) / 0 (0%) / 748 (0%)]
251839 (82%) have been successfully validated. [249262 (81%) / 0 (0%) / 2595 (1%)]
474 (0%) were invalid. [473 (0%) / 0 (0%) / 1 (0%)]
345 (0%) are inconclusive. [343 (0%) / 0 (0%) / 2 (0%)]
Thank you all for an absolutely tremendous challenge! Participation was amazing, but the really impressive statistic is, of course, the number of primes that were found. Six! Absolutely amazing. Great job everyone, and congratulations to the six of you on discovering a mega prime!!!
____________
My lucky number is 75898524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
I expect the cleanup to take 2 to 4 weeks.
Cleanup status:
Jun-25: Summer Solstice: 55441 tasks outstanding; 46330 affecting individual (295) scoring positions; 29872 affecting team (62) scoring positions.
Jun-26: Summer Solstice: 43311 tasks outstanding; 36060 affecting individual (295) scoring positions; 21139 affecting team (57) scoring positions.
Jun-27: Summer Solstice: 22617 tasks outstanding; 16505 affecting individual (280) scoring positions; 7224 affecting team (44) scoring positions.
Jun-28: Summer Solstice: 14896 tasks outstanding; 10084 affecting individual (262) scoring positions; 4629 affecting team (36) scoring positions.
Jun-29: Summer Solstice: 10904 tasks outstanding; 6764 affecting individual (249) scoring positions; 2967 affecting team (29) scoring positions.
Jun-30: Summer Solstice: 5487 tasks outstanding; 3004 affecting individual (192) scoring positions; 499 affecting team (19) scoring positions.
____________
My lucky number is 75898524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13780 ID: 53948 Credit: 343,945,598 RAC: 11,355
                              
|
Cleanup status:
Jun-25: Summer Solstice: 55441 tasks outstanding; 46330 affecting individual (295) scoring positions; 29872 affecting team (62) scoring positions.
Jul-1: Summer Solstice: 1845 tasks outstanding; 434 affecting individual (97) scoring positions; 44 affecting team (8) scoring positions.
Jul-2: Summer Solstice: 965 tasks outstanding; 173 affecting individual (51) scoring positions; 12 affecting team (2) scoring positions.
Jul-3: Summer Solstice: 652 tasks outstanding; 88 affecting individual (33) scoring positions; 1 affecting team (1) scoring positions.
Jul-4: Summer Solstice: 422 tasks outstanding; 39 affecting individual (21) scoring positions; 1 affecting team (1) scoring positions.
Jul-5: Summer Solstice: 121 tasks outstanding; 8 affecting individual (6) scoring positions; 1 affecting team (1) scoring positions.
Jul-6: Summer Solstice: 47 tasks outstanding; 5 affecting individual (4) scoring positions; 0 affecting team (0) scoring positions.
Jul-7: Summer Solstice: 17 tasks outstanding; 2 affecting individual (2) scoring positions; 0 affecting team (0) scoring positions.
Jul-8: Summer Solstice: 13 tasks outstanding; 2 affecting individual (2) scoring positions; 0 affecting team (0) scoring positions.
Jul-9: Summer Solstice: 9 tasks outstanding; 1 affecting individual (1) scoring positions; 0 affecting team (0) scoring positions.
____________
My lucky number is 75898524288+1 |
|
|
|
With the cleanup done and stats updated, we can announce the top participants and teams.
Congratulations go to:
1 zunewantan
2 Scott Brown
3 tng*
And in the teams department we can find these at the top:
1 Aggie The Pew
2 Czech National Team
3 Sicituradastra.
Thanks for all the participation and hope to see you during the next challenge!
____________
PrimeGrid Challenge Overall standings --- Last update: From Pi to Paddy (2016)
|
|
|