Message boards : Number crunching : Halloween Challenge
Roger (Volunteer developer, Volunteer tester)
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,668,824 RAC: 0
Welcome to the Halloween Challenge!
The seventh Challenge of the 2018 Challenge series is a 7 day challenge to celebrate Halloween. The challenge is being offered on the Woodall Prime Search (LLR) application.
Halloween or Hallowe'en (a contraction of Hallows' Evening), also known as Allhalloween, All Hallows' Eve, or All Saints' Eve, is a celebration observed in a number of countries on 31 October, the eve of the Western Christian feast of All Hallows' Day. It begins the three-day observance of Allhallowtide, the time in the liturgical year dedicated to remembering the dead, including saints (hallows), martyrs, and all the faithful departed.
It is widely believed that many Halloween traditions originated from ancient Celtic harvest festivals, particularly the Gaelic festival Samhain; that such festivals may have had pagan roots; and that Samhain itself was Christianised as Halloween by the early Church. Some believe, however, that Halloween began solely as a Christian holiday, separate from ancient festivals like Samhain.
Halloween activities include trick-or-treating (or the related guising), attending Halloween costume parties, carving pumpkins into jack-o'-lanterns, lighting bonfires, apple bobbing, divination games, playing pranks, visiting haunted attractions, telling scary stories, and watching horror films. In many parts of the world, the Christian religious observances of All Hallows' Eve, including attending church services and lighting candles on the graves of the dead, remain popular, although elsewhere it is a more commercial and secular celebration. Some Christians historically abstained from meat on All Hallows' Eve, a tradition reflected in the eating of certain vegetarian foods on this vigil day, including apples, potato pancakes, and soul cakes.
To participate in the Challenge, please select only the Woodall Prime Search LLR (WOO) project in your PrimeGrid preferences section. The challenge will begin 24th October 2018 23:59:59 UTC and end 31st October 2018 23:59:59 UTC.
Application builds are available for Linux 32 and 64 bit, Windows 32 and 64 bit and MacIntel. Intel CPUs with AVX capabilities (Sandy Bridge, Ivy Bridge, Haswell, Broadwell, Skylake, Kaby Lake, Coffee Lake) will have a very large advantage, and Intel CPUs with FMA3 (Haswell, Broadwell, Skylake, Kaby Lake, Coffee Lake) will be the fastest.
ATTENTION: The primality program LLR is CPU intensive; so, it is vital to have a stable system with good cooling. It does not tolerate "even the slightest of errors." Please see this post for more details on how you can "stress test" your computer. Tasks on one CPU core will take ~2 days on fast/newer computers and 7+ days on slower/older computers. If your computer is highly overclocked, please consider "stress testing" it. Sieving is an excellent alternative for computers that are not able to LLR. :)
Highly overclocked Haswell, Broadwell, Skylake, Kaby Lake or Coffee Lake (i.e., Intel Core i7, i5, and i3 -4xxx or better) computers running the application will see the fastest times. Note that WOO is running the latest FMA3 version of LLR, which takes full advantage of the features of these newer CPUs. It's faster than the previous LLR app, but it draws more power and produces more heat. If you have a Haswell, Broadwell, Skylake, Kaby Lake or Coffee Lake CPU, especially if it's overclocked or has overclocked memory, and haven't run the new FMA3 LLR before, we strongly suggest running it before the challenge while you are monitoring the temperatures.
Please, please, please make sure your machines are up to the task.
Multi-threading optimisation instructions
Those looking to maximise their computer's performance during this challenge, or when running LLR in general, may find this information useful.
- Your mileage may vary. Before the challenge starts, take some time and experiment and see what works best on your computer.
- If you have an Intel CPU with hyperthreading, either turn off the hyperthreading in the BIOS, or set BOINC to use 50% of the processors.
- If you're using a GPU for other tasks, it may be beneficial to leave hyperthreading on in the BIOS and instead tell BOINC to use 50% of the CPUs. This will allow one of the hyperthreads to service the GPU.
- Use LLR's multithreaded mode. It requires a little bit of setup, but it's worth the effort. Follow these steps:
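The setup steps linked above are not reproduced here, but for illustration, multithreaded LLR is normally configured with an app_config.xml file placed in the PrimeGrid project directory. This is a sketch only: the llrWOO app name is the one this thread uses, while the 4-thread values are placeholders you should adapt to your own CPU.

```xml
<!-- Sketch: run each llrWOO task on 4 threads; adjust -t and avg_ncpus to your machine -->
<app_config>
  <app>
    <name>llrWOO</name>
    <fraction_done_exact/>
  </app>
  <app_version>
    <app_name>llrWOO</app_name>
    <cmdline>-t 4</cmdline>
    <avg_ncpus>4</avg_ncpus>
  </app_version>
</app_config>
```

After editing the file, use Options > Read config files in the BOINC Manager (or restart BOINC) so the change takes effect.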
Time zone converter:
The World Clock - Time Zone Converter
NOTE: The countdown clock on the front page uses the host computer's time. Therefore, if your computer's time is off, so will be the countdown clock. For precise timing, use the UTC time in the data section at the very top, above the countdown clock.
Scoring Information
Scores will be kept for individuals and teams. Only tasks issued AFTER 24th October 2018 23:59:59 UTC and received BEFORE 31st October 2018 23:59:59 UTC will be considered for credit. We will be using the same scoring method as we currently use for BOINC credits. A quorum of 2 is NOT needed to award Challenge score - i.e. no double checker. Therefore, each returned result will earn a Challenge score. Please note that if the result is eventually declared invalid, the score will be removed.
At the Conclusion of the Challenge
We kindly ask users "moving on" to ABORT their tasks instead of DETACHING, RESETTING, or PAUSING.
ABORTING tasks allows them to be recycled immediately, which makes for a much faster "clean up" at the end of an LLR Challenge. DETACHING, RESETTING, and PAUSING tasks causes them to remain in limbo until they EXPIRE, so we must wait until they expire before sending them out to be completed.
Please consider either completing what's in the queue or ABORTING them. Thank you. :)
About Woodall Prime Search
Woodall Numbers (sometimes called Cullen numbers 'of the second kind') are positive integers of the form Wn = n*2^n-1, where n is a positive integer. Woodall numbers that are prime are called Woodall primes (or Cullen primes of the second kind).
The Woodall numbers Wn are primes for the following n:
2, 3, 6, 30, 75, 81, 115, 123, 249, 362, 384, 462, 512, 751, 822, 5312, 7755, 9531, 12379, 15822, 18885, 22971, 23005, 98726, 143018, 151023, 667071, 1195203, 1268979, 1467763, 2013992, 2367906, 3752948, and 17016602 and composite for all other n less than 17337441.
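The definition above is easy to turn into a small search. This sketch uses a strong probable-prime test on Python big integers rather than the deterministic LLR test that PrimeGrid actually runs, but it recovers the first few exponents from the list:

```python
# Sketch: find small n for which the Woodall number W_n = n*2^n - 1 is prime.
# Uses a Miller-Rabin strong probable-prime test to a handful of bases;
# real Woodall testing uses the deterministic LLR test instead.

def is_probable_prime(N):
    if N < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if N % p == 0:
            return N == p
    # write N-1 = d * 2^s with d odd
    d, s = N - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13):
        x = pow(a, d, N)
        if x in (1, N - 1):
            continue
        for _ in range(s - 1):
            x = x * x % N
            if x == N - 1:
                break
        else:
            return False
    return True

def woodall_prime_exponents(limit):
    return [n for n in range(1, limit) if is_probable_prime(n * 2**n - 1)]

print(woodall_prime_exponents(200))  # [2, 3, 6, 30, 75, 81, 115, 123]
```

The output matches the start of the list above; the next exponent, 249, is just past the cutoff.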
It is conjectured that there are infinitely many such primes. Currently, PrimeGrid is testing for Woodall primes in the n=17M - n=18M level (M=mega, 10^6). The last 4 Woodall primes found by PrimeGrid are:
2013992*2^2013992−1 (Lasse Mejling Andersen): official announcement | Prime Pages Entry
2367906*2^2367906−1 (Stephen Kohlman): official announcement | Prime Pages Entry
3752948*2^3752948−1 (Matthew J. Thompson): official announcement | Prime Pages Entry
17016602*2^17016602−1 (Diego Bertolotti): official announcement | Prime Pages Entry
For more information on Woodall numbers, please visit the following sites:
What is LLR?
The Lucas-Lehmer-Riesel (LLR) test is a primality test for numbers of the form N = k*2^n − 1, with 2^n > k. LLR is also the name of a program, developed by Jean Penné, that performs these tests. It includes the Proth test for +1 numbers and a PRP test for numbers that are not base 2. See also:
(Edouard Lucas: 1842-1891, Derrick H. Lehmer: 1905-1991, Hans Riesel: 1929-2014).
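The idea behind the test can be sketched in a few lines. This is a simplified educational implementation of one common formulation (a Jacobi-symbol search for a starting value P, then repeated squaring); Jean Penné's actual LLR program adds FFT-based big-number arithmetic and many optimisations, so this version is only practical for small n:

```python
# Educational sketch of the Lucas-Lehmer-Riesel test for N = k*2^n - 1
# (k odd, k < 2^n). Python big integers only; not how production LLR works.

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    t = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                t = -t
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            t = -t
        a %= n
    return t if n == 1 else 0

def lucas_v(k, P, mod):
    """V_k(P) mod `mod`, via a binary Lucas ladder over the pair (V_m, V_m+1)."""
    v, w = 2, P % mod
    for bit in bin(k)[2:]:
        if bit == '1':
            v, w = (v * w - P) % mod, (w * w - 2) % mod
        else:
            v, w = (v * v - 2) % mod, (v * w - P) % mod
    return v

def llr_is_prime(k, n):
    while k % 2 == 0:            # fold even k into the exponent
        k //= 2
        n += 1
    N = k * 2**n - 1
    # find P with Jacobi(P-2, N) = 1 and Jacobi(P+2, N) = -1
    P = next((p for p in range(3, 1000)
              if jacobi(p - 2, N) == 1 and jacobi(p + 2, N) == -1), None)
    if P is None:                # can only happen for certain composite N
        return False
    u = lucas_v(k, P, N)         # u_0 = V_k(P)
    for _ in range(n - 2):       # n-2 squarings: u <- u^2 - 2 (mod N)
        u = (u * u - 2) % N
    return u == 0                # N is prime iff u_(n-2) == 0

# Woodall check: W_6 = 6*2^6 - 1 = 383 is prime, W_5 = 159 = 3*53 is not.
print(llr_is_prime(6, 6), llr_is_prime(5, 5))  # True False
```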
____________
Monkeydee (Volunteer tester)
Joined: 8 Dec 13 Posts: 532 ID: 284516 Credit: 1,437,442,459 RAC: 1,901,851
Good luck to everyone!
We've already found one Woodall prime this year. Will luck strike twice?
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*4 + 8*8 + 11*3 + 12*1 = 157
Thought I'd test these (6-t) and it's running fine, so I'm wondering about this message in the BOINC Manager.
PrimeGrid: Notice from BOINC
Your app_config.xml file refers to an unknown application 'llrWOO'. Known applications: 'genefer17low', 'genefer16', 'gcw_sieve', 'genefer15', 'pps_sr2sieve', 'llrSOB', 'llrESP', 'llr321', 'llrPPSE', 'llrSR5', 'llrPPS', 'llrCUL'
10/16/2018 11:21:00 AM
Michael Goetz (Volunteer moderator, Project administrator)
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
You can ignore that message.
____________
My lucky number is 75898^524288+1
The Woodall numbers Wn are primes for the following n:
2, 3, 6, 30, 75, 81, 115, 123, 249, 362, 384, 462, 512, 751, 822, 5312, 7755, 9531, 12379, 15822, 18885, 22971, 23005, 98726, 143018, 151023, 667071, 1195203, 1268979, 1467763, 2013992, 2367906, 3752948, and 17016602 and composite for all other n less than 17337441.
Trivia (source): Keller found 115, 123, 249, 362, 384, 462, 512, 751, 822, 5312, 7755, 9531, 12379 back in 1984. However, not all of them were actually new: W_512 (the largest of those that was not really new) is identical to the Mersenne prime M_521, found in 1952 by Robinson with the SWAC computer.
Whenever n is a power of two, say n = 2^j, we have:
W_{2^j} = 2^j * 2^{2^j} - 1 = 2^{j + 2^j} - 1 = M_{j + 2^j}
This can only be prime if the number j + 2^j is prime (but that is certainly not sufficient). The example I mentioned was j = 9.
It must be unlikely that any larger prime would be simultaneously Woodall and Mersenne.
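The j = 9 case is easy to verify directly with arbitrary-precision integers:

```python
# Verify W_512 = M_521: since 512 = 2^9 and 9 + 2^9 = 521,
# the Woodall number 512*2^512 - 1 equals the Mersenne number 2^521 - 1.
j = 9
n = 2**j                                   # n = 512
assert n * 2**n - 1 == 2**(j + 2**j) - 1   # W_512 == M_521
print(j + 2**j)  # 521
```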
/JeppeSN
Nice challenge badge.
Nice challenge badge.
:)
____________
"Accidit in puncto, quod non contingit in anno."
Something that does not occur in a year may, perchance, happen in a moment.
Michael Goetz (Volunteer moderator, Project administrator)
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
After day 1:
Challenge: Halloween
App: 3 (WOO-LLR)
(As of 2018-10-26 00:20:47 UTC)
7731 tasks have been sent out. [CPU/GPU/anonymous_platform: 7731 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
734 (9%) came back with some kind of an error. [734 (9%) / 0 (0%) / 0 (0%)]
655 (8%) have returned a successful result. [655 (8%) / 0 (0%) / 0 (0%)]
6342 (82%) are still in progress. [6342 (82%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
552 (84%) are pending validation. [552 (84%) / 0 (0%) / 0 (0%)]
103 (16%) have been successfully validated. [103 (16%) / 0 (0%) / 0 (0%)]
0 (0%) were invalid. [0 (0%) / 0 (0%) / 0 (0%)]
0 (0%) are inconclusive. [0 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=17651595. The leading edge was at n=17528522 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 0.70% as much as it had prior to the challenge!
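For anyone curious how the "advanced X% as much" figure is computed, it appears to be the relative growth of the leading edge since the start of the challenge (the formula here is an inference from the reported numbers, not an official definition):

```python
# Inferred computation of the day-1 leading-edge advance figure.
start, now = 17528522, 17651595
advance = (now - start) / start * 100
print(f"{advance:.2f}%")  # 0.70%
```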
____________
My lucky number is 75898^524288+1
Hey all. I don't post much, but just wanted to say I was absolutely delighted when I saw the little ghostie Woo badge show up. For someone who's constantly checking his stats and looking at his badge collection, it's little treats like this that make it a pleasure to crunch for this project. :)
____________
Wohoo, Woo havo new look badgo ..
Which one? I see a few of them in the check list.
Monkeydee (Volunteer tester)
Joined: 8 Dec 13 Posts: 532 ID: 284516 Credit: 1,437,442,459 RAC: 1,901,851
Which one? I see a few of them in the check list.
The Woodall project badge is currently a ghost instead of the usual square on your profile page.
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*4 + 8*8 + 11*3 + 12*1 = 157
Michael Goetz (Volunteer moderator, Project administrator)
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
Day 2:
Challenge: Halloween
App: 3 (WOO-LLR)
(As of 2018-10-27 01:11:02 UTC)
9578 tasks have been sent out. [CPU/GPU/anonymous_platform: 9578 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
960 (10%) came back with some kind of an error. [960 (10%) / 0 (0%) / 0 (0%)]
1927 (20%) have returned a successful result. [1927 (20%) / 0 (0%) / 0 (0%)]
6691 (70%) are still in progress. [6691 (70%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
1442 (75%) are pending validation. [1442 (75%) / 0 (0%) / 0 (0%)]
476 (25%) have been successfully validated. [476 (25%) / 0 (0%) / 0 (0%)]
1 (0%) were invalid. [1 (0%) / 0 (0%) / 0 (0%)]
8 (0%) are inconclusive. [8 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=17678083. The leading edge was at n=17528522 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 0.85% as much as it had prior to the challenge!
____________
My lucky number is 75898^524288+1
composite (Volunteer tester)
Joined: 16 Feb 10 Posts: 1140 ID: 55391 Credit: 1,022,313,530 RAC: 1,720,016
Conventional wisdom (for challenges without "bunkering" allowed) has everyone starting to crunch as soon as possible after the beginning of the challenge, and running tasks continuously until the end of the challenge. This strategy works best for crunchers having the aggregate CPU power to rank in the top 50 or so on the individual leaderboard, and in general for everyone when tasks have relatively short runtimes.
Otherwise, other strategies will maximize individual score and rank in challenges having long-running tasks like this one.
Strategy 1: wait
Wait a period of time from the start of the challenge before retrieving the first challenge tasks to be crunched. This works because the majority of tasks are handed out by the server in order of increasing score, and below the top 50 competitors, individuals fall into peer groups who will complete the same number of tasks during a challenge. Those who crunch higher-scoring tasks on average will rank higher within their peer group.
However, the waiting period must not be so long that fewer tasks will be completed within the challenge time frame. A corollary is that a machine can maximize its contribution to a challenge score without needing to enter a challenge near the beginning. There is usually sufficient slack time to complete other work in progress without aborting that work to enter a challenge at the beginning.
Another benefit of waiting is that the high-volume crunchers will be first to consume the older part of the task queue (from before additional tasks were added for the challenge), which contains the lower-scoring recycled tasks that previously timed out or were aborted.
Strategy 2: abort recycled tasks
A related strategy is to ensure that you are always crunching fresh tasks rather than lower-scoring recycled tasks. For this you must "actively manage" your assigned workload, by aborting recycled tasks. Since information on tasks in progress is suppressed pending completion, you must check the name of each task. Recycled tasks are those having a number following the rightmost _ which is (usually) something other than 0 or 1. For instance, llrWOO_307159609_2 is a recycled task, so it will have a lower score at completion than a fresh task fetched at around the same time.
Once again there is a caveat. If the task you intend to abort has been crunching for a significant period of time, then you will be better off letting it go to completion. Otherwise you might end up with a lower number of tasks completed by the end of the challenge.
For both these strategies, the total challenge time sacrificed by a CPU (in the initial waiting period, or completely lost due to active task management) must not exceed the fastest multithreaded runtime of a single task on that CPU. For Woodall tasks, there is a lot of time to play with because single task runtimes can exceed several hours.
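The "recycled" check described in Strategy 2 is simple to script. A sketch, assuming task names follow the llrWOO_&lt;workunit&gt;_&lt;resend&gt; pattern shown in the example above (the parsing rule is the thread's rule of thumb, not an official guarantee):

```python
# Sketch: flag recycled tasks by the trailing number in the task name.
# Resend numbers 0 and 1 are first-time sends; higher values (usually)
# indicate a task that previously timed out or was aborted.

def is_recycled(task_name):
    suffix = task_name.rsplit('_', 1)[-1]
    return suffix.isdigit() and int(suffix) > 1

print(is_recycled("llrWOO_307159609_2"))  # True
print(is_recycled("llrWOO_307159609_0"))  # False
```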
Strategy 3: slow and steady
Naturally, the most important strategy you can have is to ensure that all the work done by your CPUs is of good quality. Invalid results due to CPU overheating are wasted energy and CPU time. So clean out the dust inside the computer that is blocking the vents, and reduce the clock speed if necessary. See the first post.
...the majority of tasks are handed out by the server in order of increasing score...
They are also of increasing size. There is a direct correlation between size, and therefore "crunch time", and the score. Waiting only means less chance to get in one more task at the end.
Aborting a smaller "recycled" task would be worse. You get x points for y time; the larger the tasks get, the fewer tasks you can do during the challenge. Near the end you'll reach a point at which you will be unable to finish in time, and the smaller the tasks are, the better they "fit". If you could request *only* the recycled tasks, you would have a better chance of maximizing your score.
A better strategy is to determine how many cores to use per task for max throughput in the allotted time.
Michael Goetz (Volunteer moderator, Project administrator)
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
Day 3:
Challenge: Halloween
App: 3 (WOO-LLR)
(As of 2018-10-28 01:58:01 UTC)
11200 tasks have been sent out. [CPU/GPU/anonymous_platform: 11200 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
1258 (11%) came back with some kind of an error. [1258 (11%) / 0 (0%) / 0 (0%)]
3369 (30%) have returned a successful result. [3369 (30%) / 0 (0%) / 0 (0%)]
6573 (59%) are still in progress. [6573 (59%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
2175 (65%) are pending validation. [2175 (65%) / 0 (0%) / 0 (0%)]
1162 (34%) have been successfully validated. [1162 (34%) / 0 (0%) / 0 (0%)]
5 (0%) were invalid. [5 (0%) / 0 (0%) / 0 (0%)]
27 (1%) are inconclusive. [27 (1%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=17698926. The leading edge was at n=17528522 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 0.97% as much as it had prior to the challenge!
____________
My lucky number is 75898^524288+1
composite (Volunteer tester)
Joined: 16 Feb 10 Posts: 1140 ID: 55391 Credit: 1,022,313,530 RAC: 1,720,016
Michael & Roger,
Here is a suggestion to alter the challenge scoring algorithm by dividing challenges into "sprints" and "marathons". Longer-running challenges, especially those with longer-running tasks, would operate as marathons and shorter challenges as sprints.
Sprint challenges would have fixed starting and ending times (as they do now).
For marathon challenges, each participant establishes their individual starting time when their first challenge task is fetched during the "starting interval", say on a particular date (UTC). If they fetch their first challenge task after the end of the starting interval, then their start time is set at the end of the starting interval.
For this Halloween Challenge, the beginning of the start interval would have been 1 day earlier than the actual current starting time. This would allow each participant to start a marathon challenge at a time which is most convenient to them.
A challenger's individual deadline is then set at a fixed time offset from the individual start time (e.g. 7 days), and it would be no later than that fixed offset from the end of the starting interval (i.e. the actual deadline for the current challenge).
Naturally this would make setting up challenge score accounting a bit more challenging.
But you enjoy some kinds of challenges, n'est-ce pas?
composite (Volunteer tester)
Joined: 16 Feb 10 Posts: 1140 ID: 55391 Credit: 1,022,313,530 RAC: 1,720,016
A better strategy is to determine how many cores to use per task for max throughput in the allotted time.
Agreed. This is implicit in the discussion. You can still play with initial waiting time, as even with maximum throughput you have some slack time where the last task on your CPU will be unfinished by the deadline.
You can also interrupt the last running task and assign more cores to it as cores become available, speeding up that task beyond the maximum-throughput operating point, toward the minimum-completion-time operating point.
Hello everyone,
Does anyone else experience very inconsistent run times, while the CPU times for the work units are more or less the same?
Check my computers for this:
http://www.primegrid.com/hosts_user.php?userid=63648
I use all cores on each CPU (usually 4) to run one WU. There is no other BOINC application or program running in the background. So I am very confused why my run times are so inconsistent. I am using the app_config file as posted in the thread.
Thanks for any help.
Monkeydee (Volunteer tester)
Joined: 8 Dec 13 Posts: 532 ID: 284516 Credit: 1,437,442,459 RAC: 1,901,851
Hello everyone,
Does anyone else experience very inconsistent run times, while the CPU times for the work units are more or less the same?
Check my computers for this:
http://www.primegrid.com/hosts_user.php?userid=63648
I use all cores on each CPU (usually 4) to run one WU. There is no other BOINC application or program running in the background. So I am very confused why my run times are so inconsistent. I am using the app_config file as posted in the thread.
Thanks for any help.
If you look in the BOINC manager on each machine you will likely see that each unit takes the same amount of actual time.
When using multi-threading through app_config any units you have waiting to go will count time towards the run time even if they are not actively being worked on. That counter stops when run time = CPU time.
So the units themselves are not taking any more or less run time than any other unit you have.
The same thing happens when you pause an in-progress unit and leave the BOINC manager open. The run time continues to count even though the unit is not actively crunching.
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*4 + 8*8 + 11*3 + 12*1 = 157
Sebastian* wrote: Hello everyone,
Does anyone else experience very inconsistent run times, while the CPU times for the work units are more or less the same?
Check my computers for this:
http://www.primegrid.com/hosts_user.php?userid=63648
I use all cores on each CPU (usually 4) to run one WU. There is no other BOINC application or program running in the background. So I am very confused why my run times are so inconsistent. I am using the app_config file as posted in the thread.
Thanks for any help.
I have seen varying run times too, but I expected these and therefore did not check how this correlates with CPU times. Instead, I noticed the following:
- The power meter which I have in front of the biggest machines of mine shows a wider variation of power draw (over the course of several hours/ ~half a day maybe) than I am used to from previous LLR challenges.
- The L3 cache that your 4-core CPUs have is a lot smaller than llrWOO tasks like.
- Your hosts are running Windows; would be interesting to look up Linux hosts with same hardware and same app_config.
Keith wrote: When using multi-threading through app_config any units you have waiting to go will count time towards the run time even if they are not actively being worked on.
This is misleadingly worded, IMO.
When multithreading is on, and more than 1 task is downloaded within a single request for new work, run time reporting when a job is completed is buggy.
You need to ignore the run times that you see in the results tables on the project web server, unless it was made sure that no more than 1 task was downloaded per request, in each and every request.
Keith wrote: The same thing happens when you pause an in-progress unit and leave the BOINC manager open. The run time continues to count even though the unit is not actively crunching.
No. Time which a task spends suspended does not count towards its run time. (Modulo the above-mentioned reporting/bookkeeping bug.)
Thank you for all the answers so far. :)
I've checked the actual time of the upload / report. And this seems pretty consistent.
On a quick check of what BOINC version people use, it seems that BOINC version 7.14.2 is reporting incorrect run times. On older versions the times seem very consistent.
My best guess for now is that the latest BOINC version is just reporting false times. Maybe a calculation bug of some kind?
xii5ku wrote: - The L3 cache that your 4-core CPUs have is a lot smaller than llrWOO tasks like.
I run several Broadwell LGA 1150 CPUs which have a 4th-level cache of 128MB. They are very fast, and perform better than any other quad-core CPU with even faster frequencies. But the smaller the project, the less of an impact it has.
There might be improved times from the huge L3 caches of the high-core-count CPUs when you run one WU on all cores, but I have no idea how much cache memory the llrWOO tasks take.
Keith wrote: When using multi-threading through app_config any units you have waiting to go will count time towards the run time even if they are not actively being worked on.
xii5ku wrote: This is misleadingly worded, IMO.
When multithreading is on, and more than 1 task is downloaded within a single request for new work, run time reporting when a job is completed is buggy.
You need to ignore the run times that you see in the results tables on the project web server, unless it was made sure that no more than 1 task was downloaded per request, in each and every request.
And this might be what causes my inconsistent run times. I have several tasks waiting; sometimes it is just one, at other times it is more. The run time shown is just several times the lowest run time.
How can you make sure that only one WU is downloaded per request, when one is already in progress? Is it even worth the trouble?
Check my computers for this:
http://www.primegrid.com/hosts_user.php?userid=63648
I use all cores on each CPU (usually 4) to run one WU. There is no other BOINC application or program running in the background. So I am very confused why my run times are so inconsistent. I am using the app_config file as posted in the thread.
My Run and CPU times look just as inconsistent; however, I have been adjusting thread allocation in app_config.xml throughout the challenge. For all my hyper-threaded processors, running 50% is most efficient as Roger indicated in the challenge description - especially for llrWOO. I noticed some of your machines are running all threads, not 50%.
If the CPU/Run time calculation references -oThreadsPerTest in the Stderr XML per task, I've made a mess of the time values. :)
Monkeydee (Volunteer tester)
Joined: 8 Dec 13 Posts: 532 ID: 284516 Credit: 1,437,442,459 RAC: 1,901,851
How can you make sure that only one WU is downloaded per request, when one is already in progress? Is it even worth the trouble?
To always download one task at a time, make sure that "Store at least X days of work" and "Store up to an additional X days of work" are both set to 0. That setting is found in the BOINC manager under Options > Computing Preferences.
As long as you don't run out of work, you will get one task at a time. If you run out of work for whatever reason, then BOINC will download one task per core as specified in the app_config. So if you tell app_config that you want to use 4 cores, then 4 units will download if you run completely out of work. Otherwise a new task will usually download about 3 minutes before the previous one finishes, so you won't run out of work and you will keep getting one task at a time.
There are advantages and disadvantages to running without a cache.
Advantages:
- Better chance to be 1st on a prime find. If you have a task in your cache that's prime, someone else could do that unit while your computer is waiting to run it.
- Accurate run time on the stats page
Disadvantages:
- No work if you have a network outage or otherwise can't reach the site for an extended period. With a cache you will at least have something for your computer to do during the outage.
As to whether it's worth it to set yourself up that way or not depends on what you place as important and what your personal situation is like.
I run without a cache to try to be 1st on any prime my computers crunch, but I have also been stung by not having any work for long internet outages.
EDIT: As I was writing this my internet decided to go on the blink... Maybe I should reconsider running a small cache...
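For reference, the two cache settings mentioned above can also be pinned from a file instead of the GUI. A sketch of a global_prefs_override.xml in the BOINC data directory (these two element names are standard BOINC preferences; the zero values mirror the no-cache setup described above):

```xml
<!-- Sketch: no work cache - fetch one task at a time -->
<global_preferences>
  <work_buf_min_days>0.0</work_buf_min_days>
  <work_buf_additional_days>0.0</work_buf_additional_days>
</global_preferences>
```

Use Options > Read local prefs file in the BOINC Manager to apply it without restarting.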
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*4 + 8*8 + 11*3 + 12*1 = 157
FishFry wrote: For all my hyper-threaded processors, running 50% is most efficient as Roger indicated in the challenge description - especially for llrWOO.
Perhaps this is true on Windows. I neglected to perform proper measurements for llrWOO, but based on what I know from other LLR-based subprojects, I suspect that the optimum thread count is larger on Linux.
mackerel (Volunteer tester)
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 2,469
- The L3 cache that your 4-core CPUs have is a lot smaller than llrWOO tasks like.
A better way to phrase it would be that the data delivery system may not be sufficient to keep up with the core potential. A small L3 cache is not necessarily a problem if you have fast enough ram for example.
Perhaps this is true on Windows. I neglected to perform proper measurements for llrWOO, but based on what I know from other LLR based subprojects, I suspect that the optimum thread count is larger on Linux.
In recent general testing of CPU architectures, for a 2048k FFT in Prime95 I saw a ~10% increase using all HT threads on Skylake over not using them. I didn't investigate it further, but that was unexpected. I had previously seen HT inefficiencies when running multiple single-thread tasks, but multi-thread operation wasn't affected by that previously. Zen and Zen+ took a hit in the same situation. I haven't investigated further yet.
For indication, Woodall tasks seem to be mostly 1920k, so very similar to the above. I would caution that I was not attempting to replicate LLR-like running conditions in that testing, as I was looking more for peak performance when not limited by resources.
mackerel (Volunteer tester)
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 2,469
I run several Broadwell LGA 1150 CPUs which have a 4th-level cache of 128MB. They are very fast, and perform better than any other quad-core CPU with even faster frequencies. But the smaller the project, the less of an impact it has.
There might be improved times from the huge L3 caches of the high-core-count CPUs when you run one WU on all cores, but I have no idea how much cache memory the llrWOO tasks take.
It doesn't seem to matter much whether the data is in L3, L4 or RAM, as long as it gets fed fast enough. Latency didn't seem to have much of an impact on performance, only bandwidth. The L4 of Broadwell desktop CPUs, from memory, was rated around 50GB/s bandwidth, comparable to dual channel 3200. Combined with the lower clock potential of Broadwell CPUs, the L4 cache is practically unlimited bandwidth as far as the software is concerned. When I had a RAM shortage, I ran my Broadwell system with a single stick of RAM, unthinkable for performance if it weren't for the presence of the L4 cache.
As for data requirements, the FFT data by itself is 8x the FFT size, so Woodall units would currently take just under 16MB each. There is also some other data that is needed, but based on observations, considering only the FFT data relative to L3 (or L4) cache is sufficient to indicate where cache makes RAM considerations unimportant. Exceeding that generally pushes you into RAM-limited performance.
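The "just under 16MB" figure follows directly from the 8-bytes-per-element rule stated above (the 1920k FFT length is the one mentioned earlier in the thread):

```python
# FFT working-set estimate: 8 bytes per FFT element.
fft_size = 1920 * 1024          # a 1920k FFT, typical for current Woodall tasks
fft_bytes = 8 * fft_size        # 15,728,640 bytes
print(f"{fft_bytes / 1e6:.1f} MB")  # 15.7 MB
```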
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
After 4 days:
Challenge: Halloween
App: 3 (WOO-LLR)
(As of 2018-10-29 00:25:42 UTC)
13052 tasks have been sent out. [CPU/GPU/anonymous_platform: 13052 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
1487 (11%) came back with some kind of an error. [1487 (11%) / 0 (0%) / 0 (0%)]
4799 (37%) have returned a successful result. [4799 (37%) / 0 (0%) / 0 (0%)]
6766 (52%) are still in progress. [6766 (52%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
2707 (56%) are pending validation. [2707 (56%) / 0 (0%) / 0 (0%)]
2043 (43%) have been successfully validated. [2043 (43%) / 0 (0%) / 0 (0%)]
10 (0%) were invalid. [10 (0%) / 0 (0%) / 0 (0%)]
39 (1%) are inconclusive. [39 (1%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=17724668. The leading edge was at n=17528522 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 1.12% as much as it had prior to the challenge!
____________
My lucky number is 75898524288+1 | |
|
|
Hello.
How can you make sure that only one WU is downloaded per request when one is already in progress? Is it even worth the trouble?
To always download one task at a time, make sure that "Store at least X days of work" and "Store up to an additional X days of work" are both set to 0. That setting is found in the BOINC manager under Options > Computing Preferences.
Unfortunately, this doesn't help if you go into a race with multi-threaded applications after previously letting your machine run empty: you'll get four work units on a machine with four threads.
But you can tweak BOINC via one of the config files to only ask for one work unit at a time, as I read some time ago. I don't remember which file it was, though, and I haven't found time to tweak my systems for this, although I'd like to use it on some other projects as well.
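The poster doesn't recall which file it was; one candidate (an assumption on my part, not a confirmation of what was meant) is the `fetch_minimal_work` option in BOINC's cc_config.xml, which tells the client to fetch only one job per device:

```xml
<!-- Possibly the setting being recalled (an assumption, not confirmed by
     the poster): cc_config.xml in the BOINC data directory. -->
<cc_config>
  <options>
    <!-- Fetch only one job per device per scheduler request. -->
    <fetch_minimal_work>1</fetch_minimal_work>
  </options>
</cc_config>
```

After editing the file, use Options > Read config files (or restart the client) for it to take effect.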
____________
Greetings, Jens
147433824^131072+1 | |
|
dukebgVolunteer tester
 Send message
Joined: 21 Nov 17 Posts: 242 ID: 950482 Credit: 23,670,125 RAC: 0
                  
|
You can also set the preferences to 25% "use CPUs", that way it will only request 1 task when running dry (if 4 cores). | |
|
|
@mackerel, thank you for this info. | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
There are, so far, 14 results reporting a new Woodall prime.
However, as with all numbers where b=2 and c=-1, such as Woodalls, computation errors often produce a false prime result, and there are strong reasons to suspect that all 14 reported primes are errors.
____________
My lucky number is 75898524288+1 | |
|
Honza Volunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1949 ID: 352 Credit: 6,011,728,744 RAC: 1,522,448
                                      
|
I guess it will take a while to manually check those false positives...
____________
My stats | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
I guess it will take a while to manually check those false positive...
We don't have to manually check them. BOINC will check them. Some of them have already been proven wrong (which is why other 'primes' from the same computer are considered unreliable.)
Right now, all of the 'prime' results have a reason to be considered unreliable.
____________
My lucky number is 75898524288+1 | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
Composite wrote:
Conventional wisdom...
Here's some raw data for you. Values shown are credit and runtime (h:m:s).
I'm running multi-threaded, and allowing multiple tasks to be downloaded in the initial work fetch. So 4 tasks are downloaded for computer 1 (quad core), and the others get 2 tasks (dual core).
Computer 1: (4 tasks downloaded in the initial 10 seconds of the challenge, others downloaded one by one as needed)
1 _1 18,111.68 14:22:51 -- initial fetch
2 _1 18,111.03 13:50:25 -- initial fetch
3 _7 17,304.54 13:47:53 -- initial fetch
4 _5 16,442.88 12:48:19 -- initial fetch
5 _1 16,224.22 12:05:06
6 _3 18,084.16 13:51:24
7 _1 18,292.67 14:03:14
8 _1 18,312.54 13:57:07
Computer 2: (1st two tasks downloaded in initial 10 seconds)
1 _3 18,062.86 32:46:51 -- initial fetch
2 _8 18,088.38 31:50:04 -- initial fetch
3 _6 17,424.88 30:22:01
Computer 3: (1st two tasks downloaded in initial 10 seconds)
1 _2 18,102.78 50:06:49 -- initial fetch
2 _1 18,111.61 49:43:22 -- initial fetch
Edit: I added in the task suffixes. _0 and _1 are the original 2 tasks in each workunit. Anything else is a resend.
What's interesting is that of the 12 tasks in the initial fetch (about 5 seconds after the challenge started), only 3 were one of the original 2 tasks in their work unit. (This includes two computers not shown above because they have not yet returned at least two tasks.) Of the tasks sent at least half a day after the start, 3 of 5 were among the original two tasks.
That behavior makes sense, because in the first day or two, after the first hundred or so tasks are sent out, all the tasks will be newly created, or resends of tasks that errored out immediately; either way, they should almost all be from newly created workunits. At the very beginning, though, you might get resends from work units created days or weeks ago.
Whether old or new tasks are better is up for debate. Newer tasks will be longer and give more credit, and if you can only do N tasks, more credit is better. But if longer tasks mean you end up doing N-1 tasks instead of N, you lose a lot of credit by running one less task (or worse, one less task per core).
____________
My lucky number is 75898524288+1 | |
|
|
There are, so far, 14 results reporting a new Woodall prime.
However, as with all numbers, such as Woodall, where b=2 and c=-1, computation errors often result in a false prime result, and there's strong reasons to suspect that all 14 primes are errors.
So for forms k*2^n - 1, it appears that hardware errors sometimes lead to one particular wrong residue/result, namely 0 == 'Is prime!'.
There must be a small probability (epsilon squared) that both tasks sent out for one workunit return the same wrong result, 0 == 'Is prime!'.
Historically, when BOINC finds that both tasks of one workunit (with b=2, c=-1) agree that it is a prime, what have been the empirical odds that this was due to two bad computers accidentally agreeing on the wrong conclusion, rather than a genuine prime discovery?
/JeppeSN | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
There are, so far, 14 results reporting a new Woodall prime.
However, as with all numbers, such as Woodall, where b=2 and c=-1, computation errors often result in a false prime result, and there's strong reasons to suspect that all 14 primes are errors.
So for forms k*2^n - 1, it appears that hardware errors sometimes lead to a particular wrong residue/result, namely: 0 == 'Is prime!'.
There must be a small probability (epsilon squared) that both tasks (sent out for one workunit) return the same wrong result, 0 == 'Is prime!'.
Historically, when BOINC finds that both tasks of one workunit (with b=2; c=-1) agree it is a prime, what has been the empirical odds this was due to two bad computers accidentally agreeing on the wrong conclusion (rather than a genuine prime discovery)?
/JeppeSN
To the best of my knowledge, it's never happened. The sample size is therefore too small to determine what the odds are.
Every prime is double checked at least once by T5K -- at least for primes large enough for T5K. (Some REALLY large primes aren't double checked by T5K, but they're checked by us.)
It might be possible, therefore, for a double-error in today's SGS project to go undetected. The server does check confirmed SGS primes for both an SG pair and a twin prime, but it doesn't do a triple check on the original prime.
There must be a small probability (epsilon squared) that both tasks (sent out for one workunit) return the same wrong result, 0 == 'Is prime!'.
It's actually less than that. The errors aren't a random occurrence; they're somewhat predictable hardware errors, very frequently due to overly ambitious overclocking. The errors aren't randomly distributed amongst the hosts: they all occur on a very small number of problematic computers. Often it's just one computer. (There are several during this challenge, however.)
Since we don't allow a computer to be its own wingman, that severely limits the odds of the wingman also being a malfunctioning computer. Often, it lowers the odds of a double false-prime to almost 0.
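The gap between the naive epsilon-squared estimate and the clustered-error picture described above can be illustrated numerically (the probabilities below are invented for illustration, not PrimeGrid statistics):

```python
# Illustrative sketch with made-up numbers (NOT project statistics):
# compare the naive "independent errors" model against a clustered model
# where almost all false primes come from a few faulty hosts.

# Naive model: every task errs with probability eps, independently.
eps = 0.01
p_naive = eps ** 2              # both wingmen wrong independently

# Clustered model: a fraction f of hosts are faulty and err with
# probability p_bad; healthy hosts essentially never report a false prime.
# Since a host can't be its own wingman, a double false-prime needs two
# *different* faulty hosts landing on the same workunit.
f, p_bad = 0.001, 0.5           # 0.1% of hosts faulty, erring half the time
p_clustered = (f * p_bad) ** 2

print(p_naive, p_clustered)     # clustered odds are far smaller
```

The point is qualitative: once errors concentrate on a handful of hosts and self-wingman pairing is forbidden, the double-false-prime probability collapses well below the naive epsilon squared.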
____________
My lucky number is 75898524288+1 | |
|
|
Thanks. Interesting.
It should not take too long for someone with a non-overclocked processor to run through all the finds in SGS as a triple check. But the probability that a "prime" could be removed by such an effort must be microscopic.
/JeppeSN | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
Thanks. Interesting.
It should not take too long for someone with a non-overclocked processor to run through all the finds in SGS, as a triple check. But the probability that a "prime" can be removed by such an effort, must be microscopic.
/JeppeSN
As of right now, there are 1491 SGS primes and 3176 TPS primes which do not have a T5K id.
The SGS list will continue to grow, but the TPS list won't.
____________
My lucky number is 75898524288+1 | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
Thanks. Interesting.
It should not take too long for someone with a non-overclocked processor to run through all the finds in SGS, as a triple check. But the probability that a "prime" can be removed by such an effort, must be microscopic.
/JeppeSN
As of right now, there are 1491 SGS primes and 3176 TPS primes which do not have a T5K id.
The SGS list will continue to grow, but the TPS list won't.
I just had a conversation with Jim about this. Our validator also compares the SGS +1 residues, and those have to match as well before the validator will accept the prime. Furthermore, faulty hosts that report false SGS primes often report residues of 0000000000000001 for the +1 test, so if both hosts report that residue along with the SGS -1 prime, the validator won't accept the prime.
So the odds of a false SGS prime slipping through are really small.
The validator has worked that way since before SGS became too small for T5K, so we have a high confidence in all of the SGS primes in the database.
The much older TPS primes which are not in T5K probably were not subject to the same rigorous validation tests. TPS was before my time here.
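The SGS acceptance logic described above can be sketched as follows (my reading of the post, NOT PrimeGrid's actual validator code; the dictionary shape and function name are invented for illustration):

```python
# Simplified sketch of the extra SGS checks described above (an
# interpretation of the post, not real validator code): the +1 residues
# must match, and the classic faulty-host +1 residue blocks acceptance.

SUSPECT_PLUS1_RESIDUE = "0000000000000001"

def accept_sgs_prime(host_a: dict, host_b: dict) -> bool:
    """Each host reports {'minus1_prime': bool, 'plus1_residue': str}."""
    if not (host_a["minus1_prime"] and host_b["minus1_prime"]):
        return False                      # both must claim the -1 prime
    if host_a["plus1_residue"] != host_b["plus1_residue"]:
        return False                      # +1 residues must match too
    if host_a["plus1_residue"] == SUSPECT_PLUS1_RESIDUE:
        return False                      # known faulty-host signature
    return True

ok = accept_sgs_prime(
    {"minus1_prime": True, "plus1_residue": "ab12"},
    {"minus1_prime": True, "plus1_residue": "ab12"},
)
print(ok)
```

Under this reading, a false SGS prime would need two independent hosts to err on the -1 test *and* produce identical, non-suspect wrong residues on the +1 test, which is why the odds of one slipping through are so small.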
____________
My lucky number is 75898524288+1 | |
|
robish Volunteer moderator Volunteer tester
 Send message
Joined: 7 Jan 12 Posts: 2196 ID: 126266 Credit: 7,314,996,372 RAC: 3,234,957
                               
|
The much older TPS primes which are not in T5K probably were not subject to the same rigorous validation tests. TPS was before my time here.
Another future double check Michael?
____________
My lucky numbers 10590941048576+1 and 224584605939537911+81292139*23#*n for n=0..26 | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
The much older TPS primes which are not in T5K probably were not subject to the same rigorous validation tests. TPS was before my time here.
Another future double check Michael?
Not by us, no.
____________
My lucky number is 75898524288+1 | |
|
robish Volunteer moderator Volunteer tester
 Send message
Joined: 7 Jan 12 Posts: 2196 ID: 126266 Credit: 7,314,996,372 RAC: 3,234,957
                               
|
The much older TPS primes which are not in T5K probably were not subject to the same rigorous validation tests. TPS was before my time here.
Another future double check Michael?
Not by us, no.
Phew :)
SOB will keep us going for a while yet anyhow.
____________
My lucky numbers 10590941048576+1 and 224584605939537911+81292139*23#*n for n=0..26 | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
The much older TPS primes which are not in T5K probably were not subject to the same rigorous validation tests. TPS was before my time here.
Another future double check Michael?
Not by us, no.
Phew :)
SOB will keep us going for a while yet anyhow.
Those are very small numbers. It's probably a one person job.
____________
My lucky number is 75898524288+1 | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
Day 5:
Challenge: Halloween
App: 3 (WOO-LLR)
(As of 2018-10-30 00:44:34 UTC)
15366 tasks have been sent out. [CPU/GPU/anonymous_platform: 15366 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
1924 (13%) came back with some kind of an error. [1924 (13%) / 0 (0%) / 0 (0%)]
6556 (43%) have returned a successful result. [6556 (43%) / 0 (0%) / 0 (0%)]
6886 (45%) are still in progress. [6886 (45%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
3093 (47%) are pending validation. [3093 (47%) / 0 (0%) / 0 (0%)]
3393 (52%) have been successfully validated. [3393 (52%) / 0 (0%) / 0 (0%)]
15 (0%) were invalid. [15 (0%) / 0 (0%) / 0 (0%)]
55 (1%) are inconclusive. [55 (1%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=17756199. The leading edge was at n=17528522 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 1.30% as much as it had prior to the challenge!
____________
My lucky number is 75898524288+1 | |
|
|
Day 5:
Since the challenge started, the leading edge has advanced 1.30% as much as it had prior to the challenge!
If you took, say, the week or two before the challenge began, by what percentage does the leading edge advance on a normal day?
____________
1 PPSE (+2 DC) & 5 SGS primes | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
Day 5:
Since the challenge started, the leading edge has advanced 1.30% as much as it had prior to the challenge!
If you took, say, the week or two before the challenge began, by what percentage does the leading edge advance on a normal day?
That's not information we keep.
With some projects, such as SGS or PPSE, you can gauge that progression from the primes that are discovered. But you can't do that when there's a prime every 10 years or so.
What we do have, however, is a chart that shows us how many tasks are completed each day:
We're currently doing about 1800 tasks per day, while the minimum value (i.e., prior to the challenge) is about 150 per day. We're doing somewhat more than ten times the normal rate of Woodall tasks, which is typical of a challenge.
____________
My lucky number is 75898524288+1 | |
|
|
Is the new Woodall badge a permanent addition or just an event treat?
____________
676754^262144+1 is prime | |
|
Azmodes Volunteer tester
 Send message
Joined: 30 Dec 16 Posts: 184 ID: 479275 Credit: 2,197,504,179 RAC: 13,692
                       
|
So is multi-threading actually worth it, throughput-wise?
I tested it with the LLR program through the command prompt and in this case apparently not. Should I have turned off hyperthreading? (It's a TR 1950X)
https://i.imgur.com/NHwqb5f.jpg (EDIT: the first "h per day" should read 10.4)
I waited until the 20,000th iteration before entering the times.
____________
Long live the sievers.
+ Encyclopaedia Metallum: The Metal Archives + | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
Is the new Woodall badge a permanent addition or just an event treat?
They're set to vanish at the stroke of midnight when the challenge ends. :)
____________
My lucky number is 75898524288+1 | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
So is multi-threading actually worth it, throughput-wise?
I tested it with the LLR program through the command prompt and in this case apparently not. Should I have turned off hyperthreading? (It's a TR 1950X)
https://i.imgur.com/NHwqb5f.jpg (EDIT: the first "h per day" should read 10.4)
I waited until the 20,000th iteration before entering the times.
It's exceptionally beneficial on larger tasks, generally anything larger than PPS-MEGA. On a typical quad-core, by running a single 4-thread task instead of four individual single-thread tasks, you're using 25% of the memory, which means your cache hit rate is higher. Hence more speed and better overall throughput. The difference is significant, especially on Intel FMA3-capable CPUs, because the extreme performance of the CPU makes the memory delays even more pronounced.
The faster the CPU, the larger the gain you'll see from multi-threading. But even slow CPUs should see an improvement. The memory problem was evident even back when we were running Core 2 CPUs 8 years ago.
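The "25% of the memory" figure can be made concrete with a back-of-the-envelope cache comparison (the 8 MiB L3 is my assumption for a typical quad-core; the ~16 MiB Woodall footprint comes from mackerel's post earlier in the thread):

```python
# Back-of-envelope sketch of why one 4-thread task beats four 1-thread
# tasks on cache. The L3 size is an assumed typical quad-core value,
# and 16 MiB is the approximate Woodall FFT footprint per task.

l3_mib = 8                     # typical quad-core L3 (assumption)
task_mib = 16                  # approx. FFT data per Woodall task

four_single = 4 * task_mib     # 64 MiB working set: far exceeds L3
one_multi = 1 * task_mib       # 16 MiB working set: much closer to L3

print(one_multi / four_single) # 0.25: the "25% of the memory" figure
```

Four single-thread tasks thrash an 8 MiB L3 badly, while one multi-threaded task keeps a much larger share of its working set cached; that cache-hit difference is where the throughput gain comes from.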
____________
My lucky number is 75898524288+1 | |
|
|
Is the new Woodall badge a permanent addition or just an event treat?
They're set to vanish at the stroke of midnight when the challenge ends. :)
Aww, we should be able to keep them.... | |
|
|
Michael Goetz wrote: Azmodes wrote: So is multi-threading actually worth it, throughput-wise?
I tested it with the LLR program through the command prompt and in this case apparently not. Should I have turned off hyperthreading? (It's a TR 1950X)
https://i.imgur.com/NHwqb5f.jpg (EDIT: the first "h per day" should read 10.4)
I waited until the 20,000th iteration before entering the times.
It's exceptionally beneficial on larger tasks, generally anything larger than PPS-MEGA. On a typical quad-core, by running a single 4-thread tasks instead of four individual single-thread tasks, you're using 25% of the memory, which means your cache hit rate is higher. Hence, more speed and better overall throughput. The difference is significant, especially on Intel FMA3-capable CPUs because the extreme performance of the CPU makes the memory delays even more significant.
The faster the CPU, the larger the gain you'll see from multi-threading. But even slow CPUs should see an improvement. The memory problem was evident even back when we were running Core-2 CPUs 8 years ago.
This is, of course, what I see too on all of my CPUs (currently all Intel). (Edit: And it was clearly showing in my own measurements, some with random tasks, others with fixed tasks.)
But here is a transcript of Azmodes' screenshot:
#tasks ___ iterations ___ msec per iter ___ tasks per day ___ #threads ___ h per task
__ 1 ____ 17,016,603 _____ 2.199 _____ 2.309 _______ 16 ______ 10.4
__ 2 ____ 17,016,603 _____ 3.409 _____ 2.979 ________ 8 ______ 16.1
__ 4 ____ 17,016,603 _____ 6.653 _____ 3.053 ________ 4 ______ 31.4
__ 8 ____ 17,016,603 ____ 13.271 _____ 3.061 ________ 2 ______ 62.7
_ 16 ____ 17,016,603 ____ 23.403 _____ 3.471 ________ 1 _____ 110.6
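For what it's worth, the transcribed columns are internally consistent; the hours-per-task and tasks-per-day figures follow directly from iterations times milliseconds per iteration (a quick sanity check, not new data):

```python
# Sanity-check the transcribed table: hours per task and total tasks per
# day follow from iteration count x (ms per iteration) x concurrent tasks.

iters = 17_016_603

def derived(ntasks: int, ms_per_iter: float) -> tuple:
    """Return (hours per task, tasks per day) implied by the timing."""
    hours = iters * ms_per_iter / 1000 / 3600
    return hours, ntasks * 24 / hours

rows = [(1, 2.199), (2, 3.409), (4, 6.653), (8, 13.271), (16, 23.403)]
for ntasks, ms in rows:
    h, d = derived(ntasks, ms)
    print(f"{ntasks:2d} tasks: {h:6.1f} h/task, {d:.3f} tasks/day")
```

So the screenshot's arithmetic checks out; the open question is only why throughput keeps rising with more, smaller tasks on this CPU.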
What is going on? A mistake in the test, or does AMD Zen behave so much differently? | |
|
mackerel Volunteer tester
 Send message
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 2,469
                              
|
Based on my testing primarily with Intel quad cores (excluding Broadwell, Skylake-X), I see 3 performance zones:
1, small tasks where 1 per core does not exceed L3 cache. Here it is more advantageous to run 1 per core, as multi-thread overhead can significantly reduce throughput.
2, multi-thread task(s) substantially filling L3 cache get a speedup compared to one per core, as they are no longer fighting for RAM bandwidth. With sufficient RAM bandwidth, this may not be seen.
3, single or multi-thread tasks that significantly exceed L3 cache. These will be RAM-bandwidth limited.
Broadwell with its L4 cache has practically unlimited RAM bandwidth, negating case 3. Skylake-X, with its different cache structure, I'm less clear about. It doesn't seem to scale as well as I'd hoped in practice, but I don't have enough data to make any detailed observations. I suspect it may behave similarly to Zen in some respects.
Both Zen and Skylake-X are unlike traditional Intel CPUs with inclusive cache; instead they have exclusive and non-inclusive cache, respectively. Data is not duplicated in L2 and L3. I suspect there is extra data shuffling because of that, but you potentially have a bigger effective cache. I'm also unclear what happens if more than one core needs the same data.
In the case of Threadripper, we have an additional potential problem: there are two CCXs per die and two dies per socket. One task running on all cores may be impacted by having to cross those boundaries. Two tasks of 8 cores each, assuming the scheduler is smart enough to put them on different dies, would effectively behave like two 8-core consumer Ryzen CPUs in terms of operation and RAM bandwidth (assuming all channels are populated). The 1950X has 32MB of L3 cache in total, so that should support 4 simultaneous Woo tasks without being limited by the RAM, if at all. Based on this, I would expect 2 or 4 tasks, dividing the cores equally between them, to be most efficient. Why 8 tasks is comparable, and 16 even better, I'm not sure how to explain.
One possible reason is that Ryzen-family AVX units are much weaker than Intel's, with approximately half the performance. Because of that, you could feed about double the Zen cores compared to Intel cores for a given number of RAM channels, especially with the more limited clocks at higher core counts. What speed RAM was the system using? If 3200 or faster, it could be fairly close to not being limited. I'm not sure this is a complete explanation, but it's a starting point towards one. | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
One day to go!
Challenge: Halloween
App: 3 (WOO-LLR)
(As of 2018-10-31 00:35:13 UTC)
17398 tasks have been sent out. [CPU/GPU/anonymous_platform: 17398 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
2398 (14%) came back with some kind of an error. [2398 (14%) / 0 (0%) / 0 (0%)]
8305 (48%) have returned a successful result. [8305 (48%) / 0 (0%) / 0 (0%)]
6695 (38%) are still in progress. [6695 (38%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
3391 (41%) are pending validation. [3391 (41%) / 0 (0%) / 0 (0%)]
4824 (58%) have been successfully validated. [4824 (58%) / 0 (0%) / 0 (0%)]
24 (0%) were invalid. [24 (0%) / 0 (0%) / 0 (0%)]
66 (1%) are inconclusive. [66 (1%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=17781662. The leading edge was at n=17528522 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 1.44% as much as it had prior to the challenge!
____________
My lucky number is 75898524288+1 | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
I have good news and bad news.
The good news is we found a prime.
It's a mega prime!
It's a HUGE mega prime!
The bad news it's so huge, it's about a million digits larger than a Woodall prime would be. It's not a Woodall. :(
____________
My lucky number is 75898524288+1 | |
|
Monkeydee Volunteer tester
 Send message
Joined: 8 Dec 13 Posts: 532 ID: 284516 Credit: 1,437,442,459 RAC: 1,901,851
                           
|
I have good news and bad news.
The good news is we found a prime.
It's a mega prime!
It's a HUGE mega prime!
The bad news it's so huge, it's about a million digits larger than a Woodall prime would be. It's not a Woodall. :(
So it's either a PSP or a GFN20
I really hope it's a PSP.
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*4 + 8*8 + 11*3 + 12*1 = 157
| |
|
RafaelVolunteer tester
 Send message
Joined: 22 Oct 14 Posts: 909 ID: 370496 Credit: 529,897,285 RAC: 383,035
                        
|
I have good news and bad news.
The good news is we found a prime.
It's a mega prime!
It's a HUGE mega prime!
The bad news it's so huge, it's about a million digits larger than a Woodall prime would be. It's not a Woodall. :(
please be PSP
Please be PSP
PLEASE BE PSP!!!!
EDIT: RIP in Pepperonis, just saw the GFN-20 thread. | |
|
robish Volunteer moderator Volunteer tester
 Send message
Joined: 7 Jan 12 Posts: 2196 ID: 126266 Credit: 7,314,996,372 RAC: 3,234,957
                               
|
I have good news and bad news.
The good news is we found a prime.
It's a mega prime!
It's a HUGE mega prime!
The bad news it's so huge, it's about a million digits larger than a Woodall prime would be. It's not a Woodall. :(
Wahoooooo! EXCELLENT.
____________
My lucky numbers 10590941048576+1 and 224584605939537911+81292139*23#*n for n=0..26 | |
|
|
The bad news it's so huge, it's about a million digits larger than a Woodall prime would be. It's not a Woodall. :(
Wahoooooo! EXCELLENT.
I feel a hat, baton, or other appropriate device passing. | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
With a bit less than a day remaining in the challenge, it's time to remind everyone...
At the Conclusion of the Challenge
We would prefer users who are "moving on" to finish the tasks they have downloaded; if not, then please ABORT the WU's (and then UPDATE the PrimeGrid project) instead of DETACHING, RESETTING, or PAUSING.
ABORTING WU's allows them to be recycled immediately, and thus a much faster "clean up" at the end of a Challenge. DETACHING, RESETTING, and PAUSING WU's cause them to remain in limbo until they EXPIRE, so we must wait until the WU's expire before sending them out to be completed. Thank you!
____________
My lucky number is 75898524288+1 | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
With half a day to go...
Challenge: Halloween
App: 3 (WOO-LLR)
(As of 2018-10-31 12:06:03 UTC)
18151 tasks have been sent out. [CPU/GPU/anonymous_platform: 18151 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
2541 (14%) came back with some kind of an error. [2541 (14%) / 0 (0%) / 0 (0%)]
9229 (51%) have returned a successful result. [9229 (51%) / 0 (0%) / 0 (0%)]
6381 (35%) are still in progress. [6381 (35%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
3505 (38%) are pending validation. [3505 (38%) / 0 (0%) / 0 (0%)]
5619 (61%) have been successfully validated. [5619 (61%) / 0 (0%) / 0 (0%)]
29 (0%) were invalid. [29 (0%) / 0 (0%) / 0 (0%)]
76 (1%) are inconclusive. [76 (1%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=17792006. The leading edge was at n=17528522 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 1.50% as much as it had prior to the challenge!
____________
My lucky number is 75898524288+1 | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
                               
|
Interesting statistic:
So far, 9382 challenge tasks have been successfully received back at the server. That doesn't mean all of those calculations were done correctly, but it does mean the host computer believes that the calculation was correct.
Sometimes it's not correct, especially when there's too much overclocking. With Woodall numbers, a lot of those incorrect calculations erroneously declare the number to be prime.
So far, of those 9382 returned tasks, 18 are prime.
Of those 18, 5 have been proven wrong, meaning that two other computers have run the same test resulting in a composite result with a matching residue.
8 more of the suspect primes are almost certainly wrong as well. For each, another computer has found the number to be composite. We're still waiting for a third computer to confirm that the number is composite.
For the remaining 5 potential Woodall primes, the computers for 4 of them already have a Woodall "prime" that's been proven to be wrong, and thus the computer has a history of producing faulty results. While we don't have proof yet that these particular results are wrong, we know that the computers are faulty, so it's extremely likely that these prime results are also errors.
The sole remaining potential prime also was produced by a suspect computer. It doesn't have any other proven false primes, but it does have other invalid tasks. That's essentially the same thing as a false prime -- it's a composite with the wrong residue.
All 18 results claiming to have found a prime are either proven to be wrong, partially proven to be wrong, or were produced on a computer with a track record of bad results.
And so it goes with Woodall tasks. Primes of that size are very difficult to find.
____________
My lucky number is 75898524288+1 | |
|
|
Any chance to have the current overall standings page updated?
I know it's an external page, but last year available is 2016.
____________
676754^262144+1 is prime | |
|
Tyler Project administrator Volunteer tester Send message
Joined: 4 Dec 12 Posts: 1078 ID: 183129 Credit: 1,376,122,338 RAC: 6,351
                         
|
Any chance to have the current overall standings page updated?
I know it's an external page, but last year available is 2016.
Eudy has made some overall stats pages; they're updated through the Oktoberfest challenge.
https://whereismy.coffee/primegrid/
2018 Individual stats overall: https://whereismy.coffee/primegrid/2018_Challenge_Series_Current_Standings_Individuals.html
2018 team stats overall: https://whereismy.coffee/primegrid/2018_Challenge_Series_Current_Standings_Teams.html
____________
275*2^3585539+1 is prime!!! (1079358 digits)
Proud member of Aggie the Pew
| |
|
|
1998golfer wrote: Eudy has made some overall stats pages, it's updated to the Oktoberfest challenge..
https://whereismy.coffee/primegrid/
Oktoberfest isn't quite over yet, though:
http://www.primegrid.com/forum_thread.php?id=8194&nowrap=true#120708 | |
|
|
1998golfer wrote: ... Eudy has made some overall stats pages, it's updated to the Oktoberfest challenge..
https://whereismy.coffee/primegrid/
...
And 1998golfer is kindly hosting those files.
xii5ku wrote: Oktoberfest isn't quite over yet, though:
http://www.primegrid.com/forum_thread.php?id=8194&nowrap=true#120708
That's right. The Oktoberfest's clean-up is still in progress.
So these results are provisional.
I intend to update the 2018 challenge results, today, after the Halloween challenge ends.
But those results will also be provisional.
____________
"Accidit in puncto, quod non contingit in anno."
Something that does not occur in a year may, perchance, happen in a moment. | |
|
|
Eudy has made some overall stats pages, it's updated to the Oktoberfest challenge..
https://whereismy.coffee/primegrid/
2018 Individual stats overall: https://whereismy.coffee/primegrid/2018_Challenge_Series_Current_Standings_Individuals.html
2018 team stats overall: https://whereismy.coffee/primegrid/2018_Challenge_Series_Current_Standings_Teams.html
Thanks!
Those are very nice pages. They should be linked on the challenge series page.
____________
676754^262144+1 is prime | |
|
Monkeydee Volunteer tester
 Send message
Joined: 8 Dec 13 Posts: 532 ID: 284516 Credit: 1,437,442,459 RAC: 1,901,851
|
xii5ku wrote: Oktoberfest isn't quite over yet, though:
http://www.primegrid.com/forum_thread.php?id=8194&nowrap=true#120708
That's right. The Oktoberfest cleanup is still in progress.
So these results are provisional.
I intend to update the 2018 challenge results, today, after the Halloween challenge ends.
But those results will also be provisional.
Perhaps making a note of that on the stats page would be good, whether that "note" is an actual note, a different colour scheme, or some other demarcation to differentiate official results from provisional ones.
Great work on the site by the way!
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*4 + 8*8 + 11*3 + 12*1 = 157
| |
|
|
And it's over. Well done everyone. May the clean up begin!
____________
676754^262144+1 is prime | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
|
Final statistics:
Challenge: Halloween
App: 3 (WOO-LLR)
(As of 2018-11-01 00:04:53 UTC)
18810 tasks have been sent out. [CPU/GPU/anonymous_platform: 18810 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
3097 (16%) came back with some kind of an error. [3097 (16%) / 0 (0%) / 0 (0%)]
10567 (56%) have returned a successful result. [10567 (56%) / 0 (0%) / 0 (0%)]
5138 (27%) are still in progress. [5138 (27%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
3380 (32%) are pending validation. [3380 (32%) / 0 (0%) / 0 (0%)]
7068 (67%) have been successfully validated. [7068 (67%) / 0 (0%) / 0 (0%)]
44 (0%) were invalid. [44 (0%) / 0 (0%) / 0 (0%)]
75 (1%) are inconclusive. [75 (1%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=17793704. The leading edge was at n=17528522 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 1.51% as much as it had prior to the challenge!
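The "advanced 1.51%" figure is just the relative growth of the leading edge over the challenge week. A quick sanity check, using the two n values quoted above:

```python
# Sanity check of the leading-edge advance quoted in the final statistics.
start = 17_528_522  # leading edge n at the start of the challenge
end = 17_793_704    # leading edge n at the end of the challenge

advance = (end - start) / start  # relative advance during the challenge
print(f"{advance:.2%}")  # -> 1.51%
```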
____________
My lucky number is 75898^524288+1 | |
|
|
Great race to the finish. I had one task that completed 8 minutes after the deadline, but it wouldn't have caught the well-deserved winner, zunewantan. Congratulations and well done. | |
|
|
Congratulations to all challenge crunchers !
Keith wrote: ... Perhaps making a note of that on the stats page would be good ...
Good suggestion. Thanks.
Now there is an "(ongoing cleanup)" note in the headers of challenges with provisional results.
(Make sure to refresh your browser to see the current files.)
2018 Individual stats overall: https://whereismy.coffee/primegrid/2018_Challenge_Series_Current_Standings_Individuals.html
2018 team stats overall: https://whereismy.coffee/primegrid/2018_Challenge_Series_Current_Standings_Teams.html
All challenge series: https://whereismy.coffee/primegrid/
____________
"Accidit in puncto, quod non contingit in anno."
Something that does not occur in a year may, perchance, happen in a moment. | |
|
RogerVolunteer developer Volunteer tester
 Send message
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,668,824 RAC: 0
|
Thanks everyone for a great challenge. Aggie The Pew was challenging SETI.Germany for second place right up to the end.
Miss the ghost badges already. Hopefully we see them again next Halloween.
Eudy's stats are a big improvement over my spreadsheet efforts.
Now it's time to move focus over to some GPU apps. Make sure you maintain your machines by checking the temps and blowing the dust out once in a while.
Time to get my Telescope out of storage. | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
|
And now the cleanup begins. I'm guessing this will take 6 to 12 weeks to complete.
Cleanup Status:
Oct 31: Halloween: 3444 tasks outstanding; 3433 affecting individual (266) scoring positions; 3083 affecting team (67) scoring positions.
Nov 1: Halloween: 3001 tasks outstanding; 2992 affecting individual (260) scoring positions; 2129 affecting team (62) scoring positions.
____________
My lucky number is 75898^524288+1 | |
|
|
Eudy Silva and 1998golfer, thank you for the statistics!
vaughan wrote: Great race to the finish, I had one task that completed 8 minutes after the deadline but wouldn't have caught the well deserved winner - zunewantan. Congratulations and well done.
You two took your sweet time though to make it past my humble 8...10 home computers. :-P | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
|
Cleanup Status:
Oct 31: Halloween: 3444 tasks outstanding; 3433 affecting individual (266) scoring positions; 3083 affecting team (67) scoring positions.
Nov 1: Halloween: 3001 tasks outstanding; 2992 affecting individual (260) scoring positions; 2129 affecting team (62) scoring positions.
Nov 2: Halloween: 2419 tasks outstanding; 2413 affecting individual (250) scoring positions; 1277 affecting team (55) scoring positions.
____________
My lucky number is 75898^524288+1 | |
|
|
I was watching this task and thought it was going to be 5 minutes late.
I kept watching, and it was getting closer and closer.
Well, it did miss the deadline by 30 seconds. Oh so close.
Task 584955719: sent 31 Oct 2018, 13:36:51 UTC; returned 1 Nov 2018, 0:00:30 UTC | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
                               
|
Cleanup Status:
Oct 31: Halloween: 3444 tasks outstanding; 3433 affecting individual (266) scoring positions; 3083 affecting team (67) scoring positions.
Nov 1: Halloween: 3001 tasks outstanding; 2992 affecting individual (260) scoring positions; 2129 affecting team (62) scoring positions.
Nov 2: Halloween: 2419 tasks outstanding; 2413 affecting individual (250) scoring positions; 1277 affecting team (55) scoring positions.
Nov 3: Halloween: 2116 tasks outstanding; 2109 affecting individual (240) scoring positions; 1030 affecting team (50) scoring positions.
Nov 4: Halloween: 1841 tasks outstanding; 1835 affecting individual (233) scoring positions; 689 affecting team (48) scoring positions.
Nov 5: Halloween: 1657 tasks outstanding; 1612 affecting individual (225) scoring positions; 615 affecting team (46) scoring positions.
Nov 6: Halloween: 1542 tasks outstanding; 1390 affecting individual (216) scoring positions; 536 affecting team (42) scoring positions.
Nov 7: Halloween: 1395 tasks outstanding; 1262 affecting individual (211) scoring positions; 482 affecting team (40) scoring positions.
Nov 8: Halloween: 1289 tasks outstanding; 1165 affecting individual (204) scoring positions; 362 affecting team (36) scoring positions.
Nov 9: Halloween: 1195 tasks outstanding; 1070 affecting individual (199) scoring positions; 337 affecting team (33) scoring positions.
Nov 10: Halloween: 1105 tasks outstanding; 989 affecting individual (193) scoring positions; 318 affecting team (33) scoring positions.
Nov 11: Halloween: 995 tasks outstanding; 882 affecting individual (185) scoring positions; 276 affecting team (32) scoring positions.
Nov 12: Halloween: 816 tasks outstanding; 728 affecting individual (172) scoring positions; 212 affecting team (25) scoring positions.
Nov 13: Halloween: 762 tasks outstanding; 652 affecting individual (162) scoring positions; 194 affecting team (23) scoring positions.
Nov 14: Halloween: 672 tasks outstanding; 571 affecting individual (150) scoring positions; 169 affecting team (22) scoring positions.
Nov 15: Halloween: 603 tasks outstanding; 471 affecting individual (145) scoring positions; 154 affecting team (22) scoring positions.
Nov 16: Halloween: 531 tasks outstanding; 409 affecting individual (136) scoring positions; 133 affecting team (21) scoring positions.
Nov 17: Halloween: 473 tasks outstanding; 364 affecting individual (126) scoring positions; 116 affecting team (19) scoring positions.
Nov 18: Halloween: 414 tasks outstanding; 320 affecting individual (118) scoring positions; 94 affecting team (16) scoring positions.
Nov 19: Halloween: 384 tasks outstanding; 292 affecting individual (113) scoring positions; 88 affecting team (16) scoring positions.
Nov 20: Halloween: 348 tasks outstanding; 262 affecting individual (106) scoring positions; 83 affecting team (15) scoring positions.
Nov 21: Halloween: 312 tasks outstanding; 234 affecting individual (100) scoring positions; 74 affecting team (14) scoring positions.
Nov 22: Halloween: 278 tasks outstanding; 210 affecting individual (91) scoring positions; 32 affecting team (13) scoring positions.
Nov 23: Halloween: 257 tasks outstanding; 189 affecting individual (88) scoring positions; 30 affecting team (12) scoring positions.
Nov 24: Halloween: 238 tasks outstanding; 174 affecting individual (83) scoring positions; 26 affecting team (11) scoring positions.
Nov 25: Halloween: 216 tasks outstanding; 156 affecting individual (77) scoring positions; 23 affecting team (11) scoring positions.
Nov 26: Halloween: 203 tasks outstanding; 128 affecting individual (71) scoring positions; 23 affecting team (11) scoring positions.
Nov 27: Halloween: 178 tasks outstanding; 108 affecting individual (66) scoring positions; 21 affecting team (11) scoring positions.
Nov 28: Halloween: 163 tasks outstanding; 95 affecting individual (60) scoring positions; 20 affecting team (10) scoring positions.
Nov 29: Halloween: 143 tasks outstanding; 79 affecting individual (53) scoring positions; 18 affecting team (9) scoring positions.
Nov 30: Halloween: 128 tasks outstanding; 66 affecting individual (49) scoring positions; 14 affecting team (9) scoring positions.
____________
My lucky number is 75898^524288+1 | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 13954 ID: 53948 Credit: 392,586,193 RAC: 178,879
|
Cleanup Status:
Oct 31: Halloween: 3444 tasks outstanding; 3433 affecting individual (266) scoring positions; 3083 affecting team (67) scoring positions.
Nov 30: Halloween: 128 tasks outstanding; 66 affecting individual (49) scoring positions; 14 affecting team (9) scoring positions.
Dec 1: Halloween: 119 tasks outstanding; 61 affecting individual (46) scoring positions; 12 affecting team (8) scoring positions.
Dec 2: Halloween: 102 tasks outstanding; 49 affecting individual (39) scoring positions; 10 affecting team (8) scoring positions.
Dec 3: Halloween: 98 tasks outstanding; 48 affecting individual (38) scoring positions; 9 affecting team (7) scoring positions.
Dec 4: Halloween: 86 tasks outstanding; 41 affecting individual (31) scoring positions; 8 affecting team (6) scoring positions.
Dec 5: Halloween: 79 tasks outstanding; 37 affecting individual (30) scoring positions; 7 affecting team (5) scoring positions.
Dec 6: Halloween: 68 tasks outstanding; 30 affecting individual (26) scoring positions; 7 affecting team (5) scoring positions.
Dec 7: Halloween: 58 tasks outstanding; 23 affecting individual (21) scoring positions; 6 affecting team (4) scoring positions.
Dec 8: Halloween: 45 tasks outstanding; 20 affecting individual (19) scoring positions; 6 affecting team (4) scoring positions.
Dec 9: Halloween: 42 tasks outstanding; 18 affecting individual (17) scoring positions; 6 affecting team (4) scoring positions.
Dec 10: Halloween: 41 tasks outstanding; 17 affecting individual (16) scoring positions; 6 affecting team (4) scoring positions.
Dec 11: Halloween: 39 tasks outstanding; 16 affecting individual (15) scoring positions; 6 affecting team (4) scoring positions.
Dec 12: Halloween: 35 tasks outstanding; 15 affecting individual (14) scoring positions; 6 affecting team (4) scoring positions.
Dec 13: Halloween: 35 tasks outstanding; 15 affecting individual (14) scoring positions; 6 affecting team (4) scoring positions.
Dec 14: Halloween: 32 tasks outstanding; 15 affecting individual (14) scoring positions; 6 affecting team (4) scoring positions.
Dec 15: Halloween: 25 tasks outstanding; 10 affecting individual (9) scoring positions; 4 affecting team (2) scoring positions.
Dec 16: Halloween: 24 tasks outstanding; 9 affecting individual (8) scoring positions; 4 affecting team (2) scoring positions.
Dec 17: Halloween: 19 tasks outstanding; 3 affecting individual (3) scoring positions; 3 affecting team (2) scoring positions.
____________
My lucky number is 75898^524288+1 | |
|
Michael Goetz Volunteer moderator Project administrator
 Send message
                               
|
Note for December 18th: Over the last few days I've manually run about a half dozen tasks to clear out some of the remaining tasks where the wingmen were either going to miss their deadlines or take a very long time to complete. Three tasks now remain, all of which are making steady progress and should complete on their own within the next week or so.
Cleanup Status:
Oct 31: Halloween: 3444 tasks outstanding; 3433 affecting individual (266) scoring positions; 3083 affecting team (67) scoring positions.
Nov 30: Halloween: 128 tasks outstanding; 66 affecting individual (49) scoring positions; 14 affecting team (9) scoring positions.
Dec 1: Halloween: 119 tasks outstanding; 61 affecting individual (46) scoring positions; 12 affecting team (8) scoring positions.
Dec 2: Halloween: 102 tasks outstanding; 49 affecting individual (39) scoring positions; 10 affecting team (8) scoring positions.
Dec 3: Halloween: 98 tasks outstanding; 48 affecting individual (38) scoring positions; 9 affecting team (7) scoring positions.
Dec 4: Halloween: 86 tasks outstanding; 41 affecting individual (31) scoring positions; 8 affecting team (6) scoring positions.
Dec 5: Halloween: 79 tasks outstanding; 37 affecting individual (30) scoring positions; 7 affecting team (5) scoring positions.
Dec 6: Halloween: 68 tasks outstanding; 30 affecting individual (26) scoring positions; 7 affecting team (5) scoring positions.
Dec 7: Halloween: 58 tasks outstanding; 23 affecting individual (21) scoring positions; 6 affecting team (4) scoring positions.
Dec 8: Halloween: 45 tasks outstanding; 20 affecting individual (19) scoring positions; 6 affecting team (4) scoring positions.
Dec 9: Halloween: 42 tasks outstanding; 18 affecting individual (17) scoring positions; 6 affecting team (4) scoring positions.
Dec 10: Halloween: 41 tasks outstanding; 17 affecting individual (16) scoring positions; 6 affecting team (4) scoring positions.
Dec 11: Halloween: 39 tasks outstanding; 16 affecting individual (15) scoring positions; 6 affecting team (4) scoring positions.
Dec 12: Halloween: 35 tasks outstanding; 15 affecting individual (14) scoring positions; 6 affecting team (4) scoring positions.
Dec 13: Halloween: 35 tasks outstanding; 15 affecting individual (14) scoring positions; 6 affecting team (4) scoring positions.
Dec 14: Halloween: 32 tasks outstanding; 15 affecting individual (14) scoring positions; 6 affecting team (4) scoring positions.
Dec 15: Halloween: 25 tasks outstanding; 10 affecting individual (9) scoring positions; 4 affecting team (2) scoring positions.
Dec 16: Halloween: 24 tasks outstanding; 9 affecting individual (8) scoring positions; 4 affecting team (2) scoring positions.
Dec 17: Halloween: 19 tasks outstanding; 3 affecting individual (3) scoring positions; 3 affecting team (2) scoring positions.
Dec 18: Halloween: 18 tasks outstanding; 3 affecting individual (3) scoring positions; 1 affecting team (1) scoring positions.
Dec 19: Halloween: 17 tasks outstanding; 3 affecting individual (3) scoring positions; 1 affecting team (1) scoring positions.
Dec 20: Halloween: 16 tasks outstanding; 2 affecting individual (2) scoring positions; 0 affecting team (0) scoring positions.
Dec 21: Halloween: 16 tasks outstanding; 2 affecting individual (2) scoring positions; 0 affecting team (0) scoring positions.
Dec 22: Halloween: 16 tasks outstanding; 2 affecting individual (2) scoring positions; 0 affecting team (0) scoring positions.
____________
My lucky number is 75898^524288+1 | |
|
RogerVolunteer developer Volunteer tester
 Send message
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,668,824 RAC: 0
|
The results are final!
Top 3 individuals:
1: zunewantan
2: vaughan
3: xii5ku
Top 3 teams:
1: Czech National Team
2: SETI.Germany
3: Aggie The Pew
Congratulations to the winners, and well done to everyone who participated.
Have a Happy New Year and see you at the Conjunction of Venus & Jupiter Challenge in January!
____________
| |
|
|
Thanks a lot!
I'll have even more fun with the challenges next year fielding new(er) hardware.
____________
Greetings, Jens
147433824^131072+1 | |
|
Message boards :
Number crunching :
Halloween Challenge |