Message boards : Number crunching : Solar Eclipse Challenge
Roger (Volunteer developer, Volunteer tester)
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,668,824 RAC: 0
Welcome to the Solar Eclipse Challenge
The fifth Challenge of the 2018 Challenge series is a 3-day challenge to celebrate the Solar Eclipse on August 11th. The challenge is being offered on the Proth Prime Search (LLR) Project.
A solar eclipse occurs when the disk of the moon appears to cross in front of the disk of the sun. A total solar eclipse — like the one that took place on Aug. 21, 2017 — occurs when the disk of the moon blocks 100 percent of the solar disk. A partial eclipse occurs when the moon covers only part of the sun.
The Aug. 11, 2018 partial solar eclipse will touch many countries in the Northern Hemisphere. This animation shows the path of the moon's shadow.
The eclipse will start out over the North Atlantic Ocean and Greenland, moving north and east so that the shadow simultaneously moves toward Iceland, northern Europe and the northern polar regions. Continuing its path over the top of the planet, the shadow will be wide enough to cover most of northern Russia from east to west. It will then dip down into Mongolia, China and surrounding areas.
It will officially begin when the moon first appears to make contact with the sun's disk at 4:02 a.m. EDT (0802 UTC). Maximum eclipse will happen at 5:46 a.m. EDT (0946 UTC), when the eclipse is at magnitude 0.7361.
To participate in the Challenge, please select only the Proth Prime Search LLR (PPS) project in your PrimeGrid preferences section. The challenge will begin 10th August 2018 18:00 UTC and end 13th August 2018 18:00 UTC. Note that PPSE and PPS Mega do not count towards this challenge.
Application builds are available for Linux 32 and 64 bit, Windows 32 and 64 bit and MacIntel. Intel CPUs with AVX capabilities (Sandy Bridge, Ivy Bridge, Haswell, Broadwell, Skylake, Kaby Lake, Coffee Lake) will have a very large advantage, and Intel CPUs with FMA3 (Haswell, Broadwell, Skylake, Kaby Lake, Coffee Lake) will be the fastest.
ATTENTION: The primality program LLR is CPU intensive, so it is vital to have a stable system with good cooling. It does not tolerate "even the slightest of errors." Please see this post for more details on how you can "stress test" your computer. Tasks on one CPU core will take ~30 minutes on fast/newer computers and 2+ hours on slower/older computers. If your computer is highly overclocked, please consider "stress testing" it. Sieving is an excellent alternative for computers that are not able to LLR. :)
Highly overclocked Haswell, Broadwell, Skylake, Kaby Lake or Coffee Lake (i.e., Intel Core i7, i5, and i3 -4xxx or better) computers running the application will see the fastest times. Note that PPS is running the latest FMA3 version of LLR, which takes full advantage of the features of these newer CPUs. It's faster than the previous LLR app, but it draws more power and produces more heat. If you have a Haswell, Broadwell, Skylake, Kaby Lake or Coffee Lake CPU, especially if it's overclocked or has overclocked memory, and haven't run the new FMA3 LLR before, we strongly suggest running it before the challenge while you are monitoring the temperatures.
Please, please, please make sure your machines are up to the task.
Time zone converter:
The World Clock - Time Zone Converter
NOTE: The countdown clock on the front page uses the host computer's time. If your computer's clock is off, the countdown clock will be off as well. For precise timing, use the UTC time in the data section at the very top, above the countdown clock.
Scoring Information
Scores will be kept for individuals and teams. Only tasks issued AFTER 10th August 2018 18:00 UTC and received BEFORE 13th August 2018 18:00 UTC will be considered for credit. We will be using the same scoring method as we currently use for BOINC credits. A quorum of 2 is NOT needed to award Challenge score - i.e. no double checker. Therefore, each returned result will earn a Challenge score. Please note that if the result is eventually declared invalid, the score will be removed.
At the Conclusion of the Challenge
We kindly ask users "moving on" to ABORT their tasks instead of DETACHING, RESETTING, or PAUSING.
ABORTING tasks allows them to be recycled immediately; thus a much faster "clean up" to the end of an LLR Challenge. DETACHING, RESETTING, and PAUSING tasks causes them to remain in limbo until they EXPIRE. Therefore, we must wait until tasks expire to send them out to be completed.
Please consider either completing what's in the queue or ABORTING them. Thank you. :)
About the Proth Prime Search
The Proth Prime Search is done in collaboration with the Proth Search project. This search looks for primes in the form k*2^n+1. With the condition 2^n > k, these are often called Proth primes. This project also has the added bonus of possibly finding factors of "classical" Fermat numbers or Generalized Fermat numbers. As this requires PrimeFormGW (PFGW) (a primality-testing program), once PrimeGrid finds a prime, it is then tested on PrimeGrid's servers for divisibility.
Proth Search only searches for k<1200. PrimeGrid created an extension to that which includes all candidates 1200<k<10000 for n<5M. It is this extension which we call PPSE.
Initially, PrimeGrid's PPS project's goal was to double check all previous work up to n=500K for odd k<1200 and to fill in any gaps that were missed. We have accomplished that now and have increased it to n=3M. PG's LLRNet searched up to n=200,000 and found several missed primes in previously searched ranges. Although primes that small did not make it into the Top 5000 Primes database, the work was still important as it may have led to new factors for "classical" Fermat numbers or Generalized Fermat numbers. While there are many GFN factors, currently there are only 297 "classical" Fermat number factors known. Current primes found in PPS definitely make it into the Top 5000 Primes database.
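For illustration, here is a minimal sketch of the idea behind Proth's theorem, on which the +1 tests rest: if some witness a satisfies a^((p-1)/2) = -1 (mod p), then p = k*2^n+1 (k odd, 2^n > k) is prime. This toy Python version only shows the principle; real testing uses LLR/PFGW with FFT-based arithmetic, and the witness set below is an arbitrary choice of small primes.

def proth_test(k, n, witnesses=(3, 5, 7, 11, 13)):
    """Proth's theorem for p = k*2^n + 1 (k odd, 2^n > k)."""
    p = k * 2**n + 1
    for a in witnesses:
        r = pow(a, (p - 1) // 2, p)
        if r == p - 1:
            return True    # a^((p-1)/2) = -1 (mod p): p is proved prime
        if r != 1:
            return False   # Fermat-style failure: p is proved composite
    return None            # inconclusive: every witness was a quadratic residue

print(proth_test(3, 2))    # 3*2^2 + 1 = 13 is prime -> True
print(proth_test(7, 5))    # 7*2^5 + 1 = 225 = 15^2 -> False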
For more information about "Proth" primes, please visit these links:
About Proth Search
The Proth Search project was established in 1998 by Ray Ballinger and Wilfrid Keller to coordinate a distributed effort to find Proth primes (primes of the form k*2^n+1) for k < 300. Ray was interested in finding primes while Wilfrid was interested in finding divisors of Fermat numbers. Since that time it has expanded to include k < 1200. Mark Rodenkirch (aka rogue) has been helping Ray keep the website up to date for the past few years.
Early in 2008, PrimeGrid and Proth Search teamed up to provide a software managed distributed effort to the search. Although it might appear that PrimeGrid is duplicating some of the Proth Search effort by re-doing some ranges, few ranges on Proth Search were ever double-checked. This has resulted in PrimeGrid finding primes that were missed by previous searchers. By the end of 2008, all new primes found by PrimeGrid were eligible for inclusion in Chris Caldwell's Prime Pages Top 5000. Sometime in 2009, over 90% of the tests handed out by PrimeGrid were numbers that had never been tested.
PrimeGrid intends to continue the search indefinitely for Proth primes.
What is LLR?
The Lucas-Lehmer-Riesel (LLR) test is a primality test for numbers of the form N = k*2^n − 1, with 2^n > k. LLR is also the name of a program, developed by Jean Penné, that can run these tests. It includes the Proth test to perform +1 tests and PRP to test non-base-2 numbers. See also:
(Edouard Lucas: 1842-1891, Derrick H. Lehmer: 1905-1991, Hans Riesel: 1929-2014).
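To make the test concrete, here is a self-contained sketch of the textbook form of the Riesel criterion that LLR implements: pick P with Jacobi(P-2, N) = 1 and Jacobi(P+2, N) = -1, seed u_0 = V_k(P) mod N (a Lucas sequence value), then square-and-subtract n-2 times. This is illustration only; Jean Penné's program does the same mathematics with gwnum's FFT multiplication and is astronomically faster.

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def lucas_v(k, P, N):
    """V_k(P, 1) mod N via a binary ladder on the pair (V_m, V_{m+1})."""
    v0, v1 = 2 % N, P % N
    for bit in bin(k)[2:]:
        if bit == '1':
            v0, v1 = (v0 * v1 - P) % N, (v1 * v1 - 2) % N
        else:
            v0, v1 = (v0 * v0 - 2) % N, (v0 * v1 - P) % N
    return v0

def llr_is_prime(k, n):
    """Riesel criterion: N = k*2^n - 1 (k odd, 2^n > k) is prime
    iff u_{n-2} == 0 (mod N), with u_0 = V_k(P), u_{i+1} = u_i^2 - 2."""
    N = k * 2**n - 1
    P = next(p for p in range(3, 1000)
             if jacobi(p - 2, N) == 1 and jacobi(p + 2, N) == -1)
    u = lucas_v(k, P, N)
    for _ in range(n - 2):
        u = (u * u - 2) % N
    return u == 0

print(llr_is_prime(1, 7))   # 2^7 - 1 = 127: the classic Lucas-Lehmer case -> True
print(llr_is_prime(3, 11))  # 3*2^11 - 1 = 6143 is prime -> True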
Solar eclipses can occur at new moon only. Note that we have eclipses at consecutive new moons this time since 2018-Jul-13 was an eclipse as well (Wikipedia). /JeppeSN

JeppeSN wrote: Solar eclipses can occur at new moon only. Note that we have eclipses at consecutive new moons this time since 2018-Jul-13 was an eclipse as well (Wikipedia). /JeppeSN
Also, do not forget the great lunar eclipse coming in the middle between these two events, i.e. tomorrow. /JeppeSN
robish Volunteer moderator Volunteer tester
Joined: 7 Jan 12 Posts: 2197 ID: 126266 Credit: 7,321,624,790 RAC: 3,177,013
JeppeSN wrote: Solar eclipses can occur at new moon only. Note that we have eclipses at consecutive new moons this time since 2018-Jul-13 was an eclipse as well (Wikipedia). /JeppeSN
Also, do not forget the great lunar eclipse coming in the middle between these two events, i.e. tomorrow. /JeppeSN
https://www.timeanddate.com/eclipse/lunar/2018-july-27
Check if you can see it in your part of the world :)
____________
My lucky numbers 1059094^1048576+1 and 224584605939537911+81292139*23#*n for n=0..26
Monkeydee Volunteer tester
Joined: 8 Dec 13 Posts: 532 ID: 284516 Credit: 1,445,403,828 RAC: 2,229,578
So a couple days ago I decided to switch to PPS LLR in preparation for the challenge. The throughput of PPS was rather uninspiring compared to the other projects I had been running. So I decided to look at how I was running PPS and did some quick testing on both my i5-4670 (DDR3-1600) and my i7-4790 (DDR3-1600). I started out with one task on four cores and was getting a lot of 1st's, but not a lot of total units done. So I tried two tasks on two cores each and found that while each unit was slower, overall throughput was higher. At that point I knew I had to try the traditional one task per core and see what happens. Then it only seemed prudent to run all three conditions on both CPUs to see if there was a difference.
These tests were all run using live units and the times are visually taken from the BOINC manager. So please take these results with more than a few grains of salt. haha
i7-4790 DDR3-1600 PPS LLR runtimes
Per Unit:
4c 1t = 10 minutes
2c 2t = 17 minutes
1c 4t = 37 minutes
Per two units:
4c 1t = 20 minutes
2c 2t = 17 minutes
1c 4t = 37 minutes
Per four units:
4c 1t = 40 minutes
2c 2t = 34 minutes
1c 4t = 37 minutes
i5-4670 DDR3-1600 PPS LLR runtimes
Per unit:
4c 1t = 10 minutes
2c 2t = 17 minutes
1c 4t = 40 minutes
Per two units:
4c 1t = 20 minutes
2c 2t = 17 minutes
1c 4t = 40 minutes
Per four units:
4c 1t = 40 minutes
2c 2t = 34 minutes
1c 4t = 40 minutes
PPS runs remarkably similarly on the two CPUs when running one or two tasks at a time. Bump up to four tasks at a time and the i5 starts to fall off.
If you have a Haswell CPU, it looks like the best option for speed is one task on four cores, and if you want the best throughput, two tasks on two cores each.
Running four tasks at once was all around bad on both. It had the slowest speed per unit, lowest or second lowest throughput, and had the highest CPU temperature.
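A quick way to sanity-check comparisons like these is to convert the per-unit times into units per hour (tasks in flight x 60 / minutes per task); a minimal Python sketch using the i7-4790 numbers above:

configs = {
    "4c 1t (1 task  x 4 cores)": (1, 10),   # (tasks in flight, minutes per task)
    "2c 2t (2 tasks x 2 cores)": (2, 17),
    "1c 4t (4 tasks x 1 core)":  (4, 37),
}
for name, (in_flight, minutes) in configs.items():
    print(f"{name}: {in_flight * 60 / minutes:.1f} units/hour")
# -> 6.0, 7.1 and 6.5 units/hour: two tasks on two cores wins on throughput,
#    matching the "per four units" comparison above (40 vs 34 vs 37 minutes).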
Different CPU families and different RAM speeds will drastically alter these results.
May whoever reads this find it informative or inspiring.
Happy prime finding!
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*4 + 8*8 + 11*3 + 12*1 = 157
mackerel Volunteer tester
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 1,834
Interesting results. I should probably test same.
With the caution I haven't had the 1st coffee of the day yet, for the i7 I would have expected 4 tasks (one per core) to be best throughput as the work should fit on CPU cache comfortably, thus ram speed shouldn't really matter here. I note HT is on, and unless you've forced affinity, Windows scheduler can result in about 10% performance drop when running one task per core. If you turn off HT, or manually set affinity, you might see a small boost there. When multi-thread is enabled, it works a bit differently and you don't have to worry about it then.
On the i5, we're borderline running out of CPU cache, so there may be a performance impact running one task per core, and thus 2 tasks of two threads each might offset that.
Rafael Volunteer tester
Joined: 22 Oct 14 Posts: 909 ID: 370496 Credit: 531,032,365 RAC: 385,939
mackerel wrote: Interesting results. I should probably test same.
With the caution I haven't had the 1st coffee of the day yet, for the i7 I would have expected 4 tasks (one per core) to be best throughput as the work should fit on CPU cache comfortably, thus ram speed shouldn't really matter here.
Remember when Broadwell (as in, i5 5675c) was a thing and LLR Multithreading wasn't? Man, fun times, fun times indeed...
Dunno, it just came to my mind and I felt like sharing it.
Monkeydee Volunteer tester
Joined: 8 Dec 13 Posts: 532 ID: 284516 Credit: 1,445,403,828 RAC: 2,229,578
mackerel wrote: Interesting results. I should probably test same.
With the caution I haven't had the 1st coffee of the day yet, for the i7 I would have expected 4 tasks (one per core) to be best throughput as the work should fit on CPU cache comfortably, thus ram speed shouldn't really matter here. I note HT is on, and unless you've forced affinity, Windows scheduler can result in about 10% performance drop when running one task per core. If you turn off HT, or manually set affinity, you might see a small boost there. When multi-thread is enabled, it works a bit differently and you don't have to worry about it then.
On the i5, we're borderline running out of CPU cache, so there may be a performance impact one task per core, and thus 2 tasks of two threads each might offset that.
I've left HT on for the i7 since the i7 is also my "daily driver" and the extra threads can keep the GPU fed (currently running AP) with minimal impact on the CPU tasks. Yes, AP uses very little CPU time anyway, but still it is best to minimize any impact from GPU crunching.
It should also be noted that the CPU temperatures on both CPUs are lower with 2 tasks as opposed to 4 tasks at a time. And this time of year that is very important. I did pay attention to clock speeds and did not see any evidence of thermal throttling.
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*4 + 8*8 + 11*3 + 12*1 = 157
mackerel Volunteer tester
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 1,834
You can still try setting manual affinity, but with a GPU thrown in the mix I don't know how that would work. How much CPU time does AP take? When running multi-thread mode, it appears to set affinity in some way, so you don't need to do it manually.
Higher temps are an indicator of doing more work, although not necessarily useful work...
Dave
Joined: 13 Feb 12 Posts: 3171 ID: 130544 Credit: 2,234,035,580 RAC: 550,711
Interesting re temperatures. I was just going to leave mine on 1 task but will now benchmark 2 tasks to see which is cooler.
Monkeydee Volunteer tester
Joined: 8 Dec 13 Posts: 532 ID: 284516 Credit: 1,445,403,828 RAC: 2,229,578
Dave wrote: Interesting re temperatures. I was just going to leave mine on 1 task but will now benchmark 2 tasks to see which is cooler.
1 task was the coolest on the i7 around 70C
2 tasks was around 75C
4 tasks was around 80C
The temperatures will of course vary with room temperature, air flow, CPU cooler quality, etc. That's why I didn't really include temperature as a key part of the test unless throttling was taking place.
Looking at your systems though, Dave, I think 2 tasks at a time on your i7-2600K would probably be better than 1 task for overall performance. But test it out and see for yourself for sure.
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*4 + 8*8 + 11*3 + 12*1 = 157
Dave
Joined: 13 Feb 12 Posts: 3171 ID: 130544 Credit: 2,234,035,580 RAC: 550,711
Ty Keith I'll give it a bash.
Dave
Joined: 13 Feb 12 Posts: 3171 ID: 130544 Credit: 2,234,035,580 RAC: 550,711
Ty Keith I'll give it a bash.
Literally just 1C difference.
Monkeydee Volunteer tester
Joined: 8 Dec 13 Posts: 532 ID: 284516 Credit: 1,445,403,828 RAC: 2,229,578
Dave wrote: Ty Keith I'll give it a bash.
Literally just 1C difference.
And that's why you test.
If 1 task 4 cores and 2 tasks 2 cores each gives the same throughput then go with the fastest run time so you have a better chance of being 1st.
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*4 + 8*8 + 11*3 + 12*1 = 157
mackerel Volunteer tester
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 1,834
Just done some testing on a 6700k.
Single -t4: 695s
Dual -t2: 940.5s
Quad -t1: 1553.9s
As suspected, running 4x single thread tasks gives maximum throughput. Using -t2 speeds each task up, but you lose efficiency in the process: it delivers only 82% of the throughput of single-thread tasks. Running tasks with 4 threads, just no. That's only 55% of the throughput.
Basically this doesn't seem to have changed at all from the last time I looked at it. I'd caution that the difference between my testing and Keith's is that I'm NOT running any GPU tasks.
Before anyone says I used Skylake and Keith was on Haswell, I don't really expect that to be significant. The i5 vs i7 difference might come into it a little though, as the smaller cache of the i5 could start to impact things.
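For anyone re-deriving those percentages: throughput is tasks in flight divided by seconds per task, normalized to the best configuration. A minimal sketch with the 6700k times above (the quoted 82%/55% are the same ratios, just truncated):

times = {"4x -t1": (4, 1553.9), "2x -t2": (2, 940.5), "1x -t4": (1, 695.0)}
throughput = {name: n / s for name, (n, s) in times.items()}  # tasks per second
best = max(throughput.values())
for name in times:
    print(f"{name}: {throughput[name] / best:.0%} of best throughput")
# -> 4x -t1: 100%, 2x -t2: 83%, 1x -t4: 56%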
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
mackerel wrote: Before anyone says I used Skylake and Keith was on Haswell, I don't really expect that to be significant. The i5 vs i7 difference might come into it a little though, as the smaller cache of the i5 could start to impact things.
Don't you have DDR4 memory in the Skylake? If so, that may have a significant effect. Faster memory would give an advantage to running 4 tasks of 1 thread each. The Haswells had DDR3-1600 memory.
____________
My lucky number is 75898^524288+1
Monkeydee Volunteer tester
Joined: 8 Dec 13 Posts: 532 ID: 284516 Credit: 1,445,403,828 RAC: 2,229,578
mackerel wrote: Just done some testing on a 6700k.
Single -t4: 695s
Dual -t2: 940.5s
Quad -t1: 1553.9s
Interesting results. I expected the Skylake to beat every one of my tests. It certainly beat the quad -t1 times by a wide margin, but lost out in the single -t4.
I suspect, as Michael said, that the DDR4 helped once the cache was loaded up with the four tasks. That likely explains why Skylake and newer seems to beat the 4790 by quite a wide margin on the larger tasks. The 6700K also has a very slight clock speed advantage over the 4790 at stock speeds, but I doubt that means much here.
And yes, the tests I conducted were less synthetic and more "real world" as many people, myself included, will be running GPU tasks of various kinds during the challenge.
____________
My Primes
Badge Score: 4*2 + 6*2 + 7*4 + 8*8 + 11*3 + 12*1 = 157
mackerel Volunteer tester
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 1,834
On an i7, it shouldn't touch the ram even at one task per core. The work can be done out of L3 cache. Only on i5 might that become a factor.
As for time differences, I'd expect it to scale with clock, plus another 14% for the -lake processors compared to Haswell when not RAM limited, as they added something to make it go faster.
The faster -t 4 result on Haswell has got my interest now... I have a Haswell quad I can excavate later and try on that.
mackerel Volunteer tester
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 1,834
Haswell test is still running, but I also ran a test on my 6600k. My theory was that the smaller L3 cache may start to hinder it.
Single -t4: 695s
Dual -t2: 1072.5s
Quad -t1: 1767.8s
For total throughput, going one per core is still best. Like the 6700k before, running two tasks of two threads each gave 82% of the throughput. And one task of 4 threads came out at 64%, higher than the 6700k's 55% from yesterday.
What is interesting is if you look at it in terms of work done. The 6600k runs at 3.6 GHz all cores, stock, compared to 4.0 GHz on the 6700k. The 6700k should be 11% faster on clock.
Running 4 tasks, one per core, the 6700k was 15% faster. Running two tasks of two threads, it was similar at 14%. Maybe it is the cache in the case of 4 tasks, but it shouldn't be the case in 2 of 2, so I don't think I have a complete picture here. And the most interesting part... running one task of 4 threads, they were the same. The 6600k and 6700k both took the same median time to the second. My assumption here is that the code isn't able to scale well in that scenario, hence some limitation other than core clock dictates how fast it can go.
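Spelling the clock-scaling arithmetic out, with all numbers taken from the two posts (stock all-core clocks as stated):

clock_ratio = 4.0 / 3.6   # 6700k vs 6600k: ~1.11x expected from clock alone
for name, t6600k, t6700k in [("4x -t1", 1767.8, 1553.9),
                             ("2x -t2", 1072.5, 940.5),
                             ("1x -t4", 695.0, 695.0)]:
    print(f"{name}: measured {t6600k / t6700k:.2f}x, clock predicts {clock_ratio:.2f}x")
# -> 1.14x and 1.14x against a predicted 1.11x, but 1.00x for the one
#    4-thread task: there, something other than core clock sets the pace.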
It should be really interesting to see how the Haswell results come out now...
mackerel Volunteer tester
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 1,834
Haswell results are in... and I'm not sure how to digest them.
Forget comparing across architectures for now... how does it scale?
CPU: i5-4570s (3.2 GHz all cores active)
Single -t4: 708s
Dual -t2: 1183.5s
Quad -t1: 2243.8s
Once again, running one task per core was highest throughput, with two of two close at 95%, and one task of 4 threads at 79%.
The one task with 4 threads is only 2% slower than either of the Skylake results. This is suggestive of some other limit in scaling this work to 4 threads.
I've previously come up with a value that Skylake is 14% faster per clock than Haswell, when not limited elsewhere, e.g. by RAM bandwidth. The following allows for this 14% and the clock differences.
Running 4x -t1, it seems to scale within expectations, within a few % or so.
Running 2x -t2, I'm not sure what to do with this... it is over-performing compared to Skylake. I don't have an explanation for this. I would guess the lower running clock might be a contributing factor, and mask multi-thread scaling losses in some way, but I haven't tried to prove this or otherwise.
Keith wrote: Different CPU families and different RAM speeds will drastically alter these results.
Maybe not drastically, but yes, it will change results.
mackerel wrote: Just done some testing on a 6700k.
Single -t4: 695s
Dual -t2: 940.5s
Quad -t1: 1553.9s
As suspected, running 4x single thread tasks is maximum throughput.
I tested an i7 7700K (with DDR4 3200 C14, and with a moderate negative AVX offset for the CPU clock defined in the BIOS), not only with the three configs that you chose, but with a few others more. It's an i7 after all, which has 8 logical CPUs, not 4. I came to a different conclusion than you; maybe because our systems differ, or maybe because I tested more configs.
Furthermore, while I haven't investigated what the variability between currently issued PPS-LLR WUs is, I found with several other LLR subprojects in the past that the variability between WUs is greater than the throughput difference between the best and next best #tasks x #threads configuration on a given machine. This is especially true on machines with higher core count. I therefore eliminate WU variability in my own throughput tests.
Take-home message for all who don't have time to perform their own testing:
If you configure your machine as per test results that you read in forum posts, taking care that the tested machine and yours are similar, then you will probably get a config which is close enough to the optimum. It may be somewhat off the precise optimum though, due to hardware differences or due to imprecision of the tests.
mackerel Volunteer tester
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 1,834
xii5ku wrote: I tested an i7 7700K (with DDR4 3200 C14, and with a moderate negative AVX offset for the CPU clock defined in the BIOS), not only with the three configs that you chose, but with a few others more. It's an i7 after all, which has 8 logical CPUs, not 4. I came to a different conclusion than you; maybe because our systems differ, or maybe because I tested more configs.
Care to share those results?
In my experience, I've never managed to prove a case where using HT provides more throughput than not using HT. There are some cases where using HT appears to help, but they only allow you to reclaim losses from elsewhere. One example is running one per core without affinity results in around 10% throughput loss. You can regain that 10% loss by either running more tasks, or setting affinity. Using affinity is more power efficient so would be preferred, as tapping into HT also increases power usage, even if there is no performance gain.
xii5ku wrote: Furthermore, while I haven't investigated what the variability between currently issued PPS-LLR WUs is, I found with several other LLR subprojects in the past that the variability between WUs is greater than the throughput difference between the best and next best #tasks x #threads configuration on a given machine. This is especially true on machines with higher core count. I therefore eliminate WU variability in my own throughput tests.
There are a number of different "k" values being tested. Even when at a similar size, it can end up working differently. I kinda brute force around it. Do a large enough sample, and take the median. Less controlled than running the same task, but I find it good enough within a couple % or so.
mackerel wrote: xii5ku wrote: I tested an i7 7700K (with DDR4 3200 C14 [...] I came to a different conclusion than you; maybe because our systems differ, or maybe because I tested more configs.
Care to share those results?
Processor clock was 4.2 GHz.
Operating system: Linux Mint 18.3.
Input parameters:
1000000000:P:1:2:257
839 2610671 (i.e. k=839, n=2610671), which gave 159.04 credits. From a quick glance at others' hosts with PPS results, this seems to be in the middle of the range of current PPS credit/task.
Meaning of the following numbers: number of simultaneous tasks (p) x number of threads per task (t): median run time per task -- points per day per host
4p x 1t: 1434 s -- 38,300 ppd
2p x 2t: 870 s -- 31,600 ppd
1p x 4t: 490 s -- 28,000 ppd
8p x 1t: 3938 s -- 27,900 ppd
4p x 2t: 1374 s -- 40,000 ppd
2p x 4t: 747 s -- 36,800 ppd
1p x 8t: 436 s -- 31,500 ppd
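The ppd column follows directly from the run times and the per-task credit: ppd = simultaneous tasks x credit per task x 86400 / median seconds per task. A minimal check using the 159.04 credits/task stated above:

def ppd(tasks, seconds, credit=159.04):
    """Points per day for `tasks` concurrent tasks at `seconds` each."""
    return tasks * credit * 86400 / seconds

print(round(ppd(4, 1434)))   # -> 38330, the "38,300 ppd" of 4p x 1t after rounding
print(round(ppd(4, 1374)))   # -> 40003, the "40,000 ppd" of 4p x 2t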
mackerel wrote: In my experience, I've never managed to prove a case where using HT provides more throughput than not using HT.
Measurements on my own hardware have shown throughput gains from HT in several LLR-based subprojects.
But in PPS-LLR, the host with i7 7700K was the only one among the hosts which I tested which gained from HT.
mackerel wrote: One example is running one per core without affinity results in around 10% throughput loss.
I run PrimeGrid on Linux only.
mackerel wrote: There are a number of different "k" values being tested. Even when at a similar size, it can end up working differently. I kinda brute force around it. Do a large enough sample, and take the median. Less controlled than running the same task, but I find it good enough within a couple % or so.
Right; I didn't explore the current range of available PPS-LLR WUs. Something I should look into in future testing.
Since I run a variety of projects besides PrimeGrid, I prefer my PrimeGrid tests to take as little time as can be reasonably arranged. And if the test regime is fully reproducible, I like it even better.
mackerel Volunteer tester
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 1,834
Interesting... your 4p x 1t is what I considered the optimal situation, but you got 4.4% more ppd on 4p x 2t. I don't have extensive or recent Linux experience but when I last looked at it, I think its scheduler did a better job than Windows.
On that note, I had seen hints of up to 2% improvement when using HT, but I decided that wasn't significant, and even if correct it wasn't enough to offset the extra power it took. I never measured that power difference, but it was 10C hotter on the core on the system I was using. That is significant.
mackerel wrote: On that note, I had seen hints of up to 2% improvement when using HT, but I decided that wasn't significant, and even if correct wasn't enough to offset the extra power it took. I never measured that power difference, but it was 10C hotter on the core on the system I was using. That is significant.
Yes, power efficiency goes down when HT is used.
Same i7 7700K, with two idle Pascal GPUs, whole system power measured:
idle: 50 W
4p x 1t: 38,300 points/d / 137 W = 3.24 points/kJ
4p x 2t: 40,000 points/d / 145 W = 3.19 points/kJ
Of course the 7700K with the clock that I drive it at is far from power efficient to begin with. (The host was built to run GPU projects, not CPU projects.)
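For reference, the points/kJ figures come from dividing points per day by energy per day (watts x 86400 s, i.e. watts x 86.4 kJ):

for points_per_day, watts in [(38_300, 137), (40_000, 145)]:
    kj_per_day = watts * 86.4   # kJ consumed per day at this power draw
    print(f"{points_per_day / kj_per_day:.2f} points/kJ")
# -> 3.24 and 3.19, as quoted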
xii5ku wrote: mackerel wrote: There are a number of different "k" values being tested. Even when at a similar size, it can end up working differently. I kinda brute force around it. Do a large enough sample, and take the median. Less controlled than running the same task, but I find it good enough within a couple % or so.
Right; I didn't explore the current range of available PPS-LLR WUs. Something I should look into in future testing.
BTW, looking back at notes that I made during the SR5-LLR challenge, I saw two different FFT lengths being chosen by llr/gwnum for different WUs that I received during that challenge (probably on hosts with same hardware architecture, if not even on the same host, but my notes are not clear on that). In light of this, your suggestion to try to cover the current range of varying WUs in some way in the test setup definitely has merit.
At the other end of the spectrum of test regimes, I have seen proponents of very short and quick prime95/gwnum test cases (instead of long-running llr/gwnum test cases with input from PrimeGrid), but setting prime95 to the same FFT length as seen with PrimeGrid's WUs. I haven't looked closer into this approach yet, but I see potential for certain systematic errors in it.
mackerel Volunteer tester
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 1,834
Prime95 does have a benchmark mode, and the interface in 29.x makes it easier to use than before. You can choose FFT lengths, thread and worker configurations, HT or not, and "complex FFTs". I don't know what the last one is, but I saw somewhere that it might give more representative results for non-Mersenne uses like LLR.
I have used it to build an observation model of how hardware works under various scenarios, but for sure there will be variables I have not accounted for, so it doesn't always match "real" PrimeGrid task behavior. While I'm fairly confident in the use case on quad core, maybe 6 core consumer level Intel CPUs, I'm less sure on Ryzen (CCX, infinity fabric, exclusive cache), Skylake-X (non-inclusive mesh cache), multi-socket configuration (NUMA or not), and any other high core count system (>8 or so).
As a rule of thumb, best throughput seems to be when the L3 cache (assuming inclusive) is filled but not exceeded by the running task(s).
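One rough way to picture that rule of thumb; the constants here are assumptions for illustration (about 8 bytes per FFT element and roughly 18 bits of the candidate packed per double, ignoring gwnum's tables and other overhead), not measured values:

BYTES_PER_ELEMENT = 8   # one double per FFT element (assumption)
BITS_PER_DOUBLE = 18    # rough packing density at this size (assumption)

def working_set_mb(n_bits, tasks):
    """Crude working-set estimate for `tasks` concurrent LLR tasks."""
    fft_len = n_bits / BITS_PER_DOUBLE
    return tasks * fft_len * BYTES_PER_ELEMENT / 2**20

print(f"{working_set_mb(2_626_000, 4):.1f} MB")   # ~4.5 MB for 4 PPS tasks near n=2.6M
# Against 8 MB of L3 on an i7-6700k and 6 MB on an i5-6600k, the i5 sits much
# closer to the edge once real overheads are added, consistent with the i5/i7
# differences discussed above.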
Hot on the heels of completing my GCW-sieve goals, I have had a string of invalid single-core SGS tasks when the number of active Ryzen cores is greater than 4. Going back in time, I remember 8-core multithreaded ESP tasks that got invalidated after completing them. This was around the time LLR 8.01 was implemented. While gaming and running other desktop apps I have no problems with 4 cores active and feeding 2 GPUs with PPS-sieve.
I do know about Ryzen's split AVX units / halved throughput. Anyway, with the upcoming challenge, Ryzen users beware.
The system is a Ryzen 1700 @ stock 3.2 GHz with 2x8 GB 3200 MHz (XMP) CL16-18-18-38.
Awaiting another sieve project patiently.
mackerel Volunteer tester
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 1,834
While the Ryzen FPU might be comparatively slower than Intel's, it doesn't mean it will give errors if you push it. Double check there isn't another problem elsewhere that might be contributing...
I stand corrected. I dropped down to 2666 MHz (the officially supported speed on the Taichi) and worked my way up to 3066 MHz; so far it is stable with no inconclusives. 3200 MHz gave me hardship with LLR, but worked fine with sieving.
Just a side note: ASRock BIOS releases seem to be hit or miss lately, citing the PC community. I'm on 4.70.
Other factors like heat are definitely not a problem.
Here are quick tests on my 8700K:
It seems running 3 tasks on 2 cores each is the sweet spot for PPS.
6 tasks on 1 core and 2 tasks on 3 cores are practically tied; 6 tasks run a bit faster overall but 2 tasks use a little less CPU time.
Dad
Joined: 28 Feb 18 Posts: 284 ID: 984171 Credit: 182,080,291 RAC: 0
Love these challenges - another drop down to SGS prior to the challenge and another prime found ;)
Dad
____________
Tonight's lucky numbers are
555*2^3563328+1 (PPS-MEGA)
and
58523466^131072+1 (GFN-17 MEGA)
Honza Volunteer moderator Volunteer tester Project scientist
Joined: 15 Aug 05 Posts: 1952 ID: 352 Credit: 6,015,648,658 RAC: 1,568,470
Here are quick tests on my 8700K:
...
It seems running 3 tasks on 2 cores each is the sweet spot for PPS.
6 tasks on 1 core and 2 tasks on 3 cores are practically tied; 6 tasks run a bit faster overall but 2 tasks use a little less CPU time.
Well, i7 8700K, HT off, 4x8GB DDR4-2133.
Times are as follows, testing a recent prime 1071*2^2609316+1.
Columns: cores per task | simultaneous tasks | average time per task (s) | tasks per hour
1 | 6 | 1450 | 14.89
2 | 3 | 1030 | 10.48
3 | 2 | 762 | 9.44
4 | 1 | 486 | 7.41
5 | 1 | 445 | 8.09
6 | 1 | 484 | 7.44
My home computer has a bit faster DDR4 and I expect a bit better times, but six single-core tasks is the best option.
____________
My stats
mackerel Volunteer tester
Joined: 2 Oct 08 Posts: 2639 ID: 29980 Credit: 568,393,769 RAC: 1,834
On the last two 8700k results, if I assume the first one has HT on, it could be understandable. When running multi-thread at all, the scheduling seems to work better, but you may lose some performance from multi-thread scaling. One task per core with HT on can result in some loss of efficiency from scheduler unless you set affinity, or have HT turned off.
I haven't had time to test my 8086k on this at all and it will be set up at last minute when I get home from work...
Hmm. I'm also running GFN and using the computer, so it might affect my results. I need to try running some more 1-core tasks to make my results more reliable. I have 2x8GB 3200 MHz DDR4.
Dave
Joined: 13 Feb 12 Posts: 3171 ID: 130544 Credit: 2,234,035,580 RAC: 550,711
Hmm. I'm also running GFN and using the computer, so it might affect my results. I need to try running some more 1-core tasks to make my results more reliable. I have 2x8GB 3200 MHz DDR4.
I plan to pause AP27 to make sure I get the absolute most out of the machines. They will not be in significant use.
Haha, so there was apparently a Windows update running during the 1-core tests :D
Here are the new results:
So, running each task on 1 physical core is the best. Sorry for misleading :P Always doubt all results!
I suppose most or all (other than myself) who are posting performance numbers here are testing with random WUs. However, nobody bothers to state their error of measurement (standard deviation for example), or alternatively how many tasks they measured and whether or not they waited for validation and normalized for credit/task.
(I did not state my error of measurement in post 119753 because the error is extremely low if a fixed WU is tested, the hardware is stable, and no other load is happening on the system.)
Some forget to state the operating system. (Was an OS with poor CPU scheduler used? Was a desktop used which runs face detection on your photo collection at random times? E.t.c.)
Performance reports which lack this type of data are of rather little value [edit:] because they may easily lead to wrong conclusions.
--------
mackerel wrote: When running multi-thread at all, the scheduling seems to work better, but you may lose some performance from multi-thread scaling. One task per core with HT on can result in some loss of efficiency from scheduler unless you set affinity, or have HT turned off.
When you are telling people this, please don't forget to include that this only applies to Windows. Linux gets only a very small effect from this, about an order of magnitude below the error of measurement that occurs when random WUs are chosen.
Sysadm@Nbg Volunteer moderator Volunteer tester Project scientist
Joined: 5 Feb 08 Posts: 1217 ID: 18646 Credit: 859,721,422 RAC: 206,910
And now to something completely different: GO! GO! GO!
Happy Challenge :)
____________
Sysadm@Nbg
my current lucky number: 113856050^65536 + 1
PSA-PRPNet-Stats-URL: http://u-g-f.de/PRPNet/
Partial solar eclipses never get as much media attention (or scientific study/viewing!) as the full or annular eclipses. Does anyone live near enough to catch a view? There's also the yearly Perseid meteor showers tonight.
https://en.wikipedia.org/wiki/Solar_eclipse_of_August_11,_2018
https://en.wikipedia.org/wiki/Perseids
And BOINC@MIXI had the best start!
Have a fun race, everyone. :-)
____________
Greetings, Jens
147433824^131072+1
____________
"Accidit in puncto, quod non contingit in anno."
Something that does not occur in a year may, perchance, happen in a moment.
Partial solar eclipses never get as much media attention (or scientific study/viewing!) as the full or annular eclipses. Does anyone live near enough to catch a view? There's also the yearly Perseid meteor showers tonight.
https://en.wikipedia.org/wiki/Solar_eclipse_of_August_11,_2018
https://en.wikipedia.org/wiki/Perseids
Maybe some photos will appear at https://commons.wikimedia.org/wiki/Category:Solar_eclipse_of_2018_August_11. /JeppeSN
Any bets how many primes will be found during this short challenge? (A prime is counted if the task that first returned the prime result is included in the challenge.) /JeppeSN
tng
Joined: 29 Aug 10 Posts: 466 ID: 66603 Credit: 45,721,299,949 RAC: 23,102,493
JeppeSN wrote: Any bets how many primes will be found during this short challenge? (A prime is counted if the task that first returned the prime result is included in the challenge.) /JeppeSN
I'll go with 2.
Hello!
My guess:
I hope we will find 4 and I'll catch one of them!
Still searching for a Mega Prime!!
____________
MyStats
My Badges
Dave
Joined: 13 Feb 12 Posts: 3171 ID: 130544 Credit: 2,234,035,580 RAC: 550,711
I have not a scooby-diddly-dumplings.
1 maybe, in the last hour.
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
JeppeSN wrote: Any bets how many primes will be found during this short challenge? (A prime is counted if the task that first returned the prime result is included in the challenge.) /JeppeSN
We actually started betting on this yesterday in Discord. My vote was 5. There's 2 so far.
Actually, there's one more. So there's 3 now.
____________
My lucky number is 75898^524288+1
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
After 1+ days:
Challenge: Solar Eclipse
App: 10 (PPS-LLR)
(As of 2018-08-11 21:28:52 UTC)
238411 tasks have been sent out. [CPU/GPU/anonymous_platform: 238148 (100%) / 0 (0%) / 263 (0%)]
Of those tasks that have been sent out:
4311 (2%) came back with some kind of an error. [4311 (2%) / 0 (0%) / 0 (0%)]
148384 (62%) have returned a successful result. [148131 (62%) / 0 (0%) / 257 (0%)]
85727 (36%) are still in progress. [85723 (36%) / 0 (0%) / 6 (0%)]
Of the tasks that have been returned successfully:
51525 (35%) are pending validation. [51439 (35%) / 0 (0%) / 90 (0%)]
96791 (65%) have been successfully validated. [96624 (65%) / 0 (0%) / 167 (0%)]
40 (0%) were invalid. [40 (0%) / 0 (0%) / 0 (0%)]
28 (0%) are inconclusive. [28 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=2625781. The leading edge was at n=2616900 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 0.34% as much as it had prior to the challenge!
____________
My lucky number is 75898^524288+1
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
JeppeSN wrote: Any bets how many primes will be found during this short challenge? (A prime is counted if the task that first returned the prime result is included in the challenge.) /JeppeSN
We actually started betting on this yesterday in Discord. My vote was 5. There's 2 so far.
Actually, there's one more. So there's 3 now.
Make that 4 primes.
My guess of 5 may have been a bit pessimistic.
EDIT: I missed one. We're up to 5.
____________
My lucky number is 75898^524288+1
Monkeydee wrote: So a couple days ago I decided to switch to PPS LLR in preparation for the challenge. [...] Different CPU families and different RAM speeds will drastically alter these results. May whoever reads this find it informative or inspiring. Happy prime finding!
How do you go about changing how many tasks per core or how many cores per task (however it's set up)?
I'm interested because I never knew you could change those values, and I have a laptop with an i7-7700HQ that runs extremely slowly on tasks when it is using all 8 cores.
DwightHoward2 wrote: How do you go about changing how many tasks per core or how many cores per task (however it's set up)?
I'm interested because I never knew you could change those values, and I have a laptop with an i7-7700HQ that runs extremely slowly on tasks when it is using all 8 cores.
One observation first: that is a 4-core processor, with 4 additional virtual cores (i.e. hyperthreading cores). You generally don't want to run LLR on HT cores, so either disable HT or run on at most 50% of the cores.
To run multithreaded (dedicate multiple cores to a task), you need to use app_config.xml. Look under "Multi-threading optimisation instructions" in the "Welcome to the World Cup Challenge" thread for a brief primer. http://www.primegrid.com/forum_thread.php?id=8027&nowrap=true#118359 (there's probably a better post)
For an example of fairly comprehensive app_config file, http://www.primegrid.com/forum_thread.php?id=7589&nowrap=true#109750
Please note, I included "max_cpu" tag lines, when those are in fact unnecessary.
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
Now there's 6 primes.
____________
My lucky number is 75898^524288+1
Van Zimmerman wrote: You generally don't want to run LLR on HT cores, so either disable HT or run at most on 50% of cores.
This information is outdated at best and should not be perpetuated.
Whether Hyperthreading is detrimental or beneficial for LLR depends on a number of factors:
- the particular LLR based subproject
- processor type, and (possibly, but if so, to a lesser degree) RAM performance
- operating system
- whether or not multithreading is used and set near to the optimum thread count for the given combination of the above factors
- (edit) whether LLR is run exclusively or together with other workload
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
After 2 days:
Challenge: Solar Eclipse
App: 10 (PPS-LLR)
(As of 2018-08-12 18:14:01 UTC)
372521 tasks have been sent out. [CPU/GPU/anonymous_platform: 372026 (100%) / 0 (0%) / 495 (0%)]
Of those tasks that have been sent out:
5793 (2%) came back with some kind of an error. [5793 (2%) / 0 (0%) / 0 (0%)]
277575 (75%) have returned a successful result. [277091 (74%) / 0 (0%) / 489 (0%)]
89211 (24%) are still in progress. [89202 (24%) / 0 (0%) / 6 (0%)]
Of the tasks that have been returned successfully:
63874 (23%) are pending validation. [63749 (23%) / 0 (0%) / 130 (0%)]
213520 (77%) have been successfully validated. [213161 (77%) / 0 (0%) / 359 (0%)]
117 (0%) were invalid. [117 (0%) / 0 (0%) / 0 (0%)]
64 (0%) are inconclusive. [64 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=2630853. The leading edge was at n=2616900 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 0.53% as much as it had prior to the challenge!
We're up to 8 primes.
____________
My lucky number is 75898^524288+1
DwightHoward2 wrote: How do you go about changing how many tasks per core or how many cores per task (however its setup)?
I'm interested because I never knew you could change those values and I have a laptop with an i7-7700hq that runs extremely slow on task when it is useing all 8 cores.
See post 119753 for measurements of PPS-LLR with different thread counts on a desktop i7 7700K with Linux and good RAM. (Notation: the #t figures give the number of threads per task; #p is the number of tasks running concurrently. The product #p x #t is the number of logical CPUs that BOINC needs to be allowed to use for PPS-LLR.)
Minimum app_config.xml for the case that happened to be the measured host's optimum (2 threads per task, together with HT enabled, BOINC set to use 100 % CPUs, no other load on the host):
<app_config>
  <app_version>
    <!-- PPS (LLR) application -->
    <app_name>llrPPS</app_name>
    <!-- run LLR with two threads per task -->
    <cmdline>-t 2</cmdline>
    <!-- tell the BOINC scheduler that each task occupies two logical CPUs -->
    <avg_ncpus>2</avg_ncpus>
  </app_version>
</app_config>
After saving the file in the correct folder (C:\ProgramData\BOINC\projects\www.primegrid.com\ on most Windows hosts), I recommend restarting boinc-client.
Different hosts definitely have other optimum settings. However, the performance delta between optimum and next best settings is usually small, hence: no worries.
If that i7 7700K for which I reported throughput ran Windows instead of Linux, then the optimum might either be the same, or might be 1 thread per task, HT disabled in the BIOS, BOINC set to use 100 % CPUs, no other load on the host.
I have no idea how far your laptop differs from my desktop. First of all, computational performance of laptops is nowadays limited by the heat dissipation capacity of the cooling system. Second, do you have dual channel RAM in it? Many laptops are sold with single channel RAM per default. LLR benefits from RAM bandwidth. With single channel RAM, it is again possible that the optimum is 1 thread per task, HT disabled. But this is just a wild guess on my part.
xii5ku wrote: I have no idea how far your laptop differs from my desktop. First of all, computational performance of laptops is nowadays limited by the heat dissipation capacity of the cooling system. Second, do you have dual channel RAM in it? Many laptops are sold with single channel RAM per default. LLR benefits from RAM bandwidth. With single channel RAM, it is again possible that the optimum is 1 thread per task, HT disabled. But this is just a wild guess on my part.
Thank you all for the information... This is probably my first time deep diving into this program, so I'm very new to all of it.
But I actually do have adequate cooling for a laptop, even at full load, so the RAM bandwidth would explain why my i5-4570 is outperforming it. I'm going to play around with settings to hopefully be more ready for the next challenge.
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
With a couple of hours remaining in the challenge, it's time to remind everyone...
At the Conclusion of the Challenge
We would prefer users "moving on" to finish those tasks they have downloaded; if not, then please ABORT the WU's (and then UPDATE the PrimeGrid project) instead of DETACHING, RESETTING, or PAUSING.
ABORTING WU's allows them to be recycled immediately; thus a much faster "clean up" to the end of a Challenge. DETACHING, RESETTING, and PAUSING WU's causes them to remain in limbo until they EXPIRE. Therefore, we must wait until WU's expire to send them out to be completed. Thank you!
____________
My lucky number is 75898^524288+1
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
9 primes so far.
____________
My lucky number is 75898^524288+1
9 primes so far.
Nice! Some of them are already showing up in a "PrimeGrid Primes by Project" search. /JeppeSN
At the Conclusion of the Challenge
Dear fellow crunchers!
Please abort work units that you no longer want to crunch, instead of pausing them or detaching your computers.
Aborting allows immediate reissue, so that the result of the competition can be verified as soon as possible.
THANK YOU!
Hope this reminder in our forums helps.
____________
Greetings, Jens
147433824^131072+1
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
Final results!
Challenge: Solar Eclipse
App: 10 (PPS-LLR)
(As of 2018-08-13 18:00:23 UTC)
521654 tasks have been sent out. [CPU/GPU/anonymous_platform: 520905 (100%) / 0 (0%) / 749 (0%)]
Of those tasks that have been sent out:
24374 (5%) came back with some kind of an error. [24374 (5%) / 0 (0%) / 0 (0%)]
437399 (84%) have returned a successful result. [436652 (84%) / 0 (0%) / 747 (0%)]
59008 (11%) are still in progress. [59004 (11%) / 0 (0%) / 2 (0%)]
Of the tasks that have been returned successfully:
56174 (13%) are pending validation. [56077 (13%) / 0 (0%) / 97 (0%)]
380860 (87%) have been successfully validated. [380210 (87%) / 0 (0%) / 650 (0%)]
279 (0%) were invalid. [279 (0%) / 0 (0%) / 0 (0%)]
86 (0%) are inconclusive. [86 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=2636035. The leading edge was at n=2616900 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 0.73% as much as it had prior to the challenge!
9 PPS primes were found during the challenge. All were challenge tasks.
____________
My lucky number is 75898^524288+1
Congratulations to the Czech National Team and to zunewantan.
It was a great challenge, despite the heat.
____________
676754^262144+1 is prime
Congratulations to the Czech National Team and to zunewantan.
It was a great challenge, despite the heat.
Yes, it was a good challenge and congrats to the Czech National Team (pulled a page out of our playbook) and big congrats to zunewantan.
Cheers Rick
And a double congrats to the prime finders.
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
The cleanup begins!
Solar Eclipse:
Aug-13: Solar Eclipse: 55801 tasks outstanding; 44514 affecting individual (293) scoring positions; 23540 affecting team (68) scoring positions.
____________
My lucky number is 75898^524288+1
Thanks and congrats to everyone who overcame the heat to take part in this challenge!
____________
Greetings, Jens
147433824^131072+1
Roger (Volunteer developer, Volunteer tester)
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,668,824 RAC: 0
Had a great Challenge.
Awesome photos of the Eclipse:
https://www.space.com/41464-partial-solar-eclipse-august-2018-photos.html
Eclipse from space:
https://phys.org/news/2018-08-image-partial-solar-eclipse-space.html
Full listing of Solar and Lunar Eclipses, and when the next one is visible from your nearest city:
https://www.timeanddate.com/eclipse/list.html
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
Solar Eclipse:
Aug-13: Solar Eclipse: 55801 tasks outstanding; 44514 affecting individual (293) scoring positions; 23540 affecting team (68) scoring positions.
Aug-14: Solar Eclipse: 25191 tasks outstanding; 18643 affecting individual (278) scoring positions; 7467 affecting team (40) scoring positions.
____________
My lucky number is 75898^524288+1
Solar Eclipse:
Aug-13: Solar Eclipse: 55801 tasks outstanding; 44514 affecting individual (293) scoring positions; 23540 affecting team (68) scoring positions.
Aug-14: Solar Eclipse: 25191 tasks outstanding; 18643 affecting individual (278) scoring positions; 7467 affecting team (40) scoring positions.
Still crunching here for the cleanup.
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
Solar Eclipse:
Aug-13: Solar Eclipse: 55801 tasks outstanding; 44514 affecting individual (293) scoring positions; 23540 affecting team (68) scoring positions.
Aug-14: Solar Eclipse: 25191 tasks outstanding; 18643 affecting individual (278) scoring positions; 7467 affecting team (40) scoring positions.
Still crunching here for the cleanup.
Please understand that the majority of the cleanup (and, at this moment, "majority" means exactly 100%) involves waiting for tasks that have been sent to computers to either be completed or to time out. The tasks your computer is getting right now are not cleanup tasks. Run PPS-LLR or not, as you desire. It won't affect the cleanup either way.
____________
My lucky number is 75898^524288+1
Solar Eclipse:
Aug-13: Solar Eclipse: 55801 tasks outstanding; 44514 affecting individual (293) scoring positions; 23540 affecting team (68) scoring positions.
Aug-14: Solar Eclipse: 25191 tasks outstanding; 18643 affecting individual (278) scoring positions; 7467 affecting team (40) scoring positions.
Still crunching here for the cleanup.
Please understand that the majority of the cleanup (and, at this moment, "majority" means exactly 100%) involves waiting for tasks that have been sent to computers to either be completed or to time out. The tasks your computer is getting right now are not cleanup tasks. Run PPS-LLR or not, as you desire. It won't affect the cleanup either way.
Good to know.
I went for CUL to test the crunch time, but forgot to change my config (it was set for PPS), and BOINC downloaded 3 CUL tasks instead of only one, as I expected.
I have a personal policy and try to never abort downloaded tasks.
So I'm stuck with CUL for a while.
Knowing that the cleanup won't miss my PPS crunching makes me feel better :)
____________
"Accidit in puncto, quod non contingit in anno."
Something that does not occur in a year may, perchance, happen in a moment.
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
Challenge Cleanup:
Aug-13: Solar Eclipse: 55801 tasks outstanding; 44514 affecting individual (293) scoring positions; 23540 affecting team (68) scoring positions.
Aug-14: Solar Eclipse: 25191 tasks outstanding; 18643 affecting individual (278) scoring positions; 7467 affecting team (40) scoring positions.
Aug-15: Solar Eclipse: 18195 tasks outstanding; 12107 affecting individual (256) scoring positions; 1357 affecting team (29) scoring positions.
____________
My lucky number is 75898^524288+1
Solar Eclipse:
Aug-13: Solar Eclipse: 55801 tasks outstanding; 44514 affecting individual (293) scoring positions; 23540 affecting team (68) scoring positions.
Aug-14: Solar Eclipse: 25191 tasks outstanding; 18643 affecting individual (278) scoring positions; 7467 affecting team (40) scoring positions.
Still crunching here for the cleanup.
Please understand that the majority of the cleanup (and, at this moment, "majority" means exactly 100%) involves waiting for tasks that have been sent to computers to either be completed or to time out. The tasks your computer is getting right now are not cleanup tasks. Run PPS-LLR or not, as you desire. It won't affect the cleanup either way.
Ah, I see. Thanks for the info.
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
Challenge Cleanup:
Aug-13: Solar Eclipse: 55801 tasks outstanding; 44514 affecting individual (293) scoring positions; 23540 affecting team (68) scoring positions.
Aug-14: Solar Eclipse: 25191 tasks outstanding; 18643 affecting individual (278) scoring positions; 7467 affecting team (40) scoring positions.
Aug-15: Solar Eclipse: 18195 tasks outstanding; 12107 affecting individual (256) scoring positions; 1357 affecting team (29) scoring positions.
Aug-16: Solar Eclipse: 7545 tasks outstanding; 3279 affecting individual (188) scoring positions; 172 affecting team (17) scoring positions.
Aug-17: Solar Eclipse: 4290 tasks outstanding; 1269 affecting individual (125) scoring positions; 47 affecting team (7) scoring positions.
____________
My lucky number is 75898^524288+1
9 PPS primes were found during the challenge. All were challenge tasks.
I see one of them was a factor of the enormous number:
xGF(2630493,7,4) = 7^(2^2630493) + 4^(2^2630493) = 7^(2^2630493) + 2^(2^2630494)
That number is comparable to 10^(1.67 * 10^791857) or 10^(10^(10^5.8986)).
/JeppeSN
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 13955 ID: 53948 Credit: 392,952,071 RAC: 176,354
Challenge Cleanup:
Aug-13: Solar Eclipse: 55801 tasks outstanding; 44514 affecting individual (293) scoring positions; 23540 affecting team (68) scoring positions.
Aug-14: Solar Eclipse: 25191 tasks outstanding; 18643 affecting individual (278) scoring positions; 7467 affecting team (40) scoring positions.
Aug-15: Solar Eclipse: 18195 tasks outstanding; 12107 affecting individual (256) scoring positions; 1357 affecting team (29) scoring positions.
Aug-16: Solar Eclipse: 7545 tasks outstanding; 3279 affecting individual (188) scoring positions; 172 affecting team (17) scoring positions.
Aug-17: Solar Eclipse: 4290 tasks outstanding; 1269 affecting individual (125) scoring positions; 47 affecting team (7) scoring positions.
Aug-18: Solar Eclipse: 1820 tasks outstanding; 291 affecting individual (61) scoring positions; 18 affecting team (3) scoring positions.
Aug-19: Solar Eclipse: 631 tasks outstanding; 40 affecting individual (15) scoring positions; 4 affecting team (2) scoring positions.
Aug-20: Solar Eclipse: 193 tasks outstanding; 7 affecting individual (4) scoring positions; 0 affecting team (0) scoring positions.
____________
My lucky number is 75898^524288+1
Roger (Volunteer developer, Volunteer tester)
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,668,824 RAC: 0
The results are final!
Special Congratulations to:
● LookAS of Czech National Team for finding the first prime of the challenge 981*2^2622032+1
● JayPi of SETI.Germany for finding the second prime of the challenge 693*2^2623557+1
● UBT - Mikeejones of UK BOINC Team for finding the third prime of the challenge 807*2^2625044+1
● xii5ku of TeAm AnandTech for finding the fourth prime of the challenge 819*2^2627529+1
● zunewantan of Aggie The Pew for finding the fifth prime of the challenge 967*2^2629344+1
● JG4KEZ (Koichi Soraku) of BOINC@MIXI for finding the prime with the highest k of the challenge 1131*2^2629345+1
● vaughan of AMD Users for finding the prime with the lowest k of the challenge 465*2^2630496+1 that is also a factor of xGF(2630493,7,4)!!!!
● Scott Brown of Aggie The Pew for finding the largest prime of the challenge 741*2^2634385+1
● eisler jiri of Czech National Team for finding the final prime of the challenge 813*2^2626224+1
Top 3 individuals:
1: zunewantan
2: xii5ku
3: Scott Brown
Top 3 teams:
1: Czech National Team
2: Aggie The Pew
3: SETI.Germany
Congratulations to the winners, and well done to everyone who participated.
See you at the Oktoberfest Challenge!