Roger (Volunteer developer, Volunteer tester)
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,668,824 RAC: 0
|
Welcome to the Transit of Mercury Across the Sun Challenge
The ninth Challenge of the 2019 Challenge series is a 10 day challenge celebrating the Transit of Mercury Across the Sun. The challenge is being offered on the Prime Sierpinski Problem (LLR) application. The challenge will begin 1st November 2019 18:04 UTC and end 11th November 2019 18:04 UTC.
The transit of Mercury – the innermost planet of our solar system – will be visible on November 11, 2019. A transit occurs when Mercury passes directly in front of the sun. At such times, Mercury can be seen through telescopes with solar filters as a small black dot crossing the sun's face. Mercury's diameter is only 1/194th of that of the sun, as seen from Earth. That's why the eclipse masters recommend using a telescope with a magnification of 50 to 100 times for witnessing the event.
Unless you are well-versed with the telescope and how to properly use solar filters, we advise you to seek out a public program via a nearby observatory or astronomy club.
Mercury will come into view on the sun’s face around 12:36 UTC. It’ll make a leisurely journey across the sun’s face, reaching greatest transit (closest to sun’s center) at approximately 15:20 UTC and finally exiting around 18:04 UTC. The entire 5 1/2 hour path across the sun will be visible across the U.S. East – with magnification and proper solar filters – while those in the U.S. West can observe the transit already in progress after sunrise.
The transit will be visible (at least in part) from most of the globe, with the exception of Indonesia, most of Asia, and Australia. Mercury takes some 5 1/2 hours to cross the sun’s disk, and this transit of Mercury is entirely visible (given clear skies) from eastern North America, South America, southern tip of Greenland, and far-western Africa.
To participate in the Challenge, please select only the Prime Sierpinski Problem LLR (PSP) project in your PrimeGrid preferences section.
Application builds are available for Linux 32 and 64 bit, Windows 32 and 64 bit and MacIntel. Intel CPUs with FMA3 capabilities (Haswell, Broadwell, Skylake, Kaby Lake, Coffee Lake) will have a very large advantage, and Intel CPUs with dual AVX-512 (certain recent Intel Skylake-X and Xeon CPUs) will be the fastest.
ATTENTION: The primality program LLR is CPU intensive; so, it is vital to have a stable system with good cooling. It does not tolerate "even the slightest of errors." Please see this post for more details on how you can "stress test" your computer. Tasks on one CPU core will take 18 hours on fast/newer computers and 3 days+ on slower/older computers. If your computer is highly overclocked, please consider "stress testing" it. Sieving is an excellent alternative for computers that are not able to LLR. :)
Highly overclocked Haswell, Broadwell, Skylake, Kaby Lake or Coffee Lake (i.e., Intel Core i7, i5, and i3 -4xxx or better) computers running the application will see the fastest times. Note that PSP is running the latest AVX-512 version of LLR, which takes full advantage of the features of these newer CPUs. It's faster than the previous LLR app, but it draws more power and produces more heat. If you have one of the recent Intel Skylake-X or Xeon CPUs with AVX-512, especially if it's overclocked or has overclocked memory, and you haven't run the new AVX-512 LLR before, we strongly suggest running it before the challenge while monitoring the temperatures.
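If you're not sure which of those instruction sets your CPU supports, the kernel's flag list will tell you. Here is a minimal Python sketch for Linux; it only assumes the standard /proc/cpuinfo layout (FMA3 is reported simply as "fma", and AVX-512 support shows up as "avx512f"):

# Report whether the CPU advertises the instruction sets LLR can use (Linux only).
def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("fma", "avx2", "avx512f"):
    print(f"{feature:8s}", "yes" if feature in flags else "no")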
Please, please, please make sure your machines are up to the task.
Multi-threading optimisation instructions
Those looking to maximise their computer's performance during this challenge, or when running LLR in general, may find this information useful.
- Your mileage may vary. Before the challenge starts, take some time and experiment and see what works best on your computer.
- If you have an Intel CPU with hyperthreading, either turn off the hyperthreading in the BIOS, or set BOINC to use 50% of the processors.
- If you're using a GPU for other tasks, it may be beneficial to leave hyperthreading on in the BIOS and instead tell BOINC to use 50% of the CPUs. This will allow one of the hyperthreads to service the GPU.
- The new multi-threading system is now live. This will allow you to select multi-threading from the project preferences web page. No more app_config.xml. It works like this:
- In the preferences selection, there are selections for "max jobs" and "max cpus", similar to the settings in app_config.
- Unlike app_config, these two settings apply to ALL apps. You can't choose 1 thread for SGS and 4 for SoB. When you change apps, you need to change your multithreading settings if you want to run a different number of threads.
- There will be individual settings for each venue (location).
- This will eliminate the problem of BOINC downloading 1 task for every core.
- The hyperthreading control isn't possible at this time.
- The "max cpus" control will only apply to LLR apps. The "max jobs" control applies to all apps.
- If you want to continue to use app_config.xml for LLR tasks, you need to change it if you want it to work. Please see this message for more information.
- Some people have observed that when using multithreaded LLR, hyperthreading is actually beneficial. We encourage you to experiment and see what works best for you.
Time zone converter:
The World Clock - Time Zone Converter
NOTE: The countdown clock on the front page uses the host computer's time. Therefore, if your computer's time is off, so is the countdown clock. For precise timing, use the UTC Time in the data section at the very top, above the countdown clock.
Scoring Information
Scores will be kept for individuals and teams. Only tasks issued AFTER 1st November 2019 18:04 UTC and received BEFORE 11th November 2019 18:04 UTC will be considered for challenge credit. We will be using the same scoring method as we currently use for BOINC credits. A quorum of 2 is NOT needed to award Challenge score - i.e. no double checker. Therefore, each returned result will earn a Challenge score. Please note that if the result is eventually declared invalid, the score will be removed.
At the Conclusion of the Challenge
We kindly ask users "moving on" to ABORT their tasks instead of DETACHING, RESETTING, or PAUSING.
ABORTING tasks allows them to be recycled immediately; thus a much faster "clean up" to the end of an LLR Challenge. DETACHING, RESETTING, and PAUSING tasks causes them to remain in limbo until they EXPIRE. Therefore, we must wait until tasks expire to send them out to be completed.
Please consider either completing what's in the queue or ABORTING them. Thank you. :)
About the Prime Sierpinski Problem
Wacław Franciszek Sierpiński (14 March 1882 — 21 October 1969), a Polish mathematician, was known for outstanding contributions to set theory, number theory, theory of functions and topology. It is in number theory where we find the Sierpinski problem.
Basically, the Sierpinski problem is "What is the smallest Sierpinski number" and the prime Sierpinski problem is "What is the smallest 'prime' Sierpinski number?"
First we look at Proth numbers (named after the French mathematician François Proth). A Proth number is a number of the form k*2^n+1 where k is odd, n is a positive integer, and 2^n>k.
A Sierpinski number is an odd k such that the Proth number k*2^n+1 is not prime for all n. For example, 3 is not a Sierpinski number because n=2 produces a prime number (3*2^2+1=13). In 1962, John Selfridge proved that 78,557 is a Sierpinski number...meaning he showed that for all n, 78557*2^n+1 was not prime.
Most number theorists believe that 78,557 is the smallest Sierpinski number, but it hasn't yet been proven. In order to prove that it is the smallest Sierpinski number, it has to be shown that every single k less than 78,557 is not a Sierpinski number, and to do that, some n must be found that makes k*2^n+1 prime.
The smallest proven 'prime' Sierpinski number is 271,129. In order to prove that it is the smallest prime Sierpinski number, it has to be shown that every single 'prime' k less than 271,129 is not a Sierpinski number, and to do that, some n must be found that makes k*2^n+1 prime.
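To make the search concrete, here is a small Python sketch of how a given k is eliminated: find any n for which k*2^n+1 is prime. It uses sympy's isprime for brevity (an assumption about available libraries; the real project relies on sieving plus LLR tests on enormous n):

from sympy import isprime  # any primality test works at this toy scale

def smallest_eliminating_n(k, max_n=1000):
    """Return the smallest n with k*2^n + 1 prime, or None if none is found up to max_n."""
    for n in range(1, max_n + 1):
        if isprime(k * 2**n + 1):
            return n
    return None

print(smallest_eliminating_n(3))                 # 1, since 3*2^1+1 = 7 is prime, so 3 is not a Sierpinski number
print(smallest_eliminating_n(78557, max_n=200))  # None: 78557 is Selfridge's proven Sierpinski number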
Previously, PrimeGrid was working in cooperation with Seventeen or Bust on the Sierpinski problem and working with the Prime Sierpinski Project on the 'prime' Sierpinski problem. Although both Seventeen or Bust and the Prime Sierpinski Project have ceased operations, PrimeGrid continues the search independently to solve both conjectures.
The following k's remain for each project:
Sierpinski problem: 21181, 22699, 24737, 55459, 67607
'prime' Sierpinski problem: 22699*, 67607*, 79309, 79817, 152267, 156511, 222113, 225931, 237019
* being tested as part of our Seventeen or Bust project
Fortunately, the two projects (and later PrimeGrid's Extended Sierpinski Project) combined their sieving efforts into a single file. Therefore, PrimeGrid's PSP sieve supports all three projects.
Additional Information
For more information about PSP, please see:
For more information about Sierpinski, Sierpinski numbers, and the Sierpinski problem, please see these resources:
Most recently discovered primes:
258317*2^5450519+1 is prime! (found by Sloth@PSP on 28/7/2008)
90527*2^9162167+1 is prime! (found by Bold_Seeker@PSP on 19/6/2010)
10223*2^31172165+1 discovered as part of our Seventeen or Bust subproject, eliminating 10223 from both the Sierpinski Problem and the Prime Sierpinski Problem, by Szabolcs Péter (SyP). (official announcement)
168451*2^19375200+1 is prime! Found by Ben Maloney (paleseptember) on September 17th, 2017. (official announcement)
____________
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
Will you be able to see the transit with the naked eye, using appropriate protection as you would for a solar eclipse? Or is Mercury too small?
EDIT: Nope, it's too small to see the transit without some type of telescope or binoculars or something else to magnify the image.
Now, I need to check and see if this topic is intentionally read-only. Thanks to those who tried to respond... and couldn't.
____________
My lucky number is 75898524288+1 |
|
|
Ken_g6 Volunteer developer
Joined: 4 Jul 06 Posts: 940 ID: 3110 Credit: 265,155,351 RAC: 110,466
|
I'm hoping that this trick, which I did for the last solar eclipse, might work. But Mercury is really tiny. |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
The challenge starts tomorrow, so this is a good time to remind everyone:
Please remember that this challenge starts 4 minutes after the hour! Tasks sent at 18:00 won't count, and these are long tasks, so you don't want to waste a lot of time because you started 240 seconds too early!!!
Also, for anyone wishing to throw a few *free* cloud servers at the challenge for the first time, most of the cloud providers out there have introductory offers where you get a certain amount of free computing services for a month (or longer, in some cases). A few challenges ago, I wrote up a guide to using Digital Ocean, one of the less expensive and less complicated cloud providers. Their $50 credit (with the referral code) doesn't sound like a lot, but it's enough to run ten 3-core AVX512 servers for the entire 10 day challenge.
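As a rough sanity check on that claim, here is a back-of-the-envelope sketch in Python. The $15/month price for a 3-vCPU/1 GB droplet is an assumption based on Digital Ocean's published pricing at the time; they bill hourly up to the monthly cap:

droplet_monthly_usd = 15.0                      # assumed 3-vCPU/1GB droplet price
hourly_usd = droplet_monthly_usd / (30 * 24)    # hourly billing
servers, days = 10, 10
total = servers * days * 24 * hourly_usd
print(f"~${total:.2f} for {servers} droplets over {days} days")   # ~$50.00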
An executive summary of the process is here, and the full instructions can be found here: https://www.primegrid.com/forum_thread.php?id=8831&nowrap=true#133770
If you do use a cloud computing service, please remember to delete/destroy all the servers when you're done with them. You get charged for them for as long as they exist, whether they're running or not. You have to completely delete them to stop the billing.
Besides Digital Ocean, other services that offer free introductory credit are Amazon Web Services, Google Cloud Platform, and Azure (Microsoft). I'm sure there are others as well.
Be warned, though: adding extra computing power to the challenge at the click of a button can be very addictive!
____________
My lucky number is 75898524288+1 |
|
|
|
I'm hoping that this trick, which I did for the last solar eclipse, might work. But Mercury is really tiny.
Solar projection at that scale will NOT work. Mercury is incredibly small against the solar disk.
I went after Venus back in 2012, which is much larger, and you can see my results here: http://www.perseus.gr/Astro-Planet-Ven-Tr2012.htm (note: each thumbnail is hyperlinked and leads to a much better result and resolution).
With Mercury being about 40% of the diameter of Venus, you can imagine how difficult it will be when projected onto a piece of paper. Here is my result from 2016: http://www.perseus.gr/Astro-Planet-Mer-Transit-2016-Ingress.htm.
Both of the above (filtered) results are at 2400mm focal length which is both fairly long and necessary for such work involving the interior planets. |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
With Mercury being about 40% of the diameter of Venus, you can imagine how difficult it will be when projected onto a piece of paper. Here is my result from 2016: http://www.perseus.gr/Astro-Planet-Mer-Transit-2016-Ingress.htm.
Both of the above (filtered) results are at 2400mm focal length which is both fairly long and necessary for such work involving the interior planets.
Love the photography!
You may want to update the caption on the 2016 Mercury transit photo. It was out of date, even in 2016.
The only satellite mission to Mercury was Mariner 10 which made three fly-bys between March 1974 and March 1975 when it photographed half its surface.
https://www.nasa.gov/mission_pages/messenger/main/index.html
____________
My lucky number is 75898524288+1 |
|
|
|
Not sure if this is the right place to ask. Spinning up a few Digital Ocean instances for fun.
cat /proc/cpuinfo reports a gold cpu with the avx512 flags, but they all only report a single core. I've tried both the 3 CPU and 2 CPU sizes from a few regions. Primegrid will only download a single task, and won't download multi-threaded tasks.
Any ideas?
Here's the cpuinfo for a 2 cpu instance:
root@mek4-debian-s-2vcpu-2gb-ams3-01:~# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
stepping : 4
microcode : 0x1
cpu MHz : 2294.608
cache size : 25344 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves pku ospke md_clear
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
bogomips : 4589.21
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
Not sure if this is the right place to ask. Spinning up a few Digital Ocean instances for fun.
cat /proc/cpuinfo reports a gold cpu with the avx512 flags, but they all only report a single core. I've tried both the 3 CPU and 2 CPU sizes from a few regions. Primegrid will only download a single task, and won't download multi-threaded tasks.
Any ideas?
Not off the top of my head. That's strange. That's clearly one 2-core VM, given the name. Maybe someone else has an idea. I'd do some testing, but I need to leave in a few minutes. Hopefully someone else will solve this mystery before I get back!
____________
My lucky number is 75898524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
It works for me...
root@debian-s-3vcpu-1gb-nyc1-01:~# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
stepping : 4
microcode : 0x1
cpu MHz : 2294.608
cache size : 25344 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves pku ospke md_clear
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
bogomips : 4589.21
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
stepping : 4
microcode : 0x1
cpu MHz : 2294.608
cache size : 25344 KB
physical id : 1
siblings : 1
core id : 0
cpu cores : 1
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves pku ospke md_clear
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
bogomips : 4589.21
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 85
model name : Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
stepping : 4
microcode : 0x1
cpu MHz : 2294.608
cache size : 25344 KB
physical id : 2
siblings : 1
core id : 0
cpu cores : 1
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves pku ospke md_clear
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
bogomips : 4589.21
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
____________
My lucky number is 75898524288+1 |
|
|
|
You may want to update the caption on the 2016 Mercury transit photo. It was out of date, even in 2016.
The only satellite mission to Mercury was Mariner 10 which made three fly-bys between March 1974 and March 1975 when it photographed half its surface.
https://www.nasa.gov/mission_pages/messenger/main/index.html
Thanks, Michael. Will do. |
|
|
|
I'm clearly not functioning completely this morning. :)
Three processors, each with one core. Scrolling for the win. Now I have to figure out why I only get 1 task...
|
|
|
pschoefer Volunteer developer Volunteer tester
Joined: 20 Sep 05 Posts: 686 ID: 845 Credit: 3,010,470,502 RAC: 658,088
|
Will you be able to see the transit with the naked eye, using appropriate protection as you would for a solar eclipse? Or is Mercury too small?
EDIT: Nope, it's too small to see the transit without some type of telescope or binoculars or something else to magnify the image.
There are claims that sunspots as small as 10-12 arcsec have been seen with just the naked eye and the usual solar eclipse goggles. Mercury can reach up to 12 arcsec during a transit and has an even higher contrast than the darkest sunspots, so it might just be possible. However, 12 arcsecs are only reached if the transit happens in May (like the last one in 2016), while it's more like 7 arcsec for a transit in November.
Fun fact: Most people who are familiar with our Solar System's planets might think that Venus or maybe Mars is the planet closest to Earth. However, it turns out that -on average- Mercury is our closest neighbor.
____________
|
|
|
|
However, 12 arcsecs are only reached if the transit happens in May (like the last one in 2016), while it's more like 7 arcsec for a transit in November.
I just checked with SkyMap Pro (v8) and Mercury on transit day will be 9.95 arc-seconds. |
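Those figures agree with a simple small-angle estimate. A Python sketch follows; Mercury's diameter of about 4,879 km and an Earth-Mercury distance of roughly 0.68 AU around inferior conjunction are assumed round numbers, not values taken from this thread:

import math

mercury_diameter_km = 4879            # assumed
earth_mercury_distance_au = 0.68      # assumed distance near inferior conjunction in November
au_km = 1.496e8
arcsec_per_rad = 180 / math.pi * 3600

theta = mercury_diameter_km / (earth_mercury_distance_au * au_km) * arcsec_per_rad
print(f"apparent diameter ~ {theta:.1f} arcsec")   # ~9.9 arcsec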
|
|
|
Their $50 credit (with the referral code) doesn't sound like a lot, but it's enough to run ten 3-core AVX512 servers for the entire 10 day challenge.
Michael, I'd be somewhat cautious this time about the number of droplets. In the last challenge, after signing up, I spun up 9 droplets and within a day my account was locked by a DigitalOcean admin, citing a high rate of droplet creation along with high load, which they deemed "unusual". Eventually it was unlocked after 7 days, once I explained my usage. But not before it ate up a significant portion of the free $50 credit (9 droplets turned off, but not destroyed, for 7 days).
This time, I'm taking a conservative approach of maxing out at 5-6 droplets and seeing what happens. I just want to share the experience I had and in no way want to influence others' decisions for the challenge. :) |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
Their $50 credit (with the referral code) doesn't sound like a lot, but it's enough to run ten 3-core AVX512 servers for the entire 10 day challenge.
Michael, I'd be somewhat cautious this time about the number of droplets. In the last challenge, after signing up, I spun up 9 droplets and within a day my account was locked by a DigitalOcean admin, citing a high rate of droplet creation along with high load, which they deemed "unusual". Eventually it was unlocked after 7 days, once I explained my usage. But not before it ate up a significant portion of the free $50 credit (9 droplets turned off, but not destroyed, for 7 days).
This time, I'm taking a conservative approach of maxing out at 5-6 droplets and seeing what happens. I just want to share the experience I had and in no way want to influence others' decisions for the challenge. :)
That's happened to a few of us. You got more of an explanation of "why" than I did.
If it does happen to you, be sure to request a refund for the charges incurred while the droplets were powered down!
____________
My lucky number is 75898524288+1 |
|
|
|
FYI, Google Cloud gives a free $400 credit for first time users. Their n1-highcpu-4 (4 vCPUs, 3.6 GB memory) vm has avx512 - at least in the one I created. It's a 2.0 GHZ and takes about 26 hours to complete the PSP LLR I started yesterday (it's not done).
The 3 CPU digital ocean takes about 18 hours to finish one.
A 4CPU Azure instance (their cpu optimized instances have avx512 also) takes about 21 hours to finish a unit (again not finished yet).
I also tried a 16-CPU Azure instance. A 16-thread LLR task appears to run at about 80% efficiency vs. four 4-thread jobs. I ran four 4-thread tasks overnight and they appear to take about 25 hours each (again, not finished). It seems like 3-4 CPUs is the sweet spot.
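One way to compare those options is throughput per vCPU rather than raw task time. A small Python sketch using the approximate run times quoted above (several of those tasks had not finished, so treat the numbers as rough):

# (vCPUs, concurrent tasks, approx hours per task) from the figures above
instances = {
    "DigitalOcean 3 vCPU": (3, 1, 18),
    "GCP n1-highcpu-4":    (4, 1, 26),
    "Azure 4 vCPU":        (4, 1, 21),
    "Azure 16 vCPU":       (16, 4, 25),   # four 4-thread tasks at once
}
for name, (vcpus, concurrent, hours) in instances.items():
    per_vcpu = concurrent * (24 / hours) / vcpus
    print(f"{name:20s} {per_vcpu:.3f} tasks/day per vCPU")

On these numbers the 3-4 vCPU instances do indeed come out ahead per vCPU.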
You're right Michael, click to spin up instances is addicting... |
|
|
Sysadm@Nbg Volunteer moderator Volunteer tester Project scientist
Joined: 5 Feb 08 Posts: 1233 ID: 18646 Credit: 918,266,861 RAC: 386,090
|
has it already started? YES!!
GO GO GO!!!
____________
Sysadm@Nbg
my current lucky number: 113856050^65536 + 1
PSA-PRPNet-Stats-URL: http://u-g-f.de/PRPNet/
|
|
|
|
has it already started? YES!!
GO GO GO!!!
Darn, missed the start time.
Will have to wait for a PPS-Mega task to finish so I can begin crunching these huge PSP tasks.
Not that this will have a big impact on my team score.
Go AtP !
____________
"Accidit in puncto, quod non contingit in anno."
Something that does not occur in a year may, perchance, happen in a moment. |
|
|
|
FYI, Google Cloud gives a free $400 credit for first time users.
I signed up last year and never got around to using my credit at the time ($300). It still remains at $300 with 46 days remaining for the trial period.
Anyone have instructions for setting up a few instances? |
|
|
|
Sorry, I didn't write down instructions.
I basically followed the Digital Ocean instructions (the script commands are identical). The firewall only needs the 31416 entry, because you can open a terminal session from the Google UI (you do need to do a 'sudo su' before executing the script commands). I forget where you enter your ssh key, but it only needs to go in once. Otherwise, they are pretty easy to set up.
There is a CPU limit of 24 per region for the free accounts, and check to make sure you get an avx512-capable instance. The few I tried in the west region weren't avx512.
My go to instance type is - n2-highcpu-4 (4 vCPUs, 4 GB memory). |
|
|
|
Sorry, I didn't write down instructions.
I basically followed the Digital Ocean instructions (the script commands are identical). The firewall only needs the 31416 entry, because you can open a terminal session from the Google UI. I forget where you enter your ssh key, but it only needs to go in once. Otherwise, they are pretty easy to set up.
There is a CPU limit of 24 per region for the free accounts, and check to make sure you get an avx512-capable instance. The few I tried in the west region weren't avx512.
My go to instance type is - n2-highcpu-4 (4 vCPUs, 4 GB memory).
I did try using the google UI but I am having problems with the two cfg files. I tried editing them directly using vi but they are read-only. I tried doing a chmod (from 644 to 666) but it will not take my request to make them read-write. |
|
|
|
I did try using the google UI but I am having problems with the two cfg files. I tried editing them directly using vi but they are read-only. I tried doing a chmod (from 644 to 666) but it will not take my request to make them read-write.
Ran into that. You're not logged in as super user.
First thing you type when you connect to the instance 'sudo su'
Also, found out about custom instance types. You can set up a custom instance type with 4 CPUs and 2 GB of RAM. It's a little cheaper. Must be an N2 region (central or europe west). |
|
|
|
I did try using the google UI but I am having problems with the two cfg files. I tried editing them directly using vi but they are read-only. I tried doing a chmod (from 644 to 666) but it will not take my request to make them read-write.
Ran into that. You're not logged in as super user.
First thing you type when you connect to the instance 'sudo su'
Also, found out about custom instance types. You can set up a custom instance type with 4 CPUs and 2 GB of RAM. It's a little cheaper. Must be an N2 region (central or europe west).
Thanks for the heads-up. Back at it tomorrow since I am approaching local midnight. |
|
|
|
Hi Team,
Any issues with reporting? I see my computers completed 11 PSP tasks (7 pending; 4 valid). Challenge statistics page only displays 5 units completed.
Yes, I checked they are PSP tasks ;)
Good luck all!
Brendon |
|
|
|
Aww, nuts - think I started at 18:00 not 18:04. Attention to detail :( |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
Day 1 is complete!
Challenge: Transit of Mercury Across the Sun
App: 8 (PSP-LLR)
(As of 2019-11-02 18:31:49 UTC)
8079 tasks have been sent out. [CPU/GPU/anonymous_platform: 8074 (100%) / 0 (0%) / 5 (0%)]
Of those tasks that have been sent out:
428 (5%) were aborted. [428 (5%) / 0 (0%) / 0 (0%)]
737 (9%) came back with some kind of an error. [737 (9%) / 0 (0%) / 0 (0%)]
778 (10%) have returned a successful result. [775 (10%) / 0 (0%) / 3 (0%)]
6136 (76%) are still in progress. [6134 (76%) / 0 (0%) / 2 (0%)]
Of the tasks that have been returned successfully:
654 (84%) are pending validation. [652 (84%) / 0 (0%) / 2 (0%)]
124 (16%) have been successfully validated. [123 (16%) / 0 (0%) / 1 (0%)]
0 (0%) were invalid. [0 (0%) / 0 (0%) / 0 (0%)]
0 (0%) are inconclusive. [0 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=21221890. The leading edge was at n=20981446 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 1.15% as much as it had prior to the challenge!
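For anyone wondering how that percentage is computed: it is simply the challenge's advance in n divided by the leading edge at the start. A one-line check in Python with the numbers from this report:

start, current = 20_981_446, 21_221_890
print(f"{(current - start) / start * 100:.2f}%")   # 1.15%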
Pavel Atnashev has focused his computing farm on this challenge, and currently has a sizeable lead in both the individual and team standings. Will anyone be able to catch him?
Will we be able to eliminate any k's from the conjecture? We're certainly going to try!
____________
My lucky number is 75898524288+1 |
|
|
|
Finishing just one single task is incredibly hard. It will take centuries before these Sierpiński-related problems/conjectures are solved. /JeppeSN |
|
|
|
...
Pavel Atnashev has focused his computing farm on this challenge, and currently has a sizeable lead in both the individual and team standings. Will anyone be able to catch him?
No.
The last challenge cost over $2400 in AWS fees, and that was a 3-day challenge. This time it's 10 days, so no way. |
|
|
|
Finishing just one single task is incredibly hard. It will take centuries before these Sierpiński-related problems/conjectures are solved. /JeppeSN
I thought that as well when I started participating here. The leading edge of SoB was lower than PSP is now. Now, a decade later, the leading edge of SoB is more than double what it was for my first SoB workunits. Great progress continues to be made on these projects.
____________
|
|
|
|
Finishing just one single task is incredibly hard. It will take centuries before these Sierpiński-related problems/conjectures are solved. /JeppeSN
I thought that as well when I started participating here. The leading edge of SoB was lower than PSP is now. Now, a decade later, the leading edge of SoB is more than double what it was for my first SoB workunits. Great progress continues to be made on these projects.
I absolutely agree great progress continues to be made. In the thread id=7356, Yves Gallot has two posts on when the primary conjecture project here, the SoB, might be completed. I think it might take even longer. /JeppeSN |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
Two days are done; eight remain:
Challenge: Transit of Mercury Across the Sun
App: 8 (PSP-LLR)
(As of 2019-11-03 21:26:37 UTC)
11683 tasks have been sent out. [CPU/GPU/anonymous_platform: 11672 (100%) / 0 (0%) / 11 (0%)]
Of those tasks that have been sent out:
1241 (11%) were aborted. [1241 (11%) / 0 (0%) / 0 (0%)]
1074 (9%) came back with some kind of an error. [1074 (9%) / 0 (0%) / 0 (0%)]
2619 (22%) have returned a successful result. [2610 (22%) / 0 (0%) / 9 (0%)]
6749 (58%) are still in progress. [6747 (58%) / 0 (0%) / 2 (0%)]
Of the tasks that have been returned successfully:
1844 (70%) are pending validation. [1837 (70%) / 0 (0%) / 7 (0%)]
767 (29%) have been successfully validated. [765 (29%) / 0 (0%) / 2 (0%)]
1 (0%) were invalid. [1 (0%) / 0 (0%) / 0 (0%)]
7 (0%) are inconclusive. [7 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=21313013. The leading edge was at n=20981446 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 1.58% as much as it had prior to the challenge!
____________
My lucky number is 75898524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
Day 3 is done!
Challenge: Transit of Mercury Across the Sun
App: 8 (PSP-LLR)
(As of 2019-11-04 19:11:49 UTC)
14278 tasks have been sent out. [CPU/GPU/anonymous_platform: 14263 (100%) / 0 (0%) / 15 (0%)]
Of those tasks that have been sent out:
1367 (10%) were aborted. [1367 (10%) / 0 (0%) / 0 (0%)]
1410 (10%) came back with some kind of an error. [1410 (10%) / 0 (0%) / 0 (0%)]
4255 (30%) have returned a successful result. [4242 (30%) / 0 (0%) / 13 (0%)]
7246 (51%) are still in progress. [7244 (51%) / 0 (0%) / 2 (0%)]
Of the tasks that have been returned successfully:
2507 (59%) are pending validation. [2498 (59%) / 0 (0%) / 9 (0%)]
1728 (41%) have been successfully validated. [1724 (41%) / 0 (0%) / 4 (0%)]
4 (0%) were invalid. [4 (0%) / 0 (0%) / 0 (0%)]
16 (0%) are inconclusive. [16 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=21384128. The leading edge was at n=20981446 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 1.92% as much as it had prior to the challenge!
4255 tasks have been returned so far. That's over 1400 per day. Normal is less than 100 per day.
Impressive!
____________
My lucky number is 75898524288+1 |
|
|
Roger (Volunteer developer, Volunteer tester)
Joined: 27 Nov 11 Posts: 1138 ID: 120786 Credit: 268,668,824 RAC: 0
|
Just published,
Guide to the November 11th Transit of Mercury Across the Sun
https://www.universetoday.com/143562/our-guide-to-the-november-11th-2019-transit-of-mercury-across-the-sun/
Remember safety first! |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
We are halfway done! (Edit: 40% done!)
Challenge: Transit of Mercury Across the Sun
App: 8 (PSP-LLR)
(As of 2019-11-05 18:46:47 UTC)
16471 tasks have been sent out. [CPU/GPU/anonymous_platform: 16452 (100%) / 0 (0%) / 19 (0%)]
Of those tasks that have been sent out:
1889 (11%) were aborted. [1889 (11%) / 0 (0%) / 0 (0%)]
1446 (9%) came back with some kind of an error. [1446 (9%) / 0 (0%) / 0 (0%)]
5968 (36%) have returned a successful result. [5951 (36%) / 0 (0%) / 17 (0%)]
7168 (44%) are still in progress. [7166 (44%) / 0 (0%) / 2 (0%)]
Of the tasks that have been returned successfully:
3103 (52%) are pending validation. [3092 (52%) / 0 (0%) / 11 (0%)]
2831 (47%) have been successfully validated. [2825 (47%) / 0 (0%) / 6 (0%)]
11 (0%) were invalid. [11 (0%) / 0 (0%) / 0 (0%)]
23 (0%) are inconclusive. [23 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=21437782. The leading edge was at n=20981446 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 2.17% as much as it had prior to the challenge!
That's nearly 15 times the normal rate for PSP.
____________
My lucky number is 75898524288+1 |
|
|
|
We are halfway done!
Even better, we're only 40% done :)
____________
|
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
We are halfway done!
Even better, we're only 40% done :)
Oops.
I'm really messing up these status reports. First of all, I forgot about the shift back to standard time and have been aiming for 2 PM local time instead of 1 PM local time.
And then I somehow decided that 4 days was halfway. :)
____________
My lucky number is 75898524288+1 |
|
|
|
We are halfway done!
Even better, we're only 40% done :)
Oops.
I'm really messing up these status reports. First of all, I forgot about the shift back to standard time and have been aiming for 2 PM local time instead of 1 PM local time.
And then I somehow decided that 4 days was halfway. :)
And you expect us to trust you with apps to determine primes? :D :D
At least we know you're human.
____________
5912891284485*2^1290000-1
(Sophie Germain Prime Search) |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
We are halfway(?) done!
Challenge: Transit of Mercury Across the Sun
App: 8 (PSP-LLR)
(As of 2019-11-06 19:03:11 UTC)
18653 tasks have been sent out. [CPU/GPU/anonymous_platform: 18629 (100%) / 0 (0%) / 24 (0%)]
Of those tasks that have been sent out:
2002 (11%) were aborted. [2002 (11%) / 0 (0%) / 0 (0%)]
1577 (8%) came back with some kind of an error. [1577 (8%) / 0 (0%) / 0 (0%)]
7883 (42%) have returned a successful result. [7861 (42%) / 0 (0%) / 22 (0%)]
7191 (39%) are still in progress. [7189 (39%) / 0 (0%) / 2 (0%)]
Of the tasks that have been returned successfully:
3453 (44%) are pending validation. [3441 (44%) / 0 (0%) / 12 (0%)]
4384 (56%) have been successfully validated. [4374 (55%) / 0 (0%) / 10 (0%)]
21 (0%) were invalid. [21 (0%) / 0 (0%) / 0 (0%)]
25 (0%) are inconclusive. [25 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=21507573. The leading edge was at n=20981446 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 2.51% as much as it had prior to the challenge!
____________
My lucky number is 75898524288+1 |
|
|
|
I need a little help guys. Check out the number one user for this challenge.
http://www.primegrid.com/challenge/2019_9/top_users.html
http://www.primegrid.com/hosts_user.php?userid=914937
He only has two 2600K CPUs and one Q9400 CPU.
Could somebody explain what is going on here?
Is bunkering allowed at this time? Or is there just cheating going on?
Or is the PrimeGrid score system just broken?
In challenges like this, only the workunits sent out and returned within the challenge time count, right? |
|
|
|
Cheater?? Hohoooo... megarofl...
Yes, we would welcome that "user" on rakesearch, tomasgrid, amicablenumbers, or ODLK, where every result counts 100%. But he likes to play the "prime" lottery... sad. :)) |
|
|
Nick
Joined: 11 Jul 11 Posts: 2301 ID: 105020 Credit: 10,038,690,256 RAC: 29,453,415
|
I need a little help guys. Check out the number one user for this challenge.
http://www.primegrid.com/challenge/2019_9/top_users.html
http://www.primegrid.com/hosts_user.php?userid=914937
He only has two 2600K CPUs and one Q9400 CPU.
Could somebody explain what is going on here?
Is bunkering allowed at this time? Or is there just cheating going on?
Or is the PrimeGrid score system just broken?
In challenges like this, only the workunits sent out and returned within the challenge time count, right?
I got an explanation from Scott and Michael. Pavel is using 3 computers through which he is running a huge amount of other computers running the tasks. He custom wrote software to be able to do this. I'm sure Michael will be able to give a more eloquent explanation |
|
|
|
I got an explanation from Scott and Michael. Pavel is using 3 computers through which he is running a huge amount of other computers running the tasks. He custom wrote software to be able to do this. I'm sure Michael will be able to give a more eloquent explanation
Yes, it looks like a really big cluster based on server Xeon E5-26xx CPUs. The desktops are used as BOINC gateways, distributing tasks over 2 private subnets, 10.1.1.xx and 10.2.1.xx.
See, e.g., this task's output: http://www.primegrid.com/result.php?resultid=1036428551
<core_client_version>7.12.1</core_client_version><![CDATA[
<stderr_txt>
BOINC llr wrapper (version 8.00.99)
Using Jean Penne's llr (64 bit)
Primality test requested
LLR Program - Version 3.8.21.99, using Gwnum Library Version 28.14.99
LLR command line: 10.2.1.231/sllr64 -iNUMA0 http://primetest:5555/api/
Using all-complex AVX FFT length 2304K, Pass1=0, Pass2=0, clm=0, 10 threads, a = 0
2660v2, time per bit 2.208 ms.
</stderr_txt>
]]> |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
el_teniente wrote: <expletives deleted>
Funny??!!
I hope Michael Goetz will delete your comments and mine )
1) I'm Russian
2) It's the final phrase of my favorite Boney M. song, "Rasputin"
3) It was said just to note that Pavel Atnashev (surely using national equipment) demonstrated this miracle NOT IN a Russian BOINC project (such as RakeSearch, ODLK and so on) BUT here in a foreign land)
Gah, don't quote the profanity! Now I've got to remove your post too.
Everyone just cool down. You all actually seem to be on the same side. I don't want to see people banished from the forums over a misunderstanding.
Be happy and keep on crunching!
And, yes, Pavel isn't in anyway cheating, and we're very, very appreciative of all he's doing.
____________
My lucky number is 75898524288+1 |
|
|
|
Yes, it really was just a misunderstanding. |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
6 days done!
Challenge: Transit of Mercury Across the Sun
App: 8 (PSP-LLR)
(As of 2019-11-07 20:37:54 UTC)
21366 tasks have been sent out. [CPU/GPU/anonymous_platform: 21338 (100%) / 0 (0%) / 28 (0%)]
Of those tasks that have been sent out:
2579 (12%) were aborted. [2579 (12%) / 0 (0%) / 0 (0%)]
1650 (8%) came back with some kind of an error. [1650 (8%) / 0 (0%) / 0 (0%)]
9861 (46%) have returned a successful result. [9835 (46%) / 0 (0%) / 26 (0%)]
7276 (34%) are still in progress. [7274 (34%) / 0 (0%) / 2 (0%)]
Of the tasks that have been returned successfully:
3863 (39%) are pending validation. [3849 (39%) / 0 (0%) / 14 (0%)]
5945 (60%) have been successfully validated. [5933 (60%) / 0 (0%) / 12 (0%)]
28 (0%) were invalid. [28 (0%) / 0 (0%) / 0 (0%)]
25 (0%) are inconclusive. [25 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=21580493. The leading edge was at n=20981446 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 2.86% as much as it had prior to the challenge!
____________
My lucky number is 75898524288+1 |
|
|
composite (Volunteer tester)
Joined: 16 Feb 10 Posts: 1172 ID: 55391 Credit: 1,211,016,878 RAC: 1,196,437
|
It is a Russian miracle.
I've tried to estimate how many cores he has.
He's putting 10 threads on each task and getting 40% CPU efficiency,
whereas I get 95% efficiency using 4 threads on a 4-core system, and 83% efficiency on a 6-core system running 2 tasks simultaneously with 3 threads each.
His total throughput is like having about 2900 cores: (his tasks completed: 1677)/(my tasks completed: 13) * (my cores: 10) = 1290; 1290 * (my avg efficiency: ~ 90%) / (his avg efficiency: 40%) =~ 2900 cores.
The straightforward calculation of his tasks in progress (288) times 10 threads gives the answer of 2880 cores. |
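That estimate is easy to reproduce. A Python sketch using the figures quoted in this post (the efficiencies and task counts are composite's observations, not measured values):

his_tasks, my_tasks, my_cores = 1677, 13, 10
my_eff, his_eff = 0.90, 0.40

scaled = his_tasks / my_tasks * my_cores      # ~1290 "my-core equivalents"
estimate = scaled * my_eff / his_eff          # ~2900 cores after adjusting for efficiency
print(round(scaled), round(estimate))

# Cross-check: 288 tasks in progress x 10 threads each
print(288 * 10)                               # 2880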
|
|
|
I have 100% task cache, so you need to cut that in half. 56+88 tasks running. |
|
|
|
Sorry to have caused a major misunderstanding here, that was not my intention. :)
I didn't know that people go so far as to run custom software at this point, apart from optimized .exe files.
I still have to figure out how to convince BOINC on Windows to put tasks on NUMA nodes, and then stop it from throwing tasks around from one node to the other.
Pavel, how many processors, and which ones are you running for this challenge? |
|
|
|
I still have to figure out how to convince BOINC on Windows to put tasks on NUMA nodes, and then stop it from throwing tasks around from one node to the other.
When I failed at this I started writing my own software. It's the only known way for me.
The bulk of my power is Xeon 2660v2. I run one task per CPU, so it's 144 CPUs at the moment. L3 cache is the king. |
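Pavel's figures line up with the earlier estimate once the 100% task cache is taken into account. A quick Python check (the 10-core count for the Xeon E5-2660 v2 is from Intel's published spec, not from this thread):

running_tasks = 56 + 88        # only these are actually crunching; the rest of the cache is queued
cores_per_cpu = 10             # Xeon E5-2660 v2
print(running_tasks, running_tasks * cores_per_cpu)   # 144 CPUs, 1440 cores -- half of the naive 2880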
|
|
composite (Volunteer tester)
Joined: 16 Feb 10 Posts: 1172 ID: 55391 Credit: 1,211,016,878 RAC: 1,196,437
|
I have 100% task cache, so you need to cut that in half. 56+88 tasks running.
You are giving up a lot of chances to return tasks first. You should download tasks just-in-time, unless you are purposely giving the wingman a chance. That doesn't seem to be the motivation when you have all cores running on a single task.
Also, is turbo enabled on those CPUs? Fully loaded they run at 2.2 GHz but are capable of 3.0 GHz in turbo mode. In theory you could run one or two threads on each socket and it would go up to 36% faster, unless LLR is so intensive that heat is an issue. Since there are diminishing returns on applying more threads, is there an optimal number of threads less than the number of cores?
The LLR executable is 37 MB statically linked, but your L3 cache is 25 MB. How much of the executable needs to be cache-resident to complete a single iteration of LLR? BOINC Manager shows the working set size is over 90 MB, so there's potential speedup by having a thread dedicated to pre-reading portions of the executable to ensure it occupies L3 just before it is used. That co-routine would be tiny and fit in another core's L1. But I'm unsure what the cache-coherency protocol is - does cache line occupancy in L2 prevent its eviction from L3? |
|
|
|
You are giving up a lot of chances to return tasks first. You should download tasks just-in-time, unless you are purposely giving the wingman a chance.
I respect my wingmen. Sometimes it takes 3 months for them to validate a task that I returned in 16 hours.
Since there are diminishing returns on applying more threads is there an optimal number of threads less than the number of cores?
No. Performance is limited by access to data, not by the speed of cores.
How much of the executable needs to be cache-resident to complete a single iteration of LLR?
The code is tiny and not worth mentioning. What is really huge is the FFT data. A 2304K FFT requires 20MB of L3 cache. If you have less cache, or if you run more than one LLR, you get such a huge drop in performance that it can't be compensated for by lower multithreading overhead.
Some of my 2660v2's have just one very slow memory module. It doesn't affect performance a bit because 2304K FFT fits into 25MB L3 cache. |
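The 20 MB figure follows directly from the FFT length. A tiny Python check, assuming gwnum keeps roughly one double-precision value per FFT element (a rule of thumb, not something stated in this thread):

fft_elements = 2304 * 1024      # 2304K FFT
bytes_per_element = 8           # one double per element (assumption)
print(fft_elements * bytes_per_element / 2**20, "MiB")   # 18.0 MiB, i.e. roughly 20 MB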
|
|
mackerel Volunteer tester
Joined: 2 Oct 08 Posts: 2652 ID: 29980 Credit: 570,442,335 RAC: 10,182
|
How much of the executable needs to be cache-resident to complete a single iteration of LLR? BOINC Manager shows the working set size is over 90 MB, so there's potential speedup by having a thread dedicated to pre-reading portions of the executable to ensure it occupies L3 just before it is used. That co-routine would be tiny and fit in another core's L1. But I'm unsure what the cache-coherency protocol is - does cache line occupancy in L2 prevent it's eviction from L3?
From an outside looking in perspective, code size doesn't matter. FFT data size compared to effective L2/L3 cache seems to be dominant before ram factors in. The gwnum code doing heavy lifting already tries to pre-cache data before it is needed.
I say effective L2/L3 because Intel CPUs before Skylake-X have an inclusive cache: data is duplicated between L2 and L3, so the effective size is the L3. Skylake-X has a non-inclusive cache, so code could make use of the total L2+L3 in the best case. This seems to work, based on limited observations on Skylake-X. I assume the same could apply to the exclusive cache in Zen, but there are further complications there from the CCX structure and large FFT sizes, which make it not as easy as it first seems. For smaller FFTs it fits in L3 regardless. Ratio-wise, Zen has a much bigger L3 compared to L2, so L2 won't matter much. Skylake-X is much closer in L2/L3, so L2 potentially plays a bigger role. |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
7 days done; three to go.
Challenge: Transit of Mercury Across the Sun
App: 8 (PSP-LLR)
(As of 2019-11-08 18:08:43 UTC)
23956 tasks have been sent out. [CPU/GPU/anonymous_platform: 23924 (100%) / 0 (0%) / 32 (0%)]
Of those tasks that have been sent out:
2969 (12%) were aborted. [2969 (12%) / 0 (0%) / 0 (0%)]
1738 (7%) came back with some kind of an error. [1738 (7%) / 0 (0%) / 0 (0%)]
11765 (49%) have returned a successful result. [11735 (49%) / 0 (0%) / 30 (0%)]
7484 (31%) are still in progress. [7482 (31%) / 0 (0%) / 2 (0%)]
Of the tasks that have been returned successfully:
4022 (34%) are pending validation. [4012 (34%) / 0 (0%) / 10 (0%)]
7666 (65%) have been successfully validated. [7646 (65%) / 0 (0%) / 20 (0%)]
33 (0%) were invalid. [33 (0%) / 0 (0%) / 0 (0%)]
44 (0%) are inconclusive. [44 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=21663416. The leading edge was at n=20981446 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 3.25% as much as it had prior to the challenge!
____________
My lucky number is 75898524288+1 |
|
|
|
WOW! all on 3 Hosts with 4 Cores.
Quad CPU Q9400, i7-2600K, i7-2600K Power House indeed.
1 Pavel Atnashev Ural Federal University 58 032 294.67 1 884
http://www.primegrid.com/hosts_user.php?userid=914937 |
|
|
|
WOW! all on 3 Hosts with 4 Cores.
Quad CPU Q9400, i7-2600K, i7-2600K Power House indeed.
1 Pavel Atnashev Ural Federal University 58 032 294.67 1 884
http://www.primegrid.com/hosts_user.php?userid=914937
See messages 134642 and 134651. |
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
WOW! all on 3 Hosts with 4 Cores.
Quad CPU Q9400, i7-2600K, i7-2600K Power House indeed.
1 Pavel Atnashev Ural Federal University 58 032 294.67 1 884
http://www.primegrid.com/hosts_user.php?userid=914937
The Core2Quad is running at 3.4THz. ;)
____________
My lucky number is 75898524288+1 |
|
|
|
The Core2Quad is running at 3.4THz. ;)
I wonder the size of the cache memory and the RAM speed :)
____________
"Accidit in puncto, quod non contingit in anno."
Something that does not occur in a year may, perchance, happen in a moment. |
|
|
|
The Core2Quad is running at 3.4THz. ;)
I wonder the size of the cache memory and the RAM speed :)
I wonder at the thermal output ;)
____________
Badge score: 5*1 + 6*1 + 7*8 + 8*5 + 10*2 + 11*2 + 12*1 + 13*1 = 174 |
|
|
Ken_g6 Volunteer developer
Joined: 4 Jul 06 Posts: 940 ID: 3110 Credit: 265,155,351 RAC: 110,466
|
WOW! all on 3 Hosts with 4 Cores.
Quad CPU Q9400, i7-2600K, i7-2600K Power House indeed.
1 Pavel Atnashev Ural Federal University 58 032 294.67 1 884
http://www.primegrid.com/hosts_user.php?userid=914937
The Core2Quad is running at 3.4THz. ;)
Hey, I still have one of those! The low-end mobo probably wouldn't support running it that fast, though. :( |
|
|
robish Volunteer moderator Volunteer tester
Joined: 7 Jan 12 Posts: 2223 ID: 126266 Credit: 7,942,315,293 RAC: 5,419,769
|
WOW! all on 3 Hosts with 4 Cores.
Quad CPU Q9400, i7-2600K, i7-2600K Power House indeed.
1 Pavel Atnashev Ural Federal University 58 032 294.67 1 884
http://www.primegrid.com/hosts_user.php?userid=914937
The Core2Quad is running at 3.4THz. ;)
🤣
____________
My lucky number 10590941048576+1 |
|
|
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
Joined: 17 Oct 05 Posts: 2417 ID: 1178 Credit: 20,023,545,253 RAC: 20,488,901
|
WOW! all on 3 Hosts with 4 Cores.
Quad CPU Q9400, i7-2600K, i7-2600K Power House indeed.
1 Pavel Atnashev Ural Federal University 58 032 294.67 1 884
http://www.primegrid.com/hosts_user.php?userid=914937
The Core2Quad is running at 3.4THz. ;)
Q9400 = Quantum 9400!
|
|
|
composite (Volunteer tester)
Joined: 16 Feb 10 Posts: 1172 ID: 55391 Credit: 1,211,016,878 RAC: 1,196,437
|
WOW! all on 3 Hosts with 4 Cores.
Quad CPU Q9400, i7-2600K, i7-2600K Power House indeed.
1 Pavel Atnashev Ural Federal University 58 032 294.67 1 884
http://www.primegrid.com/hosts_user.php?userid=914937
The Core2Quad is running at 3.4THz. ;)
Q9400 = Quantum 9400!
So why bother with this long-running LLR algorithm when you can just factor it in an instant? |
|
|
Ken_g6 Volunteer developer
Joined: 4 Jul 06 Posts: 940 ID: 3110 Credit: 265,155,351 RAC: 110,466
|
WOW! all on 3 Hosts with 4 Cores.
Quad CPU Q9400, i7-2600K, i7-2600K Power House indeed.
1 Pavel Atnashev Ural Federal University 58 032 294.67 1 884
http://www.primegrid.com/hosts_user.php?userid=914937
The Core2Quad is running at 3.4THz. ;)
Q9400 = Quantum 9400!
So why bother with this long-running LLR algorithm when you can just factor it in an instant?
I was going to say that Shor's Algorithm takes longer than that. But then I looked it up and I read:
The efficiency of Shor's algorithm is due to the efficiency of the quantum Fourier transform, and modular exponentiation by repeated squarings.
Basically, LLR tests a number by doing just one of those! But it would require a lot of qubits. At least n*log(n) for testing 2^n. I don't see a computer with millions of qubits coming anytime soon. |
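To put that in perspective, applying the n*log(n) scaling from the post above to the current PSP leading edge gives a rough qubit count (a sketch; the exponent is approximate and the scaling is only a lower bound):

import math

n = 21_000_000                    # roughly the current PSP exponent
print(f"~{n * math.log2(n):.1e} qubits")   # ~5.1e+08, i.e. hundreds of millions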
|
|
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
|
Eight days done!
Two to go.
Challenge: Transit of Mercury Across the Sun
App: 8 (PSP-LLR)
(As of 2019-11-09 20:10:39 UTC)
26268 tasks have been sent out. [CPU/GPU/anonymous_platform: 26232 (100%) / 0 (0%) / 36 (0%)]
Of those tasks that have been sent out:
3102 (12%) were aborted. [3102 (12%) / 0 (0%) / 0 (0%)]
1815 (7%) came back with some kind of an error. [1815 (7%) / 0 (0%) / 0 (0%)]
13964 (53%) have returned a successful result. [13930 (53%) / 0 (0%) / 34 (0%)]
7387 (28%) are still in progress. [7385 (28%) / 0 (0%) / 2 (0%)]
Of the tasks that have been returned successfully:
4431 (32%) are pending validation. [4419 (32%) / 0 (0%) / 12 (0%)]
9433 (68%) have been successfully validated. [9411 (67%) / 0 (0%) / 22 (0%)]
48 (0%) were invalid. [48 (0%) / 0 (0%) / 0 (0%)]
52 (0%) are inconclusive. [52 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=21727832. The leading edge was at n=20981446 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 3.56% as much as it had prior to the challenge!
____________
My lucky number is 75898524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
                               
|
It's the last day! There's less than 21 hours remaining.
Challenge: Transit of Mercury Across the Sun
App: 8 (PSP-LLR)
(As of 2019-11-10 21:32:24 UTC)
28717 tasks have been sent out. [CPU/GPU/anonymous_platform: 28677 (100%) / 0 (0%) / 40 (0%)]
Of those tasks that have been sent out:
3811 (13%) were aborted. [3811 (13%) / 0 (0%) / 0 (0%)]
1930 (7%) came back with some kind of an error. [1930 (7%) / 0 (0%) / 0 (0%)]
16179 (56%) have returned a successful result. [16140 (56%) / 0 (0%) / 39 (0%)]
6797 (24%) are still in progress. [6796 (24%) / 0 (0%) / 1 (0%)]
Of the tasks that have been returned successfully:
4414 (27%) are pending validation. [4401 (27%) / 0 (0%) / 13 (0%)]
11655 (72%) have been successfully validated. [11629 (72%) / 0 (0%) / 26 (0%)]
60 (0%) were invalid. [60 (0%) / 0 (0%) / 0 (0%)]
50 (0%) are inconclusive. [50 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=21790046. The leading edge was at n=20981446 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 3.85% as much as it had prior to the challenge!
____________
My lucky number is 75898^524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
                               
|
With less than a day remaining in the challenge, it's time to remind everyone...
At the Conclusion of the Challenge
When the challenge completes, we would prefer that users who are "moving on" finish the tasks they have already downloaded; if not, please ABORT the WU's (and then UPDATE the PrimeGrid project) instead of DETACHING, RESETTING, or PAUSING.
ABORTING WU's allows them to be recycled immediately, making for a much faster "clean up" at the end of a Challenge. DETACHING, RESETTING, and PAUSING WU's cause them to remain in limbo until they EXPIRE; we must then wait for the WU's to expire before they can be sent out again to be completed.
Likewise, if you're shutting down the computer for an extended period of time, or deleting the VM (Virtual Machine), please ABORT all remaining tasks first. Also, be aware that merely shutting off a cloud server doesn't stop the billing. You have to destroy/delete the server if you don't want to continue to be charged for it.
Thank you!
____________
My lucky number is 75898^524288+1 |
|
|
|
It's the last day! There's less than 21 hours remaining.
Challenge: Transit of Mercury Across the Sun
App: 8 (PSP-LLR)
(As of 2019-11-10 21:32:24 UTC)
28717 tasks have been sent out. [CPU/GPU/anonymous_platform: 28677 (100%) / 0 (0%) / 40 (0%)]
Of those tasks that have been sent out:
3811 (13%) were aborted. [3811 (13%) / 0 (0%) / 0 (0%)]
1930 (7%) came back with some kind of an error. [1930 (7%) / 0 (0%) / 0 (0%)]
16179 (56%) have returned a successful result. [16140 (56%) / 0 (0%) / 39 (0%)]
6797 (24%) are still in progress. [6796 (24%) / 0 (0%) / 1 (0%)]
Of the tasks that have been returned successfully:
4414 (27%) are pending validation. [4401 (27%) / 0 (0%) / 13 (0%)]
11655 (72%) have been successfully validated. [11629 (72%) / 0 (0%) / 26 (0%)]
60 (0%) were invalid. [60 (0%) / 0 (0%) / 0 (0%)]
50 (0%) are inconclusive. [50 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=21790046. The leading edge was at n=20981446 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 3.85% as much as it had prior to the challenge!
So, we're nearing about a 4% increase in the leading edge. Since higher n require more work, what would the work percentage increase in the project be? This would be an interesting statistic to also see for the challenges.
I would guess it could easily be double the increase in the leading edge? So could we have advanced the project, work-wise, by something like 10% or more? |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
                               
|
So, we're nearing about a 4% increase in the leading edge. Since higher n require more work, what would the work percentage increase in the project be? This would be an interesting statistic to also see for the challenges.
I would guess it could easily be double the increase in the leading edge? So could we have advanced the project, work-wise, by something like 10% or more?
Yes, that would be an interesting number. The first way is to estimate it from the range of numbers tested: assume an even distribution of candidates and use some algebra to come up with the figure. Anyone can do that with a calculator.
The second, slightly more accurate way of doing it is to use the actual candidates for the calculation. When we are doing a double check, that's what I do, because I have a list of all the candidates being checked. Normally, however, I only have access to a recent subset of candidates: after a short while, the data is purged from the database to make space and is only available offline.
In the case of PSP, I have data going back to n=500220, so I can give you numbers right now, but this isn't something that is normally available. It can't be done for all projects.
Going by the leading edge, and counting only the candidates from n=500220 onward, the project has advanced by 9.8% when you take into account the size of each candidate.
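As a rough way to reproduce that kind of work-weighted figure, here is a minimal sketch. It assumes (my assumption, not PrimeGrid's stated model) that the cost of one LLR/Proth test at exponent n scales roughly like n^2 * log(n), i.e. about n squarings each costing about n*log(n) with FFT multiplication, and the candidate list in the usage comment is hypothetical:

import math

def llr_cost(n):
    """Rough relative cost of one LLR-style test of k*2^n + 1 (assumed model)."""
    return n * n * math.log(n)

def work_weighted_progress(candidate_exponents, start_edge, end_edge):
    """Work done inside [start_edge, end_edge) as a fraction of the work done
    on all candidates below start_edge (mirrors the 'advanced X% as much as
    before the challenge' style of figure)."""
    before = sum(llr_cost(n) for n in candidate_exponents if n < start_edge)
    during = sum(llr_cost(n) for n in candidate_exponents if start_edge <= n < end_edge)
    return during / before

# Hypothetical usage (the real candidate list lives in PrimeGrid's database):
# exponents = [500220, 500333, ..., 21821837]
# print(work_weighted_progress(exponents, 20981446, 21821837))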
____________
My lucky number is 75898^524288+1 |
|
|
|
Regarding the transit, I look up and all I can see are clouds everywhere :(
____________
"Accidit in puncto, quod non contingit in anno."
Something that does not occur in a year may, perchance, happen in a moment. |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
                               
|
Regarding the transit, I look up and all I can see are clouds everywhere :(
I've got clear skies (NYC area), and eclipse glasses for protection, but as expected it's too small to see.
____________
My lucky number is 75898^524288+1 |
|
|
|
Regarding the transit, I look up and all I can see are clouds everywhere :(
Same here...
____________
5912891284485*2^1290000-1
(Sophie Germain Prime Search) |
|
|
Ken_g6 Volunteer developer
 Send message
Joined: 4 Jul 06 Posts: 940 ID: 3110 Credit: 265,155,351 RAC: 110,466
                            
|
I tried my projection trick, but I didn't see any spot before my eyes. Which, I suppose, is both good and bad. |
|
|
|
I was lucky, as the weather was good at the start of the transit. But after one hour it got cloudy. It IS pretty hard to spot.
|
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
                               
|
We are done!
Challenge: Transit of Mercury Across the Sun
App: 8 (PSP-LLR)
(As of 2019-11-11 18:31:25 UTC)
30083 tasks have been sent out. [CPU/GPU/anonymous_platform: 30041 (100%) / 0 (0%) / 42 (0%)]
Of those tasks that have been sent out:
4486 (15%) were aborted. [4486 (15%) / 0 (0%) / 0 (0%)]
1946 (6%) came back with some kind of an error. [1946 (6%) / 0 (0%) / 0 (0%)]
18324 (61%) have returned a successful result. [18283 (61%) / 0 (0%) / 41 (0%)]
5277 (18%) are still in progress. [5276 (18%) / 0 (0%) / 1 (0%)]
Of the tasks that have been returned successfully:
4134 (23%) are pending validation. [4123 (23%) / 0 (0%) / 11 (0%)]
14026 (77%) have been successfully validated. [13996 (76%) / 0 (0%) / 30 (0%)]
74 (0%) were invalid. [74 (0%) / 0 (0%) / 0 (0%)]
90 (0%) are inconclusive. [90 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=21821837. The leading edge was at n=20981446 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 4.01% as much as it had prior to the challenge!
18324 tasks were returned. That's 18 times what I would normally expect to see in ten days. Great job everyone!
____________
My lucky number is 75898^524288+1 |
|
|
|
I was lucky, as the weather was good at the start of the transit. But after one hour it got cloudy. It IS pretty hard to spot.
Nice! Thanks for sharing this. |
|
|
|
I was lucky, as the weather was good at the start of the transit. But after one hour it got cloudy. It IS pretty hard to spot.
Nice! Thanks for sharing this.
Yes, nice! I have to scroll up and down a bit to be certain what is dirt on my display and what is a planet, but then I do see it. /JeppeSN |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
                               
|
The cleanup begins...
Cleanup Status:
Nov 11: Transit of Mercury Across the Sun: 4201 tasks outstanding; 3523 affecting individual (272) scoring positions; 3119 affecting team (71) scoring positions.
____________
My lucky number is 75898^524288+1 |
|
|
|
Great race everyone.
It was a real nail-biter to decide 2, 3, and 4 this time.
____________
|
|
|
robish Volunteer moderator Volunteer tester
 Send message
Joined: 7 Jan 12 Posts: 2223 ID: 126266 Credit: 7,942,315,293 RAC: 5,419,769
                               
|
Great race everyone.
It was a real nail biter to decide 2, 3, 4 this time.
👍 and 9 and 10 were really close too 🙂
____________
My lucky number 1059094^1048576+1 |
|
|
|
Very many thanks to the PrimeGrid team!
Thanks to the competitors as well!
Great race!
Hope to see you all back for GFN!
____________
Greetings, Jens
147433824^131072+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
                               
|
Cleanup Status:
Nov 11: Transit of Mercury Across the Sun: 4201 tasks outstanding; 3523 affecting individual (272) scoring positions; 3119 affecting team (71) scoring positions.
Nov 12: Transit of Mercury Across the Sun: 3764 tasks outstanding; 3143 affecting individual (271) scoring positions; 2550 affecting team (66) scoring positions.
Nov 13: Transit of Mercury Across the Sun: 3328 tasks outstanding; 2764 affecting individual (264) scoring positions; 1714 affecting team (62) scoring positions.
Nov 14: Transit of Mercury Across the Sun: 3027 tasks outstanding; 2415 affecting individual (259) scoring positions; 1295 affecting team (57) scoring positions.
Nov 15: Transit of Mercury Across the Sun: 2834 tasks outstanding; 2209 affecting individual (253) scoring positions; 1215 affecting team (55) scoring positions.
Nov 16: Transit of Mercury Across the Sun: 2662 tasks outstanding; 2066 affecting individual (251) scoring positions; 1140 affecting team (54) scoring positions.
Nov 17: Transit of Mercury Across the Sun: 2503 tasks outstanding; 1863 affecting individual (243) scoring positions; 1060 affecting team (51) scoring positions.
Nov 18: Transit of Mercury Across the Sun: 2354 tasks outstanding; 1725 affecting individual (238) scoring positions; 1010 affecting team (50) scoring positions.
Nov 19: Transit of Mercury Across the Sun: 2222 tasks outstanding; 1626 affecting individual (236) scoring positions; 956 affecting team (50) scoring positions.
Nov 20: Transit of Mercury Across the Sun: 2120 tasks outstanding; 1538 affecting individual (231) scoring positions; 829 affecting team (47) scoring positions.
Nov 21: Transit of Mercury Across the Sun: 2013 tasks outstanding; 1464 affecting individual (227) scoring positions; 794 affecting team (47) scoring positions.
Nov 22: Transit of Mercury Across the Sun: 1914 tasks outstanding; 1364 affecting individual (221) scoring positions; 759 affecting team (46) scoring positions.
Nov 23: Transit of Mercury Across the Sun: 1841 tasks outstanding; 1310 affecting individual (219) scoring positions; 716 affecting team (45) scoring positions.
Nov 24: Transit of Mercury Across the Sun: 1757 tasks outstanding; 1250 affecting individual (216) scoring positions; 381 affecting team (43) scoring positions.
Nov 25: Transit of Mercury Across the Sun: 1638 tasks outstanding; 1153 affecting individual (209) scoring positions; 362 affecting team (41) scoring positions.
Nov 26: Transit of Mercury Across the Sun: 1547 tasks outstanding; 1097 affecting individual (203) scoring positions; 344 affecting team (41) scoring positions.
Nov 27: Transit of Mercury Across the Sun: 1463 tasks outstanding; 1036 affecting individual (199) scoring positions; 315 affecting team (38) scoring positions.
Nov 28: Transit of Mercury Across the Sun: 1359 tasks outstanding; 966 affecting individual (192) scoring positions; 278 affecting team (36) scoring positions.
Nov 29: Transit of Mercury Across the Sun: 1274 tasks outstanding; 893 affecting individual (184) scoring positions; 262 affecting team (35) scoring positions.
Nov 30: Transit of Mercury Across the Sun: 1203 tasks outstanding; 822 affecting individual (179) scoring positions; 249 affecting team (34) scoring positions.
____________
My lucky number is 75898^524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
                               
|
Cleanup Status:
Nov 11: Transit of Mercury Across the Sun: 4201 tasks outstanding; 3523 affecting individual (272) scoring positions; 3119 affecting team (71) scoring positions.
Nov 30: Transit of Mercury Across the Sun: 1203 tasks outstanding; 822 affecting individual (179) scoring positions; 249 affecting team (34) scoring positions.
Dec 1: Transit of Mercury Across the Sun: 1112 tasks outstanding; 757 affecting individual (173) scoring positions; 226 affecting team (33) scoring positions.
Dec 2: Transit of Mercury Across the Sun: 1046 tasks outstanding; 700 affecting individual (168) scoring positions; 175 affecting team (31) scoring positions.
Dec 3: Transit of Mercury Across the Sun: 955 tasks outstanding; 636 affecting individual (157) scoring positions; 160 affecting team (31) scoring positions.
Dec 4: Transit of Mercury Across the Sun: 897 tasks outstanding; 593 affecting individual (152) scoring positions; 146 affecting team (29) scoring positions.
Dec 5: Transit of Mercury Across the Sun: 820 tasks outstanding; 536 affecting individual (149) scoring positions; 108 affecting team (28) scoring positions.
Dec 6: Transit of Mercury Across the Sun: 768 tasks outstanding; 499 affecting individual (144) scoring positions; 99 affecting team (26) scoring positions.
Dec 7: Transit of Mercury Across the Sun: 707 tasks outstanding; 443 affecting individual (136) scoring positions; 92 affecting team (26) scoring positions.
Dec 8: Transit of Mercury Across the Sun: 626 tasks outstanding; 381 affecting individual (125) scoring positions; 71 affecting team (25) scoring positions.
Dec 9: Transit of Mercury Across the Sun: 625 tasks outstanding; 381 affecting individual (125) scoring positions; 71 affecting team (25) scoring positions.
Dec 10: Transit of Mercury Across the Sun: 591 tasks outstanding; 357 affecting individual (121) scoring positions; 69 affecting team (25) scoring positions.
Dec 11: Transit of Mercury Across the Sun: 559 tasks outstanding; 334 affecting individual (116) scoring positions; 66 affecting team (25) scoring positions.
Dec 12: Transit of Mercury Across the Sun: 525 tasks outstanding; 319 affecting individual (115) scoring positions; 61 affecting team (23) scoring positions.
Dec 13: Transit of Mercury Across the Sun: 485 tasks outstanding; 293 affecting individual (113) scoring positions; 58 affecting team (23) scoring positions.
Dec 14: Transit of Mercury Across the Sun: 455 tasks outstanding; 266 affecting individual (105) scoring positions; 53 affecting team (22) scoring positions.
Dec 15: Transit of Mercury Across the Sun: 438 tasks outstanding; 256 affecting individual (101) scoring positions; 50 affecting team (20) scoring positions.
Dec 16: Transit of Mercury Across the Sun: 419 tasks outstanding; 237 affecting individual (96) scoring positions; 50 affecting team (20) scoring positions.
Dec 17: Transit of Mercury Across the Sun: 386 tasks outstanding; 208 affecting individual (92) scoring positions; 47 affecting team (19) scoring positions.
Dec 18: Transit of Mercury Across the Sun: 355 tasks outstanding; 187 affecting individual (86) scoring positions; 44 affecting team (18) scoring positions.
Dec 19: Transit of Mercury Across the Sun: 331 tasks outstanding; 171 affecting individual (80) scoring positions; 42 affecting team (18) scoring positions.
Dec 20: Transit of Mercury Across the Sun: 318 tasks outstanding; 164 affecting individual (78) scoring positions; 31 affecting team (17) scoring positions.
Dec 21: Transit of Mercury Across the Sun: 294 tasks outstanding; 148 affecting individual (72) scoring positions; 26 affecting team (14) scoring positions.
Dec 22: Transit of Mercury Across the Sun: 265 tasks outstanding; 126 affecting individual (61) scoring positions; 25 affecting team (13) scoring positions.
Dec 23: Transit of Mercury Across the Sun: 243 tasks outstanding; 93 affecting individual (53) scoring positions; 24 affecting team (12) scoring positions.
Dec 24: Transit of Mercury Across the Sun: 221 tasks outstanding; 70 affecting individual (46) scoring positions; 18 affecting team (10) scoring positions.
Dec 25: Transit of Mercury Across the Sun: 201 tasks outstanding; 58 affecting individual (41) scoring positions; 15 affecting team (9) scoring positions.
Dec 26: Transit of Mercury Across the Sun: 175 tasks outstanding; 45 affecting individual (35) scoring positions; 11 affecting team (8) scoring positions.
Dec 27: Transit of Mercury Across the Sun: 161 tasks outstanding; 38 affecting individual (30) scoring positions; 7 affecting team (5) scoring positions.
Dec 28: Transit of Mercury Across the Sun: 144 tasks outstanding; 30 affecting individual (24) scoring positions; 7 affecting team (5) scoring positions.
Dec 29: Transit of Mercury Across the Sun: 134 tasks outstanding; 27 affecting individual (22) scoring positions; 4 affecting team (3) scoring positions.
Dec 30: Transit of Mercury Across the Sun: 116 tasks outstanding; 19 affecting individual (15) scoring positions; 1 affecting team (1) scoring positions.
Dec 31: Transit of Mercury Across the Sun: 99 tasks outstanding; 13 affecting individual (12) scoring positions; 0 affecting team (0) scoring positions.
____________
My lucky number is 75898^524288+1 |
|
|
Michael Goetz Volunteer moderator Project administrator
 Send message
Joined: 21 Jan 10 Posts: 14037 ID: 53948 Credit: 477,214,715 RAC: 293,590
                               
|
Cleanup Status:
Nov 11: Transit of Mercury Across the Sun: 4201 tasks outstanding; 3523 affecting individual (272) scoring positions; 3119 affecting team (71) scoring positions.
Nov 30: Transit of Mercury Across the Sun: 1203 tasks outstanding; 822 affecting individual (179) scoring positions; 249 affecting team (34) scoring positions.
Dec 31: Transit of Mercury Across the Sun: 99 tasks outstanding; 13 affecting individual (12) scoring positions; 0 affecting team (0) scoring positions.
Jan 1: Transit of Mercury Across the Sun: 93 tasks outstanding; 8 affecting individual (7) scoring positions; 0 affecting team (0) scoring positions.
Jan 2: Transit of Mercury Across the Sun: 82 tasks outstanding; 2 affecting individual (2) scoring positions; 0 affecting team (0) scoring positions.
Jan 3: Transit of Mercury Across the Sun: 77 tasks outstanding; 0 affecting individual (0) scoring positions; 0 affecting team (0) scoring positions.
____________
My lucky number is 75898^524288+1 |
|
|
Michael Gutierrez Volunteer moderator Project administrator Project scientist
 Send message
Joined: 21 Mar 17 Posts: 375 ID: 764476 Credit: 46,579,835 RAC: 13,712
                 
|
The results are final!
Top three individuals:
1: Pavel Atnashev
2: vaughan
3: Scott Brown
Top three teams:
1: Czech National Team
2: Ural Federal University
3: Aggie The Pew
Congratulations to the winners, and well done to everyone who participated.
See you at the International Education Day Challenge! |
|
|
|
Current challenge series results updated.
____________
"Accidit in puncto, quod non contingit in anno."
Something that does not occur in a year may, perchance, happen in a moment. |
|
|