Message boards : Number crunching : Telephone Challenge
Darren Li (Volunteer moderator, Project administrator, Project scientist)
Joined: 25 Dec 21 Posts: 84 ID: 1465220 Credit: 709,175,851 RAC: 0
Welcome to the Telephone Challenge: 10 days of Seventeen or Bust (LLR), from March 7 10:00 UTC to March 17 10:00 UTC
"Telephone" is a song by American singer Lady Gaga from her third extended play (EP), The Fame Monster (2009)—the reissue of her debut studio album, The Fame (2008). Featuring American singer Beyoncé, it was released as the EP's second single on January 26, 2010. Gaga and Beyoncé wrote "Telephone" with Rodney Jerkins, LaShawn Daniels, and Lazonate Franklin. Jerkins was responsible for the production, with Gaga co-producing with him. Gaga originally wrote the song for Britney Spears, who recorded a demo. "Telephone" conveys Gaga's fear of not finding time for fun given the increasing pressure for her to work harder as an artist. Musically, the song consists of an expanded bridge, verse-rap, and a sampled voice of an operator announcing that the phone line is unreachable. Beyoncé appears in the middle of the song, singing the verses in a "rapid-fire" way and accompanied by double beats.
Thank you BorgProcessor for the challenge thread.
To participate in the challenge:
- Wait until the challenge timeframe starts (or set your BOINC Client download schedule accordingly), as tasks issued before the challenge will not count.
- In your PrimeGrid preferences section, select only the Seventeen or Bust (LLR) project.
Important reminders:
- Note on SoB tasks: LLR2 (the program SoB runs on) has eliminated the need for a full doublecheck task on each workunit, replacing it with a short verification task. Expect to receive a few tasks of about 1% of normal length.
- The typical deadline for some of these WUs is longer than the challenge time-frame, so make sure your computer is able to return the WUs within 10 days. Only tasks issued AFTER the start time and returned BEFORE the finish time will be counted.
- At the Conclusion of the Challenge: We kindly ask users "moving on" to ABORT their tasks instead of DETACHING, RESETTING, or PAUSING. ABORTING tasks allows them to be recycled immediately; thus a much faster "clean up" to the end of a Challenge. DETACHING, RESETTING, and PAUSING tasks causes them to remain in limbo until they EXPIRE. Therefore, we must wait until tasks expire to send them out to be completed. Please consider either completing what's in the queue or ABORTING them. Thanks!
Let's talk about hardware:
Application builds are available for Linux 32 and 64 bit, Windows 32 and 64 bit and MacIntel. Intel and recent AMD CPUs with FMA3 capabilities (Haswell or better for Intel, Zen-2 or better for AMD) will have a very large advantage, and Intel CPUs with dual AVX-512 (certain recent Intel Skylake-X and Xeon CPUs) will be the fastest.
Note that LLR is running the latest AVX-512 version, which takes full advantage of the features of these newer CPUs. It's faster than the previous LLR app, but it draws more power and produces more heat, especially on highly overclocked systems. If you have one of the recent Intel Skylake-X or Xeon CPUs with AVX-512, especially if the CPU or its memory is overclocked, and you haven't run the new AVX-512 LLR before, we strongly suggest running it before the challenge while monitoring temperatures.
Multi-threading is supported and IS recommended for slower computers.
(SoB tasks take an average of 550 CPU hours (faster on newer computers).)
Those looking to maximize their computer's performance during this challenge, or when running LLR in general, may find this information useful.
- Your mileage may vary. Before the challenge starts, take some time and experiment and see what works best on your computer.
- If you have a CPU with hyperthreading or SMT, either turn off this feature in the BIOS, or set BOINC to use 50% of the processors.
- If you're using a GPU for other tasks, it may be beneficial to leave hyperthreading on in the BIOS and instead tell BOINC to use 50% of the CPU's. This will allow one of the hyperthreads to service the GPU.
- The new multi-threading system is now live. Click here to set the maximum number of threads. This will allow you to select multi-threading from the project preferences web page. No more app_config.xml. It works like this:
- In the preferences selection, there are selections for "max jobs" and "max cpus", similar to the settings in app_config.
- Unlike app_config, these two settings apply to ALL apps. You can't choose 1 thread for SGS and 4 for SoB. When you change apps, you need to change your multithreading settings if you want to run a different number of threads.
- There will be individual settings for each venue (location).
- This will eliminate the problem of BOINC downloading 1 task for every core.
- The hyperthreading control isn't possible at this time.
- The "max cpus" control will only apply to LLR apps. The "max jobs" control applies to all apps.
- If you want to continue to use app_config.xml for LLR tasks, you need to change it if you want it to work. Please see this message for more information.
- Some people have observed that when using multithreaded LLR, hyperthreading is actually beneficial. We encourage you to experiment and see what works best for you.
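For those who prefer the old mechanism, a minimal app_config.xml sketch for running SoB with 8 threads per task might look like the following. The application name llrSOB and the thread count here are assumptions for illustration; check client_state.xml on your own host for the exact app name, and remember (per the note above) that the web preferences must also be set for this file to keep working.

```xml
<app_config>
  <app>
    <name>llrSOB</name>                <!-- assumed SoB app name; verify in client_state.xml -->
    <max_concurrent>1</max_concurrent> <!-- run one SoB task at a time -->
  </app>
  <app_version>
    <app_name>llrSOB</app_name>
    <cmdline>-t 8</cmdline>            <!-- LLR's thread-count switch -->
    <avg_ncpus>8</avg_ncpus>           <!-- tell BOINC the task occupies 8 CPUs -->
  </app_version>
</app_config>
```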
What is LLR?
The Lucas-Lehmer-Riesel (LLR) test is a primality test for numbers of the form N = k*2^n − 1, with 2^n > k. LLR is also the name of the program, developed by Jean Penne, that runs these tests. It additionally includes the Proth test for +1 forms and a PRP test for non-base-2 numbers.
What is LLR2?
LLR2 is an improvement to the LLR application developed by our very own Pavel Atnashev and stream. It utilizes Gerbicz checks to enable the Fast DoubleCheck feature, which will nearly double the speed of PrimeGrid's progress on the projects it's applied to. For more information, see this forum post.
As with all number crunching, excessive heat can potentially cause permanent hardware failure. Please ensure your cooling system is sufficient. Please see this post for more details on how you can "stress test" your CPU.
Additional information:
Time zone converter: March 7 10:00 UTC to March 17 10:00 UTC
NOTE: The countdown clock on the front page uses the host computer's time. Therefore, if your computer's time is off, so is the countdown clock. For precise timing, use the UTC time in the data section at the very top, above the countdown clock.
Scoring Information
Scores will be kept for individuals and teams. Only tasks issued AFTER March 7 10:00 UTC and received BEFORE March 17 10:00 UTC will be considered for credit. We will be using the same scoring method as we currently use for BOINC credits. A quorum of 2 is NOT needed to award Challenge score - i.e. no double checker. Therefore, each returned result will earn a Challenge score. Please note that if the result is eventually declared invalid, the score will be removed.
About the Seventeen or Bust Search
Seventeen or Bust was a distributed computing project attempting to solve the Sierpinski problem. The name of the project is due to the fact that, when founded, there were seventeen values of k < 78,557 for which no primes were known.
The project was conceived in March of 2002 by two college undergraduates. After some planning and a lot of programming, the first public client was released on April 1. Seventeen or Bust ceased operations in 2016. The project was administered by:
- Louis Helm, a computer engineer in Austin, Texas.
- David Norris, a software engineer in Urbana, Illinois.
- Michael Garrison, a Computer Science undergraduate at Eastern Michigan University in Ypsilanti, Michigan.
Starting in 2010, PrimeGrid partnered with Seventeen or Bust to work towards solving the Sierpinski Problem. After the original Seventeen or Bust project shut down in 2016, PrimeGrid has continued the search on its own, still looking to solve the Sierpinski Problem.
As of October of 2016, PrimeGrid and Seventeen or Bust have eliminated twelve of those seventeen candidates. The project might now be styled "Five or Bust," but the original name will be kept for consistency.
PrimeGrid and Seventeen or Bust's twelve prime discoveries are:
- 46157*2^698207+1 with 210,186 decimal digits, discovered November 27, 2002. Crunched by Stephen Gibson.
- 65567*2^1013803+1 with 305,190 decimal digits, discovered December 2, 2002. Crunched by James Burt.
- 44131*2^995972+1 with 299,823 decimal digits, discovered December 5, 2002. Crunched by an anonymous participant.
- 69109*2^1157446+1 with 348,431 decimal digits, discovered December 6, 2002. Crunched by Sean DiMichele.
- 54767*2^1337287+1 with 402,569 decimal digits, discovered December 23, 2002. Crunched by Peter Coels.
- 5359*2^5054502+1 with 1,521,561 decimal digits, discovered December 6, 2003. Crunched by Randy Sundquist.
- 28433*2^7830457+1 with 2,357,207 decimal digits, discovered December 30, 2004. Crunched by a member of Team Prime Rib.
- 27653*2^9167433+1 with 2,759,677 decimal digits, discovered June 8, 2005. Crunched by Derek Gordon.
- 4847*2^3321063+1 with 999,744 decimal digits, discovered October 15, 2005 while double checking earlier tests. Crunched by Richard Hassler.
- 19249*2^13018586+1 with 3,918,990 decimal digits, discovered March 26, 2007. Crunched by Konstantin Agafonov.
- 33661*2^7031232+1 with 2,116,617 decimal digits, discovered October 17, 2007 while double checking earlier tests. Crunched by Sturle Sunde.
- 10223*2^31172165+1 with 9,383,761 decimal digits, discovered October 31, 2016. Crunched by Szabolcs Péter (SyP). This prime eliminated k=10223 from both the Sierpinski Problem and the Prime Sierpinski Problem. (official announcement)
About the Sierpinski Problem
Wacław Franciszek Sierpiński (14 March 1882 — 21 October 1969), a Polish mathematician, was known for outstanding contributions to set theory, number theory, theory of functions and topology. It is in number theory where we find the Sierpinski problem.
Basically, the Sierpinski problem is "What is the smallest Sierpinski number?"
First we look at Proth numbers (named after the French mathematician François Proth). A Proth number is a number of the form k*2^n+1 where k is odd, n is a positive integer, and 2^n>k.
A Sierpinski number is an odd k such that the Proth number k*2^n+1 is composite for all n. For example, 3 is not a Sierpinski number, because n=2 produces a prime (3*2^2+1=13). In 1962, John Selfridge proved that 78,557 is a Sierpinski number, meaning he showed that 78557*2^n+1 is composite for all n.
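Proth's theorem makes this definition easy to test in practice. Here is a minimal Python sketch (the witness list is an arbitrary choice: finding one witness proves primality outright, while exhausting the list leaves N composite with overwhelming probability):

```python
def is_proth_prime(k, n, witnesses=(3, 5, 7, 11, 13, 17)):
    """Proth's theorem: N = k*2^n + 1 (k odd, 2^n > k) is prime
    if and only if a^((N-1)/2) == -1 (mod N) for some integer a.
    A successful witness is a primality proof; if every witness
    fails, N is treated as composite."""
    N = k * (1 << n) + 1
    for a in witnesses:
        if pow(a, (N - 1) // 2, N) == N - 1:
            return True   # primality proven by Proth's theorem
    return False          # composite (with overwhelming probability)
```

For example, is_proth_prime(3, 2) confirms the n=2 case above, while is_proth_prime(78557, n) should come back False for every n you try, consistent with Selfridge's proof.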
Most number theorists believe that 78,557 is the smallest Sierpinski number, but it hasn't yet been proven. In order to prove it, it has to be shown that every single k less than 78,557 is not a Sierpinski number, and to do that, some n must be found that makes k*2^n+1 prime.
The smallest proven 'prime' Sierpinski number is 271,129. In order to prove it, it has to be shown that every single 'prime' k less than 271,129 is not a Sierpinski number, and to do that, some n must be found that makes k*2^n+1 prime.
Seventeen or Bust is working on the Sierpinski problem and the Prime Sierpinski Project is working on the 'prime' Sierpinski problem. The following k's remain for each project:
Sierpinski problem: 21181, 22699, 24737, 55459, 67607
'prime' Sierpinski problem: 22699*, 67607*, 79309, 79817, 152267, 156511, 168451, 222113, 225931, 237019
* being tested by Seventeen or Bust
Fortunately, the two projects (and later PrimeGrid's Extended Sierpinski Project) combined their sieving efforts into a single file. Therefore, PrimeGrid's PSP sieve supports all three projects.
Additional Information
For more information about Sierpiński, Sierpinski numbers, and the Sierpinski problem, please see these resources:
Davina
Joined: 13 Feb 12 Posts: 3478 ID: 130544 Credit: 2,849,366,470 RAC: 343,176
I fail to see any connection.
Hang up and try again.
I fail to see any connection.
If the suggestion from Werinbert does not help, I suggest you dial the specific code used to reach a human operator live. This code is simply "0" in some areas, including the U.K. where you seem to be located, but it may have changed to "100" if Subscriber Trunk Dialling (STD) has been introduced already. The telephone operator will help you connect properly with Lady Gaga. /JeppeSN
I still think there should be a special badge for a SOB discovery. Seems pretty significant.
Davina
Joined: 13 Feb 12 Posts: 3478 ID: 130544 Credit: 2,849,366,470 RAC: 343,176
I'd rather HAVE an STD than listen to that shite.
I still think there should be a special badge for a SOB discovery. Seems pretty significant.
Significant, yes! It would be the largest prime ever found by PrimeGrid. And the largest known Proth prime. In fact, the largest known prime that is not a Mersenne.
You get a K badge, plus a boring M badge.
/JeppeSN
I still think there should be a special badge for a SOB discovery. Seems pretty significant.
Significant, yes! It would be the largest prime ever found by PrimeGrid. And the largest known Proth prime. In fact, the largest known prime that is not a Mersenne.
You get a K badge, plus a boring M badge.
/JeppeSN
There should be a special SOB badge with a number depending on how many k's are left. Would be a very sought-after badge!
I'm afraid she will not return my phone calls
I think Jim Croce sang about hearing other people talk about her:
Operator (That's Not the Way It Feels)
Her number is not 867-5309 is it? Better tell Rikki not to lose that number.
Good luck everyone! You will need it for sure.
Her number is not 867-5309 is it? Better tell Rikki not to lose that number.
8675309 is a prime. /JeppeSN
Added:
Observation: 8675309 is not a Sierpiński number. Proof: Test the number 8675309*2^5 + 1; it is prime. Can I have my K badge now?
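Both numbers here are small enough to check with a deterministic Miller-Rabin test; the base set (2, 3, 5, 7) is known to be exact below 3,215,031,751, which comfortably covers 8675309*2^5 + 1 = 277,609,889. A quick Python sketch:

```python
def is_prime(n):
    """Miller-Rabin with bases (2, 3, 5, 7): deterministic for n < 3,215,031,751."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # base a witnesses that n is composite
    return True

print(is_prime(8675309))             # True
print(is_prime(8675309 * 2**5 + 1))  # True, as claimed above
```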
My daughter took Music in college...briefly. She was mad, saying it was all MATH.
But someone's gonna have to explain why this challenge is named after a pop song....Are the number of beats a prime or something? Is it because a prime number of people (5) created it? It hit prime number (1) on the charts, for a prime number of (3) months? Did someone put the lead back in the paint chips?
Ah, I get it now. We look at the project and say, "Well, ain't that a S.O.aB."
They got to be pulling our leg.
"Bell was granted US Patent Number 174,465 for an “Improvement in Telegraphy” on March 7, 1876, and it was March 10, 1876 that Bell declared to his assistant, “Mr. Watson, come here, I want to see you!” over the lines of his working telephone, as he wrote in his laboratory journal."
Coincidence? I think not.
Well, I'll be a S.O.B, you may be on to something. Only it needed to run until April 1st for the prank to be truly effective. :D
It is unfortunate that the patent number isn't a prime. But at least March 7 is a prime, no matter if it is USA format, or international format (37/73).
73 is also Sheldon Cooper's favorite number.
73 is the 21st prime number. Its mirror, 37, is the 12th; and its mirror, 21, is the product of multiplying 7 and 3. In binary, 73 is a palindrome too.
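All of these claims are easy to verify with a short sieve in Python:

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(range(i * i, limit + 1, i))
    return [p for p, flag in enumerate(sieve) if flag]

primes = primes_up_to(100)
print(primes.index(73) + 1)  # 21 -> 73 is the 21st prime
print(primes.index(37) + 1)  # 12 -> 37 is the 12th prime
print(7 * 3)                 # 21
print(bin(73)[2:])           # 1001001, which reads the same both ways
```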
Great. Now I have to Google "palindrome." Thanks a lot. :P
A palindrome is
a word, phrase, number, or sequence that reads the same backward as forward, such as "radar," "madam," "step on no pets," or "1991". Originating from the Greek palindromos ("running back again"), they are checked by reversing the sequence to see if it remains identical.
Yeah and 73 in binary is 1001001. lol
We have been severely pranked!
Ok, PG. What additional details did we miss?
Lumiukko (Volunteer tester)
Joined: 7 Jul 08 Posts: 169 ID: 25183 Credit: 1,281,774,160 RAC: 2,003,169
It hit prime number (1) on the charts,
Small correction: 1 is NOT a prime number.
Prime numbers start from 2 (by definition).
--
Lumiukko
And I guess Pluto isn't a planet. :D (oops)
It's not. It's a dwarf planet. And 1 is not prime in the prime number theorem.
Rollo
Joined: 11 May 25 Posts: 6 ID: 1877730 Credit: 62,330,749 RAC: 184,589
I must be doing something terribly wrong for this challenge. 7 machines running and all report 1 day to 12 hours before the first completion. I see that others have completed one or two within the first hour lol.
I must be doing something terribly wrong for this challenge. 7 machines running and all report 1 day to 12 hours before the first completion. I see that others have completed one or two within the first hour lol.
Your machine's estimates are correct.
What is already on the table is a bunch of proof tasks, which by design have a shorter run time.
____________
Greetings, Jens
147433824^131072+1
Those are almost certainly proof tasks, which are about 1% of the size of a normal task. Your ranges for a "standard" task certainly look about right. My AVX-512 instance on TSC is estimating around 30 hours for a task on 12 cores.
____________
Proud member of Aggie The Pew!
7881*2^327265+1 is prime!
9411*2^367623+1 is prime!
|
Beat me to it by 10 seconds :)
____________
Proud member of Aggie The Pew!
7881*2^327265+1 is prime!
9411*2^367623+1 is prime!
Beat me to it by 10 seconds :)
Beat me in the challenge, and we're even. ;-)
____________
Greetings, Jens
147433824^131072+1
Darren Li (Volunteer moderator, Project administrator, Project scientist)
Joined: 25 Dec 21 Posts: 84 ID: 1465220 Credit: 709,175,851 RAC: 0
The challenge has started and everyone seems to be at a loss for the origin of the challenge name, so I owe an explanation:
This challenge takes place on the occasion of the 100th anniversary of the first experimental, non-commercial transatlantic telephone call.
Using short wave radio to connect from London to New York. Interesting!
The first transatlantic cable wasn't placed in service until 1956. Did not know that. I would have guessed much earlier!
Darren Li (Volunteer moderator, Project administrator, Project scientist)
Joined: 25 Dec 21 Posts: 84 ID: 1465220 Credit: 709,175,851 RAC: 0
Indeed, cables were a much more difficult technology. It took me quite some time to find again the source I used for the date, since everything else pointed at January of 1927 - which was the first commercial call, done at a staggering price. (Maybe we can have a Telephone Challenge II next year.)
Here's a primary source in case anyone's interested: https://www.theguardian.com/theguardian/2012/mar/08/archive-1926-long-distance-small-talk
The typical deadline for some of these WUs is longer than the challenge time-frame, so make sure your computer is able to return the WUs within 10 days.
Nice joke.
My obviously old and slow Ryzen 3950X was at 2% progress after 6 hours (8 threads per WU). Projected time to finish is 12 days.
So I aborted all WUs and will not try again.
That's because of the L3 cache required. You should try 16 threads/8 cores, and something like process lasso to make sure the tasks don't jump from one half of the chip to the other.
I'd have to do some research, and I don't want to, but the 3950X might be one of AMD's designs that split the cache into 16MB chunks per 4 cores/8 threads, so it may not help to change the thread count.
mackerel (Volunteer tester)
Joined: 2 Oct 08 Posts: 2964 ID: 29980 Credit: 783,116,370 RAC: 83,294
I'd have to do some research, and I don't want to, but the 3950X might be one of AMD's designs that split the cache into 16MB chunks per 4 cores/8 threads, so it may not help to change the thread count.
3950X is Zen 2, which indeed had 4 core CCX. Even if it has 16 cores, it behaves like 4x 4 cores due to the limited bandwidth connecting between each group. I don't know how you would optimise it for SOB. It is probably least worst to run it as one task with 16 cores, with some way to balance the core usage (e.g. SMT off, manual affinity setting, process lasso).
I started using .ps1 files. Will never bother with process lasso again.
It's not. It's a dwarf planet. And 1 is not prime in the prime number theorem.
A dwarf planet ain't a planet? So my mini-fridge ain't a fridge?
Correct. You have a mini-fridge, or a frigoid if you will.
I'd have to do some research, and I don't want to, but the 3950X might be one of AMD's designs that split the cache into 16MB chunks per 4 cores/8 threads, so it may not help to change the thread count.
It helps.
I had the 3700X which ran two consecutive tasks at eight cores over both CCX' faster than two tasks at four cores pinned to one CCX each in parallel.
3950X is Zen 2, which indeed had 4 core CCX. Even if it has 16 cores, it behaves like 4x 4 cores due to the limited bandwidth connecting between each group. I don't know how you would optimise it for SOB. It is probably least worst to run it as one task with 16 cores, with some way to balance the core usage (e.g. SMT off, manual affinity setting, process lasso).
I'd run two CCX' per task, either as eight cores or twelve threads (probably the former).
If I had this CPU and time for testing, I'd also try one task at twelve cores or eighteen threads (three CCX' and definitely enough cache) with a side of proof-tasks at four cores or six threads (one CCX), and one task at sixteen cores or twenty-four threads (four CCX').
Running 75% of the threads is a suggestion I got from another cruncher which might or might not generate more throughput, depending on the machine's config and settings. It _will_ draw more power from the socket.
____________
Greetings, Jens
147433824^131072+1
Michael Goetz (Volunteer moderator, Project administrator)
Joined: 21 Jan 10 Posts: 14675 ID: 53948 Credit: 1,041,806,060 RAC: 101,538
I had the 3700X which ran two consecutive tasks at eight cores over both CCX' faster than two tasks at four cores pinned to one CCX each in parallel.
I didn't even bother testing 2x4 on my 3700X. Cache, even inter-CCX cache, is still going to be much faster than main memory. 1x8 fits in cache and 2x4 does not.
____________
My lucky number is 75898^524288+1
Michael Goetz (Volunteer moderator, Project administrator)
Joined: 21 Jan 10 Posts: 14675 ID: 53948 Credit: 1,041,806,060 RAC: 101,538
Challenge: Telephone
App: 13 (SoB)
Fast DC tasks are NOT included.
(As of 2026-03-08 12:10:34 UTC)
7905 tasks have been sent out. [CPU/GPU/anonymous_platform: 7905 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
653 (8%) were aborted. [653 (8%) / 0 (0%) / 0 (0%)]
439 (6%) came back with some kind of an error. [439 (6%) / 0 (0%) / 0 (0%)]
329 (4%) have returned a successful result. [329 (4%) / 0 (0%) / 0 (0%)]
6484 (82%) are still in progress. [6484 (82%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
254 (77%) are pending validation. [254 (77%) / 0 (0%) / 0 (0%)]
75 (23%) have been successfully validated. [75 (23%) / 0 (0%) / 0 (0%)]
0 (0%) were invalid. [0 (0%) / 0 (0%) / 0 (0%)]
0 (0%) are inconclusive. [0 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=46240220. The leading edge was at n=45665204 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 1.26% as much as it had prior to the challenge!
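For anyone curious, the 1.26% figure is simply the relative advance of the leading edge since the challenge began:

```python
old_edge = 45_665_204  # leading edge n at the start of the challenge
new_edge = 46_240_220  # leading edge n as of this report
print(f"{(new_edge - old_edge) / old_edge * 100:.2f}%")  # 1.26%
```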
____________
My lucky number is 75898^524288+1
mackerel (Volunteer tester)
Joined: 2 Oct 08 Posts: 2964 ID: 29980 Credit: 783,116,370 RAC: 83,294
I'd run two CCX' per task, either as eight cores or twelve threads (probably the former).
If I had this CPU and time for testing, I'd also try one task at twelve cores or eighteen threads (three CCX' and definitely enough cache) with a side of proof-tasks at four cores or six threads (one CCX), and one task at sixteen cores or twenty-four threads (four CCX').
Running 75% of the threads is a suggestion I got from another cruncher which might or might not generate more throughput, depending on the machine's config and settings. It _will_ draw more power from the socket.
Running the throughput benchmark in Prime95 at 4608K FFT size (current indicated largest for SOB) and various core configurations can give an indication. It doesn't always work out 1:1 but close enough.
For an optimised system running LLR, I've not seen SMT give more than margin of error benefit over not, but it will reduce power efficiency for the work done. For a non-optimised system, using SMT can help move you towards optimised performance level, at the cost of reduced power efficiency.
Currently the largest SOB tasks are nominally 36MB data size. This would be on the absolute limit for 2 CCX. My past observations also don't show a sharp enough-or-not-enough transition; there is a ramp starting a little before and ending a little after the data size requirement, so performance will already be dropping from ideal at that point. That's why I'd go straight to 4 CCX. 3 CCX isn't really an option since you can't pick and choose what goes on the remaining CCX.
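The 36MB figure follows directly from the FFT length: LLR, like Prime95, stores one double-precision (8-byte) value per FFT element, so the 4608K FFT quoted above works out to:

```python
fft_len = 4608 * 1024           # 4608K FFT length reported for current SOB tasks
data_mib = fft_len * 8 / 2**20  # 8 bytes per double-precision FFT element
print(f"{data_mib:.0f} MiB")    # 36 MiB
```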
3 CCX isn't really an option since you can't pick and choose what goes on the remaining CCX.
I beg to differ.
SoB needs enough time that manual pinning is quite efficient.
If I run one BOINC client at 50% of the CPUs and pin it to the cores I can go ahead and pin one SoB task to twelve (in three CCX') of them, thus making the machine use the last four cores for anything else BOINC has. Or pin one four-core SoB task to one CCX and run the rest as 12c.
If I run two BOINC clients and pin one to three CCX' and the other to one CCX I can give them the same work, but one will run it at 4c, one at 12c.
And, especially with the third option, I can trash main tasks until I have a sufficient amount of proof tasks for the single CCX. Would need to repeat that due to deadlines, but with all these tasks finishing it might work out. Would be much more likely if we talked GFN due to added GPU power and, thus, more tasks all over.
All in all, it would not be too practical.
Better suited for the mixed-GFN challenges we had in the past. 21 on three CCX' plus 19 or 20 on the forth one.
I first thought about the above because someone asked during a GFN challenge. ;-)
Btw.: The RKN-Cluster's 3700X's BOINC client thinks it needs about forty hours for one main task. If that's about correct (which it might be, as my 5700X just finished its first task after 31h) it could finish about five or four tasks (due to increasing FFT sizes), with the 3950X doing about twice as much.
____________
Greetings, Jens
147433824^131072+1
mackerel (Volunteer tester)
Joined: 2 Oct 08 Posts: 2964 ID: 29980 Credit: 783,116,370 RAC: 83,294
I beg to differ.
I was thinking about the general case of one BOINC client left to do its own thing unattended. The biggest SOB tasks are at the point where I would not be comfortable running two of them on 64MB-L3 AMD CPUs, which has to be considered the worst case. Running one task on all cores might not be the best efficiency, but it would be safe.
The two-client case does open up more possibilities for anyone who is motivated to set up and configure such an arrangement.
Usable CPU cache capacity per AMD architecture:
3950X (4x4 config) = 16MB + 2MB = 18MB/WU
5950X (2x8 config) = 32MB + 4MB = 36MB/WU
7950X (2x8 config) = 32MB + 8MB = 40MB/WU
9950X (2x8 config) = 32MB + 8MB = 40MB/WU
9800X3D (1x8 config) = 96MB + 8MB = 104MB/WU
Current SOB tasks are 32-36MB.
On Zen 2 (3950X), the cache size is too limited, so it may be beneficial to trade higher latency for all-core utilization.
On Zen 3 through Zen 5, the available cache is barely enough to run 2 tasks efficiently (on 2xCCD variants like the 5950X).
The same rule applies to Threadripper and EPYC processors with a higher number of CCDs, since each CCD contains a maximum of 8 cores.
Only X3D variants will soon provide sufficient cache headroom to run SOB tasks comfortably. Or Zen 6, which should increase the per-CCD core count (plus cache) from 8 to 12 cores.
Usable CPU cache capacity per AMD architecture:
9800X3D 1x8 config = 96MB+8MB = 104MB/WU
The 9800X should easily be able to run two tasks at four cores for better throughput.
Same for any R7 x800X CPU.
____________
Greetings, Jens
147433824^131072+1
Chooka
Joined: 15 May 18 Posts: 487 ID: 1014486 Credit: 1,949,342,006 RAC: 35,356
|
I've done 2 wu's simultaneously on my 9950X3D then I went to 1 task at a time. Running 2 was marginally better throughput so I've gone back to that.
You can see my pc.
All my other pc's are set to 1 wu at a time.
My Threadripper 1950X should be retired! lol.
____________
Слава Україні! (Glory to Ukraine!)
Using short wave radio to connect from London to New York. Interesting!
The first transatlantic cable wasn't placed in service until 1956. Did not know that. I would have guessed much earlier!
I think the first cable they tried across the Atlantic was for the telegraph (not voice calls!). But the understanding of the impedance of such a long cable was not perfect at the time. It turned out that even sending simple Morse code over such a long distance was more difficult than expected.
(The telegraph works like this: The sender writes the message on a piece of paper. He hands it to an operator, who reads the text and translates it into Morse code, which is sent as electrical impulses over a long cable (or over radio waves). At the other end, another operator listens to the Morse code and writes the text on a piece of paper, which is then handed to the recipient. This is very fast compared to a letter, which would need to be sent by ship and take days.)
/JeppeSN
Hello,
under one of my (valid) tasks there is a big red "WARNING!"
This is scary; what does it mean?
____________
Greetz
Chris
561*2^1423021+1
446891680^131072+1
Hello,
under one of my (valid) tasks there is a big red "WARNING!"
this is scary, what does it mean ?
Too aggressive undervolting of the GPU or CPU; try disabling the XMP/EXPO profile.
____________
Hi.
Hello,
under one of my (valid) tasks there is a big red "WARNING!"
this is scary, what does it mean ?
It means the application had to use its built-in capabilities to find and recover from computational errors.
Typical causes of these errors include temperatures or voltages outside the specification.
Using less aggressive overclocking, undervolting, or memory timings might help prevent these errors, as might better cooling.
Other possible problems include memory sticks losing their seating, mainboard peripherals heating up too much, a PSU on its last legs, or various other things.
To troubleshoot all this, reseating components and revisiting the OC section of your BIOS are good starting points, alongside going over your machine's cooling system.
____________
Greetings, Jens
147433824^131072+1
Thank you, now I know what happened. Indeed I had very hot RAM in this machine, and I put some aluminium and an extra fan on it while the machine was running... maybe not a good idea to work on the RAM while the machine is working :-p
____________
Greetz
Chris
561*2^1423021+1
446891680^131072+1
| |
|
Michael Goetz Volunteer moderator Project administrator
Send message
Joined: 21 Jan 10 Posts: 14675 ID: 53948 Credit: 1,041,806,060 RAC: 101,538
                                           
|
|
Challenge: Telephone
App: 13 (SoB)
Fast DC tasks are NOT included.
(As of 2026-03-10 19:47:14 UTC)
13170 tasks have been sent out. [CPU/GPU/anonymous_platform: 13170 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
1861 (14%) were aborted. [1861 (14%) / 0 (0%) / 0 (0%)]
929 (7%) came back with some kind of an error. [929 (7%) / 0 (0%) / 0 (0%)]
2372 (18%) have returned a successful result. [2372 (18%) / 0 (0%) / 0 (0%)]
8008 (61%) are still in progress. [8008 (61%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
824 (35%) are pending validation. [824 (35%) / 0 (0%) / 0 (0%)]
1548 (65%) have been successfully validated. [1548 (65%) / 0 (0%) / 0 (0%)]
0 (0%) were invalid. [0 (0%) / 0 (0%) / 0 (0%)]
0 (0%) are inconclusive. [0 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=46540364. The leading edge was at n=45665204 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 1.92% as much as it had prior to the challenge!
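The "advanced X% as much" figure is simply the relative growth of the leading-edge n since the challenge started. A quick sketch of the arithmetic, using the numbers from the update above:

```python
# Relative advance of the leading edge since the challenge began.
start_n = 45_665_204    # leading edge at the start of the challenge
current_n = 46_540_364  # leading edge in this update
advance_pct = (current_n - start_n) / start_n * 100
print(f"{advance_pct:.2f}%")  # → 1.92%
```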
____________
My lucky number is 75898*5^24288+1 | |
|
Michael Goetz Volunteer moderator Project administrator
Send message
Joined: 21 Jan 10 Posts: 14675 ID: 53948 Credit: 1,041,806,060 RAC: 101,538
                                           
|
|
Challenge: Telephone
App: 13 (SoB)
Fast DC tasks are NOT included.
(As of 2026-03-11 20:08:47 UTC)
15109 tasks have been sent out. [CPU/GPU/anonymous_platform: 15109 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
2025 (13%) were aborted. [2025 (13%) / 0 (0%) / 0 (0%)]
1123 (7%) came back with some kind of an error. [1123 (7%) / 0 (0%) / 0 (0%)]
3289 (22%) have returned a successful result. [3289 (22%) / 0 (0%) / 0 (0%)]
8672 (57%) are still in progress. [8672 (57%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
695 (21%) are pending validation. [695 (21%) / 0 (0%) / 0 (0%)]
2594 (79%) have been successfully validated. [2594 (79%) / 0 (0%) / 0 (0%)]
0 (0%) were invalid. [0 (0%) / 0 (0%) / 0 (0%)]
0 (0%) are inconclusive. [0 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=46668271. The leading edge was at n=45665204 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 2.20% as much as it had prior to the challenge!
____________
My lucky number is 75898*5^24288+1 | |
|
Michael Goetz Volunteer moderator Project administrator
Send message
Joined: 21 Jan 10 Posts: 14675 ID: 53948 Credit: 1,041,806,060 RAC: 101,538
                                           
|
|
Challenge: Telephone
App: 13 (SoB)
Fast DC tasks are NOT included.
(As of 2026-03-13 09:55:58 UTC)
18225 tasks have been sent out. [CPU/GPU/anonymous_platform: 18225 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
2873 (16%) were aborted. [2873 (16%) / 0 (0%) / 0 (0%)]
1446 (8%) came back with some kind of an error. [1446 (8%) / 0 (0%) / 0 (0%)]
4913 (27%) have returned a successful result. [4913 (27%) / 0 (0%) / 0 (0%)]
8993 (49%) are still in progress. [8993 (49%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
733 (15%) are pending validation. [733 (15%) / 0 (0%) / 0 (0%)]
4180 (85%) have been successfully validated. [4180 (85%) / 0 (0%) / 0 (0%)]
0 (0%) were invalid. [0 (0%) / 0 (0%) / 0 (0%)]
0 (0%) are inconclusive. [0 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=46842271. The leading edge was at n=45665204 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 2.58% as much as it had prior to the challenge!
____________
My lucky number is 75898*5^24288+1 | |
|
|
|
|
Does anyone else have experience running these llrSOB tasks on an Intel Core Ultra 9 285K? Looking for advice. My processor is set to the built-in Intel 200S mode. I've been running one task at a time on 21 to 24 cores in about 16 hours. I tried running two tasks concurrently on 11 cores each and found that the average time per task increased by about 25%. Running a single task on only the 8 P-cores also seemed considerably slower. Any thoughts? Maybe 16 hours each is the best it's going to get. | |
|
|
|
|
Hello.
Does anyone else have experience running these llrSOB tasks on an Intel Core Ultra 9 285K?
I've got no experience with this CPU, but you have eight Performance cores with sixteen Efficiency cores alongside them.
I'd run one task on the P-cores, thus saturating the cache, and nothing more.
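If you want to pin one multithreaded task to a fixed number of cores, the usual approach is an app_config.xml in the PrimeGrid project directory. A hedged sketch along those lines — the app name and thread count below are assumptions, so verify the name against what your BOINC client actually reports for SoB tasks:

```xml
<!-- Hypothetical sketch: run one llrSOB task on 8 threads.
     Check the <name>/<app_name> against the app name in your
     BOINC client's event log before using this. -->
<app_config>
  <app>
    <name>llrSOB</name>
    <max_concurrent>1</max_concurrent>
  </app>
  <app_version>
    <app_name>llrSOB</app_name>
    <cmdline>-t 8</cmdline>
    <avg_ncpus>8</avg_ncpus>
  </app_version>
</app_config>
```

After saving the file, use "Options → Read config files" (or restart the client) for it to take effect.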
____________
Greetings, Jens
147433824^131072+1 | |
|
|
|
|
Thanks! I'll give that another try!
____________
459245604^131072+1 | |
|
|
|
Does anyone else have experience running these llrSOB tasks on an Intel Core Ultra 9 285K? Looking for advice. My processor is set to the built-in Intel 200S mode. I've been running one task at a time on 21 to 24 cores in about 16 hours. I tried running two tasks concurrently on 11 cores each and found that the average time per task increased by about 25%. Running a single task on only the 8 P-cores also seemed considerably slower. Any thoughts? Maybe 16 hours each is the best it's going to get.
The physical cores are what you want to use for primary compute. You can also ask an AI assistant (Perplexity, ChatGPT, or another) for BOINC PrimeGrid project settings: give it your CPU/GPU information and it will suggest specific configuration recommendations, which you can tune further from there. I found it very helpful for getting over the learning curve of how best to approach specific PG projects.
Good luck!
____________
| |
|
Michael Goetz Volunteer moderator Project administrator
Send message
Joined: 21 Jan 10 Posts: 14675 ID: 53948 Credit: 1,041,806,060 RAC: 101,538
                                           
|
|
Challenge: Telephone
App: 13 (SoB)
Fast DC tasks are NOT included.
(As of 2026-03-14 15:54:49 UTC)
20111 tasks have been sent out. [CPU/GPU/anonymous_platform: 20111 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
3358 (17%) were aborted. [3358 (17%) / 0 (0%) / 0 (0%)]
1658 (8%) came back with some kind of an error. [1658 (8%) / 0 (0%) / 0 (0%)]
6237 (31%) have returned a successful result. [6237 (31%) / 0 (0%) / 0 (0%)]
8858 (44%) are still in progress. [8858 (44%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
692 (11%) are pending validation. [692 (11%) / 0 (0%) / 0 (0%)]
5545 (89%) have been successfully validated. [5545 (89%) / 0 (0%) / 0 (0%)]
0 (0%) were invalid. [0 (0%) / 0 (0%) / 0 (0%)]
0 (0%) are inconclusive. [0 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=46942510. The leading edge was at n=45665204 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 2.80% as much as it had prior to the challenge!
____________
My lucky number is 75898*5^24288+1 | |
|
|
|
|
Thanks for the AI suggestion! I did find a bunch more helpful info using that method. Learning as I go here!
____________
459245604^131072+1 | |
|
Michael Goetz Volunteer moderator Project administrator
Send message
Joined: 21 Jan 10 Posts: 14675 ID: 53948 Credit: 1,041,806,060 RAC: 101,538
                                           
|
|
Challenge: Telephone
App: 13 (SoB)
Fast DC tasks are NOT included.
(As of 2026-03-15 21:45:13 UTC)
22909 tasks have been sent out. [CPU/GPU/anonymous_platform: 22909 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
4671 (20%) were aborted. [4671 (20%) / 0 (0%) / 0 (0%)]
1923 (8%) came back with some kind of an error. [1923 (8%) / 0 (0%) / 0 (0%)]
7499 (33%) have returned a successful result. [7499 (33%) / 0 (0%) / 0 (0%)]
8816 (38%) are still in progress. [8816 (38%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
843 (11%) are pending validation. [843 (11%) / 0 (0%) / 0 (0%)]
6656 (89%) have been successfully validated. [6656 (89%) / 0 (0%) / 0 (0%)]
0 (0%) were invalid. [0 (0%) / 0 (0%) / 0 (0%)]
0 (0%) are inconclusive. [0 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=47069084. The leading edge was at n=45665204 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 3.07% as much as it had prior to the challenge!
____________
My lucky number is 75898*5^24288+1 | |
|
Conan   Send message
Joined: 24 Mar 09 Posts: 1146 ID: 37336 Credit: 281,028,960 RAC: 167,968
                         
|
|
With just over a day and 7 hours to go, I will only be able to complete 4 more work units for the challenge.
I will run down my stockpile to see if I can get my Sapphire badge, which is an effort in itself.
Thanks, admin team, for another well-run challenge.
Congratulations to all the winners and everyone else returning these big tasks.
Conan
____________
| |
|
Michael Goetz Volunteer moderator Project administrator
Send message
Joined: 21 Jan 10 Posts: 14675 ID: 53948 Credit: 1,041,806,060 RAC: 101,538
                                           
|
|
As we approach the end of the challenge...
There's a bit more than a day to go in the challenge.
First of all, thanks to everyone for participating. A phenomenal amount of work has been done. You all have done great!
As the challenge ends some of you are going to move on to other projects. If you still have challenge tasks on your computer(s), please either let them run to completion (preferred), or abort them so that those tasks can be immediately resent to other computers. This helps us finalize the challenge results in a timely manner. Thank you all, and I'm looking forward to seeing you in June for the h4ck3r's Birthday Challenge!
____________
My lucky number is 75898*5^24288+1 | |
|
Michael Goetz Volunteer moderator Project administrator
Send message
Joined: 21 Jan 10 Posts: 14675 ID: 53948 Credit: 1,041,806,060 RAC: 101,538
                                           
|
|
Challenge: Telephone
App: 13 (SoB)
Fast DC tasks are NOT included.
(As of 2026-03-16 12:40:38 UTC)
25187 tasks have been sent out. [CPU/GPU/anonymous_platform: 25187 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
6443 (26%) were aborted. [6443 (26%) / 0 (0%) / 0 (0%)]
2081 (8%) came back with some kind of an error. [2081 (8%) / 0 (0%) / 0 (0%)]
8188 (33%) have returned a successful result. [8188 (33%) / 0 (0%) / 0 (0%)]
8475 (34%) are still in progress. [8475 (34%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
1077 (13%) are pending validation. [1077 (13%) / 0 (0%) / 0 (0%)]
7111 (87%) have been successfully validated. [7111 (87%) / 0 (0%) / 0 (0%)]
0 (0%) were invalid. [0 (0%) / 0 (0%) / 0 (0%)]
0 (0%) are inconclusive. [0 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=47106226. The leading edge was at n=45665204 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 3.16% as much as it had prior to the challenge!
____________
My lucky number is 75898*5^24288+1 | |
|
|
|
|
Hello,
Great challenge, thanks. Can you please update the schedule? These sites show different dates for the next challenge:
https://www.primegrid.com/challenge/2026_challenge.php
https://www.rechenaugust.de/boinc/2026_Challenge_Series_Current_Standings_Individuals.html
Greetz
Chris | |
|
Michael Goetz Volunteer moderator Project administrator
Send message
Joined: 21 Jan 10 Posts: 14675 ID: 53948 Credit: 1,041,806,060 RAC: 101,538
                                           
|
|
And we're done!
Challenge: Telephone
App: 13 (SoB)
Fast DC tasks are NOT included.
(As of 2026-03-17 10:17:06 UTC)
28835 tasks have been sent out. [CPU/GPU/anonymous_platform: 28835 (100%) / 0 (0%) / 0 (0%)]
Of those tasks that have been sent out:
11096 (38%) were aborted. [11096 (38%) / 0 (0%) / 0 (0%)]
2235 (8%) came back with some kind of an error. [2235 (8%) / 0 (0%) / 0 (0%)]
9328 (32%) have returned a successful result. [9328 (32%) / 0 (0%) / 0 (0%)]
6167 (21%) are still in progress. [6167 (21%) / 0 (0%) / 0 (0%)]
Of the tasks that have been returned successfully:
1355 (15%) are pending validation. [1355 (15%) / 0 (0%) / 0 (0%)]
7973 (85%) have been successfully validated. [7973 (85%) / 0 (0%) / 0 (0%)]
0 (0%) were invalid. [0 (0%) / 0 (0%) / 0 (0%)]
0 (0%) are inconclusive. [0 (0%) / 0 (0%) / 0 (0%)]
The current leading edge (i.e., latest work unit for which work has actually been sent out to a host) is n=47183206. The leading edge was at n=45665204 at the beginning of the challenge. Since the challenge started, the leading edge has advanced 3.32% as much as it had prior to the challenge!
Nearly ten thousand candidates were checked. No primes so far, but there are still a lot of tasks in progress.
____________
My lucky number is 75898*5^24288+1 | |
|
Michael Goetz Volunteer moderator Project administrator
Send message
Joined: 21 Jan 10 Posts: 14675 ID: 53948 Credit: 1,041,806,060 RAC: 101,538
                                           
|
|
The cleanup begins:
Telephone: 1293 tasks outstanding; 1110 affecting individual (202) scoring positions; 318 affecting team (50) scoring positions.
____________
My lucky number is 75898*5^24288+1 | |
|
Post to thread
Message boards :
Number crunching :
Telephone Challenge |