1) Message boards : Number crunching : How to optimize the number of threads (Message 161176)
Posted 7 hours ago by Michael Goetz (Project donor)
When running LLR, LLR2, or Genefer CPU tasks, you can specify how many threads each task should use. The website's project preferences page gives general hints about whether or not multithreading is recommended, but doesn't tell you how many threads is optimal because that will depend on your CPU.

Specifically, it depends on the amount of cache on your CPU, which varies greatly.

In general, there are two rules:

1) More threads per task decreases efficiency, sometimes very significantly, because the threads have to wait to synchronize with each other. So you want to use as few threads per task as possible. If you can, 1 thread per task is best. However...

2) Because CPU cache is MUCH faster than main memory, when possible you want to make sure all of the tasks' data fits in the CPU cache. This is more important than the first rule, and implies that running fewer tasks with more threads per task will be better if the tasks are large and/or your CPU has a small cache.

To make it easier for you to figure out the optimal number of threads, the project preferences page now tells you the cache requirements for each sub-project.

LLR/LLR2 tasks show it like this:
Sierpinski/Riesel Base 5 LLR (SR5)
k·5^n±1 for 86 specific values of k

Recent average CPU time: 27:53:27
FFT sizes: 864K to 1120K (uses up to 8960K cache per task)

Ignoring hyperthreads, my CPU has 8 cores and 32 MB of L3 cache. If I want to run SR5, each task uses just over 8 MB of cache. If I run 8 single-threaded tasks, or 4 two-threaded tasks, it won't fit in L3 cache. That will significantly slow down the tasks.

The best choice for me would therefore be either running three tasks with two threads each, or two tasks with 4 threads each. Running three tasks would only utilize 6 of the 8 cores, so it might not be the best choice, although 2 threads per task is more efficient than 4 threads per task. I probably would want to test it both ways and see which completes more tasks per day. If I had to guess, I'd go with running two tasks with 4 threads each (i.e., -t4).

Genefer tasks look like this:

Generalized Fermat Prime Search n=16 (GFN-16 or Genefer 65536)
b^65536+1 (or b^(2^16)+1)

Deadline: 4 days (up to 30 days)

Recent average CPU time: 0:23:05
Recent average GPU time: 0:04:03
CPU tasks use 1.19M cache per task

GFN-16 tasks use only slightly more than 1 MB of cache each, so 8 single-threaded tasks easily fit in my CPU's 32 MB L3 cache. Running 8 single-threaded tasks is therefore the optimal way to run GFN-16 CPU tasks on my computer.

Note that GFN-21 and GFN-22 require a minimum of 4 threads per task.
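The two rules above can be sketched as a small script. This is a hypothetical helper, not actual PrimeGrid tooling: for each thread count, the number of simultaneous tasks is capped both by the core count and by how many tasks' working sets fit in L3 cache.

```python
def configs_that_fit(cores, l3_kb, task_cache_kb):
    """List (threads, tasks, cores_used) combos whose cache use fits in L3."""
    options = []
    for threads in range(1, cores + 1):
        max_by_cores = cores // threads              # rule 1: prefer few threads
        max_by_cache = int(l3_kb // task_cache_kb)   # rule 2: fit in cache
        tasks = min(max_by_cores, max_by_cache)
        if tasks > 0:
            options.append((threads, tasks, threads * tasks))
    return options

# 8 cores, 32 MB L3, SR5 tasks using up to 8960K each:
for threads, tasks, cores_used in configs_that_fit(8, 32 * 1024, 8960):
    print(f"-t{threads}: {tasks} task(s), {cores_used}/8 cores busy")
```

For the SR5 numbers above, this reproduces the two contenders from the post: 3 tasks at -t2 (6 cores busy) and 2 tasks at -t4 (all 8 cores busy). With GFN-16's ~1.19 MB per task, -t1 with 8 tasks fits trivially.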
2) Message boards : General discussion : Late returning (Message 161170)
Posted 18 hours ago by Michael Goetz (Project donor)
thank you for your reply
1 has completed and the other 2 should be finished this week.
Another one is however showing timed out on the site and is at 21 hours to go: 19/03/2023 10:51:33 | PrimeGrid | Task genefer20_83979525_1 is 2.54 days overdue

It's not "timed out", it's "aborted". It's dead.
3) Message boards : Number crunching : International Women's Day Challenge (Message 161161)
Posted 1 day ago by Michael Goetz (Project donor)
And the challenge results are final!

Congrats to Nick and TeAm AnandTech for winning the individual and team competitions, respectively!

Next up is the 7 day Gotthold Eisenstein's Birthday Challenge on the PSP project, starting on April 16th at 16:00:00 UTC.

Hope you all had as much fun during this crazy one day challenge as we did!
4) Message boards : Number crunching : Proof Task Not Available (Message 161143)
Posted 3 days ago by Michael Goetz (Project donor)
Are more crunchers required to help process the roughly half million tasks? If so, what subproject is this?

Dates are important. This is old news and is no longer relevant.

Also, the backlog was on the server side, and not something users could help with.
5) Message boards : General discussion : Late returning (Message 161064)
Posted 10 days ago by Michael Goetz (Project donor)
I have the following messages
10/03/2023 10:03:26 | PrimeGrid | Task llrSOB_420665536_0 is 31.72 days overdue; you may not get credit for it. Consider aborting it.
10/03/2023 10:03:26 | PrimeGrid | Task llrWOO_437145920_1 is 16.75 days overdue; you may not get credit for it. Consider aborting it.
10/03/2023 10:03:26 | PrimeGrid | Task llrESP_440996017_0 is 1.54 days overdue; you may not get credit for it. Consider aborting it.

I use my computer only during the day. The first task, when I got it, estimated 40 days while the deadline was 20 days; it has been running 29 days so far with an estimate of 9 days left, so running for half a day is 18 days.
The second one has been running for 14 days with 20 hours to go.
The 3rd one has been running 3 days with 12 days to go, which I am suspending to check if I get credit for the first 2.
Should I not get credit, I will delete BOINC from my computer and abort all other tasks. You should allow much more time to complete long CPU tasks.

What your BOINC client tells you doesn't necessarily reflect reality. In many ways. Consider it "fake news", if you will.

As long as your computer is actively working on a task (so do NOT suspend them!!!), the server will extend the deadline for your task. It won't time out unless it's REALLY REALLY REALLY overdue. However, your BOINC client won't reflect the extended deadline. It only knows about the original deadline.

Your SoB task *currently* has a deadline of March 17th, but the server will happily keep extending its deadline until May 22nd as long as your computer keeps working on the task. If it stops working on the task, the deadline won't be extended, and the task will time out and be sent to someone else.

Your Woodall task currently has a deadline of March 16th, but can be extended as far as April 4th.

Your ESP task currently has a deadline of March 16th and can be extended as far as April 1st. It's the only one of the three that might not make the deadline. It's progressing very slowly and is only at 21% done after 8 days.

Note that even if your tasks time out, that doesn't necessarily mean you don't get credit. As long as you return the task, with the correct result, before the work unit is purged from the database, you will get credit. I think most other BOINC projects don't grant credit when you're late, but we think you should get the credit if you did the work, as long as it's still possible to grant credit for a task.

It's a shame that BOINC is actually advising you to abort the task. You definitely shouldn't.
6) Message boards : Number crunching : International Women's Day Challenge (Message 161032)
Posted 11 days ago by Michael Goetz (Project donor)
If we sustain the SGS challenge rate, how many days would remain before exhausting the sieve file?

select count(*) from result where appid=2 and received_time between unix_timestamp("2023-3-8 15:0:0") and unix_timestamp("2023-3-9 15:0:0") and server_state=5 and outcome=1 and validate_state in (0,1,4);
+----------+
| count(*) |
+----------+
|  2696625 |
+----------+
Tasks per day: 2696625
Candidates per day: 1348312.5
select max(k) from result_llr where project="SGS" and appid=2 and server_state>2 and n=1290000;
+---------------+
| max(k)        |
+---------------+
| 7908597656115 |
+---------------+
Leading edge: k=7908597656115
grep -n 7908597656115 /hdd/sieving/sgs/current.txt
28956694:7908597656115 1290000
Position of leading edge in sieve file: 28956694
wc /hdd/sieving/sgs/current.txt
54843178 109686356 1206549916 /hdd/sieving/sgs/current.txt
Candidates in sieve file: 54843178

Remaining candidates in sieve file: 25886484

Days remaining at challenge rate: 19.1992
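The arithmetic above can be reproduced directly from the query output (the halving assumes each candidate is sent to two hosts, as the tasks-per-day vs. candidates-per-day figures imply):

```python
tasks_per_day = 2696625                    # results returned in the 24h window
candidates_per_day = tasks_per_day / 2     # two tasks per candidate
sieve_size = 54843178                      # lines in current.txt (from wc)
leading_edge = 28956694                    # line of k=7908597656115 (from grep)

remaining = sieve_size - leading_edge
days_left = remaining / candidates_per_day
print(remaining, round(days_left, 4))      # 25886484 19.1992
```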

Normally, we do about 20K tasks (10K candidates) per day. The challenge therefore ran at about 130 times the normal rate.

The last time we ran an SGS challenge it was three days long, and the server struggled because of the resulting size of the database. If we ran a 20 day SGS challenge, yes, we would theoretically finish off the current sieve file, but...

Normally the database has about 1.5 million tasks in the result table. Right now, after the 1 day challenge, it has about 5.7 million tasks. If we ran out the sieve file, it would have in excess of 50 million tasks. The database is currently close to its maximum all time size. Certain processes get very slow when the database gets large. It's unclear at what point the server would break.
7) Message boards : Number crunching : International Women's Day Challenge (Message 161029)
Posted 11 days ago by Michael Goetz (Project donor)
I think there is a minor bug in the challenge stats. If I divide the score by the number of tasks for each participant, I get values between 39.91357038 and 39.91357125, i.e. no difference up to the 6th decimal place. However, as the credit granted for a SGS task resulting in a prime is twice as much as for a normal task, the variation should be much larger. Even for Nick's 311923 tasks, an additional prime task would change the 4th decimal place in the credit per task ratio.

I'll have to take a look, but I think you're right - it's a small oversight in the challenge leaderboards. I don't think it has the smarts (or the data) to make the adjustment for when a task is prime.

For an SR5 challenge (which has a similar prime "bonus"), that would rarely affect challenge scores because those primes are so rare. There may be a few other examples, but they'll also be large and rare primes.

SGS is therefore effectively the only project where this happens. I'm inclined to treat this as an oversight in the rules rather than an oversight in the code: for the purpose of challenge scores, we don't count the bonus credit you get when you find a prime. This produces an average difference between challenge score and actual credit of approximately 0.0077%, or three quarters of one basis point. That's acceptable to me, and it avoids a really nasty and resource-intensive revision of the challenge code, which matters especially for SGS because of the number of tasks that run in even a short SGS challenge.
8) Message boards : Number crunching : International Women's Day Challenge (Message 161012)
Posted 11 days ago by Michael Goetz (Project donor)
My two primes so far are the same number of digits how odd.
7779439418187*2^1290000-1 (388342 digits)
7825167726087*2^1290000-1 (388342 digits)

I'm not sure if you were joking or simply weren't aware, but almost all SGS candidates in the sieve file have the exact same number of digits. All SGS candidates tested since sometime in 2015 have 388342 digits, and I suspect all of the remaining candidates in the sieve file also have 388342 digits.

SGS candidates grow VERY slowly.
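This is easy to check: the digit count of k·2^1290000-1 is dominated by the fixed exponent, with k contributing only about 13 digits. A quick sketch using the standard logarithm identity (the digit count of N is floor(log10 N) + 1; subtracting 1 never changes it for numbers this size):

```python
from math import floor, log10

def sgs_digits(k, n=1290000):
    # Decimal digits of k * 2^n - 1; log10(k * 2^n) = log10(k) + n*log10(2)
    return floor(log10(k) + n * log10(2)) + 1

print(sgs_digits(7779439418187))  # 388342
print(sgs_digits(7825167726087))  # 388342
```

Both primes from the post land on 388342 digits, and k would have to cross a power of 10 before the count budges, which is why SGS candidates grow so slowly.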
9) Message boards : Number crunching : International Women's Day Challenge (Message 161011)
Posted 11 days ago by Michael Goetz (Project donor)
The task status page says we've hit 1.47 million SGS tasks per day. I severely underestimated then - my prediction of 50 primes was based on a guess that tasks per day would double from the 400K just before the challenge, to about 800K.

At this rate we could be on track for 100 primes during the challenge.

Yes, but that includes time before the challenge started. We're currently on a pace for 2.5 million tasks during the 24 hours of the challenge. However, I expect that number to go up.

The first hour of the challenge had no TSC droplets running; that's 7000+ core hours. Far more important, however, is that people tend to add more power as the challenge progresses.

My guess is we'll break 3 million tasks.
10) Message boards : Number crunching : International Women's Day Challenge (Message 160943)
Posted 13 days ago by Michael Goetz (Project donor)
We'll see how it goes. I have an idea for speeding up the leaderboards, but... it's a big change for a problem that has a very short duration.

We have a very fast server called "kraken", which has 4 TB of RAID 1 NVMe storage. If I replicate the database to kraken and then generate the leaderboards locally on that machine, it should be significantly faster than doing it on the older servers, which use SSDs and have slower CPUs.

It also depends on admin availability.

Copyright © 2005 - 2023 Rytis Slatkevičius (contact) and PrimeGrid community.
Generated 20 Mar 2023 | 19:42:34 UTC