PrimeGrid
1) Message boards : Number crunching : Alan Turing's Birthday Challenge (Message 141231)
Posted 1066 days ago by clemmo (Project donor)
Is it possible to know how many factors were found during the challenge?


Possible, yes. Practical, no.

That information is not stored in any database. Someone would need to write a program to read every result file (nearly a million of them) that was returned during the challenge. But even that's not enough, since many candidates may have previously been eliminated by other factors. All the challenge factors would need to be compared against what's already in the database, to discard those whose candidate had already been eliminated.

BUT WAIT!!!

The database already has all of the challenge's factor records, so ALL of them will register as "already found".

So you would have to, on another system, load up a database backup from before the challenge started, and compare all the factors against that. It still wouldn't be exact, because this backup would be missing factors found between the time of the backup and the start of the challenge, but this would be the best that's possible.
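The core of that comparison could, in principle, be sketched in a few lines. This is a hypothetical illustration only: the file names and the "one `factor candidate` pair per line" dump format are my assumptions, not PrimeGrid's actual schema.

```python
# Hypothetical sketch of the factor comparison described above.
# Assumes each dump file lists one factor per line as "factor candidate";
# the format is illustrative, not PrimeGrid's real database layout.

def load_pairs(path):
    """Return the list of (factor, candidate) pairs in a dump file."""
    with open(path) as f:
        return [tuple(line.split()) for line in f if line.strip()]

def newly_eliminated(challenge_dump, backup_dump):
    """Challenge factors whose candidate was NOT already eliminated
    by some factor in the pre-challenge backup."""
    already_done = {cand for _factor, cand in load_pairs(backup_dump)}
    return [(factor, cand) for factor, cand in load_pairs(challenge_dump)
            if cand not in already_done]
```

Even this toy version shows why the backup matters: without it, every challenge factor is already in the live database and registers as "already found".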

All of this would need to happen before the challenge cleanup is done and the challenge tasks are purged.

It's a lot of work even if you don't care about the old factors, and a whole lot more if you do. Good luck convincing someone that this is worth doing.


A simple no would have been sufficient.


Yes, a simple no would have been sufficient, but some of us do enjoy the in-depth explanations of why something can or cannot be done. I find it gives me a better understanding of how PG works behind the scenes, and it adds to the overall experience that puts this project above any other I have participated in.
2) Message boards : Number crunching : Time to run a PPS Sieve on my CUDA went from an average of ... (Message 140410)
Posted 1102 days ago by clemmo (Project donor)
On my laptop I crunch with either the CPU or the GPU, but not both at the same time. With a shared heatsink, either one running at max (especially with an LLR task on the CPU) will limit the other. This is even with a cooling pad underneath.
3) Message boards : Number crunching : GPU's being worked harder than I think is good for them (Message 138422)
Posted 1187 days ago by clemmo (Project donor)
When I was using TThrottle for GPU temp control, I found that it didn't work well, or at all, on the higher GFNs, as it couldn't limit the CPU worker enough to keep the temps down. I found MSI Afterburner's controls worked much better.

I delinked the power limit and temp limits, limited the temp, set the priority to the temp limit, and it worked perfectly. My experience with this is on a GTX 770. It doesn't work for my GTX 650 (too old) or my 1660 (for some reason Afterburner can't temp-limit the newer ones).


Are you using the latest version of Afterburner? I'm using version 4.6.2 on all of my Windows PCs, and I just checked: my Nvidia 760s have the power and temp limits linked. Earlier versions did not have that link on my older cards either. I don't know how to delink them, though, as you say you do.

I use it more to monitor the temps for the different projects, and I do crank up the fan if the temp rises more than I want it to.



The little link icon: press it and it should de-link them. The icon itself, between the temp and power limit sliders, is a button.
4) Message boards : Number crunching : GPU's being worked harder than I think is good for them (Message 138283)
Posted 1190 days ago by clemmo (Project donor)
When I was using TThrottle for GPU temp control, I found that it didn't work well, or at all, on the higher GFNs, as it couldn't limit the CPU worker enough to keep the temps down. I found MSI Afterburner's controls worked much better.

I delinked the power limit and temp limits, limited the temp, set the priority to the temp limit, and it worked perfectly. My experience with this is on a GTX 770. It doesn't work for my GTX 650 (too old) or my 1660 (for some reason Afterburner can't temp-limit the newer ones).

Edit to add: TThrottle takes a bit of time to pick up a task: 10-30 seconds before it starts to limit it. Even on GPU tasks that throttle well with it, there is a further lag while the GPU completes the work already sent to it. This limits its effectiveness for short-running tasks like GFN15/16.
5) Message boards : Number crunching : Tour de Primes 2020 (Message 137832)
Posted 1202 days ago by clemmo (Project donor)
PPS-MEGA at 4 cores is closer to 50% rate so I wonder if those with more cores on hand are running that, never mind scaling efficiency.


I'm still getting a 64% first rate on PPS-MEGA with only 2 cores. This may demonstrate either luck or differences in hardware. I haven't tried with 1 core. I did a few with 4 cores a few days ago and, from memory, the first rate was closer to 75%.


First rate isn't static. I can think of two big reasons, and no doubt there are many others. One is simply luck: with relatively small sample sizes, that can swing the numbers either way. The second is that it depends on what everyone else is doing at the time. Maybe I picked a worse time.

Edit: I thought I had moved all my systems away from MEGA, but it seems I left a system running 3 cores on it, currently indicating 67% 1st rate. My PPSE rate is below 50% on two systems though... doesn't stay constant!
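To put a rough number on the luck point: the spread of an observed first rate shrinks only with the square root of the number of completed WUs. A quick sketch (the 0.6 "true" first-probability is purely an illustrative assumption):

```python
import math

def first_rate_stderr(p, n):
    """Standard error of an observed first rate after n completed WUs,
    if the true per-WU probability of being first is p."""
    return math.sqrt(p * (1 - p) / n)

# With p = 0.6, a few dozen WUs can easily swing the observed rate
# by around ten percentage points either way; it takes hundreds of
# WUs before the rate settles to within a couple of points.
for n in (25, 100, 400):
    print(n, round(first_rate_stderr(0.6, n), 3))
```

So two hosts running the same project can legitimately report quite different first rates over a challenge-sized sample.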


My first rate has been dropping since my last post, so change is definitely happening.
6) Message boards : Number crunching : Tour de Primes 2020 (Message 137810)
Posted 1202 days ago by clemmo (Project donor)
PPS-MEGA at 4 cores is closer to 50% rate so I wonder if those with more cores on hand are running that, never mind scaling efficiency.


I'm still getting a 64% first rate on PPS-MEGA with only 2 cores. This may demonstrate either luck or differences in hardware. I haven't tried with 1 core. I did a few with 4 cores a few days ago and, from memory, the first rate was closer to 75%.
7) Message boards : Number crunching : Tour de Primes 2020 (Message 137497)
Posted 1207 days ago by clemmo (Project donor)
For comps like this one, what % of firsts should you be looking for?
I'm guessing it's a bit subjective, because more firsts might come at a cost of throughput, right?

Is there a "magic" number? More than 50% firsts is OK, I guess?


I've set up a whole system to determine what my best thread settings are. But a good rule of thumb seems to be: if your PPSE WUs are taking over 700 seconds, you might need more threads per WU; if under 400 seconds, maybe fewer threads.

Edit: Or maybe my computers are just slow. ;)
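That rule of thumb could be encoded as a trivial helper. A sketch of my own, not official guidance; the 700 s / 400 s cut-offs are just the anecdotal thresholds above:

```python
def suggest_threads(wu_seconds, threads):
    """Nudge threads-per-WU based on current PPSE runtimes,
    per the rough 700 s / 400 s rule of thumb above."""
    if wu_seconds > 700:
        return threads + 1      # too slow to win many firsts: add a thread
    if wu_seconds < 400 and threads > 1:
        return threads - 1      # fast enough: reclaim a thread for throughput
    return threads
```

Anything in the 400-700 s band is left alone, which matches the idea that more firsts trade off against raw throughput.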

My GFN 17 Low is only producing 51.8% firsts... I'm wondering if this is worthwhile?


What does the % of firsts on GFN matter? You can't give GFN work more than one GPU at a time, can you?


I'm guessing they may get better results moving down to GFN16 if faster cards are getting all the firsts on a higher GFN.
8) Message boards : Number crunching : Intel or AMD for LLR Tasks? (Message 137495)
Posted 1207 days ago by clemmo (Project donor)
Something with AVX-512 and a large L3 cache is what should be fastest.
The i9-10980XE has AVX-512, the i9-9900KS does not.
The 10980XE has 8MB more L3 cache than the 9900KS.
AMD does not have AVX-512, but has 16MB of L3 per CCX (4×16MB for the 3950X, from memory).

Based on this, without seeing any performance figures, the i9-10980XE should be best for LLR on a per core basis. I'm not sure if the extra cores of a Threadripper would compensate.
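One crude way to frame the cache side of this, since LLR is cache-hungry, is L3 per running core. The totals and core counts below are my own additions and worth double-checking (24.75 MB/18 cores for the 10980XE, 16 MB/8 for the 9900KS, 4×16 MB = 64 MB/16 for the 3950X), and this deliberately ignores AVX-512, clocks, and memory bandwidth:

```python
# L3-per-core comparison; the cache and core figures are assumptions
# to verify, and AVX-512 (the 10980XE's real advantage) is ignored here.
cpus = {
    "i9-10980XE":  (24.75, 18),  # (L3 in MB, cores)
    "i9-9900KS":   (16.0,   8),
    "Ryzen 3950X": (64.0,  16),
}

for name, (l3_mb, cores) in cpus.items():
    print(f"{name}: {l3_mb / cores:.2f} MB L3 per core")
```

On cache alone the 3950X looks best per core, which is why the AVX-512 difference is doing the heavy lifting in the conclusion above.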
9) Message boards : Number crunching : Tour de Primes 2020 (Message 137486)
Posted 1207 days ago by clemmo (Project donor)
For comps like this one, what % of firsts should you be looking for?
I'm guessing it's a bit subjective, because more firsts might come at a cost of throughput, right?

Is there a "magic" number? More than 50% firsts is OK, I guess?
My GFN 17 Low is only producing 51.8% firsts... I'm wondering if this is worthwhile?


My GFN 17 Mega is at 47%. I'm still running it since I already have a P2020 badge, so I'm working towards the Mega badge (which I won't get, knowing my luck).

You could try GFN16 to see if your firsts are higher, if you are after more numbers on your P2020 badge. You'll also complete more tasks since it runs quicker, improving your chances.
10) Message boards : Generalized Fermat Prime Search : GFN-14 Consecutive Primes Hunt Season is open! (Message 136779)
Posted 1218 days ago by clemmo (Project donor)
I will have one old GPU (a 650 Ti) on it throughout Feb (and beyond), which will do the little bit it can.


Copyright © 2005 - 2023 Rytis Slatkevičius (contact) and PrimeGrid community.
Generated 30 May 2023 | 16:40:44 UTC