11)
Message boards :
Number crunching :
Top Badges Leader Board
(Message 134918)
Posted 69 days ago by composite
I would like to see a new Leader Board for Top Badges, a summary listing the total number of each badge currently awarded among all participants. For instance we know that a silver badge supersedes a bronze badge, so when someone acquires a silver badge, the count of silver badges increases by one and the count of bronze badges decreases by one.
Since badges are awarded by subproject, it might make more sense to count badges grouped by colour and subproject, rather than just by colour.
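As a sketch, such a tally grouped by colour and subproject could be computed like this. The participants, subprojects, and colours below are made-up placeholders, not real PrimeGrid data:

```python
from collections import Counter

# Hypothetical data: each participant's current badge per subproject,
# keyed by (participant, subproject) -> colour. Names are illustrative only.
current_badges = {
    ("alice", "SoB"): "silver",
    ("alice", "PPS"): "bronze",
    ("bob", "SoB"): "bronze",
}

def badge_totals(badges):
    """Count badges grouped by (colour, subproject).

    Because a higher badge supersedes a lower one, each participant holds
    at most one badge per subproject, so upgrading from bronze to silver
    automatically moves their entry from one bucket to the other on the
    next tally.
    """
    return Counter((colour, sub) for (_, sub), colour in badges.items())

totals = badge_totals(current_badges)
```

When a participant upgrades a badge, re-running the tally shows the count shifting between buckets, matching the supersession behaviour described above.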

12)
Message boards :
Number crunching :
Suggestions for 2020 Challenges
(Message 134741)
Posted 75 days ago by composite
"negative opinions were accounted with a 1"
I hope you looked up the meaning of the word "chuffed". It surprised me when I did.
It's a positive opinion (slang for "being pleased") so by my count the conjectures actually took all 5 top spots.
I'm sure there's room in the calendar for other challenges besides conjectures, unless conjecture challenges are month-long affairs of the "set it and forget it" type.
Long challenges might be good for those of us wanting more sunlight or exercise or social interaction.
This makes me wonder: is PrimeGrid a generally bad place for people suffering from OCD?
The endless array of badges and milestones, the unending quest for the next bigger prime...
and the interminable wait for a laggardly wingman.

13)
Message boards :
Number crunching :
Suggestions for 2020 Challenges
(Message 134710)
Posted 76 days ago by composite
The pursuit of shiny new badges, more megaprimes, and T5K entries is a distraction from advancing mathematics, which is what proving conjectures does. I quite agree that all conjecture progress has cumulative value, while the accomplishment of having a bigger prime of type X has diminishing value as time passes. Being near the head of T5K is always impressive for its time, but eventually the significance fades as it is superseded by larger primes. This is why I suggest a year of all-conjecture challenges. And if we do this, consider it to be taking a break from trying to get more megaprimes than in the previous year. 100 megaprimes in a year is a milestone we can be proud of, but there's no need to continuously reproduce this feat.

14)
Message boards :
Number crunching :
Transit of Mercury Across the Sun Challenge
(Message 134709)
Posted 76 days ago by composite
WOW! All on 3 hosts with 4 cores each.
Quad-core Q9400, i7-2600K, i7-2600K. Power house indeed.
1 | Pavel Atnashev | Ural Federal University | 58 032 294.67 | 1 884
http://www.primegrid.com/hosts_user.php?userid=914937
The Core 2 Quad is running at 3.4 THz. ;)
Q9400 = Quantum 9400!
So why bother with this long-running LLR algorithm when you can just factor it in an instant?

15)
Message boards :
Number crunching :
Transit of Mercury Across the Sun Challenge
(Message 134671)
Posted 77 days ago by composite
I have 100% task cache, so you need to cut that in half. 56+88 tasks running.
You are giving up a lot of chances to return tasks first. You should download tasks just-in-time, unless you are purposely giving the wingman a chance. That doesn't seem to be the motivation when you have all cores running on a single task.
Also, is turbo enabled on those CPUs? Fully loaded they run at 2.2 GHz but are capable of 3.0 GHz in turbo mode. In theory you could run one or two threads on each socket and it would go up to 36% faster, unless LLR is so intensive that heat is an issue. Since there are diminishing returns on applying more threads, is there an optimal number of threads less than the number of cores?
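To illustrate the diminishing-returns question, here is a toy Amdahl-style model. The 5% serial/synchronization fraction is an assumption for illustration, not a measured LLR value:

```python
# Toy Amdahl-style model of diminishing returns from adding threads to one
# LLR task. The serial fraction s is a made-up placeholder; real LLR
# overhead depends on FFT size and hardware.
def speedup(threads, s=0.05):
    """Amdahl's law: the parallel part scales with threads, the serial part does not."""
    return 1.0 / (s + (1.0 - s) / threads)

def best_thread_count(max_threads, s=0.05, min_gain=1.05):
    """Smallest thread count beyond which adding one more thread
    improves speedup by less than min_gain (5% here)."""
    for n in range(1, max_threads):
        if speedup(n + 1, s) / speedup(n, s) < min_gain:
            return n
    return max_threads
```

Under this model the marginal gain of each extra thread shrinks steadily, so an "optimal" count below the core count exists as soon as you set a threshold on how small a gain is still worth a core.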
The LLR executable is 37 MB statically linked, but your L3 cache is 25 MB. How much of the executable needs to be cache-resident to complete a single iteration of LLR? BOINC Manager shows the working set size is over 90 MB, so there's potential speedup in having a thread dedicated to pre-reading portions of the executable to ensure it occupies L3 just before it is used. That coroutine would be tiny and fit in another core's L1. But I'm unsure what the cache-coherency protocol is: does cache-line occupancy in L2 prevent its eviction from L3?

16)
Message boards :
Number crunching :
Transit of Mercury Across the Sun Challenge
(Message 134667)
Posted 77 days ago by composite
It is a Russian miracle.
I've tried to estimate how many cores he has.
He's putting 10 threads on each task and getting 40% CPU efficiency,
whereas I get 95% efficiency using 4 threads on a 4-core system, and 83% efficiency on a 6-core system running 2 tasks simultaneously with 3 threads each.
His total throughput is like having about 2900 cores: (his tasks completed: 1677)/(my tasks completed: 13) * (my cores: 10) = 1290; 1290 * (my avg efficiency: ~ 90%) / (his avg efficiency: 40%) =~ 2900 cores.
The straightforward calculation of his tasks in progress (288) times 10 threads gives the answer of 2880 cores.
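The arithmetic above, spelled out:

```python
# Reproducing the back-of-envelope core estimate from the post.
his_tasks, my_tasks = 1677, 13
my_cores = 10            # 4-core host + 6-core host
my_eff, his_eff = 0.90, 0.40

raw_cores = his_tasks / my_tasks * my_cores   # ~1290 equivalent cores
adj_cores = raw_cores * my_eff / his_eff      # ~2900 after efficiency adjustment
sanity = 288 * 10                             # 288 tasks in progress x 10 threads = 2880
```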

17)
Message boards :
Proth Prime Search :
PPS-MEGA: Smaller FFT, longer crunch time?
(Message 134476)
Posted 84 days ago by composite
...those x parts does not complete their sin/cos calculations the same time and hence leaving one or more cores idle for a small time before doing next cos/sin calculation.
This is a well-known effect of using multiple cooperating CPUs when they need to synchronize. It's related to the number of cores, not the FFT size. It's one reason for diminishing returns when adding more threads to a task.
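A toy simulation (not LLR itself) shows how this wait-for-the-slowest effect grows with core count, assuming each thread's chunk of an iteration takes a slightly randomized time and all threads must meet at a barrier:

```python
import random

# Illustrative barrier model: each of N threads finishes its chunk at a
# slightly random time, then waits for the slowest one. The jitter value
# is an arbitrary assumption for illustration.
def idle_fraction(threads, iterations=10_000, jitter=0.1, seed=1):
    rng = random.Random(seed)
    busy = wall = 0.0
    for _ in range(iterations):
        times = [1.0 + rng.uniform(0, jitter) for _ in range(threads)]
        step = max(times)          # barrier: every thread waits for the slowest
        wall += step * threads     # core-time elapsed across all threads
        busy += sum(times)         # core-time spent doing useful work
    return 1.0 - busy / wall
```

With one thread the idle fraction is zero; as threads are added, the expected gap between the average and the slowest thread widens, so the idle fraction rises, matching the diminishing returns described above.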

18)
Message boards :
Proth Prime Search :
PPS-MEGA: Smaller FFT, longer crunch time?
(Message 134419)
Posted 85 days ago by composite
Yes, my tests were with live tasks, PPS-MEGA.
The k is only available to us if we peek in BOINC's slot directories, and that has to be done while the task is running. Technically feasible, but I'm not interested in doing this at the moment.

19)
Message boards :
Proth Prime Search :
PPS-MEGA: Smaller FFT, longer crunch time?
(Message 134379)
Posted 85 days ago by composite
I ran a bunch of singlethread tasks one at a time on my 6core system.
The effect is still there, but much less pronounced than with multiple threads.
Averaging over 11 tasks for 240K FFT and 8 tasks for 256K FFT:
240K FFT used 6% more run time and 11% more CPU time than 256K FFT.
The run time is skewed on a couple of tasks. Probably the internet was unavailable for a time.

20)
Message boards :
Proth Prime Search :
PPS-MEGA: Smaller FFT, longer crunch time?
(Message 134350)
Posted 86 days ago by composite
I just noticed this too, on 2 machines, using BOINC tasks over a span of 7 to 9 hours.
i5-4590T (4 cores, BOINC 100% CPU)
1 task @ 4 threads
240K FFT (average of 11 tasks): 1802 sec run time, 6733 sec CPU time
256K FFT (average of 7 tasks): 1370 sec run time, 5099 sec CPU time
240K FFT takes 32% more run time and 32% more CPU time than 256K FFT
i7-5820K (6 cores, HT on and BOINC 50% CPU)
3 tasks @ 2 threads each + AP27 on GPU
240K FFT (average of 11 tasks): 3647 sec run time, 5535 sec CPU time
256K FFT (average of 9 tasks): 2645 sec run time, 4343 sec CPU time
240K FFT takes 38% more run time and 27% more CPU time than 256K FFT
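The percentage figures can be checked from the averaged times, comparing the slower time of each pair against the faster one:

```python
def pct_more(slower, faster):
    """How much more time the slower run takes, as a rounded percent."""
    return round((slower / faster - 1.0) * 100)

# First machine's averages (sec), slower vs faster FFT size of each pair
m1_run = pct_more(1802, 1370)   # run time: 32
m1_cpu = pct_more(6733, 5099)   # CPU time: 32
# Second machine's averages (sec)
m2_run = pct_more(3647, 2645)   # run time: 38
m2_cpu = pct_more(5535, 4343)   # CPU time: 27
```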
To prove or refute Crunchi's conjecture that the larger FFT is better at exploiting multicore hardware,
we would need to run PPS-MEGA tasks on a single-core system (no HT).
Does anyone have the CPU and the patience to try this?
FMA3 almost certainly isn't available on single-core hardware.
Without saying that it proves anything, we can test 1 task 1 thread on multicore systems,
thanks to the recently introduced PrimeGrid preferences for cores and tasks.
I will report my results in a subsequent post.
It seems counterintuitive that a shorter FFT would be slower.
Is this effect similar to using a shorter word size for large number computations?
The appropriate test of this would be to try an FFT size of 280K vs 256K.
In the end, if we can't understand why 256K FFT is faster than 240K FFT,
we should just use what we know works better.
"Shut up and calculate", as N. David Mermin said (often misattributed to Richard Feynman).

