1)
Message boards :
Generalized Fermat Prime Search :
GFN21 GFN22 Scoreboard, post your times/hardware
(Message 122918)
Posted 269 days ago by eXaPower
Some GFN21 times on CPU (all running maximum MT count):
Xeon E3-1270 (basically the same as an i7-2600): about 250,000 seconds, or 70 hours
Xeon E5-1620 (similar to i7-4770 or i7-4790): about 150,000 seconds, or 42 hours
Xeon E5-1650 (similar to a 6-core Haswell i7): about 140,000 seconds, or 39 hours
Xeon W-2125 (4-core Skylake-W @ 4 GHz): about 87,000 seconds, or 24 hours
OCL4 Transform
25,000 seconds = (225W) RTX 2080 @ 2GHz (PCIe 3.0 x4)
40,000 seconds = (220W) GTX 1080 @ 2GHz (PCIe 3.0 x8)
53,000 seconds = (145W) GTX 1070 @ 2GHz (PCIe 2.0 x1)
72,500 seconds = (125W) GTX 1060 3GB @ 2GHz (PCIe 3.0 x4)
78,000 seconds = (170W) GTX 970 @ 1450MHz (PCIe 2.0 x1)
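Multiplying each card's quoted runtime by its board power gives a rough energy-per-task comparison for the OCL4 list above (a back-of-envelope sketch; it assumes sustained draw at the listed power for the whole run):

```python
# Rough energy per GFN21 task (joules = watts x seconds) for the OCL4
# times quoted above; assumes sustained draw at the listed board power.
gpus = {  # name: (watts, seconds per task)
    "RTX 2080": (225, 25000),
    "GTX 1080": (220, 40000),
    "GTX 1070": (145, 53000),
    "GTX 1060 3GB": (125, 72500),
    "GTX 970": (170, 78000),
}
# Sort by total energy, cheapest task first.
for name, (watts, secs) in sorted(gpus.items(), key=lambda kv: kv[1][0] * kv[1][1]):
    megajoules = watts * secs / 1e6
    print(f"{name:13s} {megajoules:5.2f} MJ/task ({megajoules / 3.6:.2f} kWh)")
```

By this measure the RTX 2080 is cheapest per task (about 5.6 MJ, or 1.6 kWh) despite having the highest draw on the list.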

2)
Message boards :
Number crunching :
2019 Challenge Voting
(Message 122609)
Posted 278 days ago by eXaPower
TRP
321
GCW
SR5
ESP
PPS MEGA
GFN21 / GFN20 (any Genefer)
SoB
CUL

3)
Message boards :
Number crunching :
Primegrid and BOINC thoughts on the i99900k vs i79700k
(Message 122377)
Posted 285 days ago by eXaPower
The $589 Basin Falls Skylake CPU, with 44 PCIe 3.0 lanes, might be my next choice for a multi-GPU build. Reasonable value for a multi-GPU system with 4 or more cards.

4)
Message boards :
Number crunching :
2018 Mega Primes
(Message 121777)
Posted 294 days ago by eXaPower
BTW: That whiskey must be lucky. Anything over 100 proof is a true whiskey; 80 proof is nearly water. Where I'm located there are plenty of Imperial IPAs over 12% (24 proof) to choose from.
12%! Where are you from? I've only tasted 8%. I need to move there :) :) :)
Northeast East Coast, where the IPA is true. There are 15 or so breweries that make >10%. >10% Russian stouts are popular too.
Sold, party at your house! PrimeGrid, are we ready to meet up in person? :) :) ;)
Kidding, but maybe a meetup isn't such a bad idea. Anyone for Vegas or Orlando? Say hi to the small world after all, and Mickey and Donald?
I put the odds of you finding the first n=21 at 2/1, since you are the first person to find everything from n=13 all the way to n=20!
You definitely have enough compute power to find the elusive n=21.

5)
Message boards :
Number crunching :
2018 Mega Primes
(Message 121774)
Posted 294 days ago by eXaPower
BTW: That whiskey must be lucky. Anything over 100 proof is a true whiskey; 80 proof is nearly water. Where I'm located there are plenty of Imperial IPAs over 12% (24 proof) to choose from.
12%! Where are you from? I've only tasted 8%. I need to move there :) :) :)
Northeast East Coast, where the IPA is true. There are 15 or so breweries that make >10%. >10% Russian stouts are popular too.

6)
Message boards :
Number crunching :
2018 Mega Primes
(Message 121771)
Posted 294 days ago by eXaPower
OK, so GFN13 thru 20 AND a high score on Donkey Kong - now you're just bragging ;)
At least you can't spell!!!
DAD
Hahaha, I'm working on it. But the Jameson whiskey I've been celebrating with may hamper things a little ;)
Wow, n=13 to n=20 are hard to find. I have 1 Mega find and 3 DCs.
BTW: That whiskey must be lucky. Anything over 100 proof is a true whiskey; 80 proof is nearly water. Where I'm located there are plenty of Imperial IPAs over 12% (24 proof) to choose from.

7)
Message boards :
Number crunching :
An Evaluation of the RTX 2070 for PrimeGrid Applications
(Message 121438)
Posted 300 days ago by eXaPower
Scott - thank you for the write-up on the 2070. Last night I ordered an open-box FE 2070 for $430.
I would add that the Turing RTX 2080 has a memory bottleneck on GFN, similar to Pascal's 1080 vs. 1070.
Just as with the 970/980 or 1070/1080, the RTX 2070 has a better core/bandwidth ratio than a 2080. As we know, Genefer thrives on fast memory bandwidth.
The GV100 (Volta Titan) OCL transform with HBM2 is nearly 2x faster than the 2080's OCL4/5.
Some impressions of my RTX 2080:
- The RTX 2080 overclocks to the moon with minimal difference in power compared to the 1080.
- The RTX 2080 has double the GTX 1080's performance at the same power point.
- The RTX 2080 is 3x faster than a GTX 1060 or GTX 980/970 on PPS Sieve and Genefer.
PPS Sieve runtimes per WU increased compared to the 39x drivers. Power usage is up and down on Pascal and Turing, while Maxwell stays steady.
Is a PPS Sieve app upgrade in order? The 4xx drivers have PPS Sieve GPU usage and power all over the place.
- GT 630 @ 1.2GHz (20W): 1900 s/WU
- GTX 970 @ 1.5GHz (165W avg / 175W max): 494 s/WU
- GTX 1060 3GB @ 2.1GHz (106W / 125W): 493 s/WU
- GTX 1070 @ 2.1GHz (135W / 150W): 333 s/WU
- GTX 1080 @ 2050MHz (165W / 225W): 267 s/WU
- RTX 2080 @ 2085MHz (167W / 225W): 153 s/WU
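Normalizing those s/WU figures gives the relative speedups (a simple ratio sketch from the numbers above; it ignores clock, driver, and PCIe differences):

```python
# Relative PPS Sieve speedup vs. the GTX 970, from the s/WU figures above.
runtimes = {  # seconds per WU
    "GT 630": 1900,
    "GTX 970": 494,
    "GTX 1060 3GB": 493,
    "GTX 1070": 333,
    "GTX 1080": 267,
    "RTX 2080": 153,
}
base = runtimes["GTX 970"]
for name, secs in runtimes.items():
    print(f"{name:13s} {base / secs:4.2f}x")
```

The 970-to-2080 ratio works out to about 3.2x, consistent with the "3x faster" impression above.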
My 1080 completed a PPS Sieve WU in 225 s/WU while the 1070 was at 315 s/WU with the 3xx drivers. Even on the 4xx branch the GTX 1060 3GB and GTX 970 are affected, with runtimes 20-25s slower. Overclocked Turing lacks the PPS Sieve scaling Kepler/Maxwell have. I believe Pascal and Turing are not being utilized completely: GPU usage is all over the place instead of being pinned at 98% like Maxwell/Kepler. Maybe a CUDA update will help the newer cards find their full potential.
GPU usage on the RTX 2080 and GTX 1080 (1070) is currently only 67% with no CPU WUs. With 1 CPU WU instance running, GPU usage tanks even more.
My RTX 2080 boosted out of the box at 2.0GHz. Running n=20 at these clocks had WU completion at 6,920 seconds, choosing mostly the OCL5 transform.
n=19 runtimes are 1,800 seconds. Over 2x faster than a GTX 1080.

8)
Message boards :
General discussion :
HARDWARE ⋮ RUMOR [Confirmed] NVIDIA Launching RTX 2080 Ti
(Message 120507)
Posted 342 days ago by eXaPower
Also, architectural information was released today. Looks like for every FP32 unit there's also an INT32 unit, which was also the case with Volta, but is now being mainstreamed. I know PG primarily uses FP-heavy computations, but are there any INT operations that could be run on the side with a CUDA 10-aware recompile? And speaking of FP: FP64 is still at 1/32, so that shiny new 2080 Ti is only 2x a GTX 580 and half of anything from AMD's Tahiti generation. According to Nvidia, FP64 is apparently "legacy" now; I thought more computational precision was the future?
genefer doesn't use FP32 instructions.
The ocl transform uses FP64 and may run faster with an INT32 unit for address computation.
ocl2, ocl3, ocl4 and ocl5 are number-theoretic transforms and use INT32 instructions.
A good point is the number of streaming multiprocessors:
RTX 2080 Ti: 68 vs. GTX 1080 Ti: 28.
genefer uses local memory to share some data, and there is one local memory per SM. Then more SMs may improve parallelism.
Yves, what you create will be the most efficient app on the BOINC platform, as already seen. Currently no other BOINC application draws a GPU's full power so completely.
I already preordered a 2944 CUDA core (46 SM) RTX 1180 that will be water cooled for optimal clocking.
I'd say 280-300 watts for an RTX 1180 and 300-350 watts for an RTX 1180 Ti. Pascal and Maxwell pull 300W on the INT32 OCL4, and Volta is a 300W-rated GPU.
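For illustration, a number-theoretic transform (the integer-only transform the quoted reply mentions) is an FFT taken over integers modulo a prime, so every butterfly is pure integer multiply/add - which is why those transforms lean on INT32 units. A minimal sketch using a standard NTT-friendly prime (illustrative only; this is not Genefer's actual transform code):

```python
# Minimal iterative number-theoretic transform (NTT): an FFT over the
# integers mod a prime, using only integer arithmetic.
MOD = 998244353   # 119 * 2^23 + 1, a common NTT-friendly prime (assumption)
ROOT = 3          # a primitive root modulo MOD

def ntt(a, invert=False):
    a = list(a)
    n = len(a)            # n must be a power of two dividing MOD - 1
    # Bit-reversal permutation.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # Butterfly passes: integer multiply/add mod MOD, nothing else.
    length = 2
    while length <= n:
        w = pow(ROOT, (MOD - 1) // length, MOD)
        if invert:
            w = pow(w, MOD - 2, MOD)   # modular inverse of the root
        for i in range(0, n, length):
            wn = 1
            for k in range(i, i + length // 2):
                u, v = a[k], a[k + length // 2] * wn % MOD
                a[k] = (u + v) % MOD
                a[k + length // 2] = (u - v) % MOD
                wn = wn * w % MOD
        length <<= 1
    if invert:
        n_inv = pow(n, MOD - 2, MOD)
        a = [x * n_inv % MOD for x in a]
    return a

x = [5, 7, 11, 13]
assert ntt(ntt(x), invert=True) == x   # forward + inverse round-trips
```

Genefer's real transforms are far more elaborate, but the key property is visible here: the hot loop is all INT32-friendly modular arithmetic.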

9)
Message boards :
Number crunching :
2018 Tour de Primes
(Message 115008)
Posted 553 days ago by eXaPower
The bar just got raised for the red jersey.
Robish has found a GFN18.
vaughan took the "virtual" red jersey just 9 hours into the race and held it for almost 14 days. How long will Robish keep it now? /JeppeSN
Nice job Robish! Don't get too comfortable - there's a lot of compute on n=18 looking to find another before the month is over.
All the big hitters running SR5 for the red jersey have now been shattered by Genefer.
I'd switch to one of the ESP / TRP / 321 WUs for the remainder of TdP. Those are ripe for a prime and the red jersey.

10)
Message boards :
Number crunching :
2018 Tour de Primes
(Message 114868)
Posted 555 days ago by eXaPower
Slower machines running a 0/0 cache are still able to find mega primes!
http://www.primegrid.com/workunit.php?wuid=556109615
49090656^131072+1 (1008075 digits)
A 384-core Kepler GPU overtaking a 2304-core AMD Ellesmere.
After being the Mega n=17 DCer (Feb 2nd) on a much faster GPU, this find is that much sweeter.
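The digit count quoted above can be double-checked with floor(n * log10(b)) + 1 (the "+1" in the number itself can't change the count here, since the leading digits are untouched):

```python
# Verify the digit count of 49090656^131072 + 1 quoted above.
import math

b, n = 49090656, 131072
digits = math.floor(n * math.log10(b)) + 1
print(digits)  # 1008075
```

The fractional part of n * log10(b) is far from an integer boundary, so double-precision log10 is safely accurate for this check.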
