Message boards : Generalized Fermat Prime Search : GFN Status by nRange (17-22)
Michael Goetz (Volunteer moderator, Project administrator, Project scientist)
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

IMPORTANT: Overclocking, including factory overclocking, on Nvidia GPUs is very strongly discouraged. Even if your GPU can run other tasks without difficulty, it may be unable to run GFN tasks when overclocked.
We now have nine different GFN searches running. This post provides details about what's being searched.
GFNs can be searched by both CPUs and GPUs. There are several variations of both the CPU and GPU transforms, which vary greatly in speed and b-limits. CPU transforms come in two flavors:
- 64 bit: This includes the f64 (previously called "default"), SSE2, SSE4, AVX, FMA3, and FMA4 transforms. The fastest available transform is selected automatically by the CPU app.
- 80 bit: This is the x87 transform. It has a very high b-limit, but is much slower than the 64 bit transforms.
A single CPU app (for each platform) is used that can run all of the transforms.
Note that both 64 bit and 80 bit transforms will run on both 32 bit and 64 bit CPUs.
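As a rough illustration, the automatic selection can be thought of as walking a fastest-first priority list and taking the first transform the CPU supports. The ordering and feature-flag names below are assumptions for illustration, not Genefer's actual internals:

```python
# Hypothetical sketch of "the fastest available transform is selected
# automatically by the CPU app". The fastest-first ordering is assumed.
CPU_TRANSFORMS_FASTEST_FIRST = ["FMA4", "FMA3", "AVX", "SSE4", "SSE2", "f64"]

def pick_cpu_transform(cpu_features: set) -> str:
    for transform in CPU_TRANSFORMS_FASTEST_FIRST:
        # f64 is the portable fallback and is always available.
        if transform == "f64" or transform in cpu_features:
            return transform
    return "f64"
```

On an AVX-capable CPU without FMA support this would select "AVX"; on a CPU reporting no SIMD extensions at all it falls back to "f64".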
GPU transforms come in 7 varieties, including two that are deprecated (ordered from fastest to slowest):
- CUDA: CUDA has been deprecated and is no longer available from the server.
- OCL2 (old version): The old OCL2 transform has been deprecated and is no longer available from the server.
- OCL: This is the fastest GPU transform.
  - Requires a GPU with double precision hardware.
  - Runs on both Nvidia and ATI/AMD GPUs.
  - Usually will be skipped on Nvidia Maxwell or newer GPUs.
- OCL4: OCL4 is slower than OCL, but has a higher b-limit. (Previously called "OCL4 (low)".)
  - OCL4 does NOT require a GPU with double precision hardware.
  - Runs on both Nvidia and ATI/AMD GPUs.
- OCL5: OCL5 is slower than OCL4, but has a higher b-limit.
  - OCL5 does NOT require a GPU with double precision hardware.
  - Runs on both Nvidia and ATI/AMD GPUs.
  - Usually will be skipped on Nvidia Maxwell or newer GPUs.
- OCL3: OCL3 is slower than OCL5, but has a higher b-limit.
  - OCL3 does NOT require a GPU with double precision hardware.
  - Runs on both Nvidia and ATI/AMD GPUs.
- OCL2: OCL2 is the GPU equivalent of the 80-bit x87 transform on CPUs. It has a very high b-limit, but is slower than the other GPU transforms.
  - Name disambiguation: The current "OCL2" is the old "OCL4 (high)". Only the name has changed. This is NOT the original OCL2 transform.
  - OCL2 does NOT require a GPU with double precision hardware.
  - On an Nvidia GPU, CC 2.0 (Compute Capability) is required. This generally means at least a GTX 4xx series GPU, although there are some exceptions. If you're not sure, you can check Nvidia's CUDA GPU list and Nvidia's Legacy CUDA GPU list. There's no restriction on ATI/AMD GPUs.
  - Runs on both Nvidia and ATI/AMD GPUs.
  - The current OCL2 (previously called OCL4 high) replaces the original OCL2, which is obsolete. The current OCL2 is significantly faster than the original OCL2 and also has an even higher b-limit.
On each platform, there is one app for OCL which combines all the OCL transforms. The combined OCL app will use the fastest available transform (according to GPU type and blimit), which will usually be in this order:
Nvidia "Maxwell" (or newer) GPUs: OCL4 > OCL3 > OCL2
All other GPUs (All AMD GPUs and older Nvidia GPUs): OCL > OCL4 > OCL5 > OCL3 > OCL2
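The selection rule above can be sketched as follows: try the transforms in the order given for the GPU type and use the first one whose b-limit covers the candidate. This is my own illustration, and the b-limit values are placeholders, not the real per-transform limits:

```python
# Sketch of the combined OCL app's transform selection, per the order above.
MAXWELL_ORDER = ["OCL4", "OCL3", "OCL2"]
OTHER_ORDER = ["OCL", "OCL4", "OCL5", "OCL3", "OCL2"]

def pick_gpu_transform(maxwell_or_newer: bool, has_double_precision: bool,
                       b: int, b_limits: dict) -> str:
    order = MAXWELL_ORDER if maxwell_or_newer else OTHER_ORDER
    for t in order:
        if t == "OCL" and not has_double_precision:
            continue  # OCL requires double-precision hardware
        if b <= b_limits[t]:
            return t  # fastest transform whose b-limit covers this candidate
    raise ValueError("b is beyond every transform's b-limit")

# Placeholder b-limits, increasing from fastest to slowest as the post says.
limits = {"OCL": 1_000_000, "OCL4": 2_000_000, "OCL5": 4_000_000,
          "OCL3": 4_200_000, "OCL2": 400_000_000}
```

Note how a single-precision non-Maxwell GPU skips OCL and lands on OCL4, which matches the behavior discussed later in this thread.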
GFN15 (32768) status/history has been moved to another thread. Click here to jump directly to GFN15.
GFN16 (65536) status/history has been moved to another thread. Click here to jump directly to GFN16.
n=17 131072 (Low): (as of 2/19/2018)
CPU transform: 80 bit
GPU transform: OCL2 (CC2.0 or better required on Nvidia GPUs)
Optional app via app_info: n/a
Leading edge: 11,789,116
PRPNet status: n/a
BOINC status: ACTIVE
Status: Nominal
Milestone (6/16/2015): Project started on BOINC.
Milestone (7/2/2015): Prime 909548^131072+1 discovered by Van Zimmerman.
Milestone (7/6/2015): Project finished on BOINC at OCL blimit.
Milestone (10/16/2015): n=17 restarted on BOINC using OCL3.
Milestone (10/27/2015): b=~1,620,000: CPU app transitions from 64 bit to 80 bit.
Milestone (10/27/2015): Prime 1560730^131072+1 discovered by Scott Brown.
Milestone (11/5/2015): Prime 1660830^131072+1 discovered by Sahib.
Milestone (11/10/2015): Prime 1717162^131072+1 discovered by zzuupp.
Milestone (11/10/2015): Prime 1722230^131072+1 discovered by Honza.
Milestone (11/17/2015): Prime 1766192^131072+1 discovered by XSmeagolX.
Milestone (12/12/2015): Prime 1955556^131072+1 discovered by Tabaluga.
Milestone (1/11/2016): Prime 2194180^131072+1 discovered by SosRud.
Milestone (1/20/2016): Prime 2280466^131072+1 discovered by Scott Brown.
Milestone (2/5/2016): Prime 2639850^131072+1 discovered by 288larsson.
Milestone (3/28/2016): Switched to new OCL4 (low) (combined) GPU app.
Milestone (4/2/2016): Prime 3450080^131072+1 discovered by Scott Brown.
Milestone (4/15/2016): Prime 3615210^131072+1 discovered by Scott Brown.
Milestone (5/1/2016): Prime 3814944^131072+1 discovered by Scott Brown.
Milestone (5/16/2016): b=4,045,223: Switch from OCL4 (low) to OCL5 (most GPUs) or OCL3 (Nvidia Maxwell GPUs).
Milestone (5/18/2016): Prime 4085818^131072+1 discovered by Scott Brown.
Milestone (6/4/2016): Prime 4329134^131072+1 discovered by puh32.
Milestone (7/20/2016): Prime 4893072^131072+1 discovered by qbrent.
Milestone (7/29/2016): Prime 4974408^131072+1 discovered by Drainx1.
Milestone (8/27/2016): Prime 5326454^131072+1 discovered by Scott Brown.
Milestone (9/4/2016): Prime 5400728^131072+1 discovered by Scott Brown.
Milestone (9/12/2016): Prime 5471814^131072+1 discovered by rutherfordium.
Milestone (9/24/2016): Prime 5586416^131072+1 discovered by Gator 13.
Milestone (10/8/2016): Prime 5734100^131072+1 discovered by PDW.
Milestone (10/21/2016): Prime 5877582^131072+1 discovered by tng*.
Milestone (12/20/2016): Prime 6403134^131072+1 discovered by parabol.
Milestone (12/20/2016): Prime 6391936^131072+1 discovered by Jess.
Milestone (1/20/2017): Prime 6705932^131072+1 discovered by Scott Brown.
Milestone (3/14/2017): Prime 7379442^131072+1 discovered by Scott Brown.
Milestone (4/26/2017): Prime 7832704^131072+1 discovered by 288larsson.
Milestone (4/29/2017): Prime 7858180^131072+1 discovered by Scott Brown.
Milestone (5/5/2017): Prime 7926326^131072+1 discovered by 288larsson.
Milestone (5/22/2017): b=8,076,498: Switch from OCL5 to OCL3.
Milestone (5/31/2017): Prime 8150484^131072+1 discovered by 288larsson.
Milestone (6/25/2017): Switched from OCL3 to OCL2 transform.
Milestone (8/28/2017): Prime 8704114^131072+1 discovered by E. T. Drumm.
Milestone (9/9/2017): Prime 8770526^131072+1 discovered by tng*.
Milestone (11/9/2017): Prime 9240606^131072+1 discovered by DeleteNull.
Milestone (11/15/2017): Prime 9419976^131072+1 discovered by DeleteNull.
Milestone (11/17/2017): Prime 9785844^131072+1 discovered by David Steel.
Milestone (11/18/2017): Prime 9907326^131072+1 discovered by branja.
Milestone (11/18/2017): Prime 10037266^131072+1 discovered by Daniel.
Milestone (11/19/2017): Prime 10368632^131072+1 discovered by nenym.
Milestone (11/19/2017): Prime 10453790^131072+1 discovered by HKSteve.
Milestone (11/20/2017): Prime 10765720^131072+1 discovered by Ross*.
Milestone (11/26/2017): Prime 10921162^131072+1 discovered by zunewantan.
Milestone (11/28/2017): Prime 10962066^131072+1 discovered by k6xt.
Milestone (11/29/2017): Prime 10994460^131072+1 discovered by Crunchi.
Milestone (12/2/2017): Prime 11036888^131072+1 discovered by Lumiukko.
Milestone (12/19/2017): Prime 11195602^131072+1 discovered by DoctorNow.
Milestone (12/30/2017): Prime 11267296^131072+1 discovered by rvoskoboynikov.
Milestone (1/1/2018): Prime 11292782^131072+1 discovered by Renix1943.
Milestone (2/18/2018): Prime 11778792^131072+1 discovered by Brucifer.
Upcoming event: b=42,597,774: End of project when 131072 (Low) reaches 131072 (Mega).
n=17 131072 (Mega): (as of 2/21/2018)
CPU transform: 80 bit
GPU transform: OCL2 (CC2.0 or better required on Nvidia GPUs)
Optional app via app_info: n/a
Leading edge: 49,565,626
PRPNet status: n/a
BOINC status: ACTIVE
Status: Nominal.
Comment: This search started at b=42,597,774, which is the lowest b such that b^131072+1 has at least one million digits. This search is specifically designed to find megaprimes.
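The million-digit threshold can be checked with the standard digit-count formula, digits = floor(2^n * log10(b)) + 1; the "+1" in the GFN itself never changes the count, since b^131072 is even and so never equals 10^k - 1. A minimal sketch:

```python
import math

def gfn_digits(b: int, n: int) -> int:
    """Decimal digits of the Generalized Fermat Number b^(2^n) + 1."""
    return math.floor((1 << n) * math.log10(b)) + 1
```

For n=17, gfn_digits(42_597_774, 17) comes out at one million digits, while the previous even b, 42,597,772, falls just short (it is the last b of the sub-mega OCL2 range quoted later in this thread).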
Milestone (10/16/2015): Project started on BOINC.
Milestone (11/9/2015): MegaPrime 42654182^131072+1 discovered by Nortech. (official announcement)
Milestone (2/24/2016): MegaPrime 43163894^131072+1 discovered by dem0707. (official announcement)
Milestone (2/25/2016): MegaPrime 43165206^131072+1 discovered by Freezing. (official announcement)
Milestone (3/28/2016): Switched to new OCL4 (high) (combined) app.
Milestone (10/5/2016): MegaPrime 44049878^131072+1 discovered by Alexander Falk. (official announcement)
Milestone (10/14/2016): MegaPrime 44085096^131072+1 discovered by Alejandro V. Mena. (official announcement)
Milestone (11/26/2016): MegaPrime 44330870^131072+1 discovered by Doc No. (official announcement)
Milestone (12/11/2016): MegaPrime 44438760^131072+1 discovered by sangis43. (official announcement)
Milestone (2/1/2017): MegaPrime 44919410^131072+1 discovered by yank. (official announcement)
Milestone (2/23/2017): MegaPrime 45315256^131072+1 discovered by Williamd007. (official announcement)
Milestone (3/18/2017): MegaPrime 45570624^131072+1 discovered by yank. (official announcement)
Milestone (4/26/2017): MegaPrime 46077492^131072+1 discovered by Novanglus. (official announcement)
Milestone (5/29/2017): MegaPrime 46371508^131072+1 discovered by Mektacular. (official announcement)
Milestone (5/31/2017): MegaPrime 46385310^131072+1 discovered by mattozan. (official announcement)
Milestone (6/4/2017): MegaPrime 46413358^131072+1 discovered by sagiil. (official announcement)
Milestone (7/9/2017): MegaPrime 46730280^131072+1 discovered by dh1saj. (official announcement)
Milestone (7/10/2017): MegaPrime 46736070^131072+1 discovered by tng*. (official announcement)
Milestone (7/15/2017): MegaPrime 46776558^131072+1 discovered by Dingo. (official announcement)
Milestone (8/30/2017): MegaPrime 47090246^131072+1 discovered by rvoskoboynikov. (official announcement)
Milestone (9/14/2017): MegaPrime 47179704^131072+1 discovered by AndreiO. (official announcement)
Milestone (1/9/2018): MegaPrime 48273828^131072+1 discovered by Johnny Rotten.
Milestone (1/18/2018): MegaPrime 48370248^131072+1 discovered by Gilbert Kalus.
Milestone (2/2/2018): MegaPrime 48643706^131072+1 discovered by Daniel.
Milestone (2/10/2018): MegaPrime 49038514^131072+1 discovered by Plšák Ráďa.
Milestone (2/11/2018): MegaPrime 49090656^131072+1 discovered by eXaPower.
Milestone (2/15/2018): MegaPrime 49225986^131072+1 discovered by polarbeardj.
Milestone (2/15/2018): MegaPrime 49243622^131072+1 discovered by mackerel.
Milestone (2/17/2018): MegaPrime 49331672^131072+1 discovered by [AF>Libristes]cguillem.
Milestone (2/18/2018): MegaPrime 49397682^131072+1 discovered by Reggie.
Milestone (2/20/2018): MegaPrime 49530004^131072+1 discovered by DeleteNull.
Upcoming event: b=~65,450,000: CPUs no longer usable.
Upcoming event: b=400,000,000: End of project when OCL2 limit is reached.
n=18 262144: (as of 2/15/2018)
CPU transform: 80 bit
GPU transform: OCL5 or OCL3
Optional app via app_info: n/a
Leading edge: 4,256,540
PRPNet status: retired
BOINC status: ACTIVE
Status: Nominal.
Comment: GeneferOCL will use whichever GPU transform is fastest. On AMD and older Nvidia GPUs, this is usually OCL5. On newer Nvidia GPUs (Maxwell or later), this is usually OCL3.
Milestone (6/9/2010): Project started on PRPNet.
Milestone (2/8/2011): MegaPrime 145310^262144+1 discovered by MiHost on PRPNet. (official announcement)
Milestone (3/8/2011): MegaPrime 40734^262144+1 discovered by syama on PRPNet. (official announcement)
Milestone (10/29/2011): MegaPrime 361658^262144+1 discovered by MichelJohnson on PRPNet. (official announcement)
Milestone (1/8/2012): Moved from PRPNet to BOINC.
Milestone (1/18/2012): MegaPrime 525094^262144+1 discovered by KWSN Raw Data. (official announcement)
Milestone (2/12/2012): MegaPrime 676754^262144+1 discovered by Usucapio Libertatis. (official announcement)
Milestone (2/23/2012): Moved from BOINC to PRPNet.
Milestone (4/19/2012): MegaPrime 773620^262144+1 discovered by syama on PRPNet. (official announcement)
Milestone (7/6/2015): Double check of PRPNet work started on BOINC.
Milestone (8/2/2015): Double check of PRPNet work complete.
Milestone (11/4/2015): Moved to BOINC from PRPNet starting with double check of recent PRPNet tests.
Milestone (12/5/2015): b=~1,350,000: CPU app transitions from 64 bit to 80 bit.
Milestone (12/6/2015): b=1,256,802: Double check ends and new work begins.
Milestone (2/16/2016): MegaPrime 1415198^262144+1 discovered by boss. (official announcement)
Milestone (3/5/2016): MegaPrime 1488256^262144+1 discovered by 288larsson. (official announcement)
Milestone (3/28/2016): Switched to new OCL4 (low) (combined) GPU app.
Milestone (5/4/2016): MegaPrime 1615588^262144+1 discovered by brinktastee. (official announcement)
Milestone (8/10/2016): MegaPrime 1828858^262144+1 discovered by brinktastee. (official announcement)
Milestone (11/24/2016): MegaPrime 2042774^262144+1 discovered by motsu. (official announcement)
Milestone (2/24/2017): MegaPrime 2514168^262144+1 discovered by wdethomas. (official announcement)
Milestone (3/11/2017): MegaPrime 2611294^262144+1 discovered by Tabaluga. (official announcement)
Milestone (3/22/2017): MegaPrime 2676404^262144+1 discovered by DeleteNull. (official announcement)
Milestone (5/11/2017): b=2,860,404: Switch from OCL4 to OCL5 or OCL3.
Milestone (6/30/2017): MegaPrime 3060772^262144+1 discovered by No.15. (official announcement)
Milestone (10/30/2017): MegaPrime 3547726^262144+1 discovered by Scott Brown. (official announcement)
Milestone (11/16/2017): MegaPrime 3596074^262144+1 discovered by [PST]Howard. (official announcement)
Milestone (12/3/2017): MegaPrime 3673932^262144+1 discovered by No.15. (official announcement)
Milestone (1/10/2018): MegaPrime 3853792^262144+1 discovered by rjs5. (official announcement)
Milestone (1/27/2018): MegaPrime 3933508^262144+1 discovered by Freezing. (official announcement)
Milestone (2/15/2018): MegaPrime 4246258^262144+1 discovered by Robish. (official announcement)
Upcoming event: b=5,710,946: Switch from OCL5 to OCL3.
Upcoming event: b=5,931,641: Switch from OCL3 to OCL2.
Upcoming event: b=~54,080,000: CPUs no longer usable.
Upcoming event: b=400,000,000: End of project when OCL2 limit is reached.
n=19 524288: (as of 1/16/2018)
CPU transform: 80 bit
GPU transform: OCL4
Optional app via app_info: n/a
Leading edge: 1,885,444
PRPNet status: retired
BOINC status: ACTIVE
Status: Nominal.
Milestone (6/9/2010): Project started on PRPNet.
Milestone (11/19/2011): MegaPrime 75898^524288+1 discovered by Michael Goetz on PRPNet. (official announcement)
Milestone (2/23/2012): Project moved from PRPNet to BOINC.
Milestone (6/15/2012): MegaPrime 341112^524288+1 discovered by Peyton Hayslette. (official announcement)
Milestone (6/20/2012): MegaPrime 356926^524288+1 discovered by bherbihyewrbg. (official announcement)
Milestone (8/8/2012): MegaPrime 475856^524288+1 discovered by ragnarag. (official announcement)
Milestone (11/29/2012): Moved from BOINC to PRPNet at end of CUDA limit.
Milestone (8/2/2015): Double check of PRPNet work started on BOINC.
Milestone (9/15/2015): Double check of PRPNet work completed.
Milestone (11/4/2015): Moved to BOINC from PRPNet starting with double check of recent PRPNet tests.
Milestone (12/20/2015): b=920,574: Double check ends and new work begins.
Milestone (3/28/2016): Switched to new OCL4 (low) (combined) GPU app.
Milestone (6/1/2016): b=~1,100,000: CPU app transitions from 64 bit to 80 bit.
Milestone (1/15/2018): MegaPrime 1880370^524288+1 discovered by Scott Brown. (official announcement)
Upcoming event: b=2,022,611: Switch from OCL4 to OCL5 or OCL3.
Upcoming event: b=4,038,249: Switch from OCL5 to OCL3.
Upcoming event: b=4,194,304: Switch from OCL3 to OCL2.
Upcoming event: b=~43,620,000: CPUs no longer usable.
Upcoming event: b=400,000,000: End of project when OCL2 limit is reached.
n=20 1048576: (as of 1/1/2018)
CPU transform: 64 bit
GPU transform: OCL4
Optional app via app_info: n/a
Leading edge: 956,836
PRPNet status: n/a
BOINC status: ACTIVE
Status: Nominal.
Milestone (11/29/2012): Project started on BOINC.
Milestone (6/16/2015): Project finished on BOINC (reached OCL limit).
Milestone (11/4/2015): Restarted on BOINC using OCL3.
Milestone (3/28/2016): Switched to new OCL4 (low) (combined) GPU app.
Milestone (7/9/2017): b=~900,000: CPU app transitions from 64 bit to 80 bit.
Milestone (8/29/2017): MegaPrime 919444^1048576+1 discovered by Van Zimmerman. (official announcement)
Upcoming event: b=1,430,202: Switch from OCL4 to OCL5 or OCL3.
Upcoming event: b=2,855,473: Switch from OCL5 to OCL3.
Upcoming event: b=2,965,821: Switch from OCL3 to OCL2.
Upcoming event: b=~36,300,000: CPUs no longer usable.
Upcoming event: b=400,000,000: End of project when OCL2 limit is reached.
n=21 2097152: (as of 1/1/2018)
CPU transform: 64 bit
GPU transform: OCL (Double Precision GPU REQUIRED), or OCL4
Optional app via app_info: n/a
Leading edge: 193,426
PRPNet status: n/a
BOINC status: ACTIVE
Status: Nominal.
Comment: GeneferOCL will use whichever GPU transform is fastest. On AMD and older Nvidia GPUs, this is usually OCL. On newer Nvidia GPUs (Maxwell or later), this is usually OCL4.
Milestone (9/15/2015): Project started on BOINC.
Milestone (3/28/2016): Switched to new combined OCL app which allows for faster OCL4 (low) to be used automatically on "Maxwell" GPUs.
Upcoming event: b=~660,000: Switch from OCL to OCL4.
Upcoming event: b=~730,000: CPU app transitions from 64 bit to 80 bit.
Upcoming event: b=1,011,306: Switch from OCL4 to OCL5 or OCL3.
Upcoming event: b=2,019,124: Switch from OCL5 to OCL3.
Upcoming event: b=2,097,152: Switch from OCL3 to OCL2.
Upcoming event: b=~29,740,000: CPUs no longer usable.
Upcoming event: b=400,000,000: End of project when OCL2 limit is reached.
n=22 4194304: (as of 1/1/2018)
CPU transform: none
GPU transform: OCL (Double Precision GPU REQUIRED) or OCL4
Optional app via app_info: n/a
Leading edge: 109,798
PRPNet status: n/a
BOINC status: ACTIVE
Status: Nominal. This search was specifically designed to find the world's largest known prime, but a prime found here can now be at best the third largest known prime.
Comment: GeneferOCL will use whichever GPU transform is fastest. On AMD and older Nvidia GPUs, this is usually OCL. On newer Nvidia GPUs (Maxwell or later), this is usually OCL4.
Milestone (2/24/2012): Project started on BOINC.
Milestone (11/3/2015): CPU tasks enabled.
Milestone (3/28/2016): Switched to new combined OCL app which allows for faster OCL4(low) to be used automatically on "Maxwell" GPUs.
Milestone (2/25/2017): CPU tasks disabled. (Too slow on most CPUs.)
Upcoming event: b=~540,000: Switch from OCL to OCL4.
Upcoming event: b=~610,000: CPU app transitions from 64 bit to 80 bit.
Upcoming event: b=715,101: Switch from OCL4 to OCL5 or OCL3.
Upcoming event: b=1,427,737: Switch from OCL5 to OCL3.
Upcoming event: b=1,482,910: Switch from OCL3 to OCL2.
Upcoming event: b=~24,100,000: CPUs no longer usable.
Upcoming event: b=400,000,000: End of project when OCL2 limit is reached.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


I've been running 32768 tasks just fine, about four minutes each (give or take), but it appears all the 65536 units return errors. Error (7) at runtime (which isn't described in stderr) and error (161) at completion (which is upload failure, for some reason).
Here's an example.
Both machines running GFN tasks are using (aging) XFX HD 6850 GPUs. I won't run on CPU, as one machine is a bit of a dinosaur and the other is already doing something else.
Stderr asks me to ensure my GPU is capable. Well, yes, it is, according to every other OCL task I've ever run on any project with decent developers, so...
Did I miss something?
(edit: removed signature as I've now got so many badges it breaks the CSS, sorry!!)

Honza (Volunteer moderator, Volunteer tester, Project scientist)
Joined: 15 Aug 05 Posts: 1647 ID: 352 Credit: 1,931,980,611 RAC: 402,317

Stderr asks me to ensure my GPU is capable. Well, yes, it is, according to every other OCL task I've ever run on any project with decent developers, so...
Did I miss something?
My guess would be that 32768 tasks are running using OCL3 whereas 65536 tasks are using OCL.
As per Mike's post yesterday here:
Double precision is NOT required on the GPUs for either OCL2 or OCL3. That not only means that a wider variety of ATI/AMD GPUs may be used, but it also means if you have a GTX TITAN you can run it in its higher clock speed mode rather than double precision mode.
HD 6850 has no double precision.
____________
My stats
Badge score: 1*1 + 5*2 + 8*13 + 9*2 + 12*3 = 169  

Michael Goetz (Volunteer moderator, Project administrator, Project scientist)
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Stderr asks me to ensure my GPU is capable. Well, yes, it is, according to every other OCL task I've ever run on any project with decent developers, so...
Did I miss something?
My guess would be that 32768 tasks are running using OCL3 whereas 65536 tasks are using OCL.
As per Mike's post yesterday here:
Double precision is NOT required on the GPUs for either OCL2 or OCL3. That not only means that a wider variety of ATI/AMD GPUs may be used, but it also means if you have a GTX TITAN you can run it in its higher clock speed mode rather than double precision mode.
HD 6850 has no double precision.
Correct. You currently need a double-precision capable card for n=16, n=21, and n=22. Other n-ranges use OCL2 or OCL3, which do not need double precision.
(At least on BOINC. PRPNet isn't set up to use OCL2 or OCL3.)
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


Thank you for deleting/moving my post without notification. I also appreciate the use, in your explanation(s), of mathematical terminology with which I'm not explicitly familiar. If the answer to my question really must reference my inability to differentiate between different "types" of OCL dependencies, well, I'm terribly sorry for missing the part where the Preferences indicate I need a specific GPU (in this case, the significantly more useful 6950/70 or better...) Actually, I'm not sorry, because the Preferences page says nothing of the sort.
Don't worry, though, I read up on it. I wouldn't dream of asking a stupid question regarding something that's already been discussed. Believe me, I used to moderate the official RuneScape forums.
I just wasn't specifically aware that these applications were referred to (at least internally) by names differing from their <user-friendly name/>. Oops!
I'm aware my equipment is old and outdated. Seriously, I wish I had DP-capable components in all of my machines, but I simply cannot afford it at this time. I don't want to resurrect my GTX 460 either, since the central heating is currently turned on and that effectively renders such a card useless, y'know...
Please make an effort to better explain specific application requirements in the "Preferences" area rather than simply placing a check box under "OpenCL." You can't just expect people to know what is meant by "n=16" or "n=15" when it only appears in size-10 font in the Preferences menu and not at all in the BOINC Manager (indeed, the naming convention makes it appear as a version/build number, not an app-specific parameter.) ;)
It does not help that ALL of these tasks are simply lumped under the "Genefer" subproject. I obviously understand the rationale from an administrative standpoint, but I'm currently merely an observational project contributor, rather than a somewhat-serious recreational mathematician, as I once was... what seems like ages ago...
I can't help but think that perhaps, if I were as involved in PrimeGrid now as I was in 2011-2012 or so, this exchange would not have happened. Apologies.

Michael Goetz (Volunteer moderator, Project administrator, Project scientist)
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Thank you for deleting/moving my post without notification.
First of all, the post immediately before yours specifically said I'd be moving posts. Also, the forum was supposed to have emailed you automatically informing you that the post was moved. Did that not happen for some reason? (Spam filter, bad email address, something else?) (Note that I received an email for my post when it was moved.)
I also appreciate the use, in your explanation(s), of mathematical terminology with which I'm not explicitly familiar. If the answer to my question really must reference my inability to differentiate between different "types" of OCL dependencies, well, I'm terribly sorry for missing the part where the Preferences indicate I need a specific GPU (in this case, the significantly more useful 6950/70 or better...) Actually, I'm not sorry, because the Preferences page says nothing of the sort.
If you have a suggestion as to how to better convey all of this information, we're listening. The preferences page says this:
AMD/ATI GPUs must have double precision floating point hardware for n=15, n=16, n=21 and n=22.
Seems pretty clear to me, considering that this is right above where the Genefer apps are listed with n=15, n=16, etc.
And, yes, I do consider it reasonable to expect users to have some understanding of the capabilities of the hardware components they've purchased. If not, hopefully they'll come here and ask why it's not working.
Please note that this is all brand new, and things are evolving rapidly. This DP/no-DP situation is only temporary; we plan on combining the three OCL apps together so that it would automatically use the OCL3 transform instead of OCL if all that's available is a single precision GPU. But that's weeks or months away, unfortunately.
In fact, that information has since changed, and n=15 is now running OCL3 which doesn't require double precision. The web page has been changed accordingly.
Please make an effort to better explain specific application requirements in the "Preferences" area rather than simply placing a check box under "OpenCL."
See above. It has always listed that requirement.
size-10 font
Chances are good that your eyes are much better than mine. Ctrl+ is your friend.
It does not help that ALL of these tasks are simply lumped under the "Genefer" subproject.
I honestly believe that calling them something other than "Genefer" would make it more confusing.
Rather than complaining about all the things that were wrong, do you have any specific ideas about how to make things better? There's a LOT of options, a lot of subprojects, a lot of apps, and you're completely right that it's easy to get confused. Believe me, we all have cheat sheets to keep it all straight.
And somehow I've got to stuff all that information on the preferences webpage such that enough information is available to those who need details while not overwhelming those who don't.
And also not make it a worse "Wall of Text" than it already is.
I'm very serious about asking you for suggestions: if you have an idea about how to make it better, I'm listening.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Roger (Volunteer moderator, Project administrator, Volunteer developer, Volunteer tester, Project scientist)
Joined: 27 Nov 11 Posts: 964 ID: 120786 Credit: 205,160,575 RAC: 67,895

Brand new ranges have opened up for GFN n=15, 16 and 17. Well done Admins! That couldn't have been easy.
I ran the stats on these new ranges for the expected number of primes for each transform up to its b-limit:
GPU OCL3
n = 15, b = 10,428,486 thru 16,777,216 expect 68 primes
n = 16, b = 4,205,312 thru 11,863,283 expect 82 primes
n = 17, b = 1,322,216 thru 8,388,608 expect 38 primes
GPU OCL2
n = 15, b = 16,777,216 thru 95,520,000 expect 786 primes
n = 16, b = 11,863,283 thru 81,670,000 expect 680 primes
n = 17, b = 8,388,608 thru 42,597,772 expect 169 primes
n = 17, b = 42,597,774 thru 60,430,000 expect 84 Mega primes
CPU 64-bit
n = 17, b = 1,322,216 thru 1,620,000 expect 1.76 primes
CPU 80-bit
n = 15, b = 10,428,486 thru 99,460,000 expect 892 primes
n = 16, b = 4,205,312 thru 79,010,000 expect 737 primes
n = 17, b = 1,620,000 thru 64,150,000 expect 308 primes (101 of them Mega)
That should give you an idea how large some of these ranges are.
OCL3 ranges should all be complete in 6 months, OCL2 in a couple of years.
The residual bit of the CPU 80-bit range beyond the reach of OCL2 will take a few more years.

mackerel (Volunteer tester)
Joined: 2 Oct 08 Posts: 1814 ID: 29980 Credit: 228,261,256 RAC: 200,502

CPU 64-bit
n = 17, b = 1,322,216 thru 1,620,000 expect 1.76 primes
Aww, so few? Any indication of how many candidates there are in that range? I was rather hoping for more than 2 primes!
Using some rough estimating based on current rate of b increase that'll take a couple of months or so.  


CPU 64-bit
n = 17, b = 1,322,216 thru 1,620,000 expect 1.76 primes
Aww, so few? Any indication of how many candidates there are in that range? I was rather hoping for more than 2 primes!
The known primes are b = 62722, 130816, 228188, 386892, 572186, 689186, 909548, 1063730, 1176694, 1361244, 1372930.
You can extrapolate from them.
 

mackerel (Volunteer tester)
Joined: 2 Oct 08 Posts: 1814 ID: 29980 Credit: 228,261,256 RAC: 200,502

I hadn't thought to look for already known primes in that range, since there was an earlier statement that this is a new range being tested. The largest two b would be in the range we're about to test. So are the 1.76 primes expected new ones in addition to those?
It also raises the question, has that range been covered already, or were they found by other means? That is, other than specifically searching in the way we are. I also note they were discovered a long time ago.  

Roger (Volunteer moderator, Project administrator, Volunteer developer, Volunteer tester, Project scientist)
Joined: 27 Nov 11 Posts: 964 ID: 120786 Credit: 205,160,575 RAC: 67,895

I hadn't thought to look for already known primes in that range, since there was an earlier statement that this is a new range being tested. The largest two b would be in the range we're about to test. So are the 1.76 primes expected new ones in addition to those?
No. This is the average expected number. You can calculate for yourself:
http://yves.gallot.pagesperso-orange.fr/primes/stat.html
I am guessing Heuer searched up to 1,400,000 based on his reservation:
http://yves.gallot.pagesperso-orange.fr/primes/status.html
If the range before 1.4M is a double check then the real fresh range is after that point.
CPU 64-bit
n = 17, b = 1,400,000 thru 1,620,000 expect 1.3 primes
It also raises the question, has that range been covered already, or were they found by other means? That is, other than specifically searching in the way we are. I also note they were discovered a long time ago.
You can see the way they worked in Heuer's time is essentially the same as now, sieving then using a program to test each candidate:
http://yves.gallot.pagesperso-orange.fr/primes/how.html
Was before my time though.  
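As a toy illustration of why the sieve-then-test approach works (my own sketch, not the project's sieve code): any odd prime factor of b^(2^n)+1 must be congruent to 1 modulo 2^(n+1), so only a thin slice of small primes ever needs to be tried against each candidate b.

```python
def sieve_gfn(n: int, bs, small_primes):
    """Remove candidates b whose b^(2^n) + 1 has a small prime factor."""
    # Any prime p dividing b^(2^n) + 1 satisfies p ≡ 1 (mod 2^(n+1)):
    # b has multiplicative order 2^(n+1) mod p, which must divide p - 1.
    mod = 1 << (n + 1)
    usable = [p for p in small_primes if p % mod == 1]
    survivors = []
    for b in bs:
        # p divides b^(2^n) + 1 exactly when b^(2^n) ≡ -1 (mod p).
        if any(pow(b, 1 << n, p) == p - 1 for p in usable):
            continue  # eliminated by a small factor; no primality test needed
        survivors.append(b)
    return survivors
```

For example, with n=2 (i.e. b^4+1) and small primes below 100, the candidates b = 8, 10, 12 are eliminated (17 | 4097, 73 | 10001, 89 | 20737) while b = 4 and b = 6 survive; 257 and 1297 are indeed prime.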

mackerel (Volunteer tester)
Joined: 2 Oct 08 Posts: 1814 ID: 29980 Credit: 228,261,256 RAC: 200,502

Thanks for the links and the history lesson. All the more impressive that this was done over 10 years ago.
I know things in statistics need to be interpreted with some caution when looking at ever smaller ranges, but I'll have to have a play with some numbers there.  


Prime numbers are not random, but they behave like a rare-event process!
The expected number of primes is an a priori figure, which is different from the a posteriori number of primes actually found.
1.24 primes are expected in 700,000-900,000 and there are none; 0.12 are expected in 1,360,000-1,380,000 and two were found.
Our intuition is wrong with rare events. If an event occurs about once a year, there will be some year each century in which it occurs four times. People (commentators) think that something changed, but this is just the Poisson distribution.
The prime number search helps us to better understand this sort of non-intuitive event.
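To put numbers on the two examples above, here is a quick Poisson sketch (my own Python, not anything from the project's stats pages):

```python
import math

def poisson_pmf(k, lam):
    # P(X = k) for a Poisson-distributed count with mean lam
    return math.exp(-lam) * lam**k / math.factorial(k)

# 1.24 primes expected in 700,000-900,000, none found:
p_zero = poisson_pmf(0, 1.24)   # ~0.29, i.e. not surprising at all

# 0.12 primes expected in 1,360,000-1,380,000, two found:
p_two_or_more = 1 - poisson_pmf(0, 0.12) - poisson_pmf(1, 0.12)  # ~0.007

print(f"P(0 primes | mean 1.24) = {p_zero:.3f}")
print(f"P(>=2 primes | mean 0.12) = {p_two_or_more:.4f}")
```

So finding nothing in the first range happens about 29% of the time, while finding two in the second range is a roughly 1-in-150 event: rare, but nothing a century of ranges won't produce.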

mackerelVolunteer tester
Send message
Joined: 2 Oct 08 Posts: 1814 ID: 29980 Credit: 228,261,256 RAC: 200,502

1.24 primes are expected in 700,000-900,000 and there are none; 0.12 are expected in 1,360,000-1,380,000 and two were found.
That is exactly the sort of thing I was trying not to trap myself with. Over very large ranges, the numbers will more likely even out. Over very small ranges... is it even worth looking?
So when earlier expressing my disappointment at a predicted <2 primes over the test range of interest (especially as 2 have been found already), I was conscious there could well be zero, or even more than 2.


The original post in this thread is quite simply a masterpiece. How often will this info be updated? I am the furthest thing from a math whiz but would like to educate myself as to what all these terms mean without flooding this thread or others with a bunch of noob questions. Is there an external source that would be a good place to start?  

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

Just wondering... what about n =< 14 and n => 22, any reason as to why 15-21 are offered?

Scott BrownVolunteer moderator Project administrator Volunteer tester Project scientist
Send message
Joined: 17 Oct 05 Posts: 1660 ID: 1178 Credit: 4,841,087,020 RAC: 2,112,004

Just wondering... what about n =< 14 and n => 22, any reason as to why 15-21 are offered?
One of the main reasons that n=15 was the starting level was that, at that time, n=15 finds made the top 5000 primes list.
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

The original post in this thread is quite simply a masterpiece. How often will this info be updated? I am the furthest thing from a math whiz but would like to educate myself as to what all these terms mean without flooding this thread or others with a bunch of noob questions. Is there an external source that would be a good place to start?
Feel free to ask here.
I suspect that post will be updated when something changes, but don't expect it to be updated every day just to change the leading edge. You can get the leading edge information from http://www.primegrid.com/stats_genefer.php.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Just wondering... what about n =< 14 and n => 22, any reason as to why 15-21 are offered?
n=23 is HUGE; assume the run times would be 4 times as long as the n=22 world record tasks. Have you run any world record tasks yet?
As for the smaller numbers, there's a lot of reasons we're not running them. The GPU apps for both sieving and Genefer are less efficient on such small n's. We don't have a lot of sieving done for very low n's (possibly NO sieving at all.) It's not clear there's a demand for such work. The vast number of tiny tasks would be a problem for the server. And that's just the reasons that immediately come to mind.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

Just wondering... what about n =< 14 and n => 22, any reason as to why 15-21 are offered?
n=23 is HUGE; assume the run times would be 4 times as long as the n=22 world record tasks. Have you run any world record tasks yet?
Yes, I've run WR before. ~90h with OCL3 (almost finished) and I think ~130h with CUDA on my GTX 970. But, given that no work seems to have been done, wouldn't b start "small" enough that the tasks wouldn't be as long?

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Just wondering... what about n =< 14 and n => 22, any reason as to why 15-21 are offered?
n=23 is HUGE; assume the run times would be 4 times as long as the n=22 world record tasks. Have you run any world record tasks yet?
Yes, I've run WR before. ~90h with OCL3 (almost finished) and I think ~130h with CUDA on my GTX 970. But, given that no work seems to have been done, wouldn't b start "small" enough that the tasks wouldn't be as long?
No. The tasks would rapidly grow to a very large size. It's not worth setting up a project just to run a few hundred tasks.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

HonzaVolunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1647 ID: 352 Credit: 1,931,980,611 RAC: 402,317

Just wondering... what about n =< 14 and n => 22, any reason as to why 15-21 are offered?
About n <= 14
I've done quite a bit of initial CPU sieving on GFN for most n's. Low n's had the advantage that there was no need for extra-deep sieving, since the PRP tests were quick.
Very low n's (i.e. n<15) were run offline, i.e. we didn't use PRPNet; I ran them offline and uploaded the results.
It was a couple of months' effort on dozens of cores. It could be done faster today, or with fewer cores. If really adventurous, one can choose a low n and do the sieving and testing alone. SSDs would be of benefit as well; there were none available back then.
Primes are very small there (relatively speaking, from a 2015 perspective).
It might be of some use for study purposes, like the Poisson distribution etc., but I'm not a math whiz myself so I can't really assess that.
About n => 22
Largest known Genefer prime is for n=19; none for n=>20... yet.
There are 0.33 expected primes for n=22, b<=1M,
and only 0.16 expected primes for n=23, b<=1M, i.e. an 85% chance of no prime.
Note that the max b in progress for n=22 is 57540, after 3 years of running GFN WR.
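That 85% figure is just the empty term of the Poisson distribution (a one-liner to check; my own Python):

```python
import math

# Chance of zero primes when 0.16 are expected (n=23, b<=1M): e^-0.16
print(f"{math.exp(-0.16):.0%}")  # -> 85%
```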
____________
My stats
Badge score: 1*1 + 5*2 + 8*13 + 9*2 + 12*3 = 169  

mackerelVolunteer tester
Send message
Joined: 2 Oct 08 Posts: 1814 ID: 29980 Credit: 228,261,256 RAC: 200,502

If really adventurous, one can choose a low n and the sieving and test alone.
Be afraid. I was thinking that!
What has been done before? I saw this: http://yves.gallot.pagesperso-orange.fr/primes/results.html
What software do we have? I see in the CPU genefer it can report supported b ranges. This started at n=8 (256).
Ok, let's look more at n=8 then. From the above link it has been tested b < 7M. That sounded big, until I tried to work out the decimal digit size. If I'm not mistaken, that would be approximated by 256 * log10(b). That doesn't figure in the "+1" which would be insignificant, and I don't know how to! Anyway, how big would a number be at b = 7M? 1753 digits. That's tiny. Ok, other way around, how big do we have to go before it gets more interesting? To reach Top 5000 list at the moment would need a size greater than 388340 digits. With b at 1T, we're only up to 3072 digits. Not even close. Genefer with x87 transform runs out of steam at 452M anyway.
I think we can write off n=8 unless you're really bored. A value of n at the upper end <15 might be more interesting. Again from the earlier link, it looks like n=13 and 14 were only single-pass tested. Perhaps replicating that would be a test of ability, before continuing onwards above it. The upper b tested for n=14, at 2.7M, would result in a 105372-digit number. So still quite small by current standards and should be fast to test, and it looks like CPU genefer can do that with the fma3 transform.
Am I crazy enough?  
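For anyone wanting to sanity-check the digit arithmetic above, a quick sketch (my own Python; the digit count of b^(2^n)+1 is floor(2^n * log10(b)) + 1):

```python
import math

def gfn_digits(b, n):
    # Decimal digit count of b^(2^n) + 1; the "+1" only matters when
    # b^(2^n) is an exact power of 10, which it isn't for these b.
    return math.floor(2**n * math.log10(b)) + 1

print(gfn_digits(7_000_000, 8))    # n=8,  searched to b=7M   -> 1753
print(gfn_digits(2_700_000, 14))   # n=14, searched to b=2.7M -> 105372
```

That confirms 1753 digits for n=8 at b=7M; for n=14 at b=2.7M I get 105372 digits.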

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

If really adventurous, one can choose a low n and the sieving and test alone.
Be afraid. I was thinking that!
What has been done before? I saw this: http://yves.gallot.pagesperso-orange.fr/primes/results.html
What software do we have? I see in the CPU genefer it can report supported b ranges. This started at n=8 (256).
Ok, let's look more at n=8 then. From the above link it has been tested b < 7M. That sounded big, until I tried to work out the decimal digit size. If I'm not mistaken, that would be approximated by 256 * log10(b). That doesn't figure in the "+1" which would be insignificant, and I don't know how to! Anyway, how big would a number be at b = 7M? 1753 digits. That's tiny. Ok, other way around, how big do we have to go before it gets more interesting? To reach Top 5000 list at the moment would need a size greater than 388340 digits. With b at 1T, we're only up to 3072 digits. Not even close. Genefer with x87 transform runs out of steam at 452M anyway.
I think we can write off n=8 unless you're really bored. A value of n at the upper end <15 might be more interesting. Again from the earlier link, it looks like n=13 and 14 were only single-pass tested. Perhaps replicating that would be a test of ability, before continuing onwards above it. The upper b tested for n=14, at 2.7M, would result in a 105372-digit number. So still quite small by current standards and should be fast to test, and it looks like CPU genefer can do that with the fma3 transform.
Am I crazy enough?
Hmm... it would sound even crazier if you said you'd try using the iGPU to do the crunching. Besides, that info seems pretty outdated; it says n=15 up to 2.2M, but (according to the OP) single pass is all the way up to 10M, so...

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Math (particularly logarithms) being what it is, there's no way for n's below 16 to ever make the T5K list again, at least using Genefer. With n=15 (which at one time was large enough for the list), you would need b of 709,898,549,924 (709 billion) which is way, way beyond the ability of any version of Genefer. You could do it with LLR if you really wanted to.
With even smaller n's, it just gets worse. Much, much worse. Really quickly.
At n=14 (16384), you need b>=503,955,951,184,464,000,000,000
At n=13 (8192), you need b>=253,971,600,734,238,000,000,000,000,000,000,000,000,000,000,000
At n=12 (4096), you need b>=64,501,573,979,511,100,000,000,000,000,000,000,000,...
000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000
At n=11 (2048), you need b>=4,160,453,045,834,350,000,000,000,000,000,000,000,000,000,000,000,000,000,...
000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,...
000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,...
000,000,000,000,000,000,000
At n=10 (1024), Excel gives up with a numeric overflow. :)
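Those thresholds are easy to reproduce without overflowing anything by staying in logarithms (my own Python sketch; 388,340 digits is the T5K cutoff quoted earlier in the thread):

```python
# Smallest b putting b^(2^n)+1 over the T5K cutoff, computed in log10
# so no bignum (or Excel) overflow is possible.
T5K_DIGITS = 388_340  # approximate Top-5000 entry size quoted above

def min_b_log10(n):
    # b must satisfy 2^n * log10(b) >= T5K_DIGITS
    return T5K_DIGITS / 2**n

for n in (15, 14, 13, 12, 11, 10):
    print(f"n={n}: need b >= ~10^{min_b_log10(n):.1f}")
```

This matches the figures above: 10^11.9 (~709 billion) at n=15, 10^23.7 at n=14, 10^47.4 at n=13, and so on.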
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

HonzaVolunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1647 ID: 352 Credit: 1,931,980,611 RAC: 402,317

What software do we have? I see in the CPU genefer it can report supported b ranges. This started at n=8 (256).
You can use PFGW.
____________
My stats
Badge score: 1*1 + 5*2 + 8*13 + 9*2 + 12*3 = 169  

mackerelVolunteer tester
Send message
Joined: 2 Oct 08 Posts: 1814 ID: 29980 Credit: 228,261,256 RAC: 200,502

I did have a quick play to size up the job and see how much computational work it was. Got a sieve started just to replicate the n=14 range of b up to 2.7M. Didn't take long to whittle that down to 600k candidates, still dropping when I stopped. Running genefer on the biggest unit in that range took about 21 seconds, while my CPU was already running 4 other tasks. That puts the estimated time required at around 145 core-days, but I'm sure that would still drop a lot more with dedicated cores and working out where the optimal sieve depth is.
Found the CUDA sieve (forgot the exact name) but that only seems to work on n>=15. When I ran some low b through genefer to see what happens, I was surprised to find the very first test, 2^16384+1, was PRP, which Proth later confirmed was composite. Haven't looked at PFGW at all yet.
None of this might have any use beyond personal interest, but I'm finding it an interesting learning exercise on how the different bits of software handle and are used together.
At the moment I'm debating if this is worth continuing on some offline boxes I have access to.  

HonzaVolunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1647 ID: 352 Credit: 1,931,980,611 RAC: 402,317

Found the cuda sieve (forgot the exact name) but that only seems to work on n>=15. When I ran some low b through genefer to see what happens, surprised to find the very first test 2^16384+1 was PRP, which Proth later confirmed was composite. Haven't looked at PFGW at all yet.
Upon note of a PRP, they are tested again using PFGW.
On PG, and on T5K.
See largest known Genefer here for example.
btw, are we 3 years without any new Genefer megaprime?
____________
My stats
Badge score: 1*1 + 5*2 + 8*13 + 9*2 + 12*3 = 169  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

btw, are we 3 years without any new Genefer megaprime?
Last one was found on August 14th, 2012.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Van ZimmermanVolunteer moderator Project administrator Volunteer tester Project scientist Send message
Joined: 30 Aug 12 Posts: 1770 ID: 168418 Credit: 4,269,599,028 RAC: 4,918,118

The WR tasks aren't too bad with the right hardware. I'm crunching at a rate of 11 WUs/week, and as soon as one machine finishes some OCL2 test units, that should go to just under 17.
I'd be happy to point my machines at n=23 tasks if they were available. Particularly once the n=22 b gets too large to run straight OCL.
Just wondering... what about n =< 14 and n => 22, any reason as to why 15-21 are offered?
n=23 is HUGE; assume the run times would be 4 times as long as the n=22 world record tasks. Have you run any world record tasks yet?
As for the smaller numbers, there's a lot of reasons we're not running them. The GPU apps for both sieving and Genefer are less efficient on such small n's. We don't have a lot of sieving done for very low n's (possibly NO sieving at all.) It's not clear there's a demand for such work. The vast number of tiny tasks would be a problem for the server. And that's just the reasons that immediately come to mind.
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

I'd be happy to point my machines at n=23 tasks if they were available. Particularly once the n=22 b gets too large to run straight ocl.
We don't plan on running n=23, and we're not going to exceed any n=22 blimits for many, many years. I'll keep your offer in mind in 2030 or 2040 when this happens. :)
In addition to having decades' worth of work remaining on n=22, there are several other reasons why we have no plans whatsoever to run n=23:
The primary reason for running n=22 is to search for the largest known prime. It can do that faster than n=23, so why search n=23?
As n goes up, the b range goes down, so there are fewer and fewer candidates to search. This exacerbates the problem that the expected number of primes to be found within a given range also goes down as n goes up. No primes were found at n=20. The expected number of primes to be found at n=22 is less than 1. The probability at n=23 will be even lower.
We haven't done any significant amount of sieving for n=23. We could sieve it, of course, but that would take away from other computing efforts.
Early tests with n=23 showed some problems related to memory usage on the GPU. The number of GPUs that can crunch it might be limited. (I have no idea whether this is still a valid concern since I haven't looked at n=23 in at least 3 years.)
We moved Genefer to BOINC specifically to attempt to find the world's largest known prime. That's the n=22 project. The tasks are long, and aren't for everyone, so we offer smaller projects too. They probably detract from the world record effort, but not everyone has hardware that can crunch such large tasks, and not everybody wants to run tasks that long even if their hardware is up to it. By offering n=23, we'd also be pulling computing power away from n=22. Yes, n=23 obviously would also be looking for world record numbers, but it would be doing so less efficiently than n=22. That's not what we want.
Given all that, there would need to be a compelling reason to start testing n=23 other than "why not?".
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


btw, are we 3 years without any new Genefer megaprime?
Last one was found on August 14th, 2012.
Correct.
However, sometimes Proth primes k*2^n + 1, where k is odd and small (but not k=1), can be generalized Fermat primes (x^2 + 1) even if they are not "Genefer" primes (i.e. they were not found with a Genefer program in a Genefer subproject).
If we include those, the most recent megaprime of GFN type is 9*2^3497442 + 1, from October 2012.
/JeppeSN
 


Notice: the CPU x87 (80-bit) transform is beginning to appear on n=17 Low as the SIMD instruction sets hit maxErr.
SSE2 CPUs are first in line, followed by AVX/FMA3, with variation according to each CPU's tolerance around the 0.450 maxErr operating point.
GPU n=17 (Low/Mega) per-WU efficiency will look a bit better now that x87 is slowing the CPUs.
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

I've switched the n=16 (65536) OpenCL app from OCL to OCL3.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

RogerVolunteer moderator Project administrator Volunteer developer Volunteer tester Project scientist
Send message
Joined: 27 Nov 11 Posts: 964 ID: 120786 Credit: 205,160,575 RAC: 67,895

My AMD 1100T CPU is now performing 5 out of 5 GFN 17 Low with x87 Transform.
With SSE2 Transform I was doing one WU per core in under 2 hours.
WU with x87 takes around 13 hours.
So I am moving my 1100T cores away from GFN 17 Low to chase sieving subproject badge levels, where AMD cores are relatively competitive.
I also have a 2-core i7-3540M laptop CPU, still using AVX Transform, one WU in about 35min.
Getting the odd WU using x87 Transform, one in about 1hr50min.
I'll keep this CPU on GFN 17 Low for the moment.
GFN 17 Low leading edge now past 1,530,000.  

streamVolunteer tester Send message
Joined: 1 Mar 14 Posts: 367 ID: 301928 Credit: 408,616,289 RAC: 30,241

My AMD 1100T CPU is now performing 5 out of 5 GFN 17 Low with x87 Transform.
So I am moving my 1100T cores away from GFN 17 Low to chase sieving subproject badge levels
GFN16 still has a lot of CPU work available (current edge = 1640K, expected switch to x87 near 2000K). Want to help clean it up?
 

RogerVolunteer moderator Project administrator Volunteer developer Volunteer tester Project scientist
Send message
Joined: 27 Nov 11 Posts: 964 ID: 120786 Credit: 205,160,575 RAC: 67,895

My AMD 1100T CPU is now performing 5 out of 5 GFN 17 Low with x87 Transform.
So I am moving my 1100T cores away from GFN 17 Low to chase sieving subproject badge levels
GFN16 still has a lot of CPU work available (current edge = 1640K, expected switch to x87 near 2000K). Want to help clean it up?
My GPU is currently offline. Crashing under load. Needs a bit of TLC. Once I get it dusted and tested I'll reassess.
I've dropped to 31st place on number of GFN WUs processed :(  

streamVolunteer tester Send message
Joined: 1 Mar 14 Posts: 367 ID: 301928 Credit: 408,616,289 RAC: 30,241

My AMD 1100T CPU is now performing 5 out of 5 GFN 17 Low with x87 Transform.
So I am moving my 1100T cores away from GFN 17 Low to chase sieving subproject badge levels
GFN16 still has a lot of CPU work available (current edge = 1640K, expected switch to x87 near 2000K). Want to help clean it up?
My GPU is currently offline. Crashing under load. [...] I've dropped to 31st place on number of GFN WUs processed :(
So... why not put your 1100T _CPU_ cores on GFN16 instead of sieving, then? Don't miss the chance, limited opportunity, only a few days left! :) The x87 transition is expected in about a week at current rates.
 

RogerVolunteer moderator Project administrator Volunteer developer Volunteer tester Project scientist
Send message
Joined: 27 Nov 11 Posts: 964 ID: 120786 Credit: 205,160,575 RAC: 67,895

My AMD 1100T CPU is now performing 5 out of 5 GFN 17 Low with x87 Transform.
So I am moving my 1100T cores away from GFN 17 Low to chase sieving subproject badge levels
GFN16 still has a lot of CPU work available (current edge = 1640K, expected switch to x87 near 2000K). Want to help clean it up?
My GPU is currently offline. Crashing under load. [...] I've dropped to 31st place on number of GFN WUs processed :(
So... why not put your 1100T _CPU_ cores on GFN16 instead of sieving, then? Don't miss the chance, limited opportunity, only a few days left! :) The x87 transition is expected in about a week at current rates.
Alright. You've twisted my arm. The 1100T CPU is now set to GFN 16. Leading edge >1,665,568
Upcoming event: b=~1,990,000: CPU app transitions from 64 bit to 80 bit.  

streamVolunteer tester Send message
Joined: 1 Mar 14 Posts: 367 ID: 301928 Credit: 408,616,289 RAC: 30,241

GFN16, b = 1833000, got first switches from FMA to AVX. It can still continue in AVX mode, but this will not last long. The last fast CPU GFN range is coming to an end.
BTW, why is AVX more precise than FMA? Shouldn't they use the same hardware? Does the FMA opcode (a single instruction) introduce more rounding error than the same calculation done manually in a few separate instructions, or is there some other reason?
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

GFN16, b = 1833000, got first switches from FMA to AVX. It can still continue in AVX mode, but this will not last long. The last fast CPU GFN range is coming to an end.
I'm also running 65536 on an FMA3 CPU. Within the last few hours I got the first two tasks that switched out of FMA3...
However, Genefer knows that it's possible that only one or two iterations need a more precise transform, so it will try reverting to FMA3:
Using FMA3 transform
Starting initialization...
Initialization complete (0.055 seconds).
Testing 1816532^65536+1...
Estimated time remaining for 1816532^65536+1 is 0:05:54
maxErr exceeded for 1816532^65536+1, 0.4531 > 0.4500
maxErr exceeded while using FMA3; switching to AVX (Intel).
Using AVX (Intel) transform
Resuming 1816532^65536+1 from a checkpoint (487423 iterations left)
Estimated time remaining for 1816532^65536+1 is 0:03:03
Successful computation progress with AVX (Intel); switching back to FMA3.
Using FMA3 transform
Resuming 1816532^65536+1 from a checkpoint (385023 iterations left)
Estimated time remaining for 1816532^65536+1 is 0:01:40
1816532^65536+1 is complete. (410206 digits) (err = 0.4062) (time = 0:06:42) 06:10:56
FMA3, then AVX, then back to FMA3 for the remainder of the computation.
As b increases, however, more and more iterations will need higher precision, and it will not be able to go back up to FMA3. As you say, "the end is near!" (Or at least the end of using 64 bit transforms on n=16.)
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


BTW, why is AVX more precise than FMA? Shouldn't they use the same hardware? Does the FMA opcode (a single instruction) introduce more rounding error than the same calculation done manually in a few separate instructions, or is there some other reason?
Implementations are different:
In AVX mode, a cos/sin table is precomputed (cs=cos(theta), sn=sin(theta)) and operations are z = t + cs*x + sn*y.
In FMA mode, a cos/tan table is precomputed (cs=cos(theta), tn=tan(theta)) and operations are z = t + cs*(x + tn*y) = fma(t, cs, fma(x, tn, y)).
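A toy numerical check (my own Python; plain doubles with no fused multiply-add available here, so this only shows that the two algebraically identical formulations round differently, not the exact fused behavior, and the sampled values are arbitrary, not Genefer's tables):

```python
import math
import random

random.seed(1)
differ, worst = 0, 0.0
for _ in range(10_000):
    theta = random.uniform(0.0, 2.0 * math.pi)
    x, y, t = (random.uniform(-1, 1) for _ in range(3))
    cs, sn, tn = math.cos(theta), math.sin(theta), math.tan(theta)
    z_avx = t + cs * x + sn * y      # AVX path: cos/sin table
    z_fma = t + cs * (x + tn * y)    # FMA path: cos/tan table; equal to
                                     # z_avx in exact arithmetic (cs*tn == sn)
    differ += (z_avx != z_fma)
    worst = max(worst, abs(z_avx - z_fma))

print(f"{differ} of 10000 samples differ; worst gap {worst:.2e}")
```

Both results stay within a few ulps of each other, but the last-bit roundings disagree on a large fraction of inputs, which is exactly the kind of divergence that makes one transform trip maxErr while the other sails through.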
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

It's even more complicated. These are two Intel AVX CPUs on the same workunit. They're not identical CPUs, however. But one's Windows, the other is Mac. Different compilers.
Windows:
genefer 3.2.9 (Windows/CPU/64bit)
Supported transform implementations: avxintel sse4 sse2 default x87
Copyright 2001-2015, Yves Gallot
Copyright 2009, Mark Rodenkirch, David Underbakke
Copyright 2010-2012, Shoichiro Yamada, Ken Brazier
Copyright 2011-2015, Iain Bethune, Michael Goetz, Ronald Schneider
Genefer is free source code, under the MIT license.
Command line: projects/www.primegrid.com/primegrid_genefer_3_2_9_3.09_windows_x86_64__cpuGFN16.exe -boinc -q 1784630^65536+1
Priority change succeeded.
Using AVX (Intel) transform
Starting initialization...
Initialization complete (0.159 seconds).
Testing 1784630^65536+1...
Estimated time remaining for 1784630^65536+1 is 0:15:58
1784630^65536+1 is complete. (409702 digits) (err = 0.4062) (time = 0:16:00) 12:12:27
Mac:
genefer 3.2.9 (Apple x86/CPU/64bit)
Supported transform implementations: avxintel sse4 sse2 default x87
Copyright 2001-2015, Yves Gallot
Copyright 2009, Mark Rodenkirch, David Underbakke
Copyright 2010-2012, Shoichiro Yamada, Ken Brazier
Copyright 2011-2015, Iain Bethune, Michael Goetz, Ronald Schneider
Genefer is free source code, under the MIT license.
Command line: primegrid_genefer_3_2_9_3.09_x86_64-apple-darwin__cpuGFN16 -boinc -q 1784630^65536+1
Priority change succeeded.
Using AVX (Intel) transform
Starting initialization...
Initialization complete (0.071 seconds).
Testing 1784630^65536+1...
Estimated time remaining for 1784630^65536+1 is 0:07:18
maxErr exceeded for 1784630^65536+1, 0.4688 > 0.4500
maxErr exceeded while using AVX (Intel); switching to SSE4.
Using SSE4 transform
Resuming 1784630^65536+1 from a checkpoint (446463 iterations left)
Estimated time remaining for 1784630^65536+1 is 0:04:30
Successful computation progress with SSE4; switching back to AVX (Intel).
Using AVX (Intel) transform
Resuming 1784630^65536+1 from a checkpoint (413695 iterations left)
Estimated time remaining for 1784630^65536+1 is 0:02:25
maxErr exceeded for 1784630^65536+1, 0.4688 > 0.4500
maxErr exceeded while using AVX (Intel); switching to SSE4.
Using SSE4 transform
Resuming 1784630^65536+1 from a checkpoint (413695 iterations left)
Estimated time remaining for 1784630^65536+1 is 0:04:15
Successful computation progress with SSE4; switching back to AVX (Intel).
Using AVX (Intel) transform
Resuming 1784630^65536+1 from a checkpoint (364543 iterations left)
Estimated time remaining for 1784630^65536+1 is 0:02:07
maxErr exceeded for 1784630^65536+1, 0.4688 > 0.4500
maxErr exceeded while using AVX (Intel); switching to SSE4.
Too many errors with AVX (Intel); Calculation will proceed using only more accurate transforms.
Using SSE4 transform
Resuming 1784630^65536+1 from a checkpoint (364543 iterations left)
Estimated time remaining for 1784630^65536+1 is 0:03:40
maxErr exceeded for 1784630^65536+1, 0.4531 > 0.4500
maxErr exceeded while using SSE4; switching to SSE2.
Using SSE2 transform
Resuming 1784630^65536+1 from a checkpoint (65535 iterations left)
Estimated time remaining for 1784630^65536+1 is 0:00:49
maxErr exceeded for 1784630^65536+1, 0.4531 > 0.4500
maxErr exceeded while using SSE2; switching to Default.
Using Default transform
Resuming 1784630^65536+1 from a checkpoint (65535 iterations left)
Estimated time remaining for 1784630^65536+1 is 0:01:33
maxErr exceeded for 1784630^65536+1, 0.4844 > 0.4500
maxErr exceeded while using Default; switching to x87 (80bit).
Using x87 (80bit) transform
Resuming 1784630^65536+1 from a checkpoint (65535 iterations left)
Estimated time remaining for 1784630^65536+1 is 0:03:38
Successful computation progress with x87 (80bit); switching back to SSE4.
Using SSE4 transform
Resuming 1784630^65536+1 from a checkpoint (49151 iterations left)
Estimated time remaining for 1784630^65536+1 is 0:00:29
1784630^65536+1 is complete. (409702 digits) (err = 0.4062) (time = 0:10:39) 01:36:56
01:36:56 (4812): called boinc_finish
Workunit link: http://www.primegrid.com/workunit.php?wuid=450820606
When you're well below the limit, everything works. Well above the limit, everything fails. But near the limit it's unpredictable. The Mac went AVX > SSE4 > AVX > SSE4 > AVX > SSE4 > SSE2 > default > x87 > SSE4, and finished with SSE4. The Windows machine did the whole thing on AVX. (It goes up and down because Genefer will try to switch back to the faster transform a few times before giving up for good.)
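The down-and-up ladder behavior can be sketched roughly like this (my own simplified Python, not Genefer's actual source; the transform names come from the logs above, while MAX_FAILS and the chunked-retry model are guesses):

```python
# Transforms as they appear in the Mac log above, fastest first.
LADDER = ["AVX", "SSE4", "SSE2", "Default", "x87"]
MAX_FAILS = 3  # assumed cap before a transform is given up for good

def fallback_ladder(fail_counts):
    """Simulate the switching.  fail_counts[t] = how many times transform
    t will hit maxErr before its chunks start succeeding."""
    fails = {t: 0 for t in LADDER}
    dead = set()        # transforms retired after too many errors
    level = 0
    used = []
    while True:
        t = LADDER[level]
        used.append(t)
        if fails[t] < fail_counts.get(t, 0):
            # maxErr exceeded: drop to the next, more precise transform
            fails[t] += 1
            if fails[t] >= MAX_FAILS:
                dead.add(t)
            level += 1
        else:
            # a chunk succeeded: try climbing back to the fastest live one
            best = 0
            while LADDER[best] in dead:
                best += 1
            if best == level:
                return used          # stable: finish the task here
            level = best

# Rough stand-in for the Mac run: AVX fails repeatedly and is retired,
# the others each stumble once, and the task settles on SSE4.
print(fallback_ladder({"AVX": 3, "SSE4": 1, "SSE2": 1, "Default": 1}))
```

The exact retry schedule in the real app differs, but the shape is the same: fall down the ladder on maxErr, climb back after successful progress, and permanently retire a transform that errors too often.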
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

streamVolunteer tester Send message
Joined: 1 Mar 14 Posts: 367 ID: 301928 Credit: 408,616,289 RAC: 30,241

It's even more complicated. These are two Intel AVX CPUs on the same workunit. They're not identical CPUs, however. But one's Windows, the other is Mac. Different compilers.
Blind guess: a different default AVX/SSE rounding mode set either by the OS or the compiler?
 


Some of my WUs are switching to SSE4. Are we closing in on the limits of AVX?
We are going really fast. :)

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Some of my WUs are switching to SSE4. Are we closing in on the limits of AVX?
We are going really fast. :)
65536 is pretty close, but it's going to vary depending on various factors, such as operating system, CPU, and it will also vary unpredictably with each workunit. I suspect we'll probably be solidly in the 80 bit CPU transforms by the weekend, and probably sooner.
131072 (low) is also pretty close, but it's progressing a lot slower because of the larger task size. It probably also only has a few days to go before completely switching.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


I had one workunit from 131072 (low) transition to x87 while testing 1570012^131072+1...
It started on FMA3, then switched between FMA3 and AVX, before quickly transitioning through SSE4, SSE2, and default, and settling on x87.
http://www.primegrid.com/result.php?resultid=656957557
EDIT: after bouncing around a bit, another 131072 (low) task settled on x87 as well, testing 1570262^131072+1.

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

I'm considering 65536 to have "switched" from 64 to 80 bit CPU processing.
EDIT: Same thing for 131072 Low. 80 bit CPU for an increasing percentage of tasks.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

streamVolunteer tester Send message
Joined: 1 Mar 14 Posts: 367 ID: 301928 Credit: 408,616,289 RAC: 30,241

I'm considering 65536 to have "switched" from 64 to 80 bit CPU processing.
Not completely yet. I've checked runtimes for the last 24 hours. On a true FMA transform, runtime should be about 8 minutes. Now they're between 8-11 minutes. Most tasks are bouncing between the FMA3 and AVX transforms; only one task had full x87 processing (took more than one hour) and two or three switched in the middle (~30 minutes). But the situation may change within a few hours.
 


I'm starting to hit x87 transform, but it's a small percent of all WU.
I will crunch into the night (10,000 WU done) and then go back to Factorial and Primorial Sieving. It will be closed after 1st Nov.
So many projects, so little CPU power. :D  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

I'm considering 65536 to have "switched" from 64 to 80 bit CPU processing.
Not completely yet. I've checked runtimes for the last 24 hours. On a true FMA transform, runtime should be about 8 minutes. Now they're between 8-11 minutes. Most tasks are bouncing between the FMA3 and AVX transforms; only one task had full x87 processing (took more than one hour) and two or three switched in the middle (~30 minutes). But the situation may change within a few hours.
It's fuzzy, so some tasks (and hosts) will switch, some will switch partway, and so forth.
I had a Windows FMA3 task permanently switch down to x87. Since that appears to be the combination with the greatest precision (or at least not less precise than others), if that used x87, it's possible for any host to use x87. So it's "switched", meaning it's possible for a CPU task to run at x87 speed from this point on. Some tasks will still run faster for a while, but you won't know which until they actually run. If you're concerned about tasks taking hours instead of minutes, it's time to stop running n=16 and n=17 on a CPU.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

streamVolunteer tester Send message
Joined: 1 Mar 14 Posts: 367 ID: 301928 Credit: 408,616,289 RAC: 30,241

It's fuzzy, so some tasks (and hosts) will switch, some will switch partway, and so forth.
If I remember correctly, I saw a graph of maxErr obtained during GPU crunching; it looked like a saw with triangular teeth/spikes. At some point you'll be on top of a tooth, get a big maxErr, and switch to x87 (or fail with the classic GPU genefer). But as b grows, you'll "fall down from the spike", the error decreases a bit, and processing will be fast again for some time. But the average error increases, so you'll have fewer and fewer "fast" workunits and finally none at all.
If you're concerned about tasks taking hours instead of minutes, it's time to stop running n=16 and n=17 on a CPU.
Right now I have an average of 2-4 tasks in x87 mode on an 8-core machine, but it's getting worse with every hour. I think I'll try it for another 12 hours, but not much more. Getting the same credit for 60-90 minutes of CPU time (with a normal time of 9 minutes) is not fun.  


I'm considering 65536 to have "switched" from 64 to 80 bit CPU processing.
Not completely yet. I've checked runtimes for the last 24 hours. On a true FMA transform, runtime should be about 8 minutes. Now they're between 8-11 minutes. Most tasks are bouncing between the FMA3 and AVX transforms; only one task had full x87 processing (took more than one hour) and two or three switched in the middle (~30 minutes). But the situation may change within a few hours.
My Win7/10 SSE2 CPUs are now exclusively on x87, while the Win8.1 Haswell is going (FMA3 > AVX > SSE4 > SSE2 > x87) on 50% of current WUs.
90% of the time, Win8.1 Ivy Bridge tasks are switching AVX > SSE4 > SSE2 > x87.
The GTX 750 (at 30W OP) finishes one n=16 (OCL3) task in 272 sec (similar speed to one Skylake FMA core).
With the switch from OCL to OCL3 on n=16, the GTX 750 gained 20% efficiency per WU.
The GTX 970 (at 100W OP) requires 127 sec per n=16 WU, which is output comparable to three FMA3 Haswell or 2.4 Skylake cores, but at a higher power cost.
The OCL3 n=16 app is 18% more efficient per WU than OCL on the GTX 970.
The speed at which we've exhausted the apt CPU SIMD sets is very impressive. Well done, Genefer crunchers and project scientists.
Once the (n=16 CPU x87) transition is complete, OCL3 per-WU efficiency will tilt in favor of the Maxwell and Fiji (Nano) architectures.
Surprisingly, the small-die GTX 750's (OCL2 n=17 mega) per-WU output is 55% of the OG flagship Titan Black and a mid-tier GTX 970.
OCL3 (n=15 / n=16 / n=17 low) is the range where a GTX 970 is slightly faster than the Titan Black, by a few percent.
The OCL (n=21/22) app has a GTX 970 around 35-45% slower than the Titan.
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

The OCL (n=21/22) app has a GTX 970 around 35-45% slower than the Titan.
Do you have an OCL WU for n=22 on a 970? I have an OCL3 at stock clocks (and almost finished with a small OC) for WR tasks on my 970, and I'd love to compare the jump in performance from OCL to OCL3.  


The OCL (n=21/22) app has a GTX 970 around 35-45% slower than the Titan.
Do you have an OCL WU for n=22 on a 970? I have an OCL3 at stock clocks (and almost finished with a small OC) for WR tasks on my 970, and I'd love to compare the jump in performance from OCL to OCL3.
None recently, but a few months ago an OCL n=22 task took 99 hours for me.
Highlighting how well OCL3 computes on Maxwell: on this task a GTX 970 was 31.5% faster than a Titan Black.
http://www.primegrid.com/workunit.php?wuid=450989956
On n=16 OCL3, the (55W) GTX 750 is 31% slower than the (250W) GK110 board. On per-WU power consumption, the 750 makes it no contest.
A quick note: the fastest OCL 64-bit n=21 / n=22 GPUs (Tahiti and GK110) lose nearly half of their top-notch OCL performance with 32-bit OCL3. (See the "Genefer ocl2" thread.)
The OCL n=21/22 app is geared toward the Titan and Tahiti GPUs.
Seeing Yves's three OCL applications expose each GPU architecture's strengths and weaknesses is fascinating and educational.
To paraphrase Yves in the OCL2 thread:
"geneferocl2 uses fixed-point arithmetic (Q63.64 fixed-point numbers for data and Q0.63 for the sin/cos table).
geneferocl3 uses a Number Theoretic Transform. The finite field is Z/p with p = 2^64 - 2^32 + 1.
OCL3 error check: for each coefficient of the transform we should have c_i <= n.(b-1)^2."
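The modulus in that paraphrase, p = 2^64 - 2^32 + 1, is NTT-friendly because p - 1 is divisible by 2^32, so Z/p contains roots of unity of large power-of-two order. A quick Python sketch (my own illustration, not PrimeGrid code; using 7 as a generator is an assumption commonly made for this particular prime, not something stated in the post):

```python
# Why p = 2^64 - 2^32 + 1 suits a Number Theoretic Transform.
p = 2**64 - 2**32 + 1

# Fermat test with a few bases (p is in fact a well-known prime).
for a in (2, 3, 5, 7):
    assert pow(a, p - 1, p) == 1

# p - 1 = 2^32 * (2^32 - 1), so the multiplicative group has an element
# of order 2^32 -- enough for transforms of any length up to 2^32.
assert (p - 1) % 2**32 == 0

# Assuming 7 generates the multiplicative group (a common choice for this
# prime), w below is a root of unity of order exactly 2^32.
w = pow(7, (p - 1) // 2**32, p)
assert pow(w, 2**32, p) == 1
assert pow(w, 2**31, p) != 1
print("Z/p supports power-of-two NTT lengths up to 2^32")
```
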
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

It's certainly been worthwhile doing the PRPNet double checks. So far we've found five!!! primes that were missed initially on PRPNet. Two on 32768 and three on 65536.
We're roughly halfway through the double checking on both ranges.
I can't wait to see how many primes we find once we're into new territory on those searches!
We're doing more than 45 thousand 32768 tests per day, and about 8 thousand 65536 tests per day.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

How many primes are still to be found?
I did some digging, finding all the GFN primes that PrimeGrid has found, and all (I think) the primes that anyone else has found for n=15 and above. Using Yves' expected primes calculator, and plugging in the total number of primes already found and the blimits of OCL3, I came up with the following table that shows how many primes we can expect to find using OCL3.
n 15 16 17 18 19 20 21 22
2^n 32,768 65,536 131,072 262,144 524,288 1,048,576 2,097,152 4,194,304
PG Primes 87 29 2 6 4 0 0 0
NonPG Primes 56 29 10 1 0 0 0 0
OCL3 limit 16,777,216 11,863,283 8,388,608 5,931,642 4,194,304 2,965,821 2,097,152 1,482,910
Expected primes 190.93 133.23 47.39 20.27 7.40 2.97 1.20 0.47
Remaining primes 47.93 75.23 35.39 13.27 3.40 2.97 1.20 0.47
The bottom line is how many more primes we can expect to find at each n if we search each to the limit of OCL3.
Of note: not only can we expect to find many more primes at the lower n ranges, but with the expanded range available with OCL3 we may also find some large primes.
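(For anyone who wants to verify the arithmetic: the "Remaining primes" row is simply "Expected primes" minus the PG and non-PG primes already found. A quick sketch, my own, with the table's numbers transcribed:)

```python
# Verify: remaining = expected - (PG primes + non-PG primes), per the table.
ns        = [15, 16, 17, 18, 19, 20, 21, 22]
pg        = [87, 29, 2, 6, 4, 0, 0, 0]
non_pg    = [56, 29, 10, 1, 0, 0, 0, 0]
expected  = [190.93, 133.23, 47.39, 20.27, 7.40, 2.97, 1.20, 0.47]
remaining = [47.93, 75.23, 35.39, 13.27, 3.40, 2.97, 1.20, 0.47]

for n, e, found_pg, found_other, r in zip(ns, expected, pg, non_pg, remaining):
    assert abs(e - (found_pg + found_other) - r) < 0.005, n
print("table arithmetic checks out")
```
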
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


I came up with the following table that shows how many primes we can expect to find using OCL3.
Let E(a; b) be the expected number of primes in [a; b[ and P(a; b) be the number of primes in [a; b[.
We have E(a; c) = E(a; b) + E(b; c) and P(a; c) = P(a; b) + P(b; c)
but E(a; c) != P(a; b) + E(b; c).
n = 19, [2; 895,000[ was a lucky range (4 primes, 1.77 expected). But this doesn't imply that [895,000; 4,194,304[ is an unlucky range and that 3.40 primes are expected in it. 5.62 primes are expected in this range.
On the other hand, n = 20, [2; 720,000[ was unlucky with no prime and 0.8 expected. But 2.16 primes are expected in [720,000; 2,965,821[, not 2.97.
This counterintuitive result is related to the arcsine distribution and is known in French as the "persistance de la chance et de la malchance" (literally the persistence of the good fortune and of the misfortune).  
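The point above can be illustrated with a quick Monte Carlo sketch (my own illustration, using a plain Poisson model rather than the calculator): conditioning on a "lucky" first range leaves the expected count in the remaining range unchanged.

```python
import math
import random

# Under the Poisson model, prime counts in disjoint b-ranges are independent,
# so a lucky first range does not lower the expectation of the second.
random.seed(1)

def poisson(lam):
    # Knuth's multiplication method for sampling a Poisson variate.
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

lam1, lam2 = 1.77, 5.62      # the n=19 example: [2; 895,000[ and the rest
second_all, second_after_luck = [], []
for _ in range(200_000):
    c1, c2 = poisson(lam1), poisson(lam2)
    second_all.append(c2)
    if c1 >= 4:              # first range came out "lucky" (4+ primes)
        second_after_luck.append(c2)

mean_all = sum(second_all) / len(second_all)
mean_lucky = sum(second_after_luck) / len(second_after_luck)
# Both means come out near 5.62: conditioning on past luck changes nothing.
print(round(mean_all, 2), round(mean_lucky, 2))
```
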

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

n = 19, [2; 895,000[ was a lucky range (4 primes, 1.77 expected). But this doesn't imply that [895,000; 4,194,304[ is an unlucky range and that 3.40 primes are expected in it. 5.62 primes are expected in this range.
On the other hand, n = 20, [2; 720,000[ was unlucky with no prime and 0.8 expected. But 2.16 primes are expected in [720,000; 2,965,821[, not 2.97.
Indeed, this is true. I didn't want to delve into this because, at this time, we can only say this about some of the ranges: n=18 and up. The lower ranges are more chaotic with regard to what's already been searched.
For n=15 and n=16, the ranges that appear to be already completed are proving themselves to be less well searched than it would appear. That's why we've found six primes where all the primes should have already been discovered. (Yes, six. There's one more that hasn't been announced yet.)
On the other hand, we don't know what, if any, part of n=17 has been searched before, so we may find fewer primes than expected. (And I ignored 17mega entirely.)
What's certain is that OCL3 is going to breathe new life into 18 and 19 and 20, and there should be a very good chance of finding the largest known GFN prime in either 19 or 20.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

What's certain is that OCL3 is going to breathe new life into 18 and 19 and 20, and there should be a very good chance of finding the largest known GFN prime in either 19 or 20.
I still think it's more fun to take the "Largest Prime Crown" away from Mersenne primes than finding the largest GFN Prime (well, it would be the largest anyway), but I suppose that "0.47 primes expected" is rather... discouraging.......
We should launch GFN 20 along with a 7day challenge, to push the discovery of the first prime in that range. Or make a GFN 21, for the same reason.  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

How many primes are still to be found?
I did some digging, finding all the GFN primes that PrimeGrid has found, and all (I think) the primes that anyone else has found for n=15 and above. Using Yves' expected primes calculator, and plugging in the total number of primes already found and the blimits of OCL3, I came up with the following table that shows how many primes we can expect to find using OCL3.
n 15 16 17 18 19 20 21 22
2^n 32,768 65,536 131,072 262,144 524,288 1,048,576 2,097,152 4,194,304
PG Primes 87 29 2 6 4 0 0 0
NonPG Primes 56 29 10 1 0 0 0 0
OCL3 limit 16,777,216 11,863,283 8,388,608 5,931,642 4,194,304 2,965,821 2,097,152 1,482,910
Expected primes 190.93 133.23 47.39 20.27 7.40 2.97 1.20 0.47
Remaining primes 47.93 75.23 35.39 13.27 3.40 2.97 1.20 0.47
The bottom line is how many more primes we can expect to find at each n if we search each to the limit of OCL3.
Of note: not only can we expect to find many more primes at the lower n ranges, but with the expanded range available with OCL3 we may also find some large primes.
I've attempted to do it the more accurate way (as per Yves). This is the estimate of the expected number of primes remaining from the current leading edge to the OCL3 limit.
Leading edge is defined as the highest b ever checked on either PRPNet or BOINC. In other words, it assumes that the BOINC and PRPNet work we're currently double checking on BOINC was both complete and accurate. Since we've already found 7 primes missed on PRPNet, we know this assumption is false. (The missing primes come from missing residues, not incorrect residues. It appears that large sections either were not tested or errored out without being detected.)
Here's a revised table:
n 15 16 17 18 19 20 21 22
2^n 32,768 65,536 131,072 262,144 524,288 1,048,576 2,097,152 4,194,304
OCL3 Limit 16,777,216 11,863,283 8,388,608 5,931,642 4,194,304 2,965,821 2,097,152 1,482,910
Leading Edge 10,428,486 4,205,312 1,630,534 1,256,684 920,564 719,998 18,240 57,954
Expected primes 68.49 82.52 37.02 15.45 5.58 2.16 1.18 0.44
The bottom line is most important: It's the remaining number of primes we can expect to find if we search up to the OCL3 limit. Note that this search will take several years for n=19, n=20 and n=21, and substantially more for n=22.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

Any primes found with N=32768 will be too small to be reported to the T5K list.
Seeing this, it got me thinking: what's the point of moving on with GFN 15?
Well, sure, the double check is of great importance, as the missed primes have shown already, but beyond that... it seems like a waste of compute power. At least to me, it makes a lot more sense to just finish the double check and move on to GFN 16, closing GFN 15 for good. Maybe even stretch the search range up to the OCL3 limit, since it's a fixed "all below works, all above fails" app, unlike the previous "near the limit, some fail and some work", but then surely stop there.
Correct me if I'm wrong, but it's not as if GFN 15 were of utmost importance. After all, it shares the medal with all other GFN searches, and n=16 would still provide an option for very short tasks, but with the advantage of at least getting primes on T5K.  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Any primes found with N=32768 will be too small to be reported to the T5K list.
Seeing this, it got me thinking: what's the point of moving on with GFN 15?
It's a relatively easy way to find primes. They may not make the T5K list anymore, but they're still 200 thousand digits long, they make it onto OUR list, and they extend the list of known GFN primes.
If you don't want to search it, then search something else. There were people searching 32768 before we had the new OCL2 and OCL3 software, meaning they were using only the very slow x87 CPU program, and had to use PRPNet to do it. With the new much faster GPU software and the ease of using BOINC I'm sure it will be even more popular. But people are free to crunch whatever they want.
If Chris changed the name of his website to the Top TEN Thousand Primes, would that magically make this project worth searching?
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


One could also argue that n=20 is the most natural range to search. It is the lowest exponent for which we do not know if a prime exists. There is reasonable hope we will find a prime there some day.
The first GF prime in each n series is something special, is it not?
It is really optimistic to think we can find an n=22 prime and beat the Mersenne record before we know any n=20 and n=21 primes. It is not impossible, of course, but it is more likely that GIMPS (the Mersenne people) will beat their own record before we find anything at n=22. It is not unlikely that GIMPS's next find will be so big that people here at PrimeGrid will demand n=23 opened...
/JeppeSN
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

One could also argue that n=20 is the most natural range to search. It is the lowest exponent for which we do not know if a prime exists. There is reasonable hope we will find a prime there some day.
The first GF prime in each n series is something special, is it not?
It is really optimistic to think we can find an n=22 prime and beat the Mersenne record before we know any n=20 and n=21 primes. It is not impossible, of course, but it is more likely that GIMPS (the Mersenne people) will beat their own record before we find anything at n=22. It is not unlikely that GIMPS's next find will be so big that people here at PrimeGrid will demand n=23 opened...
/JeppeSN
I'm actually just waiting for the n=20 to launch on Boinc before moving my GPUs there, exactly for that reason. In fact, only reason why I'm not running GFN 21 is because OCing failed hard on WR tasks and OCL3, so I'm using that as a stress test to find the limit for this newer app, as it's the highest n available (previous OC was running perfectly fine on n=15 and 16).  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

The first GF prime in each n series is something special, is it not?
No argument from me. :)
It is really optimistic to think we can find an n=22 prime...
It is.
...but it is more likely that GIMPS (the Mersenne people) will beat their own record before we find anything at n=22.
They have already beaten their own record once, which temporarily put our n=22 search below their record.
It is not unlikely that GIMPS's next find will be so big that people here at PrimeGrid will demand n=23 opened...
I had not thought of that scenario. I suppose we can consider that possibility at the appropriate time. While it's more feasible today to start thinking of n=23 than it was in 2011, the odds of finding an n=23 prime there are, of course, even smaller than at n=22.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

I had not thought of that scenario. I suppose we can consider that possibility at the appropriate time. While it's more feasible today to start thinking of n=23 than it was in 2011, the odds of finding an n=23 prime there are, of course, even smaller than at n=22.
Just a quick thought, assuming GIMPS doesn't find anything for a long time, would it be "worth it" to do the following:
Let b22 be the leading edge for n=22 and b23 for n=23. If the number given by b23 is smaller (in terms of WU run time), the next GFN-WR WU would be an n=23 one, instead of an n=22. The moment the leading edge becomes so big that n=22 is faster to run, the server would send n=22 again, until it surpasses the n=23 size, and so on.....
?
You know, just some food for thought....  


Just a quick thought, assuming GIMPS doesn't find anything for a long time, would it be "worth it" to do the following:
Let b22 be the leading edge for n=22 and b23 for n=23. If the number given by b23 is smaller (in terms of WU run time), the next GFN-WR WU would be an n=23 one, instead of an n=22. The moment the leading edge becomes so big that n=22 is faster to run, the server would send n=22 again, until it surpasses the n=23 size, and so on.....
Maybe we will never reach b22 values so huge that even small n=23 tasks would run faster. /JeppeSN  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

I had not thought of that scenario. I suppose we can consider that possibility at the appropriate time. While it's more feasible today to start thinking of n=23 than it was in 2011, the odds of finding an n=23 prime there are, of course, even smaller than at n=22.
Just a quick thought, assuming GIMPS doesn't find anything for a long time, would it be "worth it" to do the following:
Be b22 the leading edge for n=22 and b23 for n=23. If the number given by b23 is smaller (in terms of WU run time), the next GFNWR WU would be an n=23 one, instead of an n=22. The moment that the leading edge becomes so big that n=22 is faster to run, the server would sent n=22 again, until it surpasses n=23 size, and so on.....
?
You know, just some food for thought....
Food poisoning, perhaps. Your logic is flawed because you haven't checked how long the tasks take to run.
As a rule of thumb, every time you increase n by 1 (and don't change b), the computation time goes up by a factor of four.
Assume for a moment that the b limit for n=22 is 1,000,000. Also assume that the b limit for n=23 is 800,000.
Remember that (b)^(n) is the same as (b^2)^(n/2). Therefore, 1000000^4194304+1 is the same number as 1000^8388608+1. However, 1000000^4194304+1 will take half the time to crunch as 1000^8388608+1, so you'll always want to crunch the number at the lowest n possible. You never want to use the higher n for mathematically equivalent numbers. In fact, if you've already done all of n=22 up to b=1,000,000, you wouldn't even need to crunch n=23 below 1000 because all those numbers are the equivalent to n=22 numbers you've already searched.
So you never will want to search n=23 below b=1000, and every n=23 candidate with b>1000 will take at least twice as long as the longest n=22 number.
Therefore, you don't want to start searching n=23 until you've finished all of n=22.
(This all assumes the same program/transform is being used in all situations. In reality, when n=22 has to switch to a slower transform because the b limit is exceeded, n=23 may be faster because it will be able to use the faster transform. This will in fact happen when we get to around b=500,000. I used 1,000,000 in my example because the square root is simpler. If we were to continue at the current pace it will be some 30 or 40 years before we switch to n=23.)
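The equivalence in the post is easy to check directly: a GFN candidate is b^(2^n)+1, and squaring b while lowering n by one gives the same integer. A small sketch (my own; `gfn` is just an illustrative helper name):

```python
# GFN candidates are b^(2^n) + 1. Squaring b and dropping n by 1
# yields the same integer, so the lower n is always the cheaper test.
def gfn(b, n):
    return b**(2**n) + 1

# 1000000^(2^22)+1 is far too big to print; small values show the identity.
assert gfn(1000, 8) == gfn(1000**2, 7)        # b^(2^8) == (b^2)^(2^7)
assert gfn(6, 5) == gfn(36, 4) == gfn(1296, 3)
print("b^(2^n)+1 == (b^2)^(2^(n-1))+1 confirmed")
```

The same number at the higher n uses a transform twice as long, which is why it's roughly twice as slow, and why each step up in n at fixed b roughly quadruples the run time.
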
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

Remember that (b)^(n) is the same as (b^2)^(n/2). Therefore, 1000000^4194304+1 is the same number as 1000^8388608+1. However, 1000000^4194304+1 will take half the time to crunch as 1000^8388608+1, so you'll always want to crunch the number at the lowest n possible. You never want to use the higher n for mathematically equivalent numbers.
Ah... going backwards a bit, does that mean we could exclude a few candidates for lower n on the extended limit given by OCL2/3, as they were already searched at the beginning of the higher n searches?
Or are they THAT much spaced out that it doesn't matter?  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Ah... going backwards a bit, does that mean we could exclude a few candidates for lower n on the extended limit given by OCL2/3, as they were already searched at the beginning of the higher n searches?
Or are they THAT much spaced out that it doesn't matter?
Since we are searching n=22 and n=21 concurrently, there were a few n=21 candidates of the form (b^2)^(2^21)+1 that we didn't need to search because we had already searched b^(2^22)+1. But only for very low b, of course. It's a very small percentage of candidates.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


n=17 Mega (OCL2 / 80bit) looks promising.
N = 131072 [bMin = 42598524] bMax = 50000000
Expected number of GF primes: 35.21
Poisson distribution
Chance of 19 primes: 0.1% (100%)
Chance of 20 primes: 0.18% (100%)
Chance of 21 primes: 0.3% (100%)
Chance of 22 primes: 0.48% (99%)
Chance of 23 primes: 0.74% (99%)
Chance of 24 primes: 1.09% (98%)
Chance of 25 primes: 1.53% (97%)
Chance of 26 primes: 2.07% (95%)
Chance of 27 primes: 2.7% (93%)
Chance of 28 primes: 3.4% (91%)
Chance of 29 primes: 4.13% (87%)
Chance of 30 primes: 4.84% (83%)
Chance of 31 primes: 5.5% (78%)
Chance of 32 primes: 6.05% (73%)
Chance of 33 primes: 6.45% (67%)
Chance of 34 primes: 6.68% (60%)
Chance of 35 primes: 6.72% (54%)
Chance of 36 primes: 6.58% (47%)
Chance of 37 primes: 6.26% (40%)
Chance of 38 primes: 5.8% (34%)
Chance of 39 primes: 5.23% (28%)
Chance of 40 primes: 4.61% (23%)
Chance of 41 primes: 3.96% (18%)
Chance of 42 primes: 3.32% (14%)
Chance of 43 primes: 2.72% (11%)
Chance of 44 primes: 2.17% (8%)
Chance of 45 primes: 1.7% (6%)
Chance of 46 primes: 1.3% (5%)
Chance of 47 primes: 0.97% (3%)
Chance of 48 primes: 0.71% (2%)
Chance of 49 primes: 0.51% (2%)
Chance of 50 primes: 0.36% (1%)
Chance of 51 primes: 0.25% (1%)
Chance of 52 primes: 0.17% (0%)
Chance of 53 primes: 0.11% (0%)
N = 131072 [min 'b' = 42598524] max 'b' = 43000000
Expected number of GF primes = 1.92
Poisson distribution
Chance of no prime: 14.69% (100%)
Chance of 1 prime: 28.17% (85%)
Chance of 2 primes: 27.02% (57%)
Chance of 3 primes: 17.28% (30%)
Chance of 4 primes: 8.29% (13%)
Chance of 5 primes: 3.18% (5%)
Chance of 6 primes: 1.02% (1%)
Chance of 7 primes: 0.28% (0%)
http://yves.gallot.pagesperso-orange.fr/primes/stat.html
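Those tables follow from the Poisson distribution: with λ expected primes, the chance of exactly k primes is e^(−λ)·λ^k/k!, and the parenthesized figure is the chance of at least k. A sketch (mine) reproducing the λ = 1.92 block to within the rounding of λ:

```python
import math

lam = 1.92  # expected primes for n=17 mega, b in [42598524; 43000000]

def p_exact(k, lam):
    # Poisson probability of exactly k events.
    return math.exp(-lam) * lam**k / math.factorial(k)

def p_at_least(k, lam):
    # Chance of k or more events: 1 minus the chance of fewer than k.
    return 1.0 - sum(p_exact(i, lam) for i in range(k))

for k in range(8):
    print(f"Chance of {k} primes: {100 * p_exact(k, lam):.2f}% "
          f"({100 * p_at_least(k, lam):.0f}%)")
```
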
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

I have put Genefer 3.2.9 into production on the n=21 and n=22 (World Record) projects. (3.2.9 was already running on n=15, n=16, and n=17.)
This may affect you in the following ways:
 If you use app_info, you should update your configuration to use the new 3.2.9 apps if you're not already doing so. (If you're not using app_info, or don't know what app_info is, no action is necessary on your part.)
 N=21 and N=22 (WR) will now use trickles to extend deadlines.
 Because the deadline extension mechanism is now functional on n=21, the deadline has been lowered back to 21 days. The max deadline is unchanged at 84 days, so effectively you now have twice as much time to complete the task while abandoned tasks will be resent in half the time.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Run time estimates for GFNWR CPU tasks
If you're contemplating running the n=22 World Record tasks on a CPU, I only recommend doing so if you have an Intel AVX or FMA3 CPU, i.e., Sandy Bridge or newer. And only run with Hyperthreading turned off. It will be weeks before the first CPU tasks complete, so the PrimeGrid preferences task selection webpage won't show any average run times for World Record tasks for a while.
I expect that most Haswell or better Intel CPUs (with FMA3) will take between 20 and 30 days. Skylake will probably be able to do them in a bit under 20 days. I'm guessing most Intel AVX CPUs (Sandy Bridge and Ivy Bridge) will be in the 30 to 40 day ballpark. Everything else, including all AMD CPUs, will likely be over 40 days. Hyperthreading will DOUBLE the run time, so turn off hyperthreading if you want to run these tasks.
Those numbers exclude low power, low speed CPU families that are used in laptops and allinone computers. These will take a lot longer, even if they have AVX or FMA3. These CPUs are designed for long battery life, not speed.
Please note that GFNWR is, for all intents and purposes, primarily a GPU project. We've started sending out CPU tasks because users with fast CPUs have asked for this. These tasks are only appropriate for very fast CPUs. The very fastest CPUs can barely make the 21 day deadline. Even though the server will extend the deadline to a maximum of 84 days (provided your computer is still making progress), it's likely that your computer will go into "panic"/"high priority"/"earliest deadline first" mode almost immediately upon downloading one of these tasks.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

We're in the process of shutting down the 262144 and 524288 ports on PRPNet. In the near future (probably later today), we'll be reopening those two ranges on BOINC as well as 1048576.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

262144, 524288, and 1048576 are now open on BOINC!
Let us know if there's any problems.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

HonzaVolunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1647 ID: 352 Credit: 1,931,980,611 RAC: 402,317

Getting CPU work on Winx64 for 262144, 524288, and 1048576.
I think you can edit first post and make it BOINC status: ACTIVE :)
Now we have full range of Genefer work available.
This should make any Genefer challenge...challenging.
____________
My stats
Badge score: 1*1 + 5*2 + 8*13 + 9*2 + 12*3 = 169  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Getting CPU work on Winx64 for 262144, 524288, and 1048576.
I think you can edit first post and make it BOINC status: ACTIVE :)
Now we have full range of Genefer work available.
This should make any Genefer challenge...challenging.
Updating that first post takes a while. There's a lot of details in there.
GFN challenges will *never* include all GFN ranges. Combining projects with short tasks that do tens of thousands of tasks per day with projects that have long deadlines  and therefore have long challenge cleanups  is a surefire method of clobbering the database. Millions upon millions of tasks will be put into the database before they can be purged after the challenge cleanup completes.
We can run challenges with short tasks. We can run challenges with long tasks. We can not run challenges that combine both.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

mackerelVolunteer tester
Send message
Joined: 2 Oct 08 Posts: 1814 ID: 29980 Credit: 228,261,256 RAC: 200,502

Just switched a GPU onto n=18 (262144) to see how long they take to run.
I was thinking, could n=18 be faster than n=17 mega, since the former is still using the relatively faster OCL3 and not slower OCL2.
For size indication based on leading edge values currently in 1st post:
OCL3 n=17 low 813386 digits
OCL2 n=17 mega 1000048 digits
OCL3 n=18 1579498 digits
OCL3 n=19 3090531 digits
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

I was thinking, could n=18 be faster than n=17 mega, since the former is still using the relatively faster OCL3 and not slower OCL2.
On my GTX 580, n=18 with OCL3 takes 52 minutes and n=17mega with OCL2 takes 34 minutes.
A Maxwell based GPU might have better OCL3 performance which might narrow the gap.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

mackerelVolunteer tester
Send message
Joined: 2 Oct 08 Posts: 1814 ID: 29980 Credit: 228,261,256 RAC: 200,502

Since I posed the question I guess I could actually try it and see what the result is.
For now, n=18 units are taking around 2800s (47m) on a 960OC.  

mackerelVolunteer tester
Send message
Joined: 2 Oct 08 Posts: 1814 ID: 29980 Credit: 228,261,256 RAC: 200,502

Results are in from the 960. Well, result (singular) in the case of n=17 mega. That took 2720 seconds or about 45 minutes. n=18 units were taking just over 2800 seconds, or just under 47 minutes. So for practical purposes they're near enough the same. Might as well keep going with the n=18 units then in the vague hopes for a bigger prime, even if in theory there is less chance a given unit is prime.
Also Michael, your 580 seems to do very well in comparison. I was looking at the OCL/2/3 bench results from the other thread and that suggested the 960 should be significantly faster in both OCL2 and OCL3. So is my 960 underperforming, the 580 overperforming, or maybe I should ignore the benchmark results!
Edit: had a quick comparison of hardware status while running these units. n=18 runs hotter by about 6C even with faster fan speed. Reported power usage is about 58% TDP for 17 mega, and 83% TDP for 18. So 18 is using about 40% more power (that is 1.4x before someone tries subtracting them). GPU load is near max on both, slightly more so for 17 mega. Memory controller load is harder hit for 18, at 77% compared to 42%. The rest is near enough the same.  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

With all the ranges, and with different transforms being used, it might be nice to have some consistent benchmarks. The following tests were done at the current leading edge for each range. CPU/GPU were otherwise idle except for the test being performed. In the case of CPU tests, only a single core was used.
The CPU is a Core i5-4670K (Haswell), standard clocks, standard speed memory. No hyperthreading.
GPU is a factory overclocked GTX 580 reduced to stock clocks, or maybe a bit lower. I forget. It's running at 1544/772 shader/core and 1900 memory.
Tests were run under Windows 7 Professional 64 bit with Nvidia driver 355.60.
For the CPU tests, n=17mega and below is using x87; n=18 and above is using FMA3.
For the GPU OCL tests, n=20 and below are using OCL3, except for 17mega which is using OCL2. n=21 and n=22 are using OCL.
n b Digits CPU TIME OCL Time CUDA Time Transforms
15 6,040,440 222,203 0:15:55 0:00:56 x87, OCL3
16 2,268,232 416,527 1:04:15 0:03:48 x87, OCL3
17low 1,667,884 815,552 4:18:22 0:13:55 x87, OCL3
17mega 42,647,172 1,000,065 5:24:18 0:34:33 x87, OCL2
18 1,061,140 1,579,621 1:37:51 0:52:14 FMA3, OCL3
19 839,922 3,106,008 7:26:06 3:25:00 FMA3, OCL3
20 720,126 6,141,938 31:46:34 13:40:28 FMA3, OCL3
21 18,986 8,972,526 116:44:14 22:17:11 27:48:54 FMA3, OCL
22 58,146 19,983,845 558:14:13 105:07:02 127:02:21 FMA3, OCL
As with all benchmarks, these tests were done from the command line rather than using live BOINC tasks. You never know what the server's going to give you, so you may not be comparing similar tasks. Much more important is that Genefer prints a VERY accurate run time estimate right at the beginning of the test, so you only need to run each test for a little while to see how long the whole test will take.
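As a sanity check on the Digits column: the size of b^(2^n)+1 follows directly from n and b, since the number has floor(2^n · log10(b)) + 1 decimal digits. A quick sketch reproducing two rows of the table above:

```python
import math

def gfn_digits(n: int, b: int) -> int:
    """Decimal digit count of the generalized Fermat number b^(2^n) + 1.

    Adding 1 to b^(2^n) never changes the digit count for b > 1, so the
    length is simply floor(2^n * log10(b)) + 1.
    """
    return math.floor((1 << n) * math.log10(b)) + 1

print(gfn_digits(15, 6_040_440))   # n=15 row: 222203 digits
print(gfn_digits(22, 58_146))      # n=22 row: 19983845 digits
```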
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


I see the current b values for n=18 and n=19 are fairly close to the max b in progress.
Have the lower b values already been double checked?
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

I see the current b values for n=18 and n=19 are fairly close to the max b in progress.
Have the lower b values already been double checked?
Absolutely. 18 has only been running for 2 to 3 months on PRPNet since the last BOINC double check, and 19 has only been running for about a month since it was last double checked. Furthermore, both ranges  especially 19  did a lot of their initial work on BOINC. There's not a lot to double check.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

1998golferVolunteer moderator Volunteer tester Send message
Joined: 4 Dec 12 Posts: 1001 ID: 183129 Credit: 839,706,180 RAC: 451,073

Running a CPU GFNWR. It's at 2.767% after 12 hours 30 minutes... so approx 450 hours to complete? Running on an i5-2500K.
____________
275*2^3585539+1 is prime!!! (1079358 digits)
Proud member of Aggie the Pew
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

Running a CPU GFNWR. It's at 2.767% after 12 hours 30 minutes... so approx 450 hours to complete? Running on an i5-2500K.
Michael said "558:14:13" on a Haswell, so the order of magnitude sounds about right. You probably just got one the older WUs with smaller b, hence why runtimes seem on the low side.
But yeah, it'll take... a while to complete.  
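The ~450-hour estimate above is plain proportional extrapolation from BOINC's progress figure; a trivial sketch of the arithmetic:

```python
def estimated_total_hours(elapsed_hours: float, percent_done: float) -> float:
    """Extrapolate total runtime, assuming progress is linear in time
    (reasonable for Genefer, whose iterations each cost about the same)."""
    return elapsed_hours * 100.0 / percent_done

total = estimated_total_hours(12.5, 2.767)   # 12h30m at 2.767%
print(f"total ~{total:.0f}h, remaining ~{total - 12.5:.0f}h")
# prints: total ~452h, remaining ~439h
```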

1998golferVolunteer moderator Volunteer tester Send message
Joined: 4 Dec 12 Posts: 1001 ID: 183129 Credit: 839,706,180 RAC: 451,073

Running a CPU GFNWR. It's at 2.767% after 12 hours 30 minutes... so approx 450 hours to complete? Running on an i5-2500K.
Michael said "558:14:13" on a Haswell, so the order of magnitude sounds about right. You probably just got one the older WUs with smaller b, hence why runtimes seem on the low side.
But yeah, it'll take... a while to complete.
It's a month-old WU: http://www.primegrid.com/workunit.php?wuid=447785450
And yeah, it'll be a while... Only task running, though. And a little bit overclocked CPU.
____________
275*2^3585539+1 is prime!!! (1079358 digits)
Proud member of Aggie the Pew
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

Just switched a GPU onto n=18 (262144) to see how long they take to run.
I was thinking, could n=18 be faster than n=17 mega, since the former is still using the relatively faster OCL3 and not slower OCL2.
For size indication based on leading edge values currently in 1st post:
OCL3 n=17 low 813386 digits
OCL2 n=17 mega 1000048 digits
OCL3 n=18 1579498 digits
OCL3 n=19 3090531 digits
Using the same b as Michael, n=18 gives my GTX 970 a 28:36 runtime with OCL3. n=17 (mega) with OCL2 gets 27:03. So yeah, they are pretty much the same size for Maxwell cards......
... although if you do have a Maxwell GPU, chances are that the GPU is strong enough to do one of the higher Ns and get more points / more important primes. I didn't check the exact ratios, but n=15 gives me around 54k PPD; my last 2 GFNWR (with OCL3, that is) got me ~160k. I can only imagine intermediate ranges would give intermediate PPD.
At any rate, onwards with my overclocking madness.  

Van ZimmermanVolunteer moderator Project administrator Volunteer tester Project scientist Send message
Joined: 30 Aug 12 Posts: 1770 ID: 168418 Credit: 4,269,599,028 RAC: 4,918,118

The variance in GPU capabilities between OCL, OCL2, and OCL3 is quite interesting. The Tahiti chips appear to be OCL2/3 crippled.
On one of my tahiti machines processing GFN 21
ocl 13.5h
ocl2 148h
ocl3 146.75h
It crunches a gfn wr in 61 hours.
On one of my kepler machines crunching gfn21 I recall ocl2 performance being terrible, and ocl3 performance being only slightly slower than ocl, although those wus have rolled off of my machine's history, so I can't give exact numbers.
I suspect there is probably a formula out there for n vs b and OCL choice based on GPU (once there were sufficient perf stats for each GPU). Anecdotally, at least, for those of us with Tahitis: if it's not OCL, nothing to see here, please move along....
Using the same B as Michael, n=18 gives my Gtx 970 a 28:36 runtime with OCL3. n=17 (mega) with OCL 2 gets 27:03. So yeah, they are pretty much same sized for maxwell cards......
... although if you do have a Maxwell GPU, chances are that the GPU is strong enough to do one of the higher Ns and get more points / more important primes. I didn't check the exact ratios, but n=15 gives me around 54k PPD; my last 2 GFNWR (with OCL3, that is) got me ~160k. I can only imagine intermediate ranges would give intermediate PPD.
At any rate, onwards with my overclocking madness.
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

The variance in GPU capabilities between OCL, OCL2, and OCL3 is quite interesting. The Tahiti chips appear to be OCL2/3 crippled.
On one of my tahiti machines processing GFN 21
ocl 13.5h
ocl2 148h
ocl3 146.75h
It crunches a gfn wr in 61 hours.
On one of my kepler machines crunching gfn21 I recall ocl2 performance being terrible, and ocl3 performance being only slightly slower than ocl, although those wus have rolled off of my machine's history, so I can't give exact numbers.
I suspect there is probably a formula out there for n vs b and OCL choice based on GPU (once there were sufficient perf stats for each GPU). Anecdotally, at least, for those of us with Tahitis: if it's not OCL, nothing to see here, please move along....
Actually, I think the only known GPU series which needs such a formula is Maxwell. For everything else, OCL >> OCL3 > OCL2, and the difference is big enough that a higher n with a lower b can't make up for it.
The exception is Maxwell, where OCL2 is so much worse than OCL3 that going one n higher kinda makes up for it. But since almost everything will be using OCL3 for a while, n=17 mega and 18 are really the only ones which matter. At least for a while....

mackerelVolunteer tester
Send message
Joined: 2 Oct 08 Posts: 1814 ID: 29980 Credit: 228,261,256 RAC: 200,502

I suspect there is probably a formula out there for n vs b and OCL choice based on GPU (once there were sufficient perf stats for each GPU). Anecdotally, at least, for those of us with Tahitis: if it's not OCL, nothing to see here, please move along....
http://www.primegrid.com/forum_thread.php?id=4152
At the top of that thread is a table showing for each n, at what b do you switch from OCL to OCL3 to OCL2.
Not sure if that was the question being asked, or if it was more what was optimal to compute at a given time? As a generalisation, if you have a nonMaxwell GPU capable of running OCL, then aiming for OCL tasks would probably get the best return. For Maxwell, look for OCL3 tasks.  

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

Just a quick question: since no primes are known for n = 20,21,22, shouldn't these get the 10% conjecture bonus? Also, if n=21 takes a lot longer than n=20, shouldn't it have a bonus bigger than 10% (or n=20 have no bonus at all)?  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Just a quick question: since no primes are known for n = 20,21,22, shouldn't these get the 10% conjecture bonus?
GFN isn't a conjecture.
Also, if n=21 takes a lot longer than n=20, shouldn't it have a bonus bigger than 10% (or n=20 have no bonus at all)?
It may change in the future.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

Just a quick question: since no primes are known for n = 20,21,22, shouldn't these get the 10% conjecture bonus?
GFN isn't a conjecture.
Going off on a bit of a tangent here, but is it proven that for a given n, there will always be a B that makes it prime? In the links given on the GFN Search Page, there isn't anything about it being proven / believed / proven wrong; they just throw out the definition of a GFN prime and call it a day.
So I conjecture such, and now GFN is eligible for the bonus (jk jk, I know that's not how it works, but you get my point :D Still serious about the first paragraph, though).


GFN isn't a conjecture.
Conjecture 1: there exists a prime number of the form b^(2^20) + 1.
Conjecture 2: there exists a prime number of the form b^(2^21) + 1.
Conjecture 3: there exists a prime number of the form b^(2^22) + 1.
Conjecture 4: For any n, there exists a prime number of the form b^(2^n) + 1.
Conjecture 5: For any n, there exists an infinite number of primes of the form b^(2^n) + 1.
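These conjectures can at least be probed empirically for small n with a probable-primality test, the same kind of check Genefer performs (this sketch uses a generic Miller-Rabin PRP test, not Genefer's transforms; only even b can yield a prime, since odd b makes b^(2^n)+1 an even number greater than 2):

```python
import random

def is_probable_prime(m: int, rounds: int = 16) -> bool:
    """Miller-Rabin probable-primality (PRP) test with random bases."""
    if m < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if m % p == 0:
            return m == p
    d, s = m - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, m - 1)
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, m)
            if x == m - 1:
                break
        else:
            return False
    return True

def smallest_gfn_base(n: int) -> int:
    """Smallest base b such that b^(2^n) + 1 is (probably) prime."""
    exp = 1 << n
    b = 2
    while not is_probable_prime(b**exp + 1):
        b += 2   # odd b > 1 gives an even value > 2: always composite
    return b

print([smallest_gfn_base(n) for n in range(7)])   # [2, 2, 2, 2, 2, 30, 102]
```

For n = 0..4 this recovers the classic Fermat primes (b = 2); 2^32 + 1 = 641 × 6700417 is composite, so n = 5 needs a larger base.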
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

:)
It's not a Sierpinski or Riesel conjecture.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Van ZimmermanVolunteer moderator Project administrator Volunteer tester Project scientist Send message
Joined: 30 Aug 12 Posts: 1770 ID: 168418 Credit: 4,269,599,028 RAC: 4,918,118

It was more along the lines of "when is it more efficient to switch to a higher n project with a lower b that is therefore still running ocl given my current hardware?"
For those running maxwell, the answer is probably later than those running tahiti, to take two extremes. For tahiti, the answer could very well be "as soon as it switches away from ocl". For maxwell, that may not be the answer.
I suspect there is probably a formula out there for n vs b and OCL choice based on GPU (once there were sufficient perf stats for each GPU). Anecdotally, at least, for those of us with Tahitis: if it's not OCL, nothing to see here, please move along....
http://www.primegrid.com/forum_thread.php?id=4152
At the top of that thread is a table showing for each n, at what b do you switch from OCL to OCL3 to OCL2.
Not sure if that was the question being asked, or if it was more what was optimal to compute at a given time? As a generalisation, if you have a nonMaxwell GPU capable of running OCL, then aiming for OCL tasks would probably get the best return. For Maxwell, look for OCL3 tasks.
 


[...]
Conjecture 5: For any n, there exists an infinite number of primes of the form b^(2^n) + 1.
Note that this is open even for n=1, i.e. it is not known if there are infinitely many primes of the form b^2 + 1. This is the fourth of Landau's problems (1912).
Each new generalized Fermat prime (any n>0) is a prime of this form.
/JeppeSN
 


Going a bit off tangent here, but is it proven that for a given n, there will always be a B that makes it prime?
That has not been proven (but is conjectured by most people)! If you could prove there was at least one B for each n, you would have solved the fourth of Landau's problems (my other post, just above this one). /JeppeSN
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

Just a bit curious... why do n=15/16 use version 3.10 while other ranges (such as 17 and 20) use 3.09? What's the difference (if any)?


Just a bit curious... why n=15 / 16 use version 3.10 and other ranges (such as 17 and 20) use 3.09? What's the difference (if any)?
3.09 is the CPU version, 3.10 is the GPU version. So either one can have either version. I am running 16 on both and do see both versions as described.
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Just a bit curious... why n=15 / 16 use version 3.10 and other ranges (such as 17 and 20) use 3.09? What's the difference (if any)?
For n=15 and 16, 3.09 was OCL and 3.10 is OCL3.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


Recently completed Genefer WU reference:
per n=17low (OCL3) WU:
GTX750 = 18min (25W / 1.4GHz)
GTX 970 = 8min (135W / 1.5GHz)
per n=17 (OCL2) Mega WU:
GTX 750 = 56min
GTX 580 = 34min
GTX 970 = 28min (130W)
per n=18 (OCL3) WU:
GTX 750 = 70min
GTX 580 = 52min
GTX 960 = 47min
GTX 970 = 30min (160W)
per n=19 (OCL3) WU:
GTX970 = 104min (180W)
GTX750 = 275min
per n=20 (OCL3) WU:
GTX 970 = 6hr 40min (210W / 1455MHz)
GTX 750 = 18hr 30min (40W / 1385MHz)
GTX 970 OCL = 7hr 55min (135W / 1540MHz), ~16% slower than OCL3. OCL power operating point ~35.7% lower (135/210W).
OCL3 is certainly the most powerful application available for Maxwell on BOINC, bar none.
Per-WU runtime, (AVX) i5-3230M 2.6GHz / 1600 C11 RAM, 3 instances:
n=18: (290min) / 3 WU = 97min
n=19: 20hr
n=20: 66hr (2 instances)
n=17 Mega, x87 (80 bit): 11hr
Per-WU runtime, (FMA3) i5-4440S 2.9GHz / 2133 C7 RAM, 3 instances:
n=18: (120min) / 3 WU = 40min
n=19: 10hr
n=20: 36hr
n=17 Mega, x87 (80 bit): 7hr
Overall the OCL3 application is ~20% faster than OCL (64 bit) with Maxwell. OCL3's power requirements are much more demanding. A new standard for Maxwell power consumption has arisen, and its name is OCL3.
Fury (X) Fiji owners: is there a power usage difference between OCL3 vs. OCL?
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

Just a bit curious... why n=15 / 16 use version 3.10 and other ranges (such as 17 and 20) use 3.09? What's the difference (if any)?
For n=15 and 16, 3.09 was OCL and 3.10 is OCL3.
Oh... that explains it.  

HonzaVolunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1647 ID: 352 Credit: 1,931,980,611 RAC: 402,317

Fury (X) Fiji owners: is there a power usage difference between OCL3 vs. OCL?
I run Fury Nano but no wattmeter to tell power consumption.
Leading edge of GFN WR.
OCL would take 58-61 hours (~940MHz GPU core, 75C, memory controller usage 35-40%)
OCL3 would take ~107 hours (~890MHz GPU core, 76C, memory controller usage ~20%, with spikes up to 40%)
GFN17low runs GPU ~980MHz, 73C, memory controller usage 9%
Trouble with the Nano is that the GPU clock is adjusted to fit the power consumption profile.
OCL seems both more efficient and more GPU friendly compared to OCL3.
I don't expect such a huge difference in power consumption as you experienced on GTX 970 with n=20.
____________
My stats
Badge score: 1*1 + 5*2 + 8*13 + 9*2 + 12*3 = 169  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Since we will exceed OCL3 limits on n=15 relatively quickly, and to a lesser extent also on n=16, here's the "expected primes" table also including the expected number of primes if we search up to the much higher OCL2 limits. Note that searching to those limits will take an extremely long time with today's technology.
n 15 16 17 18 19 20 21 22
2^n 32,768 65,536 131,072 262,144 524,288 1,048,576 2,097,152 4,194,304
OCL3 Limit 16,777,216 11,863,283 8,388,608 5,931,642 4,194,304 2,965,821 2,097,152 1,482,910
OCL2 Limit (approx.) 95,520,000 81,670,000 60,430,000 50,790,000 41,350,000 35,020,000 28,310,000 22,470,000
Leading Edge 10,428,486 4,205,312 1,630,534 1,256,684 920,564 719,998 18,240 57,954
Expected primes 68.49 82.52 37.02 15.45 5.58 2.16 1.18 0.44
Expected OCL2 primes 854.82 762.77 290.76 146.25 60.92 28.91 13.49 5.86
The bottom two lines are the most important: They're the remaining number of primes we can expect to find if we search up to the OCL3 limit or the OCL2 limit, respectively. Note that this search to the OCL3 limits will take several years for n=19, n=20 and n=21, and substantially more for n=22. I only expect we'll be doing OCL2 searching anytime soon on n=15 and n=16, and of course the n=17 mega search.
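A hedged sketch of the heuristic presumably behind such tables: the chance that b^(2^n)+1 is prime falls off roughly like 1/ln(b^(2^n)) = 1/(2^n · ln b), with a constant factor (the even-b restriction and small-prime corrections) that cancels when two b-ranges at the same n are compared. So the ratio between the two expected-primes rows can be approximated by integrating 1/ln b:

```python
import math

def expected_weight(b_lo: float, b_hi: float, steps: int = 100_000) -> float:
    """Midpoint-rule integral of 1/ln(b) over (b_lo, b_hi). Up to an
    n-dependent constant factor, this is proportional to the expected
    number of GFN-n primes with b in that interval."""
    h = (b_hi - b_lo) / steps
    return h * sum(1.0 / math.log(b_lo + (i + 0.5) * h) for i in range(steps))

# n=15: leading edge 10,428,486; OCL3 limit 16,777,216; OCL2 limit ~95,520,000
to_ocl3 = expected_weight(10_428_486, 16_777_216)
to_ocl2 = expected_weight(10_428_486, 95_520_000)
print(to_ocl2 / to_ocl3)   # ~12.5, close to the table's 854.82 / 68.49 ~ 12.48
```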
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

RafaelVolunteer tester
Send message
Joined: 22 Oct 14 Posts: 765 ID: 370496 Credit: 177,075,541 RAC: 143,901

One can only hope for Pascal's "10x the compute performance of Maxwell" and Skylake-E's AVX-512........


One can only hope for Pascal's "10x the compute performance of Maxwell" and Skylake-E's AVX-512........
And for new implementations!
Since the 1980s, the computations have been using 64 bit floating point numbers. But FP64 capability is not necessary for mass-market applications. Surprisingly, CPUs and GPUs still include some fast FP64 units. This is changing:
GTX Titan Black : 5121 FP32 GFLOPS / 1707 FP64 GFLOPS.
GTX Titan X : 6144 FP32 GFLOPS / 192 FP64 GFLOPS.
Are AVX-512/FP64 instructions going to be fast on Skylake-E?
New hardware => new algorithms and then certainly some new improvements.  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

I've updated the first post in this thread to include all GFN primes ever discovered at PrimeGrid.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


I've updated the first post in this thread to include all GFN primes ever discovered at PrimeGrid.
All?
Five "131072 Low" in http://www.primegrid.com/stats_genefer.php and four in the first post...
A new one that ends with 30 like 261 1722230^131072+1, 1660830^131072+1, 1560730^131072+1, 1372930^131072+1 or 1063730^131072+1.
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

All?
Five "131072 Low" in http://www.primegrid.com/stats_genefer.php and four in the first post.
All that we can currently make public, yes. Reportable primes aren't publicized until they're actually reported.
If you look carefully, you'll also see that our list of primes discovered here includes two n=16 primes and one n=15 prime that are also not included in the post. It turns out that those "discoveries" were actually of primes that were previously known.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


I've updated the first post in this thread to include all GFN primes ever discovered at PrimeGrid.
Nice. /JeppeSN  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

In a couple of days the n=15 32768 double check will complete and we'll be moving into mostly untested territory. There are two known primes above where we've tested on PRPNet, so clearly somebody has done some testing up there, but we don't know how much may have been tested before. We expect there to be about 70 primes in the untested range testable by OCL3, and with just two primes known there, I suspect very little of it has actually been tested before.
What I find interesting is that it took nearly 5 years to test up to 10.4M on PRPNet. The double check of the same range (and then some) took about 5 weeks.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

While there's still several thousand double check tasks out being processed, the leading edge of n=15 (32768) has now surpassed what was done on PRPNet and we are once again searching in uncharted territory. I expect we'll find several primes per day for the next few weeks.
The first PRP has already been found; it's being checked for primality now.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

axnVolunteer developer Send message
Joined: 29 Dec 07 Posts: 256 ID: 16874 Credit: 16,324,185 RAC: 70,868

While there's still several thousand double check tasks out being processed, the leading edge of n=15 (32768) has now surpassed what was done on PRPNet and we are once again searching in uncharted territory. I expect we'll find several primes per day for the next few weeks.
The first PRP has already been found; it's being checked for primality now.
With the expected deluge of n=15 primes, perhaps it would be better to have a dedicated thread for those primes?  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

While there's still several thousand double check tasks out being processed, the leading edge of n=15 (32768) has now surpassed what was done on PRPNet and we are once again searching in uncharted territory. I expect we'll find several primes per day for the next few weeks.
The first PRP has already been found; it's being checked for primality now.
With the expected deluge of n=15 primes, perhaps it would be better to have a dedicated thread for those primes?
We'll see how it goes. I think there are actually more n=16 primes expected in the "leading edge to OCL3 b-limit" range than there are n=15 primes.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Also, if n=21 takes a lot longer than n=20, shouldn't it have a bonus bigger than 10% (or n=20 have no bonus at all)?
It may change in the future.
The bonus for n=21 has been increased to 20%.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Ten days ago the leading edge of n=15 (32768) passed the end of the double checking and we started crunching brand new tasks. There were, of course, quite a few of the double checking tasks still in progress.
All of the double checks are now complete. Every 32768 task that is sent out from now on will be a new task.
To avoid confusion, I'm talking about the double checking of old PRPNet work, which was only checked once. (Or sometimes not at all.) All GFN work done on BOINC is sent to (at least) two different computers, so everything going forward is double checked at the same time the first test is done.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


Looks like a few of the GFN18 WUs are dropping out of the fast transforms, all the way down to x87 in some cases:
b = 1232796; 92K seconds vs. ~30K seconds normally: http://www.primegrid.com/result.php?resultid=667409629
b = 1237842; 76K seconds vs. ~18K seconds normally: http://www.primegrid.com/result.php?resultid=667635485
b = 1233608; 90K seconds vs. ~10K seconds normally: http://www.primegrid.com/result.php?resultid=667438497  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Looks like a few of the GFN18 WUs are dropping out of the fast transforms, all the way down to x87 in some cases:
b = 1232796; 92K seconds vs. ~30K seconds normally: http://www.primegrid.com/result.php?resultid=667409629
b = 1237842; 76K seconds vs. ~18K seconds normally: http://www.primegrid.com/result.php?resultid=667635485
b = 1233608; 90K seconds vs. ~10K seconds normally: http://www.primegrid.com/result.php?resultid=667438497
Sure does look like it.
CPUs with AVX and FMA3 will have a little bit more time before they switch over, since the b limit is a little higher with those transforms, but it probably won't be too long before they start to change too.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


Shucks. I was hoping to get a few original (as in not double-check) GFN18s in before they switched to the slow version.
My most recent one (b=1247046) briefly went to AVX: http://www.primegrid.com/result.php?resultid=668027560
Looks like we are just about to start hitting new ground with the GFN18s, though.  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Shucks. I was hoping to get a few original (as in not double-check) GFN18s in before they switched to the slow version.
My most recent one (b=1247046) briefly went to AVX: http://www.primegrid.com/result.php?resultid=668027560
Looks like we are just about to start hitting new ground with the GFN18s, though.
We'll surpass the DC boundary on 262144 in about a day and a half  the largest double check candidate is 1256802^262144+1, the leading edge is at b=1248724 and we're doing about 900 tasks per day. There's about 1300 candidates left to double check.
The (approximate) b limit on FMA3 on 64 bit Windows is about 1350000, so it's possible that some tests will run FMA3 on new candidates.
There may be slight variations in the limit on 32 bit builds or Linux/Mac because the code will be slightly different. And, of course, some candidates will work on FMA3 and some will not in a very unpredictable fashion, so it's not easy to predict which will need to switch down to x87.
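The "day and a half" figure is just the remaining double-check queue divided by throughput:

```python
remaining_candidates = 1300   # double-check candidates still to send (from the post)
tasks_per_day = 900           # current n=18 throughput (from the post)

days_left = remaining_candidates / tasks_per_day
print(f"~{days_left:.1f} days until new candidates start going out")  # ~1.4 days
```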
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

I'm now considering the n=18 262144 search to have officially started transitioning to the 80 bit x87 transforms. Many tests will still be able to complete with the faster 64 bit transforms, especially FMA3, but as b increases more and more of the tests will have to use the slower transform.
On the positive side, we're about a day away from sending out the first non-double-check tasks which will, of course, greatly increase the chance of finding a prime.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

EDIT: Updated to reflect n=18's transition to the 80 bit x87 CPU transform. This changes the CPU run time for n=18 from 1:37:51 to 18:26:51.
With all the ranges, and with different transforms being used, it might be nice to have some consistent benchmarks. The following tests were done at the current leading edge for each range. CPU/GPU were otherwise idle except for the test being performed. In the case of CPU tests, only a single core was used.
The CPU is a Core i5-4670K (Haswell), standard clocks, standard speed memory. No hyperthreading.
GPU is a factory overclocked GTX 580 reduced to stock clocks, or maybe a bit lower. I forget. It's running at 1544/772 shader/core and 1900 memory.
Tests were run under 64-bit Windows 7 Professional with Nvidia driver 355.60.
For the CPU tests, n=18 and below use x87; n=19 and above use FMA3.
For the GPU OCL tests, n=20 and below use OCL3 (except 17-mega, which uses OCL2); n=21 and n=22 use OCL.
n        b            Digits      CPU Time    OCL Time    CUDA Time   Transforms
15       6,040,440    222,203     0:15:55     0:00:56     -           x87, OCL3
16       2,268,232    416,527     1:04:15     0:03:48     -           x87, OCL3
17-low   1,667,884    815,552     4:18:22     0:13:55     -           x87, OCL3
17-mega  42,647,172   1,000,065   5:24:18     0:34:33     -           x87, OCL2
18       1,061,140    1,579,621   18:26:51    0:52:14     -           x87, OCL3
19       839,922      3,106,008   7:26:06     3:25:00     -           FMA3, OCL3
20       720,126      6,141,938   31:46:34    13:40:28    -           FMA3, OCL3
21       18,986       8,972,526   116:44:14   22:17:11    27:48:54    FMA3, OCL
22       58,146       19,983,845  558:14:13   105:07:02   127:02:21   FMA3, OCL
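The Digits column can be sanity-checked from the definition of a GFN: b^(2^n)+1 has about 2^n * log10(b) decimal digits. A minimal sketch follows; the huge table entries may land a digit or two off the values shown because of floating-point rounding and because the leading edge moves between snapshots.

```python
import math

def gfn_digits(n: int, b: int) -> int:
    """Decimal digits of the generalized Fermat number b^(2^n) + 1."""
    # digits(N) = floor(log10(N)) + 1, and log10(b^(2^n) + 1) ~= 2^n * log10(b)
    return math.floor((1 << n) * math.log10(b)) + 1

# Exact check on a small case: 3^(2^4) + 1 = 43046722 has 8 digits.
assert gfn_digits(4, 3) == len(str(3 ** 16 + 1))

print(gfn_digits(22, 58146))  # on the order of 20 million digits, as in the n=22 row
```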
As with all benchmarks, these tests were done from the command line rather than using live BOINC tasks. You never know what the server's going to give you, so you may not be comparing similar tasks. Much more important is that Genefer prints a VERY accurate run time estimate right at the beginning of the test, so you only need to run each test for a little while to see how long the whole test will take.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

N=18's (262144) leading edge has now surpassed the end of the double checks, so brand new n=18 tasks have started going out. There will still be a few double check stragglers for a little while, but for the most part 262144 will now be testing new candidates.
Now that we're testing new numbers again, the predicted number of megaprimes within the range of OCL3 is about 15.
Also, although n=18 has officially started its CPU transition from the fast 64-bit transforms to the slow 80-bit transform, I personally am still seeing all of my n=18 CPU tasks either run completely with FMA3 or only briefly drop to a slower transform before returning to FMA3. At least on some machines (such as my 64-bit Windows/Haswell CPU), there's still some life in the fast CPU apps.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

HonzaVolunteer moderator Volunteer tester Project scientist Send message
Joined: 15 Aug 05 Posts: 1647 ID: 352 Credit: 1,931,980,611 RAC: 402,317

For n=16 (65536) there were 8 primes previously missed and then found during the double check, wow.
All of them were in the 1927034 < b < 2909834 range; none in the rest of the double check so far.
We are fast approaching b=4,205,312, where the double check ends and new work begins.
Is that correct?
____________
My stats
Badge score: 1*1 + 5*2 + 8*13 + 9*2 + 12*3 = 169  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

For n=16 (65536) there were 8 primes previously missed and then found during the double check, wow.
All of them were in the 1927034 < b < 2909834 range; none in the rest of the double check so far.
We are fast approaching b=4,205,312, where the double check ends and new work begins.
Is that correct?
Yes.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

N=16's (65536) leading edge has now surpassed the end of the double checks, so brand new n=16 tasks have started going out. There will still be a few double check stragglers for a little while, but for the most part 65536 will now be testing new candidates.
Now that we're testing new numbers again, I expect we'll be finding T5Kreportable primes at a decent rate. If the current processing rate continues, I expect about 1 prime every three days.
Another prediction: at the current rate, it will take about 8 months for us to exceed the OCL3 limit on n=16.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

Just a bit curious... why do n=15 / 16 use version 3.10 while other ranges (such as 17 and 20) use 3.09? What's the difference (if any)?
For n=15 and 16, 3.09 was OCL and 3.10 is OCL3.
And when I switch the n=15 apps from OCL3 to OCL2, those will be called 3.11.
Sometime in 2016 I expect to also switch n=16 to OCL2, but it's possible we could have a combined OCL app before then.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

N=18's (262144) leading edge has now surpassed the end of the double checks, so brand new n=18 tasks have started going out. There will still be a few double check stragglers for a little while, but for the most part 262144 will now be testing new candidates.
The last of the 262144 double check "straggler" tasks has now been completed. Every 262144 task sent from this point on is for new work.
____________
Please do not PM me with support questions. They will usually go unanswered. Ask on the forums instead. Thank you!
 


After 40+ days finally finished:
Ivy Bridge-based Xeon (so only AVX, not FMA/AVX2) at 2.7 GHz, with HT on and only 1 WU being crunched.

Van ZimmermanVolunteer moderator Project administrator Volunteer tester Project scientist Send message
Joined: 30 Aug 12 Posts: 1770 ID: 168418 Credit: 4,269,599,028 RAC: 4,918,118

After 40+ days finally finished:
Ivy Bridge-based Xeon (so only AVX, not FMA/AVX2) at 2.7 GHz, with HT on and only 1 WU being crunched.
Wow. That is a huge run time.  

Michael GoetzVolunteer moderator Project administrator Project scientist
Send message
Joined: 21 Jan 10 Posts: 10311 ID: 53948 Credit: 115,221,977 RAC: 116,865

N=16's (65536) leading edge has now surpassed the end of the double checks, so brand new n=16 tasks have started going out. There will still be a few double check stragglers for a little while, but for the most part 65536 will now be testing new candidates.
The last of the 65536 straggler double check tasks have now been completed, so every 65536 task sent out from now on is new work.
The only GFN double 
