1)
Message boards : Generalized Fermat Prime Search : DO YOU FEEL LUCKY? (Message 162808)
Posted 9 days ago by Yves Gallot
Without getting into the detailed math on this, your logic here is fundamentally flawed because it assumes a constant rate of sieving (i.e., that we have a constant rate of removal of factors). This simply is not true with sieves. The deeper we go into a sieve, the slower the removal of factors becomes...
Number of remaining candidates for ΔB = 1M as a function of sieve depth:
1e21: 202154
2e21: 199297 (2857: 1.41%)
3e21: 197663 (1634: 0.82%)
4e21: 196520 (1143: 0.58%)
5e21: 195642 ( 878: 0.45%)
6e21: 194931 ( 711: 0.36%)
7e21: 194334 ( 597: 0.31%)
8e21: 193819 ( 515: 0.26%)
9e21: 193367 ( 452: 0.23%)
1e22: 192965 ( 402: 0.21%)
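The slowdown in the table is exactly what the Mertens-type estimate quoted later in this list predicts: the number of survivors is proportional to 1/log(p_max). A minimal sketch reproducing the table from the constants given in the thread (e^{−γ} ≈ 0.56146, C_22 = 17.41):

```python
import math

EULER_GAMMA = 0.5772156649015329
C22 = 17.41            # GFN22 sieve constant (from the thread)
DELTA_B = 1_000_000    # width of the b-range

def remaining(p_max):
    """Estimated survivors: e^(-gamma) * C_n * dB / ln(p_max)."""
    return math.exp(-EULER_GAMMA) * C22 * DELTA_B / math.log(p_max)

prev = None
for k in range(1, 11):
    n = remaining(k * 1e21)
    if prev is None:
        print(f"{k}e21: {n:6.0f}")
    else:
        print(f"{k}e21: {n:6.0f} ({prev - n:4.0f}: {100 * (prev - n) / prev:.2f}%)")
    prev = n
```

The printed counts match the table above to within a candidate or two, which is why the incremental yield of each extra 1e21 of sieving keeps shrinking.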

2)
Message boards : Generalized Fermat Prime Search : DO YOU FEEL LUCKY? (Message 162804)
Posted 9 days ago by Yves Gallot
One thing you're missing: the 200,000 candidates being tested over 5 years actually represent 200,000 plus 800,000 factors removed by sieving, i.e. a range of 1M covered in 5 years.
So, 1.4% x 1M = 14,000 candidates eliminated.
And that's with 1% GPUs used for sieving vs. GFN.
So, sieving removes factors at least 7 times faster than GFN.
The "improvement" is log(2.2·10^{21}) / log(1.1·10^{21}) = 20.17% / 19.89% = 1.014. It doesn't depend on ΔB; it is not 20.17% − 19.89% = 0.28%.
The number of remaining candidates at 1.1·10^{21} is 1.014 times the number of remaining candidates at 2.2·10^{21} (+1.4%).
We have 200,000 remaining candidates => ΔB = 1M. But the right count is 1M x 0.28% ~ 2,800 candidates, not 1.4% x 1M = 14,000.
At p_{max} ~ 10^{21}, only one fifth of the factors remove a remaining candidate.
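Both figures can be checked numerically; a quick sketch using the thread's numbers (the ~20% surviving fraction is taken from the later post in this list):

```python
import math

p1, p2 = 1.1e21, 2.2e21        # current and doubled sieve limits
ratio = math.log(p2) / math.log(p1)
print(f"improvement factor: x{ratio:.3f}")     # ~x1.014, independent of dB

# ~20% of b-values survive the sieve at this depth, so a newly found
# factor removes a *remaining* candidate only about 1 time in 5.
surviving_fraction = 0.2017
print(f"useful factors: ~1 in {1 / surviving_fraction:.1f}")
```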

3)
Message boards : Generalized Fermat Prime Search : DO YOU FEEL LUCKY? (Message 162799)
Posted 9 days ago by Yves Gallot
When I was manual sieving GFN22, I was removing about 1,125 candidates a day by myself for the 100M range. Or, 22,500 candidates for a 2G range.
If I was running GFN, I could only test 1 or 2 a day.
The efficiency of sieving is decreasing with the sieve limit.
After sieving, the number of remaining candidates is about e^{−γ} · C_{n} · ΔB / log(p_{max}), where e^{−γ} = 0.56146, C_{22} = 17.41 and p_{max} is the sieve limit.
Today, for GFN22, p_{max} = 1.1·10^{21}. Then #cand ~ 0.2018 ΔB. The actual number of candidates in [2; 400M] is 80,693,547 => 20.17%.
If the sieve limit is doubled (2.2·10^{21}), the ratio will be 19.89%. The improvement for GFN22/DYFL projects is 1.4%.
Because of technological developments, sieving a range that will be tested 5+ years from now is a mistake.
In the coming five years, GFN22/DYFL projects will test about 200,000 candidates.
N GPU-days are needed to sieve [1.1·10^{21}; 2.2·10^{21}]. It is estimated that if these N GPU-days are allocated to GFN22/DYFL projects instead, then more than 1.4% x 200,000 ~ 2,800 candidates are eliminated.

4)
Message boards : Generalized Fermat Prime Search : DO YOU FEEL LUCKY? (Message 162794)
Posted 9 days ago by Yves Gallot
From my understanding, GFN22 was being sieved to 2G, but only the first 100M was being used.
The remainder of the 2G results were being saved, so it shouldn't be necessary to RESIEVE GFN22 to get results up to 2G.
However, GFN22 was severely undersieved for 2G, so a lot of NEW GFN22 sieving needs to be done.
The full range [2; 2G] is sieved. The sieve limit depends on the computation time of a primality test, not on the range size.
Today, GFN22 + DYFL eliminate about 35,000 candidates each year, the equivalent of ΔB ~ 200,000. 2G / 200,000 ~ 10,000 years.
GFN22 is not undersieved; it would be if the range could be tested in a few years.
2G is a huge range. More than 400 DYFL are expected in this range!
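The 10,000-year figure is straightforward arithmetic on the post's numbers:

```python
full_range = 2 * 10**9     # b in [2; 2G]
db_per_year = 200_000      # b-range covered per year (tests plus sieved-out factors)
years = full_range / db_per_year
print(f"~{years:,.0f} years to exhaust the range")   # ~10,000 years
```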

5)
Message boards : Number crunching : 4070 Ti benchmarks (Message 162767)
Posted 11 days ago by Yves Gallot
The one thing I love about my 4070Ti is how much more efficient it is compared to my 3070Ti!
Sorry this is a bit off topic but regarding power usage right now running some Einstein work, the 3070Ti pulls 280W. Same work done with the 4070Ti = 164W. Amazing.
The architectures of GeForce 30 and 40 are almost the same. But the process node of the 30 series is 8 nm (Samsung 8N), while the 40 series uses 5 nm (TSMC 4N). Hence the energy per operation is much lower.
4070 Ti: 7680 cores at 2610 MHz (285 W). 3070 Ti: 6144 cores at 1770 MHz (290 W)
(7680 * 2610) / (6144 * 1770) = x1.84. In practice, with genefer, the improvement is even greater (clock frequency may not be at boost frequency if max power is drawn).
TSMC 4N is a big step forward.
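The x1.84 figure comes straight from the core counts and clocks quoted above; extending the same arithmetic to performance per watt (board powers also from the post) shows where the reported efficiency gain comes from:

```python
# (cores, boost MHz, board power W) from the post
rtx4070ti = (7680, 2610, 285)
rtx3070ti = (6144, 1770, 290)

def throughput(cores, mhz, _watts):
    """Crude raw-throughput proxy: cores x clock."""
    return cores * mhz

ratio = throughput(*rtx4070ti) / throughput(*rtx3070ti)
print(f"raw throughput: x{ratio:.2f}")        # x1.84

perf_per_watt = (throughput(*rtx4070ti) / rtx4070ti[2]) / \
                (throughput(*rtx3070ti) / rtx3070ti[2])
print(f"perf per watt: x{perf_per_watt:.2f}")  # ~x1.88
```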

6)
Message boards : General discussion : Longest time to prime? (Message 162657)
Posted 15 days ago by Yves Gallot
GTX 980: average time
- GFN16, 250 sec/candidate, 15,500 candidates/prime => 45 days
- GFN17, 900 sec/candidate, 30,000 candidates/prime => 300 days
SGS and GFN15 are not recommended on old hardware. These projects are double-checked and slow computers are not often first. The most efficient projects are GFN16 on GPU and PPSE on CPU.
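The day counts follow from multiplying the per-candidate time by the expected candidates per prime; a small sketch (GFN17 works out to ~312 days, which the post rounds to 300):

```python
def days_to_prime(sec_per_candidate, candidates_per_prime):
    """Average wall-clock days between primes for one device."""
    return sec_per_candidate * candidates_per_prime / 86_400

print(f"GFN16: {days_to_prime(250, 15_500):.0f} days")   # ~45
print(f"GFN17: {days_to_prime(900, 30_000):.0f} days")   # ~312
```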

7)
Message boards : Problems and Help : Multithread for Windows 11 (Message 162625)
Posted 16 days ago by Yves Gallot
Does the indicated irregularity affect the number of points awarded? Is there a known way to fix this error?
What error? Credit depends on candidate size, not on the number of threads.
3*2^20182450+1: 10,798.23 credits
3*2^20183276+1: 10,960.53 credits

8)
Message boards : Generalized Fermat Prime Search : Genefer performance in relation to PCI Express bus bandwidth (Message 162560)
Posted 18 days ago by Yves Gallot
PCIe bus load depends on GFN subproject.
If PCIe bus bandwidth is the problem then testing GFN20 or GFN21 should increase the power consumption and achieve 280 W.

9)
Message boards : Generalized Fermat Prime Search : GFN1x Small Primes search starts 2021-01-21 (Message 162359)
Posted 24 days ago by Yves Gallot
PG running GFN15 "vector" version would be a great fit.
Separate mix of micro-ranges for GPU and for CPU?
Even a slow CPU can test 32 GFN15 in 4 or 5 hours (single-threaded). Each task (main and proof) can check 32 candidates.

10)
Message boards : Generalized Fermat Prime Search : GFN1x Small Primes search starts 2021-01-21 (Message 162336)
Posted 25 days ago by Yves Gallot
Is it the one with the super-misleading-extra-confusing name genefer20_2.00.0_win32.exe?
There are two different applications:
- genefer/genefer22 running on PrimeGrid: a single GFN is tested.
- genefer20 running on the Private GFN Server: a vector of GFNs is tested.
genefer is the well-known application of PrimeGrid. A new version was created in 2022 that implements proof checking ("Fast DC"). In the transition phase, the new application was named genefer22 (not to be confused with the previous version). "Old genefer" is now deprecated: the new application is genefer and uses the year and month of the release as a version number. Releases 22.12.2 and 23.01.0 are the current PrimeGrid applications.
genefer20 is a GPU application created in 2020. It is dedicated to the search for GFN primes in the range 8 ≤ n ≤ 14. Since n ≤ 13 have been tested to b = 2G, the current version is optimized for GFN14. This new version implements Gerbicz-Li error checking; the previous version used Gerbicz error checking. Even though it was written in 2023, its interface is identical, so it is "genefer20 version 2.0.0". There is no CPU implementation, so "g" is not needed. It's true that it is now a 64-bit application on Windows.
Vectors of GFNs are needed on fast GPUs: a single number is too small to keep thousands of cores busy. Today, the vector size is 64. The RTX 4090 tests 64 GFN14 in 40 seconds; an application cannot check a single GFN14 in 0.625 second! GFN15 and 16 can be tested to compare performance. genefer and the PrimeGrid server will probably evolve to check 32 GFN15, 16 GFN16, 8 GFN17, 4 GFN18, 2 GFN19, and the vector size will increase over time as technology improves. But that is just the design phase and preliminary tests for n ≥ 15.
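The proposed per-n task sizes follow a simple halving rule: each step up in n doubles the size of one candidate, so the vector shrinks to keep the work per task comparable. A hypothetical sketch of that rule (`vector_size` is an illustrative name, not genefer's API):

```python
def vector_size(n, base_n=14, base_size=64):
    """Hypothetical rule: vector halves each time n grows by 1,
    so every task processes a comparable total amount of data."""
    return max(1, base_size >> (n - base_n))

for n in range(14, 20):
    print(f"GFN{n}: vector of {vector_size(n)}")
# GFN14: 64, GFN15: 32, GFN16: 16, GFN17: 8, GFN18: 4, GFN19: 2
```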
