Message boards : Sieving : Combining GFN sieves with different n
This is based on my brainstorming in the Discord chat, but I also want to post it here to open it up for discussion.
The idea is this: since we sieve GFN of a specific n for B values ranging from 2 to 2,000,000,000, could we use that sieve for the higher-n GFN project as well? Specifically, this idea would only apply to new projects.
Using GFN22 and GFN23 as an example, we can easily show that
(B^2)^(2^22) = B^(2^23),
which means that every perfect-square base from GFN22 corresponds to a base in GFN23, with the relation
B_23 = sqrt(B_22).
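As a quick sanity check, here is a small Python snippet (purely illustrative, not from any PrimeGrid tool) confirming the identity for small values:

```python
# (B^2)^(2^n) = B^(2*2^n) = B^(2^(n+1)); check it for small B and n.
for B in (2, 3, 10):
    for n in range(1, 6):
        assert (B**2) ** (2**n) == B ** (2**(n + 1))
print("GF(n, B^2) = GF(n+1, B) holds for all values checked")
```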
Now, since we will already be testing B values from 2(?) to 400,000,000, we won't want to do those again, so we can instantly remove the corresponding bases from the higher n, namely B_23 values from 2 to 20,000 (= sqrt(400,000,000)). (Or vice versa: we could exclude the not-yet-tested ones from the lower n.)
We are left with B_23 values from 20,000 to 44,720 (~sqrt(2,000,000,000)), for which we could use the sieve file from GFN22. If we assume that the sieved-out percentage for GFN22 is the same up to B = 2,000,000,000 as it is up to 100,000,000, we can already exclude 58.56% (source: http://www.primegrid.com/sieving/gfn/#R4194304) of candidates from GFN23 (B_23 = 20,000 to 44,720) using the GFN22 sieve file.
Probably, when the time comes, it will be more efficient to sieve GFN23 even deeper than GFN22, but until that work is done, we could already work on 20,000 to 44,720 (= 5,121 candidates with the above sieve percentage, counting only even bases, since an odd B gives an even number) with a "ready" sieve file. Assuming the same amount of work is done on it as on GFN22 (~33 tasks a day), we could maybe do 10 tasks (= 5 workunits) a day, which would mean this range has work for nearly 3 years.
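To make the reuse concrete, here is a rough Python sketch of the mapping and the candidate count; the survivor-list input, the function name, and the carried-over 58.56% figure are my own assumptions:

```python
def gfn23_candidates(gfn22_survivors, lo=20_000, hi=44_720):
    """B_23 in [lo, hi] is a GFN23 candidate iff B_23^2 survived the
    GFN22 sieve, because B_23^(2^23) + 1 = (B_23^2)^(2^22) + 1.
    Only even bases are considered; an odd B gives an even GFN."""
    survivors = set(gfn22_survivors)
    return [b for b in range(lo, hi + 1, 2) if b * b in survivors]

# Rough count check against the figures above: even bases in the range,
# thinned by the assumed 58.56% GFN22 removal rate.
even_bases = (44_720 - 20_000) // 2 + 1
print(even_bases, round(even_bases * (1 - 0.5856)))  # 12361 5122, close to the 5121 above
```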
A few other ideas come out of this:
- If the upper B limit for GFN sieving could be raised at some point, we could reach even higher.
- Vice versa: does it save any computation time to exclude B = 2 to 44,720 from GFN23 sieving, since those bases are already covered by the lower-n sieving (although probably not to the same depth)?
What do you think about this idea? Do you notice any mistakes? Is it something that could be used? Or is it actually already used and I just couldn't find the documentation about it? Will GFN23 ever even be started?
One last question: Is there a computing-efficiency difference between lower B with higher n and higher B with lower n? I.e., would testing 10,000^(2^23)+1 be faster than testing 100,000,000^(2^22)+1?
EDIT: formatted the equation into a more understandable form.
> One last question: Is there a computing-efficiency difference between lower B with higher n and higher B with lower n? I.e., would testing 10,000^(2^23)+1 be faster than testing 100,000,000^(2^22)+1?
From my limited testing with Genefer and LLR, it seems that lower B with higher n is faster than higher B with lower n. For LLR it seems to mainly come down to FFT size: a lower B yields smaller FFT sizes. This is probably also true for Genefer; however, another compounding factor there is the transform used. For lower B and higher n, it is likely that a faster transform can be used than for higher B and lower n. LLR also seems to have different transforms (all-complex vs. generic reduction), but I don't know how to interpret and compare these, other than to say that all-complex (lower B, higher n) seems a lot faster than generic reduction (higher B, lower n).
Edit: forgot to mention that, interestingly, Genefer outputs different residues for the two forms, while LLR gives the same residue for both. It is, after all, the same candidate, so one would reasonably expect the same residue regardless of the written form.
____________
It seems that for GFN, lower n is actually better:
152423716^32768+1 is composite. (RES=0000000000000000) (268143 digits) (err = 0.0000) (time = 0:00:52) 15:48:05
12346^65536+1 is composite. (RES=0000000000000000) (268143 digits) (err = 0.0000) (time = 0:01:03) 15:49:29
and
Estimated time for 10000^262144+1 is 0:13:40
Estimated time for 100^524288+1 is 0:26:40
Also, I received a response that the sieve files are taken into account at higher n. So my post is essentially "old news".
Nevertheless, I'll leave it here for everyone to ponder. Maybe it helps someone else come up with something even better :)
> (B^2)^(2^22) = B^(2^23)
You are right, this formula, which can also be written GF(n, B^2) = GF(n+1, B), is certainly something to be aware of.
First of all, the most natural way to write such a number (when communicating it) is with the highest possible exponent and a base B which is not a perfect square. If you see a GFN whose base is a square, rewrite it with the formula GF(n, B^2) = GF(n+1, B) until you reach a base which is not a perfect square.
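As a sketch of that rewriting rule (the function name is mine, not from any existing tool):

```python
import math

def natural_form(n, B):
    """Apply GF(n, B^2) = GF(n+1, B) repeatedly: while the base is a
    perfect square, take its square root and bump the exponent."""
    while B > 1:
        r = math.isqrt(B)
        if r * r != B:
            break
        n, B = n + 1, r
    return n, B

print(natural_form(22, 65536))  # (26, 2): 65536^(2^22)+1 = 2^(2^26)+1 = F26
```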
You are right, this is old news. But maybe it is not always done optimally anyway? (A while ago I noticed problems with it, but I guess it is better now.)
So, for example, when doing GFN19, we should skip perfect-square bases, because those will have been tested long ago under the GFN20 project. This is true even if it would often be faster to check the "unnatural" form GF(19, B^2) than the more natural form GF(20, B)...
But when doing GFN22, which is currently the highest exponent we have, I think we should include perfect-square bases. If we do, then when we open GFN23 one day, we can start from the square root of the base we have reached in GFN22 at that time.
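A minimal sketch of that selection policy, with the cutoff exponent as an assumed parameter:

```python
import math

def should_test(n, B, n_max=22):
    """Sketch of the rule above: skip perfect-square bases below the
    highest open exponent (they were, or will be, tested at a higher n),
    but keep them at n_max so a future GFN(n_max + 1) project can start
    from the square root of the frontier."""
    if n >= n_max:
        return True
    r = math.isqrt(B)
    return r * r != B  # skip perfect squares at lower n

print(should_test(19, 10000))  # False: 10000^(2^19)+1 = 100^(2^20)+1, covered by GFN20
print(should_test(22, 10000))  # True: keep square bases at the current frontier
```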
/JeppeSN
I have been thinking about this while doing some testing with Genefer, and I found that there does not seem to be a clear rule as to which form is faster to test. For some candidates, Genefer will even be able to use faster transforms depending on which form of the number you use. As a result, the different transforms and the number logic that goes on behind the scenes can make a lot of difference in testing times.
For example, Genefer was only able to test the number 256^16777216+1 using the OCL and OCL3 transforms, while for the number 65536^8388608+1 Genefer could use OCL, OCL2, OCL3, OCL4, and OCL5. The thing is, that is the same number, only written in two different ways. The latter form showed about a third of the estimated completion time of the former. Similar behaviour can be observed with the CPU program as well.
I don't know whether these transform limitations are due to the way Genefer was programmed (pointing to a possible bug) or due to the nature of the transforms themselves. Either way, I would suggest adding some more benchmarking logic to Genefer that also tests the equivalent forms obtained by taking square roots or squares of the base.
For example, say you want to test the number 16^(2^18)+1. This new logic would make Genefer benchmark this number together with 4^(2^19)+1 and 2^(2^20)+1. What's more, it could also benchmark against 256^(2^17)+1 and 65536^(2^16)+1, since all of these are the same number. Benchmarking all these different forms against the different transforms might show reductions in testing time.
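A sketch of what that form enumeration could look like; the function name and the base cap are my own choices:

```python
import math

def equivalent_forms(n, B, max_base=10**9):
    """Every way to write B^(2^n)+1 as b^(2^m)+1 with b <= max_base:
    first reduce to the non-square base (highest exponent), then square
    the base repeatedly while decreasing the exponent."""
    while B > 1:
        r = math.isqrt(B)
        if r * r != B:
            break
        n, B = n + 1, r
    forms = []
    while n >= 0 and B <= max_base:
        forms.append((n, B))
        n, B = n - 1, B * B
    return forms

print(equivalent_forms(18, 16))
# [(20, 2), (19, 4), (18, 16), (17, 256), (16, 65536)] -- the five forms above
```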
____________
Michael Goetz (Volunteer moderator, Project administrator):
Further comments about Genefer performance will be hidden -- they're off topic here in "Sieving". Please take this discussion to the appropriate forum.
____________
My lucky number is 75898^524288+1