Message boards :
General discussion :
Leyland primes
Bur (Volunteer tester)
Joined: 25 Feb 20 Posts: 465 ID: 1241833 Credit: 285,203,717 RAC: 705,094

I generally don't like the idea of new subprojects, but if ever someone feels the absolute need to implement one, Leyland primes have a certain appeal.
They are of the form x^y + y^x, and the biggest currently known one, 8656^2929 + 2929^8656, has a mere 30008 digits.
According to PRP Records there are Leyland probable primes with up to 10^6 digits, but apparently it's hard to prove that they are prime.
It might not be easy to make it work with DC; the test for each number would have to be broken down into tasks. Maybe it doesn't make sense if they cannot be run in parallel. But it irks me that 99% of new large primes are of a special form just because primality is relatively easy to prove for those forms.  
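For concreteness, here is a small Python sketch (my own, not from the thread) of the form, including a direct check of the digit count quoted above:

```python
import sys

# Python 3.11+ caps int-to-str conversion at 4300 digits by default;
# raise the limit so we can measure a ~30000-digit number.
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(100000)

def leyland(x: int, y: int) -> int:
    """Leyland number x^y + y^x (with x >= y > 1)."""
    return x**y + y**x

# The first two Leyland primes:
print(leyland(3, 2))  # 17  (= 3^2 + 2^3)
print(leyland(9, 2))  # 593 (= 9^2 + 2^9)

# Digit count of the largest proven Leyland prime mentioned above:
print(len(str(leyland(8656, 2929))))  # 30008
```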


Yes, also see the activity at https://www.mersenneforum.org/forumdisplay.php?f=110. /JeppeSN  

Bur (Volunteer tester)
Joined: 25 Feb 20 Posts: 465 ID: 1241833 Credit: 285,203,717 RAC: 705,094

Ok, thanks. I was wondering where the currently known ones originated from...  

Dave
Joined: 13 Feb 12 Posts: 2926 ID: 130544 Credit: 1,550,748,473 RAC: 3,718,191

Thought they came from Leyland in Lancashire.  


Thought they came from Leyland in Lancashire.
The surname of Paul Leyland (after whom the numbers are named) likely comes from the toponym Leyland, Lancashire, England. /JeppeSN  

Gelly (Volunteer tester)
Joined: 13 Nov 16 Posts: 48 ID: 468732 Credit: 1,995,594,166 RAC: 313,236

I generally don't like the idea of new subprojects, but if ever someone feels the absolute need to implement one, Leyland primes have a certain appeal.
They are of the form x^y + y^x, and the biggest currently known one, 8656^2929 + 2929^8656, has a mere 30008 digits.
According to PRP Records there are Leyland probable primes with up to 10^6 digits, but apparently it's hard to prove that they are prime.
It might not be easy to make it work with DC; the test for each number would have to be broken down into tasks. Maybe it doesn't make sense if they cannot be run in parallel. But it irks me that 99% of new large primes are of a special form just because primality is relatively easy to prove for those forms.
It's definitely out of character for PrimeGrid to pick up a subproject that cannot prove the primality of the numbers it tests, as all of the current and past subprojects are either sieves or searches for provable primes.
The reason Leyland primes aren't helpful in terms of primality proving is that you don't tend to know much about the factorization of P-1 or P+1, which is what many of the primality proofs for very large primes rely on. You can use general primality proving techniques, such as ECPP, but those top out at about 40k digits at the moment (although there are stirrings of 50k digits within the next 5 years).  
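For background (my addition, not Gelly's words), a toy Python sketch of why the factorization of P-1 matters: Lucas' converse of Fermat, the fully factored case of the N-1 method, proves primality outright once every prime factor of N-1 is known.

```python
# Lucas' converse of Fermat: n is proven prime if some base a satisfies
#   a^(n-1) == 1 (mod n), and
#   a^((n-1)/q) != 1 (mod n) for every distinct prime factor q of n-1.
# Forms like k*2^n - 1 or b^n + 1 make N-1 (or N+1) trivially factorable,
# which is why such forms dominate the record lists; x^y + y^x has no
# such structure, so general methods like ECPP are needed.

def lucas_n_minus_1(n: int, prime_factors: list[int], a: int) -> bool:
    """Certify n prime, given ALL distinct prime factors of n - 1."""
    if pow(a, n - 1, n) != 1:
        return False
    return all(pow(a, (n - 1) // q, n) != 1 for q in prime_factors)

# 593 = 2^9 + 9^2 is a Leyland prime; 592 = 2^4 * 37 is easy to factor
# here, but only because 593 is tiny.
print(lucas_n_minus_1(593, [2, 37], a=3))  # True
```

Note that the test is sensitive to the base: a = 2 fails for 593 (2 is a quadratic residue mod 593), which is why implementations try several bases.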

Bur (Volunteer tester)
Joined: 25 Feb 20 Posts: 465 ID: 1241833 Credit: 285,203,717 RAC: 705,094

Thought they came from Leyland in Lancashire.
Thanks, apparently that's exactly where they originate from.
You can use general primality proving techniques, such as ECPP, but those top out at about 40k digits at the moment (although there are stirrings of 50k digits within the next 5 years).
I didn't know primality proving got that hard once no special techniques are available. I also read that Leyland primes are of interest precisely because they are so hard to test for primality.
Is there any possibility of doing ECPP via distributed computing? I only found "Also ECPP has never been truly exposed to distributed computing" in this mersenneforum thread.  

Gelly (Volunteer tester)
Joined: 13 Nov 16 Posts: 48 ID: 468732 Credit: 1,995,594,166 RAC: 313,236

Is there any possibility of doing ECPP via distributed computing? I only found "Also ECPP has never been truly exposed to distributed computing" in this mersenneforum thread.
I'm not an expert on ECPP, but from the work I've done and what I've seen, it's definitely a possibility. In 2005, they were using multiple computers in tandem for fastECPP, although definitely a far cry from distributed computing.
The main stage of ECPP essentially connects the large prime to a small prime through a chain of intermediate primes, using elliptic curves with special properties. It's basically a random walk, hoping to hit an equation that provides a factorization you can work with, and Primo will often backtrack if it doesn't like the current prime it's working on. This stage would likely be difficult to distribute well, but with enough distribution, maybe "ok" is good enough.
The second stage is a lot easier to split up and lends itself much better to distribution, as it's just taking all the equations to connect the primes in stage one and factorizing them, AFAIK.
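If the second stage really is a bag of independent factoring jobs, as described above, it parallelizes trivially. A toy Python sketch (my assumption of the workflow, not Primo's actual design), with a thread pool standing in for BOINC-style work units:

```python
from concurrent.futures import ThreadPoolExecutor

def smallest_factor(n: int) -> int:
    """Trial-division stand-in for the real per-curve factoring step."""
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i
        i += 1
    return n

# Hypothetical per-curve integers from stage one, factored independently.
jobs = [91, 221, 437, 899, 1147]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(smallest_factor, jobs))
print(results)  # [7, 13, 19, 29, 31]
```

Because no job depends on another's result, the work units could just as well be scattered across volunteer hosts instead of local threads.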
Again, I'm no expert on ECPP, either mathematically or programmatically, and considering the maelstrom of ECC patents and Primo being closed source, it's hard to blame me. Perhaps we could change that for the better: if enough people well versed in elliptic curves looked into distributed computing, it might be possible to blow past the current limits of ECPP, which come from it running on a single system.  

rogue (Volunteer developer)
Joined: 8 Sep 07 Posts: 1223 ID: 12001 Credit: 18,565,548 RAC: 0

Numbers of this form are notoriously hard to sieve efficiently. I know because I wrote xyyxsieve, and nobody has pointed out a better algorithm than the one I'm using. The most efficient way I know to sieve requires a large amount of memory. For example, I was looking at the range 30001 <= x <= 40000, 2 <= y < 40000. It took nearly 24 GB of memory to start sieving the range. The memory requirement drops below 5 GB at p=1e5, but there are still nearly 10,000,000 terms left in the range. It will take many weeks for a single core to sieve this range to p=1e10, because it has to compute x^y (mod p) and y^x (mod p) for all terms.
Some efficiency is gained by computing x^y (mod p) for all y (for that x) and likewise y^x (mod p) for all x (for that y), but that requires a lot of memory. There might be ways to statistically choose better algorithms to compute all x^y (mod p) for a single x, but I don't expect much speedup. The program is more efficient for larger ranges of x and y, because it is faster to compute x^y (mod p) for a single x and 2 <= y < 40000 than to compute it once for 2 <= y < 20000 and again for 20000 <= y < 40000.
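A toy Python sketch (mine, far simpler than xyyxsieve) of the incremental trick described above: for a fixed x, each x^y (mod p) is one multiply away from x^(y-1) (mod p).

```python
# Eliminate x^y + y^x candidates divisible by a small prime p.
# The real sieve additionally caches tables of y^x (mod p), which is
# where the tens of GB of memory go.

def sieve_prime(p: int, xs: range, ys: range) -> set[tuple[int, int]]:
    """Return the (x, y) pairs whose x^y + y^x is divisible by p."""
    eliminated = set()
    for x in xs:
        xy = pow(x, ys.start, p)      # x^y for the smallest y
        for y in ys:
            if (xy + pow(y, x, p)) % p == 0:
                eliminated.add((x, y))
            xy = xy * x % p           # advance x^y -> x^(y+1)
    return eliminated

hits = sieve_prime(5, range(2, 8), range(2, 8))
print((2, 6) in hits)  # True: 2^6 + 6^2 = 100 is divisible by 5
```

A real sieve would also replace the inner pow(y, x, p) with the same incremental scheme over a cached table, trading memory for the speedup rogue describes.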
Because of the memory requirements, using the GPU is pretty much out of the question. Granted, I only have a slower GPU at my disposal, but this sieve was actually slower with the GPU than without it. AVX-512 could help quite a bit, but I don't have an AVX-512 CPU at my disposal.
As for the sieving upper bound, it is very likely less than 1e11 for that range, and 100 cores against it could probably complete the testing of that range within a few weeks.
It would probably be better to sieve a large range (let's say x <= 100000 and y < 100000) until the number of remaining terms is small enough to merge the results.  
