PrimeGrid
1) Message boards : Project Staging Area : PRPNet Help (Message 132289)
Posted 2 days ago by JimB (Project donor)
https://sourceforge.net/projects/openpfgw/files/latest/download
2) Message boards : Sieving : Resieving GFN22 to max b=2G (Message 132071)
Posted 9 days ago by JimB (Project donor)
Early sieving (around the first 160000P) on GFN22 only went to a maximum b level of 100M. We're now resieving to b=2G, which we'll need at some point for Do You Feel Lucky. I'm sieving everything under p=5000P since those files are very long (gigabytes). Anyone doing this new low-p sieving should be prepared to compress their factor files using the zip, 7zip or rar formats. While I can handle almost anything, those three are the formats that are automatically unpacked by my scripts. The maximum file upload size is 40MB, so anything larger than that will not upload. I've put a big obnoxious warning on every manual sieving page; it includes links to email me factor files or PM me with a link to an online factor file. While I'd prefer that you upload within the sieving system, don't be shy about using those other methods if your file is just too large to fit.
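For anyone scripting that pre-upload step, here's a minimal Python sketch (only the 40MB limit comes from above; the filename and everything else are purely illustrative):

    import os
    import zipfile

    MAX_UPLOAD = 40 * 1024 * 1024   # the 40MB upload limit mentioned above

    def pack_for_upload(path):
        archive = path + ".zip"
        with zipfile.ZipFile(archive, "w", compression=zipfile.ZIP_DEFLATED) as z:
            z.write(path, arcname=os.path.basename(path))
        size = os.path.getsize(archive)
        if size > MAX_UPLOAD:
            print(f"{archive} is {size / 1e6:.1f} MB -- too large to upload; "
                  "email it or PM a download link instead")
        return archive

    # example: pack_for_upload("gfn22_factors.txt")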

And please, hang onto your factor files until you've received credit in the system. This means I was able to verify that the entire file made it to PrimeGrid and was able to unpack it. If there are any problems, the fastest way to get my attention is on our Discord channel (highlight me with @JimB), but of course I also check the forums several times a day.
3) Message boards : Sieving : Curiousity (Message 131703)
Posted 21 days ago by JimB (Project donor)
I have to amend my original answer. It's been so long since I set this up (six years or so) that I forgot some of the details. For any GFN n value, I have two completely separate sieve files. While I think I've gone through the process before in the big manual sieving system thread, it probably bears repeating here so I can point people to a short thread.

When I'm about to validate manual sieving, I do the following:

1) Update the actual reservations themselves. In the very beginning I used to do this manually but it was error-prone and as more sieving happened it became a huge pain. So now there's a program that synchronizes the reservations between the PrimeGrid server and my local server (which generates the GFN stats and graphs). For those with a technical bent, I keep a tunnel open through each SSH client to the mysql port on each server. So my local workstation can run queries and update my local server quickly and efficiently.
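A rough sketch of that kind of tunnel-plus-query setup in Python (host names, ports, credentials and table names here are all made up, not the real configuration):

    import subprocess
    import time
    import pymysql  # third-party: pip install pymysql

    # Forward a local port to the MySQL port on the remote box over SSH.
    tunnel = subprocess.Popen(
        ["ssh", "-N", "-L", "3307:127.0.0.1:3306", "user@example-sieve-server"])
    time.sleep(2)  # crude wait for the tunnel to come up

    try:
        conn = pymysql.connect(host="127.0.0.1", port=3307, user="sieve",
                               password="********", database="sieving")
        with conn.cursor() as cur:
            cur.execute("SELECT COUNT(*) FROM reservations WHERE status = 'pending'")
            print(cur.fetchone()[0])
        conn.close()
    finally:
        tunnel.terminate()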

2) Download the actual factor files. Again, this used to be done manually, but I wrote a small program that connects to the server and retrieves all the pending factor files. It just grabs every file in each reservation upload directory without regard to what kind of extension it has.

3) I run a program that I call "normalize". It does the following (a rough sketch of these steps appears after the list):
a) If the file is in .zip, .7z or .rar format, unpack it.
b) Test every factor to make sure it's valid. n value is tested to make sure it matches the filename.
c) Sorts the entries in factor,n order. The sieving program output can be out of order.
d) Remove duplicate entries from the factor file. If you restart after a crash, there can be duplicates. That's mostly to keep the stats accurate as duplicate factors don't matter otherwise.
e) The output is always in "factor | candidate" format like 23803926529 | 3480^65536+1 as opposed to early versions of David Underbakke's GFN sieving program which produced output like 1*3480^65536 + 1 factor : 123803926529. The program can read both styles. That's how it originally got the name "normalize".
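For the curious, here's a rough, simplified Python sketch of what such a normalize pass might look like (the two input line styles are the ones shown above; the filename, the exact sort key and everything else are illustrative assumptions, not the real program):

    import re
    import sys

    # The two line styles mentioned above:
    #   new:  23803926529 | 3480^65536+1
    #   old:  1*3480^65536 + 1 factor : 123803926529
    NEW = re.compile(r"^\s*(\d+)\s*\|\s*(\d+)\^(\d+)\+1\s*$")
    OLD = re.compile(r"^\s*1\*(\d+)\^(\d+)\s*\+\s*1\s*factor\s*:\s*(\d+)\s*$")

    def parse(line):
        m = NEW.match(line)
        if m:
            p, b, n = (int(x) for x in m.groups())
            return p, b, n
        m = OLD.match(line)
        if m:
            b, n, p = (int(x) for x in m.groups())
            return p, b, n
        return None  # header or junk line

    def normalize(path, expected_n):
        seen = set()
        entries = []
        with open(path) as f:
            for line in f:
                parsed = parse(line)
                if parsed is None:
                    continue
                p, b, n = parsed
                if n != expected_n:
                    sys.exit(f"{path}: n={n} does not match expected n={expected_n}")
                # b) b^n + 1 must really be divisible by p, i.e. b^n == -1 (mod p)
                if (pow(b, n, p) + 1) % p != 0:
                    sys.exit(f"{path}: bogus factor {p} | {b}^{n}+1")
                # d) skip duplicates (e.g. from a restart after a crash)
                if (p, b) in seen:
                    continue
                seen.add((p, b))
                entries.append((p, b))
        entries.sort()  # c) factor order
        # e) always emit the "factor | candidate" style
        return [f"{p} | {b}^{expected_n}+1" for p, b in entries]

    # example: for out_line in normalize("gfn16_factors.txt", 65536): print(out_line)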

4) I run a program that resieves from the last factor appearing in each factor file to the end of the range (as given by the filename). It's quite common for GFN22 factor files to not have factors for up to 0.12P without it being an error. For any file apparently missing more than 0.1P of sieving the program throws up a message. Unless I interrupt it, it'll finish sieving on each range. This is where I find most of the problems with uploaded factor files - they end far earlier than they should. If I'm doing processing around the 0400 UTC deadline for the system to give credit (more on that below), then I interrupt processing and remove that factor file from consideration, writing a PM to the user involved. If doing it at a different time of day and the range is not huge, I may let my workstation finish the sieving.
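A toy version of that end-of-range check might look like this (the filename convention is hypothetical, and P here means 10^15):

    import re

    P = 10 ** 15
    MAX_TAIL_GAP = 0.1 * P   # flag anything apparently missing more than 0.1P

    def check_tail(path):
        # hypothetical filename convention: gfn22_<pmin>P_<pmax>P.txt
        m = re.search(r"_(\d+)P_(\d+)P", path)
        if not m:
            raise ValueError(f"cannot parse range from {path}")
        pmax = int(m.group(2)) * P
        last_factor = 0
        with open(path) as f:
            for line in f:
                left = line.split("|")[0].strip()
                if left.isdigit():
                    last_factor = int(left)
        if pmax - last_factor > MAX_TAIL_GAP:
            print(f"{path}: last factor ends {(pmax - last_factor) / P:.2f}P "
                  f"short of {pmax} -- resieve the tail")

    # example: check_tail("gfn22_4000P_4100P.txt")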

5) Once #4 is finished, the factor files are automatically copied to their appropriate directories on both my workstation and my home server. Each n has its own directory (names are 32768, 65536, 131072 etc. so it's harder to accidentally be in the wrong directory than names like GFN15, GFN16 etc.).

6) Local workstation is done first. For each n in which there are new factors, I run a program that opens the old sieve, reads every factor file and applies it to the sieve. As part of that process, the following tests are done (two of them are sketched after the list):
a) The header line is tested to make sure it's appropriate for the file. n and bmax values must be valid. Any file missing a header line is flagged.
b) First and last factors are checked to make sure they correspond with the filename.
c) Every factor is again tested to make sure it really divides the candidate in question.
d) Gaps between successive candidates are looked at and flagged if they're too far apart.
e) Early sieving on Underbakke's program could have continuations. Special care was taken to ensure there was no gap around a continuation. As there was no checkpoint file, the user involved had to re-enter all the parameters of the search and often made mistakes. The factor value could have gaps, the n value could completely change, a different range could be appended, etc.
f) Every newly-generated factor file is expected to have b values above 100M if the user is running the right program. Any factor file that doesn't is flagged here.
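A rough illustration of two of those checks, the header test (a) and the gap test (d). The header layout, the field names and the gap threshold are invented for the example, and "successive candidates" is interpreted here as successive surviving b values:

    def check_header(header_line, expected_n):
        # a) header must be present and carry plausible n and bmax values
        #    (hypothetical layout with "n=" and "bmax=" fields)
        if f"n={expected_n}" not in header_line or "bmax=" not in header_line:
            print("flag: missing or inappropriate header line")

    def check_candidate_gaps(surviving_b_values, max_gap):
        # d) flag suspiciously large gaps between successive surviving candidates
        bs = sorted(surviving_b_values)
        for prev, cur in zip(bs, bs[1:]):
            if cur - prev > max_gap:
                print(f"flag: no surviving candidates between b={prev} and b={cur}")

    # examples:
    # check_header("pmin=5000P pmax=5100P bmax=2000000000 n=65536", 65536)
    # check_candidate_gaps([2, 4, 30, 34, 1000052, 1000054], max_gap=1000)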

Those tests are all run on my local workstation where the only output is the new sieve and nothing else can get screwed up by bad files. Any file that doesn't pass testing here is removed from my local server.

7) A similar program is run on my local server. This program doesn't do all the checking that happened on my workstation, but talks to my local database. It makes certain that factor files exist to completely cover each reservation. Factor counts, sieve removals and the values of the removals themselves are all recorded in database files. At the end of each run (one per n) two lists are printed. One is the list of newly-removed candidates that are currently loaded on PrimeGrid and should be removed. The other is a list of missing ranges that haven't yet been submitted (the gaps in the sieving). It's painful and time-consuming to remove bad data from the database, which is why this program is only run after the one on my workstation completes without errors.
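The "missing ranges" part of that boils down to an interval-coverage computation; a minimal sketch (the numbers are made up):

    def missing_ranges(reservation, file_ranges):
        """Return the sub-ranges of `reservation` not covered by any factor file."""
        lo, hi = reservation
        gaps = []
        pos = lo
        for start, end in sorted(file_ranges):
            if start > pos:
                gaps.append((pos, min(start, hi)))
            pos = max(pos, end)
            if pos >= hi:
                break
        if pos < hi:
            gaps.append((pos, hi))
        return gaps

    # e.g. a 4000P-4400P reservation where one 100P file hasn't been submitted:
    print(missing_ranges((4000, 4400), [(4000, 4100), (4200, 4400)]))
    # -> [(4100, 4200)]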

8) I have a web page where I copy and paste the factors to remove sieved-out work already loaded on the server. While this could be automated, it's helpful for me to see what shows up. This web page either removes candidates entirely if not yet turned into a workunit or cancels the workunit, turns it to quorum 1 (any finished job validates immediately) and sets the residue field to "FACTOR FOUND". A factor is better than a genefer test result and that value will not be overwritten by the validator.

9) After doing all current n values on the local server, I run another command there that generates the stats and regenerates any graph where the data has changed. That program automatically copies those updated files to PrimeGrid's server when it finishes.

10) Somewhere in all of this, usually as each n is done testing on my workstation, I manually validate each pending manual sieving reservation that I downloaded factors for. It's not unusual for more uploads to happen during this processing and those are either left until the next time I "do" manual sieving or downloaded immediately and processed before step 9 above. Each factor file moves from its upload directory to the factor file directory for that n.

11) At 0400 UTC each day, credit is moved from the PSA badge pending (PRPNet and manual sieving) into actual PrimeGrid credit. The amount transferred is up to 80% of your current Recent Average Credit (RAC). Of course this has the effect of boosting your RAC so if you have too much credit to transfer all at once, the amount transferred the next day is much larger.

Back to the sieve files: The sieves on my local workstation are for the full b range for that n. For example, on GFN19 (524288) early sieving only went to b=100M so that's what my sieve goes to. On GFN15, GFN16, GFN17 and soon GFN18 sieving went to b=2G from the beginning and sieves go that high too. Those sieves on my local workstation also have candidates removed due to algebraic factors (some candidates can't possibly be prime as they have known divisors that won't be found by our sieving). Those workstation sieves are the ones used to produce new work to be loaded on PrimeGrid.

Sieve files on my local server are only for the stats and graphs. They all end at either b=100M or b=400M. But they're also useful as another copy of the factor files involved. There are at least four copies of every factor file kept by us. One is on the PrimeGrid web server box, one is on the PrimeGrid database server box which autosyncs with the web server, one is on my workstation and one is on my server. Additionally, every three months I make a backup of my entire sieving directory structure (565 gigabytes at the moment) onto a completely different local box. I have a year's worth of those. And when we finish sieving any project, I burn a copy to DVDR. We're serious about not losing data. Technically I don't need the factors after a new sieve has been generated, but if there are ever questions about whether sieving was done properly, those factors are invaluable.

Finally, bear in mind it takes a lot longer to talk (or read) about this processing than it takes to do it. Most of the tests don't ever find anything wrong, but we can't have improperly-eliminated candidates.
4) Message boards : Sieving : Curiousity (Message 131595)
Posted 24 days ago by JimB (Project donor)
The statistics are skewed in that they only show factors affecting my current sieve file. For GFN17 that stops just below b=400M. Any factors for b=400M-2G are still completely valid and I could produce a sieve from them in about an hour, but currently such candidates are not testable by the genefer program. It's the same with the graphs for the various GFN sieving. You can see that http://www.primegrid.com/sieving/gfn/GFN131072.png goes to b=400M on the y-axis while for http://www.primegrid.com/sieving/gfn/GFN262144.png b only goes to 100M. That's because all the early sieving for GFN18 was done to only b=100M. It's currently being redone (by me) to b=2G and so in late August that graph will change.

If we got to the point of never needing to sieve a GFN subproject any further, I'd take the time to produce a bmax=2G sieve. The reason I don't right now is that it just makes everything take a lot longer with nothing much to show for it.

And since it seems like a lot of people have never seen those graphs, they're available from the Show Stats link on the manual sieving projects listing.

As far as 213888748689316642817 | 120405194^131072+1 goes, 213888748689316642817 is a factor of 120405194^131072+1. So 120405194^131072+1 gets removed from the sieve file and will never be looked at again. Someone else may find another factor at some point, but that's OK. You can compare the counts in the red columns to those in the yellow columns; the difference is what's already been removed by prior sieving. Click the + by the title to open up the full table. Finding factors takes a lot of time, verifying that any factor is correct takes a fraction of a second. All submitted factors are tested as the first step in processing them. We get maybe ten a year that aren't correct, not sure why and really don't care. They're automatically eliminated from consideration.
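That fraction-of-a-second test is just a modular exponentiation. In Python, for the example above:

    b = 120405194
    p = 213888748689316642817
    # p divides b^131072 + 1 exactly when b^131072 == -1 (mod p)
    print((pow(b, 131072, p) + 1) % p == 0)   # prints True, confirming the factor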
5) Message boards : Sieving : New automated manual sieving system (Message 131349)
Posted 34 days ago by JimB (Project donor)
I've just doubled the maximum reservation size on GFN17 sieving from 200P to 400P. For those that do a lot of sieving it means fewer interactions with the sieving system. For those that don't sieve a lot, don't reserve 400P just because you can. Reserve what you can reasonably finish within two weeks.
6) Message boards : Proth Prime Search : Algebraically factorizable Proth prime candidates (Message 131207)
Posted 38 days ago by JimB (Project donor)
So hopefully these candidates were removed for k = 81, 2401, and 6561 as well.

While I cannot pinpoint when they were removed, I can tell you there aren't any candidates with k in (81, 2401, 6561) where n mod 4 = 2. I suspect they were sieved out for some other reason, hence my list of algebraic factors (which was produced from the current sieve at that time; it's too short to be every possible candidate) didn't need to list those.
7) Message boards : Proth Prime Search : Algebraically factorizable Proth prime candidates (Message 131203)
Posted 38 days ago by JimB (Project donor)
-- cubes
k IN ( 27, 125, 343, 729, 1331, 2197, 3375, 4913, 6859, 9261 ) AND n MOD 3 = 0
OR
-- fifth powers
k IN ( 243, 3125 ) AND n MOD 5 = 0
OR
-- seventh powers
k IN ( 2187 ) AND n MOD 7 = 0
)

to be absolutely sure.

/JeppeSN


In the earliest sieve file I have (p=10T), the k values 27, 125, 2197, 4913 and 6859 have no n mod 3 = 0 entries in them. One of the early sieving programs must have taken care of them.

Likewise, k=3125 has no n mod 5 = 0 entries in it.
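For reference, the algebraic factorization behind those cube/fifth/seventh-power conditions is the classic odd-power identity: if k = c^d with d odd and d divides n, then with x = c*2^(n/d) the candidate k*2^n+1 equals x^d+1, which is divisible by x+1. A quick Python check of that identity (the example values are just illustrations):

    def algebraic_factor(c, d, n):
        # k = c^d, d odd, d | n  =>  c*2^(n/d) + 1 divides k*2^n + 1
        assert d % 2 == 1 and n % d == 0
        x = c * 2 ** (n // d)
        candidate = (c ** d) * 2 ** n + 1
        factor = x + 1
        assert candidate % factor == 0
        return factor

    print(algebraic_factor(3, 3, 9))    # 27*2^9+1 = 13825 is divisible by 3*2^3+1 = 25
    print(algebraic_factor(5, 5, 10))   # 3125*2^10+1 is divisible by 5*2^2+1 = 21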

On July 2, 2015 algebraic factors were applied for k=243, 343, 729, 1331, 2187, 3375 and 9261 on all PPS sieves 0-12M.

Algebraic factors were also applied for k=625. I don't remember why this works, but for example I have that:
25*2^5500069-5*2^2750035+1 is a factor of 625*2^11000138+1
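For what it's worth, that factor has exactly the shape produced by the identity 4x^4 + 1 = (2x^2 - 2x + 1)(2x^2 + 2x + 1) with x = 5*2^m: for k = 625 and n = 4m+2 the candidate is 4x^4+1, and 25*2^(2m+1) - 5*2^(m+1) + 1 is the smaller of the two algebraic factors (m = 2750034 reproduces the example above). A quick Python check of that guess:

    # Guess: 625*2^(4m+2) + 1 = 4*(5*2^m)^4 + 1
    #      = (25*2^(2m+1) - 5*2^(m+1) + 1) * (25*2^(2m+1) + 5*2^(m+1) + 1)
    for m in range(1, 50):
        n = 4 * m + 2
        candidate = 625 * 2 ** n + 1
        f1 = 25 * 2 ** (2 * m + 1) - 5 * 2 ** (m + 1) + 1
        f2 = 25 * 2 ** (2 * m + 1) + 5 * 2 ** (m + 1) + 1
        assert candidate == f1 * f2
    print("identity verified for m = 1..49")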

The messages that instigated these algebraic factors being applied were probably on Lennart's PST message board server (now gone). I certainly didn't think of this myself. But that's why there aren't any candidates left that meet those conditions.
8) Message boards : Proth Prime Search : Algebraically factorizable Proth prime candidates (Message 131182)
Posted 38 days ago by JimB (Project donor)
PPS3M (n=2M-2.999999M) was sieved to p=100P.

SELECT COUNT(*) FROM llr WHERE project="PPS" and k=343 and n mod 3 = 0;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
9) Message boards : Problems and Help : What is the most likely/fastest way to K, TP, F, SG, and # badges? (Message 130934)
Posted 47 days ago by JimB (Project donor)
In the full n=0-25M 321 sieve file, there are:

811660 c=1 candidates
and
1098869 c=-1 candidates
10) Message boards : Number crunching : Sub-project "life" expectancy (Message 130859)
Posted 52 days ago by JimB (Project donor)
The only change that's already happened is that we've loaded new work for PPS MEGA in the PPSE k range of 1201-9999 starting at n=3.322M. As of when I'm writing this, none of them have been turned into workunits yet. PPS will not jump to higher n values for at least two years and PPSE won't change for decades. So PPS and PPSE remain unchanged.

As far as the upcoming challenge, it involves PPS only. PPSE and PPS MEGA do not count towards the 50th Anniversary of the Moon Landing Challenge.

