Looking through the PRPNet results, I recently came across a surprisingly large number of residues that are wrong. There are:
1 result on port 12006 (k=27) with residue 0000000000000001
8 results on port 12001 (k=121) with residue 0000000000000001
1407 results on port 12002 (factorial) with residue 0000000000000000
2403 results on port 12008 (primorial) with residue 0000000000000000
0000000000000001 is a known bad residue that shows up on LLR c=1 tests when hardware is faulty (overclocking, bad RAM, etc.). In BOINC, the validator will not accept it without three matching results, and even then it emails us. Yes, we've managed to have two hosts on the same workunit return that matching result about six or seven times. It was always wrong.
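The stricter handling described above can be sketched roughly like this. This is an illustrative sketch only, not the actual BOINC validator code; the function and threshold names are my own:

```python
from collections import Counter

# Illustrative sketch of a quorum rule that distrusts a known-bad residue.
# Names and thresholds are hypothetical, not the real BOINC validator.
KNOWN_BAD = "0000000000000001"

def accept(residues, normal_quorum=2, bad_quorum=3):
    """Accept a workunit only if enough results agree on one residue.

    A residue on the known-bad list needs a larger quorum, and even an
    accepted match is flagged for manual review (the email step)."""
    residue, count = Counter(residues).most_common(1)[0]
    needed = bad_quorum if residue == KNOWN_BAD else normal_quorum
    accepted = count >= needed
    flag_for_review = accepted and residue == KNOWN_BAD
    return accepted, flag_for_review

print(accept(["4D2A6E1B9C0F3E77", "4D2A6E1B9C0F3E77"]))  # (True, False)
print(accept([KNOWN_BAD, KNOWN_BAD]))                    # (False, False)
print(accept([KNOWN_BAD, KNOWN_BAD, KNOWN_BAD]))         # (True, True)
```

Even three matching copies of the bad residue get a human look, which is how the repeated false matches above were caught.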
I have no idea what leads to a residue of 0000000000000000 on PFGW, but it has to be wrong. All but 95 of the factorial and primorial results are from a single bad host back in 2010. Rogue had asked about gaps back in 2015, but I didn't go searching for bad residues at that time.
I've written some code to force retesting on those candidates. They're all quite a bit smaller than the leading edge and will test more quickly. Because the Top 5000 Primes site has separate short lists for Primorial and Factorial primes, any prime found is still reportable even if it would no longer make the overall top 5000.
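The selection step amounts to pulling every result with a known-bad residue and queueing the smallest candidates first, since they finish fastest. A minimal sketch under assumed field names (this is not the actual PRPNet code or schema):

```python
# Hypothetical sketch of selecting candidates for forced retesting.
# The tuple layout (candidate, size, residue) is an assumption.
BAD_RESIDUES = {"0000000000000001", "0000000000000000"}

def retest_queue(results):
    """results: iterable of (candidate, size, residue) tuples.
    Returns suspect candidates, smallest first."""
    suspect = [r for r in results if r[2] in BAD_RESIDUES]
    return [cand for cand, _, _ in sorted(suspect, key=lambda r: r[1])]

# Made-up example rows for illustration only.
results = [
    ("103040!-1", 471000, "0000000000000000"),
    ("1009#+1", 420, "4D2A6E1B9C0F3E77"),
    ("27*2^500+1", 152, "0000000000000001"),
]
print(retest_queue(results))  # ['27*2^500+1', '103040!-1']
```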
There may be other gaps in the work done, which I'll look for soon. It turns out the quality of early sieving on Primorial was less than perfect: rerunning the first 1G of sieving removed 30 candidates from the latest sieve file. I'm now rerunning p=1G-10G and will decide afterwards whether we need to resieve everything. Also, if primorial is ever going to move to PrimeGrid, we'd probably need to sieve to a higher n level than we've done in the past, which would force us to restart sieving from scratch. Primorial sieving remains disabled until we figure that out.
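The sieve cross-check itself is just a set difference: any candidate still present in the current sieve file but absent from the rerun is a composite the original sieving missed. A minimal sketch, assuming one candidate per line and ignoring real sieve-file headers:

```python
# Minimal sketch: candidates in the old sieve file that a rerun sieve
# eliminated, i.e. composites the original sieving should have removed.
def missed_by_old_sieve(old_candidates, rerun_candidates):
    return sorted(set(old_candidates) - set(rerun_candidates))

# Made-up candidates for illustration only.
old = ["101#+1", "103#+1", "107#+1", "109#+1"]
rerun = ["101#+1", "107#+1"]  # rerun removed two more candidates
print(missed_by_old_sieve(old, rerun))  # ['103#+1', '109#+1']
```

Anything this reports for p=1G-10G would tell us whether a full resieve is warranted.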