What he said. Let me explain more fully.
Originally, sieving on all the GFNs went to bmax=100M. Back then, that was as far as the Genefer program could test. But Yves kept improving Genefer and adding new transforms. In its current form Genefer can test up to b=400M, and with the addition of (slower) code it could easily test beyond that.
The sieving program we use used to be hardcoded to stop at b=100M, but it turns out it was always calculating up to b=2G and just not outputting those factors. The programs (there are two versions) were changed to output all factors found. This change came in the middle of sieving most n's, so we had a mixture of bmax=100M and bmax=2G. We've since resieved all of the GFN15, GFN16 and GFN17 work that used to have a bmax of 100M up to bmax=2G. For GFN18, all the sieving above p=20000P was from the newer sieve version and everything under that was from the older one, so I took it upon myself to resieve GFN18 from 0-20000P over the summer.

On Friday 20 September I recreated the stats sieve and updated everything. The stats cover only sieving within the b range of the sieve file I use there. Right now, GFN15-18 and GFN22 have stats for b=0-400M, GFN19-21 have b=0-100M, and, through me being half-asleep, GFN23 has b=0-2G. I'm going to redo GFN23 to b=0-400M today, because processing the entire b range several times a day takes too long.
However, the stats sieve is not used to generate new candidates. For new candidates I have sieves on the machine I'm typing this on, and they all go to either b=0-100M (GFN19-21) or b=0-2G (everything else). Why do I have two copies? Well, there are some candidates not removed by sieving which we nonetheless know are not prime, like any b that's a power of 2 (2, 4, 16, 32, etc.), and others that won't be tested because we already tested the equivalent number at a different n. Trivial example: 2^4194304+1 = 16^1048576+1 = 256^524288+1. I don't remove those from the sieving stats, but I do remove them from the sieve used to generate work.
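That duplicate-candidate reduction boils down to one fact: if b is a perfect square, b^(2^n)+1 is the same number as sqrt(b)^(2^(n+1))+1, so you keep taking square roots until the base is no longer a square. Here's a minimal sketch of that reduction (the function name is mine, not from any actual PrimeGrid tooling):

```python
from math import isqrt

def canonical_gfn(b: int, n: int) -> tuple[int, int]:
    """Reduce a GFN candidate b^(2^n)+1 to its canonical form.

    While b is a perfect square, replace b with its square root and
    bump n, because (c^2)^(2^n) = c^(2^(n+1)). Two candidates are the
    same number exactly when they reduce to the same (b, n) pair.
    """
    while True:
        r = isqrt(b)
        if r < 2 or r * r != b:
            return b, n
        b, n = r, n + 1

# The trivial example from above: all three are the same number,
# so only the n=22 form needs to be in a work-generation sieve.
print(canonical_gfn(2, 22))    # (2, 22)
print(canonical_gfn(16, 20))   # 16 -> 4 -> 2, so also (2, 22)
print(canonical_gfn(256, 19))  # 256 -> 16 -> 4 -> 2, also (2, 22)
```

A work generator could then drop any b whose canonical form has a smaller base than b itself, since that candidate belongs to (and is tested at) a different n.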
So, what you witnessed was me starting from a b=0-400M sieve rather than a b=0-100M sieve, and my programs counting a much wider range of factors to put into the stats. The reason I used smaller sieves for stats was to cut down how much time it takes to produce them. It's also a good way to ensure that I have two independent copies of all the factor files, in addition to everything being kept on our server. If we ever start testing candidates at b>400M, I'll redo the stats at that time. Starting from p=0, that takes close to an entire day, which is why I don't change the stats sieve very often.
And by the way, once a bunch of work is done I create a new sieve at some p level. For example, the GFN22 sieve I'm using at the moment is for p=950869P, so all factor files below that level are already folded in and don't have to be looked at again. That keeps the stats computation time down. When processing starts taking a long time I create a new sieve, or, if we're near the end of the current max sieving range, I'll wait it out until I can create a "final" sieve. Or at least final until sieving on that n is reopened. An example of that is me using a p=300000P sieve for GFN18: I'm waiting for one final range to be completed, and then I'll create a p=420000P sieve.
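The bookkeeping behind those checkpoint sieves can be sketched like this. I'm assuming (this is my simplification, not the actual file format) that a sieve is effectively a set of surviving b values and each factor file lists the b values it eliminates; rolling the files into a new checkpoint means they never have to be re-read:

```python
def advance_sieve(survivors: set[int], factor_files: list[list[int]]) -> set[int]:
    """Fold every factor file found since the last checkpoint into the
    surviving-candidate set. The returned set is saved as the new base
    sieve at the new p level, so future stats runs start from it and
    skip all factor files below that level."""
    for eliminated_bs in factor_files:
        survivors -= set(eliminated_bs)
    return survivors

# Toy example: even bases 2..18, then two factor files arrive.
base = set(range(2, 20, 2))
base = advance_sieve(base, [[4, 10], [16]])
print(sorted(base))  # [2, 6, 8, 12, 14, 18]
```

The real sieve files carry more information than this, of course, but the time saving is the same idea: each checkpoint makes the already-processed factor files redundant.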