I can confirm what Honza said here - http://www.primegrid.com/forum_thread.php?id=8250&nowrap=true#121338
This is about the RTX 2080, for those wondering how it is doing.
gfnsvocl_w64_2G.exe with default settings on GFN22 - about 177P/day.
B8 - 350P/day
B9 - 495P/day
B10 - 680P/day
B11 - 720P/day
B12 - 730P/day
B13 - 765P/day, not nice to work with.
The 2080 is seriously fast at sieving!
My AMD 8350 CPU is a bottleneck on GFN21 sieving. I had to run two tasks, but it peaks out at around 760 P/day.
Each of the two tasks is using ~8% of my total CPU. With one task it was maxed at 12.5% and only sieving at 670 P/day.
In comparison, a 1070 was sieving at 156 P/day.
Both using B13.
That makes the RTX 2080 GPU 4.87 times faster than the GTX 1070 at manual sieving.
According to GPU-Z the GPU usage is at 99-100%, temps are around 76C with fans going at 93% on a manually set fan curve. It's drawing a max of 260 watts but fluctuates around 250-258 watts. I have the power limit maxed out on my card through the Afterburner software. Core clock is at 1935 MHz and memory speed is at 1775 MHz.
The RTX 2080's performance per watt is about 3x that of the GTX 1070.
To double check the sieving speeds I also ran a GFN22 sieve. GFN22 sieving came back at 768 P/day occasionally going up to 772 P/day. That is 32 P/hour. This was using 7.5% of the total CPU for just 1 task.
Impressive.
For reference, the card I am using is a GIGABYTE GeForce RTX 2080 GAMING OC 8G Video Card, GV-N2080GAMING OC-8GC.
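The speedup and efficiency claims above are easy to sanity-check with a few lines of Python. Note the GTX 1070's power draw is not reported in this thread, so its ~150 W reference TDP is an assumption used only for the perf/watt comparison:

```python
# Sanity check of the RTX 2080 vs. GTX 1070 comparison above.
# 2080: ~760 P/day at ~255 W (reported in the post).
# 1070: 156 P/day; ~150 W is ASSUMED (reference TDP, not from this thread).

rtx2080_pday, rtx2080_watts = 760, 255
gtx1070_pday, gtx1070_watts = 156, 150

speedup = rtx2080_pday / gtx1070_pday
print(f"Sieving speedup: {speedup:.2f}x")  # ~4.87x, matching the post

perf_per_watt_ratio = (rtx2080_pday / rtx2080_watts) / (gtx1070_pday / gtx1070_watts)
print(f"Perf/watt ratio: {perf_per_watt_ratio:.1f}x")  # ~2.9x, i.e. "about 3x"
```

The 4.87x figure follows directly from the reported P/day rates; the "about 3x" perf/watt figure only holds under the assumed 1070 wattage.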
Rafael (Volunteer tester)
Joined: 22 Oct 14   Posts: 911   ID: 370496   Credit: 550,412,714   RAC: 436,139
Each of the two tasks is using ~8% of my total CPU. With one task it was maxed at 12.5% and only sieving at 670 P/day.
Try using the W1 parameter; it's supposed to be used with Nvidia cards to reduce CPU usage when sieving higher n values.
Try using the W1 parameter; it's supposed to be used with Nvidia cards to reduce CPU usage when sieving higher n values.
I use W1 every time now since I learned about it.
Rafael (Volunteer tester)
I use W1 every time now since I learned about it.
Oh, I see, I just thought you didn't because the quote said it was using default settings (and default is W0).
Oh, I see, I just thought you didn't because the quote said it was using default settings (and default is W0).
No worries, I forgot to mention the W1 parameter. The quote is from a post by Honza in another thread. If I don't use W1, each instance maxes out a CPU core.
Since learning about the W1 parameter in the 1060 sieving performance findings thread, I've used it every time I sieve manually.
Sieving GFN19....
My FX-8350 CPU is a total bottleneck for GFN19 sieving with the 2080. Each instance I have running uses a full CPU core while sieving only around 175 P/day, even with the W1 parameter. With 4 instances running on 4 cores, one per instance, I can still only get the 2080 up to around 90% usage and a total rate of 700 P/day, short of the ~760 P/day the card maxes out at. That's almost 4 times the CPU usage of GFN21 sieving.
The GFN21 ranges are all reserved, so it looks like I will revert to GFN22 for now, until the new, more powerful CPU arrives next year.
Not sure how or why, but the 2080 is sieving GFN-22 at over 840 P/day right now, peaking at 872 P/day. Wild. Nothing changed except a reboot and restarting the sieve, which before was running at 735 P/day.
Same CPU usage, same drivers... the fastest I've seen it go is the current 872 P/day peak.
For GFN18 and GFN19, two or more instances are required, and each utilizes 100% of a CPU core. But GFN22 usually runs in one instance and utilizes ~20% of a CPU core.
Azmodes (Volunteer tester)
Joined: 30 Dec 16   Posts: 184   ID: 479275   Credit: 2,197,541,354   RAC: 340
Emphasis on the "or more" for this card. I need eight instances of GFN18 to utilize it 100%. It affects the core load quite predictably, and the individual P/day only decreases slightly. For instance, I'm at 50% load with four instances and each works at 122 P/day. With one it's 128 (15% load). With eight, it's 98.
Good thing I got 32 CPU threads on this machine. :>
850+ sounds about right for 22, since I was able to push it to 850 at 21 already (with a 150 MHz overclock, admittedly). Things just get more efficient with higher n values.
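The per-instance numbers above make the throughput trade-off easy to see; a small sketch using the rates from this post (P/day per instance):

```python
# GFN18 total throughput vs. number of instances on the 2080, using the
# per-instance rates reported above. Each instance slows slightly as more
# are added, but total throughput keeps rising until the GPU saturates.

rates = {1: 128, 4: 122, 8: 98}  # instances -> P/day per instance

for n, per_instance in rates.items():
    print(f"{n} instance(s): {n * per_instance} P/day total")
```

Total throughput climbs from 128 P/day with one instance to 488 P/day with four and 784 P/day with eight, despite the per-instance drop.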
____________
Long live the sievers.
+ Encyclopaedia Metallum: The Metal Archives +
For GFN18 and GFN19 two or more instances required and each utilize 100% cpu core. But GFN22 usually run in one instance an utilize ~20% cpu core.
GFN18 and GFN19 are very CPU-hungry sieves. With my FX-8350 running 5 instances of GFN18 sieves, I cannot even come close to using all of the power of the 2080 GPU; it's about half, in fact. I had that running a few times the past couple of nights: 88 P/day for each instance, each using a full core. GFN19 is a little better. But on my older AMD FX-8350, maxing out the GPU on GFN18 would take something like 10 instances, which I just can't do as I only have 8 cores.
I thought the maximum for the 2080 prior to today was around 760 P/day. I had run GFN22 before, and even when I started it up last night it was sieving at only 735-750 P/day. One instance, using 8-9% of the total CPU, or about 3/4 of one core. Not sure how it's running at well over 100 P/day more right now.
But either way, it's good. It sieves like a beast :) I was happy with 760 P/day, but hey, I'll take an 872 P/day peak :)
OK, good to know someone else has similar performance findings.
I think the reason is the size of the numbers it's calculating. There is CPU overhead involved: smaller numbers with the GFN22 stuff vs. very large numbers with GFN18 sieving.
If you look at what it reports when it finds a factor to eliminate, the number on the left at the beginning is far larger for GFN18 sieves than for GFN22 sieves.
stream (Volunteer moderator, Project administrator, Volunteer developer, Volunteer tester)
Joined: 1 Mar 14   Posts: 1033   ID: 301928   Credit: 543,616,621   RAC: 5,946
I think the reason is the size of the numbers it's calculating. There is CPU overhead involved. Smaller numbers with the GFN22 stuff vs very large numbers with the GFN18 sieving.
No. It does not matter whether you test at 1P or 100000P. For GFN-"N", only "N" counts. Each step down requires double the amount of CPU power.
The program first generates a set of "intermediate" candidates which must pass some sanity checks before they can be fed to the GPU for full testing. These checks are CPU-only. "N" defines a "step" for this generator. Lowering "N" by 1 creates 2x more candidates per 1P, so 2x more CPU power is required for the initial tests.
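The doubling can be illustrated with the density of candidate factors: any prime factor of a GFN-N number b^(2^N)+1 has the form k*2^(N+1)+1, so the number of candidates the CPU-side generator must screen in a fixed range halves each time N goes up by one. This is only a sketch of the density argument, not the actual checks gfnsvocl performs:

```python
# Why lowering N doubles the CPU-side work: candidate factors of GFN-N
# numbers have the form k * 2^(N+1) + 1, so their density in any fixed
# range halves as N increases by one. (The real sieve's sanity checks
# are internal to gfnsvocl; this only counts the candidates.)

def candidate_count(n, limit):
    """Count integers of the form k * 2^(n+1) + 1 (k >= 1) below limit."""
    return (limit - 2) // 2 ** (n + 1)

for n in range(18, 23):
    print(f"GFN-{n}: {candidate_count(n, 10**9)} candidates below 1e9")
```

Going from GFN-22 down to GFN-18 multiplies the candidate count by 16 (2^4), which matches the observation that GFN18 needs far more CPU cores and instances to keep the GPU fed.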
OK, thanks for clearing that up. So each step down in N doubles the CPU required. I thought it would be the reverse, with greater N needing more CPU, but it's the lower N values that increase the CPU usage.
Dave
Joined: 13 Feb 12   Posts: 3207   ID: 130544   Credit: 2,285,547,321   RAC: 769,322
+1
Azmodes had a good idea earlier: lower the memory clock, raise the GPU clock. It increased the rate by 40-60 P/day; 820 P/day is the norm now.
I also learned that W1 can slow things down slightly for these faster cards. They need to be constantly supplied with data, so removing W1 raises the CPU usage to a full core per instance on an FX 8350, but there is less variation and bouncing in the P/day rate. It holds a higher rate more consistently.
The GPU clock is now stable at 2040 MHz with the memory at its stock 6800 timing. Temps were lower than with a +300 memory clock. Lowering the memory clock below 6800 did not reduce temps, but raising it above stock increased them.
Opening my outdated, poorly cooled case helped as well. Two GPUs stacked on top of one another need good ventilation, so like many have done I simply took the side off the case and pointed a fan directly at the cards :) A 5-6C temp difference from that alone. The case for the new build will have much better airflow.
Thanks Azmodes! Happy sieving!
Azmodes (Volunteer tester)
Do we have any proud owners of a 2080 Ti here willing to post some P/day numbers?
Do we have any proud owners of a 2080 Ti here willing to post some P/day numbers?
I would be curious to see those numbers.
Do we have any proud owners of a 2080 Ti here willing to post some P/day numbers?
I could do it, but I would need help on how to do it.
Azmodes (Volunteer tester)
http://www.primegrid.com/sieving_intro.php
I downloaded the default chunk of gfn22 and ran at each block size. No problems using the machine.
gfnsvocl_w64_2G.exe on an Nvidia 2080 Ti Founders Edition Windows 64 Pro
Intel Core i9 7920X @ 2.90GHz base frequency
Skylake-X 14nm Technology
The only BIOS change was to limit CPU temp to 70 degrees.
Standard turbo boost running at 3.8 GHz.
default 10048.0/s (174.2P/day)
b8 20010.7/s (347.0P/day)
b9 38741.3/s (671.7P/day)
b10 50858.7/s (881.8P/day)
b11 55296.0/s (958.8P/day)
b12 57433.2/s (995.8P/day)
b13 58397.9/s (1012.6P/day)
Fedora 29 gfnsvocl_linux_x86_64 on an Nvidia 2080 Ti Founders Edition
Intel Core i9 9980XE executing at 3.8 GHz
default 10688.0/s (185.3P/day)
b8 20978.0/s (363.7P/day)
b9 41317.8/s (716.4P/day)
b10 53742.1/s (931.8P/day)
b11 56832.2/s (985.4P/day)
b12 58204.9/s (1009.2P/day)
b13 59421.0/s (1030.3P/day)
Azmodes (Volunteer tester)
Quite nice, thank you for sharing. Is that with or without the w1 parameter?
You might be able to get a couple dozen more P/day out of it on Windows if you run two at the same time. At least that's how it is for my non-Ti 2080 on Windows 10: with one task there are some load fluctuations even at B13; two solve that and improve throughput a bit.
Quite nice, thank you for sharing. Is that with or without the w1 parameter?
Those are all without the W1 parameter set.
rjs5,
That's awesome! Slightly over 1,000 P/day! At least 25% faster than a 2080.
As Azmodes said, try running two instances and get the total P/day from both together... I'm curious whether even a Skylake 7920X can fully feed a 2080 Ti with only 1 sieve instance.
Thank you for sharing your results and taking the time to run those tests.