Message boards :
Generalized Fermat Prime Search :
Errors on GTX560Ti
Hi!
Yesterday I wanted to give the GFNs on my new GTX 560 Ti a try, but all I got were errors like this, for example.
"Environment is not correct," says one line in the stderr (translated).
I have NVIDIA driver 301.42.
I tried again a few minutes ago after resetting the project, because I thought there could be a wrong cuda.dll on my system from another or older app. But it seems the same files were downloaded, and the WUs are failing again with the same output (like here).
So what could be wrong here?
Edit:
Found a thread mentioning that GTX 560 Tis should be able to run the WUs despite being a little overclocked. I wonder if that could be the problem nevertheless... Will try downclocking a little later.
____________
Life is Science, and Science rules. To the universe and beyond
Proud member of BOINC@Heidelberg
Try downclocking the memory first. Mine runs at 915 MHz, but the memory has been downclocked to 1800. It is watercooled, though, which helps with the extra core clock speed.
After a bit of testing, it looks like I found some values where the WUs don't error out anymore.
I wonder, though: my system with the 560 Ti is still relatively laggy with these and I cannot work very well with it - while my X6 with a GTX 560 doesn't have that problem with GFNs and isn't laggy at all (the Ti is now clocked even below a normal 560).
Looks like I can only run them on the 560 Ti when I'm AFK...
____________
Life is Science, and Science rules. To the universe and beyond
Proud member of BOINC@Heidelberg
After a bit of testing, it looks like I found some values where the WUs don't error out anymore.
I wonder, though: my system with the 560 Ti is still relatively laggy with these and I cannot work very well with it - while my X6 with a GTX 560 doesn't have that problem with GFNs and isn't laggy at all (the Ti is now clocked even below a normal 560).
Looks like I can only run them on the 560 Ti when I'm AFK...
You should take a look at this thread: [url]http://www.primegrid.com/forum_thread.php?id=3982[/url] and see if setting a different block size reduces the lag. You can set the block size on your preferences page (the "shift" value). The default shift for small GFN WUs is 7, and for world-record ones it is 8. Setting a smaller shift might reduce lag.
____________
676754^262144+1 is prime
Michael Goetz Volunteer moderator Project administrator
Joined: 21 Jan 10 Posts: 14043 ID: 53948 Credit: 481,202,847 RAC: 504,835
Two different issues here, errors and screen lag.
Errors:
It's been discussed in great detail in several threads, but the short version is that your "maxErr exceeded" problem is caused by clock rates and/or temperatures being too high. If you're overclocking (often even factory overclocking), GeneferCUDA starts getting calculation errors. With one card, the GTX 550 Ti, even stock clock rates are too fast. It's sometimes necessary to downclock the rates on that card to get it to work reliably. It's been found that lowering clock rates on the GPU memory is more useful than lowering clock rates on the GPU shaders.
Since it's also temperature sensitive, another method of increasing stability is to increase the GPU fan speed. Or, as at least one person has done, duct the air conditioning straight into the computer case.
Lag:
You can try using the setting on the PrimeGrid preferences page to lower the block size. This should help in reducing the lag. It may, however, slow down the computations.
The lag is caused by interaction between GeneferCUDA and other programs that use the GPU for visual effects on the screen. Unfortunately, "other programs" seems to include many JavaScript website ads, as well as many Microsoft programs. What I do personally is set BOINC to not run the GPU while I'm using the computer. If no programs are causing any lag, I just override that and force the GPU to run anyway. Usually I have that override set, but if something causes lag I turn the override off.
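The block-size tradeoff can be sketched with a rough model (this is purely illustrative, not GeneferCUDA's actual code, and all the numbers are made up): if the app processes its transform in chunks of 2^shift elements per GPU kernel launch, the display can only update between launches, so a smaller shift means shorter stalls but more fixed launch overhead.

```python
# Illustrative model of the shift/block-size tradeoff (all numbers invented):
# smaller blocks -> shorter worst-case display stall, but more total overhead.

TOTAL_WORK = 1 << 20       # total elements to process (made-up size)
LAUNCH_OVERHEAD_US = 5.0   # assumed fixed cost per kernel launch, microseconds
US_PER_ELEMENT = 0.01      # assumed compute cost per element, microseconds

def run_time_us(shift: int) -> tuple[float, float]:
    """Return (total runtime, worst-case display stall) in microseconds."""
    block = 1 << shift                 # elements per kernel launch
    launches = TOTAL_WORK // block     # number of launches needed
    per_launch = LAUNCH_OVERHEAD_US + block * US_PER_ELEMENT
    return launches * per_launch, per_launch

for shift in (5, 7, 8):
    total, stall = run_time_us(shift)
    print(f"shift={shift}: runtime={total:.0f}us, max stall={stall:.2f}us")
```

In this toy model, going from shift 8 down to 5 shrinks each stall but multiplies the number of launches, which matches the reports in this thread: less lag, noticeably longer crunch times.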
Hope this helps!
____________
My lucky number is 75898^524288+1
With one card, the GTX 550 Ti, even stock clock rates are too fast.
And it seems the same goes for the GTX 560 Ti then, since I have that problem...
Thanks for suggesting lowering the block size, guys; I didn't consider that because the other card has no problem with the normal setting. Block size 5 seems to work the lag out - it doubles the crunch time now with all these settings - but it would be rather annoying to set the block size back every time the other card needs new WUs...
Is there a way to set it per app_info.xml for my 560 Ti?
____________
Life is Science, and Science rules. To the universe and beyond
Proud member of BOINC@Heidelberg
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
Joined: 17 Oct 05 Posts: 2420 ID: 1178 Credit: 20,148,911,466 RAC: 22,768,785
Is there a way to set it per app_info.xml for my 560 Ti?
The easiest thing would be to put the hosts in different venues (i.e., home, school, work, default) and set the block size differently within each venue.
Michael Goetz Volunteer moderator Project administrator
Is there a way to set it per app_info.xml for my 560 Ti?
I'm not sure if BOINC provides a mechanism for doing that, even with app_info.
____________
My lucky number is 75898^524288+1
|
|
Is there a way to set it per app_info.xml for my 560 Ti?
I'm not sure if BOINC provides a mechanism for doing that, even with app_info.
Block size settings only apply to the GFN WUs; they don't interfere with other sub-projects. If what you were looking for was a way to automatically adapt the overclocking to the work being crunched, I don't think BOINC can handle that. It would be a nice - but dangerous - feature, though.
____________
676754^262144+1 is prime
If what you were looking for was a way to automatically adapt the overclocking to the work being crunched
I know that the overclocking stuff can't be handled by app_info.xml; that would be way too far outside BOINC. ;-)
I was referring only to the block size - sorry if that wasn't obvious.
I'm not that deep into the mechanics of the XML; I only know a few things for some of the apps from various forums. But I remember that in the early days of DNETC it was possible to set the GPU usage with a special parameter - so if something similar were possible here with the block size, that would be nice. I don't know if it is the same feature DNETC had, though - probably not, I guess...
____________
Life is Science, and Science rules. To the universe and beyond
Proud member of BOINC@Heidelberg
Michael Goetz Volunteer moderator Project administrator
Just to make sure I understood your question, you have two GPUs in the same computer and want to use different block sizes in each one, correct?
If so, I can think of no way of doing that currently.
The best idea I can offer is setting BOINC to only run the GPUs when the system is idle. That would also have the advantage of running both GPUs at full speed when you're not using the computer.
____________
My lucky number is 75898^524288+1
|
|
Just to make sure I understood your question, you have two GPUs in the same computer and want to use different block sizes in each one, correct?
No, it seems you didn't read my posts correctly. To describe it again in other words - maybe my English was a little misleading at first:
I have an X6 with a GTX 560 that runs the GFNs without any problems at all. I crunched all my current GFN credits with it.
And I have an X4 with the GTX 560 Ti, on which I wanted to try the GFNs now too, and which has the problem already mentioned.
If I could set up an app_info.xml with a changed block size for the 560 Ti, it would be easier than constantly editing the settings whenever the computers ask for new work.
____________
Life is Science, and Science rules. To the universe and beyond
Proud member of BOINC@Heidelberg
Michael Goetz Volunteer moderator Project administrator
Just to make sure I understood your question, you have two GPUs in the same computer and want to use different block sizes in each one, correct?
No, it seems you didn't read my posts correctly. To describe it again in other words - maybe my English was a little misleading at first:
I have an X6 with a GTX 560 that runs the GFNs without any problems at all. I crunched all my current GFN credits with it.
And I have an X4 with the GTX 560 Ti, on which I wanted to try the GFNs now too, and which has the problem already mentioned.
If I could set up an app_info.xml with a changed block size for the 560 Ti, it would be easier than constantly editing the settings whenever the computers ask for new work.
Ok, gotcha.
You can do that with app_info, but there's a much easier way.
Put the two computers in different venues (e.g., put one in "home" and one in "work") and then assign different block sizes to the different venues.
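For completeness, the app_info route would look roughly like the fragment below. The element structure (<app_version>, <cmdline>, <coproc>, <file_ref>) is standard BOINC anonymous-platform app_info.xml, but the "-s 5" shift argument and the file names are hypothetical - GeneferCUDA may not accept a block-size option on its command line at all, which is why the venue approach is the safer bet.

```xml
<!-- Sketch only: standard app_info.xml skeleton, but the "-s 5"
     shift flag and file names are hypothetical examples. -->
<app_info>
  <app>
    <name>genefer</name>
  </app>
  <file_info>
    <name>primegrid_genefer_cuda.exe</name> <!-- example file name -->
    <executable/>
  </file_info>
  <app_version>
    <app_name>genefer</app_name>
    <version_num>100</version_num>
    <cmdline>-s 5</cmdline>            <!-- hypothetical shift flag -->
    <coproc>
      <type>CUDA</type>
      <count>1</count>
    </coproc>
    <file_ref>
      <file_name>primegrid_genefer_cuda.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>
```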
____________
My lucky number is 75898^524288+1
|
|
Just to make sure I understood your question, you have two GPUs in the same computer and want to use different block sizes in each one, correct?
No, it seems you didn't read my posts correctly. To describe it again in other words - maybe my English was a little misleading at first:
I have an X6 with a GTX 560 that runs the GFNs without any problems at all. I crunched all my current GFN credits with it.
And I have an X4 with the GTX 560 Ti, on which I wanted to try the GFNs now too, and which has the problem already mentioned.
If I could set up an app_info.xml with a changed block size for the 560 Ti, it would be easier than constantly editing the settings whenever the computers ask for new work.
Then Scott already gave you the best solution a couple of posts ago: use the prefs page to assign a different location (venue) to each host. You can then set a different block size for each of them.
____________
676754^262144+1 is prime
|
|
Hm, okay thx, will consider and try that...
____________
Life is Science, and Science rules. To the universe and beyond
Proud member of BOINC@Heidelberg