PrimeGrid

1) Message boards : AP26 - AP27 Search : AP27 Closing??? (Message 133449)
Posted 16 days ago by Dj Ninja
I would like to see this subproject keep running. Even though I haven't found an AP27 (yet), it was fun to find some shorter ones.

So if I could make a wish: let's go for AP28 or AP29!
May the force be with us. Maybe we're lucky.

If AP28 or AP29 is not started, just keep AP27 running as long as it doesn't hurt the server.
I think there are enough people out there who like it, even though it has achieved its primary goal.
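
For context: an AP27 is an arithmetic progression of 27 primes, i.e. primes p, p+d, p+2d, ..., p+26d with a common difference d. A minimal Python sketch of what such a progression is (the helper names and the small AP5 example below are illustrative only, not part of the AP27 search application):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small illustrative values."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True


def is_prime_ap(start: int, diff: int, length: int) -> bool:
    """Check whether start, start+diff, ..., start+(length-1)*diff are all prime."""
    return all(is_prime(start + k * diff) for k in range(length))


if __name__ == "__main__":
    print(is_prime_ap(5, 6, 5))   # True: 5, 11, 17, 23, 29 is an AP5
    print(is_prime_ap(7, 10, 4))  # False: 7, 17, 27, 37 -- 27 is not prime
    # An AP27 has the same structure with length 27 (and a vastly larger start and difference).
```

The real search obviously does not test candidates this way; the sketch only pins down the definition behind "AP27", "AP28" and "AP29".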
2) Message boards : Number crunching : Wallis is Born Challenge (Message 101047)
Posted 1064 days ago by Dj Ninja
As for the GPU app, it was Michael's decision not to send it to 32-bit systems even though the GPU app itself is 32-bit, on the grounds that the number of such systems (32-bit OS and a capable GPU with 2+ GB of video RAM) was very low.

That's bad because I have such a system: a GTX 670 GPU in an older WinXP32 gaming rig I didn't want to rebuild yet. Because I can't achieve full power due to your restriction of not sending a 32-bit-capable app to a 32-bit system, I will *not* participate in the race.
3) Message boards : Number crunching : Reign Record Challenge (Message 87973)
Posted 1496 days ago by Dj Ninja
Maybe the task duration times are greatly affected by what hardware is used inside and outside a race. I think there are not many Q6600s left out in the field crunching LLR tasks outside of a race. The few machines still doing it may be highly tuned boxes, boosted to the max so that they're ready for the task. This could make the processor look much faster than it is on "normal" machines (not overclocked, all cores in use). When these "normal" boxes are attracted by a race, they can never catch up with the tuned ones.

If I'm allowed to make a suggestion - I would prefer fewer challenges, but longer ones.
4) Message boards : Number crunching : Reign Record Challenge (Message 87932)
Posted 1498 days ago by Dj Ninja
If factors like HT enabled/disabled, or using only one core and keeping the other cores idle, count as much as we see here in this race, then it's definitely the wrong style of race FOR ME.

MY opinion is that even slower computers like a stock, non-overclocked Q6600 - without maximizing-RAM-performance-by-disabling-cores tricks - should be able to deliver at least one "set" of WUs (four on a Q6600) within the race. The race duration needs to be long enough to ensure that (see the rough arithmetic sketched after this post).

But that is MY opinion; I don't want to start a discussion/war on race duration times. And it's MY choice to participate in such races - or not. Nobody needs to follow my opinion if I find these short races with long tasks unsuitable for me in the future.

If it is your wish, I could start up a standard Q9550 machine to show the WU run time on that CPU. I assume it will *never* finish inside this race.

Fortunately it is not my job to organize races. :D
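
A rough sketch of the arithmetic behind the "one set per race" argument above: a machine running one task per core can only finish floor(race duration / task run time) tasks per core within the race window. The race length and run-time figures below are placeholder assumptions, not measured values:

```python
def tasks_completed(race_hours: float, task_hours: float, cores: int) -> int:
    """Tasks finished within the race if every core runs one task at a time."""
    return cores * int(race_hours // task_hours)


if __name__ == "__main__":
    # Placeholder assumptions: a 3-day challenge and a stock, non-overclocked
    # quad core needing roughly 90 hours per LLR task.
    cores = 4
    done = tasks_completed(race_hours=72.0, task_hours=90.0, cores=cores)
    print(done, "tasks,", done // cores, "full sets")  # 0 tasks, 0 full sets
```

With those assumed numbers, a stock quad core cannot deliver a single set unless the race lasts at least as long as one task run time, which is the point being made.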
5) Message boards : Number crunching : Reign Record Challenge (Message 87918)
Posted 1498 days ago by Dj Ninja
Yep, this race is for brand-new computers only, due to its short duration combined with very long-running tasks.

If I'm lucky I'll manage to get 8 WUs done inside the race. Another 16 or 24 will finish shortly after the race has ended. I'm completing these only to help with the cleanup, but further races of this style will go on without me.
6) Message boards : Number crunching : Prime Field of Dreams (Message 87838)
Posted 1500 days ago by Dj Ninja
If the GTX 580 is really twice as fast as an R9 280X... I'll get one. :D
7) Message boards : Number crunching : Task limits being reduced (Message 87795)
Posted 1501 days ago by Dj Ninja
I understand your problem, but I think scheduler requests that return no work are extremely rare in the real world.

Is there a way to increase the number of tasks the server software can hold in memory?

Maybe it's worth trying out 6 or 5 tasks per core if the server's in-memory task count gets too low.

It seems that the BOINC server software is not designed to handle that many subprojects. How long will the small-N Genefer subprojects last until the search range is completed? Maybe we should move on to these subprojects one by one and not all at once. I would dedicate one or two old GPUs specifically to help clear out the short-running subprojects, so that bigger values are reached faster (if I'm not the only one crunching them).

Another approach may be a modification of the science app for Genefer tasks.
For example, my 3x+1 science app was designed to handle different work unit lengths with the same executable. The length was defined in the work unit parameters passed to the science app, and every user could choose their own preferred work unit length in their project settings. The work generator handled this by preserving created work in the database, which could be done quite freely (linear processing of the work space).
Since the science app, project database, work generator and validator must be built and maintained by the project anyway (and not by the BOINC server software), this might work with a single subproject and a user-selectable value of N. You already made similar modifications to the HTML scripts by adding the changeable block size for Genefer CUDA tasks.
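
A minimal sketch of the scheme described above: one executable handling any work unit length, with the length coming entirely from the work unit's input. This is not the actual 3x+1 science app and not the BOINC API; the input format (a "start" and a "length" on one line) and the file names are invented for illustration:

```python
import sys


def collatz_steps(n: int) -> int:
    """Number of 3x+1 steps to reach 1 (odd -> 3n+1, even -> n/2); assumes n >= 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps


def run_workunit(in_path: str, out_path: str) -> None:
    """Process the range [start, start+length); the length is chosen per work unit."""
    with open(in_path) as f:
        start, length = (int(x) for x in f.read().split())
    best_n, best_steps = max(
        ((n, collatz_steps(n)) for n in range(start, start + length)),
        key=lambda t: t[1],
    )
    with open(out_path, "w") as f:
        f.write(f"{start} {length} {best_n} {best_steps}\n")


if __name__ == "__main__":
    run_workunit(sys.argv[1], sys.argv[2])
```

A work generator would then just hand out consecutive (start, length) pairs, with the length taken from the requesting user's preference - the "linear processing of the work space" mentioned above.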
8) Message boards : Number crunching : Prime Field of Dreams (Message 87776)
Posted 1501 days ago by Dj Ninja
I would never say don't do it, as long as the results can be handled. If the GTX 580 needs less than a minute, an R9 280X will beat the crap out of it.
9) Message boards : General discussion : Errors on TRP Seive tasks (Message 87775)
Posted 1501 days ago by Dj Ninja
Did you have something like an unclean Windows shutdown, or is it an overclocked machine? That can cause various errors with unknown reasons. Memory errors could be another cause.

Years ago, I experienced the same on a Q6600 running at 3.6 GHz - that's 150% of the rated clock speed. The machine was stable and awfully fast, but a very small number of tasks (maybe 2-3 tasks a month) crashed. As all the other tasks were running fine, I ignored this minor problem.
10) Message boards : Number crunching : Task limits being reduced (Message 87773)
Posted 1501 days ago by Dj Ninja
Great! *thumbup*

Now I have the drive I need to set up the new machine I got yesterday, and that machine should have enough power to do LLR tests as well as sieving. Maybe it will participate in the upcoming challenge. :)

I think 8 is fine, and some "empty" scheduler requests (a request that returns *no* work even though work was available) were no problem in the past. On my machines they were very, very rare. Sometimes I noticed requests that returned only a small number of tasks (2-3 tasks), but I can't remember ever getting zero. From this experience I think it works very well, and - you know me ;) - I never would have completed that many tasks if I were unhappy with the server's performance.

I don't see you having any problems with your server settings. The server works great. GPU sieve tasks are very small (10 minutes on a mid-range nVidia card), and large Genefer-WR tasks often fail for reasons independent of the server. I welcome being able to fill the GPU sieve queue with two requests, because its cached work, even at full state, is done in less than a day. So I only have to look after this machine twice a day, and needing only two requests saves me time.
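
The "twice a day" figure is just cache depth divided by depletion rate; a quick sketch with assumed numbers (the cache size and task time below are illustrative, not the actual server limits):

```python
def refills_per_day(cached_tasks: int, task_minutes: float) -> float:
    """How many times per day a full cache is emptied by one GPU running continuously."""
    tasks_per_day = 24 * 60 / task_minutes
    return tasks_per_day / cached_tasks


if __name__ == "__main__":
    # Assumed figures: a full GPU sieve cache of ~80 tasks at ~10 minutes each.
    print(round(refills_per_day(cached_tasks=80, task_minutes=10), 1))  # ~1.8 refills per day
```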

If you think about lowering the values again, I will need longer-running CPU sieve tasks. Maybe I'll get a better internet connection within a month or two, making me unaffected in the future, but other users might encounter the same problems I did. Okay, I'm pretty sure I'm the only one trying to run a CPU farm like mine on such a bad internet connection, but... you never know... ;)

I have one question about it: did you have users complaining that they sometimes didn't get work even though there was work available?

