1) Message boards : Number crunching : Wallis is Born Challenge (Message 101047)
Posted 916 days ago by Dj Ninja
As for the GPU app, it was Michael's decision not to send it to 32-bit systems even though the GPU app itself is 32-bit, on the grounds that the number of such systems (32-bit OS plus a capable GPU with 2+ GB of video RAM) was very low.

That's bad because I have such a system: a GTX 670 in an older WinXP32 gaming rig I didn't want to rebuild yet. Because I can't achieve full power due to your decision not to send a 32-bit-capable app to a 32-bit system, I will *not* participate in the race.
2) Message boards : Number crunching : Reign Record Challenge (Message 87973)
Posted 1349 days ago by Dj Ninja
Maybe the task durations are greatly affected by which hardware is used inside and outside a race. I think there are not many Q6600s left in the field crunching LLR tasks outside a race. The few machines still doing it may be highly tuned boxes, boosted to the max so that they're ready for the task. This could make the processor look much faster than it is on "normal" machines (not overclocked, all cores used). When these "normal" boxes are attracted by a race, they can never catch up with the tuned ones.

If I'm allowed to make a suggestion - I would prefer fewer challenges, but longer ones.
3) Message boards : Number crunching : Reign Record Challenge (Message 87932)
Posted 1350 days ago by Dj Ninja
If factors like HT enabled/disabled, or using only one core while keeping the other cores idle, count as much as we see here in this race, it's definitely the wrong style of race FOR ME.

MY opinion is that even slower computers like a stock, non-overclocked Q6600 without maximize-RAM-performance-by-disabling-cores tricks should be able to deliver at least one "set" of WUs (four on a Q6600) within the race. The race duration needs to be long enough to ensure that.

But that is MY opinion; I don't want to start a discussion/war about race durations. And it's MY choice to participate in such races - or not. Nobody needs to follow my opinion if I find these short races with long tasks unsuitable for me in the future.

If you wish, I could start up a standard Q9550 machine to show the WU run time on that CPU. I assume it will *never* finish inside this race.

Fortunately it is not my job to organize races. :D
4) Message boards : Number crunching : Reign Record Challenge (Message 87918)
Posted 1350 days ago by Dj Ninja
Yep, this race is for brand-new computers only, due to its short duration combined with very long-running tasks.

If I'm lucky I'll manage to get 8 WUs done inside the race. Another 16 or 24 will finish shortly after the race has ended. I'm completing these only to help with the cleanup, but further races of this style will go on without me.
5) Message boards : Number crunching : Prime Field of Dreams (Message 87838)
Posted 1352 days ago by Dj Ninja
If the GTX 580 is really twice as fast as an R9 280X... I'll get one. :D
6) Message boards : Number crunching : Task limits being reduced (Message 87795)
Posted 1353 days ago by Dj Ninja
I understand your problem, but I think scheduler requests that get no work at all are extremely rare in the real world.

Is there a way to increase the number of tasks the server software can hold in memory?

Maybe it is worth trying 6 or 5 tasks per core if the server's in-memory task pool gets too low.

It seems that the BOINC server software is not designed to handle that many subprojects. How long will the very small-N genefer subprojects last until the search range is completed? Maybe we should move through these subprojects one by one rather than all at once. I would dedicate one or two old GPUs specifically to help clear the short-running subprojects, so that bigger values are reached faster (if I'm not alone crunching this).

Another approach might be a modification of the science app for genefer tasks.
For example, my 3x+1 science app was designed to handle different work unit lengths with the same executable. The length was defined in the work unit parameters given to the science app, and every user could choose his own preferred workunit length in his project settings. The work generator handled this by storing created work in the database, which could be done very freely (linear processing of the work space).
As long as the science app, project database, work generator and validator are built and maintained by the project (rather than by the stock BOINC server software), this might work with a single subproject and a user-selectable value of N. You made such modifications to the HTML scripts already by adding the changeable block size on genefer CUDA tasks.
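For illustration only, a minimal Python sketch of the scheme described above (all names are hypothetical, not the actual 3x+1 or PrimeGrid code): the length travels in the workunit parameters, so a single executable serves every user's preferred size while the generator walks the search space linearly.

```python
def make_workunit(start, user_pref_length=None, default_length=1_000_000):
    """Create one workunit covering a linear slice of the search space.

    The chosen length is embedded in the workunit parameters, so the
    science app reads it at startup instead of being compiled per length.
    """
    length = user_pref_length or default_length
    return {
        "range_start": start,           # first value this WU covers
        "range_length": length,         # passed to the science app
        "next_start": start + length,   # where the generator resumes
    }

# One user prefers short WUs, the next gets the project default;
# the generator simply continues from wherever the last WU ended.
wu1 = make_workunit(0, user_pref_length=1000)
wu2 = make_workunit(wu1["next_start"])
```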
7) Message boards : Number crunching : Prime Field of Dreams (Message 87776)
Posted 1354 days ago by Dj Ninja
I would never say don't do it, as long as the results can be handled. If the GTX 580 needs less than a minute, an R9 280X will beat the crap out of it.
8) Message boards : General discussion : Errors on TRP Seive tasks (Message 87775)
Posted 1354 days ago by Dj Ninja
Did you have something like an unclean Windows shutdown, or is it an overclocked machine? That can cause various errors with no obvious reason. Memory errors could be another cause.

Years ago, I experienced the same on a Q6600 running at 3.6 GHz - that's 150% of the rated clock speed. The machine was stable and awfully fast, but a very small number of tasks (maybe 2-3 a month) crashed. As all the other tasks were running fine, I ignored this minor problem.
9) Message boards : Number crunching : Task limits being reduced (Message 87773)
Posted 1354 days ago by Dj Ninja
Great! *thumbup*

Now I've got the drive I need to set up the new machine I got yesterday, and this machine should have enough power to do LLR tests as well as sieving. Maybe it will participate in the upcoming challenge. :)

I think 8 is fine, and some "empty" scheduler requests (a request that gets *no* work even though work was available) were no problem in the past. On my machines they were very, very rare. Sometimes I noticed requests with only a small number of tasks (2-3), but I can't remember a zero. From this experience I think it works very well, and - you know me ;) - I never would have completed that many tasks if I were unhappy with the server's performance.

I don't see you having any problems with your server settings. The server works great. GPU sieve tasks are very small (10 minutes on a mid-range nVidia card), and large genefer-WR tasks often fail due to server-independent issues. I welcome being able to fill the GPU sieve queue with two requests, because even a full cache of work is done in less than a day. So I have to look after this machine twice a day, and needing only two requests saves me time.

If you think about lowering the values again, I'll need longer-running CPU sieve tasks. Maybe I'll get a better internet connection within a month or two, making me unaffected in the future, but other users might encounter the same problems I did. Okay, I'm pretty sure I'm the only one who tries to run a CPU farm like mine on such a bad internet connection, but... you never know... ;)

One question about this: did you have users complaining that they sometimes didn't get work even though work was available?
10) Message boards : Number crunching : Task limits being reduced (Message 87746)
Posted 1354 days ago by Dj Ninja
My machines can only connect via the mobile phone network and are therefore very limited in bandwidth (64 kbit/s) once the threshold of 500 MB per month is reached. They will reach this limit in a few days, and I could send you the client log if you're interested.
On top of that, this connection is not available 24 hours per day - which makes WU caching (many, many, many WUs when sieving) absolutely necessary.

Computers with a stable, always-on internet connection are not affected, because by default they make frequent scheduler requests (reporting/requesting only a small number of WUs), and the request size is no problem even for a slow but stable 1 Mbit DSL connection.

The workunit files are so small that they won't cause a problem. The connection doesn't need to be very stable, because the whole transfer completes almost instantly even on a slow link.

Scheduler requests (which are causing the problem) are difficult because of their size. The connection needs to stay stable while sending and receiving a lot of data at a very low speed. In many cases the connection gets interrupted, the data transfer stalls until it hits the timeouts - and has to be retried with its entire data load and the same tendency to fail again. Because of the timeouts and long transfer times (at 64 kbit/s only) I can't retry frequently, and I simply don't have the time to do this on multiple machines either. Uploading/downloading small WU files on 2-3 machines at the same time may work because the files are tiny - but doing two scheduler requests simultaneously is not possible. I have to do them one by one on all machines, and that has often been difficult in the past - even with 8 times more WUs per request.
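To put rough numbers on the contrast between the two transfer types (the sizes here are illustrative assumptions, not measured values from the post), a back-of-the-envelope calculation at the 64 kbit/s throttled rate:

```python
def transfer_seconds(size_bytes, bits_per_second=64_000):
    # Ideal transfer time over the throttled link, ignoring protocol
    # overhead and interruptions - real times would only be worse.
    return size_bytes * 8 / bits_per_second

# A small workunit file (assume ~10 KB) completes in about a second,
# so an unstable link barely matters:
small_wu = transfer_seconds(10_000)       # 1.25 s

# A large scheduler request (assume ~1 MB) needs minutes of
# uninterrupted connection - and restarts from zero on every drop:
big_request = transfer_seconds(1_000_000)  # 125 s
```

This is why the small files go through while the big monolithic requests keep failing: the failure probability grows with how long the link must stay up in one piece.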

Switching to LLR tasks (which I could do as a workaround) is no real option because of CPU age. All but two machines are very slow on LLR yet really fast on sieving (the i7-980, for example). I would like 12-hour sieve tasks (on a fast computer, not on a P2 Celeron 266).

Another issue is that GPU tasks are delivered with a dependency on the number of CPU cores. A machine with a single-core CPU (used only to feed the GPU) will now get only 2 tasks per request (1 CPU + 1 GPU). A machine with 6 HT cores gets 13, with the same amount of GPU computing power. Machines with more than one physical CPU (and maybe HT too) might get 25 - STILL with the same GPU computing capability.
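The scaling complained about here reduces to a simple formula - a hypothetical reconstruction from the numbers in the post, not the actual server code: one task per logical CPU core plus one per GPU.

```python
def tasks_per_request(logical_cores, gpus=1, per_core=1, per_gpu=1):
    # Hypothetical per-request limit: tasks scale with logical CPU
    # cores, while the GPU contributes a fixed share regardless of
    # how fast it is.
    return logical_cores * per_core + gpus * per_gpu

single_core_feeder = tasks_per_request(1)    # 1 + 1  = 2
six_ht_cores       = tasks_per_request(12)   # 12 + 1 = 13
dual_cpu_with_ht   = tasks_per_request(24)   # 24 + 1 = 25
```

Under this model the GPU's cache depth is dictated entirely by the CPU next to it, which is exactly the mismatch the post objects to.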

Are you seeing computers that are running out of work because they can't connect to the server often enough to keep the work queue filled?

Yes. I'll keep you informed.

Copyright © 2005 - 2019 Rytis Slatkevičius (contact) and PrimeGrid community.
Generated 24 May 2019 | 6:07:35 UTC