PrimeGrid

Message boards : Number crunching : How to maximize the usefulness of my hardware?

Profile Tuna Ertemalp
Joined: 27 Mar 15
Posts: 45
ID: 388469
Credit: 1,208,836,762
RAC: 4,799,587
Message 91118 - Posted: 11 Jan 2016 | 18:43:11 UTC
Last modified: 11 Jan 2016 | 18:44:31 UTC

I had a bit of a conversation over Private Message with Michael Goetz, but I'm making it public with his permission so that it can be searched by everyone 'til eternity... :) I am sure it will generate further questions and drill-downs. I know it does for me.

I wrote:

Sorry to bother you directly. But, given the current challenge, I thought a general forum msg might get lost. And, ultimately, you seem to be answering most of the questions like this, anyways... :)

I have the hobby of building PCs to use for 50+ BOINC projects. Usually I select all subprojects in each BOINC project, give equal share of 100 to each project, attach all my hosts to all projects, and let them "share the wealth". But, sometimes I focus them on one project, sometimes on some subprojects. Right now I am doing it for PrimeGrid, for 17Mega...WR . I saw that you guys were doing a challenge, so I thought I'd jump in and help. I was about 2 days late, but seem to be already in the top 50. Also, I am ranked <1400 worldwide in combined BOINC stats. That said, I am not after points/ranks/credits, really; they come as a result of what I do, not why I do it. But, I would like to contribute at max. To that effect, I was going over the forums, and I saw people talking about an "app_info". Now, I have no clue if that is an XML file or a section in cc_config.xml, but it occurred to me that maybe my hardware is not used to its max.

You can see from my host list that I have a fair number of machines, but the best ones use the 8-core (16-thread) Intel i7-5960X: one with dual Titan X, one with quad Titan X, and one with dual Titan Z, all with 32G or 64G memory. There is another machine with a single Titan X on a slightly older CPU, but that machine will also become a 5960X/dual Titan X/32G in the coming month. Then a mix of misc machines, using single 580s, laptops, etc.

Using GPU-Z, I see the GPU Load on the Titans is usually 95+%, but the memory usage doesn't seem to be anywhere close to the 12G each of these cards has. So I wondered whether you are getting the best from my hardware.

If there is anything I can do to help you out more, please send me a rather step-by-step instructions (like, "copy this, paste it in here for Titan X, then do this other thing for Titan Z, then this over here for your CPUs etc." :-) ), and I'd be happy to.

Isn't there a way for your server to generate something that gets sent to any given host based on the hardware description you get about that host during the server/host communication? Anything people do manually is bound to get out of whack as hardware changes, things getting reinstalled, etc. And, any "standard" settings you use (like what I have on my hosts by default) will probably result in less than effective use of available capabilities in edge cases like really capable CPUs/GPUs.

Thanks & good luck
Tuna


The response from Michael Goetz was:


Tuna,

Thanks for participating! Here's some guidelines for PrimeGrid in general, and for this challenge in particular.

(I'll also recommend asking for help on the forums. I'll give your question a shot, however...)

* PrimeGrid generally is not a project where "select everything" is a good strategy. The types of sub-projects we have not only vary in purpose, but they vary greatly in what type of hardware they excel in. Some hardware does much better on some projects than others.

* There are 9 different GFN sub-projects, and for technical reasons they use different algorithms. Some hardware is better at some of the GFN sub-projects than others. I strongly recommend reading my post about the GFN sub-projects. You can ignore the lists of discovered primes, but pay close attention to which transform is usable for each sub-project. Only n=17-mega and up are part of this challenge.

* Intel CPUs with AVX (i.e., Sandy Bridge+), or even better, FMA3 (Haswell+), are much faster when running LLR or Genefer. AMD's design makes AVX useless, so their CPUs are much slower than Intel's for everything at PrimeGrid except sieving. Sieves don't use AVX.

* As for running GFN tasks on a GPU, if you have an Nvidia GPU, you have a choice of running a CUDA version of Genefer or an OCL version of Genefer. Almost always, the OCL version is faster than CUDA -- but on Windows, OCL will consume an entire CPU core. This is a driver problem which apparently has been corrected in the latest Nvidia drivers, but the apps need to be updated to take advantage.

* GeneferCUDA and GeneferOCL (but NOT OCL2/OCL3/OCL4) are all about double precision floating point performance, so if you use any type of TITAN, put it in double precision mode if you're running GFN-WR or GFN-21. All the other GFN ranges won't use double precision, so leave the TITANs in normal mode.

* Put your AVX/FMA3 CPUs on ranges where they'll use those instructions. Currently that's GFN-19 or higher. Don't use them on GFN-17-Mega or GFN-18; their abilities would be wasted. Use non-AVX (or AMD) CPU cores on 17-Mega and 18.

* Pay attention to the information about the length of the tasks in each range. GFN-WR tasks will take about 20 days on the very fastest CPUs.

* App_info: My advice is that if you have to ask what it is, you probably don't want to use it. This is formally called "anonymous platform", and involves setting up an app_info.xml file for PrimeGrid that says you're manually providing the software for every app instead of getting it from the server. It's very easy to mess up, and it's guaranteed to cancel any in-progress tasks. The reason some people are using it is so they can use the experimental OCL4 instead of OCL3. Also, on the latest Maxwell-architecture Nvidia GPUs, OCL3 is faster than OCL, so some people are using it to run OCL3 (or OCL4) on GFN-21 and GFN-WR. Again, I don't recommend this; the instructions for doing it are long and complicated and are discussed at great length on the forums.
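For the curious, the skeleton of an app_info.xml looks roughly like this. This is a minimal sketch only: the app name, executable name, version number, and plan class below are placeholders, not PrimeGrid's actual values, so follow the forum instructions before attempting anything like it.

```xml
<!-- app_info.xml, placed in the project directory (sketch; values are placeholders) -->
<app_info>
    <app>
        <name>genefer</name>               <!-- placeholder app name -->
    </app>
    <file_info>
        <name>genefer_example.exe</name>   <!-- placeholder executable you supply yourself -->
        <executable/>
    </file_info>
    <app_version>
        <app_name>genefer</app_name>
        <version_num>100</version_num>     <!-- placeholder version -->
        <plan_class>example_ocl</plan_class>  <!-- placeholder plan class -->
        <coproc>
            <type>NVIDIA</type>
            <count>1</count>
        </coproc>
        <file_ref>
            <file_name>genefer_example.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```

As Mike says, the client then runs only what this file declares, which is exactly why a mistake here cancels in-progress work.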

* Hyperthreading: For GFN or LLR, either turn hyperthreading off, or set BOINC to use 50% of the CPUs. I.e., on a typical quad-core i7, BOINC should see 4 cores instead of 8. For sieving, hyperthreading is beneficial, so use all 8 cores.
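One way to sketch the 50% setting without touching the BIOS (assuming you'd rather edit a file than use BOINC Manager's Computing preferences dialog) is a local preferences override in the BOINC data directory:

```xml
<!-- global_prefs_override.xml, in the BOINC data directory -->
<!-- Caps BOINC at half the logical CPUs, so an 8-thread i7 shows 4 cores -->
<global_preferences>
    <max_ncpus_pct>50.0</max_ncpus_pct>
</global_preferences>
```

Flip it back to 100 (or delete the file) before a sieving run, where all threads help.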

* Both Genefer and LLR are exceptionally stressful on CPUs and GPUs. Watch your temperatures. (The core of the LLR app is also the core of Prime95, which many overclockers use as a stress test. Their "stress test" is our normal way of operating.) I don't recommend crunching on mobile devices, including laptops and all-in-one desktops. (All-in-ones have more in common with laptops than with typical desktop computers.)

* For GFN, don't overclock your GPUs. At all. That includes factory overclocking, so if you have factory OC'd GPUs you might want to manually lower their clocks to stock clock speeds. If you have GPU tasks failing and see "maxErr exceeded" errors in stderr.txt, it's usually overclocking that's the culprit.

Hope that helps.

In the future, please ask questions like this on the forums. This took 30-60 minutes to write, and it will benefit exactly one person. A question answered on the forums benefits the entire community, and lasts forever. Thank you and good luck!

Mike


I asked...


Thank you so much for taking the time! If you'd like, I have nothing against posting my question and your reply on the forum for everyone's use and future searches. May I?

Thanks again!
Tuna


And, he responded...


Sure. That kind of information's been posted many times, but it doesn't hurt to keep it fresh. There's a lot of stuff going on here, and it can be quite overwhelming for new participants.

Mike


So, here we are... Thanks again, Michael.

I would have further questions/suggestion on a lot of the points Michael made, but I am slowly waking up in my timezone. After coffee and other morning errands...

In the meantime, if anyone else has any pointers, links, tips, scripts, XMLs, or responses, feel free to add them. I am sure they would benefit future generations as well as the current clueless "calculation herds" I belong to: people too shy to reveal their ignorance and ask for help while jumping from one BOINC project to another.

Thanks all!
Tuna

PS: By the way, my questions about optimizing the usefulness of my machines for PrimeGrid are not solely due to the currently ongoing Makar Sankranti Challenge. That was just a reason to focus on PrimeGrid right now and to learn skills that will be useful well into the future.
____________

Van Zimmerman (Project donor)
Volunteer moderator
Project administrator
Volunteer tester
Project scientist
Joined: 30 Aug 12
Posts: 1951
ID: 168418
Credit: 6,015,751,316
RAC: 46,534
Message 91137 - Posted: 12 Jan 2016 | 14:24:16 UTC - in response to Message 91118.

FWIW, I'll throw my $.02 in based on my experience.

Decisions about where to point your gpus are fairly easy. Only two sub-projects, PPS-Sieve and GFN, can use them (unless you go the prpnet route). You have gpus which are good at both. If you want to find primes, run GFN. If you want credits, run PPS-Sieve.

If you run PPS-Sieve, you may want to set up an app_config.xml file to run two units at once; in my experience, that tends to keep the gpu fully loaded and provides better overall throughput than running a single task.
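A minimal sketch of such an app_config.xml, placed in the PrimeGrid project directory. The app name below is an assumption; check the <name> fields in your client_state.xml for the exact one your client reports.

```xml
<!-- app_config.xml in projects/www.primegrid.com/ (sketch; verify the app name) -->
<app_config>
    <app>
        <name>pps_sr2sieve</name>      <!-- assumed PPS Sieve app name; confirm in client_state.xml -->
        <gpu_versions>
            <gpu_usage>0.5</gpu_usage> <!-- 0.5 GPU per task = two tasks share one GPU -->
            <cpu_usage>0.5</cpu_usage> <!-- CPU reserved per GPU task -->
        </gpu_versions>
    </app>
</app_config>
```

After saving, use BOINC Manager's "Read config files" (or restart the client) for it to take effect; unlike app_info.xml, this does not abort in-progress tasks.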

With GFN WR and my nvidia cards, I have never seen them stay at 100% utilization, and this appears to be a power limit problem. MSI AB lets me select a max power limit of 106% on my titan black, and with DP selected, it is always at the power limit (even though telemetry reports it is actually at 100%).

Of course, if you run GFN, you have to select which subproject. With those gpus, I would point them at GFN-WR and let them go; after all, the project's mission is to discover the largest prime, and you have hardware suited for it. I set any of my gpus which can complete a WR unit in three days or less (give or take an hour or three) on WR.

My only experience with maxwell cards is with non-titan versions, so I cannot comment on how the titan x will perform with the ocl3 or ocl4 transform. I have seen a 50% performance increase with ocl4 over ocl for WR units. If you want to go down the app_info path, which you would need to do to run ocl4 (or 3) on WR or gfn-21 units, I suspect there are plenty of folks who will help you get there, understanding it can be tedious and frustrating to get working.

As to cpus, that is a different story altogether. Much of it comes down to personal preference: whether you want to find a lot of primes (small gfn/sgs/pps/ppse) or larger ones (gfn/sob), with an eye to whether you are looking for points or not.

While I agree with Mr. Goetz's statement about pointing avx/fma3-capable cpus at projects which can utilize those features, my experience has shown, at least for GFN, that non-avx cpus don't necessarily have a comparative advantage on non-avx/fma3 projects. For example, my oldest cpus in production are e5452s, with all the power that sse4.1 can bring to bear. They can complete a gfn-19 unit in 50 hours, with all cores running, but a gfn-18 unit still takes 43 hours. On my e5520s, gfn-18 units actually take longer (34 vs. 27 hours).
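To put rough numbers on that comparative-advantage point, here is an illustrative back-of-the-envelope sketch using the e5452 timings quoted above (50 and 43 hours per task); the helper function exists only for this example:

```python
# Back-of-the-envelope check of the comparative-advantage point, using
# the e5452 timings quoted above: 50 h per GFN-19 task, 43 h per GFN-18
# task, assuming one task runs right after another.

HOURS_PER_WEEK = 7 * 24  # 168

def tasks_per_week(hours_per_task: float) -> float:
    """Tasks completed per week for a back-to-back task chain."""
    return HOURS_PER_WEEK / hours_per_task

gfn19 = tasks_per_week(50)  # 3.36 tasks/week
gfn18 = tasks_per_week(43)  # ~3.91 tasks/week

# GFN-18 tasks are much smaller than GFN-19 tasks, yet this CPU only
# completes about 16% more of them per week, so steering it toward the
# smaller range gains surprisingly little.
print(f"GFN-19: {gfn19:.2f}/wk, GFN-18: {gfn18:.2f}/wk, ratio {gfn18 / gfn19:.2f}")
```

The small ratio is the whole argument: on this hardware, the per-task time barely tracks the task size, so there is no strong reason to reserve the cpu for the smaller range.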

The instant challenge notwithstanding, with your hardware, I would run GFN-WR on the gpus, GFN-19 on the cpus, and perhaps experiment with other cpu projects to see if any are of interest.

Kelly Davies (Project donor)
Joined: 4 Apr 10
Posts: 106
ID: 58144
Credit: 6,049,388,504
RAC: 626,565
Message 91142 - Posted: 12 Jan 2016 | 16:37:16 UTC

Good post, thanks.

Have a look at Seventeen or Bust

Those long tasks could benefit from your fast CPUs.

Cheers
____________
My Lucky Number is 1893*2^1283297+1

Profile Tuna Ertemalp
Joined: 27 Mar 15
Posts: 45
ID: 388469
Credit: 1,208,836,762
RAC: 4,799,587
Message 91152 - Posted: 12 Jan 2016 | 19:57:49 UTC

Thank you for all the insights and suggestions and pointers. Keep 'em coming...

So far, the only "simple" action I took was to put all four GPUs on the one machine with dual Titan Zs into the Double Precision mode using NVIDIA Control Panel's "Manage 3D Settings" page, under "Global Settings". I hope this will help with better utilization of that machine by PrimeGrid. That option doesn't seem to be available in the NVidia driver's 361.43 Control Panel for Titan X (and, obviously, not for 580, either).

The next "simple and generic" thing to do seems to be to wait for the challenge to complete, and then set the global project preferences for my account to use only OpenCL for the Genefer projects and CPU for everything else, given the variety of hardware I have (from a Surface Pro 2 to Dual Titan Z). I don't care to maximize for credits; nowadays one can do that by just running BitcoinUtopia for a while, generating small sums of cash instead of science, for huge amounts of credit. I made an exception to my rule of "no BU" very recently to raise some cash for BOINCstats, and it already left a bad taste in my mouth despite the glut of credits I received. So much for that... I like getting useful science results, not fame for a particular result or mucho credits to top some list.

In general, my current understanding after all this is that a lot of fine tuning can be done at an individual machine/subproject level. Unfortunately, that is hard to do when you belong to the subset of BOINC users with lots of machines accumulated over the years, each at varying degrees of capability, trying to be fair by giving a slice of their time to every available project by setting each project's Resource Share to 100. The tools for that sort of fine-tuning are distributed across several places:

    - settings in the NVidia Control Panel,
    - turning CPU features on/off in the BIOS,
    - putting hosts in groups on each project's website,
    - selecting subprojects/apps for those groups,
    - using app_config.xml to further tweak app versions and parameters on each host for each project/subproject,
    - plus, a few projects and their fans have created "optimized apps" (e.g. for SETI) for certain CPUs, which you can select only if you know what you are doing and know that they exist,


all of which requires a deep/detailed understanding of all the apps the (sub)project has, gained by combing forums with thousands of messages in them... And you would need to revisit all of your tweaks/settings for each project/subproject on a given host if the hardware on that host ever changes! And you also need to keep track of any changes/additions to the stock/optimized apps for each (sub)project to tweak your settings on all your hosts, if need be! Phew... And some of these, like GPU/CPU settings, are global to the machine, and a global setting that is good for Project1/SubProjectA might not be good at all for a totally different Project50/SubProjectZ. For instance, should I leave my Titan Zs in Double Precision mode forever, even when they are switching between 50 projects every hour after my current PrimeGrid focus is over and my machines go back to being equal-opportunity CPU/GPU providers for all BOINC projects?

<WishfullRant>
The problem is that using BOINC should be simple: people sign up, click a few buttons to select as many projects as they want, then let it work and contribute at the maximum capacity of what they have without ever caring again. BOINC is already suffering from a lack of enthusiasm in the general public, and has slowly turned into a toolset used by those who really care about this or that project. That wasn't the intention back in the late 90s. This is not a PrimeGrid-specific comment, but the "set it and forget it" simplicity of BOINC that was supposed to attract the masses to donate their spare CPU/GPU cycles is slipping away.

I am aware that this is mostly outside the scope of each individual BOINC project, but I wish BOINC allowed a better "Project Preferences" page on the project site: an "Advanced" view, if you will, in addition to the current "Simple" view. Each project knows all the host machines and their capabilities. Each project knows its own subprojects, their requirements, and the available apps and parameters, stock or optimized. In an Advanced view of a project's Project Preferences page, one should be able to see a matrix of subprojects vs. hosts and, for each intersection, select the right stuff, with strong suggestions/presets by the site, which already knows (or should know!) what is best for using that host to the max, plus suggestions for anything that must be done manually in the BIOS/control panels because it cannot be done programmatically by the project apps setting CPU/GPU flags at runtime for the duration of their execution.

Such per-host presets driven by the project itself would bring back that "set it and forget it" spirit to use each host to the max while still allowing an even better way to fine-tune for the diehard enthusiast. Today's default options just leave too much spare capacity unused.
</WishfullRant>

Oh well...

Tuna

____________

Profile Michael Goetz (Project donor)
Volunteer moderator
Project administrator
Project scientist
Joined: 21 Jan 10
Posts: 12683
ID: 53948
Credit: 184,880,120
RAC: 197,398
The "Shut up already!" badge:  This loud mouth has mansplained on the forums over 10 thousand times!  Sheesh!!!Discovered the World's First GFN-19 prime!!!Discovered 1 mega primeFound 1 prime in the 2018 Tour de PrimesFound 1 prime in the 2019 Tour de Primes321 LLR Ruby: Earned 2,000,000 credits (2,063,182)Cullen LLR Ruby: Earned 2,000,000 credits (2,005,249)ESP LLR Ruby: Earned 2,000,000 credits (4,165,092)Generalized Cullen/Woodall LLR Ruby: Earned 2,000,000 credits (2,145,754)PPS LLR Ruby: Earned 2,000,000 credits (2,773,744)PSP LLR Ruby: Earned 2,000,000 credits (2,632,269)SoB LLR Sapphire: Earned 20,000,000 credits (34,158,496)SR5 LLR Turquoise: Earned 5,000,000 credits (8,293,415)SGS LLR Ruby: Earned 2,000,000 credits (2,012,222)TRP LLR Ruby: Earned 2,000,000 credits (2,737,347)Woodall LLR Ruby: Earned 2,000,000 credits (2,195,123)321 Sieve Turquoise: Earned 5,000,000 credits (5,046,112)Cullen/Woodall Sieve (suspended) Ruby: Earned 2,000,000 credits (4,170,256)Generalized Cullen/Woodall Sieve Turquoise: Earned 5,000,000 credits (5,059,304)PPS Sieve Sapphire: Earned 20,000,000 credits (20,110,788)Sierpinski (ESP/PSP/SoB) Sieve (suspended) Amethyst: Earned 1,000,000 credits (1,035,522)TRP Sieve (suspended) Ruby: Earned 2,000,000 credits (2,051,121)AP 26/27 Turquoise: Earned 5,000,000 credits (7,090,096)GFN Emerald: Earned 50,000,000 credits (64,594,991)PSA Jade: Earned 10,000,000 credits (10,540,036)
Message 91153 - Posted: 12 Jan 2016 | 20:17:42 UTC

"Set it and forget it", using "spare" cpu cycles on otherwise idle machines.

Vs.

Enthusiasts who build expensive, custom, optimized computing engines using state-of-the-art components specifically for computing on one or more projects.

We (and probably most projects) have a mix of both types, but the latter are usually the more vocal on the forums. We try to cater to both.

One thing which you might find helpful: PrimeGrid has a lot more "venues"/"locations" available than other projects. Whereas the standard BOINC server only supports 4 venues (--- / home / school / work), we've added 10 more. You can set up 14 different configurations for your computers, so it's easier to assign different sub-projects to different computers when you have more than 4 computers.

http://www.primegrid.com/prefs.php?subset=project&cols=1
____________
Please do not PM me with support questions. Ask on the forums instead. Thank you!

My lucky number is 75898524288+1

Profile Tuna Ertemalp
Joined: 27 Mar 15
Posts: 45
ID: 388469
Credit: 1,208,836,762
RAC: 4,799,587
Message 91154 - Posted: 12 Jan 2016 | 20:22:40 UTC - in response to Message 91153.

You can set up 14 different configurations for your computers, so it's easier to assign different sub-projects to different computers when you have more than 4 computers.


Yup, I had noticed that. :-) I will make use of it after the current challenge is over.

Tuna

Kelly Davies (Project donor)
Joined: 4 Apr 10
Posts: 106
ID: 58144
Credit: 6,049,388,504
RAC: 626,565
Message 91171 - Posted: 13 Jan 2016 | 2:53:42 UTC

Yeah, I have 7 computers atm, each with different capabilities, and I have a profile set up for each one. I am having fun right now juggling the different challenge tasks to optimize things for each computer. I had a couple of old tasks suspended for the challenge on one machine, which makes it trickier to babysit when it is ready for new work.

Normally, outside of the challenges, I run PPS Sieve on the GPUs and Seventeen or Bust on my faster CPUs (5-10 day turnaround), with some smaller tasks like SGS on my less capable machines (I might continue to run some GFN-Mega or one of the others on those lesser CPUs after the challenge).

It's a fun hobby here with the different sub-projects.
____________
My Lucky Number is 1893*2^1283297+1

Profile mikey
Joined: 17 Mar 09
Posts: 997
ID: 37043
Credit: 457,990,572
RAC: 66,461
Message 91179 - Posted: 13 Jan 2016 | 11:19:11 UTC - in response to Message 91171.

Yeah, I have 7 computers atm, each with different capabilities, and I have a profile set up for each one. I am having fun right now juggling the different challenge tasks to optimize things for each computer. I had a couple of old tasks suspended for the challenge on one machine, which makes it trickier to babysit when it is ready for new work.

Normally, outside of the challenges, I run PPS Sieve on the GPUs and Seventeen or Bust on my faster CPUs (5-10 day turnaround), with some smaller tasks like SGS on my less capable machines (I might continue to run some GFN-Mega or one of the others on those lesser CPUs after the challenge).

It's a fun hobby here with the different sub-projects.


With the 14 venues you have access to here, you could set up one of them just for challenges, modify it for each challenge, move each computer to it when you run a challenge, and then move it back to its normal venue when the challenge is over. That should cut down on some of the changes.

Profile Andrew Hughes
Volunteer tester
Joined: 19 Feb 15
Posts: 71
ID: 381849
Credit: 78,325,605
RAC: 0
Message 92625 - Posted: 2 Mar 2016 | 18:46:08 UTC

I don't see an option for double precision mode with my Titan X, where do you guys see it? I'm running nvidia control panel v8.1.800.0, and also have a 780ti in the machine

Thanks!

Profile boss (Project donor)
Joined: 27 Apr 11
Posts: 18
ID: 96592
Credit: 513,476,327
RAC: 43,720
Message 92633 - Posted: 2 Mar 2016 | 19:14:50 UTC - in response to Message 92625.

I don't see an option for double precision mode with my Titan X, where do you guys see it? I'm running nvidia control panel v8.1.800.0, and also have a 780ti in the machine

Thanks!

Titan X (Maxwell GM200) has no special DP mode like the old Titan (Kepler GK110)

Profile Andrew Hughes
Volunteer tester
Joined: 19 Feb 15
Posts: 71
ID: 381849
Credit: 78,325,605
RAC: 0
Message 92700 - Posted: 3 Mar 2016 | 6:21:02 UTC - in response to Message 92633.

I don't see an option for double precision mode with my Titan X, where do you guys see it? I'm running nvidia control panel v8.1.800.0, and also have a 780ti in the machine

Thanks!

Titan X (Maxwell GM200) has no special DP mode like the old Titan (Kepler GK110)

Thanks!

Copyright © 2005 - 2019 Rytis Slatkevičius (contact) and PrimeGrid community