Message boards : Number crunching : How many GPUs
I'm wondering if I can run multiple GPUs of different models, like an Nvidia GTX Titan Black and an EVGA GeForce GTX 780 Ti?
Thanks, Lonnie
I'm wondering if I can run multiple GPUs of different models, like an Nvidia GTX Titan Black and an EVGA GeForce GTX 780 Ti?
Thanks, Lonnie
I believe you can. I did this recently with two EVGA Kepler cards: a GTX TITAN and a GTX TITAN Black Superclocked in the primary PCI-E x16 slot. The only hiccup was that BOINC showed this machine as having two TITAN Black SCs rather than two slightly different models.
These cards are processing tasks as specified in my PrimeGrid preferences. I don't know of a way to assign different subprojects to each card, but that isn't a problem for me.
Yes, I have a mixture of brands and models in my machines:
780 Tis, 780s, 770, 750 Ti, 650 Tis; Zotac, EVGA, Asus;
various combinations of 2-4 GPUs.
And yes, it reports them all as being one type. So BOINC might report that you have two Titans or two 780 Tis, but it will run just fine. This skews the Fastest GPU page results somewhat.
I also had space to stick 430s into the PCI1 slots in one machine, but couldn't get that to work. I had read that it should work, but no go.
____________
My Lucky Number is 1893*2^1283297+1
tng
Joined: 29 Aug 10  Posts: 499  ID: 66603  Credit: 50,810,785,652  RAC: 31,726,137
I'm wondering if I can run multiple GPUs of different models, like an Nvidia GTX Titan Black and an EVGA GeForce GTX 780 Ti?
Thanks, Lonnie
I believe you can. I did this recently with two EVGA Kepler cards: a GTX TITAN and a GTX TITAN Black Superclocked in the primary PCI-E x16 slot. The only hiccup was that BOINC showed this machine as having two TITAN Black SCs rather than two slightly different models.
BOINC does that. I had it happen with a GTX 480 and a GTX 580: it looked like I had two 480s, one incredibly fast. I don't know if it does that all the time, but it doesn't cause any problems.
If the cards are very different in capabilities, you might need to set use_all_gpus in a cc_config.xml file, but I would guess that won't be a problem with those two cards.
Of course, power and cooling could be issues, but presumably you have thought of that.
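For anyone who hasn't used it, the use_all_gpus flag is a documented BOINC client option; a minimal cc_config.xml looks like this (the file goes in the BOINC data directory, and the client needs a restart afterwards):

```xml
<cc_config>
  <options>
    <!-- Use every GPU in the system, not just the most capable one(s). -->
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
```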
____________
When trying to cram as many GPUs as you can into a system, it can also come down to the maximum number of PCI-E lanes available to your chipset/processor, and how those lanes can be divided. You could search for your chipset or CPU and its available lane configurations; this will probably tell you what will and won't work. For example, 1x16 and 2x8 might be fine, but 3x[whatever] isn't supported; or the cards' demands simply don't match any of the "divides" your chipset and CPU support.
So you might be safe if you've forked out for a motherboard certified for 4x SLI/Crossfire, because of the sheer number of lane-dividing configurations it supports. But if you've got one certified to "only" support 3x SLI/Crossfire, that doesn't mean it will take three different (or even two different) modern graphics cards, even if you have the power and PCI-E slots (and, on paper, the software) to accommodate them.
For example, here is how a Z97 Haswell/Broadwell system can adapt:
[Z97 chipset block diagram]
...And here's how an X99 Xeon system can adapt:
[X99 chipset block diagram]
Obviously I'm talking about the top-left bit. So, with an ultra-expensive Xeon setup at home, this could be an untapped goldmine for the use of old(er) GPUs. It's quite a rare scenario, though, I suppose.
The Z97 diagram is a bit misleading, because loads of us have DDR3 clocked over 1600 MHz; and in the future (possibly now, actually) the X99 diagram could become a bit dated, because available DDR4 modules will of course increase in speed (and hopefully the chipset/Xeon/BIOS won't complain; that's more of a concern considering it's aimed at the workplace, not power users). The quad-channel nature of the Xeon is obviously a huge advantage for crunching here, no matter what speed the DDR4 is. Also, I don't think DDR4 is currently faster "by nature", either; it's just physically denser and more power-efficient.
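To make the "divides" idea concrete, here is a minimal sketch of the kind of check involved. The supported splits listed below are illustrative stand-ins, not the real lane tables for any particular chipset or CPU; you'd substitute the configurations from your own board's manual.

```python
# Illustrative: does any supported CPU-lane split offer enough slots
# for the number of cards you want to install? The split tables here
# are made-up examples, not vendor data.

SUPPORTED_SPLITS = {
    "Z97 (16 CPU lanes)": [(16,), (8, 8), (8, 4, 4)],
    "X99 (40 CPU lanes)": [(16, 16, 8), (16, 8, 8, 8), (8, 8, 8, 8, 8)],
}

def can_host(platform: str, cards: int) -> bool:
    """True if some supported split provides at least `cards` slots."""
    return any(len(split) >= cards for split in SUPPORTED_SPLITS[platform])

print(can_host("Z97 (16 CPU lanes)", 3))  # True: the 8/4/4 split fits
print(can_host("Z97 (16 CPU lanes)", 4))  # False: no 4-way split exists
```

The real-world version of this check is exactly what the motherboard manual's "PCI Express expansion" table encodes.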
Thanks everyone for the info. Second question: what would be the best OS? I do want to use a Windows system. This computer is a motherboard with two CPUs... and seven video cards...
It is for PrimeGrid only... I can't afford to run it all the time with a 1000-watt PSU.
My motherboard is an EVGA 270-se-w888 with 7 slots...
Can anyone tell me what this is?
Lonnie
mailbox:///C:/Users/Lonnie%20Christensen/AppData/Roaming/Mozilla/SeaMonkey/Profiles/hr5orudz.default/Mail/mail.wavecable-1.com/Inbox?number=3453307&part=1.2&type=image/jpeg&filename=IMG_0486.JPG
JimB (Honorary cruncher)
Joined: 4 Aug 11  Posts: 920  ID: 107307  Credit: 989,553,981  RAC: 23,780
Can anyone tell me what this is?
Lonnie
mailbox:///C:/Users/Lonnie%20Christensen/AppData/Roaming/Mozilla/SeaMonkey/Profiles/hr5orudz.default/Mail/mail.wavecable-1.com/Inbox?number=3453307&part=1.2&type=image/jpeg&filename=IMG_0486.JPG
You're trying to link us to something that exists on your C: drive. Sorry, but we're not able to see that.
I'm currently building a 3-GPU system on a Z97 Asus Pro (Wi-Fi), Linux or Win 8.1; the motherboard has yet to detect any of the GPUs (GTX 970 / GTX 750 / GT 630) for the OS.
Tried Linux on a USB stick (never got past POST) for the option to install. Switched over to the Win 8.1 installation and all went well, except the motherboard can't detect the GPUs, in UEFI mode or not.
Any ideas?
A side note: Haswell's "rated" TDP is never anywhere near the supposed number. A 65 W TDP Haswell draws 89 watts on four cores, 63 W on 3, 45 W on 2. Never run a stock cooler fan while computing LLR!
mikey
Joined: 17 Mar 09  Posts: 1895  ID: 37043  Credit: 825,161,243  RAC: 574,098
I'm currently building a 3-GPU system on a Z97 Asus Pro (Wi-Fi), Linux or Win 8.1; the motherboard has yet to detect any of the GPUs (GTX 970 / GTX 750 / GT 630) for the OS.
Tried Linux on a USB stick (never got past POST) for the option to install. Switched over to the Win 8.1 installation and all went well, except the motherboard can't detect the GPUs, in UEFI mode or not.
Any ideas?
A side note: Haswell's "rated" TDP is never anywhere near the supposed number. A 65 W TDP Haswell draws 89 watts on four cores, 63 W on 3, 45 W on 2. Never run a stock cooler fan while computing LLR!
No ideas about Linux that aren't outdated, but in Windows always start with one GPU, then add the 2nd, then add the third. Windows has a bad habit of disabling GPUs on start-up if nothing is plugged into them; starting with a single GPU gets around that problem, which can then be solved by a "dummy plug" and then, in BOINC, by a cc_config.xml file that tells BOINC to use all GPUs.
You could also try a free Windows 10 trial. In my experience it is NOT ready for primetime, and the 10041 build has some big problems, which are reportedly fixed in later builds, but those haven't been pushed out to me yet. I use it just fine on a BOINC-only machine; on an everyday machine it would NOT be my choice, though!
I'm currently building a 3-GPU system on a Z97 Asus Pro (Wi-Fi), Linux or Win 8.1; the motherboard has yet to detect any of the GPUs (GTX 970 / GTX 750 / GT 630) for the OS.
Tried Linux on a USB stick (never got past POST) for the option to install. Switched over to the Win 8.1 installation and all went well, except the motherboard can't detect the GPUs, in UEFI mode or not.
Any ideas?
A side note: Haswell's "rated" TDP is never anywhere near the supposed number. A 65 W TDP Haswell draws 89 watts on four cores, 63 W on 3, 45 W on 2. Never run a stock cooler fan while computing LLR!
No ideas about Linux that aren't outdated, but in Windows always start with one GPU, then add the 2nd, then add the third. Windows has a bad habit of disabling GPUs on start-up if nothing is plugged into them; starting with a single GPU gets around that problem, which can then be solved by a "dummy plug" and then, in BOINC, by a cc_config.xml file that tells BOINC to use all GPUs.
You could also try a free Windows 10 trial. In my experience it is NOT ready for primetime, and the 10041 build has some big problems, which are reportedly fixed in later builds, but those haven't been pushed out to me yet. I use it just fine on a BOINC-only machine; on an everyday machine it would NOT be my choice, though!
After updating chipset/BIOS/Win 8.1 drivers: still no GPU detection. (The Nvidia driver installer says it "can't find any compatible hardware.") Each graphics card has a monitor connected: VGA for the GT 630, HDMI for both the 750 and 970. Tried putting the GPUs into different PCI-E slots. Neither the BIOS settings nor Device Manager sees any of the GPUs, no matter what. There is power going to them, as the fans spin on the 970 (the 750/630 are heatsink-only). Pulling the 630/750 out while the system is on resets the system into the BIOS with a power error.
Murphy's law:
The USB memory for Linux was corrupt, which explains the dead install.
Scott Brown (Volunteer moderator, Project administrator, Volunteer tester, Project scientist)
Joined: 17 Oct 05  Posts: 2417  ID: 1178  Credit: 20,020,905,538  RAC: 20,399,133
This might be going on because the system thinks you are planning to SLI, but some of your cards--the 630 for sure and maybe the 750--cannot SLI. I would double check and make sure that all SLI (hardware and software) is set to the off position.
Also, if the board has onboard GPU, then make sure that the onboard GPU is set to off as well.
I would also try the 970 by itself first (with the system powered off to install/uninstall cards). Then add the other two cards one at a time.
Pulling the 630/750 out while the system is on resets the system into the BIOS with a power error.
I would shut the system down and turn off the master switch on the power supply unit before adding or removing graphics cards. I don't think they are meant to be hot-swappable in the PCI-E slots.
This might be going on because the system thinks you are planning to SLI, but some of your cards--the 630 for sure and maybe the 750--cannot SLI. I would double check and make sure that all SLI (hardware and software) is set to the off position.
Also, if the board has onboard GPU, then make sure that the onboard GPU is set to off as well.
I would also try the 970 by itself first (with the system powered off to install/uninstall cards). Then add the other two cards one at a time.
Leaning towards a bad Z97 motherboard. I have a spare Z87 mITX board with one PCI-E slot to test the 970/750, to make sure the pins are not messed up on the GPUs before I RMA.
Jumped the RTC RAM and reset the CMOS. Configured many BIOS settings. The Nvidia drivers can't see any of the GPUs to engage. (When I try to install drivers, the first step, "checking compatibility," errors out to a hardware message: no GPUs are found, no matter which GPU or how many slots are filled and where.)
The Intel iGPU was disabled for the first half while moving GPUs between PCI-E slots. (Tried the 970 alone.) When the Intel was enabled, each card went into PCI-E either alone, two at a time, or all three, to see if any would be seen by the BIOS or OS. The GT 630 was seen in the 2nd PCI-E slot, but only by the BIOS, not the OS. As of now, the OS and BIOS are blind to the 750 and 970 (monitors plugged in). I've disabled the Intel iGPU again to see if the Nvidia GPUs alone are seen after a few more PCI-E BIOS changes.
I would shut the system down and turn off master switch on power supply unit before adding or removing graphics cards. I don't think they are meant to be hot swappable in the PCI-E slots.
Yes, I have the PSU power unplugged with the master switch off when cards are changed around. I pulled them out to confirm power from PCI-E was active.
I agree with Wingless Wonder, though: you should never pull out PCI-E cards with the system switched on. There's an awful lot of amperage going to those cards, and obviously not just from the slot itself. I'm not sure about the 630 requiring PCI-E cables from the PSU, but the 750 surely does? If you were omitting plugging those in too, that complicates things further.
I know it's a bit late to be saying that now; it's just something which should probably be avoided in any situation.
Michael Goetz (Volunteer moderator, Project administrator)
Joined: 21 Jan 10  Posts: 14037  ID: 53948  Credit: 477,144,468  RAC: 289,049
I know others have said it, but for the benefit of anyone reading this thread: you should never, under any circumstances, insert into or remove (almost *) anything from a motherboard unless it is powered off and the power cord is disconnected. Permanent damage, including total failure, of the motherboard or anything connected to the motherboard (CPU, RAM, power supply, etc.) may result.
(*) The sole exceptions are components explicitly designed to be removed while powered on, which include USB connections, SATA devices, and anything plugged into an external connector. Also, note that while it's "safe" to hot-swap a SATA device, that only means the hardware won't be damaged. The data may be corrupted unless proper device shutdown procedures are followed, e.g., "eject hardware" under Windows.
____________
My lucky number is 75898^524288+1
After much tinkering I've ruled out a bad motherboard: the BIOS now sees all three GPUs (in slots 1/2/3, no matter the GPUs' order) with the Intel iGPU enabled or disabled, although the OS is still unaware of all three.
I agree with Wingless Wonder, though: you should never pull out PCI-E cards with the system switched on. There's an awful lot of amperage going to those cards, and obviously not just from the slot itself. I'm not sure about the 630 requiring PCI-E cables from the PSU, but the 750 surely does? If you were omitting plugging those in too, that complicates things further.
I know it's a bit late to be saying that now; it's just something which should probably be avoided in any situation.
I was overzealous and forgot protocol. Fortunately, nothing (the GT 630) was blown while making this (potentially costly) mistake.
The 25 W, 384-CUDA-core GT 630 has no 6/8-pin connector. The GTX 750 has a 6-pin. The 970 has two 8-pins. I have custom VBIOSes for both the 750 and 970. (Once I get the OS to see the Maxwell GPUs, a flash is in order.) The 4+1 750 is locked at 38.5 W yet capable of a 150% power target with a redone VBIOS. The non-reference Zotac 970, with an 8+3+2 power-phase PCB, is locked at 125%, but with a custom golden sample (F3) BIOS it can reach a 150% power target within a 225-250 W limit.
I'm impressed with the S-series Haswell. Having XMP enabled at 2133 C9 RAM speeds up processing times. Liquid cooling, or top-notch air, is a must for long-term LLR/Genefer compute on any Haswell.
Can you tell us which PSU you're using, please?
Can you tell us which PSU you're using, please?
Rosewill Fortress (Platinum) 750-Watt Active PFC
I think you've probably just about covered your system for those 3 cards plus the core components, then. The thing to look out for is not the rated power draw in watts, or the minimum wattage of the recommended PSU, but the maximum number of amps all three might want at the same time at 12 V. If that exceeds your PSU's maximum 12 V amperage rating through the PCI-E connectors (the cables/plugs, not what the slot itself can provide), then problems could arise. With the motherboard, too.
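As a rough sketch of that 12 V budgeting (the per-card wattages below are illustrative guesses, not measured figures for these specific cards):

```python
# Rough 12 V budget check: sum each card's estimated worst-case 12 V draw
# in watts, convert to amps, and compare against the PSU's 12 V rail rating.

RAIL_VOLTS = 12.0

def rail_amps(card_watts, psu_rail_amps):
    """Return (total_amps, headroom_amps) for the 12 V rail."""
    total = sum(card_watts) / RAIL_VOLTS
    return total, psu_rail_amps - total

# Guessed figures: GTX 970 at a raised power target (~250 W),
# GTX 750 (~60 W), GT 630 (~25 W); 750 W PSU rated 62.5 A at 12 V.
total, headroom = rail_amps([250, 60, 25], psu_rail_amps=62.5)
print(f"{total:.1f} A of 62.5 A used, {headroom:.1f} A headroom")
# prints: 27.9 A of 62.5 A used, 34.6 A headroom
```

A real budget would also reserve 12 V capacity for the CPU and motherboard, which draw from the same rail on most modern single-rail PSUs.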
I think you've probably just about covered your system for those 3 cards plus the core components, then. The thing to look out for is not the rated power draw in watts, or the minimum wattage of the recommended PSU, but the maximum number of amps all three might want at the same time at 12 V. If that exceeds your PSU's maximum 12 V amperage rating through the PCI-E connectors (the cables/plugs, not what the slot itself can provide), then problems could arise. With the motherboard, too.
Indeed; my PSU's 12 V single rail is rated for 62.5 A. If I added another 970 alongside the 750/970, the amp limit would be close if all GPUs ran over a 125% power target. (The 750/630 would go on another motherboard if a second 970 is added.) Most (6+2) 970s require 19-27 A running full bore at 125%. The non-reference, overclocked 125-150% Zotac equates to 25-35 A. 12 V single-rail 750-1000 W PSUs are rated at 45-85 A, while 1000-1600 W units are 90-140 A.
Signs point to UEFI being the root of my problems: no matter what I choose, the system always boots into legacy mode (press Shift while restarting, or run shutdown.exe /r /o). The advanced Win 8.1 UEFI options are missing (my other Win 8.1 machine has them). The OS is blind to any of the GPUs, even though the BIOS shows each card and how many lanes are being run (8x/8x/4x from the chipset, and the CPU connect is 8x/4x/4x).
Installing Win 10 as a Hail Mary to see if UEFI makes a difference. As I don't have a working USB stick, I can't test Linux today. OS footprint size is important, as this machine will be for crunching only.