Will This 5 GPU Setup Work?

Hey guys -

Gonna seek some more advice on tech specs. I enquired a few weeks ago about whether to spend a crapload of cash to upgrade to a 2950X system, and the consensus was that my i7 5930K rig was still adequate. So thank you for that. And this was confirmed in the benchmark thread when I saw that my 2 x 2080 Ti setup was actually a couple of seconds faster than an i9 9900K, also with 2 x 2080 Tis. I suspect the fact that my 2080 Tis are the OC gaming version is responsible for those 2 secs.

 

The other reason I wanted to upgrade was to make use of 3 GTX 970s I still have, but the 2080 Tis just make it too tight on my existing X99 Deluxe MB. It has 5 slots, but it's just too cramped. So now I'm wondering: what if I got 2 PCIe risers for the 2 top PCIe slots and mounted the 2080 Tis externally, while installing the 970s in the lower 3 slots on the MB? See image below.

You guys think this would work? If it does, will the 40 PCIe lanes on the i7 be enough? I remember reading that the difference in performance between x16 and x8 was minimal, but how about x4?

 

[Image: 5 gpu rig.png]

 

Comments

  • kenshaw011267 Posts: 3,805

    How many PCIe power connectors does your PSU provide? The 2080 Tis each need 2, and most (all?) 970s need 1, so you might need a new PSU to supply 7 PCIe power lines. (There's a quick back-of-the-envelope check at the end of this post.)

    As to putting GPUs on risers, PCIe bus width is mostly irrelevant. They could all be x1 and it would just take a little longer for the render to start. Do make sure the one you want driving your monitor is on at least an x8 slot, or you'll have trouble even browsing the web.

    Other than the above, assuming you're comfortable making some sort of DIY holder/mount for the cards, this should work fine.
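
    A quick back-of-the-envelope check of the connector and wattage math (illustrative Python; the per-card connector counts and TDPs are assumptions based on typical reference cards, so verify against your actual hardware and the PSU spec sheet):

```python
# Illustrative only: rough count of PCIe power connectors and wattage
# for the proposed 5-GPU rig. TDPs and connector counts are assumptions
# based on typical reference cards -- check your actual cards and PSU.

cards = {
    # name: (how many, PCIe power connectors each, approx TDP in watts)
    "RTX 2080 Ti": (2, 2, 260),
    "GTX 970":     (3, 1, 145),
}

total_connectors = sum(n * conn for n, conn, _tdp in cards.values())
total_gpu_watts = sum(n * tdp for n, _conn, tdp in cards.values())

print(f"PCIe power connectors needed: {total_connectors}")   # 7
print(f"Approx. combined GPU draw:    {total_gpu_watts} W")   # 955 W

# Add ~150-200 W for the CPU, board, and drives, plus headroom for
# spikes, and a single high-wattage unit or two PSUs looks necessary.
```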

  • rrward Posts: 556

    While that will work, you probably don't want to unless your scenes are tiny. Those 970s have 3.5GB of VRAM (they say 4, but that last 0.5 is slower than dirt). If your scene ever goes above 4GB then the 970s are going to cut out and you'll be back on just the 2080s. There is some "concern" about mixing CUDA generations, but I don't know how valid those concerns actually are.

  • Robinson Posts: 751

    It'll probably be OK with the risers. The main problem (so I read) is that in multi-card setups with standard desktop parts, one card blows hot air down onto the other. This is why cards like Quadros tend to be blowers: the air gets ejected directly out of the case through the back plate. Should be OK with 2, though.

  • kenshaw011267 Posts: 3,805
    rrward said:

    While that will work, you probably don't want to unless your scenes are tiny. Those 970s have 3.5GB of VRAM (they say 4, but that last 0.5 is slower than dirt). If your scene ever goes above 4GB then the 970s are going to cut out and you'll be back on just the 2080s. There is some "concern" about mixing CUDA generations, but I don't know how valid those concerns actually are.

    I'm mixing a 1080ti with a 2070. I've had no problems.

  • outrider42 Posts: 3,679
    rrward said:

    While that will work, you probably don't want to unless your scenes are tiny. Those 970s have 3.5GB of VRAM (they say 4, but that last 0.5 is slower than dirt). If your scene ever goes above 4GB then the 970s are going to cut out and you'll be back on just the 2080s. There is some "concern" about mixing CUDA generations, but I don't know how valid those concerns actually are.

    Hey now, I had a 970 and I can tell you that for Daz it would use every byte of that 4GB buffer. You need to understand how Daz Iray works; it is not a video game. Iray is not swapping data in and out like a game does, so the slower memory is not a factor at all. It might affect the amount of time it takes for the scene to load into VRAM, but only by a fraction of a second. Iray loads the entire scene into VRAM; once it loads, the GPU goes to work and renders at full speed. Don't be spreading false information.

    I see no problem with using different generations of cards. I have mixed Kepler, Maxwell, and Pascal at different times. It doesn't matter. What matters more is the amount of VRAM in each card: using mismatched cards results in the smaller cards dropping out when they cannot fit the scene.

    Cards with 11GB are not a great match with 4GB cards. Maybe if the 970 had 8GB (which doesn't exist) it would be a better fit, but 4GB is very limiting. Only smaller scenes would run on all the cards at once.

    The biggest negative here is the power draw. Running 5 GPUs will require serious power. While your power supply may have enough total watts, I highly doubt it can support 5 GPUs at once; it almost certainly doesn't have enough connectors for 5 cards. Actually, most power supplies only support 2 or 3 GPUs. You would need to buy a second power supply just to do this! The issue is the 12V rail.

    My power supply is 1000 watts, and it only has connectors for 3 GPUs, and it's a pretty decent power supply. 5 GPUs would be going beyond desktop standards and require more.

    Even if you have all the power, can your CPU run 5 GPUs? Of this I am not sure.

    And after all of these hoops to jump through, I really doubt it is worth doing. The 970 is getting old now; 3 of them put together are not going to be faster than one 2080 Ti, not even close. I don't think 3 would even match a 1080 Ti, maybe. When you run a lot of GPUs, you get diminishing returns in Iray: three 970s are not going to render exactly 3 times as fast as one 970. There is some loss here, so running 5 GPUs together would be even less efficient.
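
    To illustrate the diminishing-returns point, here's a toy model (the relative speeds and the ~5% overhead per extra card are made-up assumptions for illustration, not measured Iray numbers):

```python
# Illustrative only: a toy model of diminishing returns when adding
# GPUs to an Iray render. The relative speeds and the ~5% overhead per
# extra card are assumptions for illustration, not measured numbers.

def combined_speed(card_speeds, overhead_per_extra_card=0.05):
    """Naive estimate: individual speeds add, minus a flat per-card penalty."""
    raw = sum(card_speeds)
    penalty = 1.0 - overhead_per_extra_card * (len(card_speeds) - 1)
    return raw * max(penalty, 0.0)

# Relative render speeds, with one 2080 Ti = 1.0 and a 970 guessed at 0.15.
print(combined_speed([0.15, 0.15, 0.15]))  # ~0.41 -- well short of one 2080 Ti
```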

  • kenshaw011267 Posts: 3,805
    edited June 2019

    There are some high-end ATX PSUs with enough PCIe connectors.

    This 1300W unit has 8:

    https://www.newegg.com/p/N82E16817256195

    Generally speaking, the higher the 80+ rating and the higher the wattage, the more PCIe connectors.

  • nicstt Posts: 11,715

    Would it work: yes.

    Personally, that isn't the question you should be asking; I think you should be asking: "Is it a good idea?"

    Hell no.

    Outrider talks about mixing 4GB and 11GB cards; I wouldn't waste your time. If you need 11GB, then the 4GB cards are going to get ignored the majority of the time; they're going to use your electricity and contribute nothing most of the time. They may even add a little heat, potentially adversely affecting the performance of the cards doing the work.

    I use a 970 to run my monitors; I very, very rarely add it to the render: noise and additional heat, and it gets dropped anyway. This is alongside a 980 Ti.

     

  • Kitsumo Posts: 1,221

    I think they'll work fine. Instead of upgrading to a more expensive power supply, you could just buy a second one. There are power supply master/slave cables (that power up the 2nd unit when the 1st is turned on), but I bought one and it didn't work. After a quick web search I figured out that I could use a jumper wire to activate the 2nd power supply. That worked fine; I just have to turn that PSU on and off manually, but it's no big deal.

    Mixing CUDA cores from different generations doesn't matter; that's the whole point of CUDA. If you were talking about a Fermi card, then it might be an issue, because apparently support for those is being dropped from Iray. The fact that it's a 4GB card shouldn't matter either. If you're rendering a small scene, the 4GB card will participate; if it doesn't have enough VRAM, it will just sit there idle, but it's not going to slow anything down. It'll generate as much heat as any idle card. (There's a quick way to check per-card VRAM usage sketched at the end of this post.)

    As others have pointed out, risers work fine; the number of PCIe lanes to each card doesn't matter much for render times. I would encourage moving the cards to an external enclosure, mostly for cooling purposes and especially if using 2 power supplies. Using a milk crate worked fine for me, but if you want something more professional-looking, there are plenty of mining setups for sale cheap on Amazon and Newegg.

    P.S. to the original poster: we can't see the pic you posted.
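
    For anyone who wants to check which cards are actually holding a scene, here's a sketch using the pynvml bindings (assumes the NVIDIA driver is installed and `pip install nvidia-ml-py`); run it while a render is going:

```python
# Sketch: list each GPU's used/total VRAM via NVML (pip install nvidia-ml-py).
# Run it during a render to see which cards are actually holding the scene.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):  # older bindings return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i} {name}: "
              f"{mem.used / 2**30:.1f} / {mem.total / 2**30:.1f} GiB used")
finally:
    pynvml.nvmlShutdown()
```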

  • I've run multi-GPU setups for Iray before. I had 1 in the machine and 4 mounted on an Amfeltec quad-GPU cluster, which is an open-air frame with 4 PCIe slots on a board, 2 on top, 2 on bottom. Worked great while it worked, but being open-air, it's easy to fry with a small static discharge.

    I also ran an Akitio ThunderII via Thunderbolt. It's not really intended for GPUs, and the box is too small to hold anything bigger than a SoundBlaster audio card or a DVD drive. I took the card out of mine and hooked it up to a Titan Z (triple-wide), letting the GPU rest upside down on the box since it was wide enough, and that worked.

    Right now I've got a Quadro K4000 running the primary display, with a Titan X (Pascal) in slot 2 handling Iray, and it runs off of an external PSU.

    And it's advisable to have more than one PSU. For the 970s, a single PSU could probably power 2 of them easily, with room to spare. Don't go too cheap and try to run one PSU at max with 4 GPUs, and don't get "just what you need" to run 2 of them; get one a bit bigger to cover the spikes (see the rough sizing sketch at the end of this post).

    Having to turn a second PSU on and off manually isn't a chore. It's not even remotely inconvenient.

    One issue you may encounter, however, is Windows' limit on how many GPUs it can see, and how many resources your motherboard can assign to them. Code 12 errors are common.
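
    As a rough illustration of the "leave headroom" advice (the TDP figure is the reference-card number and the 1.5x factor is a rule-of-thumb assumption, not a spec):

```python
# Illustrative only: rule-of-thumb PSU sizing for two GTX 970s.
# ~145 W is the reference-card TDP; 1.5x headroom is a common
# rule of thumb for covering spikes, not a specification.
gpu_tdp_watts = 145
n_gpus = 2
headroom = 1.5
print(f"Suggested PSU: at least {gpu_tdp_watts * n_gpus * headroom:.0f} W")  # 435 W
```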

  • mechengineer97 Posts: 116
    edited June 2019

    Hey guys, thanks for all your input. 

    Here are my results from the benchmark thread from when I had only the single 2080 Ti and 2 of the 970s, which I should have included in my first post.

    Here's an online image of the one that didn't display in my first post.

    https://www.freeimage.us/share-15D4_5CFC6064.html

    I went through all the scenes I've done since January trying to determine how many of them exceeded 4GB of VRAM. One scene stuck out; it had a crapload of chrome, glass, and fine brick texture, and took a couple of hours to render. I removed one of the 2080 Tis and reinstalled the 2 970s to do a bit of testing. Typically my scenes render in 10 to 12 secs at most and rarely have avatars, but if they do, it's from a distance. This scene took 21s to render a frame using all 3 cards and 29s with just the 2080 Ti.

    Next, I duplicated everything twice so there was 3x the number of these surfaces and re-ran the test. The single 2080 Ti took over twice as long to complete, while the 3-GPU setup was only about 2 secs faster than it. I suspect that after tripling everything, the scene was at the 970s' VRAM threshold.

    But on average, it looks like the 2 970s helped reduce rendering time by 35% to 40%. Maybe 50% if I added the 3rd 970, getting close to a 1080 Ti? Or a 25% improvement alongside the 2 2080 Tis.

     

    I ended up grabbing 4 PCIe risers and a Gigabyte 700W PSU that was on sale for $70, spending a total of $200. I ran the project from the benchmark thread again.

    So the 2 2080 Tis by themselves ran it in 33 sec,

    and the 2 2080 Tis plus the 3 GTX 970s ran it in 25 sec, so adding the 970s improved the render time by about 25%.
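
    For reference, the percentage math behind the numbers in this post (a trivial Python check):

```python
# Trivial check of the render-time improvements quoted above.
def improvement(before_s, after_s):
    return (before_s - after_s) / before_s * 100

print(f"{improvement(29, 21):.0f}%")  # one 2080 Ti -> plus two 970s: ~28%
print(f"{improvement(33, 25):.0f}%")  # two 2080 Tis -> plus three 970s: ~24%
```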

     

  • jmtbank Posts: 187
    edited June 2019

    Edit 2: And it might be a bad assumption that having RTX on for one card will stop the 970s working!?

    That's a beast of a motherboard.

     

    As soon as Daz updates to this year's Iray version, the 970s will halve the 2080s' speed, or worse. [Edit: the 2080 is assumed(?) to double in speed when RTX is turned on.]

    We don't know when the update will happen, sadly.

    Edit: I'd be interested in how much power that's pulling at the wall :D

     

  • outrider42 Posts: 3,679
    jmtbank said:

    Edit 2: And it might be a bad assumption that having RTX on for one card will stop the 970s working!?

    That's a beast of a motherboard.

     

    As soon as Daz updates to this year's Iray version, the 970s will halve the 2080s' speed, or worse. [Edit: the 2080 is assumed(?) to double in speed when RTX is turned on.]

    We don't know when the update will happen, sadly.

    Edit: I'd be interested in how much power that's pulling at the wall :D

     

    Why? I would expect RT cores to combine across cards just like CUDA cores do. If a card doesn't have them, it should be OK. That's just my thought, though.