How to Build a quad-GPU rig for monster Iray performance (and animating)


  • PA_ThePhilosopherPA_ThePhilosopher Posts: 1,039
    edited May 2017
    fastbike1 said:

    @PA_ThePhilosopher   "You can connect them either in series or parallel. I prefer connecting them in series, since the D5 pump is powerful enough"

    This would allow doing without another block, but potentially increases temperatures in the last GPU. It may not matter much. Do you see any temperature difference between GPU 1 and 4? Seems like parallel plumbing might be simpler?

    Hey fastbike,

    When I was building my system, I too was worried that connecting them in series would mean that the last GPU would run hotter than the first. But this is a myth which has been debunked by a few people. There is no difference in temperatures between the first and last GPU. Since the water is flowing at such a high rate, only a fraction of a second passes between GPU1 and GPU4; it simply doesn't have enough time to heat up to any significant degree.

    As for the series vs. parallel debate, this ultimately comes down to a matter of preference, since there is really no difference in temperatures between the two. Even so, I still prefer series over parallel (as long as your pump is strong enough and there is not too much resistance in your loop), because I like knowing 1) that 100% of the water will pass through each GPU, vs. who knows what, and 2) that the flow rate over each GPU will also be 100%, vs. 50% or less.
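A rough back-of-the-envelope check supports this. Assuming about 250 W of heat per GPU and a D5-class flow rate of roughly 3.8 L/min (both assumed figures, not measurements from this build), the coolant warms by less than 1 °C per block:

```python
# Rough estimate of coolant temperature rise across a series GPU loop.
# Assumed numbers: ~250 W heat per GPU, ~3.8 L/min (~1 GPM) loop flow.

WATER_CP = 4186.0      # specific heat of water, J/(kg*K)
FLOW_LPM = 3.8         # assumed loop flow rate, litres per minute
GPU_WATTS = 250.0      # assumed heat dumped by each GPU, watts

mass_flow = FLOW_LPM / 60.0                    # kg/s (1 L of water ~ 1 kg)
dt_per_gpu = GPU_WATTS / (mass_flow * WATER_CP)

print(f"Coolant rise per GPU: {dt_per_gpu:.2f} C")
print(f"Inlet difference, GPU1 vs GPU4: {3 * dt_per_gpu:.2f} C")
```

Under those assumptions the fourth GPU sees coolant only about 3 °C warmer than the first, which the blocks easily absorb.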

    Post edited by PA_ThePhilosopher on
  • fastbike1fastbike1 Posts: 4,078
    edited May 2017

    I wasn't referring to any myth, since I hadn't seen any. I was asking strictly from an engineering perspective, since I have no idea of the pump's capacity. With a better than "good enough" pump, it would certainly be possible for each GPU temp to be about the same even though the coolant must be progressively hotter from one GPU to the next.

    I would agree that I would rather go with a bigger pump and be sure that each GPU is getting all the flow.

    Post edited by fastbike1 on
  • Takeo.KenseiTakeo.Kensei Posts: 1,303

    Intel's new HEDT platform will be here soon: http://wccftech.com/intel-x299-skylake-x-kaby-lake-x-pc-gaming-show-2017/

    Some prices will certainly go down.

    A few remarks:

    - All current Xeon E5s have 40 lanes. A Xeon E5-1620 v4 (3.5 GHz) is cheaper than an i7-6850K and provides the necessary lanes

    - A 28-40 lane CPU is not mandatory. Some motherboards can manage a 4-GPU setup with a 16-lane CPU thanks to a PLX chip, e.g. the ASRock Supercarrier or Asus Z170/Z270 WS. Add an i5 7600K and your price tag will go down. Unlike the X99 platform, a Z170/Z270 motherboard provides the Intel integrated GPU, which can be useful if you want to keep all 4 dedicated GPUs for rendering

    - The X99 platform is old, and some new technology will not be available with it (e.g. Optane, USB 3.1, SATA Express, etc.). Some motherboard manufacturers do 'refresh' their products, however (e.g. the ROG RAMPAGE V EDITION 10), but some of these new features may be disabled with 4 GPUs

    - Memory: higher clock speed is better. Get 3000+ memory. Data goes back and forth between the GPU memory and the computer memory, and good memory always seems to be overlooked. The X99 platform coupled with a Xeon or Extreme i7 has quad-channel memory; Z170 and Z270 don't have this feature. DDR4, although clocked higher than DDR3, may have higher latency, which can only be compensated for with high clock speed.
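To put rough numbers on the channel-count point, here is a minimal sketch of theoretical peak memory bandwidth (DDR4-3000 with standard 64-bit channels; theoretical figures, not measurements):

```python
# Back-of-the-envelope memory bandwidth, illustrating why channel count
# and clock speed both matter. Figures are theoretical peaks.

def peak_bandwidth_gbs(mt_per_s, channels, bus_bits=64):
    """Peak bandwidth in GB/s: transfers/s * bytes per transfer * channels."""
    return mt_per_s * (bus_bits / 8) * channels / 1000.0

dual_3000 = peak_bandwidth_gbs(3000, channels=2)   # Z170/Z270 class
quad_3000 = peak_bandwidth_gbs(3000, channels=4)   # X99 class

print(f"Dual-channel DDR4-3000: {dual_3000:.0f} GB/s")
print(f"Quad-channel DDR4-3000: {quad_3000:.0f} GB/s")
```

At the same DDR4-3000 clock, quad-channel X99 has twice the peak bandwidth of a dual-channel Z170/Z270 board.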

    I'll finish with real Nvidia PrOn.

    There is a step further in the quest for power. A 4-GPU rig is not what I call a monster.

  • PA_ThePhilosopherPA_ThePhilosopher Posts: 1,039
    edited May 2017

    Thanks for the tip. Do you happen to know if the upcoming X299 motherboards (like the MSI featured in the article) will be able to support more than 4 GPUs? Reading about it now, it appears the Skylake chip could have up to 48 lanes (others say 46, others 44; not sure which is true). This is a good sign, I take it. Hopefully the industry will trend toward more accommodation for 5+ GPU setups, beyond the standard quad build.

    - All current Xeon E5s have 40 lanes. A Xeon E5-1620 v4 (3.5 GHz) is cheaper than an i7-6850K and provides the necessary lanes

    Interesting. However, the Xeon is a 4-core whereas the i7-6850K is a 6-core. So I'm not sure what advantages going with a Xeon here will provide. For GPU rendering, more cores seem to be very important. 

    - A 28-40 lane CPU is not mandatory. Some motherboards can manage a 4-GPU setup with a 16-lane CPU thanks to a PLX chip, e.g. the ASRock Supercarrier or Asus Z170/Z270 WS. Add an i5 7600K and your price tag will go down. Unlike the X99 platform, a Z170/Z270 motherboard provides the Intel integrated GPU, which can be useful if you want to keep all 4 dedicated GPUs for rendering

    Interesting about PLX. It is good to hear that there is a greater push for multi-GPU support. But correct me if I'm wrong: this technology still seems to be relatively new and untested in terms of GPU rendering. Do you happen to know of any articles that tested the PLX chip for GPU rendering on a quad+ setup? Some people have mentioned possible latency issues in the PLX technology, potentially slowing things down.

    - The X99 platform is old, and some new technology will not be available with it (e.g. Optane, USB 3.1, SATA Express, etc.). Some motherboard manufacturers do 'refresh' their products, however (e.g. the ROG RAMPAGE V EDITION 10), but some of these new features may be disabled with 4 GPUs

    The X99 platform is older technology, yes. But from what I can tell at the moment, it is all we have to viably work with. For someone wanting to build a quad-GPU setup today, I don't know of any more tested or viable solution than the LGA-2011 v3 / X99. The benefit of a tried and true technology is that it is well-established, with years of refinement and polishing behind it.

    - Memory: higher clock speed is better. Get 3000+ memory. Data goes back and forth between the GPU memory and the computer memory, and good memory always seems to be overlooked. The X99 platform coupled with a Xeon or Extreme i7 has quad-channel memory; Z170 and Z270 don't have this feature. DDR4, although clocked higher than DDR3, may have higher latency, which can only be compensated for with high clock speed.

     yes 

    Also +1 for XMP profile support (rather than just the raw unbuffered memory).

    I'll finish with real Nvidia PrOn. There is a step further in the quest for power. A 4-GPU rig is not what I call a monster.

    That's a fascinating find. Is that the ASUS X99-E WS? It looks like it can support seven single-slot GPUs on a single 40-lane chip. I suppose that would mean 8x/8x/4x/4x/4x/4x/4x, unless PLX was involved? Reading this article, they were able to attain a score of 900 on OctaneBench, meaning their gains were roughly proportional with this board (about 7 x 125). I wonder what PCIe speeds they were running at. Do you happen to know?

    Anyway, all that aside, 7 GPUs is definitely a monster build. But so are 4-GPU builds (which are easier to build and within the reach of the average prosumer/professional). And now with the latest 10xx series, you can get almost the same performance with only four 1080 Tis. Once the numbers start coming in, I bet you're going to see quad 1080 Tis hitting beyond 700 on OctaneBench (that's seven 980s!).
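The scaling arithmetic in this post can be sanity-checked quickly. The per-card figures below are the thread's own rough OctaneBench estimates (a 980 at about 100, a 1080 Ti at about 175), not official benchmark results:

```python
# Sanity-checking the OctaneBench scaling numbers quoted in this post.

GTX_980 = 100                   # rough single-980 score used in the thread
seven_card_total = 900          # score reported for the 7-GPU rig

per_card = seven_card_total / 7
print(f"Per card in the 7-GPU rig: {per_card:.0f}")   # near-linear scaling

quad_1080ti = 4 * 175           # predicted quad 1080 Ti total
print(f"Predicted quad 1080 Ti score: {quad_1080ti}")
print(f"In 980-equivalents: {quad_1080ti // GTX_980}")
```

900 across 7 cards is about 129 per card, i.e. essentially no scaling loss, and four 1080 Tis at ~175 each lands right at the 700 ("seven 980s") prediction.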

    -P

    Post edited by PA_ThePhilosopher on
  • Robert FreiseRobert Freise Posts: 4,574

    Finding this thread very interesting and informative, as I just happen to have four 1070 Tis, and my mind was thinking what if, and how.

  • Takeo.KenseiTakeo.Kensei Posts: 1,303

    Thanks for the tip. Do you happen to know if the upcoming X299 motherboards (like the MSI featured in the article) will be able to support more than 4 GPUs? Reading about it now, it appears the Skylake chip could have up to 48 lanes (others say 46, others 44; not sure which is true). This is a good sign, I take it. Hopefully the industry will trend toward more accommodation for 5+ GPU setups, beyond the standard quad build.

    No idea. I don't think that will be the case, but who knows. Actually, if you want 8 GPUs you have to look at Tyan or Supermicro server boards, and that usually implies dual-Xeon servers.

    - All current Xeon E5s have 40 lanes. A Xeon E5-1620 v4 (3.5 GHz) is cheaper than an i7-6850K and provides the necessary lanes

    Interesting. However, the Xeon is a 4-core whereas the i7-6850K is a 6-core. So I'm not sure what advantages going with a Xeon here will provide. For GPU rendering, more cores seem to be very important.

    It's just a cheaper alternative, and there are also more expensive Xeons with way more cores (see the E5-2699 v4). More cores are not mandatory; I'm just giving more options for people who'd like to spend less. I'm not sure more CPU power is required, as most of the work is done by the GPU; it just needs to not be the bottleneck.

    Xeons have a few advantages that may not be useful for DS users, but who knows: bigger cache, higher memory support (1 TB+), and ECC. With the 2xxx series you can couple two Xeons to get 80 lanes.

    - A 28-40 lane CPU is not mandatory. Some motherboards can manage a 4-GPU setup with a 16-lane CPU thanks to a PLX chip, e.g. the ASRock Supercarrier or Asus Z170/Z270 WS. Add an i5 7600K and your price tag will go down. Unlike the X99 platform, a Z170/Z270 motherboard provides the Intel integrated GPU, which can be useful if you want to keep all 4 dedicated GPUs for rendering

    Interesting about PLX. It is good to hear that there is a greater push for multi-GPU support. But correct me if I'm wrong: this technology still seems to be relatively new and untested in terms of GPU rendering. Do you happen to know of any articles that tested the PLX chip for GPU rendering on a quad+ setup? Some people have mentioned possible latency issues in the PLX technology, potentially slowing things down.

    PLX has been around for years. Some people on the Octane forum have multi-GPU builds with a PLX-based motherboard (Z77, X79). It is also used in some server boards for render farms. No worries, it works, and I'm not sure you can tell the difference; at most a few frames lost in a game, and I doubt anybody could see that with the naked eye.

    - The X99 platform is old, and some new technology will not be available with it (e.g. Optane, USB 3.1, SATA Express, etc.). Some motherboard manufacturers do 'refresh' their products, however (e.g. the ROG RAMPAGE V EDITION 10), but some of these new features may be disabled with 4 GPUs

    The X99 platform is older technology, yes. But from what I can tell at the moment, it is all we have to viably work with. For someone wanting to build a quad-GPU setup today, I don't know of any more tested or viable solution than the LGA-2011 v3 / X99. The benefit of a tried and true technology is that it is well-established, with years of refinement and polishing behind it.

    Agree on that. However, X99 is not future-proof with X299 coming. That information may be relevant for people who want to build a new computer.

     

    I'll finish with real Nvidia PrOn. There is a step further in the quest for power. A 4-GPU rig is not what I call a monster.

    That's a fascinating find. Is that the ASUS X99-E WS? It looks like it can support seven single-slot GPUs on a single 40-lane chip. I suppose that would mean 8x/8x/4x/4x/4x/4x/4x, unless PLX was involved? Reading this article, they were able to attain a score of 900 on OctaneBench, meaning their gains were roughly proportional with this board (about 7 x 125). I wonder what PCIe speeds they were running at. Do you happen to know?

    Anyway, all that aside, 7 GPUs is definitely a monster build. But so are 4-GPU builds (which are easier to build and within the reach of the average prosumer/professional). And now with the latest 10xx series, you can get almost the same performance with only four 1080 Tis. Once the numbers start coming in, I bet you're going to see quad 1080 Tis hitting beyond 700 on OctaneBench (that's seven 980s!).

    -P

    Yep, it's the Asus X99-E WS, and you found the article where I saw it. You can check its specs at https://www.asus.com/us/Motherboards/X99E_WSUSB_31/specifications/

    7 x PCIe 3.0/2.0 x16 (single x16 or dual x16/x16 or triple x16/x16/x16 or quad x16/x16/x16/x16 or seven x16/x8/x8/x8/x8/x8/x8)

    The author of the article also talks about external DIY GPU builds. It seems that bitcoin mining rigs are also suitable for GPU rendering with Octane. They may also work for Iray.

     

  • I found something that we can dream about. It would probably be out of all of our price ranges (50 grand). I do have a small server rack that it would fit in, though. Here it is:

    http://www.nvidia.com/object/visual-computing-appliance.html

    Probably overkill for what we do, but it might be interesting to render on one to see how fast it is. I am strictly a hobbyist myself right now, so I will be doing well to upgrade from my single 960 4GB to a pair of 1070s on water. I thought about maybe getting a last-gen Quadro (M series), but they are still too expensive. BTW, the water cooling info is good. Thanks.

  • tj_1ca9500btj_1ca9500b Posts: 2,057
    edited May 2017

    For those of you looking to jump on the Intel X299/i9 bandwagon when those hit the market, here's a link to some motherboard porn for ya, courtesy of MSI!

    http://www.guru3d.com/news-story/msi-teases-x299-a-little-more-with-x299-godlike-gaming.html

    Quote from article:

    The earlier photo already showed four reinforced PCI-Express slots and three shielded M.2 slots. This round you can see three Ethernet connectors, USB 3.1 and two 5G Wi-Fi antennas

    Post edited by tj_1ca9500b on
  • PA_ThePhilosopherPA_ThePhilosopher Posts: 1,039
    edited June 2017

    I just wanted to add a small update to this thread: I just upgraded my rig from four 780 Ti's to two 1080 Ti's (on air atm), and plan to add a third and fourth as finances allow.

    On first impressions, I can say this: having only two 1080 Tis on air definitely exceeds my expectations. They scored a combined 400 on OctaneBench, which is equivalent to four 980s. With OptiX on, they complete Sickleyield's benchmark in 1 minute flat (1:45 with OptiX off), which is very good for only two cards on air. Also of note, scenes load much faster now with these cards, likely due to the insane overclocking.
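For context, the speedup implied by those two benchmark times (1:00 with OptiX on vs. 1:45 with it off) works out like this:

```python
# OptiX speedup implied by the benchmark times quoted above.

with_optix = 60       # seconds (1:00)
without_optix = 105   # seconds (1:45)

speedup = without_optix / with_optix
print(f"OptiX speedup: {speedup:.2f}x")
```

That is a 1.75x reduction in render time just from enabling OptiX on this scene.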

    All in all, the 1080 Ti is definitely worth the money. It's the best thing to come from NVIDIA in a long time.

    -P

    P.S. I bought the EVGA Black Edition, which has better cooling and a higher overclock than the base reference model. I highly recommend this model, as my temps never exceed 60-70 degrees.

    Post edited by PA_ThePhilosopher on
  • barbultbarbult Posts: 26,218

    Thanks for the update and recommendation.

  • GaryHGaryH Posts: 66
    edited June 2017

    Ooh, goodie, a juicy hardware thread to follow...

    Rather than one big lump case, I'm going to be looking at external radiators on a quick release system. Apparently, this can now be done straightforwardly without leaks and losses. I can keep radiator setups at my most frequent destinations and just transport the system core in its aluminium Lian Li case.

    That's the route I'm taking in water cooling my two Titan X Pascals. Still waiting on my second set of EK quick disconnects. BTW, you can now buy them directly from EK for $32, BUT you can also buy them from United States Plastic for almost half the price. The quick disconnects that EK resells are made by CPC; it's their NS4 model with 3/8" barbs. http://www.usplastic.com/search/default.aspx?it=item&keyword=cpc%20ns4

    I'm going with Watercool's Mo-Ra3 420 with all of the water cooling gear outside the case except for the GPU blocks.

     

     

     

    Post edited by GaryH on
  • Just to add to the conversation: I have a 1U Tesla unit that contains 4 Tesla compute GPUs, and it is completely air cooled. It is as loud as a jet engine, but air cooled nonetheless.

    Kendall

    I've mulled Tesla-based rack units, but finding in-depth info on them seems impossible until after you buy one.

    Can you swap the original GPUs out for GTX models? From a cost-vs-cores perspective, Teslas and Quadros are far more expensive than a similarly equipped GTX GPU, even on the used market. I can either get one Quadro with fewer than 5000 cores and 12GB of VRAM for $5000, or eight 1080 Tis for $4800. The math answers that question. While Quadros have a more stable clock for day-long rendering, GTX units can be underclocked comparably using Afterburner or Precision X.
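The "math" here, spelled out (prices are the round figures from this post; 3584 CUDA cores per 1080 Ti is the published spec):

```python
# Cost-per-CUDA-core comparison behind "the math answers that question".

quadro_cost, quadro_cores = 5000, 5000      # one 12 GB Quadro, ~5000 cores
gtx_cost, gtx_cores = 4800, 8 * 3584        # eight 1080 Tis at 3584 cores each

print(f"Quadro:     ${quadro_cost / quadro_cores:.2f} per core")
print(f"8x 1080 Ti: ${gtx_cost / gtx_cores:.2f} per core")
```

Roughly $1.00 per core for the Quadro vs. about $0.17 per core for the 1080 Tis, a factor of about six in favor of the GTX cards on this metric.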

    At any rate, the rack-mounted setup appeals to me greatly, due mostly to my background as a musician :P, but the primary problems I'm finding are:
    1. No one is selling empty racks cheap enough
    2. Incomplete info on older rack systems (can you replace the cards, do you need a subscription to Nvidia's GRID software or some other suchness, do these units report to the PC as a single unit or multiple GPUs, etc)

     

    I went with an Amfeltec GPU cluster, which can hold 4 GPUs on an open-air frame, and so far it's been worth it. I burned up one Thermaltake PSU trying to get it up and running, but the replacement is doing fine. I do need to explore watercooling, though. I'm running a 980, Titan Z, and 2 780TIs on it right now, along with the Titan X as my primary display and gaming card.

    Previously I was using an Akitio Thunder2 box, which works well enough for rendering; however, I have not been able to get the GPU cluster and the Thunderbolt card to work at the same time. Not sure if it's a resource conflict or if I've managed to kill it (given that it has to live outside of its protective case to work with gaming-length GPUs).
    Anyway....

    According to Amfeltec, you can connect up to 4 clusters to one PC, and each cluster can have 4 GPUs, for a total of 16 GPUs on one computer. I'm assuming that includes some form of the Windows home editions like 7, 8, or 10, and is not limited to Server, or intended for an Apple product, or something running Linux. They didn't really specify.

     

  • Kendall SearsKendall Sears Posts: 2,995

    Just to add to the conversation: I have a 1U Tesla unit that contains 4 Tesla compute GPUs, and it is completely air cooled. It is as loud as a jet engine, but air cooled nonetheless.

    Kendall

    I've mulled Tesla-based rack units, but finding in-depth info on them seems impossible until after you buy one.

    Can you swap the original GPUs out for GTX models? From a cost-vs-cores perspective, Teslas and Quadros are far more expensive than a similarly equipped GTX GPU, even on the used market. I can either get one Quadro with fewer than 5000 cores and 12GB of VRAM for $5000, or eight 1080 Tis for $4800. The math answers that question. While Quadros have a more stable clock for day-long rendering, GTX units can be underclocked comparably using Afterburner or Precision X.

    Not really. GTX cards tend to be MUCH larger and longer than the 1U unit will allow. There is *NO* space for any card that wants 2 slots, and there are 2 PCIe slots inline on each side. There is also no space for the backplane that holds the video ports. In the rear right slot, the physical ports would keep the card from even seating in the slot. One can use Quadros that are single-slot and short enough not to impose on the other inline card (for the rear right slot, a single-slot Quadro with only DisplayPort or HDMI jacks would fit without cutting the case). While I've not personally tried it, I have been told that mismatched cards can be used in the slots, so according to those reports it should be possible to use cards with physical ports in the slots where they fit and keep dedicated Teslas in the other slots.

    I also have some Quadro Plex units that are big enough to support 2 dual-slot cards. They actually shipped with 2 Quadro FX 5800 cards installed. These are older units and no longer available.

    Both the external Tesla and Plex units require special NVIDIA interface cards and cables that are not cheap. The Teslas actually require 2 of the interface card and cable sets, while the Plex only needs a single set.

    At any rate, the rack-mounted setup appeals to me greatly, due mostly to my background as a musician :P, but the primary problems I'm finding are:
    1. No one is selling empty racks cheap enough
    2. Incomplete info on older rack systems (can you replace the cards, do you need a subscription to Nvidia's GRID software or some other suchness, do these units report to the PC as a single unit or multiple GPUs, etc)

     

    I went with an Amfeltec GPU cluster, which can hold 4 GPUs on an open-air frame, and so far it's been worth it. I burned up one Thermaltake PSU trying to get it up and running, but the replacement is doing fine. I do need to explore watercooling, though. I'm running a 980, Titan Z, and 2 780TIs on it right now, along with the Titan X as my primary display and gaming card.

    Previously I was using an Akitio Thunder2 box, which works well enough for rendering; however, I have not been able to get the GPU cluster and the Thunderbolt card to work at the same time. Not sure if it's a resource conflict or if I've managed to kill it (given that it has to live outside of its protective case to work with gaming-length GPUs).
    Anyway....

    According to Amfeltec, you can connect up to 4 clusters to one PC, and each cluster can have 4 GPUs, for a total of 16 GPUs on one computer. I'm assuming that includes some form of the Windows home editions like 7, 8, or 10, and is not limited to Server, or intended for an Apple product, or something running Linux. They didn't really specify.

     

    Going with an external PCIe setup would serve most people better than this. Unless you're going the HPC route (non-rendering), using Teslas doesn't tend to work out fiscally.

    Kendall

  • Ghosty12Ghosty12 Posts: 2,080

    One thing of interest: while not GPU related, the new AMD Threadripper will offer up to 64 PCIe lanes and cost $1000 for the 16-core CPU. One major change is that Threadripper CPUs are said to come with liquid cooling, since an air-cooling solution would reportedly be too big and bulky, and likely very heavy.

  • barbultbarbult Posts: 26,218

    I'm having difficulty understanding the number of PCI-e lanes required to support multiple graphics cards. Is it 16 for the 1st and 8 for each additional card, or should each card have 16? I see places like Origin and CyberPower selling Intel 7700K based computers with 2 1080 Ti cards, but the Intel website says 7700K has only 16 PCI-e lanes. Other places I see that the chipset on the MB provides another 8 lanes, for a total of 24. Is the 7700K a "good" chip for a new build with 2 1080 Ti cards, or is it now too outdated with the I9s coming out?

    And how concerned should I be about the number of cores in the processor? I'm mainly interested in Daz rendering (on the GPUs) and possibly things like Marvelous Designer and Blender. Does going to a processor with more than 4 cores really help a lot? Does something like the Xeon or Broadwell-E pay off?

  • PA_ThePhilosopherPA_ThePhilosopher Posts: 1,039
    edited July 2017
    barbult said:

    I'm having difficulty understanding the number of PCI-e lanes required to support multiple graphics cards. Is it 16 for the 1st and 8 for each additional card, or should each card have 16? I see places like Origin and CyberPower selling Intel 7700K based computers with 2 1080 Ti cards, but the Intel website says 7700K has only 16 PCI-e lanes. Other places I see that the chipset on the MB provides another 8 lanes, for a total of 24. Is the 7700K a "good" chip for a new build with 2 1080 Ti cards, or is it now too outdated with the I9s coming out?

    And how concerned should I be about the number of cores in the processor? I'm mainly interested in Daz rendering (on the GPUs) and possibly things like Marvelous Designer and Blender. Does going to a processor with more than 4 cores really help a lot? Does something like the Xeon or Broadwell-E pay off?

    For only two GPUs, 16 lanes should be fine (x8 for each card). As was discussed above, most tests seem to indicate no measurable difference between x16 and x8 on the latest generation of PCIe.

    8x was really only an issue on prior generations. 

    As for cores, I would consider 6 cores a minimum requirement, even if all you'll be doing is GPU rendering in Iray. If I recall correctly, Mec4D used to be on a 4-core system and noticed a huge difference in Iray when she upgraded to 8 cores.

    -P

    Post edited by PA_ThePhilosopher on
  • ebergerlyebergerly Posts: 3,255
    edited July 2017

    Thanks for the great info. In fact, one of the big questions I had was whether adding a second GTX 1070 to my existing one would make much difference in my Iray viewport display.

    After watching your video with 4 GPUs and multiple G3s and some emissive lights, I tried to duplicate that (only 2 G3s though), and here's my result with only a single GTX 1070. It doesn't seem vastly different from your results, though there's definitely a visible difference. I'm still on the fence about whether it will be worth it to double up...

     

    Post edited by ebergerly on
  • ebergerlyebergerly Posts: 3,255

    I know exactly nothing about watercooling, but from the little I've seen on YouTube it at least *seems* like it allows people to crank their clock speeds from, say, 3.5 GHz to maybe 4.0 GHz or so. That doesn't seem like a big percentage gain, and the risk-to-reward seems quite poor, requiring you to push the outer limits of the devices' capabilities for not a whole lot of payoff. Gives me the willies.

    And I certainly see that such monster machines might be needed for animations, but for stills and/or cases where you can, for example, do some compositing rather than render entire scenes, maybe there are other ways to save significant time. I think it has to be a case-by-case evaluation. 

    And one more comment...

    Have you noticed that no matter how much of a computing monster is mentioned, there's always someone who says "that's not a monster; how about a machine with 126 GPUs!"

  • Robert FreiseRobert Freise Posts: 4,574

    I'm currently using two GTX 1070s and have posted benchmarks here: https://www.daz3d.com/forums/discussion/53771/iray-starter-scene-post-your-benchmarks/p17

    One set is for a dual-Xeon system and, further down, one for a Ryzen 1700X.

     

  • Ghosty12Ghosty12 Posts: 2,080

    Off topic of sorts, but every time I try to look at this thread it lags out my browser something chronic, and I have found it is some of the images used in some of the posts. Not sure why, but it is very annoying to say the least.

  • barbultbarbult Posts: 26,218
    barbult said:

    I'm having difficulty understanding the number of PCI-e lanes required to support multiple graphics cards. Is it 16 for the 1st and 8 for each additional card, or should each card have 16? I see places like Origin and CyberPower selling Intel 7700K based computers with 2 1080 Ti cards, but the Intel website says 7700K has only 16 PCI-e lanes. Other places I see that the chipset on the MB provides another 8 lanes, for a total of 24. Is the 7700K a "good" chip for a new build with 2 1080 Ti cards, or is it now too outdated with the I9s coming out?

    And how concerned should I be about the number of cores in the processor? I'm mainly interested in Daz rendering (on the GPUs) and possibly things like Marvelous Designer and Blender. Does going to a processor with more than 4 cores really help a lot? Does something like the Xeon or Broadwell-E pay off?

    For only two GPUs, 16 lanes should be fine (x8 for each card). As was discussed above, most tests seem to indicate no measurable difference between x16 and x8 on the latest generation of PCIe.

    8x was really only an issue on prior generations. 

    As for cores, I would consider 6 cores a minimum requirement, even if all you'll be doing is GPU rendering in Iray. If I recall correctly, Mec4D used to be on a 4-core system and noticed a huge difference in Iray when she upgraded to 8 cores.

    -P

    Thank you.

  • Takeo.KenseiTakeo.Kensei Posts: 1,303

     

    ghosty12 said:

    Off topic of sorts, but every time I try to look at this thread it lags out my browser something chronic, and I have found it is some of the images used in some of the posts. Not sure why, but it is very annoying to say the least.

    That may be because your internet speed is slow. It may also be due to the animated pic above. You can disable animations in your browser so that it only loads the first frame.

     

     

    barbult said:

    I'm having difficulty understanding the number of PCI-e lanes required to support multiple graphics cards. Is it 16 for the 1st and 8 for each additional card, or should each card have 16? I see places like Origin and CyberPower selling Intel 7700K based computers with 2 1080 Ti cards, but the Intel website says 7700K has only 16 PCI-e lanes. Other places I see that the chipset on the MB provides another 8 lanes, for a total of 24. Is the 7700K a "good" chip for a new build with 2 1080 Ti cards, or is it now too outdated with the I9s coming out?

    And how concerned should I be about the number of cores in the processor? I'm mainly interested in Daz rendering (on the GPUs) and possibly things like Marvelous Designer and Blender. Does going to a processor with more than 4 cores really help a lot? Does something like the Xeon or Broadwell-E pay off?

    For only two GPUs, 16 lanes should be fine (x8 for each card). As was discussed above, most tests seem to indicate no measurable difference between x16 and x8 on the latest generation of PCIe.

    8x was really only an issue on prior generations. 

    As for cores, I would consider 6 cores a minimum requirement, even if all you'll be doing is GPU rendering in Iray. If I recall correctly, Mec4D used to be on a 4-core system and noticed a huge difference in Iray when she upgraded to 8 cores.

    -P

    The best current cards begin to saturate x8. Being able to handle two PCIe 3.0 cards at x16 may be a safer bet for the future, especially if you target real-time rendering.
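For reference, the theoretical one-direction throughput of a PCIe 3.0 link by width (8 GT/s per lane with 128b/130b encoding) is easy to compute:

```python
# Theoretical PCIe 3.0 throughput per link width, for context on when
# x8 becomes a bottleneck.

def pcie3_gbs(lanes):
    """Peak one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    gt_per_s = 8.0                 # giga-transfers per second per lane
    efficiency = 128.0 / 130.0     # 128b/130b line encoding overhead
    return lanes * gt_per_s * efficiency / 8.0   # bits -> bytes

for lanes in (4, 8, 16):
    print(f"x{lanes}: {pcie3_gbs(lanes):.2f} GB/s")
```

So an x8 link tops out near 7.9 GB/s and x16 near 15.8 GB/s; whether a given renderer actually saturates x8 depends on how often scene data crosses the bus.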

     

  • MEC4DMEC4D Posts: 5,249

    For rendering with GPU and Iray you need at most 4 cores, but because Iray is a hybrid it needs CPU power to load your models into the GPU faster and for faster scene updates. More CPU cores means a faster job between renders, and with faster CPU cores things work better: you will get a faster viewport even if you render just with the GPU. So if you are going to invest in an expensive GPU, think about fast CPU and RAM too. I built my Iray monster already and I have enjoyed it since. When you render animation you need as fast a CPU as possible so it can handle the scene updates for each frame quickly; otherwise it is like buying a mansion and using a bike to get around ;) The CPU is the heart of your system, so make sure it can quickly handle everything you put into it.

    If this were not my work, I would never go for more than two good GPU cards and a fast 4-core CPU.

    barbult said:

     

    For only two GPUs, 16 lanes should be fine (x8 for each card). As was discussed above, most tests seem to indicate no measurable difference between x16 and x8 on the latest generation of PCIe.

    8x was really only an issue on prior generations. 

    As for cores, I would consider 6 cores a minimum requirement, even if all you'll be doing is GPU rendering in Iray. If I recall correctly, Mec4D used to be on a 4-core system and noticed a huge difference in Iray when she upgraded to 8 cores.

    -P

     

  • GatorGator Posts: 1,319
    hphoenix said:
    nicstt said:

    Love the case, I've been looking for a new one; so you've sold me on the Thermaltake.

    I also like the fact the motherboard lies flat.

    There are a few like it.  My personal gaming/rendering system I built in a CoolerMaster HAF XB Evo :  http://www.coolermaster.com/case/lan-box-haf-series/haf-xb-evo/

    The TT x9 cube is HUGE.  The CM HAF XB Evo is a bit smaller.

    Plenty of room, unbelievable airflow, Easy to open, move, etc.  Even the power supply mounting allows it to easily slide in and out.  Motherboard tray comes out in a single piece, so it's a LOT easier to work on.  Routing a lot of cables underneath can get tricky, and the HotSwap bay control board position makes the power supply cables a bit tight.  But it's VERY nice.  Put a 200mm fan on the top venting out, fans on front pull in, fans on back vent outward.  I never see temps above 50°C on the water cooled CPU (i7 6800k) and never above 60°C on the two 1080 GTX GPUs (Asus STRIX ROG) even under full loading.

    I'll post a photo of it later, after I get home!

     

    Informative post, gotta comment here because CoolerMaster HAF XB Evo only has 7 PCI expansion slots, which if I'm not mistaken means only enough room for 3 graphics cards, not 4.

  • DrNewcensteinDrNewcenstein Posts: 816
    edited July 2017

    Not really. GTX cards tend to be MUCH larger and longer than the 1U unit will allow. There is *NO* space for any card that wants 2 slots, and there are 2 PCIe slots inline on each side. Also, there is no space for the backplane that holds the video ports. In the rear right slot the physical ports would keep the card from even seating in the slot. One can use Quadros that are single-slot and short enough not to impose on the other inline card (for the rear right slot, a single-slot Quadro with only DisplayPort or HDMI jacks would fit without cutting the case). While I've not personally tried it, I have been told that mismatched cards can be used in the slots, so it should be possible to use cards with physical ports in the slots they fit and retain dedicated Teslas in the other slots, according to the reports.

    I also have some Quadro Plex units that are big enough to support 2 dual-slot cards. They actually shipped with two Quadro FX 5800 cards installed. These are older units and no longer available.

    Both the external Tesla and Plex units require special NVIDIA interface cards and cables that are not cheap. The Teslas actually require 2 of the interface card and cable sets, while the Plex only needs a single set.

     

    Going with an external PCIe setup would serve most people better than this. Unless you're going the HPC route (non-rendering), using Teslas doesn't tend to work out fiscally.

    Kendall

    Thanks. I've seen the Plex on Ebay, but again, no info on it. I did find something about the Tesla rack unit that said each HIC controls 2 cards, hence you need two host interface cards.

    For the price, even used, and only being able to hold 2 cards, I'm getting a better deal (though more of a fire hazard) with the Amfeltec cluster at about the same price (around $600 USD) than the Plex. Takes up about a foot of space in all 3 directions, which is comparable to the Plex, from what I did find on it.

    I know Teslas are mostly Quadros with no output ports, which is fine for both secondary slot and external use, but had wondered if they would be fine for Iray, even though on average you can get a GTX with the same or better specs new for the same price. I'm financially suicidal but only up to a certain point LOL
    Though I will say I saw a K80 go for not much more than a new Titan Xp the other day on Ebay (~$1500 vs $1200 new for the Xp). That's not bad, but still  a bit much for "I'll just get one and see if it works".   
    DERP EDIT: That was a K40 for $1525, not a K80. The K80 was over $2K and from the Russian Federation from a 0-feedbacker. I bet that turned out swell for the "winner".

    Since Device Manager assigns resources to the Audio functions of all cards, even if they're not being used for direct output, I have had to disable those functions to free those resources with the Akitio Thunderbolt unit. Butchering your resource tree is fun, and the results are pleasing, but you turn your resource stack into a game of Jenga with nukes in a kayak on the Colorado River during a flood. I have a HDD missing, even after a Refresh.
    Hence, my interest in Teslas. No audio functions means no Device Mismanager hacking.

    I also found that MSI Afterburner will recognize all my cards, while Precision X was limited to 4. With the Titan Z counting as 2, that was a problem. It also has the same taskbar monitors, so I can see when the RAM on the 780 Tis is about to fill up.

     

    Regarding faster scene loads: someone posted somewhere (and I tried it myself, and it works) that if you set your viewport draw mode to Iray and let it load, final rendering begins faster. However, it takes as long (or longer) to make that happen as if you had left it in Textured mode, or even turned the viewport display off (after setting your rendering camera), so there's that. Then again, I'm using an i7-4770K that isn't OC'd, with 32GB of RAM, on an ASUS Z87 Pro. While the first slot is x16, it cuts down to x8 due to the Amfeltec HIC board being in slot 2 (also x8), so there's some delay in CPU-GPU communication, apparently.

     

    Post edited by DrNewcenstein on
  • LilithVXLilithVX Posts: 36

    I just upgraded to two 1080 Tis, replacing my two old Titan Xs (Maxwell), and I was thinking that instead of letting the old ones just collect dust I should use them. The problem, however, is that both the new and old cards are hybrids. I mean, that's not a problem as in a negative, since you get built-in water cooling. But when it comes to needing space, yeah, it's a problem. Even in my Phanteks Enthoo Primo big tower I can only really fit 3 hybrid cards. The water cooling takes up too much space to fit another card. It seems very hard to find a case with much more space.

    Another problem is finding a good MB. I want to jump on the new Intel 2066 socket, but there are no good MBs (at least not where I buy) with room for four GPUs. For the older 2011 socket I know some MBs existed that could do 4. I'm guessing, after reading the posts in this thread, that the point is to go full water cooling rather than hybrid, thus getting rid of the extra bulk each GPU has, since the hybrid solution has the ordinary small fan directly on the card and the big one on the end of two tubes.

  • JCThomasJCThomas Posts: 254

    For what it's worth, my previous main workstation was a Z97-based system with a quad-core 4790K. I was on the EVGA Z97 Classified, which has a PLX chip. I ran two Titan Xs, a Titan Z, and a Titan Black (which is technically 5 GPUs, since the Titan Z is a dual GPU) without any problems. Having only 4 cores wasn't an issue. I ended up breaking the system down and selling the backbone of it because I was consistently maxing out my 32GB of RAM. I kind of regret it actually, because it was definitely a unique build, with a lot of love and planning in it.

    Anyway, the main point is you can build a quad GPU rig with a quad-core as long as you can find a motherboard that supports it. Currently the ASUS Z270-WS is one of the only ones I can think of that offers that functionality. But you could run 4 GPUs off a 7700K, for example, with that board.

  • Robert FreiseRobert Freise Posts: 4,574
    LilithVX said:

    I just upgraded to two 1080 Tis, replacing my two old Titan Xs (Maxwell), and I was thinking that instead of letting the old ones just collect dust I should use them. The problem, however, is that both the new and old cards are hybrids. I mean, that's not a problem as in a negative, since you get built-in water cooling. But when it comes to needing space, yeah, it's a problem. Even in my Phanteks Enthoo Primo big tower I can only really fit 3 hybrid cards. The water cooling takes up too much space to fit another card. It seems very hard to find a case with much more space.

    Another problem is finding a good MB. I want to jump on the new Intel 2066 socket, but there are no good MBs (at least not where I buy) with room for four GPUs. For the older 2011 socket I know some MBs existed that could do 4. I'm guessing, after reading the posts in this thread, that the point is to go full water cooling rather than hybrid, thus getting rid of the extra bulk each GPU has, since the hybrid solution has the ordinary small fan directly on the card and the big one on the end of two tubes.

    Not familiar with your case, but this one is massive: the Rosewill Blackhawk-Ultra, a full-tower gaming case with 8 preinstalled cooling fans. Newegg generally has them in stock.

    Just an fyi for anyone looking at cases

  • I'll probably upgrade my core PC once more, but by the time you factor in the cost of the MOBO, CPU, RAM, and PSU to run multiple GPUs in one case, as well as some sort of license subscription to a software product that can render over a network, you could buy at least one Amfeltec GPU cluster and probably 2 more decent GPUs (1080TIs) for it. That cluster can be used in any other rig that has a free PCI slot. Plus you don't have to renew every year, as with software, unless you want a better component.

    The only issue I see at the moment, however, is that while my primary GPU inside the PC case on the slot identified as PCIx16_1 is running at full x16, the cluster's interface card running on the PCIx16_3 slot is only at x4, which is set in the BIOS (I can only set it for either x1 or x4, and if I put the card on Slot 2, it locks to x1 and cuts Slot 1 down to x8). Meanwhile, both GPU-Z and Nvidia SMI report the Current Bus Speed for the Titan Z is x16 on both halves, while the 780 TI, 1080TI and 980 are showing only x1 as the Current Bus Speed.

    Then again, this could be a CPU limitation (i7-4770K), I had only 3 cards on it when I started typing this, and thought maybe the speed difference was due to how the slots are linked. I added the 780, but the Titan Z is still showing x16 while the rest are at x1.

    From what I've been seeing of most MOBOs since digging further into multi-GPU setups, I can't find one that doesn't cut the Slot speed when more than one is populated. Seems counter-intuitive to me; if a given feature can't run at full capacity because of shared board resources, either improve the design or eliminate the other feature.
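    The slot-speed cuts described above follow from the CPU having a fixed lane budget that the board bifurcates as slots are populated; the lanes get redistributed, never added. A tiny illustrative sketch (the split table below is an assumption based on typical mainstream 16-lane Intel boards such as the Z87; check your board manual for the real allocation):

    ```python
    # Hypothetical lane-bifurcation table for a mainstream 16-lane CPU
    # (typical of Z87/Z97/Z270-class boards; actual splits are board-specific).
    BIFURCATION = {1: [16], 2: [8, 8], 3: [8, 4, 4]}

    def slot_speeds(populated):
        """Return typical per-slot link widths for N populated CPU-attached slots."""
        return BIFURCATION[populated]

    # Lanes are redistributed, never added: every split sums to 16.
    assert all(sum(widths) == 16 for widths in BIFURCATION.values())
    print(slot_speeds(2))  # two cards typically run x8/x8
    ```

    This is also why HEDT chips (40-lane Xeons, Threadripper) or boards with a PLX switch chip are the usual answer when you want several cards at full width.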

     

  • GLEGLE Posts: 52

    As an IT manager, here's the technical review of my personal experience with Iray:

    My current setup is an i7-5820k with 16GB of RAM on a Rampage V Extreme, with a pair of 980Ti cards. I render with CPU disabled because the rig becomes awfully unresponsive otherwise.

    The 6 cores/12 threads are not really necessary (I got them before Iray hit the DAZ street). I'd say you need one core per card based on realtime usage logs. Putting the library on a RAID or SSD storage solution will improve your pre-render times more, both viewport time and data assembly right before the cards begin to work. I'm also custom loop watercooling, but that's just for the show (Thermaltake Core P5 setup).
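    A quick way to sanity-check the "one core per card" observation is to log per-core utilization while a render runs. A minimal stdlib sketch (Linux-only, reading /proc/stat; on Windows, Task Manager's per-core graphs or a tool like psutil give the same picture):

    ```python
    import time

    def per_core_busy(interval=1.0):
        """Sample /proc/stat twice and return each core's busy fraction (Linux only)."""
        def snapshot():
            cores = {}
            with open("/proc/stat") as f:
                for line in f:
                    name = line.split()[0]
                    if name.startswith("cpu") and name != "cpu":  # skip aggregate row
                        fields = [int(x) for x in line.split()[1:]]
                        idle = fields[3] + fields[4]  # idle + iowait jiffies
                        cores[name] = (sum(fields), idle)
            return cores
        before = snapshot()
        time.sleep(interval)
        after = snapshot()
        return {
            core: 1.0 - (after[core][1] - before[core][1])
                        / max(1, after[core][0] - before[core][0])
            for core in before
        }

    print(per_core_busy(0.5))  # e.g. {'cpu0': 0.93, 'cpu1': 0.12, ...}
    ```

    If roughly one core per GPU sits pegged during a render while the rest idle, that supports sizing the CPU by card count rather than buying maximum cores.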

    16GB of RAM is a small limitation: after I Ctrl+R, it usually fills up before the data is sent to the cards, the swap file gets used, and that's a slowdown. 32GB would be more than enough to handle that.

    I didn't monitor PCI Express bandwidth usage, but I think not much is being done after the cards are commanded to chomp the data.

    The card driving the display uses about 1.2 GB more RAM than the other, so I'm planning to add a 1070 or 1080 to gain 33% performance and the correct memory headroom. Remember to turn on the "Optimize for Compute" option in the NVIDIA driver.

    With generous details, an exterior scene will usually render in 10 to 30 minutes, an interior scene takes 2 to 6 hours. Core speed during rendering is 1380 MHz on both cards.

    My rig is not cutting edge anymore, but I consider it very respectable as for potential upgrades. Going to a 1620 40-lane Xeon is likely what I would do if I wanted more GPUs.

    If I had to build an Iray rig today, I'd probably go with a mining board and all the GPUs I could afford. All the rest is for show and not an efficient use of money.

    Here's what I'd go with at each budget:

    The poor man machine (you may look for used parts):

    AMD FX 8320/50/70 or 9590 processor (the old 8-core ones: Core i5 class performance for a small price, and brilliant for computational purposes if you do something other than Iray)
    16GB RAM
    Suitable motherboard (will likely need risers)
    3x1TB drives in RAID 0 (remember to backup!)
    Cards in the 1070 performance class (older cards like the 980 Ti are not so cheap in comparison and come with less RAM)
    Corsair VS 650 is a quiet and reliable PSU that won't break the bank. Built a lot of rigs with it, never failed.
    Cheapo case with 8 slots (used Corsair Vengeance V70 should do the trick and last an eternity)

    The sensible machine (you should not look for used parts):

    Ryzen 5 (workstation) or Core i3 (sole rendering)
    32GB RAM
    Motherboard for mining
    Best cards you can get (up to 8 as per nVidia guidelines)
    Lepa G1600 if you don't mind some noise/Seasonic if you mind the noise and want it very durable and HQ
    3 or 4 TB drives in RAID 0 for the library, plus a mandatory SSD for Windows and programs
    Thermaltake and Zalman make some nice and inexpensive cases, or you can go Cooltek or LianLi if you want satin aluminum

    The crazy flagship machine (used parts? LOL):

    Most expensive Threadripper (wait for Naples if you want to go really big)
    Asus Zenith Extreme/Gigabyte X399 Gaming 7
    128 GB of RAM (cause flagship)
    Other specs as above or better
    Outlandish case from Corsair/CoolerMaster/other gaming oriented brand

    Bye
