OT: Looking For New Desktop

RCDescheneRCDeschene Posts: 2,816
edited December 2015 in The Commons

As the title says, I'm looking to invest in a new PC that is better suited to high-quality Iray rendering. The laptop I'm using now is a CyberpowerPC gaming notebook that I've had since February 2011, and it's beginning to show the limits of its strength by completely crashing every time I try to render even something as simple as a posed character.

So, my question is: what would be a decent desktop build for running Studio 4.9 with over 2,500 installed products, in the $1,000 - $1,500 price range? Here are my current specs:

 

Post edited by RCDeschene on

Comments

  • The minimum I'd be looking for is an i7 processor with at least 16 GB of RAM and an Nvidia video card with at least 4 GB of memory. The biggest bang for the lowest price in video cards would be either a 970 or a 960 (just make sure the 960 has 4 GB and not the more common 2 GB). Above these, the price starts climbing much faster than the resulting gains in render times. If you're looking to build for the future, you might want a motherboard with more than one PCIe x16 slot so you can eventually install and run more than one video card at a time.

  • JCThomasJCThomas Posts: 254

    Do you have a Microcenter nearby? If so, you can get an MSI Z97 Gaming 5 ATX motherboard and an i7 4790K for under $400.

    Figure about a hundred bucks for 16 GB of DDR3 RAM (2x8 GB, so you can add 16 more GB later).

    Then $600-700 for a GTX 980 Ti.

    You should be able to reuse the hard drive from your laptop, and then add maybe a 500 GB Samsung 850 Evo for under $200. Here's a good PSU for under a hundred bucks: http://www.newegg.com/Product/Product.aspx?Item=N82E16817438048. Go with something basic for your cooler, like a Cooler Master Hyper 212 Evo, and you should come in right around $1,500. If you can't grab the Microcenter deal, you should be able to get that motherboard and CPU for about a hundred bucks more.

    The 970 is the best value for gaming, but for rendering the 980 Ti is worth it.

     

  • RCDescheneRCDeschene Posts: 2,816
    JCThomas said:

    Do you have a Microcenter nearby? If so, you can get an MSI Z97 Gaming 5 ATX motherboard and an i7 4790K for under $400.

    Figure about a hundred bucks for 16 GB of DDR3 RAM (2x8 GB, so you can add 16 more GB later).

    Then $600-700 for a GTX 980 Ti.

    You should be able to reuse the hard drive from your laptop, and then add maybe a 500 GB Samsung 850 Evo for under $200. Here's a good PSU for under a hundred bucks: http://www.newegg.com/Product/Product.aspx?Item=N82E16817438048. Go with something basic for your cooler, like a Cooler Master Hyper 212 Evo, and you should come in right around $1,500. If you can't grab the Microcenter deal, you should be able to get that motherboard and CPU for about a hundred bucks more.

    The 970 is the best value for gaming, but for rendering the 980 Ti is worth it.

     

    Yeah, I was considering that, but I don't want to stick with this craptop anymore. I'm not mobile with my major tech anymore, and everyone keeps preaching to me that desktops are ideal for heavy-duty CGI work. I definitely need something with liquid cooling, as you suggested.

  • Hello!

    IG's advice is very good.

    The only difference I would suggest is that you go for 24 or 32 GB of system RAM right off the bat. With 16 GB you're not likely to run out entirely, but you could end up making Windows hit the swap file more often with big scenes and/or lots of applications open. Not a big deal if you're using all-SSD drives, but why accept limits when it's fairly cheap to raise the ceiling? In my opinion, the best time to upgrade memory is when you buy it the first time, because you can buy matched modules.
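
    To put the swap-file point in concrete terms, here's a rough back-of-the-envelope sketch in Python; every number in it is an illustrative guess, not a measurement:

    ```python
    # Rough RAM-headroom arithmetic; all figures are illustrative guesses (GB).
    windows_and_background = 3   # Windows plus idle background services
    studio_big_scene = 10        # a heavy scene with several figures and 4K textures
    browser_and_extras = 3       # reference images, tutorials, music player...

    needed = windows_and_background + studio_big_scene + browser_and_extras

    for installed in (16, 24, 32):
        headroom = installed - needed
        verdict = "comfortable" if headroom >= 4 else "swap file likely"
        print(f"{installed} GB installed -> {headroom:+d} GB headroom ({verdict})")
    ```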

    Now, about your laptop: 

    It shouldn't be crashing whenever you try to render. It could be heat-related; rendering causes heat to build up fast. My own 2011 Asus laptop starts blowing hot air almost immediately whenever I fire up something like a render. That's normal, and keeping those vents open and clear will help your laptop breathe easier. So, if yours were my laptop, I'd open it up and blow out all the dust bunnies. You'd be surprised at how big the tumbleweeds can be, and it's even worse the more people you have in your home/work area (I swear we are turning into dust by the minute and we leak syrup at night) or if you have any furry pets. I also know from experience that birds generate almost as much dander as cats, so if you keep even a single cockatiel, you should regularly open up the laptop and clean out the heatsinks.

    And while you're in there, you might want to consider reseating the heatsinks with fresh thermal paste. Your 2011 laptop is now close to five years old, and the design may be older still if that model had been on the market a while before you bought it. The old paste is probably as dried out and chalky as that toothpaste blob that landed above the waterline in your sink. It's probably just as effective. ;)

    By the way, desktops need to be cleaned out too, even if they are water-cooled. About twice a year, I pull my tower out, take it outside, open it up, and blast out the radiator's fins, the heatsinks for the graphics cards, and EVERYTHING. The floor under my workstation always has some grey tumbleweeds, and in places where I swear there's no airflow!

    From dust to dust.  And syrup, I tell you, syrup.

  • dracorndracorn Posts: 2,353
    edited January 2016

    I went to Fry's to build my custom PC. I worked with an expert, picked out each of my components, and they built it for $100. For me that was the best method, as PC packages, even though customizable, didn't have exactly what I wanted.

    Granted, my budget was more than twice yours. I racked up the cost quickly with the Nvidia GTX 980 Ti, the Intel i7 six-core processor, 64 GB of RAM, and a 1 TB SSD (rated for 10 years). The motherboard itself was $250, but it can handle four video cards linked together, should I decide to add more. My render times are typically 5-10 minutes at 2000-3000 pixel resolution. The longest was 1.5 hours with a full background, lighting, and six people at 10,000 pixel resolution. (The resulting file was 30 MB, totally overkill, but I wanted to see what it could do.)

    Cooling was a high priority, so I opted for a dual-fan, water-cooled radiator and a spacious Corsair case with three additional fans. I did opt for the best brands out there: Corsair, Gigabyte, Samsung, Intel, WD. You can save cost with different brands. Don't forget that your power supply needs enough wattage to run all of this. I also added a UPS - this is vital if you want to protect your investment, especially if your electricity comes in on overhead power lines rather than buried ones.

    My reasoning behind getting a workstation powerhouse PC was that I will be keeping it for 5-7 years before upgrading again.  Meanwhile, I am tucking away cash so that I have my budget available next time. 

    Dust: I dust mine more frequently than twice per year - I bought a case with a side door so that I can easily reach in. I plan on investing in some Demciflex filters - there's a set custom-fit to my case, to cover every vent. They will keep the dust out and come with software to remind me when to clean them.

    Your budget is smaller, but you will find that it gets chewed up quickly by the main components: processor, RAM, and video card. If you can save a little longer, you may be happier with more power in the long run. Remember when adding components later: your new RAM must exactly match the type and speed of the existing modules, so don't wait too long before adding that additional 16 GB of RAM.

    Take a look at this site - you can play around with components in the custom build section to see what you can get for how much.  http://www.ecollegepc.com/

    This is my powerhouse:

     

    [Attachments: Computer SM 01.JPG and Computer SM 02.JPG, 1728 x 2304]
    Post edited by dracorn on
  • nDelphinDelphi Posts: 1,920
    edited January 2016
    dracorn said:
    Granted, my budget was more than twice yours. I racked up the cost quickly with the Nvidia GTX 980 Ti, the Intel i7 six-core processor, 64 GB of RAM, and a 1 TB SSD (rated for 10 years). The motherboard itself was $250, but it can handle four video cards linked together, should I decide to add more. My render times are typically 5-10 minutes at 2000-3000 pixel resolution. The longest was 1.5 hours with a full background, lighting, and six people at 10,000 pixel resolution. (The resulting file was 30 MB, totally overkill, but I wanted to see what it could do.)

    Sweet! My computer is a toy next to yours. The best I could do to upgrade was get a GTX 960/4GB and a new 750-watt PSU. I pick them up on Monday.

    Post edited by nDelphi on
  • frogimusfrogimus Posts: 200

    If you're building (or having it built), bear in mind that dust is minimized with positive pressure within the case. That is, more airflow in than out - this prevents air from being pulled in through gaps and other spots where you aren't filtering. I use a Fractal Design R4 case with built-in fan filters and never have a speck of dust inside (amazing, considering I live within sight of a limestone quarry).

     

  • kyoto kidkyoto kid Posts: 41,857

    ...currently working with only 12 GB of RAM and an old 1 GB Nvidia GPU. So I'm stuck with long render times in Iray (though not as bad as with UE or especially Reality/Lux), particularly when swapping comes into play.

    Have a new build in the planning stages, but it will require some "creative financing" to make it happen. In the meantime, I'm just looking to double my current system's memory to 24 GB and add an "adequate", more up-to-date GPU mainly just to drive the viewport and displays, as I am getting flickers and "whiteouts" (the latter when loading large or very involved scenes). For full GPU rendering, even a 980 Ti isn't enough to handle the scenes I usually create, and I really suck at multipass rendering and compositing. I was hoping to find a Titan X in the stocking (my MB, PSU, and case would actually support it), but alas, it was not to be.

    Though my MB has an old LGA 1366 socket, I can still upgrade to a 6-core/12-thread i7 980 Extreme, which would give me a little more processing horsepower if I wanted it. Sadly, it would cost about what one of the Xeons in the specs below does.

    The new system is to be a monster: dual 8-core Xeon 2630 v3s, a dual-socket LGA 2011-3 MB, 128 GB of physical memory (to start), and dual Nvidia Pascal GTX GPUs with 16 GB of HBM2 memory and 5,000-6,000 cores each (which should be out by the time I could actually afford all the components and put it all together).

  • SnowSultanSnowSultan Posts: 3,773

    Have a new build in the planning stages, but it will require some "creative financing" to make it happen.

     

    "creative financing"...like robbing Fort Knox?   ;)   

    Honestly, I cannot imagine you needing the system you describe simply to render DAZ products. I have done all of my images, often consisting of two or more figures with a lot of clothing items and high-res textures, with a 1366-socket i7 930, 10 GB of RAM, and an Nvidia GTX 760. I've only had crashes when I used too many 4096x4096 textures in a large render. You almost never need to render at 8000x10000 or other ridiculous resolutions, and Iray often renders some things faster than 3Delight (like ambient occlusion and SSS).

    It's not really practical to try rendering ten or twenty high-res figures in one pass anyway. Iray's canvases are very good at producing multiple render passes that are not too difficult to combine in postwork. I think you can get away with a much more affordable system and still increase your productivity.
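
    For anyone wondering what "combining passes in postwork" can look like in practice, here's a minimal Pillow sketch. The file names are hypothetical, and it assumes each pass was saved as a PNG with an alpha channel at the same resolution:

    ```python
    # Minimal postwork compositing sketch: stack two separately rendered
    # passes back into a single image.
    from PIL import Image

    background = Image.open("pass_background.png").convert("RGBA")
    figures = Image.open("pass_figures.png").convert("RGBA")

    # alpha_composite honours the top layer's transparency, so the figure
    # pass sits cleanly over the background pass (both must be the same size).
    combined = Image.alpha_composite(background, figures)
    combined.save("final_render.png")
    ```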

  • kyoto kidkyoto kid Posts: 41,857
    edited January 2016

    ...When I built my current system, it was more than what many people said I would need. Years later, I am running into constant swapping to the HDD, which seriously bogs down the render process. The demands 3D software places on hardware are growing faster than my system can keep up with.

    For example, I often employ a lot of atmospheric and special lighting effects; I use Garibaldi Express, which requires converting hair to .obj format for Iray (which in turn bloats the polycount); I use SSS; I have scenes with upwards of a dozen characters; and I tend to "dirty up" sets with extra props and detailed textures to make them look more believable (city streets that you could "eat off of" just don't cut it).

    I cannot finish working on my railway station scene because it pushes the absolute limit of my current memory, as well as the GPU, the latter just for running the displays and viewport. It takes upwards of thirty minutes just to run the initial calculations and optimisation (as I am using a "wet" misty fog) before actual rendering can begin (at 1,200 x 900 resolution for test purposes). During this phase, the render window and UI go completely white on me for the entire time. Just loading the scene takes about twenty minutes, during which the same "whiteout" of the entire UI occurs.

    This tells me I'm really on the edge of exceeding the limits of the hardware, which in turn takes a toll on the individual parts.

    This is why I choose to "overbuild" - so that I avoid putting extreme stress on the components. Better to have the extra power and rarely use it than not enough and push the system to its limits all the time.

    As I mentioned, my current "real" plan is to upgrade the workstation I have, increasing its memory and basic graphics performance, which will at least keep me going until I can afford to build an entirely new system. Yes, most of the tech it was built on is now legacy, but unlike with the notebook, I'm not at a complete standstill; I just have to wait a lot for processes to finish. Doubling the memory and getting at least a 4 GB GPU will ease a lot of the strain I've been putting on it with some of the intense scenes I create. As I used to paint in oils, I completely visualise a scene in my mind before sitting down and loading sets, props, and morphing characters. Unlike a canvas and a palette of oil paints, a computer has some unique inherent limitations that must be dealt with.

    The "dream system" I mentioned is just that. If I can get the funds to build it, great.

    Please excuse the revisions; I am having to deal with a major distraction that is making it hard to concentrate.

    Post edited by kyoto kid on
  • SnowSultanSnowSultan Posts: 3,773

    Sure, I didn't mean to suggest that you shouldn't build the best machine you can afford. Atmospheric effects can indeed add a lot of stress to any scene (which is why I'd recommend doing things like fog and godrays in postwork anyway), and if you're converting dynamic hair to OBJ, that will certainly do it. I'm not sure you really need 128 GB of RAM, though; the video card matters more to Iray. Maybe consider two 980 Ti cards (or two Titans like Mec4D has). I'm probably going to buy a 980 Ti myself and see if it's enough on its own before doing a lot of other upgrading or system building.

  • kyoto kidkyoto kid Posts: 41,857

    ...yeah, for me the Titan is the minimum, as many of my scenes easily exceed 6 GB. Too bad Nvidia didn't make an 8 GB GTX card; that would probably be enough for most of my work. Sadly, the only one available is the Quadro K/M5200, which is $2,500. Sapphire makes an AMD card with 8 GB, but that's no help for Iray. Still keeping an eye on Nvidia's Pascal development. If it delivers the performance promised, two of those would be quite sufficient. If anything, it may cause the price of the Titan X to tumble to something more reasonable.

    Somewhat disappointed in the 980 Ti, as according to pre-release information last year it was supposed to be an 8 GB card.

    Where I have the greatest trouble with postwork is when effects need to go behind other scene elements, or with shadows cast on objects on one layer from another. That requires digital painting, and I no longer have a steady hand for it. I cannot use a tablet for the same reason I no longer paint or draw: I cannot hold an implement like a pencil, brush, or stylus for very long, and my pressure sensitivity is pretty much gone. Hence I have to get the most mileage I can out of the initial render, which often means loading a scene up with a lot of render-hogging effects.

  • kyoto kid said:
    The new system is to be a monster: dual 8-core Xeon 2630 v3s, a dual-socket LGA 2011-3 MB, 128 GB of physical memory (to start), and dual Nvidia Pascal GTX GPUs with 16 GB of HBM2 memory and 5,000-6,000 cores each (which should be out by the time I could actually afford all the components and put it all together).

    Seriously? That's about $600 worth of motherboard and $1,400 worth of CPUs, room for another radiator or heatsink+fan, and the requisite noise and heat increase. Plus, that's some serious creative financing.

    If you are planning to do mostly Iray rendering going forward, I suggest you don't need tons of CPU cores like in the old days. Add GPUs instead; most ATX motherboards will support 3 or 4 of these, and you "could" do just fine with an X99 or Z170 chipset and a single appropriate i7 processor. Just don't get one with half the lanes, cache, or cores disabled...ugh. Remember also that the BIOS and setup screens for server motherboards are probably not going to be as polished as those for consumer boards, and that server boards require ECC memory, which will further increase cost. All this for a system that you'll want to replace ANYWAY in another 5-6 years.

    I'd go with X99, or wait to see what next year holds for CPUs from Intel (please don't cheap out on AMD). Even if you splurged on an 8-core 3.5 GHz CPU, you could still cut your costs by about $1,500 and save on heat generation and power consumption too. Put the money into two or three GTX 980 Tis, or wait for the next-generation graphics chip from Nvidia.

    Now that I've said all that, I will admit that I would be the one who would go ahead and build precisely the type of system you're thinking of. ;)

  • SnowSultanSnowSultan Posts: 3,773

    Just don't get one with half the lanes, cache, or cores disabled...ugh. 

    Is that the difference between a 5820 and 5830? I asked about that in another thread, but I've heard that there's very little difference performance-wise.

     

    Put the money into two or three GTX 980 Tis, or wait for the next-generation graphics chip from Nvidia.

    He says that his scenes are usually over 6 GB though, so even three 980 Tis won't handle it because (assuming I understand it correctly) Iray doesn't combine the RAM of your cards, so he'll still be working with 6 GB.

  • Just don't get one with half the lanes, cache, or cores disabled...ugh. 

    Is that the difference between a 5820 and 5830? I asked about that in another thread, but I've heard that there's very little difference performance-wise.

    Yep, that's the difference.  And by the way, there is no 5830; I think you meant 5930.  So the comparison is the i7 5820K vs 5930K.

    So what's the difference to you between 28 PCIe lanes and 40 lanes? Not much if you're only ever going to use a single GPU. But if you're planning to use multiple GPUs, then the 5930, with all 40 lanes enabled, MIGHT run faster. I don't know whether it matters that, as a graphics rendering machine, it will likely never be set up to use SLI.

    Either way, the comparison charts I've found (between the 5820 and 5930) don't seem to show the difference in active lanes, but they sure as hell show a price difference with no underlying reasoning. That's unfortunate for the consumer.

     

    Put the money into two or three GTX 980 Tis, or wait for the next-generation graphics chip from Nvidia.

    He says that his scenes are usually over 6 GB though, so even three 980 Tis won't handle it because (assuming I understand it correctly) Iray doesn't combine the RAM of your cards, so he'll still be working with 6 GB.

    That does complicate matters a bit. The Nvidia GTX 980 Ti does come with 6 GB for about $650 to $750 USD (each). But if you want or need more memory than that, you'll probably need to consider a Titan X, because those come with 12 GB. At $1,000 to $1,200, though, they also carry a 65%+ price premium. Use case matters here. People shouldn't spend more money than they need to for the work they plan to do.

  • SnowSultanSnowSultan Posts: 3,773

    And by the way, there is no 5830; I think you meant 5930. 

    Right, oops. I'll do a little searching and see if I can find out anything about performance differences. I did read that you can't get two cards running at x16/x16 with the 28-lane chipset, but numerous tests have shown virtually no performance loss in either games or rendering from running x16/x8.

    People shouldn't spend more money than they need to for the work they plan to do.

    Yeah, and it's unfortunate that there's no 8 GB card like Kyoto said. 6 might just be enough for some people and 12 is overkill...nothing in between. I personally think I'll be OK with the 6 because I do a lot of postwork and I'm able to combine multiple renders if necessary. Do you happen to know if most or all renderers only take into account the RAM from one card like Iray, or is that an Iray-specific issue that might be solved one day?

  • Subtropic PixelSubtropic Pixel Posts: 2,388
    edited January 2016
    Do you happen to know if most or all renderers only take into account the RAM from one card like Iray, or is that an Iray-specific issue that might be solved one day?

    Well, to the best of my knowledge, the only cards in your system that would be eligible to participate in a render would be those with enough RAM to hold the whole scene. If my understanding is correct, then each person needs to figure out what their own "predominant" use case would be (there's a quick sketch of this per-card rule after the list below).

    • Do you do "most-to-all" scenes under 6 GB?  If so, just go with GTX 980 Ti cards and "bounce" [1] or render in layers when necessary.
    • Do you do most scenes under 6 GB, but there are enough > 6GB scenes that "bouncing" would be an inefficient use of your time?  Then get at least one Titan X to help the CPU out during those times.  You can mix in some less expensive GTXs to cover that "most" scenario.
    • Do you do the majority of your work with > 6 GB scenes?  Then put in multiple Titan X cards or just wait and see what Nvidia's next product cycle will include.
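
    That per-card rule is easy to sanity-check with a little arithmetic. Here's a toy Python sketch (the card list and scene sizes are made up) of why VRAM doesn't pool: each card is tested against the whole scene on its own:

    ```python
    # Toy illustration of the per-card rule: VRAM is NOT pooled, so a card
    # can only join the render if it holds the ENTIRE scene by itself.
    cards = {"GTX 980 Ti (a)": 6, "GTX 980 Ti (b)": 6, "Titan X": 12}  # VRAM in GB

    def eligible(scene_gb):
        return [name for name, vram in cards.items() if vram >= scene_gb]

    for scene_gb in (4, 7, 13):
        usable = eligible(scene_gb)
        print(f"{scene_gb} GB scene -> {', '.join(usable) if usable else 'CPU fallback'}")
    ```

    Note that the 7 GB scene only fits on the single 12 GB card, even though the two 6 GB cards have 12 GB between them.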

     

    ...it's unfortunate that there's no 8 GB card like Kyoto said. 6 might just be enough for some people and 12 is overkill...nothing in between. I personally think I'll be OK with the 6 because I do a lot of postwork and I'm able to combine multiple renders if necessary.

     

     

    Well, 8 GB is 33% more than 6. If it were available, could it get soaked up just as fast anyway with some of these HD meshes and high-definition textures, especially in big scenes? Maybe, maybe not. So combining renders, or "rendering in layers", would be a good solution, as long as it doesn't become a frequent thing. I think it can be as much of a hassle as audio "bouncing" used to be.

     

    [1] The term "bouncing" is an old digital-audio term. In the late '90s and early '00s, home-studio computers might have had only 4 GB of RAM at most, and 32-bit Windows applications had memory constraints too. Too many CPU-heavy instruments (those using computation rather than sound samples) or effects (reverbs can be CPU-expensive) at once in a song would often cause playback and recording glitches: pops, clicks, timing wackiness, etcetera. Plain audio tracks are very low-CPU if you're playing them "dry" (without an effect applied).

    The solution was to "bounce" or "render" specific high-CPU tracks to an audio file, substitute that audio file for the original high-CPU track, and mute or disable said original track.  Eventually, some DAW (Digital Audio Workstation) makers came up with a "freeze" feature that did all this automatically, but would allow the user to "unfreeze" in case he/she needed to adjust the effects or sound.

    Luckily, with today's 64-bit hardware, operating systems, and applications, this hasn't been an issue for over 6 years now, and most audio people don't need to bounce anymore, except when they want to finalize some of their tracks or share tracks with a person who doesn't have the same instruments and effects installed.

    Post edited by Subtropic Pixel on
  • RCDescheneRCDeschene Posts: 2,816

    The minimum I'd be looking for is an i7 processor with at least 16 GB of RAM and an Nvidia video card with at least 4 GB of memory. The biggest bang for the lowest price in video cards would be either a 970 or a 960 (just make sure the 960 has 4 GB and not the more common 2 GB). Above these, the price starts climbing much faster than the resulting gains in render times. If you're looking to build for the future, you might want a motherboard with more than one PCIe x16 slot so you can eventually install and run more than one video card at a time.

    Holy crap! I didn't see this one! Thanks for the down-the-list suggestions!

  • RCDescheneRCDeschene Posts: 2,816
    edited January 2016

    As for everyone else so far, basically what I'm hearing is that it depends on the type of renders I do.

    I'm the type of user who doesn't really do scenes so much as visualize characters. So most of my renders typically never have more than 2 or 3 figures at once, and all my settings are pretty much out-of-the-box. Since they are mostly for my personal illustrations anyway, I don't need them to be video game AMV/3D movie quality. Not only that, but my main genre is anime, so not much HD detail is needed for my character skins, either.

    Post edited by RCDeschene on
  • Hello!

    IG's advice is very good.

    The only difference I would suggest is that you go for 24 or 32 GB of system RAM right off the bat. With 16 GB you're not likely to run out entirely, but you could end up making Windows hit the swap file more often with big scenes and/or lots of applications open. Not a big deal if you're using all-SSD drives, but why accept limits when it's fairly cheap to raise the ceiling? In my opinion, the best time to upgrade memory is when you buy it the first time, because you can buy matched modules.

    Now, about your laptop: 

    It shouldn't be crashing whenever you try to render. It could be heat-related; rendering causes heat to build up fast. My own 2011 Asus laptop starts blowing hot air almost immediately whenever I fire up something like a render. That's normal, and keeping those vents open and clear will help your laptop breathe easier. So, if yours were my laptop, I'd open it up and blow out all the dust bunnies. You'd be surprised at how big the tumbleweeds can be, and it's even worse the more people you have in your home/work area (I swear we are turning into dust by the minute and we leak syrup at night) or if you have any furry pets. I also know from experience that birds generate almost as much dander as cats, so if you keep even a single cockatiel, you should regularly open up the laptop and clean out the heatsinks.

    And while you're in there, you might want to consider reseating the heatsinks with fresh thermal paste. Your 2011 laptop is now close to five years old, and the design may be older still if that model had been on the market a while before you bought it. The old paste is probably as dried out and chalky as that toothpaste blob that landed above the waterline in your sink. It's probably just as effective. ;)

    By the way, desktops need to be cleaned out too, even if they are water-cooled. About twice a year, I pull my tower out, take it outside, open it up, and blast out the radiator's fins, the heatsinks for the graphics cards, and EVERYTHING. The floor under my workstation always has some grey tumbleweeds, and in places where I swear there's no airflow!

    From dust to dust.  And syrup, I tell you, syrup.

    Are both your 980s running x16 or is one at x16 and one at x8? If they are both at x16, how much of a difference is there between rendering with both GPUs compared to just one? Just wondering if there really is much difference depending on how many lanes you're running. According to the data Nvidia provides here: http://www.nvidia.com/object/iray-scaling.html they are only getting about a 50% increase in performance when they double up on the GPUs, but they are also only running (at most) one at x16 and one at x8 (or both at x8), as it says they only have 28 lanes total. It might answer a lot of questions about the effect the number of lanes has if you're getting better or worse results than the 50% increase they report when you render with 2 equal cards compared to just one.
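
    As a worked example of what that "about 50%" figure means, here's a small Python sketch that turns two measured render times into a speedup and a per-card scaling efficiency; the times in it are placeholders:

    ```python
    # Turn two measured render times into speedup and per-card efficiency.
    def scaling_report(time_one_gpu, time_two_gpus, gpus=2):
        speedup = time_one_gpu / time_two_gpus  # >1.0 means the pair is faster
        efficiency = speedup / gpus             # 1.0 would be perfect linear scaling
        return speedup, efficiency

    # e.g. 600 s on one card vs 400 s on two is the ~50% gain Nvidia reports:
    speedup, efficiency = scaling_report(600, 400)
    print(f"speedup {speedup:.2f}x, per-card efficiency {efficiency:.0%}")  # 1.50x, 75%
    ```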

  • Are both your 980s running x16 or is one at x16 and one at x8?

    Hello! 

    Well THAT was interesting. I just double-checked my motherboard's user guide and confirmed that I have each 980 plugged into an x16 slot, but it appears that one or both of them may be running in x8 mode.

    My motherboard is an Asus P9X79 WS, and it has 6 PCIe slots. I have one GPU in slot 1 and the other in slot 5. I think I'm getting x16 and x8 right now, but my motherboard documentation isn't very clear.

    Slot 4 was recently vacated, but I think moving the second GPU from slot 5 to slot 4 would degrade my performance in slot 1. But again, the doc is not very clear. :(

     

  • bad4ubad4u Posts: 684
    edited January 2016
     

    Well THAT was interesting. I just double-checked my motherboard's user guide and confirmed that I have each 980 plugged into an x16 slot, but it appears that one or both of them may be running in x8 mode.

    My motherboard is an Asus P9X79 WS, and it has 6 PCIe slots. I have one GPU in slot 1 and the other in slot 5. I think I'm getting x16 and x8 right now, but my motherboard documentation isn't very clear.

    Slot 4 was recently vacated, but I think moving the second GPU from slot 5 to slot 4 would degrade my performance in slot 1. But again, the doc is not very clear. :(

     

    The specifications for the P9X79 WS say the blue slots can run an x16 or x8 link, the black slots only an x8 link, and the white ones only x4. So you would want your two x16 graphics cards in the blue slots for x16/x16, or you degrade them in some way (x16/x8/x8 or even worse). Still, this is only half the truth, as it also depends on other PCIe cards that share the lanes.

    If I'm correct, this chipset has 40 PCIe lanes that are shared between the 6 PCIe slots: 32 lanes for the blue/black ones, covering the x16/x16, x16/x8/x8, and x8/x8/x8/x8 configurations, and the remaining 8 lanes for x4/x4 on the white slots. So if you have other, non-graphics PCIe cards, make sure they use the white x4 slots to get the best from your graphics cards (which for 2 cards usually means the 2 blue slots). Additional cards in a blue/black slot might degrade performance for the remaining black/blue slots.
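
    The lane budget is easy to check with a little arithmetic. Here's a toy Python sketch of the simplified rules described above; real boards route lanes in more complicated ways, so treat it purely as an illustration:

    ```python
    # Simplified lane-budget check: 32 lanes feed the blue/black slots,
    # 8 lanes feed the white ones (per the rules described above).
    LINK_WIDTH = {"blue": 16, "black": 8, "white": 4}  # max link per slot colour

    def check_layout(occupied):
        """occupied: slot colours in use, e.g. ["blue", "blue", "white"]."""
        gpu_lanes = sum(LINK_WIDTH[c] for c in occupied if c != "white")
        white_lanes = sum(LINK_WIDTH[c] for c in occupied if c == "white")
        fits = gpu_lanes <= 32 and white_lanes <= 8
        return (f"blue/black: {gpu_lanes}/32 lanes, white: {white_lanes}/8 -> "
                + ("full speed" if fits else "links get downgraded"))

    print(check_layout(["blue", "blue"]))           # two GPUs at x16/x16
    print(check_layout(["blue", "blue", "black"]))  # a third card pushes past the budget
    ```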

     

    Post edited by bad4u on
  • bad4u said:
     

    Well THAT was interesting. I just double-checked my motherboard's user guide and confirmed that I have each 980 plugged into an x16 slot, but it appears that one or both of them may be running in x8 mode.

    My motherboard is an Asus P9X79 WS, and it has 6 PCIe slots. I have one GPU in slot 1 and the other in slot 5. I think I'm getting x16 and x8 right now, but my motherboard documentation isn't very clear.

    Slot 4 was recently vacated, but I think moving the second GPU from slot 5 to slot 4 would degrade my performance in slot 1. But again, the doc is not very clear. :(

     

    The specifications for the P9X79 WS say the blue slots can run an x16 or x8 link, the black slots only an x8 link, and the white ones only x4. So you would want your two x16 graphics cards in the blue slots for x16/x16, or you degrade them in some way (x16/x8/x8 or even worse). Still, this is only half the truth, as it also depends on other PCIe cards that share the lanes.

    If I'm correct, this chipset has 40 PCIe lanes that are shared between the 6 PCIe slots: 32 lanes for the blue/black ones, covering the x16/x16, x16/x8/x8, and x8/x8/x8/x8 configurations, and the remaining 8 lanes for x4/x4 on the white slots. So if you have other, non-graphics PCIe cards, make sure they use the white x4 slots to get the best from your graphics cards (which for 2 cards usually means the 2 blue slots). Additional cards in a blue/black slot might degrade performance for the remaining black/blue slots.

     

    I see.  So you'd advise that I move my second GPU up to slot 4 (blue).

    And I just got done shaking out all the bugs from my latest surgery on that box (new PSU). :O

    I wonder, would it be useful/informative for me to benchmark the current configuration, then rerun the benchmarks after I've changed it?

  • bad4ubad4u Posts: 684
    edited January 2016

    I see.  So you'd advise that I move my second GPU up to slot 4 (blue).

    And I just got done shaking out all the bugs from my latest surgery on that box (new PSU). :O

    I wonder, would it be useful/informative for me to benchmark the current configuration, then rerun the benchmarks after I've changed it?

    Both graphics cards into the blue slots, and make sure the black slots are kept free, yes. If you have additional cards, they should go in the white slots. At least, that's the correct configuration for using two x16 graphics cards. There might be other situations where you'd want to go for x16/x8, e.g. if you add a third graphics card or a new USB 3.1 add-on card that maybe needs more than x4 too, but I'm not sure what those actually need.

    Benchmarks... well, they might help you spot something going really wrong, but don't expect big differences between x16/x8 and x16/x16 configurations; from what I've read so far, they are often only marginal (at least with PCIe 2.0 cards; it might be better with 3.0 cards, which your board supports - I don't know if your cards are 3.0 types).
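
    If you do benchmark before and after moving the card, a simple repeat-and-take-the-median harness keeps the comparison honest. Here's a rough Python sketch; the render command in it is hypothetical, so substitute whatever launches your actual benchmark scene:

    ```python
    # Minimal before/after benchmark harness: run the same render a few
    # times and report the median, so one slow run doesn't skew the result.
    import statistics
    import subprocess
    import time

    COMMAND = ["render_benchmark.exe", "test_scene.duf"]  # placeholder command
    RUNS = 3

    times = []
    for i in range(RUNS):
        start = time.perf_counter()
        subprocess.run(COMMAND, check=True)  # blocks until the render finishes
        times.append(time.perf_counter() - start)
        print(f"run {i + 1}: {times[-1]:.1f} s")

    print(f"median of {RUNS} runs: {statistics.median(times):.1f} s")
    ```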

     

    Post edited by bad4u on