Hardware Question

So I may be alone in this craziness; not sure if it belongs in here or in Technical, since it's not really about the software.

So I have embraced Iray, maybe a little too much, and added a 6th GPU to my system today.

I am now running...

  • 1 gtx 970
  • 1 gtx 980
  • 1 gtx 1050 ti
  • 1 gtx 1070
  • 1 gtx 1070 ti
  • 1 gtx 1080 ti

I am using PCIe 1x riser cards (the kind sold for mining); everything worked fine with 5 cards. However, once I added the 6th, all the video cards went down to only supporting 1 monitor.

I have the 1070 Ti in the primary PCIe slot since it is smaller than the 1080 Ti and only covers 1 extra slot instead of 2, and it is running at 8x.

Any thoughts as to why this is? Anyone doing the same?

This is a Ryzen CPU; I enabled "Above 4G Decoding" to allow the 6th card to be recognized. I plan to max out the PCIe lanes on this just for Iray rendering, but this machine has 3 monitors, and now 2 are running from the cards on the risers, so they lag.
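For anyone troubleshooting a similar multi-GPU rig, here is a minimal diagnostic sketch (assuming the nvidia-smi tool that ships with the NVIDIA driver is on the PATH) that lists each detected card with its VRAM and current PCIe link width, to confirm that all six cards are actually enumerated and to see which ones dropped to x1:

```python
# Minimal sketch: list detected NVIDIA GPUs with their VRAM and current
# PCIe link width, using the nvidia-smi tool that ships with the driver.
import csv
import io
import subprocess

def list_gpus():
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,name,memory.total,pcie.link.width.current",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    for row in csv.reader(io.StringIO(out)):
        index, name, mem, width = (field.strip() for field in row)
        print(f"GPU {index}: {name}, {mem}, PCIe x{width}")

if __name__ == "__main__":
    list_gpus()
```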

Comments

  • I would lose the 1050 due to memory limitations, and the 970 & 980 too if they are less than 8 GB of memory. If you can, sell all the cards except for the 1080 Ti and put the cash towards 1 or 2 more 1080 Tis, which will give you the ability to load 11 GB scenes and a whole bunch of CUDA cores for rendering (3584 x 3, or 10,752 CUDA cores). Even with just 2 1080 Tis, that is still 7,168 CUDA cores. Right now you are limited to 4 GB scenes by the cards with the least amount of video memory in your system. Pull the 1050 and any other cards with less than 8 GB of GPU memory and see how it renders.

  • Well yeah, obviously I don't try to make all my scenes fit in 4 GB, but when a scene happens to fit in 4 GB the extra boost is nice.

    In a simple test scene, the 1070, 1070 Ti, and 1080 Ti render to 500 iterations in 1 minute; the 970, 980, and 1050 Ti render to 500 iterations in 1 minute 30 seconds. Use all of them and it's 40 seconds. 20 seconds doesn't sound like much there, but for 5,000 iterations it saves me over 3 minutes; for 10,000, 6 minutes; etc. (see the quick calculation after this post).

    Yes, the eventual goal is to get all 8 GB and 11 GB cards with high CUDA core counts, but money.

    I only have the 1080 Ti because I happened to catch it for $750, between the mining-craze price drop and right before Nvidia's stock plummeted and the price went back over $1,200.
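A quick sanity check of that scaling, as a sketch (the timings are the ones quoted above, and render time is assumed to scale roughly linearly with iteration count):

```python
# Rough check of the time saved by adding the slower cards, assuming render
# time scales roughly linearly with the iteration count.
fast_only = 60   # seconds for 500 iterations on the 1070 / 1070 Ti / 1080 Ti
all_cards = 40   # seconds for 500 iterations with all six cards

for iterations in (500, 5_000, 10_000):
    saved = (fast_only - all_cards) * iterations / 500
    print(f"{iterations:>6} iterations: ~{saved / 60:.1f} minutes saved")
# Prints roughly 0.3, 3.3, and 6.7 minutes saved, respectively.
```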

  • ArtAngel Posts: 1,484

    I may be wrong ... but my understanding is that if the graphics cards are not identical, the render will use one card only, and if that won't suffice, it will rely on the CPU and not the cards.

  • namffuak Posts: 4,040
    ArtAngel said:

    I may be wrong ... but my understanding is that if the graphics cards are not identical, the render will use one card only, and if that won't suffice, it will rely on the CPU and not the cards.

    Ah - yes, you are. Iray is not picky; it will use any and all cards with CUDA cores that have enough memory to hold the entire scene. For example, if there are 4 cards present at 2, 4, 6, and 11 GB - well, first of all, use the 2 GB card to drive your monitor(s), as very few scenes will fit in 2 GB. If the scene takes 3 GB, then the 4, 6, and 11 GB cards will all be used. At 4.01 GB, the 4 GB card drops off and just the 6 and 11 GB cards get used. If the scene blows past 11 GB, then none of the cards gets used and Iray will do a CPU render - even if the CPU was not selected for use.
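To make that per-card cutoff concrete, here is a small illustrative sketch of the rule described above (not Iray's actual code): a card participates only if the whole scene fits in its VRAM, and if no card qualifies the render falls back to the CPU.

```python
# Illustrative sketch of the rule described above (not Iray's actual code):
# a card renders only if the entire scene fits in its VRAM; if no card
# qualifies, the render falls back to the CPU.
def pick_render_devices(scene_gb, cards_gb):
    usable = [vram for vram in cards_gb if vram >= scene_gb]
    return usable if usable else ["CPU fallback"]

cards = [2, 4, 6, 11]                # GB of VRAM per card
for scene in (3.0, 4.01, 12.0):      # scene sizes in GB
    print(f"{scene} GB scene -> {pick_render_devices(scene, cards)}")
# 3.0 GB scene -> [4, 6, 11]
# 4.01 GB scene -> [6, 11]
# 12.0 GB scene -> ['CPU fallback']
```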

  • namffuak is correct; Iray will use any and all CUDA devices available with enough VRAM.

    However (I have read this, no first-hand experience), once you mix GTX and Quadro cards you will have issues, but it's a driver issue and has nothing to do with Iray itself as far as I know.

    I do various things, from single-character portraits to complicated 10-character scenes and, more recently, a scene with 12 city blocks. That's the reason I haven't dropped the 4 GB cards; they are still useful for portraits, and with Mitchell @ 0.5 I throw as many CUDA cores as I can get my hands on at it. Still waiting on full RTX support before I drop money on one of those; I hear the Tensor denoiser is supposed to be insane.

  • GTX and Quadro drivers are incompatible. They cannot be installed on the same computer. Or at least that's what Nvidia says. I'm unaware of anyone who has tried and confirmed it.

  • joseft Posts: 310

    GTX and Quadro drivers are incompatible. They cannot be installed on the same computer. Or at least that's what Nvidia says. I'm unaware of anyone who has tried and confirmed it.

    I looked into this a few years back, and from what I remember reading then, it's possible, but highly problematic and not worth the hassle.

  • joseft said:

    GTX and Quadro drivers are incompatible. They cannot be installed on the same computer. Or at least that's what Nvidia says. I'm unaware of anyone who has tried and confirmed it.

    I looked into this a few years back, and from what I remember reading then, it's possible, but highly problematic and not worth the hassle.

    They always said the same thing about mixing Nvidia and AMD; it works fine as long as the AMD card is the primary video card.

  • joseft Posts: 310
    joseft said:

    GTX and Quadro drivers are incompatible. They cannot be installed on the same computer. Or at least that's what Nvidia says. I'm unaware of anyone who has tried and confirmed it.

    I looked into this a few years back, and from what I remember reading then, it's possible, but highly problematic and not worth the hassle.

    They always said the same thing about mixing Nvidia and AMD; it works fine as long as the AMD card is the primary video card.

    Just because mixing AMD and Nvidia cards works does not mean mixing GTX and Quadro does.

    The reason it doesn't work well is that Nvidia wants it that way. It gets technical to explain the reasons behind it (a quick Google search will show you a few places where it is explained in depth), but as is generally the case with these things, it's a profit-driven decision.

  • joseft said:

    GTX and Quadro drivers are incompatible. They cannot be installed on the same computer. Or at least that's what Nvidia says. I'm unaware of anyone who has tried and confirmed it.

    I looked into this a few years back, and from what I remember reading then, it's possible, but highly problematic and not worth the hassle.

    It's fairly easy to do the mix. GeForce drivers also contain Quadro drivers now. Only cards prior to Kepler are not supported in recent drivers.

  • joseft said:

    GTX and Quadro drivers are incompatible. They cannot be installed on the same computer. Or at least that's what Nvidia says. I'm unaware of anyone who has tried and confirmed it.

    I looked into this a few years back, and from what I remember reading then, it's possible, but highly problematic and not worth the hassle.

    It's fairly easy to do the mix. GeForce drivers also contain Quadro drivers now. Only cards prior to Kepler are not supported in recent drivers.

    Link?

  • Takeo.Kensei Posts: 1,303
    edited December 2018

    No need for a link; you can check for yourself. Just open the file nv_disp.inf under C:\Windows\System32\DriverStore\FileRepository\ and scroll to the bottom to see which cards are supported.

    I'm actually using GeForce drivers for a Quadro on my desktop and for another one in my notebook.
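If you would rather not dig through the .inf by hand, here is a small sketch along those lines (the exact subfolder name under FileRepository and the key format inside the .inf vary by driver version, so treat this as an assumption-laden starting point):

```python
# Sketch: find nv_disp*.inf under the Windows DriverStore and print the
# device-name lines it declares (typically of the form
#   NVIDIA_DEV.1B06 = "NVIDIA GeForce GTX 1080 Ti"
# in the [Strings] section), to see which cards the installed package covers.
from pathlib import Path

repo = Path(r"C:\Windows\System32\DriverStore\FileRepository")
for folder in repo.glob("nv_disp*"):
    for inf in folder.glob("nv_disp*.inf"):
        raw = inf.read_bytes()
        # INF files may be UTF-16 (with BOM) or plain ANSI/UTF-8.
        encoding = "utf-16" if raw[:2] in (b"\xff\xfe", b"\xfe\xff") else "utf-8"
        text = raw.decode(encoding, errors="ignore")
        print(f"--- {inf} ---")
        for line in text.splitlines():
            if line.strip().startswith("NVIDIA_DEV"):
                print(line.strip())
```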

  • No need for a link; you can check for yourself. Just open the file nv_disp.inf under C:\Windows\System32\DriverStore\FileRepository\ and scroll to the bottom to see which cards are supported.

    I'm actually using GeForce drivers for a Quadro on my desktop and for another one in my notebook.

    That's good to know. I have debated picking up a Quadro but didn't want to drop my GTX cards. What Quadro are you using? Other than the fact that Quadros typically have more VRAM, are there any other advantages to Quadro?

    Considering the GTX 1080 is $100 cheaper than the M4000, has 8 GB of VRAM, and has more CUDA cores, is there any advantage to the M4000 vs the 1080 for Iray?

    I know the CUDA core count doesn't mean everything; efficiency matters too, like how the 1050 Ti has less than half the CUDA cores of the 970 but performs about the same. That could be the extra 200 MHz on the GPU, though.

    The Quadro also has a much lower TDP, and I know the GTX cards aren't supported by Iray Server (building an Iray Server box is something I may do down the road), and NVLink's memory pooling is something I am interested in.

     

  • No need for a link; you can check for yourself. Just open the file nv_disp.inf under C:\Windows\System32\DriverStore\FileRepository\ and scroll to the bottom to see which cards are supported.

    I'm actually using GeForce drivers for a Quadro on my desktop and for another one in my notebook.

    There are no laptop-grade Quadros; never have been. The cards are built for servers. Further, simply because both drivers are supported doesn't mean you can mix them on the same machine. So again, link?

  • That's good to know. I have debated picking up a Quadro but didn't want to drop my GTX cards. What Quadro are you using? Other than the fact that Quadros typically have more VRAM, are there any other advantages to Quadro?

    A) For the average Daz Studio user, the GeForce series cards are cheaper, faster, and quite sufficient to do the job.

    B) For the extreme Daz Studio user (huge scenes, large amounts of animation) and/or someone with other graphics-intensive programs, the extra cost of the Quadro series cards and the additional memory, stability, and longevity can be worth it.

  • No need for a link; you can check for yourself. Just open the file nv_disp.inf under C:\Windows\System32\DriverStore\FileRepository\ and scroll to the bottom to see which cards are supported.

    I'm actually using GeForce drivers for a Quadro on my desktop and for another one in my notebook.

    There are no laptop-grade Quadros; never have been.

    Sure, Nvidia never made Quadros for mobile workstations: https://www.nvidia.com/en-us/design-visualization/quadro-for-mobile-workstations/

    The cards are built for servers.

    Not all of them are (there are workstation cards). What prevents anybody from putting them in a consumer rig, and vice versa, anyway? Note that Nvidia changed its EULA recently because some GeForce cards were being used in servers, so...

    Further, simply because both drivers are supported doesn't mean you can mix them on the same machine.

    And what could possibly prevent that once you have drivers for both?

    So again, link?

    Googling it yourself would have saved me some time, and really there are tons of links, but here you go:

    http://www.tomshardware.co.uk/answers/id-3477739/quadro-geforce-workstation.html#r19978947

    http://www.tomshardware.co.uk/answers/id-3360274/mixing-nvidia-cards-workstation-irregular-results.html

    https://linustechtips.com/main/topic/372498-mixing-quadro-with-geforce/

    https://devtalk.nvidia.com/default/topic/928306/quadro-and-geforce-drivers-on-same-system/

    https://devtalk.nvidia.com/default/topic/928306/cuda-setup-and-installation/quadro-and-geforce-drivers-on-same-system/post/4848298/#4848298

     

    There is a guy who even seems to have mixed Quadro and GeForce drivers to get it working. I've never tried this one, but it could be the best way to do it if the Quadro keeps its unique features:

    https://h30434.www3.hp.com/t5/Business-PCs-Workstations-and-Point-of-Sale-Systems/SOLVED-Quadro-P2000-GTX-1080TI-CAD-AND-GAMING-on-a-HP-Z620/td-p/6114848

     

     

    No need for a link; you can check for yourself. Just open the file nv_disp.inf under C:\Windows\System32\DriverStore\FileRepository\ and scroll to the bottom to see which cards are supported.

    I'm actually using GeForce drivers for a Quadro on my desktop and for another one in my notebook.

    That's good to know. I have debated picking up a Quadro but didn't want to drop my GTX cards. What Quadro are you using? Other than the fact that Quadros typically have more VRAM, are there any other advantages to Quadro?

    Considering the GTX 1080 is $100 cheaper than the M4000, has 8 GB of VRAM, and has more CUDA cores, is there any advantage to the M4000 vs the 1080 for Iray?

    I know the CUDA core count doesn't mean everything; efficiency matters too, like how the 1050 Ti has less than half the CUDA cores of the 970 but performs about the same. That could be the extra 200 MHz on the GPU, though.

    The Quadro also has a much lower TDP, and I know the GTX cards aren't supported by Iray Server (building an Iray Server box is something I may do down the road), and NVLink's memory pooling is something I am interested in.

     

    GeForce drivers are better for gaming. Quadro drivers are optimized for professional apps like CAD software (SolidWorks, Catia, etc.). Quadro had the OpenGL advantage for a long time. Depending on the drivers, you gain or lose features and performance in the respective fields.

    Quadros have a lot of features that may not be useful for common use, e.g. 10-bit color output, TCC mode, etc. As Kenshaw said, they are also "built" for servers, but in the sense that they are specifically designed to be packed inside a server to work together (think cluster or render farm).

    I'm using a Quadro P400 only for display, as it has the 10-bit output. Nothing else. My setup has no advantage outside of video/photo editing and gives no gain for DS.

    Unless you have a lot of money to throw around or a need for a specific Quadro feature, I don't really think using Quadros for DS is the way to go because of the price/performance drawback, but here is what I can think of:

    - The Quadro M4000, P4000, and RTX 4000 are single-slot cards => if you need low TDP and little space, they are a good choice.

    - Quadros and Titans can be put into TCC mode => no VRAM reservation by Windows and better performance when using multiple cards in TCC (see the sketch after this post).

    But still, I wouldn't recommend buying Quadros just for DS. Titan cards could be a better choice, as they have some of the Quadro features.

    One point I didn't mention: I don't know what happens when mixing Quadro and GeForce cards of different generations. I don't think there will be any problems in common use, but I think some specific features require same-generation cards to work.
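For the TCC point above, here is a minimal sketch (assuming Windows with nvidia-smi on the PATH; switching the driver model needs an elevated prompt, a card that actually supports TCC such as a Quadro or Titan, and usually a reboot):

```python
# Sketch: query the current/pending driver model (WDDM vs TCC) per GPU and,
# optionally, request TCC for one card. Assumes Windows, nvidia-smi on the
# PATH, admin rights, and a TCC-capable card (Quadro/Titan); a reboot is
# typically required for the change to take effect.
import subprocess

def show_driver_models():
    print(subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,name,driver_model.current,driver_model.pending",
         "--format=csv"],
        capture_output=True, text=True, check=True,
    ).stdout)

def request_tcc(gpu_index):
    # Equivalent to running: nvidia-smi -i <index> -dm TCC
    subprocess.run(["nvidia-smi", "-i", str(gpu_index), "-dm", "TCC"], check=True)

if __name__ == "__main__":
    show_driver_models()
    # request_tcc(1)  # uncomment to request TCC on GPU 1 (admin + reboot needed)
```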

  • BTW, you don't need a Quadro for 10-bit color output. Quadro's advantages primarily lie in being optimized for compute performance, the amount of VRAM, and some other technical features that make them appealing for non-graphical uses. Nvidia does not intend them as gaming cards (they are terrible at it) or even as primary graphics outputs. They usually do include the outputs because they get used in workstations, but they aren't good at that either.

  • BTW, you don't need a Quadro for 10-bit color output.

    Of course you do. It's a locked feature. GeForce can only output 10-bit color with DirectX (so mostly gaming); otherwise, it doesn't work.

     

  • LOL. No. You assumed Nvidia. If you're buying a GPU just for 10-bit Photoshop or Premiere, you're wildly overpaying if you go Quadro.

  • LOL. No. You assumed Nvidia. If you're buying a GPU just for 10-bit Photoshop or Premiere, you're wildly overpaying if you go Quadro.

    If you're thinking AMD, the case is the same. You need a pro card; a consumer card won't do it. I hope you're not thinking of any other manufacturer (Intel?), which won't do it either.

  • You're the guy telling people they need a Quadro for 10-bit Photoshop and Premiere. With cards like the WX 5100 out there, why did you do that?

    I simply pointed out that you were wrong.

  • You're the guy telling people they need a Quadro for 10-bit Photoshop and Premiere. With cards like the WX 5100 out there, why did you do that?

    I simply pointed out that you were wrong.

    Cost of a Quadro P400: $100.

    Cost of a WX 5100: $500, and no CUDA.

    Who's overpaying, and who is wrong?

  • Kitsumo Posts: 1,210
    edited December 2018

    So I may be alone in this craziness; not sure if it belongs in here or in Technical, since it's not really about the software.

    So I have embraced Iray, maybe a little too much, and added a 6th GPU to my system today.

    I am now running...

    • 1 gtx 970
    • 1 gtx 980
    • 1 gtx 1050 ti
    • 1 gtx 1070
    • 1 gtx 1070 ti
    • 1 gtx 1080 ti

    I am using PCIe 1x riser cards (the kind sold for mining); everything worked fine with 5 cards. However, once I added the 6th, all the video cards went down to only supporting 1 monitor.

    I have the 1070 Ti in the primary PCIe slot since it is smaller than the 1080 Ti and only covers 1 extra slot instead of 2, and it is running at 8x.

    Any thoughts as to why this is? Anyone doing the same?

    This is a Ryzen CPU; I enabled "Above 4G Decoding" to allow the 6th card to be recognized. I plan to max out the PCIe lanes on this just for Iray rendering, but this machine has 3 monitors, and now 2 are running from the cards on the risers, so they lag.

    Getting back to the OP, I just wanted to say nice setup! I hope you'll show us some pics of it. Mine is disassembled at the moment, but here it is in its former glory. I don't know what would cause the problem you're describing. The only thing I could think of is that the motherboard doesn't have enough resources to support all the PCIe slots fully. BUT I'm no hardware guru, so take that with a grain of salt.

    As far as people saying to remove the slower cards: no, slower/low-memory cards will not hinder performance or limit your rendering options, but I think you already know that. A card either renders or it doesn't, and it won't slow down the other cards.

  • ArtAngel Posts: 1,484
    namffuak said:
    ArtAngel said:

    I may be wrong ... but my understanding is that if the graphics cards are not identical, the render will use one card only, and if that won't suffice, it will rely on the CPU and not the cards.

    Ah - yes, you are. Iray is not picky; it will use any and all cards with CUDA cores that have enough memory to hold the entire scene. For example, if there are 4 cards present at 2, 4, 6, and 11 GB - well, first of all, use the 2 GB card to drive your monitor(s), as very few scenes will fit in 2 GB. If the scene takes 3 GB, then the 4, 6, and 11 GB cards will all be used. At 4.01 GB, the 4 GB card drops off and just the 6 and 11 GB cards get used. If the scene blows past 11 GB, then none of the cards gets used and Iray will do a CPU render - even if the CPU was not selected for use.

    I have one machine with two 12 GB Titan Xp cards and another machine with two 1080 Tis at 11 GB each. Just for clarification's sake, are you saying that if the scene blows past 11 GB then it's a CPU render?

  • namffuak Posts: 4,040
    ArtAngel said:
    namffuak said:
    ArtAngel said:

    I may be wrong ... but my understanding is that if the graphics cards are not identical, the render will use one card only, and if that won't suffice, it will rely on the CPU and not the cards.

    Ah - yes, you are. Iray is not picky; it will use any and all cards with CUDA cores that have enough memory to hold the entire scene. For example, if there are 4 cards present at 2, 4, 6, and 11 GB - well, first of all, use the 2 GB card to drive your monitor(s), as very few scenes will fit in 2 GB. If the scene takes 3 GB, then the 4, 6, and 11 GB cards will all be used. At 4.01 GB, the 4 GB card drops off and just the 6 and 11 GB cards get used. If the scene blows past 11 GB, then none of the cards gets used and Iray will do a CPU render - even if the CPU was not selected for use.

    I have one machine with two 12 GB Titan Xp cards and another machine with two 1080 Tis at 11 GB each. Just for clarification's sake, are you saying that if the scene blows past 11 GB then it's a CPU render?

    On the system with the 1080 Tis, yes. Memory is not additive on any cards short of the 20xx series, and the jury is still out on them (I've quit following the discussions - they're out of my price range and won't fit in the existing system). I just do this as a hobby, so I'll stick to my 6 GB 980 Ti and 11 GB 1080 Ti.

  • Kitsumo said:

    So I may be alone in this craziness; not sure if it belongs in here or in Technical, since it's not really about the software.

    So I have embraced Iray, maybe a little too much, and added a 6th GPU to my system today.

    I am now running...

    • 1 gtx 970
    • 1 gtx 980
    • 1 gtx 1050 ti
    • 1 gtx 1070
    • 1 gtx 1070 ti
    • 1 gtx 1080 ti

    I am using PCIe 1x riser cards (the kind sold for mining); everything worked fine with 5 cards. However, once I added the 6th, all the video cards went down to only supporting 1 monitor.

    I have the 1070 Ti in the primary PCIe slot since it is smaller than the 1080 Ti and only covers 1 extra slot instead of 2, and it is running at 8x.

    Any thoughts as to why this is? Anyone doing the same?

    This is a Ryzen CPU; I enabled "Above 4G Decoding" to allow the 6th card to be recognized. I plan to max out the PCIe lanes on this just for Iray rendering, but this machine has 3 monitors, and now 2 are running from the cards on the risers, so they lag.

    Getting back to the OP, I just wanted to say nice setup! I hope you'll show us some pics of it. Mine is disassembled at the moment, but here it is in its former glory. I don't know what would cause the problem you're describing. The only thing I could think of is that the motherboard doesn't have enough resources to support all the PCIe slots fully. BUT I'm no hardware guru, so take that with a grain of salt.

    As far as people saying to remove the slower cards: no, slower/low-memory cards will not hinder performance or limit your rendering options, but I think you already know that. A card either renders or it doesn't, and it won't slow down the other cards.

    Thanks for returning to my original question ;)

    I'll try to get a pic of it tonight; it's not pretty though. The video cards are literally screwed to a bookshelf that sits right next to my case ;) During my next upgrade I am looking at getting some stackable cases so my cat can't get in it anymore. And yeah, my thought is I may be overtaxing the system with that many cards.

    And for the Quadro question: so for rendering and the cost, a Quadro isn't really worth it then, unless I'm specifically buying them to build an Iray Server. What about Titan cards? I know they are expensive as crap, but picking up 1 or 2 RTX Titans looks promising and cost-effective for me, considering that with NVLink it would give me 48 GB of VRAM and 9,216 CUDA cores with 2 cards, plus whatever advancements the RT and Tensor cores eventually bring to Iray.

    Anyone know how the server treats Titan cards? I.e., does Iray Server support Titans?

  • Just a word of caution ... I cannot speak from personal experience ... but I have heard many reports that linked graphics cards and Daz Studio do not seem to get along well.

  • Just a word of caution ... I cannot speak from personal experience ... but I have heard many reports that linked graphics cards and Daz Studio do not seem to get along well.

    SLI is not supported and should be turned off for Iray, yes - but that's just a setting in the nVidia Control Panel as I understand it.

  • Just a word of caution ... I cannot speak from personal experience ... but I have heard many reports that linked graphics cards and Daz Studio do not seem to get along well.

    SLI is not supported and should be turned off for Iray, yes - but that's just a setting in the nVidia Control Panel as I understand it.

    Yes, and SLI is pretty pointless even in gaming these days, so there would be little reason to ever turn it on.

  • You're the guy telling people they need a Quadro for 10-bit Photoshop and Premiere. With cards like the WX 5100 out there, why did you do that?

    I simply pointed out that you were wrong.

    Cost of a Quadro P400: $100.

    Cost of a WX 5100: $500, and no CUDA.

    Who's overpaying, and who is wrong?

    The P400 cannot do either 4K 10-bit video editing or 4K 10-bit photo editing. Since all professional work is now done at 4K, I have no idea why you even brought up the P400. Further, the WX 5100 is $335, not $500. Anyone can check Google and see that.
