Help on Choosing a GPU for Iray

Hello all.

Iray has blown me away. I am making plans to buy a new card by the end of the year. I went to JimZombie's website and found an article on the matter. I have decided on the GTX 960. This is the one I am getting from EVGA: http://www.newegg.com/Product/Product.aspx?Item=N82E16814487128&cm_re=gtx_960-_-14-487-128-_-Product

The question I am asking is: am I making a good choice for my PC?

My case: http://www.newegg.com/Product/Product.aspx?Item=N82E16811148066

My motherboard: http://www.newegg.com/Product/Product.aspx?Item=N82E16813135361

Thanks Again,

~GO :D

 


Comments

  • nicstt Posts: 11,715
    edited September 2015

    There have been a lot of threads on this; try a search for more detailed info.

    Go for the 900 series, and the more memory the better. The 970 is the best combination of value and performance; the 980 Ti is perhaps best overall, although the Titan gives more memory, which for some scenes is vital.

    I have a 970 and a 980 Ti; the 980 Ti can be fairly close to twice as fast, although it does vary. I have run out of memory on the 980 Ti with five figures and a modest number of scene/prop items, and also with fewer figures. So while the Titan would have been useful, with its extra memory, I couldn't justify the cost. Two 970s will give a little more performance than a 980 Ti, but they will have less memory (it is NOT shared), and they will use a lot more power doing it. In addition, you will need to consider the implications of two cards as opposed to one for the rest of your system - mainly the PSU for providing the power, and the motherboard for supporting two or more cards. The PSU is very important, in some respects much more than the case - as long as the case fits what you want; but a properly planned build takes the installation into consideration to ensure adequate cooling/airflow. More fans isn't the answer; it has to be planned.
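
    As a rough sizing sketch in Python (the wattage figures here are ballpark assumptions, not measurements):

        # Back-of-the-envelope PSU sizing: sum the big consumers, add headroom.
        parts = {"CPU": 90, "GTX 970 #1": 145, "GTX 970 #2": 145, "board/drives/fans": 75}
        load = sum(parts.values())
        print(load, "W load ->", round(load * 1.4), "W PSU with ~40% headroom")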

    If you're serious about rendering, then I'd look at Intel as opposed to AMD; preferably an i7, but the unlocked i5 is decent. It depends on budget, of course, and compromises have to be made somewhere for most of us. That motherboard only takes one graphics card, so you are very restricted; it means you have to buy a better card when upgrading, or scrap the board and buy a new one.

    It is a false economy, but again dependent on budget, not to plan for upgrades and expansion.

    Post edited by nicstt on
  • ronmolina Posts: 118
    edited September 2015

    I use the Titan X only because it has more memory. The max you can get with the 980 Ti is 6GB, and it is arguably as fast as the Titan X, which has 12GB. I have scenes that are pushing the 12GB. So for me, the more memory the better, for now. There is a place on the net that shows the performance of each card with Iray (I cannot remember where it was) which showed the Titan X as the fastest, with the exception of some very expensive hardware costing over $10,000. The 970 overclocked I believe was second, with the 980 Ti just slightly behind, but you can get 6GB with the 980 Ti.

    Post edited by ronmolina on
  • Don't forget: if you need more memory and can't afford a card with it, you can break your scenes into separate renders and composite them, like the movie/TV studios do. It will also save render time.

     

  • Oso3D Posts: 15,085
    How well does Iray work with multiple cards?
  • How well does Iray work with multiple cards?

    Fine, but there isn't a lot of feedback/monitoring, so that's a pain point for me. It works, though.
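
    If you want some rough monitoring in the meantime, you can poll the cards yourself. A minimal sketch in Python (assuming the nvidia-ml-py package, which provides the pynvml bindings, is installed):

        # Poll GPU load and memory once a second -- crude, but better than nothing.
        # Stop with Ctrl+C.
        import time
        import pynvml

        pynvml.nvmlInit()
        handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
                   for i in range(pynvml.nvmlDeviceGetCount())]
        try:
            while True:
                for i, h in enumerate(handles):
                    util = pynvml.nvmlDeviceGetUtilizationRates(h)
                    mem = pynvml.nvmlDeviceGetMemoryInfo(h)
                    print(f"GPU {i}: {util.gpu}% load, {mem.used / 1024**3:.1f} GB used")
                time.sleep(1)
        finally:
            pynvml.nvmlShutdown()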

  • My point is: buy the most CUDA cores you can afford.

    My second point is: with many CUDA cores, you really feel the difference. Having an instant preview, working in real time with textures, finalizing a render in a short time, all of this makes the difference. The learning curve is shortened, and so is the test phase. It is not only the render time; your whole productivity is boosted a lot.

    If you can afford it, a 980 Ti makes a lot of difference. If you don't need too much GPU RAM, then two 980 Tis are awesome. The problem is you will need a bigger case and a bigger power supply.

  • Oso3D Posts: 15,085

    The Alienware X51, which I have my eye on, can support up to 3 cards. My inclination is to get one 980ti, then add more as budget allows.
    But if one Titan has some significant advantages over multiple cards (like if memory is limited to that of one card), then that shapes my decision.

     

  • larsmidnatt Posts: 4,511
    edited September 2015

    The Alienware X51, which I have my eye on, can support up to 3 cards. My inclination is to get one 980ti, then add more as budget allows.
    But if one Titan has some significant advantages over multiple cards (like if memory is limited to that of one card), then that shapes my decision.

     

    Scenes have to fit on each card individually. 6GB + 6GB does not make 12GB; it makes 6GB running roughly twice as fast. The cards don't work together, they work in parallel, basically.
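
    You can see this for yourself: each card reports its own VRAM, and the budget for a scene is the smallest card's memory, not the sum. A minimal sketch in Python (again assuming the nvidia-ml-py package is installed):

        # Each GPU reports its own memory; nothing is pooled for rendering.
        import pynvml

        pynvml.nvmlInit()
        free_per_card = []
        for i in range(pynvml.nvmlDeviceGetCount()):
            mem = pynvml.nvmlDeviceGetMemoryInfo(pynvml.nvmlDeviceGetHandleByIndex(i))
            free_per_card.append(mem.free)
            print(f"GPU {i}: {mem.total / 1024**3:.1f} GB total, {mem.free / 1024**3:.1f} GB free")

        # The scene must fit on each card individually, so the effective budget
        # is the minimum across cards, not the total.
        print(f"Scene budget: {min(free_per_card) / 1024**3:.1f} GB")
        pynvml.nvmlShutdown()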

    Post edited by larsmidnatt on
  • Oso3D Posts: 15,085

    Hrm. Sounds like I should get a Titan, then, should I be able to afford it.

     

  • nicstt Posts: 11,715

    I couldn't justify the extra cost of the Titan; 50% more cash for 6GB more memory. I was tempted, though, but decided a second monitor was more important.

    It is possible that with DirectX 12 memory sharing could be on the cards; it is expected to be possible with games. Even if it's possible, it doesn't mean Nvidia will enable it.

  • larsmidnatt Posts: 4,511
    edited September 2015
    nicstt said:

    I couldn't justify the extra cost of the Titan; 50% more cash for 6GB more memory. I was tempted, though, but decided a second monitor was more important.

    It is possible that with DirectX 12 memory sharing could be on the cards; it is expected to be possible with games. Even if it's possible, it doesn't mean Nvidia will enable it.

    This isn't gaming, so the applications have different needs. For games and SLI, the GPUs work together. For GPU rendering, they work independently of each other and merge the data.

    Though with Octane Render you can use system memory to expand the amount of RAM used for renders, I don't know if two cards share the same system RAM or whether you need double. I haven't used the out-of-core (OOC) memory feature much or paid attention to how much RAM it eats up.

    Post edited by larsmidnatt on
  • mtl1 Posts: 1,508
    GumpOtaku said:

    Hello all.

    Iray has blown me away. I am making plans to buy a new card by the end of the year. I went to JimZombie's website and found an article on the matter. I have decided on the GTX 960. This is the one I am getting from EVGA: http://www.newegg.com/Product/Product.aspx?Item=N82E16814487128&cm_re=gtx_960-_-14-487-128-_-Product

    The question I am asking is: am I making a good choice for my PC?

    My case: http://www.newegg.com/Product/Product.aspx?Item=N82E16811148066

    My motherboard: http://www.newegg.com/Product/Product.aspx?Item=N82E16813135361

    Thanks Again,

    ~GO :D

     

    I am using the same video card and it does decently well for the price. However, you'll definitely outgrow the 960 4GB if you're an enthusiast renderer. I personally haven't reached the 4GB limit of the card with my scenes and they all seem to render at a reasonable rate.

    Just keep in mind that there's no such thing as "future proofing" in computer components, no matter the use case. It's all a game of trying to avoid buying twice.

  • StratDragon Posts: 3,273
    edited September 2015

     

    How well does Iray work with multiple cards?

    It should be fine; it was designed to work under those conditions, but it does have RAM limitations. All the cards in the "pool" need to be able to render the scene as if each were the only card available. They will combine CUDA cores, but not GPU memory. Having, for instance, three Titan Xs with 12GB each (12GB x 3 = 36GB) would still give you only 12GB of GPU RAM available for an Iray render, at the time of this writing.
    If your scene takes 4GB and one of your cards only has 3GB, that card will drop out of the "pool" and no longer contribute to the render.
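
    The drop-out rule is simple enough to sketch in a few lines of Python (illustrative only; the card names and sizes here are made up):

        # Cards whose VRAM can hold the whole scene stay in the pool and
        # contribute CUDA cores; the rest drop out entirely.
        def iray_pool(cards_gb, scene_gb):
            return {name: vram for name, vram in cards_gb.items() if vram >= scene_gb}

        cards = {"GTX 970": 4, "GTX 980 Ti": 6, "Titan X": 12}
        print(iray_pool(cards, 5))   # the 4GB 970 drops out; the other two render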
     

    Post edited by StratDragon on
  • fastbike1 Posts: 4,078
    edited September 2015

    While I think a beefy power supply is a great investment, I'm not sure folks realize that current GeForce cards only need 140 to 250 watts each.

    Older (than the current offerings) GeForce cards are hard to find and typically almost as expensive as new. If you need more than 4GB for a scene, your only choices in new cards are the GTX 980 Ti and the Titan X.

    The Titan X in the US is ~$1000; the GTX 980 Ti is ~$650. The 980 Ti has almost as many CUDA cores (2816) as the Titan X (3072), but half the memory (6GB vs 12GB).

    I wouldn't recommend less than the GTX 970 if you are getting a new card for an older (<2 years) machine. The 970 has 4GB of memory (most 960s and below have 2GB, which won't be enough) and 1664 CUDA cores. It costs about $340 USD and needs 140 watts.

    I wouldn't consider a 980 because it's only about $100 less than a 980 Ti but has only 4GB and roughly 800 fewer CUDA cores.

    3D rendering has always been hardware intensive. You pays your money and you makes your choice.

    Post edited by fastbike1 on
  • SixDs Posts: 2,384

    "with Octane render you can use system memory to expand the amount of Ram"

    If Octane is leveraging system RAM to offset shortfalls in video RAM, then it must be using a hybrid solution: using the GPU for part of the rendering task, and the CPU for another. There currently is no technology built into video cards that I am aware of that will allow the card to tap into system memory to supplement the onboard memory. Nvidia has talked of this, but I personally will believe it when I see it, and I wouldn't expect to see it at all except at the very high end.

  • Thanks for the help so far; just one little note.

    This computer has been assembled and is working. I am thinking about upgrading to a GPU suitable for Iray.

    Hope this clarifies things.

    ~GO :D

  • DAZ_Spooky Posts: 3,100
    edited September 2015

    In general, or on average, a 4GB card will handle 3-4 figures with clothes, hair, and an environment. A 2GB card may or may not handle one figure with clothing and hair.

    As long as the scene fits on the card, the number of CUDA cores is the important factor. If the scene doesn't fit on the card, the number of CUDA cores doesn't matter, as it won't use any.

    Performance-wise, between the 980 Ti and the Titan X, the speed difference is about 3%. Does the 12GB really justify the extra cost? That is entirely up to you and how you use it.

    There are currently two cards on the market faster than the 980 Ti: the Titan X and the M6000. Though depending on the specific cards in question, they are close enough that the 980 Ti may be faster in some comparisons. :)

     

     

    Post edited by DAZ_Spooky on
  • Oso3D Posts: 15,085

    Is there any rough guideline to render size? Does it relate directly to file save size, or ... not?

     

  • SixDs Posts: 2,384

    An excellent question, William, and one that, surprisingly, seldom seems to be asked. My assumption would be that most people simply assume the amount of video RAM required equals the cumulative total of the file sizes of the assets in the scene. While this is true to a certain extent, with more and larger assets requiring more memory, the actual amount of memory used depends on far more than that: it is also influenced by your render settings and by the amount and type of computation needed to accommodate shaders, transparency, etc. In short, there is no quick and easy way to accurately determine in advance how much memory will be required, since the calculations are so many and so complex. A utility that did this would be nice, but it might take so long to complete its calculations that one would be better off just sticking to trial and error.

  • Havos Posts: 5,581

    Not sure why you think file save size is any indicator of the amount of video RAM needed. A lot of my scenes save in less than 1MB, but DS will need multiple GB to load them. The RAM used by DS to load the scene is a better indicator, but this can also be misleading: a Genesis 1/2/3 character can easily need hundreds of MB to load into memory if you have a lot of morphs installed, yet the morph definition data is not transferred to video memory, only the mesh geometry and the texture files. It is the latter that matter most, and as Genesis 3 has a lot of 4000x4000 texture files, including files for diffuse, bump, and specular, these can easily eat up hundreds of MB of video RAM. Conversely, a few detailed props (in terms of geometry) that have simple or no textures will take up very little video RAM.

    DAZ Spooky's rough guide of one character for 2GB and 2-4 with 4GB seems about right from my own experience. However, this refers to foreground characters. You can add multiple background characters if they use low-poly figures and/or simpler textures (Lorenzo and Loretta are good for this, but M3/V3 are also good due to their low-resolution texture files). Background characters should ideally be stripped of their bump/specular maps, and better still if they share textures (e.g. if all are wearing the same uniform). For rough numbers, see the sketch below.
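
    To put rough numbers on the texture side, here is a back-of-the-envelope sketch in Python (the uncompressed 8-bit RGBA storage and the three-map count are my assumptions; real renderers may store textures differently):

        # Rough texture footprint: width x height x channels x bytes per channel.
        def texture_mb(width, height, channels=4, bytes_per_channel=1):
            return width * height * channels * bytes_per_channel / 1024**2

        one_map = texture_mb(4096, 4096)   # ~64 MB per 4k map
        print(one_map * 3)                 # diffuse + bump + specular: ~192 MB per figure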

  • Oso3D Posts: 15,085

    My question was along the lines of what _additional_ memory load the output resolution entails.

    Obviously, you could have a 100 x 100 pixel image but you have 300 characters with unique textures. A lot of the memory required is for scene assets.

    What I'm not clear about is, after you've accounted for what's in the scene, how much MORE is due to image size, and whether that effect scales one-to-one with image size, by some multiplier, or nonlinearly.

     

  • Havos Posts: 5,581

    Ah, now I understand the question. I am not 100% sure, but I would have thought that the output resolution has little or nothing to do with the amount of video card RAM needed. I guess the final image is kept on the card somewhere (though I am not certain this is correct; it may be in main memory only), but either way, the memory needed for a single image will be trivial compared to all your scene textures etc., which are loaded regardless of output image size. I suspect (but again I am not certain) that some off-screen assets are not loaded onto the card, but that would be independent of the output resolution.

  • larsmidnatt Posts: 4,511
    edited October 2015
    SixDs said:

    "with Octane render you can use system memory to expand the amount of Ram"

    If Octane is leveraging system RAM to offset shortfalls in video RAM, then it must be using a hybrid solution: using the GPU for part of the rendering task, and the CPU for another. 

    No, that is not the case. It's all GPU. It doesn't work as well for people with older, slower PCIe connections, but it has no negative performance impact on my rig, which is an older i7.
    Some examples:

    http://www.daz3d.com/forums/discussion/comment/916793/#Comment_916793

    http://www.daz3d.com/forums/discussion/comment/903251/#Comment_903251

     


    What I'm not clear about is, after you've accounted for what's in the scene, how much MORE is due to image size, and whether that effect scales one-to-one with image size, by some multiplier, or nonlinearly.

     

    This just depends on the size of the frame rendered; the textures and everything else don't matter. Just add the frame buffer to the texture/geometry data and you have your answer.
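
    As a rough sketch of that arithmetic in Python (the 4-channel 32-bit float buffer is my assumption; Iray's internal buffers may differ):

        # Frame-buffer cost grows linearly with pixel count, nothing else.
        def framebuffer_mb(width, height, channels=4, bytes_per_channel=4):
            return width * height * channels * bytes_per_channel / 1024**2

        print(framebuffer_mb(1920, 1080))   # ~32 MB
        print(framebuffer_mb(3840, 2160))   # ~127 MB -- still small next to textures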

    Post edited by larsmidnatt on
  • nDelphi Posts: 1,920
    mtl1 said:

    I am using the same video card and it does decently well for the price. ...  I personally haven't reached the 4GB limit of the card with my scenes and they all seem to render at a reasonable rate.

    What is a "reasonable rate"? Do you have a gallery where I can see the scenes you are rendering with the 960? I am looking to purchase one as I have no budget for a more powerful card.

  • Havos Posts: 5,581
    edited October 2015
    nDelphi said:
    mtl1 said:

    I am using the same video card and it does decently well for the price. ...  I personally haven't reached the 4GB limit of the card with my scenes and they all seem to render at a reasonable rate.

    What is a "reasonable rate"? Do you have a gallery where I can see the scenes you are rendering with the 960? I am looking to purchase one as I have no budget for a more powerful card.

    It depends so much on the scene that it is difficult to say what a reasonable rate is for a given graphics card. I have a GTX 970, which has roughly 60% more CUDA cores than a GTX 960. I am seeing typical speed increases of about 10 times when using GPU only with my card, compared to CPU only. With a GTX 960 you would get a 6-7 times increase, assuming the scene fit into the 2GB of memory. I can render a nude V7 in 50 seconds, and with clothes and hair it takes about 90 seconds. The image below was rendered in 90 seconds (so would take around 140 seconds on a GTX 960). Note that this uses nothing more than the default HDR for lighting and no background. If I were to stick the same V7 into a dark room lit by nothing but a few candles, the render would take much longer (probably hours before most of the image grain was removed).
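
    The scaling I'm using is just core count, shown here as a naive Python sketch (real scaling is rarely perfectly linear):

        # Naive render-time scaling by CUDA core count.
        cores = {"GTX 960": 1024, "GTX 970": 1664}
        t_970 = 90                                           # seconds, from my 970 render
        print(t_970 * cores["GTX 970"] / cores["GTX 960"])   # ~146 s estimated on a 960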

    [Attachment: V7 90 Seconds.png, 1920 x 1080, 928K]
    Post edited by Havos on
  • nDelphi Posts: 1,920
    Havos said:
    nDelphi said:
    mtl1 said:

    I am using the same video card and it does decently well for the price. ...  I personally haven't reached the 4GB limit of the card with my scenes and they all seem to render at a reasonable rate.

    What is a "reasonable rate"? Do you have a gallery where I can see the scenes you are rendering with the 960? I am looking to purchase one as I have no budget for a more powerful card.

    It depends so much on the scene that it is difficult to say what a reasonable rate is for a given graphics card. I have a GTX 970, which has roughly 60% more CUDA cores than a GTX 960. I am seeing typical speed increases of about 10 times when using GPU only with my card, compared to CPU only. With a GTX 960 you would get a 6-7 times increase, assuming the scene fit into the 2GB of memory. I can render a nude V7 in 50 seconds, and with clothes and hair it takes about 90 seconds. The image below was rendered in 90 seconds (so would take around 140 seconds on a GTX 960). Note that this uses nothing more than the default HDR for lighting and no background. If I were to stick the same V7 into a dark room lit by nothing but a few candles, the render would take much longer (probably hours before most of the image grain was removed).

    Yes, I understand that it depends on the scene; that's why I asked to see his renders. You did answer THE question though: a darker scene vs a lighter one. I would be purchasing a GTX 960 with 4GB (I think the 960 goes up to 4GB?). There are also several flavors of them.

  • Havos Posts: 5,581

    A 4GB card makes a lot of sense; you can do much more with it. My understanding is that a fair bit of memory disappears in various overheads, so a 4GB card actually gives you more than double the space for your actual scene: if, say, around 700MB goes to overhead, a 2GB card leaves you roughly 1.3GB while a 4GB card leaves roughly 3.3GB, about two and a half times as much. What is the price difference between a 4GB 960 and a 970?

  • nDelphi Posts: 1,920
    Havos said:

    A 4GB card makes a lot of sense; you can do much more with it. My understanding is that a fair bit of memory disappears in various overheads, so a 4GB card actually gives you more than double the space for your actual scene: if, say, around 700MB goes to overhead, a 2GB card leaves you roughly 1.3GB while a 4GB card leaves roughly 3.3GB, about two and a half times as much. What is the price difference between a 4GB 960 and a 970?

    GTX 960 4GBs: $210+

    GTX 970 4GBs: $330+

  • Oso3D Posts: 15,085

    Load larger and larger scenes. If your card melts and sets your computer on fire, it wasn't big enough!

     

  • ronmolina Posts: 118

    Load larger and larger scenes. If your card melts and sets your computer on fire, it wasn't big enough!

     

    I actually had that happen about 5 years ago. I believe the fan on the video card failed when I was rendering a Vue scene for several hours. It was an old card.
