Video Card: CUDA cores vs memory - which is more important?

mmitchell_houston Posts: 2,472
edited March 2016 in The Commons

I have a “secondary” computer that I am using to test drive Daz Studio 4.9, and to use when I have a render that I want to let run for a few days without tying up my main computer. Lately, I’ve been thinking of spending about $100 - $150 to put a new video card in it.

Right now it has an EVGA GeForce 210 512 MB DDR3 PCI Express. (It has 16 CUDA cores)

I do NOT have a specific card in mind. This isn’t my main computer, so I don’t want to go top of the line, and $150 is really the very max of my budget. I’m really leaning toward a lower-end card in the $100-$120 range.

Still – I’d like to see a performance boost from this purchase. Which brings me to the question I’ve been thinking about: To get a speed boost in rendering, should I focus more on getting a high number of CUDA cores, or on getting more VRAM?

 

For example, should I get a card with 4 GB of memory, but only about 300 CUDA cores?

Or should I get a card with only 2 GB of memory, but 600 CUDA cores?

I know this is a very subjective question -- and that a lot depends on what type of scenes I'm rendering. So, if you had this budget and this situation, what would YOU do?

--------------

Other specs on the computer: It’s a dual-core Pentium 2 with 16 GB of RAM. I may also (eventually) upgrade the processor.

Post edited by mmitchell_houston on

Comments

  • kaotkbliss Posts: 2,914

    The number of Cuda cores won't matter much if the scene doesn't fit in the video card memory.

    Just my thoughts.
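
    If you want to see that limit on your own machine, here is a minimal sketch (not Daz-specific; it assumes the NVIDIA driver's nvidia-smi tool is installed and on the PATH) that prints how much of each card's memory is in use. Run it while a render is going; as noted in this thread, once a scene no longer fits, Iray falls back to the CPU.

        # Poll VRAM usage via nvidia-smi (ships with the NVIDIA driver).
        # The query fields below are nvidia-smi's own; nothing here is Daz Studio API.
        import subprocess

        def gpu_memory():
            out = subprocess.check_output(
                ["nvidia-smi",
                 "--query-gpu=name,memory.total,memory.used,memory.free",
                 "--format=csv,noheader,nounits"],
                text=True)
            for line in out.strip().splitlines():
                name, total, used, free = [f.strip() for f in line.split(",")]
                print(f"{name}: {used} MiB used of {total} MiB ({free} MiB free)")

        if __name__ == "__main__":
            gpu_memory()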

  • Havos Posts: 5,306

    I have a 4GB card, and so far that has handled every scene I have thrown at it. As such I feel a 4GB card is the minimum you need, and after that you look at the number of cores. From my understanding, a 2GB card has less than half the space of a 4GB card when it comes to room available for scene content, since a fair bit of memory disappears in overhead regardless of what is in the scene.
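
    A rough illustration of that point (the fixed overhead figure below is just an assumption; the real number varies by driver, display setup, and Iray version), showing why the usable space on a 2GB card ends up well under half that of a 4GB card:

        # Illustrative only: the per-card overhead is an assumed value, not a measurement.
        FIXED_OVERHEAD_GB = 0.7  # assumed: driver + display + renderer working space

        for total_gb in (2, 4, 6, 8):
            usable = total_gb - FIXED_OVERHEAD_GB
            print(f"{total_gb} GB card -> roughly {usable:.1f} GB left for the scene")

        # With ~0.7 GB gone off the top, a 2 GB card keeps ~1.3 GB while a 4 GB card
        # keeps ~3.3 GB, i.e. the smaller card has well under half the usable space.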

  • kaotkbliss Posts: 2,914

    My scenes tend to get pretty large and more often than not, I exceed my card's 4GB limit :(

  • Havos Posts: 5,306

    kaotkbliss said:

    My scenes tend to get pretty large and more often than not, I exceed my card's 4GB limit :(

    Clearly the more memory the better, but if the OP only has $150 for a video card, I am pretty sure there are no 6GB+ cards available for that price, so his choice is really down to 2 or 4 GB. My advice is for a 4GB one with fewer cores, but if the renders are mainly a single figure and a relatively simple background, then 2GB might be a better option.

  • prixat Posts: 1,585

    I went for the 4GB 750ti, how much does that cost in your area?

  • Okay, you guys are saying what I thought: Memory is more important in this situation.

    Thanks.

     

  • fastbike1 Posts: 4,074

    It's not that simple. The question is what kinds of scenes do you develop and what are your priorities? $150 isn't going to get you much of anything that will make a noticeable difference. $200 - 250 is what you need for a GTX960. This is the cheapest 4GB card you can get. A GTX 950 is only 2GB.

    Frankly, your hardware is pretty minimal to be doing much of anything "faster".

  • Charlie Judge Posts: 12,328
    fastbike1 said:

    It's not that simple. The question is what kinds of scenes do you develop and what are your priorities? $150 isn't going to get you much of anything that will make a noticeable difference. $200 - 250 is what you need for a GTX960. This is the cheapest 4GB card you can get. A GTX 950 is only 2GB.

    Frankly, your hardware is pretty minimal to be doing much of anything "faster".

    Not exactly true. While most GTX750ti cards are 2 GB, you can get a 4 GB one for about $150: https://www.google.com/?gws_rd=ssl#q=GTX+750+ti+4gb&tbm=shop&spd=16305686193211254503

  • Charlie Judge said:

    Not exactly true. While most GTX750ti cards are 2 GB, you can get a 4 GB one for about $150: https://www.google.com/?gws_rd=ssl#q=GTX+750+ti+4gb&tbm=shop&spd=16305686193211254503

    I'll look into this card. Thanks.

  • DestinysGarden Posts: 2,550

    Hey Mitchell, it seems you have already come to this conclusion, but I'll chime in anyway. Go for the best 4GB card you can. One figure with good textures, clothes, hair, and a simple background can push 2GB easily, and if your scene doesn't fit in that, it doesn't matter how many CUDA cores you have, because they won't be used effectively.

    One other important thing to keep in mind is your computer's power supply. Check that your computer has the right specs to be compatible with the cards you are considering.

    I found this one pretty quickly on NewEgg, only $100.

    http://www.newegg.com/Product/Product.aspx?Item=N82E16814487043&nm_mc=KNC-GoogleAdwords-PC&cm_mmc=KNC-GoogleAdwords-PC-_-pla-_-Video+Card+-+Nvidia-_-N82E16814487043&gclid=CKiP59HBxcsCFQ4zaQod8r8Org&gclsrc=aw.ds

    Double the price, and outside of your max budget at $200, but triple the cores

    http://www.newegg.com/Product/Product.aspx?Item=N82E16814487154&nm_mc=KNC-GoogleAdwords-PC&cm_mmc=KNC-GoogleAdwords-PC-_-pla-_-Video+Card+-+Nvidia-_-N82E16814487154&gclid=CN338M_BxcsCFZSEaQodfVgA0w&gclsrc=aw.ds

    EVGA, Asus, and MSI all have good reputations for slightly better prices than NVIDIA, so check reviews too. I'd think a 4GB 740 or thereabouts should give you a real performance boost for your secondary system, and absolutely they can be had for under $150. Good luck.

  • DustRider Posts: 2,691
    edited March 2016

    To meet your parameters, the GTX 750ti looks like the best option (and it has very modest power requirements), and B&H is a very good company. With Iray, I would consider 3GB the minimum; 2GB is usable, but your scenes will have to be very simple. CUDA cores are quite important as well, but as mentioned above, if your scene won't fit in GPU memory, then you can't use the GPU to render at all.

    In the same position as you are, I would get the most bang for the buck on a 4GB card, so the GTX 750ti would be my first option, then probably a GT 740 with 4GB.

    Post edited by DustRider on
  • mmitchell_houston Posts: 2,472
    edited March 2016

    DestinysGarden said:

    One other important thing to keep in mind is your computer's power supply. Check that your computer has the right specs to be compatible with the cards you are considering.

    Double the price, and outside of your max budget at $200, but triple the cores

    http://www.newegg.com/Product/Product.aspx?Item=N82E16814487154&nm_mc=KNC-GoogleAdwords-PC&cm_mmc=KNC-GoogleAdwords-PC-_-pla-_-Video+Card+-+Nvidia-_-N82E16814487154&gclid=CN338M_BxcsCFZSEaQodfVgA0w&gclsrc=aw.ds

     

     

    EVGA, Asus, and MSI all have good reputations for slightly better prices than NVIDIA, so check reviews too. I'd think a 4GB 740 or thereabouts should give you a real performance boost for your secondary system, and absolutely they can be had for under $150. Good luck.

    I must admit, it's tempting to double the budget and get something really nice. And, you're spot-on about the power supply requirements. I think the unit has a 500 W power supply, but I'm going to double check tonight and -- if not -- buy a new power supply, as well (money always seems to go out faster than it comes in). If this were my main system, I'd be willing to spend a lot more. But I'm actually saving my money for when the new Pascal chipsets ship later this year. At that time, I'm planning to sink $2k or more into a new system that's tricked out for rendering speed and power. But, even with the anticipated increased horsepower, I still like having an older, "adequate" machine available in case I want to do some light rendering or a smaller project while my main machine is tied up with bigger work.

    Thanks for the input.

    Post edited by mmitchell_houston on
  • Dayanne Valcorsair
    edited January 2017

    I am still confused.

    Which is best and which is best among the "normal" cards? Also which one is best for Iray?

    780ti

    980ti

    1050

    1060

    1070

    1080

    Would one 780ti be better than one 1080?

    Would dual 780tis be better than one 1080? There is less RAM on the 780ti than on the 1080, but more CUDA cores.

    Would dual 780tis be better than one 980ti?

    I am thinking about a scene with like four characters. My CPU is an i7-5820K at 3.3 GHz, with 16 GB of RAM.

    Post edited by Dayanne Valcorsair on
  • fastbike1 Posts: 4,074

    Better depends on what scenes you render, and on what scenes you would LIKE to render now and in the future.

    With Iray it's not a matter of VRAM or CUDA cores. Both are important. Infinite CUDA cores won't matter if your scene doesn't fit in VRAM. Infinite VRAM won't matter if you only have a few hundred CUDA cores.

    For example, 2 780TIs won't be faster than 1 980TI for a scene that needs 4GB, since the scene won't fit in the 780TI's 3GB and will fall back to the CPU. This is not a huge scene: 4 characters with textures, outfits, decent hair, and some background will probably need more than 4GB.

    I frankly wouldn't consider anything other than a 980TI or a 1080. The attached scene is 2.5 GB when rendering at 3000x3818.

     

    [Attached image: Janya ML7 11x14 KSL1 L-R photo2.jpg, 3000 x 3818, 638K]
  • mjc1016 Posts: 15,001

    Dayanne Valcorsair said:

    I am still confused.

    Which is best and which is best among the "normal" cards? Also which one is best for Iray?

    780ti

    980ti

    1050

    1060

    1070

    1080

    Would one 780ti be better than one 1080?

    Would dual 780tis be better than one 1080? There is less RAM on the 780ti than on the 1080, but more CUDA cores.

    Would dual 780tis be better than one 980ti?

    I am thinking about a scene with like four characters. My CPU is an i7-5820K at 3.3 GHz, with 16 GB of RAM.

    And comparing CUDA cores across generations is not worthwhile.  While the dual 780s may have more cores, that is now (probably) two-generation-old technology...and the lower number of 1080 cores will do more in the same amount of time than the greater number of 780 cores.  It's a function of the higher clock speed and smaller die size that allows this.

  • hphoenix Posts: 1,335
    edited January 2017

    Dayanne Valcorsair said:

    I am still confused.

    Which is best and which is best among the "normal" cards? Also which one is best for Iray?

    780ti

    980ti

    1050

    1060

    1070

    1080

    Would one 780ti be better than one 1080?

    Would dual 780tis be better than one 1080? There is less RAM on the 780ti than on the 1080, but more CUDA cores.

    Would dual 780tis be better than one 980ti?

    I am thinking about a scene with like four characters. My CPU is an i7-5820K at 3.3 GHz, with 16 GB of RAM.

    Now that 4.9.3 has been released with 1000-series support, we'll be able to benchmark with a bit more comparability.  After we get some good solid benchmarks (I'll be running one later this evening after I update) we will be able to get an idea of the 1000 series performance.

    Also, remember heat/power can have an effect on performance.  If a pair of 780Ti cards are running full tilt, they're dissipating about 800 - 900 Watts of heat.  If you aren't running crazy cooling on them and the case, those cards will rapidly get thermally limited (i.e., they'll slow their clock down to reduce heat and keep the card from overheating).  A pair of 1080 cards, on the other hand, will only be putting out about 360 Watts at full load.  It takes a lot less to run them, a lot less cooling to keep them running at full power, and they run faster (the base clock on the 1080 is 1600MHz, the 780Ti is around 1100MHz).  So while the 780Ti may have more cores, those cores are less efficient, and running slower, than the cores on the 1080.

    Also realize that for very quick renders, you may not see those benefits as much.  But when you get to longer, more involved renders, that is when thermal load and such becomes an issue.  And a big issue it can be.

    And while you may get a pair of 780Ti cards cheaper than a pair of 1080s, remember that most will have to buy a MUCH bigger power supply to run them.  A 600W power supply could run dual 1080s, but dual 780Ti cards would require at least a 1000W power supply, probably more.  That's a pretty significant extra bit of money that adds to the cost of the cards.  If you have to buy a $250 power supply, then the 780Ti cards have to be ($1200 - $250 = $950, $950 / 2 = $475) under $475 to even be considered cheaper.  Most 780Ti cards you find on ebay are used, and run around $250 + shipping.  They'll be out of warranty.
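
    Put as a quick sketch, the same break-even arithmetic (the prices are just the assumptions from the paragraph above, not current quotes; the point is that the bigger PSU is part of the 780Ti's real cost):

        # All prices are assumptions carried over from the comparison above.
        pair_of_1080s = 1200.0        # assumed price for two GTX 1080s
        extra_psu_for_780tis = 250.0  # assumed ~1000W PSU needed for dual 780Ti

        budget_left_for_780tis = pair_of_1080s - extra_psu_for_780tis  # 950.0
        break_even_per_780ti = budget_left_for_780tis / 2              # 475.0
        print(f"Each used 780Ti must cost under ${break_even_per_780ti:.0f} "
              f"for the pair to actually come out cheaper than two 1080s.")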

    And let's not forget, the 780Ti comes with 3GB of memory.  Anything beyond very basic scenes won't fit in it, and you're back to the CPU for rendering.  The 1080 comes with 8GB of memory.  It fits pretty large scenes.  (Memory isn't additive in Iray....2x 8GB cards still only hold up to an 8GB scene.)

    From what we've seen so far in the Benchmarks thread, the 1080 is slightly faster than the 980Ti (about 10% - 15%), has 33% more memory, and uses about 60% less power.  That's at stock speed.  The 1070 is about 20% slower than the 980Ti, has 33% more memory, and uses about 65% less power.  Dual 1070s would probably be the sweet spot for power/price at this point.  1060s have 6GB, and 1050 Ti cards only have 4GB.  Titan X (Pascal) cards are very pricey still, but are pretty impressive (and have 12GB of memory).

    Dual GTX 1070 cards would be around $800, would use about 300W total, and provide about 50% more speed than a single 980Ti, while still having 33% more RAM.

    (...and you don't have to buy used....)

     

     

    Post edited by hphoenix on
  • Taoz Posts: 9,733

    The specs for the GTX 1070 say PCIe 3.0 x16. Will performance be reduced with PCIe 1.0 x16 (assuming it's backwards compatible)?

  • hphoenix Posts: 1,335
    Taozen said:

    The specs for the GTX 1070 say PCIe 3.0 x16. Will performance be reduced with PCIe 1.0 x16 (assuming it's backwards compatible)?

    With PCI-E (as a standard) each version is backwards compatible.  The only difference is a question of speed and number of lanes supported.

    A card that "Supports PCI-Express v3.0 x16" will run in ANY PCI-E slot.  It may not run as fast, but it will work.  The amount of speed lost is pretty small in most cases.  The number of lanes is more a consideration for speed and power maximums.  Unless you are running a very high end CPU, the typical motherboard today only supports up to 40 PCI-E lanes.  That means (given certain lanes are dedicated to Northbridge and Southbridge components), typically only 24 lanes are available for GPUs.....so you can have ONE 16x card in the system.  If you put two in the system, both will drop to 8x.  It is possible to force one to 16x and one to 8x, but that creates issues in synchronizing.  Or you can have THREE cards, each at 8x, but that's the max.  Put FOUR in (if you have the slots) and some (usually two of them) will drop to 4x.....and so on.

    It's really a question of how much of a hit you take.  V3.0 of PCI-E really just supports faster burst transfers.  It does make a small difference.

     

    tl;dr - performance will reduce a little, and yes, it's backwards compatible.
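
    If you want to see which link a card has actually negotiated on a given board, here is a minimal sketch in the same spirit as before (it assumes the NVIDIA driver's nvidia-smi tool is installed; the query field names are the driver's, nothing Daz-specific). Note the link can drop to a lower generation at idle, so check it while the card is busy.

        # Print the PCIe generation and lane width the card is running at right now,
        # versus the maximum it supports. Assumes nvidia-smi is on the PATH.
        import subprocess

        FIELDS = ("name,pcie.link.gen.current,pcie.link.gen.max,"
                  "pcie.link.width.current,pcie.link.width.max")

        out = subprocess.check_output(
            ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
            text=True)
        for line in out.strip().splitlines():
            name, gen_now, gen_max, width_now, width_max = [f.strip() for f in line.split(",")]
            print(f"{name}: running PCIe gen {gen_now} x{width_now} "
                  f"(supports up to gen {gen_max} x{width_max})")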

     

  • Taoz Posts: 9,733
    hphoenix said:
    Taozen said:

    The specs for the GTX 1070 say PCIe 3.0 x16. Will performance be reduced with PCIe 1.0 x16 (assuming it's backwards compatible)?

    With PCI-E (as a standard) each version is backwards compatible.  The only difference is a question of speed and number of lanes supported.

    A card that "Supports PCI-Express v3.0 x16" will run in ANY PCI-E slot.  It may not run as fast, but it will work.  The amount of speed lost is pretty small in most cases.  The number of lanes is more a consideration for speed and power maximums.  Unless you are running a very high end CPU, the typical motherboard today only supports up to 40 PCI-E lanes.  That means (given certain lanes are dedicated to Northbridge and Southbridge components), typically only 24 lanes are available for GPUs.....so you can have ONE 16x card in the system.  If you put two in the system, both will drop to 8x.  It is possible to force one to 16x and one to 8x, but that creates issues in synchronizing.  Or you can have THREE cards, each at 8x, but that's the max.  Put FOUR in (if you have the slots) and some (usually two of them) will drop to 4x.....and so on.

    It's really a question of how much of a hit you take.  V3.0 of PCI-E really just supports faster burst transfers.  It does make a small difference.

     

    tl;dr - performance will reduce a little, and yes, it's backwards compatible.

     

    OK. However, you say:  "A card that "Supports PCI-Express v3.0 x16" will run in ANY PCI-E slot." But the PCI-E slots are different depending on how many lanes they support max. Are you referring to the version here (1.x, 2.x, 3.x)?

    I have an older board with one x16 slot and two x1 slots, I assume I can only have one graphics card installed here then? 

     

  • I just checked Newegg, Fry's, and Micro Center.  All have 4GB GTX 1050s for around $150.00 that are in the OP's budget.  Also, one of the 1050's claims to fame is very low power usage, so you may not need to upgrade your power supply.

    Now that 4.9.3 is out, thinking about upgrading my video card too.  Here are my current system specs: Intel i5 4670K, Gigabyte Z97X-Gaming 7, EVGA GTX 960 SSC 4GB, 2x 8GB Patriot Viper DDR3 memory, WD 6400AAKS hard drive with a 32GB SanDisk ReadyCache OS/app drive, 1 Hitachi HDT721010SLA360 1TB hard drive as a data drive, Corsair HX1000W PSU, HP DVD1720 optical drive, CoolerMaster CM 690 II case (side cover off), Samsung SyncMaster P2370 monitor @ 1080p, Windows 10 Professional 64-bit.

    Besides the GPU, really thinking about going to an HDD and adding another 16GB of memory.  Still trying to decide on the best bang for the buck in GPUs, and will pick up a matched pair of spinner data drives for a RAID 1 array to protect my data.

    Looking forward to testing when I get home from vacation (on a puny laptop now that couldn't render a playing card).

  • Dayanne Valcorsair
    edited January 2017

    Hello again!

    Thanks a lot for the well-written response, hphoenix. I guess I shall start with one GTX 1070. My current card is a Radeon R9 290X, which I thought would be a great card. Crying shame DAZ 3D does not support AMD cards.

    Also, when the 1080ti is out, perhaps the prices on the 1070 will drop. I guess the 1080ti will be too expensive for me. Best for me would be a card that works well for both games and DAZ 3D. Apparently the 1070 is like 50% better than the R9 for games, so I guess it is not a bad change for that reason either. My PSU is a Corsair 850W, so two cards should not be a problem.

    Post edited by Dayanne Valcorsair on
  • nicstt Posts: 11,714

    It's about balance. I wouldn't consider a card with less than 4GB; having said that, I feel 6GB is the real minimum.

    CUDA cores are useless when the CPU is having to render.

  • hphoenix Posts: 1,335
    Taozen said:
    hphoenix said:
    Taozen said:

    The specs for the GTX 1070 say PCIe 3.0 x16. Will performance be reduced with PCIe 1.0 x16 (assuming it's backwards compatible)?

    With PCI-E (as a standard) each version is backwards compatible.  The only difference is a question of speed and number of lanes supported.

    A card that "Supports PCI-Express v3.0 x16" will run in ANY PCI-E slot.  It may not run as fast, but it will work.  The amount of speed lost is pretty small in most cases.  The number of lanes is more a consideration for speed and power maximums.  Unless you are running a very high end CPU, the typical motherboard today only supports up to 40 PCI-E lanes.  That means (given certain lanes are dedicated to Northbridge and Southbridge components), typically only 24 lanes are available for GPUs.....so you can have ONE 16x card in the system.  If you put two in the system, both will drop to 8x.  It is possible to force one to 16x and one to 8x, but that creates issues in synchronizing.  Or you can have THREE cards, each at 8x, but that's the max.  Put FOUR in (if you have the slots) and some (usually two of them) will drop to 4x.....and so on.

    It's really a question of how much of a hit you take.  V3.0 of PCI-E really just supports faster burst transfers.  It does make a small difference.

     

    tl;dr - performance will reduce a little, and yes, it's backwards compatible.

     

    OK. However, you say:  "A card that "Supports PCI-Express v3.0 x16" will run in ANY PCI-E slot." But the PCI-E slots are different depending on how many lanes they support max. Are you referring to the version here (1.x, 2.x, 3.x)?

    I have an older board with one x16 slot and two x1 slots, I assume I can only have one graphics card installed here then? 

    Probably best that way.  Most x1 slots aren't full slots.  x4, x8, and x16 slots are all identical, physically.  The 'x' is just how many lanes the slot can support (which is really about the number of lines that are actually connected to the slot).  An x16 slot can run in any of the modes.

    Most PCI-E x1 slots are actually Mini PCI-E slots.  They usually only support x1 or x2 operation.

    Put at its simplest: if the card will fit in the slot (and the card and slot ARE PCI-E) then you can run it.  It will probably run slower, but how much depends on a lot of stuff.  PCI-E lanes are pretty fast to start with, and how much data has to move back and forth while something runs has the bigger impact.  So if they were all full-size slots, you could put 3 cards on a motherboard with an x16 and 2 x1 slots.  But only the one in the x16 would run at full speed.

     

  • SpottedKitty Posts: 7,232
    Dayanne Valcorsair said:

    My current card is a Radeon R9 290X, which I thought would be a great card. Crying shame DAZ 3D does not support AMD cards.

    There's nothing DAZ could do about it — Iray is an NVidia development, so it only works on a card that has CUDA cores, which effectively means only an NVidia card. AMD cards don't use the CUDA technology.

  • UHF Posts: 512

    With Octane you can just use your PC's RAM with your CUDA cores.  It only costs about 10% performance.  I did an 8.5GB render on my 4GB video card.

  • Taoz Posts: 9,733
    hphoenix said:
    Taozen said:
    hphoenix said:
    Taozen said:

    The specs for the GTX 1070 say PCIe 3.0 x16. Will performance be reduced with PCIe 1.0 x16 (assuming it's backwards compatible)?

    With PCI-E (as a standard) each version is backwards compatible.  The only difference is a question of speed and number of lanes supported.

    A card that "Supports PCI-Express v3.0 x16" will run in ANY PCI-E slot.  It may not run as fast, but it will work.  The amount of speed lost is pretty small in most cases.  The number of lanes is more a consideration for speed and power maximums.  Unless you are running a very high end CPU, the typical motherboard today only supports up to 40 PCI-E lanes.  That means (given certain lanes are dedicated to Northbridge and Southbridge components), typically only 24 lanes are available for GPUs.....so you can have ONE 16x card in the system.  If you put two in the system, both will drop to 8x.  It is possible to force one to 16x and one to 8x, but that creates issues in synchronizing.  Or you can have THREE cards, each at 8x, but that's the max.  Put FOUR in (if you have the slots) and some (usually two of them) will drop to 4x.....and so on.

    It's really a question of how much of a hit you take.  V3.0 of PCI-E really just supports faster burst transfers.  It does make a small difference.

     

    tl;dr - performance will reduce a little, and yes, it's backwards compatible.

     

    OK. However, you say:  "A card that "Supports PCI-Express v3.0 x16" will run in ANY PCI-E slot." But the PCI-E slots are different depending on how many lanes they support max. Are you referring to the version here (1.x, 2.x, 3.x)?

    I have an older board with one x16 slot and two x1 slots, I assume I can only have one graphics card installed here then? 

    Probably best that way.  Most x1 slots aren't full slots.  x4, x8, and x16 slots are all identical, physically.  The 'x' is just how many lanes the slot can support (which is really about the number of lines that are actually connected to the slot).  An x16 slot can run in any of the modes.

    Most PCI-E x1 slots are actually Mini PCI-E slots.  They usually only support x1 or x2 operation.

    Put at its simplest: if the card will fit in the slot (and the card and slot ARE PCI-E) then you can run it.  It will probably run slower, but how much depends on a lot of stuff.  PCI-E lanes are pretty fast to start with, and how much data has to move back and forth while something runs has the bigger impact.  So if they were all full-size slots, you could put 3 cards on a motherboard with an x16 and 2 x1 slots.  But only the one in the x16 would run at full speed.

    OK. Well, at least it looks like I can use a GTX 1070 or 1080 in that machine with reasonable performance until I get another one built with components that are more up to date and can handle two or more cards.

    Thanks!

     

  • Taoz Posts: 9,733
    UHF said:

    With Octane you can just use your PC's RAM with your CUDA cores.  It only costs about 10% performance.  I did an 8.5GB render on my 4GB video card.

    Hm, maybe that's the way to go. Octane is expensive but so are graphics cards with a lot of RAM. How much system RAM can Octane utilize?

  • linvanchene Posts: 1,336
    edited January 2017

    @ Octane and system RAM -> textures only!

     

    Taozen said:
    UHF said:

    With Octane you can just use your PC's RAM with your CUDA cores.  It only costs about 10% performance.  I did an 8.5GB render on my 4GB video card.

    Hm, maybe that's the way to go. Octane is expensive but so are graphics cards with a lot of RAM. How much system RAM can Octane utilize?

    Just to be clear:

    Octane currently is only able to load textures into system RAM. Octane does not support loading of geometry into system RAM.

    However all available RAM can be used.

    Example:

    - System with 64 GB RAM

    ~ 11'000 MB used for applications running

    -> around 50'000 MB RAM available to store textures in Octane

    This will help you especially if you want to work with high resolution 8000x8000 textures or HDR backgrounds which can take up to 500+ MB in VRAM.
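
    For a feel of why those textures add up, here is a back-of-the-envelope sketch (the storage formats are assumptions for illustration; a renderer may convert or compress differently, and mip maps add roughly another third):

        # Rough texture footprint: width * height * channels * bytes per channel.
        def texture_mb(width, height, channels, bytes_per_channel):
            return width * height * channels * bytes_per_channel / (1024 * 1024)

        print(f"4096x4096 8-bit RGB map:   {texture_mb(4096, 4096, 3, 1):7.1f} MB")
        print(f"8000x8000 8-bit RGBA map:  {texture_mb(8000, 8000, 4, 1):7.1f} MB")
        print(f"8192x4096 float HDR (RGB): {texture_mb(8192, 4096, 3, 4):7.1f} MB")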

    - - -

     

    @ VRAM vs Cuda cores

    In any case, make sure you get a card with as much VRAM as you can afford. You can save some money by buying the cheaper stock models that are not overclocked.

    Examples:

    A) purchase a 1070 now and purchase a 1080 later -> you get 8 GB VRAM and can still increase the speed with additional cuda cores of a second card later

    B) if you buy a card with only 6 GB VRAM now and then any card with more VRAM later -> you get more cuda cores but are stuck with 6 GB VRAM if you want to use both
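
    Put as a one-liner, the constraint in example B (the card mix is just an assumed example; as noted above, the scene has to fit on every card you want rendering, while the CUDA cores of all active cards add up):

        # The effective per-scene budget is the smallest card's VRAM, not the sum.
        cards_gb = {"GTX 1070": 8, "GTX 1060": 6}   # assumed example mix
        scene_budget_gb = min(cards_gb.values())     # 6 GB, not 14 GB
        print(f"Usable scene budget with both cards active: {scene_budget_gb} GB")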

     

     

    Post edited by linvanchene on
  • UHF Posts: 512

    What he said... and it makes little difference in render time.

  • Taoz Posts: 9,733

    linvanchene said:

    @ Octane and system RAM -> textures only!

     

    Taozen said:
    UHF said:

    With Octane you can just use your PC's RAM with your CUDA cores.  It only costs about 10% performance.  I did an 8.5GB render on my 4GB video card.

    Hm, maybe that's the way to go. Octane is expensive but so are graphics cards with a lot of RAM. How much system RAM can Octane utilize?

    Just to be clear:

    Octane currently is only able to load textures into system RAM. Octane does not support loading of geometry into system RAM.

    However all available RAM can be used.

    Example:

    - System with 64 GB RAM

    ~ 11'000 MB used for applications running

    -> around 50'000 MB RAM available to store textures in Octane

    This will help you especially if you want to work with high resolution 8000x8000 textures or HDR backgrounds which can take up to 500+ MB in VRAM.

    - - -

     

    @ VRAM vs Cuda cores

    In any case, make sure you get a card with as much VRAM as you can afford. You can save some money by buying the cheaper stock models that are not overclocked.

    Examples:

    A) purchase a 1070 now and purchase a 1080 later -> you get 8 GB VRAM and can still increase the speed with additional cuda cores of a second card later

    B) if you buy a card with only 6 GB VRAM now and then any card with more VRAM later -> you get more cuda cores but are stuck with 6 GB VRAM if you want to use both

    Thanks for the info. So the amount of VRAM available for rendering is always determined by the card with the least amount of VRAM?
