Additional GPU - helpful or not?

Hi!

Luxury question incoming ;-)

I proudly own a GeForce 1080 Ti and I was wondering if there would be a benefit in adding another graphics card, for example a GeForce GTX Titan Z, just for the CUDA cores.

Can DAZ use the buddy card, or is it a waste of money?

GOAL: Prevent CPU fallback on complex scenes and get quicker render times

Best wishes

Ryselle

Comments

  • Adding a card with more VRAM than the 1080 Ti will reduce fallbacks, as bigger scenes will fit on the bigger card, but it will not help keep the 1080 Ti in those renders. A Titan Z is a dual-GPU card with each chip having 6 GB of VRAM, so it will not do that; at least I don't think it will, as I haven't tested it. More than likely you'd need something like a Titan X, a newer Titan, or a Quadro to get bigger scenes on a card*.

    Adding any GPU supported by Iray (Kepler and newer) will add its CUDA cores to the render if the scene fits on that card. So you could add a 1050 and get an increase in render speed whenever the scene was small enough to fit on the 1050.

    * The current beta includes at least the early phase of enabling VRAM pooling over NVLink. Once that is working, it would be possible to take a pair of matched Turing cards with NVLink connectors and pool their VRAM. That would let a pair of 2070 Supers (or better) have lots of CUDA cores and also get more than 11 GB of VRAM.
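
    The per-card rule above can be sketched in a few lines (an illustration with hypothetical numbers, not anything from the DAZ/Iray API): each GPU joins the render only if the whole scene fits in that card's own VRAM, so without NVLink pooling a Titan Z behaves as two independent 6 GB devices.

```python
# Illustrative sketch (not DAZ/Iray API): a GPU participates in the render
# only if the entire scene fits in that card's own VRAM. Without NVLink
# pooling, memory is per-card, so a Titan Z counts as two 6 GB devices,
# not one 12 GB device.

def participating_gpus(scene_gb, cards):
    """Return the names of cards whose VRAM can hold the whole scene."""
    return [name for name, vram_gb in cards if scene_gb <= vram_gb]

cards = [
    ("GTX 1080 Ti", 11),
    ("Titan Z (GPU 1)", 6),   # dual-GPU card: two independent 6 GB pools
    ("Titan Z (GPU 2)", 6),
]

print(participating_gpus(8.0, cards))   # only the 1080 Ti holds an 8 GB scene
print(participating_gpus(5.0, cards))   # a 5 GB scene fits on all three GPUs
```

    The point of the sketch: adding a Titan Z would never help an 8 GB scene stay on the GPU, because neither of its 6 GB halves can hold it alone.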

  • Yes, there is a benefit from adding an additional card: faster renders from the pooled CUDA cores. VRAM is NOT pooled prior to NVLink (I can't confirm this myself, as I only have one RTX Titan).

    The rule here is simple: go for as much VRAM as possible, since that's what counts for scene size. The GPU's core clock speed means little; the number of cores, however, does make a difference.

    Good luck!
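
    A rough back-of-the-envelope of the "more cores matter" point. This assumes render throughput simply sums the CUDA cores of every card the scene fits on, which is an idealization (real multi-GPU scaling is somewhat less than linear); the card specs are Nvidia's published numbers.

```python
# Rough model (an assumption, not a benchmark): render throughput scales
# with the combined CUDA cores of the cards that can hold the scene.
# Core clock is ignored, matching the rule of thumb above that core count
# matters more than clock speed.

def relative_speed(scene_gb, cards):
    """Sum the CUDA cores of every card whose VRAM fits the scene."""
    return sum(cores for _, vram_gb, cores in cards if scene_gb <= vram_gb)

cards = [
    ("GTX 1080 Ti", 11, 3584),
    ("GTX 1050", 2, 640),
]

small = relative_speed(1.5, cards)   # fits both cards: 3584 + 640 = 4224
big = relative_speed(8.0, cards)     # only the 1080 Ti: 3584
print(small / big)                   # ~1.18x speedup on small scenes
```

    Even a modest second card helps small scenes, but contributes nothing once the scene outgrows its VRAM.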

  • I'm not sure it's safe to assume that every card with an NVLink connector can use pooling.

  • If it works on one RTX card, it should work on all of the ones with NVLink.

    The 2070 Super and the 2080s have the same NVLink connector and GPU chip as the Quadro RTX 5000, which I also know works in other software.

    The 2080 Ti and RTX Titan have the same connector and chip as the RTX 6000 and 8000, which are also working in other apps.

    The other NVLink 2.0 card, the GV100, I expect to work because it certainly works in other applications.

    It is Nvidia, though, and I've expected them to shut this function out of all consumer cards to avoid undercutting the Quadro market. It gets harder to justify the RTX Quadros if you can get a pair of 2080 Tis with nearly double the CUDA cores of an 8000 and roughly half the VRAM for a lower cost (less than half the cost for some 2080 Ti pairs).
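
    Checking the arithmetic behind that comparison, using Nvidia's published spec numbers as assumptions (RTX 2080 Ti: 4352 CUDA cores, 11 GB VRAM; Quadro RTX 8000: 4608 cores, 48 GB):

```python
# Spec numbers from Nvidia's published specifications, used as assumptions:
# RTX 2080 Ti: 4352 CUDA cores, 11 GB; Quadro RTX 8000: 4608 cores, 48 GB.
pair_cores = 2 * 4352          # two 2080 Tis rendering together
rtx8000_cores = 4608

print(pair_cores / rtx8000_cores)   # ~1.89, i.e. "nearly double" the CUDA
print((2 * 11) / 48)                # ~0.46, i.e. "roughly half" the VRAM, if pooled
```

    So the claim holds on paper: the pair has about 1.9x the cores and, with pooling, a bit under half the VRAM of the 8000.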

  • Thank you all for the answers. If I get it right, I wouldn't benefit from 2018, because with my 1080 I cannot use NVLink.

    So your recommendation would be to go for a graphics card with at least 11 gigabytes of RAM, as my 1080 has, and with as many CUDA cores as possible.

    The Titan Z has, on paper, over 5,000 CUDA cores and the 11 gigabytes of RAM, but since it is a dual-GPU card, it won't work. So what would be your recommendation?

  • Right now, it is uncertain which cards will work with NVLink in Iray.

    I started with a 980 Ti a few years ago, then added a 1080 Ti. My next upgrade, last year, was to replace the 980 Ti with a 2080 Ti alongside the 1080 Ti. The two cards work well together because they both have 11 GB of VRAM, and I am very pleased with the render speed increase. The RTX cards can give an additional speed increase in some specific situations with Iray.

    You mention a dual card; is this in terms of size? Most of the new RTX cards seem to be 2 or 2.5 slots wide. I got a blower card for this reason, but that is only feasible if you are not concerned about noise when rendering.

  • The specific Titan the OP was interested in is a dual-GPU card.

    @Ryselle-Ryssa If you really feel you need more than 11 GB, then the RTX Titan or some of the Pascal and Turing Quadros are your best options. The Titan X (either one) or the Titan Xp have just 12 GB, and that probably isn't enough of an upgrade to be worth the cost.

  • RayDAnt:

    Going by Nvidia's own documentation: the Iray Programmer's Guide, the OptiX Programmer's Manual, and the CUDA Programmer's Manual all contain separate, sometimes seemingly contradictory, explanations of how memory pooling in Iray via NVLink (allegedly) works. Putting it all together, this is how things seemingly pan out in terms of which GPU/OS/driver configurations allow you to take full advantage of pooled memory:

    Definitely: in the case of any physically compatible GeForce/Titan/Quadro/Tesla card combination running under Linux.

    Probably: in the case of any physically compatible Titan/Quadro/Tesla card combination running under Windows with TCC driver mode enabled.

    Maybe: in the case of any physically compatible GeForce/Titan/Quadro card combination running under Windows with "SLI" enabled in Graphics Settings (found by Puget Systems to be a hack for getting TCC-driver-mode-style NVLink P2P communication going between cards, regardless of official support for TCC driver mode on that model of GPU).

    So I'd also suggest caution about which cards will actually allow people to do what, at least until people manage to do more testing.
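
    The definitely/probably/maybe table above could be encoded as a small lookup; this is purely a restatement of this thread's reading of Nvidia's docs, not an official API.

```python
# Sketch encoding the "definitely / probably / maybe" table as data.
# The categories reflect this thread's interpretation of Nvidia's
# documentation, not any official Nvidia or Iray interface.

def pooling_outlook(card_family, os_name, mode=None):
    """Return the thread's confidence that NVLink memory pooling works."""
    if os_name == "linux":
        return "definitely"      # any physically compatible combination
    if os_name == "windows" and mode == "tcc" and card_family != "geforce":
        return "probably"        # Titan/Quadro/Tesla under TCC driver mode
    if os_name == "windows" and mode == "sli":
        return "maybe"           # Puget Systems' SLI-toggle workaround
    return "unknown"

print(pooling_outlook("geforce", "linux"))           # definitely
print(pooling_outlook("quadro", "windows", "tcc"))   # probably
print(pooling_outlook("geforce", "windows", "sli"))  # maybe
```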

  • Thanks to everyone for the helpful comments and recommendations. I decided to wait until the release of the new chips, and then buy a card in addition to the 1080 Ti.

  • alex86fire:

    I have 2 questions for the people with 2 cards.

    1. Since you have 2 cards, can you start 2 different renders at the same time, each on its own card? That would be awesome if possible, each render using its own card and its own VRAM.

    2. I have read more than once now about memory pooling for the RTX cards. Has anyone actually done it? Where can I read more about it?

  • Richard Haseltine:

    I have 2 questions for the people with 2 cards.

    1. Since you have 2 cards, can you start 2 different renders at the same time, each on its own card? That would be awesome if possible, each render using its own card and its own VRAM.

    It would require two versions, or two instances of the same version, of DS to be running, as rendering locks the UI (aside from cancelling and some render settings).

    2. I have read more than once now about memory pooling for the RTX cards. Has anyone actually done it? Where can I read more about it?

    The Public Build includes a version of Iray that supports memory sharing (with a performance hit) for materials (other elements have to load fully into each card) over the NVLink bridge (I think that's the name), but I'm not sure it applies to all RTX cards.

  • alex86fire:

    Yes, I am of course thinking of starting DS twice, with 2 different scenes, and in each DS manually selecting a different card as the renderer.

    Good to know it's only for materials. Where can we find the list of cards?
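
    On running two instances pinned to different cards: DAZ Studio's own per-instance device checkboxes in Render Settings are the supported route. At the CUDA level, the generic mechanism for hiding GPUs from a process is the CUDA_VISIBLE_DEVICES environment variable; whether DS honors it is unverified, and the executable path in the comment below is a placeholder. A sketch:

```python
# Sketch: build a per-process environment restricted to one CUDA device.
# CUDA_VISIBLE_DEVICES is a standard CUDA environment variable; whether
# DAZ Studio's Iray respects it is unverified -- the in-app Render Settings
# device list is the supported way to pick a card per instance.
import os

def gpu_env(gpu_index):
    """Copy the current environment, restricted to a single CUDA device."""
    env = dict(os.environ)
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
    return env

# Hypothetical usage -- replace the path with your real install:
# import subprocess
# subprocess.Popen([r"C:\DAZ\DAZStudio.exe", "scene_a.duf"], env=gpu_env(0))
# subprocess.Popen([r"C:\DAZ\DAZStudio.exe", "scene_b.duf"], env=gpu_env(1))
```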

  • RayDAnt:

    See this post.

  • alex86fire:

    Awesome, I have a 2070 Super, so I'm good with the first card.

    I just have to consider whether I should get another one or wait for the 3000 series. Since they haven't announced anything yet, and with the coronavirus everything will be slowed down (I think), maybe I should just get another 2070 Super.

    Especially if I can also render two different scenes in parallel with different DS sessions.

  • kenshaw011267:

    NVLink connectors on graphics cards are not created equal: there is a single-width one and a double-width one. I think you'd be better off with cards that have the same kind of connector, but at this point no one knows for sure.
