Nvidia 1080 question

24 Comments

  • PA_ThePhilosopher Posts: 1,039
    edited June 2016

    How much of a bigger jump will it be from my 980 Ti to that? Would it be hugely noticeable or anything?

    Remember, with GPU rendering it's all about CUDA cores. Nothing else really matters for speed. If you want fast rendering, the CUDA core count is the only number you should really care about. The new 1080 only has 2,560 cores vs. the 980 Ti's 2,816 cores, which is why there is no real performance benefit in the newer 1080 line. Even the older 780 Ti, with its 2,880 cores, will still be about equal to a 1080 with all its boost clocking and memory. So even if the 1080 gets iRay support, it is a moot point: it will still be no faster than a 980 Ti, or even the three-year-old 780 Ti.

    I don't think people realize just how much of a beast older cards like the 780 Ti really are because of their CUDA cores. For less than $250 on eBay, you can buy a 780 Ti right now and get render times equivalent to the newest nVidia $1,000 analogs. That's insane. It is just not worth buying a newer nVidia card yet, at least not now, since nVidia has nerfed the CUDA count on the newer analogs to try to make their Titan line-up more appealing.

    With all that said, however, there is talk of a new Ti line-up with 3,800+ CUDA cores, which would be lightning fast for rendering. Hopefully by then the market will wake up to the potential of iRay for animation rendering.

    -P

    kyoto kid said:

    ...yeah the memory issue is a real stickler. 

    This does not seem to be much of an issue anymore with the newer nVidia drivers. I use 780 Ti's, which only have 3 GB of memory, and yet I am able to render massive scenes with 6 million+ polygons without the GPU dumping over to the CPU. How? I have no idea. But what I do know is that just a few months ago this was not possible. On older drivers, I could only render scenes on GPU alone up to ~1.5 million polygons. Now, well, there doesn't seem to be a limit. Somehow, the newer nVidia drivers manage memory much better. (Currently I am using driver version 362.)

    -P

    Post edited by PA_ThePhilosopher on
  • MEC4D Posts: 5,249

    Don't forget that having more VRAM will slow everything else down unless you have a good CPU and fast memory to support it. I had slower, cheaper processors, but they performed so poorly with DS that I had to upgrade. When building a rig you need the right CPU to support your GPUs for the right balance, or you won't improve anything and will just waste your money. I am happy now with the iray rendering performance because I found the right balance, and each additional GPU cuts the render time in half with the last DS build, but at what cost? That is why average customers will get no pleasure from more VRAM unless they invest a lot more in their system to support it optimally; otherwise, as you said, it is a waste of money and GPU. All we really need is more VRAM and fewer cards in our systems, with higher clock speeds.

     

    kyoto kid said:
    MEC4D said:

    It is all possible, but sadly not for average customers yet, as besides the VRAM you need to spend money on a system that can handle it smoothly. My system has 40GB of VRAM, of which 12GB is usable in iray and 4GB goes to system video memory; the rest is not really usable but still cost no less, not to mention the system costs to support it all. A waste of money, resources and energy, but there is no other way at this moment.

    Maybe in the next 4-5 years we will have the one and only card that supports our needs, cost- and energy-efficient, that everyone can afford. And the Titan X 12GB will sell for like $55, lol, as nobody will want it anymore.

    ...yeah the memory issue is a real stickler. Too bad Nvidia, or someone else, won't develop cards that just add CUDA cores (without the memory) for boosting render speed. Yeah, I know we are "small potatoes" compared to the gaming or professional 3D market, but it would be nice not having to spend $1,000 per extra card, of which we use only a portion, just to reduce render time.

     

    kyoto kid said:

    According to the new structure of the Pascal cards, the ideal is 16GB of HBM2 and two cards that share this memory, making one unique 32GB memory block, if possible with a GP100 (or 102) with all CUDA cores activated. This hardware already exists, but the software/driver part doesn't. Add to this the marketing layer: if this goes live, the more-than-16GB cards will be (almost) useless, so they'd rather not go too far and too soon in this direction.

    ...one report from a fairly reputable source that I read mentioned that the 32 GB version would most likely be reserved for the "Professional" (e.g. Quadro) GPUs. Considering they upgraded the M6000's memory to 24 GB but not the Titan X's, it would make sense. So we get a 16 GB Pascal "Titan" and a 32 GB Quadro "P6000", with Volta being for the next-generation Tesla compute units.

     

  • nickalaman Posts: 196

    PA_ThePhilosopher said, 'Remember, with GPU rendering it's all about CUDA cores. Nothing else really matters for speed. [...]'

    It's not just the number of CUDA cores but also the speed they are running at.
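
    A rough back-of-the-envelope way to see this is cores x clock. A minimal Python sketch (the boost clocks are approximate reference values, and this ignores architectural differences, memory bandwidth, and iRay's own scaling behavior, so treat the output as a ballpark only):

        # Rough throughput proxy: CUDA cores x boost clock (GHz).
        cards = {
            "GTX 780 Ti": (2880, 0.928),   # cores, approx. reference boost clock
            "GTX 980 Ti": (2816, 1.075),
            "GTX 1080":   (2560, 1.733),
        }

        base = cards["GTX 980 Ti"]
        for name, (cores, ghz) in cards.items():
            proxy = cores * ghz
            print(f"{name}: {proxy:,.0f} core-GHz "
                  f"({proxy / (base[0] * base[1]):.2f}x a 980 Ti)")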

  • alexhcowley Posts: 2,404

    PA_ThePhilosopher said, 'Remember, with GPU rendering it's all about CUDA cores. Nothing else really matters for speed. [...]'

    nickalaman said, 'It's not just the number of CUDA cores but also the speed they are running at.'

    With a heavyweight number-crunching application like Iray, I would have thought that faster memory would help as well. Then there's the much smaller fabrication process used with Pascal; shorter pathways within the processor should also increase speed.

    Cheers,

    Alex.

  • MEC4D Posts: 5,249

    Plus the CPU and system memory; CUDA cores alone will do nothing. The CPU is used in rendering even if you render just with the GPU, and even more in the viewport: the faster your CPU, the faster the viewport will spin with your GPUs, since each time you rotate the scene the CPU sends updates to the GPUs.

    And if you think that 10,000 CUDA cores will be faster in your viewport, think again. The more cards you have, the harder your CPU has to work, ending in delays in overall performance.

    You want to see it in action? Change the CPU core speed to 1 GHz and see how fast your viewport spins using only GPUs. In my case it uses 6-8 cores for the viewport and 45% of the CPU for GPU rendering, constantly from the beginning of the render to the end, since iray is more of a hybrid.

    So the more CUDA cores, the slower your iray will be if you do not have enough CPU power and fast system memory. I just spent $1,000 on a faster CPU so it can support my 10,000 CUDA cores; a 4-core CPU was good for 6,000 CUDA cores at most.
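
    If you want to see that CPU involvement for yourself, here is a minimal monitoring sketch in Python (an assumption-laden example: it needs the psutil package installed and nvidia-smi on the PATH; run it in a second window during a render and stop it with Ctrl+C):

        # Log CPU load and GPU utilization side by side while a render runs.
        import subprocess

        import psutil

        def gpu_utilization():
            # nvidia-smi reports one line per GPU with --query-gpu
            out = subprocess.check_output(
                ["nvidia-smi", "--query-gpu=utilization.gpu",
                 "--format=csv,noheader,nounits"], text=True)
            return [int(line) for line in out.strip().splitlines()]

        while True:
            cpu = psutil.cpu_percent(interval=1.0)  # averaged over 1 second
            print(f"CPU {cpu:5.1f}% | GPU {gpu_utilization()} %")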

     

    kyoto kid said, '...one report from a fairly reputable source that I read mentioned that the 32 GB version would most likely be reserved for the "Professional" (e.g. Quadro) GPUs. [...]'

    PA_ThePhilosopher said, 'Remember, with GPU rendering it's all about CUDA cores. Nothing else really matters for speed. [...]'

    nickalaman said, 'It's not just the number of CUDA cores but also the speed they are running at.'

     

  • MEC4D Posts: 5,249

    You can get four cheaper GTX 970s that will render 25% faster than high-end cards for the same money, but the catch is that you will not have enough VRAM.

    But as always, iray is not about how fast you render; faster rendering doesn't mean better render quality. The quality is in the light paths, not in cleaning the noise out of your renders.

    The number of iterations is not a measure of quality: I can do a slower render with half the iterations that beats the same render done three times faster with three times the iterations and still noisy.

    And because the base rendering settings use only 80% of the GPU, compared to 100% when using samplers, faster doesn't mean better.

    So take your time and clean your system of stuff that only delays everything; less sometimes does the job better and faster than you might think.

    Six years ago I thought exactly as some of you do, because "more is better" seemed logical to me, but the truth is totally different. Until Nvidia allows us to stack GPU power in iray, plan well before you add another card you just snapped up from eBay, as it may just hold back what you already have, and a 10% speed increase is not what you should get for the money you spent.

    Mixing slower cards with faster cards is a big mistake, as it will affect the GPU scaling. Keep the cards as close as possible, ideally from the same series and with the same number of cores for each GPU, so iray can scale them equally for optimal performance. Chasing raw CUDA core numbers is the first newbie mistake people make.

  • 3delinquent Posts: 355

    PA_ThePhilosopher said, 'This does not seem to be much of an issue anymore with the newer nVidia drivers. I use 780 Ti's, which only have 3 GB of memory, and yet I am able to render massive scenes with 6 million+ polygons without the GPU dumping over to the CPU. [...]'

    Anyone else have this experience? I understood from what I've read in most places that 4 GB is barely enough for a couple of fully clothed G3s with hair and a simple background.

  • Havos Posts: 5,606

    3delinquent said, 'Anyone else have this experience? I understood from what I've read in most places that 4 GB is barely enough for a couple of fully clothed G3s with hair and a simple background.'

    I have a 4GB card, and I have rendered scenes with 6 dressed characters plus a fairly complex background. There are a lot of variables involved in how much GPU memory a scene needs, so it is not as simple as saying that so many GB can render this or that.
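
    To put rough numbers on those variables, here is a very coarse budgeting sketch in Python. The per-triangle cost and the flat renderer overhead are illustrative assumptions, not measured Iray figures; textures are counted as uncompressed RGBA at 4 bytes per texel:

        BYTES_PER_TEXEL = 4
        BYTES_PER_TRIANGLE = 100     # assumption: vertices plus BVH overhead
        RENDERER_OVERHEAD_GB = 0.7   # assumption: frame buffers, kernels, etc.

        def texture_gb(count, side):
            return count * side * side * BYTES_PER_TEXEL / 1024**3

        def scene_vram_gb(triangles, textures_4k, textures_2k=0):
            geometry = triangles * BYTES_PER_TRIANGLE / 1024**3
            textures = texture_gb(textures_4k, 4096) + texture_gb(textures_2k, 2048)
            return geometry + textures + RENDERER_OVERHEAD_GB

        # Example: ~2M triangles and ~40 4K maps, roughly six dressed figures.
        print(f"~{scene_vram_gb(2_000_000, 40):.1f} GB estimated")  # ~3.4 GB

    The point of the sketch is the shape of the sum, not the constants: textures dominate, which is why reported experiences with the same GB of VRAM vary so much.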

  • MEC4D Posts: 5,249

     

    With my 12GB I can easily render 20 million polys, which is over 740 untextured Genesis figures. The models are not the big deal, the textures are; reducing their resolution will allow you to render more.

    I have never rendered more than 50 textured figures, but I did not check how much VRAM that used.

    3delinquent said, 'Anyone else have this experience? [...]'

     

  • kyoto kid Posts: 42,017
    edited June 2016
     
    PA_ThePhilosopher said, 'This does not seem to be much of an issue anymore with the newer nVidia drivers. [...] On older drivers, I could only render scenes on GPU alone up to ~1.5 million polygons. Now, well, there doesn't seem to be a limit.'

    ...try rendering a scene like the one attached at 3,000 x 2,250 (for a 33% print reduction to 20" x 15") at a quality setting of 2 or 3 with only 3-4 GB. On my system, the 1,600 x 1,200 test render below took over 11 GB in CPU mode before it dropped to virtual memory (and the quality was set at the default of 1).

    It's not just the poly count, but also the textures and effects (like the fog/mist and wet surfaces) that all have to fit in GPU memory. Octane will split the load between CPU and GPU and still render pretty fast, so there a card with less memory will work. With Iray, it's all or nothing. I don't bother with portraits unless it is for an RPG character; I do "big", involved scenes and would like to have high-quality prints made to exhibit my work.

    [Attachment: rail statation proof.png, 1600 x 1200]
    Post edited by kyoto kid on
  • kyoto kid Posts: 42,017
    edited June 2016
    MEC4D said, 'You can get four cheaper GTX 970s that will render 25% faster than high-end cards for the same money, but the catch is that you will not have enough VRAM. [...] Mixing slower cards with faster cards is a big mistake, as it will affect the GPU scaling.'

    ...again, speed is all relative: no matter how fast or good the GPUs are, if the scene cannot be held in memory, all those CUDA cores are worthless, as you're forced back into the full-CPU "slow lane". So with 4GB I'm pretty much restricted to character studies, portraits and simple vignettes; 6 GB would maybe handle about 50-60% of my scenes, 8 GB about 75%, and 12 GB over 90%.

    In my build schemes, all GPUs were the same model with identical clock speed, number of cores, memory, etc. The current concept is based on a 3.2 GHz 8-core i7 (the new 10-core i7 is way overpriced). Physical memory is fast DDR4 configured in quad-channel mode. That should be enough to support a couple of Titan Xs for a high level of performance.

    As I do home builds (and am avoiding W10), I don't have any factory-installed bloatware to deal with. While I do have a security suite, there is no provision for "productivity" software or games on the workstation (I'm not into video gaming anyway). The system is strictly a CG production machine, just like my current one. I have net access so I can download content and software updates directly, as well as upload images. (I also perform all security updates manually, something W10 doesn't let you do even at the Pro level.) While I am working, the system remains offline; I have a couple of older notebooks for that purpose.

    Post edited by kyoto kid on
  • MEC4D Posts: 5,249

    The size of the render also takes VRAM; usually a 1920x1080 render takes 1GB of VRAM. I just rendered right now: system RAM was 7.8GB for DS, the GPU was at 2.6 GB, and then it increased by 1GB while rendering. But the textures are the VRAM killer. I never use 4000x4000 textures while rendering with iray, except for very close portraits; for full figures you can easily go with 2000x2000, and that will cut your VRAM usage in half.

    When Octane came out I had only 1.5GB of VRAM, and the most I could render was one dressed figure with hair, with textures reduced to 2000x2000, as 4Kx4K was not possible, and I still made animations and tons of renders. But when you have more resources you stop caring, until you reach the maximum again.
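
    The arithmetic behind that advice: texture memory scales with the texel count, so halving each side cuts a single map's footprint to a quarter (the overall scene saving is smaller, since geometry and buffers don't shrink, which squares with the "half" figure above). Assuming uncompressed RGBA at 4 bytes per texel:

        for side in (4096, 2048, 1024):
            mb = side * side * 4 / 1024**2
            print(f"{side} x {side}: {mb:,.0f} MB")
        # 4096 x 4096: 64 MB
        # 2048 x 2048: 16 MB
        # 1024 x 1024: 4 MB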

    kyoto kid said, '...try rendering a scene like the one attached at 3,000 x 2,250 with only 3-4 GB. [...] It's not just the poly count, but also the textures and effects (like the fog/mist and wet surfaces) that all have to fit in GPU memory.'

     

  • kyoto kid Posts: 42,017
    MEC4D said, 'With my 12GB I can easily render 20 million polys, which is over 740 untextured Genesis figures. The models are not the big deal, the textures are; reducing their resolution will allow you to render more. [...]'

     

    ...considering my scenes tend to be fairly involved, reducing texture resolutions (which includes bump and displacement maps) in a 2D programme can be extremely tedious and time-consuming, and could easily offset any gain in reduced render time. Imagine having to do that for a set like Stonemason's Urban Sprawl 3. This is why I need "big" memory (or should just move to Octane, which splits the load and still renders pretty quickly).
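
    For what it's worth, that kind of downscaling can be scripted instead of done map by map in a 2D programme. A minimal sketch with Python and Pillow (pip install Pillow); the folder names and the 2048-pixel cap are placeholders, and it should be pointed at a copy of the textures, never the original runtime:

        from pathlib import Path

        from PIL import Image

        SRC = Path("textures_original")   # hypothetical input folder (a copy!)
        DST = Path("textures_2k")
        MAX_SIDE = 2048

        DST.mkdir(exist_ok=True)
        for path in SRC.iterdir():
            if path.suffix.lower() not in {".jpg", ".png", ".tif"}:
                continue
            with Image.open(path) as img:
                img.thumbnail((MAX_SIDE, MAX_SIDE))  # in place, keeps aspect ratio
                img.save(DST / path.name)

    (Image.thumbnail never upscales, so maps already at or below the cap simply pass through.)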

  • MEC4D Posts: 5,249

    Fully agree. I use my 4GB card for the monitor; the other day I tried to work with it to see how things would go, and it was like, OMG, my CPU was faster, lol, so I know what you are dealing with. Two good cards with a high VRAM count are the best for iray, even with a 4-core CPU. I was about to get the 10-core but I knew I would regret the price very soon, so I chose the 8-core. If I had only two cards I would have kept my super-fast 4-core CPU, as it was just fine, but sadly it was not enough for three GPUs. And I was very mad at the benchmark guy who said there is no improvement beyond 4 cores in iray even with four cards, which is not the case in DS. With 4 cores it never used more than 2 cores before, but things changed when I upgraded to the 8-core CPU, or maybe that was the new DS build that improved it; I'm not sure exactly, but I know that it uses more now. If I had known, I would not have purchased the 4-core early and would have gone straight for the higher-spec CPU, so I wasted $400. But as you saw in the video from last year, it was pretty fast; with the new CPU and the same cards it is now 50% faster than before in the viewport. Even the CPU alone can spin the viewport now, so pretty good.

    kyoto kid said, '...again, speed is all relative: no matter how fast or good the GPUs are, if the scene cannot be held in memory, all those CUDA cores are worthless, as you're forced back into the full-CPU "slow lane". [...] In my build schemes, all GPUs were the same model with identical clock speed, number of cores, memory, etc.'

     

  • 3delinquent Posts: 355

    The examples you guys are discussing give a great indication of the difference some of the variables involved can make. Thank you.

  • kyoto kid Posts: 42,017
    MEC4D said, 'Fully agree. [...] Two good cards with a high VRAM count are the best for iray, even with a 4-core CPU. [...] With the new CPU and the same cards it is now 50% faster than before in the viewport.'

    ...so if I were to go with only two Titan Xs (and my older GPU just to run the displays), I'd be OK with the 3.6GHz Broadwell hexcore that supports DDR4-2400 memory in quad-channel mode (great for when I work in Carrara or 3DL). Crikey, that would save me over $500 compared to the 3.2 GHz 8-core.

    I've also been reading about AMD's forthcoming 8- and 16-core Zen CPUs on a 14 nm die (with simultaneous multithreading), which will support DDR4-3200 in 8-channel mode. Some feel this new CPU might be the "shot in the arm" AMD has needed for more than a decade. Guess we'll see come this fall.

  • AllenArt Posts: 7,175
    edited June 2016

    Watching this with interest. I just bought a refurb workstation with two 8-core Xeons @ 2.6GHz and 64 gigs of RAM, and I've been having so much trouble trying to decide which card I want. I can only use two cards, so one will have to run the monitors and the other will render, and it looks like I'm leaning toward a Titan X again. Even so, I'm not sure I can even have that, as I can only have 300 watts total between both graphics slots and a Titan X alone is 250 watts. Decisions, decisions ;).

    Laurie

    Post edited by AllenArt on
  • kyoto kid Posts: 42,017

    ...heftier power supplies are not all that expensive.

    I was originally looking to use dual 8-core Xeons in my new build, but their clock speed is even slower than my old i7 940's.

  • MEC4D Posts: 5,249

    Very easy. You will be more than good for all your programs, and very fast.

    kyoto kid said, '...so if I were to go with only two Titan Xs (and my older GPU just to run the displays), I'd be OK with the 3.6GHz Broadwell hexcore that supports DDR4-2400 memory in quad-channel mode. [...]'

     

  • MEC4D Posts: 5,249

    If you only render, then it will use only 150W with the basic rendering settings, 160-170W with samplers; older cards use even more. But if you like to play games, then better to wait for the 1080 so you are sure you have enough power, or maybe the 1070, but I will wait and see how it performs for real in iray.

    AllenArt said, 'I can only have 300 watts total between both graphics slots and a Titan X alone is 250 watts. Decisions, decisions ;).'

     

  • AllenArt Posts: 7,175

    Nah, I'm not a gamer...lol. I just do rendering ;).

    Laurie

  • MEC4D Posts: 5,249

    Then you're good! Just wait one month so you don't have regrets, and if you want 12GB then you know what to do.

    AllenArt said, 'Nah, I'm not a gamer...lol. I just do rendering ;).'

     

  • PA_ThePhilosopher Posts: 1,039
    edited June 2016
    Mec4D said, 'Don't forget that having more VRAM will slow everything else down unless you have a good CPU and fast memory to support it. [...]'

     

    Hey Mec4D,
    Indeed. This is why it is so important to always make sure that Power Options in Control Panel is set to "High Performance." This way the CPU is always running at max GHz. If it is set to "Power Saver," then the CPU will be downclocked to sub-1 GHz speeds, and render times will slow to a crawl (I had to learn this the hard way).
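
    For anyone who wants to check that setting from a script, a small sketch (Windows only, calling the stock powercfg tool from Python; SCHEME_MIN is powercfg's built-in alias for the High Performance plan, i.e. "minimum power savings"):

        import subprocess

        # Show the currently active power plan, then switch to High Performance.
        print(subprocess.check_output(["powercfg", "/getactivescheme"], text=True))
        subprocess.run(["powercfg", "/setactive", "SCHEME_MIN"], check=True)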

    I think the point I was trying to make, however, was that CUDA is by far the most important variable in a GPU rendering rig. Yes, a good CPU is important too, but really any decent CPU will suffice. The difference between 3.5 GHz and 4 GHz, or 4 cores and 6, in a CPU is negligible compared to the difference in CUDA cores. Just look at any online rendering benchmarks (such as OctaneBench or FurryBall), and the systems that rank the highest are always the systems with the highest CUDA count. Anyone who is serious about render times absolutely must put CUDA count at the top of their priority list.

     

    kyoto kid said, '...try rendering a scene like the one attached at 3,000 x 2,250 with only 3-4 GB. [...] With Iray, it's all or nothing.'

    Hey kyoto,
    It would be interesting if we could all test a scene like that, to see just how far 3 GB can go on the latest drivers. So far, I have not hit a limit, and I have rendered scenes with dozens of characters and large sets on GPU alone (but again, only on the newer nVidia drivers). So VRAM doesn't seem to be as important anymore, at least not in my case. I remember before I purchased my first 780 Ti, I was so worried that 3 GB was not going to be enough. And for a while, I did hit a limit. But now that is no longer the case. If I needed to render scenes with 700 characters, then it would be worth upgrading to 12 GB of VRAM. But I don't see myself having that need anytime soon.

    I tip my hat to nVidia for continually upgrading and improving their rendering engine and GPU support (despite being so focused now on video games).

    -P

    Post edited by PA_ThePhilosopher on
  • MEC4D Posts: 5,249
    edited June 2016

    Hi,

    Yes, of course, but don't go by benchmarks done with Octane, as Octane has a totally different way of GPU scaling than iray and really can't even be compared in speed. Other benchmarks are done in Max, which is also a different story, as it usually renders slower than iray in Daz Studio. Without the GPU there is no speed, that is clear to everyone. That is why I said that if you go for more than two cards you need to pay attention, as things will not work the same way, and even if you think it is faster, it is not optimally faster; nobody wants to throw out $1,000 for 20% extra speed.

    In the early drivers for Win 10 there were wrong power settings for the GPU in the Nvidia panel: first, 'use multiple GPUs' was selected, and second, an average-power mode that slowed things down a lot. Right now the power is set to max performance, but 'multiple GPUs' is still the main setting, so if you use just one GPU for your monitors, set it to single GPU. And as you mentioned, the system power option is usually set to Balanced or Power saver and not High performance.

    I was thinking that having 4 cores at 5 GHz would do the trick with three cards, compared to 8 cores at 3 GHz (3.5 GHz), as the benchmarks indicated, but that was so wrong: from the moment I upgraded the CPU to 8 cores I gained almost an additional card's worth of speed, less delay, faster scene loading and rendering, and my Intel Easy Tune shows utilization of all 8 cores with GPU-only rendering. Then I ran an experiment, disabled 4 of the cores and ran on just 4, and the result was slower overall, both while working with iray and while rendering.

    So if you plan on more than two cards, think about it; if you go for a max of two cards, a 4-core CPU will be just right, no matter whether at 3.5 or 4.0 GHz, as my other rig works wonderfully with that CPU.

    Here is my steampunk Hell Boy iray rig for 2016, with 9,250 CUDA cores, an i7-5960X Extreme, 64GB of DDR4 Corsair Dominator Platinum, and an MSI GodLike motherboard, powered with 1300W and all water-cooled. In December I am going to change the rubber tubing to hard pipes for a central water-cooling system, not because I feel like spending another $500 while the temps run so low, but more for the aesthetics of it. It sits behind glass in a full-tower-size rig, not a typical PC case, and can be used on a tabletop as well. I wanted something different this time, as I am tired of boxes. I have one more space for a 1070 or 1080, water-cooled as well, for the monitors.

    PA_ThePhilosopher said, 'Indeed. This is why it is so important to always make sure that Power Options in Control Panel is set to "High Performance." [...] Anyone who is serious about GPU rendering absolutely must put CUDA count at the top of their priority list.'

    [Attachment: WP_20160622_21_06_35_Pro[1]iray rig mec4d.jpg, 694 x 919]
    Post edited by MEC4D on
  • PA_ThePhilosopher Posts: 1,039
    edited June 2016

    Nice rig, Mec. That must have cost a few pennies. Good call on going water, too. When you have upwards of 3 or 4 GPUs, water cooling is almost essential.

    In regards to iRay, I expect it will eventually scale linearly like Octane does. With Octane, adding a second GPU essentially doubles rendering speed, adding a third triples it, etc. With iRay it is not quite as linear, but still close (EDITED TO CORRECT NUMBERS). iRay is still a young engine and not nearly as mature as Octane, but in time it will catch up; in fact they have already made significant progress, and continue to improve iRay every day (Daz 4.9.2 was a big improvement, especially in the viewport). iRay's GPU utilization, when it is utilized, is near 100%, which is more than can be said of iClone's Indigo (last I checked, Indigo utilized less than 30% of the GPU, though it is advertised as a GPU render engine).
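
    As a toy model of that scaling difference (the per-extra-card efficiency figures are illustrative assumptions, not benchmarks; 1.0 is the Octane-like linear case described above):

        def speedup(n_gpus, efficiency):
            # First card is the baseline; each extra card adds `efficiency` more.
            return 1 + (n_gpus - 1) * efficiency

        for n in (1, 2, 3, 4):
            print(f"{n} GPU(s): linear {speedup(n, 1.0):.2f}x, "
                  f"at 85% per-card efficiency {speedup(n, 0.85):.2f}x")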

     

    -P

    Post edited by PA_ThePhilosopher on
  • So honestly, would a second 980 Ti of mine be super fast 'raided' together?

  • MEC4D Posts: 5,249

    Thanks, it was almost 6K USD, and I totally agree with you. However, with the last build the GPU scaling is so much better: each additional card I use cuts the render time in half, where before it did less than that, sometimes only 1.76x total with two cards; the second card added 50% and the cards after that almost nothing. Now it is definitely better than iray in Max, but I guess that was the latest Nvidia patch that nobody was talking about, as I know Autodesk complained to Nvidia about the GPU scaling in iray, especially for the 980 Ti and Titan X.

    If one day iray gets as good at GPU scaling as Octane, I will be even happier.

    PA_ThePhilosopher said, 'Nice rig, Mec. [...] In regards to iRay, I expect it will eventually scale linearly like Octane does. With Octane, adding a second GPU essentially doubles rendering speed. [...]'

     

  • PA_ThePhilosopher Posts: 1,039
    edited June 2016

    So honestly, would a second 980 Ti of mine be super fast 'raided' together?

    Yes. In my testing, adding another GPU roughly doubles render speed. Your 980 Ti may fare slightly better.

    Note: you may need to disable OptiX, though.

    And be sure you have adequate airflow in your case to help keep the cards cool (the more cards you add, the hotter they will run, since they sit closer to each other). If the cards run hot, your render times can easily double. I would suggest getting a program like Afterburner so you can set a custom fan profile. When I was on air, I set my fans to go to 100% as soon as they neared 75 degrees C. They never saw anything higher than 75 C (whereas without the custom profile, slot 1 would easily exceed 82 C and my render times would slow to a crawl).
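
    A quick way to keep an eye on that during long renders is to poll nvidia-smi. This sketch only watches and prints; actually driving the fans is left to a tool like Afterburner, and the 75 C threshold just mirrors the profile described above:

        import subprocess
        import time

        QUERY = ["nvidia-smi",
                 "--query-gpu=index,temperature.gpu,power.draw",
                 "--format=csv,noheader,nounits"]

        while True:
            for line in subprocess.check_output(QUERY, text=True).splitlines():
                idx, temp, power = [field.strip() for field in line.split(",")]
                flag = "  <-- hot, expect slower renders" if int(temp) >= 75 else ""
                print(f"GPU {idx}: {temp} C, {power} W{flag}")
            time.sleep(5)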

    -P

    Post edited by PA_ThePhilosopher on
  • MEC4D Posts: 5,249

    Definitely; you cut the rendering time in half, and sometimes even more depending on the scene and its complexity. Just make sure you have 32GB of RAM for optimal performance.

    So honestly, would a second 980 Ti of mine be super fast 'raided' together?

     

  • MEC4D Posts: 5,249
    edited June 2016

    Good point about OptiX; it slows down rendering too much with the latest driver. I ran tests the other day and it was definitely a bad option.

    PA_ThePhilosopher said, 'Yes. In my testing, adding another GPU roughly doubles render speed. [...] Note: you may need to disable OptiX, though.'

     

    [Attachment: speed scenario mec4d optix-on-optix off.jpg, 1400 x 400]
    Post edited by MEC4D on