Nvidia 1080 question


Comments

  • kyoto kid Posts: 42,019
    AllenArt said:

    Nah, I'm not a gamer...lol. I just do rendering ;).

    Laurie

    ...same here.

  • kyoto kid Posts: 42,019
    edited June 2016
    MEC4D said:

    Definitely. You cut the rendering time in half, and sometimes even more, depending on the scene and its complexity. Just make sure you have 32 GB of RAM for optimal performance.

    So honestly, would a second 980 Ti of mine be super fast paired together?

    ...my build specifies 128 GB DDR4 2400 in quad-channel configuration (the reason I selected the 2.6 GHz 6-core over a 4-core CPU, as no 4-core parts support quad-channel memory). As with GPU memory and my power supply, I like having a lot of "overhead", just in case.

    Post edited by kyoto kid on
  • MEC4D Posts: 5,249

    Yes, in my case it cost $977 for 128 GB, but I am good for now with 64 GB.


  • I need to correct my numbers from my previous post. I said a 60-75% increase from adding another GPU, but that was on an older driver last year when I ran those tests. After doing some testing today, I can confirm that it is about a one-to-one ratio: adding a second GPU will double your Iray render speeds, just like Octane.
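    A minimal sketch (editor's addition, not from the thread) of what this claim implies. The `efficiency` parameter below is a hypothetical knob, not anything Iray exposes, and the times are made-up examples: efficiency = 1.0 models the one-to-one scaling described above, while lower values model the sub-linear scaling reported later in this thread.

    ```python
    # Estimate multi-GPU render time under an assumed scaling efficiency.
    # efficiency = 1.0 -> ideal one-to-one scaling (2 GPUs = half the time);
    # efficiency < 1.0 -> each extra GPU contributes less than a full card.

    def estimated_render_time(single_gpu_minutes, num_gpus, efficiency=1.0):
        effective_gpus = 1 + (num_gpus - 1) * efficiency
        return single_gpu_minutes / effective_gpus

    for n in (1, 2, 3, 4):
        ideal = estimated_render_time(16.0, n, efficiency=1.0)
        typical = estimated_render_time(16.0, n, efficiency=0.85)
        print(f"{n} GPU(s): ideal {ideal:5.2f} min, at 85% efficiency {typical:5.2f} min")
    ```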

  • MEC4D said:

    Good point about OptiX; it slows down the rendering too much with the latest driver. I made tests the other day and it was definitely a bad option.

    It took me a while to figure this one out. I'm not sure why OptiX slows down render times on multi-GPU setups, but it does seem to. Perhaps it's just not yet optimized for more than one card. Hopefully this will change in time.

    -P

  • MEC4D Posts: 5,249

    I had no issues with OptiX before, but the latest drivers with CUDA 8 changed that. I know that OptiX does not yet support the CUDA 8 driver; the code needs to be changed so it converts internally, but I don't think the DAZ programmers have done that yet. OptiX 4.0, which supports CUDA 8, is on the way, still in beta at Nvidia.


  • kyoto kid Posts: 42,019

    ...I found a DDR4 2400 128 GB kit for $620.

  • MEC4D Posts: 5,249

    Before Corsair, I purchased 128 GB from Crucial for $400, but it did not work with my mobo; only 48 GB fired up, lol, as it has Extreme Memory Profile 1 and I needed XMP 2.0.


  • kyoto kid Posts: 42,019
    edited June 2016
    ...which Corsair kit do you have? There are several, from DDR4 2100 to DDR4 2800.

    Post edited by kyoto kid on
  • MEC4D Posts: 5,249
    edited June 2016

    You need to check what your motherboard supports; it usually does not support all memory, especially 128 GB. I only order memory that is supported and tested with the motherboard. My Corsair Dominator Platinum sticks are 2133 MHz and run at 2666 MHz with XMP 2.0 ($997 for 128 GB, $557 for 64 GB). They cost more, but they are my favorite; never disappointed.

    If you go for the Godlike mobo, here is the list of supported memory:

    https://us.msi.com/Motherboard/support/X99A-GODLIKE-GAMING.html#support-mem

    You can also check on the Corsair website whether the memory is automatically supported by your motherboard. You see, my other CPU, the i7-5820K, officially supports only 64 GB, but when run on the Asus Deluxe/USB 3.1 board it will accept 128 GB, though only with Corsair Dominator Platinum 2133 MHz. That is why they are so good: they automatically adjust themselves for the best speed and power usage. I prefer not to cut corners on memory quality, as that is the most important part, together with the CPU.


    Post edited by MEC4D on
  • kyoto kid Posts: 42,019
    ...I'll have to check that out when I get home. That way I can have several tabs open at the same time. Research like this doesn't work well on a phone (especially when I have to enter each letter one at a time).
  • Mec, did you happen to run any render tests when you upgraded the RAM? I remember reading that the difference between 32, 64, and 128 GB is marginal in GPU rendering. But I have not had a chance to test this, as I have only 32 GB in my system. -P
  • MEC4D Posts: 5,249

    The only difference in Iray was when the memory was set to a higher clock, and the CPU as well, but it all depends on how many cards you use.

    For example, with 1 card my system uses only 1-2 cores, with 2 cards 3-4 cores, and with 3 cards 4-8 cores at 40% CPU utilization (GPU rendering only). The faster the memory, the faster everything is going to be processed, so the CPU clock and memory clock matter, but I did not see any difference between 64 and 128 GB, as most of it is not even used.

    Nvidia recommends 24 GB of system memory for 12 GB of VRAM, but that is for games.

    I tested many motherboards and memory kits in the past month, but after seeing not much difference I went back to 64 GB, as that was the sweet spot for me, not only in Iray but also with the other apps I am using, so I decided to invest the additional $500 in something else that I need more at the moment.

    Putting a lot of stuff in your system can make it slower instead of faster, so the right balance is important to tune it just right. With additional memory on a faster XMP profile, your CPU will need to run faster too, and faster means higher temperatures. For example, on 32 GB and XMP 1.0 my CPU ran at 25°C; with 64 GB and XMP 2.0 it runs at 39°C idle, as the CPU cores are automatically set to a higher clock to support it. So bigger and better don't always mean faster.

    Enough is enough. Iray in DAZ Studio is not even ready to use my new rig optimally. Let's start from the fact that neither Nvidia's programmers nor DAZ have answered my question of why 10,000 CUDA cores render slower in the Iray viewport than 6,000 CUDA cores (rendering itself is just fine, as it should be; it's just the viewport), and why interactive mode is slower than Photoreal mode. I smell a big bug that nobody knows how to fix yet.

    Cath


  • Mec,
    Which Nvidia driver are you currently using? I've found Nvidia drivers to be a little finicky with Iray. If you have one that is too old, you lose performance and stability. If you have one that's too new, you could also lose stability, since Nvidia is known to release drivers prematurely, before they have been fully tested. If 10k CUDA cores are slower in the viewport than 6k CUDA cores, I wonder if it could be a driver issue.

    Interesting comments about XMP. 

    -P

  • FrankTheTank Posts: 1,513

    Is there any agreement on the best driver version? I'm using 362.00 (which I think PA_ThePhilosopher is also using), but I cannot enable OptiX Prime acceleration or my rendering slows to a snail's pace.

    For example, with 2x 980 Ti GPUs enabled and OptiX not enabled, I'll render a frame in 1.1 minutes.

    But if OptiX is checked, the same scene takes over 10 minutes per frame, roughly a 9x slowdown. So OptiX seems broken even on older versions of the driver; this driver is from March, I think.

  • MEC4D Posts: 5,249

    I am using 368.39. I tested older drivers with the same result. The best driver for Iray was the one before the Win 10 release; I think it was 356 or something close to that, but since I run Win 10 the good old driver gives me issues as well.

    OptiX slows down my renders so much that it turns the speed of 3 cards into the speed of 2 cards, and it was the same when I used only 2 cards. It has slowed things down ever since last year's Iray release and has never improved rendering speed for me in any situation.

    @PA_ThePhilosopher, if you don't mind, I don't like people calling me Mec; I am not a dude or the owner of the Mec agency. Call me Cath or MEC4D. Thank you ;)

  • FrankTheTank Posts: 1,513

    I went and looked through the release notes for all the drivers around the time of the Windows 10 release (the last driver before Windows 10 support would have been 353.30, released June 22, 2015) and then moving forward, and I noticed version 359.06 was the last release that had CUDA 7.5; after that they switched to CUDA 8.0. Maybe that's the problem? I have Windows 7 Pro 64-bit, so I'm gonna give that one a try and see if it helps me. Thanks Cath for the help!

  • MEC4D Posts: 5,249

    The last best-working version on my Win 8 Pro was 353.62, but 353.30 may be the one. I remember there were issues in some of the drivers that ran the clock in a standby mode while not rendering, so make sure it does not do that. Also, in the older drivers the GPU power setting in the Nvidia Control Panel was set to Adaptive, which renders slower in Iray, so make sure it is on maximum performance.

    Since OptiX supports CUDA 7.5 at most, that may be the key. I remember when they released Iray, OptiX slowed down my rendering; then one of the drivers made it actually work fine again, and then maybe it was 359.06 that did not work well. But besides the drivers, OptiX needs to be adjusted in the code too, so that it is processed internally, so the driver is not solely responsible for the issue. My standalone OptiX doesn't render with the latest Nvidia driver updates, but OptiX 4.0 is already in beta, so soon everything may work better with old and new cards.


  • PA_ThePhilosopher Posts: 1,039
    edited July 2016

    Just ran some tests this week. Figured I would post my numbers.
    Here is a quick rundown of how multiple GPUs scale on my system (all GTX 780 Tis).
    The test scene I used has 3 million polys and 31 Genesis 3/2 characters in it. I rendered to 3,000 samples:

    • 1 GPU: 16 min 34 sec
    • 2 GPUs: 9 min 30 sec
    • 3 GPUs: 6 min 59 sec
    • 4 GPUs: 4 min 41 sec

    Regarding drivers, the test scene above took 9:20 to render on 3 GPUs (driver 348). This same scene took 6:59 to render on 3 GPUs using a newer driver (driver 362).
    (Note: on only one GPU, there does not seem to be as much performance difference from driver to driver, other than stability, of course.)
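    To quantify that scaling, here is a small Python sketch (editor's addition) that converts the timings above into speedup and parallel efficiency, where speedup = t(1 GPU) / t(N GPUs) and efficiency = speedup / N:

    ```python
    # Speedup and parallel efficiency for the GTX 780 Ti timings posted above.
    times_s = {1: 16 * 60 + 34,   # 994 s
               2: 9 * 60 + 30,    # 570 s
               3: 6 * 60 + 59,    # 419 s
               4: 4 * 60 + 41}    # 281 s

    t1 = times_s[1]
    for n, tn in sorted(times_s.items()):
        speedup = t1 / tn
        print(f"{n} GPU(s): {tn:3d} s, speedup {speedup:.2f}x, efficiency {speedup / n:.0%}")

    # 2 GPUs: ~1.74x (87%); 3 GPUs: ~2.37x (79%); 4 GPUs: ~3.54x (88%).
    # Close to linear, though not the strict one-to-one ratio mentioned earlier.
    ```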

    Something else worth noting: I have heard reports of some users experiencing issues with drivers downloaded through GeForce Experience, so I just made it a habit to install drivers manually instead.

    Hope this helps,

    -P

    Post edited by PA_ThePhilosopher on
  • MEC4D Posts: 5,249

    You forgot to mention the CPU and how many cores, as that counts too. Different system, different result, but the scaling is not bad at all.

    I always install manually and do a clean install; GeForce Experience messes things up too often.


  • MEC4D Posts: 5,249

    BTW, news:

    Meet the #GeForce GTX 1060. The performance of a GTX 980, starting at $249. Powered by Pascal. VR and #GameReady. A quantum leap for every gamer. http://nvda.ws/29vZXsG

  • PA_ThePhilosopher Posts: 1,039
    edited July 2016
    MEC4D said:

    You forgot to mention the CPU and how many cores, as that counts too. Different system, different result, but the scaling is not bad at all.

    I always install manually and do a clean install; GeForce Experience messes things up too often.

    I am on an older Haswell: Core i7-5930K (6 cores).

    -P

    Post edited by PA_ThePhilosopher on
  • FrankTheTank Posts: 1,513

    So does anyone actually need GeForce Experience at all? Is there any need for it outside of gaming, screen capture, etc.? I completely removed it and just use the driver, and there doesn't seem to be any problem so far.

    I just wanted to make sure I won't hurt performance by doing this. I don't game at all, and only care about peak rendering performance.

  • MEC4D Posts: 5,249

    I expected that there were more than 4 cores supporting your cards when I saw your results.


  • MEC4D Posts: 5,249

    Nope, I use it only for shutting off the LEDs on my cards, but you don't actually need it. It will not affect your performance, and it only slows down your system at startup. I get my drivers and CUDA toolkit from my Nvidia Developer account anyway, and I always install drivers clean and manually.


  • ANGELREAPER1972 Posts: 4,560

    I'm using GeForce Experience. Well, yeah, I do have one game on this setup (I had others before the crash), and after years there are finally some new ones coming out soon, so I'll get those. I'm a fan of the franchise and everything in it: the comics, books, games, art, and models from the games. And no, I'm not talking about WoW, though this franchise and its sister game franchise inspired that and others. Anyway, I had some updates, and updates sometimes have to be installed more than once. I was trying to use DAZ and renders weren't rendering, so I started up GFE again and found out I had to reinstall the drivers; now I can render again. I was starting to get annoyed until I thought of checking that again.

  • fastbike1 Posts: 4,081

    I have found GeForce Experience to be trouble without any discernible benefit. I have also found that immediately installing the latest Nvidia driver update is a recipe for trouble. It was the same for ATI drivers.

    Moral here as always: if it ain't broken, don't fix it.

  • ANGELREAPER1972 Posts: 4,560

    Well, the render got going. At the moment it has Aiko 7 in her goth lolita dress with razor-cut bob hair, holding a lollipop from the Lil Monsters set, along with Kid 4 dressed as Sam from the Trick 'r Treat movie (also from the Lil Monsters pack), also eating a lolly, with Collective3d Portrait Vignettes Horror 1 and 3 as the background/foreground and Warhol-inspired spotlighting for DAZ Studio Iray vol. 2. It is now going on 23 hours of rendering at 20% done; it has not taken this long to render in a very long time, not on this V17 Nitro anyway. My desktop, yeah, was from the early Iray days. This is a GTX 850M, so when I decide which specific computer I'm going to get with dual GTX 1080s (and of course when they can do Iray), it is going to be a huge improvement.

  • alexhcowley Posts: 2,404
    fastbike1 said:

    I have found GeForce Experience to be trouble without any discernible benefit. I have also found that immediately installing the latest Nvidia driver update is a recipe for trouble. It was the same for ATI drivers.

    Moral here as always: if it ain't broken, don't fix it.

    This is why I won't let a copy of Windows 10 anywhere near my Windows 7 PC, despite the best efforts of Microsoft to persuade me otherwise.

    Cheers,

    Alex.

  • fastbike1 Posts: 4,081

    @alexhcowley

     

    True dat!
