Iray Starter Scene: Post Your Benchmarks!


Comments

  • MEC4D Posts: 5,249

    I look at the log files; they show the exact time, but sometimes the time in the log file is slower than the actual rendering was, which is weird. I monitor the GPU usage and don't count seconds in DS anymore, as it will never be the same with a GPU; the tasks are assigned differently with each push of the render button, and it all depends on your full system, not just the GPU itself. For one person, OptiX on and optimization for speed works better today but not tomorrow. With my 4 cards, speed needs to be on, and OptiX; with 1-3 cards, OptiX off. There is no one setting for everyone, so it is good to see what works best on your system.

    MEC4D said:

     When you render with OptiX off, you need to change optimization to speed; when OptiX is on, change it to memory to speed up your render. However, somehow I don't see any difference with the last Beta build; it still renders at the same speed, on or off.

    I added a second card (now 2 x GTX 760) and did a handful of test renders with different settings and different GPUs enabled/disabled.

    https://docs.google.com/spreadsheets/d/11-Z7MmvGJyeMhjgUAJhOAvLhbLkFdno3Jf8XBSnJ9eg/edit#gid=0

    So far "optix on" and "optimization speed" with all cores active indicate the best setting in my case.

    The "init time" varies with a large margin, even between renderings with the same settings.  Between 20 and 40 seconds on the GPUs. The CPU seems to be concistent around 11-12 seconds.

    Iterations per GPU or CPU for the actual rendering vary a lot less. The first image appears as soon as the CPU's "init time" has passed, and iterations start increasing a lot faster once the GPU init times have passed.

    Lessons learned:

    • Do not place the old graphics card on the shelf. It can still do a lot of good.
    • Experiment with settings, to see what actually matters.
    • Do more than one test run for benchmarks, due to init time variance (see the quick averaging sketch below).
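
    Since the init time alone can swing 20-40 seconds between otherwise identical runs, averaging a few runs gives a fairer number. Below is a minimal Python sketch of that bookkeeping; the run times in it are placeholders, not measured results.

        # Average several timed runs of the same benchmark scene.
        # The run times below are placeholders, not measured results.
        run_times_sec = [372.5, 361.8, 380.2, 365.9]  # hypothetical total render times, in seconds

        mean_time = sum(run_times_sec) / len(run_times_sec)
        spread = max(run_times_sec) - min(run_times_sec)

        print(f"Average: {mean_time:.1f} s over {len(run_times_sec)} runs")
        print(f"Spread (max - min): {spread:.1f} s")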

     

  • ZarconDeeGrissom Posts: 5,412
    edited September 2016

    That, and there is some variation per run, especially with 'GPU Boost' cards. With the GTX 960 I was getting between 5m 50s and 6m 20s on each test run. Close enough to call it six minutes.

    This is just the card as a dedicated Iray crunch card with the monitors plugged into a second card. All spheres, "Optix on" for all test runs.

    E1ghzCC (Equivalent 1.0GHz CUDA Cores) is based on just the clock of the card and the number of CUDA cores. It's not an exact comparison across GPU chip generations, though it is close enough to just call it 'Horsepower' and "Horsepower per Watt", lol.
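
    For anyone curious how the math works out, here is a rough sketch of the idea as described: CUDA cores scaled by core clock so every card is expressed as equivalent cores running at 1.0 GHz, then divided by board power for the per-watt figure. The formula, core counts, clocks, and TDP values below are illustrative assumptions, not the exact numbers behind the attached charts.

        # Sketch of the E1ghzCC idea: assumed formula E1ghzCC = cores * (clock_MHz / 1000).
        cards = {
            # name: (CUDA cores, core clock MHz, TDP watts) - approximate reference specs
            "GTX 960": (1024, 1178, 120),
            "GTX 980": (2048, 1126, 165),
        }

        for name, (cores, clock_mhz, tdp_w) in cards.items():
            e1ghzcc = cores * (clock_mhz / 1000.0)   # "horsepower"
            per_watt = e1ghzcc / tdp_w               # "horsepower per watt"
            print(f"{name}: {e1ghzcc:.0f} E1ghzCC, {per_watt:.2f} E1ghzCC/W")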

    GPU Boost vs Iray performance: here is a good vid that covers it, and then some.

    http://www.youtube.com/watch?v=VeQe6pEEtVI

    So, if your room is warmer today than it was yesterday, it can affect the Iray bench results on cards with GPU Boost.

    Attachments: GTX960_IrayPrelim04001.png (1400 x 680), HPperWatt_GT7xx_GTX9xx_002crop2.png (700 x 500)
    Post edited by ZarconDeeGrissom on
  • gixerp Posts: 37
    edited September 2016

    Well... I have just got my very first graphics card, so I thought I'd give this a go...

    I'm not sure I did it correctly though, because the results seem a little strange... Is it possible for it to be quicker with just the GPU? Using the GPU and the CPU together took longer.

    Windows 10 64bit : CPU = 4 core Intel Xeon X5667 @ 3.07GHz : GPU = Geforce GTX 980 Phantom (no tinkering, just straight outta the box and into the computer.. load scene & render)

    GPU & CPU took 8 minutes 37 secs

    GPU ONLY took 5 minutes 13 secs

    Is that strange or normal ??

     

    Post edited by gixerp on
  • It is possible for the combination to be slower than the GPU alone - it depends, I believe, on what is in the scene and on the hardware.

  • MEC4D Posts: 5,249

    The CPU will slow down the GPU scaling no matter what scene you use; don't use it in combination.

    gixerp said:

    Well... I have just got my very first graphics card, so I thought I'd give this a go...

    I'm not sure I did it correctly though, because the results seem a little strange... Is it possible for it to be quicker with just the GPU? Using the GPU and the CPU together took longer.

    Windows 10 64bit : CPU = 4 core Intel Xeon X5667 @ 3.07GHz : GPU = Geforce GTX 980 Phantom (no tinkering, just straight outta the box and into the computer.. load scene & render)

    GPU & CPU took 8 minutes 37 secs

    GPU ONLY took 5 minutes 13 secs

    Is that strange or normal ??

     

     

  • GeneralDee Posts: 131
    edited September 2016

    I finally did a quick test with my new GPU now that I can run Iray without pain. I'm still using Daz Studio 4.8

    Windows 7 64

    i7 4930K 3.4-3.7 GHz

    16GB RAM

    EVGA Nvidia GTX 980 Ti 6GB

    With GPU only and Optix Prime enabled the image rendered at 2 minutes 52.82 seconds.

    With both CPU and GPU also with Optix Prime it was 2 minutes 40.37 seconds

    With both CPU and GPU  without Optix Prime it was 4 minutes 21.35 seconds

    Post edited by GeneralDee on
  • Haslor Posts: 402

    Okay, I finally got around to running this test.

    The System:

    System: i5 4670K on an MSI Z97 Gamer 5 system board and 32GB memory

    GPU: Zotac GTX 960 Amped 4GB

    OS: Windows 10 Pro

    The Render Times: (worst to best, average time of four renders)

    GPU only render / without Optix Prime: 8 minutes 35.9 seconds

    CPU and GPU render / without Optix Prime: 8 minutes 8.9 seconds.

    CPU and GPU render / with CPU and GPU Optix Prime: 6 minutes 23.8 seconds

    GPU only Render / with GPU Optix Prime: 6 minutes 17.25 seconds

    GPU only Render / with CPU Optix Prime: 6 minutes 2.21 seconds (Best: 6 minutes 1.31 seconds)

  • Haslor Posts: 402
    edited October 2016

    I did a run with Optix on the CPU only and with Memory and Speed. (Photoreal on the GPU only)

    Speed: 5 min 53.29 sec.

    Memory: 6 min 1.96 sec.

    Putting Optix on the GPU seems to slow down my render times. (If someone could check that.)

    MEC4D said:

    I look at the log files; they show the exact time, but sometimes the time in the log file is slower than the actual rendering was, which is weird. I monitor the GPU usage and don't count seconds in DS anymore, as it will never be the same with a GPU; the tasks are assigned differently with each push of the render button, and it all depends on your full system, not just the GPU itself. For one person, OptiX on and optimization for speed works better today but not tomorrow. With my 4 cards, speed needs to be on, and OptiX; with 1-3 cards, OptiX off. There is no one setting for everyone, so it is good to see what works best on your system.

    MEC4D said:

     When you render with OptiX off, you need to change optimization to speed; when OptiX is on, change it to memory to speed up your render. However, somehow I don't see any difference with the last Beta build; it still renders at the same speed, on or off.

    I added a second card (now 2 x GTX 760) and did a handful of test renders with different settings and different GPUs enabled/disabled.

    https://docs.google.com/spreadsheets/d/11-Z7MmvGJyeMhjgUAJhOAvLhbLkFdno3Jf8XBSnJ9eg/edit#gid=0

    So far "optix on" and "optimization speed" with all cores active indicate the best setting in my case.

    The "init time" varies with a large margin, even between renderings with the same settings.  Between 20 and 40 seconds on the GPUs. The CPU seems to be concistent around 11-12 seconds.

    Iterations per GPU or CPU for the actual rendering vary a lot less. The first image appears as soon as the CPU's "init time" has passed, and iterations start increasing a lot faster once the GPU init times have passed.

    Lessons learned:

    • Do not place the old graphics card on the shelf. It can still do a lot of good.
    • Experiment with settings, to see what actually matters.
    • Do more than one test run for benchmarks, due to init time variance.

     

     

    Post edited by Haslor on
  • hphoenix Posts: 1,335
    edited October 2016

    ASUS GTX 1080 STRIX GPU (1860MHz boost clock), GPU only.

     

    (Optix OFF, Speed optimized)

    Test scene ran to completion in 2 min, 40.12 sec.

     

    (Optix ON, Speed optimized)

    Test scene ran to completion in 2 min, 58.58 sec.

     

    (Optix ON, Memory optimized)

    Test scene ran to completion in 3 min, 6.85 sec.

     

    Times may not be completely correct, as the driver may be having an issue (the viewport is not rendering, but the render window shows fine...).

     

    Post edited by hphoenix on
  • ZarconDeeGrissom Posts: 5,412
    edited October 2016

    hphoenix, 'they' got the GTX10xx Iray driver going!?  Or still WIP?

    Post edited by ZarconDeeGrissom on
  • So, the 1080 performs almost exactly like the 980 Ti, the lower CUDA core count being compensated for by the higher clock speed.

  • ZarconDeeGrissom Posts: 5,412
    edited October 2016

    Good question. And that's taking cooling requirements into the equation, not to mention the price $$$$.

    Nyghtfall said:

    I got a new GTX 980 Ti, yesterday.  My full PC specs:

    CPU: 3.5 GHz Core i7-4770K
    GPU 1: 6 GB GeForce GTX 780 (Display Adapter)
    GPU 2: 6 GB GeForce GTX 980 Ti
    RAM: 32 GB
    O/S: Windows 7 Pro 64-bit

    And my new numbers:

    CPU: 25 minutes 30.89 seconds

    GPU 1: 3 minutes 49.97 seconds

    GPU 2: 2 minutes 39.41 seconds

    CPU + GPU 1: 3 minutes 48.48 seconds

    CPU + GPU 2: 2 minutes 40.75 seconds

    CPU + GPU 1 & 2: 2 minutes 9.91 Seconds

    Conclusion: Buying a dedicated GPU for rendering was absolutely worth it.  In addition to the speed boost, I can finally use all of my PC's available resources to render high-resolution images while retaining full multi-tasking functionality without any system lag.

    I think that 8GB of memory to work with Iray scenes is the biggest selling point of the GTX 1080/1070. For some, the only selling point. Hmmm.

    80% of the air that goes through a jet engine is for cooling the engine; only 20% of it is burned to produce thrust. lol.

    Post edited by ZarconDeeGrissom on
  • GPU used: GTX 1080 Gaming X (straight out of the box, not tinkered with)
    CPU: i7-6700K.

    Here are the results on this system:
     

    GPU only - Scene runs to completion in 2:54 minutes.
    CPU + GPU - Scene runs to completion in 2:57 minutes.

    GPU only (without OptiX) - Scene runs to completion in 6:03 minutes.

  • hphoenix Posts: 1,335

    hphoenix, 'they' got the GTX10xx Iray driver going!?  Or still WIP?

    Yep, in the just-released beta build of DS.  My driver issues may be causing some issues, so those benchmark numbers may not be wholly accurate.

     

     

    So, the 1080 performs almost exactly like the 980 Ti, the lower CUDA core count being compensated for by the higher clock speed.

    Possibly. Until I get my driver issues worked out, it's simply not conclusive. However, the 1080 that I have is currently performing (with no overclock, mind you) on par with an overclocked 980 Ti, and with an additional 2GB of VRAM. It should also be mentioned that it is the only card in there and is also driving a 4K monitor @60Hz, so there is that additional load.

     

  • bluejaunte Posts: 1,859

    Gave it a go on my 980 Ti overclocked to 1395 MHz.

    2 minutes 39.27 seconds

    It's the only card and is also driving a main display of 1440p @165Hz and two additional 1680 x 1050 @60Hz, though I doubt that has much influence.

  • Silver Dolphin Posts: 1,586
    edited October 2016

    I just ran the new Daz beta with my shiny new Pascal 1070 ($380) and 3x Nvidia 780 6GB, 8832 CUDA cores in total.

    Ran the Iray starter scene and it took 1 min 16 sec, about 5000 iterations.

    Post edited by Silver Dolphin on
  • I just ran the new Daz beta with my shiny new Pascal 1070 ($380) and 3x Nvidia 780 6GB, 8832 CUDA cores in total.

    Ran the Iray starter scene and it took 1 min 16 sec, about 5000 iterations.

    How long does the single 1070 take to render the benchmark scene?

  • jerham Posts: 153

    GPU : MSI GTX 1070 FE 8GB, CPU: I7 6700K (not overclocked), Memory: 32 GB

    Starter scene render times:

    GPU Only (OptiX on), Total Rendering Time: 3 minutes 14.45 seconds

    GPU + CPU (OptiX on): Total Rendering Time: 3 minutes 14.48 seconds

     

  • namffuak Posts: 4,040

    MSI GTX 1080, GPU only: 3 minutes 9.53 seconds - core clock 1936 MHz, memory clock 4513 MHz.

    MSI 980 TI and 1080 together - 1 minute 43.2 seconds.

     

    And I have an intense render scene that has taken 57 minutes 41 seconds on the 980 TI; with both cards the render time drops to 34 minutes 55 seconds.

  • BeeMKay Posts: 6,982
    edited October 2016

    CPU i7-4770, 32GB memory, GeForce GTX 980 Ti, not overclocked. The card also drives a regular monitor. Rendering in 4.8.

    GPU only, Optix Prime Acceleration ON
    Total Rendering Time: 2 minutes 53.41 seconds; 5000 iterations, 32.257s init, 139.670s render

    GPU only, Optix Prime Acceleration OFF
    Total Rendering Time: 4 minutes 17.71 seconds; 5000 iterations, 35.365s init, 221.385s render

    GPU & CPU, Optix Prime Acceleration ON
    Total Rendering Time: 2 minutes 56.78 seconds; (GeForce GTX 980 Ti): 4607 iterations, 33.227s init, 142.541s render; CPU (7 threads): 393 iterations, 13.537s init, 161.771s render

    GPU & CPU, Optix Prime Acceleration OFF
    Total Rendering Time: 4 minutes 25.43 seconds; (GeForce GTX 980 Ti): 4532 iterations, 45.793s init, 218.612s render; CPU (7 threads): 468 iterations, 13.639s init, 250.487s render
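
    One way to read those per-device numbers (a quick sketch, using the GPU & CPU / OptiX on figures above) is iterations divided by render seconds, which shows why adding the CPU barely changes the total time:

        # Per-device throughput from the "GPU & CPU, OptiX ON" log figures quoted above.
        devices = {
            "GTX 980 Ti": (4607, 142.541),      # iterations, render seconds
            "CPU (7 threads)": (393, 161.771),  # iterations, render seconds
        }

        for name, (iterations, seconds) in devices.items():
            print(f"{name}: {iterations / seconds:.1f} iterations/s")
        # Roughly 32 it/s for the GPU versus about 2.4 it/s for the CPU here.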

    Post edited by BeeMKay on
  • My benchmark results:

    GPU - GTX 1070, 8G, Driver 375.63

    CPU 4770K, 16GB

    Rendering to 95% completion

    GPU 6min 13sec
    GPU+Optix 3min 5sec
    GPU+Optix+CPU 3min 6sec

  • SickleYield's test = 1 minute 43 seconds
    pc specs 
    CHRONOS-pro
    Corsair 350D
    Processors: Intel Core i7 6700K Quad-Core 4.0GHz (4.2GHz TurboBoost)
    Motherboard: ASUS Maximus VIII Gene
     Windows 10 Pro
    Graphic Cards: Dual 8GB NVIDIA GTX 1080

  • Artini Posts: 8,528
    edited October 2016

    SickleYield's test

    GPU only: 6 minutes 6.69 seconds, 5000 iterations, 13.685s init, 351.308s render

    GPU (Optix on): 3 minutes 9.13 seconds, 5000 iterations, 13.937s init, 174.074s render

    pc specs

    Processor: Intel Core i7-3770K Quad-Core 3.5 GHz

    RAM: 32 GB DDR 3

    Motherboard: Asus P8 Z77-V PRO

    Graphic Card: Gigabyte GTX 1080 G1 Gaming 8GB VRAM, Driver 375.63

    Windows 7 Pro 64 bit

     

    Post edited by Artini on
  • BeeMKay Posts: 6,982

    SickleYield's test = 1 minute 43 seconds
    pc specs 
    CHRONOS-pro
    Corsair 350D
    Processors: Intel Core i7 6700K Quad-Core 4.0GHz (4.2GHz TurboBoost)
    Motherboard: ASUS Maximus VIII Gene
     Windows 10 Pro
    Graphic Cards: Dual 8GB NVIDIA GTX 1080

    Is that more than one card? The time seems rather short compared to the 1080 right below your entry.

  • nicstt Posts: 11,714
    edited October 2016
    namffuak said:

    MSI GTX 1080, GPU only: 3 minutes 9.53 seconds - core clock 1936 MHz, memory clock 4513 MHz.

    MSI 980 TI and 1080 together - 1 minute 43.2 seconds.

     

    And I have an intense render scene that has taken 57 minutes 41 seconds on the 980 TI; with both cards the render time drops to 34 minutes 55 seconds.

    I'd be interested in comparisons on the same system of a solo render on the 980 Ti and the 1080, if you are able? :)

    Obviously, if one is driving the monitors then it isn't a true comparison.

    Post edited by nicstt on
  • BeeMKay said:

    SickleYield's test = 1 minute 43 seconds
    pc specs 
    CHRONOS-pro
    Corsair 350D
    Processors: Intel Core i7 6700K Quad-Core 4.0GHz (4.2GHz TurboBoost)
    Motherboard: ASUS Maximus VIII Gene
     Windows 10 Pro
    Graphic Cards: Dual 8GB NVIDIA GTX 1080

    Is that more than one card? The time seems rather short compared to the 1080 right below your entry.

    Dual means two 1080s

  • BeeMKay Posts: 6,982
    BeeMKay said:

    SickleYield's test = 1 minute 43 seconds
    pc specs 
    CHRONOS-pro
    Corsair 350D
    Processors: Intel Core i7 6700K Quad-Core 4.0GHz (4.2GHz TurboBoost)
    Motherboard: ASUS Maximus VIII Gene
     Windows 10 Pro
    Graphic Cards: Dual 8GB NVIDIA GTX 1080

    Is that more than one card? The time seems rather short compared to the 1080 right below your entry.

    Dual means two 1080s

    Ah, thanks Richard. I thought it was some part of the name.

  • Tottallou Posts: 555
    edited October 2016

    GTX 1080 - CPU i7-4770 4.00 GHz, 32GB memory

    GPU only - OptiX on: 2 minutes 51.49 seconds
    GPU only - OptiX off: 3 minutes 16 seconds

    GPU/CPU - OptiX on: 2 minutes 51.56 seconds

    GPU/CPU - OptiX off: 3 minutes 18 seconds

    Post edited by Tottallou on
  • L'Adair Posts: 9,479

    Since getting my new computer, I've done quite a few speed tests, though not with Sickleyield's test scene. I determined the CPU Only renders using the 4.9.3.56 Beta on the new computer were about 5.6 times faster than the CPU Only renders on the HP with the same Beta. Then the 4.9.3.117 Beta was released. Using the same scene, I determined that on my system, rendering with both CPU and GTX 1080 with OptiX off was the fastest setting.

    I downloaded Sickleyield's test scene and rendered it CPU Only in the 4.9 Release as a base for comparison. I also rendered the test scene with the default render settings in the file, and then I rendered the scene with quality off. All images were rendered to 5000 samples.

    First, here's the system I used:
      Intel i7-6900K CPU @ 3.20GHz (8-core)
      32GB RAM
      Windows 10 Pro x64
      MSI GTX1080 (Armor Series)

    And here are the results of the renders.

    ------------------------------------------------
    4.9 Release

      CPU Only; Default Progressive Rendering Settings—
        Received update to 05000 iterations after 995.981s. (16 minutes, 35.981 seconds)

      CPU Only, Progressive Rendering Settings: Quality Off, Max Samples 5000, Max Time 0—
        Received update to 05000 iterations after 955.855s. (15 minutes, 55.855 seconds)

    ------------------------------------------------
    4.9.3.117 Beta

      CPU Only; Default Progressive Rendering Settings—
        Received update to 05000 iterations after 1373.470s. (22 minutes, 53.47 seconds)

      CPU Only, Progressive Rendering Settings: Quality Off, Max Samples 5000, Max Time 0—
        Received update to 05000 iterations after 1341.215s. (22 minutes, 21.215 seconds)

      GTX1080 Only; Default Progressive Rendering Settings—
        Received update to 05000 iterations after 350.282s. (5 minutes, 50.282 seconds)

      GTX1080 Only, Progressive Rendering Settings: Quality Off, Max Samples 5000, Max Time 0—
        Received update to 05000 iterations after 348.377s. (5 minutes, 48.377 seconds)

      GTX1080 & CPU; Default Progressive Rendering Settings—
        Received update to 05000 iterations after 292.077s. (4 minutes, 52.077 seconds)

      GTX1080 & CPU, Progressive Rendering Settings: Quality Off, Max Samples 5000, Max Time 0—
        Received update to 05000 iterations after 286.818s. (4 minutes, 46.818 seconds)

    ------------------------------------------------

    I found it interesting that the latest Beta is significantly slower than the 4.9 Release when rendering in CPU Only. In the Change Log, there are comments about workarounds for Iray Bug 17733:

    • Update to NVIDIA Iray 2016.2 Release (272800.6312)

    • Iray convergence value is now being scraped from verbose message handling to show progress during a render; pending NVIDIA resolution of Iray bug 17733

    • Logging of renders performed using NVIDIA Iray is now more verbose; workaround for NVIDIA bug 17733

    • Progress reporting for renders performed using NVIDIA Iray has changed in the convergence case; if completion is based on the total number of iterations, progress is calculated based on the difference between 0, the current iteration count and the max iteration count; if completion is based on total amount of convergence, calculation is based on the difference between the initial convergence amount, the current convergence amount and the max convergence amount; workaround for NVIDIA bug 17733

    I wonder how much the workaround is affecting speed. It's hard to believe the workarounds account for the 6 minutes or so of difference between 4.9 Release CPU Only and 4.9.3.117 Beta CPU Only. I suspect DAZ got us the Beta as soon as they were satisfied it was stable, what with all the pitiful cries for Pascal support coming from people with new Pascal-based video cards (myself included! lol). I hope the next release will see the known bug resolved by Nvidia, and better speeds in CPU Only mode, if not all around.
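
    As a side note, the "Received update to ... iterations after ...s" lines convert straight into the minutes/seconds figures and an iterations-per-second rate. Here is a minimal sketch against the lines as quoted above; the exact log wording may differ between Daz Studio builds.

        import re

        # Parse a line like "Received update to 05000 iterations after 995.981s."
        # and restate it as minutes/seconds plus an iterations-per-second rate.
        LINE = re.compile(r"Received update to (\d+) iterations after ([\d.]+)s")

        def summarize(log_line):
            match = LINE.search(log_line)
            if not match:
                return "no match"
            iterations, secs = int(match.group(1)), float(match.group(2))
            minutes, rem = divmod(secs, 60)
            return f"{iterations} iterations in {int(minutes)} min {rem:.3f} s ({iterations / secs:.2f} it/s)"

        print(summarize("Received update to 05000 iterations after 995.981s."))
        # -> 5000 iterations in 16 min 35.981 s (5.02 it/s)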

  • BeeMKay said:
    BeeMKay said:

    SickleYield's test = 1 minute 43 seconds
    pc specs 
    CHRONOS-pro
    Corsair 350D
    Processors: Intel Core i7 6700K Quad-Core 4.0GHz (4.2GHz TurboBoost)
    Motherboard: ASUS Maximus VIII Gene
     Windows 10 Pro
    Graphic Cards: Dual 8GB NVIDIA GTX 1080

    Is that more than one card? The time seems rather short compared to the 1080 right below your entry.

    Dual means two 1080s

    Ah, thanks Richard. I thought it was some part of the name.

    Yep, and I had them in SLI too.
