Iray Starter Scene: Post Your Benchmarks!

SickleYield Posts: 7,639
edited March 2015 in The Commons

Get the starter scene here.

Or here, if you don't want to go to DeviantArt.

This requires the Genesis 2 Female Starter Essentials and the DAZ Studio 4.8 Public Beta with the Iray content installed. It should require nothing else.


If rendering CPU-only, or on a single older card, delete sphere 8 and sphere 9, as in the second image shown. Iray is not slowed down much by refraction or reflection, but it is slowed down considerably by heavy subsurface scattering (SSS), and these two spheres use versions of the "wax" and "jade" shaders DAZ included with the Iray Uber Defaults.

---

Here are my own benchmarks with this scene, at the suggested 400x520 render size and the scene's included render settings. All are on 64-bit Windows 7.

GPUs only, 1st machine (Nvidia GTX 980 x2, GT 740 x2, all 4 GB cards): 3 minutes 30 seconds to finish

CPU only, 1st machine (Intel Core i7-5930K 3.5 GHz, 6 cores): 16 minutes to reach 90%, 22 minutes total

CPU only, 2nd machine (Intel Core i7-4700MQ 2.40 GHz, 4 cores): 27 minutes 47 seconds to reach 90%, 1 hour 22 minutes total

CPU only, 2nd machine (Intel Core i7-4700MQ 2.40 GHz, 4 cores) on the modified scene without spheres 8 and 9: 8 minutes to reach 90%, 14 minutes to reach 95%

I do not yet have Iray-compliant drivers for the second machine's GPU (Nvidia appears not to appreciate my desire to use Windows 7 on my laptop), so I was unable to test that one.

---

IrayTest2LowSSS.jpg
400 x 520 - 67K
BenchMarkScene330.jpg
400 x 520 - 109K
Post edited by SickleYield on

Comments

  • cecilia.robinson Posts: 2,208
    edited December 1969

    Thanks, Sickle! Well-done!

  • starionwolf Posts: 3,670
    edited December 1969

    I don't have the patience to let Iray run any longer... need to use the computer for something else. I don't have the correct video card drivers to attempt to render the scene in GPU mode.

    AMD FX-6300 - I forgot the speed. I think it is 1.4 GHz or something.

    daz_website_2.jpg
    366 x 167 - 20K
  • Lissa_xyz Posts: 6,116
    edited December 1969

    GPU Only: EVGA GTX 770 4GB FTW Edition - only card in the system

    Total Rendering Time: 10 minutes 20.49 seconds

    SickleTestScene.png
    400 x 520 - 158K
  • SickleYield Posts: 7,639
    edited December 1969

    For those rendering on older cards or going CPU-only, delete the two spheres labeled "sphere 8" and "sphere 9" in your Scene tab. Iray is not lagged very much by reflection or refraction, unlike 3Delight, but it IS lagged by lots of SSS/transmitted color. I will change the first post to reflect this. This means you'll want to avoid using the "jade" and "wax" shaders included with the Iray Uber Defaults, but the metal, glass, water, car paint, etc. are still very fast.

    My slowest-rendering setup, the quad core on this laptop in CPU only mode, sped up to 14 minutes at 95% on this scene without those two spheres (it was almost an hour and a half before!).

    It looks like this (attached) in case anyone is confused about which spheres 8 and 9 are.

    IrayTest2LowSSS.jpg
    400 x 520 - 67K
  • Saba Taru Posts: 170
    edited March 2015

    Computer specs: Mac Pro running OSX 10.9.5 with a 3.5 GHz 6-Core Intel Xeon E5, 32 GB RAM, and dual AMD FirePro D500 3GB graphics cards.

    CPU only render (for obvious reasons): 15 minutes 40 seconds to reach 90%. 33 minutes total.

    While I'm disappointed that they chose Iray technology, I don't find these render times unacceptable or out of bounds for the results. DS has always rendered slower than other programs on my system. Now it has a reason. :P

    [EDIT] These numbers are for the original benchmark file, not the version with spheres 8 and 9 removed.

    Post edited by Saba Taru on
  • SickleYield Posts: 7,639
    edited December 1969

    Thanks to everyone participating so far!

  • thd777 Posts: 943
    edited March 2015

    Render time: 3 minutes 17 seconds

    GPU: 1x GTX 780 Ti and 1x GTX 980, plus CPU i7-3930K

    Both cards are dedicated to GPU rendering, with the displays driven by a third card.
    Ciao
    TD

    For those interested in details:

    CUDA device 1 (GeForce GTX 780 Ti): 2391 iterations, 19.443s init, 176.519s render
    CUDA device 0 (GeForce GTX 980): 2285 iterations, 19.889s init, 176.060s render
    CPU (10 threads): 324 iterations, 17.227s init, 178.486s render

    Post edited by thd777 on
  • namffuak Posts: 4,146
    edited December 1969

    Took a while.

    System - Windows 7 Pro, 6-core 3.5 GHz i7 (12 threads) and a GT 740 (4 GB, 384 cores)

    Both: 18 Minutes 11.26 Seconds - 3335 Iterations CPU, 1419 Iterations GPU, 4754 Iterations total
    CPU: 23 Minutes 26.54 Seconds - 4747 Iterations
    GPU: 52 Minutes 55.48 Seconds - 4723 Iterations

    Note how close the iteration counts are for the three results. Also, I'd like to point out that the 740 is driving two monitors at 1920 x 1080.

    These numbers are for the modified scene, without spheres 8 and 9 (I hit the 5,000 iteration point before finishing with them present).

  • SassyWench Posts: 602
    edited December 1969

    OK done :) Using CPU only.

    Laptop = i7-2670, 2.20 GHz, 4 cores: 90% at 38 min., finished in 1 hour 36 min.

    Desktop = i7-3770, 3.40 GHz, 4 cores: 90% at 20 min., finished in 51 min.

    I used all spheres just for the hell of it. LOL
    And I had mail, ICQ, and FF running on both machines if that makes a difference.

    I think those are fair times for a render.

    Thanks SY for starting this thread and for making and sharing the test scene! :)

  • Lindsey Posts: 1,999
    edited March 2015

    Thanks for posting the test scene, SickleYield. A real eye-opener for me, as it appears times are best using CPU + 2 (puny) GPUs.
    The test scene looked pretty good and had slowed down by around 3000 iterations. Looking forward to seeing DAZ's GPU offerings.

    System Specs: i7-3930K 3.20 GHz | GT 640 3GB 144 cores | GT 430 1GB 93 cores

    Taken from the DS Log.txt:

    CPU (12 threads): 5000 iterations, 17.919s init, 2255.151s render
    Total Rendering Time: 37 minutes 54.84 seconds

    CUDA device 0 (GeForce GT 640): 5000 iterations, 19.450s init, 2220.057s render
    Total Rendering Time: 37 minutes 22.85 seconds

    CUDA device 1 (GeForce GT 430): 1750 iterations, 19.307s init, 1467.393s render
    CUDA device 0 (GeForce GT 640): 3250 iterations, 18.275s init, 1470.045s render
    Total Rendering Time: 24 minutes 50.19 seconds

    CUDA device 1 (GeForce GT 430): 1047 iterations, 18.985s init, 1018.111s render
    CUDA device 0 (GeForce GT 640): 2117 iterations, 19.546s init, 1017.067s render
    CPU (10 threads): 836 iterations, 17.330s init, 1019.054s render
    Total Rendering Time: 17 minutes 18.84 seconds
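
    If anyone wants to pull these numbers out of their own DS log without hunting by eye, here is a rough Python sketch. The log path and the exact line format are assumptions based on the excerpts above, so adjust them for your own install:

    import re

    # Path is an assumption -- point this at your own DAZ Studio log file.
    LOG_PATH = "log.txt"

    # Matches the per-device statistics lines quoted above, e.g.
    #   CUDA device 0 (GeForce GT 640): 5000 iterations, 19.450s init, 2220.057s render
    #   CPU (12 threads): 5000 iterations, 17.919s init, 2255.151s render
    STAT_RE = re.compile(
        r"(?P<device>CUDA device \d+ \([^)]*\)|CPU \(\d+ threads\)): "
        r"(?P<iters>\d+) iterations, (?P<init>[\d.]+)s init, (?P<render>[\d.]+)s render"
    )
    TOTAL_RE = re.compile(r"Total Rendering Time: (.+)")

    with open(LOG_PATH, errors="ignore") as log:
        for line in log:
            stat = STAT_RE.search(line)
            if stat:
                print("{device}: {iters} iterations, {render}s render".format(**stat.groupdict()))
            total = TOTAL_RE.search(line)
            if total:
                print("Total:", total.group(1).strip())

    It just prints each device's iteration count and render time plus the totals, which is all I copied above.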

    edit: Test Scene rendered the full 5000 iterations, unmodified.

    Post edited by Lindsey on
  • electricglore Posts: 4
    edited March 2015

    Thanks for starting this thread, it is interesting to see the results.

    I had a go at the test scene and here are my results:

    GPU-only: 2x GTX980 3 minutes, 11.44 seconds
    CPU-only: i7-5930K 6 core at 3.55GHz 27 minutes 6.4 seconds to completion
    CPUs and GPU: 3 minutes 5.8 seconds

    I don't know if the difference between GPU only and GPU with CPU is significant, but it looks like in some scenes the hybrid approach may be fastest.

    Post edited by electricglore on
  • Lindsey Posts: 1,999
    edited December 1969

    electricglore said:
    Thanks for starting this thread, it is interesting to see the results.

    I had a go at the test scene and here are my results:

    GPU-only: 2x GTX980 3 minutes, 11.44 seconds
    CPU-only: i7-5930K 6 core at 3.55GHz 27 minutes 6.4 seconds to completion
    CPUs and GPU: 3 minutes 5.8 seconds

    I don't know if the difference between GPU only and GPU with CPU is significant, but it looks like in some scenes the hybrid approach may be fastest.

    @ electricglore: Can you run the test with just one GTX980. I am considering that GPU. Thanks

  • Ivy Posts: 7,165
    edited December 1969

    I'm getting a little better.
    Rendered in Iray.
    It took 12 minutes with 4 Iray point lights and 2 Iray spotlights.
    I'll play with it some more. I hate how fast my Titan graphics cards spool up; they sound like they're going to take off.

    Best viewed full-sized.

    These are my PC specs, taken straight from my computer: if you right-click My Computer and then click your Windows Experience rating, it will tell you everything about your computer's capabilities.

    This is mine:

    Capture.JPG
    716 x 998 - 78K
    test1.jpg
    1280 x 720 - 122K
  • electricglore Posts: 4
    edited December 1969

    Lindsey said:
    Thanks for starting this thread, it is interesting to see the results.

    I had a go at the test scene and here are my results:

    GPU-only: 2x GTX980 3 minutes, 11.44 seconds
    CPU-only: i7-5930K 6 core at 3.55GHz 27 minutes 6.4 seconds to completion
    CPUs and GPU: 3 minutes 5.8 seconds

    I don't know if the difference between GPU only and GPU with CPU is significant, but it looks like in some scenes the hybrid approach may be fastest.

    @ electricglore: Can you run the test with just one GTX980. I am considering that GPU. Thanks

    @Lindsey: Rendering the scene, a single GTX980 no CPU clocked in at 5 minutes 51.16 seconds.

    I was surprised that it wasn't double, but I guess you lose a bit of efficiency having to split the job up between two (or more) cards.
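
    As a quick back-of-envelope check in Python (just the two times above, nothing else assumed):

    # How close did two GTX 980s get to an ideal 2x over one?
    single = 5 * 60 + 51.16   # one GTX 980, GPU-only, seconds
    dual   = 3 * 60 + 11.44   # two GTX 980s, GPU-only, seconds

    speedup = single / dual               # ~1.83x
    print(f"speedup:    {speedup:.2f}x")
    print(f"efficiency: {speedup / 2:.0%} of a perfect 2x")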

    Anyway, the real question is whether a GTX970 would be that much slower, given that it is approx. half the cost of a GTX980, or better yet, two GTX970's instead of a single 980.

  • Lindsey Posts: 1,999
    edited December 1969

    electricglore said:
    Lindsey said:
    Thanks for starting this thread, it is interesting to see the results.

    I had a go at the test scene and here are my results:

    GPU-only: 2x GTX980 3 minutes, 11.44 seconds
    CPU-only: i7-5930K 6 core at 3.55GHz 27 minutes 6.4 seconds to completion
    CPUs and GPU: 3 minutes 5.8 seconds

    I don't know if the difference between GPU only and GPU with CPU is significant, but it looks like in some scenes the hybrid approach may be fastest.

    @ electricglore: Can you run the test with just one GTX980. I am considering that GPU. Thanks

    @Lindsey: Rendering the scene, a single GTX980 no CPU clocked in at 5 minutes 51.16 seconds.

    I was surprised that it wasn't double, but I guess you lose a bit of efficiency having to split the job up between two (or more) cards.

    Anyway, the real question is whether a GTX970 would be that much slower, given that it is approx. half the cost of a GTX980, or better yet, two GTX970's instead of a single 980.

    Thank you for running that for me. I'll keep my eyes peeled for GPU discussions and be open to a pair of GPUs, now that I just discovered there are GPUs that work in PCIe x1 slots that I can dedicate to my primary video display.

  • jaketh Posts: 1
    edited December 1969

    Here are my results for the test render

    System: i7-3770K @ 3.5 GHz, 16 GB RAM, GeForce GTX 660 Ti 4GB

    CUDA device 0 (GeForce GTX 660 Ti): 5000 iterations, 16.316s init, 1686.761s render

    Finished Rendering
    Total Rendering Time: 28 minutes 24.49 seconds

    Maximum number of samples reached.
    Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : Device statistics:
    Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (GeForce GTX 660 Ti): 3269 iterations, 16.803s init, 1140.211s render
    Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CPU (7 threads): 1731 iterations, 15.508s init, 1141.309s render

    Finished Rendering
    Total Rendering Time: 19 minutes 18.42 seconds

  • Cayman Studios Posts: 1,135
    edited December 1969

    Thanks for this, SickleYield.

    My system is over 5 years old: Intel Core i7-920 2.67 GHz, 4 Cores, 8 threads.

    CPU only (with spheres 8 & 9 removed): 28 mins to 90% convergence, 63 mins total.
    Iterations: 4751 (which I guess is about the same for everyone).

    I am impressed that some systems can do this in under 5 minutes. I wonder if I can jam one of these Nvidia cards onto my old Asus motherboard, because over an hour for a relatively small and simple scene is not really an attractive proposition for me. Presumably if it were four times as large (800 x 1040) it would take four times as long.
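
    The pixel arithmetic behind that guess, as a tiny Python check (the idea that render time scales linearly with pixel count is only a rough assumption):

    # Pixel-count ratio between the larger size and the benchmark size.
    ratio = (800 * 1040) / (400 * 520)
    print(ratio)        # 4.0 -- four times as many pixels
    print(63 * ratio)   # ~252 minutes, if my 63-minute CPU-only run scaled linearly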

  • Dumor3D Posts: 1,316
    edited March 2015

    Thanks for this Sickle... I had some fun here.

    System: Win 7 with a 6-core i7, (2) GTX 660 2GB (960 CUDA cores each) and (2) GTX 980 4GB (2048 CUDA cores each).

    I did the full scene and all runs went to 5000 iterations, just a moment before hitting 95% convergence.

    Results:
    All GPUs + CPU 00:02:34
    All GPUs - CPU 00:02:44
    (2) GTX980s only 00:03:14
    (2) GTX660s only 00:09:06
    (1) GTX980+(1)GTX660 only 00:04:41
    (1) GTX660 only 00:17:07
    (1) GTX980 only 00:05:52
    CPU only 00:32:00

    So the numbers are fairly linear based on CUDA cores if you take into account maybe about 30 seconds of processing and moving the scene to the cards (just guessing here). For instance, one GTX 660 was 17:07 while two were 09:06. If you cut both of those by 30 to 45 seconds, you wind up with times divisible by 2. The 980s, 2 at 03:14 vs. 1 at 05:52... same sort of math.
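
    Here is that back-of-envelope math as a quick Python sketch (the ~35 s of fixed overhead is just my guess):

    # My times above, in seconds, minus an assumed fixed per-render overhead
    # (scene prep and moving data to the cards) of about 35 s.
    OVERHEAD = 35.0

    times = {
        "1x GTX 660": 17 * 60 + 7,   # 00:17:07
        "2x GTX 660":  9 * 60 + 6,   # 00:09:06
        "1x GTX 980":  5 * 60 + 52,  # 00:05:52
        "2x GTX 980":  3 * 60 + 14,  # 00:03:14
    }

    for card in ("GTX 660", "GTX 980"):
        one = times[f"1x {card}"] - OVERHEAD
        two = times[f"2x {card}"] - OVERHEAD
        # Both come out close to 2.0, i.e. roughly linear in CUDA cores.
        print(f"{card}: two cards are {one / two:.2f}x faster than one")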

    This is just how I thought it should be.

    Post edited by Dumor3D on
  • SickleYield Posts: 7,639
    edited March 2015

    Some really good information here!

    It's interesting to me that, while I had slowdowns with my CPU on at the same time as my cards, other people have mostly had a mild, if not large, boost in speed. Maybe I need to rerun that test and check for variables I haven't accounted for.

    Post edited by SickleYield on
  • UHF Posts: 512
    edited December 1969

    Octane has a proper benchmark here:
    http://render.otoy.com/octanebench/

    Download that and post your results. It will report how fast your Nvidia setup can crunch 3D. But more than anything else, it should give you an idea of how fast you are compared to other users.

    My home PC with 2 GTX 980s scores 191. My work laptop with 1 GTX 780M scores 41.

    The battery of tests being run exercises different kernels for rendering. Direct Lighting is a simple, fast kernel, but it doesn't support caustics like glass. Path tracing is a full unbiased render. PMC is the same, only more intense, because it's optimized to resolve caustics better.

  • SickleYield Posts: 7,639
    edited December 1969

    UHF said:
    Octane has a proper benchmark here:
    http://render.otoy.com/octanebench/

    Download that and post your results. It will report how fast your Nvidia setup can crunch 3D. But more than anything else, it should give you an idea of how fast you are compared to other users.

    My home PC with 2 GTX 980s scores 191. My work laptop with 1 GTX 780M scores 41.

    The battery of tests being run exercises different kernels for rendering. Direct Lighting is a simple, fast kernel, but it doesn't support caustics like glass. Path tracing is a full unbiased render. PMC is the same, only more intense, because it's optimized to resolve caustics better.

    Does that require a paid version of Octane, or is it standalone? I ask because one of Iray's advantages is that it's free, whereas the Octane plugin for DS costs $402 at current exchange rates.

  • UHF Posts: 512
    edited December 1969

    UHF said:
    Octane has a proper benchmark here:
    http://render.otoy.com/octanebench/

    Download that and post your results. It will report how fast your Nvidia setup can crunch 3D. But more than anything else, it should give you an idea of how fast you are compared to other users.

    My home PC with 2 GTX 980s scores 191. My work laptop with 1 GTX 780M scores 41.

    The battery of tests being run exercises different kernels for rendering. Direct Lighting is a simple, fast kernel, but it doesn't support caustics like glass. Path tracing is a full unbiased render. PMC is the same, only more intense, because it's optimized to resolve caustics better.

    Does that require a paid version of Octane, or is it standalone? I ask because one of Iray's advantages is that it's free, whereas the Octane plugin for DS costs $402 at current exchange rates.

    It's a free benchmark.

    No. I'm not recommending the purchase of Octane if you aren't interested. I'm recommending that you use a standard GPU test. For instance, you may not be aware of many of the issues with GPU rendering and therefore may not be stressing it properly.

    In general I've found that performance on my PC and video cards has followed all the standard benchmarks.

  • UHF Posts: 512
    edited December 1969

    I asked a guy to run the Octane Benchmark on his PC and it scored him at 4.85. Clearly he'll benefit from a GPU upgrade.

    A GTX 980 is $500 US and scores 90. A GTX 970 is much cheaper ($300?) and I bet it only scores 20% less.

  • SickleYield Posts: 7,639
    edited December 1969

    I see your point, and it's fine if people want to do that, but that isn't really the purpose of the thread. It's more to see what different systems do in this specific engine. I wanted to do this because there's been some confusion about how different kinds of hardware handle Iray rendering, and I wanted to set up a standard scene for purposes of comparison in-engine.

  • Wilmap Posts: 2,917
    edited December 1969

    64-bit Windows 7 Professional.

    GPU Only

    GeForce Nvidia GTX 560 Ti - 448 Core Edition

    12 minutes exactly

    CPU Only

    Intel Core i7-2600 CPU @ 3.4 GHz - 3.70 GHz - 12 GB memory

    Had to stop at 90% - time 33 minutes

    Still had a long way to go I think.

    Top picture: GPU

    iray_CPU.jpg
    400 x 520 - 75K
    iray_GPU.jpg
    400 x 520 - 73K
  • DigiDotz Posts: 515
    edited December 1969

    Blimey! I didn't realise I hadn't installed the Starter Essentials till opening this scene... hold on

  • UHF Posts: 512
    edited December 1969

    SickleYield said:
    I see your point, and it's fine if people want to do that, but that isn't really the purpose of the thread. It's more to see what different systems do in this specific engine. I wanted to do this because there's been some confusion about how different kinds of hardware handle Iray rendering, and I wanted to set up a standard scene for purposes of comparison in-engine.

    Obviously Octane can't help with CPU-only rendering, but Lux/Reality does. There it just boils down to raw CPU horsepower. The king there is cwhichura. His 20-core beast is about 4 times faster than any top-of-the-line new PC.

    If you have a performance number for a render with a particular processor, then you can look up its benchmark and look up the benchmark of a different processor, and draw the correct conclusions.

    This is my current PC...
    http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-4790K+@+4.00GHz

    So... looking at Wilmap's performance numbers there, my PC should be 50% faster. However, I happen to know that i7-2600Ks can overclock like crazy, and I know I'll barely match it for speed (I also happen to have a PC like that).

    His 560 Ti scores 37 on the Octane Benchmark, and a single GTX 980 scores 97. So I should be able to render the same scene on the GPU 2.2 times faster? If Iray does multi-GPU, then I can do it 4.4 times faster?
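
    Here's the kind of estimate I mean as a quick Python sketch. The assumption that render time scales inversely with OctaneBench score is only a rough rule of thumb, and the numbers are the ones quoted in this thread:

    # Scale a known Iray render time by the ratio of OctaneBench scores.
    # Assumption: render time is roughly inversely proportional to the score.
    known_time_min = 12.0   # Wilmap's GPU-only time on the GTX 560 Ti, minutes
    known_score    = 37.0   # GTX 560 Ti score quoted above
    target_score   = 97.0   # single GTX 980 score quoted above

    estimate = known_time_min * known_score / target_score
    print(f"estimated single GTX 980 time: ~{estimate:.1f} min")
    # ~4.6 min; electricglore measured 5 min 51 s earlier in the thread,
    # so the rule of thumb is in the ballpark but a bit optimistic.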

    The scene you posted looks quite simple and I know it would render much faster in Octane. Soo....

    I'm curious, but do you throttle your GPU while rendering? I mean slow it down so you can actually surf the internet without long pauses? It's an issue in Octane. Under full load, your PC becomes quite unusable with a single GPU and unthrottled rendering. I use the Octane plugin with my secondary GPU. I use both GPUs for the final render.

  • DAZ_Spooky Posts: 3,100
    edited March 2015

    Still working. LOL.
    Let's see what I can post here. Full scene used in all cases.

    Windows System 1: Intel i7-4710HQ Quad Core 2.5 GHz, 16GB RAM, GTX 860M (4GB) equivalent to a GT 740
    CPU only: 1 hour 3 minutes 35 seconds (Note the CPU is driving the monitor and OpenGL)
    CPU+GPU: 11 minutes 33.47 seconds.


    Windows System 2: Intel i7-4790K Quad Core 4.0 GHZ, 32GB Ram, Quadro K2200 (4GB) and K6000 (12GB). (K2200 is driving the monitors and is, roughly, between a 740 and a 750, performance wise.) (Primary test machine.)

    CPU Only: 38 minutes, 18 seconds.
    CPU+K2200: 10 minutes 10 seconds
    CPU+K6000: 4 minutes 12.41 seconds
    CPU+K2200+K6000: 3 minutes 32 seconds


    iMac: Intel Core i7 3.4 GHz, 16 GB RAM, GTX 680MX (2GB)

    CPU Only: 1 hour 24 minutes 49 seconds
    CPU+GPU: GPU ran out of RAM and failed before render started. Render time posted under CPU Only.

    You usually want to include the CPU, so if your card fails it still renders. :)

    Post edited by DAZ_Spooky on
  • SickleYield Posts: 7,639
    edited December 1969

    Thanks for joining in, Spooky!

    When the GPU fails, does it crash your system?

  • DAZ_Spooky Posts: 3,100
    edited March 2015

    SickleYield said:
    Thanks for joining in, Spooky!

    When the GPU fails, does it crash your system?

    No, the card just dropped out, which is what it is supposed to do. LOL. It is also the reason people should include the CPU in their device list.

    Post edited by DAZ_Spooky on