Iray Starter Scene: Post Your Benchmarks!


Comments

• ZarconDeeGrissom Posts: 5,412
    edited June 2016

As of this post, I have yet to see usable numbers for these cards, either because the posted result lacked important info about the benchmark run (with spheres 8 and 9 or not, and whether it was GPU only or GPU + CPU), or because it has simply not been posted anywhere.

It is spectacular that some are able to afford the kind of graphics cards that can run this benchmark within a couple of minutes; that kind of electric bill is beyond my finances, in all honesty. I would very much appreciate some numbers on less expensive cards. I'm sure there must be someone out there, somewhere, who can take the time to share the information with us less fortunate individuals.

GTX 970 (GM204)? GTX 960 (GM206)? etc.

I do not have the kind of budget that allows me to just purchase many graphics cards, just so I can fumble onto one that works well enough for my Iray needs. Without any reliable information anywhere on how Iray runs on more affordable cards, it is foolish for me to waste my limited funds on Iray stuff of any kind.

    Post edited by ZarconDeeGrissom on
• hphoenix Posts: 1,335

    Here's a 970 render time for you.....

     

    Specs:

    AMD 8370E 8-core 3.3GHz CPU

    ASRock 

24GB DDR3-800 RAM

EVGA GTX 970, stock, with ACX 3.0 cooling

Windows 7 Pro 64-bit

    (970 is driving 2 monitors, one 4K, one 1920x1200)

     

    Full Sample Scene, All spheres:  4 minutes 16 seconds.

     

The sample scene is kind of dim in Iray, which tends to slow down convergence; a better-lit scene would be more representative of 'average' requirements. Just halving the shutter speed (which isn't really more light in the scene, but does help the camera resolve the image) brought it down to 3 minutes 49 seconds, and that was still hitting 5000 iterations.
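To make the exposure effect concrete, here is a minimal sketch of the arithmetic, assuming the render started from a shutter value of 128 (a common Iray tone-mapping default; the actual starting value isn't stated above):

```python
import math

# Halving the shutter-speed value (128 -> 64) doubles the exposure time,
# which brightens the image by one photographic stop without adding any
# light to the scene. Assumed starting value: 128 (common Iray default).
old_exposure = 1 / 128  # seconds
new_exposure = 1 / 64   # seconds

stops = math.log2(new_exposure / old_exposure)
print(f"{stops:+.0f} stop(s): the image is {2 ** stops:.0f}x brighter")
```

A brighter image reaches the convergence (noise) threshold sooner, which is why the tone-mapping change alone shaved time off the render.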

     

     

• ColemanRugh Posts: 511

As of this post, I have yet to see usable numbers for these cards, either because the posted result lacked important info about the benchmark run (with spheres 8 and 9 or not, and whether it was GPU only or GPU + CPU), or because it has simply not been posted anywhere.

It is spectacular that some are able to afford the kind of graphics cards that can run this benchmark within a couple of minutes; that kind of electric bill is beyond my finances, in all honesty. I would very much appreciate some numbers on less expensive cards. I'm sure there must be someone out there, somewhere, who can take the time to share the information with us less fortunate individuals.

GTX 970 (GM204)? GTX 960 (GM206)? etc.

I do not have the kind of budget that allows me to just purchase many graphics cards, just so I can fumble onto one that works well enough for my Iray needs. Without any reliable information anywhere on how Iray runs on more affordable cards, it is foolish for me to waste my limited funds on Iray stuff of any kind.

Iray is made for buyers... not necessarily for artists. 3Delight is still king for versatility.

• ZarconDeeGrissom Posts: 5,412
    edited June 2016

hphoenix, thank you. I was unsure if such times were only possible if one had multiple Titans. So having more CUDA cores really is important. My GT730 only has 384 CUDA cores running at 900MHz, vs the 1,664 CUDA cores in the GTX970 running at 1050MHz: about 4.3 times more cores, at over 100MHz faster. That has me hopeful that the 1,024 cores in the GTX 960 would also be a nice improvement over CPU-only operation.
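As a back-of-the-envelope way to compare cards on paper, you can multiply CUDA cores by core clock. This ignores architecture generation, memory bandwidth, and drivers, so treat it as a rough first-order sketch only; the GTX 960 clock below is an approximate reference value:

```python
# Naive throughput estimate: CUDA cores x core clock, relative to the GT 730.
# Ignores architecture, memory bandwidth, and drivers; rough estimate only.
cards = {
    "GT 730 (GK208)":  (384,  900),   # cores, clock in MHz
    "GTX 960 (GM206)": (1024, 1127),  # approximate reference clock
    "GTX 970 (GM204)": (1664, 1050),
}

base_cores, base_clock = cards["GT 730 (GK208)"]
base_score = base_cores * base_clock

for name, (cores, clock_mhz) in cards.items():
    ratio = (cores * clock_mhz) / base_score
    print(f"{name}: {ratio:.1f}x the GT 730 (naive estimate)")
```

By this crude measure the GTX 960 would land somewhere around 3x the GT 730 and the GTX 970 around 5x, before any architectural gains.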

ColemanRugh, yes, I too grow tired of Iray benchmark charts on other sites that only include thousand-dollar graphics cards that I would never be able to afford.

Without going too far into it, both 3Delight and Iray have their strengths, and both can cost considerable amounts of time and money. 3Delight can allow you to set up lighting and surfaces to mimic many things that do not happen in the real world. However, if you want more control of it, or to simulate real-world indoor room illumination, you really need to write your own shader code and run it on the standalone 3Delight version, and that is not exactly beginner stuff that is easy to 'just do'. The standalone 3Delight can also get incredibly expensive, especially when you dive into render farms. Iray is a potential option that I'm looking at for figuring out how to illuminate the hallway in my house, without spending a year's income on hardware and software.

3Delight in Daz Studio is great if you want to tell the computer how the light bounces around your scene, not so much if you want the computer to show you how light will actually bounce around your own house (with complex-shaped furniture and stuff), lol. Daz Studio 3Delight does have a caustics system; unfortunately it only works if the entire surface is visible to a camera, and it does not work so well on complex shapes with multiple reflection paths. Iray really is better for that, especially for a hobbyist who has not been doing CG their entire life. I would be happy if the graphics card could do the benchmark in around 15 minutes, especially if it had more than 4GB of RAM (unfortunately most affordable cards only have 4GB or less on them).

It's just a matter of finding a graphics card that can do Iray without breaking the bank.

    Post edited by ZarconDeeGrissom on
• jamest Posts: 19

I was wondering if replacing my GTX 980 Ti with the new GTX 1080 would be worth it for Iray.

Has anyone already used the GTX 1080 with this benchmark scene? Could you post the result to compare with the GTX 980 Ti results?

• I was wondering if replacing my GTX 980 Ti with the new GTX 1080 would be worth it for Iray.

Has anyone already used the GTX 1080 with this benchmark scene? Could you post the result to compare with the GTX 980 Ti results?

The 1080 doesn't yet have driver support for Iray, so no, it can't be tested (or used). Apparently nVidia were hoping to have updated drivers around the time of SIGGRAPH.

• ZarconDeeGrissom Posts: 5,412
    edited July 2016

And I'm sure that goes for the more watt-frugal GTX 1070 as well. There has been a bit of a discussion over yonder regarding these cards, and no word yet.

    http://www.daz3d.com/forums/discussion/89996/gtx-1080-iray-support/p1

I'm sure when the Iray driver is ready, nVidia will have it on their website, somewhere. I'm also patiently holding out to see what the sub-100-watt sibling brings to the table, so I'm not in any hurry myself. The GTX 1070 at 120 watts is over my watt budget; it would require more than just a bigger PSU.

In better news, I should have a 64-watt 4GB GT740 card here to test soon, to compare to the 23-watt 4GB GT730 (GK208) I have. The GTX960 was a tad over my 100-watt budget, and more than I was willing to spend on a card with only 4GB on it for an experiment; perhaps later on.

    Post edited by ZarconDeeGrissom on
• G_smith Posts: 7

We all know by now that rendering on a GPU is far better than on a CPU, so my benchmark doesn't bring any big news here... I benchmarked the original file, no removing of spheres. Sorry, but I am mostly CPU-bound, as I have been a Bryce hobbyist for a long time...

Render machine 1: ProLiant ML350 G5 with 2x Xeon 5450 (4 cores, 3GHz, 12MB cache), 32GB RAM (667MHz DDR2 FB-DIMM), and an Nvidia GT730 (Zotac, PCIe x1, 64-bit, 384 CUDA cores, 1GB DDR3).

    Render Times:

CPU only: 90% in 32 min, 100% in around 38 min

GPU only: 90% in 23 min 27 sec, 100% in around 32 min

GPU+CPU: 90% in 14 min 57 sec, 100% in around 20 min

Render Machine 2: 2x Xeon 2680 (8 cores, 16 threads, 2.7GHz / 3.1GHz Turbo) on an ASRock server board with 4x 4GB DDR3-1600 ECC REG. Onboard VGA.

CPU only: 90% in 8 min 18 sec, 100% in around 10+ min
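A quick sanity check on machine 1's GPU+CPU result: if the two devices render independently, their rates should roughly add. A small sketch using the 90% times above:

```python
# Sanity check: with GPU + CPU rendering together, the combined rate
# should roughly equal the sum of the individual rates (90% times).
gpu_only_min = 23 + 27 / 60   # GPU only: 23 min 27 sec
cpu_only_min = 32.0           # CPU only: 32 min

combined_rate = 1 / gpu_only_min + 1 / cpu_only_min  # job fraction per minute
print(f"predicted GPU+CPU time: {1 / combined_rate:.1f} min (measured: ~14.95)")
```

The measured 14 min 57 sec is a little slower than the naive 13.5-minute prediction, which seems plausible given scheduling overhead and the CPU also having to feed the GPU.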

    Hope it helps :)

     

     

• ZarconDeeGrissom Posts: 5,412
    edited July 2016

I have the 4GB 64-bit-wide DDR3 Zotac GT730, and I've been getting times in the 35-minute range for some time now, with it also driving my displays.

I did just get a GT740 to look at the other sub-one-hundred-watt contender with the suggested minimum of 4GB memory. Having the display on the other card did drop the memory usage to just under 2GB with the one scene I was testing (a single HD figure with nothing else in the scene).

Though for the sake of this thread, I'll stick to SickleYield's Iray Benchmark. The benchmark used 1,109MB of RAM on the card, with the displays plugged into the other card.

    23 watt 4GB GT730 (GK208) solo, 32 minutes 22.84 seconds.  (156 iterations per minute)

    64 watt 4GB GT740 (GK107) solo, 24 minutes 9.95 seconds.  (208 iterations per minute)

All spheres, OptiX on, no CPU, just the target GPU doing Iray, displays plugged into the other card.

If the GT740 were not so wasteful of power, I would probably have a better opinion of it. It's just difficult to get over the fact that the GT740 consumes nearly three times the watts of the GT730 and only delivers about 30% more performance.

Iterations per minute per watt: the GT730 is at 6.79 vs. the GT740 at only 3.25.

Iterations per minute per USD: the 80USD GT730 is at 1.95 vs. the 120USD GT740 at only 1.73.
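For anyone who wants to redo this arithmetic with their own card, a minimal sketch (assuming the run reached the full 5000 iterations; small differences from the figures above come down to rounding and the exact iteration count reached):

```python
# Recomputing the efficiency figures above from the raw benchmark results.
# Assumes the run reached the full 5000 iterations.
ITERATIONS = 5000

cards = {
    "GT730": {"minutes": 32 + 22.84 / 60, "watts": 23, "usd": 80},
    "GT740": {"minutes": 24 + 9.95 / 60,  "watts": 64, "usd": 120},
}

for name, c in cards.items():
    ipm = ITERATIONS / c["minutes"]  # iterations per minute
    print(f"{name}: {ipm:.0f} it/min, "
          f"{ipm / c['watts']:.2f} it/min/W, "
          f"{ipm / c['usd']:.2f} it/min/USD")
```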

(In my opinion, and not necessarily that of Daz3D or SickleYield.) The 384 CUDA core GT730 is low-watt, fanless, has no problems with 1080/60fps video playback, is extremely snappy on the desktop, and runs very cool under load. I can think of many uses for the card that do not involve Iray or gaming. The only application I can think of where the GT740 would excel is on my "Wall Of Shame", lol.

    Post edited by ZarconDeeGrissom on
  • Hi everyone,

I currently have a GTX 750, and the test result I get is similar to others posted here. I'm planning to buy a GTX 1060, which has 1280 CUDA cores. As far as I know, those cores determine the speed of the rendering, am I right? I will pair it with an i5-6500. Is the CPU very important for the render speed? Will the 1060 be a good upgrade?

    Sorry I don't know too much about the technical details of rendering.

• ZarconDeeGrissom Posts: 5,412
    edited July 2016

I opted not to consider cards that do NOT have 4GB or more of memory. I'm also not interested in cards that use more than 100 watts, because I work with audio and need the computer to be quiet most of the time. I was also kind of hoping the GTX 1060 would have 8GB and use less than 100 watts; it does not look like either will be happening any time soon. So I guess I'll be using 3DL exclusively for a bit longer.

For me, upgrading from the NX8600GT to the GT730 (GK208) was an easy choice last year (better performance at lower watt consumption). I'm just not impressed at all with the lower-end 'affordable' cards that are Iray capable, and the upper tier is just out of my price and watt range.

    ---------

GTX 1080, GTX 1070, GTX 1060: I have not seen anything on Iray drivers for them yet. So save your money and give it some time; besides, the prices are still a tad over the top right now.

    https://www.daz3d.com/forums/discussion/comment/1362326/#Comment_1362326

    Post edited by ZarconDeeGrissom on
• MEC4D Posts: 5,249

CPU: i7-5960X at 3.5GHz, 64GB RAM

4x Titan X Hybrid 12GB @ 1277MHz, 12,288 CUDA cores total

Latest Nvidia driver 368.69. Total power usage for the 4 cards: 640 watts while rendering, 36 watts at idle. Temps: 37°C while rendering, 25°C at idle. Water cooled.

    ______________________________________________

4 GPUs: 58 sec; with OptiX: 42 sec; changed optimization to Speed without OptiX: 40 sec

3 GPUs: 1 min 6 sec; with OptiX: 53 sec

2 GPUs: 1 min 59 sec; with OptiX: 1 min 14 sec

1 GPU: 3 min 18 sec; with OptiX: 2 min 19 sec

Not tested with CPU, as it only slows down rendering.
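Out of curiosity, those no-OptiX timings imply the following multi-GPU scaling (a quick sketch, taking the numbers exactly as posted):

```python
# Parallel-scaling check on the no-OptiX timings above (seconds).
timings = {1: 3 * 60 + 18, 2: 60 + 59, 3: 60 + 6, 4: 58}

single = timings[1]
for gpus, secs in timings.items():
    speedup = single / secs
    print(f"{gpus} GPU(s): {speedup:.2f}x speedup, "
          f"{speedup / gpus:.0%} parallel efficiency")
```

On such a short scene, the fixed per-run setup time eats into the gains at higher GPU counts, which is why 4 GPUs land around 3.4x rather than 4x.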

Same scene with HDRI light:

4 GPUs: 22 sec with OptiX; changed optimization to Speed without OptiX: 20 sec

• dijidave_a64850a362
edited July 2016

     

    Toshiba Qosmio X870

Intel i7-3610QM 2.3GHz, 24GB system RAM

    Nvidia GTX670M  3GB Video RAM

    Win7 64 bit

Render time: 16 min 36 sec (with all boxes ticked in Render Settings/Hardware: CPU, OptiX, etc.). All spheres, 95% converged ratio.

    Regards,

         Dave

    Post edited by dijidave_a64850a362 on
• All in all very impressive. Nice laptop, Dave. For others lurking around here, I will point out that the GTX670M has 336 CUDA cores, whereas the non-mobile one (without the 'M' in the name) has quite a few more.

    http://www.geforce.com/hardware/notebook-gpus/geforce-gtx-670m/specifications

    vs

    http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-670/specifications

I do find it odd that the laptop version does not have the wattage listed, given how that is kind of important for the battery life of a laptop, lol.

• ZarconDeeGrissom Posts: 5,412
    edited July 2016

MEC4D, do those Titans stay at idle when Daz Studio is open (not rendering, just open), or do they crank up to full clock when Studio is fired up? I noticed with the GT730 and the GT740 that any time Studio was running (even minimized on the taskbar), the cards would crank up to full throttle even when they were not doing anything at all. The GT740 would burn around 40 watts just sitting there doing nothing, with the displays plugged into the GT730.

    Post edited by ZarconDeeGrissom on
• MEC4D Posts: 5,249

That is a driver issue; my cards idle at 135MHz, consuming 9W. I had this issue with the early display drivers just before the release of Win10, but they fixed it after a few months.

What you can try is going to the Nvidia Control Panel and setting the GPU to a single monitor; it is set to multiple GPUs by default, so when you start DS, OpenGL takes all your GPU clocks out of standby mode and consumes a lot of power even if nothing is loaded onto the GPU. Also, in your case there are options for power; setting the power mode to Performance will speed up the rendering if you use an old driver.

The latest Nvidia driver, which I installed 2 days ago, is the best ever; the GPU scaling in Iray is great compared to the old one, and there are no issues. Sometimes the driver installation can get corrupted without you noticing, so always do a clean install and never an express install, as that can mess up your registry and be hard to fix later; uninstalling will not help. It happened to me once when I clicked the wrong install button.

MEC4D, do those Titans stay at idle when Daz Studio is open (not rendering, just open), or do they crank up to full clock when Studio is fired up? I noticed with the GT730 and the GT740 that any time Studio was running (even minimized on the taskbar), the cards would crank up to full throttle even when they were not doing anything at all. The GT740 would burn around 40 watts just sitting there doing nothing, with the displays plugged into the GT730.

     

• Phoenix3D Posts: 9

    A low end card test...

My system: i7-2600 @ 3.5GHz (4 cores, 8 threads), ASUS GTX 560 DirectCU II TOP (factory overclock), 336 CUDA cores, 1GB GDDR5, 8GB DDR3 RAM, Win 10 Pro.

    Loaded the scene, rendered as is, no CPU, driving a single secondary display @1920x1080, "Total Rendering Time: 9 minutes 45.7 seconds".

    Disabling as a display device, "Total Rendering Time: 9 minutes 25.62 seconds".

Same as the previous test, but with CPU checked: "Total Rendering Time: 8 minutes 27.92 seconds".

I've bought an ASUS GTX 980 Ti Matrix Platinum (2816 CUDA cores, 6GB GDDR5) on sale to replace this card, but I need to upgrade the power supply first, which may be a few weeks yet. Will be interesting to compare the two :)

    There's another benchmark site that others may find useful: http://www.migenius.com/products/nvidia-iray/iray-benchmarks

The GTX 780Ti Super Clocked (2880 CUDA cores) looks like a sweet card for those on a tighter budget who don't mind buying second-hand cards. I saw four on eBay that had come out of a V-Ray rendering machine while I was looking to buy the next card, though they may not come up too often. Only 3GB, but that may be enough for some.

• Mattymanx Posts: 6,879
    MEC4D said:

That is a driver issue; my cards idle at 135MHz, consuming 9W. I had this issue with the early display drivers just before the release of Win10, but they fixed it after a few months.

What you can try is going to the Nvidia Control Panel and setting the GPU to a single monitor; it is set to multiple GPUs by default, so when you start DS, OpenGL takes all your GPU clocks out of standby mode and consumes a lot of power even if nothing is loaded onto the GPU. Also, in your case there are options for power; setting the power mode to Performance will speed up the rendering if you use an old driver.

The latest Nvidia driver, which I installed 2 days ago, is the best ever; the GPU scaling in Iray is great compared to the old one, and there are no issues. Sometimes the driver installation can get corrupted without you noticing, so always do a clean install and never an express install, as that can mess up your registry and be hard to fix later; uninstalling will not help. It happened to me once when I clicked the wrong install button.

MEC4D, do those Titans stay at idle when Daz Studio is open (not rendering, just open), or do they crank up to full clock when Studio is fired up? I noticed with the GT730 and the GT740 that any time Studio was running (even minimized on the taskbar), the cards would crank up to full throttle even when they were not doing anything at all. The GT740 would burn around 40 watts just sitting there doing nothing, with the displays plugged into the GT730.

     

    Thanks for the info on the driver Mec.

• How do you get the benchmark?

• nelsonsmith Posts: 1,322

    GPU Only
    Windows 7  64 bit
    Intel i7 CPU 4 Cores  12.0 GB RAM  Nvidia GeForce GTX 670
    5 mins  38 secs

For an older system with an older graphics card and motherboard, I guess I'm not that bad off.

     

• CPU: i7-6700k
GPU: GTX 980 Ti
RAM: 32GB

    • Total Rendering Time: 3 minutes 32.60 seconds
    • Total Rendering Time: 2 minutes 24.91 seconds (with OptiX enabled)

    Now, that hint about OptiX was illuminating :)

     

• EBF2003 Posts: 28
    edited August 2016

    Daz 4.9
    1 EVGA GTX-970 SSC
    1 EVGA GTX-970 FTW
    i7-5820K
    64GB ddr4 ram
    OptiX Prime Acceleration = checked

Both GPUs and OptiX on
100% render
1 min 52 seconds

    Post edited by EBF2003 on
• MEC4D Posts: 5,249

When you render with OptiX off, you need to change optimization to Speed; when OptiX is on, change it to Memory to speed up your render. However, somehow I don't see any differences with the last Beta build; it still renders at the same speed on or off.

CPU: i7-6700k
GPU: GTX 980 Ti
RAM: 32GB

• Total Rendering Time: 3 minutes 32.60 seconds
• Total Rendering Time: 2 minutes 24.91 seconds (with OptiX enabled)

Now, that hint about OptiX was illuminating :)

     

     

• mmitchell_houston Posts: 2,472
    edited August 2016

I am in the process of running benchmarks on my systems. On System 2 (ASUS desktop, dual-core, 16GB RAM, with a GeForce GTX 960 4GB; full specs below in my sig): 4897 iterations, 215.758s init, 549.304s render (i.e. 3.6 min init, 9.15 min render). Rendering time: 12 min 49 sec.

    Needless to say, this was with the GPU. I'm not going to waste time on a CPU-only render with this system.

    Post edited by mmitchell_houston on
• mmitchell_houston Posts: 2,472
    edited August 2016

Okay, now to hit System 1 (see sig for full specs). This is my brand-new Alienware laptop. First up, a CPU-only test. It did okay, but a little slower than I expected, really: 5000 iterations, 18.607s init, 2395.308s render. Rendering time: 40 minutes 16.61 seconds.

Now to turn on the GPU: 4247 iterations, 28.897s init, 421.474s render. Rendering time: 7 minutes 34.9 seconds.

The render with OptiX Prime Acceleration was actually very nice: 4338 iterations, 25.220s init, 297.239s render. Rendering time: 5 minutes 24.1 seconds.
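Since the three runs stopped at different iteration counts, iterations per second is a fairer comparison than wall-clock time alone; a small sketch using the figures above:

```python
# Iterations per second for the three runs above (render time only, no init).
runs = {
    "CPU only":    (5000, 2395.308),  # iterations, render seconds
    "GPU":         (4247, 421.474),
    "GPU + OptiX": (4338, 297.239),
}

baseline = runs["CPU only"][0] / runs["CPU only"][1]
for name, (iters, secs) in runs.items():
    rate = iters / secs
    print(f"{name}: {rate:.1f} it/s ({rate / baseline:.1f}x CPU-only)")
```

That works out to roughly 5x for the GPU and 7x with OptiX on top, relative to the CPU-only run.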

    Tomorrow I shall try to get my Alienware Graphics Amplifier up and running, and see if it delivers the full power of the desktop card to my laptop.

    Post edited by mmitchell_houston on
MSI GT72, i7-6820HK, 32GB system RAM, GTX 980M 8GB VRAM: 6 min 15.22 secs, 5000 iterations (all spheres, CPU & GPU, OptiX unchecked)

     

cool

As above, except OptiX enabled: 4 min 38.45 secs

  • MEC4D said:

When you render with OptiX off, you need to change optimization to Speed; when OptiX is on, change it to Memory to speed up your render. However, somehow I don't see any differences with the last Beta build; it still renders at the same speed on or off.

I added a second card (so now 2x GTX 760) and did a handful of test renders with different settings and different GPUs enabled/disabled.

    https://docs.google.com/spreadsheets/d/11-Z7MmvGJyeMhjgUAJhOAvLhbLkFdno3Jf8XBSnJ9eg/edit#gid=0

    So far "optix on" and "optimization speed" with all cores active indicate the best setting in my case.

    The "init time" varies with a large margin, even between renderings with the same settings.  Between 20 and 40 seconds on the GPUs. The CPU seems to be concistent around 11-12 seconds.

Iterations per GPU or CPU over the actual rendering time vary a lot less. The first image appears as soon as the CPU's init time has passed, and iterations start increasing a lot faster as soon as the GPU init times have passed.

Lessons learned:

• Do not place the old graphics card on the shelf. It can still do a lot of good.
• Experiment with settings, to see what actually matters.
• Do more than one test run for benchmarks, due to init time variance (see the sketch below).
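On that last point, a minimal sketch for aggregating repeated runs (the times below are hypothetical placeholders; substitute your own "Total Rendering Time" values in seconds):

```python
import statistics

# Aggregate repeated benchmark timings to smooth out init-time variance.
# The values below are hypothetical placeholders.
runs_seconds = [152.3, 148.9, 161.0, 150.4]

mean = statistics.mean(runs_seconds)
spread = statistics.stdev(runs_seconds)
print(f"{len(runs_seconds)} runs: mean {mean:.1f}s, stdev {spread:.1f}s")
```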
• I just completed my first test render using the Alienware Graphics Amplifier (this is a proprietary system that allows you to use an external video card with the laptop). The specs are in my sig, but I'm including them here in case I ever upgrade the graphics card (the onboard video was disabled -- only the amplifier's 980 Ti was used for this test): System 1: Alienware 17 laptop: Windows 10 Pro, 32GB RAM | Intel Core i7-6700HQ (quad-core, 6MB cache, 3.5GHz) | Onboard video: NVIDIA GeForce GTX 970M 3GB GDDR5, 1280 CUDA cores | Alienware Graphics Amplifier: GeForce GTX 980 Ti 6GB (006G-P4-4996-KR), 2816 CUDA cores

    My test render ran in 2 min, 39 seconds (Optix enabled). I'm very pleased!
