Iray Starter Scene: Post Your Benchmarks!


Comments

  • namffuak Posts: 4,040
    nicstt said:
    namffuak said:

    MSI GTX 1080, GPU only - 3 minutes 9.53 seconds; core clock 1936 MHz, memory clock 4513 MHz.

    MSI 980 TI and 1080 together - 1 minute 43.2 seconds.

     

    And I have an intense render scene that has taken 57 minutes 41 seconds on the 980 TI; with both cards the render time drops to 34 minutes 55 seconds.

    I'd be interested in comparisons on the same system of a solo render on the 980 Ti and the 1080, if you are able? :)

    Obviously if one is driving the monitors then it isn't a true comparison.

    OK. First - my monitors run off a GT 740 SC that is not used for rendering.

    Windows 7 Pro, 3.5 GHz i7, 64 GB

    Both my cards are right out-of-the-box, but both are factory overclocked.

    980 Ti core clock is 1240 MHz, memory clock is 3304 MHz

    1080 core clock is 1949 MHz, memory clock is 4513 MHz.

    Timings - Both cards 1 minute 50.8 seconds (all terminated at 5,000 iterations)

    980TI - 3 minutes 18.8 seconds, power hit 72% or 180 W

    1080 - 3 minutes 12.16 seconds, power hit 57% or 102 W

    So - just about identical, with the edge going to the 1080 on power and possibly speed on more complex renders.
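
    To turn the solo and combined timings above into a scaling figure, here is a minimal Python sketch (times plugged in from this post; treating the harmonic combination of the two solo rates as the "ideal" two-card time is an assumption):

        # Two-card scaling check, using the times posted above.
        def to_seconds(minutes, seconds):
            return minutes * 60 + seconds

        t_980ti = to_seconds(3, 18.8)   # 980 Ti solo
        t_1080 = to_seconds(3, 12.16)   # 1080 solo
        t_both = to_seconds(1, 50.8)    # both cards together

        # Ideal combined time if both cards shared the work perfectly:
        ideal = 1 / (1 / t_980ti + 1 / t_1080)
        print(f"ideal: {ideal:.1f}s, actual: {t_both:.1f}s, "
              f"efficiency: {ideal / t_both:.0%}")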

  • linvanchene Posts: 1,303
    edited November 2016

    @ SLI Bridge

    BeeMKay said:

    SickleYield's test = 1 minute 43 seconds
    pc specs 
    CHRONOS-pro
    Corsair 350D
    Processors: Intel Core i7 6700K Quad-Core 4.0GHz (4.2GHz TurboBoost)
    Motherboard: ASUS Maximus VIII Gene
     Windows 10 Pro
    Graphic Cards: Dual 8GB NVIDIA GTX 1080

    Is that more than one card? The time seems rather short, i.e. compared to the 1080 right below your entry.

    Dual means two 1080s

    Ah, thanks Richard. I thought it was some part of the name.

    yep and I had them in SLI too

    Update / Edit:

    As far as I could gather from the thread linked below, it is suggested to turn SLI off for Iray as well.

    https://www.daz3d.com/forums/discussion/54908/sli-and-iray

    Disabling SLI may not make renders faster or slower, but it can prevent conflicts that may even result in crashes.

     

    Side Note:

    -> At least for OctaneRender it is suggested not to use SLI for rendering.

    You can still install the SLI bridge if you want to use it for gaming and then manually disable SLI in the Nvidia control panel for rendering.

    - - -

    @ Test results:

    Win 10 Pro 64bit | Rendering: 2 x Asus GTX 1080 STRIX | Display: Asus GTX TITAN | Intel Core i7 5820K | ASUS X99-E WS | 64 GB RAM

    -> OptiX Prime Acceleration OFF: 2 minutes 57 seconds

    -> OptiX Prime Acceleration ON: 1 minute 29 seconds

    - - -

    Side Note 2:

    Activating the display card for rendering as well improved the time further: 1 minute 8 seconds.

    -> Activating the display card for rendering is an option if the scene still fits in VRAM and you are not going to work on anything else while rendering.
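
    For anyone tallying these numbers, a small Python sketch that turns the times above into speedup factors (values from this post):

        # OptiX on/off and display-card gains, times from the post above.
        def secs(m, s):
            return m * 60 + s

        optix_off = secs(2, 57)      # OptiX Prime Acceleration OFF
        optix_on = secs(1, 29)       # OptiX Prime Acceleration ON
        with_display = secs(1, 8)    # TITAN display card added as well

        print(f"OptiX speedup: {optix_off / optix_on:.2f}x")       # ~1.99x
        print(f"extra card gain: {optix_on / with_display:.2f}x")  # ~1.31x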

    [Attachments: Optix OFF - 2 min 57 sec.png and Optix ON - 1 min 29 sec.png, both 1920 x 1080]
  • Mattymanx Posts: 6,873
    L'Adair said:
    I found it interesting that the latest Beta is significantly slower than the 4.9 Release when rendering in CPU Only mode.

    That's odd, because in this thread - http://www.daz3d.com/forums/discussion/121881/render-times - someone stated they do CPU only and have seen a speed increase on their machine running the new beta.

  • Stinger Posts: 296
    edited November 2016

    Took 3 minutes and 2 seconds on my rig with a single GTX 980 Ti, 16 gigs RAM, AMD FX 8350, Win10 Pro, GPU rendering only with OptiX Prime on.

    CPU, GPU, and OP all on, 3 minutes, 22 seconds. About 10% longer.  Looks like my CPU is the bottleneck. :(
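
    A quick Python check of that percentage, using the two times from this post:

        # Checking the "about 10% longer" figure from the times above.
        gpu_only = 3 * 60 + 2     # GPU + OptiX Prime only
        with_cpu = 3 * 60 + 22    # CPU, GPU, and OptiX Prime all on
        print(f"slowdown from adding CPU: {with_cpu / gpu_only - 1:.0%}")  # ~11%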

    Hoping to add another 980ti soon !

  • @ SLI Bridge

    linvanchene said:

    Disabling SLI may not make renders faster or slower, but it can prevent conflicts that may even result in crashes.

    Someone else mentioned SLI crashes; I tried with SLI disabled and it made no difference, and in SLI it has been very stable. I did a scene with over 30 characters and it didn't crash. I've just had other problems, unrelated to the cards, with the beta that have also crept over to the current full build.

  • Mattymanx said:
    L'Adair said:
    I found it interesting that the latest Beta is significantly slower than the 4.9 Release when rendering in CPU Only mode.

    That's odd, because in this thread - http://www.daz3d.com/forums/discussion/121881/render-times - someone stated they do CPU only and have seen a speed increase on their machine running the new beta.

    Well, the scene I did with 30+ characters took about the same time rendering CPU-only on the current full build as it did with my cards on the beta. With not so many characters it is very fast with the cards, just a few minutes, though it does seem to depend on other things too, like which textures are used, lights, and other items and factors.

  • Don't visit here much, but I liked the Iray benchmark idea. My system has an Intel i7 5820K CPU, 32GB RAM, and an EVGA GTX 1080 Founders Edition.

    My results:

           GPU + CPU with OptiX: 2 min 42 sec

           GPU only with OptiX: 3 min (so the CPU doesn't contribute much)

           GPU only without OptiX: 6 min 18 sec (so OptiX really helps)

           CPU only with OptiX: after 10 min it was still at 86%, so I cut it. All 6 cores were running at 100%, so they were all used.

    I guess GPU with OptiX wins.

  • namffuak Posts: 4,040

    My laptop - a Lenovo Y50 with an Nvidia 960M (640 CUDA cores, core clock 1188.9 MHz, memory clock 1252.8 MHz).

    Without OptiX, GPU only - 25 minutes 27.7 seconds

    With OptiX, GPU only - 12 minutes 2.2 seconds
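
    The OptiX gain on this mobile chip works out like so (a quick Python check using the two times above):

        # OptiX on vs. off on the 960M, times from above.
        without_optix = 25 * 60 + 27.7
        with_optix = 12 * 60 + 2.2
        print(f"OptiX speedup: {without_optix / with_optix:.2f}x")  # ~2.12x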

  • Back in July, I posted this to DA forum under SY benchmark:

     

    "I just finished building my new system: Intel Core i7-5960X @ 3.00GHz (8 cores), 32GB DDR4 memory, and a Titan X GPU with 12GB VRAM.

    I haven't OC'd the system yet, but I built it liquid-cooled for the specific purpose of doing just that and tuning it for rendering. As I am able to afford it, more Titan X cards will be put in the system. :)

    GPU Only: 3 minutes 44 seconds

    CPU + GPU: 3 minutes 14 seconds

    CPU Only: 14 minutes 50 seconds.

    Wasn't able to figure out OC yet, but will do what I can. From reading the comments below I tried OptiX Prime Acceleration, and the CPU+GPU render time dropped to less than a minute to reach 90%; 2 min 16 seconds to complete!

    My MacBook Pro took over 20 minutes to complete."

    Since then, I still haven't figured out OCing my CPU/GPU yet ;) but that's mainly because I am not sure I need to, or should. I saw the MEC4D YouTube video where 4x Titan X was near realtime, and my goal is to find a way NOT to spend $5K on GPUs, but to see if two additional GPUs would get me to reasonable render times for 4K UHD animation purposes. At this point it seems that some folks are seeing render times very similar to this Maxwell Titan X using a GTX 1070 8GB GPU. Some things I learned on the path to building this rig:

    • SickleYield is a rock star. I have learned a whole new level of rendering skillz from your helpful feedback, thanks bunches.
    • MEC4D is the queen of hardware builds; you should see her monster 12,000-core rig in action.
    • Animation needs tons of CUDA. My current rig, animating nonstop at 4K res, will take 30 days to do a 5 min movie (see the rough per-frame math after this list).
    • Apple sucks. Blows chunks. They abandoned OpenCL support for a very long time and only recently may have fixed it, but there was nothing worse than sinking a ton of money into a fully loaded trashcan Mac Pro and having it render only twice as fast as my MacBook Pro laptop. I sold it at a loss and built this rig, which blows the Mac Pro out of the water.
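
    Here is the back-of-the-envelope math behind that 30-day figure, as a quick Python sketch (the 30 fps frame rate is an assumption; the post does not state one):

        # Rough per-frame budget for a 5-minute 4K animation.
        fps = 30                       # assumed frame rate
        frames = 5 * 60 * fps          # 9,000 frames in a 5-minute movie
        total_minutes = 30 * 24 * 60   # 30 days of nonstop rendering
        print(f"{frames} frames, ~{total_minutes / frames:.1f} min per frame")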

    Some things I still need to confirm:

    • Can I mix Pascal and Maxwell for Iray rendering? If so, adding two GTX 1070s to my rig might save $$$ and get me to my goal of a 5-minute 4K movie completing in less than 24 hours of rendering.
    • Anyone tested the 1080 yet?
  • Redid the test with my desktop: EVGA Titan X 12GB (Maxwell) running in a custom rig, i7-5960X @ 3GHz (8-core), and 32GB of RAM:

    Total Rendering Time: 2 minutes 36.76 seconds
    CUDA device 0 (GeForce GTX TITAN X):      4270 iterations, 30.570s init, 121.656s render
    CPU:      730 iterations, 15.400s init, 136.654s render
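
    Those per-device log lines translate directly into throughput; a quick Python sketch using the numbers above:

        # Per-device throughput from the Iray log above.
        gpu_iters, gpu_time = 4270, 121.656   # GeForce GTX TITAN X
        cpu_iters, cpu_time = 730, 136.654    # i7-5960X
        print(f"GPU: {gpu_iters / gpu_time:.1f} it/s, "
              f"CPU: {cpu_iters / cpu_time:.1f} it/s")
        print(f"GPU share of iterations: {gpu_iters / (gpu_iters + cpu_iters):.0%}")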

    My Surface Book, with an i7-6600U 2.6GHz (quad core), an Intel 540HD integrated GPU plus a dedicated GPU that is a custom, wonky version of the NVIDIA 965M (2GB), and 16GB of RAM:

    Stopped at 1049 iterations; 17 minutes 33 seconds. I realized that DAZ was not even using the NVIDIA GPU, which explains why it was almost as sucky as my earlier test on my MacBook Pro. I checked and discovered that somehow Microsoft managed to lose my NVIDIA GeForce Experience control panels during one of who knows which updates, so I re-installed GeForce Experience and redid the test after making sure that DAZ would launch with the NVIDIA GPU: this time, 15 min and 41 seconds.

  • Recently I installed a GTX 1050 Ti to replace my vanilla GTX 750. I found some great results with the benchmark:


    Card: Nvidia GeForce GTX 1050 Ti, 4 GB GDDR5 RAM - Computer: AMD FX 8320 with 16 GB RAM

    Iterations: 4100+ - Time: 12 min 45 sec


    I guess I have made a wise move.

  • GumpOtaku, for the watts of the 4GB 1050 Ti, that is a very good deal for the performance. With 4GB you can also work with a good amount of a scene as well, unlike cards with less memory.

  • GumpOtaku, for the watts of the 4GB 1050 Ti, that is a very good deal for the performance. With 4GB you can also work with a good amount of a scene as well, unlike cards with less memory.

    All this for roughly $147. It is comforting to see I have made the right call. Thanks again, ZDG!

  • Does anyone have the GTX 1070? I want to compare its performance with the Maxwell 9xx-series cards.

  • ZarconDeeGrissom Posts: 5,412
    edited November 2016
    artphobe said:

    Does anyone have the GTX 1070? I want to compare its performance with the Maxwell 9xx-series cards.

    It would be nice to see, even though I'm sure that "GPU Boost" vs card temp would create some rather interesting clock variation and range in render times. I was going to get an 8GB 1070, though this month that is not going to happen (I just spent over two hundred on heat). The GTX 960 isn't quite enough to keep my place warm during the colder months, lol.

    jerham said:

    GPU: MSI GTX 1070 FE 8GB, CPU: i7 6700K (not overclocked), Memory: 32 GB

    Starter scene render times:

    GPU Only (OptiX on), Total Rendering Time: 3 minutes 14.45 seconds

    GPU + CPU (OptiX on): Total Rendering Time: 3 minutes 14.48 seconds

     

    Driving a display and running Iray on a single card probably does not have much impact on render times, as long as you're not doing too many other things at the same time. So the times posted by jerham for GPU-only are probably really close to the kind of dedicated-crunch-card GPU-only tests I did on some more affordable cards (give or take a second or two for room temps).

    I recall seeing posts further back with the GTX 980 and GTX 970 running in the five- to two-minute range, roughly. Though that may involve other factors as well (different Iray / Daz Studio versions). Comparing them to other posts is at best a very rough guesstimate from what I've seen, though not a blind shot in the dark. It looks like the 8GB vs 6GB memory is the biggest difference between the GTX 980 Ti (2m 24.91s) and the GTX 1070 (3m 14.45s), from looking at a few posts.

    Titan-What? lol. I do find it interesting that the times for the posted GTX 980 cards are strikingly close to the single Titan X posts. Is there that much of a diminishing performance return on the cards that pull a hundred more watts than the GTX 980, or is it the older Titan card results that are being posted (Titan XP vs Titan XM)? Same amount of memory, same number of CUDA cores; they are quite similar except for the cooler aesthetics, clocks, and the generation of chip. It almost looks like there is no Iray advantage of the Titan XP over the older Titan XM at all.

    Post edited by ZarconDeeGrissom on
  • surreal Posts: 152

    Windows 10,
    Nvidia driver 375.70,
    Daz3D 4.9.3.128 64 public build,
    OptiX on

    Quadro M2000M 4GB, Xeon 1505v5, 32GB RAM
      1xGPU   12min 22sec
      1xGPU+1xCPU  10min 30sec

    GTX 960 2GB, Xeon E5440, 28GB RAM
      1xGPU   7min 15sec
      2xGPU   4min 21sec
      2xGPU+2xCPU  4min 19sec

    Titan X 12GB, GTX 1080 8GB, Xeon E5-2690v3, 128GB RAM
      1xTitanX   2min 35sec
      1xGTX1080  3min 4sec
      2xTitanX   1min 47sec
      2xTX+1xGTX  1min 18sec
      2xTX+1xGTX+2xCPU  1min 12sec

    All take between 25 and 30 seconds to start rendering (to get to 1% on the progress bar).

    Driving the display while rendering (interactive usage) had almost no effect on any of the above.
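
    Worth noting when comparing the multi-GPU lines above: that fixed startup time eats into the apparent scaling. A minimal Python sketch (assuming a flat 27 s of init, within the 25-30 s range quoted, and using the Titan X times):

        # How fixed startup overhead skews apparent multi-GPU scaling.
        init = 27                # assumed flat init time, in seconds
        one_tx = 2 * 60 + 35     # 1x Titan X
        two_tx = 1 * 60 + 47     # 2x Titan X

        raw = one_tx / two_tx
        corrected = (one_tx - init) / (two_tx - init)
        print(f"raw speedup: {raw:.2f}x, init-corrected: {corrected:.2f}x")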

  • ZarconDeeGrissom Posts: 5,412
    edited November 2016

    That is about similar to what I've seen in some Cinebench R15 listings. The 250-watt GTX 980 Ti (not the regular 980) and the 250-watt Titan XM/XP (GM200 or GP102?) perform about the same, though the Titan has more memory for working with larger scenes. It's more interesting to see the GTX 1080 not do much better at all than the lower-watt GTX 1070 in Iray and Cinebench (163 vs 169 in R15), so I'm not sure what is going on there (maybe drivers, or GPU architecture limitations). Given the difference in price and the watt-hog consumption between the 1080 and 1070, shouldn't there be a drastic difference in performance?

    I don't totally trust that graph, as I didn't think the older GTX 980 would be that much better than the 1080; shouldn't it be the other way around?

    [Attachment: ProClockers_Cr15_Gigabyte_GTX_1070_G1_Gaming_ForRefOnly_Hmmm_01.png, 1300 x 1066]
  • surreal Posts: 152

    Price is definitely not a good indicator of performance.

    My 1080s are ASUS 1080 Turbos. I chose them specifically because of their lower maximum wattage rating, as the case I was putting them in is only a midi tower with a 1300W non-server-grade power supply, and it was already pretty full.

    Don't get very much time to play, but I have noticed in the logs that the 1080 does occasionally show better performance than the Titan X. It appears to depend on the content (or size) of the scene being rendered.

  • hphoenix Posts: 1,335

    That is about similar to what I've seen in some Cinebench R15 listings. The 250-watt GTX 980 Ti (not the regular 980) and the 250-watt Titan XM/XP (GM200 or GP102?) perform about the same, though the Titan has more memory for working with larger scenes. It's more interesting to see the GTX 1080 not do much better at all than the lower-watt GTX 1070 in Iray and Cinebench (163 vs 169 in R15), so I'm not sure what is going on there (maybe drivers, or GPU architecture limitations). Given the difference in price and the watt-hog consumption between the 1080 and 1070, shouldn't there be a drastic difference in performance?

    I don't totally trust that graph, as I didn't think the older GTX 980 would be that much better than the 1080; shouldn't it be the other way around?

    Don't trust the Cinebench R15 benchmark. You'll notice the 6th line is a 2GB 760 GTX. They're showing it outperforming a 1080 GTX AND two (!) 1070 GTXs in SLI. Cinebench has a LOT of problems comparing cards, as it quickly becomes CPU- and memory/bandwidth-bound. Nor has it been updated to use some of the benefits of the Pascal architecture. Nor can it even properly utilize an extra card (the single 1070 GTX is almost IDENTICAL in result to the SLI pair of 1070s).

  • prixat Posts: 1,585

    Those are OpenGL benchmarks, all about base clock frequencies, turbo mode, and memory bandwidth. They won't give much insight into processing power.

  • ZarconDeeGrissom Posts: 5,412
    edited November 2016

    (EDIT) outrider42 below nailed it, I think. A beta with debugging code still on would do it. I did not know it was a beta Iray driver being tested. (End of edit)

    lol, that is kind of what I thought I was hinting at. How can the Maxwell cards do just as well as the newer Pascal cards in both Iray and C4D, unless there is some major lack of driver or hardware capability for the newer Pascal cards? It's not just C4D: a single Titan X and a single GTX 980 Ti both do this Iray bench in 2 minutes flat, give or take a few seconds (from "GPU only" posts in this thread). That does NOT add up to the hype of the new Pascal chips from Nvidia. Also, a single "GPU only" GTX 1080 running in at 3 minutes doesn't look much better than some lesser cards listed in this thread that deliver almost identical "GPU only" render times (6GB GTX 780, GTX 1070, etc.). Hmmm.

    For games, there are many things the new chips have that the old ones don't. However, when it comes to CUDA applications, there are not too many different ways to do a simple 1 + 1 calculation, so I don't totally buy the angle that Iray is simply not using newer instructions the older cards lack (unlike how I feel about Vulkan and other newer game engines). It's more believable to me that the CUDA driver for Pascal is not as good as the Maxwell CUDA driver, or that the Pascal ALUs are not significantly better than the Maxwell ALUs.

    SLI? Forget it; Nvidia is essentially ditching SLI support (mostly), and it does not appear to do anything for Iray anyway, lol. SLI will only benefit the game engines that support it, and the rest is up to the Windows driver from here on out. Luke at Linus had a funny vid covering most of that from a gaming perspective...

    http://www.youtube.com/watch?v=A91BPapLK38

    Your guess as to what mode Iray and C4D use for CUDA (OpenGL?) rendering on multiple cards is as good as mine.

  • outrider42 Posts: 3,679

    It is important to note that Daz support for Pascal is only in the beta. It may not be very well optimized as a result.

  • ZarconDeeGrissom Posts: 5,412
    edited November 2016

    A beta driver; that totally explains why the GTX 1080 and 1070 post identical results so far. If that Titan X of paynn_4ba54e39ef is a Pascal instead of a Maxwell, that would also add up, sort of (there is almost no difference between the old and new Titan X cards aside from the clocks).

    http://www.youtube.com/watch?v=ftJr8FeohSU

  • New MSI laptop, GS63VR Stealth Pro 4K w/ 1060 6GB - 4:23

  • hphoenix Posts: 1,335

    A beta driver; that totally explains why the GTX 1080 and 1070 post identical results so far. If that Titan X of paynn_4ba54e39ef is a Pascal instead of a Maxwell, that would also add up, sort of (there is almost no difference between the old and new Titan X cards aside from the clocks).

    http://www.youtube.com/watch?v=ftJr8FeohSU

    It's not just the beta Iray release from nVidia (the GeForce driver itself isn't beta); it's also that software that can truly utilize the power of the Pascal architectural changes and extensions (Ansel, SMP, etc.) hasn't really been implemented yet. None of the existing benchmarks, 3D software, and games have been rewritten to take advantage of those. So you won't see anywhere NEAR what they COULD do for a while yet.

    Also, many of the big touted rates are based on using SMP for VR purposes. 90 fps in VR is 180 fps on pre-Pascal cards, so they call that 'twice' the performance. However, it does this with the new SMP features, which basically let the card render the same frame from another viewpoint (actually up to 16) for almost FREE (computationally). So if software ISN'T using that feature in some way, the performance gains are nowhere near as dramatic. But it makes for impressive stats during demos and such.....

    So don't expect really BIG gains until DAZ/Steam/etc. figure out how to write their rendering code to utilize the benefits of SMP to get more performance OUTSIDE of VR applications......which I am sure devs are experimenting with already. But it's going to take a while before we see much benefit even in the VR apps, since most aren't written with SMP acceleration yet.....

     

    (SMP - "simultaneous multi-projection", the name nVidia gave to Pascal's ability to generate up to 16 views of the same scene 'frame' at the same time with almost no additional computational cost. VR uses it not only to get both eye views for free, but also to do barrel correction at the same time. They use 8 views (4 per eye) and it costs them almost nothing in framerate. THAT is why Pascal posts such impressive 'theoretical' rendering specs. Now the non-VR programmers need to figure out how to use it to get gains in NON-VR rendering.....)
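
    To make that framerate claim concrete, a tiny Python sketch of the bookkeeping described above (view counts as stated in the parenthetical; this is arithmetic, not a benchmark):

        # SMP bookkeeping: at 90 fps in VR, a pre-Pascal card renders each
        # eye separately (2 full renders per frame); Pascal's SMP emits the
        # extra projections at almost no additional cost.
        fps = 90
        pre_pascal = fps * 2   # 180 full renders/s, the '180 fps' figure
        smp_views = fps * 8    # 8 projections per frame (4 per eye)
        print(f"pre-Pascal: {pre_pascal} full renders/s for {fps} fps VR")
        print(f"Pascal SMP: {smp_views} projected views/s from {fps} geometry passes/s")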

     

  • Can someone post the time for the DAZ3D benchmark rendering file for a single Xeon E5-2690 system and a dual Xeon E5-2690 system, please? (CPU rendering)

  • hphoenix Posts: 1,335

    Okay, latest results. I got a second 1080 GTX (both are Asus ROG Strix, non-OC versions). Both are running at stock clocks (1600 core, 1850 boost, 10 GHz memory). On all tests, I start DS fresh, load the scene, and render. All use the 'speed' optimization setting.

     

    1080 GTX (x2), OptiX off - 1m 57.5s

    1080 GTX (x2), OptiX on - 1m 38.89s

    1080 GTX (x2) + CPU, OptiX on - 2m 31.75s

     

  • SimonJM Posts: 5,942

    Just 'for fun': a GTX 680M (driver version 368.22) in my laptop, using DS 4.9.2.70:

    Photoreal mode: 22 minutes 25.30 seconds; it ran thermally capped the whole time (averaging 89°C).

    Tried interactive mode, but the render did not seem to want to come out correctly (with or without OptiX), rendering black apart from a pale grey sphere in the top left.

  • SimonJM Posts: 5,942
    edited December 2016

    Bit the bullet and installed the DS beta on the new computer:

    DS4.9.3.128, nVidia driver: 375.95

    GTX1050 (display card): Total Rendering Time: 12 minutes 18.64 seconds

    TITAN X (Pascal): Total Rendering Time: 3 minutes 44.84 seconds

    Both: Total Rendering Time: 2 minutes 59.26 seconds

  • hphoenix Posts: 1,335
    edited December 2016
    SimonJM said:

    Bit the bullet and installed the DS beta on the new computer:

    DS4.9.3.128, nVidia driver: 375.95

    GTX1050 (display card): Total Rendering Time: 12 minutes 18.64 seconds

    TITAN X (Pascal): Total Rendering Time: 3 minutes 44.84 seconds

    Okay, that time for the Titan XP seems a bit high. My single 1080 GTX did the benchmark in 2m 40s (stock clocks, no OC). You may want to check your fan profile; if it's getting hot and throttling, I could understand that long a time.....but it should be closer to 2m 25s for a Titan XP. And make sure the CPU isn't checked; that slows down fast cards.....

     
