Iray Starter Scene: Post Your Benchmarks!


Comments

  • jamest Posts: 19
    edited October 2017

     

    Visuimag said:

    I'm an Intel nerd, so emotionally, I'd go with the 8700 (especially as benchmarks suggest it'll be better for gaming). As far as Iray performance, however, the 1070 is what will benefit your system the most.

     

    So, a CPU upgrade today won't give me a relevant performance gain? Should I keep my i7 3770 for some more time and save money?

  • Visuimag Posts: 551
    jamest said:

     

    Visuimag said:

    I'm an Intel nerd, so emotionally, I'd go with the 8700 (especially as benchmarks suggest it'll be better for gaming). As far as Iray performance, however, the 1070 is what will benefit your system the most.

     

    So, a CPU upgrade today won't give me a relevant performance gain? Should I keep my i7 3770 for some more time and save money?

    Not that a CPU upgrade won't help, but not in the same way that an NVIDIA GPU will. The 1070 is what's going to really rock the boat for Iray renders. Me, however, I'd still get the 8700K, sell the 1070, and pick up the 1070 Ti (though I'm not sure how your budget looks).

  • jamest Posts: 19
    Visuimag said:
    jamest said:
    Not that a CPU upgrade won't help, but not in the same way that an NVIDIA GPU will. The 1070 is what's going to really rock the boat for Iray renders. Me, however, I'd still get the 8700K, sell the 1070, and pick up the 1070 Ti (though I'm not sure how your budget looks).

    Unfortunately, my budget doesn't allow me to change both now.

    The main question for me is: if I change my i7 3770, will I get more rendering performance out of my 1070?

     

  • No, the difference will be negligible. For tangible performance gains, you'll need to either add or change your graphics card.

  • jamest Posts: 19

    Thank you for the help!

  • Visuimag Posts: 551
    edited October 2017
    jamest said:
    Visuimag said:
    jamest said:
    Not that a CPU upgrade won't help, but not in the same way that an NVIDIA GPU will. The 1070 is what's going to really rock the boat for Iray renders. Me, however, I'd still get the 8700K, sell the 1070, and pick up the 1070 Ti (though I'm not sure how your budget looks).

    Unfortunately, my budget doesn't allow me to change both now.

    The main question for me is: if I change my i7 3770, will I get more rendering performance out of my 1070?

     

    Yeah Jack nailed it.

  • prixat Posts: 1,585
    edited October 2017

    A couple of things I'd be interested in...

    While PCIe lanes primarily affect initial loading times, there is also a DMI/SATA bottleneck that's encountered every time the screen refreshes, and especially when that r.png gets written to the temp folder.

    If you have a 10 minute render lying around, can you compare it after changing the "min. samples update" from the default of 1 to the maximum of 100?

    Do you get any time saving?

    [Attachment: minsamples.jpg]
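
    (Not Daz-specific, but if you want a rough feel for how expensive those per-update writes can be on your own drive, here is a minimal Python sketch that just times writing a dummy file to the temp folder. The 5 MB payload and 20 updates are assumed placeholders, not measurements of what Iray actually writes.)

    ```python
    import os
    import tempfile
    import time

    # Hypothetical stand-in for the r.png that Iray writes on each progress update.
    # 5 MB is an assumed size; substitute the real size of your render output.
    PAYLOAD = os.urandom(5 * 1024 * 1024)
    UPDATES = 20  # pretend there are 20 progress updates

    tmp_path = os.path.join(tempfile.gettempdir(), "iray_write_test.bin")

    start = time.perf_counter()
    for _ in range(UPDATES):
        with open(tmp_path, "wb") as f:
            f.write(PAYLOAD)
            f.flush()
            os.fsync(f.fileno())  # push past the OS write cache to the drive itself
    elapsed = time.perf_counter() - start

    os.remove(tmp_path)
    print(f"{UPDATES} writes took {elapsed:.2f} s "
          f"({elapsed / UPDATES * 1000:.1f} ms per simulated update)")
    ```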
  • prixat Posts: 1,585
    edited October 2017

    The other thing is a benchmark of the simulation speeds.

    I used the "simple sheet" demo scene. Resolution doesn't matter to the simulation, but the GPU doing other things, like 'Iray Viewport mode', obviously will.
    My fastest time was with the viewport set to wireframe and the camera pointing at an empty part of the scene. lol

    750 ti - 1m 5s

  • Angel Posts: 1,204
    edited October 2017

     

    OK, so here are my results with system specs - the ones that matter most, anyway.

    i7-5820K 3.3GHz 6-Core

    64GB DDR4 RAM 2133MHz

    GeForce GTX 1080 Ti SC Black Edition 11GB

    ---

    I included the full print screen so you can see the GPU Temperature as well as the time and size of the image.

    https://www.daz3d.com/forums/uploads/FileUpload/0e/efdebf3595658f2c22a913d2b989ed.png

    OK, so in contrast to my previous bench, this is my new benchmark using both the GTX 1080 Ti and a GTX 1070.

    It cut about 25 seconds off the previous benchmark. Considering it's a 2 minute render, cutting off 25 seconds is quite a lot.

    So, this verifies that mixing 1080 Tis with 1070s does work, and does improve benchmark times.

    [Attachment: Screenshot (2).png]
  • tj_1ca9500b Posts: 2,047

    OK, for those who might be curious how a dual 1080 laptop does with this test:

    MSI GT83VR, with i7-6820HK, dual 1080 (SLI) (8 GB each):

    • Optix enabled (yes this helps a lot)
    • SLI disabled,
    • both GPUs checked, CPU unchecked (the CPU doesn't really reduce the render time much at all with the dual 1080s in play).
    • Standard clocks.

    First pass: 1:40

    Second pass: 1:25

  • Angel Posts: 1,204

    tj_1ca9500b said:

    OK, for those who might be curious how a dual 1080 laptop does with this test:

    MSI GT83VR, with i7-6820HK, dual 1080 (SLI) (8 GB each):

    • Optix enabled (yes this helps a lot)
    • SLI disabled,
    • both GPUs checked, CPU unchecked (the CPU doesn't really reduce the render time much at all with the dual 1080s in play).
    • Standard clocks.

    First pass: 1:40

    Second pass: 1:25

    With CPU enabled I've actually noticed a drop in render speed. Not sure what the deal is, but yeah...

  • Angel Posts: 1,204
    edited October 2017

    Basically, if I have all 3 checked, it goes from 1:25 to about 1:30-1:35. I'm no expert, but if I were to diagnose it with my limited knowledge, I would say that somewhere along the line it's rendering old information instead of new. Like the GPUs are doing their job and the information is sent out, and then the CPU takes the image the GPUs rendered and renders the same image again, so it just slows the process down while adding no new information. I can tell from HWMonitor that the CPU is working - it is doing something, because the cores heat up - but whatever it's doing has a null effect, which leads me to believe it's re-rendering an iteration that's already been rendered.

  • ebergerly Posts: 3,255

     

    Angel said:

    OK, so here are my results with system specs - the ones that matter most, anyway.

    i7-5820K 3.3GHz 6-Core

    64GB DDR4 RAM 2133MHz

    GeForce GTX 1080 Ti SC Black Edition 11GB

    ---

    I included the full print screen so you can see the GPU Temperature as well as the time and size of the image.

    https://www.daz3d.com/forums/uploads/FileUpload/0e/efdebf3595658f2c22a913d2b989ed.png

    OK, so in contrast to my previous bench, this is my new benchmark using both the GTX 1080 Ti and a GTX 1070.

    It cut about 25 seconds off the previous benchmark. Considering it's a 2 minute render, cutting off 25 seconds is quite a lot.

    So, this verifies that mixing 1080 Tis with 1070s does work, and does improve benchmark times.

    Yes, with my new 1080 Ti together with a 1070, my render times improved by something like 60-65% over just a 1070 alone. Not sure if that matches your results... I got a bit confused trying to decode your two images.

  • dawnblade Posts: 1,723

    2 min. 4 sec. (Yay!) on my new rig with a 1080 Ti, 32 GB RAM, i7-7700K @ 4.20GHz, MSI Z270-A PRO MB, Windows 10 Pro. It reached 4,825 iterations at 95.06%. Probably would have taken all night on my previous PC.

    For this test I have both CPU and GPU checked under Photoreal Devices and Interactive Devices, and OptiX Prime Acceleration on. Being that this is my first Nvidia card, I'm not sure what settings to use, and I've got to leave so I can't run any more tests this afternoon. If someone who has a similar PC or is familiar enough with hardware can let me know if I should expect to get better results with different settings, that would be great. Otherwise I'll experiment and get back to you later this week.

    I'm still setting up the PC too so I haven't been able to play with it.

  • ebergerly Posts: 3,255
    edited October 2017

    dawnblade, that's exactly what others, including myself, have gotten with a 1080 Ti: 2 minutes. And about 3 minutes with a 1070.

    Below is a summary I made of results posted in this thread. 

    [Attachment: Benchmark.jpg]
  • dawnblade Posts: 1,723
    Thank you ebergerly! Great to know, and the summary is super helpful.
  • nlbs Posts: 12
    edited October 2017

    System: Windows 10 Pro 64-bit, Intel Core i7 2700K @ 4.3 GHz, 16GB DDR3 1600 MHz, NVIDIA GeForce GTX 1080 Ti (EVGA)

     

    Total Rendering Time with OptiX Prime Acceleration, GPU only: 1 minute 52.69 seconds

    Total Rendering Time with OptiX Prime Acceleration, GPU and CPU enabled: 1 minute 57.24 seconds

    Total Rendering Time without OptiX Prime Acceleration, GPU only: 3 minutes 11.18 seconds

     

  • dawnblade Posts: 1,723

    Nvidia GeForce GTX 1080 Ti, Win 10 Pro, 32GB DDR4 2400 RAM, i7-7700K @ 4.20GHz, MSI Z270-A PRO MB

    CPU and GPU, OptiX Prime Acceleration on: 2 min. 4 sec. 4,825 iterations at 95.06%.

    CPU and GPU, OptiX Prime Acceleration off: 3 min. 20 sec. 4,825 iterations at 95.07%.

    GPU only, OptiX Prime Acceleration on: 2 min. 4 sec. 4,816 iterations at 95.07%.

    GPU only, OptiX Prime Acceleration off: 3 min. 23 sec. 4,798 iterations at 95.02%.

     

  • TheKD Posts: 2,674

    Just my 1070: Total Rendering Time: 2 minutes 45.53 seconds

    1070 + 960: 2 minutes 2.17 seconds

    One thing I noticed is that my 1070 is now running ~6°C hotter (only 61°C, so not a huge issue), probably due to more restricted airflow and absorbing some heat from the 960 under it.

     

    Hoping the gain is a bit more apparent on a real scene render lol.

     

  • Angel Posts: 1,204
    edited October 2017
    ebergerly said:

     

    Angel said:

    OK, so here are my results with system specs - the ones that matter most, anyway.

    i7-5820K 3.3GHz 6-Core

    64GB DDR4 RAM 2133MHz

    GeForce GTX 1080 Ti SC Black Edition 11GB

    ---

    I included the full print screen so you can see the GPU Temperature as well as the time and size of the image.

    https://www.daz3d.com/forums/uploads/FileUpload/0e/efdebf3595658f2c22a913d2b989ed.png

    OK, so in contrast to my previous bench, this is my new benchmark using both the GTX 1080 Ti and a GTX 1070.

    It cut about 25 seconds off the previous benchmark. Considering it's a 2 minute render, cutting off 25 seconds is quite a lot.

    So, this verifies that mixing 1080 Tis with 1070s does work, and does improve benchmark times.

    Yes, with my new 1080 Ti together with a 1070, my render times improved by something like 60-65% over just a 1070 alone. Not sure if that matches your results... I got a bit confused trying to decode your two images.

    Keep an eye on the temperature, though. Mine was getting pretty warm...

    I've decided to unplug it for fear of destroying the 1070. I'll just save up some money, buy another 1080 Ti, and pair them properly.

    You can track your hardware temperatures with this free program: https://www.cpuid.com/softwares/hwmonitor.html

    Anything above 90°C is unhealthy. Keep that in mind. If you see things reaching 90°C or higher, you're asking for hardware failure; the laws of physics don't leave much margin, since many plastics start to soften around 100°C.
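
    (If you'd rather watch temperatures from a script than from HWMonitor's window, here is a minimal Python sketch that polls NVIDIA's nvidia-smi tool, which ships with the driver; the 90°C threshold just mirrors the warning above, and the poll interval is an arbitrary choice.)

    ```python
    import subprocess
    import time

    LIMIT_C = 90       # the "unhealthy" threshold mentioned above
    POLL_SECONDS = 5   # how often to sample; stop the script with Ctrl+C

    while True:
        # nvidia-smi ships with the NVIDIA driver; this query prints one line
        # per GPU, e.g. "GeForce GTX 1080 Ti, 63"
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,temperature.gpu", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.strip().splitlines():
            name, temp = (part.strip() for part in line.split(",", 1))
            warning = "  <-- too hot!" if int(temp) >= LIMIT_C else ""
            print(f"{name}: {temp} C{warning}")
        time.sleep(POLL_SECONDS)
    ```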

  • TheKD Posts: 2,674

    What card do you have? I have never seen mine go above 66°C so far, on either my 960 or 1070, during rendering or gaming. I usually get one with dual fans though, which probably helps some, along with my case fan setup.

     

  • Rake Posts: 1

    Some scenes go fast, others not so much....

    i7-7700K 4 core overclocked to 4.66 GHz
    RAM: 32 GB
    Windows 10 64

    GTX 1080 Ti Founders Edition
    GPU Memory 11.0 GB

    Using SickleYield's reference scene:

    FIRST RUN:
    GPU Only + Optix:
    2017-11-20 18:00:12.898 Total Rendering Time: 2 minutes 10.11 seconds

    GPU + CPU + Optix:
    2017-11-20 18:14:29.229 Total Rendering Time: 2 minutes 11.41 seconds 

    SECOND RUN:
    GPU Only + Optix:
    2017-11-20 18:55:20.349 Total Rendering Time: 2 minutes 9.92 seconds

    GPU + CPU + Optix:
    2017-11-20 19:00:45.443 Total Rendering Time: 2 minutes 17.41 seconds
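
    (Since every figure in this thread comes from the same "Total Rendering Time" line in the Daz Studio log, here is a small Python sketch that pulls those lines out of a saved copy of the log and converts them to seconds for easier comparison. The log file name is an assumption; point it at wherever you saved your own log.)

    ```python
    import re

    LOG_FILE = "daz_log.txt"  # assumed name; point this at your saved Daz Studio log

    # Matches e.g. "Total Rendering Time: 2 minutes 10.11 seconds"
    # and also the short form "Total Rendering Time: 51.81 seconds"
    pattern = re.compile(r"Total Rendering Time:\s*(?:(\d+)\s*minutes?\s*)?([\d.]+)\s*seconds")

    with open(LOG_FILE, encoding="utf-8", errors="replace") as f:
        for line in f:
            match = pattern.search(line)
            if match:
                minutes = int(match.group(1) or 0)
                seconds = float(match.group(2))
                print(f"{line.strip()}  ->  {minutes * 60 + seconds:.2f} s total")
    ```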

     

  • nothingmore Posts: 24
    edited November 2017

    i9-7900X @ 4.3 GHz, Win 10

    Titan Xp + GTX 1080 Ti Founders Edition + GTX 1080 Ti EVGA FTW3:

    Optix, GPU only: Total Rendering Time: 51.81 seconds

    2nd pass with scene preloaded in vram: Total Rendering Time: 40.88 seconds

    Preload with CPU: Total Rendering Time: 40.89 seconds

    Titan Xp only: Total Rendering Time: 2 minutes 2.16 seconds

    1080 Ti FE only: Total Rendering Time: 2 minutes 14.51 seconds

    1080 Ti EVGA FTW3 only: Total Rendering Time: 2 minutes 12.11 seconds

  • JamesJAB Posts: 1,760

    I just upgraded the GPU in my laptop (Mobile Workstation)
    Went from an Nvidia Quadro K5000M 4GB (similar to the GTX 680M) to an Nvidia GeForce GTX 980M 8GB.

    Here are my benchmark results:
    Nvidia Geforce GTX 980M 8GB
    GPU only with OptixPrime enabled.
    5 minutes 19.55 seconds

    Held full boost clock speed @ 75°C for the whole render.

  • My shiny new 1080 Ti arrives tomorrow, so I thought I'd run the test on my 1050 Ti before I pull it out and retire it, and it came in at 8m 45s. I will post an update tomorrow once I'm up and running.

    (The 1050 Ti isn't a bad card for the price, to be fair, and I didn't mind the long rendering times -- that's what batch jobs were invented for -- but the 4GB of RAM was getting to be a severe limiting factor and I was having to make too many compromises and do too much post processing and compositing. *sigh* It seems like only yesterday I was happy with DKBTrace on my trusty 512KB A500, and here I am having stumped up the premium for the Ti because the 8GB on the vanilla 1080 could be cutting it a bit fine. I can't decide if "progress" is good or bad!)

  • Finally got my Threadripper setup 

    Asus Zenith Extreme motherboard

    TR4 1950x

    4 x GTX 1070 AERO 8G OC

    128GB RAM

    Having some problems with my Wacom, probably due to a Win10 update, so all I did was one render using the CPU and all 4 GPUs; the total time was 1 min 16 sec.

  • ebergerly Posts: 3,255
    edited December 2017

    Having some problems with my Wacom, probably due to a Win10 update, so all I did was one render using the CPU and all 4 GPUs; the total time was 1 min 16 sec.

    Wow, Robert, something doesn't look right. Earlier in this thread someone with 2 x GTX 1070s reported 1 minute 45 seconds, and you're getting 1 minute 16 seconds with four 1070s?

    And a 1080 Ti plus a 1070 was reported at 16 seconds less than you, at 1 minute. Hard to imagine that a 1080 Ti plus a 1070 outperforms 4 x 1070s...

    [Attachment: Iray Benchmark Price Performance.PNG]
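
    (To put rough numbers on that, here is a quick Python sketch that restates the times above as relative throughput, taking the 2 x 1070 result as the baseline; for a fixed benchmark scene, throughput scales with 1/time, so 4 x 1070 at 1:16 only works out to about 1.4x the 2 x 1070 result, where something closer to 2x would be expected.)

    ```python
    # Times reported above, in seconds (the 1080 Ti + 1070 figure is approximate).
    results = {
        "2 x GTX 1070":           1 * 60 + 45,   # 1:45
        "4 x GTX 1070":           1 * 60 + 16,   # 1:16
        "GTX 1080 Ti + GTX 1070": 1 * 60 + 0,    # ~1:00
    }

    baseline = results["2 x GTX 1070"]
    for setup, seconds in results.items():
        # For the same benchmark scene, relative throughput is the inverse time ratio.
        speedup = baseline / seconds
        print(f"{setup:24} {seconds:4d} s  ({speedup:.2f}x the 2 x 1070 throughput)")
    ```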
  • JamesJAB Posts: 1,760

    Finally got my Threadripper setup 

    Asus Zenith Extreme motherboard

    TR4 1950x

    4 x GTX 1070 AERO 8G OC

    128Gb Ram

    Having some problems with my Wacom, probably due to a Win10 update, so all I did was one render using the CPU and all 4 GPUs; the total time was 1 min 16 sec.

    You should try running the benchmark using just the GTX 1070 cards and see what your result is.
    I've noticed that sometimes with very powerful or multiple GPUs, adding the CPU into the render cluster can slow the whole thing down a little.

  • ebergerly said:

    Having some problems with my Wacom, probably due to a Win10 update, so all I did was one render using the CPU and all 4 GPUs; the total time was 1 min 16 sec.

    Wow, Robert, something doesn't look right. Earlier in this thread someone with 2 x GTX 1070s reported 1 minute 45 seconds, and you're getting 1 minute 16 seconds with four 1070s?

    And a 1080 Ti plus a 1070 was reported at 16 seconds less than you, at 1 minute. Hard to imagine that a 1080 Ti plus a 1070 outperforms 4 x 1070s...

    Yeah, my original setup with only two 1070s was only 30 seconds slower.

     

    JamesJAB said:

    You should try running the benchmark using just the GTX 1070 cards and see what your result is.
    I've noticed that sometimes with very powerful or multiple GPUs, adding the CPU into the render cluster can slow the whole thing down a little.

    I intend to do this, but I need to get the Wacom tablet problem figured out, as I found that it's actually opening two instances at once sometimes. Last night, after I posted, I went to shut down Studio and found that there was a second render running behind the one that I posted the times for.

  • OK, I took the CPU out of the loop; a straight GPU render came in at 50.78 seconds.
