Iray Starter Scene: Post Your Benchmarks!


Comments

  • bluejaunte Posts: 1,859

    Ah ok. Yeah that is a bit strange.

  • outrider42 Posts: 3,679

    Either way, it would be great to get more tests on this so we can see if this is a real trend. There are a lot of things at play here. Is it really just as simple as more CPU cores? We have a test with a 1080 + 1070 on a Ryzen 1700 that also gets beaten by this 1800X dual-1070 time. Not by much, but that still should not be the case. And it beats two 1080s. I noticed the dual 1080s are a laptop; are those downclocked from desktop variants? That may explain the difference as well.

    To avoid confusion, everyone needs to verify they are doing tests the same way when they post their results. Like the Mythbusters used to say, we need consistent results. (I miss that show.)

    • Use the Iray Preview mode for the viewport before rendering.
    • Verify OptiX on or off. Most tests are using it on.
    • Verify Optimization as Speed or Memory. Most tests are using speed.
    • State what CPU you have, even if it is not being used to render. This is important if we are going to find the trend for multiple GPUs.
    • It would probably be good to state the clockspeed of the GPU, since each model can have different clocks even with the same chip.
    • Users with lower spec hardware can cap the test at 1000, 2000, or 2500 iterations and factor the result against the base of 5000 iterations. This is the easiest and fairest way to scale the test (see the sketch below).
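
    For anyone scaling a capped run, a minimal sketch of that factoring (plain Python; it assumes render time grows roughly linearly with iterations once the scene is loaded, so scene-load overhead will skew very short runs):

    ```python
    def project_to_base(measured_seconds, capped_iterations, base_iterations=5000):
        """Project a capped benchmark run out to the 5000-iteration baseline.

        Assumes time per iteration is roughly constant once the scene is
        loaded; scene-load time makes very short runs look a bit slow.
        """
        return measured_seconds * (base_iterations / capped_iterations)

    # Example: 2500 iterations in 2 min 40 s (160 s) projects to ~320 s,
    # i.e. about 5 min 20 s for the full 5000 iterations.
    print(project_to_base(160.0, 2500))  # 320.0
    ```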

    And again, here is the link to this scene. You may need to right click the download button to download it, depending on how moody the Daz website is at the time. <.<

    https://www.daz3d.com/gallery/#images/526361/

    Also, @bluejaunte, your characters are probably the best models in the store, IMO. I have a few of them now. Whatever you are doing is working extremely well. You have quickly become a PA I watch new releases for.

  • bluejaunte Posts: 1,859
    edited March 2018

    outrider42 said:

    I noticed the dual 1080s are a laptop; are those downclocked from desktop variants? That may explain the difference as well.

    Oh, yes that may very well be the case. Laptops often get mobile GPU variants that are quite a bit slower and less power-hungry. Although two 1080s in a laptop? What kind of laptop even is that? Never heard of such a thing.

    Oh, and thanks for the compliments! :)

  • outrider42 Posts: 3,679

    I've seen a few dual 1080 laptops, they are beasts! I know the CUDA cores are not cut down. Pascal is not cut down for laptops, hence they no longer have the "M" moniker for mobility versions. A laptop 1080 is actually the full 1080 chip. But I do not know what the clock speeds are, and they may be clocked lower. The i7 in that laptop is cut down: it is a 4-core chip instead of a 6-core one, and it is clocked at only 2.7 GHz.

  • junk Posts: 1,212
    • Use the Iray Preview mode for the viewport before rendering.  YES
    • Verify OptiX on or off. Most tests are using it on.   ON
    • Verify Optimization as Speed or Memory. Most tests are using speed.   SPEED
    • State what CPU you have, even if it is not being used to render. This is important if we are going to find the trend for multiple GPUs.  EVGA 1080 Ti Hybrid, Gigabyte GeForce GTX 1070 G1 Gaming, Zotac GeForce GTX 1070 AMP Edition
    • It would probably be good to state the clockspeed of the GPU, since each model can have different clocks even with the same chip.  Overclocked only on the GTX 1080 Ti where specified in the last benchmark
    • Users with lower spec hardware can cap the test at 1000, 2000, or 2500 iterations and factor the result against the base of 5000 iterations. This is the easiest and most fair way to scale the test.

    outrider42, I have filled in the items above, since these seem to be the first benchmark experiments between a 1080 Ti and dual 1070s. I just ran the tests in reverse order and got basically the same results, within a few seconds on all three. So I stand by that: at least on my build, two GTX 1070s are faster than one overclocked, water-cooled 1080 Ti. The 1070s can run faster (overclocked), but in my box they are sandwiched so close to each other (probably about 1/4" apart). I used to run them at about +125 MHz on the GPU and +375 MHz on the memory.

  • outrider42 Posts: 3,679

    I believe you, and that you can repeat the results; that's even better. What are the clocks of the cards at stock? Many cards come factory overclocked, especially 3rd-party gaming cards. I think it would be a good idea to show clocks on these cards. You gained about 40 seconds in one test on the 1080 Ti with just 100 MHz. You OC'd the memory, but I don't believe that was a big factor. Though you can certainly test that theory if you wish, since I was wrong about PCIe lanes.

    The question we have is more about the other rigs you beat. What are the clocks on the dual 1080s in that laptop? I tried looking it up, and the one and only review I found that listed any clock speeds stated they were base 1557 MHz. That model was slightly different, but if that is correct, that is ever so slightly down from the 1607 MHz Nvidia lists as the base of a Founders Edition 1080. That's only 50 MHz, though! That doesn't seem like enough to explain how they were beaten by two 1070s. It is also possible that @tj_1ca9500b did not start the test with Iray preview active. That could very well explain it.

    @sone said they have a 1070 and a 1080. They say OC by Gigabyte, so I assume both are OC'd a bit. They might even have the same 1070 you do. I found 1721 MHz for a 1080 in OC mode (not counting boost). They had a Ryzen 1700. Their fastest test, and they said the VRAM was loaded (so I guess that means the Iray preview was on), was 4:16. They were just a half second faster than your dual 1070s, even though they have an OC'd 1080. So this test is even more interesting than the laptop test. What is the difference between a Ryzen 1700 and an 1800X besides clock speed? Is the 1800X the key to why the dual 1070s are running almost exactly as fast, within the margin of error? This is very curious, and I'd love to know what is making your 1070s sing so well, or whether there is something holding the others back a bit.

  • tj_1ca9500b Posts: 2,047
    edited March 2018

    I'm hoping that someone runs these benchmarks on a dual EPYC system at some point.  Sure, I can guesstimate using the Threadripper results, but it's not the same as an actual benchmark...

  • Robert Freise Posts: 4,247

    tj_1ca9500b said:

    I'm hoping that someone runs these benchmarks on a dual EPYC system at some point. Sure, I can guesstimate using the Threadripper results, but it's not the same as an actual benchmark...

    May be a long wait given the price of those

  • junk Posts: 1,212
    edited March 2018
    GPU speed / DDR speed                    Time (new outrider42 test)
    GTX 1080 Ti Hybrid SC2 (no overclock)    5 minutes 27.68 seconds
    +1000 MHz memory speed                   4 minutes 53.18 seconds
    +1000 MHz memory speed (2nd run)         4 minutes 53.31 seconds
    +100 MHz GPU only                        5 minutes 22.83 seconds
    +100 MHz GPU only (2nd run)              5 minutes 18.46 seconds
    +100 MHz GPU, +1000 MHz memory           4 minutes 47.40 seconds

     

    These are my results for stock runs, overclocking just the memory, overclocking just the GPU, and overclocking both, for JUST the GTX 1080 Ti. It seems that overclocking the memory does more for render times than the GPU core speed. FYI: I cannot seem to get over +100 on the core to be stable, even with increased core voltage, power limits, etc.
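
    Put as percentages (a quick Python check; times copied from the table above), the split looks like this:

    ```python
    # junk's GTX 1080 Ti runs from the table above, converted to seconds
    stock = 5 * 60 + 27.68   # no overclock (327.68 s)
    mem   = 4 * 60 + 53.18   # +1000 MHz memory
    core  = 5 * 60 + 22.83   # +100 MHz GPU core
    both  = 4 * 60 + 47.40   # both overclocks

    for label, t in (("memory OC", mem), ("core OC", core), ("both", both)):
        print(f"{label}: {100 * (stock - t) / stock:.1f}% faster than stock")
    # memory OC: 10.5%, core OC: 1.5%, both: 12.3% -- the memory
    # overclock clearly dominates here.
    ```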

  • GaryH Posts: 66
    edited March 2018

    Titan X Pascal @ 2037 MHz + Titan Xp @ 2062 MHz, stock memory speeds, custom water cooled, 39°C each. GTX 970 used for display only.

    Intel 2600K (4 core) @4GHz,  32GB DDR3

    2 min 34.71s (outrider42's new scene)

  • Errilhl Posts: 20

    Iray Starter Scene - 5.29 with CPU / GPU - i7 3770k / Titan (the first one) all on stock

    Iray Bench 2018 - 13.42 with same.

    Actually not as bad as I would've feared

     

  • Navin J Posts: 5
    edited April 2018

     

    Alienware 15 R3 (i7 7820HK @ ~4 GHz / OC1 profile, GTX 1080 with Max-Q, 32GB DDR4-2400, PM981 1TB)


     

    Iray Starter scene: (Genesis 8)

    (CPU: OFF) GPU & Optix: 8m 20s

  • outrider42 Posts: 3,679
    langbakk said:

    Iray Starter Scene - 5.29 with CPU / GPU - i7 3770k / Titan (the first one) all on stock

    Iray Bench 2018 - 13.42 with same.

    Actually not as bad as I would've feared

     

    It is fascinating to see the difference a few years makes. The OG Titan turned 5 in February. Thanks for posting. 

  • Dim Reaper Posts: 687

    I upgraded my graphics card yesterday from a 980ti to a 1080ti. Whilst I was waiting for the delivery I did some bench testing with the original scene, and then did some more today using the 2018 scene. In a few weeks' time I plan to put the 980ti back into the PC, so I will post back results with dual GPU then.

    System specs:

    i7 5960X with 32GB RAM.

    980ti clock core 1000 MHz

    1080ti clock core 1595 MHz

    Daz Studio version 4.9.4.122

     

    Using the original benchmark scene:

    No iray preview.  OptiX Acceleration disabled. Fresh load of the scene before hitting Render.

    980ti                            5 minutes 44 seconds

    980ti + i7 5960X            4 minutes 59 seconds

    1080ti                          3 minutes 26 seconds

    1080ti + i7 5960X         3 minutes 22 seconds

     

    No iray preview.  OptiX Acceleration enabled. Fresh load of the scene before hitting Render.

    1080ti + i7 5960X        2 minutes 8 seconds

     

    From iray preview window.  OptiX Acceleration enabled.

    1080ti + i7 5960X        1 minutes 52 seconds

    1080ti                        1 minutes 52 seconds

     

     

    Using the 2018 Render scene provided by Outrider 42:

    Iray preview enabled.  OptiX enabled.  Optimisation for Speed.

    1080ti                          5 minutes 17.76 seconds

    1080ti + i7 5960X         5 minutes 22.76 seconds

    i7 5960X only               1 hours 4 minutes 19.30 seconds

     

     

  • Emf3D Posts: 4
    edited April 2018

    System specs:
    i7 8700k @ 4.4 GHz x 6
    24GB RAM @ 2666 MHz
    2 x GTX 1070 (MSI Armor)

    Iray Bench 2018: (Genesis 8)

    Test1:     2 x 1070 / OptiX enabled / Optimization: Speed / No OC                4:33

    Test2:     2 x 1070 / OptiX enabled / Optimization: Speed / No OC                4:23    (Scene loaded in RAM)

    Test3:     2 x 1070 + CPU / OptiX enabled / Optimization: Speed / No OC    4:18

     

  • EBF2003 Posts: 28

    iray benchmark 2018 by outrider42

    5 minutes 3.86 seconds

    dual GTX 970

    Xeon 2696

  • JD_Mortal Posts: 758
    edited April 2018

    IRAY Starter scene... (RAW)

    {3x GPUs / 1x CPU}
    [CPU]                 = 10 minutes  6.30 seconds [606.30s]    33tc (Thread Cores)
    [CPU + OptiX]         =  8 minutes 20.39 seconds [500.39s]    33tc
    [GPUs]                =  1 minutes 18.30 seconds [ 78.30s] 10752cc (CUDA Cores)
    [GPUs + CPU]          =  1 minutes 11.65 seconds [ 71.65s] 10752cc + 33tc
    [GPUs + OptiX]        =  0 minutes 44.58 seconds [ 44.58s] 10752cc
    [GPUs + CPU + OptiX]  =  0 minutes 41.76 seconds [ 41.76s] 10752cc + 33tc
    {2x GPU}
    [Titan-Xp(C) + OptiX] =  0 minutes 58.15 seconds [ 58.15s]  7680cc
    {Single GPU}
    [Titan-X + OptiX]     =  3 minutes  5.37 seconds [185.37s]  3072cc
    [Titan-Xp(C) + OptiX] =  1 minutes 52.37 seconds [112.37s]  3840cc <-- (Video-Out)
    [Titan-Xp(C) + OptiX] =  1 minutes 51.48 seconds [111.48s]  3840cc

     

    2018 Benchmark test results... (RAW)

    {3x GPUs / 1x CPU} [Notes below]
    [CPU]                 = 23 minutes 14.59 seconds [1394.59s]    33tc (Thread Cores)*
    [CPU + OptiX]         = 22 minutes 29.92 seconds [1349.92s]    33tc
    [GPUs]                =  2 minutes  8.26 seconds [ 128.26s] 10752cc (CUDA Cores)
    [GPUs + CPU]          =  2 minutes  1.80 seconds [ 121.80s] 10752cc + 33tc
    [GPUs + OptiX]        =  1 minutes 55.30 seconds [ 115.30s] 10752cc
    [GPUs + CPU + OptiX]  =  1 minutes 47.62 seconds [ 107.62s] 10752cc + 33tc
    {2x GPUs}
    [Titan-Xp(C) + OptiX] =  2 minutes 26.17 seconds [ 146.17s]  7680cc
    {Single GPU}
    [Titan-X + OptiX]     =  7 minutes 39.40 seconds [ 459.40s]  3072cc
    [Titan-Xp(C) + OptiX] =  4 minutes 50.12 seconds [ 290.12s]  3840cc <-- (Video-Out)
    [Titan-Xp(C) + OptiX] =  4 minutes 47.40 seconds [ 287.40s]  3840cc
    {Unofficial TCC mode on 2x Titan-Xp "Special editions", WDDM mode on Titan-X}**
    [GPUs + TCC + OptiX]  =  1 minutes  4.40 seconds [  64.40s]

    * Daz only uses 33 of 36 threads to render on this CPU.
    ** It's a one-shot deal. Then Daz crashes and the cards can't be used again until reboot.


    Windows 10 (64-bit) {WDDM v2.3}
    1x i9 7980xe (CPU), 18 cores, 36 threads (Daz only uses 33 threads to render)
    64GB DDR4 (RAM) Corsair Dominator Platinum DDR4 3000 (PC4 24000)
    1x Titan-X (Maxwell) {Performance mode set to "Compute Power"}
    2x Titan-Xp (Collectors edition) {Performance mode set to "Compute Power"}
    xxx 2x Titan-V (Volta) [Not available for rendering in this version of Daz]
    2TB mSATA (Samsung 960 PRO) {Boot and Daz}
    4TB SATA (Samsung 860 PRO) {Daz Library}

    FUNNY NOTE: I hit 78% convergence/done, after 4 seconds of rendering...

    I would change a few things with the benchmark...

    1: Turn off "Sun and Sky" rendering. Do "Scene Only". You are inside a box; processing the outside world is wasted work that still counts toward convergence.

    2: Turn off "Render ground". You are in a box, with a ground. That is wasted processing. Plus, the "ground" is rendered outside the box. See #3.

    3: The "box" you are using is a primitive with two sides. Inside and outside are both being rendered, due to the sun and sky being outside, a ground the box casts unseen shadows on, and the box's "gloss" reflection settings.

    4: You are rendering the box material with a "Metallicity" profile without using any of the metallicity settings. That slows things down while rendering nothing (values of 0 are still processed, since Metallicity is selected as the shader type; it is the default "catch-all" shader).

    5: You do not say what settings to use for the "Texture threshold". Defaults are 512 and 1024. Mine are commonly set to 2048-5000 for rendering, so it doesn't compress the textures.

    6: Bloom is wasteful post-processing attempted inline while rendering. NVIDIA is horrible at processing those "novelty" filters.

    7: You have a group of "Photometric fill lights" which is empty, left over from ???

    8: You have Smooth ON for a cube whose all-90-degree angles are set to 89.0, and round corners ON across materials with no value. Both are wasted processing attempts with near-zero values, as opposed to being skipped when you select OFF.

    9: Fingernails, for some reason, were 50% transparent, and thus glowing, rendering the inside of the Gen8 model.

    10: Eye moisture has all sorts of reflections and gloss but is 100% transparent (not rendered in the scene, but still being processed).

    11: Lips have 0 for "glossy" but have all the settings and an image map for gloss. Needs SOME moisture...
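
    Distilled into one place (a checklist sketch in Python, not a working script; the labels mirror the Daz Studio settings panes as named in the list above, so treat them as things to check by hand):

    ```python
    # Suggested benchmark-scene overrides distilled from points 1-8 above.
    # These are human-readable labels for manual settings, not an API.
    BENCHMARK_SCENE_OVERRIDES = {
        "Environment Mode": "Scene Only",     # 1: skip Sun & Sky
        "Draw Ground": False,                 # 2: no ground-plane work
        "Texture Compression Medium": 2048,   # 5: defaults are 512/1024
        "Texture Compression High": 5000,
        "Bloom Filter": False,                # 6: drop the post filter
        "Smooth (cube)": False,               # 8: skip near-zero smoothing
        "Round Corners (cube)": False,
    }
    ```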

    P.S. We need to stop using that model... She has herpes... some kind of redness under her nose, on her lips, at the crease. There is also a horrible "ring around her eyes" in the Gen8 model. Not sure if all Gen8 models have that, or just her. Like someone cut around her eyes with a razor blade and the wound has not healed. Hands look awesome though, if you fix the nails so they are not 50% transparent and you are not seeing the inside of her boneless body, which glows due to SSS.

  • hyteckit Posts: 167
    JamesJAB said:

    Since you are sharing your i9 render numbers, I figured that I'd see how well it fared against dual 6 core Xeons (generation 1.5 i7 platform)

    CPU only - 2x Xeon X5680 @ 3.33 GHz (24GB 6-channel REG ECC DDR3)
    Total render time - 21 minutes 49.59 seconds 

    I have a similar setup, but my render times are much longer. Running Daz on Mac Pro.

    CPU only - 2x Xeon X5680 @ 3.33 GHz (24GB REG ECC DDR3)
    Total Rendering Time: 31 minutes 12.67 seconds

  • hyteckit Posts: 167

    Just got a Titan Black on my PC. Big improvement over my 12-core (2x) Xeon X5680 @ 3.33 GHz.

    Starter Scene.

    GPU Only + OptiX Prime

    Total Rendering Time: 4 minutes 26.67 seconds

     

  • outrider42 Posts: 3,679
    edited May 2018

    I'm not sure if this was posted, but I came across an Iray benchmark that includes not only the Titan V, but also the DGX-1. Rather than factoring time, the cards are rated on megapaths per second. This gives you a good indicator of relative performance, and if true, the Titan V is a real beast for Iray, absolutely destroying everything that has preceded it. (At least before the gold plated Quadros were released last month.) So that $3K does get you something...if Daz updates to support it. What I really like is that they retested older GPUs for this test, they did not just rely on the test results from older versions of Iray. I am going to make a big list of these here.

    This test is using the Iray 2017.1.2 SDK, and all GPUs shown were tested on this same version.

    TITAN V                12.07
    Tesla V100             11.1
    Quadro GP100            7.89
    TITAN X (Pascal)        5.4
    GeForce GTX 1080 Ti     5.3
    Quadro P6000            5.26
    GeForce GTX 1070 Ti     4
    Quadro M6000            4
    GeForce GTX 1080        3.7
    TITAN X                 3.7
    GeForce GTX 780 Ti      3.3
    Quadro P4000            3.09
    TITAN Black             2.9
    GeForce GTX 980         2.9
    GeForce GTX TITAN       2.8
    Quadro M5000            2.7
    Tesla K10               2.1
    Tesla K20               1.9
    Quadro M4000            1.8
    GeForce GTX 750 Ti      1.1

    So how much does $60K get you?

    DGX-1         58.91
    Quadro VCA    30.62

    Another interesting result is how the 1070 Ti beats the 1080 outright. At first this may seem wrong, but that is not the case. My thinking is that the 1070 Ti has a more updated version of CUDA for Iray, since it released so much later than the 1080 did. That is not unprecedented; I posted the CUDA versions of the different GPU series in an earlier post, and there are several cases where a newer card has a newer CUDA version even though it is part of the same series. In any case, this really does make the 1070 Ti a tremendous value. And in the one bench we have seen on SickleYield's scene with a 1070 Ti, it beat a 1080. So this may well be true!

    Here is the source. They also list a number of cloud servers, too, so if you wish to see those, click the link.

    https://www.migenius.com/products/nvidia-iray/iray-2017-1-benchmarks
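
    Because megapaths per second is a throughput figure, relative speed is just a ratio, and you can turn it into a rough time estimate. A small sketch in Python (the 1080 Ti reference time is borrowed from a 2018-scene result earlier in this thread; treat the projections as ballpark only, since Daz's Iray build and scene differ from migenius's test):

    ```python
    # Megapaths-per-second ratings from the migenius table above
    megapaths = {
        "TITAN V":      12.07,
        "GTX 1080 Ti":   5.3,
        "GTX 1070 Ti":   4.0,
        "GTX 1080":      3.7,
    }

    ref_card = "GTX 1080 Ti"
    ref_time = 5 * 60 + 17.76   # a 1080 Ti time on the 2018 scene (317.76 s)

    for card, rate in megapaths.items():
        ratio = rate / megapaths[ref_card]
        print(f"{card}: {ratio:.2f}x the {ref_card}, ~{ref_time / ratio:.0f} s")
    # TITAN V works out to ~2.28x a 1080 Ti, i.e. roughly 140 s on this scene.
    ```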

  • Here is mine (with original SickleYield scene): 2 minutes 59 sec.

    GPU + CPU + OptiX on.

    CPU: i7 7700K
    GPU: MSI GTX 980 Ti 6GB Armor 2X with default clock
    RAM: 16GB DDR4 2400 MHz

  • rock63 Posts: 13

    I looked up the test I found, and I guess I misremembered it. They concluded that the core count of the CPU was the biggest factor in multi-GPU rendering, as the 20-core Xeon + 4 Titans outperformed the 4-core i7 + 4 Titans.

    https://www.pugetsystems.com/labs/articles/NVIDIA-Iray-GPU-Performance-Comparison-785/

    The Ryzen 1800X does have a clear core count advantage over the previously tested systems.

    In that test all 4 Titans will be running @16x each, as there are a total of 80 lanes across those 2 Xeons. If you ran that on a single 40-lane CPU with 4 cards, render time would increase, as data throughput to the cards would be halved. I have tested this theory, and it is the case: I had a 16-lane CPU running 2 980tis @ 8x each, and render time was slower than 1 980ti running alone @ the full 16x.
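
    The bandwidth half of that argument is simple arithmetic (a Python sketch; ~0.985 GB/s of usable PCIe 3.0 bandwidth per lane). Note it only bounds transfer speed; whether link width actually moves steady-state render times is exactly what is being debated in the replies below:

    ```python
    PCIE3_GBPS_PER_LANE = 0.985   # approx. usable PCIe 3.0 bandwidth per lane

    for lanes in (16, 8):
        print(f"x{lanes} link: ~{lanes * PCIE3_GBPS_PER_LANE:.1f} GB/s per card")
    # x16: ~15.8 GB/s, x8: ~7.9 GB/s -- halving the lanes halves the
    # transfer ceiling, which mostly affects how fast the scene uploads.
    ```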

     

  • Dim Reaper Posts: 687

    I posted some results above several weeks ago when I upgraded from a 980ti to a 1080ti.  This week I finally got enough free time to fit a bigger power supply and put the 980ti back in the machine.

    Using the 2018 Render scene provided by Outrider 42:

    Iray preview enabled.  OptiX enabled.  Optimisation for Speed.

    1080ti                          5 minutes 17.76 seconds

    1080ti+980ti                 3 minutes 12.13 seconds
     

    Overall, I'm pleased with the difference on the benchtest, but I need to try some larger scenes to see the difference.

  • outrider42 Posts: 3,679

    Dim Reaper said:

    This week I finally got enough free time to fit a bigger power supply and put the 980ti back in the machine.

    1080ti                          5 minutes 17.76 seconds

    1080ti+980ti                 3 minutes 12.13 seconds

    A pretty solid 40% cut in render time. That should be noticeable on just about everything. Though of course you only see that extra boost when the scene fits in the 980ti's 6GB memory, minus whatever Windows takes from it (so maybe ~5GB usable).
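
    For the record, both readings of that figure from the posted times (a quick Python check):

    ```python
    solo = 5 * 60 + 17.76   # 1080 Ti alone (317.76 s)
    dual = 3 * 60 + 12.13   # 1080 Ti + 980 Ti (192.13 s)

    print(f"render time cut by {100 * (solo - dual) / solo:.1f}%")  # ~39.5%
    print(f"throughput up by {100 * (solo / dual - 1):.1f}%")       # ~65.4%
    ```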

     

    rock63 said:

    In that test all 4 Titans will be running @16x each, as there are a total of 80 lanes across those 2 Xeons. ... I had a 16-lane CPU running 2 980tis @ 8x each, and render time was slower than 1 980ti running alone @ the full 16x.

    What is your new CPU, and what is the old one? If you upgraded to a CPU with more lanes, then odds are your CPU also has more and faster cores than your old one, which would better explain the difference. In order to properly understand what is happening, we need more information. Just telling us you have more lanes is not a definitive result. It is also possible your previous system bottlenecked your GPUs in other ways. That's why we need people posting to give us more information on system spec.

    There is an easy way to test this. Cover half of the GPU pins with an insulating material like the picture below and run the test again. If there is a difference in speed, then you are on to something.

    Anybody who has such a system can test this and see if it makes any difference. I'd be interested in seeing this.

  • DadofMany Posts: 13
    edited May 2018

    My results surprised me a bit, as the CPU + GPUs render time is almost the same as the GPU-only render time.

    CPU: i7-6950X (40 lanes), overclocked 36%, independent cooling loop.

    32 Gig RAM

    4 x EVGA Hydrocopper GTX-1080, overclocked to 2000 MHz, 16,8,8,8 lanes, independent cooling loop

    Starter Scene:

    GPUs + OptiX = 2018-05-30 14:41:52.865 Total Rendering Time: 58.68 seconds (load and render), 45.43 seconds render loaded scene, 05000 iterations

    GPUs + CPU + OPTIX = 2018-05-30 14:59:03.121 Total Rendering Time: 57.49 seconds (load and render), 45.34 seconds render loaded scene,05000 iterations

    This is a good comparison test (CPU or no CPU), but I had some background processes running that slowed the benchmark slightly. I left the system configured as normal for my daily operations.
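
    Splitting out the load overhead shows why the two runs land so close together (a quick Python check on the numbers above):

    ```python
    # GPUs + OptiX run: cold (load + render) vs. already-loaded render
    total_cold, render_hot = 58.68, 45.43
    print(f"scene load/upload: {total_cold - render_hot:.2f} s")  # 13.25 s

    # Adding the CPU changed the loaded-scene render by only 0.09 s:
    gpu_only, gpu_cpu = 45.43, 45.34
    print(f"CPU contribution: {100 * (gpu_only - gpu_cpu) / gpu_only:.1f}%")  # ~0.2%
    # Against four overclocked 1080s, the CPU barely registers.
    ```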

  • jerham Posts: 153

    MSI GE63-Raider-RGB-8R Notebook (I7 8750H, GTX 1070, Performance profile setting)

    Original Scene:

    GPU + OPTIX = Total Rendering Time: 4 minutes 3.33 seconds

    GPU + CPU + OPTIX = Total Rendering Time: 3 minutes 7.23 seconds

    Outrider42 Scene: 

    GPU + OPTIX = Total Rendering Time: 9 minutes 11.83 seconds

    GPU + CPU + OPTIX = Total Rendering Time: 8 minutes 15.67 seconds

     

  • CPU : Ryzen 2700X (PB2 & XFR2 only)
    RAM : 16GB (2 x 8GB) @ 3400 MHz
    VGA : EVGA 1070 Ti 8GB

    IRAY Starter scene:
    CPU ONLY = 17 minutes 39 seconds
    CPU + GPU = 2 minutes 52 seconds
    GPU ONLY = 2 minutes 52 seconds

     

     

  • nothingmore Posts: 24
    edited July 2018

    1080 Ti (x3) + Titan Xp

    Old scene:

    • Total Rendering Time: 32.53 seconds

    New scene:

    • 1st trial: Total Rendering Time: 1 minutes 27.11 seconds,
    • 2nd trial: Total Rendering Time: 1 minutes 26.38 seconds

    GPU clocks start to throttle near the end; hitting a thermal ceiling with air cooling

     

  • Illidanstorm Posts: 655
    edited July 2018

    1080 Ti
    1800x Ryzen
    32GB RAM

     

    Only GPU: 3.29
    GPU and CPU: 3.01
    GPU Optix: 2.09
    GPU and CPU Optix: 1.48
     

    Beta 4.11: 
    GPU Optix: 1.51
    GPU Optix + Post Denoiser 8: 1.51

    So 4.11 is a bit faster than 4.10 for me.

     

  • Paperspace virtual machine, GPU+ P4000 configuration, GPU only:

    OptiX off: 6 minutes 18.64 seconds, 4951 iterations

    OptiX on: 3 minutes 58.62 seconds, 4986 iterations
