Iray Starter Scene: Post Your Benchmarks!


Comments

  • 3141592654 Posts: 967

    Part 1 - Original Iray Test Scene
    I just built my first computer ... designed for rendering. I decided to compare the results to the computers I have used previously.

    I started with an HP Pavilion
    i3-5020U 2.2 GHz
    Intel HD Graphics 5500 (that is correct ... no graphics card)
    CPU only - 450 iterations in 15 minutes giving an approximate finished time of 166 minutes (2 3/4 hours)

    I have mostly been using a Dell Inspiron
    i5-6300HQ 3.2 GHz
    GeForce 960M with 4 GB VRAM
    GPU only - 995 iterations in 5 minutes giving an approximate finished time of 25 minutes
    GPU+CPU+OPTIX - 1137 iterations in 5 minutes approximate finish of 22 minutes

    My new build - MSI Tomahawk X299 Motherboard
    i9-7940X with 14 cores
    Quadro M6000 with 24 GB VRAM
    GPU only - finished in 5m 40s
    GPU+CPU+OPTIX - finished in 2m 21s


    Part 2 - My Personal Test Scene
    I wanted a more real-world example with a broader range of tests ... not useful for everyone, but perhaps worth noting some differences. My scene had the full Tropical Island with an animated Generation 8 character and clothing, plus lighting.

    Basic OGL - I do not have the exact numbers, but surprisingly, the Dell i5/960m outperformed the MSI i9/M6000 a little bit - I would estimate about 20% faster on an animation sequence.

    3Delight - The Dell took 3 times longer to optimize the image. Once running, the MSI was only slightly more than double the speed of the Dell ... I expected much more.
         - I will also note, I tried a single Generation 6 character at 4000 x 10000 pixels - a bigger difference in speed - MSI in 57s and Dell in 5m 20s.

    Iray - Big difference - the MSI reached 100 iterations in 2m 10s - the Dell reached 50 iterations in 6m 20s, so an estimated 100 iterations in 12m 40s.
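    Estimates like these are simple linear extrapolation: a progressive renderer like Iray completes iterations at a roughly constant rate on a fixed scene, so a partial run projects the full one. A minimal sketch of the arithmetic (the 5000-iteration target matches the benchmark scene's default cap; the function name is my own):

    ```python
    def estimate_total_minutes(iterations_done, minutes_elapsed, target_iterations=5000):
        """Linearly extrapolate total render time from a partial run.

        Assumes the iteration rate stays constant, which is roughly true
        for Iray progressive rendering on an unchanging scene.
        """
        rate = iterations_done / minutes_elapsed  # iterations per minute
        return target_iterations / rate

    # The CPU-only HP above: 450 iterations in 15 minutes
    print(round(estimate_total_minutes(450, 15)))  # ~167 minutes
    # The Dell's GPU-only run: 995 iterations in 5 minutes
    print(round(estimate_total_minutes(995, 5)))   # ~25 minutes
    ```

    The same two-line calculation reproduces all the "approximate finished time" figures in this post, give or take rounding.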

     

  • jackkerouac Posts: 1
    edited August 2018

    Okay, this is my setup:

    Operating System
    Windows 10 Pro 64-bit

    CPU
    AMD Ryzen 7 1700X
    Summit Ridge 14nm Technology

    RAM
    16.0GB Dual-Channel @ 1064MHz (15-15-15-36)

    Motherboard
    Gigabyte Technology Co. Ltd. AX370-Gaming 5 (AM4)

    Graphics
    NVIDIA GeForce GTX 1080 Ti SC2 (EVGA)

    Render Times (default file setup, i.e. loaded it in and hit Render)

    Video Card Only: 4 minutes 1.63 seconds
    Video Card with OptiX Prime Acceleration: 2 minutes 34.23 seconds
    Video Card and CPU: 3 minutes 34.20 seconds
    Video Card and CPU with OptiX Prime Acceleration: 2 minutes 15.63 seconds

    Added another 1080 ti into the mix (same brand) and got the following results:

    2 Video Cards Only: 1 minute 47.98 seconds
    2 Video Cards with OptiX Prime Acceleration: 1 minute 3.71 seconds
    2 Video Cards, CPU: 1 minute 51.91 seconds
    2 Video Cards, CPU with OptiX Prime Acceleration: 1 minute 23.73 seconds

    The only thing that seems a little weird is that rendering is faster without the CPU w/ OptiX Prime Acceleration. Not by much (about 20 seconds), but still.
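    That gap is easier to see with the dual-card timings reduced to seconds and ranked. A quick sketch (times hard-coded from the results above; the dictionary keys are my own labels):

    ```python
    # Dual GTX 1080 Ti results from above, converted to seconds
    times = {
        "2x GPU": 1 * 60 + 47.98,
        "2x GPU + OptiX": 1 * 60 + 3.71,
        "2x GPU + CPU": 1 * 60 + 51.91,
        "2x GPU + CPU + OptiX": 1 * 60 + 23.73,
    }

    baseline = times["2x GPU"]
    for config, t in sorted(times.items(), key=lambda kv: kv[1]):
        print(f"{config:22s} {t:7.2f}s  speedup vs plain 2x GPU: {baseline / t:.2f}x")
    ```

    The sort puts "2x GPU + OptiX" first: adding the CPU costs about 20 seconds here. One plausible (unconfirmed) explanation is that the renderer ends up waiting on the much slower CPU iterations when merging results.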

  • PC Specs:
    Ryzen 2 2700x, 16gb RAM (@ 2800MHZ), GTX 980Ti, 1TB M.2 Samsung 960 PRO

    Rendered to 100%

    CPU Only, Optix Off
    Total Time: 20 Minutes 6 Seconds

    CPU Only, Optix On
    Total Time: 16 Minutes 52 Seconds

    GPU Only, Optix Off
    Total Time: 5 Minutes 18 Seconds

    GPU Only, Optix On
    Total Time: 3 Minutes 3 Seconds

    GPU+CPU, Optix Off
    Total Time: 4 Minutes 25 Seconds

    GPU+CPU, Optix On
    Total Time: 2 Minutes 42 seconds

  • PC Specs:

    CPU: Ryzen 5 1600 OC 3.8GHz

    GPU: GTX 1060 6GB

    RAM: 8GB DDR4 2666MHZ

     

    Results:

    Both CPU and GPU:

    2018-09-11 04:17:36.771 Total Rendering Time: 6 minutes 26.74 seconds

  • PC Specs:

    CPU: Ryzen 2700x Stock

    RAM: 16GB DDR4 2133MHZ (I know...)

    GPU: RTX 2080TI (ZOTAC AMP!)*

     

    Results:

    2080TI Only (OPTIX ON):

    2018-09-24 02:38:10.633 Total Rendering Time: 51.0 seconds

    2700X + 2080TI (OPTIX ON):

    2018-09-24 02:53:48.963 Total Rendering Time: 45.0 seconds

     

    *2080TI is not overclocked and clocks peak at 1950mhz at 67°C (20°C room temperature)

  • robogo said:

    PC Specs:

    CPU: Ryzen 2700x Stock

    RAM: 16GB DDR4 2133MHZ (I know...)

    GPU: RTX 2080TI (ZOTAC AMP!)*

     

    Results:

    2080TI Only (OPTIX ON):

    2018-09-24 02:38:10.633 Total Rendering Time: 51.0 seconds

    2700X + 2080TI (OPTIX ON):

    2018-09-24 02:53:48.963 Total Rendering Time: 45.0 seconds

     

    *2080TI is not overclocked and clocks peak at 1950mhz at 67°C (20°C room temperature)

    I was really anxious to see those numbers. The last 1080TI benchmark reached more than 2 minutes and you took only 51 seconds? That's awesome!

  • outrider42 Posts: 3,679
    robogo said:

    PC Specs:

    CPU: Ryzen 2700x Stock

    RAM: 16GB DDR4 2133MHZ (I know...)

    GPU: RTX 2080TI (ZOTAC AMP!)*

     

    Results:

    2080TI Only (OPTIX ON):

    2018-09-24 02:38:10.633 Total Rendering Time: 51.0 seconds

    2700X + 2080TI (OPTIX ON):

    2018-09-24 02:53:48.963 Total Rendering Time: 45.0 seconds

     

    *2080TI is not overclocked and clocks peak at 1950mhz at 67°C (20°C room temperature)

    The 2080ti is already supported? It must be using the Volta drivers.

    Is this the original sickleyield bench or the 2018 bench I created?

    Either way, if this is correct, that is impressive. I am banking on this using Volta drivers, and if this is true, I think the 2080ti with proper Turing Iray drivers might get even faster.

  • ebergerly Posts: 3,255
    Thanks for posting those results!! Sounds like without the RTX enhancements the performance improvements barely justify the price. The 2080ti costs about twice as much as a 1080ti and is a little more than twice as fast.
  • robogo said:

    PC Specs:

    CPU: Ryzen 2700x Stock

    RAM: 16GB DDR4 2133MHZ (I know...)

    GPU: RTX 2080TI (ZOTAC AMP!)*

     

    Results:

    2080TI Only (OPTIX ON):

    2018-09-24 02:38:10.633 Total Rendering Time: 51.0 seconds

    2700X + 2080TI (OPTIX ON):

    2018-09-24 02:53:48.963 Total Rendering Time: 45.0 seconds

     

    *2080TI is not overclocked and clocks peak at 1950mhz at 67°C (20°C room temperature)

    The 2080ti is already supported? It must be using the Volta drivers.

    Yes, Daz_Rawb posted to the thread on the new cards with some times using a different scene a couple of days back. He did say that, as far as they can tell, it isn't using the RTX features, just regular CUDA.

  • Sirap Posts: 4
    edited September 2018

    outrider42 said:

    PC Specs:

    CPU: Ryzen 2700x Stock

    RAM: 16GB DDR4 2133MHZ (I know...)

    GPU: RTX 2080TI (ZOTAC AMP!)*

     

    Results:

    2080TI Only (OPTIX ON):

    2018-09-24 02:38:10.633 Total Rendering Time: 51.0 seconds

    2700X + 2080TI (OPTIX ON):

    2018-09-24 02:53:48.963 Total Rendering Time: 45.0 seconds

     

    *2080TI is not overclocked and clocks peak at 1950mhz at 67°C (20°C room temperature)

    The 2080ti is already supported? It must be using the Volta drivers.

    Is this the original sickleyield bench or the 2018 bench I created?

    Either way, if this is correct, that is impressive. I am banking on this using Volta drivers, and if this is true, I think the 2080ti with proper Turing Iray drivers might get even faster.

    Sorry for the late reply.  Yep, they're working in 4.11 so I assume it's due to Volta support (CUDA 9.2 perhaps?).  Trying to boot up a render in 4.10 results in an endless "preparing scene" loop :P

    Oh yeah the results were based on Sickleyield's original benchmark.  I wasn't aware of a newer one, I'll give yours a shot soon! 

    EDIT:

    Only had time for a GPU only run (OPTIX off)

    2018-09-26 04:39:00.875 Finished Rendering
    2018-09-26 04:39:00.907 Total Rendering Time: 4 minutes 38.90 seconds

    EDIT 2:

    Okay I reran the benchmark with OPTIX on and it took 20 seconds longer.  Iray gonna iray I guess!

  • Rendered Outrider42's test scene (5000 iterations, 520x400) in 8 min 25 seconds, with GPU only (OptiX on); GTX 1080 OC at 1658 MHz / memory at 1797 MHz.

    My CPU is a Core i7 6700 (dunno if it's relevant for GPU only render).

  • outrider42 Posts: 3,679
    ebergerly said:
    Thanks for posting those results!! Sounds like without the RTX enhancements the performance improvements barely justify the price. The 2080ti costs about twice as much as a 1080ti and is a little more than twice as fast.

    Depends. Twice the performance with a single card is still big. MultiGPU performance does not scale perfectly, and uses a lot more power and heat. And this is before proper drivers for RT kick in, which will likely bump the speed significantly. Just look at the power of RTX:

  • ebergerly Posts: 3,255

    Is it just me, or do those images of the dog look like someone was posting a joke or something? The one on the left looks like an old 90's video game.

    I'm curious, where did you find those? 

  • outrider42 Posts: 3,679

    Yes, it is a joke. RTX On became its own meme; if you Google search RTX MEME you'll find tons of them. The game on the left is Minecraft; while it looks ancient, it is very much a popular game. Though most of the cool kids play Fortnite now.

  • ebergerly Posts: 3,255

    Ahh, okay. I'm glad you didn't get fooled by it.

  • jonascs Posts: 35
    edited September 2018

    Here are my own benchmarks with the SickleYield scene, at the suggested 400x520 render size and the scene's included render settings. All are on 64 bit Windows 10.

    Intel Core i7 7700K @ 4.20GHz running at 4.80GHz
    ASUS Strix Z270F Gaming
    DDR4 32768Mbytes 4200.0 MHz @1467 MHz

    2x ASUS GTX 1080 Strix @ 1671 MHz Memory @ 1251 MHz (Boost 1810 MHz)

    NVidia Driver 411.53

     

    1 GPU, Optix off = Total Rendering Time: 4 minutes 48.44 seconds 4999 iterations.

    1 GPU, Optix on = Total Rendering Time: 2 minutes 51.84 seconds 5000 Iterations

    2 GPU's, Optix off = Total Rendering Time: 2 minutes 25.50 seconds 2478 iterations + 2522 iterations

    2 GPU's, Optix on = Total Rendering Time: 1 minutes 27.79 seconds   (forgot to see iterations, sorry)

    1 GPU + CPU Optix off = Total Rendering Time: 4 minutes 29.84 seconds, CUDA device 0 (GeForce GTX 1080): 4513 iterations, CPU: 487 iterations

    1GPU + CPU Optix on =  Total Rendering Time: 2 minutes 48.14 seconds, CUDA device 0 (GeForce GTX 1080): 4627 iterations, CPU: 373 iterations

    2 GPU + CPU Optix off = Total Rendering Time: 2 minutes 24.44 seconds, 2361 iterations + 2417 iterations, CPU 222 iterations

    2GPU + CPU Optix On = Total Rendering Time: 1 minutes 30.14 seconds, 2399 iterations + 2436 iterations, CPU 165 iterations

     

    Doing the same now with Outrider42's test scene, at the suggested render size and the scene's included render settings.

    1 GPU Optix off = Total Rendering Time: 9 minutes 0.8 seconds 5000 iterations

    1 GPU Optix on = Total Rendering Time: 7 minutes 54.85 seconds 5000 iterations

    2 GPU's Optix off = Total Rendering Time: 4 minutes 35.33 seconds 2484 + 2516 Iterations. (WITH DAZ3D BETA and Nvidia Driver 411.70 : 7 minutes 17.12 seconds)

    2 GPU's Optix On = Total Rendering Time: 4 minutes 1.65 seconds, 2485 iterations + 2515 iterations. (WITH DAZ3D BETA and Nvidia Driver 411.70 : 6 minutes 29.89 seconds - With DAZ 4.10Pro and Nvidia Driver 411.70: 4 minutes 2.39 seconds)

    1 GPU + CPU Optix off = Total Rendering Time: 9 minutes 13.45 seconds, 4585 iterations, CPU 415 iterations

    1 GPU + CPU Optix On = Total Rendering Time: 8 minutes 4.30 seconds, GPU 4618, CPU 382 Iterations 

    2 GPU's + CPU Optix off = Total Rendering Time: 4 minutes 54.20 seconds, GPU 2384 + 2433, CPU 183 iterations

    2 GPU's + CPU Optix On = Total Rendering Time: 4 minutes 21.92 seconds, GPU 2405 + 2435, CPU 160 iterations.
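    Two-card scaling in those runs is close to ideal. A small sketch computing it from the SickleYield OptiX-on results above (the function name is my own):

    ```python
    def scaling_efficiency(single_gpu_s, dual_gpu_s):
        """Second card's contribution as a fraction of a perfect 2x speedup."""
        speedup = single_gpu_s / dual_gpu_s
        return speedup / 2.0

    # SickleYield scene, OptiX on: 1x GTX 1080 = 2m 51.84s, 2x = 1m 27.79s
    print(f"{scaling_efficiency(171.84, 87.79):.1%}")  # ~97.9%
    ```

    Numbers well below 100% would suggest a bottleneck elsewhere (PCIe bandwidth, CPU feeding the cards, or thermal throttling) rather than Iray's work distribution.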

     

    I will return with a new post on Friday when I get my two ASUS 2080 Ti Turbos with NVLink and have installed them in this rig.

  • nicstt Posts: 11,714

    Not seen a 980ti for the new scene yet, so:

    980ti, OptiX on, CPU and GPU (card is for rendering only)

    Total Rendering Time: 7 minutes 29.10 seconds

    CPU only Optix ON: (Threadripper 1950x)

    2018-09-26 19:32:17.258 Total Rendering Time: 23 minutes 31.43 seconds

  • TheTreeInFront Posts: 13
    edited September 2018

    Here's my result:

    Daz Studio 4.11.0.196 Pro Public Build (the released version doesn't render on my 2080ti)
    CPU Intel Core i7 8700K
    Mem DDR4 32G
    Motherboard: Gigabyte Z370 AORUS Gaming 5
    Case Obsidian 750D for the 9 slots and some air space
    OptiX: On

    Specs:
    2080ti Gigabyte Windforce OC 11G GDDR6 GV-N208TWF3OC-11GC   | pcie x8  2 slots width (displayport output)
    1080   Gigabyte G1 Gaming OC  8G GDDR5 GV-N1080G1                    | pcie x8  2 slots width
    1080ti Gigabyte Aorus xtreme 11G GDDR5 GV-N108TAORUS X-11GD | pcie x4  3 slots width

    The 1080 doesn't have much space for air cooling since it's pressed against the backplate of the 1080ti in the last slot,
    so it gets hot fast and stays hot a while after the load is gone. Also, the 1080ti has its PCIe lanes cut to x4; I'm not sure
    of the impact while rendering though, since the textures are already there from the Aux Viewport in Iray before it starts.

    Info from Daz log:
    NVIDIA display driver version: 411.63
    Your NVIDIA driver supports CUDA version up to 10.0; iray requires CUDA version 9.0; all is good.
    Using iray plugin version 4.8-beta, build 302800.3495 n, 14 Jun 2018, nt-x86-64-vc14.
    CUDA device 1 (GeForce GTX 1080 Ti): compute capability 6.1, 11 GiB total, 9.14924 GiB available
    CUDA device 0 (GeForce RTX 2080 Ti): compute capability 7.5, 11 GiB total, 9.05428 GiB available
    CUDA device 2 (GeForce GTX 1080): compute capability 6.1, 8 GiB total, 6.64077 GiB available, display attached

    Strange, as the single display is attached to the 2080ti, not the 1080 as the log says...


    Results (scene from SickleYield):   [edit]NOTE: These results are WRONG because I didn't notice the view had switched to Perspective (where the character is farther away) in the beta, while the dropdown combo still showed Default Camera. So these numbers are meaningless.[/edit]
    Idle                 72W
    1080                187W   2 m 17.93 second
    1080ti              295W   1 m 29.34 seconds
    2080ti              360W       51.53 seconds
    1080+1080ti         410W       56.16 seconds
    1080+2080ti         439W       39.46 seconds
    1080ti+2080ti       561W       34.48 seconds
    1080+1080ti+2080ti  662W       28.65 seconds
    [power was checked with a Tripp-Lite UPS, I took the highest value I saw]
    I'm guessing that when everything is optimized for the ray-tracing part of the 2080ti's chip, the power drawn will be higher.
    I also let it cool down a bit between each run because of my sandwiched-1080 issue.

    CPUID HWMonitor v1.36.0
                 Max Temp  Graphics  Memory
    GTX 1080 Ti    58C    2025 MHz  5103 MHz
    GTX 1080       61C    1696 MHz  5006 MHz
    RTX 2080 Ti    71C    1890 MHz  6800 MHz

    Nothing is overclocked, and from their web site it seems I should see different numbers for the MHz. Maybe HWMonitor is not ready for it.
    The 2080ti should be 1635 max on the core and 14000 for the memory.

    Note again, these numbers are with the public beta. The release version was slower on my 1080 and 1080ti and not working for my 2080ti.

  • outrider42 Posts: 3,679

    Interesting result. Do you have EVGA PrecisionX or MSI Afterburner? Both of these will monitor your GPU data and should report the clockspeed. They also have fan controlling software, which might help keep things cooler by adjusting the fan curve. I run a pretty aggressive fan curve. If the 1080 runs too hot, it may be a good idea to take it out.

    My understanding is the pcie lanes don't have much of an effect on Iray. Somewhere in this thread there are links to a site that tested this. I don't think they went down to 4x though. But you can test this if want to by swapping them around or removing a card.

    @jonascs

    I have one small request if you don't mind. I'd like to compare how fast the two 2080ti's are with NVLink and WITHOUT NVLink. I am quite interested in seeing if the extra bandwidth of NVLink increases rendering speed at all. I think it might give a tiny boost, to where the cards are closer to truly doubling their speed. That would be a fantastic result, even if VRAM stacking is not yet functional. I am pretty certain that something would have to be enabled to make VRAM stacking work, an option probably not available yet.

  • jonascs Posts: 35
    edited September 2018

    Interesting result. Do you have EVGA PrecisionX or MSI Afterburner? Both of these will monitor your GPU data and should report the clockspeed. They also have fan controlling software, which might help keep things cooler by adjusting the fan curve. I run a pretty aggressive fan curve. If the 1080 runs too hot, it may be a good idea to take it out.

    My understanding is the pcie lanes don't have much of an effect on Iray. Somewhere in this thread there are links to a site that tested this. I don't think they went down to 4x though. But you can test this if want to by swapping them around or removing a card.

    @jonascs

    I have one small request if you don't mind. I'd like to compare how fast the two 2080ti's are with NVLink and WITHOUT NVLink. I am quite interested in seeing if the extra bandwidth of NVLink increases rendering speed at all. I think it might give a tiny boost, to where the cards are closer to truly doubling their speed. That would be a fantastic result, even if VRAM stacking is not yet functional. I am pretty certain that something would have to be enabled to make VRAM stacking work, an option probably not available yet.

    I will run the tests with and without NVLink. ;)

    Although, I believe that we might not see any real improvement until the better drivers come and DAZ has proper support for this card. I refer of course to TheTreeInFront's statement that he can only render on the beta version.

    EDIT: I just added the DAZ3D Beta version benchmarks, running on the latest Nvidia drivers, to my previous post, and the DAZ Beta is SLOOOOOOW!

     

  • outrider42 Posts: 3,679
    Yes, you need the beta to run the new cards as the beta adds support for Volta. The Turing cards apparently run on Volta drivers for now.

    Volta does have Tensor, so the Tensor cores should be enabled. I wonder what impact they have on the AI Denoising feature that the new Daz Beta has. You can use the new denoiser right now, any gpu will run it. However, I would assume that Tensor cores that were especially made for this task should be faster. But not only faster, they might perform the denoising better. A common complaint from those who have tried the new denoiser is that it kills fine detail in skin textures. So I wonder...would the Tensor cores possibly do this with more detail? Has anybody tested the new denoiser with Turing card yet?

    Iray sometimes changes how it calculates the scene convergence, and this can impact render times ( that's also why I made my benchmark set for 5000 iteration limit to avoid this). But I don't remember if people were saying the beta was that much slower. I suggest checking out the beta thread in the forum and seeing what people are saying. If nobody else has the issue, make a post about it to see if someone can help, or open a support ticket.
  • jonascs Posts: 35
    edited September 2018

    Two ASUS RTX 2080TI Turbo's No NVLink 2018 benchmark scene: Total Rendering Time: 2 minutes 46.70 seconds.

    On the same computer, same settings and also on the DAZ 4.11 Beta, with two ASUS GTX 1080 STRIX it rendered in: 6 minutes 29.89 seconds

    Now restarting and adding the NVLink.

     

    EDIT: Restarted and couldn't get Windows to work with the NVLink connected. It starts loading, but just as I'm about to log in to Windows, at the loading screen, everything goes black and my keyboard goes dark too (mine is otherwise lit).

    Will troubleshoot.

    Unfortunately I'm going away on Monday, so I have a limited time and a full schedule.

  • ZarconDeeGrissom Posts: 5,412
    edited September 2018

    Ye got Iray driver and Daz Studio support that fast. Much better than the months after the GTX10 launch, if this is true. I expected basic, barely functional RTX20 support around December or later; I am impressed. I do wonder if this is just basic CUDA support, or if the RT cores are used at all to make Iray faster.

    I don't know what strings DazSpooky and crew pulled to make that happen this fast; them guys rock!

  • outrider42 Posts: 3,679

    Ye got Iray driver and Daz Studio support that fast. Much better than the months after the GTX10 launch, if this is true. I expected basic, barely functional RTX20 support around December or later; I am impressed. I do wonder if this is just basic CUDA support, or if the RT cores are used at all to make Iray faster.

    I am pretty certain that Turing is running off Volta drivers as Daz and the Iray plug in have not been updated recently. Volta has Tensor, but not ray tracing. Daz has a 2080, and posted in a Turing thread about it.
  • ZarconDeeGrissom Posts: 5,412
    edited September 2018

    That is really cool. I remember Mec4D and others getting Titan Xp (GTX10 series) cards and them not being usable at all for months until basic Iray drivers happened. I am really happy this is not so this time; even if it is Titan V mode, it's much better than the cards being expensive doorstops, lol.

    I have a few posts elsewhere I need to edit and eat my words on, and I am happy to be wrong. Thanks, Daz.

  • Still haven't got the NVLink to work. The issue is in Nvidia and their support's hands ATM.

    I can however provide a new benchmark of SickleYields Test Scene with  Two RTX 2080Ti with Optix ON: Total Rendering Time: 34.2 seconds

  • jonascs said:

    Still haven't got the NVLink to work. The issue is in Nvidia and their support's hands ATM.

    I can however provide a new benchmark of SickleYields Test Scene with  Two RTX 2080Ti with Optix ON: Total Rendering Time: 34.2 seconds

    Did you happen to run it with a single 2080ti to compare?

  • jonascs said:

    Still haven't got the NVLink to work. The issue is in Nvidia and their support's hands ATM.

    I can however provide a new benchmark of SickleYields Test Scene with  Two RTX 2080Ti with Optix ON: Total Rendering Time: 34.2 seconds

    Did you happen to run it with a single 2080ti to compare?

    There was a bench posted a few posts up for a single 2080Ti:

    2080ti              360W       51.53 seconds

  • jonascs Posts: 35
    edited September 2018

    Single Asus 2080 TI Turbo 2018 outrider42's test scene: Total Rendering Time: 5 minutes 38.78 seconds

    Two ASUS RTX 2080TI Turbo's No NVLink 2018 benchmark scene: Total Rendering Time: 2 minutes 46.70 seconds.

    SickleYield Test scene Single TI Turbo: Total Rendering Time: 1 minutes 16.79 seconds Optix on out of the box settings.

    Two RTX 2080Ti with Optix ON: Total Rendering Time: 34.2 seconds

  • jonascs said:

    Still haven't got the NVLink to work. The issue is in Nvidia and their support's hands ATM.

    I can however provide a new benchmark of SickleYields Test Scene with  Two RTX 2080Ti with Optix ON: Total Rendering Time: 34.2 seconds

    Did you happen to run it with a single 2080ti to compare?

    There was a bench posted a few posts up for a single 2080Ti..

    2080ti              360W       51.53 seconds

    I was wanting jonascs to compare with his dual 2080ti, which he has since responded to.
