Daz Studio Iray - Rendering Hardware Benchmarking


Comments

  • AelfinMaegik Posts: 47
    edited July 2019
    RayDAnt said:

     

    Aala said:

    (Just upgraded from 2x 1080 Ti's to 2x 2080 Ti's, did some benchmarks)

    Benchmark Results - 2x Gigabyte RTX 2080 Ti Gaming OC 11G
    Daz Studio Version: 4.12.0.33 Public Beta
    Optix Prime Acceleration: ON
    Total Rendering Time: 2 minutes 22.0 seconds
    CUDA device 0 (GeForce RTX 2080 Ti): Scene processed in 7.877s (First time render), 139.184s render
    CUDA device 1 (GeForce RTX 2080 Ti): Scene processed in 7.880s (First time render), 139.184s render

    Benchmark Results - 2x Gigabyte RTX 2080 Ti Gaming OC 11G
    Daz Studio Version: 4.12.0.33 Public Beta
    Optix Prime Acceleration: OFF
    Total Rendering Time: 2 minutes 17.11 seconds
    CUDA device 0 (GeForce RTX 2080 Ti): Scene processed in 1.992s (Second time render), 135.201s render
    CUDA device 1 (GeForce RTX 2080 Ti): Scene processed in 1.992s (Second time render), 135.201s render

    WOW! This really got me wondering, since Prime off was faster. I downloaded 4.12.0.42, turned NVLINK/SLI to OFF, and tried this, too. I'm really surprised. I am running air-cooled, non-OC, and I'm shocked at these numbers. Why is Prime off faster? (I am kinda peeved at myself - I changed two variables here... I updated the beta version AND I turned off NVLink/SLI.)

    Daz Studio Version: 4.12.0.42 BETA
    Optix Prime Acceleration: ON

    Benchmark Results
    DAZ_STATS
    2019-07-30 21:54:57.007 Finished Rendering
    2019-07-30 21:54:57.042 Total Rendering Time: 2 minutes 18.77 seconds
    IRAY_STATS
    2019-07-30 21:55:31.063 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2019-07-30 21:55:31.063 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti): 882 iterations, 4.569s init, 129.627s render
    2019-07-30 21:55:31.063 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 2080 Ti): 918 iterations, 4.755s init, 128.987s render
    Iteration Rate: (1800 / 129.627) = 13.886 iterations per second
    Loading Time: ((0 * 3600 + 2 * 60 + 18.77) - 129.627) = 9.143 seconds

     

    Daz Studio Version: 4.12.0.42 BETA
    Optix Prime Acceleration: OFF

    Benchmark Results
    DAZ_STATS
    2019-07-30 22:08:00.921 Finished Rendering
    2019-07-30 22:08:00.956 Total Rendering Time: 2 minutes 12.65 seconds
    IRAY_STATS
    2019-07-30 22:08:17.102 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2019-07-30 22:08:17.102 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti): 883 iterations, 4.572s init, 123.675s render
    2019-07-30 22:08:17.102 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 2080 Ti): 917 iterations, 4.722s init, 122.562s render
    Iteration Rate: (1800 / 123.675) = 14.554 iterations per second
    Loading Time: ((0 * 3600 + 2 * 60 + 12.65) - 123.675) = 8.975 seconds
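
    (Side note for anyone reproducing these figures: the Iteration Rate and Loading Time lines are computed as in this minimal Python sketch, using the thread's conventions of 1800 total iterations and the slowest device's render time. Function names are just illustrative.)

    # Minimal sketch of the benchmark arithmetic used in this thread.
    TOTAL_ITERATIONS = 1800  # fixed iteration count of the benchmark scene

    def iteration_rate(render_seconds: float) -> float:
        """Iterations per second over the slowest device's render time."""
        return TOTAL_ITERATIONS / render_seconds

    def loading_time(hours: int, minutes: int, seconds: float,
                     render_seconds: float) -> float:
        """Wall-clock 'Total Rendering Time' minus the device render time."""
        return (hours * 3600 + minutes * 60 + seconds) - render_seconds

    # Values from the 4.12.0.42 "Prime ON" run above:
    render = 129.627
    print(f"Iteration Rate: {iteration_rate(render):.3f} it/s")        # ~13.886
    print(f"Loading Time: {loading_time(0, 2, 18.77, render):.3f} s")  # ~9.143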

    Yeah, this is extremely interesting.

    Logically neither you nor Aala should be seeing any discernible difference in render times between OptiX Prime on vs. off tests here because, in the case of rendering with only RTX GPUs on Daz Studio 4.12+, OptiX Prime is disabled internally by Iray and all raytracing is farmed directly out to the RT cores on the cards themselves. Currently the only programmatic difference between OptiX Prime checked/unchecked with 20XX cards in 4.12+ is that one state pops up errors in the log file about OptiX Prime being improperly called for when it isn't supported, versus a clean log file with no extraneous error messages (renders still finish successfully either way). At least that's been the case with my Titan RTX under 4.12.0.033. Could you post log files (at least the portions starting after "Rendering image") for both an OptiX Prime on and off test run? I'm gonna try doing the same with my Titan RTX tomorrow on 4.12.0.042 as well, and would love to be able to compare my logs to yours.

    Btw, could you also try OptiX Prime on vs off with just a single 2080Ti activated and see if the same patterns hold? It could be that what both you and Aala are seeing here is a multi-GPU thing. Specifically, I'm wondering if the current DS/Iray version has a bug where having more than one RTX card activated at once causes Iray to revert back to its pre-RTX-era behavior, where OptiX Prime is used rather than RT cores for raytracing. Meaning that there may be a DS/Iray bugfix in the near future that could net both you and Aala a noticeable speedup in dual 2080Ti rendering scenarios regardless of whether OptiX Prime is checked on or off.

    Log attached from the runs I posted earlier. I also saw the errors I think you mean, but wasn't sure what they meant:

    2019-07-30 22:07:59.326 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(305): Iray [WARNING] - IRAY:RENDER ::   1.0   IRAY   rend warn : The 'iray_optix_prime' scene option is no longer supported.

     

    EDIT: just saw your request for single ON/OFF --- really need to hit the sack before work tomorrow but will try after (~5pm MST)

    Attachment: log.txt (150K)
  • ebergerly Posts: 3,255

    https://www.reddit.com/r/Daz3D/ ?

    Maybe? Not sure. Maybe I don't actually want to post with my normal account on that.

    Good idea. I just signed up for that subreddit and posted a question about RTX. Looks like a great alternative. Hopefully we can build a nice tech community there. Thx.
  • The Daz Reddit is a ghost town. Last post was 4 DAYS ago, and scrolling just a few posts takes you 27 days back. It is a shame that Daz has to be like the NFL...No Fun League.

    I hear ya. But that doesn't prevent you guys from chatting about this. Or anything else you want. Consider it a small noise-to-signal ratio.

    ebergerly said:

    https://www.reddit.com/r/Daz3D/ ?

    Maybe? Not sure. Maybe I don't actually want to post with my normal account on that.

     

    Good idea. I just signed up for that subreddit and posted a question about RTX. Looks like a great alternative. Hopefully we can build a nice tech community there. Thx.

    Great. Good to see you and RayDAnt there. I'll have to decide whether to make a 2nd account now; my normal account is a bit too... right of center politically ;)

  • RayDAnt Posts: 1,120
    edited July 2019

    Here's what I got doing an OptiX Prime off vs on test in 4.12.0.042 with my single Titan RTX (log file from both runs attached):

    OptiX Prime Acceleration: Off
    Total Rendering Time: 4 minutes 4.35 seconds
    CUDA device 0 (TITAN RTX): 1800 iterations, 3.200s init, 238.376s render
    Iteration Rate: (1800 / 238.376) = 7.551 iterations per second
    Loading Time: ((0 + 240 + 4.35) - 238.376) = 244.35 - 238.376 = 5.974 seconds

    OptiX Prime Acceleration: On
    Total Rendering Time: 4 minutes 4.36 seconds
    CUDA device 0 (TITAN RTX): 1800 iterations, 3.237s init, 238.519s render
    Iteration Rate: (1800 / 238.519) = 7.547 iterations per second
    Loading Time: ((0 + 240 + 4.36) - 238.519) = 244.36 - 238.519 = 5.841 seconds

    In other words, identical results (within margin of error) for either OptiX Prime on or off. So not at all the same behavior both you and Aala seem to be getting with your dual card setups. Am definitely looking forward to seeing what your single card results indicate.

     

    BTW  I should've mentioned this earlier, but there seems to be a bug with at least some DS 4.12 Beta installations where the OptiX Prime on/off toggle box is doing the opposite of what it's supposed to (I only get warnings about the no longer supported "iray_optix_prime" scene option being active if I have the box unchecked.) So in the log file you posted, the first run was with OptiX Prime off and the second run was with it on regardless of what the toggle box in Daz Studio was indicating (fwiw this doesn't change the seemingly anomalous nature of your dual card results.)

    Attachment: OptiX Prime Off vs On.txt (169K)
  • outrider42 Posts: 3,679

    It's funny we were talking about "betas" recently, because this particular Daz beta seems to be, well, very much a beta. Things are wonky. OptiX Prime is still there... but it's not? Funny errors, possible multi-GPU issues - there is a laundry list of things in the 4.12 beta that are of "beta" quality, more than previous betas.

    One thing I would like to test is mesh lighting. The issue with mesh lighting was that the more complicated the mesh, the vastly longer the render took. It is such an issue that I tend to avoid mesh lights for that reason; they are slow. That sounds a lot like the discussion over ray tracing cores increasing performance with geometry. So I think it would be a good thing to test: creating mesh lighting with many polygons and seeing the difference between the different versions of Daz. I think we would see more of the benefits of RTX with mesh lighting than with other lighting.


    I suppose reddit is worth a shot. It's a well-established place, so nobody has to create a website or anything. And if we go there, then maybe it will come back to life.


  • @RayDAnt Sorry sir, this is later than I expected. Attaching the raw log; I can calc the numbers in an hour or two if you want, but I'm doing some other stuff atm. I think these single-card numbers are interesting because they show no difference between OptiX on/off from a stats point of view. Also, what you mentioned about the on/off bug could be true: my previous log was ON first, OFF second, which matches the opposite theory.

    Attachment: log.txt (213K)
  • RayDAnt Posts: 1,120
    edited September 2019

    System Configuration
    System/Motherboard: Microsoft Surface Book 2
    CPU: Intel i7-8650U @ stock
    GPU: Nvidia GTX 1050 2GB @ stock
    System Memory: 16GB DDR3 @ 1867Mhz
    OS Drive: Samsung OEM 512GB NVME SSD
    Asset Drive: Sandisk Extreme 1TB External SSD
    Operating System: W10 version 1903
    Nvidia Drivers Version: 431.60 GRD WDDM

    Benchmark Results

    Daz Studio Version: 4.11.0.383 x64
    Optix Prime Acceleration: On
    Total Rendering Time: 48 minutes 25.40 seconds
    CUDA device 0 (GeForce GTX 1050): 1800 iterations, 5.056s init, 2897.136s render
    Iteration Rate: (1800 / 2897.136) = 0.621 iterations per second
    Preload Time: ((0 + 2880 + 25.40) - 2897.136) = (2905.40 - 2897.136) = 8.264 seconds

    Optix Prime Acceleration: Off
    Daz Studio Version: 4.11.0.383 x64
    Total Rendering Time: 55 minutes 49.78 seconds
    CUDA device 0 (GeForce GTX 1050): 1800 iterations, 5.931s init, 3340.848s render
    Iteration Rate: (1800 / 3340.848) = 0.539 iterations per second
    Loading Time: ((0 + 3300 + 49.78) - 3340.848) = (3349.78 - 3340.848) = 8.932 seconds

    Optix Prime Acceleration: On
    Daz Studio Version: 4.12.0.042 x64
    Total Rendering Time: 39 minutes 13.58 seconds
    CUDA device 0 (GTX 1050): 1800 iterations, 4.792s init, 2345.589s render
    Iteration Rate: (1800 / 2345.589) = 0.767 iterations per second
    Loading Time: ((0 + 2340 + 13.58) - 2345.589) = (2353.58 - 2345.589) = 7.991 seconds

    Daz Studio Version: 4.12.0.042 x64
    Optix Prime Acceleration: Off
    Total Rendering Time: 39 minutes 9.32 seconds
    CUDA device 0 (GeForce GTX 1050): 1800 iterations, 5.543s init, 2340.262s render
    Iteration Rate: (1800 / 2340.262) = 0.769 iterations per second
    Preload Time: ((0 + 2340 + 9.32) - 2340.262) = (2349.32 - 2340.262) = 9.058 seconds

     

    Was looking through the log files for my OptiX Prime on vs off runs in 4.12.0.042 and noticed the following for OptiX Prime On:

    2019-08-01 01:11:28.132 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Updating decals.
    2019-08-01 01:11:28.132 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(305): Iray [WARNING] - IRAY:RENDER ::   1.0   IRAY   rend warn : The 'iray_optix_prime' scene option is no longer supported.
    2019-08-01 01:11:28.142 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Allocating 1-layer frame buffer
    2019-08-01 01:11:28.148 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Using batch scheduling, caustic sampler disabled
    2019-08-01 01:11:28.148 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Initializing local rendering.
    2019-08-01 01:11:28.189 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Rendering with 1 device(s):
    2019-08-01 01:11:28.189 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : 	CUDA device 0 (GeForce GTX 1050)
    2019-08-01 01:11:28.194 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Rendering...
    2019-08-01 01:11:28.194 Iray [VERBOSE] - IRAY:RENDER ::   1.2   IRAY   rend progr: CUDA device 0 (GeForce GTX 1050): Processing scene...
    2019-08-01 01:11:28.212 Iray [INFO] - IRAY:RENDER ::   1.8   IRAY   rend info : Using OptiX Prime version 5.0.1
    2019-08-01 01:11:28.212 Iray [INFO] - IRAY:RENDER ::   1.8   IRAY   rend info : Initializing OptiX Prime for CUDA device 0
    2019-08-01 01:11:28.213 Iray [VERBOSE] - IRAY:RENDER ::   1.8   IRAY   rend stat : Geometry memory consumption: 35.0846 MiB (device 0), 0 B (host)
    2019-08-01 01:11:32.272 Iray [VERBOSE] - IRAY:RENDER ::   1.8   IRAY   rend stat : Texture memory consumption: 313.772 MiB for 38 bitmaps (device 0)

    Versus the following for OptiX Prime Off:

    2019-08-01 00:29:02.876 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Updating decals.
    2019-08-01 00:29:02.896 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Allocating 1-layer frame buffer
    2019-08-01 00:29:02.907 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Using batch scheduling, caustic sampler disabled
    2019-08-01 00:29:02.908 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Initializing local rendering.
    2019-08-01 00:29:03.124 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Rendering with 1 device(s):
    2019-08-01 00:29:03.124 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : 	CUDA device 0 (GeForce GTX 1050)
    2019-08-01 00:29:03.124 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Rendering...
    2019-08-01 00:29:03.130 Iray [VERBOSE] - IRAY:RENDER ::   1.4   IRAY   rend progr: CUDA device 0 (GeForce GTX 1050): Processing scene...
    2019-08-01 00:29:03.152 Iray [VERBOSE] - IRAY:RENDER ::   1.2   IRAY   rend stat : Geometry memory consumption: 35.0846 MiB (device 0), 0 B (host)
    2019-08-01 00:29:03.153 Iray [INFO] - IRAY:RENDER ::   1.2   IRAY   rend info : Using OptiX Prime version 5.0.1
    2019-08-01 00:29:03.153 Iray [INFO] - IRAY:RENDER ::   1.2   IRAY   rend info : Initializing OptiX Prime for CUDA device 0
    2019-08-01 00:29:07.585 Iray [VERBOSE] - IRAY:RENDER ::   1.2   IRAY   rend stat : Texture memory consumption: 313.772 MiB for 38 bitmaps (device 0)

     

    Notice how OptiX Prime is listed as being used in both cases (including the one with the warning about the "iray_optix_prime" scene option being no longer supported). This indicates that with this version of DS/Iray, OptiX Prime is now always used in situations where one or more GTX (which is to say, pre-RTX) GPUs are being used for a render. Meaning that the OptiX Prime Acceleration On/Off toggle in this version of Daz Studio serves no actual purpose (has no programmatic effect on what goes on under the hood) and is likely to be removed from the Daz Studio interface entirely in a future update.
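
    If you want to check your own logs for the same markers, here's a minimal Python sketch; the "log.txt" path is a placeholder for wherever your own Daz Studio log lives:

    import re

    # Quick filter for a Daz Studio log file: print every line that reveals
    # whether Iray actually engaged OptiX Prime, or warned about the obsolete
    # 'iray_optix_prime' option.
    PATTERN = re.compile(r"Using OptiX Prime|Initializing OptiX Prime|iray_optix_prime")

    with open("log.txt", encoding="utf-8", errors="replace") as log:  # placeholder path
        for line in log:
            if PATTERN.search(line):
                print(line.rstrip())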

  • Visuimag Posts: 549
    edited August 2019

    Yeah, I'm still on Titan Xp(s) and the situation is the same (OptiX ON vs OFF), especially in 4.12.0.42 (4.12.0.33 was closer to 4.10 with Prime on for me).

  • outrider42 Posts: 3,679

    I have a thought. While Prime is disabled for a single RTX GPU, perhaps it is possible to enable it with multiple GPUs, in spite of what the event log says. Turning on Prime for an RTX card SHOULD mean that the ray tracing cores are disabled, thus giving us a way to test ray tracing core performance. Maybe that is what we are seeing. The reason it works with multiple GPUs could be that it is possible to mix GPUs of different generations. Somebody has posted a 1080Ti+2080Ti time in the original SY bench thread.

    I think the option is bugged. It should be possible to disable Prime for non-RTX cards, while enabling it for RTX.

  • RayDAnt Posts: 1,120

    All result tables are currently fully updated for those interested.

     

    I think the option is bugged. It should be possible to disable Prime for non RTX cards, while enabling it for RTX.

    Can't find it now of course, but somewhere - maybe in the last pre-RTX version of the Iray programmer's manual - I remember reading that in multi-GPU situations all ray-tracing calculations were only ever performed on a single GPU (whichever one was deemed most powerful based on a relatively simple performance assessment like clock rate * multi-processor (aka SM) count). Given the slightly unusual results some have posted here - plus the fact that there is indeed presently a confirmed bug (just recently got an official response about my bug report from the powers that be - yay!) with how the OptiX Prime acceleration checkbox is/isn't functioning as of 4.12.0.047 - I highly suspect there is still a fair amount of optimization/non-critical bugginess to be worked out of both Iray and Daz Studio in the wake of RTX capabilities, especially on multi-card setups.

  • Dim Reaper Posts: 687
    edited August 2019

    This thread has gone a bit quiet, so I hope that adding more data is still relevant. 
    I just added an upgrade to my machine today in the form of an RTX 2080Ti.  I found this thread to be useful in making the decision to buy, as the "Iteration Rate" gives a nice number to compare different cards or combinations of cards.

    I've run some more benchmarks with the new card, shown below - I haven't included those posted previously for the single 1080TI, single 980Ti and both together as they are already in the tables on the first page.

    EDIT:  Forgot to add the system information:
    System Configuration
    System/Motherboard: ASUS X99-S
    CPU: Intel i7 5960X , 3GHz
    System Memory: 32GB KINGSTON HYPER-X PREDATOR QUAD-DDR4
    OS Drive: Samsung M.2 SSD 960 EVO 250GB
    Asset Drive: 2TB WD CAVIAR BLACK  SATA 6 Gb/s, 64MB CACHE (7200rpm)
    Operating System: Windows 10.0.17134 Build 17134 (1803)
    Nvidia Drivers Version: 431.60

     

    Benchmark Results – 2080Ti Only
    Daz Studio Version: 4.11.0.383 64-bit
    Optix Prime Acceleration: Yes
    Total Rendering Time: 4 minutes 58.20 seconds
    CUDA device 0 (GeForce RTX 2080 Ti): 1800 iterations, 3.716s init, 290.554s render
    Iteration Rate: (1800/290.554) = 6.195 iterations per second
    System Overhead: ((0+240+58.2)-290.554) = 7.646 seconds (EDITED after mistake pointed out - thank you RayDAnt)

    Benchmark Results – 1080Ti + 2080Ti
    Daz Studio Version: 4.11.0.383 64-bit
    Optix Prime Acceleration: Yes
    Total Rendering Time: 3 minutes 17.30 seconds
    CUDA device 0 (GeForce RTX 2080 Ti): 1176 iterations, 4.250s init, 189.098s render
    CUDA device 1 (GeForce GTX 1080 Ti): 624 iterations, 4.290s init, 188.567s render
    Iteration Rate: (1800/189.098) = 9.519 iterations per second
    System Overhead: ((0+180+17.3)-189.098) = 8.202 seconds

    Benchmark Results – 2080Ti Only
    Daz Studio Version: 4.12.0.33 64-bit
    Optix Prime Acceleration: No
    Total Rendering Time: 4 minutes 25.3 seconds (EDITED due to entry mistake)
    CUDA device 0 (GeForce RTX 2080 Ti): 1800 iterations, 3.726s init, 257.470s render
    Iteration Rate: (1800/257.47) = 6.991 iterations per second
    System Overhead: ((0+240+25.3)-257.47) = 7.830 seconds

    Benchmark Results – 1080Ti + 2080Ti
    Daz Studio Version: 4.12.0.33 64-bit
    Optix Prime Acceleration: No
    Total Rendering Time: 2 minutes 58.70 seconds
    CUDA device 0 (GeForce RTX 2080 Ti): 1161 iterations, 5.409s init, 167.678s render
    CUDA device 1 (GeForce GTX 1080 Ti): 639 iterations, 6.206s init, 167.820s render
    Iteration Rate: (1800/167.82) = 10.726 iterations per second
    System Overhead: ((0+120+58.7)-167.82) = 10.880 seconds

     

    What I've found interesting (and useful) is that if you look at different cards in the tables on the first page of the thread, adding the iteration rates for two single cards gives a reasonable approximation of the result for those two cards actually rendering together.

    E.g., for DS 4.12.0.33:

    980Ti: 2.587
    1080Ti: 3.934
    Added together for estimate: 6.5
    Actual result for 980Ti+1080Ti: 6.4


    1080Ti: 3.934
    2080Ti: 6.785
    Added together for estimate: 10.72
    Actual result for 1080Ti+2080Ti: 10.73

    It might not give an exact number, but for someone thinking of upgrading and still keeping the old card in their system, it seems to be a good estimation.
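
    As a quick sanity check, the estimate really is just addition; a toy Python sketch using the single-card rates quoted above (the dictionary and function names are just illustrative):

    # Estimate a multi-GPU iteration rate by summing single-card rates
    # from the first-page tables (DS 4.12.0.33 numbers quoted above).
    single_rates = {"980Ti": 2.587, "1080Ti": 3.934, "2080Ti": 6.785}

    def estimate_combined(*cards: str) -> float:
        return sum(single_rates[c] for c in cards)

    print(round(estimate_combined("980Ti", "1080Ti"), 2))   # 6.52 (measured: 6.4)
    print(round(estimate_combined("1080Ti", "2080Ti"), 2))  # 10.72 (measured: 10.73)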

  • RayDAnt Posts: 1,120
    edited August 2019

    What I've found interesting (and useful) is that if you look at different cards in the tables on the first page of the thread, adding the iteration rate for two single cards does give a reasonable approximation of the result for those two cards actually rendering together.


    I know people love to hate on Iray, but honestly the more I learn about its inner workings (and how it fits into Nvidia's wider professional enterprise-level software platform) the more respect I have for it. I mean, how many current game engines/graphics rendering suites can you name that feature almost perfect multi-GPU/CPU performance scaling with modern computing workloads? I'm sure there must be at least a few out there, but you certainly don't hear anyone bragging about it. At least not on the consumer/prosumer level.

  • Asari Posts: 703
    The info on two cards and scaling is very interesting. For now I'm happy with the performance of my 2080Ti, especially with RTX support in Iray. Once the next GPU generation rolls around, should I decide to upgrade, I would definitely consider keeping the 2080Ti.
  •  

    What I've found interesting (and useful) is that if you look at different cards in the tables on the first page of the thread, adding the iteration rate for two single cards does give a reasonable approximation of the result for those two cards actually rendering together.

    I have a 1080Ti, and am thinking about adding the 2080Ti - what power supply size are you using to drive both cards comfortably?

  • Dim Reaper Posts: 687
    I have a 1080Ti, and am thinking about adding the 2080Ti - what power supply size are you using to drive both cards comfortably?

    I'm using a Corsair HX1200. Possibly more power than is needed, but I also run quite a few HDDs and a power-hungry CPU with a liquid cooler, so I wanted some overhead.

     

  • mumia76 Posts: 146

    System Configuration
    System/Motherboard: ASUS ROG STRIX X570-F Gaming
    CPU: AMD Ryzen™ 7 3700X @ 4350Mhz across all cores
    GPU0: ZOTAC GeForce® GTX 1080 Mini ZT-P10800H-10P @ stock
    GPU1: MSI GeForce® GTX 1080 ARMOR 8G OC @ stock
    System Memory: Kingston HyperX Predator 32GB (2x16GB) DDR4 2666MHz HX426C13PB3K2/32 @ 2866 14-14-14
    OS Drive: Adata XPG GAMMIX S5 PCIe Gen3x4 M.2 2280 SSD 512GB
    Asset Drive: Samsung SSD 850 EVO SATA III 2.5 inch MZ-75E2T0B/EU 2TB
    Operating System: Windows 10 Professional 64-bit 1903
    Nvidia Drivers Version: 431.70
    Daz Studio Version: 4.11.0.383
    Optix Prime Acceleration: ON

    Benchmark Results GPU only
    Total Rendering Time: 6 minutes 42.84 seconds
    CUDA device 1 (GeForce GTX 1080):      905 iterations, 2.132s init, 398.137s render
    CUDA device 0 (GeForce GTX 1080):      895 iterations, 2.112s init, 398.312s render

    Iteration Rate: 2.273 + 2.247 = 4.520 iterations per second
    Loading Time: 4.528 seconds


    Benchmark Results CPU only
    Total Rendering Time: 47 minutes 30.19 seconds
    CPU:      1800 iterations, 5.996s init, 2841.804s render

    Iteration Rate: 0.633 iterations per second
    Loading Time: 8.386 seconds
     

    Benchmark Results CPU + GPU
    Total Rendering Time: 6 minutes 10.75 seconds
    CUDA device 1 (GeForce GTX 1080):      806 iterations, 2.119s init, 366.248s render
    CUDA device 0 (GeForce GTX 1080):      792 iterations, 2.083s init, 365.674s render
    CPU:      202 iterations, 1.558s init, 366.407s render

    Iteration Rate: 2.201 + 2.166 + 0.551 = 4.918 iterations per second
    Loading Time: 4.343 seconds

  • mumia76 Posts: 146

    Individual GPU results:

    Benchmark Results GPU0 (Zotac)
    Total Rendering Time: 13 minutes 4.18 seconds
    CUDA device 0 (GeForce GTX 1080):      1800 iterations, 1.871s init, 779.848s render

    Iteration Rate: 2.308 iterations per second
    Loading Time: 4.332 seconds

     

    Benchmark Results GPU1 (MSI)
    Total Rendering Time: 12 minutes 42.16 seconds
    CUDA device 1 (GeForce GTX 1080):      1800 iterations, 1.811s init, 757.945s render

    Iteration Rate: 2.375 iterations per second
    Loading Time: 4.215 seconds

     

  • RayDAnt Posts: 1,120
    edited August 2019

    Seems there's a noticeable performance increase in Iray rendering stemming from the latest Nvidia driver release 436.02 (DS 4.12.0.047 and 4.12.0.060 have exactly the same Iray version, so it isn't that):

    System Configuration
    System/Motherboard: Gigabyte Z370 Aorus Gaming 7
    CPU: Intel i7-8700K @ stock (MCE enabled)
    GPU: Nvidia Titan RTX @ stock (watercooled)
    System Memory: Corsair Vengeance LPX 32GB DDR4 @ 3000Mhz
    OS Drive: Samsung Pro 970 512GB NVME SSD
    Asset Drive: Sandisk Extreme Portable SSD 1TB
    Operating System: Windows 10 Pro version 1903 build 18362.295
    Nvidia Drivers Version: 436.02 GRD
    Daz Studio Version: 4.12.0.060 Beta x64
    Optix Prime Acceleration: On*

    Benchmark Results - Titan RTX Only (TCC Mode)
    Total Rendering Time: 3 minutes 49.40 seconds
    CUDA device 0 (TITAN RTX): 1800 iterations, 2.398s init, 224.474s render
    Iteration Rate: (1800 / 224.474) = 8.019 iterations per second
    Loading Time: ((0 + 180 + 49.40) - 224.474) = (229.40 - 224.474) = 4.926 seconds

    Benchmark Results - Titan RTX Only (WDDM Mode)
    Total Rendering Time: 4 minutes 10.14 seconds
    CUDA device 0 (TITAN RTX): 1800 iterations, 2.395s init, 245.123s render
    Iteration Rate: (1800 / 245.123) = 7.343 iterations per second
    Loading Time: ((0 + 240 + 10.14) - 245.123) = (250.14 - 245.123) = 5.017 seconds

     

     

  • JD_Mortal Posts: 758
    edited January 2020

    UPDATE: Again... Noticed I had the wrong core-count for my CPU. (18/36 not 16/32)

    UPDATE: Again... I added the new numbers for the new nVidia drivers and updated some computer info (PCIe slot and lanes, and the correct CPU speed). I ran additional benchmarks for TCC mode, but only single and dual Titan-V cards in TCC mode. For the Titan-Xp Col Edi, I only have one in TCC mode, because the other is my video display. Both updates are for the 4.11 and 4.12-Beta versions of Daz3D.

    UPDATE: I managed to cram all four cards in one case! I just had to break the intake guard off one of the Titan Xp Collectors Edition cards and bend one mounting-tab over, on the other one. (They wouldn't fit next to one another or within the last slot, otherwise.) So, I am testing Titan-V (2x) and Titan-Xp Collectors edition (2x) cards.

    For what it's worth, my Titan-V cards seem to have the same exact results as the Titan-RTX cards, for obvious reasons. That is reassuring and a little depressing at the same time. :) RTX cards are cheaper and have RTX, with the bonus of having more memory... I should have just waited for RTX to be invented.

    However, my cards don't seem to be any better, and are actually a little worse, in TCC mode, on prior benchmarks. (Might have to do with Windows having finally fixed WDDM mode. That, or TCC mode just offers little for the original Titan/Volta {non-RTX} cards.) Again, I'm a little sad about that.

    System Configuration
    System/Motherboard: ASRock x299 OC Formula (PCIe 8x/8x/8x/8x speeds)
    CPU: Intel i9-7980XE 3.80Ghz, 18 core (36 threads), 44 PCIe lanes @ Stock (Water cooled)
    GPU: Nvidia Titan V @ stock (5120 cuda cores)
    GPU: Nvidia Titan V @ stock (5120 cuda cores)
    GPU: Nvidia Titan Xp Collectors Edition @ stock (3840 cuda cores)
    GPU: Nvidia Titan Xp Collectors Edition @ stock (3840 cuda cores)
    System Memory: Corsair Dominator Platinum 64GB DDR4 @ 2133Mhz
    OS Drive: Samsung 960 PRO 2TB NVMe SSD M.2
    Asset Drive: Samsung 850 EVO 4TB V-NAND SSD
    Operating System: Windows 10 Home (64-bit) version 1903 build 18362.295
    Nvidia Drivers Version: 436.15 GRD

    NOTE: Total system power is the power measured at the wall with a "Kill-A-Watt" meter. This includes the total power of the CPU, GPU, drives, RAM, fans, water cooler, and power-supply losses. I.e., it is not a measure of any single specific thing; it is the total potential cost to render using the entire computer. The actual energy consumed must be calculated from the time it took to render: [watts] / [3600 seconds in an hour] * [seconds to render] = [total watt-hours consumed]. Multiply the consumed watt-hours by the number of pictures per day/week/month/year you intend to render, and that is what your electric consumption will be. Thus, it is not better to have two cheap cards that render as fast as one card does, as you consume 2x more power, which will cost you more than you saved when buying two cheaper cards instead of just one. (You also have to include AC cooling costs: roughly 1.5x what it costs to generate heat of the same wattage.)

    NOTE: Each Titan-V pulls about 100 watts when rendering. Each Titan-Xp Col Edi card pulls about 200 watts. The running system pulls about 160 watts. Thus, 100 + 100 + 200 + 200 + 160 = 760 watts, which is the power draw with all four cards rendering. The Titan-V is 2x faster and runs at half the power of the Titan-Xp Col Edi cards. You can fit up to 8 cards in one system, but Windows will only see 4 of the same card type. (Iray/Daz3D may see them all if at least 4 are in TCC mode.) Thus, 8x Titan-Vs would pull a total of 800+160 = 960 watts. 4x Titan-Xp Col Edi would pull the same power, but be 1/4 the speed. Both systems could easily be run off of a 1200 watt power supply. I just wouldn't trust that system to play games, as the cards draw nearly 2x more power when gaming.
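
    To make the notes above concrete, here is a minimal Python sketch of the watt-hour arithmetic; the $0.11/kWh rate and the 1.5x cooling multiplier are the assumptions used later in this post, and the function names are just illustrative:

    # Energy per render, per the convention in the notes above:
    # watts at the wall * render seconds / 3600 = watt-hours consumed.
    def energy_wh(wall_watts: float, render_seconds: float) -> float:
        return wall_watts * render_seconds / 3600.0

    def cost_usd(wall_watts: float, render_seconds: float,
                 usd_per_kwh: float = 0.11,     # assumed electric rate
                 cooling_mult: float = 1.5) -> float:  # AC overhead, ~1.5x heat cost
        kwh = energy_wh(wall_watts, render_seconds) / 1000.0
        return kwh * usd_per_kwh * (1.0 + cooling_mult)

    # Example: all four cards under 4.11, No OptiX (~740 W wall, 90.425 s render):
    print(f"{energy_wh(740, 90.425):.2f} Wh")  # ~18.59 Wh, matching the results below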

    DAZ3D: Version 4.11.0.383
    Benchmark Results - Intel i9-7980 [32 Threads] (0 cuda cores)
    (OptiX)
    - Total Rend Time: 29min 52.70sec
    - Rend Time: 1782.909sec
    (No OptiX)
    - Total Rend Time: 30min 19.50sec
    - Rend Time: 1812.990sec
    (Total System Power): ~337 W [~169.72 Wh consumed] = (~2 pics/hr @ ~339.44 W)

    Benchmark Results - Titan Xp Col Edi [1x] (WDDM Mode) (3840 cuda cores)
    (OptiX)
    - Total Rend Time: 8min 7.50sec
    - Rend Time: 482.323sec
    (No OptiX)
    - Total Rend Time: 8min 40.59sec
    - Rend Time: 514.893sec
    (Total System Power): ~345 W [~49.34 Wh consumed] = (~7 pics/hr @ 345.38 W)

    Benchmark Results - Titan Xp Col Edi [1x] (TCC Mode) (3840 cuda cores)
    (OptiX)
    - Total Rend Time: 7min 35.27sec
    - Rend Time: 450.900sec
    (No OptiX)
    - Total Rend Time: 8min 24.48sec
    - Rend Time: 499.044sec
    (Total System Power): ~377 W [~52.26 Wh consumed] = (~7 pics/hr @ 365.82 W)

    Benchmark Results - Titan V [1x] (WDDM Mode) (5120 cuda cores)
    (OptiX)
    - Total Rend Time: 4min 5.98sec
    - Rend Time: 241.685sec
    (No OptiX)
    - Total Rend Time: 4min 27.20sec
    - Rend Time: 261.474sec
    (Total System Power): ~278 W [~20.19 Wh consumed] = (~14 pics/hr @ 282.66 W)

    Benchmark Results - Titan V [1x] (TCC Mode) (5120 cuda cores)
    (OptiX)
    - Total Rend Time: 3min 55.66sec
    - Rend Time: 231.485sec
    (No OptiX)
    - Total Rend Time: 4min 16.18sec
    - Rend Time: 250.779sec
    (Total System Power): ~273 W [~19.02 Wh consumed] = (~14 pics/hr @ 266.28 W)

    Benchmark Results - Titan V [2x] (TCC Mode) (10240 cuda cores)
    (OptiX)
    - Total Rend Time: 2min 2.63sec
    - Rend Time: 118.214sec
    (Total System Power): ~377 W [~12.38 Wh consumed] = (~31 pics/hr @ 383.78 W)

    Benchmark Results - Titan V [2x] + Titan Xp Col Edi [2x] (WDDM Mode) (17920 cuda cores)
    (OptiX)
    - Total Rend Time: 1min 28.83sec
    - Rend Time: 83.698sec
    (No OptiX)
    - Total Rend Time: 1min 36.72sec
    - Rend Time: 90.425sec
    (Total System Power): ~740 W [~18.59 Wh consumed] = (~40 pics/hr @ 743.60 W)

    Benchmark Results - Titan V [2x TCC] + Titan Xp Col Edi [1x TCC, 1x WDDM] (TCC&WDDM Mode) (17920 cuda cores)
    (OptiX)
    - Total Rend Time: 1min 22.66sec
    - Rend Time: 80.613sec
    (No OptiX)
    - Total Rend Time: 1min 32.65sec
    - Rend Time: 86.990sec
    (Total System Power): ~755 W [~18.24 Wh consumed] = (~41 pics/hr @ 747.84 W)

    NOTE: At 30 fps, I can render a 60-second video in 45 hours at the quality/size/complexity of this benchmark, @ 40 pics/hour (30fps * 60sec / 40pics = 45 render-hours), and it will consume (45hrs * 743.60W = 33,462Wh), or ~33 kWh in power. That is $3.63 @ ($0.11/kWh) for rendering and about $5.45 for AC cooling, for a total of $9.08 to render for 45 hours. Compare that to rendering with only ONE Titan Xp Collectors Edition card: (30fps * 60sec / 7pics = 257 render-hours), (257hrs * 345.38W = 88,762Wh), or ~89 kWh in power. That is $9.79 for rendering and about $14.69 for AC cooling, for a total of $24.48 to render for 257 hours.
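
    The same arithmetic applied to the 60-second-video example, as a sketch (the $0.11/kWh rate and 1.5x cooling multiplier are the assumptions stated above; the small cost difference comes from rounding 33.46 kWh down to 33):

    # 60 seconds of 30 fps video at ~40 finished frames per hour:
    frames = 30 * 60                  # 1800 frames
    hours = frames / 40               # 45 render-hours
    kwh = hours * 743.60 / 1000.0     # ~33.46 kWh at ~743.6 W wall power
    render_cost = kwh * 0.11          # ~$3.68 (the note rounds to 33 kWh -> $3.63)
    total_cost = render_cost * 2.5    # ~$9.20 with the ~1.5x AC-cooling overhead
    print(hours, round(kwh, 2), round(total_cost, 2))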

     

    DAZ3D: Version 4.12.0.67 Beta (Public)
    Benchmark Results - Titan Xp Col Edi [1x] (WDDM Mode) (3840 cuda cores)
    (OptiX)
    - Total Rend Time: 6min 55.35sec
    - Rend Time: 409.981sec
    (Total System Power): ~378 W [~43.05 Wh consumed] = (~9 pics/hr @ 387.45 W)

    Benchmark Results - Titan Xp Col Edi [1x] (TCC Mode) (3840 cuda cores)
    (OptiX)
    - Total Rend Time: 6min 45.83sec
    - Rend Time: 400.361sec
    (Total System Power): ~380 W [~42.26 Wh consumed] = (~9 pics/hr @ 380.34 W)

    Benchmark Results - Titan V [1x] (WDDM Mode) (5120 cuda cores)
    (OptiX)
    - Total Rend Time: 3min 58.91sec
    - Rend Time: 234.216sec
    (Total System Power): ~277 W [~18.02 Wh consumed] = (~15 pics/hr @ 270.30 W)

    Benchmark Results - Titan V [1x] (TCC Mode) (5120 cuda cores)
    (OptiX)
    - Total Rend Time: 3min 48.31sec
    - Rend Time: 223.698sec
    (Total System Power): ~280 W [~17.40 Wh consumed] = (~16 pics/hr @ 278.40 W)

    Benchmark Results - Titan V [2x] (TCC Mode) (10240 cuda cores)
    (OptiX)
    - Total Rend Time: 1min 59.20sec
    - Rend Time: 114.587sec
    (Total System Power): ~380 W [~12.10 Wh consumed] = (~32 pics/hr @ 387.20 W)

    Benchmark Results - Titan V [2x] + Titan Xp Col Edi [2x] (WDDM Mode) (17920 cuda cores)
    (OptiX)
    - Total Rend Time: 1min 25.76sec
    - Rend Time: 79.914sec
    (Total System Power): ~764 W [~16.96 Wh consumed] = (~45 pics/hr @ 763.20 W)

    Benchmark Results - Titan V [2x TCC] + Titan Xp Col Edi [1x TCC, 1x WDDM] (TCC&WDDM Mode) (17920 cuda cores)
    (OptiX)
    - Total Rend Time: 1min 22.75sec
    - Rend Time: 76.070sec
    (Total System Power): ~770 W [~16.27 Wh consumed] = (~47 pics/hr @ 764.69 W)

    END OF RESULTS

    E.g., I will save 50% in electricity costs using 1x Titan-V vs. 2x Titan-Xp to render at the same quality level. Or: this setup is like having 3x Titan-V cards, while using about 1/6th more power than 3x actual Titan-V cards would need to render the same. Much better than my older Titan-X card was doing alone!

    Just for fun, I did this benchmark with 4.12-Beta, using 1024/2048 sample sizes, quality 2.0, caustics on, and the denoiser, running it for 15min 6.23sec (it didn't stop at 10min, where I told it to. :P). Attached is the result of the test. It made it to only 8% convergence.

    rend info : Device statistics:
    rend info : CUDA device 0 (TITAN V): 2085 iterations, 2.286s init, 598.247s
    rend info : CUDA device 1 (TITAN V): 2128 iterations, 2.952s init, 597.371s
    rend info : CUDA device 2 (TITAN Xp COLLECTORS EDITION): 1097 iterations, 2.872s init, 597.428s
    rend info : CUDA device 3 (TITAN Xp COLLECTORS EDITION): 1187 iterations, 2.950s init, 597.381s
    rend info : CPU: 357 iterations, 2.015s init, 598.037s render

    P.S. I'd still rather have 4x Titan-RTX cards in my machine. This scene would render in 40 seconds! But I would need 8x Titan-RTX cards to bring it down to a 20 second render-time. Soon...

    Attachment: TestRenderHigh.png (900 x 900, 839K)
  • RayDAnt Posts: 1,120
    edited November 2019

    System Configuration
    System/Motherboard: Gigabyte Z370 Aorus Gaming 7
    CPU: Intel i7-8700K @ stock (MCE enabled)
    GPU: Nvidia Titan RTX @ stock (watercooled)
    System Memory: Corsair Vengeance LPX 32GB DDR4 @ 3000Mhz
    OS Drive: Samsung Pro 970 512GB NVME SSD
    Asset Drive: Sandisk Extreme Portable SSD 1TB
    Operating System: Windows 10 Pro version 1903 build 18362.295
    Nvidia Drivers Version: 436.15 GRD
     

    Benchmark Results (WDDM mode)

    Daz Studio Version: 4.12.0.067 x64
    Optix Prime Acceleration: On*
    Total Rendering Time: 3 minutes 57.44 seconds
    CUDA device 0 (TITAN RTX): 1800 iterations, 2.296s init, 232.527s render
    Iteration Rate: (1800 / 232.527) = 7.741 iterations per second
    Loading Time: ((0 + 180 + 57.44) - 232.527) = (237.44 - 232.527) = 4.913 seconds

    Daz Studio Version: 4.11.0.383 x64
    Optix Prime Acceleration: On
    Total Rendering Time: 4 minutes 24.19 seconds
    CUDA device 0 (TITAN RTX): 1800 iterations, 2.603s init, 258.950s render
    Iteration Rate: (1800 / 258.950) = 6.951 iterations per second
    Preload Time: ((0 + 240 + 24.19) - 258.950) = (264.19 - 258.950) = 5.240 seconds

    Daz Studio Version: 4.11.0.383 x64
    Optix Prime Acceleration: Off
    Total Rendering Time: 4 minutes 41.93 seconds
    CUDA device 0 (TITAN RTX): 1800 iterations, 3.263s init, 276.304s render
    Iteration Rate: (1800 / 276.304) = 6.515 iterations per second
    Loading Time: ((0 + 240 + 41.93) - 276.304) = (281.93 - 276.304) = 5.626 seconds

     

    Benchmark Results (TCC mode)

    Daz Studio Version: 4.12.0.067 x64
    Optix Prime Acceleration: On*
    Total Rendering Time: 3 minutes 49.18 seconds
    CUDA device 0 (TITAN RTX): 1800 iterations, 2.253s init, 224.405s render
    Iteration Rate: (1800 / 224.405) = 8.021 iterations per second
    Loading Time: ((0 + 180 + 49.18) - 224.405) = (229.18 - 224.405) = 4.775 seconds

    Daz Studio Version: 4.11.0.383 x64
    Optix Prime Acceleration: On
    Total Rendering Time: 4 minutes 14.97 seconds
    CUDA device 0 (TITAN RTX): 1800 iterations, 2.256s init, 250.111s render
    Iteration Rate: (1800 / 250.111) = 7.197 iterations per second
    Loading Time: ((0 + 240 + 14.97) - 250.111) = (254.97 - 250.111) = 4.859 seconds

    Daz Studio Version: 4.11.0.383 x64
    Optix Prime Acceleration: Off
    Total Rendering Time: 4 minutes 33.50 seconds
    CUDA device 0 (TITAN RTX): 1800 iterations, 3.061s init, 268.127s render
    Iteration Rate: (1800 / 268.127) = 6.713 iterations per second
    Loading Time: ((0 + 240 + 33.50) - 268.127) = (273.50 - 268.127) = 5.373 seconds

     


     

    Some comparative number crunching on TCC vs non-TCC using both my and JD_Mortal's recent results:

    GPU         WDDM (ips)   TCC (ips)   Performance Uplift (%)
    Titan RTX   7.741        8.021       3.617%
    Titan V     7.685        8.047       4.711%
    Titan Xp    4.390        4.496       2.415%

    Daz Studio 4.12.0.067 Beta x64 (OptiX if applicable)

     

     

    GPU         WDDM (ips)   TCC (ips)   Performance Uplift (%)
    Titan RTX   6.951        7.197       3.539%
    Titan V     7.448        7.776       4.404%
    Titan Xp    3.732        3.992       6.967%

     Daz Studio 4.11.0.383 x64 (OptiX on)

     

    And then looking at just the performance uplifts themselves:

    GPU         4.11.0.383 x64 (OptiX)   4.12.0.067 Beta x64 (OptiX if applicable)   Performance Difference (%)
    Titan RTX   3.539%                   3.617%                                       0.078%
    Titan V     4.404%                   4.711%                                       0.307%
    Titan Xp    6.967%                   2.415%                                      -4.552%

    WDDM vs TCC Performance Uplifts Across Daz Studio Versions

     

    Interesting how only the Titan Xp seems to break the overall pattern of 4.12 giving a slightly better uplift than 4.11. Makes me wonder if that 6.967% is the result of an outlier (this is all based on minimal sample sizes, after all).
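
    For reference, the uplift column in these tables is just the TCC/WDDM ratio expressed as a percentage; a minimal Python sketch (the function name is just illustrative):

    # Percentage uplift from WDDM to TCC mode, as used in the tables above.
    def uplift_pct(wddm_ips: float, tcc_ips: float) -> float:
        return (tcc_ips / wddm_ips - 1.0) * 100.0

    print(f"{uplift_pct(7.741, 8.021):.3f}%")  # Titan RTX under 4.12: ~3.617%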

  • JD_Mortal Posts: 758
    edited September 2019
    RayDAnt said:

    Interesting how only the Titan Xp seems to break the overall pattern of 4.11 giving a slightly better increase than 4.12. Makes me wonder if that 6.967% is the result of an outlier (this is all based on minimal sample sizes afterall.)

    I think it has to do with the refined "Pascal" code. Iray had a major update specific to Pascal. I would assume that "Volta/RTX" will get a similar boost in time, but comparatively, it is a good thing that WDDM mode is nearly the same as TCC mode. Translation: Windows is not hindering the cards as much as it does with the Pascal cards. Also remember, WDDM has had a big update to fix the memory-hogging issue. There may also be no more "debugging watch-dogs" for Pascal cards in Iray, too. {Nothing to see here, move along! Pascal works fine now. Let's monitor "Volta" like a hawk!}

    (Windows treats all cards as one large card now, with or without SLI. It would lock out large chunks of memory at the Windows level, which made the cards NOT have that memory available at the hardware level. Programs had to go through Windows to ask for memory, and then Windows would surrender it. Now Windows listens to hardware requests and is not so memory-hungry across all cards. It was consuming up to 4GB from each of my 12GB cards at times! The worst part was that the hardware still indicated that nearly all 12GB was available, but Daz3D would just crash if you actually tried to load an 8.5GB scene. Windows wouldn't release that memory because it couldn't hear the hardware-level requests, and hardware-level access couldn't use it because Windows failed to release it.)

    Also, my Titan-Xp CE cards run (stock) at about 2000Mhz, nearly 2x what my Volta cards operate at, consuming up to 2x more power in the process. (For the record, you should include the "CE", or "Col Edi", or the full title "Collectors Edition", in the benchmarks; these are NOT the old "Titan Xp" cards, or the older "Titan-XP" cards.)

    There are actually four Titan versions, before the RTX. None are the same...

    Titan-X (Maxwell version) {I have one of these too... it's half the speed of my Titan-Xp CE cards.}

    Titan-X (Pascal version) AKA: Titan-XP (Capital "P")

    Titan-Xp (Official nVidia Pascal identified version) (Small "p")

    Titan-Xp Collectors Edition (Two flavors of the same card. Slightly faster than the Titan-Xp.)

    Then came...

    Titan-V (First "Volta series", solid chunk of pure matrix calculations. All or nothing.)

    Titan-RTX (Variant of "Volta", split pipe-line or segmented-access, and added real-time ray tracing hardware and faster half-precision floating point formulas. AKA: RTX)

    Titan-RTV (Not real, but when it comes out, it'll be used to travel across country when we retire and hold car parts together like auto-caulking.)

  • outrider42 Posts: 3,679

    I'm glad to see a Titan V finally added. So the Titan V can indeed be faster than the Titan RTX in Iray. This proves that the numbers Migenius have been posting have been pretty much correct all along.

    https://www.migenius.com/products/nvidia-iray/iray-rtx-2019-1-1-benchmarks

    The Titan V gets a clear victory in 4.11, showing that without its dedicated ray tracing cores, the Titan RTX is clearly slower. With 4.12 and the new Iray making use of ray tracing cores, the Titan RTX beats the Titan V... by 1 second. This is such a small margin that it falls within the margin of error. The two are essentially tied here. And that is with RTX turned on.

    I believe this comes down to the lack of double precision (FP64) on the Titan RTX, which has only about 1/12 of the FP64 throughput the Titan V does.
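
    As a rough cross-check of that ratio from published specs (the core counts and nominal boost clocks below are NVIDIA's official figures; real boost clocks run higher, as discussed later in this thread):

        # Theoretical throughput: cores * 2 (FMA) * clock * FP64 ratio.
        def fp64_tflops(cores, boost_ghz, fp64_ratio):
            return cores * 2 * boost_ghz * fp64_ratio / 1000

        titan_v   = fp64_tflops(5120, 1.455, 1 / 2)   # ~7.45 TFLOPS
        titan_rtx = fp64_tflops(4608, 1.770, 1 / 32)  # ~0.51 TFLOPS
        print(f"{titan_v / titan_rtx:.1f}x")          # ~14.6x

    By that reckoning the gap at nominal clocks is closer to 1/14 than 1/12, but the order of magnitude is the same.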

    So while the Titan RTX has dedicated ray tracing cores, the Titan V makes up the difference with pure FP64 muscle. So in an Iray battle between the two cards, the winner will be decided by how the scene is designed. A scene that is not as intensive for ray tracing might see the Titan V win, but a scene that really pushes ray tracing might see the Titan RTX win.

    In other words, if you have a Titan V, there is no major need to upgrade unless you really need that 24GB VRAM given the two cards will basically trade blows. The Titan V is still a beast. Of course if I was buying a new Titan right now, the Titan RTX is the easy choice given it is cheaper and has twice the VRAM.

  • RayDAnt Posts: 1,120
    edited September 2019
    JD_Mortal said:

    Titan-Xp (Official nVidia Pascal identified version) (Small "p")

    Titan-Xp Collectors Edition (Two flavors of the same card. Slightly faster than the Titan-Xp.)

    Are you sure about that? Because based on everything I've read, the CEs are completely identical to the original Xps other than their exterior accoutrements. And observed-in-use versus officially published boost clocks isn't enough to go by on its own, since virtually all modern Nvidia cards' official boost clock specs are hundreds of megahertz lower than reality (e.g. my officially-1770MHz-boosting Titan RTX actually boosts to 1995MHz at stock settings). I'm inclined to think that if you had some vanilla Titan Xps to play around with, you'd find they also boost to about the same 2000MHz as your CE editions.
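
    One way to settle it, for anyone with both cards on hand, is to sample the SM clock while a render is actually running instead of trusting the spec sheet. A minimal pynvml sketch (again my choice of wrapper; nvidia-smi's clocks.sm query reports the same number):

        # Sample the live SM clock on GPU 0 once a second for ten seconds.
        import time
        import pynvml

        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        try:
            for _ in range(10):
                mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)
                print(f"SM clock: {mhz} MHz")
                time.sleep(1)
        finally:
            pynvml.nvmlShutdown()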


    outrider42 said:

    So while the Titan RTX has dedicated ray tracing cores, the Titan V makes up the difference with pure FP64 muscle. So in an Iray battle between the two cards, the winner will be decided by how the scene is designed. A scene that is not as intensive for ray tracing might see the Titan V win, but a scene that really pushes ray tracing might see the Titan RTX win.

    JD_Mortal, if you aren't already totally benchmarked out by this point, I'd highly recommend checking out this raytracing-acceleration-specific benchmarking effort Takeo.Kensei launched a couple weeks ago. Imo, seeing how much (if at all) the Titan V scales up performance-wise on raytracing-heavy scenes in comparison to the Titan RTX (see the patterns I found here) would be very interesting to know. If it doesn't scale, that (in addition to the additional 12GB of VRAM) would be something that would make the Titan RTX an absolute steal over the Titan V - at least for 3D rendering purposes.

    I haven't written about it anywhere yet (did it late at night and then forgot about it until just now), but a week or so ago I decided - on a whim - to mess around with Takeo.Kensei's test scene, increasing its complexity approximately 4 times via simple copy/paste, and was actually able to observe an RTX on-vs-off performance uplift of around 10X! I'd be very interested to know if any of that translates over to the Titan V with its sheer FP64 horsepower.

  • JD_Mortal Posts: 758
    edited September 2019

    Ray: I'll throw it on the grill... But yes, there is a clear leap from XP to Xp to Xp CE cards. (You will see, if we get more benchmarks.) They beat my Volta cards at playing any game... but as you can see, they are no match for "real rendering" compared to the Titan-V. https://www.videocardbenchmark.net/high_end_gpus.html

    Summary of the historic sampled values {Passmark scores}... (Higher is better.)
    - Titan-V CEO ED: 16,988
    - RTX-2080 Ti: 16,761
    - Titan-RTX: 16,504
    - Titan-Xp CE: 15,100
    - Titan-V: 14,912
    - Titan-Xp: 14,355 (Also includes some [Titan-X {Pascal} AKA:Titan-XP] {non-nvidia manufactured} and some Xp-CE cards)
    - Titan-X: 13,655 (Also includes some [Titan-X {Pascal} AKA: Titan-XP] {nvidia manufactured, but not identified as Pascal}.) Should only be the Maxwell cards here, but the scores are tainted.

    NOTE: Many X, XP and Xp cards were water-cooled and overclocked to get the average UP to 14,355. Almost none of the Xp-CE cards were water-cooled, because they were collectors' cards with a limited production run. The real stock-cooled scores are in the 10,000-11,000 range. Most who water-cooled have listed benchmarks; few with stock setups ever submit them. You can see the individual values: the majority of stock Xp-CE cards are in the 16,000 range +/- 300, while the Xp cards are +/- 2,500, ranging from 10,000 to 16,000. It might just be better hardware and less throttling in the CE cards but, overall, in every test, they are above normal Xp cards.

    Outrider: I am sure RTX will get some bigger boosts when it comes to animation renders or progressive target rendering. Remember, DOF backgrounds don't NEED super precision, but the Titan-V will always be stuck computing them at full precision. The Titan-RTX can, if they eventually code for it, cut that work down with lower-precision numbers for those parts. Same with fast-moving animations that should be "blurred" and obscured. That is kind of the point of RTX + Volta: tween frames can be hyper-calculated faster than a Titan-V could do them, in theory (including motion-blur and progressive/selective denoising, which doesn't mind half-precision floating-point values).

    Which is better?
    A: One big brain that can think of one thing fast, with one arm... doing something perfect.
    B: Dual, split personalities, that are controlling ambidextrous Siamese-twin arms, four of them doing something different, each... but only half as good.

    Obviously it's "C", the "Next new card that is yet to be released"... Duh!

  • TheKD Posts: 2,674

    Added stats to my post for 2080 super. Wow, it's almost 3 times as fast as my 1070 :D

  • RayDAnt Posts: 1,120
    edited September 2019
    TheKD said:

    Added stats to my post for 2080 super. Wow, it's almost 3 times as fast as my 1070 :D

    Are they for the latest beta and driver versions (4.12.0.073 and 436.30 respectively)? Need to know for how to categorize them.

    ETA: And also still on Windows 7 SP1?

  • TheKD Posts: 2,674
    edited September 2019

    Ah dang, it's the 436.30 Nvidia driver and 4.12.0.67; didn't check for a DS beta update in a while. I'll rerun it on the latest beta and post that too after supper, since it goes so quick lol.

  • RayDAnt Posts: 1,120
    edited October 2019
    TheKD said:

    Ah dang, it's the 436.30 Nvidia driver and 4.12.0.67; didn't check for a DS beta update in a while. I'll rerun it on the latest beta and post that too after supper, since it goes so quick lol.

    Cool. Thanks!

    ETA: For what it's worth you shouldn't see any render time changes with that update (The beta hasn't seen an Iray plugin version update since 4.12.0.042.)


  • TheKD Posts: 2,674
    edited September 2019

    Yeah, it was .4 seconds slower lol. Wow, just tried the benchmark scene in preview render mode - so fast, and no lag when I try to move stuff!

  • System Configuration
    System/Motherboard: MSI X99A SLI Krait Edition (MS-7885)
    CPU: i7-5820K @3.8 GHz
    GPU: GPU1 GTX 1080Ti Founder's Edition Stock, GPU2 GTX 1080Ti Founder's Edition Stock
    System Memory: Corsair Vengeance LPZ 64GB DDR4 @2400 C14
    OS Drive: Samsung 860 Pro SSD 512GB
    Asset Drive: 6x WD Red 8TB (RAID 6 - Synology 1618+ with 512GB M.2 Raid 1 cache, 1Gb switched network)
    Operating System: Windows 10 Pro 64-bit Version 1903 Build 18362.356
    Nvidia Drivers Version: Studio Drivers v 431.86 WDDM
    Daz Studio Version: 4.12.0.86

    Benchmark Results
    Optix Prime Acceleration: On

    DAZ_STATS
    2019-10-02 13:06:10.376 Finished Rendering
    2019-10-02 13:06:10.417 Total Rendering Time: 4 minutes 7.66 seconds
    IRAY_STATS
    2019-10-02 13:08:26.727 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2019-10-02 13:08:26.727 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti):      896 iterations, 3.214s init, 240.381s render
    2019-10-02 13:08:26.734 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti):      904 iterations, 3.547s init, 240.090s render

    Iteration Rate: 7.488 iterations/second
    Loading Time: 7.279 seconds

    Optix Prime Acceleration: Off
    DAZ_STATS
    2019-10-02 11:46:13.804 Finished Rendering
    2019-10-02 11:46:13.843 Total Rendering Time: 4 minutes 8.2 seconds
    IRAY_STATS
    2019-10-02 12:11:42.875 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2019-10-02 12:11:42.875 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti):      896 iterations, 3.825s init, 240.038s render
    2019-10-02 12:11:42.882 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti):      904 iterations, 3.872s init, 239.394s render

    Iteration Rate: 7.499 iterations/second
    Loading Time: 8.162 seconds

    [Attached renders: RayDAnt_DS_Iray_Benchmark_2019A, OptiX on and off, 900 x 900]