Iray Starter Scene: Post Your Benchmarks!


Comments

  • inacentaur Posts: 109
    edited September 2019

    i7-8086K @ 4 GHz (16 cores)

    2x GeForce GTX 980, NO DRIVERS installed yet

    2019-09-23 00:47:34.440 Total Rendering Time: 26 minutes 1.43 seconds

     

    drivers installed

    2 minutes 26.31 seconds

     

    Post edited by inacentaur on
  • outrider42 Posts: 3,679
    If you were not using drivers, then you actually benched the i7-8086K, not the GPU.
  • timon630 Posts: 37
    edited September 2019

    Intel Core i7-6700k @ 4 GHz

    Single GeForce GTX 1080 Ti Aorus

    Total Rendering Time: 2 minutes 58.13 seconds

     

    Daz 4.11 Pro

    Very strange. In this topic there are many results of 2:00 - 2:10 on a 1080 Ti. Maybe you have a regular 1080 without the 'Ti'?

    Post edited by timon630 on
  • RayDAnt Posts: 1,120

    i7-8086K @ 4 GHz (16 cores)

    2x GeForce GTX 980, NO DRIVERS installed yet

    2019-09-23 00:47:34.440 Total Rendering Time: 26 minutes 1.43 seconds

     

    drivers installed

    2 minutes 26.31 seconds

     

    If you were not using drivers, then you actually benched the i7-8086K, not the GPU.

    Also, FWIW, the i7-8086K is a 6-core/12-thread 8700K variant, not a 16-core part.

  • It seems that the NVIDIA Creator Ready driver was replaced by the "Studio Driver". Has anyone tried it yet? Any performance change in Daz 4.12?

    The Daz log file keeps warning me to change my 2080 Ti (which is not attached to a monitor) to TCC mode. That is impossible on the Game Ready driver. Is it possible with the Studio Driver, or is it just a useless warning?

  • The Daz log file keeps warning me to change my 2080 Ti (which is not attached to a monitor) to TCC mode. That is impossible on the Game Ready driver. Is it possible with the Studio Driver, or is it just a useless warning?

    It's only supported on some GPUs, so it's not driver related, and sadly it's not supported on 2080 Tis.

  • RayDAnt Posts: 1,120
    edited September 2019

    It seems that the NVIDIA Creator Ready driver was replaced by the "Studio Driver". Has anyone tried it yet? Any performance change in Daz 4.12?

    Based on past testing (just substitute "Studio" wherever you see "Creator Ready" mentioned - Nvidia switched branding several months later) it makes no performance difference. Nvidia's Studio drivers are basically just a different release pipeline for their standard GeForce ones (the actual code base between the two is identical from version to version). In theory, they provide a more stable user experience for creative app users (as opposed to gamers) who rely on automatic driver updates, since Studio driver releases only happen after a given driver base version has been thoroughly tested against specific existing apps. Standard GeForce drivers get released ASAP any time new apps/features get added to the code base, with little regard for whether existing apps get negatively affected.

    The Daz log file keeps warning me to change my 2080 Ti (which is not attached to a monitor) to TCC mode. That is impossible on the Game Ready driver. Is it possible with the Studio Driver, or is it just a useless warning?

    It's a useless warning on anything other than a Quadro/Titan card. And even then, it's of limited use (slight increase in usable VRAM and/or rendering speed at the loss of display output functionality as well as temperature/resource monitoring and fan speed control in many apps like Task Manager and Corsair iCue.)
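    For anyone curious what the switch actually involves: it is done with Nvidia's nvidia-smi command-line tool rather than from inside Daz Studio. Below is a rough sketch in Python (these are the standard nvidia-smi flags to the best of my knowledge; the device index 0 is just an example, the mode change needs an elevated prompt plus a reboot on Windows, and unsupported cards like the 2080 Ti will simply refuse it):

        import subprocess

        # Show the current driver model (WDDM or TCC) for every GPU in the system.
        subprocess.run(
            ["nvidia-smi", "--query-gpu=index,name,driver_model.current", "--format=csv"],
            check=True,
        )

        # Try to switch GPU 0 to TCC mode. Only cards that support TCC (Quadro/Titan
        # class) will accept this; consumer GeForce cards will report an error.
        subprocess.run(["nvidia-smi", "-i", "0", "-dm", "TCC"], check=True)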

    Post edited by RayDAnt on
  • RayDAnt Posts: 1,120

    For those wanting to know, both the latest official Daz Studio 4.12 release (4.12.0.086) and the latest Beta (4.12.0.085) ship with the SAME version of Iray as all recent previous beta releases (see here for a detailed breakdown). So if you've benchmarked your hardware on any Beta release from the past several months or so, you should see NO rendering performance difference with either of these latest releases.

  • Ryzen 3600: Total Rendering Time: 15 minutes 36.27 seconds

  • It's only supported on some GPUs, so it's not driver related, and sadly it's not supported on 2080 Tis.

     

    RayDAnt said:

    Based on past testing (just substitute "Studio" wherever you see "Creator Ready" mentioned - Nvidia switched branding several months later) it makes no performance difference. Nvidia's Studio drivers are basically just a different release pipeline for their standard GeForce ones (the actual code base between the two is identical from version to version). In theory, they provide a more stable user experience for creative app users (as opposed to gamers) who rely on automatic driver updates, since Studio driver releases only happen after a given driver base version has been thoroughly tested against specific existing apps. Standard GeForce drivers get released ASAP any time new apps/features get added to the code base, with little regard for whether existing apps get negatively affected.

     

    It's a useless warning on anything other than a Quadro/Titan card. And even then, it's of limited use (slight increase in usable VRAM and/or rendering speed at the loss of display output functionality as well as temperature/resource monitoring and fan speed control in many apps like Task Manager and Corsair iCue.)

    Thank you for your answers. I just downloaded the latest Game Ready driver 436.30, and everything seems to be working. 

  • Mine took 35 seconds for 2x RTX 2080.

  • iSeeThis Posts: 552
    edited October 2019

    2019-10-04 06:10:40.544 Total Rendering Time: 32.88 seconds

    2x2080Ti. No CPU. No OPA.

     

    --------------------------------------------------------------------------------------

    2019-10-04 06:13:12.003 Total Rendering Time: 32.13 seconds

    2x2080Ti. With CPU. No OPA

     

    ---------------------------------------------------------------------------------------

    2019-10-04 06:14:47.416 Total Rendering Time: 32.1 seconds

    2x2080Ti. With CPU. With OPA

     

    Contrary to what people have said, that the 2080 Ti + 4.12 should be run without OPA, I've found that with OPA it's a tiny bit faster.

    Post edited by iSeeThis on
  • Wanderer Posts: 956
    edited October 2019

    10/18/2019

    Let me start by saying I think make, model, chips, drivers, cooling, and overclocking probably account for much of the variance we see in results. Cooling in particular is more important here than many people realize. I would go deeper into it, but I know that if I say the sky is blue, someone will feel compelled to reply that it isn't, that it's actually green. The thing is, we might both be right because we're each looking at it from completely different scenarios: I might be in sunny Philadelphia (I'm not), while they're hunkered down in Michigan as a tornado bears down on them (I hope not).


    Scene 1 (old benchmark, 95% convergence):

    RTX 2080 Ti (no CPU/no OPA): 57.23 seconds

    RTX 2080 Ti (CPU/OPA on): 53.38 seconds

    RTX 2080 Ti (CPU on): 53.22 seconds

    RTX 2080 Ti (OPA on): 56.6 seconds


    Scene 2 (newer benchmark, 95% convergence):

    RTX 2080 Ti (no CPU/no OPA): 3 minutes 31.10 seconds

    RTX 2080 Ti (CPU/OPA on): 3 minutes 11.30 seconds

    RTX 2080 Ti (CPU on): 3 minutes 12.8 seconds

    RTX 2080 Ti (OPA on): 3 minutes 30.78 seconds

    Machine: 
    Asrock X299 Extreme4
    Intel i9 9940X 14 Cores/28 Threads
    DDR4 64GB Ram XMP-3200
    Nvidia GeForce RTX 2080 Ti 11 GB Ram (EVGA Black card)
    CPU/GPU Independent Cooling: 2x NZXT Kraken liquid coolers (X62 280mm)
    Only modest OC: on scene 1, I ran about 4.2 GHz on the CPU and 2 GHz on the GPU.
    GPU temps never run over 47°C on short renders like this. On longer ones, they rarely go over 50°C.

    On scene 1, best time came from CPU and GPU render.
    On scene 2, best time came from CPU, OPA on, and GPU render.
    Takeaway: Your Mileage May Vary.

     

    Post edited by Wanderer on
  • RayDAnt Posts: 1,120

    FYI @iSeeThis @Wanderer the OptiX Prime Acceleration checkbox in Daz Studio 4.12+ is completely functionless. Iray's developers removed its underlying functionality with the introduction of RTX support and forgot to tell Daz about it, which is why it's still there at all (and it's no longer in the most recent Beta).

  • outrider42 Posts: 3,679

    Of course temps will affect the results. So will overclocks, as most gaming cards are overclocked at the factory, and users are able to adjust them as well. Clock speeds can have an even bigger effect. But in general, all cards of the same GPU will be very close to each other in performance, within just a scant few percent. Plus these benchmarks are pretty quick; it takes time for a card to get warm, and in some cases, like the 2080 Ti in particular, it can be nearly done with the bench before it even gets fully warmed up. With the SY bench, I bet you can finish the bench before reaching peak operating temperatures.

  • Wanderer Posts: 956
    edited October 2019

    @RayDAnt That's interesting, but I stand by my figures as accurate. I could run them again, and I might get completely different results. I'm not really sure it matters on short benches like these, since there isn't really much difference either way. I know it doesn't really prove anything, but the log is attached.

    @outrider42 Yeah, I'd like to see a benchmark scene that takes 40-60 minutes for these cards to finish to get an idea of how things look that far out across several different specs.

    Post edited by Wanderer on
  • outrider42 Posts: 3,679
    @Wanderer there are a couple of different benches here; the SY one is the first. Back when that bench was new, the 980 Ti was king. There are some more intensive benches, and I made one as well. But an important thing to remember is that we have to make benches possible for everybody. If there is a bench that takes a 2080 Ti an hour, then a user still sporting an older card like a 760 will be waiting several hours, and that would just suck. So these benches have to be simple so others can reasonably run them. We also have to stick to items included with Studio, again to be fair.

    I wouldn't worry too much; in spite of the variety between cards, the numbers are still pretty consistent. And with Iray, the numbers are also pretty linear, with the exception of RT cores. For pure CUDA, if GPU X does the scene in 1 minute and GPU Y does it in 2 minutes, then you can pretty much use that ratio to gauge how they will perform in other scenes (a rough sketch of this idea follows below). In this case, GPU Y will almost always take twice as long as GPU X to render.

    This predictability held true until Iray finally got RT core support. Without a doubt RT cores boost performance, but now the amount of that boost can vary quite wildly based on the content makeup of the scene. So now the difference in performance may not be so linear when it comes to RTX cards. The CUDA performance is still pretty linear, but then you add the RT variable to that. It can make it very difficult to truly gauge performance, because there is no such thing as a "standard scene". Of course, that doesn't stop us from trying. There is a bench with dForce strand hair that seems to show off RT acceleration better than most tests.
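    To make that ratio idea concrete, here is a trivial sketch (the numbers are made up purely for illustration, and as noted it only holds for pure CUDA rendering, not RT-core-accelerated scenes):

        # Rough illustration of the "fixed ratio" rule of thumb for pure-CUDA Iray times.
        def estimate_time(gpu_x_known, gpu_y_known, gpu_x_new_scene):
            """Estimate GPU Y's time on a new scene from the ratio seen on a known scene."""
            ratio = gpu_y_known / gpu_x_known      # e.g. 120 s / 60 s = 2.0
            return gpu_x_new_scene * ratio

        # Hypothetical example: GPU Y is ~2x slower, so a 5-minute scene on GPU X
        # should take roughly 10 minutes on GPU Y (RT cores would break this rule).
        print(estimate_time(60.0, 120.0, 300.0))   # -> 600.0 seconds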
  • Wanderer Posts: 956

    Thanks @outrider42, good info. I'd ask where that bench is, but I'm guessing a forum search will tell me just as easily. 

  • outrider42 Posts: 3,679
    edited October 2019

    @Wanderer My bench is in my sig. As for the other, I'm on mobile, but I still have it bookmarked. https://www.daz3d.com/forums/discussion/344451/rtx-benchmark-thread-show-me-the-power#latest

    Post edited by outrider42 on
  • Wanderer Posts: 956

    @outrider42 I'm trying to get your bench, but for some strange reason, when I click the download scene file button, no download begins--it just takes me to this page: https://www.daz3d.com/gallery/. Any idea why that is?

  • Wanderer Posts: 956

    @outrider42 No worries, I just right-clicked and chose save link as... that seemed to work fine. But the button did not.

  • outrider42 Posts: 3,679

    @Wanderer, it's something the site does. I don't know why. Maybe I'll put it on ShareCG someday.

  • Wanderer Posts: 956

    @RayDAnt Hey, just thought you'd like to know what I found while digging through my own logs just now, without the OPA box checked:

    "2019-10-19 02:16:39.360 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(305): Iray [WARNING] - IRAY:RENDER ::   1.0   IRAY   rend warn : The 'iray_optix_prime' scene option is no longer supported."

     

    @outrider42 Results of running your bench with CPU/GPU (stats already given above): 3 minutes 53.33 seconds at 88.65% convergence due to hitting max samples.

  • outrider42 Posts: 3,679
    @Wanderer Yep, that's what we've seen. As RayDAnt said in another thread, OptiX Prime is not in this build, and the checkbox for it is gone from the latest beta. This is a new Iray, which uses the full OptiX 6.0. Previously it used OptiX Prime, which was like OptiX's little brother but not the full thing. RTX presented a problem for Prime, and getting RT core support was not possible with it. So Iray was rebuilt with the full OptiX 6.0. This is a good thing for Iray moving forward, for a number of reasons.

    And yes, my bench is made to cap at 5000 iterations first. I thought that was a more consistent stop condition than convergence.
  • RayDAnt Posts: 1,120
    edited October 2019
    Wanderer said:

    @RayDAnt Hey, just thought you'd like to know what I found while digging through my own logs just now, without the OPA box checked:

    "2019-10-19 02:16:39.360 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(305): Iray [WARNING] - IRAY:RENDER ::   1.0   IRAY   rend warn : The 'iray_optix_prime' scene option is no longer supported."

    Yeah. See this post for a full analysis of the ramifications of that.

     

     

    As RayDAnt said in another thread, OptiX Prime is not in this build.

    Not quite. OptiX Prime is still in this and every build (to date) of Daz Studio. It's just been demoted to an automated fallback for cases where a GPU doesn't support hardware-based raytracing acceleration (i.e. non-RTX cards), since Iray lost its original "built-in" raytracing alternative with the conversion to the full OptiX API.

    Post edited by RayDAnt on
  • jrlaudio Posts: 47
    edited October 2019

    The big issue in this thread is that the numbers in these "tests" are inconsistent due to one big variable: computers are different, and the log file contains not only the actual render time but also the scene load time. Most of the mobos used for testing multiple cards have only 16 or 20 PCIe lanes available, not enough when 32 are required for two cards running at 16x. Other factors in longer load times are how many channels of system RAM you are using (many use single channel), whether you are using SATA or NVMe drives or simple HDDs, CPU type and core count, whether you can run full NVLink 2.0 or partial NVLink (like on a 2080) for multiple cards, and a variety of other factors. These can add many seconds to the "render times" people are posting, depending on computer configuration. It's not just about which card(s) you have.

    Those of us running Xeon workstations with multiple CPUs, 80+ PCIe lanes available (not including chipset lanes), and 4-channel RAM are seeing much lower figures for the same cards. So I am sure a better methodology is required for any of this "data" to be relevant to hobbyist DAZ users.

     

    I've upgraded ... gotta change the sig 

    Hardware: HP Z840 Workstation, Dual Xeon E5-2699v4 22-core (88 threads total), (16) 64GB DDR4-2400 ECC LR RAM, 4-channel per CPU (1TB total RAM), (1) 2TB Samsung 970 EVO NVMe SSD, (4) WD Black 6TB HDDs, (3) Nvidia Titan RTX 24GB GPUs across NVLink, 1125W PSU

    Post edited by jrlaudio on
  • RayDAnt Posts: 1,120
    edited October 2019
    jrlaudio said:

    The big issue in this thread is that the numbers in these "tests" are inconsistent due to one big variable: computers are different, and the log file contains not only the actual render time but also the scene load time. Most of the mobos used for testing multiple cards have only 16 or 20 PCIe lanes available, not enough when 32 are required for two cards running at 16x. Other factors in longer load times are how many channels of system RAM you are using (many use single channel), whether you are using SATA or NVMe drives or simple HDDs, CPU type and core count, whether you can run full NVLink 2.0 or partial NVLink (like on a 2080) for multiple cards, and a variety of other factors. These can add many seconds to the "render times" people are posting, depending on computer configuration. It's not just about which card(s) you have.

    FYI, if you want benchmark figures immune from all the un-accounted-for variance factors you've CORRECTLY described here, you can check out this alternative benchmarking thread. Both the test scene itself and the methodology for reporting its results were designed from the ground up to take these same specific confounding factors into account (referred to collectively as "Loading Time" and found as the last column of data in each CPU/GPU combination's individual results chart), thereby eliminating them from skewing final results.
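    If anyone wants to pull those numbers out of their own logs, here is a rough Python sketch, assuming "Total Rendering Time" lines in the format quoted throughout this thread (e.g. "2019-10-04 06:10:40.544 Total Rendering Time: 32.88 seconds"); the log path is just a placeholder for wherever your Daz Studio log lives:

        import re

        # Matches lines like "... Total Rendering Time: 2 minutes 58.13 seconds"
        # or "... Total Rendering Time: 32.88 seconds" as quoted in this thread.
        PATTERN = re.compile(
            r"Total Rendering Time:\s*(?:(\d+)\s*minutes?\s*)?([\d.]+)\s*seconds")

        def total_render_seconds(log_path):
            """Return every 'Total Rendering Time' found in the log, converted to seconds."""
            results = []
            with open(log_path, encoding="utf-8", errors="ignore") as f:
                for line in f:
                    m = PATTERN.search(line)
                    if m:
                        minutes = int(m.group(1)) if m.group(1) else 0
                        results.append(minutes * 60 + float(m.group(2)))
            return results

        # Placeholder path - point this at your actual Daz Studio log file.
        print(total_render_seconds("log.txt"))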

    Post edited by RayDAnt on
  • RTX 2070 Super Jetstream + GTX 760 4GB (no CPU)

    Total Rendering Time: 1 minutes 3.94 seconds

  • 1950X+1080Ti (Old benchmark)

    Total Rendering Time: 1 minutes 36.99 seconds 

  • AMD Ryzen 7 1800X
    Nvidia GeForce GTX 1080:
    2 min 30 secs
