Iray Starter Scene: Post Your Benchmarks!


Comments

  • RayDAnt Posts: 1,120
    edited April 2019

    Artini, could you by chance run those same benchmarks two more times - First with just a single P5000 and then just the Xeons enabled for rendering? I know it means doing a bunch more test passes, but having performance stats for individual components is EXTREMELY useful for potential buyers.

    Post edited by RayDAnt on
  • ebergerly Posts: 3,255
    edited April 2019

    Prices fluctuate too much for a cost analysis to be much use. Prices also vary wildly by region. Like in Australia, prices are insane. There are many Daz users not in the US. And right now the UK has a big RTX sale going on where nearly all the RTX cards have decent discounts. It's just not practical.

    So if someone calculates a Price ($) divided by Render time improvement (%), say, $500/20%, it's just not practical to substitute a different price and divide that by 20%? The whole idea of a cost analysis isn't of much use? Even though price is usually the primary concern of most users? I guess I'm not following. Especially since we're busy working on spreadsheets with 5 different scenes and 20+ different configurations. Now THAT seems impractical. 
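To make the arithmetic concrete, here is a small sketch of the metric described above (price divided by render-time improvement). The card price and render times are made-up example numbers, not benchmarks.

```python
# Hypothetical cost-effectiveness sketch: price ($) divided by render-time
# improvement (%), as described above. All numbers are illustrative.

def improvement_pct(old_seconds: float, new_seconds: float) -> float:
    """Percent reduction in render time versus the old card."""
    return (old_seconds - new_seconds) / old_seconds * 100

def dollars_per_percent(price: float, old_seconds: float, new_seconds: float) -> float:
    """Price divided by render-time improvement (%). Lower is better."""
    return price / improvement_pct(old_seconds, new_seconds)

# Example: a $500 card that cuts a 100 s render to 80 s (a 20% improvement)
print(dollars_per_percent(500, 100, 80))  # 25.0 dollars per percent improved
```

Swapping in a local price is then a one-argument change, which is the point being made here.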

    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    edited April 2019

    I'm surprised that so many here seem so certain that RTX is all ready to go in Iray/Studio. Here's a recent quote from DAZ regarding RTX:

    DAZ 3D DAZ STUDIO

    Daz 3D will support RTX in 2019

    “ Many of the world’s most creative 3D artists rely on Daz Studio for truly amazing photorealistic creations. Adding the speed of NVIDIA RTX to our powerful 3D composition & rendering tools will be a game changer for creators.”

    STEVE SPENCER GM & VP of Marketing | Daz 3D

    Note he said they WILL support RTX this year, which strongly suggests it's not available yet, and there are no specifics on exactly which of the RTX technologies will be included.

    Now maybe everyone has an inside source at NVIDIA and DAZ who says otherwise, but it seems to me that DAZ and NVIDIA can choose to implement whatever new architecture and software/SDKs/APIs are available with RTX cards. DAZ can decide to use the PhysX/CUDA 10 physics simulation stuff (to improve DForce-type cloth simulations), any of the game-based or NGX/AI/denoising stuff (for realtime previews), and so on. If someone is certain that's already been decided and it's all ready to go, then I'm way off base. But I don't think I am. 

    If it were me, I'd be all over the possibilities of implementing some fast physics/cloth simulation stuff and vastly improving the realtime Iray 3D view performance (which is dog slow right now). And the FLEX particle physics and fluid simulation features also seem like a great way to move Studio forward.  

    And I think we've already seen how NVIDIA is releasing RTX support in stages, so unless somebody is certain otherwise, it seems reasonable that they will continue to make additions and improvements to more fully utilize the architecture over time. But assuming that if some NVIDIA software/driver/API/SDK/etc. is released today then all the software developers can have it implemented in their software within a week or so is, IMO, mistaken. In the real world, software (and any other work) has to be prioritized and scheduled based on cost and resources. So unless everyone has been sitting around waiting for NVIDIA to release all the associated RTX software, it takes a lot of time to implement. 

    Post edited by ebergerly on
  • bluejaunte Posts: 1,861

    Once Iray gets RTX support, we should get RTX support soon thereafter. Don't expect anything more than that. All the other things you mention have nothing to do with anything. DForce isn't based on PhysX. The viewport we have is basic, so if they wanted any of that real-time rendering benefit of RTX they would have to completely overhaul it and implement half a game engine. Do you honestly think they would cram all that into an RTX release this year instead of just giving us the latest Iray?

  • ebergerly Posts: 3,255

    By the way, has anyone looked at some of the stuff like the RTX "Turing mesh shaders", which is just one of the new "rasterization" features? Looks like it's an amazing way to vastly speed up scenes with zillions of polygons (like landscapes with tons of vegetation):

    https://devblogs.nvidia.com/introduction-turing-mesh-shaders/  

    It would certainly be nice if something like that could find its way into Studio. Another reason why I'm going to wait 6 months or so to see which of these many features ultimately get included in Studio. 

  • outrider42 Posts: 3,679
    ebergerly said:

    Prices fluctuate too much for a cost analysis to be much use. Prices also vary wildly by region. Like in Australia, prices are insane. There are many Daz users not in the US. And right now the UK has a big RTX sale going on where nearly all the RTX cards have decent discounts. It's just not practical.

    So if someone calculates a Price ($) divided by Render time improvement (%), say, $500/20%, it's just not practical to substitute a different price and divide that by 20%? The whole idea of a cost analysis isn't of much use? Even though price is usually the primary concern of most users? I guess I'm not following. Especially since we're busy working on spreadsheets with 5 different scenes and 20+ different configurations.  

     

    Because it's complicated...

    That cost calculation becomes useless if you end up exceeding your frame buffer. VRAM is just as important, if not more important, than render speed. Plus, if we expect people to plug in their own numbers... why can't they just do that on their own anyway? What's the point? These cost analysis reports almost always make the cheapest GPUs look great, because they are so cheap. But such GPUs are objectively terrible in real use.

    For example, in gaming, you sometimes see "cost per frame" as a data set. This almost always favors the very cheapest cards, like the $120 (or less) AMD 560. It's an OK GPU. It will play a number of games at 1080p at 60 fps, but not the newest ones. Look at this chart.

    With a few exceptions, the 1060 3GB and AMD 560 dominate such charts. But they SUCK. The 1060 might have a great cost per frame, but with 3GB it won't get you very far in most modern games. The 560 is just plain slow; it cannot even manage 45 fps in the game. It might allow you to spot something strange and out of place, but for the most part these types of charts do little good. And again... VRAM. Unless you account for VRAM there is no point.
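The filtering idea behind this argument can be sketched in a few lines of Python. Every card name, price, and frame rate below is illustrative only, and the 8 GB / 60 fps floor is an arbitrary example cutoff, not a recommendation:

```python
# Sketch of the point above: a raw cost-per-frame ranking rewards the
# cheapest cards, so apply a minimum VRAM/fps requirement before ranking.
# All cards and numbers are made up for illustration.

cards = [
    {"name": "Budget 3GB",   "price": 120,  "fps": 45,  "vram_gb": 3},
    {"name": "Midrange 8GB", "price": 450,  "fps": 90,  "vram_gb": 8},
    {"name": "HighEnd 11GB", "price": 1100, "fps": 130, "vram_gb": 11},
]

def cost_per_frame(card):
    return card["price"] / card["fps"]

# Naive ranking: the cheapest card "wins" despite missing 60 fps
# and lacking the VRAM for bigger scenes.
naive = min(cards, key=cost_per_frame)

# Ranking with minimum requirements applied first.
usable = [c for c in cards if c["vram_gb"] >= 8 and c["fps"] >= 60]
best = min(usable, key=cost_per_frame)

print(naive["name"], "->", best["name"])  # Budget 3GB -> Midrange 8GB
```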

     

    ebergerly said:

    I'm surprised that so many here seem so certain that RTX is all ready to go in Iray/Studio. Here's a recent quote from DAZ regarding RTX:

    DAZ 3D DAZ STUDIO

    Daz 3D will support RTX in 2019

    “ Many of the world’s most creative 3D artists rely on Daz Studio for truly amazing photorealistic creations. Adding the speed of NVIDIA RTX to our powerful 3D composition & rendering tools will be a game changer for creators.”

    STEVE SPENCER GM & VP of Marketing | Daz 3D

    Note he said they WILL support RTX this year, which strongly suggests it's not available yet, and there are no specifics on exactly which of the RTX technologies will be included.

    Now maybe everyone has an inside source at NVIDIA and DAZ who says otherwise, but it seems to me that DAZ and NVIDIA can choose to implement whatever new architecture and software/SDKs/APIs are available with RTX cards. DAZ can decide to use the PhysX/CUDA 10 physics simulation stuff (to improve DForce-type cloth simulations), any of the game-based or NGX/AI/denoising stuff (for realtime previews), and so on. If someone is certain that's already been decided and it's all ready to go, then I'm way off base. But I don't think I am. 

    If it were me, I'd be all over the possibilities of implementing some fast physics/cloth simulation stuff and vastly improving the realtime Iray 3D view performance (which is dog slow right now). And the FLEX particle physics and fluid simulation features also seem like a great way to move Studio forward.  

    And I think we've already seen how NVIDIA is releasing RTX support in stages, so unless somebody is certain otherwise, it seems reasonable that they will continue to make additions and improvements to more fully utilize the architecture over time. But assuming that if some NVIDIA software/driver/API/SDK/etc. is released today then all the software developers can have it implemented in their software within a week or so is, IMO, mistaken. In the real world, software (and any other work) has to be prioritized and scheduled based on cost and resources. So unless everyone has been sitting around waiting for NVIDIA to release all the associated RTX software, it takes a lot of time to implement. 

    What exactly is Steve supposed to say here? You need to understand that the people at Daz Studio are more tight lipped about upcoming features than Fort Knox. That Steve made this statement AT ALL is a shock in itself. You cannot possibly expect him or anyone to be more specific than that.

    But Steve did make a statement, in a thread about Genesis 9 speculation, that Daz Studio will be getting "cool new things" in Q2-Q4. That is a broad range, but we are already officially in Q2 TODAY. We literally could have a new feature released at any moment.

    Iray and OptiX are plugins. Nvidia provides them with all the tools when they release the SDK. And the SDK HAS released. That means Daz has the tools right now. We cannot say if they had any access before the SDK released, but installing the plugin is not rocket science. I am not saying it is easy, but I would bet you money that somebody in Daz's Utah office has RTX working in a test build right now. If they had nothing, I do not believe that Steve would have made any statement at all.

    And someone already asked whether Iray was even the plugin that Daz was getting. But even IF Daz was getting an all-new plugin, the same still applies. Nvidia would supply it, and Daz would drop it in. It may take a couple months to get a beta shipped, but I would put money on them still having a prototype build working.

    This all happened before with Pascal, but that situation was actually worse. At that time, Pascal would not function AT ALL with Daz 4.8. People who bought new cards were not pleased to find out they were expensive paperweights. Nvidia finally released the Iray SDK after a couple months, and Daz followed a couple months after that with 4.9 in beta form. I would expect that situation to repeat. Pascal released May 27, 2016. The Iray SDK only released in OCTOBER. Yes, OCTOBER. Daz worked quickly and released a 4.9 beta that added Pascal support just 1 month later. The full Pro version would release later, but this should demonstrate that the people at Daz Studio are capable of getting the plugin installed pretty quickly once the SDK releases.

    I noticed someone mentioned that Turing is one of Nvidia's most botched launches, but at least Turing GPUs worked with Daz right out of the gate! People who bought into Pascal had to wait months for their purchase to start working, and that was due more to Nvidia than Daz.

    It all happened again with Volta, but in that case the only Volta GPU ever released was the Titan V. So that situation was not as pressing. But 4.11 did add Volta support with its Iray SDK, and that is the SDK we still have now.

    So some more speculation. I saw that Redshift is getting OptiX to make use of RT cores. However, Redshift preferred their own API over OptiX for all other rendering tasks. So Redshift kept their solution in place for everyone who has older GPUs, because that solution is faster for them. They added a light layer of OptiX for those that have RTX cards, which they benefit greatly from. So for everybody else, they do not use OptiX. So...I think Daz will keep Iray OptiX Prime in place. Prime might not be updated for RT cores...but it may still be the fastest option for those who don't have RTX. There could be a new plugin for standard OptiX 6.0, which will give RTX owners access to the RT enhanced speeds. Other people may be able to use this plugin as well, but will not get that speed benefit, and might still get faster renders with Prime.

    That is what I think will happen. Who knows if that plugin will still be called Iray or be a part of the Iray branch, but Iray and OptiX Prime are not going anywhere.

    Mesh shaders...probably not.

    "This blog introduces the new pipeline and gives some concrete examples in GLSL for OpenGL or Vulkan rendering. The new capabilities are accessible through extensions in OpenGL, Vulkan and via NVAPI in DX12."

    I don't see CUDA listed there. Daz only uses OpenGL for the viewport, and that has nothing to do with Iray. This could be an update for another day if Daz Studio is looking for a real-time render option, but it is unrelated to the Iray plugin. So if it is in the works, it could release at a very different time from the new Iray plugin. Steve's statement in the forum does imply multiple "cool" things are coming, so a new RTX render option is just ONE of them. And this real-time thing would fit well within the wide Q2-Q4 time frame, wouldn't it?

  • bluejaunte Posts: 1,861
    outrider42 said:

     

    Iray and OptiX are plugins. Nvidia provides them with all the tools when they release the SDK. And the SDK HAS released. That means Daz has the tools right now. We cannot say if they had any access before the SDK released, but installing the plugin is not rocket science. I am not saying it is easy, but I would bet you money that somebody in Daz's Utah office has RTX working in a test build right now. If they had nothing, I do not believe that Steve would have made any statement at all.

    As far as I know, Daz can do nothing as long as Iray isn't ported to the new OptiX and makes use of RTX. Once that happens, they have to integrate the new version and that's it. So I'm not sure what you mean here with "RTX working in the office". Unless maybe there's some super alpha version of a new Iray floating around.

    As for marketing messages, you give the various marketing departments way too much credit. If you've ever worked at a company with a marketing department, you know how these things are often cobbled together. I can easily see some NVIDIA dude emailing Steve, letting him know they have RTX support planned for Iray this year, and asking if he could just use the snippet that you read above for the mutual benefit of saying "this shiny new thing is gonna be great".

  • outrider42 Posts: 3,679
    bluejaunte said:

     

    Iray and OptiX are plugins. Nvidia provides them with all the tools when they release the SDK. And the SDK HAS released. That means Daz has the tools right now. We cannot say if they had any access before the SDK released, but installing the plugin is not rocket science. I am not saying it is easy, but I would bet you money that somebody in Daz's Utah office has RTX working in a test build right now. If they had nothing, I do not believe that Steve would have made any statement at all.

    As far as I know, Daz can do nothing as long as Iray isn't ported to the new OptiX and makes use of RTX. Once that happens, they have to integrate the new version and that's it. So I'm not sure what you mean here with "RTX working in the office". Unless maybe there's some super alpha version of a new Iray floating around.

    As for marketing messages, you give the various marketing departments way too much credit. If you've ever worked at a company with a marketing department, you know how these things are often cobbled together. I can easily see some NVIDIA dude emailing Steve, letting him know they have RTX support planned for Iray this year, and asking if he could just use the snippet that you read above for the mutual benefit of saying "this shiny new thing is gonna be great".

    Maybe, but I don't believe Steve would have made any comment at all if Daz didn't have something up their sleeve. Because again, we all know how tight Daz is with releasing information. They almost never tip their hand early like this. Making a public statement at all, plus the forum post that hints at multiple new features coming, leads me to believe it will be sooner rather than later. We are already in the 4th month of 2019. He also included Q2 as an option, so a Q2 release could be anytime between now and three months out.

    It is true that sometimes a company head will say something that maybe the people inside are not ready to reveal. But that's usually something a larger company might do. Daz is downright tiny and probably a bit closer than most.

    And if the new plugin is NOT in fact Iray, we might never know beforehand when the SDK Daz Studio chooses is released. It could be a totally new thing waiting to be revealed with the next Daz Studio. I am not saying that is the case, but there has been no mention of Iray anywhere, which is weird. Other apps that have Iray, like Rhino and iClone, were not listed as getting RTX, which certainly raises some eyebrows for me. Either way it is an Nvidia solution, so it seems that OptiX 6.0 is probably the foundation of whatever it is, and that has officially released.

    And like Pascal, look at how quickly the Daz beta followed the SDK release. I would bet you money that Daz had a test build running Pascal Iray within days of that SDK's release, if not that exact day. They have to test their alpha build first, and then move to the public beta stage. Just like some people had OptiX 6.0 already ray tracing with RTX days after OptiX 6.0 released.

    Either way, I still say it's a few months out. My prediction is between June and July, maybe early August. And then other things would drop later. We might get an updated and faster dForce, which would be cool. And a real-time ray tracer would be very cool. I believe one of these will drop in Q2. One will drop in Q3, and finally one will drop in Q4. I am also thinking a new animation suite could be coming, to go along with the real-time ray tracer, because that would be awesome.

  • bluejaunte Posts: 1,861

    Common sense tells me there is nothing more than that. Because RTX alone isn't such a major step that it would in and of itself drive Daz to implement something entirely new that they couldn't have before. DForce doesn't use PhysX as far as I know. If that's the case, you have to ask why they would suddenly switch the whole thing now. Certainly not because there's a bit more blah in RTX? It just doesn't seem likely to me. And any advancements to the viewport? Again, why now? Just because of RTX?

    But hey, I'll gladly eat my words if it turns out to be more than just RTX in Iray. I'd love to be wrong in exchange for shiny new stuff!

  • Artini Posts: 8,773
    edited April 2019

    Continuing with the tests on the system with:
    2 x Xeon Gold 6140 @ 2.3 GHz
    36 cores, 72 threads
    128 GB RAM
    3 x Nvidia Quadro P5000
    Driver version:        391.25
    Core clock:        1607 MHz
    Memory data rate:    9026 MHz
    Memory interface:    256-bit
    Memory bandwidth:    288.83 GB/s
    Dedicated video memory:    16384 MB GDDR5X
    and
    Outrider42 test scene:
    https://www.daz3d.com/gallery/#images/526361/

    Daz Studio 4.10

    One P5000
    Optix On
    CUDA device 0 (Quadro P5000):      5000 iterations, 0.518s init, 519.166s render
    8 minutes 39 seconds

    CPU only
    Optix On
    CPU:      5000 iterations, 4.664s init, 1498.348s render
    24 minutes 58 seconds
    CPU utilization 61%, 2.79 GHz
    It looks like Daz Studio can only use one CPU while rendering in Iray.


    Post edited by Artini on
  • There are already a fair number of threads for hardware discussions and speculation on RTX - please keep this one focused on the benchmarking.

  • Artini Posts: 8,773
    edited April 2019

    SickleYield starter scene:
    https://www.daz3d.com/forums/discussion/53771/iray-starter-scene-post-your-benchmarks/p1
    with Daz Studio 4.11.0.236 Pro Beta
    Optix On
    Total Rendering Time: 1 minute 2.63 seconds
    CUDA device 0 (Quadro P5000):      1512 iterations, 2.895s init, 57.426s render
    CUDA device 1 (Quadro P5000):      1511 iterations, 2.770s init, 58.085s render
    CUDA device 2 (Quadro P5000):      1519 iterations, 2.798s init, 57.732s render
    CPU:      458 iterations, 2.313s init, 57.941s render

    NVidia Iray GPUs:
    GPU: 1 - 3 - Quadro P5000
    Memory Size: 15.9 GB
    Clock Rate: 1733500 KHz
    Multi Processor Count: 20
    CUDA Compute Capability: 6.1

    Test made after rebooting with no other programs running

     

    Post edited by Artini on
  • Artini Posts: 8,773
    edited April 2019

    One more test with Outrider42 scene:
    https://www.daz3d.com/gallery/#images/526361/
    with Daz Studio 4.11.0.236 Pro Beta
    This time without use of the 'Iray Preview' in viewport.
    System rebooted and no other programs running.
    Optix On
    Total Rendering Time: 4 minutes 14.93 seconds
    CUDA device 0 (Quadro P5000):      1438 iterations, 3.260s init, 248.265s render
    CUDA device 1 (Quadro P5000):      1428 iterations, 3.249s init, 247.670s render
    CUDA device 2 (Quadro P5000):      1440 iterations, 3.245s init, 247.645s render
    CPU:      694 iterations, 2.356s init, 249.218s render

    Post edited by Artini on
  • There are already a fair number of threads for hardware discussions and speculation on RTX - please keep this one focused on the benchmarking.

    Recent discussions split to their own thread https://www.daz3d.com/forums/discussion/321401/geenral-gpu-testing-discussion-from-benchmark-thread

  • ebergerly Posts: 3,255
    edited April 2019

    moved......

    Post edited by ebergerly on
  • Rphanee Posts: 3

    Beast: i7-8700K 3.7GHz, 16GB DDR4 RAM, 1TB SSD, 2080 8GB GDDR6

    with Daz Studio 4.11.0.236 Pro Beta

    • CPU, GPU, Optix Prime Acceleration: 1m, 19s
    • GPU, Optix Prime Acceleration: 1m, 21s
    • GPU only: 1m, 46s
  • ebergerly Posts: 3,255
    edited April 2019
    Rphanee said:

    Beast: i7-8700K 3.7GHz, 16GB DDR4 RAM, 1TB SSD, 2080 8GB GDDR6

    with Daz Studio 4.11.0.236 Pro Beta

    • CPU, GPU, Optix Prime Acceleration: 1m, 19s
    • GPU, Optix Prime Acceleration: 1m, 21s
    • GPU only: 1m, 46s

    I assume this is for the Sickleyield scene...

    Which means this result seems to continue the trend that the 20xx cards pretty much cut the Iray render time of their 10xx counterparts in half, give or take. Which, at this point, seems to give the RTX cards some good price/performance numbers in the 8-10 range, although it looks like the 2080ti is a tad higher, maybe around 15. I recall that a year or two ago the GTX cards were in the 15-20 range, so it looks like the RTXs are an equal or better bang for the buck (since a smaller price/performance number is better). Certainly not earth-shattering by any means, but they still have a ways to go in their development, so this could improve over time. 

    Post edited by ebergerly on
  • boisselazon Posts: 458

    Is the nvlink functional yet? I mean, having 2*11GB RTX cards, does it effectively make a "single" 22GB card seen by DAZ? (even in Beta DAZ/IRAY)

  • LenioTG Posts: 2,118

    Is the nvlink functional yet? I mean, having 2*11GB RTX cards, does it effectively make a "single" 22GB card seen by DAZ? (even in Beta DAZ/IRAY)

    I'm not sure, but I don't think this technology has been fully implemented yet! There's still time to see those prices drop a little bit! ^^

  • RayDAnt Posts: 1,120
    edited April 2019

    Is the nvlink functional yet? I mean, having 2*11GB RTX cards, does it effectively make a "single" 22GB card seen by DAZ? (even in Beta DAZ/IRAY)

    Yes, but (I know - there's always a 'but') only if a card supports being switched into TCC (Tesla Compute Cluster) mode at the driver level. Which - at least as recently as 2-3 months ago - was apparently NOT the case with either the RTX 2080 or 2080 ti (I have neither card, but was able to get someone at the time in this thread to run some tests on a 2080 ti to verify.)

    Post edited by RayDAnt on
  • neumi1337 Posts: 18
    edited April 2019

    Let's hope we see some more information before October 2019 for Iray with RT core support.

    Quote from a Solidworks employee. Solidworks apparently uses Iray for Visualize:

    Visualize does not take full advantage of the new RT (raytracing) cores only found in these new RTX cards. BUT at two recent industry events, we showed technology previews of Visualize supporting the new RT cores in RTX cards and were amazed by the additional boost in performance. We're still working with the NVIDIA Iray team to finalize this development, however we're seeing another substantial boost in render speed.

    I can't comment on the exact percentage yet (since it's still finalizing development), but it's another impressive gain. This upgraded version of Visualize with the RT core support will be available in our 2020 release this October (at the very latest). However we're hoping to get it to you sooner in a Service Pack of 2019.

    Source: look at the penultimate post https://my.solidworks.com/reader/forumthreads/223594/nvidia-turingrtx-questions

     

    Post edited by neumi1337 on
  • outrider42 Posts: 3,679
    AD said:

    Thank you for this information, but unfortunately the GPU's 1080 Ti and 1070 Ti are no longer available for purchase. Are there any other alternatives that have proven their worth in practical DAZ Studio IRay rendering, but are still available for purchase?
    Thank you for the help.
     

     


    Is the nvlink functional yet? I mean, having 2*11GB RTX cards, does it effectively make a "single" 22GB card seen by DAZ? (even in Beta DAZ/IRAY)

    Nvlink does work, and has been proven to be possible by the people behind Vray, Chaosgroup. The issue at hand is that no GPU monitor app properly reports how much VRAM is being used in Nvlink. So nobody knows the actual numbers involved without some testing. What Chaosgroup did was purposely build a scene in Vray that exceeded the 11GB VRAM. A single 2080ti would not render the scene, but two with Nvlink enabled did.

    However, that was Vray. Nobody has been able to get it working with Iray. But then you also have very few people who have two 2080ti's, so its not exactly something many people can test. Some might have them, but did not buy the Nvlink connector.

    Here is what Chaosgroup tested. The final test only has numbers for 2080ti's in Nvlink mode... because that was the only way to get that particular scene to load. They were able to confirm there is a small performance hit using Nvlink, but hey, if Nvlink is the only way to run a scene, that sure beats CPU mode, and the performance hit is actually very small. This also falls in line with what Tom Petersen stated about Nvlink in his interview. He stated that Nvlink was indeed possible, and that there would be a performance hit. But he was talking more about gaming, where such a hit would be harsher. That really isn't a problem for this type of rendering. BTW, Petersen has since moved on to Intel for their new GPU division. Also note that Nvlink works not only on the 2080ti, but on the 2080 as well.

  • s3gfaul7 Posts: 0

    i9-9900K, 2x RTX 2080 Ti with OptiX ON & NVLink OFF on DAZ3D v4.11.0.335 Pro BETA:

    2019-04-30 12:45:42.588 Finished Rendering
    2019-04-30 12:45:42.626 Total Rendering Time: 37.14 seconds
    2019-04-30 12:46:05.158 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Device statistics:
    2019-04-30 12:46:05.158 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti):      2474 iterations, 1.825s init, 33.645s render
    2019-04-30 12:46:05.158 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 2080 Ti):      2526 iterations, 1.800s init, 34.152s render

    Adding the CPU to the mix increased render time by 1.1s, switching NVLink ON produced almost the same result as above (within 0.25s).

  • s3gfaul7 Posts: 0

    @sylvie1998

    Driver: v430.39

    SLI ON, OptiX ON
    2019-05-01 15:02:33.038 Total Rendering Time: 37.50 seconds

    SLI ON, OptiX OFF
    2019-05-01 15:04:03.489 Total Rendering Time: 44.43 seconds

    SLI OFF, OptiX OFF
    2019-05-01 15:06:04.282 Total Rendering Time: 44.60 seconds

    SLI OFF, OptiX ON
    2019-05-01 15:07:49.979 Total Rendering Time: 36.80 seconds

  • outrider42 Posts: 3,679
    In a test that runs in only 35 seconds, that could be within the margin of error. Nvlink shouldn't really offer any performance boost; it's all about pooling that VRAM.

    S3gfaul7, what I would really love to see is whether you can build a scene that clearly exceeds a single 2080ti's VRAM, such that testing with just one 2080ti causes it to drop to CPU mode every time. Then enable Nvlink and see if the scene runs on the combined 2080ti duo. There is no need to share the scene. I know this may be a bit of a challenge, so I hope you don't mind. But it would be really great if we could finally prove whether Nvlink is pooling VRAM in Iray or not.

    BTW, here is the full link to the page where Chaosgroup discusses their findings, with some additional notes on Nvlink. https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards#
  • RayDAnt Posts: 1,120
    edited May 2019

    Just completed a MASSIVE set of fresh benchmark runs on the new Daz Studio beta as well as its predecessor and the current release while running all/most of the recent Nvidia driver releases on both of my test machines.

     

    Benchmarking Notes

    First, a refresher on my test system specs:

    System #1 (Desktop)

    • System/Motherboard: Gigabyte Z370 Aorus Gaming 7
    • CPU: Intel i7-8700K @ 4.7Ghz (all cores)
    • GPU: Nvidia Titan RTX
    • System Memory: Corsair Vengeance LPX  32GB DDR4 CL15 @ 3000Mhz
    • OS Drive: Samsung 970 Pro 512GB
    • Asset Drive: Sandisk Extreme Portable SSD 1TB
    • Operating System: Windows 10 1809 (build 17763.437)

    System #2 (Laptop)

    • System/Motherboard: Microsoft Surface Book 2 13"
    • CPU: Intel i7-8650U @ Stock
    • GPU: Nvidia GTX 1050 2GB
    • System Memory: 16GB DDR3
    • OS Drive: Samsung SSD 512GB​
    • Asset Drive: Sandisk Extreme Portable SSD 1TB
    • Operating System: Windows 10 1809 (build 17763.437)

     

    And the specific versions of things tested:

    • Tested Rendering Devices:
      • Nvidia Titan RTX (stock cooler setup in a Thermaltake Tower 900 case - expect to see some water-cooled Titan RTX Iray benchmarks in the next month or so...)
      • Nvidia GTX 1050 2GB
      • Intel i7-8700K
    • Tested Nvidia Driver Versions:
      • 430.39
      • 425.31
      • 419.67 (Creator Ready Driver)
      • 419.35
    • Tested Daz Studio Versions:
      • 4.11.0.335 64-bit
      • 4.11.0.236 64-bit
      • 4.10.0.123 64-bit (no Turing support - GTX 1050 and i7-8700K results only)
    • Used Benchmarking Scenes:
      • Sickleyield
      • outrider42
      • DAZ_Rawb
      • Aala 1k
      • RayDAnt (public release pending full Iray RTCore support)

    All testing was performed in a carefully controlled, thermal-throttling-free environment. Furthermore, all benchmarking runs were finished to 100% completion, and retests were done wherever data seemed particularly unusual. Due to the sheer number of benchmarking runs involved here (120 to be exact), instead of attempting to type up the individual results I am including them in (hopefully easy to understand) summary chart form.

    Please note that all rendering performance statistics are given in terms of completed Iterations Per Second. These numbers are NOT derived from Total Rendering Time, since inherent limitations in the way Daz Studio calculates that statistic make it an increasingly inaccurate measure of rendering performance as overall rendering times get shorter (i.e., when using a card like a Titan RTX with benchmarking scenes that can also be rendered in reasonable amounts of time on less capable hardware). Instead, they were generated by taking the Iteration Count and Render Time statistics reported directly by Iray in the Daz Studio log file in lines like these: 

    2019-03-15 00:32:31.455 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Device statistics:
    2019-03-15 00:32:31.455 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1050): 	 1000 iterations, 6.813s init, 1584.264s render

    And simply dividing total iterations by maximum render time. If you wish to compare your own hardware's rendering performance with the numbers here, you will need to do likewise.
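    For anyone who wants to automate that calculation instead of doing it by hand, here's a minimal sketch in Python. It just pulls the iteration count and render time out of an Iray device-statistics log line with a regular expression; the function and pattern names are my own, not anything from Daz Studio or Iray.

    ```python
    import re

    # Matches the "<N> iterations, ... <X>s render" portion of an Iray
    # device-statistics line from the Daz Studio log.
    LOG_PATTERN = re.compile(r"(\d+)\s+iterations,.*?([\d.]+)s\s+render")

    def iterations_per_second(log_line: str) -> float:
        """Return completed iterations divided by render time for one device."""
        match = LOG_PATTERN.search(log_line)
        if match is None:
            raise ValueError("no Iray device statistics found in line")
        iterations = int(match.group(1))
        render_seconds = float(match.group(2))
        return iterations / render_seconds

    line = ("2019-03-15 00:32:31.455 Iray INFO - module:category(IRAY:RENDER): "
            "1.0 IRAY rend info : CUDA device 0 (GeForce GTX 1050): "
            "1000 iterations, 6.813s init, 1584.264s render")
    print(round(iterations_per_second(line), 3))  # 1000 / 1584.264 -> 0.631
    ```

    If you render on multiple devices, run this on each device's statistics line and use the longest render time, as described above.
    
    
    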

    Without further ado...

     

    Benchmarking Results

    [Completion-rate charts for each benchmark scene and rendering device are attached below.]

    General Observations

    • For those on the fence about the latest Daz Studio beta/Nvidia driver versions, both appear to be the best performing releases to date.
    • For those wondering about Creator Ready drivers (419.67) and how they play with Iray, they're pretty much within margin of error of the non-Creator Ready drivers released at about the same time, and perform slightly worse than the most recent Game Ready driver release. So I'd recommend sticking with the latest Game Ready release for now, at least.
    • For Turing GPU owners, not only is 4.11.0.335 measurably faster across the board than 4.11.0.236 was, it also restores OptiX Prime's status as a rendering accelerator for all scenes tested (somewhat surprisingly, since 4.11.0.335 includes the exact same OptiX Prime binary file as its predecessor). So if you have a Turing card, you should start turning OptiX Prime back on.
    • The GTX 1050 actually got better rendering performance with OptiX Prime turned off than with it turned on for certain scenes under more recent drivers. I'm not sure what to make of that, other than to note that both affected scenes (outrider42's and DAZ_Rawb's) feature Bloom rendering effects, whereas the rest (including the one made by me) don't.
    [Attached charts: Benchmark Completion Rates for the Sickleyield, outrider42, DAZ_Rawb, Aala 1k, and RayDAnt scenes, one chart each for the Titan RTX, GTX 1050, and i7-8700K.]
    Post edited by RayDAnt on
  • neumi1337neumi1337 Posts: 18
    edited May 2019

    some new information about the Iray RTX release :)

    NVIDIA Iray 2019 roadmap — Iray RTX 2019: • Release in May • RTX support, up to 5x speedup! • MDL 1.5 support for MDLE, localization, and 2D measured curves

    look at page 57:

    https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9346-sharing-physically-based-materials-between-renderers-with-mdl.pdf

    Post edited by neumi1337 on
  • bluejauntebluejaunte Posts: 1,861

    That sounds delicious 🙂

  • shaneseymourstudioshaneseymourstudio Posts: 383
    edited May 2019
    neumi1337 said:

    some new information about the Iray RTX release :)

    NVIDIA Iray 2019 roadmap — Iray RTX 2019: • Release in May • RTX support, up to 5x speedup! • MDL 1.5 support for MDLE, localization, and 2D measured curves

    look at page 57:

    https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9346-sharing-physically-based-materials-between-renderers-with-mdl.pdf

    Very interesting, thanks for sharing that. I was surprised to see the hair_bsdf material on page 54. I didn't even realize there was an included hair bsdf model.

     

    Wow, also just saw where they mention the material sharing site: www.medulr.com on page 64.

    Post edited by shaneseymourstudio on