General GPU/testing discussion from benchmark thread


Comments

  • ebergerly Posts: 3,255
    edited July 2019
    RayDAnt said:

    A simplistic cornell box is likely one of the LEAST opportunistic cases for this.

    Not sure why that would be the case. A cornell box implies the possibility of pretty much infinite ray bounces in an enclosed environment, as opposed to, say, a simple object against a blank background, where rays of a certain length just die and you don't need to continue calculating bounces, or they just bounce once before hitting a light source.

    What am I missing? Or maybe you're talking about including all the other features that are under the RTX banner, like PhysX and materials, etc.

    Post edited by ebergerly on
  • outrider42 Posts: 3,679

    The scene migenius tests is this scene here.

    So does this look like your typical scene? This is the scene that saw a 17% performance boost. And unfortunately the 2080 Ti is the only RTX card on the list. Interestingly, most other cards saw small boosts as well...except for the Titan V, which saw a slight decrease. This change was enough to knock the Titan V off its throne. It lost to the Quadro GV100, which also happens to be Volta based just like the Titan V. So the top two spots are Volta powered, which shows just how powerful Volta was, and still is, for rendering Iray.

    Migenius also wrote a post which I linked previously on how scene makeup can wildly vary the results. This link also appears in the 2019 benchmarks they just made.

    Iray RTX speed-up is highly scene dependent but can be substantial. If your scene has low geometric complexity then you are likely to only see a small improvement. Larger scenes can see multiples of about a 2x speed-up while extremely complex scenes can even see a 3x speed-up.

  • RayDAnt Posts: 1,120
    ebergerly said:

    What am I missing?

    Computational complexity. Cornell boxes are, by design, one of - if not THE - most straightforward self contained environments (virtual or otherwise) you can bounce light around in. They present the best case possible for raytracing to happen. Meaning that any rendering engine with even a half-capable raytracing acceleration mechanism will already be performance limited by other parts of the graphics rendering pipeline than raytracing itself.
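
    To put a rough number on "straightforward", here is a back-of-the-envelope Python sketch (assuming the classic five-wall-plus-two-blocks Cornell box layout) showing that the whole scene is only a few dozen triangles:

        # Rough triangle count for a classic Cornell box: five wall quads
        # (floor, ceiling, back wall, two side walls) plus two rectangular blocks.
        TRIS_PER_QUAD = 2           # each quad splits into two triangles
        WALL_QUADS = 5              # open-front box
        BLOCK_FACES = 6             # each block is a cuboid
        BLOCKS = 2

        wall_tris = WALL_QUADS * TRIS_PER_QUAD
        block_tris = BLOCKS * BLOCK_FACES * TRIS_PER_QUAD
        print(f"Cornell box: ~{wall_tris + block_tris} triangles")   # ~34 triangles

    A few dozen triangles leaves almost nothing for dedicated raytracing hardware to chew on, so the rest of the rendering pipeline dominates the frame time.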

  • ebergerly Posts: 3,255
    RayDAnt said:
    ebergerly said:

    What am I missing?

    Computational complexity. Cornell boxes are, by design, one of - if not THE - most straightforward self contained environments (virtual or otherwise) you can bounce light around in. They present the best case possible for raytracing to happen. Meaning that any rendering engine with even a half-capable raytracing acceleration mechanism will already be performance limited by other parts of the graphics rendering pipeline than raytracing itself.

    I hear ya, but your generalization doesn't really answer the question. Assuming the core raytracing algorithms are involved with sending a lot of rays/samples for each pixel into the scene, detecting bounces, detecting surface colors, and doing that over and over again, how would that be limited by something other than the raytracing-centric hardware and software? In what way are these boxes straightforward? Again, it seems like an opportunity for an infinite number of rays and bounces. 

  • RayDAnt Posts: 1,120
    edited July 2019
    ebergerly said:
    RayDAnt said:
    ebergerly said:

    What am I missing?

    Computational complexity. Cornell boxes are, by design, one of - if not THE - most straightforward self contained environments (virtual or otherwise) you can bounce light around in. They present the best case possible for raytracing to happen. Meaning that any rendering engine with even a half-capable raytracing acceleration mechanism will already be performance limited by other parts of the graphics rendering pipeline than raytracing itself.

    I hear ya, but your generalization doesn't really answer the question. Assuming the core raytracing algorithms are involved with sending a lot of rays/samples for each pixel into the scene, detecting bounces, detecting surface colors, and doing that over and over again, how would that be limited by something other than the raytracing-centric hardware and software?

    Raytracing is just a single step in the total process of graphics rendering. Historically it's been one of the most time consuming ones. But that doesn't change the fact that it is still just a single part in a chain. And like a chain gang traveling cross country on foot where one man has a wooden leg, if you suddenly give that man a hoverboard the overall speed of the trip will be held back by everyone else. Weird analogy I know, but it's all I've got at the moment.

    In what way are these boxes straightforward? Again, it seems like an opportunity for an infinite number of rays and bounces. 

    Repetition isn't the same thing as complexity where graphics rendering (or most other types of computing, really) are concerned. Bouncing rays 100,000 times in a simplistic environment (like a six-surface Cornell box) around similarly simplistic objects is only slightly more taxing from a computational standpoint than doing the same thing only 10 times. Because after the first dozen repetitions or so a definite pattern emerges in the output that the rendering engine can simply reuse for the rest of the cycles to reach completion. Measure once, cut twice. Or measure 10 times, cut 100,000 times. Computers aren't stupid - just dumb. A well-written program takes shortcuts whenever it can.

    Post edited by RayDAnt on
  • outrider42 Posts: 3,679
    ebergerly said:
    RayDAnt said:
    ebergerly said:

    What am I missing?

    Computational complexity. Cornell boxes are, by design, one of - if not THE - most straightforward self contained environments (virtual or otherwise) you can bounce light around in. They present the best case possible for raytracing to happen. Meaning that any rendering engine with even a half-capable raytracing acceleration mechanism will already be performance limited by other parts of the graphics rendering pipeline than raytracing itself.

    I hear ya, but your generalization doesn't really answer the question. Assuming the core raytracing algorithms are involved with sending a lot of rays/samples for each pixel into the scene, detecting bounces, detecting surface colors, and doing that over and over again, how would that be limited by something other than the raytracing-centric hardware and software? In what way are these boxes straightforward? Again, it seems like an opportunity for an infinite number of rays and bounces. 

    This is backed up by the migenius statement on RTX, again:

    Iray RTX speed-up is highly scene dependent but can be substantial. If your scene has low geometric complexity then you are likely to only see a small improvement. Larger scenes can see multiples of about a 2x speed-up while extremely complex scenes can even see a 3x speed-up.

    From here. https://www.migenius.com/articles/rtx-performance-explained

    If you desire a detailed explanation, by all means go ask them. They have contact information for 3 offices plus online. https://www.migenius.com/contact

    As for their knowledge, these guys run the Iray servers. Migenius actually handles the licensing of Iray themselves, direct to customers. They were the very first place to have Iray 2019 up and running over a month ago, before Nvidia even released the SDK, which shows the level of access they have to Iray and its tech. That is because migenius now develops RealityServer themselves, rather than simply receiving it from Nvidia.

    "NVIDIA and previously mental images are long standing partners with migenius. Previously acting as a solution provider for NVIDIA’s RealityServer technology migenius has now taken over future development of the RealityServer product, based on the Iray technology provided by NVIDIA. This uniquely allows migenius unprecedented access to the underlying technologies utilised in its products. NVIDIA GPU technology provides the processing power necessary to make interactive rendering with its Iray® technology possible."

    What I am getting at is I think you can trust them when they say geometric complexity is very important to potential ray tracing speed ups. And note that they still say 3x is possible.

  • LenioTG Posts: 2,118
    edited July 2019
    RayDAnt said:

    Btw Iray RTX has officially made its way into the Daz Studio private build channel:

    • Source maintenance

    • Extended DzCreateNewItemDlg SDK API; added setNewItemName(), getNewItemName(); added parameter to addOption()

    • Extended DzObject public API; added isBuildingGeom(), isBuildingGeomValid()

    • Updated SDK version to 4.12.0.10; SDK min is 4.5.0.100

    • Updated SDK API documentation; DzCreateNewItemDlg

    • Update to NVIDIA Iray RTX 2019.1.1 (317500.2554)

    • Fixed a timing/update issue in DzMeshSmoother

    DAZ Studio : Incremented build number to 4.12.0.10

    Fantastic, thank you for the great news!!! :D

    So... will I have to download the 4.12 Public Build when it's out to use the new Iray version? xD
    I was hoping they were going to implement it in the final 4.11! I like things not crashing now.

    How long does it usually take for a Private Build to become a Public Build?

    RayDAnt said:
    neumi1337 said:

    Thx lol. Then I would think that the attached benchmark scene 'cornell_box' isn't profiting from the speedup of the RT cores. We need other benchmark scenes, or have to wait until RayDAnt gets the scenes he meant. The proof that Iray is using OptiX 6.1.2 is a big step up.

    Yeah, keep in mind that gains from more powerful raytracing acceleration are going to be HIGHLY scene content dependent. A simplistic cornell box is likely one of the LEAST opportunistic cases for this.

    I was going to say the same thing :)

    Honestly, a 20% speed improvement is noticeable in any case, and it's free; I don't understand why people complain so much.
    If someone thinks these RTX cards are not a good purchase, they can avoid buying them. Honestly, I got a 2060 and I'm very satisfied, even without this upgrade. And I think at some point I'll switch to a 2070 Super.

    And, based on what you've explained to me some time ago, I think we could see an even more significant improvement in real-usage scenarios.
    I guess I'll wait for the actual Daz Studio tests, with Genesis 8 characters etc.!

    This article is very well done, thank you!

    So the 17% speed improvement lines up with the fact that in most scenes only about 20% of the work is ray-tracing related.
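
    As a rough sanity check on that reasoning, here's a quick Python sketch (assuming, per that figure, that about 20% of the total render work is the ray-tracing step the RT cores accelerate):

        # Amdahl-style bound: if only a fraction p of the work is accelerated
        # by a factor s, the overall speedup is 1 / ((1 - p) + p / s).
        def overall_speedup(p, s):
            return 1.0 / ((1.0 - p) + p / s)

        p = 0.20                      # assumed share of work that is raytracing
        for s in (2, 5, 1e9):         # raytracing 2x, 5x, or "infinitely" faster
            gain = (overall_speedup(p, s) - 1) * 100
            print(f"raytracing {s:.0f}x faster -> whole render {gain:.0f}% faster")
        # Even with infinitely fast raytracing the whole render tops out at ~25%
        # faster, which is in the same ballpark as the observed 17%.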

     

    By the way, I've read that the next generation of Nvidia GPUs will use a 7nm production process, and they will be made by Samsung. So I guess that will be the generation where this technology becomes significant.

    Post edited by LenioTG on
  • drzap Posts: 795
    edited July 2019

    "Iray RTX speed-up is highly scene dependent but can be substantial. If your scene has low geometric complexity then you are likely to only see a small improvement. Larger scenes can see multiples of about a 2x speed-up while extremely complex scenes can even see a 3x speed-up."

     

    Don't get your hopes too high for a 2x or 3x speedup. Anyone rendering "extremely complex" scenes is not very likely to be using Daz Studio. Pixar renders "extremely complex" scenes; a hobbyist toying with Genesis figures doesn't. It looks like the point where you get your money's worth out of those RT cores is at the high end, and most Daz users will hit system RAM or processor limits before they get there. The only instances where I have seen someone get anywhere close to 3x (in Octane) were pro artists doing professional work in pro software.

    Post edited by drzap on
  • drzap Posts: 795
    edited July 2019

    Here are some results from a Redshift user as he was testing the performance improvements of RTX on that platform.  Take notice of the amount of geometry he was testing:

     

    "So after doing some tests on some scenes, I have to say the speed increase Panos squeezed in on the latest RTX Alpha is great. Here are some of the statistics and render times. All of these tests are on a single RTX 2080 Ti to keep things easy to test on a 1:1 GPU basis. He also increased the base benchmark performance another 25% from the previous RTX build. ????

    =====================================

    Example 1: (GRASS Only) Example scene where RTX really takes advantage of the scene. Really simple shading and just a lot of instances of a single object. It obviously has no issue tearing through this and scales beautifully. No depth of field, no volumes, etc.

    Resolution: 3840 x 2160 UHD
    Total triangles: 50,602,962
    RTX ON - Rendering time: 3m:2s (1 GPU(s) used)
    RTX OFF - Rendering time: 6m:18s (1 GPU(s) used)
    RTX ON = 108% Faster

    =====================================

    Example 2: (Mountain Landscape) This time we have a scene with mass scattering and volumetric fog/scattering. In this example, RTX still scales very well albeit not as much but still performs excellently compared to CUDA only.

    Resolution: 3840 x 1607 Anamorphic UHD
    Total triangles: 93,741,172,946
    RTX ON - Rendering time: 1m:46s (1 GPU(s) used)
    RTX OFF - Rendering time: 3m:5s (1 GPU(s) used)
    RTX ON = 74% Faster

    =====================================

    Example 3: (Many Megascans Assets) This scene has a decent amount of 4k different texture maps for all the different Megascan assets with a variety of shader properties, it also has some pretty heavy DOF. So this is a scene that RTX generally wouldn’t accelerate too much. Even then the results are still surprisingly good.

    Resolution: 2048 x 857 Anamorphic
    Total triangles: 96,910,342
    RTX ON - Rendering time: 49.4s (1 GPU(s) used)
    RTX OFF - Rendering time: 1m:10s (1 GPU(s) used)
    RTX ON = 43% Faster

    =====================================

    Example 4: (Tree Sunset) In this final example we have a scene that has several instances but most of the geometry (leaves) consist of the sprite shader which can’t be accelerated by the RT cores. Even in this example though there are still a lot of regular pieces of geometry which can be accelerated like the branches on the trees. I also pushed the unified sampler pretty high to clean up the DOF as much as possible. Even though the sprite node doesn’t get accelerated by RT cores, all the geometry and DOF ray-tracing does, so it still provides a great boost in performance.

    Resolution: 1920 x 803 HD Anamorphic
    Total triangles: 480,250,028
    RTX ON = Rendering time: 12m:16s (1 GPU(s) used)
    RTX OFF = Rendering time: 15m:54s (1 GPU(s) used)
    RTX ON = 30% Faster

    ====================="

     

    Granted, this was an alpha build a few months ago, but he was pushing 93 billion triangles in one scene and still only had a 74% improvement.  And he was rendering vegetation in all cases, which uses relatively simple shaders.  Shading like the skin shaders common in Daz Studio won't fare as well.  Don't hold your breath for 300%, guys.  On the other hand, Chaos Group's progress on Project Lavina looks very promising.  They have succeeded in rendering 300 billion triangles at 24 frames per second using RT cores.  The catch is that they are using the DirectX API, which is generally the route being used by the gaming industry.  Not sure what they are sacrificing to get that kind of performance, but it is interesting.
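
    For anyone wanting to check those figures, the "X% faster" numbers appear to be just the ratio of the two render times minus one; a quick Python sketch using Example 1's times:

        # "RTX ON = X% faster" looks like the ratio of the two render times, minus 1.
        def pct_faster(time_off_s, time_on_s):
            return (time_off_s / time_on_s - 1.0) * 100.0

        # Example 1 from the quoted tests: 6m18s with RTX off vs 3m2s with RTX on.
        off_s = 6 * 60 + 18   # 378 seconds
        on_s = 3 * 60 + 2     # 182 seconds
        print(f"{pct_faster(off_s, on_s):.0f}% faster")   # ~108%, matching the quote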

     

    Post edited by drzap on
  • Daz Jack Tomalin Posts: 13,127

    93 billion triangles

    Cool I have a target poly count for me next set then :)

  • ebergerly Posts: 3,255
    edited July 2019

    My guess is that the performance issue comes back to what I mentioned long ago about the major architecture changes represented by the whole RTX technology. Again, it's not just one thing, it's a whole list of architecture and software changes, each of which is designed to be really good at one thing and improve performance in that one area. And as a whole, they should vastly improve performance in scenes that use all of those features. Assuming, of course, that the hardware and software implementations are optimized.

    For example, RTX includes improvements in physics simulations (PhysX/Flex), as well as core raytracing, and materials (MDL), and AI/denoising (tensor cores), and so on. So I'm guessing if you have a scene that relies on all of those simultaneously then the net performance improvement might be huge. And while a hobbyist certainly can have a scene with all of those things, the likelihood that they are all represented in any single scene might be small. And at present, the likelihood that all of those RTX features are optimized and ready to go in all the related hardware and software is quite small.

    Personally, FWIW, I don't really care what the answer is on RTX. I doubt I'll spring for a new GPU anytime soon. I'm just interested in the facts, and what's ultimately important to most of us, which is actual render time improvements in DAZ/Iray. All of this other discussion is merely academic. So I'm not trying to be mean or negative or upset anyone, just figure out the facts. And I'd encourage others not to take any of this personally. As they say, facts is facts, and IMO (again, only my opinion based on my experience) the RTX implementation still has a long way to go before we know exactly how it will perform in our Iray scenes. 

    And IMO, due to the less generalized, and more specifically targeted architecture/software components, I'm guessing that maybe RTX will forever be a very variable technology performance-wise, depending a lot on the specific scene and its components. Perhaps a bit more frustrating for those who want a simple answer to "How much faster will RTX render my scene?".

    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    RayDAnt said:

     

    Raytracing is just a single step in the total process of graphics rendering. Historically it's been one of the most time consuming ones. But that doesn't change the fact that it is still just a single part in a chain.

    Yes, I get it. As I mentioned, last year I wrote a raytracer from scratch, and more recently I've added a bunch of CUDA code and an ImGUI interface to make it so you can have realtime interaction and rotate the camera, etc., all using the GPU. So I'm familiar with the actual steps required in the rendering pipeline (vertex and index buffers, translation matrices, materials, framebuffers, etc.) and have actually written code to implement them.

    And what takes a lot of time is, like I said, when you have many samples/rays for every pixel, and many bounces for each ray. For example, in a 1920x1080 image with over 2 million pixels, if you have 10 samples per pixel that's 20 million initial rays to calculate, and each of those bounces X number of times, and each bounce requires determining a "hit", and the surface color at the hit location, and then generating the next random bounce, etc. And I think we're all familiar with interior scenes often taking a lot more time to render than exterior scenes.

    So if those core raytracing algorithms are getting a huge speed improvement by RTX, then presumably it would be reflected in even a "simple" interior scene that does a lot of bounces (assuming the associated software is optimized for that).  
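
    Just to put numbers on that, here's a quick back-of-the-envelope sketch in Python (the bounce limit is a made-up figure, purely for illustration):

        # Primary rays = pixels x samples per pixel; every bounce multiplies the
        # hit-test/shading work again, up to whatever depth the path is allowed.
        width, height, spp = 1920, 1080, 10
        pixels = width * height              # 2,073,600 pixels
        primary_rays = pixels * spp          # ~20.7 million primary rays
        max_bounces = 8                      # hypothetical path-depth limit
        worst_case_segments = primary_rays * max_bounces
        print(f"{primary_rays:,} primary rays, up to "
              f"{worst_case_segments:,} ray segments at {max_bounces} bounces")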

  • ebergerly Posts: 3,255
    edited July 2019

    FWIW, as an example, below are two images generated by my raytracing program, with random occurrences of metallic and glass spheres. One image is rendered at 10 samples per pixel, and the other at 500. The 10-sample render takes under 5 seconds on my 1080 Ti, and the 500-sample render takes just under 110 seconds. A huge difference, which, presumably, would be vastly reduced by the RTX raytracing improvements.

    Now I'll have to go back to the RTX docs to see which of these core raytracing algorithms (hit detection, materials, etc.) are targeted by the RTX architecture, but presumably this simple scene should be hugely impacted by an optimized and fully implemented RTX. Unless someone here knows what I'm missing...
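
    For context on why the time scales roughly with the sample count, here's a minimal sketch of the per-pixel accumulation loop at the heart of a path tracer like this one (trace_ray is just a stand-in for the real hit/bounce/color logic):

        import random

        def trace_ray(x, y):
            # Stand-in for the real work: shoot a randomly jittered ray through
            # pixel (x, y), follow its bounces, and return a color. The cost is
            # roughly constant per sample, so total time grows ~linearly with spp.
            return (random.random(), random.random(), random.random())

        def render_pixel(x, y, samples_per_pixel):
            r = g = b = 0.0
            for _ in range(samples_per_pixel):    # 10 vs 500 of these per pixel
                sr, sg, sb = trace_ray(x, y)
                r, g, b = r + sr, g + sg, b + sb
            n = samples_per_pixel
            return (r / n, g / n, b / n)          # average the accumulated samples

        print(render_pixel(0, 0, 10))
        print(render_pixel(0, 0, 500))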

    example10.jpg
    1200 x 800 - 198K
    example500.jpg
    1200 x 800 - 112K
    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    edited July 2019

      

    drzap said:

    Granted, this was an alpha build a few months ago, but he was pushing 93 billion triangles in one scene and still only had a 74% improvement.   

     

    It's not immediately obvious to me why the complexity of the objects in the scene (# of triangles) would have much (any?) effect on render time. Yeah, it would certainly require huge amounts of memory to store all the vertex/index data for all those triangles, and maybe more time to sort thru all that data to determine a "hit" (?), but at the end of the day the raytracing algorithm just sends one ray for every pixel into the scene, calculates if it hits something, determines the color of that thing, then sends off another bounce from that point and checks to see if it hits another object, and determines what the object color is at THAT hit point, and so on. How many triangles per object doesn't seem to matter. What matters more, IMO, is how many rays you send into the scene (number of additional samples per pixel) and how many times each of those rays bounces in the scene.
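
    To make the "maybe more time to sort thru all that data to determine a hit" part concrete: in a naive tracer every ray segment is tested against every triangle, and it's exactly that cost that BVH acceleration structures - and the RT cores that traverse them in hardware - are built to cut down. A toy Python sketch, with intersect() standing in for a real ray/triangle test:

        def intersect(ray, triangle):
            # Stand-in for a real ray/triangle intersection test
            # (e.g. Moller-Trumbore). Returns a hit distance or None.
            return None

        def closest_hit_naive(ray, triangles):
            # With no acceleration structure, per-ray cost grows linearly with
            # triangle count: 50M triangles means 50M tests per ray segment.
            best = None
            for tri in triangles:
                t = intersect(ray, tri)
                if t is not None and (best is None or t < best):
                    best = t
            return best

        # A BVH cuts this to roughly O(log n) tests per ray, which is the
        # traversal/intersection work RTX offloads to the RT cores.
        print(closest_hit_naive(ray=None, triangles=[]))   # toy call, prints None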

    Post edited by ebergerly on
  • drzap Posts: 795

    This video just showed up on the Redshift chat group 1 hour ago....  https://www.facebook.com/youngjoya/videos/2325787537465025/

    Redshift 3.0 is thought to be only a matter of weeks until release.

     

    Darn... I just realized that you might not be able to see this video if you don't belong to the group.  The video shows two rendered animations featuring a cheeseburger on a table, like you would find at McDonalds.  They are both being rendered in realtime, one in Redshift 3.0 (now in beta) and the other in UE4 RTX.   UE4 is actually a little grainier, but they both look great, especially when you consider they are being rendered in realtime.  Wow.

  • bluejaunte Posts: 1,861
    drzap said:

    This video just showed up on the Redshift chat group 1 hour ago....  https://www.facebook.com/youngjoya/videos/2325787537465025/

    Redshift 3.0 is thought to be only a matter of weeks until release.

     

    Darn... I just realized that you might not be able to see this video if you don't belong to the group.  The video shows two rendered animations featuring a cheeseburger on a table, like you would find at McDonalds.  They are both being rendered in realtime, one in Redshift 3.0 (now in beta) and the other in UE4 RTX.   UE4 is actually a little grainier, but they both look great, especially when you consider they are being rendered in realtime.  Wow.

    So Redshift 3 now has a real time renderer?

  • RayDAnt Posts: 1,120

    Did two rounds of rendering the Sickleyield, outrider42, DAZ_Rawb, and Aala benchmarking scenes previously shared in this thread (plus an additional one of my own design) on my Titan RTX in Iray Server (once using 2019.1.1 RTX and a second time using the final pre-RTX version available) using the method described here, and was seeing anywhere from a 0.6-12.7% speedup for the RTX release.

    HOWEVER.

    About halfway through the process I started to pixel peep on one of the scenes (I've run these benchmarks so many times now I usually just skip over the actual graphics) and noticed that there were key scene elements (like HDRIs and camera properties) either corrupted or not making it through into the final renders at all. So I'm sorry to say that this method is NOT currently adequate for properly evaluating what kind of a performance impact full RTX support in Iray has when rendering native Daz Studio content.

  • drzap Posts: 795
    edited July 2019
    drzap said:

    This video just showed up on the Redshift chat group 1 hour ago....  https://www.facebook.com/youngjoya/videos/2325787537465025/

    Redshift 3.0 is thought to be only a matter of weeks until release.

     

    Darn... I just realized that you might not be able to see this video if you don't belong to the group.  The video shows two rendered animations featuring a cheeseburger on a table, like you would find at McDonalds.  They are both being rendered in realtime, one in Redshift 3.0 (now in beta) and the other in UE4 RTX.   UE4 is actually a little grainier, but they both look great, especially when you consider they are being rendered in realtime.  Wow.

    So Redshift 3 now has a real time renderer?

    Although Redshift is developing a realtime renderer for the viewport (Redshift RT), it is not Redshift 3.  RS3.0 is just their normal blazing fast renderer with RTX enabled.  I'm not sure what he was showing in the video.  The caption said "progressive render".  That seems to indicate it was the IPR.  Still impressive, though.   They should be making available the first build of Redshift RT by the end of the year, per the developers.  In the meantime, it will be nice to have a superfast gpu renderer until Arnold GPU comes along.

    Post edited by drzap on
  • bluejaunte Posts: 1,861
    drzap said:
    drzap said:

    This video just showed up on the Redshift chat group 1 hour ago....  https://www.facebook.com/youngjoya/videos/2325787537465025/

    Redshift 3.0 is thought to be only a matter of weeks until release.

     

    Darn... I just realized that you might not be able to see this video if you don't belong to the group.  The video shows two rendered animations featuring a cheeseburger on a table, like you would find at McDonalds.  They are both being rendered in realtime, one in Redshift 3.0 (now in beta) and the other in UE4 RTX.   UE4 is actually a little grainier, but they both look great, especially when you consider they are being rendered in realtime.  Wow.

    So Redshift 3 now has a real time renderer?

    Although Redshift is developing a realtime renderer for the viewport (Redshift RT), it is not Redshift 3.  RS3.0 is just their normal blazing fast renderer with RTX enabled.  I'm not sure what he was showing in the video.  The caption said "progressive render".  That seems to indicate it was the IPR.  Still impressive, though.   They should be making available the first build of Redshift RT by the end of the year, per the developers.  In the meantime, it will be nice to have a superfast gpu renderer until Arnold GPU comes along.

    It looked real time. This seemed weird to me as a progressive renderer should have some grain at least for a second or two, right? I guess it really will come to progressive renderers and real time renderers basically being the same at some point, in terms of user experience.

    Arnold GPU will never be where Redshift is in terms of speed. Arnold's focus was never on speed but on ease of use and realism. Their mantra is that man-hours spent on setting up renders is where time needs to be saved, not on rendering, since you can always throw more hardware at it. Not to mention Redshift is a biased and 100% GPU renderer, unlike Arnold.

  • ebergerly Posts: 3,255
    edited July 2019

    Yeah, what many seem to focus on is raw, final render time, but personally I am FAR more interested in a semi-photorealistic and fast preview. Kinda like what Blender's new Eevee preview has been doing. Pretty amazing:

    https://www.youtube.com/watch?v=nxrwx7nmS5A

    If you're anything like me you spend 95% of your time tweaking stuff in the photoreal Iray preview in Studio, and a tiny fraction of time actually doing the final render. And unfortunately most of that involves a whole lot of switching between Iray preview and texture shaded mode. And personally, the new denoiser in Studio is, IMO, a whole lotta nothin'. With the Eevee preview you can generally get a very close idea of the final render. Yeah, it can take some monkey motion to set everything up, and it is a hack (not for final rendering), but still it's a vast improvement over the Iray preview. 

    I'm hoping whoever is developing Iray now is busy working on integrating some Eevee-style game technology for the preview. 

    BTW, does anyone know who is actually developing Iray? I thought NVIDIA was distancing itself last year, or that was just the plugin marketing or whatever? 

    Post edited by ebergerly on
  • drzap Posts: 795
    bluejaunte said:

     

    Arnold GPU will never be where Redshift is in terms of speed. Arnold's focus was never on speed but on ease of use and realism. Their mantra is that man-hours spent on setting up renders is where time needs to be saved, not on rendering, since you can always throw more hardware at it.

     

     Yeah.  Arnold will never be about speed, and I completely agree with their philosophy.  That's OK with me since speed isn't my priority most of the time.  Redshift is nice to have when I need the speed.  When Arnold GPU reaches parity with their CPU renderer, I'll be all in on Arnold.

  • LenioTG Posts: 2,118
    RayDAnt said:

    Did two rounds of rendering the Sickleyield, outrider42, DAZ_Rawb, and Aala benchmarking scenes previously shared in this thread (plus an additional one of my own design) on my Titan RTX in Iray Server (once using 2019.1.1 RTX and a second time using the final pre-RTX version available) using the method described here, and was seeing anywhere from a 0.6-12.7% speedup for the RTX release.

    HOWEVER.

    About halfway through the process I started to pixel peep on one of the scenes (I've run these benchmarks so many times now I usually just skip over the actual graphics) and noticed that there were key scene elements (like HDRIs and camera properties) either corrupted or not making it through into the final renders at all. So I'm sorry to say that this method is NOT currently adequate for properly evaluating what kind of a performance impact full RTX support in Iray has when rendering native Daz Studio content.

    So it could actually be slower, since it's not loading some stuff?

  • M-C Posts: 102
    ebergerly said:

    Yeah, what many seem to focus on is raw, final render time, but personally I am FAR more interested in a semi-photorealistic and fast preview. Kinda like what Blender's new Eevee preview has been doing. Pretty amazing:

    https://www.youtube.com/watch?v=nxrwx7nmS5A

    If you're anything like me you spend 95% of your time tweaking stuff in the photoreal Iray preview in Studio, and a tiny fraction of time actually doing the final render. And unfortunately most of that involves a whole lot of switching between Iray preview and texture shaded mode. And personally, the new denoiser in Studio is, IMO, a whole lotta nothin'. With the Eevee preview you can generally get a very close idea of the final render. Yeah, it can take some monkey motion to set everything up, and it is a hack (not for final rendering), but still it's a vast improvement over the Iray preview.

     

    I absolutely agree here. Since Iray preview is quite slow compared to some other alternatives, you constantly need to switch between texture shaded and Iray while setting up a scene. I'm wasting so much time waiting for the Iray preview to kick in during the creation process that the actual render speed doesn't matter that much at all, especially since I'm rendering most images at night. But speeding up the scene creation process would really improve any workflow.

  • nicstt Posts: 11,714
    neumi1337 said:

    Thx lol. Then I would think that the attached benchmark scene 'cornell_box' isn't profiting from the speedup of the RT cores. We need other benchmark scenes, or have to wait until RayDAnt gets the scenes he meant. The proof that Iray is using OptiX 6.1.2 is a big step up.

    Or, unlikely I hope, the speedup is just hype. :)

  • nicstt Posts: 11,714
    edited July 2019

    93 billion triangles

    Cool I have a target poly count for me next set then :)

    Ha!

    If I remember correctly, the car in my sig had over 2 million polygons (after subdivision), although I forget by how many, and it was rendered in Cycles a few years ago. I've toyed with the idea of converting it to Studio, but I'm not thrilled about the hassle. I didn't model it with that in mind.

    The thought of rendering 25 of them to reach one of the lower scene geometry counts... LOL.

    Post edited by nicstt on
  • RayDAnt Posts: 1,120
    LenioTG said:
    RayDAnt said:

    Did two rounds of rendering the Sickleyield, outrider42, DAZ_Rawb, and Aala benchmarking scenes previously shared in this thread (plus an additional one of my own design) on my Titan RTX in Iray Server (once using 2019.1.1 RTX and a second time using the final pre-RTX version available) using the method described here, and was seeing anywhere from a 0.6-12.7% speedup for the RTX release.

    HOWEVER.

    About halfway through the process I started to pixel peep on one of the scenes (I've run these benchmarks so many times now I usually just skip over the actual graphics) and noticed that there were key scene elements (like HDRIs and camera properties) either corrupted or not making it through into the final renders at all. So I'm sorry to say that this method is NOT currently adequate for properly evaluating what kind of a performance impact full RTX support in Iray has when rendering native Daz Studio content.

    So it could actually be slower, since it's not loading some stuff?

    It could be faster, it could be slower... You simply can't trust performance figures from software when it is displaying unexpected behavior.

    The other thing is it's actually NOT an apples-to-apples test in the first place. Every version of Iray performs differently due to design differences. For this to be a true RTX on vs off test you would need to be using the same build of Iray on the same RTX card - just with the RTX functionality toggled on/off. And - so far as I can tell - there's no way to do that via Iray Server ('twas the same with OptiX Prime acceleration before this: Daz Studio has always had that toggle option for it; Server just used it regardless). Hopefully (from a performance tinkerer standpoint) there's an RTX toggle somewhere too once new DS betas start coming out.

  • LenioTG Posts: 2,118
    M-C said:
    ebergerly said:

    Yeah, what many seem to focus on is raw, final render time, but personally I am FAR more interested in a semi-photorealistic and fast preview. Kinda like what Blender's new Eevee preview has been doing. Pretty amazing:

    https://www.youtube.com/watch?v=nxrwx7nmS5A

    If you're anything like me you spend 95% of your time tweaking stuff in the photoreal Iray preview in Studio, and a tiny fraction of time actually doing the final render. And unfortunately most of that involves a whole lot of switching between Iray preview and texture shaded mode. And personally, the new denoiser in Studio is, IMO, a whole lotta nothin'. With the Eevee preview you can generally get a very close idea of the final render. Yeah, it can take some monkey motion to set everything up, and it is a hack (not for final rendering), but still it's a vast improvement over the Iray preview.

     

    I absolutely agree here. Since Iray preview is quite slow compared to some other alternatives, you constantly need to switch between texture shaded and Iray while setting up a scene. I'm wasting so much time waiting for the Iray preview to kick in during the creation process that the actual render speed doesn't matter that much at all, especially since I'm rendering most images at night. But speeding up the scene creation process would really improve any workflow.

    Yep, it would be nice to have a faster viewport.

    Most times I don't know why it falls back to the CPU, even when the VRAM is mostly empty.

    RayDAnt said:
    LenioTG said:
    RayDAnt said:

    Did two rounds of rendering the Sickleyield, outrider42, DAZ_Rawb, and Aala benchmarking scenes previously shared in this thread (plus an additional one of my own design) on my Titan RTX in Iray Server (once using 2019.1.1 RTX and a second time using the final pre-RTX version available) using the method described here, and was seeing anywhere from a 0.6-12.7% speedup for the RTX release.

    HOWEVER.

    About halfway through the process I started to pixel peep on one of the scenes (I've run these benchmarks so many times now I usually just skip over the actual graphics) and noticed that there were key scene elements (like HDRIs and camera properties) either corrupted or not making it through into the final renders at all. So I'm sorry to say that this method is NOT currently adequate for properly evaluating what kind of a performance impact full RTX support in Iray has when rendering native Daz Studio content.

    So it could actually be slower, since it's not loading some stuff?

    It could be faster, it could be slower... You simply can't trust performance figures from software when it is displaying unexpected behavior.

    The other thing is it's actually NOT an apples-to-apples test in the first place. Every version of Iray performs differently due to design differences. For this to be a true RTX on vs off test you would need to be using the same build of Iray on the same RTX card - just with the RTX functionality toggled on/off. And - so far as I can tell - there's no way to do that via Iray Server ('twas the same with OptiX Prime acceleration before this: Daz Studio has always had that toggle option for it; Server just used it regardless). Hopefully (from a performance tinkerer standpoint) there's an RTX toggle somewhere too once new DS betas start coming out.

    I'll keep in mind, for the next "revolutionary technology", that these things take years before they're ready!
    Six months in, and we still can't use this technology they've made us pay for, with no other similar options available.

    At least they're getting there :D

  • ebergerly Posts: 3,255
    edited July 2019

    So sometime following the introduction of the RTX series GPUs last September (?), I got pretty much fed up with what seemed to be nonstop marketing handwaving and hype, with no relevant data and facts, so I dropped even my occasional check-ins on the YouTube tech channels. I pretty much lost all interest in PC hardware. 

    So this morning I noticed one on a new NVIDIA GPU release and learned that apparently they're now releasing "Super" versions of the RTX 2080, 2070, and 2060. And, for example, the 2080 Super is supposedly, performance-wise, essentially an RTX 2080 Ti, but with a few GB less VRAM, at a price of only around $700. Now as I recall, not long ago an RTX 2080 Ti was selling in the $1,200-1,400 range. And this chart from pcpartpicker shows that maximum prices in recent months shot up to around $1,900:

    https://cdn.pcpartpicker.com/static/forever/images/trends/trend.gpu.chipset.geforce-rtx-2080-ti.9468dd94339aac76a7b86aaa2562c72e.png

    Huh? And apparently the Supers will replace the non-Super variants? 

    Just when the optimist in me was expecting we might get this RTX/Iray performance stuff ironed out in the next few months, now we've got this insanity. So if you include all of the GTX and RTX and Super variants that will be out there this year, and we try to come up with a benchmark that will have any meaning whatsoever, and we wait until all the software gets updated for all of this, I honestly think that benchmarking anything RTX/Iray related in the foreseeable future will be somewhat useless. 

    And I hope we all tuck this away in the back of our minds for the next time we want to run out and buy the latest new technology right away. Sometimes negativity is the smart approach.

    Post edited by ebergerly on
  • bluejaunte Posts: 1,861

    Haven't seen a price increase for 2080 TI. There are some more expensive variants with fridges for coolers and more overclocking than usual. But sure, we really shouldn't buy these cards. It's all too complicated anyway and we'll never be able to tell if these cards render faster or not. I am deeply sorry that I bought one, I am tearing my hair out all day long trying to understand all its complexities.

  • nicstt Posts: 11,714
    LenioTG said:
    M-C said:
    ebergerly said:

    Yeah, what many seem to focus on is raw, final render time, but personally I am FAR more interested in a semi-photorealistic and fast preview. Kinda like what Blender's new Eevee preview has been doing. Pretty amazing:

    https://www.youtube.com/watch?v=nxrwx7nmS5A

    If you're anything like me you spend 95% of your time tweaking stuff in the photoreal Iray preview in Studio, and a tiny fraction of time actually doing the final render. And unfortunately most of that involves a whole lot of switching between Iray preview and texture shaded mode. And personally, the new denoiser in Studio is, IMO, a whole lotta nothin'. With the Eevee preview you can generally get a very close idea of the final render. Yeah, it can take some monkey motion to set everything up, and it is a hack (not for final rendering), but still it's a vast improvement over the Iray preview.

     

    I absolutely agree here. Since Iray preview is quite slow compared to some other alternatives, you constantly need to switch between texture shaded and Iray while setting up a scene. I'm wasting so much time waiting for the Iray preview to kick in during the creation process that the actual render speed doesn't matter that much at all, especially since I'm rendering most images at night. But speeding up the scene creation process would really improve any workflow.

    Yep, it would be nice to have a faster viewport.

    Most times I don't know why it falls back to the CPU, even when the VRAM is mostly empty.

    RayDAnt said:
    LenioTG said:
    RayDAnt said:

    Did two rounds of rendering the Sickleyield, outrider42, DAZ_Rawb, and Aala benchmarking scenes previously shared in this thread (plus an additional one of my own design) on my Titan RTX in Iray Server (once using 2019.1.1 RTX and a second time using the final pre-RTX version available) using the method described here, and was seeing anywhere from a 0.6-12.7% speedup for the RTX release.

    HOWEVER.

    About halfway through the process I started to pixel peep on one of the scenes (I've run these benchmarks so many times now I usually just skip over the actual graphics) and noticed that there were key scene elements (like HDRIs and camera properties) either corrupted or not making it through into the final renders at all. So I'm sorry to say that this method is NOT currently adequate for properly evaluating what kind of a performance impact full RTX support in Iray has when rendering native Daz Studio content.

    So it could actually be slower, since it's not loading some stuff?

    It could be faster, it could be slower... You simply can't trust performance figures from software when it is displaying unexpected behavior.

    The other thing is it's actually NOT an apples-to-apples test in the first place. Every version of Iray performs differently due to design differences. For this to be a true RTX on vs off test you would need to be using the same build of Iray on the same RTX card - just with the RTX functionality toggled on/off. And - so far as I can tell - there's no way to do that via Iray Server ('twas the same with OptiX Prime acceleration before this: Daz Studio has always had that toggle option for it; Server just used it regardless). Hopefully (from a performance tinkerer standpoint) there's an RTX toggle somewhere too once new DS betas start coming out.

    I'll keep in mind, for the next "revolutionary technology", that these things take years before they're ready!
    Six months in, and we still can't use this technology they've made us pay for, with no other similar options available.

    At least they're getting there :D

    September 2018 was the release date, iirc. That would be close to ten months.

This discussion has been closed.