Iray Starter Scene: Post Your Benchmarks!


Comments

  • Takeo.Kensei Posts: 1,303

    Has anyone benched the Threadripper 2990WX by itself yet?  I can extrapolate from the 16 core Threadripper benches, but the 2990WX has a 'unique' memory config, hence why I'm still curious.

    Yes, it WILL be slower than most GPUs, but it might be handy for really big scenes with lots of characters, if you aren't feeling like rendering different portions of the scene in multiple passes, hence my continued curiosity...

    Unless you're also using CPU renderers like 3Delight, PRMan, Corona or V-Ray, and considering the price of a 2990WX alone without the motherboard and everything that goes with it, I think the RTX Titan is a better choice. 24 GB of VRAM is becoming very comfortable to work with.

    And I don't think the SY scene really puts any stress on the new Threadripper memory architecture. So even if somebody is nice enough to run the benchmark, I'm not sure you'll get a useful performance indication for very big scenes.

     

    I wonder why the benchmark scene behaves that way?

    That's something that was discussed months ago already, or at least all the needed information is scattered throughout the thread. The foundation of OptiX Prime is efficient raytracing. And we know that we can divide rendering into two aspects: raytracing and shading.

    I raised the idea a few months ago when talking about Octane's first benchmarks for RTX and their metric in megasamples/s. To measure RTX acceleration, you need a scene with simple shaders so that not much time is spent on shading calculations.

    New scenes submitted after SY's that use new shader functions like dual lobe specular and other new effects will have longer shading times. So if you're shading a point and use more complex shaders, you're shifting the balance between raytracing and shading time. And at some point, instead of waiting for the result of the raytracing, you're waiting for the result of the shading.

    I'm simplifying a bit, because the whole process is more complicated, but you get the idea.
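
    To illustrate with some made-up numbers (purely hypothetical, not measurements from any card), here is a minimal sketch of why accelerating only the raytracing half runs into a floor set by shading time:

    ```python
    # Hypothetical per-frame cost split; both numbers are invented for illustration.
    raytrace_seconds = 6.0  # time spent on ray traversal/intersection
    shading_seconds = 4.0   # time spent evaluating shaders

    def total_seconds(rt_speedup):
        """Total render time when only the raytracing part gets accelerated."""
        return raytrace_seconds / rt_speedup + shading_seconds

    for speedup in (1, 2, 4, 10):
        print(f"RT speedup x{speedup:>2}: {total_seconds(speedup):4.1f} s")

    # x 1: 10.0 s, x 2: 7.0 s, x 4: 5.5 s, x10: 4.6 s
    # No matter how fast the RT part gets, the total never drops below the 4.0 s
    # of shading, so a shader-heavy scene mostly hides RTX acceleration.
    ```

    That is the sense in which a simple-shader scene is needed to show off raw RT acceleration.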

    If you want to measure the impact of both processes, specific scenes must be carefully built. But while that can be useful to measure the performance of each card precisely in each area, I see a problem with these types of specific scenes: they may no longer represent a typical user's render scene.

    That's one of the reasons why I'm a bit divided about new benchmark scenes.

    On the other hand, the Ubershader has changed... so it would also make sense to measure performance against it.

    Side note: to the people compiling data, I think that old measurements can't be meaningfully compared against recent ones, because DS, the Iray version, and the Ubershader have changed. One good thing with the RTX measurements is that they all use the DS Beta and most of them use recent Nvidia drivers with Windows 10, so results are thus more consistent. The only variation for these being the hardware, which should have minimal impact.

     

    It also validates the reasoning behind using multiple and different benchmarks.

    I'd like to see if anybody has one of those Threadrippers as well. Also, I'd really like to see Threadripper used in multiGPU systems. The old Puget benchmarks claimed that having more CPU cores boosted multiGPU speeds, even when the CPU was not being used to actively render. They were using Xeons, as that test was long before Ryzen launched.

    So I would love to see not only how new Threadrippers perform solo, but how they affect the speeds in multi-GPU setups. If they do, then there is double incentive for buying as many cores as you can.

    I'm not sure about the core count. There is no logic to that when the CPU doesn't participate in the render process, and I'd rather put forward another hypothesis, which is that dual-socket Xeons have two distinct memory controllers. But since Puget didn't really investigate the matter, there is little chance of ever really knowing.

    But really, since the RTX Titan has 24 GB of VRAM and you can NVLink two of them, I'd question the need for HEDT systems and the need for more than two GPUs.

  • outrider42 Posts: 3,679

    But without testing it, we will never know for sure. We have an opportunity now that we can drop higher core counts into desktop PCs. Back when Puget was testing that was impossible as we were still stuck in a 4 core desktop world. Things have changed dramatically since then. Plus having cores benefits more than Iray. Other programs like Adobe can make use of them, so content creators can really benefit.

    And yes, the balance between ray tracing and shading is something I touched on before. It's why having a couple of different benchmarks to test can be helpful. A benchmark that stresses pure ray tracing would be interesting to see, then a bench that stresses shading, and finally a bench that attempts to combine both. I think these would be great.  Because I believe some GPU generations might perform one thing better but not the other.

    As for confusing people, I don't understand how that is an issue. Every time a new GPU is released there are literally hundreds of benchmarks run from gaming to other software. Some outlets will do over 30 different benchmarks with a variety of cards, and post all of them on a single chart. It is part of the business.

    Additionally, I have a suspicion that all the old benchmarks that have been done will be completely invalidated by the next version of Daz Studio. If they keep Iray, then Iray will have undergone some major changes in order to be able to use RTX features. Iray could perform completely differently from what we are used to. And who knows, what if the next version of Daz Studio replaces Iray outright? Then we will certainly have to start all over.

    And if we have to start over, then we should start out by doing it right.

    Some people hit the 5000 iteration cap for the SY scene, while others hit the 95% convergence. And sometimes they did way less than 5000 iterations in this process. So I think it is very important that whatever benchmark is made makes a choice to use one or the other, but not both, and to make sure they are nowhere close to intersecting. Or just disable one or the other. I strongly feel that iteration count is a more reliable stop condition. Time is fine, because if you can't render it in 2 hours, you are pretty boned regardless.
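
    A tiny sketch of that point, with invented numbers (the iteration counts and convergence values below are placeholders, not real results):

    ```python
    # The SY scene stops at whichever condition fires first; runs that stop for
    # different reasons did different amounts of work, so their times don't compare.
    MAX_ITERATIONS = 5000
    CONVERGENCE_TARGET = 0.95  # 95%

    def stop_reason(iterations_done, convergence):
        if convergence >= CONVERGENCE_TARGET:
            return "stopped on convergence"
        if iterations_done >= MAX_ITERATIONS:
            return "stopped on iteration cap"
        return "still rendering"

    # Card A grinds through all 5000 iterations without ever reporting 95%;
    # Card B reports 95% convergence after only 3200 iterations and quits early.
    print("Card A:", stop_reason(5000, 0.93))
    print("Card B:", stop_reason(3200, 0.95))
    ```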

    OptiX On or Off from the Iray Programmer's Manual:

    "Switches from the classical built-in Ray Tracing traversal and hierarchy construction code to the external NVIDIA OptiX Prime API. Setting this option to false will lead to slower scene-geometry updates when running on the GPU and (usually) increased overall rendering time, but features less temporary memory overhead during scene-geometry updates and also decreased overall memory usage during rendering."

  • tj_1ca9500b Posts: 2,057
    edited March 2019

    Outrider pretty much laid out the case better than I could.  There are numerous other valid uses for a HEDT processor.  Video editing comes immediately to mind but there are a number of others.  And I'm simply curious. 

    I know that at least one person that has replied in this thread has a 2990WX, or at least they did when they posted their GPU only benchmarks.  But I already asked nicely and never saw a followup, unless I missed a post somewhere.  And some people simply won't have 5-6K to spend on GPUs, which was the suggestion (i.e. NV Linking 2 Titan RTX's), and are making do with 1080 Ti's...  a 2990WX might be within reach for some budgets, which comes into play if you can use the extra horsepower in other programs, and they will probably be significantly cheaper later this year when the 7nm Threadrippers hit the market.  And the whole point of this thread is to share benchmark results so that people can understand the performance of various setups. 

    I'd also like to see a 2970WX CPU only bench, but I don't think anyone here has one of those.  And also CPU + GPU benches, to see if the high core count helps the benchmark more significantly - for 'regular' processors it is sometimes slower, but usually you only gain a couple of percentage points toward reducing render times on your Iray renders.  The 2970WXs are also much cheaper than the 2990WXs.

    And I'm also curious about whether a higher core count HEDT processor significantly decreases load times and if it improves responsiveness noticeably for Daz Studio in the viewport.  I'm working with a very large scene currently, and it takes more than an hour simply for it to load in Daz (and yes, I'll be rendering it in multiple passes, no chance of it fitting in the 1080 Ti's memory otherwise).  And adding more things to this huge scene is similarly sluggish.  Sure, some of that is likely memory related (I only have 32GB of DDR4 in the system in question), but if the additional cache that the HEDT processors usually have comes into play... well, it is just good stuff to know, for those that are wondering.  But while that's not necessarily Iray related, it does play into setting up those Iray renders in the first place.

    It might relate to Iray in the sense of 'computing' the scene before transferring it to the GPU though, even for just say the viewport.  If you watch your GPU monitoring when you switch from say texture shaded to Iray, it generally takes a bit before the GPU actually spikes on usage - it will simply idle until the scene computations are finished and they are finally transferred to the GPU.  In my case, I have an isolated 1080Ti that doesn't run the monitor, and most of the time it's parked/idle...
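
    If anyone wants to watch for that handoff themselves, a quick way is to poll GPU utilization while the render spins up. A rough sketch, assuming nvidia-smi is installed and on the PATH (the 2-second interval is arbitrary):

    ```python
    # Poll overall GPU utilization so you can see when Iray actually starts using
    # the card, i.e. after scene preparation and the transfer to VRAM finish.
    import subprocess
    import time

    def gpu_utilization():
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [int(v) for v in out.split()]  # one percentage per installed GPU

    while True:
        print(time.strftime("%H:%M:%S"), gpu_utilization())
        time.sleep(2)
    ```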

    So yeah, I'd at least like to see one bench of the 2990WX and also the 2970WX if someone has those.  I'd REALLY like to know how Daz Studio reacts to the NUMA issues due to their unique core configs.  And if Daz Studio can even use all of those threads (48/64) in the first place.  We have a few higher core count Xeon benches, but as I said, more test info good!  High core count EPYC benches would be nice too, but those aren't HEDT processors so they generally have lower core frequencies than the Threadrippers do.

    For research purposes if nothing else.  GPU renders are always preferred, but it's good to know how long that CPU only render might take if your GPU is bypassed, and you need to guess how long letting the Iray render run CPU only may take, say if you are wondering if it might be done after you do other things like mow the lawn, take a nap, etc...

    Yes, it will in almost all cases be slower (unless you have a really really low end GPU), but it might be an acceptable number, if you have a frame of reference to base your guess on.

    Post edited by tj_1ca9500b on
  • Takeo.Kensei Posts: 1,303
    edited March 2019

    Outrider pretty much laid out the case better than I could.  There are numerous other valid uses for a HEDT processor.  Video editing comes immediately to mind but there are a number of others.  And I'm simply curious. 

    I'm not questioning HEDT or testing the 2990WX for other uses outside of DS/Iray. I'm questioning the purpose and the information you can get from running a CPU benchmark on a small scene.

    I know that at least one person that has replied in this thread has a 2990WX, or at least they did when they posted their GPU only benchmarks.  But I already asked nicely and never saw a followup, unless I missed a post somewhere.  And some people simply won't have 5-6K to spend on GPUs, which was the suggestion (i.e. NV Linking 2 Titan RTX's), and are making do with 1080 Ti's...  a 2990WX might be within reach for some budgets, which comes into play if you can use the extra horsepower in other programs, and they will probably be significantly cheaper later this year when the 7nm Threadrippers hit the market.  And the whole point of this thread is to share benchmark results so that people can understand the performance of various setups. 

    About the price, a base system for the 2990WX is at least 3K, without really pushing the component choices:

    CPU: AMD Threadripper 2990WX 3 GHz 32-Core Processor - $1,724.89
    CPU Cooler: Corsair H100i PRO 75 CFM Liquid CPU Cooler - $89.89
    Motherboard: ASRock X399 Taichi ATX TR4 Motherboard - $304.98
    Memory: Corsair Vengeance LPX 128 GB (8 x 16 GB) DDR4-3200 - $949.99
    Power Supply: Rosewill 1600 W 80+ Gold Certified Semi-Modular ATX Power Supply - $220.98

    Total: $3,290.73

    Add the rest of the components and you'll be in the 4K range.

    For that price you can build a system with 64 GB of RAM and an RTX Titan, which would yield superior performance by a large margin.

    I reckon a dual RTX Titan setup would cost at least 7K if you want to use them with NVLink (you need a PC that can feed them, with a minimum of 128 GB of RAM).

    And I'm also curious about whether a higher core count HEDT processor significantly decreases load times and if it improves responsiveness noticeably for Daz Studio in the viewport.

    No it doesn't. Core count has nothing to do with that. And the benchmark will not give you any information about that

    Since you say that your 1080 Ti is not used for display,

    1°/ Did you configure DS so that your GFX card is set to "Display Optimization" = "Best"?

    2°/ Do you have 8 RAM sticks running at 3200 MHz and with low CAS latency? If you haven't populated all your RAM slots, your system will be slow. And we also know AMD systems are sensitive to RAM speed.

    3°/ If core count had any influence, it would be worse with the 2990WX, because the quad-channel memory would starve the 32 cores in terms of memory bandwidth. For DS you're better off going with fewer cores, which would be better fed in terms of memory bandwidth... that is, if core count had any influence on loading time, which I doubt.

    4°/ I hope you're loading assets from an SSD.

    I'm working with a very large scene currently, and it takes more than an hour simply for it to load in Daz (and yes, I'll be rendering it in multiple passes, no chance of it fitting in the 1080 Ti's memory otherwise).  And adding more things to this huge scene is similarly sluggish.  Sure, some of that is likely memory related (I only have 32GB of DDR4 in the system in question), but if the additional cache that the HEDT processors usually have comes into play... well, it is just good stuff to know, for those that are wondering.  But while that's not necessarily Iray related, it does play into setting up those Iray renders in the first place.

     

    That adds to my point, because you won't get this type of information from the rendering benchmark, since the measured time starts only after the data is transferred.

     

    I'm not against benchmarks, but for them to be meaningful, you need rigor, full system specs, and a well-built benchmark.

    Post edited by Takeo.Kensei on
  • tj_1ca9500b Posts: 2,057
    edited March 2019

    Not all usage cases are going to match yours, Takeo.  You are talking about an ideal rendering config, which is all fine and good, but a lot of people use their computers to do other things as well.  Not everyone is going to need HEDT, or a Titan RTX for that matter, and there is no such thing as 'one size fits all'.

     

    BTW, re: my current system: Both of my memory channels are populated (Ryzen system), 16 GB x 2, at 3200 CL14, which is pretty much the 'ideal' config.  It's not recommended to put two sticks per channel in a Ryzen system if you can simply put in bigger sticks, one per channel.  You can do 2 sticks per channel, and it's OK to do so, but there's a slight memory performance hit when you do that.  Plus my motherboard only has two memory slots in the first place, one per channel. It's an X470 BTW.  It is ultimately intended to become an HTPC once 7nm hits the market and I build a new system.  The system actually works quite well for Daz: Vega 11 graphics drives the monitor, with the Nvidia GPU 100% dedicated to rendering.  Oh and my Daz install and scene files are on a 970 EVO SSD, which is rather quick on the responsiveness, so the drive isn't a significant bottleneck for this system.  But that's not quite relevant to this conversation.

     

    As to my questions about load times and general performance inside of Daz, if you were reading carefully, I DID state that they didn't apply directly to the benchmarking.  However, I was noting that there could be other benefits to having more onboard CPU cache, etc. when working in Daz Studio, outside of rendering and benchmarking.  This is just good info to know in general, but how quickly you can load and set up a scene is an important aspect to consider when looking at overall performance inside of Daz.  An Iray benchmark can't tell you everything of course, but it can give you an idea of what performance to expect when you are in various situations where your GPU may or may not be in play.

    I.E. "well, it takes a 16 core Threadripper 10 minutes to complete this scene, a 1080 Ti can do it in 2".  That sort of comparison. We can guesstimate that a 32 core 2990WX will do it in 5, but we won't actually know that until someone benches it.  The WX Threadrippers have a unique memory config, so we don't really know if we can simply halve the 16 core Threadripper times, or if it's going to be somewhere in between, and if so, how much.

    BTW, that 10 minute 16 core Threadripper figure? I found it here (it's actually 9 minutes and 14 seconds; I was just rounding up for convenience):

    https://www.daz3d.com/forums/discussion/comment/2926016/#Comment_2926016

    If it's 5-6x slower, well there are times (like when you can mow the lawn while the render is baking or whatever) that you might actually let the render run anyways if it drops to CPU only.  Now if a 32 core Threadripper can do that in 2.5x - 3x the time of a 1080 Ti... well that's actually approaching a somewhat acceptable tradeoff.  If it's closer to 4x or 5x, instead of 2.5x, then it might be worth it to spend the time doing the scene optimizer thing, assuming the scene will fit in the GPU if you do that.  Also, that 2.5x-3x might be quicker than breaking your scene up into parts and rendering different things in different passes and merging them in Photoshop.
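
    For what it's worth, here is the naive extrapolation being talked about, written out. The 9:14 baseline is the 16-core time from the linked post; the scaling-efficiency values are pure guesses standing in for whatever penalty the 2990WX's memory layout imposes:

    ```python
    # Extrapolate a 32-core time from the 16-core Threadripper baseline.
    baseline_cores = 16
    baseline_seconds = 9 * 60 + 14  # 9 min 14 s from the linked benchmark post

    def estimated_seconds(cores, scaling_efficiency):
        """scaling_efficiency = 1.0 would be perfect linear scaling; the 2990WX's
        NUMA/memory quirks would show up as something lower."""
        ideal = baseline_seconds * baseline_cores / cores
        return ideal / scaling_efficiency

    for eff in (1.0, 0.8, 0.6):
        minutes = estimated_seconds(32, eff) / 60
        print(f"32 cores at {eff:.0%} scaling efficiency: ~{minutes:.1f} min")

    # ~4.6 min, ~5.8 min, ~7.7 min. Only a real 2990WX run tells us which of
    # these (if any) is close to the truth.
    ```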

    Multiple GPUs of course drop the render time of GPU renders by about half (with 2), two thirds (with 3), etc., but again if you use your CPU for other things, well, HEDT may be more useful to you in general than multiple GPUs.  It all comes down to the usage case.  I'm not advocating for using Threadrippers for rendering (the 3Delight folks might love Threadrippers though, since 3Delight is by default CPU only in Daz Studio), but it's just good to know how well they generally fare if they DO end up rendering.

    So, yeah, mainly I just want to see 2990WX and 2970WX Iray scene benches.  Without a philosophical debate re: the practicality or the usefulness, I just want to know.

     

     

    Post edited by tj_1ca9500b on
  • ebergerly Posts: 3,255
    edited March 2019

    The problem with benchmarking is, especially with RTX technology, that it is increasingly difficult to define a benchmark scene that truly represents all of the strengths of all the new, specialized architecture and software that RTX technology brings. There's raytracing, which is only part of a render. There's Physx, which only applies for physics calculations. There's rasterization (advanced shaders). There's AI/tensor cores for denoising-type stuff. And lots of complexities within all of that. 

    And the average tech enthusiast has never actually programmed any of this and has no real idea exactly what they do and how much benefit they can provide, so it's virtually impossible to design a scene to fully utilize its strengths. Especially now, when the technology isn't even ready for prime time, and everything is pretty much just speculation. I mean, if DAZ and Iray don't implement a feature, then what good is it to benchmark that feature? And I'm guessing we won't really know the final story on that for months. 

    And even if you could define an ideal scene, how many of us have scenes that have similar utilization of all those different features? Probably few if any. So what good is it to define an ideal RTX scene with, say, Physx and some cool new RTX shader features like "mesh shading" if none of us have scenes that use them? 

    And as others have mentioned, the more difficult and complex it is to render an ideal benchmark, especially if there is more than one benchmark, and the more combinations of system configurations (GPUs, etc.), the less useful the whole process will be, because you're far less likely to get enough people willing to test on their system. Each time you start a new benchmark you lose all the previous results and have to start from scratch. 

    Of course that's assuming the end goal here is to get a list of benchmark results so that new users can get an idea of what to buy. OTOH, if the goal is to merely play with and speculate on new technology, then I suppose anything will work. 

    Post edited by ebergerly on
  • bluejaunte Posts: 1,902

    Don't overcomplicate things. It's not that hard. Create a scene that represents what the average user renders every day. Probably an environment with one or more characters in it, some HDRI light and some spot lights. There is no reason to create a benchmark that tests specific features of the hardware. It's irrelevant if those features are unable to influence the render times of an average real world render. Artificial benchmarks serve no purpose.

     

  • LenioTG Posts: 2,118

    Don't overcomplicate things. It's not that hard. Create a scene that represents what the average user renders every day. Probably an environment with one or more characters in it, some HDRI light and some spot lights. There is no reason to create a benchmark that tests specific features of the hardware. It's irrelevant if those features are unable to influence the render times of an average real world render. Artificial benchmarks serve no purpose

    From the bottom of my ignorance, I agree!! xD

    This thread's main purpose, I guess, is to let a user see what GPU would be best for him!

    So, as long as the benchmark scene represents the average daily use, I think it's good!

    When the RTX technology is fully implemented, we'll see how well it does on standard scenes! I don't think many people will change what they render just because a new GPU is out! ^^

  • ebergerly Posts: 3,255

    Don't overcomplicate things. It's not that hard. Create a scene that represents what the average user renders every day. Probably an environment with one or more characters in it, some HDRI light and some spot lights. There is no reason to create a benchmark that tests specific features of the hardware. It's irrelevant if those features are unable to influence the render times of an average real world render. Artificial benchmarks serve no purpose.

    Yeah, but it IS complicated, whether we want to believe that or not. For example, there are some new RTX shader features that Iray might implement at some point. And Studio might also implement. But both have to implement them, since Studio needs to develop interfaces for them. And the products in the store need to implement them. And they may end up being the greatest thing since sliced bread. Or they may be unimplemented. Why even think about making a new benchmark right now with unknowns like that? And what if they come up with a new DForce that implements the new Physx/Flex/CUDA10 simulation features?

    I'm guessing most people aren't even aware of what all those features are or what they do. 

    I know we love to oversimplify stuff, but if we want to make it somewhat useful we need to deal with some complexities.

     

  • bluejaunte Posts: 1,902
    ebergerly said:

    Don't overcomplicate things. It's not that hard. Create a scene that represents what the average user renders every day. Probably an environment with one or more characters in it, some HDRI light and some spot lights. There is no reason to create a benchmark that tests specific features of the hardware. It's irrelevant if those features are unable to influence the render times of an average real world render. Artificial benchmarks serve no purpose.

    Yeah, but it IS complicated, whether we want to believe that or not. For example, there are some new RTX shader features that Iray might implement at some point. And Studio might also implement. But both have to implement them, since Studio needs to develop interfaces for them. And the products in the store need to implement them. And they may end up being the greatest thing since sliced bread. Or they may be unimplemented. Why even think about making a new benchmark right now with unknowns like that? And what if they come up with a new DForce that implements the new Physx/Flex/CUDA10 simulation features?

    I'm guessing most people aren't even aware of what all those features are or what they do. 

    I know we love to oversimplify stuff, but if we want to make it somewhat useful we need to deal with some complexities.

     

    RTX shader features? What are those? I thought RTX was acceleration for raytracing and had nothing to do with shading.

    Anyway, if such a case happened and new technology is henceforth used by the majority or all users of Daz Studio, then obviously one would have to create a new benchmark scene to reflect that. I don't think there's a way around that. You cannot possibly future-proof a benchmark scene for features that don't exist yet.

  • ebergerly Posts: 3,255
    kameneko said:

    When the RTX technology is fully implemented, we'll see how well it does on standard scenes! I don't think many people will change what they render just because a new GPU is out! ^^

    I assure you, if Iray and DAZ can fully implement some of the cool new features that RTX promises, a lot of people will be paying a lot of $$ to re-make their scenes 

     

  • ebergerly Posts: 3,255

     

    RTX shader features? What are those? I thought RTX was acceleration for raytracing and had nothing to do with shading.

    There's a whole world of stuff in RTX that most don't seem to know about or understand:

    https://developer.nvidia.com/rtx

    Anyway, if such a case happened and new technology is henceforth used by the majority or all users of Daz Studio, then obviously one would have to create a new benchmark scene to reflect that. I don't think there's a way around that. You cannot possibly future-proof a benchmark scene for features that don't exist yet.

    I'm not talking about future-proofing, just waiting 6 months until we find out what the final RTX implementation will be in Studio/Iray before we jump the gun making new benchmarks that will just have to be re-done later. 

     

  • RayDAnt Posts: 1,135
    edited March 2019

    For those interested, I have extensive comparative performance statistics on all of the benchmarking scenes so far featured in this thread which address/illustrate/answer most of the questions people have been bringing up about them as of late (I've been assembling them off and on over the last month or so as part of my process of creating an ideal RTX-oriented but still legacy-hardware-friendly scene for general Daz Studio Iray rendering performance benchmarking). Right now the data is spread over multiple barely legible small-print spreadsheets. But once I can get it broken down into digestible chunks (and more easily readable graphs where applicable) I'll start posting it here. Potentially starting later today (look for a post from me with Benchmarking the Benchmarks in large friendly letters at the top of it). :)

    Post edited by RayDAnt on
  • ebergerly Posts: 3,255

    But RayDAnt, Iray/Studio hasn't implemented the RTX stuff yet, correct? How are present metrics meaningful, other than as an interim set of data that will change in a few months? 

  • RayDAnt Posts: 1,135
    edited March 2019
    ebergerly said:

    But RayDAnt, Iray/Studio hasn't implemented the RTX stuff yet, correct? How are present metrics meaningful, other than as an interim set of data that will change in a few months? 

    Because in order to know best where you're going you've gotta understand first where you've been.

    ETA: Also, to be clear, I'm mostly talking about the questions people have been bringing up in this thread as of the last 20 posts or so - not just the last 5 (ie. not RTX related.)

    Post edited by RayDAnt on
  • EBF2003 Posts: 28
    edited March 2019

    Update: SickleYield test (open Daz, load the duf, hit render, no changes made to render settings). First render times only, no second render used.

    Dual MSI GTX 1070 Ti Duke cards

    Xeon 2696 v4

    4.10 test, CPU + GPUs + OptiX on = Rendering Time: 1 minutes 13.96 seconds

    4.11 beta, CPU + GPUs + OptiX on = Rendering Time: 1 minutes 18.56 seconds

    Driver 417.71

    Post edited by EBF2003 on
  • ebergerly Posts: 3,255

    I'm thinking what might be most helpful in this benchmark thread is a list of standard config stuff to make sure everyone is comparing apples to apples. Such as making sure you specify what scene you're rendering (now that there are multiple scenes out there), where you're reading your render time numbers, whether you're doing an initial render or the second render (I recall the second one is significantly faster than the 1st since the scene is already loaded or something like that...), making sure you have the correct render settings, etc. 

  • bluejaunte Posts: 1,902
    ebergerly said:

     

    RTX shader features? What are those? I thought RTX was acceleration for raytracing and had nothing to do with shading.

    There's a whole world of stuff in RTX that most don't seem to know about or understand:

    https://developer.nvidia.com/rtx

    Ah, right. They are calling the whole platform RTX. I was thinking just RT cores when we say RTX.

  • ebergerly Posts: 3,255

    Ah, right. They are calling the whole platform RTX. I was thinking just RT cores when we say RTX.

    Yeah, unfortunately that's one of those very common tech enthusiast myths resulting from a lot of people watching youtube videos that don't come close to really explaining the complicated parts of various tech subjects. So they grab on to something simple like "tensor cores" thinking that's all there is (and not really knowing what they are). Unfortunately, "complicated" doesn't sell in the youtube world. 

    But yeah, the RTX cards and related software have potential to do a lot of cool stuff, but unfortunately you never hear about a lot of it because nobody has posted attention-grabbing videos about them.

  • ebergerly Posts: 3,255
    edited March 2019
    RayDAnt said:
    ebergerly said:

    But RayDAnt, Iray/Studio hasn't implemented the RTX stuff yet, correct? How are present metrics meaningful, other than as an interim set of data that will change in a few months? 

    Because in order to know best where you're going you've gotta understand first where you've been.

    ETA: Also, to be clear, I'm mostly talking about the questions people have been bringing up in this thread as of the last 20 posts or so - not just the last 5 (ie. not RTX related.)

    RayDAnt, one thing you might consider if you haven't already is including something like I did in a spreadsheet I made last year, where I included Price/Performance ratios to see what GPU's give you the most bang for the buck (copy attached). I basically took the newegg price at the time, then chose a base GPU to compare to (a GTX1060), and determined render time improvements in percent. Interesting though how most of the GPU's had a price/performance ratio in a narrow range of around 13-15, but when you get into multiple GPU's you're getting a lower bang for the buck (ie, higher price for a given performance improvement). Which makes sense I suppose, since two GPU's don't cut render times in half. But with single GPU's it was nice with the GTX's, because if you spend more for a better GPU you get a proportional increase in improvement and you're not getting less for more.  

    Though I'm guessing that with the present unfinished state of RTX the ratios of price/performance will be a lot higher. 
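
    For anyone who wants to reproduce that kind of table, here is a minimal sketch of one plausible reading of the calculation described above (the card prices and render times below are placeholders, not the values from the attached spreadsheet):

    ```python
    # Price/performance: pick a baseline card, express every other card's speed as
    # percent render-time improvement over it, then divide price by that percent.
    # Lower numbers mean more bang for the buck. All values here are placeholders.
    cards = {
        # name:        (price_usd, render_seconds)
        "GTX 1060":    (250.0, 600.0),  # baseline card
        "GTX 1080 Ti": (750.0, 220.0),
        "Card X":      (1200.0, 150.0),
    }

    _, base_time = cards["GTX 1060"]

    for name, (price, seconds) in cards.items():
        improvement_pct = (base_time - seconds) / base_time * 100
        if improvement_pct <= 0:
            print(f"{name:12s} baseline")
            continue
        ratio = price / improvement_pct
        print(f"{name:12s} {improvement_pct:5.1f}% faster, price/performance = {ratio:5.1f}")
    ```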

    [Attachment: Iray Benchmark Price Performance.jpg, 786 x 525]
    Post edited by ebergerly on
  • bluejaunte Posts: 1,902
    edited March 2019
    ebergerly said:

    Ah, right. They are calling the whole platform RTX. I was thinking just RT cores when we say RTX.

    Yeah, unfortunately that's one of those very common tech enthusiast myths resulting from a lot of people watching youtube videos that don't come close to really explaining the complicated parts of various tech subjects. So they grab on to something simple like "tensor cores" thinking that's all there is (and not really knowing what they are). Unfortunately, "complicated" doesn't sell in the youtube world. 

    But yeah, the RTX cards and related software have potential to do a lot of cool stuff, but unfortunately you never hear about a lot of it because nobody has posted attention-grabbing videos about them.

    RT cores are the main prize here; there is absolutely a justification to mainly talk about that when you say RTX.  Maybe AI for denoising, but any benchmark would probably disable it unless we can guarantee it has no loss in visual quality, which I have not seen so far, with denoisers always removing fine surface detail. Keep in mind also that mesh shading is listed under rasterization, which as far as I know only happens in game engines and won't be relevant to us Iray folks.

    Although "offering a new shader model for the vertex, tessellation, and geometry shading stages of the graphics pipeline, supporting more flexible and efficient approaches for computation of geometry" sounds great for Iray too laugh

    Post edited by bluejaunte on
  • ebergerly Posts: 3,255
    edited March 2019

    BTW, I just did a quick check and it looks like someone posted a $2,500 Titan RTX render time that gives it an insanely overpriced Price/Performance ratio of twice what you'd expect: instead of the 15 range it's more like 32. I assume this will improve once RTX gets fully implemented.

    And since the GTX 1080ti has been priced out of the market, it has a Price/Performance of 23. 

    So yeah, if the RTX cards get their price/performance down in the 15 range or better that's when I'll get interested. Although it looks like the RTX 2060 has a ratio of 8 unless my calculator is broken. Sweet.  

    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    edited March 2019

    RT cores are the main prize here; there is absolutely a justification to mainly talk about that when you say RTX.  Maybe AI for denoising, but any benchmark would probably disable it unless we can guarantee it has no loss in visual quality, which I have not seen so far, with denoisers always removing fine surface detail. Keep in mind also that mesh shading is listed under rasterization, which as far as I know only happens in game engines and won't be relevant to us Iray folks.

    Again, it's complicated...

    For most of us I presume we'd be VERY interested if our Iray could do realtime and accurate denoising rendering in the 3D viewport, even with some loss in quality. Which is kinda what Blender has with its new Eevee renderer. So you can manipulate the viewport with a realtime Iray response, and all you lose is some image quality. Right now you have to jump thru some hoops to turn off "point at" and do some other settings to improve realtime response, but if you could get realtime response that's much faster without all those settings I'm sure that would be heaven for most of us. So some sort of benchmark for that would be very interesting to most of us I think. In fact for me I think that would be one of the biggest selling points, rather than just render speed. And heck, if they could pull off some realtime cloth simulation with the new Physx simulation stuff that would be wonderful. Kinda like MD. And certainly worthy of some benchmark attention. For me that's the #1 thing I'm hoping for out of RTX. Pure ray tracing not so much... 

    Post edited by ebergerly on
  • bluejaunte Posts: 1,902

    It's not complicated for me. I don't need to know technical details, I need to know how fast stuff is and that's it. I already know that a 2080 TI renders roughly twice as fast as a 1080 TI and costs roughly twice as much. I learned this without having a meticulously researched benchmark or much technical knowledge. And it can only improve further when RTX features are enabled. That's all I needed to know to make a purchasing decision.

    If you are only willing to spend that kind of money for more than twice the performance or some other features, then indeed you will have to wait and see.

  • ebergerly Posts: 3,255
    edited March 2019

    It's not complicated for me. I don't need to know technical details, I need to know how fast stuff is and that's it. I already know that a 2080 TI renders roughly twice as fast as a 1080 TI and costs roughly twice as much.

    Well, actually they cost about the same, $1,200. 

    And BTW, I'm impressed that you found an RTX 2080ti render time in this thread. Honestly it's gotten so complicated with all the new benchmark scenes and extraneous discussions I've given up trying to add stuff to my summary spreadsheet. In fact I'm very impressed that RayDAnt has taken on the task of summarizing all of this.

    Post edited by ebergerly on
  • bluejaunte Posts: 1,902
    edited March 2019
    ebergerly said:

    It's not complicated for me. I don't need to know technical details, I need to know how fast stuff is and that's it. I already know that a 2080 TI renders roughly twice as fast as a 1080 TI and costs roughly twice as much.

    Well, actually they cost about the same, $1,200. 

    A 1080 TI costs 1200 bucks now?? Jesus...

    Wait, what? How is that even possible? Who in their right mind would buy a 1080 TI then? Must have misunderstood you.

    Post edited by bluejaunte on
  • ebergerly Posts: 3,255
    ebergerly said:

    It's not complicated for me. I don't need to know technical details, I need to know how fast stuff is and that's it. I already know that a 2080 TI renders roughly twice as fast as a 1080 TI and costs roughly twice as much.

    Well, actually they cost about the same, $1,200. 

    A 1080 TI costs 1200 bucks now?? Jesus...

    Yeah, no kidding. And if you look at newegg they are actually advertising 1080ti's and they have tons of new ones for sale at $1,200+. Kinda hilarious. I paid like $750 for mine. I even saw a popup ad from them proudly proclaiming the old price was $1,999 and now it's marked down. 

  • bluejaunte Posts: 1,902
    ebergerly said:
    ebergerly said:

    It's not complicated for me. I don't need to know technical details, I need to know how fast stuff is and that's it. I already know that a 2080 TI renders roughly twice as fast as a 1080 TI and costs roughly twice as much.

    Well, actually they cost about the same, $1,200. 

    A 1080 TI costs 1200 bucks now?? Jesus...

    Yeah, no kidding. And if you look at newegg they are actually advertising 1080ti's and they have tons of new ones for sale at $1,200+. Kinda hilarious. I paid like $750 for mine. I even saw a popup ad from them proudly proclaiming the old price was $1,999 and now it's marked down. 

    I'm looking at used ones (thought there were no new ones being sold anymore) and they still go for 600ish, even as low as 400. I don't see how anyone would pay 1200 for a 1080 TI.

  • LenioTG Posts: 2,118
    RayDAnt said:

    For those interested, I have extensive comparative performance statistics on all of the benchmarking scenes so far featured in this thread which address/illustrate/answer most of the questions people have been bringing up about them as of late (I've been assembling them off and on over the last month or so as part of my process of creating an ideal RTX-oriented but still legacy-hardware-friendly scene for general Daz Studio Iray rendering performance benchmarking). Right now the data is spread over multiple barely legible small-print spreadsheets. But once I can get it broken down into digestible chunks (and more easily readable graphs where applicable) I'll start posting it here. Potentially starting later today (look for a post from me with Benchmarking the Benchmarks in large friendly letters at the top of it). :)

    That would be nice, great job! :D

    ebergerly said:

    But RayDAnt, Iray/Studio hasn't implemented the RTX stuff yet, correct? How are present metrics meaningful, other than as an interim set of data that will change in a few months? 

    Who will implement it? Nvidia or Daz developers themselves?

  • outrider42 Posts: 3,679

    How do you define what the average user renders? That is both impossible and illogical. You could ask 50 people and get 50 different answers...and how would you "average" those answers? Any sensible benchmark would still be simple enough for people to use. Just load the scene and press render...how complicated is that? Nobody is saying we need a 10 GB scene of a dense forest that would take most people hours to render. We also need to remember that any benchmark scene needs to use items that are included with Daz Studio and/or primitives.

    A ray tracing focused bench could be nothing but primitives with red, green and white in a Cornell Box. This scene would fit into almost any GPU's VRAM. Cornell Boxes have been ray tracing standards since the 1980's, and nothing RTX does will change that.

    The shader focused scene would be something more like mine, with a Genesis 8 model wearing the Dancer Outfit (because that's one of the only outfits included with Starter Essentials). I tried to update the outfit and hair with some different Iray properties for my scene. I would suggest using chromatic SSS on the skin, as that seems to be more challenging for GPUs to resolve without needing a lot of VRAM. We are ALREADY benefiting from Turing shaders, as the 2080ti proves to be as fast as two 1080ti's put together, as bluejaunte has reminded us. And the 2080ti is only using one THIRD of its capability since Tensor and RT cores are not being used. So wow.

    The 3rd scene can combine these features, placing the G8 model in a 3D environment, one of which is included with Starter Essentials. With some portrait style lighting, the environment would provide a place for light to bounce plus the shading of the surfaces. Again, smart use of the scene can keep this well under 4 GB total so most people can run it. It would be the most demanding of the 3 scenes, which combined tests usually are. Benchmarks for gaming do this all the time. 3DMark has a physics bench for the CPU and a more pure graphics test for the GPU. Then a 3rd combined test uses all of these elements in a combined score. And now that Turing introduces RT cores that are separate from CUDA, it is only logical to test these. After all, future GPUs could handle ray tracing very differently. Future generations could add a ton of RT cores but keep the CUDA cores nearly the same. There are so many things that could change.

    A dForce bench would need a different test of its own, and such a test can be done with primitives and no shaders. Daz already has such a scene for its tutorial on dForce built right in. That tutorial scene could serve as the basis for a dForce bench if people want one.

    The benches I proposed here are all basic elements of any rendering software. RTX on or off does not change the quality of the tests themselves. Nvidia is not doing something totally new; ray tracing has existed for decades. Turing provides dedicated hardware acceleration for this task, and that's all. As I said, Turing's shading ability is already benefiting Daz users. The only reason that old benchmarks could be invalid is if Iray undergoes a dramatic change. For example, if the updated Iray is now much faster, that would throw out all of the old bench times immediately. The new Iray might be faster at only ray tracing, or it might be faster at shading as well. Either way, having the tests in place to bench it makes sense, as the nature of the tests doesn't need to change if they are already set up properly. 

    Sickleyield did an amazing job when she made her bench scene. But that was in 2015 with Daz Studio 4.8. Iray has undergone many changes already in that time, and it is set for the biggest change of its existence yet. There is no speculation in that statement. Nvidia has already confirmed that Daz Studio is getting RTX ray tracing, and no matter what, adding RTX to Iray is a huge change. That's assuming the ray tracing plugin is still Iray. And if it is NOT Iray, well, everything gets thrown out the window regardless. Now for a little speculation: Daz is "due" for a new Genesis figure this year if they hold to their 2 year cycle every past Genesis has had. That means a Summer release for Genesis 9. And what better time to debut the new render plugin and a new version of Daz Studio? So I SPECULATE that all of these will release this Summer, about 3-4 months from today. Daz Studio 4.12 (or maybe even Daz 5.0), Genesis 9, and the new render plugin all this Summer.

    Saying Turing is "not ready for prime time" is also a misnomer. It might be more accurate to say that Iray is not ready for prime time, because so many other rendering options ALREADY have RTX ray tracing support or are getting it very soon. Video games have had RTX for months. Unreal has it, and Unity is getting it in just a few days. RTX was ready for prime day months ago, its the API vendors who need to up their game.

    The Titan has always been the pony car of Nvidia's lineup. The price to performance ratio has never been good for any Titan, so any such comparison will always look silly. They only offer a little more power than the x80ti of the same generation. But as I described before, people don't buy Titans because of their price to performance. They buy Titans because they want the fastest card. With 24 GB of VRAM and a few of Quadro's pro features, the Titan RTX is the fastest GPU in the world. There is nothing that can match it. The Quadro RTX 8000 might come close for $10,000, but it has lower clock speeds.

    EDIT: Oh snap, the Quadro RTX 8000 has dropped in price from $10000 to $5500, a massive price drop over its initial offering. Wow. I am surprised the Titan V is still at $3000, but it does still have major advantages over the newer Titan RTX and even Quadro RTX at specific tasks.

    I'm glad I got my 1080ti's for $500. The second one was $475, and that includes tax and shipping. Which brings up a problem with the price performance chart. Prices change too much, and the chart does not factor used prices. The 1080ti is no longer in production, so its only a matter of time before they are totally gone.

    Titan RTX VS the Quadro lineup in Octane. Also note the massive boost in performance by turning the RT cores on.

     
