Iray Starter Scene: Post Your Benchmarks!


Comments

  • bluejaunte Posts: 1,861
    edited January 2019
    RayDAnt said:

     

    Iteration count is just another way of limiting rendering time (other ways being max time itself, max number of samples, and pixel convergence %); it does not mean you cannot end up with fewer iterations when the scene's pixels converge to their final values sooner.

    What does convergence actually mean? When does a pixel count as converged?

    Each time Iray completes a new iteration of a scene, it goes through and compares the newly computed value of each and every pixel in the current intermediately rendered image to what it was in the iteration just before, and if the two values are within a certain threshold of difference from each other (as defined by the "Rendering Quality" parameter found under Render Settings in Daz Studio) it flags that pixel as "converged". Iray then keeps a running tally of how many pixels in the current intermediately rendered image are flagged as "converged" versus the pixel count as a whole, and uses this ratio ("Rendering Converged Ratio" in Render Settings) as one of the key parameters for determining when a render is complete. It's actually a very elegant way of solving what is technically an unsolvable problem (determining a point at which a theoretically endless process of iterative computation like raytracing can fairly be said to be "complete".)

    So in other words,

    Q: When does a pixel count as converged?

    A: When the value of "Rendering Quality" says so.

    What does that actually mean? The default value of "Rendering Quality" in all versions of DS (afaik) is 10. And I've never come across anyone changing it. Increasing it (it has a max value of 10000...) leads to an increase in per-pixel rendering quality at the cost of a linear increase in rendering time (ie. a render at RQ=10 taking ten minutes to "converge" will take twenty minutes to "converge" at RQ=20.)

    Otherwise there really isn't much more (in terms of concrete things) to say on the matter - other than to point out that convergence % can be used as a valid performance benchmark (as long as people stick to the same value for Iray's "Rendering Quality" parameter.)
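    To put the above bookkeeping in rough pseudo-Python - note that the exact per-pixel difference test Iray uses isn't documented publicly, so the threshold comparison here is just an illustrative assumption:

    import numpy as np

    def converged_ratio(prev_frame, curr_frame, threshold):
        """Flag pixels whose change since the last iteration is under the
        threshold and return the fraction of the image that is 'converged'.
        prev_frame/curr_frame: float arrays of shape (height, width, 3)."""
        # Per-pixel change between the two intermediate images
        delta = np.abs(curr_frame - prev_frame).max(axis=2)
        converged = delta < threshold        # per-pixel "converged" flags
        return converged.mean()              # the "Rendering Converged Ratio"

    # The render loop keeps iterating until this ratio reaches the target
    # (e.g. 0.95 for 95%) or another limit (max time / max iterations) is hit
    # first. A higher "Rendering Quality" value corresponds to a stricter
    # (smaller) threshold, so more iterations are needed before pixels get
    # flagged as converged.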

    What can I say - 3d rendering is a cryptic business...

    Nice explanation, makes sense. So that makes me wonder about one thing. 95% convergence ratio means that potentially different pixels will be converged each time? Is this somewhat random or will the exact same pixels be converged each run? Would it not be theoretically possible that pixels that were much harder to shade than others, say those of a complex skin shader rather than a simple background, would more likely end up being in that 5% portion of not-converged pixels? Also would we not need to use 100% in a benchmark?

    Post edited by bluejaunte on
  • RayDAnt Posts: 1,120
    edited January 2019

    Nice explanation, makes sense. So that makes me wonder about one thing. 95% convergence ratio means that potentially different pixels will be converged each time? Is this somewhat random or will the exact same pixels be converged each run?

    In theory, the rendering process will follow EXACTLY the same processing path each and every time the same scene is rendered under identical hardware/software configurations. However, in practice, due to the sheer number of tiny mathematical calculations going on at a hardware level during a typical rendering process (eg. the least computationally complex benchmarking scene in this thread right now is Sickleyield's, which is design-limited to rendering a 400x520 = 208,000 pixel image to 5000 iterations - ie. performing the complex mathematical task of raytraced rendering 1.04 billion separate times) the reality of quantum mechanics is gonna set in, meaning that somewhere along some of those passes a 1 is gonna erroneously turn into a 0 or vice-versa. Leading (most likely) to a wholly unnoticeable visual difference in the image somewhere during the intermediate phase of the rendering process - but a difference nonetheless.

    This is actually the reason why "professional" Quadro NVidia cards feature (seemingly unnecessarily) higher-cost ECC (error-correcting code) memory. You can have the best-performing silicon chip imaginable on your hands, and you're still gonna end up with occasional bit-level errors once you start pushing astronomically large numbers of computations through it. I'd liken it to a quantum-mechanical margin of error.

     

    Would it not be theoretically possible that pixels that were much harder to shade than others, say those of a complex skin shader rather than a simple background, would more likely end up being in that 5% portion of not-converged pixels?

    Absolutely. Within the quantum-mechanical margin of error mentioned above, it's always gonna be the same pixels for a particular scene (using the same render parameters) that take the longest to resolve.

     

    Also would we not need to use 100% in a benchmark?

    Only if your goal is to benchmark using 100% convergence (rather than a certain amount of time or iterations) as your limiting factor (remember: one-factor experimental designs are best.) Or to eliminate convergence as a limiting factor on a scene altogether if eg. your goal is to benchmark to a certain number of iterations regardless of all else. Although there is a much more elegant way to do this: set "Rendering Quality Enable" in Render Settings to OFF - this will deactivate the whole mechanism of convergence tracking in Photoreal Iray completely. Which could actually help performance, since Word of God has it that:

    The convergence quality estimate has some computational cost. It is only computed after some initial number of samples to ensure a reasonably meaningful estimate. Furthermore, the estimate is only updated from time to time.  http://raytracing-docs.nvidia.com/iray/manual/index.html#iray_photoreal_render_mode#5095

     

    ETA:

    Another thing to consider is that playing around with less than 100% convergence values but larger (or smaller) than default per-pixel "Rendering Quality" values could lead to better "tailored" rendering for specific scenes. But I wouldn't see this being a useful place to experiment in the context of a benchmark.

    Post edited by RayDAnt on
  • bluejaunte Posts: 1,861
    edited January 2019

    Right. So in any case, which method you use in the benchmark doesn't matter that much. If the Iray version changes, then anything could screw up the results. An iteration could be doing different calculations, quality settings could have been changed, or even what is considered converged may be different. If only Iray had an actual benchmark mode that would hopefully be kept "fair" across versions.

    Post edited by bluejaunte on
  • fred9803 Posts: 1,559

    I suppose it only matters if people are using the results in this thread to decide which GPU to buy.

    We've got 1080ti results ranging from 2 minutes to 8 minutes for the SY scene, and an RTX 2070 clocking in at 1 minute 49.11 seconds, whereas my RTX 2080 took more than 6 minutes, which was pretty much what I expected.

    You're right about different views on convergence. If I had stopped at 95% like some people I could have claimed 3.6 minutes instead of 6+. What we have here are not "benchmarks".

  • bluejaunte Posts: 1,861
    fred9803 said:

    I suppose it only matters if people are using the results in this thread to decide which GPU to buy.

    We've got 1080ti results ranging from 2 minutes to 8 minutes for the SY scene, and an RTX 2070 clocking in at 1 minute 49.11 seconds, whereas my RTX 2080 took more than 6 minutes, which was pretty much what I expected.

    You're right about different views on convergence. If I had stopped at 95% like some people I could have claimed 3.6 minutes instead of 6+. What we have here are not "benchmarks".

    You need to at least use the same settings as everyone else, obviously.

  • fred9803 Posts: 1,559

    True bluejaunte! Perhaps the ground rules should have been stated on the first page.

  • RayDAnt Posts: 1,120
    edited January 2019
    fred9803 said:

    True bluejaunte! Perhaps the ground rules should have been stated on the first page.

    Yeah, the first rule of benchmarking is NEVER CHANGE ANYTHING away from how the original author configured it. Otherwise your results won't necessarily be comparable to others' (which defeats the whole purpose of a benchmark in the first place.)

    And if there are configuration options (like OptiX Prime Acceleration On/Off) which exist outside what a preconfigured scene can control, you make sure to mention which way you had it when sharing results.

    Post edited by RayDAnt on
  • bluejaunte Posts: 1,861
    fred9803 said:

    True bluejaunte! Perhaps the ground rules should have been stated on the first page.

    The benchmark scene configures these settings. Just don't change anything and hit render.

  • RayDAnt Posts: 1,120
    edited January 2019
    fred9803 said:

    I suppose it only matters if people are using the results in this thread to decide which GPU to buy.

    Once I finish doing comparative testing of my Titan RTX (it's taking forever because my only other reference system is a 1050-based Surface Book...) I was entertaining the idea of doing a deep-dive of the entire thread to see how much of an exhaustive list of comparable benchmarking stats for different GPUs/CPUs I could scrounge up. Do you think it's worth the effort? I'm too focused on my own stats at the moment to give it a quick perusal (I was figuring that with 34+ pages of posts there's gotta be some useful content in there...)

    Post edited by RayDAnt on
  • fred9803 Posts: 1,559

    Yeh, when I see a 1070 Ti coming in at < 3 minutes I go ...mmmmm.

  • RayDAnt said:
    fred9803 said:

    I suppose it only matters if people are using the results in this thread to decide which GPU to buy.

    Once I finish doing comparative testing of my Titan RTX (it's taking forever because my only other reference system is a 1050-based Surface Book...) I was entertaining the idea of doing a deep-dive of the entire thread to see how much of an exhaustive list of comparable benchmarking stats for different GPUs/CPUs I could scrounge up. Do you think it's worth the effort? I'm too focused on my own stats at the moment to give it a quick perusal (I was figuring that with 34+ pages of posts there's gotta be some useful content in there...)

    You could take volunteers to split up the pages. I would be willing to take a few. We'd just need to establish what info to pull from the plethora of posts and give to you, or whoever. You could start another thread for the volunteers to pool their info in a more concise place, and for any questions regarding the process of cataloging the info.

    Final results could initially be compiled into something simple like a notepad file, or someone craftier than I could make an Excel spreadsheet with headers for all the info that's needed, which everyone could use as a template to fill in their portion and then return to you, or whomever, for compiling the data.

    Just a thought.

  • fred9803 said:

    True bluejaunte! Perhaps the ground rules should have been stated on the first page.

    The benchmark scene configures these settings. Just don't change anything and hit render.

    As long as Edit>Preferences>Scene isn't set to ignore render settings.

  • Aala Posts: 140
    edited January 2019
    fred9803 said:

    I suppose it only matters if people are using the results in this thread to decide which GPU to buy.

    We've got 1080ti results ranging from 2 minutes to 8 minutes for the SY scene, and an RTX 2070 clocking in at 1 minute 49.11 seconds, whereas my RTX 2080 took more than 6 minutes, which was pretty much what I expected.

    You're right about different views on convergence. If I had stopped at 95% like some people I could have claimed 3.6 minutes instead of 6+. What we have here are not "benchmarks".

    Can you try out my scene? Either the 1000-iterations one or the 2-minutes one, though both would be best. Of all the scenes it's designed to take the least amount of time to start. It's on the last page, attached to my post with the images.

    Post edited by Aala on
  • prixat Posts: 1,585

    As a rule of thumb, the only proper way to do every benchmark run is to:

    1. Launch DAZ Studio
    2. If your viewport was set to Iray instead of Textured, put it back to Textured, close DAZ Studio, and launch it again so it starts in Textured mode.
    3. Load benchmark scene.
    4. Don't change any settings.
    5. Hit render and don't touch PC until done.

    After render is done, take the rendering time from the troubleshooting log. Doing anything else invalidates your result.

    It goes without saying that your PC should not be running other tasks in background while rendering (not even a web browser).

    It was decided a few pages back that we are not measuring motherboard and memory settings. Therefore the viewport SHOULD be set to Iray to minimise the effect of differing systems and their differing pre-render transfer times. At the very least ignore the first render and record the 2nd render time.

  • RayDAnt Posts: 1,120

    As a rule of thumb, the only proper way to do every benchmark run is to:

    1. Launch DAZ Studio
    2. If your viewport was set to Iray instead of Textured, put it back to Textured, close DAZ Studio, and launch it again so it starts in Textured mode.
    3. Load benchmark scene.
    4. Don't change any settings.
    5. Hit render and don't touch PC until done.

    After render is done, take the rendering time from the troubleshooting log. Doing anything else invalidates your result.

    It goes without saying that your PC should not be running other tasks in background while rendering (not even a web browser).

    You should also save (and then clear) the contents of DS's log file along with the final test render EACH AND EVERY TIME you run a benchmark. That way you have insurance against needing to run the entire benchmark again in case new questions come up about its results.

    You should also be running each separate trial a minimum of 2-3 times and averaging their results together (or retesting one if its results vary widely from the others in some way) to eliminate outliers (have I mentioned how long it's been taking me to do my benchmarks? Now you know why...)
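    If it helps, that averaging/retest step can be as simple as the little sketch below (the 10% spread cutoff is just an arbitrary example, not anything Iray- or DS-specific):

    from statistics import mean

    def summarize_trials(times_in_seconds, max_spread=0.10):
        """Average repeated runs of one benchmark and flag the set for
        retesting if any single run strays too far from the mean."""
        avg = mean(times_in_seconds)
        worst = max(abs(t - avg) / avg for t in times_in_seconds)
        return avg, worst > max_spread   # (average, needs_retest)

    # Example: three runs of the same scene on the same hardware
    print(summarize_trials([95.9, 96.4, 95.7]))   # -> (~96.0, False)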

  • bluejaunte Posts: 1,861
    fred9803 said:

    True bluejaunte! Perhaps the ground rules should have been stated on the first page.

    The benchmark scene configures these settings. Just don't change anything and hit render.

    As long as Edit>Preferences>Scene isn't set to ignore render settings.

    Oh right, forgot that existed.

  • RayDAnt said:
    artphobe said:

    Used GTX 1080 vs new RTX 2060?

    2060 has way fewer cuda cores but it has tensor stuff.

    Which is a better buy?

    Keep in mind, Turing-era Cuda cores are significantly more powerful than Pascal ones. With that said, 2060s only go up to 6GB of VRAM, which imo is pretty low for modern workloads - especially for cg rendering.

    how about a 2070 vs a 1080 ti

    2304 vs 3584 cuda cores

    but the 2070 has 288 Tensor cores and 36 RT cores. Which ones does Iray utilize? And how does one of those cores compare to a Cuda core in terms of performance?

  • RayDAnt Posts: 1,120
    edited January 2019
    artphobe said:

    how about a 2070 vs a 1080 ti

    2304 vs 3584 cuda cores

    And don't forget - the 1080 ti gives you 11GB of VRAM vs 8GB for the 2070. And in the 3d render game not enough vram is a much worse problem than not enough speed (since speed can always be made up for by spending more time on a render. Lack of VRAM means no GPU acceleration whatsoever.)

     

    artphobe said:

    but the 2070 has 288 Tensor cores and 36 RT cores. Which ones does Iray utilize?

    Right now the only thing Iray fully utilizes on RTX (aka Turing based) cards is Cuda cores. Although the Cuda cores on Turing cards follow the same new multi-path design as the ones found on Nvidia's latest gen professional Volta GPUs - meaning that one Turing Cuda core is worth (depending on the type of workload) somewhere in the ballpark of 1.5 Pascal Cuda cores. So it might be more helpful to think of the 2070 vs 1080ti comparison (as things stand right now with no RT/Tensor core support) as being between 3456 vs. 3584 Cuda cores.

     

    And how does one of those cores compare to a Cuda core in terms of performance?

    Tensor cores are not directly applicable to speeding up raytracing/traditional graphics rendering because they are ASICs which accelerate parallel processing of matrix math and convolution operations specifically. Neither of which are particularly useful capabilities in speeding up graphics rendering processes, since graphics rendering tends to be about doing a bunch of distantly related small tasks quickly (eg. calculating the values of separate pixels in a rendered scene.) Whereas Tensor cores are all about doing a bunch of closely related large tasks quickly (eg. evaluating - in large groups at a time - the values of already rendered pixels in order to come up with estimations of what intermediate pixels could look like - aka DLSS.)

    Technically speaking, RT Cores aren't directly applicable to speeding up the actual graphics rendering process either (all of the actual graphics rendering done in those fully raytraced demos Nvidia has been playing at trade shows is still technically being done entirely on Cuda cores.) RT cores are also ASICs (like Tensor cores) except this time they are designed to do one thing and one thing only: determine which direction a single ray of eg. light should go (an astronomically computationally complex task) so that a Cuda/CPU core can then concentrate on the relatively straightforward task of rendering a pixel value from it.

    For an idea of what it could mean for DS/Iray rendering performance if/when full RT core support comes, take DS/Iray's implementation of OptiX Prime as an example. OptiX Prime is an entirely software-based, hardware-agnostic addon library to Nvidia's general purpose pathtracing API OptiX which specializes in doing one thing and one thing only - determining which direction a single ray of eg. light should go so that a Cuda/CPU core can then concentrate on the relatively straightforward task of rendering a pixel value from it. In other words, a single instance of OptiX Prime serves exactly the same function as an RT core. The only key difference between them being that instances of OptiX Prime take up precious clock cycles of general purpose Cuda/CPU core processing power, whereas RT cores are able to function at a much higher level of comparative performance and with no ill effects on any other parts of the system since they exist as self-contained hardware units embedded on the GPU substrate itself.

    So in a hypothetical version of Iray with full RT core support, it isn't inconceivable to think that you could see a Pascal era "OptiX Prime acceleration enabled" level increase in raytracing performance (which my tests on a 10xx series card put at around 20%) compounded in direct proportion to the number of RT cores on your card. So on a graphics card with 1 RT core, a 100 minute render without RT core support could take as little as 80 minutes with it included (25% better performance.) Likewise on a card with 2 RT cores it could take as little as 100 - 20% of 100 = 80 - 20% of 80 = 64 minutes (56% better performance.) While on a card like the 2070, with its 36 RT cores, a 100 minute render could take as little as 100 - 20% (repeated recursively 36 times) ≈ 0.0325 minutes aka about 1.95 seconds (which would amount to a raytracing calculation performance increase of roughly 308,000%...)

    While this last number may seem completely unrealistic (yes, I meant to write roughly 308,000%) remember that the only thing actually being accelerated here is the initial step of the overall raytracing process itself. Which, while historically the most time-consuming part of the process, isn't the only slowdown point. For example, Cuda cores are still gonna take the same amount of time as before to render the final values of pixels. It's just that they will no longer be stuck constantly having to help make/wait for the delivery of fresh path data so that they can finish those pixels. They can just get what they need ready-made and in real time from the RT cores, and be about their business. If you were to venture an estimate as to what the real-world performance increase in all of this would be, I'd say about an order of magnitude - so about 10x better performance in raytraced DS/Iray rendering compared to what's currently seen on Turing cards today.
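    For what it's worth, the back-of-the-envelope compounding above boils down to a couple of lines of Python (the 20% per-core figure is the assumed OptiX Prime-style gain from the 10xx testing mentioned earlier, not a measured RT core number):

    def estimated_minutes(base_minutes, rt_cores, gain_per_core=0.20):
        """Apply the assumed per-RT-core speedup recursively, as in the
        rough estimate above."""
        return base_minutes * (1.0 - gain_per_core) ** rt_cores

    for cores in (1, 2, 36):
        t = estimated_minutes(100, cores)
        print(cores, round(t, 4), f"{100 / t - 1:.0%} faster")
    # 1 RT core   -> 80.0 min   (25% faster)
    # 2 RT cores  -> 64.0 min   (56% faster)
    # 36 RT cores -> ~0.03 min, i.e. roughly 2 seconds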

    Post edited by RayDAnt on
  • whiteseal Posts: 2
    edited January 2019

    A random result of Aala's benchmark scene rendered on GTX 970 for reference. Tested with DAZ Studio 4.10.

     

    OptiX Prime Acceleration disabled

    • 1000 iterations: 5 minutes 52.39 seconds
    • 2 minutes test: 328 iterations

     

    OptiX Prime Acceleration enabled

    • 1000 iterations: 5 minutes 16.22 seconds
    • 2 minutes test: 364 iterations
    2mins.png
    900 x 900 - 2M
    2mins (OptiX).png
    900 x 900 - 2M
    Post edited by whiteseal on
  • RayDAnt Posts: 1,120
    edited January 2019
    whiteseal said:

    A random result of Aala's benchmark scene rendered on GTX 970 for reference. Tested with DAZ Studio 4.10.

    1000 iterations: 5 minutes 52.39 seconds

    2 minutes test: 328 iterations

    Is this with or without OptiX Prime acceleration enabled? Because it's useless info to the rest of us without that little factoid.

    Post edited by RayDAnt on
  • RayDAnt Posts: 1,120
    edited January 2019

    [see next post]

    Post edited by RayDAnt on
  • RayDAnt said:
    whiteseal said:

    A random result of Aala's benchmark scene rendered on GTX 970 for reference. Tested with DAZ Studio 4.10.

    1000 iterations: 5 minutes 52.39 seconds

    2 minutes test: 328 iterations

    Is this with or without OptiX Prime acceleration enabled? Because it's useless info to the rest of us without that little factoid.

    Forgot to mention, that's without OptiX Prime acceleration.

    I've added updated results with acceleration enabled to my reply above.

  • RayDAnt Posts: 1,120
    edited January 2019
    prixat said:

    It was decided a few pages back that we are not measuring motherboard and memory settings. Therefore the viewport SHOULD be set to Iray to minimise the effect of differing systems and their differing pre-render transfer times. 

    Fyi if you want to know just the pure Iray rendering time (minus any overhead from the DS app or other system components like RAM/HDD, or even initialization processes in Iray itself) just check your log file (once the render has fully completed or you have closed the render window) for one or more of the following lines, depending on which render devices you have activated:

    2019-01-22 03:43:51.465 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CUDA device 0 (TITAN RTX): 943 iterations, 0.258s init, 94.652s render
    2019-01-22 03:43:51.465 Iray INFO - module:category(IRAY:RENDER): 1.0 IRAY rend info : CPU: 57 iterations, 0.093s init, 95.256s render

    The init and render values in those lines come straight from Iray itself, whereas values like:

     2019-01-22 03:43:10.135 Total Rendering Time: 1 minutes 35.94 seconds

    Come from Daz Studio, and consequently include all slowdowns stemming from your overall system.

     

    By the way, if you want to know how much of a discrepancy there is between the time it takes Iray itself to finish a scene versus what DS reports in general, just turn the DS statistic into seconds and subtract the sum of the largest init + render value pair reported by Iray from it. So in this case, that would mean:

    1 minutes 35.94 seconds = 95.940s

    0.258s + 94.652s = 94.910s while 0.093s + 95.256s = 95.349s

    95.349s > 94.910s

    Therefore

    95.940 - 95.349 = 0.591 seconds

    For the record, in my testing I've yet to see a difference between them that was greater than about 2.5%. And anything less than 5% is usually regarded as statistically insignificant. So I wouldn't go too crazy about it. For most intents and purposes the longer time reported by Daz Studio (as hours minutes seconds while you render/afterward in the log file) is all that's needed. The one exception would be if two different render test cases (eg. with/without OptiX Prime enabled) only differ by a couple of seconds. Then you might wanna examine these numbers more closely.
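    And if you'd rather not do that subtraction by hand, a few lines of Python run against the log file will do it - note the regular expressions below only assume the two line formats quoted above:

    import re

    # One "Xs init, Ys render" pair appears per active render device (GPU/CPU)
    DEVICE_TIMES = re.compile(r"([\d.]+)s init, ([\d.]+)s render")
    TOTAL_TIME = re.compile(r"Total Rendering Time: (?:(\d+) minutes? )?([\d.]+) seconds")

    def ds_overhead_seconds(log_text):
        """Total Rendering Time (from Daz Studio) minus the slowest
        init+render pair reported by Iray itself."""
        device_totals = [float(i) + float(r) for i, r in DEVICE_TIMES.findall(log_text)]
        m = TOTAL_TIME.search(log_text)
        total = int(m.group(1) or 0) * 60 + float(m.group(2))
        return total - max(device_totals)

    Fed the log excerpt above, it should come out to the same ~0.59 seconds as the manual calculation.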

     

    If in both cases the rendering time does not include asset loading and geometry processing, then how is rendering 1.5x faster when the viewport is in Iray mode?

    Daz Studio normally uses Iray in a minimum-time-to-convergence optimized rendering mode called batch scheduling. However if you activate Iray in a viewport, it instead switches Iray into a rapid-progressive-feedback optimized rendering mode (for obvious reasons) called interactive scheduling (see pages 22-24 of the official "The Iray Light Transport Simulation and Rendering System" whitepaper via the official Iray dev blog.) Ergo the observed differences in rendering rates. And also the reason why you should definitely NOT have Iray enabled in a viewport while performing benchmarks - it introduces lots of extra hardware calls into the rendering pipeline and makes your log files much harder to read. It also demonstrably doesn't eliminate the factor of differing pre-render transfer times, since the log file clearly states that time is still spent processing scene contents at the start of every render.

     

    Maybe someone from DAZ could clarify this behavior?

    In their defense, the very reason why I know the above is true is because it is explicitly stated in the log file. Activating Iray for a final render gives you this:

    2019-01-24 01:20:57.214 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Using batch scheduling, caustic sampler disabled

    Whereas activating Iray in a viewport gives you this:

    2019-01-24 01:24:38.942 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Using interactive scheduling, architectural sampler unavailable, caustic sampler disabled

     

    Imo one of the coolest things about Daz Studio is how it enables people with little or no computer graphics/programming experience to create really cool photorealistic 3d art pretty much immediately. At the same time, peeking under the hood can be a bit... overwhelming.

    Post edited by RayDAnt on
  • RayDAnt Posts: 1,120
    edited January 2019

    > ...Come from Daz Studio, and consequently include all slowdowns stemming from your overall system.

    As I have shown in the calculation in my post the Total Rendering Time does not include other tasks. We agree on other points.

    Compare the Total Rendering Time (converted into seconds) to the longest init and render times reported by Iray in a log file for that same render and you will see that Total Rendering Time is always slightly longer. Eg:

    2019-01-21 04:54:12.987 Finished Rendering

    2019-01-21 04:54:13.017 Total Rendering Time: 11 minutes 57.43 seconds

    [...]

    2019-01-21 13:53:46.227 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Device statistics:

    2019-01-21 13:53:46.227 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1050): 859 iterations, 0.743s init, 714.705s render

    2019-01-21 13:53:46.237 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CPU: 141 iterations, 0.232s init, 716.240s render

    11 minutes 57.43 seconds = 717.430 total seconds

    0.743s init, 714.705s render = 715.448 total seconds

    0.232s init, 716.240s render = 716.472 total seconds

     717.430 > 716.472 > 715.448

    And 717.430 - 716.472  = 0.958 seconds

    This is because Total Rendering Time comes from Daz Studio (rather than directly from Iray) and includes time spent doing things like processing the scene for each render device, calculating light and object geometry and loading assets like textures into graphics memory (if applicable.) Again, this isn't to say that Total Rendering Time isn't a perfectly adequate value all by itself for most benchmarks. However, if you need to get really precise about it, the way to know how long Iray (separately from the rest of Daz Studio) takes to render something is by looking at the stats already designed to tell you that (the per-device init and render values above) - and certainly not by introducing alternate Iray rendering modes into the mix by enabling Iray in a viewport.

    Post edited by RayDAnt on
  • RayDAnt Posts: 1,120
    edited January 2019

    @RayDAnt:

    You are not looking at the correct lines in the log -- look at the times next to "Rendering Image" and "Finished Rendering". Difference between those two is "Total Rendering Time".

    Also, find "Rendering Image" in the log and you will see that the geometry and light setup and asset loading are BEFORE that line and are not included in Total Rendering Time.

    Daz Studio scene processing, geometry, light setup and asset loading for the viewport occur between where

    2019-01-25 04:11:24.650 *** Scene Cleared ***

    and

    2019-01-25 04:11:52.788 Rendering image

    appear in the log file.

    Iray asset loading, geometry, rendering device initialization (which in the case of GPUs includes an additional round of scene processing, light hierarchy and work space allocation initialization), rendering, and finally writing the final rendered image to a temp directory in your AppData folder all occur after this point, but before

    2019-01-25 04:38:05.099 Finished Rendering

    Which is why the Total Rendering Time reported by Daz Studio is always longer than the longest pair of init/render time statistics separately reported by Iray for each active hardware rendering device once the render is complete.

    So if knowing the exact length of the Iray rendering process itself (ie. without any additional processing time introduced by assets moving through the rest of your system) is your goal, the Iray stats for init/render are what you're gonna use (and not Total Rendering Time since it includes a variety of other things besides just the actual Iray rendering process in its timespan.)
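    In other words, given the three timestamps above you can split the log into those two phases directly - this is just ordinary timestamp arithmetic, nothing DS-specific:

    from datetime import datetime

    def phase_durations(scene_cleared, rendering_image, finished_rendering):
        """Seconds spent in the two phases bracketed by the log lines above:
        DS-side scene/viewport preparation, then everything DS counts as
        Total Rendering Time."""
        fmt = "%Y-%m-%d %H:%M:%S.%f"
        t0, t1, t2 = (datetime.strptime(t, fmt)
                      for t in (scene_cleared, rendering_image, finished_rendering))
        return (t1 - t0).total_seconds(), (t2 - t1).total_seconds()

    print(phase_durations("2019-01-25 04:11:24.650",
                          "2019-01-25 04:11:52.788",
                          "2019-01-25 04:38:05.099"))
    # -> (28.138, 1572.311)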

    Post edited by RayDAnt on
  • Just got my RTX 2070, and here are the test results.

     

    SickleYield's Test

    • GTX 970
      DAZ Studio 4.10, OptiX Prime Acceleration On: 4 minutes 30.12 seconds
       
    • RTX 2070 + GTX 970
      DAZ Studio 4.11 Beta, OptiX Prime Acceleration Off: 1 minute 45.3 seconds

     

    DAZ_Rawb's Test

    • GTX 970
      DAZ Studio 4.10, OptiX Prime Acceleration On: 12 minutes 9.55 seconds
       
    • RTX 2070 + GTX 970
      DAZ Studio 4.11 Beta, OptiX Prime Acceleration Off: 5 minutes 13.52 seconds

     

    outrider42's Test

    • GTX 970
      DAZ Studio 4.10, OptiX Prime Acceleration On: 11 minutes 33.58 seconds
       
    • RTX 2070 + GTX 970
      DAZ Studio 4.11 Beta, OptiX Prime Acceleration Off: 6 minutes 26.48 seconds

     

    Aala's Test

    • GTX 970
      DAZ Studio 4.10, OptiX Prime Acceleration On:
      1000 iterations: 5 minutes 16.22 seconds
      2 mins test: 364 iterations
       
    • RTX 2070 + GTX 970
      DAZ Studio 4.11 Beta, OptiX Prime Acceleration Off:
      1000 iterations: 1 minute 47.13 seconds
      2 mins test: 1150 iterations

     
