Iray Starter Scene: Post Your Benchmarks!


Comments

  • Aala Posts: 140
    edited January 2019

    Damn, that's a big difference between Optix Prime On and Off.

    EDIT: Ah sorry, just realized you used two cards on the Optix Prime Off tests.

    Post edited by Aala on
  • RayDAnt Posts: 1,120

    @RayDAnt:

    Perhaps it would be a good moment to start a new "Iray Benchmarks 2019" thread with clear rules for posting results and a new scene to boot?

    As it happens, I've actually been spending the afternoon writing up a comprehensive benchmarking guide (with things like all the current resources available pooled together and detailed explanations of what to record and why) and was gonna post it here later today. Then, depending on what other people think, it could be perfect as the start for a new thread.

  • RayDAnt Posts: 1,120
    edited February 2019

    RayDAnt’s Guide to Daz Studio Benchmark Testing Methodology

    Author’s Note: This guide is meant to be an exhaustive resource for those interested in getting the most precise results possible from benchmarking in Daz Studio. If some parts (such as the Performing a Single Test Pass section found below) seem overly nitpicky to you, that is because they are overly nitpicky by design. Not everything here must be followed to the letter in order for you to get accurate results. If a single test pass of a benchmarking scene on your new GPU gives you a result that is reasonably in line with what others with the same piece of hardware have reported in the past, there is no need to test it a dozen or more times over just to be sure. Chances are it's accurate. However, if superfluous testing is your thing then this guide should be right up your alley.

    Feel free to ask questions, point out errors, etc.

    PS: According to my preview view of this post all those tables have borders, so... yeah.

     

    Before You Start

    A. Configuring Daz Studio: 

    In order for your results to be comparable to others', make sure that the following configuration options in whichever version of Daz Studio you are testing are set to their default values.

    • Edit -> Preferences -> Scene -> Backdrop Color: -> Ignore when opening a Scene file = Unchecked (default)
    • Edit -> Preferences -> Scene -> Backdrop Image: -> Ignore when opening a Scene file = Unchecked (default)
    • Edit -> Preferences -> Scene -> Render Settings: -> Ignore when opening a Scene file = Unchecked (default)
    • Edit -> Preferences -> Interface -> Miscellaneous -> Multi-Threading = On (default)
    • Render -> Render Settings -> Editor tab -> Optimization -> Instancing Optimization = Speed (default)

    B. Choosing a Benchmarking Scene: 

    The following is a chronological list of all the popular benchmarking scenes for Daz Studio. Although any scene can technically be used as a benchmark, these scenes have the advantage of being reliant on only core content included within Daz Studio itself. This makes them of universal use for all Daz users. Ideally you should benchmark with every one of them (since each one measures performance in a slightly different way due to differences in scene composition and pre-configured render settings.) However if time/effort becomes an issue, start with the first (oldest) one in the list and work your way forward (down.) That way you have the most existing data to compare it to should you end up stopping short.

    1. SickleYield's Iray Scene For Time Benchmarks (found here)
    2. Outrider42's Iray Test 2018 B (found here)
    3. DAZ_Rawb's Iray Render Test 2 (found here)
    4. Aala's Iray Cornell Box 1k Iteration Limit (found here)
    5. Aala's Iray Cornell Box 2 min Time (found here)

     

    Benchmarking

    A. Selecting Test Conditions: 

    Generally speaking there are two rendering options not controlled by a scene whose configuration must be tracked during the rendering process in order to draw meaningful conclusions from the results. They are 1) Render Device(s) and 2) OptiX Prime Acceleration (both located in the “Photoreal Devices” area found under the Advanced -> Hardware tab of the Render Settings pane.) In a typical DS-friendly system with a single CUDA-capable GPU, this means that there are six different test configurations (officially known as Test Conditions) to choose from for any given pass of a benchmark. These Test Conditions are:

    Test Condition #  Render Device(s)  OptiX Prime Acceleration
    01                CPU Only          Enabled
    02                CPU Only          Disabled
    03                GPU Only          Enabled
    04                GPU Only          Disabled
    05                CPU + GPU         Enabled
    06                CPU + GPU         Disabled

     

    So a full set of test results for a single benchmark of a typical single-GPU system would look something like this:​

    [Benchmark Description]:
                    OptiX Enabled          OptiX Disabled

    CPU Only        Condition #01 Results  Condition #02 Results
    GPU Only        Condition #03 Results  Condition #04 Results
    CPU + GPU       Condition #05 Results  Condition #06 Results

     

    This can add up to A LOT of testing. Especially in systems with multiple GPUs (a double-GPU system works out to be fourteen separate Test Conditions, while a triple-GPU system works out to be THIRTY separate Test Conditions!!!) Therefore it is essential to consider which Test Conditions are the most important ones to cover in situations where covering all available Test Conditions simply isn't practical (eg. as is often the case with CPU Only benchmarking due to it taking too long.) CPU + GPU rendering tests, while technically giving the best rendering performance a computer system is able to achieve, are of little practical use to most people interested in detailed rendering performance benchmarks, since most of them are looking to upgrade individual components in already existing self-built systems. Similarly, CPU Only rendering benchmarks, while useful to those just starting out with non-graphics-dedicated hardware, are also of little practical use to most interested parties, since even the cheapest modern GPUs are significantly faster at 3d rendering than all but the most expensive modern CPUs. Therefore benchmarking runs should be completed to cover Test Conditions in the following order of priority (a short sketch of where those Test Condition counts come from follows the list below):

    1. GPU Only passes (Test Condition #3-4 above) - applies to each GPU separately in a multi-GPU system
    2. CPU Only passes (Test Condition #1-2 above)
    3. Multiple-device passes (Test Condition #5-6 above)
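
    For anyone wondering where the six/fourteen/thirty Test Condition counts above come from, here is a minimal Python sketch (just an illustration, not part of the testing procedure) that enumerates every non-empty combination of render devices and pairs it with the two OptiX Prime states:

      from itertools import combinations

      def test_conditions(gpu_count):
          """Enumerate (device set, OptiX state) pairs for a system with gpu_count GPUs."""
          devices = ["CPU"] + ["GPU" + str(i + 1) for i in range(gpu_count)]
          conditions = []
          # Every non-empty subset of render devices...
          for size in range(1, len(devices) + 1):
              for combo in combinations(devices, size):
                  # ...paired with OptiX Prime Acceleration enabled and disabled.
                  for optix in ("Enabled", "Disabled"):
                      conditions.append((" + ".join(combo), optix))
          return conditions

      for gpus in (1, 2, 3):
          print(gpus, "GPU(s):", len(test_conditions(gpus)), "Test Conditions")
      # Prints 6, 14, and 30 respectively - matching the counts above.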

    B. Preparing Your System: 

    Each time you sit down to do a session of tests with Daz Studio, it is imperative that you go through the following checklist of things in order to make sure that your results are truly accurate.

    1. Save the benchmarking scenes listed above somewhere on your computer. Ideally to an easily accessed folder where you can also save benchmarking results.
    2. Freshly reboot your computer.
    3. Bring up task manager so that you can monitor system load throughout the testing process.
    4. Close down any programs (eg. Cloud drive syncing apps, Steam, etc.) which normally run in the background of your system that may unexpectedly hog your system resources.
    5. Open a File Explorer window and navigate to “c:\Users\<YourUserName>\AppData\Roaming\DAZ 3D\Studio4” (or “c:\Users\<YourUserName>\AppData\Roaming\DAZ 3D\Studio4 Public Build” if you are testing with the Beta release.)
    6. Open the file “log.txt” in any text editor, delete everything inside, and then save/close the file. Keep this File Explorer window open as you will need to access the log file repeatedly during the actual testing process. (A small scripted shortcut for steps 5 and 6 is sketched just after this checklist.)
    7. Wait until general system load (CPU usage, etc.) in task manager has dropped down to a consistently low level. Then proceed to part C.
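
    If you'd rather not do steps 5 and 6 by hand every session, here is a minimal Python sketch (an optional convenience, not a requirement) that empties the Daz Studio log file. It assumes the default log location under %APPDATA% mentioned in step 5 - swap in "Studio4 Public Build" if you are testing the Beta:

      import os

      # Default Daz Studio log location (use "Studio4 Public Build" for the Beta).
      log_path = os.path.join(os.environ["APPDATA"], "DAZ 3D", "Studio4", "log.txt")

      # Truncate the log so the next test pass starts with a clean slate.
      with open(log_path, "w", encoding="utf-8"):
          pass

      print("Cleared:", log_path)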

    C. Performing a Single Test Pass:

    1. Launch the version of Daz Studio you wish to test.
    2. Open your chosen benchmarking scene.
    3. Go to the Render Settings -> Advanced -> Hardware tab and make sure that Render Device and OptiX Prime Acceleration are properly configured for your currently chosen Test Condition.
    4. Press Render, and while keeping half an eye on task manager for signs of any suspicious activity, let it run until final completion.
    5. DO NOT under any circumstances interrupt or cancel the rendering process before it fully completes. Doing so will invalidate the test, meaning that you will have to start the current pass over again for it to be included in your final results.
    6. Once the render is finished, go to the completed image's window and save it together with your other benchmarking files as eg. "Sickleyield Benchmark Test Condition 01 Pass A.png" so that you don't drive yourself crazy later (you WILL forget which thing is which.)
    7. Close Daz Studio. Never press "Yes" when DS asks whether you wish to save changes to the original benchmarking scene. Doing so will most likely invalidate any benchmarks you perform with it later on. Benchmarking files must be kept exactly as they were when first downloaded in order to still function as valid benchmarks.
    8. Go to the File Explorer window still open from earlier and reopen the file "log.txt".
    9. Go to the very end of the Log file and look for something like this:
      2019-01-26 18:26:50.641 --------------- DAZ Studio 4.11.0.236 exited ------------------
      2019-01-26 18:26:50.641 ~
      If you don't see this, close the log file and wait until Daz Studio fully disappears from the list of running apps in task manager. Then reopen the Log file again and check once more. If you still don't see something like these two lines, there was an error during the rendering process and you will have to perform this pass over again.
    10. Save a copy of the Log file as eg. "Sickleyield Benchmark Test Condition 01 Pass A.txt" in the same folder as the rest of your benchmarking files so that it matches up with its corresponding render.
    11. Go back to the original Log file, select all, hit delete, and then save/close.
    12. Repeat  this process for each pass.

    D. Collecting Enough Test Passes

    The rule of thumb is that you want to have at least two successfully completed test passes per Test Condition per benchmarking scene you wish to cover. The reason is that this allows you both to identify any outliers (eg. if pass A of Test Condition #01 took 2 hours to complete whereas pass B only took ten minutes, you know one of them was a dud, and you can verify which by running a third pass C and seeing which one it agrees with) and to average together multiple values for highly variable stats like Total Rendering Time for more precise results.
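
    As a rough illustration of that outlier check and averaging step, here is a minimal Python sketch - the pass times are made-up placeholders (in seconds), and the 25% agreement threshold is just an arbitrary example:

      from statistics import mean, median

      # Hypothetical Total Rendering Times (in seconds) for one Test Condition.
      passes = {"Pass A": 7200.0, "Pass B": 600.0, "Pass C": 610.0}

      mid = median(passes.values())
      good = {}
      for name, seconds in passes.items():
          # Treat anything more than 25% away from the median as a dud to re-run.
          if abs(seconds - mid) / mid > 0.25:
              print(name, "disagrees with the median - likely a dud, re-run it.")
          else:
              good[name] = seconds

      print("Average of the agreeing passes:", round(mean(good.values()), 1), "seconds")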

    So in the case of wanting to do a complete run of benchmarks using Sickleyield's Iray Scene For Time Benchmarks on a single-GPU computer, the actual testing process would go like this:

    • Render 001: Iray Scene For Time Benchmarks, CPU Only, OptiX Enabled
    • Render 002: Iray Scene For Time Benchmarks, CPU Only, OptiX Enabled
    • Render 003: Iray Scene For Time Benchmarks, CPU Only, OptiX Disabled
    • Render 004: Iray Scene For Time Benchmarks, CPU Only, OptiX Disabled
    • Render 005: Iray Scene For Time Benchmarks, GPU Only, OptiX Enabled
    • Render 006: Iray Scene For Time Benchmarks, GPU Only, OptiX Enabled
    • Render 007: Iray Scene For Time Benchmarks, GPU Only, OptiX Disabled
    • Render 008: Iray Scene For Time Benchmarks, GPU Only, OptiX Disabled
    • Render 009: Iray Scene For Time Benchmarks, GPU + CPU, OptiX Enabled
    • Render 010: Iray Scene For Time Benchmarks, GPU + CPU, OptiX Enabled
    • Render 011: Iray Scene For Time Benchmarks, GPU + CPU, OptiX Disabled
    • Render 012: Iray Scene For Time Benchmarks, GPU + CPU, OptiX Disabled

    For a total of 12 test passes. So a bare minimum version (just testing for GPU performance)  with the same scene would equate to:

    • Render 001: Iray Scene For Time Benchmarks, GPU Only, OptiX Enabled
    • Render 002: Iray Scene For Time Benchmarks, GPU Only, OptiX Enabled
    • Render 003: Iray Scene For Time Benchmarks, GPU Only, OptiX Disabled
    • Render 004: Iray Scene For Time Benchmarks, GPU Only, OptiX Disabled

    For a total of 4 test passes.

     

    Analyzing The Data

    This part is pretty much all up to you. The log files have all sorts of detailed data in them on everything from memory consumption to Iray (rather than Daz Studio) initialization and rendering times to number of active cores during a CPU render (did you know that it always drops by one if you have CPU + GPU rendering enabled?) and all sorts of other things you can infer based on Test Conditions (eg. how much of a percentage difference in performance OptiX Prime makes under different conditions.) You can also cross-test between different versions of Daz Studio - assuming you feel like running all the tests and crunching the numbers.
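
    For example, the OptiX percentage difference mentioned above is just a two-line calculation once you have averaged times for a pair of Test Conditions. A minimal Python sketch, using the Titan RTX times from the tables further down (65.59s with OptiX enabled vs. 84.77s disabled) purely as sample inputs:

      # Mean Total Rendering Times (seconds) for one device, OptiX on vs. off.
      optix_on, optix_off = 65.59, 84.77

      saving = (optix_off - optix_on) / optix_off * 100
      print("OptiX Prime reduced render time by about", round(saving, 1), "percent here.")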

     

    Reporting Your Results

    Be sure to always include the following along with any benchmarking results:

    • Version(s) of Daz Studio used
    • Name/model of CPU used (if including CPU benchmarks)
    • Names/models of all GPUs used (if including GPU benchmarks)
    • Names of all benchmarking scenes used for each set of included tests.

    As for the results themselves, the single most useful statistic for you to report is Total Rendering Time, eg:

    2019-01-11 05:57:03.394 Total Rendering Time: 53 minutes 1.51 seconds

    Found near the bottom of each log file. Although this statistic is technically not exactly representative of the true amount of time spent rendering a scene (since it includes processing time used by Daz Studio to prep the scene in addition to the actual time spent by the Iray plugin rendering it), it is usually no more than a few seconds longer, which makes it a perfectly acceptable statistic for all but the most exacting benchmarking processes.

    The next most useful thing to report would be iterations rendered by each active rendering device eg:

    2019-01-11 06:15:06.885 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CPU: 5000 iterations, 6.097s init, 3173.550s render

    Especially since certain benchmarking scenes (like Aala's Iray Cornell Box 2 min Time) use a fixed Total Rendering Time - making the number of iterations essential to understanding differences in system performance.
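
    If you end up with a pile of saved log files, a small script can pull both of these statistics out for you. Here is a minimal Python sketch that matches lines shaped like the two examples above (the file name is a placeholder for one of your own saved logs, and the patterns may need tweaking for other log line variants):

      import re

      # Placeholder name - point this at one of your saved benchmark log copies.
      log_file = "Sickleyield Benchmark Test Condition 01 Pass A.txt"

      time_re = re.compile(r"Total Rendering Time: (.+)")
      iter_re = re.compile(r"rend info : (.+?): (\d+) iterations")

      with open(log_file, encoding="utf-8", errors="replace") as f:
          for line in f:
              m = time_re.search(line)
              if m:
                  print("Total Rendering Time:", m.group(1).strip())
                  continue
              m = iter_re.search(line)
              if m:
                  print(m.group(1), "rendered", m.group(2), "iterations")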

    Other than that, there really aren't any hard "rules" about what to include. As far as how you go about actually writing up your results, I find that using tables helps. Eg:

    SickleYield's Iray Scene For Time Benchmarks DS 4.11 Beta: Total Rendering Time (3 passes per test):
                          OptiX Enabled             OptiX Disabled
    i7-8700K              19 minutes 25.91 seconds  23 minutes 23.42 seconds
    Titan RTX             1 minute 5.59 seconds     1 minute 24.77 seconds
    i7-8700K + Titan RTX  1 minute 7.06 seconds     1 minute 26.02 seconds

     

    Here's another chart from the same set of tests, but this time showing iterations rendered by each hardware device:

    SickleYield's Iray Scene For Time Benchmarks DS 4.11 Beta: Iterations Per Rendering Device (3 passes per test):
                          OptiX Enabled  OptiX Disabled
    i7-8700K              5000           5000
    Titan RTX             5000           5000
    i7-8700K / Titan RTX  242 / 4758     254 / 4746

     

    Post edited by RayDAnt on
  • bluejaunte Posts: 1,861
    edited January 2019

    Thanks RayDAnt. I have to admit, I'd be surprised if anyone ever follows such an exhaustive guide. And if some might, some won't and we're back where we started. But you know, to be fair, it's not that much of an issue. I recently "benchmarked" a bit after I had replaced my two 1080 TI with one 2080 TI. It was one of my own scenes and I can make the following statements, which are still useful because they compare the two setups directly:

    2 x 1080 TI 15.35 min

    1 x 2080 TI 14.76 min

    1 x 2080 TI + 1 x 1080 TI 10.25 min

    Also I noticed that with overclocking on the 2080 TI (after running the automated OC scanner, pretty useful), the render times actually went up a bit, whereas games ran a bit faster (about 85 vs 80 fps standing in a specific place in the main hall in Resident Evil 2 Remake that just came out). Also today I upgraded to Win10 1809 (this is the one needed to get raytracing in the few games that have it) and the render time was 14.53 min, so a little bit faster.

    OptiX was off all the time and latest beta of Daz Studio was used.

    Oh with the 2080 TI + 1080 TI I noticed the former throttled clock speed quite a bit, it got too hot. So with better cooling this might have rendered slightly faster.

    Post edited by bluejaunte on
  • RayDAnt Posts: 1,120

    @RayDAnt:

    Great guide.

    I would suggest that, for the results to be included in the benchmark thread, they have to be submitted as a ZIP file with original logs for each result as a proof. Then whoever creates the benchmark thread can add those results to the table in the first post.

     

    I thought about that. The problem is log files include personally identifying info like your windows account name, which - while not being the worst thing to leak - does pose a privacy concern.

  • RayDAnt Posts: 1,120
    edited January 2019

    Thanks RayDAnt. I have to admit, I'd be surprised if anyone ever follows such an exhaustive guide.

    I have a professional background in academic research and experimental design. That's why this is so intensive. This is the level of testing you would do if you were working in a professional environment (eg. if you were in charge of new hardware acquisition at an effects studio like ILM this is the sort of process you'd go through.) Admittedly Daz Studio caters primarily to a hobbyist crowd. The thing is, even most hobbyists can benefit from some knowledge of how things are done on a professional level. So I guess that's really why I made this guide - so that people interested in taking things to the next level know what that means.

    And if some might, some won't and we're back where we started.

    Not really. Just so long as people make it plain how carefully controlled their testing was, it shouldn't matter. You just assume that people who don't mention having done multiple passes didn't, and consider the trustworthiness of their results accordingly. Plus - with this level of precision in testing - you really don't need more than a sample or two for any given piece of hardware to have a very good idea of how that device performs in general (ie. all we need is one 1080ti owner's careful benchmarks to know pretty much all we need about most every 1080ti card out there.)

    But you know, to be fair, it's not that much of an issue.

    Yeah. Ultimately, having super-accurate benchmarks is not that big of a deal. It's just a cool thing for other people to do and share with those of us of a more hardware bent. That way we can draw wider conclusions that occasionally might even be helpful to normal people (eg. those shopping for a new GPU with an interest in using DS who'd like to know what their money could get.) But yeah, if this stuff is too much time/effort for someone - just run a simple single benchmark and say that's all you did. It's still one more data point.

    Oh with the 2080 TI + 1080 TI I noticed the former throttled clock speed quite a bit, it got too hot. So with better cooling this might have rendered slightly faster.

     Is your 2080ti a Founders Edition? Because their coolers - while a significant upgrade from what Nvidia's done in the past - are known for not being the best out there. I'm actually very interested in this myself since the Titan RTX sports exactly the same cooler (despite having more processing power.) Making me wonder how much more performance I'm missing (every bit helps when trying to justify a $2500 purchase...)

    Post edited by RayDAnt on
  • bluejaunte Posts: 1,861
    RayDAnt said:

    Thanks RayDAnt. I have to admit, I'd be surprised if anyone ever follows such an exhaustive guide.

     Is your 2080ti a Founders Edition? Because their coolers - while a significant upgrade from what Nvidia's done in the past - are known for not being the best out there. I'm actually very interested in this myself since the Titan RTX sports exactly the same cooler (despite having more processing power.) Making me wonder how much more performance I'm missing (every bit helps when trying to justify a $2500 purchase...)

    No, but it's the cheapest I could find here, a Zotac Twin Fan, which is probably pretty close to the Founders Edition in design. I don't think it has a very dramatic effect even if it does throttle, not enough to worry about when you're rendering with several cards and get such dramatically better speed overall anyway. I will run this card alone though and sell my 2 x 1080 TI. I thought it was an excellent time to make this swap and basically get the same render speed but with possible RTX stuff in the future, and even better performance in games since I was only ever gaming on one 1080 TI. All for no cost since these two old ones will sell for roughly what the new one costs. Might even make a slight profit.

    Alone it runs around 76c, which is a bit more than I remember from a single 1080 TI but then again I read that the 2080 TI does run a bit hotter in general and I don't know how many degrees a better cooler can shave off that. I did want a 2-slot card in case I ever wanna put another one in. My mainboard kinda sucks for that, the cards are too close together. And SLI isn't supported, I'll probably upgrade that at some point. Maybe then I could fit two 3-slot cards but I don't know how wise that would be. With so little space between them I don't know if even the greatest air cooler would change the fact that there's just very little room to blow the hot air to.

  • RayDAnt Posts: 1,120

    Alone it runs around 76c, which is a bit more than I remember from a single 1080 TI but then again I read that the 2080 TI does run a bit hotter in general and I don't know how many degrees a better cooler can shave off that. I did want a 2-slot card in case I ever wanna put another one in. My mainboard kinda sucks for that, the cards are too close together. And SLI isn't supported, I'll probably upgrade that at some point. Maybe then I could fit two 3-slot cards but I don't know how wise that would be. With so little space between them I don't know if even the greatest air cooler would change the fact that there's just very little room to blow the hot air to.

    Sounds to me like you have a custom GPU watercooling loop in your future (especially if/when you add a 2nd RTX card to the mix once you have those 1080ti's off your hands.) I know they're usually marketed/priced as overclocking luxuries, but imo they (and all large AIOs) are actually really useful as ways to reduce heat/noise levels and increase hardware longevity in the face of lengthy workloads like 3d rendering at stock settings. 

    By the way, since you happen to have a 2080ti, would you be up for some quick (mostly painless) testing regarding its level of NVLink/VRAM pooling support with current Nvidia drivers? I've been investigating which configuration options in Nvidia's System Management Interface seem to work with my Titan RTX, and I'd love to know if one thing in particular (WDDM/TCC mode switching) is the same or not on a 2080ti (since I suspect that is the key to getting VRAM pooling working in NVLink-aware software like the current beta version of DS/Iray.)

  • bluejaunte Posts: 1,861

    If I can, but I have no NVLINK and only one 2080 TI?

  • RayDAnt Posts: 1,120
    edited February 2019

    If I can, but I have no NVLINK and only one 2080 TI?

    Not a problem. This is more a test of general NVLink functionality on Turing GPUs than it is of getting specific use cases (like VRAM pooling for 3d rendering) to work. To give some background.

    Prior to Turing, Nvidia had two separate lines of GPU microarchitectures going to serve the consumer and professional business markets - Pascal on the consumer side and Volta for business. Both architectures had different hardware methods for combining computational power between multiple GPUs: SLI on Pascal (a relatively old limited-bandwidth technology enabling framebuffer sharing) and NVLink on Volta (a newer high-bandwidth technology enabling direct sharing of memory resources.) With Turing, Nvidia decided to dispense with having two separate lines of chips by taking the overall design of the (technically newer/more advanced) Volta architecture, stripping it of some of its most costly pro-sided features (like FP64 cores), adding in some newly developed consumer AND business sided ones (like RT Cores), and giving it a shiny new name: Turing.

    This meant that - for the first time ever - they now had an entire product line of consumer-oriented GPUs sporting NVLink technology instead of SLI. And since SLI and NVLink are not directly functionally equivalent technologies, any software (eg. games) written to work with SLI would no longer be able to make use of that functionality - despite there being a technologically superior way of achieving the same thing present in hardware. In order to address this in the short term, Nvidia decided to include a software emulation of SLI functionality in its drivers which uses a small portion of a supporting card's (all RTX 2080 cards and up) NVLink bandwidth to implement a shared framebuffer. This has led to a great deal of confusion among early adopters of Turing-based cards because - despite NVLink being an actively marketed feature of the new architecture - the only level of plug-and-play resource sharing users have so far been able to independently verify as working is SLI. And many have taken this as proof that true NVLink functionality on high-end consumer RTX cards is a lie.

    However this overlooks a key fact: prior to Turing, NVLink functionality was never a plug-and-play feature (at least not on Windows.) It was always something you would have to use obscure command-line instructions (and follow strict video output port choice guidelines) in order to enable. And even then, the only software capable of using it were things designed with the capability in mind (which, by the way, includes the version of Iray currently found in the DS 4.11 Beta - at least if the changelog for Iray 2017.1 beta, build 296300.616 is to be believed.) Some folks over at Puget Systems had a thought about this last October, and decided to do some testing with a just-released pair of 2080s and a 2080ti to see if they could get NVLink enabled using traditional methods (you can read the write-up of the entire experiment here.) In brief, what they found at the time was that none of their cards could be switched into the correct driver mode, and consequently their conclusion was that traditional NVLink functionality on RTX cards isn't possible.

    However just last week I was able to personally verify that the Titan RTX does support being put into this mode - at least on current Nvidia drivers. This got me wondering whether this was just because of it being a Titan (rumor has it that all Titans get certain Quadro-level privileges regardless of their technically consumer-level status) or maybe because of a driver update since October (afaik no one has tried this test since.) So the actual thing to test here is whether your 2080ti currently supports being switched into “TCC driver model” mode. The way to find out is pretty straightforward. Nvidia drivers already include a command-line utility called the System Management Interface (read more about it here) which allows you to control this as well as control/monitor many other things about all your Nvidia GPUs. So all you really need to do is run some stuff on the command line to find out. Here's what to do:

    Test:

    1. Open a command-line window with Administrator privileges (Windows key -> type "cmd" and choose "Run as Administrator" under options.)  It is essential that Admin privileges are available as this test will fail otherwise.
    2. Navigate to the folder
      c:\Program Files\NVIDIA Corporation\NVSMI\
      This is where the System Management Interface (along with a handy Pdf version of its documentation) is located.
    3. Run the following command:

      c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe -L

      This will output a line like the following for each GPU in your system:

      GPU 0: TITAN RTX (UUID: GPU-a30c09f0-e9f6-8856-f2d0-a565ee23f0bf)

      Take note of the GPU index number (the 0 right after "GPU" in the line above) next to the name of your 2080ti. You will need this for the rest of the steps in this process.

    4.  Run the following command with the number from step 3 in place of 0 after "-i ":

      c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe -i 0 --query-gpu=driver_model.current,driver_model.pending --format=csv

      This should output:

      driver_model.current, driver_model.pending
      WDDM, WDDM

      "WDDM" is the name of the Nvidia driver model mode normally active under Windows. In the next step you are going to see whether you can temporarily schedule it to change or not.

    5. Now, run the following command:
      c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe -i 0 -fdm TCC
      If you get a response like this:
      WARNING: Disconnect and disable the display before next reboot. You are switching to TCC driver model with display active!
      Set driver model to TCC for GPU 00000000:01:00.0.
      All done.
      Reboot required.
      Then congratulations! This means that traditional NVLink functionality on the 2080ti has changed since October. However, if you get an error message like this:
      Unable to set driver model for GPU 00000000:01:00.0: Not Supported
      Treated as warning and moving on.
      All done.
      Then I'm sorry to say that true NVLink support on the 2080ti is still a mystery. Go ahead and ignore the rest of these instructions.

    Test Cleanup (only needed if "TCC" mode change successful):

    If you were successful in step #5 above, you will need to switch the driver mode of your 2080ti back to its default value "WDDM" prior to rebooting your system. Otherwise you will temporarily lose the ability to run any displays off of that card. To do this, run the following in the same command-line window as before (with the number from step 3 in place of the 0 after "-i"):

    c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe -i 0 -fdm WDDM

    You should get the following response:

    Set driver model to WDDM for GPU 00000000:01:00.0.
    All done.

    Please note that rebooting is unnecessary at this point. To verify everything is back to normal, Run (with the number from step 3 in place of the 0 after "-i"):

    c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe -i 0 --query-gpu=driver_model.current,driver_model.pending --format=csv

    This should output:

    driver_model.current, driver_model.pending
    WDDM, WDDM

    And as long as "WDDM" appears twice you are back to the way you were before.
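
    If anyone wants to repeat just the read-only query part of this test across several GPUs, here is a minimal Python sketch wrapping the same two nvidia-smi commands used above. It assumes the default NVSMI install path from step 2 and that the indices reported by -L line up with -i, and it does not attempt the -fdm switch itself:

      import subprocess

      # Default location from step 2 - adjust if your driver installed elsewhere.
      SMI = r"c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe"

      # List every GPU, then report its current/pending driver model (WDDM or TCC).
      listing = subprocess.run([SMI, "-L"], capture_output=True, text=True, check=True).stdout
      print(listing.strip())

      for index in range(len(listing.strip().splitlines())):
          query = subprocess.run(
              [SMI, "-i", str(index),
               "--query-gpu=driver_model.current,driver_model.pending", "--format=csv"],
              capture_output=True, text=True, check=True,
          ).stdout
          # Second line of the CSV output holds the two values.
          current, pending = [v.strip() for v in query.strip().splitlines()[1].split(",")]
          print("GPU", index, "- current:", current, "pending:", pending)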

     

    ETA: Even if this test fails for you, it still may be the case that this method for enabling NVLink is actually possible on the 2080ti. If you print out the help file for the SMI utility, it lists which Nvidia cards are/aren't supported and to what extent. As of right now it makes no mention of ANY RTX card (Titan or otherwise.) Meaning that it hasn't yet been officially updated for the RTX line. In which case it happening to work as expected with my Titan RTX is likely something that will be inherited by other capable RTX cards down the line.

    Post edited by RayDAnt on
  • bluejaunte Posts: 1,861

    Get the 2nd message in step 5.

  • RayDAnt Posts: 1,120

    Get the 2nd message in step 5.

    Interesting. Thanks! Imo this is definitely something to revisit once full RTX support comes to general purpose Nvidia software.

  • I got one 2080 TI that was defective; they finally replaced it today. I intended to replace two 1070s with one 2080 TI.

    The numbers are disappointing, but I guess the Beta results are not very reliable yet

    I used Outrider42's Iray Test 2018 B (iray_bench_2018_b223da33.duf) only.

    CPU: Ryzen 7 1700
    RAM: 64gb DD4 2400mhz

    GPU Only
    Daz 4.10.0.123 Pro Geforce 417.01
    1070(Monitor On) + 1070 - Optix On  = 4 minutes 39.1 seconds
    1070(Monitor On) + 1070 - Optix Off = 5 minutes 21.69 seconds

    Daz 4.11.0.123 Pro Geforce 417.01
    1070(Monitor On) + 1070 - Optix On  = 8 minutes 0.85 seconds
    1070(Monitor On) + 1070 - Optix Off = 8 minutes 51.79 seconds

    Strange thing: in the Beta the image shows up really fast, and the progress immediately jumps to 52%. However, the iterations after that are much slower.

    Daz 4.11.0.236 Pro Geforce 417.71
    2080 TI (Monitor on) - Optix On = 5 minutes 26.36 seconds
    2080 TI (Monitor on) - Optix Off = 5 minutes 14.74 seconds

    In the next few days I intend to add a 970 just for the OS, leaving the 2080 TI only for renders. But I think the numbers will only be stable once 4.11 goes public.

     

  • Takeo.Kensei Posts: 1,303

     

     

    RayDAnt said:

    This meant that - for the first time ever - they now had an entire product line of consumer-oriented GPUs sporting NVLink technology instead of SLI. And since SLI and NVLink are not directly functional equivalent technologies, any software (eg. games) written to work on SLI would no longer be able to make use of that functionality - despite there being a technologically superior way of achieving the same thing present in hardware. In order to address this in the short term, Nvidia decided to include a software emulation of SLI functionality in its drivers which uses a small portion of a supporting card's (all RTX 2070 cards and up) NVLInk bandwidth to implement a shared framebuffer. This has lead to a great deal of confusion among early adopters of Turing-based cards because - despite NVLink being an actively marketed feature of the new architecture - the only level of plug-and-play resource sharing users have so far been able to independently verify as working is SLI. And many have taken this as proof that true NVLink functionality on high-consumer grade RTX cards is a lie.

     

    It's good to try to bring some rigor to this thread, but I think you've made a mistake and a confusion.

    The RTX 2070 doesn't have NVLINK / SLI. The support only begins with the 2080

    The TCC mode is not directly related to NVLINK support. I rather think what you should check is whether RTX cards have peer-to-peer functionality enabled. Check the output of the command nvidia-smi.exe nvlink -c. The TCC mode you've seen for the Quadro GP100 and GV100 may only be necessary for those cards. For gaming cards and Quadro RTX, enabling SLI in the Nvidia control panel is the step to take. That's what is written in the Puget blog.

    The question regarding the hardware and driver relation, for me, would be to know whether the RTX 2070 (and possibly also the RTX 2060) is NVLink capable even though it doesn't have the hardware to do it.

  • RayDAnt Posts: 1,120

    The RTX 2070 doesn't have NVLINK / SLI. The support only begins with the 2080

    Yeah, I misspoke there. Just corrected it.

    The TCC mode is not directly related to NVLINK support.

    It used to be on Windows platforms (again, there was no NVLink-capable hardware that didn't rely on TCC driver optimization mode for using that functionality prior to the Turing architecture.)

     

    I rather think what you should check is whether RTX cards have peer-to-peer functionality enabled. Check the output of the command nvidia-smi.exe nvlink -c.

    The folks over at Puget Systems have already shown conclusively that RTX 2080 and up cards do have fully functional peer-to-peer capabilities enabled at the hardware level. However, as of when they did their research back in October, the only software platform capable of demonstrating it was Linux. Hence my curiosity about whether any software changes affecting Windows driver-level functions like "TCC" mode availability had occurred with cards like the 2080ti since then.

     

    The TCC mode you've seen for the Quadro GP100 and GV100 may only be necessary for those cards. For gaming cards and Quadro RTX, enabling SLI in the Nvidia control panel is the step to take. That's what is written in the Puget blog.

    The thing I'm curious about here isn't whether or not NVLink functionality is supported in new ways on the Windows platform with consumer-level NVLink-capable hardware (since - as you point out - we already know that there are/will be.) What I'm out to discover is whether any other consumer-level NVLink-capable hardware (besides the Titan RTX, which I can personally confirm does support "TCC" driver mode - meaning that the Puget blog's conclusions are now obsolete) has support for the old ways of using it on Windows. Because software backwards compatibility with NVLink on RTX is potentially a huge deal in the content production software world (even the version of Iray in the current DS Beta already supports it.)

    The question regarding the hardware and driver relation, for me, would be to know whether the RTX 2070 (and possibly also the RTX 2060) is NVLink capable even though it doesn't have the hardware to do it.

    NVLink is a piece of physical hardware located on the GPU die itself, which then gets a set of dedicated traces and connecting port(s) on a card's PCB to make it functional. As it stands, Nvidia only has two Turing GPU die designs with second-generation NVLink support (as the official Turing Architecture whitepaper calls it.) They are:

    The TU102 (sporting a 100 GB/s bidirectional NVLink interface) which is the chip found in all:

    • Top-tier professional Quadro RTX cards (RTX 8000, RTX 6000)
    • Top-tier consumer cards (Titan RTX, RTX 2080ti)

    The TU104 (sporting a 50 GB/s bidirectional NVLink interface) which is the chip found in all:

    • Mid-tier professional Quadro RTX cards (RTX 5000, RTX 4000)
    • Mid-tier consumer cards (RTX 2080)
    • Data processing only Tesla RTX cards (Tesla T4)

    The RTX 2070 and 2060 both use variants of a 3rd Turing die design called the TU106 which lacks NVLink hardware altogether. Now, in theory you could take the GPU die from a less functional card variant of a die design and "jump it up" to perform like its fullest hardware implementation (eg. transform a 2080ti into an RTX 8000 by software unlocking 4 SMs and soldering 8 4GB ECC GDDR6 RAM chips to its pcb.)  But you can never "jump up" a specific variant of a die design past its fullest hardware implementation  - which in the case of the TU106 is the RTX 2070 (see Appendix B.) Meaning that there will never be a way for either of them to support NVLink. 

  • RayDAnt Posts: 1,120
    edited February 2019

    @RayDAnt:

    As far as I know, switching to TCC mode is not supported on GeForce GPUs, only on Quadro and Tesla.

    Here's what I get with the Titan RTX (neither a Quadro nor a Tesla card) which - at least until recently - was still considered part of the GeForce lineup:

    C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi.exe
    Wed Jan 30 17:36:36 2019
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 417.71       Driver Version: 417.71       CUDA Version: 10.0     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  TITAN RTX          WDDM  | 00000000:01:00.0  On |                  N/A |
    | 41%   37C    P8    25W / 280W |   1231MiB / 24576MiB |      1%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |    0       724    C+G   Insufficient Permissions                   N/A      |
    |    0      6796    C+G   ...les (x86)\GIGABYTE\AppCenter\ApCent.exe N/A      |
    |    0      7872    C+G   ...6)\Google\Chrome\Application\chrome.exe N/A      |
    |    0     11372    C+G   C:\Windows\System32\MicrosoftEdgeCP.exe    N/A      |
    |    0     12228    C+G   C:\Windows\System32\MicrosoftEdgeSH.exe    N/A      |
    |    0     12772    C+G   ...ons\Software\Current\LogiOptionsMgr.exe N/A      |
    |    0     12988    C+G   ...t_cw5n1h2txyewy\ShellExperienceHost.exe N/A      |
    |    0     13180    C+G   ...dows.Cortana_cw5n1h2txyewy\SearchUI.exe N/A      |
    |    0     13240    C+G   C:\Windows\explorer.exe                    N/A      |
    |    0     13588    C+G   ...hell.Experiences.TextInput.InputApp.exe N/A      |
    |    0     15052    C+G   ...ptions\Software\Current\LogiOverlay.exe N/A      |
    |    0     15168    C+G   ...\Corsair\CORSAIR iCUE Software\iCUE.exe N/A      |
    |    0     16152    C+G   ...osoft.LockApp_cw5n1h2txyewy\LockApp.exe N/A      |
    |    0     16256    C+G   ...DIA GeForce Experience\NVIDIA Share.exe N/A      |
    |    0     17580    C+G   C:\Windows\System32\MicrosoftEdgeCP.exe    N/A      |
    |    0     20248    C+G   ...oftEdge_8wekyb3d8bbwe\MicrosoftEdge.exe N/A      |
    +-----------------------------------------------------------------------------+

    C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi.exe -L
    GPU 0: TITAN RTX (UUID: GPU-a30c09f0-e9f6-8856-f2d0-a565ee23f0bf)

    C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi.exe -i 0 --query-gpu=driver_model.current,driver_model.pending --format=csv
    driver_model.current, driver_model.pending
    WDDM, WDDM

    C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi.exe -i 0 -fdm TCC
    WARNING: Disconnect and disable the display before next reboot. You are switching to TCC driver model with display active!
    Set driver model to TCC for GPU 00000000:01:00.0.
    All done.
    Reboot required.

    C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi.exe -i 0 --query-gpu=driver_model.current,driver_model.pending --format=csv
    driver_model.current, driver_model.pending
    WDDM, TCC

     Hence my curiosity on the matter.

    Post edited by RayDAnt on
  • RayDAnt Posts: 1,120
    edited February 2019

    @RayDAnt:

    Sadly, Titan RTX seems to be considered "prosumer", not "consumer" (after all it is ~$2,500 which is the same as entry level Quadro RTX).

    On my 2080 Ti that doesn't work.

    This is what the current version of Nvidia's System Management Interface has to say about hardware compatibility:

    C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi.exe -h
    NVIDIA System Management Interface -- v417.71

    NVSMI provides monitoring information for Tesla and select Quadro devices.
    The data is presented in either a plain text or an XML format, via stdout or a file.
    NVSMI also provides several management operations for changing the device state.

    Note that the functionality of NVSMI is exposed through the NVML C-based
    library. See the NVIDIA developer website for more information about NVML.
    Python wrappers to NVML are also available.  The output of NVSMI is
    not guaranteed to be backwards compatible; NVML and the bindings are backwards
    compatible.

    http://developer.nvidia.com/nvidia-management-library-nvml/
    http://pypi.python.org/pypi/nvidia-ml-py/

    Supported products:
    - Full Support
        - All Tesla products, starting with the Kepler architecture
        - All Quadro products, starting with the Kepler architecture
        - All GRID products, starting with the Kepler architecture
        - GeForce Titan products, starting with the Kepler architecture
    - Limited Support
        - All Geforce products, starting with the Kepler architecture

    Notice how it makes no mention of non-GeForce branded Titan products (like my Titan RTX.) This tells me that the current version of NVSMI has not yet been fully updated to accurately reflect feature-specific hardware compatibility on Turing-era products. Which in turn makes me very optimistic for a change (increase) in NVSMI functionality for 2080 and 2080ti cards specifically (since they are the first and only GeForce-branded cards to feature exactly the same interconnecting technology, NVLink 2.0, as their Quadro counterparts) once that software update happens. Most likely around the same time fully Turing-compatible versions of other core Nvidia software solutions like OptiX get released (likely no earlier than around the time of GTC 2019 in late March.)

    Major software refresh cycles follow a very different pattern at the professional computing level (late adoption/excellent bidirectional compatibility vs. early adoption/poor compatibility) than what you typically see at the consumer level. And although Daz Studio primarily functions as a consumer-level product, the core components that make it run (eg. Iray) are thoroughly professional. I think it's gonna be a while before we can really say for certain just how much functionality mid-range Turing cards (like the 2080 and 2080ti) have. Keep in mind that it took something like a full year for 10XX series Pascal cards to get fully implemented at the Daz Studio software level. As of right now Turing has only been out for 4 months.

    Post edited by RayDAnt on
  • Titans have always been a mix of both worlds, aka gaming + some Quadro features, and have always had the possibility of being put into TCC mode.

    There is a point where you are mixing things up. Puget Systems did their tests on Windows, not Linux. There is no TCC mode on Linux because it is not needed to enable P2P communication.

    With Windows, the WDDM driver gets in the way. But all that is just artificial market segmentation in my POV.

    The point where you are right is that Nvidia must have changed something in the drivers and the CUDA 10 implementation, as they must allow P2P communication without the TCC requirement for NVLink to work on consumer cards. But don't hold out for TCC on consumer cards; that's still a premium feature.

  • RayDAnt Posts: 1,120
    edited February 2019

    There is a point where you are mixing things up. Puget Systems did their tests on Windows, not Linux.

    Here is the NVLink functionality test Puget Systems did on Windows back in October https://www.pugetsystems.com/labs/articles/NVLink-on-NVIDIA-GeForce-RTX-2080-2080-Ti-in-Windows-10-1253/

    And here is the follow-up NVLink functionality test Puget Systems did on Linux, also in October https://www.pugetsystems.com/labs/hpc/NVLINK-on-RTX-2080-TensorFlow-and-Peer-to-Peer-Performance-with-Linux-1262/ In which is said the following:

    My colleague William George has done some testing with NVLINK on Windows 10 and at this point it doesn't appear to be fully functional on that platform. [...] I think you can expect that to change soon. It should be fully functional on Windows 10 after a round of updates.

     

    The point where you are right is that Nvidia must have changed something in the drivers and the CUDA 10 implementation, as they must allow P2P communication without the TCC requirement for NVLink to work on consumer cards. But don't hold out for TCC on consumer cards; that's still a premium feature.

    You're missing the wider point. Just 5 months ago, NVLink itself was still a premium (high-end/professional only) feature... now it isn't. We are in uncharted territory. Hence why imo this stuff (eg. "TCC" support across the mid-tier RTX line) deserves periodic testing.

    Post edited by RayDAnt on
  • I don't think I missed anything. If TCC were to be allowed for GeForce cards, that would already be done. There is nothing to develop, and thus there can't be a surprise feature months after launch. It's not something you get after waiting for someone to develop it with the new CUDA 10 or anything else. If the driver allows it, you get it, no more, no less. And I'm pretty sure that will not come to GeForce, because that is one aspect that differentiates Quadro from GeForce, and we know Nvidia prefers to sell Quadros. I don't see anything that can trigger Nvidia to change that yet, economically speaking. And as for the fact that we get NVLink in consumer-level cards, I'll just say it's a stripped-down NVLink. It was a necessity to promote the RT Cores, because as of now gamers don't get the performance they expect, and NVLink was mostly advertised as an SLI replacement. For us doing rendering the functionality can be useful, but that was not the selling point for Nvidia. Their goal is the gaming market, and TCC for NVLink doesn't bring anything to that field. For people using the card for rendering, it can be an incentive to buy bigger. If you want the real thing, put money on the table. Since we don't have the software to measure the impact of the lesser bandwidth, we'll have to wait to know the influence. I'm just gonna say rendezvous in a year or two to check the status once software development is more advanced than it is now.

  • RayDAnt Posts: 1,120
    edited February 2019

    I don't think I missed anything. If TCC were to be allowed for GeForce cards, that would already be done. There is nothing to develop, and thus there can't be a surprise feature months after launch. It's not something you get after waiting for someone to develop it with the new CUDA 10 or anything else. If the driver allows it, you get it, no more, no less.

    TCC is a driver-level feature. Meaning that its support on any given piece of hardware is "something you get after waiting for someone to develop it with the new CUDA 10 or anything else" by definition.

    And I'm pretty sure that will not come to GeForce, because that is one aspect that differentiates Quadro from GeForce

    So did the inclusion of NVLink hardware. And again - look where we are today.

    and we know Nvidia prefers to sell Quadros. I don't see anything that can trigger Nvidia to change that yet, economically speaking.

    For the first time in recent history Nvidia has decided to use exactly the same GPU architecture (Turing) for all current series business and consumer graphics cards. Meaning that they have fundamentally changed the dynamics (economics) of their own graphics hardware eco-system.

    And as for the fact that we get NVLink in consumer-level cards, I'll just say it's a stripped-down NVLink. NVLink was mostly advertised as an SLI replacement. For us doing rendering the functionality can be useful, but that was not the selling point for Nvidia. Their goal is the gaming market, and TCC for NVLink doesn't bring anything to that field. For people using the card for rendering, it can be an incentive to buy bigger. If you want the real thing, put money on the table. Since we don't have the software to measure the impact of the lesser bandwidth, we'll have to wait to know the influence.

    Both the RTX 2080ti and 2080 feature exactly the same hardware-level NVLink integration as their Quadro RTX counterparts, as indicated both by Nvidia's official documentation and all independent testing conducted so far on the matter.

    I'm just gonna say rendezvous in a year or two to check the status once software development is more advanced than it is now

    Most likely more like a month or two imo, since that is when RTX-updated Nvidia programmer-side resources like the OptiX API are rumored to be coming out.

    Post edited by RayDAnt on
  • RayDAnt Posts: 1,120
    kameneko said:

    I've rendered the test scene from SY:

    CPU + GPU: 11m and 12s to finish (from Texture Shaded preview, Optix off, memory optimization)

    GPU Only: 32m 22s to finish (from Texture Shaded preview, Optix ON, memory optimization)

    From Iray preview with speed optimization the scene goes over my VRAM.

    My system specs:

    • Ryzen 5 1600
    • GTX 1060 3Gb
    • 4x4Gb 2667Mhz RAM
    • Samsung 970 Evo 250Gb

    I guess I need an upgrade! xD

    What seems strange to me are other benchmarks posted here with a 1060 6GB that take 6.5 minutes, but the CUDA core difference should be just 10%!

    Is there a simple list of average benchmarks? It would be really useful to see which GPU to buy, since this thread is too long and contains too many comments other than benchmarks... it would be nice to have something like "GPU only: 1050 in 15min, 1060 in 10 min, 1070 in 7min, 1080, 2060, 2070, 2080..." etc.

    FWIW I just noticed that your testing notes say you were using memory optimization. This will majorly throw your numbers off relative to the rest of the results here, because Sickleyield's benchmark is preset to use the Daz Studio default speed optimization.

  • junk Posts: 1,224

    sylvie1998 and RayDAnt I really KNOW people are not going to use this guideline.  I've been asking for everyone to switch the viewport away from iray for over a year but they just keep going.  If you could reduce the instructions down to a single one line blurb that hits most of the requirements it could go a long way.  I really think you should start a new thread with these benchmark requirements.  Too much damage has been done in this particular thread and soon your requirements will be buried by page 36, 37, 40, 50, 80, 120, etc.   

    If it were a one line blurb people would keep pasting it over and over so that they are not buried by the pages.  A 150 line rule post is too much for 99% of the people to read.  IMHO

  • RayDAnt Posts: 1,120
    edited February 2019
    junk said:

    sylvie1998 and RayDAnt I really KNOW people are not going to use this guideline. 

    I knew from the get-go that it was gonna be impractical for most. But I needed to write it all out (along with the kitchen-sink) before being able to come up with something more concise - which I am already working on (more on that later.) My posting of it here was primarily for the sake of receiving feedback.

     

    junk said:

    I've been asking for everyone to switch the viewport away from iray for over a year but they just keep going. 

    This past week I decided to finally start going post-by-post through this entire thread to see what sort of overall benchmarking summary/table could be made out of it. And the long and short of it is that out of the approximately 80% of people's posted results I was able to translate into spreadsheet form with some level of confidence (796 records/spreadsheet lines and counting baby!) only about 30 of them are even close to being comprehensive enough for someone to draw firm conclusions from. And lack of viewport type reporting is actually small fish so far as missing information is concerned. The biggest practical barrier has actually proved to be people not reporting OS version, Nvidia drivers version, and full-length Daz Studio version at the time of testing, since comparing different hardware is impossible if you don't know which versions of software people are running (and MANY versions of all three have come and gone since this thread was first started.)

     

    junk said:

    If you could reduce the instructions down to a single one line blurb that hits most of the requirements it could go a long way.  I really think you should start a new thread with these benchmark requirements. 

    Way ahead of you. I'm already about 75% of the way to working out a much more streamlined version of the benchmarking guide I already posted - plus some fill-in-the-blank templates for the minimum necessary system/DS log file stats people need to report for their results to be meaningful to others. I was gonna use these - combined with some sort of easily updateable condensed summary of people's results (as reported so far) - as the basis for a new thread called something like "General Daz Studio Performance Benchmarking".
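
    To give an idea of what that condensed summary could look like mechanically, here is a minimal Python sketch that groups a handful of report records by scene, software versions and device, then averages their times. All the values below are made-up placeholders, not real results:

      from collections import defaultdict
      from statistics import mean

      # Hypothetical report records: (scene, DS version, driver, device, seconds).
      records = [
          ("SY Time Benchmark", "4.11.0.236", "417.71", "Example GPU A", 66.0),
          ("SY Time Benchmark", "4.11.0.236", "417.71", "Example GPU A", 64.5),
          ("SY Time Benchmark", "4.10.0.123", "417.01", "Example GPU B", 280.2),
      ]

      summary = defaultdict(list)
      for scene, ds_version, driver, device, seconds in records:
          summary[(scene, ds_version, driver, device)].append(seconds)

      for (scene, ds_version, driver, device), times in sorted(summary.items()):
          print(scene, "| DS", ds_version, "| driver", driver, "|", device, "-",
                round(mean(times), 1), "s over", len(times), "pass(es)")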

    In truth, the only thing keeping me from starting something up right now is that there are currently too many different benchmarking scenes floating around. As of this post, this thread features results from a mixture of ten different test scenes:

    1. SickleYield's Iray Scene For Time Benchmarks (first thread appearance here): currently 650+ results
    2. SickleYield's Iray Scene For Time Benchmarks (Spheres 8 & 9 Removed) (first thread appearance here): currently 25+ results
    3. JamesJAB's 2017 Benchmark 480p (first thread appearance here): currently 3+ results
    4. JamesJAB's 2017 Benchmark 720p (first thread appearance here)
    5. JamesJAB's 2017 Benchmark 1080p (first thread appearance here): currently 3+ results
    6. JamesJAB's 2017 Benchmark 2160p (first thread appearance here)
    7. Outrider42's Iray Test 2018 B (first thread appearance here): currently 80+ results
    8. DAZ_Rawb's Iray Render Test 2 (first thread appearance here): currently 19+ results
    9. Aala's Iray Cornell Box 1k Iteration Limit (first thread appearance here): currently 6+ results
    10. Aala's Iray Cornell Box 2 min Time (first thread appearance here): currently 4+ results

    Clearly this is far too many test scenes for anyone with anything but the most state-of-the-art rendering hardware currently available to be working through in a remotely reasonable amount of time (especially when you consider that each individual test scene usually ends up needing to be rendered multiple times.) And who in their right mind (other than possibly me) would want to be spending that much time benchmarking anyway?

    Ideally we need as few scenes as possible - I'm currently thinking two:

    1. A scene featuring a good variety of DS relevant scene content that is best suited for being rendered on older GPU/CPU hardware in a reasonable amount of time
    2. A scene featuring a good variety of DS relevant scene content that is better suited for being rendered on newer GPU hardware in a reasonable amount of time.

    Since it would be much better to reuse existing test scenes rather than create new ones (potential backwards compatibility with previously shared results/fewer test scenes going around to confuse everybody) and I don't currently know which of these scenes (if any) meet either of these requirements, I'm gonna need to do some special testing first. A benching of benchmarks, if you will. Over the next several days I'm gonna start in on this process, and once I'm satisfied with the results/conclusions I'm getting, my intention is to post my findings here regarding which ones seem (un)suitable and why. Then, after getting some feedback, I'll start a new thread.

     

    junk said:

    Too much damage has been done in this particular thread and soon your requirements will be buried by page 36, 37, 40, 50, 80, 120, etc.   

    To be fair, this thread was never really meant to be a catch-all benchmarking discussion for all things DS. It was originally just for tests of a single benchmark (SickleYield's, in two different configurations) at a particular point in time. Meaning that keeping track of things like software/hardware changes in the long term would've been largely irrelevant. It's just that we are now 4 years on - meaning that all those pages and pages of benchmarks in this thread where people fail to mention whether they're using HDDs or SSDs for content storage (the key factor in whether or not Iray viewport use makes a noticeable difference in benchmarking, btw - large enough SSDs weren't even a thing back then) end up being pretty much just useless numbers today. Too much core tech has changed.

     

    ETA: Clarity

    Post edited by RayDAnt on
  • bluejaunte Posts: 1,861

    Sounds good, but for Iray we have the OptiX vs OptiX Prime issue, unless that has somehow evaporated?
