Iray Starter Scene: Post Your Benchmarks!


Comments

  • LenioTGLenioTG Posts: 2,118

    Great work @RayDAnt :D

    neumi1337 said:

    some new information about the Iray RTX release :)

    NVIDIA Iray 2019 roadmap: Iray RTX 2019
    • Release in May
    • RTX support, up to 5 times speedup!
    • MDL 1.5 support for MDLE, localization, 2D measured curve

    look at page 57:

    https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9346-sharing-physically-based-materials-between-renderers-with-mdl.pdf

    5x would be a great improvement!! :D

  • outrider42outrider42 Posts: 3,679
    edited May 2019
    neumi1337 said:

    some new information about the Iray RTX release :)

    NVIDIA Iray 2019 roadmap: Iray RTX 2019
    • Release in May
    • RTX support, up to 5 times speedup!
    • MDL 1.5 support for MDLE, localization, 2D measured curve

    look at page 57:

    https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9346-sharing-physically-based-materials-between-renderers-with-mdl.pdf

    This is good information to know, and I think it is fine to have a post about it here because it helps get the word out. However, as awesome as this is, the benchmark thread is for posting benchmarks and their discussion, so further discussion about RTX coming to Iray should either go to the GPU discussion thread or get its own thread.

    Post edited by outrider42 on
  • LenioTGLenioTG Posts: 2,118

    You're right @outrider42 :D

    But, in the end, I think this thread is not that helpful for benchmarks anymore!
    Too much has changed since its opening, and the methodology is too disorganized to actually help us.

    I think the best solution will be for RayDAnt to open that new benchmarking thread once these RTX features are implemented, but, most importantly, we need to filter the valid tests and put them all in the same place, maybe with a nice graph that compares performance between different GPUs!

    I still don't think it would be a great idea to mix them up with CPUs. It's already confusing enough to account for multi-GPU systems, IMHO.

  • outrider42outrider42 Posts: 3,679
    I do agree somewhat; that's why I advocated for some different benches. However, I do think that a good number of these old benches can still serve as a general reference. I've never been against that notion - rather, this thread is OK as a starting point, and then we need an updated place for more modern and hopefully more accurate benching as we jump into this new RTX phase for Daz Iray.
  • Spektra3DXSpektra3DX Posts: 0

    Daz Studio Public Beta 4.11
    SY scene, stock scene as downloaded.

    1x RTX 2080 Ti

    Optix On: 44.8s
    Optix Off: 51.8s

    2x RTX 2080 Ti

    Optix On: 26.1s
    Optix Off: 29.9s

    3x RTX 2080 Ti

    Optix On: 20.1s
    Optix Off: 22.1s

  • serifarzikserifarzik Posts: 0

    Public Beta 4.11

    1 x Quadro RTX 4000

    Sicleyield's Starter Scene
    Optix On
    Total Rendering Time: 1 minutes 40.51 seconds
    5000 iterations

  • RayDAntRayDAnt Posts: 1,120
    edited May 2019

    FYI in case anyone's wondering, Daz Studio 4.11.0.366 Beta (the version released most recently) uses the exact same version of Iray as 4.11.0.335 did, meaning that pure rendering performance between the two versions is identical - i.e. benchmark results for '335 will also hold true for '366.

    ETA: And Nvidia driver 430.64 is within margin of error of 430.39 performance-wise too.

     


    I do agree somewhat; that's why I advocated for some different benches. However, I do think that a good number of these old benches can still serve as a general reference.

    The issue isn't so much the specific benchmarking scene(s) being used (the OG Sickleyield scene is still my go-to benchmark any time something changes, regardless of its thoroughly antiquated content - because it's fast.) The problem is how little info people have been in the habit of posting with their results beyond a Total Rendering Time statistic (plus an OptiX on/off note more recently). Not their fault - the OP only ever specified TRT and graphics card model as things to report. And not Sickleyield's fault either - knowing exactly what stats need to be included for benchmark results to stay relevant years after they're recorded is something you can only learn through experience.

    Unless a set of results includes the following bare minimum of surrounding platform info:

    1. Current Operating System (both version and build numbers)
    2. Current Daz Studio version (both long version number like 4.11.0.366 and bit depth)
    3. Current Nvidia Drivers

    Those results are useless once an additional version or two of any of these things comes and goes. For reference, since the start of this thread, Windows has gone through numerous major/minor upgrades (as well as version obsolescences), as has Daz Studio (4.10 final didn't come out until December of 2017 - this thread's been kicking around since early 2015), and - of course - Nvidia drivers have gone through hundreds if not thousands of tweaks/upgrades.
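
    For anyone who wants to grab items 1 and 3 of that list automatically before posting, here is a rough sketch (my own, assuming Windows with nvidia-smi on the PATH; the Daz Studio version still has to be copied by hand from Help > About Daz Studio):

import platform
import subprocess

def gather_platform_info():
    """Collect the surrounding platform info worth posting alongside a benchmark."""
    info = {}
    # 1. Operating system, with version and build number
    info["os"] = platform.platform()  # e.g. 'Windows-10-10.0.17763-SP0'
    # 3. Current Nvidia driver version, queried via nvidia-smi
    try:
        driver = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
            text=True,
        ).splitlines()[0].strip()
    except (OSError, subprocess.CalledProcessError):
        driver = "unknown (nvidia-smi not found)"
    info["nvidia_driver"] = driver
    # 2. Daz Studio has no query hook I know of - copy this by hand from
    #    Help > About Daz Studio (long version number plus bit depth).
    info["daz_studio"] = "4.11.0.366 64-bit (edit manually)"
    return info

if __name__ == "__main__":
    for key, value in gather_platform_info().items():
        print(f"{key}: {value}")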

    Post edited by RayDAnt on
  • outrider42outrider42 Posts: 3,679
    I totally get that about the old benches; I've spoken about it numerous times. But even so, there is still some value there. There are GPUs tested in there that have never been tested again. I think it's OK to mention them with the context that they are outdated/unverified benches.

    Perhaps they can be listed in a separate chart of "unverified" benches. I think that would be fair. It would still serve a purpose as a general frame of reference. Some people might be looking at eBay for old cards and think they have found something. Though the benches may not be perfect, they would still provide a starting point for some sort of comparison. When Pascal came along, it pretty much took over the thread. But we have all sorts of cards that predate Pascal that have not been tested on 4.10 or 4.11.

  • LenioTGLenioTG Posts: 2,118
    edited May 2019

    Still no one with a GTX 1660 or 1660 Ti?

    They offer really good value.

    PS: You probably already know this, but there's this thing called "OctaneBench", which has basically every GPU benchmarked: https://www.cgdirector.com/octanebench-benchmark-results/

    Are these results comparable to Iray?

    Post edited by LenioTG on
  • outrider42outrider42 Posts: 3,679
    I believe Octane is comparable.

    But there is one very important thing to note about the 1600 cards, they do not have dedicated ray tracing cores. So what you see right now is what you get, they will not be getting any enhanced performance when full RTX support is released. All of the RTX cards have these cores, and so they are expected to see large performance gains when those new cores get supported. We know that full support is coming. That's why I believe that the 2060 will become a cost effective favorite.

    But we have to be patient.
  • LenioTGLenioTG Posts: 2,118

    Thanks for the answer :D

    Don't they also have some RTX capabilities? Won't they at least improve a little bit?
    It's not possible to ignore the price difference between a 375€ 2060 and a 220€ 1660!

    And when do you think this support will come?

  • outrider42outrider42 Posts: 3,679
    edited May 2019
    LenioTG said:
    Thanks for the answer :D

    Don't they also have some RTX capabilities? Won't they at least improve a little bit?
    It's not possible to ignore the price difference between a 375€ 2060 and a 220€ 1660!

    And when do you think this support will come?

    No, the 1660 and 1660ti have no RTX features at all. No ray tracing cores, no tensor cores. That is why they are cheaper.

    We do not know when RTX support is coming to Daz. All we know is that it *is* coming. Nvidia announced that a new Iray will be shipping with RTX support in May. And well, May is halfway over, so that could be any day now! However, the people at Daz must implement the new Iray into Daz Studio, and that takes a little time. It usually takes Daz about 1 or 2 months to release a new version of Daz Studio once they receive the new Iray.

    So all I can do is guess that Iray RTX could be coming around July-August if Nvidia is on time.

    Keep watching for any announcements that Nvidia has released a new Iray SDK. That is the big key. Daz cannot do anything until this happens. Once the new Iray is released, then we will know there is a finish line in sight.

    I know it may be tough to jump from the 1660 to a 2060. I have been there. But I truly do believe that saving the money for RTX will prove to be worth doing once RTX support comes. If you are on the fence, just keep waiting, and keep saving while you wait. Once it does come, we can do all the testing and see once and for all if RTX is indeed worth the money. Until then we can only speculate. I personally believe that RTX will be worth the wait, but that is my opinion. Everything else that has adopted RTX has seen big gains.

    Post edited by outrider42 on
  • LenioTGLenioTG Posts: 2,118

    I think we need to do something like that OctaneBench, but for Iray!
    Honestly, I'd find it much more useful than a 42-page thread with random info on random versions xD

    That's good, I can wait until August! Today my PC+ membership expires, so I'll save something up (I hope).

    Maybe that's why they still haven't released Daz 4.11!

    You surely know more about this than me, but in April they said that ray tracing was coming to Pascal and 1600-series GPUs too: https://www.tomshardware.com/reviews/nvidia-pascal-ray_tracing-tested,6085.html
    It's not complete ray tracing, but it does something, I guess!

  • outrider42outrider42 Posts: 3,679
    edited May 2019
    LenioTG said:
    I think we need to do something like that OctaneBench, but for Iray!
    Honestly, I'd find it much more useful than a 42-page thread with random info on random versions xD

    That's good, I can wait until August! Today my PC+ membership expires, so I'll save something up (I hope).

    Maybe that's why they still haven't released Daz 4.11!

    You surely know more about this than me, but in April they said that ray tracing was coming to Pascal and 1600-series GPUs too: https://www.tomshardware.com/reviews/nvidia-pascal-ray_tracing-tested,6085.html
    It's not complete ray tracing, but it does something, I guess!

    I'd certainly like a dedicated benchmark suite like Octane has. But Daz does not have one, so it's entirely up to us, which has been a point of discussion for some time.

    I see now how you may be confused. When Nvidia gave Pascal ray tracing drivers, that was for video games. The update was indeed for Pascal and the 1600 cards. But this update is only software, and only for video games. It allows gamers to turn on ray tracing in video games when using these GPUs, but they do not get any hardware acceleration. This only works for the few video games that support ray tracing.

    This has no impact on Daz Iray or other renderers like Octane, because they are not game engines. They already do ray tracing (without any special hardware like RTX). The update was for DirectX 12 and Vulkan, which is not what Iray uses. If you look at a typical video game, you can quickly see the difference. In order to render quickly, video games "cheat" by creating fake shadows and using extremely simple lighting. Now GPUs (RTX mainly) have dedicated hardware to make it possible to ray trace in a video game. Ultimately this is what the whole RTX launch is about... stuff like Iray just happens to benefit as well. That's also why it has taken so very long for Iray to get RTX support - it is not a top priority for Nvidia the way video games are.

    Anyway, I hope that clears it up a bit. I would suggest that further posts like this be made in the General GPU Hardware discussion thread that was spun off by Richard, because this starts to stray from the thread topic of posting benchmarks. Sorry, I don't have the link at the moment as I am on mobile.
    Post edited by outrider42 on
  • LenioTGLenioTG Posts: 2,118

    Thank you, now I understand! :)

  • nothingmorenothingmore Posts: 24

    Intel i9-7900X @ default frequency
    Asus Rog Rampage VI Extreme mobo
    64gb ddr4 @3200
    Windows 10 Pro version 1809 build 17763.503
    Nvidia driver version 430.64
    Daz build: 4.11.366 Beta

    All GPUs are running at default frequency

    SickleYield Scene:
    (1) Titan Xp
    OptiX on:
    trial 1: 105.819s.
    OptiX off: 151.398s.
    (1) GTX 1080Ti EVGA ftw3
    OptiX on: 116.702s.
    OptiX off: 163.403s.
    (1) GTX 1080Ti FE (Nvidia reference)
    Optix on: 117.870s.
    Optix off: 165.396s.
    (1) Titan Xp + (x3)1080Ti
    OptiX on: 30.022s
    OptiX off: 42.054s
    (x3) 1080Ti
    OptiX on: 40.847
    (1) Titan RTX
    Optix on: 62.189
    OptiX off: 76.030
    (x2) Titan RTX
    Optix on: 32.271s
    Optix off: 39.157s
    Outrider scene:
    (1) Titan Xp + (x3)1080Ti
    Optix on: 142.725s
    (x2) Titan RTX
    Optix on: 134.941s

     

    Take-home message: Using SickleYield's benchmark, the 2 Titan RTX's were about 2 seconds slower than the 4 Pascal cards. Using Outrider's benchmark, the 2 Titan RTX's were about 8 seconds faster than the 4 Pascal cards.

    Interestingly, Spektra3DX's 2080 Tis are outperforming the Titan RTXs by a good margin. I'm curious how Spektra's GPUs are clocked. Also, NVLink shows up tomorrow. I expect a slight increase in render times due to overhead (as demonstrated in V-Ray), but I haven't found anything definitive on whether memory pooling via NVLink works in Iray. When the bridge shows up tomorrow I'll try to push a 25+ GB scene to see what happens.

  • RayDAntRayDAnt Posts: 1,120

    NVLink shows up tomorrow. I expect a slight increase in render times due to overhead (as demonstrated in V-Ray), but I haven't found anything definitive on whether memory pooling via NVLink works in Iray. When the bridge shows up tomorrow I'll try to push a 25+ GB scene to see what happens.

    Memory pooling should absolutely work out of the box with your two Titan RTXs via NVLink as long as you go and enable TCC mode in your drivers (see this post/the posts around it for an in-depth discussion/step-by-step process on how to do that.) Memory pooling without TCC mode enabled (TCC mode is currently limited to Quadro and Titan cards) is where the big mystery lies.

  • timon630timon630 Posts: 37

    Tell me, please: if there are two or more graphics cards, is the scene (8 GB) loaded into the memory of all of them (8/8/8/8), or just one (8/0/0/0)?

  • tj_1ca9500btj_1ca9500b Posts: 2,047
    timon630 said:

    Tell me, please: if there are two or more graphics cards, is the scene (8 GB) loaded into the memory of all of them (8/8/8/8), or just one (8/0/0/0)?

    The full scene will be loaded into each graphics card, so your 8/8/8/8 analogy would be correct.

    If you are using NVLink, however, which supposedly can pool the memory, then the two graphics cards will essentially combine their memory into a common block.  HOWEVER, this will slow down memory performance, as the cards will need to access memory across the NVLink for larger scenes.  That's what the folks over at the OTOY forums have observed.

    Of course, neither NVLink nor SLI is required for multi-GPU rendering in Daz Iray.  Just keep in mind that the card with the least memory will set the upper limit of the size of a scene.  It is possible to mix and match cards for Iray rendering, as you can see in various results in this thread.  Essentially, the more CUDA cores you can throw at a render, the faster the render will generally be.

  • RayDAntRayDAnt Posts: 1,120

    Just keep in mind that the card with the least memory will set the upper limit of the size of a scene.

    This is incorrect. In a non memory-pooled setup, any one card lacking enough onboard memory to handle a scene will simply sit out the rendering while any remaining, capable devices finish the render. A lower capacity card will never interfere with a higher capacity card's ability to function.

  • outrider42outrider42 Posts: 3,679
    RayDAnt said:

    Just keep in mind that the card with the least memory will set the upper limit of the size of a scene.

    This is incorrect. In a non memory-pooled setup, any one card lacking enough onboard memory to handle a scene will simply sit out the rendering while any remaining, capable devices finish the render. A lower capacity card will never interfere with a higher capacity card's ability to function.

    Indeed. Let's say you have 3 cards: a 1080 Ti with 11GB, a 1060 with 6GB, and a 1050 with 2GB.

    If you create a scene that takes about 5GB, then the 1050 will not run. But the 1060 will, and it will combine its CUDA cores with the 1080 Ti to render faster than the 1080 Ti would alone.

    If you create a scene that is about 9GB, then only the 1080 Ti will run, and the other two cards will provide no benefit.

    So while you can combine cards of different spec, it is recommended to use cards that are reasonably close in VRAM size if you plan on going the multi-GPU route. A 1070 and a 1080 pair well, for example, since they both have 8GB of memory. Of course, people can do what they want; there are no rules set in stone about this.
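
    Just to illustrate the drop-out behaviour described above in code form - this is only a sketch of the rule of thumb, not anything Daz Studio actually exposes:

def participating_gpus(scene_size_gb, cards):
    """Return the cards that can hold the whole scene; the rest simply sit out.

    cards is a list of (name, vram_gb) tuples. Iray keeps a full copy of the
    scene on every participating card, so a card either fits it or contributes
    nothing - it never drags the other cards down.
    """
    return [name for name, vram_gb in cards if vram_gb >= scene_size_gb]

rig = [("GTX 1080 Ti", 11), ("GTX 1060", 6), ("GTX 1050", 2)]
print(participating_gpus(5, rig))  # ['GTX 1080 Ti', 'GTX 1060'] - the 1050 sits out
print(participating_gpus(9, rig))  # ['GTX 1080 Ti'] - only the 11GB card runs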

  • timon630timon630 Posts: 37
    edited May 2019

    Hi, everybody. Sorry for my English.
    I put all the data in one table.

    Using multiple graphics cards

    Based on the results in this thread, I made a table with which you can quickly estimate which of the graphics cards is the better buy and what performance to expect:

    To use it, you need to:
    1. Go to the table (link)
    2. File - Make a copy
    3. Edit the prices in the "price" column (right now it contains prices from my city)
    4. Select the cards of interest from the drop-down lists
    5. The table then shows the overall performance of the selected graphics cards (it agrees with the real tests)
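
    If anyone wants to sanity-check the combined multi-GPU numbers without the spreadsheet, here is a rough sketch of one way to estimate them - it assumes multi-GPU Iray scaling is close to additive in iterations per second, which is only an approximation of what the table (and real renders) show:

def combined_render_time(single_card_times_s):
    """Estimate a multi-GPU render time from each card's solo time on the same
    scene, assuming iteration rates simply add up (ideal scaling)."""
    combined_rate = sum(1.0 / t for t in single_card_times_s)
    return 1.0 / combined_rate

# e.g. a card that solos the scene in ~105.8s plus one that takes ~116.7s
print(round(combined_render_time([105.8, 116.7]), 1))  # ~55.5s under ideal scaling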

    Post edited by Chohole on
  • tj_1ca9500btj_1ca9500b Posts: 2,047
    edited May 2019
    RayDAnt said:

    Just keep in mind that the card with the least memory will set the upper limit of the size of a scene.

    This is incorrect. In a non memory-pooled setup, any one card lacking enough onboard memory to handle a scene will simply sit out the rendering while any remaining, capable devices finish the render. A lower capacity card will never interfere with a higher capacity card's ability to function.

    I wasn't being entirely clear on this.  In order for a particular card to participate in the render, the scene needs to be able to fit inside that card's memory.

    As I've never personally mixed and matched cards with different VRAM sizes, I wasn't sure if Daz would just bypass those cards if the scene couldn't fit in the smaller cards, or drop to CPU only.  Since Daz loves to drop to CPU only at times on renders, particularly when you are repeating renders on a scene that fit in previous passes, I was leaning towards a full CPU-only drop, where you would need to uncheck the boxes for the smaller GPUs to avoid CPU-only rendering.  It's good to know that I won't need to do that if I ever mix cards with different RAM amounts, though.

    Thank you for the clarification!

    Post edited by tj_1ca9500b on
  • outrider42outrider42 Posts: 3,679
    I will add that while Daz usually handles different VRAM sizes as stated, sometimes it screws up and still drops to CPU. I ran different sizes for a long time - I had a 1080 Ti + 970, which is quite a difference. There were times when weird things happened: the top half of the render would be 50% transparent, the bottom would be OK, and Daz would sometimes crash when I stopped. I forget the exact error, but it would result in a fatal error forcing Daz to close. I know it was related to VRAM; typically, if this happened, it was right at the cusp of the 970's 4GB limit. If I exceeded the 4GB by a lot, this error happened less. It's like when the scene is close to the limit, Daz might mishandle the drop-off, as if it gets confused about whether it should drop the GPU or not.

    I stress this didn't happen that much, but it did happen. When I bought my 2nd 1080 Ti and removed the 970, I never had this happen again. So this is just a heads up that it can happen.

    That's why I said it's probably a good idea to use cards that are kind of close in capacity, if not equal. I'm sure if I had an 8GB GPU instead of the 4GB, the problem would have happened less, if at all.
  • tj_1ca9500btj_1ca9500b Posts: 2,047
    edited May 2019

    Good to have a real-life usage perspective on this.  I'm not disagreeing with you at all on these points.  Even mixing cards from different generations, even with the same VRAM amounts, may cause increased instability, I would think.

    That being said, we probably should get back on topic re: only posting render benchmarks, and move this discussion to the other thread if necessary, since the goal is to not overly clutter this thread.  I do appreciate y'all's experience on this topic though!

    https://www.daz3d.com/forums/discussion/321401/general-gpu-testing-discussion-from-benchmark-thread

    Post edited by tj_1ca9500b on
  • RayDAntRayDAnt Posts: 1,120
    edited May 2019
    timon630 said:

    Hi, everybody. Sorry for my English.
    I put all the data in one table.

    Using multiple graphics cards

    Based on the results in this thread, I made a table with which you can quickly estimate which of the graphics cards is the better buy and what performance to expect.

    Awesome effort. Although there are some key factors seemingly missing from your analysis which - if left uncontrolled for - will lead to statistically significant inaccuracies in your graphs:

    1. Daz Studio version tested.
    Unless all of the datapoints you are using from this thread came from EXACTLY the same version of Daz Studio (e.g. 4.11.0.236 vs. 4.11.0.366 vs. 4.10.0.123), there is a potential margin of error in people's results anywhere from about 4 seconds under the various 4.11.0.XXX variants to as much as around 55 seconds if 4.10.0.123 is included as well. And since many of the performance differences you've tabulated between cards are within the 1-4 second range (especially on the higher end), this is a potential problem for making charts like this (it's one of the main reasons I haven't yet produced one myself.)

    2. Daz Studio Total Rendering Time vs. GPU/CPU scene rendering time. 
    When Sickleyield first started this thread 4+ years ago s/he decided to use Total Rendering Time as the base statistic for reporting relative rendering performance across devices/scenarios. Technically speaking, Total Rendering Time - as reported by Daz Studio during/after the completion of a render - is NOT just a measure of device rendering time. It is an overall measure of how long it takes Daz Studio to load/process the assets of a scene, initialize the Iray rendering plugin, transfer those assets to that plugin for rendering, wait for the plugin to finish the rendering process, and finally save the final render as output by the Iray plugin for review by the user. In other words, Total Rendering Time includes an overhead unrelated to graphics card rendering performance of anywhere from a couple of seconds (as exemplified by my totally solid-state Titan RTX/i7-8700K rendering machine build) to as much as a whole minute or longer (some spinning-disk machines.)
    4+ years ago this potentially minute+ offset between Device Rendering Time and Total Rendering Time wasn't much of an issue (at least for single-card benchmarks) since most rendering devices needed tens of minutes to complete Sickleyield's scene. And having an extra 30 seconds give-or-take added to that most likely wasn't going to change anything to the extent of, say, rearranging the order of things in a graph of relative rendering performance between different GPU models. However, now that we are at the point where many cards are capable of finishing Sickleyield's scene in a couple of minutes or less and differences between many cards are within mere seconds of each other, this 30+ second margin of error is a huge problem. It's why most of the benchmarking numbers in this thread are currently (unfortunately) useless going forward. And the only way to remedy this is for people to report Device Rendering Statistics like this

    2019-04-24 05:42:51.762 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Device statistics:
    2019-04-24 05:42:51.762 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1050):   4986 iterations, 14.769s init, 570.178s render

    where 570.178 seconds is an actual measure of device rendering performance you can base calculations on, rather than Total Rendering Time like this

    2019-04-24 05:38:38.299 Finished Rendering
    2019-04-24 05:38:38.353 Total Rendering Time: 9 minutes 51.42 seconds

    where 9 minutes 51.42 seconds isn't. Notice that 9 minutes 51.42 seconds is 591.42 seconds, or 21.242 seconds more than 570.178 - meaning that the TRT statistic is off by 21.242 seconds. And this difference isn't going to scale with render time - i.e. the same scene rendered to 10 iterations in <10 seconds is still going to carry the same 21.242-second overhead, making the card look roughly 3x less efficient at rendering than it really is.
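
    For anyone who wants to pull those numbers out of a log automatically, here is a rough sketch (the regular expressions are mine and only assume the two log line formats quoted above; "log.txt" is a hypothetical path to a saved copy of your Daz Studio log):

import re

def device_render_seconds(log_text):
    """Extract per-device render times (the number that actually matters) from
    an Iray 'Device statistics' block in the Daz Studio log."""
    pattern = (r"CUDA device \d+ \(([^)]+)\):\s+\d+ iterations, "
               r"[\d.]+s init, ([\d.]+)s render")
    return {name: float(secs) for name, secs in re.findall(pattern, log_text)}

def total_rendering_seconds(log_text):
    """Extract Daz Studio's Total Rendering Time (device time plus overhead)."""
    m = re.search(r"Total Rendering Time: (?:(\d+) minutes? )?([\d.]+) seconds", log_text)
    return int(m.group(1) or 0) * 60 + float(m.group(2))

log = open("log.txt").read()  # hypothetical path to a saved copy of the log
devices = device_render_seconds(log)
overhead = total_rendering_seconds(log) - max(devices.values())
print(devices, f"overhead: {overhead:.3f}s")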

    ETA:
    There is a fix for this - start a new benchmarking thread where the OP's directions are to report Device Statistics rather than/in addition to Total Rendering Time as a person's benchmarking results. I actually already have just such a thread pretty much ready to post at a moment's notice. I'm just waiting for concrete news regarding when to expect RTCore support to debut since that will dictate how complex a benchmarking scene needs to be to be meaningful on Turing level hardware.

    Post edited by RayDAnt on
  • LenioTGLenioTG Posts: 2,118

    I can't see the images!

  • outrider42outrider42 Posts: 3,679
    I can see them fine, even on my android phone.

    They may not be 100% accurate, but they can show the general performance, and that's great.
  • timon630timon630 Posts: 37
    LenioTG said:

    I can't see the images!

    The images are copied into the "Iray Starter Scene" sheet of the table.

    Iray Starter Scene (SickleYield), Optix On, No Preload (1).png
    925 x 1190 - 67K
  • kattyg911kattyg911 Posts: 7
    timon630 said:
    LenioTG said:

    I can't see the images!

    Copy images in table list "Iray Starter Scene" 

    Heya, where did you get the results for a 1070 Ti?
    Because I get 2:27-2:20 with it; 3:00 looks strange :0 that's a 30+% difference.
    It makes me doubt the remaining results =\

    OptiX ON, GPU only (1x 1070 Ti)
    2:27 if you open the scene for the first time
    2:20 +/- if you render the same scene a second time

    OptiX ON, CPU + GPU (2687W x2 + 1070 Ti x1)
    2:08


    Win 8.1 Pro
    DAZ 4.11 Pro Public Build
    430.64 Game Ready Driver
    Z9PE D8 WS, 2687W X2, 128GB RAM
    ZOTAC GTX 1070 TI AMP! Extreme

    My system is quite old, but I do not think this is a problem with the drivers or anything else.
    Or maybe only reference models of video cards are used in the results - if so, sorry.
