Daz Studio Iray - Rendering Hardware Benchmarking


Comments

  • outrider42 Posts: 3,679

    chrislb said:

    skyeshots said:

    This is great work. Thank you for doing this. Very helpful for the overclockers here. I think my biggest obstacle so far with overclocks has been the ambient temps. The 3 and 4 card setups put out a few thousand BTU under load. With overclocks & sizable renders, internal temps tend to climb with the ambient. This is OK if I catch it and open a window, but less than ideal. The other limitation with overclocks on the MSI Ventus cards is the 2-cable adapters; they can only handle a modest push.

    I think to see how significant the real-world difference is with overclocking, I may need to set up a larger scene with more objects and light sources.

    There are two ways to approach it. RTX cards have two different cores that affect performance. Traditional CUDA cores handle the shading, and this takes the longest amount of time in most scenes. A scene that has a lot of complex shaders would test this aspect more (realistic skin is quite complex, so some close-ups of skin should do the trick). Then you have the geometry, and the RT cores now handle a large bulk of this task with the CUDA cores. A scene with a lot of geometry will test this, like the "RTX Show Me the Power" benchmark thread that was made back around when Daz first got RT support. That scene uses dForce strand hair; you could also use a scene with lots of trees, as foliage can really eat up geometry. For reference, that thread is here: https://www.daz3d.com/forums/discussion/344451/rtx-benchmark-thread-show-me-the-power/p1

    I would be curious to see if the memory affects one aspect more than the other. Ray tracing is a memory-intensive task, so my thinking is that perhaps the extra memory speed benefits the ray tracing part of a scene more than the shading part. If that is true, then a scene that has more geometry will see larger gains from memory overclocking than scenes heavier on shading. But I could have it backwards, since shading often takes the longest to do.

  • chrislb Posts: 95

    If I were going to create a complex scene and do some "real world" type of testing to see what the actual real world difference is with overclocking vs render times, would it be better to test to a certain convergence percentage or a certain number of iterations?  I was thinking it would be more relevant to measure time to a certain convergence percentage for a "real world" type of test and compare render times.

    Also, I was thinking that it might be better to test several different types of scenes to do real world comparisons of the effects of overclocking.  Maybe people with different types of hair, various objects with different metallic, glossy, and matte surfaces, and some type of scenery with trees and plants?

  • Matt_Castle Posts: 2,299

    chrislb said:

    would it be better to test to a certain convergence percentage or a certain number of iterations? 

    Although you may first want to get an idea of how many iterations are necessary for a given test to be "complete", the actual tests should use a fixed number of iterations as their limit.

    While in theory the difference is moot - with the same test scene on the same convergence settings, it should converge in the same number of samples - the convergence percentage is going to be a lot less consistent, as it's only calculated intermittently, so you'll end up with random variation that makes results harder to compare. Sometimes it may calculate the convergence just after the render reaches the limit; sometimes it may calculate just before the limit is reached (and therefore have to wait until the convergence is next calculated).

    Given we're comparing performance differences of a few percent, you don't want any unnecessary variation in the data.
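
    To make that concrete, here is a toy illustration (my own sketch, not how Iray is actually implemented): if the stop condition is only evaluated every so many iterations, a run that truly converges at the target still stops at the next check after it, so the stopping point - and therefore the timed result - jitters between otherwise identical runs.

    import random

    # Toy model: convergence is only checked every `check_interval` iterations,
    # starting at a run-dependent phase, so the render overshoots the true
    # convergence point by a variable amount.
    def iterations_until_stop(target, check_interval, phase):
        i = phase
        while i < target:
            i += check_interval
        return i

    random.seed(1)
    runs = [iterations_until_stop(1800, 120, random.randrange(120)) for _ in range(5)]
    print(runs)  # same scene, but each run stops at a slightly different iteration count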

  • RayDAnt Posts: 1,120
    chrislb said:

    If I were going to create a complex scene and do some "real world" type of testing to see what the actual real world difference is with overclocking vs render times, would it be better to test to a certain convergence percentage or a certain number of iterations?  I was thinking it would be more relevant to measure time to a certain convergence percentage for a "real world" type of test and compare render times.

    Also, I was thinking that it might be better to test several different types of scenes to do real world comparisons of the effects of overclocking.  Maybe people with different types of hair, various objects with different metallic, glossy, and matte surfaces, and some type of scenery with trees and plants?

    Don't have time to explain in full now, but because of how Iray manages divvying up the rendering process between different rendering-capable devices, you MUST use iteration count rather than something else like convergence %. Otherwise, as already noted, your results can end up slightly skewed between runs on different machines/different rendering device configs.

  • Matt_Castle Posts: 2,299
    edited March 2021

    For a minor update on the general subject of minor hardware tweaks and such, I've finally got around to turning on Resizable BAR in my BIOS (the RTX 3060 is currently the only Nvidia card to support it). Now, I wasn't expecting it to make any difference to render speed (and, indeed, it didn't), as there's very little communication between the GPU and CPU after the scene is loaded, but I was wondering if it would have any impact on scene load time (which could be a plus for Iray Preview).

    ... alas, it seems not, though I'd have to run a large number of tests to be sure I've got a solid control. The difference compared to the numbers I have from before is negligible, well within the margin of error.

    Post edited by Matt_Castle on
  • chrislb Posts: 95

    After doing some real world testing, I'm not sure of the benefits of overclocking the GPU.  I tried a more complex scene with multiple objects and light sources.  I tried two renders with the GPU locked at 1905 MHz then at 2145 MHz.  The difference in a 1 hour and 7 minute long render stopped at 12300 iterations was about 13 seconds.

    GPU 2145 MHz VRAM stock (9502x2)
    Total Rendering Time: 1 hours 7 minutes 23.79 seconds
    GPU 1905 MHz VRAM stock (9502x2)
    Total Rendering Time: 1 hours 7 minutes 36.8 seconds

    This was on a water cooled 3090 with a 140mm fan blowing on the backplate to keep the rear VRAM cool.  The water pump and radiator fans were set to their maximum speed to keep the temperature stable.  Most of the time the 3090 was drawing under 400 watts of power and the watercooling loop has about a 2000 watt cooling capacity with the fans and pump at max speed.  The GPU temperatures for both renders fluctuated between 41-44C during the entire render.  

  • skyeshots Posts: 148
    edited March 2021

    outrider42 said:

    So while I have said many times that it is ok to build an 'unbalanced' PC for Daz because of how heavily weighted Iray is to GPU, that is only for people who cannot afford building new machines. It certainly does help having modern hardware. But at some point most people have to make a decision on what parts to focus on, because it is impossible to have it all for 3D rendering. There is literally no limit to what you can throw at 3D rendering,
    the only limit is your budget and perhaps power limitations. The goal of this thread is to give people information so they can hopefully make an informed decision for themselves that works best for them. Often times the GPU really is the best thing one can buy for an upgrade, but right now, maybe it isn't the best time to do that unless you get lucky.

    This sounds good, but the truth is that money only goes so far. Aside from the GPU shortages, virtually every high performance component has been hard to source right now. Motherboards, water cooling parts, power supplies (ouch) and drives have been very slow to ship. With millions of remote workers, the demand for laptops and other devices has never been higher and it is not expected to get better anytime soon.

    As far as the scalpers and miners go, I would say the A6000 is the correct answer to both problems, if you are an artist and have the means.

    Post edited by skyeshots on
  • skyeshots Posts: 148

    System/Motherboard: Gigabyte X299X
    CPU: I9-10920X  4.4 GHZ
    GPU: PNY RTX A6000 x3
    GPU: MSI 3090 x1 (+1100 mem)
    System Memory: 96 GB Corsair Dominator Platinum DDR4-3466
    OS Drive: Optane 380 GB
    Asset Drive: Same
    Operating System: Win 10 Pro, 1909
    Nvidia Drivers Version: 461.92 Studio Drivers
    Daz Studio Version: 4.15

    2021-03-20 12:28:55.172 Total Rendering Time: 29.68 seconds
    2021-03-20 12:29:04.872 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-03-20 12:29:04.872 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (RTX A6000):      441 iterations, 2.051s init, 24.273s render
    2021-03-20 12:29:04.873 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 3 (GeForce RTX 3090):      477 iterations, 1.804s init, 24.488s render
    2021-03-20 12:29:04.882 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 2 (RTX A6000):      426 iterations, 2.004s init, 23.495s render
    2021-03-20 12:29:04.882 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (RTX A6000):      436 iterations, 2.057s init, 24.147s render
    2021-03-20 12:29:04.882 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU:      20 iterations, 1.480s init, 24.267s render
    Loading Time: 5.192
    Rendering Performance: 1800/24.488 = 73.505 iterations per second
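
    (For anyone who wants to pull these figures out of the log automatically, here is a minimal Python sketch - my own, not an official Daz/Iray tool. It assumes the "Device statistics" format shown above; the combined figure follows this thread's convention of dividing total iterations by the slowest device's render time, and "log.txt" is just a placeholder for wherever you pasted the log lines.)

    import re

    # Parse Iray "Device statistics" lines copied from the Daz Studio log and
    # report per-device and combined iteration rates.
    LINE = re.compile(
        r"(CUDA device \d+ \([^)]+\)|CPU):\s+(\d+) iterations,\s+[\d.]+s init,\s+([\d.]+)s render"
    )

    def iteration_rates(log_text):
        devices = [(name, int(iters), float(render_s))
                   for name, iters, render_s in LINE.findall(log_text)]
        for name, iters, render_s in devices:
            print(f"{name}: {iters / render_s:.3f} iterations per second")
        total = sum(iters for _, iters, _ in devices)
        slowest = max(render_s for _, _, render_s in devices)
        print(f"Combined: {total}/{slowest} = {total / slowest:.3f} iterations per second")

    with open("log.txt") as f:
        iteration_rates(f.read())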

     

    Optane arrived yesterday... Updated to the 461.92 drivers, added a bit more RAM, and overclocked the CPU to 4.4, which put the CPU back into the 'contributing' category.

    I'm hoping to eventually pull the 3090 out of this station and replace it with a 4th A6000. When I ran the 3x A6000 cards today (without the 3090), I was at exactly 75% of the iteration rate above (54.91 per second). So it is possible there will be zero performance hit from replacing the last 3090. My guess is the 3x A6000 ran smoother due to all the cards being identical. It also opens the door to the Production Branch drivers, improved VRAM pooling, ECC utilization, and easier airflow management. It would also look better in the PC.

    On another note, I have the OS and Daz on the Optane drive directly. It seems to help in general workflows, particularly during frequent saves and when loading new assets into a scene. This is subjective though; it just 'feels' faster. On the downside, it added some heat, about 5 degrees under 2 of the GPUs.

  • outrider42 Posts: 3,679

    skyeshots said:

    outrider42 said:

    So while I have said many times that it is ok to build an 'unbalanced' PC for Daz because of how heavily weighted Iray is to GPU, that is only for people who cannot afford building new machines. It certainly does help having modern hardware. But at some point most people have to make a decision on what parts to focus on, because it is impossible to have it all for 3D rendering. There is literally no limit to what you can throw at 3D rendering,
    the only limit is your budget and perhaps power limitations. The goal of this thread is to give people information so they can hopefully make an informed decision for themselves that works best for them. Often times the GPU really is the best thing one can buy for an upgrade, but right now, maybe it isn't the best time to do that unless you get lucky.

    This sounds good, but the truth is that money only goes so far. Aside from the GPU shortages, virtually every high performance component has been hard to source right now. Motherboards, water cooling parts, power supplies (ouch) and drives have been very slow to ship. With millions of remote workers, the demand for laptops and other devices has never been higher and it is not expected to get better anytime soon.

    As far as the scalpers and miners go, I would say the A6000 is the correct answer to both problems, if you are an artist and have the means.

    Well right now is a pretty unique moment in history. There has never been a time like this, when so many parts were so hard to get and scalpers hoarded everything for months on end. But the point still stands. While some things are hard to get, it is still mostly a matter of money as to obtaining all of these parts. If one really wanted to pay the scalpers, many parts are in fact readily available. You may have had to wait, but you still got your stuff in the end.

    I just built a Ryzen 5800X PC myself, so I know it can be done. I used my same power supply and GPUs, but most everything else is new. 

    The fact remains that you can build an entire rendering farm if you so desired. And if you cannot do so, either because of space or availability, you can always rent one. Migenius did benchmarks on servers from companies like Amazon and Google (scroll down). https://www.migenius.com/products/nvidia-iray/iray-rtx-2019-1-1-benchmarks

    They tested servers with up to 8 GPUs, and mention they have seen some with 16 GPUs. They make a point of saying that the 16-GPU server still had 90% efficiency, showing how well Iray can scale even with so many. Nvidia sells DGX boxes ready to go, and though they don't have any Ampere with RT cores in these, they still pack a massive amount of raw compute power.

    So the point still stands. The only thing stopping people is how much money they are willing to spend on their hardware. This is very different from video games, where one GPU is all you can use now, as very few games even support SLI anymore. But you can go hog wild with Iray.

  • prixat Posts: 1,585
    edited April 2021

    System/Motherboard: ASUS B550M-A
    CPU: R7 5800X 4.2GHz stock
    GPU: EVGA RTX 3060 12GB
    System Memory: 16 GB Corsair DDR4-3200
    OS Drive: Intel 660p 1TB
    Asset Drive: Same
    Operating System: Win 10 Home, 20H2
    Nvidia Drivers Version: 461.92 Game Ready
    Daz Studio Version: 4.15.0.14 Beta

    Results:
    2021-03-25 18:12:09.474 Total Rendering Time: 3 minutes 48.46 seconds
    2021-03-25 18:12:13.984 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3060): 1800 iterations, 1.451s init, 225.417s render
    Iterations per second: 7.985

    Rendering Time (GPU + CPU): 3 minutes 37.51 seconds
    Faster with CPU by: 10.95s
    even though the GPU slowed to 7.777 iterations per second

    Post edited by prixat on
  • skyeshots Posts: 148

    outrider42 said:

    Well right now is a pretty unique moment in history. There has never been a time like this, when so many parts were so hard to get and scalpers hoarded everything for months on end. But the point still stands. While some things are hard to get, it is still mostly a matter of money as to obtaining all of these parts. If one really wanted to pay the scalpers, many parts are in fact readily available. You may have had to wait, but you still got your stuff in the end.

    I just built a Ryzen 5800X PC myself, so I know it can be done. I used my same power supply and GPUs, but most everything else is new. 

    The fact remains that you can build an entire rendering farm if you so desired. And if you cannot do so, either because of space or availability, you can always rent one. Migenius did benchmarks on servers from companies like Amazon and Google (scroll down). https://www.migenius.com/products/nvidia-iray/iray-rtx-2019-1-1-benchmarks

    They tested servers with up to 8 GPUs, and mention they have seen some with 16 GPUs. They make a point of saying that the 16-GPU server still had 90% efficiency, showing how well Iray can scale even with so many. Nvidia sells DGX boxes ready to go, and though they don't have any Ampere with RT cores in these, they still pack a massive amount of raw compute power.

    So the point still stands. The only thing stopping people is how much money they are willing to spend on their hardware. This is very different from video games, where one GPU is all you can use now, as very few games even support SLI anymore. But you can go hog wild with Iray.

    Now you tell me this, just as I am putting finishing touches on my PC.

    You are right that cloud-based rendering is a fantastic alternative to tying up your PC and the costs associated with HEDT. You referenced an older benchmark though, using an older version of Iray. Today, I think a 4-card A6000 build would be similar to an 8-card Quadro 6000. Perhaps faster? Though not as fast as an 8-card Titan V.

  • outrider42 Posts: 3,679

    skyeshots said:

    Now you tell me this, just as I am putting finishing touches on my PC.

    You are right that cloud-based rendering is a fantastic alternative to tying up your PC and the costs associated with HEDT. You referenced an older benchmark though, using an older version of Iray. Today, I think a 4-card A6000 build would be similar to an 8-card Quadro 6000. Perhaps faster? Though not as fast as an 8-card Titan V.

    How well the RTX cards compare to the Titan V depends on the scene. The V still has great raw compute power, but scenes that lean on the RT cores will push the RTX cards farther ahead. At any rate, I'm sure the servers have been upgraded in the time since those benchmarks were run. The most recent Migenius bench came out right after Turing released.

    I'm not sure what is happening with Migenius, since they haven't updated their benchmarks in so long. I was starting to get concerned, but they finally made a blog post in February regarding RealityServer 6.1 coming out soon, so they are still going.

    https://www.migenius.com/articles/whats-new-in-realityserver-6-1

    It mentions using Iray 2020.1.4; however, the Daz beta is actually up to 2020.1.5. But it mentions things I didn't see in the Daz plugin, like an improved denoiser:

    "Improved Denoiser

    The Deep Learning Denoiser is now fully migrated to use the OptiX denoiser built into the NVIDIA drivers. This comes with generally improved denoising results and the possibility for improvements added through driver updates rather than always requiring updates to Iray and RealityServer itself. As a nice side benefit, the size of the RealityServer distribution has shrunken significantly."

    Unless I missed this somewhere in the previous patch notes. I certainly would like to see this. If the denoiser worked better, that would be a huge thing for a lot of people, as it could save a ton of time on rendering to full convergence. Nvidia made huge strides with their gaming denoiser; it only makes sense that it extends to Iray at some point. Denoising makes animating with Iray much more feasible.

  • Kevin Sanderson Posts: 1,643
    edited March 2021

    Hmmm, I wonder if it's in but we just haven't heard. Solomon Jagwe and Free Nomon use the denoiser when rendering their YouTube videos, cutting Iray renders down to around 30 seconds a frame with much lower samples.

     

    Post edited by Kevin Sanderson on
  • RayDAnt Posts: 1,120

    outrider42 said:

    How well the RTX cards compare to the Titan V depends on the scene. The V still has great raw compute power, but scenes that lean on the RT cores will push the RTX cards farther ahead. At any rate, I'm sure the servers have been upgraded in the time since those benchmarks were run. The most recent Migenius bench came out right after Turing released.

    I'm not sure what is happening with Migenius, since they haven't updated their benchmarks in so long. I was starting to get concerned, but they finally made a blog post in February regarding RealityServer 6.1 coming out soon, so they are still going.

    https://www.migenius.com/articles/whats-new-in-realityserver-6-1

    It mentions using Iray 2020.1.4; however, the Daz beta is actually up to 2020.1.5. But it mentions things I didn't see in the Daz plugin, like an improved denoiser:

    "Improved Denoiser

    The Deep Learning Denoiser is now fully migrated to use the OptiX denoiser built into the NVIDIA drivers. This comes with generally improved denoising results and the possibility for improvements added through driver updates rather than always requiring updates to Iray and RealityServer itself. As a nice side benefit, the size of the RealityServer distribution has shrunken significantly."

    Unless I missed this somewhere in the previous patch notes. I certainly would like to see this. If the denoiser worked better, that would be a huge thing for a lot of people, as it could save a ton of time on rendering to full convergence. Nvidia made huge strides with their gaming denoiser; it only makes sense that it extends to Iray at some point. Denoising makes animating with Iray much more feasible.

     DS/Iray made the jump to the OptiX denoiser in the driver all the way back with DS 4.12.2.050 Beta, since it incorporated Iray RTX 2020.1.1. The likely reason why you missed it is that the feature change actually came as part of Iray RTX 2020.1.0 build 334300.2228, which itself got skipped over in Daz Studio's beta release schedule, making it easy to overlook the fact that the content of its changelog post also applies to all DS/Iray versions thereafter (official Iray changelogs are cumulative).

    I myself have noticed significant improvements in denoising results (especially regarding things like treating human skin textures so that they don't end up looking like furniture or hubcaps) since the changeover. For years (ever since @DAZ_Rawb asked for suggestions on how to do such a thing) I've entertained the idea of establishing a separate thread/testing methodology just for studying denoiser performance. The main thing holding me back has been the task of creating just the right kind of scene to generate good images for comparison.

  • outrider42 Posts: 3,679

    RayDAnt said:

     DS/Iray made the jump to the OptiX denoiser in the driver all the way back with DS 4.12.2.050 Beta, since it incorporated Iray RTX 2020.1.1. The likely reason why you missed it is that the feature change actually came as part of Iray RTX 2020.1.0 build 334300.2228, which itself got skipped over in Daz Studio's beta release schedule, making it easy to overlook the fact that the content of its changelog post also applies to all DS/Iray versions thereafter (official Iray changelogs are cumulative).

    I myself have noticed significant improvements in denoising results (especially regarding things like treating human skin textures so that they don't end up looking like furniture or hubcaps) since the changeover. For years (ever since @DAZ_Rawb asked for suggestions on how to do such a thing) I've entertained the idea of establishing a separate thread/testing methodology just for studying denoiser performance. The main thing holding me back has been the task of creating just the right kind of scene to generate good images for comparison.

    I was under the impression this was an improved version of the OptiX Denoiser, not the old cranky pre-RTX denoiser. There would be no need for Migenius to point this out when the Optix denoiser was around for their previous version of RealityServer which had Iray RTX. Migenius has been shipping Iray RTX for over 2 years.

  • skyeshots Posts: 148

    System/Motherboard: Gigabyte X299X
    CPU: I9-10920X  5.2 GHZ
    GPU: PNY RTX A6000 x4
    System Memory: 96 GB Corsair Dominator Platinum DDR4-3466
    OS Drive: Optane 380 GB
    Asset Drive: Same
    Operating System: Win 10 Pro, 1909
    Nvidia Drivers Version: 461.92 Production Branch
    Daz Studio Version: 4.15

    2021-03-20 12:28:55.172 Total Rendering Time: 29.68 seconds
    2021-03-26 14:42:57.293 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-03-26 14:42:57.293 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (RTX A6000):      440 iterations, 1.835s init, 24.615s render
    2021-03-26 14:42:57.305 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 3 (RTX A6000):      440 iterations, 1.892s init, 24.601s render
    2021-03-26 14:42:57.305 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 2 (RTX A6000):      447 iterations, 1.708s init, 24.940s render
    2021-03-26 14:42:57.305 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (RTX A6000):      452 iterations, 1.786s init, 24.974s render
    2021-03-26 14:42:57.305 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU:      21 iterations, 1.431s init, 25.141s render
    Loading Time: 4.916
    Rendering Performance: 1800/24.974s = 72.08 iterations per second
     

    A6000 x4 w/NVLink x2. Well under 1500W total draw. Final benchmark for now. Back to doing homework..

    [Attachment: A64.jpg]
  • TheKD Posts: 2,667
    edited April 2021

    Jeez, look at dazzy warbucks over here. Looking to adopt a new son?   :P

    Post edited by TheKD on
  • skyeshots Posts: 148
    edited March 2021

    TheKD said:

    Jeez, look at dazzy warbucks over here. Looking to adopt a new son?   :P

     

    [Attachment: Really.png]
    Post edited by skyeshots on
  • Twilight76 Posts: 318

    System Configuration - HP Pavilion TG01-1007ng Gaming-PC

    System/Motherboard: HP 8767

    CPU: Intel Core i7-10700F 2900 MHz stock

    GPU: NVIDIA GeForce RTX 3060 Ti 8 GB, stock

    System Memory: Samsung 32GB DDR4 @2933 MHz

    OS Drive: Kioxia BG4 M.2 1024 GB PCI Express 3.0 BiCS FLASH TLC NVMe

    Asset Drive: Kioxia BG4 M.2 1024 GB PCI Express 3.0 BiCS FLASH TLC NVMe

    Operating System: Windows 10 Home 64bit Version 2009

    Nvidia Drivers Version: 461.92

    Daz Studio Version: 4.15.0.14 64bit

     

    Benchmark Results

    DAZ_STATS

    2021-04-08 00:36:41.191 Total Rendering Time: 2 minutes 49.20 seconds

    2021-04-08 01:09:24.431 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3060 Ti): 1800 iterations, 1.856s init, 165.187s render

     

    Iteration Rate: 10.909 iterations per second

  • prixat Posts: 1,585
    edited April 2021

    UPDATE:
    New BIOS, new Nvidia drivers, memory increased to 32 GB, 2 additional fans pulling air in at the front... and most importantly, turned off all the RGB.

    Consistent 10 second improvement.

    System/Motherboard: ASUS B550M-A
    CPU: R7 5800X 4.2GHz
    GPU: EVGA RTX 3060 12GB
    System Memory: 32 GB Corsair DDR4-3200
    OS Drive: Intel 660p 1TB
    Asset Drive: Same
    Operating System: Win 10 Home, 20H2
    Nvidia Driver: 465.89 Studio Drivers
    Daz Studio Version: 4.15.0.14 Beta

    2021-03-25 18:12:09.474 Total Rendering Time: 3 minutes 37.73 seconds
    2021-03-25 18:12:13.984 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3060): 1800 iterations, 1.373s init, 214.782s render

    Iterations per second: 8.381 (compared to the old 7.985)

     

    UPDATED UPDATE:

    Overclocking the GPU to get around that tiny 192-bit memory bus; fortunately the EVGA software makes overclocking very easy (and relatively safe).
    The memory bandwidth increased from the stock 360 GB/s to a theoretical 408 GB/s.
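
    (A quick arithmetic sketch of where those figures come from - my own working, assuming the stock 3060 runs its GDDR6 at 15 Gbps effective and this overclock lands at roughly 17 Gbps: theoretical bandwidth is just bus width in bytes times effective data rate per pin.)

    # 192-bit bus = 24 bytes per transfer; bandwidth = bytes x effective Gbps.
    bus_bytes = 192 // 8
    for label, gbps in (("stock", 15.0), ("overclocked", 17.0)):
        print(f"{label}: {bus_bytes} B x {gbps} Gbps = {bus_bytes * gbps:.0f} GB/s")
    # stock: 360 GB/s, overclocked: 408 GB/s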

    2021-04-09 08:33:29.098 Total Rendering Time: 3 minutes 21.69 seconds
    2021-04-09 08:33:29.115 Loaded image r.png
    2021-04-09 08:33:29.148 Saved image: C:\TEMPDAZ\RenderAlbumTmp\Render 1.jpg
    2021-04-09 08:33:35.003 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-04-09 08:33:35.003 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3060): 1800 iterations, 1.417s init, 198.102s render

    Iterations per second: 9.086

    that's about 30 seconds faster than the original run.

     

    FURTHER UPDATED UPDATE:

    I applied a couple of tweaks to the render settings, so this is no longer a comparable benchmark!
    In progressive rendering, increase the 'Min Update Samples' to 100 and increase the 'Update Interval' to 26s.

    2021-04-09 08:49:34.534 Total Rendering Time: 3 minutes 17.78 seconds
    2021-04-09 08:49:34.551 Loaded image r.png
    2021-04-09 08:49:34.583 Saved image: C:\TEMPDAZ\RenderAlbumTmp\Render 3.jpg
    2021-04-09 08:49:40.707 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-04-09 08:49:40.707 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3060): 1800 iterations, 1.393s init, 194.802s render

    Iterations per second: 9.24

    This gets the GPU to a speed where adding the CPU made render times less than a second faster.

    Post edited by prixat on
  • TheKD Posts: 2,667
    edited April 2021

    Decided to use this to stress test my 4275 MHz clock speed after passing a bunch of 300-second Cinebench tests:

     

    Test Beast Mode
    System Configuration
    System/Motherboard: TUF GAMING X570-PLUS
    CPU: AMD Ryzen 9 3900X 1.31V 4275 MHz per-core lock, multithreaded on, Beast Mode
    GPU: GPU1 MSI Gaming GeForce RTX 2080 Super Tri-Frozr @ stock speed, aggressive fan profile.  
    System Memory:  G.SKILL Ripjaws V Series DDR4 3600MHz 16GB x2  G.SKILL F4-3000C15D-16GVKB Ripjaws V Series 8GB 3000MHz x2    TOTAL 48GB @ 1066mhz
    OS Drive: Samsung 960 EVO M.2 PCIe NVMe 250GB Internal SSD
    Asset Drive: Seagate green one SATA-III 6.0Gb/s 4TB
    Operating System: win10 E 10.0.19042
    Nvidia Drivers Version: 465.89
    Daz Studio Version: 4.15.0.14


    Benchmark Results 3900x + 2080 super
    DAZ_STATS  
    Total Rendering Time: 3 minutes 52.47 seconds

    IRAY_STATS  
    CUDA device 0 (NVIDIA GeForce RTX 2080 SUPER):1558 iterations, 2.243s init, 228.033s render    
    CPU:242 iterations, 1.782s init, 227.931s render

    Iteration Rate: 7.893 iterations per second
    Loading Time: 4.437 seconds

     

     

    Benchmark Results 3900x only
    DAZ_STATS
    Total Rendering Time: 27 minutes 1.13 seconds

    IRAY_STATS
    rend info : CPU:1800 iterations, 1.491s init, 1617.357s render

    Iteration Rate: 1.112 iterations per second
    Loading Time: 3.773 seconds

     

    Benchmark Results 2080 super only
    DAZ_STATS
    Total Rendering Time: 4 minutes 13.19 seconds

    IRAY_STATS
    CUDA device 0 (NVIDIA GeForce RTX 2080 SUPER):1800 iterations, 1.790s init, 249.191s render

    Iteration Rate: 7.223 iterations per second
    Loading Time: 3.999 seconds

    Post edited by TheKD on
  • carondimonio5

    Hello,

    I am taking my first steps with DAZ Studio and am building a setup in order to get decent render speeds.

    I find there is a really bad market shortage for 30XX cards (you can only buy on the secondary market for 4x the price).

    What are good alternatives, or are there none?

  • thenoobducky Posts: 68
    edited April 2021

    Some up-to-date testing on a 3090 and 4.15. Fresh restart with only Daz Studio and Notepad++ running (no web browser etc.).

     

    System Configuration
    System/Motherboard: x570
    CPU: 3900x
    GPU: EVGA 3090 FTW3 Ultra
    System Memory: 60ish
    OS Drive: NVME
    Asset Drive: SSD/HDD
    Operating System: Windows 10
    Nvidia Drivers Version: 465.89
    Daz Studio Version: 4.15.0.2

    Benchmark Results

     

    Rebar Disabled:

    2021-04-09 01:33:08.938 Total Rendering Time: 1 minutes 35.8 seconds

    2021-04-09 01:33:11.557 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 2.187s init, 90.660s render

    2021-04-09 01:35:28.212 Total Rendering Time: 1 minutes 33.58 seconds

    2021-04-09 01:35:30.445 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.386s init, 90.628s render

    2021-04-09 01:37:37.301 Total Rendering Time: 1 minutes 33.48 seconds

    2021-04-09 01:37:46.220 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.361s init, 90.554s render

    2021-04-09 01:40:24.423 Total Rendering Time: 1 minutes 33.56 seconds

    2021-04-09 01:40:32.270 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.349s init, 90.662s render

    2021-04-09 01:42:29.371 Total Rendering Time: 1 minutes 33.68 seconds

    2021-04-09 01:43:34.050 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.360s init, 90.789s render

    2021-04-09 01:45:48.928 Total Rendering Time: 1 minutes 33.67 seconds

    2021-04-09 01:46:07.552 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.457s init, 90.667s render

    2021-04-09 01:48:09.372 Total Rendering Time: 1 minutes 33.49 seconds

    2021-04-09 01:48:20.141 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.363s init, 90.573s render

    2021-04-09 01:50:17.075 Total Rendering Time: 1 minutes 33.61 seconds

    2021-04-09 01:50:31.019 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.362s init, 90.703s render

    2021-04-09 01:52:30.872 Total Rendering Time: 1 minutes 33.46 seconds

    2021-04-09 01:52:33.238 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.379s init, 90.540s render

    2021-04-09 01:54:28.304 Total Rendering Time: 1 minutes 33.57 seconds

    2021-04-09 01:54:32.717 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.398s init, 90.622s render

    2021-04-09 02:05:19.869 Finished Rendering

     

    Rebar Enabled:

    2021-04-09 02:10:44.527 Total Rendering Time: 1 minutes 33.41 seconds

    2021-04-09 02:10:45.783 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.380s init, 90.455s render

    2021-04-09 02:12:51.281 Total Rendering Time: 1 minutes 33.62 seconds

    2021-04-09 02:13:34.538 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.361s init, 90.693s render

    2021-04-09 02:15:35.850 Total Rendering Time: 1 minutes 33.67 seconds

    2021-04-09 02:15:41.373 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.371s init, 90.738s render

    2021-04-09 02:17:35.446 Total Rendering Time: 1 minutes 33.65 seconds

    2021-04-09 02:17:40.073 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.371s init, 90.728s render

    2021-04-09 02:19:34.480 Total Rendering Time: 1 minutes 33.46 seconds

    2021-04-09 02:19:45.568 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.368s init, 90.533s render

    2021-04-09 02:26:04.160 Total Rendering Time: 1 minutes 33.82 seconds

    2021-04-09 02:26:24.885 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.343s init, 90.943s render

    2021-04-09 02:28:24.496 Total Rendering Time: 1 minutes 33.36 seconds

    2021-04-09 02:28:29.668 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.369s init, 90.436s render

    2021-04-09 02:30:20.767 Total Rendering Time: 1 minutes 33.57 seconds

    2021-04-09 02:30:31.706 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.384s init, 90.632s render

    2021-04-09 02:32:27.771 Total Rendering Time: 1 minutes 33.56 seconds

    2021-04-09 02:32:45.309 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.369s init, 90.633s render

    2021-04-09 02:34:45.158 Total Rendering Time: 1 minutes 33.41 seconds

    2021-04-09 02:34:57.558 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3090): 1800 iterations, 1.362s init, 90.489s render

    Post edited by thenoobducky on
  • Jason Galterio Posts: 2,562

    Did some work on my system, adding a GeForce RTX 2070 SUPER on an external rig.

    System Configuration
    System/Motherboard: Asus TUF Z390-Plus
    CPU: Intel Core i9-9900k 3.6 GHz
    GPU: RTX 2080 Super & GeForce RTX 2070 Super
    System Memory: 32 GB DDR4 2666 MHz
    OS Drive: Intel M.2 1 TB NVMe SSD
    Asset Drive: Same
    Operating System: Windows 10 Pro
    Nvidia Drivers Version: Game Ready 445.87
    Daz Studio Version: 4.15.0.2
    Optix Prime Acceleration: N/A

    New Benchmark Results
    2021-04-19 22:36:51.607 Finished Rendering
    2021-04-19 22:36:51.661 Total Rendering Time: 2 minutes 14.45 seconds

    2021-04-19 22:37:03.550 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-04-19 22:37:03.550 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 SUPER):      909 iterations, 2.406s init, 129.362s render
    2021-04-19 22:37:03.550 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 2070 SUPER):      891 iterations, 5.603s init, 125.030s render

    Iteration Rate:
    device 0:      7.03 iterations per second
    device 1:      7.13 iterations per second
    Loading Time: 6 seconds


    Old Benchmark Results
    2020-05-15 17:05:08.199 Finished Rendering
    2020-05-15 17:05:08.227 Total Rendering Time: 5 minutes 22.7 seconds

    2020-05-15 17:05:39.757 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2020-05-15 17:05:39.757 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 SUPER):   1800 iterations, 6.378s init, 313.040s render

    Iteration Rate: 5.75 iterations per second
    Loading Time: 9.96 seconds

  • System/Motherboard: Asus x570 TUF
    CPU: Ryzen 7 2700  @4.05 GHZ
    GPU: 1080 GTX 8gb evga
    System Memory: 32 GB Corsair 3600mhz
    OS Drive: Kingston SSD m2 NVME 512GB
    Asset Drive: Same
    Operating System: Win 10 Pro,
    Nvidia Drivers Version: 461.92
    Daz Studio Version: 4.15

    (full gpu oc)
    2021-04-22 20:10:27.257 Total Rendering Time: 9 minutes 43.51 seconds
    2021-04-22 20:10:33.586 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-04-22 20:10:33.586 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080):      1800 iterations, 1.797s init, 578.507s render

    Rendering Performance: 1800/578.507 = 3.111 iterations per second


    System/Motherboard: Asus x570 TUF
    CPU: Ryzen 7 2700  @4.05 GHZ
    GPU: Zotac GeForce RTX 3070 TwinEdge OC
    System Memory: 32 GB Corsair 3600mhz
    OS Drive: Kingston SSD m2 NVME 512GB
    Asset Drive: Same
    Operating System: Win 10 Pro,
    Nvidia Drivers Version: 466.11
    Daz Studio Version: 4.15

    (gpu stock,~2000/1750)
    2021-04-22 23:58:22.512 Total Rendering Time: 2 minutes 37.87 seconds
    2021-04-22 23:59:53.798 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-04-22 23:59:53.798 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3070):      1800 iterations, 7.718s init, 147.000s render

    Rendering Performance: 1800/147.000 = 12.245 iterations per second

     

    (gpu smooth oc,~2050/8000)
    2021-04-24 00:17:34.869 Total Rendering Time: 2 minutes 14.96 seconds
    2021-04-24 00:18:04.544 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-04-24 00:18:04.544 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (NVIDIA GeForce RTX 3070):      1800 iterations, 1.770s init, 131.075s render

    Rendering Performance: 1800/131.075 = 13.733 iterations per second

  • droidy001 Posts: 277
    edited April 2021

    System Configuration
    System/Motherboard: Asrock b450m steel legend
    CPU: Ryzen 5 2600 @ stock
    GPU: EVGA 3060xc gaming @ stock
    System Memory: 32 GB Corsair Vengeance @3000
    OS Drive: Intel 660p 512GB
    Asset Drive: same
    Operating System: Windows 10 pro 19042.928
    Nvidia Drivers Version: 462.31
    Daz Studio Version: 4.15.0.2
    Optix Prime Acceleration: n/a

    Benchmark Results

    Total Rendering Time: 3 minutes 43.33 seconds

    IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3060):      1800 iterations, 2.084s init, 218.546s render
    Iteration Rate: 8.236 iterations per second
    Loading Time: 4.784 seconds

    Post edited by droidy001 on
  • prixat Posts: 1,585
    edited April 2021

    droidy001 said:

    System Configuration
    System/Motherboard: Asrock b450m steel legend
    CPU: Ryzen 5 2600 @ stock
    GPU: EVGA 3060xc gaming @ stock
    System Memory: 32 GB Corsair Vengeance @3000
    OS Drive: Intel 660p 512GB
    Asset Drive: same
    Operating System: Windows 10 pro 19042.928
    Nvidia Drivers Version: 462.31
    Daz Studio Version: 4.15.0.2
    Optix Prime Acceleration: n/a

    Benchmark Results

    Total Rendering Time: 3 minutes 43.33 seconds

    IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3060):      1800 iterations, 2.084s init, 218.546s render
    Iteration Rate: 8.236 iterations per second
    Loading Time: 4.784 seconds

    That's an interesting comparison to my system (see a few posts earlier) with exactly the same EVGA 3060 XC.
    It looks like your system is not holding back the GPU; I wonder if it will limit the GPU's overclockability?


    I'm now up to 9.7 iterations per second with the overclocked GPU plus CPU. I upgraded the front intake fans to 2 x 140mm, set more aggressive fan profiles, and even moved the PC to a colder room.

    Post edited by prixat on
  • droidy001 Posts: 277
    edited April 2021

    prixat said:

    That's an interesting comparison to my system (see a few posts earlier) with exactly the same EVGA 3060 XC.
    It looks like your system is not holding back the GPU; I wonder if it will limit the GPU's overclockability?


    I'm now up to 9.7 iterations per second with the overclocked GPU plus CPU. I upgraded the front intake fans to 2 x 140mm, set more aggressive fan profiles, and even moved the PC to a colder room.

    I had noticed your post and read your comments. It's only a small difference at stock, so it's going to be something subtle.

     

    My case is an InWin 301. I have 3 intake fans on the bottom, an exhaust at the rear, and a 240mm AIO exhausting on the right-hand panel. These fans are not going above 40% when rendering. The GPU fans are around 60% (and still almost silent).

    The GPU is getting to around 68-71C with no throttling. GPU boost is steady at 1950 MHz.

    Ambient temp approx 22c

     

    Which evga 3060 is it?

     

    Edit: just got home from work and double-checked my figures. The boost clock is actually holding steady at 1980, not 1950.

    I also ramped up the fans and ran the benchmark again. Temps maxed out at 54C, but I got exactly the same result, so that's as far as she goes at stock.

    Post edited by droidy001 on
  • prixat Posts: 1,585
    edited April 2021

    It's the same model GPU and I get the same sort of performance at stock values.

    I'm using MSI Afterburner with a basic overclock. The 1000 MHz memory overclock seems to make a bigger difference than the 200 MHz GPU core increase...
    (The screencap is mid-render, with the benchmark scene.)

    (Those values are not stable in gaming, so I use 990 MHz / 190 MHz for gaming. That's stored in the second Afterburner profile.)

    [Attachment: Screenshot 2021-04-28 091310.png]
    Post edited by prixat on
  • skyeshots Posts: 148

    carondimonio5 said:

    Hello,

    I am taking my first steps with DAZ Studio and am building a setup in order to get decent render speeds.

    I find there is a really bad market shortage for 30XX cards (you can only buy on the secondary market for 4x the price).

    What are good alternatives, or are there none?

    I recommend going with quad Nvidia A6000s.

    Set your air conditioner to 70 degrees F (21 degrees C) for best results. You will need an air conditioner that can comfortably offset about 4000 BTU/h of heat.
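
    (A rough conversion behind that number, assuming about 300 W per A6000 under render load; 1 W of sustained draw is roughly 3.412 BTU per hour.)

    # Four A6000s at ~300 W each, converted to BTU/h (assumed figures, not measured).
    watts = 4 * 300
    print(f"{watts} W is about {watts * 3.412:.0f} BTU/h")  # ~4094 BTU/h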

    Or you could try something more conservative like the new A5000 GPU. I would love to see your benchmarks here if you get one.
