Daz Studio Iray - Rendering Hardware Benchmarking


Comments

  • thesdzthesdz Posts: 0

    System Configuration
    System/Motherboard: Dell Alienware Aurora R11
    CPU: Intel Core i7-10700 @ 2.90 GHz
    GPU: Nvidia GeForce RTX 3060 Ti @ 1410 MHz
    System Memory: 32GB (2x16) Corsair Vengeance LPX DDR4 @ 3000 MHz
    OS Drive: KIOXIA 512GB NVMe
    Asset Drive: Toshiba DC10ACA200
    Operating System: Windows 10 Home 20H2
    Nvidia Drivers Version: 461.40
    Daz Studio Version: 4.15.0.2 64-bit
    Optix Prime Acceleration:

    Benchmark Results
    2021-02-20 18:44:44.311 Finished Rendering
    2021-02-20 18:44:44.351 Total Rendering Time: 2 minutes 42.92 seconds
    2021-02-20 18:46:50.858 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-02-20 18:46:50.858 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3060 Ti):      1800 iterations, 2.517s init, 158.099s render
    Iteration Rate: (1800 / 158.099) 11.385 iterations per second
    Loading Time: ((0 * 3600 + 2 * 60 + 42.92) - 158.099) 4.821 seconds
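    The two derived figures above can be reproduced from any of the logs in this thread with a short script; a sketch, using the log lines from this post as the example input:

```python
import re

# Example lines copied from the benchmark log above.
total_line = "Total Rendering Time: 2 minutes 42.92 seconds"
device_line = ("CUDA device 0 (GeForce RTX 3060 Ti): "
               "1800 iterations, 2.517s init, 158.099s render")

# Total rendering time in seconds (the minutes part is optional for short runs).
m = re.search(r"(?:(\d+) minutes? )?([\d.]+) seconds", total_line)
total_s = int(m.group(1) or 0) * 60 + float(m.group(2))

# Iteration count and pure render time from the device statistics line.
d = re.search(r"(\d+) iterations, [\d.]+s init, ([\d.]+)s render", device_line)
iterations, render_s = int(d.group(1)), float(d.group(2))

print(f"Iteration Rate: {iterations / render_s:.3f} iterations per second")
print(f"Loading Time: {total_s - render_s:.3f} seconds")
```

    Loading time here follows the thread's convention: total rendering time minus the device's render time.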

  • chrislbchrislb Posts: 55

    skyeshots said:

    colcurve said:

    which rtx is the most efficient (price/iterations) with regard to iray currently? do octane benchmarks give similar results as iray benchmarks? octane bench seems to recommend 3080

    RTX 3060 Ti is (currently) the most efficient in terms of price per iteration for Iray as well as Octane.

    That's assuming your scene doesn't require more VRAM than the 3060 ti has.  For many people 8GB can be very limiting.

  • outrider42outrider42 Posts: 2,909

    Yeah, your VRAM is your ultimate limit on what you can render. GPU render speed is nice, but that speed becomes zero if you run out of VRAM.

    That is why the 3060 with 12gb is a very compelling card. It is an oddball in the Ampere lineup given it has more VRAM than much faster cards, like the 3060ti, 3070, and even the "flagship" 3080.

  • outrider42outrider42 Posts: 2,909
    edited February 25

    I didn't see a more recent 4.14/4.15 bench for a 1080ti, so here is one. I posted a time for two of them, but not one.

    Windows 10 20H2

    CPU: i5 4690K

    GPU #1:  EVGA 1080ti SC2

    GPU #2: MSI 1080ti Gaming  <--display GPU, NOT used in this bench.

    RAM 32GB HyperX

    OS Drive Samsung 860 EVO 1TB

    Asset Drive: Samsung 860 EVO 1TB and WD 4TB Black HDD

    2021-02-25 00:28:27.967 Total Rendering Time: 6 minutes 37.22 seconds

    2021-02-25 00:28:40.071 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): 1800 iterations, 4.482s init, 389.181s render

    That comes out to 4.62 iterations per second, and 8.039 second loading time.

    Compared to the 4.12 and earlier marks this is over a full minute faster total rendering time than my previous best. Considering that two 1080tis knocked off a bit over 30 seconds this is not surprising, but it is good to see that the performance gain in 4.14+ scales well.

    I also just wanted to have a 1080ti bench up to compare to the upcoming 3060 that somebody will end up getting. This will be interesting, because the 1080ti beats the 2060 easily at gaming, but just barely at Iray. Then the 2060 Super turned the tables and beat the 1080ti at Iray, though it still lost at gaming. The early leaks for the 3060 don't look promising, at least for gaming. But Ampere has proven to be an absolute beast at Iray. The 3060ti easily beats the 1080ti by a WIDE margin; in fact the 3060ti more than DOUBLES the performance in this bench scene. It is crazy just how good Ampere is at Iray. The 3060ti also beats the 1080ti at gaming, but nowhere near that margin, not enough to warrant upgrading from a 1080ti in most cases.

    I think the 3060 might end up just barely matching a 1080ti in gaming, if that. But when it comes to Iray it will beat it easily, and also by a large margin (obviously not the huge margin the 3060ti does, but probably 1.5x worth, maybe even more). Considering the 12GB VRAM, this card is indeed compelling for people who use Iray and other OptiX based render engines. It certainly beats the 2060 based on VRAM alone, but it looks to increase Iray performance by a nice amount as well.

    Now if only people can buy them. But the x60 class has historically been not just a best seller, but also the most available. Hopefully these will be easier to get for those who want them.

    It would be great if any 2060 and 2060 Super owners could bench 4.14 and 4.15 and post their results.

    Post edited by outrider42 on
  • rgacesargacesa Posts: 0

    Few runs for my rendering rig (1080Ti):

    OS:Windows 10 20H2

    CPU: Ryzen 1700 (8 cores / 16 threads, 3000 MHz)

    RAM: 32GB (2 x 16GB Corsair Vengeance DDR4 2666Mhz)

    MB: MSI X370 Gaming Plus (MS-7A33)

    Assets drive: 1TB SSD (Crucial CT1050MX300)

    GPU #1:  Palit GTX1080Ti (core clock 1885 Mhz, memory clock 5505 Mhz)

    Test 1 (Daz 4.15.0.2) NVIDIA 461.40 Studio drivers:

    2021-02-26 10:11:24.764 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): 1800 iterations, 2.687s init, 407.398s render;  => 4.4183 iterations per second

    Test 2 (Daz 4.15.0.2) NVIDIA 461.40 Studio drivers: 

    2021-02-26 10:32:16.058 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): 1800 iterations, 2.490s init, 404.459s render  => 4.450 iterations per second

    Test 3 (Daz Beta 4.15.0.13) NVIDIA 461.40 Studio

    2021-02-26 10:42:21.964 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): 1800 iterations, 2.450s init, 402.249s render  => 4.474 iterations per second

     

  • Matt_CastleMatt_Castle Posts: 1,492

    Maybe not as impressive as it can be, as I'm still running it on an old motherboard while I'm working out what the rest of the system should be like, but I'm guessing I'm one of the first to offer a benchmark for a 3060:

    System Configuration
    System/Motherboard: Hewlett Packard IPM87-MP
    CPU: Intel Core i7 4790 @ stock
    GPU: Asus RTX 3060 TUF OC @ stock
    System Memory: 2x Micron PC3-12800U-11-11-B1 8GB DDR3 @ 1600 MHz
    OS Drive: Samsung 850 EVO 250GB mSATA
    Asset Drive: 6TB Seagate ST6000VN0033 IronWolf NAS
    Operating System: Windows 10 Home 64-bit, Version 10.0.19041 Build 19041
    Nvidia Drivers Version: 461.72 Game-Ready
    Daz Studio Version: Daz Studio 4.15.0.2

    Benchmark Results:
    2021-02-26 18:42:09.377 Finished Rendering
    2021-02-26 18:42:09.417 Total Rendering Time: 3 minutes 54.5 seconds

    2021-02-26 18:42:12.391 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-02-26 18:42:12.391 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3060):      1800 iterations, 9.602s init, 221.020s render

    Iteration Rate: 8.144 iterations per second
    Loading Time: 13.5 seconds

    About what could be expected with the core count relative to the 3060 Ti, but that's the baby brother of the 30 series so far, and it's still keeping pace with the top end 20 series cards.

  • outrider42outrider42 Posts: 2,909

    Hot diggity damn, I guessed 8 and you got 8.1, LOL. And no, the old motherboard has zero impact on your render speed, so don't worry about that. You'd get the exact same speed with that card in a shiny new PC.

    So the 3060 really is an intriguing card for Iray users. Iray lets Ampere show what it is truly capable of. It is a shame Nvidia didn't release higher capacity SKUs of faster cards, but the slowest Ampere is still really fast for Iray compared to any previous generation. Faster than a 2080 Super, possibly matching a 2080ti, though it is important to consider we don't have a recent bench for a 2080ti with 4.14 or 4.15, so the number that is up is not correct.

    On that note, we could use a lot of updated benches from people who run various cards. There are a ton of missing benches for GPUs with 4.14+. So it is important to consider this when comparing these numbers, since 4.14 increased the iteration count across the board.

  • Matt_CastleMatt_Castle Posts: 1,492

     Yeah, it's pretty much on the nose of three-quarters of the 3060 Ti. Basically in direct proportion to core counts, so not a particular surprise, but it at least shows the pattern.
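    That proportionality can be checked directly against numbers already in this thread; a quick sketch, taking the CUDA core counts from Nvidia's published spec sheets and the iteration rates from the 3060 and 3060 Ti posts above:

```python
# Published CUDA core counts (Nvidia spec sheets).
cores = {"RTX 3060": 3584, "RTX 3060 Ti": 4864}

# Iteration rates measured earlier in this thread.
rate = {"RTX 3060": 8.144, "RTX 3060 Ti": 11.385}

core_ratio = cores["RTX 3060"] / cores["RTX 3060 Ti"]
rate_ratio = rate["RTX 3060"] / rate["RTX 3060 Ti"]

print(f"core-count ratio: {core_ratio:.3f}")  # ~0.737
print(f"measured ratio:   {rate_ratio:.3f}")  # ~0.715
```

    The two ratios land within a couple of percent of each other, which is the "direct proportion to core counts" pattern described above.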

    As far as the higher VRAM versions of faster cards... yeah, it's a huge shame, as I'd've loved the hypothetical 16GB 3070 that was at one point rumoured. Being a bit further up the performance stack would have been nice for gaming use, if nothing else; a lot of reviewers consider the 3060 a bit anemic in rasterisation for any more than 1080p when newer games are in consideration.

    It'll be interesting to see (assuming we get any new benchmarks) where the 3060 actually stacks up against Turing in a like-for-like test. Alas, while I do have DS 4.12.0.86 backed up, it's moot as it wouldn't recognise the 30 series, so I can't approach the problem that way around.

  • outrider42outrider42 Posts: 2,909
    edited February 27

    It is still great to see Iray scale so perfectly. That is the problem most have with Ampere gaming, the core counts do not translate to gaming performance. So it was uncertain as to how Ampere would handle Iray. But it has pretty much stuck to near exactly what the core counts would suggest. So while gamers are not super happy...we are. If crypto had not blown up the market, we would be able to get whatever we want because many gamers might have passed on Ampere. Or they may have even opted for AMD with how competitive they are now.

    Ultimately that is why we don't have the 16gb or 20 gb cards that were rumored. Nvidia did not need them, which is something I spoke about. Nvidia is selling everything they can get out the door...so why do they need to add more VRAM? The extra VRAM was all in response to AMD's large capacities.

    Intel is finally supposed to bring out some GPUs of their own, so having a 3rd player in the market could balance things more. That is still a ways off, either late this year or in 2022. Regardless their very presence in the GPU market will shake it up a bit. Nvidia and AMD cannot get lazy, they will need to compete. Timing will be critical, if Intel releases while crypto is still going strong, they could help alleviate the overall supply.

    Post edited by outrider42 on
  • SevrinSevrin Posts: 5,562

    System Configuration
    System/Motherboard: ASUS Prime Z270-K
    CPU: Intel Core i7 7700K  4.2 Ghz
    GPU: MSI Ventus 2080ti
    System Memory: Kingston&Patriot 32 Gb DDR4 2400
    OS Drive: Samsung 850 EVO 256 Gb
    Asset Drive: WD WB20EZRZ 2 Tb
    Operating System: Windows 10 Pro Build 19042
    Nvidia Drivers Version: 461.72
    Daz Studio Version: 4.15.0.2 64bit
    Optix Prime Acceleration:


    Benchmark Results
    2021-02-26 21:16:47.729 Finished Rendering

    2021-02-26 21:16:47.766 Total Rendering Time: 3 minutes 30.61 seconds

    2021-02-26 21:17:45.677 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-26 21:17:45.677 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti): 1800 iterations, 3.001s init, 205.535s render


    Iteration Rate: 8.75 iterations per second
    Loading Time: 5.26 seconds

    Rendering was the quickest thing about this whole exercise.

  • outrider42outrider42 Posts: 2,909

    Sevrin said:

    System Configuration
    System/Motherboard: ASUS Prime Z270-K
    CPU: Intel Core i7 7700K  4.2 Ghz
    GPU: MSI Ventus 2080ti
    System Memory: Kingston&Patriot 32 Gb DDR4 2400
    OS Drive: Samsung 850 EVO 256 Gb
    Asset Drive: WD WB20EZRZ 2 Tb
    Operating System: Windows 10 Pro Build 19042
    Nvidia Drivers Version: 461.72
    Daz Studio Version: 4.15.0.2 64bit
    Optix Prime Acceleration:


    Benchmark Results
    2021-02-26 21:16:47.729 Finished Rendering

    2021-02-26 21:16:47.766 Total Rendering Time: 3 minutes 30.61 seconds

    2021-02-26 21:17:45.677 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-26 21:17:45.677 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti): 1800 iterations, 3.001s init, 205.535s render


    Iteration Rate: 8.75 iterations per second
    Loading Time: 5.26 seconds

    Rendering was the quickest thing about this whole exercise.

    Thanks!

    So the 3060 does NOT beat a 2080ti, but it is in the ballpark. It certainly isn't bad considering the original MSRPs of these two items, and the 3060 uses about 90 Watts less, or more depending on the overclocks.

    And to think this is slowest Ampere card to date.

    BTW, what do you mean? Did you have an issue?

  • Matt_CastleMatt_Castle Posts: 1,492

    So, not quite a match for the 2080 Ti in a like-for-like test, but I'd say that 93% (caveats about single data points and specific models*) isn't enough difference you'd really notice it in practice.

    * It's worth being specific that the TUF 3060 has a mahoosive 3-fan 2.7-slot cooler, so will have more spare thermal headroom to boost than a more "standard" 2-fan 2-slot card. I doubt that's more than a very few percent though.

    It's interesting to note that Nvidia see this as targeting GTX 1060 owners, which remains the most popular gaming card reported on Steam (~9% of users, followed by the 1050 Ti, 1650, 1050 and 2060, collectively covering over a fifth of gaming computers). If any significant proportion of those users do upgrade to 3060 cards, then that's actually going to be a really big leap in what a "typical" gaming PC can do when it comes to Iray.

  • SevrinSevrin Posts: 5,562

    outrider42 said:

    Sevrin said:

    System Configuration
    System/Motherboard: ASUS Prime Z270-K
    CPU: Intel Core i7 7700K  4.2 Ghz
    GPU: MSI Ventus 2080ti
    System Memory: Kingston&Patriot 32 Gb DDR4 2400
    OS Drive: Samsung 850 EVO 256 Gb
    Asset Drive: WD WB20EZRZ 2 Tb
    Operating System: Windows 10 Pro Build 19042
    Nvidia Drivers Version: 461.72
    Daz Studio Version: 4.15.0.2 64bit
    Optix Prime Acceleration:


    Benchmark Results
    2021-02-26 21:16:47.729 Finished Rendering

    2021-02-26 21:16:47.766 Total Rendering Time: 3 minutes 30.61 seconds

    2021-02-26 21:17:45.677 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-26 21:17:45.677 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti): 1800 iterations, 3.001s init, 205.535s render


    Iteration Rate: 8.75 iterations per second
    Loading Time: 5.26 seconds

    Rendering was the quickest thing about this whole exercise.

    Thanks!

    So the 3060 does NOT beat a 2080ti, but it is in the ballpark. It certainly isn't bad considering the original MSRPs of these two items, and the 3060 uses about 90 Watts less, or more depending on the overclocks.

    And to think this is slowest Ampere card to date.

    BTW, what do you mean? Did you have an issue?

    It takes longer to load the scene than to do the render.  Also longer to write up the results. cheeky

  • System/Motherboard: Gigabyte Z590 Master
    CPU: I9-10900K @ 3.7 ghz
    GPU0: PNY RTX A6000
    GPU1: MSI RTX 3090  
    System Memory: 32 GB Corsair Dominator Platinum DDR4-3466
    OS Drive: 240 GB Corsair Sata 3 SSD
    Asset Drive: Same
    Operating System: Win 10 Pro, 1909
    Nvidia Drivers Version: 461.72 Studio Drivers
    Daz Studio Version: 4.15


    RTX A6000
    2021-02-26 20:27:53.108 Finished Rendering
    2021-02-26 20:27:53.136 Total Rendering Time: 1 minutes 43.3 seconds
    2021-02-26 20:28:07.466 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-02-26 20:28:07.466 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (RTX A6000):      1800 iterations, 1.655s init, 99.244s render
    Device Iteration Rate: 1800/99.244 = 18.137 iterations per second
    Loading Time 4.056 sec

    RTX 3090
    2021-02-26 20:24:02.112 Finished Rendering
    2021-02-26 20:24:02.136 Total Rendering Time: 1 minutes 36.8 seconds
    2021-02-26 20:24:10.852 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-02-26 20:24:10.852 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 3090):      1800 iterations, 1.498s init, 92.469s render
    Device Iteration Rate: 1800/92.469 = 19.466 iterations per second
    Loading Time 4.331 sec

    A6000 & 3090 Combined Render:
    2021-02-26 20:20:41.737 Finished Rendering
    2021-02-26 20:20:41.764 Total Rendering Time: 59.34 seconds
    2021-02-26 20:20:46.656 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-02-26 20:20:46.656 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (RTX A6000):      908 iterations, 5.859s init, 50.940s render
    2021-02-26 20:20:46.656 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 3090):      892 iterations, 2.707s init, 50.018s render
    Rendering Performance: 1800/50.940 = 35.336 iterations per second
    Loading Time 8.4 seconds
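    As a sanity check on the dual-card run, the combined iteration rate can be compared with the sum of the two single-card rates (all times taken from the logs above):

```python
# Single-card rates from the individual runs above.
a6000 = 1800 / 99.244   # RTX A6000 alone
r3090 = 1800 / 92.469   # RTX 3090 alone

# Combined run: 1800 total iterations, divided by the slower device's render time.
combined = 1800 / 50.940

scaling = combined / (a6000 + r3090)
print(f"combined rate: {combined:.3f} it/s, scaling efficiency: {scaling:.1%}")
```

    This comes out to roughly 94% of perfect scaling, i.e. the two cards together lose only a few percent to multi-GPU overhead in this scene.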

     

    Some A6000 scores here. Working on an x299x build to pair it with (2) 3090s.

    This should allow 48GB x2, with option to increase speeds 33% for scenes under 24GB.


  • ArtiniArtini Posts: 6,472
    edited February 27

    Hi, it's me again.

    Just got my hands on another interesting piece of hardware for testing:

    a shiny, new MacBook Pro with the Apple M1 chip and 8 GB RAM.

    Apparently it cannot run Daz Studio, so I will test it with Blender instead.

    At least there is a macOS version of Blender, so I can test its performance under Rosetta 2, Apple's Intel CPU translation layer.

    We have the official Daz Studio to Blender bridge, so maybe I could render a scene made in Daz Studio by converting it to Blender.

    At least it is easy to specify the hardware:

    CPU: Apple M1

    GPU: Apple M1

     

    Post edited by Artini on
  • outrider42outrider42 Posts: 2,909
    Sevrin said:

    outrider42 said:

    Sevrin said:

    System Configuration
    System/Motherboard: ASUS Prime Z270-K
    CPU: Intel Core i7 7700K  4.2 Ghz
    GPU: MSI Ventus 2080ti
    System Memory: Kingston&Patriot 32 Gb DDR4 2400
    OS Drive: Samsung 850 EVO 256 Gb
    Asset Drive: WD WB20EZRZ 2 Tb
    Operating System: Windows 10 Pro Build 19042
    Nvidia Drivers Version: 461.72
    Daz Studio Version: 4.15.0.2 64bit
    Optix Prime Acceleration:


    Benchmark Results
    2021-02-26 21:16:47.729 Finished Rendering

    2021-02-26 21:16:47.766 Total Rendering Time: 3 minutes 30.61 seconds

    2021-02-26 21:17:45.677 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-26 21:17:45.677 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti): 1800 iterations, 3.001s init, 205.535s render


    Iteration Rate: 8.75 iterations per second
    Loading Time: 5.26 seconds

    Rendering was the quickest thing about this whole exercise.

    Thanks!

    So the 3060 does NOT beat a 2080ti, but it is in the ballpark. It certainly isn't bad considering the original MSRPs of these two items, and the 3060 uses about 90 Watts less, or more depending on the overclocks.

    And to think this is slowest Ampere card to date.

    BTW, what do you mean? Did you have an issue?

    It takes longer to load the scene than to do the render.  Also longer to write up the results. cheeky

    That doesn't sound right. It loads in about 30 seconds to a minute at most for me. You may have errors, or your G8F morph library is huge.

    The data is helpful, so it is much appreciated that you take the time to fill out the information.
  • skyeshotsskyeshots Posts: 45
    edited February 27

    A6000 under 3090

    Re: RTX A6000

    For starters, this is an "Ampere" card. Nvidia has officially discontinued the "Quadro" designation. Studio drivers are needed to run the A6000 alongside 30 series cards. For air-cooled systems, I found better temps by putting the A6000 below the 3090, so that the bottom case fans blow upward directly onto the card, and the downward-facing 3090 fans blow onto the A6000 as well. In the opposite config, with the A6000 on top, temps for both cards were dramatically higher. The bus speed on the 3090 is quicker anyway, so dropping it in the x16 socket is just as well.

    Pros:
    1. Seems to handle IRAY perfectly fine.
    2. Ridiculous amounts of VRAM
    3. Wattage is stable at 300W so you can do 4x A6000 with a single 1600W PSU.
    4. Cable management is easier with just (1) EPS 12V 8-pin per card.
    5. The card itself is absolutely gorgeous.

    Cons
    1. Waterblocks for A6000 are currently impossible to find, creating some build limitations.
    2. Slightly slower VRAM compared to the 3090, though this is only modest slowdown in Daz.
    3. Normal PSUs lack the additional EPS adapters needed.

    Post edited by skyeshots on
  • outrider42outrider42 Posts: 2,909
    edited February 28

    Here is a new test, a Ryzen 5800X with 8 cores and 16 threads.

    Daz 4.15.0.2

    Windows 10 20H2

    CPU: Ryzen 5800X stock clocks

    GPUS:  EVGA 1080ti Black,   MSI 1080ti Gaming X (neither used for this render)

    Asset Drive: Samsung 870 4TB SSD, Daz is running on Inland M.2 2TB SSD

    RAM: GSkill 3200  64GB

    2021-02-27 20:44:51.945 Total Rendering Time: 41 minutes 0.36 seconds

    2021-02-27 20:46:05.770 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-27 20:46:05.770 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU: 1800 iterations, 1.565s init, 2456.665s render

    That comes out to 0.732 iterations per second.

    So now we have a Ryzen 5000 on the table. The performance jump over previous Ryzens is pretty high, however this still pales in comparison to any GPU. You still need something like Threadripper to match even modest GPU performance. The 2080ti is 0.65 iterations per second faster than the 3060, to put this in perspective. The number is mostly academic. And with most AMD Ryzens you will need a GPU just to drive the monitor anyway. Also, my 5800X jumped to 90C and stayed there for the whole render. It dropped back to normal almost instantly after the render was done, so I know the cooler is doing its job. So if you really wish to pursue CPU rendering you will want to also invest in very good cooling, probably liquid.

    However, the overall performance of my PC itself has improved. The PC is just snappier; zipping and unzipping files is painless. The performance of the Daz app itself is indeed a little better. Not drastic, but working on larger scenes feels better. The computer isn't bogging down. Even the simple task of browsing my pictures is faster, and my SSDs perform faster. So obviously there are benefits, though I am coming from a 4 core i5 from 2014. That is a big jump.

    But I ran a test with all 3 devices enabled anyway. So this is with two 1080tis and a 5800X.

    2021-02-27 22:02:47.996 Total Rendering Time: 3 minutes 21.82 seconds

    2021-02-27 22:03:19.082 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-27 22:03:19.082 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): 829 iterations, 1.653s init, 196.913s render

    2021-02-27 22:03:19.082 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti): 842 iterations, 1.996s init, 197.572s render

    2021-02-27 22:03:19.085 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU: 129 iterations, 1.383s init, 198.263s render

    I also ran one last test with just the two 1080tis, so I can have a direct comparison.

    2021-02-27 22:12:49.632 Total Rendering Time: 3 minutes 22.48 seconds

    2021-02-27 22:12:59.685 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2021-02-27 22:12:59.685 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): 893 iterations, 1.395s init, 199.483s render

    2021-02-27 22:12:59.689 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti): 907 iterations, 1.789s init, 199.044s render

    We have seen stuff like this before. When combining the CPU with the GPUs, it actually slows the GPUs down. Thus the net result is less than a second gain when using the CPU. Plus the CPU jumps to 90C. It just isn't worth it to use the CPU with the 1080tis, not even a rather solid desktop CPU like the 5800X. So I am better off not using the CPU for any rendering at all.
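    The break-even arithmetic above is easy to check: compare the total rendering times of the two runs (values from the logs above):

```python
# Total rendering times from the two runs above, in seconds.
gpus_plus_cpu = 3 * 60 + 21.82   # two 1080 Tis + 5800X
gpus_only = 3 * 60 + 22.48       # two 1080 Tis alone

saving = gpus_only - gpus_plus_cpu
print(f"net saving from adding the CPU: {saving:.2f} s "
      f"({saving / gpus_only:.2%} of the render)")
```

    The CPU contributes 129 of the 1800 iterations, but because the GPUs slow down slightly when it joins in, the net saving is only about two-thirds of a second, well under 1% of the render.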

    Still I am happy with the upgrade because of everything else that has improved. I also play some games, and the 5800X firmly removes any CPU bottleneck I may have had.

    Post edited by outrider42 on
  • Dim ReaperDim Reaper Posts: 647

    outrider42 said:

     

    ...though it is important to consider we don't have a recent bench for a 2080ti with 4.14 or 4.15, so the number that is up is not correct.

    On that note, we could use a lot of updated benches from people who run various cards. There are a ton of missing benches for GPUs with 4.14+. So it is important to consider this when comparing these numbers, since 4.14 increased the iteration count across the board.

    I ran a couple of new benches on 4.14 and 4.15 before I upgraded my main version of DS last month.  They are back on page 18 of this thread, but here is a copy for comparison with the 3060 runs in 4.15.

     

    All three tests carried out on:

    System Configuration

    System/Motherboard: ASUS X99-S

    CPU: Intel i7 5960X @3GHz

    System Memory: 32GB KINGSTON HYPER-X PREDATOR QUAD-DDR4

    OS Drive: Samsung M.2 SSD 960 EVO 250GB

    Asset Drive: 2TB WD CAVIAR BLACK  SATA 6 Gb/s, 64MB CACHE (7200rpm)

    Operating System: Windows 10 1909 OS Build 18363.1256

    Nvidia Drivers Version: 460.79

     

    Benchmark Results – 2080Ti

    Daz Studio Version: 4.12.0.86 64-bit

    Total Rendering Time: 4 minutes 14.22 seconds

    CUDA device 0 (GeForce RTX 2080 Ti): 1800 iterations, 3.899s init, 246.330s render

    Iteration Rate: (1800/246.33) = 7.307 iterations per second

    Loading Time: ((0+240+14.2)-246.33) = 7.870 seconds

     

    Benchmark Results – 2080Ti

    Daz Studio Version: 4.14.0.8 64-bit

    Total Rendering Time: 3 minutes 46.24 seconds

    CUDA device 0 (GeForce RTX 2080 Ti): 1800 iterations, 2.945s init, 219.671s render

    Iteration Rate: (1800/219.67) = 8.194 iterations per second

    Loading Time: ((0+180+46.2)-219.67) = 6.530 seconds


    Benchmark Results – 2080Ti

    Daz Studio Version: 4.15.0.2 64-bit

    Total Rendering Time: 3 minutes 40.50 seconds

    CUDA device 0 (GeForce RTX 2080 Ti): 1800 iterations, 8.408s init, 208.864s render

    Iteration Rate: (1800/208.86) = 8.618 iterations per second

    Loading Time: ((0+180+40.5)-208.86) = 11.640 seconds
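    The version-to-version gain is easier to read as percentages; a quick sketch using the three iteration rates above:

```python
# Iteration rates from the three 2080 Ti runs above.
rates = {"4.12.0.86": 7.307, "4.14.0.8": 8.194, "4.15.0.2": 8.618}

base = rates["4.12.0.86"]
for version, rate in rates.items():
    print(f"DS {version}: {rate:.3f} it/s ({rate / base - 1:+.1%} vs 4.12)")
```

    On this card, 4.14 is about 12% faster than 4.12, and 4.15 stretches that to roughly 18%.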

     

    Very interesting just how close that 3060 test is to the 2080ti, and will be even more interesting once they release the models with more vram.

  • outrider42outrider42 Posts: 2,909
    Thanks for that. I'm intrigued that you got a faster time in 4.15 than 4.14; that hasn't been happening very often. And yeah, *if* Nvidia ever releases higher capacity 3080s or 3070s, there would be much rejoicing in these forums. At least for the people who manage to get one. As it stands the 3060 is a pretty unique card in the lineup.
  • Matt_CastleMatt_Castle Posts: 1,492

    Dim Reaper said:

    Very interesting just how close that 3060 test is to the 2080ti, and will be even more interesting once they release the models with more vram.

    My current assumption is there won't be any new models with more VRAM than the 3060 in the near future.

    Although a 16GB 3070-level and 20GB 3080-level card were at one point rumoured, the latest suggestion is that the 3080 Ti will be a 12GB card (on a technical level, it's probably a cut-down 3090 with half the RAM), which likely also rules out a 16GB 3070 (best guesses are the 3070 Ti might be a 10GB card, a cut down 3080)

    I would have loved a 16GB 3070, but I no longer think that's likely to happen, so I decided to jump on the 12GB 3060; speed-wise, it was already going to be a massive leap over what I already had, and the 12GB of VRAM is quite possibly as good as this generation will be (at least until a "refresh" half-generation where they bring out "Super" cards or something).

  • outrider42outrider42 Posts: 2,909
    I think at this rate we will not see the higher VRAM capacities as long as the crypto boom is going. Nvidia can sell every single card they build instantly, so there is simply no incentive for them to build cards with more VRAM that cost more to manufacture.

    The only way this changes is if the crypto market crashes and Nvidia has to wake up and compete. The rumored specs for Intel's GPU lineup show multiple SKUs, with several tiers available with 8 or 16GB models. It sure would be embarrassing for Nvidia to be the only GPU maker to not be offering these capacities. It would make them look bad.

    But this is far into the future.
  • narkfestmojonarkfestmojo Posts: 68

    I heard the crypto mining boom is already starting to show signs of crashing because too many people have started doing it. Might actually be good for non-crypto miners if a whole bunch of ex-mining cards flood the market.

    Also, kinda annoying there's a 16GB version of the RTX3080 available for laptop, although it's not equivalent to the desktop version of the RTX3080. I think NVIDIA can arbitrarily double the VRAM on almost any of their cards.

  • outrider42outrider42 Posts: 2,909
    The laptop versions are totally different chips this time around. Laptop buyers won't see much improvement over Turing at all, and the VRAM might have been chosen as a way to make the laptops look better. But it really wouldn't be hard for Nvidia to add more VRAM to existing desktop models. They clearly have no desire to do so, since so many models have been scrapped. And now the rumored 3080ti has only 12GB, not 20.
  • takezo_3001takezo_3001 Posts: 1,271
    edited March 1

    System Configuration
    System/Motherboard: ASROCK TAICHI BIOS 5.80
    CPU: AMD RYZEN 9 3900x @ STOCK CLOCKS
    GPU: NVIDIA RTX 3090 @ STOCK CLOCKS
    System Memory: GSKILL FORTIS 64GB @ 2400
    OS Drive: SAMSUNG EVO 850 1TB SSD
    Asset Drive: EXTERNAL BUFFALO 4TB @ 5400
    Operating System: WIN 10 PRO 64 20H2
    Nvidia Drivers Version: GAME READY 461.72
    Daz Studio Version: ds 4.15.0.13 64 BITS
    Optix Prime Acceleration: STATE (Daz Studio 4.12.1.086 or earlier only)

    Benchmark Results

    2021-02-28 22:14:26.257 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-02-28 22:14:26.257 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090):      1800 iterations, 1.612s init, 90.874s render

    (Total Rendering Time: 1 minutes 34.47)
    ==================================================================================
    This card was the best investment that I ever made, saving up for nearly a year to get it was well worth it!

     

    Post edited by takezo_3001 on
  • RincewindRincewind Posts: 4

    System Configuration
    System/Motherboard: X570 Taichi
    CPU: 3900X w/slight undervolt (otherwise at stock)
    GPU: Gigabyte Aorus 2070 Super @ Stock, EVGA 3060 XC @ Stock (both on PCI-E x8 at 3.0 / 4.0 respectively)
    System Memory: 64GB G-Skill DDR4 @ 3200
    OS Drive: Samsung 970 Pro 1TB NVMe
    Asset Drive: Seagate 4TB Barracuda
    Operating System: Windows 10 Home x64 10.0.19042
    Nvidia Drivers Version: 461.72 DCH
    Daz Studio Version: 4.15.0.2

    Benchmark Results
    2021-03-05 08:41:54.190 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3060):      980 iterations, 6.423s init, 119.968s render
    2021-03-05 08:41:54.190 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 2070 SUPER):      820 iterations, 6.171s init, 119.201s render

    Total Rendering Time: 2 minutes 13.22 seconds
    Rendering Performance:

    • 15 iterations per second (total)
    • 8.17 iterations per second (3060)
    • 6.88 iterations per second (2070S)

    Loading Time: 13.252 seconds

    I got the card installed shortly before I had to start work for the day so haven't done much testing yet, but I have noticed that on more complex scenes the performance gap widens further between the two cards.
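    For anyone who wants to compute these per-device rates themselves, here's a small Python sketch that pulls the iteration counts and render times out of the Iray "Device statistics" log lines (the regex is just my guess at the format, based on the lines quoted above):

    ```python
    import re

    # Pattern matching the Iray device-statistics lines seen in this thread.
    PATTERN = re.compile(
        r"CUDA device (\d+) \((.+?)\):\s+(\d+) iterations, "
        r"([\d.]+)s init, ([\d.]+)s render"
    )

    def device_rates(log_lines):
        """Return {device name: iterations per second} for each CUDA device line."""
        rates = {}
        for line in log_lines:
            m = PATTERN.search(line)
            if m:
                _device, name, iters, _init, render = m.groups()
                rates[name] = round(int(iters) / float(render), 2)
        return rates

    log = [
        "2021-03-05 08:41:54.190 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3060):      980 iterations, 6.423s init, 119.968s render",
        "2021-03-05 08:41:54.190 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 2070 SUPER):      820 iterations, 6.171s init, 119.201s render",
    ]
    print(device_rates(log))  # {'GeForce RTX 3060': 8.17, 'GeForce RTX 2070 SUPER': 6.88}
    ```

    Paste your own log lines into the list and it should spit out the same numbers people are posting here.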

     

  • skyeshotsskyeshots Posts: 45

    takezo_3001 said:

    System Configuration
    System/Motherboard: ASROCK TAICHI BIOS 5.80
    CPU: AMD RYZEN 9 3900x @ STOCK CLOCKS
    GPU: NVIDIA RTX 3090 @ STOCK CLOCKS
    System Memory: GSKILL FORTIS 64GB @ 2400
    OS Drive: SAMSUNG EVO 850 1TB SSD
    Asset Drive: EXTERNAL BUFFALO 4TB @ 5400
    Operating System: WIN 10 PRO 64 20H2
    Nvidia Drivers Version: GAME READY 461.72
    Daz Studio Version: ds 4.15.0.13 64 BITS
    Optix Prime Acceleration: STATE (Daz Studio 4.12.1.086 or earlier only)

    Benchmark Results

    2021-02-28 22:14:26.257 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-02-28 22:14:26.257 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090):      1800 iterations, 1.612s init, 90.874s render

    (Total Rendering Time: 1 minutes 34.47)
    ==================================================================================
    This card was the best investment that I ever made, saving up for nearly a year to get it was well worth it!

    Grats, Takezo on the card! They are expensive and currently hard to find. Which model did you end up with? 

    Finishing your benchmark results: 

    Device Iteration Rate: 1800/90.874 = 19.81 iterations per second
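    In case it helps anyone reproduce these numbers, here's the same arithmetic as a tiny Python sketch (total time taken from the quoted log: 1 min 34.47 s = 94.47 s):

    ```python
    def benchmark_stats(iterations, render_s, total_s):
        """Iteration rate and loading time, per this thread's convention."""
        rate = iterations / render_s   # iterations per second
        loading = total_s - render_s   # total render time minus device render time
        return round(rate, 2), round(loading, 2)

    # takezo_3001's RTX 3090 result: 1800 iterations, 90.874 s render, 94.47 s total
    print(benchmark_stats(1800, 90.874, 94.47))  # (19.81, 3.6)
    ```

    So roughly 19.81 iterations per second and about 3.6 seconds of loading time.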

  • takezo_3001takezo_3001 Posts: 1,271

    skyeshots said:

    takezo_3001 said:

    System Configuration
    System/Motherboard: ASROCK TAICHI BIOS 5.80
    CPU: AMD RYZEN 9 3900x @ STOCK CLOCKS
    GPU: NVIDIA RTX 3090 @ STOCK CLOCKS
    System Memory: GSKILL FORTIS 64GB @ 2400
    OS Drive: SAMSUNG EVO 850 1TB SSD
    Asset Drive: EXTERNAL BUFFALO 4TB @ 5400
    Operating System: WIN 10 PRO 64 20H2
    Nvidia Drivers Version: GAME READY 461.72
    Daz Studio Version: ds 4.15.0.13 64 BITS
    Optix Prime Acceleration: STATE (Daz Studio 4.12.1.086 or earlier only)

    Benchmark Results

    2021-02-28 22:14:26.257 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2021-02-28 22:14:26.257 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090):      1800 iterations, 1.612s init, 90.874s render

    (Total Rendering Time: 1 minutes 34.47)
    ==================================================================================
    This card was the best investment that I ever made, saving up for nearly a year to get it was well worth it!

    Grats, Takezo on the card! They are expensive and currently hard to find. Which model did you end up with? 

    Finishing your benchmark results: 

    Device Iteration Rate: 1800/90.874 = 19.81 iterations per second

    Thanks, I'm so glad I got an FE version at MSRP via HotStock/BestBuy before the scalpers hacked BestBuy's current anti-bot measures!

    Also thanks for finishing my benchmark results. This card is a beast; even rendering for 30 minutes it did not go above 60 degrees Celsius, hell, it never broke the 55-degree threshold. BTW, my case is a behemoth with three 230 mm fans and one 140 mm fan, and I try to keep my apt at 61 degrees Fahrenheit!

  • skyeshotsskyeshots Posts: 45

    outrider42 said:

    I think at this rate we will not see the higher VRAM capacities as long as the crypto boom is going. Nvidia can sell every single card they build instantly, so there is simply no incentive for them to build cards with more VRAM that cost more to manufacture.

     

    The only way this changes is if the crypto market crashes and Nvidia has to wake up and compete. The rumored specs for Intel's GPU lineup show multiple SKUs, with several tiers available with 8 or 16GB models. It sure would be embarrassing for Nvidia to be the only GPU maker to not be offering these capacities. It would make them look bad.

     

    But this is far into the future.

    This is a great topic. Last year when I started using Daz, I was running quad SLI with 4GB GPUs. It was sad to watch the renders roll over to CPU every time. 

    Now, moving up from the 3090s to the A6000 cards: while the A6000 does have some compromises, it offers plenty of VRAM at 48 GB per card. If I had to do this all over, instead of building out a triple 3090 setup, I would have gone straight to quad A6000. There simply was not enough benchmarking done yet to make an informed decision; there were no Iray tests on record, and Nvidia themselves were not very helpful. I hope the tests I did here help others in a similar situation or currently on the fence.

    As far as real-world workflows in Daz, my current pain point comes from saving work (which I do often). This is especially true when working with large scenes. I'm going to try a 905P Optane drive. Not sure how it will compare to drives like the Samsung 970/980 Pro in Daz, but on paper it looks like it may help. If others have current builds with Optane or thoughts on this, please jump in.
