Daz Studio Iray - Rendering Hardware Benchmarking


Comments

  • outrider42outrider42 Posts: 2,853
    It only seems to affect Iray, though I have tested other software besides games and Iray. About a year or so ago there were people seeing a 5% performance drop in games from drivers, and this caused many theories to go around about why. But it wasn't consistent for everyone. Then the performance came back in the next driver update. Some did more testing, and found that the drivers sometimes just didn't mesh well with certain systems. Sometimes just reinstalling a driver could fix it. You will find people who still think they nerf performance on purpose, but I actually don't believe that.

    So many people do benchmarks in games and other software that even a small drop is going to get noticed. What I'm seeing is only in Iray, which is something nobody uses in benchmarks. It is extremely niche software. If this was on purpose, it wouldn't make any sense.

    However something, intentional or not, is going on. As I said before I can run my older versions of Studio and see the same performance loss. If I roll back drivers to 436, I can get that performance back. So while I cannot run the latest Studio with driver 436, I believe that proves that a driver is the issue, not the latest version of Iray, and it has been this way since around 443+.

    And I'm not alone. Users with some Turing cards also saw a drop. Also, unlike the situation I talked about with games, this has persisted across numerous drivers since 443.
  • Daz Jack TomalinDaz Jack Tomalin Posts: 11,552

    Quick sanity check - per this guide from Puget I procured a pair of the Quadro NVLink bridges to test dual SLI/NVlink on my render server.. that is, 2 pairs of 2080ti's.

    But it certainly seems like that guide is correct; even with the newest 451.48 drivers, SLI can only be activated on one pair of cards.

    Has anyone seen any hacks/info on getting two pairs to work in Windows?  At the moment the options are either older drivers or Linux.

  • RayDAntRayDAnt Posts: 860

    Quick sanity check - per this guide from Puget I procured a pair of the Quadro NVLink bridges to test dual SLI/NVlink on my render server.. that is, 2 pairs of 2080ti's.

    But it certainly seems like that guide is correct; even with the newest 451.48 drivers, SLI can only be activated on one pair of cards.

    Has anyone seen any hacks/info on getting two pairs to work in Windows?  At the moment the options are either older drivers or Linux.

    Hm... I have a vague recollection of reading somewhere that in order for 2+ Nvidia cards to function properly in a VRAM-sharing mode, the PCI-E lane topology from each x16 slot used needs to go directly to the CPU via the same identical chipset path. Are you sure your current motherboard actually lives up to the task? (Merely having 4+ physical full-bandwidth x16 slots on the board is only half the battle.) This is one of those areas where proper hardware documentation tends to be extremely lacking. Just check out the troubleshooting stage of any LinusTechTips video series dealing with a 4+ GPU setup project and you should see what I mean.

  • Daz Jack TomalinDaz Jack Tomalin Posts: 11,552

    Hmmm no idea, it's an Asus X299 Sage/10G

  • It only seems to affect Iray, though I have tested other software besides games and Iray. About a year or so ago there were people seeing a 5% performance drop in games from drivers, and this caused many theories to go around about why. But it wasn't consistent for everyone. Then the performance came back in the next driver update. Some did more testing, and found that the drivers sometimes just didn't mesh well with certain systems. Sometimes just reinstalling a driver could fix it. You will find people who still think they nerf performance on purpose, but I actually don't believe that.

     

    So many people do benchmarks in games and other software that even a small drop is going to get noticed. What I'm seeing is only in Iray, which is something nobody uses in benchmarks. It is extremely niche software. If this was on purpose, it wouldn't make any sense.

     

    However something, intentional or not, is going on. As I said before I can run my older versions of Studio and see the same performance loss. If I roll back drivers to 436, I can get that performance back. So while I cannot run the latest Studio with driver 436, I believe that proves that a driver is the issue, not the latest version of Iray, and it has been this way since around 443+.

     

    And I'm not alone. Users with some Turing cards also saw a drop. Also, unlike the situation I talked about with games, this has persisted across numerous drivers since 443.

    Same here. I've noticed it both on 1080ti and RTX 2070 (non-super). The 2070 pushes the bench scene in 7 min 34 s (device render) on DAZ 4.12.1.118 and driver 451.48

  • cheelong72cheelong72 Posts: 96
    edited September 2020

    Just a hobbyist's benchmark results

    System Configuration
    System/Motherboard: Biostar B150GTN 
    CPU: Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz / stock
    GPU: NVIDIA GeForce GTX 1060 6GB / stock
    System Memory: 1319 CL16-16-16 D4-2400 16GB DDR4 @ 2133 MHz (seems like a generic brand)
    OS Drive: DGM SSD 240GB S3-240A SCSI (240GB)
    Asset Drive: TOSHIBA DT01ACA200 SCSI (2TB)
    Operating System: Windows 7 Professional 6.1.7601 SP1 Build 7601
    Nvidia Drivers Version: 26.21.14.4587 (2020.04.03)
    Daz Studio Version: 4.12.1.118 Pro Edition 64bit
    Optix Prime Acceleration: N/A (Daz Studio 4.12.1.086 or earlier only)


    Benchmark Results (GPU only)
    DAZ_STATS
    2020-07-09 15:06:19.829 Finished Rendering
    2020-07-09 15:06:19.844 Total Rendering Time: 21 minutes 31.16 seconds
    IRAY_STATS
    2020-07-09 15:18:59.388 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2020-07-09 15:18:59.388 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1060 6GB):   1542 iterations, 5.922s init, 1281.983s render

    Iteration Rate: (1542 / 1281.983) =  1.2028 iterations per second
    Loading Time: ((0 * 3600 + 21 * 60 + 31.16) - 1281.983) =  9.177 seconds


    Benchmark Results (CPU+GPU)
    DAZ_STATS
    Total Rendering Time: 20 minutes 23.65 seconds
    IRAY_STATS
    IRAY   rend info : Device statistics:
    IRAY   rend info : CUDA device 0 (GeForce GTX 1060 6GB):   1559 iterations, 9.021s init, 1188.526s render
    IRAY   rend info : CPU :   241 iterations, 4.504s init, 1195.223s render


    Iteration Rate: ((1559 + 241) / 1188.526) =  1.5145 iterations per second
    Loading Time: ((0 * 3600 + 20 * 60 + 23.65) - 1188.526) =  35.124 seconds
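    The arithmetic used throughout these posts (iteration rate = device iterations ÷ device render seconds; loading time = wall-clock "Total Rendering Time" − device render seconds) can be scripted. A minimal Python sketch, with the GPU-only numbers above hard-coded purely for illustration:

```python
# Benchmark arithmetic used in this thread:
#   iteration rate = total device iterations / render seconds
#   loading time   = wall-clock "Total Rendering Time" - render seconds

def iteration_rate(iterations, render_s):
    """Iterations per second for a device over its reported render phase."""
    return iterations / render_s

def loading_time(hours, minutes, seconds, render_s):
    """Wall-clock time minus the Iray render phase = scene load/setup time."""
    return (hours * 3600 + minutes * 60 + seconds) - render_s

# GPU-only run above: 1542 iterations in 1281.983 s, wall clock 21 min 31.16 s
rate = iteration_rate(1542, 1281.983)
load = loading_time(0, 21, 31.16, 1281.983)
print(f"{rate:.4f} it/s, {load:.3f} s loading")  # ~1.2028 it/s, ~9.177 s
```

    The same two functions reproduce the CPU+GPU result as well (sum the device iterations first, then divide by the render time).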



    Felt like playing tennis with a ping-pong paddle.
    Or golf with a hockey stick, volleyball with a bowling ball,.... you know....
    So I got me some new toys.



    System Configuration
    System/Motherboard: TUF GAMING X570-PLUS 
    CPU: AMD Ryzen 9 3900X 12-Core Processor 3.79GHz (stock)
    GPU: NVIDIA GeForce RTX 2070 SUPER 8GB / stock
    System Memory: Corsair Vengeance LPX 32GB kit (4x16GB = 64GB) DDR4 @ 3200 MHz
    OS Drive: Samsung SSD 970 EVO Plus 1TB M.2 NVMe (disk 0)
    DAZ3D Drive: Samsung SSD 970 EVO Plus 1TB M.2 NVMe (disk 1)
    Asset Drive: AMD-RAID0 (Samsung SSD 860 EVO 1TB x 2)
    Operating System: Windows 10 Home 10.0.18363 Build 18363
    Nvidia Drivers Version: 27.21.14.5167 (2020.07.05)
    Daz Studio Version: 4.12.1.118 Pro Edition 64bit
    Optix Prime Acceleration: N/A (Daz Studio 4.12.1.086 or earlier only)


    Benchmark Results (GPU)
    DAZ_STATS
    2020-07-16 21:37:49.845 Finished Rendering
    2020-07-16 21:37:49.876 Total Rendering Time: 5 minutes 51.51 seconds
    IRAY_STATS
    2020-07-16 21:38:08.433 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2070 SUPER):   1800 iterations, 2.358s init, 346.371s render

    Iteration Rate: (1800 / 346.371) =  5.197 iterations per second
    Loading Time: ((0 * 3600 + 5 * 60 + 51.51) - 346.371) =  5.139 seconds


    Benchmark Results (CPU+GPU)
    DAZ_STATS
    2020-07-16 18:46:25.543 Finished Rendering
    2020-07-16 18:46:25.567 Total Rendering Time: 4 minutes 58.22 seconds
    IRAY_STATS
    2020-07-16 18:46:52.008 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2070 SUPER):   1509 iterations, 1.939s init, 293.483s render
    2020-07-16 18:46:52.008 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU:   291 iterations, 1.595s init, 293.221s render

    Iteration Rate: (1509 / 293.483) =  5.142 iterations per second [edit]
    Iteration Rate: ((1509 + 291) / 293.483) =  6.133 iterations per second
    Loading Time: ((0 * 3600 + 4 * 60 + 58.22) - 292.483) =  5.737 seconds

     

    Now a happy hobbyist ^____^

    Post edited by cheelong72 on
  • FlayronFlayron Posts: 2

    System Configuration
    System/Motherboard: Asus Z170 PRO GAMING
    CPU: Intel Core i7-6700K @ 4.7Ghz 
    GPU: Asus DUAL-RTX2070S-O8G-EVO @ stock
    System Memory: G Skill F4-2400C15-4GRR 16GB DDR4 @ 3000Mhz - 17-17-17-37-1T
    OS Drive: Samsung SSD 860 QVO 1TB
    Asset Drive: Same
    Operating System: Microsoft Windows 10 Professional (x64) Build 18363.959 (1909/November 2019 Update)
    Nvidia Drivers Version: 451.22 GRD WDDM
    Daz Studio Version: 4.12.1.117 Beta 64-bit
    Optix Prime Acceleration: No


    Benchmark Results
    DAZ_STATS:
    2020-07-17 16:53:38.742 Finished Rendering
    2020-07-17 16:53:38.764 Total Rendering Time: 6 minutes 1.29 seconds
    IRAY_STATS:
    2020-07-17 16:53:51.330 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2020-07-17 16:53:51.330 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2070 SUPER): 779 iterations, 8.756s init, 349.007s render
    Iteration Rate: (779 / 349.007) = 2.232 iterations per second
    Loading Time: ((0 * 3600 + 6 * 60 + 1.29) - 349.007) = 12.283 seconds

  • So my only question is: are we running benchmarks only at stock speeds, or can we push the overclocks to squeeze out some performance?

  • outrider42outrider42 Posts: 2,853
    Stock speed has little meaning when almost every GPU has a different one. That's why we ask people to list specs. Some people have mentioned they overclocked. If you want to overclock, go for it; just mention that you do it. Also, overclocking for Iray is not really recommended given the long periods you may be running a render, which pushes the GPU at 100% nonstop. You do so at your own risk. I would not do an aggressive OC.
  • Stock speed has little meaning when almost every GPU has a different one. That's why we ask people to list specs. Some people have mentioned they overclocked. If you want to overclock, go for it; just mention that you do it. Also, overclocking for Iray is not really recommended given the long periods you may be running a render, which pushes the GPU at 100% nonstop. You do so at your own risk. I would not do an aggressive OC.

     

    That's good to know. I was thinking of running a watercooled GPU and CPU to use overclocks, but if it's not recommended then I will just save the money that watercooling adds and drop it in somewhere else.

  • outrider42outrider42 Posts: 2,853

    If you water cool then that would be better. But for me personally, I don't think trying to push overclocking for Iray is a good idea. Maybe others would disagree. Iray is quite different from gaming. On one hand, some games are actually more demanding than Iray. Like if I run 3DMark, my temps will exceed what they do for Iray. Also if you run demanding games at high refresh rates, like 120+, this can push things up, too. That is because of how games work: they constantly refresh VRAM, and that is something that Iray does not do. Iray loads the scene once, and that is it. That is why the entire scene must fit in VRAM for Iray. But Iray still runs the GPU processor at 100%, and depending on your scene, it can run for a long time. Most games will have periods where they ramp up or down, but Iray does not; it runs full throttle the entire duration of the render.

    So cooling is important. I am not saying you must water cool, I don't, and I have two 1080tis which will be warmer than a single card. But I have a good case for airflow and I use MSI Afterburner to control fan speeds to run faster than stock. This keeps temps under control for me. And when I game, I only use one card. Water cooling can be great, but it can be a hassle...you wouldn't want to spring a leak. <.<

    I have overclocked a little, but only a little. The performance gains are minor in the scope of things. A good card that is already factory overclocked and has a good cooler is probably fine.

  • TheKDTheKD Posts: 2,574

    One of the biggest reasons not to overvolt GPU without watercooling is they have built in throttles. So once the temp starts going up, the performance will start dropping due to it throttling itself.

  • tj_1ca9500btj_1ca9500b Posts: 1,979
    edited July 2020
    TheKD said:

    One of the biggest reasons not to overvolt GPU without watercooling is they have built in throttles. So once the temp starts going up, the performance will start dropping due to it throttling itself.

    I actually 'undervolt' my GPU for this very reason.  Some graphics card companies build in 'factory overclocks' to push the silicon to its limits, which pushes the card closer to the throttle limits.  In my case, I run the card at about 70% of the aftermarket card's 'default' voltage, which helps drop my temps while not giving up any appreciable performance.  Incidentally, this drops the voltage to close to the same 'spec' as the reference cards. For shorter renders, yes, my renders might be a bit slower, but a lot of my renders can take up to an hour, more than long enough for thermal throttling to rear its ugly head.  Plus my case layout and ambient temp situation are a bit less than ideal, so the card will heat up a bit more to begin with...

    Post edited by tj_1ca9500b on
  • outrider42outrider42 Posts: 2,853

    Do keep in mind he wants to game, so undervolting may not be ideal. The gaming cards usually have their spec balanced for gaming, and that is fine for Iray. These days you might see 3 specs: a base clock, a boost clock, and a game clock. The boost is only a temporary speed and is kind of pointless. The game clock is what matters, and it is supposed to be pretty consistent. These are based on the cooler the card comes with and an average ambient room temp and case. You can always look up reviews, which I encourage you to do. Like I said, the truly demanding games will actually run the GPU hotter than Iray does. So with that in mind, you should have an idea of what to expect, as Iray generally will not exceed the temps of these games. So that means that assuming your room is not hot and your case is not hot, you shouldn't throttle. I like the reviews that Gamers Nexus does; they are very thorough and cover every aspect of the GPU in detail, including cooler design, temps, and power use. They will also throw in a variety of creation benchmarks, like Blender, in their testing.

    And of course, you have this thread for Iray specific benchmarks, which is probably the only place on the internet you will find such a comprehensive source of benches for that engine. You might find an article about Iray once in a while, but it will be either really old and outdated or only cover 3 or 4 GPUs.

    BUT I would rather not clutter this thread with too much discussion. Maybe open another thread specifically about overclocking if you want to continue discussing it. It has not been explored all that heavily in a direct manner. Like somebody actually going through testing various clocks on the same GPU and running the same test scene with them at long enough time period to verify the max stable temps are reached. This test scene here may not do that too well on faster cards, as some setups may finish the scene before throttling can be an issue. Even so, I do not think the effort is worth it, unless you can really keep cool temps. Otherwise you may throttle at some point and drop back down anyway.

  • ArtAngelArtAngel Posts: 1,048
    edited August 2020

    I have various versions on various rigs, and I am about to upgrade to Daz Studio 4.12 on one of my PCs (dual RTX 2080 Ti), but before I do, I decided to benchmark some stats for version 4.11.0.86. I have dual 1080 Tis on another rig and will test that one later. After I download some newer versions I may retest after updating drivers, but for now here are some stats on one of my rigs. I will not repeat the system info, as all tests were done on that system. All tests were performed with OptiX Prime Acceleration OFF and at stock speed, no overclocking.

    System Configuration
    System/Motherboard: MSI MEG x299 Creation
    CPU: IntelCore i9-9980XE @ SPEED/stock
    GPU1: GeForce RTX 2080 Ti @ SPEED/stock
    GPU2: GeForce RTX 2080 Ti @ SPEED/stock
    System Memory: G.Skills Ripjaws 128 @ 2133MHz
    OS Drive: Samsung SSD 2TB
    Asset Drive: Samsung (Internal) SSD 4TB
    Operating System: Windows 10 Pro v 10.0.18.362 Build 18362
    Nvidia Drivers Version: 430.86
    Daz Studio Version: 4.11.0.383

    TEST 1: RTX2080 Ti x2 (SLI / NVLink Enabled) the render utilized both cards

    DAZ STATS

    2020-08-12 11:00:14.159 Finished Rendering
    2020-08-12 11:00:14.206 Total Rendering Time: 2 minutes 46.7 seconds

    IRAY STATS

    2020-08-12 11:00:23.766 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Device statistics:

    2020-08-12 11:00:23.766 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 2080 Ti): 896 iterations, 3.152s init, 159.991s render

    2020-08-12 11:00:23.766 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti): 904 iterations, 3.121s init, 159.865s render

    Render Performance: (896 + 904) / 159.991 = 11.2506 iterations per second

    Load Time: (0 * 3600 + 2 * 60 + 46.7) - 159.991 = 6.709 seconds

    ---------------------------------------------------------------------------------------

    TEST 2: CPU  +  RTX2080 Ti x 2 (SLI / NVLink Enabled) - the render did not utilize both cards and did not utilize the CPU

    DAZ STATS

    2020-08-12 11:59:35.369 Finished Rendering
    2020-08-12 11:59:35.400 Total Rendering Time: 2 minutes 36.28 seconds

    IRAY STATS

    2020-08-12 12:01:16.150 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Device statistics:

    2020-08-12 12:01:16.150 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 2080 Ti): 818 iterations, 3.378s init, 149.011s render

    Render Performance: 818 / 149.011 = 5.49 iterations per second

    Load Time: (0 * 3600 + 2 * 60 + 36.28) - 149.011 = 7.27 seconds

     

    TEST 3: RTX2080 Ti x1 Only checked first GPU (SLI / NVLink ENABLED) info not complete?

    DAZ STATS

    2020-08-12 11:32:35.244 Finished Rendering
    2020-08-12 11:32:35.276 Total Rendering Time: 5 minutes 21.3 seconds

    IRAY STATS

    None but last two lines above Daz Stats are as follows.

    2020-08-12 11:32:34.620 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 01800 iterations after 318.018s.
    2020-08-12 11:32:34.651 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Maximum number of samples reached.

    TEST 4: RTX2080 Ti x 1 Only checked 2nd  GPU (SLI / NVLink ENABLED)

    DAZ STATS

    2020-08-12 11:50:09.563 Finished Rendering
    2020-08-12 11:50:09.610 Total Rendering Time: 5 minutes 17.24 seconds

    IRAY STATS

    2020-08-12 11:50:19.826 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti): 1800 iterations, 3.064s init, 311.212s render

    Render Performance: 1800 / 311.212 = 5.783838669 iterations per second

    Load Time: (0 * 3600 + 5 * 60 + 17.24) - 311.212 = 6.028 seconds

     

    TEST 5: RTX2080 Ti x 2  (SLI / NVLink DISABLED) the render did not utilize both cards

    DAZ STATS

    2020-08-12 12:19:44.552 Finished Rendering
    2020-08-12 12:19:44.584 Total Rendering Time: 2 minutes 45.82 seconds

    IRAY STATS

    2020-08-12 12:19:47.895 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 2080 Ti): 894 iterations, 3.184s init, 159.205s render

    Render Performance: 894 / 159.205 = 5.615401526 iterations per second

    Load Time: (0 * 3600 + 2 * 60 + 45.82) - 159.205 = 6.615 seconds

     

    TEST 6: CPU + RTX2080 Ti x 2  (SLI / NVLink DISABLED) render utilized CPU and both cards

    DAZ STATS: 

    2020-08-12 12:25:20.372 Finished Rendering
    2020-08-12 12:25:20.419 Total Rendering Time: 2 minutes 36.39 seconds

    IRAY STATS

    2020-08-12 12:25:32.713 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 2080 Ti): 821 iterations, 3.355s init, 149.337s render

    2020-08-12 12:25:32.729 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti): 827 iterations, 3.345s init, 149.398s render

    2020-08-12 12:25:32.729 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CPU: 152 iterations, 2.757s init, 150.592s render

    Render Performance: (821 + 827 + 152) / 150.592 = 11.95282618 iterations per second

    Load Time: (0 * 3600 + 2 * 60 + 36.39) - 150.592 = 5.798 seconds

    Edited: to add Render Performance * Load Time stats to Test 2 and correct test 6

    My calculations for TEST 6 appear to be calculated wrong and should read

    Render Performance: (821 + 827 + 152) / 149.7756 = 12.01797355 iterations per second

    Load Time: (0 * 3600 + 2 * 60 + 36.39) - 149.7756 = 6.614333 seconds
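    For multi-device runs like these, the thread's usual convention is to sum the iterations across all reporting devices and divide by the longest device render time, since the devices run concurrently. A sketch under that assumption, using the Test 6 numbers above:

```python
# Multi-device iteration rate: sum all device iterations, divide by the
# slowest (longest) reported render time, since devices run concurrently.

def multi_device_rate(devices):
    """devices: list of (iterations, render_seconds) per CUDA device/CPU."""
    total_iters = sum(it for it, _ in devices)
    wall = max(t for _, t in devices)
    return total_iters / wall

# Test 6 above: two 2080 Tis plus the CPU
rate = multi_device_rate([(821, 149.337), (827, 149.398), (152, 150.592)])
print(f"{rate:.4f} it/s")  # ~11.9528 it/s with the max render time
```

    Using the average of the device render times instead of the max (as in the corrected figures above) shifts the result only slightly.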

    Post edited by ArtAngel on
  • outrider42outrider42 Posts: 2,853
    edited August 2020

    You've been running two 2080tis with Daz 4.11? You should see a big gain going to 4.12, as it uses Iray RTX. Your initial numbers fall in line with the other dual 2080ti results from 4.11. That same combo jumped to over 14 iterations per second with the newer version of Iray.

    Your second test still had NVLink enabled, so I am guessing that even though you unchecked one of the 2080tis, it still ran them both while only reporting the data for one card. That's pretty wacky, but NVLink makes the software see them as one GPU. The best way to check is to look at GPU monitoring software (not Task Manager) and see if both cards ramp up if you do this again. What is really fascinating about this is that the result is 10 seconds faster. The "unreported" GPU did a lot more work than before. This test runs to 1800 iterations, so each GPU should be around 900 iterations, and they are for the other tests. But in this test, your one GPU did 818 iterations, meaning the hidden GPU would have to have done 982 iterations to reach the 1800 stop condition. But really, unchecking a GPU in SLI just to save 10 seconds wouldn't be logical.

    Your second to last test does the same thing, as your GPU only shows 894 iterations, plus the time lines up with both GPUs running. So the other GPU had to be running somehow without being reported. Again, a GPU monitor should show this.

    I like using MSI Afterburner myself, as it can show a lot of detailed data for both GPUs and CPUs in real time, and it also allows me to control my GPU fans. For me, that makes a big difference with two big GPUs in my rig with just air cooling.

    One last piece of advice, if you don't do it already, consider making backups of 4.11. Once you upgrade you can't go back. While your 2080tis will obviously benefit, the rig with 1080tis may not benefit so much, or you may just want to keep it. Some people have complained that the newer Iray RTX uses more VRAM than the old one, which may be of concern to you.

    Post edited by outrider42 on
  • ArtAngelArtAngel Posts: 1,048

    One last piece of advice, if you don't do it already, consider making backups of 4.11. Once you upgrade you can't go back. While your 2080tis will obviously benefit, the rig with 1080tis may not benefit so much, or you may just want to keep it. Some people have complained that the newer Iray RTX uses more VRAM than the old one, which may be of concern to you.

    When I tried a CPU-only test it slugged, so I bailed. Are you referring to the beta or the Pro version? Also, in your opinion, what is the most effective, fool-proof way to back up 4.11?

  • outrider42outrider42 Posts: 2,853
    edited August 2020

    I either copy the Daz program files to another location or drive, or install the new version to another drive entirely. I've done both. I also have 5 drives on my PC, LOL. I honestly don't know what is better, but it seems to have worked. If you do install to a new location, make sure DIM is set to not delete the previous version, there should be a check box somewhere for that. I lost one of my beta versions that way and had a wild time with system restores and other things to get it back. I am actually not quite sure how I got that version working again.

    But ultimately I have 5 or 6 versions of Daz Studio on my PC, a mix of beta and general releases.

    If you do not have a beta already, then that would be the easiest way to go. You can safely install the beta as it will not conflict with the general release.

    I just call the Pro version the general release, as the beta is technically the "Daz Studio Pro Beta", its always confusing to me.

    And yeah, the CPU isn't going to do much, especially compared to those GPUs. Even in your GPU+CPU renders, it may even be slowing you down slightly, mitigating the GPU speeds. Your two GPUs tended to be between 2:36 and 2:45, while using all 3 parts was 2:45. Something is bottlenecking that data, maybe the bus between them.

    Post edited by outrider42 on
  • ArtAngelArtAngel Posts: 1,048

    Quick question re : 4.2 Running The Benchmark Step 1: Close all running programs (including any open instances of Daz Studio) before continuing (failing to do so may artificially decrease your measured rendering performance.)
     

    Norton 360 does not show as an app but is running. Would 75 background processes running, including Norton 360, affect the outcome greatly, or is leaving them on okay when I test this next machine with 1080s?

  • outrider42outrider42 Posts: 2,853
    It should be fine. Ideally you want to control all variables so that everything is equal for all tests and all users doing the bench. There is a small chance that Norton could go "Hey I want to do an update right now! And do a full scan!" in the middle of the bench.

    Actually I had the most bizarre issue a while back. During my video games, I would experience a noticeable hitch in gameplay periodically. It got to be incredibly annoying. I thought perhaps it was my aging CPU. It turns out that it was something so silly, it hurts the brain. After messing around, I found the answer...my desktop background. I had my desktop set to cycle pictures. The loading of a new pic was the cause of that hitch I was experiencing. I couldn't believe that something like that could have such an impact on my games. They were my own pics, too.

    I don't believe it impacted Iray. The hitches were hitting the CPU, not the GPU. It did affect Daz in-app, as the hitch would occur setting up scenes and was quite annoying. It just goes to show that the strangest things can cause unexpected issues. My gaming performance was really awful because of a desktop setting.
  • RayDAntRayDAnt Posts: 860
    ArtAngel said:

    Quick question re : 4.2 Running The Benchmark Step 1: Close all running programs (including any open instances of Daz Studio) before continuing (failing to do so may artificially decrease your measured rendering performance.)
     

    Norton 360 does not show as an app but is running. Would 75 background processes running, including Norton 360, affect the outcome greatly, or is leaving them on okay when I test this next machine with 1080s?

    Like @outrider42 said, it's more a safeguard against getting inaccurate results than a strict guideline. As long as you're reasonably confident that an application isn't going to suddenly start grabbing CPU/GPU processing cycles in the middle of the test behind your back, you should be fine. 

  • ArtAngelArtAngel Posts: 1,048
    edited August 2020
    RayDAnt said:
    ArtAngel said:

    Quick question re : 4.2 Running The Benchmark Step 1: Close all running programs (including any open instances of Daz Studio) before continuing (failing to do so may artificially decrease your measured rendering performance.)
     

    Norton 360 does not show as an app but is running. Would 75 background processes running, including Norton 360, affect the outcome greatly, or is leaving them on okay when I test this next machine with 1080s?

    Like @outrider42 said, it's more a safeguard against getting inaccurate results than a strict guideline. As long as you're reasonably confident that an application isn't going to suddenly start grabbing CPU/GPU processing cycles in the middle of the test behind your back, you should be fine. 

    Thank you, @outrider42. Understanding the software is crucial, and these benchmarks have been that pivotal moment where the lights go on.

     @RayDAnt Thanks for creating this thread and the benchmark duf file. I think this is a crucial asset to the Daz community.

    To either of you: I have a scenario that needs clarification.

    A recent bench test posted by a user, included this benchmark

    ------------------------------------------------------------------------

    Benchmark Results (CPU+GPU)
    DAZ_STATS
    2020-07-16 18:46:25.543 Finished Rendering
    2020-07-16 18:46:25.567 Total Rendering Time: 4 minutes 58.22 seconds
    IRAY_STATS
    2020-07-16 18:46:52.008 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2070 SUPER):   1509 iterations, 1.939s init, 293.483s render
    2020-07-16 18:46:52.008 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU:   291 iterations, 1.595s init, 293.221s render

    Iteration Rate: (1509 / 293.483) =  5.142 iterations per second
    Loading Time: ((0 * 3600 + 4 * 60 + 58.22) - 292.483) =  5.737 seconds

    ---------------------------------------------------------------------------------------------------------

    If the above were my test the results would read: 

    Iteration Rate: (1800 / 293.352) =  6.135 iterations per second
    Loading Time: ((0 * 3600 + 4 * 60 + 58.22) - 293.352) =  4.868 seconds

    This isn't about being right or wrong it is about doing the test correctly and at this point I'm questioning if I am calculating things properly.  

    Yesterday I completed 12 tests on the 1080 Tis (analytics of pre-updating software/drivers; I promise to not post a ton of results, maybe two or three). I averaged the seconds of both devices (doubt that matters) instead of taking the highest like I did for the 2080 Ti test, but also included the CPU iterations in calculations where the CPU was utilized. Please clarify so I can get this right, and many thanks in advance.
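    The question above (dividing by the highest device render time versus the average across devices) changes the result only slightly. A quick comparison in Python, using the quoted 2070 SUPER + CPU numbers; both conventions appear in this thread:

```python
# Dividing total iterations by the max vs. the mean device render time
# gives slightly different rates for the same run.

def rate_max(devices):
    """Total iterations over the longest device render time."""
    return sum(it for it, _ in devices) / max(t for _, t in devices)

def rate_mean(devices):
    """Total iterations over the average device render time."""
    times = [t for _, t in devices]
    return sum(it for it, _ in devices) / (sum(times) / len(times))

devices = [(1509, 293.483), (291, 293.221)]  # GPU, CPU from the quote above
print(f"max:  {rate_max(devices):.3f} it/s")   # 1800 / 293.483
print(f"mean: {rate_mean(devices):.3f} it/s")  # 1800 / 293.352
```

    Since concurrent devices all finish within a fraction of a second of one another here, the two conventions differ only in the third decimal place.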

    Also, for the asset drive, are you referring to the one where the Daz Library contents are stored or the one where I stored your .duf file? For the 2080 Ti test I had the .duf loading from an old 8GB USB stick, whereas the Daz content library is on a 4TB internal storage SSD. On this machine the library/contents are on a 3TB external SSD, but I loaded the .duf from an internal storage SSD. Is the location of the .duf relevant?

    A great benefit of this discussion/thread, for users like me who already spent their budget on computers, is that this test can also be used to assist with software upgrade decisions and unveil precise results on how a simple driver upgrade or rollback can affect performance stats (for better or worse), providing you do a bench test before the change. And of course other lucky buggers shopping for upgrades can make decisions based on unbiased performance stats everyone shares. I'm amazed this thread isn't on Part 17, at page 99, about to restart as Part 18.

    Thanks again

    Juanita

    EDIT re error on previous benchmark posted by me

    My calculations for TEST 6 also appear to be wrong, so I corrected them.

     

     

    Post edited by ArtAngel on
  • ArtAngelArtAngel Posts: 1,048
    edited August 2020

    System Configuration

    System/Motherboard: ASUS STRIX Z2070E

    CPU: Intel Core i7-7700K @ 4.2 GHz

    GPU1: GeForce GTX1080 Ti @ SPEED/stock

    GPU2: GeForce GTX1080 Ti @ SPEED/stock

    System Memory: G.Skills Ripjaws 64GB @ 2133MHz/stock

    OS Drive: Samsung SSD PRO 2TB

    Asset Drive: Western Passport Ultra (external) 3TB

    Operating System: Windows 10 Pro v 1511 Build 10586.446

    Nvidia Drivers Version: 378.92

    Daz Studio Version: 4.10.0.123

     

    Benchmark Results

    SLI: ENABLED  DISABLED

    Optix Prime Acceleration: ON

     

     

     

    DAZ_STATS

    2020-08-13 06:24:19.957 Finished Rendering

    2020-08-13 06:24:19.988 Total Rendering Time: 5 minutes 32.82 seconds

    2020-08-13 06:24:20.004 Loaded image r.png

     

    IRAY_STATS

    2020-08-13 06:24:26.887 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Device statistics:

    2020-08-13 06:24:26.887 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1080 Ti): 894 iterations, 10.632s init, 318.939s render

    2020-08-13 06:24:26.887 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1080 Ti): 906 iterations, 10.611s init, 318.462s render

    2020-08-13 06:24:32.878 Iray INFO - module:category(IRT:RENDER):   1.1   IRT    rend info : Shutting down irt render plugin.

    2020-08-13 06:24:55.613 *** Scene Cleared ***

     

    Iteration Rate: (1800 / 318.7005) = 5.6479 iterations per second

    Loading Time: ((0 * 3600 + 5 * 60 + 32.82) - 318.7005) = 14.1195 seconds

    Post edited by ArtAngel on
  • RayDAntRayDAnt Posts: 860
    edited August 2020
    ArtAngel said:
    RayDAnt said:
    ArtAngel said:

    Quick question re : 4.2 Running The Benchmark Step 1: Close all running programs (including any open instances of Daz Studio) before continuing (failing to do so may artificially decrease your measured rendering performance.)
     

    Norton 360 does not show as an app but is running. Would having 75 background processes running, including Norton 360, affect the outcome greatly, or is leaving them on okay when I test this next machine with the 1080s?

    Like @outrider42 said, it's more of a safeguard against getting inaccurate results than a strict guideline. As long as you're reasonably confident that an application isn't going to suddenly start grabbing CPU/GPU processing cycles in the middle of the test behind your back, you should be fine. 

    Thank you, @outrider42. Understanding the software is crucial, and these benchmarks have been the pivotal moment where the lights go on.

     @RayDAnt Thanks for creating this thread and the benchmark duf file. I think this is a crucial asset to the Daz community.

    To either of you: I have a scenario that needs clarification.

    A recent bench test posted by a user, included this benchmark

    ------------------------------------------------------------------------

    Benchmark Results (CPU+GPU)
    DAZ_STATS
    2020-07-16 18:46:25.543 Finished Rendering
    2020-07-16 18:46:25.567 Total Rendering Time: 4 minutes 58.22 seconds
    IRAY_STATS
    2020-07-16 18:46:52.008 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2070 SUPER):   1509 iterations, 1.939s init, 293.483s render
    2020-07-16 18:46:52.008 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU:   291 iterations, 1.595s init, 293.221s render

    Iteration Rate: (1509 / 293.483) =  5.142 iterations per second
    Loading Time: ((0 * 3600 + 4 * 60 + 58.22) - 292.483) =  5.737 seconds

    ---------------------------------------------------------------------------------------------------------

    If the above were my test the results would read: 

    Iteration Rate: (1800 / 293.352) =  6.135 iterations per second
    Loading Time: ((0 * 3600 + 4 * 60 + 58.22) - 293.352) =  4.868 seconds

    This isn't about being right or wrong; it's about doing the test correctly, and at this point I'm questioning whether I am calculating things properly.  

    Yes, you are doing it the right way. The benchmark result calculation you spotted upthread was calculated incorrectly. Iterations per second equals the combined total iterations completed by all CPUs/GPUs used for the render, divided by the longest render time. The combined total for this test scene always works out to be 1800 iterations, unless something has gone wrong.
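    In code form, that calculation is just a couple of divisions. Here's a minimal Python sketch (the figures come from the CPU+GPU test quoted above; the helper function names are my own, not part of any official tool):

```python
# Benchmark math as described above:
#   iteration rate = combined iterations of all devices / longest render time
#   loading time   = Total Rendering Time (DAZ_STATS) - longest render time

def iteration_rate(device_iterations, longest_render_s):
    """Combined iterations across all devices, divided by the longest render time."""
    return sum(device_iterations) / longest_render_s

def loading_time(total_render_s, longest_render_s):
    """Total Rendering Time from DAZ_STATS minus the longest render time."""
    return total_render_s - longest_render_s

# From the quoted test: GPU 1509 iterations / 293.483s render,
# CPU 291 iterations / 293.221s render,
# Total Rendering Time 4 minutes 58.22 seconds.
total_s = 0 * 3600 + 4 * 60 + 58.22
rate = iteration_rate([1509, 291], 293.483)
load = loading_time(total_s, 293.483)
print(f"{rate:.3f} iterations per second, {load:.3f} seconds loading")
```

    Using the longest render time (293.483 s) rather than an average gives roughly 6.133 iterations per second and 4.737 seconds of loading time for that test.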

     

    ArtAngel said:

    Yesterday I completed 12 tests on the 1080ti (analysis from before updating software/drivers, and I promise not to post a ton of results, maybe two or three).

    Post away! I've sort of tapered off updating the beginning tables of the thread mostly because:

    1. Most device combinations have been covered multiple times already,
    2. Iray/driver updates with significant performance up/downlifts have been scarce recently, and
    3. I'm worried about hitting the forum's per-post character limit if I keep expanding things.

    But none of those concerns apply to additional posts made by me/others of newer benchmarking data. And the more there is of that in this thread in general, the more potentially useful this thread will be at a later date when significant performance changes start to crop up again.

     

    I averaged the seconds of both devices (doubt that matters) instead of taking the highest like I did for the 2080ti test, but also included the CPU iterations in calculations where the CPU was utilized. Please clarify so I can get this right, and many thanks in advance.

    Properly speaking, it is whichever device's reported render time is longest that you should use (not an average of two or more), because, no matter how much faster one rendering device may complete its portion of the overall render, that longest value is how much time it took Iray to finish the overall render. And measuring the overall completion rate of a single render task is what this benchmark is based around.

    With that said, in practice I have never seen it differ by more than a second or so between devices (making the difference resulting from whether you use the longest, shortest, or an average of some sort negligible to the final stat). But there's no telling when a situation may arise (perhaps with the upcoming RTX gen. 2 cards?) where that difference suddenly becomes something that matters. So you should be going off of the longest render time.
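    For anyone running lots of tests, picking out the combined iterations and the longest render time can be automated. A rough Python sketch, assuming the exact IRAY_STATS line format shown in the benchmark posts above (this is just an illustration, not part of the official benchmark procedure):

```python
import re

# Matches the per-device portion of an IRAY_STATS line, e.g.
# "CUDA device 0 (GeForce GTX 1080 Ti): 894 iterations, 10.632s init, 318.939s render"
STAT_RE = re.compile(r"(\d+) iterations, [\d.]+s init, ([\d.]+)s render")

def parse_iray_stats(log_lines):
    """Return (combined iterations, longest render time in seconds)."""
    iterations = 0
    render_times = []
    for line in log_lines:
        m = STAT_RE.search(line)
        if m:
            iterations += int(m.group(1))
            render_times.append(float(m.group(2)))
    return iterations, max(render_times)

log = [
    "CUDA device 0 (GeForce GTX 1080 Ti): 894 iterations, 10.632s init, 318.939s render",
    "CUDA device 1 (GeForce GTX 1080 Ti): 906 iterations, 10.611s init, 318.462s render",
]
total_iters, longest = parse_iray_stats(log)
print(total_iters, longest, total_iters / longest)
```

    For the dual 1080ti log lines shown, this yields 1800 combined iterations and a longest render time of 318.939 s, i.e. about 5.644 iterations per second.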

     

    Also, for the asset drive, are you referring to the one where the Daz Library contents are stored, or the one where I stored your .duf file? For the 2080ti I had the .duf loading from an old 8GB USB stick, whereas the DAZ content library is on a 4TB internal SSD. On this machine the library/contents are on a 3TB external SSD, but I loaded the .duf from an internal SSD. Is the location of the .duf relevant?

    The location where the Daz Library contents are stored is what matters, because that's where the texture files that Iray needs to load directly are located. The location of the benchmark.duf file works out to be completely irrelevant to the test itself, because the Iray plugin embedded in Daz Studio never interacts with .duf files directly (it doesn't even know how to read them, since they are a proprietary Daz format).

     

    Post edited by RayDAnt on
  • outrider42outrider42 Posts: 2,853
    edited August 2020

    @artangel, I'm curious what models of 1080tis you have. At least in this test, yours are way slower than mine, like a full minute slower. A minute out of a 4-5 minute test is quite large. That seems out of place; the other users who posted with two 1080tis were roughly around the same speed as mine.

    If they are Founder's Editions, those were only clocked around 1500 MHz, while mine are both over 1900, nearly 2000 even. That is a large difference and could be our answer. I totally forgot the Founders were clocked that low.

    Could they be running hot and throttling themselves? I always use MSI Afterburner to control my fans, I created a simple, yet aggressive fan curve so that the fans kick in harder and at lower temps. It keeps my overall temps down so they can run at a more sustained clock speed for the duration of a long render, or demanding video game.

    And it could be a mix of both. The Pascal Founder's cards usually ran hotter and so throttling a bit wouldn't be a surprise, on top of having slower clocks by default.

    If this is the case, I do find it very interesting. I had wondered if blower-style coolers would help in a multiple-GPU configuration, since they eject the heat out of the case and not onto each other. But that probably doesn't overcome the lower clock speeds. After Pascal, Nvidia upgraded their Founders Edition cooling for Turing to dual fans. Rumors suggest they may do a radical new cooling design for the upcoming Ampere.

    This test also goes to show that GPU models are not equal.

    Post edited by outrider42 on
  • ArtAngelArtAngel Posts: 1,048

    @artangel, I'm curious what models of 1080tis you have. At least in this test, yours are way slower than mine, like a full minute slower. A minute out of a 4-5 minute test is quite large. That seems out of place; the other users who posted with two 1080tis were roughly around the same speed as mine.

    If they are Founder's Editions, those were only clocked around 1500 MHz, while mine are both over 1900, nearly 2000 even. That is a large difference and could be our answer. I totally forgot the Founders were clocked that low.

    Could they be running hot and throttling themselves? I always use MSI Afterburner to control my fans, I created a simple, yet aggressive fan curve so that the fans kick in harder and at lower temps. It keeps my overall temps down so they can run at a more sustained clock speed for the duration of a long render, or demanding video game.

    And it could be a mix of both. The Pascal Founder's cards usually ran hotter and so throttling a bit wouldn't be a surprise, on top of having slower clocks by default.

    If this is the case, I do find it very interesting. I had wondered if blower-style coolers would help in a multiple-GPU configuration, since they eject the heat out of the case and not onto each other. But that probably doesn't overcome the lower clock speeds. After Pascal, Nvidia upgraded their Founders Edition cooling for Turing to dual fans. Rumors suggest they may do a radical new cooling design for the upcoming Ampere.

    This test also goes to show that GPU models are not equal.

    How do I find the exact model info? I just crawled under the desk and can clearly see that for the 1080 machine they differ. I am going to go get a camera, take a pic, and post it.

    But I have these screenshots here re rendering. I actually tried rendering in the 4.11.0.366 Beta with the 1080s, and after 7 minutes of it slugging away at 2% I bailed.

    Slugging away with old driver

    Frustrated, I updated the Nvidia driver and got this result on 4.11.0.366:

    After Driver Update

    I've been testing the newest 4.12.2.6, so I'll have to try another test tomorrow on the 1080tis, but first I'm taking the side off the case, or maybe grabbing a pic through the glass side.

    411.After 7 minutes bailed.JPG
    1488 x 1013 - 144K
    411AfterUpdate.JPG
    1324 x 304 - 113K
  • ArtAngelArtAngel Posts: 1,048
    edited August 2020

    @artangel, I'm curious what models of 1080tis you have. At least in this test, yours are way slower than mine, like a full minute slower.....

    This test also goes to show that GPU models are not equal.

    I took off the cover to check out the fans. There seem to be three: one in the rear, one by the cooler, and one in the front. I took photos.

    Edit: there are at least five fans. One more in the front bottom and another in the back bottom, where the power supply is.

    1.jpg
    5312 x 2988 - 6M
    2.jpg
    5312 x 2988 - 5M
    3.jpg
    5312 x 2988 - 3M
    Front.jpg
    5312 x 2988 - 3M
    Post edited by ArtAngel on
  • outrider42outrider42 Posts: 2,853

    In your first pic, I see what looks like a white label at the very back end of the card on top. That should have a SKU to identify it. Looking at the shroud from the side, it kind of looks like a Founders card.

     

    That will be trickier for the bottom card, and I won't ask you to remove the other just to see that label. But we can see the bottom card is an EVGA. We just don't know which one, as EVGA made Founders-style cards, too.

    Most Founders cards have a very unique look compared to other cards. They almost look like some kind of space ship with their angular shape. You can also see the single fan.

    That pic looks like the card on top, but I can't say for sure.

    If you know what your clock speeds are, that would narrow it down a lot. Most models have a set clock speed as their goal, provided temps are good. If card #1 is running near 1480 MHz, that's a Founders for sure.

    EVGA made a ton of cards. There are single fan models that look similar to the above pic. They have dual fan and 3 fan models, too.

    Looking at the style of the text on the EVGA, it could be a blower card.

    https://www.newegg.com/evga-geforce-gtx-1080-ti-11g-p4-5390-kr/p/N82E16814487357

    Again, the giveaway would be the single fan on the underside of the card.

     

  • fred9803fred9803 Posts: 1,315
     

    This test also goes to show that GPU models are not equal.

    I would have to agree with that, along with different hardware configurations affecting render times. But this thread is great for giving people a general idea of GPU performance with DS/Iray. Thank you outrider for starting it.

    Those dual 2080ti render times are (expectedly) stunning compared to my 2080. And when the new Nvidia ones come along I suppose I'll be tempted to upgrade... it'll cost an arm and a leg, considering that here in Australia they'll be much pricier than in the US.

    Then again, I just rendered out an interior scene with 4 G4Fs and it took about 25 minutes. Perhaps 5 minutes or less with the theorised new Ampere. Do I actually need to spend 2 thousand bucks to save 20 minutes as a hobbyist? At a certain point the law of diminishing returns kicks in, especially for those who don't render for a living.

  • outrider42outrider42 Posts: 2,853

    The good thing with Iray is that, in general, aside from the GPU itself, the rest of the computer doesn't impact render times much at all; basically only temperatures truly affect it.

    This is very counter to video games, which can potentially rely on every part of a PC working in harmony.

    Also, you have to credit RayDAnt for starting this thread and compiling the handy charts.

    It certainly comes down to how much value one places on their hobbies. As a hobby, it can be quite expensive indeed! Especially if you are buying up content. But some people spend crazy money on video gaming, or modding cars, or collecting stuff. What people may do for hobbies or entertainment can have a big value to them for their personal well being.

    Perhaps saving 20 minutes doesn't sound like much, but that scales. In your example, you are talking 5 times faster. If you build a bigger scene, perhaps now instead of 5 hours, you are looking at 1 hour. A savings of 4 hours. Now it starts to sound better. And this is cumulative as well. Now you can potentially produce 5 times as many renders over a set period of time. This adds up.

    Getting more renders done can also affect your ability to learn. The best way to learn is by doing things repeatedly. If you have a really bad computer, where it takes an hour or more for even a basic pic, it can slow down how fast you learn. If you have a fast PC, then you get to see your results faster, so you can adjust things. You can fix poses, alter materials and lights, and make a new render each time until you get it right (or use the Iray viewport in near real time). You can experiment and see how this affects that, etc. Someone with a fast computer can be done with all this before the very slow PC is even done with its first pic or two.

    That's how I see it.

    And I play video games, too. So this ties in nicely with that other hobby. :)

    But I don't want to hijack our thread. We need to stay on topic: actual benchmarking.
