Iray Starter Scene: Post Your Benchmarks!


Comments

  • Aala Posts: 140
    Aala said:
    RayDAnt said:

    Did a test with a more complex custom scene. Stonemason Private Library with Sina in the middle rendered to 0.5 quality at 2048 x 1024.

    4.11

    2019-07-23 20:47:12.905 Total Rendering Time: 37 minutes 15.89 seconds

    2019-07-23 21:06:44.282 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti): 4325 iterations, 26.049s init, 2199.728s render


    4.12 beta

    2019-07-24 00:40:15.960 Total Rendering Time: 27 minutes 48.19 seconds

    2019-07-24 00:47:29.518 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti): 4331 iterations, 16.180s init, 1640.894s render


    Anybody have any recommendations for what the most complex scene in the Daz Studio store is?

    You don't really need a complex scene, meaning lots of assets. You can have something with a dozen million vertices and loads of 4K textures, but if it's open-air it'll still render quickly, as long as everything fits into memory.

    What you need is a scene that is hard to render, meaning one that still shows lots of noise after a long render time. In my experience, the usual culprits are bathrooms: lots of reflections, tiny spaces for traced rays to find light in, and usually white themes that translate to a lot of bounces per ray. And if you design the scene so you have to crank the exposure up to see anything, it'll be even noisier.

    Although bathrooms and small indoor spaces are difficult scenes to render and take very long times, the RTX cards really won't make much of a difference there. Where an RTX card should make a difference is an outdoor scene with 100 instanced trees where each tree has 1,000 or so leaves. This is just my opinion based on what I've read; I don't own an RTX card and was waiting for 4.12 to be released to make my decision. But so far, based on what I've seen here and the type of renders I do, I'd probably be better off with two 1080 Tis than one 2080 Ti.

    Indoor scenes can be reasonably complex too when you have tons of elements in them. RTX definitely has an advantage where ray intersection dominates, because of the RT cores, but since outdoor scenes already render quickly enough, we might want something different. Looks like we're going to have to design several different scenes and test them out.

  • Takeo.Kensei Posts: 1,303

    There is also another option, one I didn't think of at first but which should be more logical if we want to stay with a typical DS scene: strand-based hair.

    I haven't tested the functionality yet, which is why I didn't think of it, but in theory a dense strand hair mesh should do the trick to show the RT core performance advantage.

    So if someone wants to build a test scene with lots of hair, he/she is welcome.

    BTW, as some people have mentioned, there is a thread for discussions like this one here: https://www.daz3d.com/forums/discussion/321401/general-gpu-testing-discussion-from-benchmark-thread#latest

  • richardandtracy Posts: 5,049

    Here is a blast from the past.

    I am using a 4-core/8-thread Xeon E3-1245 at 3.4GHz, with 16GB RAM and an Nvidia Quadro 2000. The machine dates to Aug 2013, when it became my work PC; after it was retired in January, the company offered it to me instead of throwing it away.

    Benchmark times: the full scene with graphics card and optimization took 50 mins 56.82 secs to get to 5,000 iterations and 99.99% convergence, so almost there. It was at 90% after 3 mins 46 secs.

    Haven't seen times like this since page 4 of this thread. Looks like a secondhand 1060 is under £100, so it may be worth getting one for the machine. Dunno, it is a fair bit of money.

    Regards,

    Richard.


  • LenioTG Posts: 2,118

    richardandtracy said:

    Here is a blast from the past.

    I am using a 4-core/8-thread Xeon E3-1245 at 3.4GHz, with 16GB RAM and an Nvidia Quadro 2000. The machine dates to Aug 2013, when it became my work PC; after it was retired in January, the company offered it to me instead of throwing it away.

    Benchmark times: the full scene with graphics card and optimization took 50 mins 56.82 secs to get to 5,000 iterations and 99.99% convergence, so almost there. It was at 90% after 3 mins 46 secs.

    Haven't seen times like this since page 4 of this thread. Looks like a secondhand 1060 is under £100, so it may be worth getting one for the machine. Dunno, it is a fair bit of money.

    Regards,

    Richard.

    If the 1060 is the 6GB version, I guess it's a good deal!

  • outrider42 Posts: 3,679
    Yeah, do make sure it is a 6GB model. There are 3GB models, and even at that price you would deeply regret the purchase if many of your scenes drop to CPU.
  • alan bard newcomer Posts: 2,100

    Regarding 1060s and such...
    ----
    I had a 980 Ti (and still do), but added a 1050 for my second machine and then stuck them both in my main machine.
    ----
    Obviously I knew the 1050 only had 768 CUDA cores, but it wasn't originally purchased for rendering anyway.
    ---
    But it wasn't until I ran GPU Shark in detail mode that I discovered a lot of data that may be buried in the specs on Nvidia's website, but isn't on the boxes or in the product descriptions you see in catalogs.
    ---
    The 1050 has 4GB and the 980 Ti has 6GB, so that would seem to be nearly a wash, except when I have a big scene and drop to CPU.
    But wait: the GDDR5 in the 1050 sits on roughly a 128-bit memory bus, while the GDDR5 in the 980 Ti gets a 384-bit bus. Whoops.
    ---
    And so on for most other specs; almost everything on the 1050 was appreciably lower spec.
    ---
    So I decided to look for another 980 Ti, but they were running an average of $350 and up... and I could only find 6GB cards, not a 12GB.
    So I googled for 12GB and found a Titan X with 12GB for $400, and loaded up my PayPal credit.
    ----
    I need to run these tests to get some benchmarks,
    but I got some quick data from a generic CPU/GPU benchmark site when I was checking cards:
    ----
    2080 Ti, 11GB: score 16,800, running $1,000 (about 50% higher benchmark than the 980 Ti, but only about 25% higher than the Titan)
    2080: score 15,510 ...
    1080 Ti, 11GB: score 14,200, running $550 (a third higher than the 980, but less than 10% higher than the Titan)
    Titan X, 12GB: score 13,665 (grabbed the $440 one, I did... the Titan is not a huge leap past the 980 Ti, maybe around 20%, but it does have the 12GB and really didn't cost that much more than a 980 Ti)
    980 Ti, 6GB: score 11,400, $300-600
    ----
    They talk about the CUDA cores being a factor... well, all five of the cards listed have numbers in the same ballpark.
    ----
    I think it might be interesting to make sure we include the CUDA core count with our benchmarks.
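    A quick way to capture that consistently: nvidia-smi reports the card name, VRAM and driver version, but not CUDA core counts, so here is a minimal Python sketch (assuming nvidia-smi is on your PATH) that pairs the query with a hand-kept lookup taken from Nvidia's published specs:

        # Sketch: gather GPU details to paste alongside a benchmark result.
        # nvidia-smi does not expose CUDA core counts, so the lookup below
        # (from Nvidia's spec sheets) covers the cards discussed in this thread.
        import subprocess

        CUDA_CORES = {
            "GeForce GTX 980 Ti": 2816,
            "GeForce GTX TITAN X": 3072,
            "GeForce GTX 1080 Ti": 3584,
            "GeForce RTX 2080 Ti": 4352,
        }

        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=name,memory.total,driver_version",
             "--format=csv,noheader"],
            text=True,
        )
        for line in out.strip().splitlines():
            name, mem, driver = [field.strip() for field in line.split(",")]
            cores = CUDA_CORES.get(name, "unknown")
            print(f"{name}: {mem} VRAM, driver {driver}, {cores} CUDA cores")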



  • outrider42 Posts: 3,679
    edited August 2019
    When it comes to rendering in Iray, memory bandwidth doesn't really come into play. The entire scene loads into VRAM at once at the start of the render and stays in memory for the duration. So the only speed-up you might see is in the initial load of the scene, and that's about it. You can test this by altering your memory clock with an app that allows it, like MSI Afterburner or EVGA PrecisionX: overclocking might be tough, but downclocking is quite easy. This is different from a game, which can constantly request new data on the fly; Iray is not like a video game.

    CUDA changes with every generation, and it generally gets better: each core on a newer GPU can do more work than one on an older GPU. An easy example is comparing a 680 to a 1050 Ti. The 680 has exactly double the CUDA cores, which at first glance sounds great. But in reality the two cards perform about the same, and the 1050 Ti might actually be a bit faster. The difference is that the 680 was released in 2012, the 1050 Ti in 2016.

    At the time of its release, the 680 was the fastest GPU in the world. But a few generations later, that level of performance was the bottom of the mid tier. And the 1060? Both of the 1060s (3GB and 6GB) are faster than the 680.

    The Titans suffer a similar fate, though they have aged just a touch better since most have 12GB of VRAM.

    And here is a final but extremely important piece of information: all Kepler GPUs are nearing the end of their Iray support. In the notes for the latest Iray, Kepler is marked for discontinued support in a "future release of Iray", which likely means the next big Iray release will end Kepler support. Kepler covers all 600 and 700 series GPUs, and even a few low-end cards in the 900 series. So I would advise against buying a Kepler-based card if you intend to keep it for a while; next year Iray may drop support.
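    If you're not sure whether a card is Kepler, the Daz Studio log prints a "CUDA Compute Capability" line when it enumerates GPUs (there's an example excerpt later in this thread); Kepler is compute capability 3.x. A small sketch of the mapping, taken from Nvidia's CUDA documentation:

        # Sketch: translate the "CUDA Compute Capability" value from the
        # Daz Studio log into an architecture family (per Nvidia's docs).
        ARCH_BY_CC_MAJOR = {
            2: "Fermi (already dropped by Iray)",
            3: "Kepler (deprecation announced)",
            5: "Maxwell",
            6: "Pascal",
            7: "Volta/Turing",
        }

        def arch_for(cc: str) -> str:
            return ARCH_BY_CC_MAJOR.get(int(cc.split(".")[0]), "unknown")

        print(arch_for("5.2"))  # the Titan X / 980 Ti logged below: Maxwell
        print(arch_for("3.5"))  # e.g. a GTX 780: Kepler, on borrowed time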
    Post edited by outrider42 on
  • rock63 Posts: 13

    Just got my hands on a 2080 Ti today, combined it with a 1080 Ti, and the result was Total Rendering Time: 43.64 seconds. Pretty happy with that!

  • outrider42 Posts: 3,679
    I'd be interested in seeing how that pairing fares in more scenes.
  • rock63 Posts: 13
    edited August 2019

    What would you like me to try?


    Post edited by rock63 on
  • rock63 Posts: 13
    edited August 2019

    Total Rendering Time: 4 minutes 48.50 seconds (1x 2080 Ti)

    Total Rendering Time: 3 minutes 16.85 seconds (1x 2080 Ti + 1x 1080 Ti)

    Image on the right was with two cards.

    [Attachments: outrider2080ti.jpg, outrider 2 cards.jpg]
    Post edited by rock63 on
  • alan bard newcomer Posts: 2,100
    edited August 2019

    What's the link to the newer test scene, the one right above?
    ---
    It's not the one on the first page.

    Here's a start on a comparison chart...
    The GPU benchmark figure would be a generic one, as opposed to a 3D-only one.
    ----
    But it's also obvious that the Daz version makes a big difference: from 4.10 to 4.11, my test sped up from about 4.5 minutes to 1.5.
    ----
    Is it possible to kill the preview? Is it created only on the video card, or is data being passed back and forth between Daz and the card to show the render progress? And in that case, does anyone have a way to compare time on an SSD vs an HDD?

    [Attachment: gpu benchmark list.jpg]
    Post edited by alan bard newcomer on
  • p0rt Posts: 217

    2 or 3 minutes on an AMD FX 8300 @ 4.6GHz with 2x 1070s, SLI enabled in the Nvidia control panel and OptiX disabled in Daz.

  • Does SLI make a difference?
    It's enabled in the Nvidia control panel, but I'm guessing you have to have the physical bridge installed for that?

  • Just realized that my memory usage only jumped by about 3,000MB for the test, but now I wonder if I should have had a few other things turned off?
    ---
    I do tend to have lots of stuff going on.
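    One way to check is to watch VRAM usage before you hit render. A throwaway sketch, assuming nvidia-smi is on your PATH:

        # Sketch: print per-GPU VRAM usage once a second for ten seconds,
        # to see how much other open apps are already holding.
        import subprocess
        import time

        for _ in range(10):
            print(subprocess.check_output(
                ["nvidia-smi", "--query-gpu=memory.used,memory.total",
                 "--format=csv,noheader"],
                text=True,
            ).strip())  # one line per GPU, e.g. "3050 MiB, 6144 MiB"
            time.sleep(1)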

    [Attachment: windows open.jpg]
  • outrider42 Posts: 3,679
    Nvidia itself states that SLI is not recommended for Iray and can hurt rendering performance.

    The possible exception to this rule is NVLink on higher-end RTX cards, if it pools the VRAM for scenes larger than a single card can handle.

    The link to that other scene is in my sig.

    Using other programs can certainly alter things: anything using the GPU will affect render times as well as available VRAM. For benchmarking purposes it is best practice to close other apps before running a bench. A browser might not be an issue, but some web pages will access the GPU, like videos and browser-based games.
  • RayDAnt Posts: 1,120
    edited August 2019
    alan bard newcomer said:

    But it's also obvious that the Daz version makes a big difference: from 4.10 to 4.11, my test sped up from about 4.5 minutes to 1.5.

    If you check the size of the Iray install folder inside the Daz Studio installation location for each version, 4.11's is approximately 4x larger because of how much internal code was changed or rewritten between them. They almost might as well be different render engines altogether. Hence the time differences.


    Is it possible to kill the preview? Is it created only on the video card, or is data being passed back and forth between Daz and the card to show the render progress?

    The preview makes virtually no difference performance-wise. But if you wish to render without it, go to:

    Render Settings > Editor tab > General > Render Target

    and change it from the default value ("New Window") to "Direct To File".


    And in that case, does anyone have a way to compare time on an SSD vs an HDD?

    Do your renders using the SSD and then the HDD, and make sure to save the contents of the DS log file both times. Then look for the following two lines of info:

    2019-08-01 01:50:39.203 Total Rendering Time: 39 minutes 13.58 seconds

    2019-08-01 01:52:08.378 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1050): 1800 iterations, 4.792s init, 2345.589s render

    Subtract the render time on the second line (2345.589s here) from the Total Rendering Time on the first (converted into seconds). The difference is the amount of time your specific system configuration spent initializing and moving data from disk to GPU rather than rendering.
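    For anyone doing this comparison repeatedly, a minimal Python sketch of that subtraction (the regexes assume the log format quoted above):

        # Sketch: pull the two times out of a saved DS log excerpt and
        # report how much wall-clock time was setup rather than rendering.
        import re

        log = """
        2019-08-01 01:50:39.203 Total Rendering Time: 39 minutes 13.58 seconds
        2019-08-01 01:52:08.378 ... (GeForce GTX 1050): 1800 iterations, 4.792s init, 2345.589s render
        """

        m = re.search(r"Total Rendering Time: (?:(\d+) minutes? )?([\d.]+) seconds", log)
        total_s = int(m.group(1) or 0) * 60 + float(m.group(2))
        render_s = float(re.search(r"([\d.]+)s render", log).group(1))

        print(f"total {total_s:.2f}s, render {render_s:.2f}s, "
              f"overhead {total_s - render_s:.2f}s")  # overhead 7.99s here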

    Post edited by RayDAnt on
  • p0rt Posts: 217

    Does SLI make a difference?
    It's enabled in the Nvidia control panel, but I'm guessing you have to have the physical bridge installed for that?

    It makes rendering 100x faster on my PC, but then SLI works best if you have two cards which are the same. OptiX acceleration is more for server rendering than desktop PCs.

  • alan bard newcomer Posts: 2,100

    Just noticed this info in the log file:

    2019-08-04 21:09:10.197 NVidia Iray GPUs:

    2019-08-04 21:09:10.203 GPU: 1 - GeForce GTX TITAN X

    2019-08-04 21:09:10.203 Memory Size: 11.9 GB

    2019-08-04 21:09:10.203 Clock Rate: 1076000 KH

    2019-08-04 21:09:10.203 Multi Processor Count: 24

    2019-08-04 21:09:10.203 CUDA Compute Capability: 5.2

    2019-08-04 21:09:10.203 GPU: 2 - GeForce GTX 980 Ti

    2019-08-04 21:09:10.203 Memory Size: 5.9 GB

    2019-08-04 21:09:10.203 Clock Rate: 1190000 KH

    2019-08-04 21:09:10.203 Multi Processor Count: 22

    2019-08-04 21:09:10.203 CUDA Compute Capability: 5.2
    -----
    I wonder how other cards read at this point.
    This is the data generated when first loading a scene, so is it Nvidia rating the card?

  • Somebody please post benches of the RTX 2060 Super and 2070 Super with the default SY scene, thank you.

  • alan bard newcomer Posts: 2,100

    If the default is the one on the first page of the thread, then with a Titan X and a 980 Ti I did it in one minute and 46s in 4.11.
    ----
    Earlier, in 4.10, the scores were 4 minutes 57s for the Titan, 5 minutes 3 secs for the 980 Ti, and 4 min 22 secs for the CPUs (dual Xeon 2630 v3s, so 16 cores/32 threads, which should be better than a single CPU).
    -----
    But in general, running the CPU alongside the cards doesn't seem to make much difference.

  • alan bard newcomer Posts: 2,100

    Updated list of GPUs. The benchmark figures are from a generic GPU benchmark, and they pretty much stack up as expected, but there are large variations in CUDA counts etc.
    ---
    So it would be interesting to see a test result from each of these cards by itself; see the sketch below.
    --
    Note: the mobile versions of all these cards seem to score at least 10% lower than the desktop versions.
    ---
    So while in general usage the cards stack pretty much by which is supposed to be the higher card, the number of CUDA cores may play a factor in rendering.
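    For instance, dividing the generic scores posted earlier by each card's CUDA core count (core counts from Nvidia's specs) suggests per-core throughput in a generic benchmark has stayed fairly flat across these generations:

        # Sketch: generic benchmark score per CUDA core, using the scores
        # listed earlier in the thread and core counts from Nvidia's specs.
        cards = {
            "GTX 980 Ti":  (11400, 2816),
            "GTX TITAN X": (13665, 3072),
            "GTX 1080 Ti": (14200, 3584),
            "RTX 2080 Ti": (16800, 4352),
        }
        for name, (score, cores) in cards.items():
            print(f"{name}: {score / cores:.2f} points per core")
        # roughly 4.0, 4.4, 4.0, 3.9; note that a generic benchmark largely
        # ignores the RT cores, which is where the Iray 4.12 gains come from.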



    [Attachment: video card list 2.jpg]
  • alan bard newcomer Posts: 2,100
    edited August 2019

    The effects of the gamma rays on the man-in-the-moon marigold rendering system:
    ===
    Benchmark three days ago for the page scene in 4.11, Titan X and 980 Ti. OptiX is on for all of these, as the first time I tried it, it cut the time a lot.
    It has been suggested that OptiX is for render farms... I think maybe the program treats the extra card as a render farm.
    Remember that SLI is not recommended; would that be because the second card isn't seen as a second card?
    One minute and 46s in 4.11.
    Today:
    three minutes and 24s on the Titan X
    four minutes and 14s on the 980 Ti
    two minutes and 30s on both of them together, i.e. 44 seconds or around 40% slower than the other day.
    Is Mercury in retrograde? Is my electricity not as energetic?
    I am preparing an article for the Journal of Irreproducible Results.
    ----
    I did a test on the CPUs only... the dual Xeon 2630 v3s (16 cores/32 threads) took a mere 23 minutes.
    Okay, I love my cards.
    ---
    The listing of cards is in a spreadsheet, so I was able to plug in the data for the benchmarks.
    I think I can handle a variety of two-card setups by using a layout like this (sketched below):
    ====
    The first time in a row is for the card the line is for; then "w/" names a second card, with the combined time in the next column; the next "w/" is another card, with the time for it plus the base card following that card.
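    A sketch of that row layout, using the times reported in this post (converted to seconds):

        # Sketch: one chart row per base card, followed by "w/ <card>"
        # pairings and the combined time for each pairing.
        pairings = [
            ("TITAN X", 204, [("w/ 980 Ti", 150)]),
            ("980 Ti", 254, [("w/ TITAN X", 150)]),
        ]
        for base, solo, combos in pairings:
            cells = [base, f"{solo}s"]
            for label, combined in combos:
                cells += [label, f"{combined}s"]
            print(" | ".join(cells))
        # TITAN X | 204s | w/ 980 Ti | 150s
        # 980 Ti | 254s | w/ TITAN X | 150s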


    [Attachment: video card with benchmarks 2.jpg]
    Post edited by alan bard newcomer on
  • kattyg911 Posts: 7

    Heya folks, Daz 3D got RTX support with the latest Nvidia Studio driver. Can someone test it?
    https://www.nvidia.com/en-us/design-visualization/rtx-enabled-applications/

  • outrider42 Posts: 3,679
    kattyg911 said:

    Heya folks, Daz 3D got RTX support with the latest Nvidia Studio driver. Can someone test it?
    https://www.nvidia.com/en-us/design-visualization/rtx-enabled-applications/

    That has been tested, right here in this thread and in a couple of others. We cannot test it precisely, because Daz does not offer any kind of on/off switch for the RT cores, so we have no way of seeing exactly how much they are adding; it is simply on all the time. This is only in the new 4.12 beta.

  • kattyg911 Posts: 7
    edited August 2019
    kattyg911 said:

    Heya folks, Daz 3D got RTX support with the latest Nvidia Studio driver. Can someone test it?
    https://www.nvidia.com/en-us/design-visualization/rtx-enabled-applications/

    outrider42 said:

    That has been tested, right here in this thread and in a couple of others. We cannot test it precisely, because Daz does not offer any kind of on/off switch for the RT cores, so we have no way of seeing exactly how much they are adding; it is simply on all the time. This is only in the new 4.12 beta.

    But 4.12 is out, you can download it now.


    Post edited by kattyg911 on
  • outrider42 Posts: 3,679
    kattyg911 said:

    kattyg911 said:

    Heya folks, Daz 3D got RTX support with the latest Nvidia Studio driver. Can someone test it?
    https://www.nvidia.com/en-us/design-visualization/rtx-enabled-applications/

    outrider42 said:

    That has been tested, right here in this thread and in a couple of others. We cannot test it precisely, because Daz does not offer any kind of on/off switch for the RT cores, so we have no way of seeing exactly how much they are adding; it is simply on all the time. This is only in the new 4.12 beta.

    But 4.12 is out, you can download it now.

    Yes, my post was in the past tense; we have been on this since day one. Please look back through this thread to find some answers. There are also other threads that explore the topic; use Google to search for them, not the Daz forum search, which is horrible.

  • RayDAnt Posts: 1,120
    edited August 2019
    kattyg911 said:

    Heya folks, Daz 3D got RTX support with the latest Nvidia Studio driver. Can someone test it?
    https://www.nvidia.com/en-us/design-visualization/rtx-enabled-applications/

    Yeah, we know. E.g., see this thread for some comparative numbers. Nvidia's press release today is technically old news.

    Post edited by RayDAnt on
  • Robinson Posts: 751
    edited August 2019

    As posted in another thread, my previous times for the test scene were:

    GPU only, 1x GeForce RTX 2070 = 2 minutes 21.14 seconds (OptiX Prime off)
    GPU only, 1x GeForce RTX 2070 = 1 minute 49.11 seconds (OptiX Prime on)
    GPU only, 1x GeForce RTX 2070 + 1x GeForce GTX 970 = 1 minute 45.78 seconds (OptiX Prime off)
    GPU only, 1x GeForce RTX 2070 + 1x GeForce GTX 970 = 1 minute 19.47 seconds (OptiX Prime on)

    With 4.12 (2070 only) they were:

    GPU only, 1x GeForce RTX 2070 = 1 minute 29.28 seconds (OptiX Prime off)
    GPU only, 1x GeForce RTX 2070 = 1 minute 34.58 seconds (OptiX Prime on)

    Don't turn on OptiX Prime if you've got an RTX GPU (though I'm wondering if those 5 seconds were due to some caching or similar, in which case the OptiX Prime setting may be irrelevant). Also, the 4.12 beta is around 63% faster for me.

    Edit: Confirmed, the OptiX Prime setting is irrelevant; it was slower because that was the first bench I did (probably cached shader compilation or similar). Running it again I get:

    GPU only, 1x GeForce RTX 2070 = 1 minute 27.3 seconds (OptiX Prime on)

    Post edited by Robinson on