Iray Starter Scene: Post Your Benchmarks!


Comments

  • ZarconDeeGrissom Posts: 5,412
    edited September 2018

    ebergerly, the Daz Studio beta apparently does; some have been running the SY and Outrider Iray test scenes on the RTX 2080 Ti for some time. It may be running in TitanV/Tesla V100 driver mode, but it is working nonetheless.

    I myself expected Iray/CUDA support not to happen for a few months, just like at the GTX 10 launch.

    Post edited by ZarconDeeGrissom on
  • ZarconDeeGrissom Posts: 5,412
    edited September 2018
    ebergerly said:

    BTW it would be great if someone could generate a spreadsheet to summarize the RTX results for the Outrider scene like I did below for the SickleYield one. It gets awfully confusing trying to keep everything straight. Also, I think it's important that everyone using that scene has identical settings, so I think that needs to be clarified if it hasn't been already.

    Also, I have a sneaking suspicion that when we're getting down to the 1 minute range of render times the Sickleyield scene is starting to age, and might not be giving a good idea of actual render times but rather some other setup-related times. Not sure, but something to keep in mind. 

    And I also think we really need to take the results on RTX now with a huge grain of salt since all the pieces aren't in place yet software-wise. 

    That is why I tried to make an updated benchmark in the first place. Users were already breaking under a minute with multiple GPUs, and it seemed to me like the scene was hitting a speed limit of Iray itself. The original uses a Genesis 2 model, and the surfaces are converted to Iray because G2 never shipped with Iray out of the box. I used Genesis 8 and made a few alterations to the surfaces; I think it has the new dual lobe settings. I gave the outfit weighted top coat glossy settings. I wanted the camera closer to the model since closeups take longer (less texture compression), and the ball of light gives off a bright light right next to her skin for SSS calculations (also why her nails kind of glow... I just liked it, it's magic, LOL). So I tried to give the engine more stuff to do.

    And then I added bloom, mostly because I thought it looked nice, but it also gives the engine a bit more to calculate. The room overall is still quite dark, which Iray has always hated; it takes longer for Iray to calculate the light bounce in a darker scene. If you add a light to the scene or brighten it up, it runs much faster. The same goes for the original SY scene. I tried to write out a number of things to do for the bench in my post, but I didn't think about it getting buried in the thread. I think I will write this into the gallery post, and maybe put it in a sig.

    It shouldn't be too much of a task to make a table for this scene since it only goes back a few pages. Sometimes people don't list which scene they used, though.

    Generally speaking though, Iray seems to scale very well. I have run a variety of GPUs at this point, and the percentage gained with one card over another is pretty consistent across the board, regardless of the scene makeup. If you have a GPU that runs one scene 30% faster than another GPU, it will probably stay about 30% faster for nearly all scenes. However, that was because all Iray GPUs ran off CUDA. Soon we will add the RT cores to the mix, and I am still convinced Tensor is doing something in the render process; otherwise, how do we explain the times that are nearly double the 1080 Ti's?

    Speaking of Tensor, I still want to see how Turing handles the AI denoiser. I would expect the AI denoiser to be much faster with a Turing card. Both my scene and the SY scene have noticeable grain in some spots. Would anybody like to test out the denoiser on one or both of these? I don't have the current beta myself.

    That would be really nice to see vs. the GTX 10 and GTX 9 series if there are any. I've been too busy to cull through this thread for scores on various cards for some time. Oh, and I feel that Iray via CUDA is going to happen; Iray with RT and Tensor performance boosts I'm not so sure of. RT cores can be very handy, the code just needs to be written to use them. Tensor, from what I've read elsewhere, tends to be more fuzzy-math based (not great for calculating exact numbers). As I had written elsewhere, it took over three months for GTX 10 cards to get Iray support going when they launched, and it 'only' took the Adobe Premiere dev team over ten years to add Intel iGPU acceleration to video encoding (for an iGPU that Intel CPUs have had since the Core Duo days, lol). Judging by that, RT and Tensor acceleration for Iray may take a few months to work out, or it may take a competitive threat to Nvidia to convince them it would be good to do. Don't get me wrong, I would love Iray to be better, I just have doubts AMD will ever pose that kind of threat to Nvidia to make them want to do it.

    I also like your variant of the Iray bench, because it uses more of what Iray shaders offer and obviously brings a GPU to its knees. I also appreciate the original SY test as it was made for less capable CPUs. Both tests appear to be useful. If only I could convince GN to make a Monkey head test scene equivalent for Iray, lol. Hmmmm.

    Post edited by ZarconDeeGrissom on
  • ebergerly Posts: 3,255

    ebergerly, the Daz Studio beta apparently does; some have been running the SY and Outrider Iray test scenes on the RTX 2080 Ti for some time. It may be running in TitanV/Tesla V100 driver mode, but it is working nonetheless.

    I myself expected Iray/CUDA support not to happen for a few months, just like at the GTX 10 launch.

    Yes, it will render the scenes, but it does not take advantage of the new Turing architecture. 

  • ZarconDeeGrissom Posts: 5,412
    edited September 2018

    Ah, yes, as I was just fighting Grammarly over fixing the text in my last post, lol. Yes, Iray via CUDA vs. Iray on RT and Tensor cores, lol.

    To save this thread the mayhem, my thoughts are over in the other thread.

    Post edited by ZarconDeeGrissom on
  • jonascs Posts: 35
    edited September 2018

    Did I just use 151% of my VRAM in Iray mode?

    https://www.daz3d.com/forums/discussion/270866/ot-new-nvidia-cards-to-come-in-rtx-and-gtx-versions-the-reviews-are-trickling-in/p22

    Edit: most likely not... it seems to be a reading error in GPU-Z.

    Post edited by jonascs on
  • Ran the 2018 test scene with different settings in GPU Tweak II.

    All renders: Two RTX 2080 Ti with NVLink, Optix Off, SLI setting to OFF.

    Silent : Total Rendering Time: 2 minutes 45.65 seconds

    Gaming : Total Rendering Time: 2 minutes 43.87 seconds

    OC : Total Rendering Time: 2 minutes 43.67 seconds

    Guess there is more to tweak if I'm to do it manually.

    But I'm surprised that the difference between Gaming and OC is only 0.20 seconds.

     

  • carbunclemax Posts: 88
    edited October 2018

    I did this test, as well as the Octane benchmark, with both graphics cards combined. The Octane test gave me a score of 274 for the two 1080s.

    I also did tests with the above DAZ scene and found that it helped a little to have both GPUs and CPU selected for OptiX Prime acceleration even though I thought this would only affect the behaviour of the viewport.

    For instance, with no OptiX Prime acceleration my full rig (CPU: i7 5930K and two Nvidia GeForce GTX 1080 cards) took 4 mins and 55 secs, whereas with full OptiX it was 3 mins and 3 secs. So I have a table on that basis:

    1 CPU + 2 GPUs: 3 mins 3 secs, 4127 iterations total (CUDA/CPU 3826/342 iterations)

    0 CPU + 2 GPUs: 3 mins 9 secs, 4221 iterations

    1 CPU + 1 GPU: 3 mins 7 secs, 4179 iterations total (CUDA/CPU 3873/347 iterations)

    1 CPU + 0 GPUs: 30 mins 14 secs, 4158 iterations [14 mins to 90% convergence]

    Most of the heat I get in my system is when I have at least one of the GPUs running. Even when the CPU is running at 100% there was little heat generated compared to running both GPUs.

    Also, I tried rendering a bigger image with my full machine, using the same scene file but a larger pixel count in the camera frame.

    569 x 740 pixels, or around 400k pixels compared to the original 400 x 520 (200k pixels), took 5 mins and 5 secs but used no more RAM [still at around 30% of 32GB].

    985 x 1280 [1,200k pixels] took 14 mins 59 secs; still no more RAM used according to Windows Task Manager.
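
    As a rough sanity check of how the time scales with pixel count, here is a minimal Python sketch using the three frame sizes and times quoted above (it assumes the 3 min 3 sec full-rig result from the table was at the original 400 x 520):

        # Seconds per megapixel for the three frame sizes quoted in this post.
        # The 400 x 520 time is assumed to be the 3 min 3 sec full-rig result above.
        runs = [
            ("400 x 520",  400 * 520,  3 * 60 + 3),
            ("569 x 740",  569 * 740,  5 * 60 + 5),
            ("985 x 1280", 985 * 1280, 14 * 60 + 59),
        ]
        for label, pixels, seconds in runs:
            megapixels = pixels / 1e6
            print(f"{label}: {megapixels:.2f} MP, {seconds} s, {seconds / megapixels:.0f} s/MP")

    On those numbers the cost settles at roughly 700-900 seconds per megapixel, i.e. the time grows more or less in proportion to pixel count.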

    Thanks for posting this thread, SickleYield. I am pretty happy with my results, and it may be worth exploring specialised render software to further increase the capacity of my current machine. I should also probably improve the airflow around my GPUs to improve cooling. Also thanks to UTF for the information on the Octane benchmark.

    Clearly specialised software can help with render times and make the best use of your machine, but DAZ is pretty good to learn and free. I note that I have the Windows 10 Creators edition, and that may affect how well DAZ uses my equipment as well. I also expect the mainboard has a big effect on render times because of the communication buses etc., although RAM doesn't seem to be a big factor at the size of files I usually render. I built my own machine, so I tend to use new parts, but stuff I can pick up quickly at local suppliers. Sometimes this means the mainboard is a little dated, but generally I'm very happy because I saved probably $1000 or more on my build, even if it is not perfect.

     

    Post edited by carbunclemax on
  • I bought an Nvidia RTX 2080 and tested whether the ray-tracing cores bring Iray to real-time performance, or at least help the editor's Iray mode get close to real time.

    Test setup: CPU Intel Core i7 8700K, Z370 chipset, 32GB RAM, SSD with 2GB/s transfer rate (that matters because texture loading is much faster), DAZ 4.10 on Windows 10 64-bit, CUDA 9.1.
    Rendering of the Iray Starter scene, OptiX Prime on:

    CPU + GPU: >8 min. How disappointing! With the 1070 it rendered in 3:11 min! But I think maybe the Intel security patches slowed down the performance.
    8700K + 2080 + 1070: First Picture: 0:20, 90%: 0:34, 95%: 2:00, 100%: 3:10

    But with the DAZ 4.11 Beta it's much, much faster - it uses an Iray codebase that requires at least a Kepler chipset:
    8700K + 2080:             First Picture: 0:05, 90%: 0:09, 95%: 0:59, 100%: 1:21
    8700K + 2080 + 1070: First Picture: 0:05, 90%: 0:06, 95%: 0:08, 100%: 0:46

    CUDA 10 had the same performance as CUDA 9. Adding the CPU to the rendering tends to reduce performance.

    So the conclusion is that the software improvement has more effect than the new graphics cards do. Performance also scales well when you add a GTX 1070 to the RTX 2080.
    It's obvious: currently neither CUDA 9 nor CUDA 10 nor DAZ Studio uses the new ray-tracing cores. So it's just evolution, not revolution. As I read here in the forum, only OptiX will support the ray-tracing cores, but Iray uses OptiX Prime, which only uses the normal cores. It's a pity! Nvidia failed to get the software ready. So we can only hope that it will come soon, but for Iray nobody has committed to it yet :-(

    At least, if you use the newest graphics cards with DAZ 4.11, you get nice performance. When using the 2080 + 1070, the Iray shading in the editor feels almost smooth and you can see a picture that gives you an impression of the scene in about 3 seconds.

     

  • Mendoman Posts: 400
    edited October 2018

    Got my new 2080 Ti today, so I decided to run a few tests. My old card is a Titan X (Maxwell). I used SickleYield's test scene and one of my own. My own has 3 G3 characters, a 16k HDRI and a bunch of other props. I ran tests in DS 4.10 and the 4.11 beta with my old card, and just the 4.11 beta with my new card, since 4.10 does not work with Turing cards.

     

    OS: Windows 8.1

    CPU: i7-4790K 4.00 GHz

    Memory: 16 GB

     

    GPU: TITAN X (Maxwell) 12GB

    SICKLEYIELD TEST SCENE DS 4.10: 2018-10-03 17:17:56.857 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX TITAN X): 5000 iterations, 0.282s init, 167.993s render

     

    SICKLEYIELD TEST SCENE DS 4.11 BETA: 2018-10-03 17:54:58.711 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX TITAN X): 5000 iterations, 0.161s init, 162.242s render

     

    MY OWN TEST SCENE DS 4.10: 2018-10-03 15:42:05.890 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX TITAN X): 1017 iterations, 66.354s init, 1079.686s render

     

    MY OWN TEST SCENE DS 4.11 BETA: 2018-10-03 18:31:58.716 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX TITAN X): 1085 iterations, 27.502s init, 1781.088s render

     

     

    GPU: 2080ti

    SICKLEYIELD TEST SCENE DS 4.11 BETA: 2018-10-03 20:37:24.052 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti): 5000 iterations, 0.119s init, 64.704s render

     

    MY OWN TEST SCENE DS 4.11 BETA: 2018-10-03 20:56:40.508 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080 Ti): 1105 iterations, 27.384s init, 917.414s render

     

    I'm not quite sure why my own test scene renders so much faster in DS 4.10 than in 4.11 with the same GPU. Maybe the beta still has some optimization to do, or maybe there are some shader changes where my custom shaders are not behaving nicely. I don't know, I'm just guessing here. I hope it gets fixed by release.
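
    For what it's worth, dividing iterations by render seconds in the log lines above separates "more iterations" from "slower per iteration"; a quick Python sketch with those numbers:

        # Iterations per second for the custom scene on the Titan X, using the
        # iteration counts and render times from the log excerpts above.
        runs = {
            "DS 4.10":      (1017, 1079.686),
            "DS 4.11 beta": (1085, 1781.088),
        }
        for version, (iterations, seconds) in runs.items():
            print(f"{version}: {iterations / seconds:.2f} iterations/s")

    That works out to about 0.94 iterations/s in 4.10 versus about 0.61 in the beta, which suggests this scene is also slower per iteration, not just running extra iterations to converge.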

     

    PS: I updated my drivers after I installed my new card, so the Nvidia drivers are a bit different. I didn't check what my old driver version was, but it was a couple of months old and did not recognize my new card at all, so I had to update. Installing the new drivers was a little confusing though. I think 411.70 is the newest driver at the moment, but I think it does not support Windows 8.1. There seems to be support for Windows 7, though... Luckily 411.63 seems to work with Windows 8.1 and also supports the new RTX 20-series cards, so I'm running with those now.

    Post edited by Mendoman on
  • outrider42 Posts: 3,679
    Looking at the iteration count, your scene ran a lot more iterations in 4.11 compared to 4.10. It took 68 more iterations to converge. I wrote earlier that Iray may change how it calculates convergence in each version, and this may throw off the iteration count. That's why it is better to render to a set number of iterations to get a more accurate idea of performance. Set your scene to cap at maybe 500 iterations and see how that compares as that will be a more equal bench. Also, save the images and compare to see if they are equal quality. If 4.11 ran more iterations in your test, it stands to reason that the render 4.11 produces should be a bit cleaner.
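
    If anyone wants to compare capped runs that way, here is a minimal Python sketch (not an official tool) that pulls per-device iteration counts and render times out of a Daz Studio log and reports iterations per second. The line format is taken from the log excerpts posted in this thread, so the regex may need adjusting for other Iray versions, and the log path is just a placeholder:

        import re

        # Matches log lines like:
        #   "... CUDA device 0 (GeForce GTX TITAN X): 5000 iterations, 0.282s init, 167.993s render"
        PATTERN = re.compile(
            r"CUDA device (\d+) \((.+?)\): (\d+) iterations, ([\d.]+)s init, ([\d.]+)s render"
        )

        def summarize(log_path):
            with open(log_path, encoding="utf-8", errors="ignore") as log:
                for line in log:
                    match = PATTERN.search(line)
                    if match:
                        device, name, iters, init, render = match.groups()
                        rate = int(iters) / float(render)
                        print(f"device {device} ({name}): {iters} iterations in "
                              f"{render}s render -> {rate:.1f} iterations/s")

        # summarize("log.txt")  # point this at your Daz Studio log file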

    You are not the first to observe 4.11 rendering slower.
  • linvanchene Posts: 1,303
    edited October 2018

    DAZ_Rawb shared the Iray Render Test 2 scene on September 21:

    https://direct.daz3d.com/forums/discussion/comment/3969916/#Comment_3969916

    - - -

    The test scene was run with several cards.

    The DAZ Studio log files were shared with DAZ3D staff in Request #282031
    - - -


    Test system

    Win 10 Pro 64bit
    Intel Core i7 5820K
    ASUS X99-E WS
    64 GB RAM

    Asus GTX 1080 STRIX A8G
    Asus GTX 1080 Ti FE
    ASUS GeForce RTX 2080 Ti TURBO

    Nvidia Driver Version: 416.16
    DAZ Studio 4.11.0.231
    Preview Viewport Wire Shaded
    DAZ Studio was closed and restarted between renders.

    - - -

    1x GTX 1080
    Optix Prime Acceleration Off
    11 minutes 22.98 seconds
    OptiX Prime Acceleration On
    10 minutes 13.62 seconds

    - - -

    1x GTX 1080 Ti
    Optix Prime Acceleration Off
    8 minutes 28.28 seconds
    OptiX Prime Acceleration On
    7 minutes 34.25 seconds

    - - -

    2x GTX 1080 Ti
    OptiX Prime Acceleration Off
    4 minutes 27.10 seconds
    OptiX Prime Acceleration On
    4 minutes 1.66 seconds

    - - -

    1x RTX 2080 Ti
    OptiX Prime Acceleration Off
    4 minutes 37.84 seconds
    OptiX Prime Acceleration On
    4 minutes 44.83 seconds

    - - -

    2x RTX 2080 Ti
    OptiX Prime Acceleration Off
    2 minutes 36.17 seconds
    OptiX Prime Acceleration On
    2 minutes 36.98 seconds
    - - -
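
    For anyone skimming, the two-card scaling implied by the times above works out like this (a small Python sketch in seconds, using the faster single-card result for each GPU):

        # Two-card scaling from the times listed above (seconds).
        # For the 1080 Ti the OptiX-on times are used; for the 2080 Ti the
        # OptiX-off times, since those were slightly faster here.
        times = {
            ("GTX 1080 Ti", 1): 7 * 60 + 34.25,
            ("GTX 1080 Ti", 2): 4 * 60 + 1.66,
            ("RTX 2080 Ti", 1): 4 * 60 + 37.84,
            ("RTX 2080 Ti", 2): 2 * 60 + 36.17,
        }
        for card in ("GTX 1080 Ti", "RTX 2080 Ti"):
            print(f"{card}: {times[(card, 1)] / times[(card, 2)]:.2f}x with two cards")

    That is roughly 1.88x for the pair of 1080 Tis and 1.78x for the pair of 2080 Tis.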

    Post edited by linvanchene on
  • stitch Posts: 10

    iray 2018 bench

    EVGA 1080ti SC2, 1670MHz boost (stock)

    i7-2600K

    16GB Ram

    Daz 4.10

     

    Optix, GPU-only: 5 minutes 47.35 seconds

     

    Seems my time is slower than others.

  • stitch said:

    iray 2018 bench

    EVGA 1080ti SC2, 1670MHz boost (stock)

    i7-2600K

    16GB Ram

    Daz 4.10

     

    Optix, GPU-only: 5 minutes 47.35 seconds

     

    Seems my time is slower than others.

    Some of these folks run multiple video cards whose total aggregate CUDA cores outnumber the 3584 on your 1080 Ti, and with allowance for VRAM they will perform better than the 1080 Ti by itself. And then there are crazy folks like us looking at buying multiple RTX 2080 Tis for rendering. 
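
    For context, the aggregate-core argument looks like this as a rough sketch. The 3584 figure is quoted above; the 4352 for the RTX 2080 Ti is Nvidia's published spec, and raw core counts ignore clock and architecture differences, so treat them only as a loose proxy:

        # Aggregate CUDA cores per rig (rough proxy only; ignores clocks,
        # architecture and VRAM differences).
        CUDA_CORES = {"GTX 1080 Ti": 3584, "RTX 2080 Ti": 4352}
        rigs = {
            "1x GTX 1080 Ti": ["GTX 1080 Ti"],
            "2x GTX 1080 Ti": ["GTX 1080 Ti"] * 2,
            "2x RTX 2080 Ti": ["RTX 2080 Ti"] * 2,
        }
        for name, cards in rigs.items():
            print(f"{name}: {sum(CUDA_CORES[c] for c in cards)} CUDA cores total")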

  • evacyn Posts: 950
    • Twin Founders Edition GeForce GTX 1080Ti (stock)

    • i5-6600K 3.5Ghz (overclocked to 4.1Ghz)

    • 16GB RAM

    • DAZ 4.10

     

    =================

    1x GTX 1080 Ti
    OptiX Prime Acceleration On
    2 minutes 6.1 seconds (95.04%/04966 iterations)

    1x GTX 1080 Ti
    OptiX Prime Acceleration Off
    3 minutes 34.38 seconds (95.01%/4950 iterations)

    =================

    2x GTX 1080 Ti
    OptiX Prime Acceleration On
    1 minutes 4.39 seconds (95%/4951 iterations)

    2x GTX 1080 Ti
    OptiX Prime Acceleration Off
    1 minutes 57.53 seconds (95%/4961 iterations)

     

  • evacyn Posts: 950
    edited October 2018

    And with the Iray Render Test 2 scene DAZ_Rawb shared on September 21:

    https://direct.daz3d.com/forums/discussion/comment/3969916/#Comment_3969916

    2x GTX 1080 Ti
    OptiX Prime Acceleration On
    3 minutes 20.9 seconds (95.13%/1973 iterations)

    Post edited by evacyn on
  • tycide Posts: 40

    Hello,

    Here are my specs and results from the benchmark test:

    Intel i7-8086K 4.00GHz liquid cooled, OC @ 5.0Ghz
    32GB DDR4-2666 RAM
    Samsung 960 PRO M.2 PCIe Drive
    2x NVIDIA Quadro P4000 8GB
    Win10 64bit
    DazStudio 4.10

    TL;DR
    Total Rendering Time: 1 minutes 52.71 seconds

    -----
    Here are relevant entries from the log file:
    2018-11-03 01:19:24.322 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Using OptiX Prime ray tracing (3.9.1).
    2018-11-03 01:19:24.384 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Geometry import (13 objects with 336k triangles, 13 instances yielding 336k triangles) took 0.048043
    2018-11-03 01:19:24.384 Iray INFO - module:category(MATCNV:RENDER):   1.0   MATCNV rend info : found 214 textures, 0 lambdas (0 unique)
    2018-11-03 01:19:24.400 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Emitter geometry import (4 light sources with 2112 triangles, 2 instances) took 0.00s
    2018-11-03 01:19:24.416 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CPU: using 10 cores for rendering
    2018-11-03 01:19:24.572 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Rendering with 3 device(s):
    2018-11-03 01:19:24.572 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (Quadro P4000)
    2018-11-03 01:19:24.572 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 1 (Quadro P4000)
    2018-11-03 01:19:24.572 Iray VERBOSE - module:category(IRAY:RENDER):   1.7   IRAY   rend stat : Geometry memory consumption: 9.58072 MiB (device 0), 0 B (host)
    2018-11-03 01:19:24.587 Iray VERBOSE - module:category(IRAY:RENDER):   1.11  IRAY   rend stat : Geometry memory consumption: 9.58072 MiB (device 1), 0 B (host)
    2018-11-03 01:19:34.048 Iray VERBOSE - module:category(IRAY:RENDER):   1.7   IRAY   rend stat : Texture memory consumption: 666.891 MiB (device 0)
    2018-11-03 01:19:34.048 Iray VERBOSE - module:category(IRAY:RENDER):   1.11  IRAY   rend stat : Texture memory consumption: 666.891 MiB (device 1)
    2018-11-03 01:19:34.064 Iray INFO - module:category(IRAY:RENDER):   1.7   IRAY   rend info : Light hierarchy initialization took 0.02s
    2018-11-03 01:19:34.064 Iray VERBOSE - module:category(IRAY:RENDER):   1.11  IRAY   rend stat : Lights memory consumption: 365.066 KiB (device 1)
    2018-11-03 01:19:34.064 Iray VERBOSE - module:category(IRAY:RENDER):   1.7   IRAY   rend stat : Lights memory consumption: 365.066 KiB (device 0)
    2018-11-03 01:19:34.157 Iray VERBOSE - module:category(IRAY:RENDER):   1.11  IRAY   rend stat : Material measurement memory consumption: 0 B (GPU)
    2018-11-03 01:19:34.157 Iray VERBOSE - module:category(IRAY:RENDER):   1.11  IRAY   rend stat : Materials memory consumption: 206.625 KiB (GPU)
    2018-11-03 01:19:34.204 Iray INFO - module:category(IRAY:RENDER):   1.5   IRAY   rend info : CUDA device 0 (Quadro P4000): Scene processed in 9.634s
    2018-11-03 01:19:34.204 Iray INFO - module:category(IRAY:RENDER):   1.5   IRAY   rend info : CUDA device 0 (Quadro P4000): Allocated 4.76105 MiB for frame buffer
    2018-11-03 01:19:34.204 Iray INFO - module:category(IRAY:RENDER):   1.6   IRAY   rend info : CUDA device 1 (Quadro P4000): Scene processed in 9.638s
    2018-11-03 01:19:34.204 Iray INFO - module:category(IRAY:RENDER):   1.6   IRAY   rend info : CUDA device 1 (Quadro P4000): Allocated 4.76105 MiB for frame buffer
    2018-11-03 01:19:34.282 Iray INFO - module:category(IRAY:RENDER):   1.5   IRAY   rend info : CUDA device 0 (Quadro P4000): Allocated 1.65625 GiB of work space (2048k active samples in 0.067s)
    2018-11-03 01:19:34.282 Iray INFO - module:category(IRAY:RENDER):   1.6   IRAY   rend info : CUDA device 1 (Quadro P4000): Allocated 1.65625 GiB of work space (2048k active samples in 0.069s)
    2018-11-03 01:19:34.438 Iray INFO - module:category(IRAY:RENDER):   1.4   IRAY   rend info : CPU: Scene processed in 9.863s
    2018-11-03 01:19:34.438 Iray VERBOSE - module:category(IRAY:RENDER):   1.2   IRAY   rend stat : Native CPU code generated in 2.55e-007s
    2018-11-03 01:21:15.967 Iray VERBOSE - module:category(IRAY:RENDER):   1.0   IRAY   rend progr: 95.05% of image converged
    2018-11-03 01:21:15.967 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 04970 iterations after 111.395s.
    2018-11-03 01:21:15.967 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Convergence threshold reached.
    2018-11-03 01:21:16.260 Finished Rendering
    2018-11-03 01:21:16.291 Total Rendering Time: 1 minutes 52.71 seconds

  • bk007dragon Posts: 113
    edited November 2018

    Here are my system and test results:
    I took a stock 2010 ASUS CM5571 and put a 620W Seasonic power supply and an Nvidia 1070 Ti in it.

    OS: Windows 7 Home Premium 64bit

    CPU: Pentium Dual Core E5400 @ 2.7 GHz

    MEMORY: 6 GB DDR3 @ 1666 MHz

    GPU: Nvidia 1070 Ti

    Original SY Test:

         GPU ONLY:              3 min 25.58 sec.

         CPU+GPU:               5 min 41.62 sec.

    Outrider42 Test:

         GPU ONLY:               8 min 42.63 sec  84.33% converged

         CPU+GPU:               13 min 0.92 sec 84.39% converged.

     

     

    Moral of the story: if you have a new GPU and an ancient CPU, leave the CPU out of it.

    Post edited by bk007dragon on
  • Emf3D Posts: 4
    edited November 2018

    Okay, I experience 75% longer render times with the Daz Studio 4.11 Beta than with the official 4.10 release... Am I doing something wrong??

    System specs:
    i7 8700k @ 4.4 Ghz x 6 
    24GB RAM @ 2666 Mhz
    2 x GTX 1070 (MSI Armor) @ 1910 Mhz (No overclock)

    Here are my results using Outrider's benchmark scene:
    All tests have the following settings:
    2 x 1070 / OptiX enabled / Optimization: Speed / No OC / No CPU

    Daz Studio 4.10 - 4 min 36 sec
    Daz Studio 4.11 - 8 min 03 sec

    That was with SLI on. I tested with both versions again with SLI off. ~ 2 sec difference. Negligible.

    I was using an older Nvidia driver - version 391.01. So I downloaded the newest driver - version 416.34.
    Daz Studio 4.10 - 4 min 39 sec
    Daz Studio 4.11 - 8 min 18 sec

    No, the newest driver didn't help. In fact, 4.11 runs 15 seconds slower! So I reverted back to the old 391.01 version.

    I could see with MSI Afterburner that Daz Studio 4.10 uses 5-10% more power on my GPUs during renders than Daz Studio 4.11. Also, 4.10's "GPU Frame Buffer usage" (texture loading?) is 5-10% higher than 4.11's during renders. 

    I still prefer version 4.11 because of its denoiser! It gives me renders I can work with within seconds! Translucent materials take forever to resolve, even if the scene renders twice as fast, so the denoiser helps a lot with things like that. Also, I've noticed 4.11 utilizes my memory/RAM better during interactive renders. It's like it's purging my memory every few seconds, discarding unused elements and clearing up more memory. It also utilizes my VRAM better. It's like I can load more objects into my scenes without clogging things up quickly, and I can move around with interactive renders much faster.

    However, 75% slower renders, even with the denoiser switched off, doesn't seem right to me. 
    Am I missing something? Maybe a wrong render setting, etc.? I certainly would like performance similar to Daz Studio 4.10.

    Post edited by Emf3D on
  • outrider42 Posts: 3,679

    Have you put in a support ticket? Maybe they can give suggestions.

    It is a beta, so there may be issues.

    Perhaps you can render a scene in both versions. Render the denoised one in a few seconds, enough to clear up those translucencies and such. And then run the render in 4.10 to get the rest, and photoshop the best elements of both together. That may not be ideal, but still faster than rendering a big scene in 4.11, especially if you are talking about an hours long render going 75% slower in 4.11.

  • outrider42 Posts: 3,679
    edited November 2018
    stitch said:

    iray 2018 bench

    EVGA 1080ti SC2, 1670MHz boost (stock)

    i7-2600K

    16GB Ram

    Daz 4.10

     

    Optix, GPU-only: 5 minutes 47.35 seconds

     

    Seems my time is slower than others.

    Did you use the Iray preview mode in the viewport? That could explain the difference. Another possibility is that your GPU is getting too warm and throttling a bit. I use MSI Afterburner to control my GPU fans more aggressively.

    I just ran the 2018 test with my newly acquired 1080 Ti, which happens to be an EVGA SC2 just like yours. I also kept it at the stock speed, which is already overclocked compared to the Founders Edition 1080 Ti.

    i5-4690 stock

    16GB RAM

    Daz 4.10 Pro, Nvidia driver 399.24

    Total render time: 5 minutes, 34 seconds.

    So after all that, it's only 13 seconds faster. That leads me to believe yours is throttling just a bit. 

    I ran some other tests as well, and found some interesting things with the 4.11 beta. The beta is indeed a lot slower for me in the 2018 bench. It is not 75% slower like Emf3D is experiencing with my bench. HOWEVER, the SY bench is actually a couple seconds FASTER in 4.11. Hmmm, what could be going on here? I'll be posting some results later.

    Also, I was really impressed with how smooth the balls were with denoising. Not a speck of grain on those balls!

    Post edited by outrider42 on
  • Emf3D Posts: 4

    I ran some other tests as well, and found some interesting things with the 4.11 beta. The beta is indeed a lot slower for me in the 2018 bench. It is not 75% slower like Emf3D is experiencing with my bench. HOWEVER, the SY bench is actually a couple seconds FASTER in 4.11. Hmmm, what could be going on here? I'll be posting some results later.

    Interesting! I took a chance and just tried the old SickleYield scene again. Totally different results:

    Daz Studio 4.10 - 1 min 51 sec
    Daz Studio 4.11 - 1 min 32 sec

    So it must be something specific to your benchmark scene that Daz 4.11 doesn't like. That means my own scenes could do the same without my knowing what's causing it, giving me either longer or shorter renders than Daz 4.10. Well, at least I know now the issue is not specific to my system or my 4.11 install. The only difference is I experience this effect much more than others - when Daz 4.11 is slower, it's much slower.

     

  • outrider42 Posts: 3,679

    OK, so you can render the SY scene in the same time, too. That is interesting. So this has me wondering. There are people complaining that their renders take longer in 4.11, while others say they are just as fast. So it might all come down to the scene composition itself. This is quite intriguing. SY's scene uses an older figure, while mine uses G8, but that shouldn't be it. My scene also makes use of the dual lobe specularity, which hers does not since it didn't exist back then. So I wonder if it could be dual lobe specularity causing this?

    I'm going to test this later by switching any dual lobe surfaces back to top coat weight when I get a chance.

    The one other difference is my scene uses bloom. But I would bet most people don't use bloom, and that those complaining about 4.11 being slower are not using it. Still I'll turn it off and test it as well.

    Perhaps we can narrow this down and discover what is causing 4.11 to render longer!

  • Robinson Posts: 751

    Daz Studio Public Beta 4.11
    Stock scene as downloaded, unmodified. All times are to full completion.

    GPU Only 1 x Geforce RTX 2070  = 2 minutes 21.14 seconds (Optix off)
    GPU Only 1 x Geforce RTX 2070  = 1 minutes 49.11 seconds (Optix on)
    GPU Only 1 x Geforce RTX 2070 + 1 x Geforce GTX 970 = 1 minutes 45.78 seconds (Optix off)
    GPU Only 1 x Geforce RTX 2070 + 1 x Geforce GTX 970 = 1 minutes 19.47 seconds (Optix on)

    All to full completion.  My 2070 is the MSI Armor, i.e. a non-binned chip at stock.  When I benchmarked before with just the 970, I didn't get any benefit from OptiX; it made little to no difference.  With the 2070 it appears to help.
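
    The OptiX gain in those numbers is easy to express as a percentage; a quick Python sketch using the four times above (in seconds):

        # OptiX on vs. off speed-up from the times listed above (seconds).
        results = {
            "RTX 2070":           (2 * 60 + 21.14, 1 * 60 + 49.11),
            "RTX 2070 + GTX 970": (1 * 60 + 45.78, 1 * 60 + 19.47),
        }
        for setup, (optix_off, optix_on) in results.items():
            gain = (optix_off - optix_on) / optix_off * 100
            print(f"{setup}: OptiX on is {gain:.0f}% faster")

    That comes out to roughly 23% faster with OptiX on for the 2070 alone and about 25% for the 2070 + 970 pair.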

  • outrider42 Posts: 3,679
    Our first 2070 test! Awesome. And a very interesting result. You are also possibly the first to have a Turing card benefit from OptiX on. I think everybody else has either seen little difference or even lost time with OptiX on, so this is sort of head-scratching.

    The times posted are on par with a 1080ti, and even faster with Optix on. That is a big jump because in gaming the 1080ti still beats the 2070. Hello Tensor cores.

    Would you be interested in running my benchmark in my sig? I'd like to see if those results hold up, if the 2070 can still beat the 1080ti.
  • marco_pasqualini_foto
    edited November 2018

    Daz Studio 4.10.0.123 - Nvidia Driver 399.24 (WHQL)

    Optix on - GPU Only - MSI GeForce GTX 960 4gb + Asus Turbo 1080Ti (downclocked 1452MHz)

    SickleYield scene 1 minute 51.48 seconds

    Outrider scene 4 minutes 41.53 seconds

    DAZ_Rawb test scene 4 minutes 56.97 seconds

    Post edited by marco_pasqualini_foto on
  • nicstt Posts: 11,714
    Emf3D said:

    Okay, I experience 75% longer render times with the Daz Studio 4.11 Beta than with the official 4.10 release... Am I doing something wrong??

    System specs:
    i7 8700k @ 4.4 Ghz x 6 
    24GB RAM @ 2666 Mhz
    2 x GTX 1070 (MSI Armor) @ 1910 Mhz (No overclock)

    Here are my results using Outrider's benchmark scene:
    All tests have the following settings:
    2 x 1070 / OptiX enabled / Optimization: Speed / No OC / No CPU

    Daz Studio 4.10 - 4 min 36 sec
    Daz Studio 4.11 - 8 min 03 sec

    That was with SLI on. I tested with both versions again with SLI off. ~ 2 sec difference. Negligible.

    I was using an older Nvidia driver - version 391.01. So I downloaded the newest driver - version 416.34.
    Daz Studio 4.10 - 4 min 39 sec
    Daz Studio 4.11 - 8 min 18 sec

    No, the newest driver didn't help. In fact, 4.11 runs 15 seconds slower! So I reverted back to the old 391.01 version.

    I could see with MSI Afterburner that Daz Studio 4.10 uses 5-10% more power on my GPUs during renders than Daz Studio 4.11. Also, 4.10's "GPU Frame Buffer usage" (texture loading?) is 5-10% higher than 4.11's during renders. 

    I still prefer version 4.11 because of its denoiser! It gives me renders I can work with within seconds! Translucent materials take forever to resolve, even if the scene renders twice as fast, so the denoiser helps a lot with things like that. Also, I've noticed 4.11 utilizes my memory/RAM better during interactive renders. It's like it's purging my memory every few seconds, discarding unused elements and clearing up more memory. It also utilizes my VRAM better. It's like I can load more objects into my scenes without clogging things up quickly, and I can move around with interactive renders much faster.

    However, 75% slower renders, even with the denoiser switched off, doesn't seem right to me. 
    Am I missing something? Maybe a wrong render setting, etc.? I certainly would like performance similar to Daz Studio 4.10.

    I consistently get 95-100% longer times in 4.11 - and have through the various iterations. This is with the test scenes and my own, which I've also tested.

  • bk007dragon Posts: 113
    edited November 2018

    New system, new test

    CPU:     Intel Core i7 8700

    RAM:     32 GB DDR4 @ 2666 MHz

    GPU:      1x Nvidia 1080 Ti

                 1x Nvidia 1070 Ti

     

    All tests with both 1080 Ti and 1070 Ti, monitor connected to 1070Ti

    Original Scene:   1m 24.7s (no CPU)

                                 1m 24.88s (with CPU) 

    Outrider42 Scene:    3m 25.99s (no CPU)

                                     3m 25.78s (with CPU)

    Post edited by bk007dragon on
  • neumi1337 Posts: 18
    edited November 2018

    A question for those who own two 2080 Tis with an NVLink bridge: could someone please test a scene with a VRAM requirement beyond a single card (e.g. above 12GB) under Daz Studio 4.11 with Iray (and SLI on)? Then we would have a clear answer as to whether VRAM pooling/shared memory works or not on the Turing architecture.

    Quote from the website: https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards

    Important note: It seems like the regular GPU memory reporting API provided by NVIDIA currently (at the time of this writing) does not work correctly in SLI mode. This means that programs like GPUz, MSI Afterburner, nvidia-smi, etc. might not show accurate memory usage for each GPU. Knowing this, we have modified the memory statistics shown in the V-Ray frame buffer so you can track actual GPU memory usage there. We expect NVIDIA will correct these reporting issues in the future.

    Do a first run with one GPU only and SLI mode off to prove that the scene is limited by VRAM; the rendering should drop to CPU render mode. Any monitoring software would show the wrong VRAM usage (see the note above).

    In a second run you would need to activate sli mode and select both cards for rendering.

    Thx in advance.
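
    For the first run (single GPU, SLI off), where the reporting caveat above should not apply, something like the following can log VRAM use while the test render runs. A minimal Python sketch that assumes nvidia-smi is on the PATH; per the note above, treat any readings taken in SLI mode as indicative only:

        import subprocess
        import time

        # Poll per-GPU memory use every few seconds while a test render runs.
        # Per the Chaos Group note above, readings taken in SLI mode may be
        # inaccurate, so rely on whether the render drops to CPU mode instead.
        def log_vram(interval_s=5, samples=60):
            for _ in range(samples):
                result = subprocess.run(
                    ["nvidia-smi", "--query-gpu=index,memory.used,memory.total",
                     "--format=csv,noheader"],
                    capture_output=True, text=True, check=True,
                )
                print(result.stdout.strip())
                time.sleep(interval_s)

        # log_vram()  # run from a separate console alongside the render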

    Post edited by neumi1337 on
  • Just benched my new rig.. finally..

    Threadripper 2990WX and 3x2080ti's.  Didn't bench with the CPU since it didn't make much of a difference (about 10secs off the first run of Outriders scene)

    DS 4.11 beta

    Outriders scene:

    1m 47 (Optix off)
    1m 49 (on)

    SY's scene:

    30s (OptiX off)

    23s (on)

    So there you go.. fingers crossed some RTX stuff comes down the pipeline to speed it up some more.

  • outrider42 Posts: 3,679

    Just benched my new rig.. finally..

    Threadripper 2990WX and 3x2080ti's.  Didn't bench with the CPU since it didn't make much of a difference (about 10secs off the first run of Outriders scene)

    DS 4.11 beta

    Outriders scene:

    1m 47 (Optix off)
    1m 49 (on)

    SY's scene:

    30s (OptiX off)

    23s (on)

    So there you go.. fingers crossed some RTX stuff comes down the pipeline to speed it up some more.

    Do you by chance have the NVLink? Ever since I posted in another thread about how Chaos Group got the 2080 and 2080 Ti to pool VRAM, people have been wondering if it can be done for Iray and what the performance impact might be.
