Iray Starter Scene: Post Your Benchmarks!


Comments

  • outrider42 Posts: 3,679

    Keep in mind that the render can make things sluggish precisely because your GPU is running everything. Yes, even a Titan. Modern Windows uses the GPU for the display, and Iray runs whatever you throw at it at full speed, so even the Titan is pushing itself when you have it handling all of these things at once.

    That's another reason why people recommend using a second GPU to drive the monitor with Daz. Any junker of a GPU can handle the task.

    So perhaps you can look into some settings and see if anything can be changed around. And again, make sure your Titan is not throttling itself. If it is running hot, then that is absolutely why it is slowing down and taking the system with it. The hints are there: with longer render times and a sluggish PC, it sounds to me like it is throttling itself.

    With the mining boom inflating costs, I can see what you are thinking. The 1070 is going for such an absurd price now that selling it would get you a big chunk of change back, making the Titan much easier to upgrade to. I'd probably do the same thing if I had been lucky enough to get a 1070 before the inflation happened (or, more likely, a 1080 Ti).

  • drzap Posts: 795
    edited February 2018

    "CPU was never checked... And rendering is by far the most important part on a... rendering program. "  D

    Daz Studio is not a "rendering program". It does far more than just render and, with the GPU that you have, if you spend more time rendering than doing other things in your work, then you are probably doing it wrong. Rendering always seems to get the glamorous attention, especially with hobbyists, but in fact most 3D artists (with the exception of look-dev artists) spend far more time doing other things than rendering, like preparing the shot, texturing, modeling, etc. Rendering is just one part of the whole pie, and most of the time is spent doing CPU work. Remember, the CPU can do the GPU's work, but a GPU will never be able to replace the CPU. Outrider might be right about the throttling issue. Many things can happen when your system is not well balanced (old tech mixed with new tech).

  • prixat Posts: 1,585
    ebergerly said:

    I'm wondering if you can't just take the SickleYield scene and turn on the caustic sampler and architectural sampler? That should do it, I would think. Then people can use the same scene, with the same small memory requirements, and increase render time from minutes to years.

    That way everyone can play, independent of GPU VRAM and stuff 

    Just skimming through the current Studio Release Notes: "...the Architectural Sampler is deprecated, it will be removed in the next version of Iray.".

  • Robert Freise Posts: 4,260
    edited February 2018

    Having a problem: all at once, Studio is refusing to use my GPUs to render. System specs: Threadripper 1950, 128 GB RAM, four GTX 1070 Ti graphics cards, liquid cooled, latest drivers and version of Studio.

    All cards are selected in Iray and the CPU is NOT checked. The log file for Studio sees all four cards, but gives a graphics-card-failed error and defaults to the CPU.

    I have run diagnostics, and everything says there are no problems with any of the cards or anything else in the system.

    This is the first render I've tried since all the dForce stuff started; it was working prior to the dForce upgrades.

    Don't know if this has any bearing, but virtualization is off.

    Update: it appears that something in the scene itself was causing the problem. Currently checking each item to see what.

  • OK, I can't check each item in the scene, as the saved scene is corrupt and locks Studio up; and since some of the figures were custom modifications saved only with the scene, that puts the kibosh on checking them.

    I may try redoing them out of curiosity, but it would be a pain, with no guarantee that I duplicate everything the same way.

  • junk Posts: 1,226

    Another benchmark from original scene.  Got a new GPU a couple of months ago and finally put all three into my DAZ machine.

    GTX 1080 Ti overclocked (+75 GPU, +975 RAM)
    GTX 1070 overclocked (+160 GPU, +360 RAM)
    GTX 1070 overclocked (+160 GPU, +360 RAM)
    --------------
    49.79 seconds.  

    I'll try increasing the resolution or another scene; this was just a preliminary test.

  • Blind Owl Posts: 501
    edited February 2018

    It occurred to me to try simultaneous renders, one with each of my graphics cards. Didn't know if it would even work, so I loaded the test scene into one instance of DS and set the 1080 GTX as the sole device, then did the same in another instance of DS with the 1080 TI as the sole device.

    With both instances rendering, the times agreed pretty closely with my one-render-at-a-time experiment (previous page): 3:10 for the GTX, 2:30 for the TI.

    In certain circumstances--I'm not sure what, exactly--this might be a handy technique for those of us with more than one graphics card (?). If nothing else, it satisfied my 'satiable* curiosity.

    * To quote R. Kipling.

  • drzap Posts: 795
    edited February 2018
    Blind Owl said:

    It occurred to me to try simultaneous renders, one with each of my graphics cards. Didn't know if it would even work, so I loaded the test scene into one instance of DS and set the 1080 GTX as the sole device, then did the same in another instance of DS with the 1080 TI as the sole device.

    With both instances rendering, the times agreed pretty closely with my one-render-at-a-time experiment (previous page): 3:10 for the GTX, 2:30 for the TI.

    In certain circumstances--I'm not sure what, exactly--this might be a handy technique for those of us with more than one graphics card (?). If nothing else, it satisfied my 'satiable* curiosity.

    * To quote R. Kipling.

    If you are doing animation, this is a good technique to make sure you are getting the most out of your video cards.  Rather than ganging your cards on each frame, give each card an instance of DAZ and a range of frames.  Two identical cards aren't exactly 2X faster when ganged together, but they should be when working separately.  I know you don't have identical cards, but the principle is the same.
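    If you want to ballpark how to divide the work, here is a minimal sketch in Python of the frame-splitting idea. The card names and frame counts are just made-up examples, not anyone's actual setup; you would still launch a separate Daz Studio instance per card and render that card's slice of frames in it.

    ```python
    # Rough sketch: split an animation's frames across N render instances,
    # one per GPU, so each card works on its own contiguous slice of frames.
    def split_frames(first_frame, last_frame, num_instances):
        """Return a list of (start, end) frame ranges, one per instance."""
        total = last_frame - first_frame + 1
        base, remainder = divmod(total, num_instances)
        ranges, start = [], first_frame
        for i in range(num_instances):
            count = base + (1 if i < remainder else 0)  # spread any leftover frames
            ranges.append((start, start + count - 1))
            start += count
        return ranges

    if __name__ == "__main__":
        cards = ["GTX 1080 Ti", "GTX 1080"]  # hypothetical two-card setup
        for card, (start, end) in zip(cards, split_frames(0, 299, len(cards))):
            print(f"{card}: render frames {start}-{end} in its own DS instance")
    ```

    If the cards aren't identical, you could weight the split by each card's benchmark time instead of splitting evenly, so the faster card gets proportionally more frames.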

  • Turns out my sluggishness was due to the Architectural Sampler being turned on and carried over to every new scene (since I didn't configure Daz Studio to reset render settings on each new scene)... So all our theorycrafting was wrong, and it seems my Titan can handle everything on her own...

  • AMD Threadripper 1950X (Win 10, 32 GB RAM), two GTX 1080 Ti, one GTX 1080.
    OptiX on

    GPUs only : 52 sec.

    GPUs + CPU : 58 sec.

    CPU only : 5 min. + (I canceled it)

  • plarff Posts: 256
    drzap said:

    "CPU was never checked... And rendering is by far the most important part on a... rendering program. "  D

    Daz Studio is not a "rendering program". It does far more than just render and, with the GPU that you have, if you spend more time rendering than doing other things in your work, then you are probably doing it wrong. Rendering always seems to get the glamorous attention, especially with hobbyists, but in fact most 3D artists (with the exception of look-dev artists) spend far more time doing other things than rendering, like preparing the shot, texturing, modeling, etc. Rendering is just one part of the whole pie, and most of the time is spent doing CPU work. Remember, the CPU can do the GPU's work, but a GPU will never be able to replace the CPU. Outrider might be right about the throttling issue. Many things can happen when your system is not well balanced (old tech mixed with new tech).

     

    I have a scene that kept crashing my DAZ app... I then selected the CPU for rendering as well as the GPU, and not one crash since then. My card is a 3 GB GTX 1060, which is entry level really, but it handles it.

    My issue now is to get this one scene to look crisp and high quality.

  • With Daz 4.10

    GTX 970   4:35

    I have been wanting to upgrade, but the cost of cards today is just out of control. After looking around, I decided to buy a GTX 670 that has 4 GB and run it with my 970. I have a big power supply, so I can handle 2 cards. While many cards are inflated by mining, the older 600 series cards don't seem to be hit so hard. The 670 4 GB was much rarer than the 2 GB version.

    So new times:

    GTX 970+670  3:09

    Looking at this thread, that compares to a 1070 or 1080. Considering 1080s are like $800 right now, that's not a bad upgrade for less than 15% of the price. Granted, I don't have 8 GB of memory, but at least both of my cards have 4 GB and will generally run at the same time for most scenes.

    I ran the 670 by itself and it got 8:45. I thought it would be faster than that. That seems to be about equal to what someone said the 1050 does. But this 670 has double the VRAM and cost a lot less (used, obviously).

    All times with OptiX enabled.

  • outrider42 Posts: 3,679
    edited February 2018

    Take 2 of my proposed Iray 2018 test scene.

    The purpose behind this is to push light, and if I may, possibly a more pleasing image. I enabled the bloom filter for that reason. Each of the spheres has been subdivided. The glass sphere was made emissive to add ambience, and all of the spheres are positioned so that they will catch and reflect light from multiple sources towards the camera. There is a new light sphere added behind her head for a bit more light and reflections. What I wanted to do with this image is to say "this is Iray" and put it up in your face. Being closer to the model means her textures are less compressed by the engine, and they shine through more. This scene will push any spec. I'm hoping a 1080ti will do this in about 7-8 minutes. The time differences will be easier to see.

    It finishes at 5000 iterations before it hits the 95% convergence. This might be an even more accurate measure, as everybody races to 5000 iterations. Of course, I could be wrong.

    It is not intended to replace SickleYield's scene, but rather to be a test for higher-end hardware, to better differentiate these high-end machines from each other. Who can hit 5000 fastest? Let's see what you quad-Titan pimps can do. The scene is still quite small at 720 by 520, so anybody can run this scene if they dare. But if you don't have something better than a 1060, you might want to just steer clear and stick to the original bench.

    And what's cool is that you can upload scene files directly to the gallery. So I have done so; if you want to play with the scene, the duf file is in the gallery link below. All you need is Genesis 2 and Genesis 8 Female Essentials installed, which are of course free and included with Daz Studio. The model is Genesis 8, and she is wearing the old Genesis 2 Dancer outfit. The hair is for Genesis 8, but it is pretty much unchanged from G2F. However, I have changed the surface settings of just about everything on her and her outfit. She has dual-lobe specularity settings, and the outfit and hair gloss have been overhauled with top coat settings (which is again intended to make them more reflective to the light). I also renamed the spheres so you can better tell what each is supposed to be; that always kind of irked me a bit before.

    So if you want to download and try it out, here it is.

    https://www.daz3d.com/gallery/#images/526361/

    And of course anyone can use the settings for the items in this scene for yourselves, if you happen to like my settings for her and her outfit/hair.

  • tj_1ca9500b Posts: 2,047

    outrider42 said:

    Take 2 of my proposed Iray 2018 test scene.

    The purpose behind this is to push light, and if I may, possibly a more pleasing image. I enabled the bloom filter for that reason. Each of the spheres has been subdivided. The glass sphere was made emissive to add ambience, and all of the spheres are positioned so that they will catch and reflect light from multiple sources towards the camera. There is a new light sphere added behind her head for a bit more light and reflections. What I wanted to do with this image is to say "this is Iray" and put it up in your face. Being closer to the model means her textures are less compressed by the engine, and they shine through more. This scene will push any spec. I'm hoping a 1080ti will do this in about 7-8 minutes. The time differences will be easier to see.

    It finishes at 5000 iterations before it hits the 95% convergence. This might be an even more accurate measure, as everybody races to 5000 iterations. Of course, I could be wrong.

    It is not intended to replace SickleYield's scene, but rather to be a test for higher-end hardware, to better differentiate these high-end machines from each other. Who can hit 5000 fastest? Let's see what you quad-Titan pimps can do. The scene is still quite small at 720 by 520, so anybody can run this scene if they dare. But if you don't have something better than a 1060, you might want to just steer clear and stick to the original bench.

    And what's cool is that you can upload scene files directly to the gallery. So I have done so; if you want to play with the scene, the duf file is in the gallery link below. All you need is Genesis 2 and Genesis 8 Female Essentials installed, which are of course free and included with Daz Studio. The model is Genesis 8, and she is wearing the old Genesis 2 Dancer outfit. The hair is for Genesis 8, but it is pretty much unchanged from G2F. However, I have changed the surface settings of just about everything on her and her outfit. She has dual-lobe specularity settings, and the outfit and hair gloss have been overhauled with top coat settings (which is again intended to make them more reflective to the light). I also renamed the spheres so you can better tell what each is supposed to be; that always kind of irked me a bit before.

    So if you want to download and try it out, here it is.

    https://www.daz3d.com/gallery/#images/526361/

    And of course anyone can use the settings for the items in this scene for yourselves, if you happen to like my settings for her and her outfit/hair.

    I just loaded your new test scene.duf.   It gave me this message, but otherwise loaded OK...

    Some assets needed to load the file were missing...(snip)

    • data/daz 3d/genesis 8/female eyelashes/morphs/isourcetextures/hyuna/fhm-isthyuna.dsf
    • data/daz 3d/genesis 8/female eyelashes/morphs/p3design/misc/phmeyelasheslong_p3d.dsf

    My result (one pass):

    • MSI GT83VR laptop with dual GTX 1080's (8gb each, 6.572 GiB available, stupid Windows 10 steals the other 1.4+ GiB from each card)
    • Intel i7-6820HK @ 2.7Ghz
    • 2 GPUs only, no CPU.  Optix enabled, 520x720 (the settings that came with your .duf file I think).
    • Says it allocated 1.65625 GiB of workspace.

    4 minutes 35.60 seconds

    Attached is how the render looks with the missing files.

    [Attached image: test scene.png, 720 x 520]
  • outrider42 Posts: 3,679
    Interesting. The scene doesn't use those files at all, so why it asks for them makes no sense. That is a little faster than I expected, but that is two 1080s. Anyway, I'll try to see if I can update the file. Is there a product that checks duf files or something for things like this?
  • outrider42 Posts: 3,679
    edited February 2018

    I loaded the scene into a fresh install of Daz, which doesn't have anything for G8 besides the base, and then resaved the file. That should save it without any of those extra items that come up. The scene itself is exactly the same; it does not use whatever it asked for. Those products have some sort of error that Daz needs to work out.

    https://www.daz3d.com/gallery/#images/526361/

  • Sone Posts: 84

    Hi outrider42, I checked it out, you did a nice mod. :)

    Used both GTX 1070 and GTX 1080 OC cards from Gigabyte.

    Ryzen 1700 8/16-core not enabled.

    My results:

    Total Rendering Time: 5 minutes 1.83 seconds    OptiX off, 1st run
    Total Rendering Time: 4 minutes 25.75 seconds   OptiX on, 1st run
    Total Rendering Time: 4 minutes 16.38 seconds   OptiX on, 2nd run, RAM and VRAM loaded

  • junk Posts: 1,226
    edited March 2018

    outrider42 said:

    I loaded the scene into a fresh install of Daz, which doesn't have anything for G8 besides the base, and then resaved the file. That should save it without any of those extra items that come up. The scene itself is exactly the same; it does not use whatever it asked for. Those products have some sort of error that Daz needs to work out.

    https://www.daz3d.com/gallery/#images/526361/

    Hi outrider42, when I click on "Download Scene File" it simply kicks me back to the gallery. I also removed the "#" in the URL that you provided and it still does the same thing. I did try two different browsers, one logged in, the second not logged in. Any thoughts? I'd love to try your new test scene.

    I got it now by right-clicking the button and choosing to open it in a new tab. Sorry for the confusion, but it's still strange that it happens when clicking normally.

  • junk Posts: 1,226
    edited March 2018

    With new outrider42 scene utilizing:
    GTX 1080 Ti +100 GPU, +1000 Memory Clock
    GTX 1070 +120 GPU, +375 Memory Clock
    GTX 1070 no overclocking (too close to others, gets warm)
    Ryzen 1800x OC to 3.75GHz, 32GB
    AUX Port set to IRAY first

    2 minutes 18.92 seconds  (1st run)
    2 minutes 17.68 seconds (2nd run)

    Stops at 5000 iterations as someone else noted.

  • outrider42 Posts: 3,679
    I'm thinking your trouble downloading the scene is likely related to these 404 errors being reported in the store. Perhaps I'll upload it to ShareCG as a fail safe.

    May I ask where you got those cards, are they recent purchases? I ask because I want to upgrade so bad, but market prices are killing me with the GPU shortage going on.

    I think the 5000 iteration cap is helpful, and this can be the easiest way to scale the test for the future. Low end machines can limit their test to 1000 iterations and multiply the result by 5. And as even more powerful hardware comes along, we can increase the iteration count if it will keep going. I wonder if it will hit 10000 iterations before 100% convergence?
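    For the "run fewer iterations and scale up" idea, the math is just a straight proportion. Here is a minimal sketch with made-up numbers rather than anyone's actual results, assuming the iteration rate stays roughly constant over the run (which is only approximately true):

    ```python
    # Sketch: extrapolate a shorter run to the full 5000-iteration benchmark.
    def estimate_full_time(partial_seconds, partial_iterations, full_iterations=5000):
        """Scale a partial run's time up to the full iteration count."""
        return partial_seconds * (full_iterations / partial_iterations)

    # Example: a hypothetical low-end card that needs 200 seconds for 1000 iterations
    print(estimate_full_time(200.0, 1000))  # -> 1000.0 seconds estimated for 5000
    ```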
  • junk Posts: 1,226
    edited March 2018

    I do see the 404 errors quite often lately. By the way, I loved the Mass Effect trilogy. Very, very good stuff.

    The 1080 Ti SC2 water-cooled was my latest purchase, back in mid-December. Since then the price went up from $798.89 to $969.00, which stinks, thanks to crypto mining. The 1070s were bought way before the craze and were items I got for about $375.00 with two different deals on Jet.com. Those days are gone, sorry to say.

    Nice test render scene by the way.

     

  • outrider42 Posts: 3,679

    Mass Effect was indeed good stuff.

    Yeah, I would just about kill for 1070s at $375. I had planned on buying the 1070 before this mess started. But those plans are pretty much destroyed now. I have thought about the 1080ti since that would be like two 1070s (and then some,) but then it got hit by the inflation, too.

    Since you have this setup, would you mind testing a different combo? I'd like to see the combo of two 1070s vs a single 1080ti being tested from the same PC to see exactly how they stack up against each other.

  • junk Posts: 1,226

    No problem.  Here are the results of the tests:

    5 minutes 29.59 seconds (1080 Ti normal clocked)
    4 minutes 48.33 seconds (1080 Ti +80 GPU,+1000 Memory)
    4 minutes 16.69 seconds (1070 x 2 normal clocked)

    So two 1070's do better than a single 1080 Ti even when it is overclocked.

  • outrider42 Posts: 3,679
    Whoa, I didn't expect it to be that much different. There are people who swear up and down that the 1080ti would be faster than two 1070s because of...reasons. This test would seem to obliterate any such thinking. 1070s have 1920 cores, so two have 3840. The 1080ti has 3584. So two 1070s have a 256 core advantage. I'm surprised it could be that much faster, though.
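    To put rough numbers on that, here is a quick back-of-the-envelope comparison of the core-count ratio versus the stock-clock times junk posted above. The core counts are the published CUDA core figures, and since clocks and memory differ between the cards, this is only a sanity check, not a real model:

    ```python
    # Does the CUDA core count roughly predict the measured gap between
    # two 1070s and one 1080 Ti? Times are the stock-clock results above.
    cores_1070, cores_1080ti = 1920, 3584

    time_1080ti = 5 * 60 + 29.59    # 5:29.59, single 1080 Ti
    time_two_1070 = 4 * 60 + 16.69  # 4:16.69, two 1070s

    core_ratio = (2 * cores_1070) / cores_1080ti  # ~1.07x more cores
    time_ratio = time_1080ti / time_two_1070      # ~1.28x faster measured

    print(f"core-count advantage: {core_ratio:.2f}x")
    print(f"measured advantage:   {time_ratio:.2f}x")
    ```

    The measured gap is bigger than the core counts alone would suggest, which is part of why the CPU and platform differences come up below.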

    Your dual 1070 test ran faster than some previous tests with two 1080s and with a 1080+1070, which is odd. You do have a different CPU. The two tests before you had an i7-6820HK and a Ryzen 1700. So we could be seeing how the CPU affects multiple-GPU setups. In single-GPU tests this isn't a factor, but when you throw 2, 3, or more GPUs in a rig, the CPU can make a difference. And apparently, a Ryzen 1800x can make multiple-GPU rigs sing. Interesting.

    The Ryzen 1800x has 24 PCIe lanes, while the i7-6820HK has just 16. But the 1700 also has 24 lanes. Hmm...
  • junk Posts: 1,226
    edited March 2018

    Hey, you and I surely are thinking the same things. I was just going to let you know that the 1070s are at a slight disadvantage too, because of the 24 PCIe lanes. In the BIOS, the motherboard has them listed as:

    1080 Ti = x8
    1070 1st = x8
    1070 2nd = x4

    When I had just the 1080 Ti it was an x16 configuration. With two cards, if my memory serves correctly, it was x8 and x8. The RAM is running at 3000 MHz with timings of 14-14-14-14, with a bunch of other little tweaks to try to get more performance out. Perhaps that can make a difference?

    Another thought could be that I have a 1 TB NVMe Samsung 960 EVO driving everything. Perhaps some speed advantage could come from that? I don't know how, though, since setting the AUX port to Iray first pre-loads everything to the GPU beforehand.

  • outrider42 Posts: 3,679
    Yeah, once the scene has loaded, all that extra stuff goes out the window. So it's all up to the GPUs running as fast as they can. In multi-GPU rigs the cards have to share some data, so something is factoring in there. In one of the few professional benchmark tests I've seen on Iray, they concluded that the lanes available to the multiple cards were the biggest factor when comparing 4 Titans with an i7 to the same 4 Titans with a Xeon. But that test was a few years ago.
  • bluejaunte Posts: 1,861

    I thought we had concluded a while ago lanes are mostly irrelevant?

  • outrider42 Posts: 3,679

    I looked up the test I found, and I guess I misremembered it. They concluded that the core count of the CPU was the biggest factor in multi-GPU rendering, as the 20-core Xeon + 4 Titans outperformed the 4-core i7 + 4 Titans.

    https://www.pugetsystems.com/labs/articles/NVIDIA-Iray-GPU-Performance-Comparison-785/

    The Ryzen 1800x does have a clear core-count advantage over the previously tested systems.

  • bluejaunte Posts: 1,861

    outrider42 said:

    I looked up the test I found, and I guess I misremembered it. They concluded that the core count of the CPU was the biggest factor in multi-GPU rendering, as the 20-core Xeon + 4 Titans outperformed the 4-core i7 + 4 Titans.

    https://www.pugetsystems.com/labs/articles/NVIDIA-Iray-GPU-Performance-Comparison-785/

    The Ryzen 1800x does have a clear core-count advantage over the previously tested systems.

    Biggest factor sounds like a bit of a stretch too. Reading the conclusion here: https://www.pugetsystems.com/labs/articles/NVIDIA-Iray-CPU-Scaling-786/

    Combined with the results from our NVIDIA Iray GPU Performance Comparison article, this makes it very clear that if you are designing a new system specifically for Iray you should prioritize your budget towards purchasing a high number of powerful video cards long before worrying about upgrading the CPU. A faster CPU can help with things like scene load times, but a system with plenty of GPU power will make your CPU choice basically moot when it comes to rendering

    It may seem odd that we took the time to publish an article about CPU scaling just to say that it doesn't matter, but this is extremely useful information to have. Just as an example of how important it is to prioritize the GPU for Iray, below are two systems that you could configure on our website.

  • outrider42 Posts: 3,679
    edited March 2018

    I am saying it is the biggest factor in the difference in time between the rigs running multiple cards. I am not saying it is the biggest factor, period, nor have I ever. I am explaining the difference in time, because you see the results on this page. The same testing I linked says the same thing. While they still suggest upgrading the CPU LAST, or not at all, for multi-GPU rigs, the fact remains they still say the core count made the difference. They did not say anything else made a difference... thus the CPU made the biggest difference in multi-GPU rigs. That is not an illogical statement.

    And remember that test is 2 years old. It predates Ryzen and has no Pascal cards in it.

    Two 1070s just straight up whipped two 1080s in the same test, with the difference between them being that one rig has the 8-core Ryzen and the other has the 4-core i7. Even if you were to say the result was close, it should NOT be. Two 1080s should beat two 1070s every single time, without fail. So the difference is clear: the CPU made a difference... the "biggest difference" in this particular test.
