Iray Starter Scene: Post Your Benchmarks!


Comments

  • SimonJM Posts: 5,942
    hphoenix said:
    SimonJM said:

    Bit the bullet and installed the DS beta on the new computer:

    DS4.9.3.128, nVidia driver: 375.95

    GTX1050 (display card): Total Rendering Time: 12 minutes 18.64 seconds

    TITAN X (Pascal): Total Rendering Time: 3 minutes 44.84 seconds

    Okay, that time for the Titan XP seems a bit high.  My single GTX 1080 did the benchmark in 2m40s (stock clocks, no OC).  You may want to check your fan profile; if it's getting hot and throttling I could understand that long a time... but it should be closer to 2m25s for a Titan XP.  And make sure the CPU isn't checked; that slows down fast cards...

     

    It was performance capped, per GPU-Z, for voltage reliability; temperature got up to around 60C with the fan at less than 50%.
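
    (A quick way to watch for that kind of capping while the benchmark runs, as an alternative to GPU-Z, is to poll nvidia-smi. A minimal sketch, assuming nvidia-smi from the Nvidia driver package is on the PATH:)

    ```python
    # Minimal sketch: poll GPU temperature, SM clock, and power draw every
    # 5 seconds while a render runs, to see whether the card is being
    # thermally or power capped. Assumes nvidia-smi is on the PATH.
    import subprocess

    subprocess.run([
        "nvidia-smi",
        "--query-gpu=name,temperature.gpu,clocks.sm,power.draw",
        "--format=csv",
        "-l", "5",  # repeat every 5 seconds; stop with Ctrl+C
    ])
    ```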

  • Ran the benchmark on my 1070 and this is what I got.

    Optix off. GPU Only
    Total Rendering Time: 5 minutes 25.62 seconds

    Optix on. GPU Only
    Total Rendering Time: 3 minutes 15.56 seconds

    CPU + OptiX made no real difference, btw.

    This card is also driving three screens. I was pleased the heat didn't go up any more than when gaming.

    My card details : https://www.asus.com/Graphics-Cards/TURBO-GTX1070-8G/

  • Well, well, well. It seems that, for now, the Pascal results are completely inconsistent. That is probably due more to Nvidia not having optimised things yet than to Daz 3D not being ready.

  • Actually, when lined up in order of watts TDP (kilowatt-hog Titan, 200 watt, 150 watt, 120 watt, etc), the render times do appear to line up surprisingly well. The hype that a 150 watt GTX 1070 would outperform a kilowatt-hog Titan X was based on the stuff games use, not on anything that makes CUDA math ops faster. So I'm not all that surprised. There are benefits to the newer cards (cooling, lower idle power, etc), just not that drastic a CUDA performance gain over older cards.

    The biggest benefit is the 8GB of memory vs the older cards' 4GB limit, and even that is sort of pathetic for some things.

  • Yep. Also the price, though: a 1070 is a third the cost of a Titan X.

  • posecast Posts: 386

    A dual 1070 test would be awesome if someone has the ability!

  • Ran the benchmark on two of my rigs and here are the results:

    1. Optix off. GPU only. i3 3245 8GB RAM 1xGTX 970 4GB VRAM: 6 mins 28.548 secs to finish.
    2. Optix off. GPU only. i7 2600k 16GB RAM 2xGTX 980 Ti 6GB VRAM - Only one used for rendering. The other drives the monitor: 3 mins 57.928 secs to finish.
  • Havos Posts: 5,294

    Ran the benchmark on two of my rigs and here are the results:

    1. Optix off. GPU only. i3 3245 8GB RAM 1xGTX 970 4GB VRAM: 6 mins 28.548 secs to finish.
    2. Optix off. GPU only. i7 2600k 16GB RAM 2xGTX 980 Ti 6GB VRAM - Only one used for rendering. The other drives the monitor: 3 mins 57.928 secs to finish.

    Based on what Widdershins Studio posted above, this would imply that a GTX 1070 is only slightly faster than a GTX 970, which seems rather odd.

  • Havos said:

    Ran the benchmark on two of my rigs and here are the results:

    1. Optix off. GPU only. i3 3245 8GB RAM 1xGTX 970 4GB VRAM: 6 mins 28.548 secs to finish.
    2. Optix off. GPU only. i7 2600k 16GB RAM 2xGTX 980 Ti 6GB VRAM - Only one used for rendering. The other drives the monitor: 3 mins 57.928 secs to finish.

    Based on what Widdershins Studio posted above, this would imply that a GTX 1070 is only slightly faster than a GTX 970, which seems rather odd.

    @Havos: I'm not sure whether the 1080 was being used for the render at all. Maybe it was a CPU-only render, hence the lack of a substantial difference in times. Because (and quite possibly I don't fully understand this yet) it seems like the GTX 10XX cards are not supported by Iray yet. It certainly seems so for my new MSI gaming laptop with a GTX 1060 6GB VRAM: Iray doesn't use it at all and very unambiguously says the GPU is not supported. It also seems that way from these two links. Maybe I'm missing something.

    https://forum.nvidia-arc.com/showthread.php?14632-Will-the-Geforce-GTX-1080-be-supported/page3

    https://forum.allegorithmic.com/index.php?topic=11775.0

    @Widdershins Studio: Curious what your build is? CPU, RAM etc. 

  • Havos Posts: 5,294
    Havos said:

    Ran the benchmark on two of my rigs and here are the results:

    1. Optix off. GPU only. i3 3245 8GB RAM 1xGTX 970 4GB VRAM: 6 mins 28.548 secs to finish.
    2. Optix off. GPU only. i7 2600k 16GB RAM 2xGTX 980 Ti 6GB VRAM - Only one used for rendering. The other drives the monitor: 3 mins 57.928 secs to finish.

    Based on what Widdershins Studio posted above, this would imply that a GTX 1070 is only slightly faster than a GTX 970, which seems rather odd.

    @Havos: I'm not sure whether the 1080 was being used for the render at all. Maybe it was a CPU-only render, hence the lack of a substantial difference in times. Because (and quite possibly I don't fully understand this yet) it seems like the GTX 10XX cards are not supported by Iray yet. It certainly seems so for my new MSI gaming laptop with a GTX 1060 6GB VRAM: Iray doesn't use it at all and very unambiguously says the GPU is not supported. It also seems that way from these two links. Maybe I'm missing something.

    https://forum.nvidia-arc.com/showthread.php?14632-Will-the-Geforce-GTX-1080-be-supported/page3

    https://forum.allegorithmic.com/index.php?topic=11775.0

    @Widdershins Studio: Curious what your build is? CPU, RAM etc. 

    The new Pascal cards are supported by the latest DS beta, so that is the version that people with these cards have been using for the benchmarks. If they had been running in CPU-only mode, the times would have been much longer. I also have a GTX 970, and the next card I get is likely to be a GTX 1070; I had been hoping the performance would be at least double that of my existing card.
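
    (A rough sanity check on that hope: Iray throughput scales mainly with CUDA cores times clock speed. Using Nvidia's published reference boost clocks, which is an assumption since board-partner cards vary, the paper gap is closer to 1.65x than 2x:)

    ```python
    # Rough sanity check, not a benchmark: theoretical FP32 throughput
    # scales with CUDA cores x clock. Reference boost clocks are assumed
    # from Nvidia's published specs; board-partner cards vary.
    cards = {
        "GTX 970":  (1664, 1178),  # CUDA cores, boost MHz
        "GTX 1070": (1920, 1683),
    }
    rate = {name: cores * mhz for name, (cores, mhz) in cards.items()}
    print(f"theoretical 1070/970 ratio: {rate['GTX 1070'] / rate['GTX 970']:.2f}x")
    # -> about 1.65x, so "at least double" is optimistic even on paper.
    ```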

  • Havos said:
    Havos said:

    Ran the benchmark on two of my rigs and here are the results:

    1. Optix off. GPU only. i3 3245 8GB RAM 1xGTX 970 4GB VRAM: 6 mins 28.548 secs to finish.
    2. Optix off. GPU only. i7 2600k 16GB RAM 2xGTX 980 Ti 6GB VRAM - Only one used for rendering. The other drives the monitor: 3 mins 57.928 secs to finish.

    Based on what Widdershins Studio posted above, this would imply that a GTX 1070 is only slightly faster than a GTX 970, which seems rather odd.

    @Havos: I'm not sure whether the 1080 was being used for the render at all. Maybe it was a CPU-only render, hence the lack of a substantial difference in times. Because (and quite possibly I don't fully understand this yet) it seems like the GTX 10XX cards are not supported by Iray yet. It certainly seems so for my new MSI gaming laptop with a GTX 1060 6GB VRAM: Iray doesn't use it at all and very unambiguously says the GPU is not supported. It also seems that way from these two links. Maybe I'm missing something.

    https://forum.nvidia-arc.com/showthread.php?14632-Will-the-Geforce-GTX-1080-be-supported/page3

    https://forum.allegorithmic.com/index.php?topic=11775.0

    @Widdershins Studio: Curious what your build is? CPU, RAM etc. 

    The new Pascal cards are supported by the latest DS beta, so that is the version that people with these cards have been using for the benchmarks. If they had been running in CPU-only mode, the times would have been much longer. I also have a GTX 970, and the next card I get is likely to be a GTX 1070; I had been hoping the performance would be at least double that of my existing card.

    @Havos: Ahhh... knew I was missing something! :-) Thanks. I downloaded the beta for the 1060, did some more testing, and collated the results below for easier readability/comparison, ordered by graphics card:

    1. OptiX off. CPU + GPU. i7 6700 16GB RAM 1xGTX 745 4GB VRAM. 17 mins 7.301 secs to finish. 

    2. OptiX off. GPU only. i3 3245 8GB RAM 1xGTX 970 4GB VRAM: 6 mins 28.548 secs to finish. 

    3. OptiX off. GPU only. i7 2600k 16GB RAM 2xGTX 980 Ti 6GB VRAM - Only one used for rendering. The other drives the monitor: 3 mins 57.928 secs to finish. 

    4. OptiX off. GPU only. i7 6700HQ 16GB RAM 1xGTX 1060 6GB VRAM: 7 mins 22.750 secs to finish. 

    5. OptiX off. CPU + GPU. i7 6700HQ 16GB RAM 1xGTX 1060 6GB VRAM: 6 mins 40.280 secs to finish. 

    Only in the case of the GTX 1060 did the rendering converge before 5000 iterations. In both #4 and #5, even on a laptop, the 1060 was able to shave off about 200~300 iterations. The 980 Ti proved the fastest, though.

  • Wanderer Posts: 956

    Here's a noobish couple of questions--and I'm sorry I haven't read all 13 pages of posts, but I did use the search feature to attempt to find an answer. When my render finishes, the history window disappears so I have a difficult time catching the stats on my renders. Is there a setting I'm missing that will keep it up after it is finished? How are you all tracking your stats so easily?

  • Here's a noobish couple of questions--and I'm sorry I haven't read all 13 pages of posts, but I did use the search feature to attempt to find an answer. When my render finishes, the history window disappears so I have a difficult time catching the stats on my renders. Is there a setting I'm missing that will keep it up after it is finished? How are you all tracking your stats so easily?

    Do your render then take the menu up top -> Help -> Troubleshooting -> View Log File.

    Scroll to the end to get the last render details.
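
    (If you'd rather not scroll by hand, here is a minimal sketch that pulls the most recent render time out of the log. The log path is an assumption for a default Windows install; adjust it to wherever View Log File actually points on your machine.)

    ```python
    # Minimal sketch: print the most recent "Total Rendering Time" line
    # from the Daz Studio log. LOG_PATH is an assumed default Windows
    # location; adjust to wherever Help > Troubleshooting > View Log File
    # points on your install.
    import re
    from pathlib import Path

    LOG_PATH = Path.home() / "AppData" / "Roaming" / "DAZ 3D" / "Studio4" / "log.txt"

    matches = re.findall(r"Total Rendering Time: .*", LOG_PATH.read_text(errors="ignore"))
    print(matches[-1] if matches else "no completed render found in log")
    ```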

  • Havos said:
    @Widdershins Studio: Curious what your build is? CPU, RAM etc. 

    Intel Core i7-5820K

    Samsung SM951 AHCI PCIe M.2 256GB

    Corsair Vengeance LPX DDR4 2400 C14 4x4GB

     

  • Wanderer Posts: 956

    Thank you so much, Widder. That will be a great help.

  • Wanderer Posts: 956

    Okay, so here are my results--I'll spare the posting of the images because they all pretty much look the same and just fine to me. Thought I would add to the conversation because I have a pretty limited system compared to what a lot of people are running, and I think the results are interesting.

    System Specs: i5-2500k @ 3.30 GHz, using 3 cores for rendering
    GeForce GTX 680, compute capability 3.0, 2048MB Total, 1753MB available, display attached (3 monitors @ 1920x1080 ea), 1536 Cuda cores (according to GeForce dot com)
    ASRock Z77 Extreme4
    16 GB RAM (don't remember make and model)

    Because of the age and limitations of my CPU, I've skipped CPU-only testing.

    First Test Render:

    Optimization: Memory
    CPU + GPU + OptiX
    5000 Iterations (4546 GPU + 454 CPU)
    Total Time: 7 min, 36.68 sec


    Second Test Render:

    Optimization: Memory
    CPU + GPU
    5000 Iterations (4518 GPU + 482 CPU)
    Total Time: 9 min, 35.14 sec

    Third Test Render:

    Optimization: Memory
    GPU + OptiX
    5000 Iterations 
    Total Time: 6 min, 50.23 sec

    Fourth Test Render:

    Optimization: Memory
    GPU Only
    5000 Iterations
    Total Time: 9 min, 11.39 sec

    Fifth Test Render:

    Optimization: Speed
    GPU + OptiX
    5000 Iterations
    Total Time: 6 min, 26.12 sec

    So, for my system, the best render time using all of the default scene render settings was GPU and OptiX without the CPU. The best time overall came from changing instancing optimization to Speed, again with GPU and OptiX only. Considering the times on some of these powerhouse systems, I don't think my system handles this simple little scene too shabbily. I am curious about comparisons on much more demanding scenes in Iray, however. Also, I'm now trying to figure out what graphics card or cards I should be saving up to buy, and what to do to improve moving cameras around in scenes with lots of polygons. Even some hairs can be challenging for my setup. Will a better graphics card (or a second one) improve this, or do I need a better CPU?
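
    (To make that comparison concrete, here is a quick sketch that ranks the five runs above and computes each configuration's speedup over the slowest, using the times as posted:)

    ```python
    # Quick arithmetic on the five runs above: seconds per run, sorted
    # fastest first, with speedup relative to the slowest configuration
    # (CPU + GPU, no OptiX, Memory optimization).
    runs = {
        "CPU+GPU+OptiX (Memory)": 7 * 60 + 36.68,
        "CPU+GPU (Memory)":       9 * 60 + 35.14,
        "GPU+OptiX (Memory)":     6 * 60 + 50.23,
        "GPU only (Memory)":      9 * 60 + 11.39,
        "GPU+OptiX (Speed)":      6 * 60 + 26.12,
    }
    slowest = max(runs.values())
    for name, sec in sorted(runs.items(), key=lambda kv: kv[1]):
        print(f"{name:24s} {sec:6.1f}s  {slowest / sec:.2f}x vs slowest")
    # On this GTX 680, OptiX is worth roughly 20-25%; the Speed instancing
    # optimization adds a few percent more.
    ```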

  • Havos said:

    Ran the benchmark on two of my rigs and here are the results:

    1. Optix off. GPU only. i3 3245 8GB RAM 1xGTX 970 4GB VRAM: 6 mins 28.548 secs to finish.
    2. Optix off. GPU only. i7 2600k 16GB RAM 2xGTX 980 Ti 6GB VRAM - Only one used for rendering. The other drives the monitor: 3 mins 57.928 secs to finish.

    Based on what Widdershins Studio posted above, this would imply that a GTX 1070 is only slightly faster than a GTX 970, which seems rather odd.

    GTX 970 vs GTX 1070. Ah, they're both about 150 watt cards, so not that odd actually. One has double the RAM (8GB) for Iray, so it is much better in that regard.

  • Infinityseed, a second card dedicated to the monitors will help many things, though it may not help much with the OpenGL viewport in Studio when moving around a scene. Some things (like hair and grass) have so much geometry that even a Titan is brought to its knees in OpenGL; it will be better with better cards, just not a perfect 140fps spectacular. Don't get too hopeful about the shiny new stuff: it is an incremental improvement, not an order of magnitude better than the previous generation.

  • Wanderer Posts: 956
    edited December 2016
    Havos said:

    Ran the benchmark on two of my rigs and here are the results:

    1. Optix off. GPU only. i3 3245 8GB RAM 1xGTX 970 4GB VRAM: 6 mins 28.548 secs to finish.
    2. Optix off. GPU only. i7 2600k 16GB RAM 2xGTX 980 Ti 6GB VRAM - Only one used for rendering. The other drives the monitor: 3 mins 57.928 secs to finish.

    Based on what Widdershins Studio posted above, this would imply that a GTX 1070 is only slightly faster than a GTX 970, which seems rather odd.

    GTX 970 vs GTX 1070. Ah, they're both about 150 watt cards, so not that odd actually. One has double the RAM (8GB) for Iray, so it is much better in that regard.

    I don't know much about the watts angle, so forgive my ignorance on this please. I'm also not sure how much this applies with respect to rendering engines and such, BUT, in my gaming experience, the difference between Nvidia cards from two different generations that end with the same two digits is sometimes much smaller than the difference between cards of the same generation with a big gap between the last two digits. In other words, if I compare a 780 to a 680 and a 780 to a 740, I would not be surprised to find much closer performance between the 780 and 680, or at least better performance from the 680 than the 740. Maybe that's the same as what you're saying. Just offering another way to see it.

    Because of mistakes I've made in the past, I personally would never purchase an Nvidia card, for any purpose, that ended in anything less than 60.

    Post edited by Wanderer on
  • Wanderer Posts: 956

    Infinityseed, a second card dedicated to the monitors will help many things, though it may not help much with the OpenGL viewport in Studio when moving around a scene. Some things (like hair and grass) have so much geometry that even a Titan is brought to its knees in OpenGL; it will be better with better cards, just not a perfect 140fps spectacular. Don't get too hopeful about the shiny new stuff: it is an incremental improvement, not an order of magnitude better than the previous generation.

    Thanks, I appreciate that. I'm getting the idea from looking at the figures out there that while the savings with more investment are at times significant, they may or may not be worth it for my purposes. 

  • Wanderer Posts: 956

    Oh, I forgot to mention, I rendered the scene with all orbs still in. So, again, I didn't do too badly for my system's age.

     

  • ZarconDeeGrissom Posts: 5,412
    edited December 2016

    Infinityseed, yea, about the same angle. Nvidia has been keeping each tier at the same watt level for a few generations now, with NO improvement on that front (GTX #60 is 120 watts, GTX #70 is 150 watts, etc). And I've seen the same with the 6800GT vs 7800GT vs 8800GT: all about the same, with about a 5% to 10% improvement in in-game FPS between generations. When the GeForce 9800 watt-hog launched, I stopped gaming for the most part (a combination of life and the electric bill of such high-end cards).

    The only reason I upgraded my Display card from a silent heat pipe NX8600GT to a 4GB GK208 card, was the lower watts, 4GB ram, and Iray capable CUDA cores (and it is still silent).  I do have a GTX960 card in this comp, it's just a tad loud and expensive for a daily display driver for a non-gamer.

    Post edited by ZarconDeeGrissom on
  • Wanderer Posts: 956

    Infinityseed, yea, about the same angle. Nvidia has been keeping each tier at the same watt level for a few generations now, with NO improvement on that front (GTX #60 is 120 watts, GTX #70 is 150 watts, etc). And I've seen the same with the 6800GT vs 7800GT vs 8800GT: all about the same, with about a 5% to 10% improvement in in-game FPS between generations. When the GeForce 9800 watt-hog launched, I stopped gaming for the most part (a combination of life and the electric bill of such high-end cards).

    The only reason I upgraded my Display card from a silent heat pipe NX8600GT to a 4GB GK208 card, was the lower watts, 4GB ram, and Iray capable CUDA cores (and it is still silent).  I do have a GTX960 card in this comp, it's just a tad loud and expensive for a daily display driver for a non-gamer.

    Very interesting. See, all this time I was feeling like I was really behind the times... which I guess I am, but now I don't feel as bad about waiting to upgrade. I might keep this card to run my displays, and just see about getting either another 680 if super cheap enough, or upgrading to something better if there's a sufficient price drop in the next six months. Good to know that even if I could go top end I still wouldn't be all that much farther ahead in some ways. However, having said all that, those 3 minute drops on this benchmark do sound pretty sweet. Just thinking about the render I spent over 6 hours on the other night, and a few I've run around 24 hours. Being able to drop the time by at least half would be nice.

  • cridgit Posts: 1,757
    edited May 2022

    Redacted

    Post edited by cridgit on
  • Finally managed to borrow an Nvidia card to test with Iray, a 1050 Ti 4GB. Not exactly a screaming beast, but even this is a HUGE improvement. My base setup is an AMD 8-core, 16GB RAM, DS 4.9 beta.

    1: CPU only ~61 min
    2: GPU only 7:02
    3: GPU + CPU 6:39

    Judging from the other scores, it looks like Iray doesn't like AMD CPUs at all. :/ Not surprising; some companies only optimize their code for Intel chips. As for my scores, it looks like the 1050 Ti does OK for an entry-level card. Also, the CPU gave a ~10% speed boost to the GPU. Considering that the CPU takes roughly 10x longer on its own, it makes sense that it contributes roughly 10% of the combined throughput, as the quick check below shows.
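
    (The back-of-envelope version, using the times above: if the CPU and GPU share iterations, the ideal combined time is the harmonic combination of their individual rates.)

    ```python
    # Back-of-envelope check of the ~10% claim using the numbers above.
    cpu_min = 61.0         # CPU only, ~61 min
    gpu_min = 7 + 2 / 60   # GPU only, 7:02
    ideal = 1 / (1 / cpu_min + 1 / gpu_min)  # combined rate -> combined time
    print(f"ideal CPU+GPU: {ideal:.2f} min (observed: 6:39, i.e. 6.65 min)")
    # -> ~6.31 min ideal vs 6.65 min observed: the CPU contributes ~10%.
    ```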

  • PA_ThePhilosopher Posts: 1,039
    edited December 2016

    Finally got around to running a test:

    • 1 minute, 15 seconds to 5000 iterations
    • (4x 780Ti's, no cpu, no optix)

    If anyone has a quad system with 980 Ti's or 1080's, let me know. I would be curious to see how they stack up.....

     

    -P

    Post edited by PA_ThePhilosopher on
  • If I wanna go all out and use speed instead of memory optimization, I get this:

    Total Rendering Time: 1 minutes 52.53 seconds - gpu & cpu with optix speed mode

  • Renders done with the viewport in OpenGL mode, not Iray (so there is no pre-computation of the shaders). I also fully close down Daz 3D in between tests and clear the log.txt file before each render.

    • Total Rendering Time: 03 minutes 09.60 seconds - gpu no optix
    • Total Rendering Time: 02 minutes 11.60 seconds - gpu optix
    • Total Rendering Time: 02 minutes 58.65 seconds - cpu & gpu no optix
    • Total Rendering Time: 02 minutes 07.44 seconds - cpu & gpu optix
    • Total Rendering Time: 19 minutes 34.19 seconds - cpu no optix
    • Total Rendering Time: 16 minutes 37.23 seconds - cpu optix

     

    specs:

    i7 5820k @ [email protected] - Gigabyte 980 TI XTREME 1530/4000 - 32GB ddr4-2800 - 120gb hp ssd - 960gb pny ssd - MSI X99A SLI PLUS - vg248qe -Logitech G303 - soundblaster e1/FiiO e07k - Superlux HD668B/sennheiser hd 380 - Windows 10 PRO

  • Finally got around to running a test:

    • 1 minute, 15 seconds to 5000 iterations
    • (4x 780Ti's, no cpu, no optix)

    If anyone has a quad system with 980 Ti's or 1080's, let me know. I would be curious to see how they stack up.....

     

    -P

    2min 43secs GPU & CPU with optix

    2min 49secs GPU only with optix

    4min 25sec  GPU & CPU No optix

    4min 35secs GPU  only No optix

    System: i7-6700 (4 cores) @ 4.0 GHz, with one Nvidia GTX 1080 (EVGA)

    Optix reduces the time by about 40%

    CPUs don't add much, even with 4 cores hard at work (99.99% CPU usage).

    Someone posted (OP?) that adding more GPUs would yield linear gains; e.g. two 1080 GPUs would halve the time: 1 min 22 secs.
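
    (A minimal sketch of that linear-scaling estimate, starting from the 2:43 CPU+GPU+OptiX time above. It ignores the CPU's fixed contribution and per-render overhead, which is why real multi-GPU gains taper off; treat the numbers as best-case ceilings.)

    ```python
    # Ideal linear scaling from the single-1080 time above (2:43, OptiX on).
    # Ignores CPU contribution and scene-load overhead, so these are
    # best-case ceilings rather than predictions.
    one_card = 2 * 60 + 43  # seconds
    for n in (1, 2, 3, 4):
        t = one_card / n
        print(f"{n} x GTX 1080 (ideal): {int(t // 60)}m {t % 60:04.1f}s")
    # 2 cards -> ~1m 21.5s, matching the "1 min 22 secs" estimate above.
    ```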

  • PA_ThePhilosopher Posts: 1,039
    edited December 2016
    docbother said:

    2min 43secs GPU & CPU with optix

    2min 49secs GPU only with optix

    4min 25sec  GPU & CPU No optix

    4min 35secs GPU  only No optix

    System: i7-6700 (4 cores) @ 4.0 GHz, with one Nvidia GTX 1080 (EVGA)

    Optix reduces the time by about 40%

    CPUs don't add much, even with 4 cores hard at work (99.99% CPU usage).

    Someone posted (OP?) that adding more GPUs would yield linear gains; e.g. two 1080 GPUs would halve the time: 1 min 22 secs.

    Thanks, yes I am interested in seeing how the gains stack up in a 4x GPU system. In my system with 780 Ti's they were almost linear (well, at least for the second card; gains are less with the third and fourth, but water cooling compensates for that). I'd be curious to see how a system of (4x) 980 Ti's or (4x) 1080's performs comparatively.

    -P

    Post edited by PA_ThePhilosopher on