Iray Starter Scene: Post Your Benchmarks!


Comments

  • outrider42 Posts: 3,679

    Is that CPU+GPU or the GPU alone?

    Either way, impressive, especially since the 1060 has "only" 1280 cores. My 670 has 1344 cores, but it took 6 or 7 minutes to render; I can't remember now. Of course the 1060 is clocked over 500 MHz faster, and that makes a huge difference, too. Better CUDA cores plus faster clocks = big boost. I want to search these pages to see what the 960 was getting.

  • PA_ThePhilosopher Posts: 1,039
    edited February 2017

    I just updated to the latest version of Daz Studio (4.9.3.166) and the latest NVIDIA driver (378.49), and my render time has increased from 1 min 15 sec to 1 min 55 sec... not sure why... DS is also unstable now, with regular crashes...

    my old driver was 362.00....an amazing driver---very stable and fast...

    -P

    Post edited by PA_ThePhilosopher on
  • grinch2901 Posts: 1,246

    Is that CPU+GPU or the GPU alone?

    GPU only

  • Taoz Posts: 9,743
    edited February 2017

    MSI GTX 1070 Armor OC (factory overclocked), Win 10/64 on 7-10 year old hardware (Asrock PB5-DE (PCI-E 1.1 slot), Q6600 quad core 2.4 GHz, 8 GB DDR2 RAM). Didn't think the card would work on the system, but it booted up fine, and Win 10 had already installed the latest drivers before I got the driver DVD loaded.

    Optix disabled:
    4798 iterations -  Total Rendering Time: 6 minutes 20.2 seconds

    Optix enabled:
    4812 iterations - Total Rendering Time: 4 minutes 15.69 seconds

    So it does fairly well even on older systems. Before, with CPU rendering, the test render took over 2 hours with 2100+ iterations.

    Post edited by Taoz on
  • outrider42 Posts: 3,679

    Now that is a cool test! Very impressive!  It's pretty cool to see you could pick up a PC from a yard sale or pawn shop and slap a 1070 in it to turn it into an Iray beast! LOL. So many tests with these new cards have i7's in their machines, so this is really nice. Astoundingly, that is still faster than the 1060 running with a brand new i7 (granted, a mobile i7), in an unquestionably better machine with all the possible advancements you can imagine in PCIe and RAM. Is the PC usable when it renders? Like, can you use a browser while it runs? I've got a Core 2 Quad sitting in the closet that is very similar, it might be a Q8600.

    In gaming, this would be a horrible idea, but for Iray that's not really the case. In the research I have seen, the CPU is not really important if you use a single GPU to render. It seems the CPU only starts to make a big difference when you try to run multiple GPUs. Then it matters, but even then, it's not like a world-changing difference unless you have seriously high-end GPUs. So if you want to maximize your quad-Titan dream Iray rig, you'll want dual Xeons to drive them. But I haven't seen a test with such an old PC; those tests were comparing newer CPUs.

    So, I wager that if you tried using a second GPU in that old system, you would not see as much of a boost as if those same cards were in a more modern PC, that's assuming it can even take one.

  • Alright so I just tested this with a 6700k and two 1070s and a 980 ti.

    Optix on

    2017-02-04 21:33:00.657 Total Rendering Time: 1 minutes 0.53 seconds

    Optix off

    2017-02-04 21:29:57.903 Total Rendering Time: 1 minutes 12.63 seconds

    Pretty dope.

  • Taoz Posts: 9,743

    outrider42 said:

    Now that is a cool test! Very impressive! [...] Is the PC usable when it renders? Like can you use a browser while it runs?

    I was a bit surprised by the speed too; I imagined there might be some major bottlenecks on that system, but after all it's the card that does the major part of the job.

    I don't really know what role(s) the CPU plays when doing GPU rendering with these cards, though. It's running at 100% when rendering the test render, yet I can still browse web pages and do other things without any problems, but I guess that's just because of Windows' process management.

    Just did another test render, btw; this time it took only 3 minutes 27 seconds with Optix. I uninstalled WTFast and rebooted before I did that render, so I wonder if that could be the reason.

  • junk Posts: 1,246

    2017-02-04 21:33:00.657 Total Rendering Time: 1 minutes 0.53 seconds

    Pretty dope.

    VERY DOPE!  Damn. I was thinking about picking up a second 1070 to help reduce my rendering times.  But you take that one step further with a 980TI!  Wow.

    Does anyone know if you can SLI two GTX 1070's by different manufacturers?  I know Daz/rendering does not want SLI but I'm thinking about when I'm gaming and want to push my Predator X34 screen to the max 100Hz with all details turned up.

  • Yes, you can SLI cards from different manufacturers.

  • PA_ThePhilosopher Posts: 1,039
    edited February 2017

    I just updated to the latest version of Daz Studio (4.9.3.166) and the latest NVIDIA driver (378.49), and my render time has increased from 1 min 15 sec to 1 min 55 sec... not sure why...

    my old driver was 362.00....an amazing driver---very stable and fast...

    -P

    I just reverted back to Daz 4.9.2.70 and to driver 362.00, and my render times decreased from 1 min 55 sec to 1 min 5 sec... not sure why it's faster on the older version/driver, but it is... my system kept crashing on DS 4.9.3.166 with any post-372 driver....

    -P

    Post edited by PA_ThePhilosopher on
  • How much of a difference is there in speed between a 980 TI, 1070 and 1080?

  • artphobe said:

    How much of a difference is there in speed between a 980 TI, 1070 and 1080?

    Not much... 980 Ti and 1080 are about equal.... 1070 a little slower...

  • artphobe said:

    How much of a difference is there in speed between a 980 TI, 1070 and 1080?

    Not much... 980 Ti and 1080 are about equal.... 1070 a little slower...

    But the 1070 and 1080 have more memory, and I believe run cooler.

    I know it's a bad comparison, but for example in Cycles CUDA the GTX 1080 averages 69 s and the 1070 averages 72 s (source: http://blenchmark.com/gpu-benchmarks).

    If it's that close, then it would be better to get a 1070.

  • PA_ThePhilosopher Posts: 1,039
    edited February 2017
    But the 1070 and 1080 have more memory, and I believe run cooler.

    For me, I've never really been too concerned about memory or power consumption when buying a GPU. I only have 3 GB on my 780 Ti's, but I've never had a problem with memory (plus, I must admit that reports I've heard from people who upgraded their VRAM, only to end up with less responsive viewport feedback and slower render times, have concerned me). And power consumption is really only critical for laptop users or those overly concerned about the efficiency of their hardware. As a general rule of thumb, higher power consumption = higher performance (assuming you can keep things cool, i.e., with water).

    Post edited by PA_ThePhilosopher on
    PA_ThePhilosopher said:

    [...] As a general rule of thumb, higher power consumption = higher performance (assuming you can keep things cool, i.e., with water).

    For the cost of watercooling a single 1080, I got two 1070s. It's not worth it unless you're already running a custom water-cooling loop and have an extra $200-300 to spend per GPU.

  • namffuak Posts: 4,071
    artphobe said:

    How much of a difference is there in speed between a 980 TI, 1070 and 1080?

    I have a 980 Ti and a 1080, and there is no significant difference in speed. But the 1080 has 8 GB of VRAM while the 980 Ti only has 6 GB. Also, in my long-running test render, the 1080 hits 70% power while running at full GPU load and clock speed, while the 980 Ti is running at 60% power. 70% of 180 W is 126 W; 60% of 250 W is 150 W, so the 1080 wins on both memory and power draw.
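
    A minimal Python sketch of that power math, using the published board-power ratings (180 W for the 1080, 250 W for the 980 Ti) and the load percentages reported above; these are estimates, not measurements:

        # Rough power-draw estimate while rendering: TDP x reported load.
        # TDP values are published board-power ratings, not measurements.
        cards = {
            "GTX 1080":   {"tdp_watts": 180, "reported_load": 0.70},
            "GTX 980 Ti": {"tdp_watts": 250, "reported_load": 0.60},
        }

        for name, c in cards.items():
            draw = c["tdp_watts"] * c["reported_load"]
            print(f"{name}: ~{draw:.0f} W while rendering")
        # GTX 1080:   ~126 W
        # GTX 980 Ti: ~150 W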

  • PA_ThePhilosopher Posts: 1,039
    edited February 2017

    For the cost of watercooling a single 1080, I got two 1070s. It's not worth it unless you're already running a custom water-cooling loop and have an extra $200-300 to spend per GPU.

    I think that was a wise decision. Multi-GPU setups provide the best bang for the buck and will always beat out a single GPU that is marginally better.... Rather than buying a single Titan, I always advise people to stack up two lesser cards (1080, 980 Ti, 780 Ti, etc). As for water cooling, it is helpful on 1 or 2 GPU's....But...as soon as you throw in a 3rd or 4th GPU, water is, well, how should I say it.... amazing... it will blow your mind... adding water is like adding 2 more cards... 

    Post edited by PA_ThePhilosopher on
  • Capsces Posts: 465

    Thought I'd put this in here in case anyone wants to see the results of upgrading a really old computer. This is a video card/memory upgrade comparison for a 9-year-old Core i7 965 Extreme in a Rampage II Extreme. Prior to the upgrades, I deleted the two spheres from the benchmark scene, because it looked like it was not going to render with them. I did leave the spheres in for the post-upgrade renders. I'm not sure the iterations are accurate. I did one render where the iterations were included with the render information (in the log file) after the render completed, but no other renders did that, so I went with the last line before the render ended.

    6 GB RAM
    Asus GeForce GTX 480 with 1.5 GB GDDR5 and 480 CUDA cores
    Neither run has Optix enabled

    CPU and GPU
    Time: 59 min. 46 secs.
    Iterations: 3920

    CPU Only
    Time: 59 min. 22 secs.
    Iterations: 3938

    GPU only would not render.

    12 GB RAM
    MSI GeForce GTX 1070 Gaming X with 8 GB GDDR5 and 1920 CUDA cores

    CPU and GPU
    Time: 5 min. 36 secs.
    Iterations: 4775

    CPU and GPU - Optix Enabled
    Time: 3 min. 58 secs.
    Iterations: 4799

    GPU Only
    Time: 4 min. 57 secs.
    Iterations: 4800

    GPU Only - Optix Enabled
    Time: 3 min. 23 secs.
    Iterations: 4794
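
    A minimal Python sketch that converts the times above into iterations per second, using only the figures posted; it makes the before/after comparison easier to see:

        # Convert the posted render times into iterations per second.
        def iters_per_sec(minutes, seconds, iterations):
            return iterations / (minutes * 60 + seconds)

        runs = [
            ("GTX 480,  CPU+GPU",         59, 46, 3920),
            ("GTX 480,  CPU only",        59, 22, 3938),
            ("GTX 1070, CPU+GPU",          5, 36, 4775),
            ("GTX 1070, CPU+GPU, Optix",   3, 58, 4799),
            ("GTX 1070, GPU only",         4, 57, 4800),
            ("GTX 1070, GPU only, Optix",  3, 23, 4794),
        ]

        for label, m, s, it in runs:
            print(f"{label}: {iters_per_sec(m, s, it):.1f} iterations/sec")
        # The GTX 480 runs land around 1.1 it/s; the 1070 with Optix is roughly 23 it/s.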

  • outrider42 Posts: 3,679
    Capsces said:

    Thought I'd put this in here in case anyone wants to see the results of upgrading a really old computer. [...]

    This and Taozen's test really seem to prove that if you use just one GPU, it simply doesn't matter what hardware you have for Iray. Pretty much everybody with a 1070, no matter what the rest of the build has, is getting sub-4-minute render times with Optix enabled. By the same token, all 1060's seem to be over 4, sometimes reaching into 5 minutes, again regardless of other components. Even a junky old PC with a 1070 can beat a brand new PC with a 1060. This is very important to consider... you do not need to buy a whole new PC to upgrade. 1080's are faster still, but not dramatically so (and certainly not enough for the price difference between them).

    So IMO, if money is a factor, buying a 1070 alone is a far better deal than buying a new PC *plus* a 1060. Even if you have squat for system RAM, even if you have older PCIe slots. The 1070 is the best deal, and with its 8 GB of VRAM it will, almost like magic, turn any old junker into a rendering beast. A 1060 will also provide an instant upgrade, but less so.

    If you have enough power for multiple GPUs, that's also a bang-for-buck consideration.

    Now I would like to see if anybody is crazy enough to try multiple GPUs in an older PC. I think this might be where the older hardware will start to bottleneck the render speeds. But the question is how much of a difference this makes on this kind of hardware. There is a small Iray benchmark test that ran several different setups, though the testing was only with very high-end equipment for the time (a variety of i7s, a Xeon, 980s and Titans). With CPUs that had fewer lanes, multi-GPU setups were slower than Xeon setups with more lanes. The more GPUs, the more this became evident. I can't find this test at this time, but they didn't test single-GPU setups like the ones here in this thread. These tests are far more relevant to regular users than those high-end tests.

  • outrider42 Posts: 3,679
    edited March 2017

    Ok, I thought I'd do some more experimenting. I also think it's time to start considering a new benchmark scene, with Pascal cards diving so easily into the 3 and 4 minute times. With only seconds between different classes of cards, maybe a bigger scene would show the disparity better.

    Anyway, I have a 970 4GB and a 670 2GB, an i5-4690K, and 16 GB RAM. Everything here is at stock clocks. If "optix" is listed, then Optix was ON. Then I list whether I used speed or memory optimization. After each test, I closed DS and reloaded it. These are in 4.9, unless noted. I'm also using the latest driver, 378.66. For reference, the 970 has 1664 CUDA cores, and the 670 has 1344 CUDA cores. The 970 is also clocked faster, over 200 MHz faster at their respective boosts. The log says the 970 has compute capability "5.2," while the 670 has "3.0." Interesting.

    970 optix memory  4:32

    670 optix memory  8:55

    970+670  speed   3:30

    970+670+i5   optix speed  3:36

    970+670  optix memory  3:12

    970+670  optix memory (test 2)  3:08

    970+670  optix memory (DS 4.8)  3:08  There doesn't seem to be much difference in speed in my case. And oddly, memory seems faster than speed, LOL.

    Ok, let's try something new. Let's push this scene a bit by doubling its render size. The original scene is 400x520. So how about 800x1040?

    Well....

    970+670 optix speed (800x1040)  11:12

    I looked at the log, the 970 rendered 2879 iterations. The 670 added 1518, so that totals 4397.

    Whoa, doubling the scene's dimensions nearly quadruples the render time here. At first I thought it was going well; it got to 93% complete in just 3 minutes! But after that, progress dropped to a crawl. However, I really couldn't see a difference in the progress of the image quality beyond 93%. You can clearly see bad pixelization and fireflies on those blue balls. But grain is everywhere in this image. Her reflection in the glass beside her is absolutely awful. This scene sickleyield created really showcases the worst elements of Iray, and just how much Iray hates darkness.
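
    That near-quadrupling lines up with simple arithmetic, since doubling both dimensions quadruples the pixel count. A minimal Python sketch using only the numbers above:

        # Doubling both render dimensions quadruples the pixel count,
        # so the per-iteration work roughly quadruples too.
        base_w, base_h = 400, 520
        big_w, big_h = 800, 1040

        print((big_w * big_h) / (base_w * base_h))   # 4.0

        # Iteration split reported in the log for the 800x1040 run:
        print(2879 + 1518)                           # 4397 total iterations (970 + 670)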

    So I want to challenge some people, especially those with their fancy Pascal cards, to try this scene at higher resolutions and see how they stack up. I think the results might be surprising!

    I edited to add the 970 and 670 solo numbers. I also decided to add some light to the scene. Two large 900k lumen planes of light were added, one directly above, the other on the right. This reduced the time considerably: the render time for the 800x1040 scene dropped to 7:55. That's a savings of 3:17! The render is a little bit cleaner, too, except for those hateful blue balls! You can actually see the texture in her pants, LOL.

    Post edited by outrider42 on
  • posecast Posts: 386

    Yes, someone please post the old benchmark with a 1080ti...

  • artphobe Posts: 97

    Also someone with a Ryzen CPU too please

  • artphobe said:

    Also someone with a Ryzen CPU too please

    Still waiting for my motherboard and RAM. I'll be sure to post some CPU and combined results (I have a GTX Titan X).

  • outrider42 Posts: 3,679

    I'd like to see Ryzen plus any of the Pascal GPUs to see if it makes any difference there, as well. But people need to get their hands on them first.

    I'm pretty certain 1080 Ti's will be beasts for Iray. Gaming benchmarks are turning in Titan Pascal performance, sometimes even slightly faster. So the 1080 Ti will probably render very close to the speed of the Titan. With a slightly higher clock, it might render even faster, but only slightly. There's no reason why it should be much different, with the exact same CUDA core count.
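
    A back-of-the-envelope Python sketch of that reasoning, assuming Iray throughput scales roughly with CUDA cores times boost clock (a big simplification, and the boost clocks below are approximate published specs, not benchmark results):

        # Crude relative-throughput estimate: CUDA cores x boost clock (MHz).
        # Published specs, approximate; real Iray results will differ.
        cards = {
            "Titan X (Pascal)": {"cuda_cores": 3584, "boost_mhz": 1531},
            "GTX 1080 Ti":      {"cuda_cores": 3584, "boost_mhz": 1582},
        }

        scores = {name: c["cuda_cores"] * c["boost_mhz"] for name, c in cards.items()}
        baseline = scores["Titan X (Pascal)"]
        for name, s in scores.items():
            print(f"{name}: {s / baseline:.2f}x relative to Titan X (Pascal)")
        # Suggests the 1080 Ti should land within a few percent of the Titan.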

  • artphobe Posts: 97
    edited March 2017

    outrider42 said:

    I'd like to see Ryzen plus any of the Pascal GPUs to see if it makes any difference there, as well. But people need to get their hands on them first. [...]

    Pretty sure having double the thread count would definitely help render times.

    I'm looking hard for a cheap Iray card now. Got my eyes on a $50 GTX 750 Ti, but it's 2 GB T_T - that VRAM is just too low, sigh.

    Where are all those low-end cards with over-the-top memory capacity when you need them?

    Post edited by artphobe on
  • Robert Freise Posts: 4,282
    artphobe said:

    Also someone with a Ryzen CPU too please

    Building the system now, maybe done by the end of the week.

  • outrider42 Posts: 3,679
    artphobe said:

    Pretty sure having double the thread count would definitely help render times.

    I'm looking hard for a cheap Iray card now. Got my eyes on a $50 GTX 750 Ti, but it's 2 GB T_T - that VRAM is just too low, sigh.

    Where are all those low-end cards with over-the-top memory capacity when you need them?

    I'm sure Ryzen will do well in CPU mode; it's GPU mode and GPU+CPU I'm curious about. It indeed may go faster, but then it may not make a big difference.

    Wow, a 750 Ti for $50 isn't bad. But still, try to hold out for more if you can. 2 GB isn't going to cut it now with these newer products that gobble up VRAM like Hungry Hungry Hippos. It's becoming a real problem, IMO. I've got 4 GB now, and that isn't doing much better. It's really frustrating.

    You could get a 670 4 GB for $90 on eBay right now, faster and with double the VRAM. The 760, 770, 960 and 970 are all available in 4 GB varieties in the $120-160 range.

    I have a 670, and it's fine other than being a 2 GB model. If it had 4 GB I probably would have waited longer to upgrade to a 970. I use both when I can.

  • Silver Dolphin Posts: 1,596

    They do have 750's with 4 GB of video RAM.
