Comments
Here's what it probably was:
Iray RTX 2019.1.1, build 317500.2554
Fixed Bugs
I.e., just the very first render after either an Iray update or an Nvidia driver update will take slightly longer with a Turing card.
Sounds about right. Thanks for that.
Thank you for these. So:
Without RT cores, before v4.12: GPU only, 1x GeForce RTX 2070 = 1 minute 49.11 seconds (OptiX Prime on)
With RT cores, in v4.12: GPU only, 1x GeForce RTX 2070 = 1 minute 27.3 seconds (OptiX Prime on)
~12% improvement with dedicated RT hardware? Am I missing something? Some guys were saying 2x-10x improvements.
Problems with arithmetic? It all depends on the specific scene, but the average gain seems to be 20% (test)
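For reference, the arithmetic on the two RTX 2070 times quoted above works out like this (plain Python, nothing Daz-specific):

```python
# Percent improvement between the two RTX 2070 runs quoted above.
before = 1 * 60 + 49.11   # 4.11, no RT cores: 109.11 s
after  = 1 * 60 + 27.3    # 4.12, RT cores:     87.30 s

time_saved = (before - after) / before   # fraction of render time cut
speedup    = before / after              # throughput ratio

print(f"Render time cut by {time_saved:.1%}")   # ~20.0%
print(f"Speedup factor: {speedup:.2f}x")        # ~1.25x
```

So on this scene the RT cores cut the render time by about 20%, i.e. a ~1.25x speedup.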
Yeah, to echo others, how much of a boost there is depends ENTIRELY on the specific composition of each individual scene. So far the best rule of thumb seems to be: the larger the scene (in terms of VRAM footprint/object complexity), the greater the boost. Unfortunately, all the benchmarking scenes currently around are optimized for small VRAM footprints (so that they can be rendered on all levels of hardware), making them bad candidates for showing how much of a difference dedicated RT Cores can make.
I wasn't comparing the right figures. I should have chosen the lowest non-RT figure against the 4.12 one. You're right, it's not much of an improvement. I think this blog explains it quite well:
E.g., some of the scenes tested there show a 2x or 3x improvement; others only on the order of 10-20%. The bench scene in this thread is no longer fit for purpose. Or perhaps it is, and Daz users just don't generally deal with the kind of scene complexity that would result in a huge speedup. Anyway, it doesn't matter so much to me. I'm just happy this stuff is getting out there. Hardware manufacturers will iterate on it and make improvements. It's been fun playing with RTX in Unreal Engine too.
I made a new benchmark scene that will show the power of RTX cards here https://www.daz3d.com/forums/discussion/344451/rtx-benchmark-thread-show-me-the-power#latest
No need for a big scene
Late to the party, but ran the original scene on a seven-year-old Dell T5600 with 2x 1050Ti and dual Xeon 8-core with Daz 4.11. Also did a single run on a two-year-old iMac. Both have 32GB of memory, but the clock speed for the CPU is about twice as fast on the iMac.
2x 1050Ti with 16-core CPU assist: 3min 6sec, 3min 3sec, 3min 4sec, 3min 6sec. Average is about 3min 5sec.
Quad-core i7 iMac, CPU only: 45min 5sec. (I have no interest in reruns on the iMac without Nvidia cards.)
I hate to render on the iMac, but setting up a scene is more comfortable for me on the iMac than the Dell. The Dell was a frustration-purchase because of Apple's Nvidia=Never OSX updates.
You should check out Iray Server. It isn't free past the first month, but it would let you have your cake and eat it too: scene setup on your iMac, renders on any/all Windows or Linux boxes in your house, while you SIMULTANEOUSLY set up the next scene back on your iMac.
Now that I have fitted a GTX 1060 6GB to the same machine, running GPU only, the benchmark dropped to 4 mins 14 secs. Have to say I'm a little happy.
Regards,
Richard
I have a 2010 i5 iMac. This ought to be interesting. I'll let you know.
Just a note...
I was testing the new 4.12 Beta and renders are finishing before the 5000-iteration mark. It seems the new convergence formula is deciding that renders are complete a lot sooner.
You will have to manually adjust that value, putting it above the default setting, or your benchmarks will not be accurate (from card to card, or from 4.11 to 4.12 Beta, if you are comparing notes).
I will post more benchmarks once I have run one with the correct values for 4.12-beta.
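In the meantime, a crude workaround for comparing runs that stop at different points is to normalize by iteration count (just a sketch; the times and iteration counts below are made-up placeholders, not real results):

```python
# Minimal sketch: express each benchmark run as seconds per iteration, so runs
# that converge (and therefore stop) at different iteration counts stay roughly
# comparable. The numbers below are placeholders, not measured results.
def seconds_per_iteration(total_seconds: float, iterations: int) -> float:
    return total_seconds / iterations

run_full  = seconds_per_iteration(105.7, 5000)  # hypothetical run that hit the full 5000
run_early = seconds_per_iteration(63.1, 3100)   # hypothetical run that converged early

print(f"Full run:  {run_full * 1000:.1f} ms/iteration")
print(f"Early run: {run_early * 1000:.1f} ms/iteration")
```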
Some people have made new benchmark threads that cap at a set iteration count.
FWIW, 4.10's convergence algorithm routinely did this as well. It's one of the main reasons I ended up developing an alternative scene to use for benchmarking going forward: the inherent lack of consistency.
ETA: Having said that, I'd actually advise AGAINST attempting to tweak Sickleyield's original scene toward more consistency this late in the game. Doing so would essentially mean having two virtually identical-looking versions of the benchmarking scene floating around, which is a guaranteed recipe for confusion down the line (hence why I didn't just tweak an existing benchmarking scene for the newer benchmarking thread I started).
SY's scene has already been consistently inconsistent about this for years now.
Yes, I agree with the issues that the scene has, but it is what I keep using as my baseline. I'll try a newer one next...
Finally, after a long wait, Titan-V cards now work on BOTH 4.11 and 4.12-Beta (Public). Honestly though, they may have worked months ago, but I have not had time to test them out.
For those looking for Titan-V benchmarks, here are some, in comparison to the Titan-Xp Collectors Edition cards.
Daz3D V4.11
Titan-Xp Col Edi, No OptiX (149.342sec), With OptiX (105.732sec)
Titan-V, No OptiX (78.362sec), With OptiX (63.192sec)
Daz3D V4.12-Beta (Public) * Seems that OptiX is the only choice... Unselecting it does not stop it. Render test adjusted to 99%, so it actually hit 5000 iterations.
Titan-Xp Col Edi, No OptiX (103.636sec), With OptiX (103.636sec)
Titan-V, No OptiX (63.132sec), With OptiX (63.132sec)
I tried TCC mode for both versions, but the numbers were WORSE than WDDM mode, though only by a few seconds, contrary to what the suggestion in the logs would indicate. The result is not more efficient at all, in any variation of rendering, on either version, though it should be. In all fairness, this was only done with ONE card in TCC mode at a time. I think it may show gains with multiple cards in TCC mode, which I will try after I get both of my Titan-Vs set up with at least one Titan-Xp Collectors Edition card. I can't fit two of the Collectors Edition cards in alongside the Titan-Vs, because they have an annoying cosmetic guard around the fan that can't be removed unless I use a hacksaw. (That may be an option.) However, I do have a normal Titan-X (the non-Pascal version) that I can throw in there for more comparison. All cards can be put in TCC mode, but I need one in WDDM mode or I can't see anything; my CPU does not have integrated graphics, being the i9-7980XE.
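For anyone who wants to check which driver model each card is currently in before experimenting, here's a rough sketch. It assumes nvidia-smi is on the PATH and that your driver supports the driver_model query fields on Windows:

```python
# Hedged sketch: list each GPU's current driver model (WDDM vs TCC) on Windows.
# Assumes nvidia-smi is available and supports the driver_model.current field.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,driver_model.current",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)

# Switching a non-display card is typically done from an admin prompt with
#   nvidia-smi -i <index> -dm 1      (1 = TCC, 0 = WDDM)
# and usually needs a reboot to take effect.
```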
Next is Dual Titan-V's... But I have to flip my water cooler around, if I want to fit all four cards in the tower. :)
UPDATE: I managed to fit all four cards in the machine! Just took some cutting and bending of parts. (Seriously)
Daz3D V4.12-Beta (Public) * Same OptiX caveat as above; render test adjusted to 99%, so it actually hit 5000 iterations.
Titan-V [x2] + Titan-Xp Col Edi [x2] + CPU, No OptiX (21.558sec), With OptiX (21.541sec), Optimized for speed and OptiX (21.119sec)
P.S. I shaved a lot more time off the Titan-Xp Collectors Edition by setting the optimization setting to "SPEED" instead of "MEMORY": from 149.342sec down to 113.425sec. The Titan-V was faster too, but it only went from 78.362sec down to 62.981sec.
Final note...
With both of the above cards together, the best time with OptiX was 39.970sec, on both versions of Daz3D.
Yes, many of my renders, especially ones with no water or clouds or other amorphous substances, are now mostly finishing before the 2000 max iterations I set for them. I set convergence at 95% / 1.0 with CPU rendering on, and renders are (relatively) fast on this 7-year-old laptop now.
I think it's because the convergence criteria in the SDK or something have changed. E.g., the render I'm doing now is at 56% after only 1 hour 36 minutes 16 seconds but has done only 138 of 2000 allowed iterations. It can run as long as it takes to do 2000 iterations.
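For what it's worth, projecting that render forward at a constant iteration rate gives a worst-case figure (a rough sketch using the numbers above):

```python
# Rough projection of the render described above, assuming a constant
# iteration rate (138 iterations in 1h 36m 16s).
elapsed = 1 * 3600 + 36 * 60 + 16   # 5776 s
done, budget = 138, 2000            # iterations completed / iterations allowed

rate = elapsed / done               # ~41.9 s per iteration
worst_case = rate * budget          # if it never hits the convergence target first

print(f"{rate:.1f} s/iteration")
print(f"Worst case (full 2000 iterations): {worst_case / 3600:.1f} hours")  # ~23 h
```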
Hi,
I decided to check 4.12 with a heavy-load test: image size 4096x2048px, max samples 10000 iterations, max time 14400 secs, rendering quality 5.0, no denoiser.
Hardware: RTX 2080 Ti (dedicated to rendering), Ryzen 7 1700, 64GB 2400MHz RAM, Daz installed and the scene saved on an SSD (WD Blue 1TB SATA), Windows 10 on an SSD (ADATA XPG Gammix S11 Pro 512GB M.2 NVMe), GTX 970 (for Windows).
Running on 4.11:
2019-09-01 23:11:32.275 Total Rendering Time: 2 hours 37 minutes 3.8 seconds
Running on 4.12:
2019-09-02 05:27:05.118 Total Rendering Time: 2 hours 3 minutes 25.85 seconds
I looked for the Iray version in the log file and found:
2019-09-02 03:21:15.039 Iray [INFO] - API:MISC :: 0.0 API misc info : Iray RTX 2019.1.3, build 317500.3714, 19 Jul 2019, nt-x86-64
Somebody mentioned that the first render is slower and the following ones are faster, so I rendered again but didn't see any difference:
2019-09-02 07:47:01.392 Total Rendering Time: 2 hours 3 minutes 22.49 seconds
In this case the render was roughly 27 percent faster (about 21 percent less render time). I didn't notice any difference in brightness between the two versions, which is what happened between 4.10 and 4.11. I don't know if there will be any disadvantage to keeping 4.12 Beta.
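The comparison of those two totals, written out:

```python
# Compare the 4.11 and 4.12 totals for the heavy test scene above.
t_411 = 2 * 3600 + 37 * 60 + 3.8    # 9423.80 s
t_412 = 2 * 3600 + 3 * 60 + 25.85   # 7405.85 s

print(f"Time saved: {(t_411 - t_412) / t_411:.1%}")   # ~21.4% less render time
print(f"Speedup:    {t_411 / t_412:.2f}x")            # ~1.27x faster
```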
I got a new PC with a Ryzen 7 3700X and an MSI GeForce RTX 2080 Ti GAMING X TRIO.
Here are my results with the IrayTestSceneStarterEssentialsOnly scene; the first figure is Daz 4.11 and the second is Daz 4.12.67 Beta.
GPU + OptiX: 1 minutes 8.94 seconds / 56.28 seconds
GPU only: 1 minutes 21.65 seconds / 53.84 seconds
GPU + CPU + OptiX: 1 minutes 7.26 seconds / 57.5 seconds
GPU + CPU: 1 minutes 21.37 seconds / 54.74 seconds
One question: how do I know if RTX is being used? Is there something in the log?
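One rough way to check is to scan the Daz log for the Iray/OptiX lines it prints (see also the note near the end of the thread about what the 4.12 log reports). A sketch; the log path below is an assumption, the usual default location on Windows, so adjust it to wherever your log.txt actually lives:

```python
# Hedged sketch: print every Iray/OptiX-related line from the Daz Studio log.
# The path is an assumed default; point it at your actual log.txt if different.
import os
import re

log_path = os.path.expandvars(r"%APPDATA%\DAZ 3D\Studio4\log.txt")

with open(log_path, encoding="utf-8", errors="replace") as f:
    for line in f:
        if re.search(r"Iray RTX|OptiX", line, re.IGNORECASE):
            print(line.rstrip())
```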
Which is faster to render with IRAY?
i7-9700k w/32gb memory
or
Nvidia GTX 980 4gb
From what I've read, an i7-9700K is about 10% more powerful than an i7-8700K, and a GTX 980 is about 87% as powerful as a GTX 980Ti. Based on the following performance numbers (found here):
i7-8700K: 00.414 ips
GTX 980Ti: 2.094 ips
That would put the i7-9700K and GTX 980 at roughly:
i7-9700K: 00.455 ips
GTX 980: 01.822 ips
That puts the GTX 980 at roughly four times the throughput of the i7-9700K (i.e., a bit more than 3x faster).
As a rule, CPU Iray performance pales in comparison to GPU performance across multiple generations. The only reason why CPU might be superior over GPU is if VRAM size is an issue.
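The extrapolation above, written out (the 10% and 87% scaling factors are the rough figures quoted, not measured values):

```python
# Extrapolate Iray throughput (ips = iterations per second) from the quoted baselines.
i7_8700k  = 0.414    # ips, from the benchmark table linked above
gtx_980ti = 2.094    # ips

i7_9700k = i7_8700k * 1.10    # assumed ~10% faster than the 8700K
gtx_980  = gtx_980ti * 0.87   # assumed ~87% of a 980 Ti

print(f"i7-9700K ~ {i7_9700k:.3f} ips")               # ~0.455
print(f"GTX 980  ~ {gtx_980:.3f} ips")                # ~1.822
print(f"GPU/CPU ratio ~ {gtx_980 / i7_9700k:.1f}x")   # ~4.0x the throughput
```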
Thank you! I wish this forum had a thank you button feature.
RTX 2070 Super
DS4.12 GPU (Optix Prime) = 1 minute 11 seconds
DS4.12 GPU (NO Optix Prime) = 1 minute 07 seconds
@DigitDotz how much did that cost?
Intel Core i7-6700K @ 4 GHz
GeForce GTX980 x2 (Kingpin)
Total Rendering Time: 3 minutes 1.44 seconds (Two Daz instances open)
2 minutes 54.24 seconds (closed the other Daz instance)
Daz 4.11 Pro
Intel Core i7-6700k @ 4 GHz
Single GeForce 1080 TI Aorus
Total Rendering Time: 2 minutes 58.13 seconds
Aorus OC mode
2 minutes 54.33 seconds
2x GeForce 1080 Ti Aorus (no SLI)
2019-09-20 23:41:40.798 Total Rendering Time: 1 minutes 39.60 seconds
with 1 SLI bridge: 2019-09-21 00:01:03.278 Total Rendering Time: 1 minutes 36.59 seconds
Daz 4.11 Pro
MSI Gaming Trio X 2080 Super, with OptiX on, in the beta Daz Studio:
2019-09-21 09:27:57.616 Total Rendering Time: 52.7 seconds
With OptiX off:
2019-09-21 09:30:57.447 Total Rendering Time: 50.97 seconds
Kinda weird that OptiX off rendered faster for this one, lol.
If you dissect the log file you will discover that the OptiX Prime acceleration toggle in 4.12 Beta is a placebo if you are rendering with a Turing GPU. OptiX (non-Prime) and consequently RTCores ALWAYS get used for raytracing rather than OptiX Prime. The only difference the toggle makes is that it generates non-critical error messages in the log file about it no longer meaning anything.