Indoor scenes can be reasonably complex too when you have tons of elements in them. RTX cards definitely have an advantage in ray-intersection-heavy scenes because of the RT cores, but since outdoor scenes already render quickly enough, we might want something different. Looks like we're going to have to design several different scenes and test them out.
There is also another option that I didn't think of at first, but that should be more logical if we want to stay within a typical DS scene: using strand hair.
I haven't tested the functionality yet, that's why; but in theory some dense hair mesh should do the trick to show the RT core performance advantage.
So if someone wants to build a test scene with lots of hair, he/she is welcome to.
BTW, as some people have mentioned, there is a thread for discussions like this one here: https://www.daz3d.com/forums/discussion/321401/general-gpu-testing-discussion-from-benchmark-thread#latest
Here is a blast from the past.
I am using a 4 core/8 thread Xeon, E3-1245, 3.4 GHz, 16GB RAM with an Nvidia Quadro 2000. The machine dates to Aug 2013 when it became my work PC, then after it was retired in January the company offered it to me instead of throwing it away.
Benchmark times: Full scene with graphics card and optimization: 50 mins 56.82 secs to get to 5000 iterations and 99.99%, so almost there. It was at 90% after 3 mins 46 secs.
Have not seen times like this since page 4 of this thread. Looks like a secondhand 1060 is under £100, so it may be worth getting for the machine. Dunno, it is a fair bit of money.
Regards,
Richard.
If the 1060 is the 6GB version, I guess it's a good deal!
regarding 1060s and such...
----
I had a 980ti (and still do) but added a 1050 for my second machine and then stuck them both in my main machine.
----
obviously I knew the 1050 only had 76 cuda cores but it wasn't originally purchased for rendering anyway.
---
But it wasn't until I ran GPU Shark in detail mode that I discovered a lot of data that may be buried in the specs on nvidia's website, but not on the boxes or in the product descriptions you see in catalogs.
---
the 1050 has 4GB .. the 980 Ti has 6GB, so that would seem to be a wash except if I have a big scene and drop to cpu..
but wait, the GDDR5 in the 1050 has a clock speed of about 130 while the GDDR5 in the 980 Ti has a clock speed of 384, whoops
---
and so on for most other specs; almost everything on the 1050 was appreciably lower.
---
so I decided to look for another 980 Ti but they were running an average of $350 and up... and I could only find the 6GB, not a 12GB...
so I googled for 12GB and found a Titan X with 12GB for $400 and loaded up my paypal credit.
----
I need to run these tests to get some benchmarks
but I got some quick data from a cpu/gpu benchmark site when I was checking cards..
----
2080 Ti 11GB: score 16,800, running ~$1000 (about 50% higher benchmark than the 980 Ti but only about 25% higher than the Titan)
2080: score 15,510
1080 Ti 11GB: score 14,200, running ~$550 (about 1/3 higher than the 980 Ti but less than 10% higher than the Titan)
Titan X 12GB: score 13,665 (grabbed the $440 one, I did... the Titan is not a huge leap past the 980 Ti, maybe around 20%, but it does have the 12GB and really didn't cost that much more than a 980 Ti)
980 Ti 6GB: score 11,400, $300-600
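Those ballpark percentages can be sanity-checked from the scores alone; here's a quick sketch (pure arithmetic on the numbers listed above, nothing else assumed):

```python
# Sanity check of the relative-performance figures quoted above,
# computed only from the generic benchmark scores in this list.
scores = {
    "2080 Ti": 16800,
    "2080": 15510,
    "1080 Ti": 14200,
    "Titan X": 13665,
    "980 Ti": 11400,
}

for card, score in scores.items():
    vs_980ti = (score / scores["980 Ti"] - 1) * 100
    vs_titan = (score / scores["Titan X"] - 1) * 100
    print(f"{card}: {score}  ({vs_980ti:+.0f}% vs 980 Ti, {vs_titan:+.0f}% vs Titan X)")
```

That comes out to roughly +47% for the 2080 Ti over the 980 Ti and +23% over the Titan X, so the rough figures above hold up.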
----
they talk about the cuda cores being a factor... well, all 5 of the cards listed have about the same number
----
I think it might be interesting to make sure we include the cuda count with our benchmarks.
Just got my hands on a 2080ti today and, combined with a 1080ti, the result was Total Rendering Time: 43.64 seconds. Pretty happy with that!
What would you like me to try?
Total Rendering Time: 4 minutes 48.50 seconds (1 x 2080ti)
Total Rendering Time: 3 minutes 16.85 seconds (1 x 2080ti + 1 x 1080ti)
Image on the right was with two cards
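For reference, the scaling from the second card is easy to put a number on; a quick sketch using just the two times above:

```python
# How much did adding the 1080 Ti to the 2080 Ti help?
single = 4 * 60 + 48.50   # 2080 Ti alone: 4 min 48.50 s
dual = 3 * 60 + 16.85     # 2080 Ti + 1080 Ti: 3 min 16.85 s

print(f"speedup: {single / dual:.2f}x")                        # ~1.47x
print(f"render time cut by {(1 - dual / single) * 100:.0f}%")  # ~32%
```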
what's the link to the newer test scene, the one used right above?
---
it's not the one on the first page
here's a start on a comparison chart ...
the gpu benchmark would be a generic one as opposed to a 3d only one.
----
but it's also obvious that the Daz version makes a big difference:
from 4.10 to 4.11 my test sped up from about 4.5 minutes to about 1.5, roughly 3x...
----
Is it possible to kill the preview? Is it created only on the video card, or is data being passed back and forth between Daz and the card to show the render progress? In which case, does anyone have a way to compare times on an SSD vs an HDD?
2 or 3 minutes on an AMD FX 8300 @ 4.6 GHz and 2x 1070, with SLI enabled in the nvidia control panel and OptiX disabled in Daz
does the SLI make a difference?
enabled in the nvidia control panel, but I'm guessing you have to have the physical bridge installed for that?
just realized that my memory usage only jumped by about 3000 MB for the test, but now I wonder if I should have had a few other things turned off?
---
I do tend to have lots of stuff going on.
If you check the size of the Iray install folder inside the Daz Studio installation locations for each version, 4.11's is approximately 4x larger because of how much internal code was changed/rewritten between them. They almost might as well be different render engines altogether. Hence the time differences.
The preview makes virtually no difference performance-wise. But if you wish to render without it, go to:
Render Settings > Editor tab > General > Render Target
And change it from the default value ("New Window") to "Direct To File".
Do your renders using the SSD and then the HDD, and make sure to save the contents of the DS log file both times. Then look for the "Total Rendering Time" line and the per-device line like this one:
2019-08-01 01:52:08.378 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 0 (GeForce GTX 1050): 1800 iterations, 4.792s init, 2345.589s render
Subtract the per-device render value (the "2345.589s render" figure above) from the overall Total Rendering Time, converted into seconds. The difference will be the amount of time taken by your specific system configuration to initialize things and move data from disk to GPU rather than doing the rendering itself.
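If you want to automate that comparison, here is a minimal sketch that pulls both values out of a saved log; the path is a placeholder and the regexes assume the line formats match the snippets quoted in this thread:

```python
# Minimal sketch: extract the per-device render time and the overall
# "Total Rendering Time" from a saved Daz Studio log, then report how
# long init + disk-to-GPU transfer took. Path and line formats are
# assumptions based on the log snippets quoted in this thread.
import re

LOG_PATH = "ds_log.txt"  # placeholder: wherever you saved the DS log

device_re = re.compile(r"CUDA device \d+ \((.+?)\): \d+ iterations, ([\d.]+)s init, ([\d.]+)s render")
total_re = re.compile(r"Total Rendering Time: (?:(\d+) minutes? )?([\d.]+) seconds")

render_times, total_seconds = [], None
with open(LOG_PATH, encoding="utf-8", errors="ignore") as log:
    for line in log:
        if m := device_re.search(line):
            render_times.append((m.group(1), float(m.group(3))))
        if m := total_re.search(line):
            total_seconds = int(m.group(1) or 0) * 60 + float(m.group(2))

if total_seconds and render_times:
    longest = max(t for _, t in render_times)  # slowest device bounds the render phase
    print(f"total: {total_seconds:.2f}s, render phase: {longest:.2f}s")
    print(f"init / load overhead: {total_seconds - longest:.2f}s")
```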
it makes rendering 100x faster on my PC, but then SLI works best if you have 2 cards which are the same; OptiX acceleration is more for server rendering than desktop PCs
just noticed this info in the log file:
2019-08-04 21:09:10.197 NVidia Iray GPUs:
2019-08-04 21:09:10.203 GPU: 1 - GeForce GTX TITAN X
2019-08-04 21:09:10.203 Memory Size: 11.9 GB
2019-08-04 21:09:10.203 Clock Rate: 1076000 KH
2019-08-04 21:09:10.203 Multi Processor Count: 24
2019-08-04 21:09:10.203 CUDA Compute Capability: 5.2
2019-08-04 21:09:10.203 GPU: 2 - GeForce GTX 980 Ti
2019-08-04 21:09:10.203 Memory Size: 5.9 GB
2019-08-04 21:09:10.203 Clock Rate: 1190000 KH
2019-08-04 21:09:10.203 Multi Processor Count: 22
2019-08-04 21:09:10.203 CUDA Compute Capability: 5.2
-----
wonder how other cards read at this point.
this is the data generated when first loading a scene.
so it's nvidia rating the card?
somebody please post benches of the RTX 2060 Super and 2070 Super with the default SY, thank you.
if the default is the one on the first page of the thread
then with a titanx and a 980 ti .. I did it in one minute and 46s in 4.11
----
earlier in 4.10
the scores were 4 minutes 57s for the titan
5 minutes 3 secs for the 980 ti
and 4 min 22 secs for the cpus: dual xeon 2630v3s,
so 16 cores/32 threads, which should be better than a single cpu
-----
but in general running the cpu with the cards doesn't seem to make much diff
----
updated list of gpus.. the benchmark scores are from the generic GPU benchmark... which pretty much stack up as expected.
but there are large variations in cuda counts etc.
---
so it would be interesting to see a test result from each of these cards by itself
--
note, the mobile versions of all these cards seem to score at least 10% lower than the desktop versions.
---
so while in general usage the cards would stack pretty much by which one is supposed to be the higher card,
the number of cuda cores may still play a factor in 3d rendering
the effects of the gamma rays on the man in moon marigold rendering system.
===
benchmark from 3 days ago for the page scene in 4.11 .. titanx and 980ti; optix is on for all of them, as the first time I tried it, it cut the time a lot
it has been suggested that optix is for render farms... I think that maybe the program treats the extra card as a render farm
remember that sli is not recommended; could that be because the second card isn't seen as a second card?
one minute and 46s in 4.11
today
three minutes and 24s on the titanx
four minutes and 14s on the 980ti
two minutes and 30s on both of them together, which is about 45 seconds or around 30% slower than the other day
is mercury in retrograde? is my electricity not as energetic?
I am preparing an article for the Journal of Irreproducible Results.
----
I did a test on the cpus only... dual xeon 2630v3, 16 cores/32 threads, took a mere 23m
okay, I love my cards.
---
the listing of cards is in a spreadsheet, so I was able to plug in the data for the benchmarks
I think I can handle a variety of two-card setups by using a layout like this
====
the first time is the time for the card the line is for, then "w/" (a second card added) and the combined time in the next column; the next "w/" will be another card, with the time for that card plus the base card following it.
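For example, using the 4.11 times already posted above (purely to illustrate the columns):

980 Ti | 4:14 | w/ Titan X | 2:30
Titan X | 3:24 | w/ 980 Ti | 2:30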
heya folks, daz3d got RTX support with the latest nvidia studio driver, can someone test it?
https://www.nvidia.com/en-us/design-visualization/rtx-enabled-applications/
That has been tested, right here in this thread, and in a couple of others. We cannot test it directly because Daz does not offer any kind of RTX on/off switch, so we have no way of seeing exactly how much the ray tracing cores are adding. It is simply on all the time. This is only in the new 4.12 beta.
But 4.12 is out, you can download it now
Yes, my post was past tense. We have been on this since day one. Please look back through this thread to find some answers. There are also other threads that explore the topic. Use Google to search for them, not the Daz Forum Search, which is horrible.
Yeah, we know. E.g. see this thread for some comparative numbers. The press release from Nvidia today is technically old news.
As posted in the other test scene thread, my previous times were:
, with 4.12 (2070 only) they were:
Don't turn on Optix Prime if you've got an RTX GPU (I'm wondering if those 5 seconds were due to some caching or similar, hence Optix Prime setting may be irrelevant). Also 4.12 beta is around 63% faster for me.
Edit: Confirmed, Optix Prime setting is irrelevant, i.e. it was slower because that was the first bench I did (probably cached shader compilation or similar). Running again I get: