Comments
Awesome thanks!
7700k vs 8700k probably. It makes a dif in game benchmarks as well.
OK Got the NVLink to work!
Dual ASUS RTX 2080 TI Turbo's, Optix On, NVLink - in SLI mode ON in NVidia settings - SickleYields Test render: Total Rendering Time: 35.4 seconds
Dual ASUS RTX 2080 TI Turbo's, Optix On, NO NVLink - SLI mode Off in NVidia settings - SickleYields Test render: Total Rendering Time: 34.2 seconds
Dual ASUS RTX 2080 TI Turbo's, Optix On, NVLink - SLI mode Off in NVidia settings - SickleYields Test render: Total Rendering Time: 34.13 seconds
Conclusion: NVLink gives only a fraction of a second's improvement, and SLI ON, as before, gives a decrease in speed.
Did another render with the other card, as I think one is running at 16x and the other at 8x; there is also an airflow difference between the cards. The new time is as follows.
SickleYield Test scene Single TI Turbo 2: Total Rendering Time: 1 minutes 6.61 seconds
SickleYield Test scene Single TI Turbo 1: Total Rendering Time: 1 minutes 16.79 seconds
EDIT - Re-ran the render with the #1 and #2 cards, and the times vary from one render to the next. #1: Total Rendering Time: 1 minutes 6.24 seconds, #2: Total Rendering Time: 1 minutes 6.9 seconds, #1: Total Rendering Time: 1 minutes 5.91 seconds, #2: 1 minutes 6.29 seconds
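To put a rough number on that run-to-run spread, here is a minimal Python sketch using only the times quoted in the EDIT above (the card assignments are as reported):

```python
from statistics import mean, pstdev

# Run-to-run spread of the four re-run times quoted in the EDIT (seconds),
# alternating card #1 and card #2.
reruns = [66.24, 66.9, 65.91, 66.29]
print(f"mean {mean(reruns):.2f} s, spread +/- {pstdev(reruns):.2f} s")
# ~66.3 s +/- ~0.4 s for both cards, so the earlier 76.79 s reading for
# card #1 looks like an outlier rather than a real card-to-card gap.
```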
Very cool info. Thanks jonascs.
outrider42, it was only a thought; they're not the same computer, and there may be many differences between them.
Yes there seem to be some differences, not only size and fans but clock.
My cards are actually A LOT smaller than my previous 1080's
Yes, I noticed that too. As I said... the beta is SLOOOW! No idea why...
Yeah, I see something similar with a scene I was just running on an R7 1700 (non-X) in 3DL.
It is probably just debugging code still active in the beta. I've seen similar in the past with some beta variants.
Well, regardless of how the beta runs, the test of 34.13 seconds with two 2080tis is almost as fast as FOUR GPUs by nothingmore. Nothingmore has (3) 1080ti's + Titan Xp and got 32.53 seconds. Nothingmore also noted that they were reaching thermal throttling, which yeah, 4 gpus in a box might do that, LOL.
So the mark of 34.13 is certainly a record in the SY bench for 2 gpus.
Thetreeinfront ran the SY bench in 28.65, which I believe is the overall record, but that took 3 gpus, one being a 2080ti, along with a 1080, and a 1080ti.
However, the differences in these times are very small. When you look at my bench, though, the gap is surprising.
Nothingmore ran my 2018 bench in 1:27, while jonascs ran it in 2:46. Suddenly, what was less than 2 seconds in SY's bench became a staggering 79 seconds in my bench. I would expect the gap to be bigger, but not that much bigger. That comes out to a 6% difference in the SY bench, and a 47% difference in my bench.
What does my bench do to punish jonascs so much? The time is not scaling as expected. I'd like to see Thetreeinfront run my bench if possible.
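For anyone double-checking the arithmetic, here is a minimal sketch that reproduces those rough percentages from the posted times; expressing the gap as a fraction of the slower run is an assumption about how the 6% / 47% figures were derived:

```python
# Reproducing the rough percentages from the posted times (seconds).
def gap_pct(slower, faster):
    return (slower - faster) / slower * 100

sy  = gap_pct(34.13, 32.53)              # SY bench: 2x 2080 Ti vs 4-GPU box
b18 = gap_pct(2 * 60 + 46, 1 * 60 + 27)  # 2018 bench: 2:46 vs 1:27
print(f"SY bench gap: {sy:.0f}%, 2018 bench gap: {b18:.0f}%")
# roughly 5% vs 48% -- close to the 6% and 47% quoted above.
```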
[edit] Forget all that, I now understand that the beta is not always showing me the frame it's rendering. The combo box shows Default Camera but it's rendering Perspective. So in fact, everything I wrote here is reversed. The right run is the one I thought was the bad one... 1 min 8 is my card when it stops at 5000. [/edit]
Sorry for the wall of log, should be enough to see the speed without putting every line of the original.
I tried to test with outrider42's scenes but I couldn't get anything reliable. At first, on my 1080 alone on 4.10 and 4.11, I was getting the render in 5 seconds.
I realised after a while that the setting "Rendering Quality Enable" was ON, and turning it off allowed the render to reach 5000 iterations. Then I was getting a very different number of minutes on each run, which didn't make any sense.
I noticed that SickleYield's test seems to assign all the settings (except the camera the first time) to give a more consistent result.
Also, I noticed that I can load the scenes, then do 10 tests back to back (just deselecting Iray in the Aux Viewport and reselecting it after changing the card selection) and always get the same render speed, but when I loaded one scene, then the other, then the first, then the other again, my 29 seconds turned into 2 minutes on one of the runs.
I didn't post anything because I didn't know what was going on, but tonight I did more tests. I got this bad result (it stopped before reaching the full 5000 iterations). I then did a test on the 1080ti alone.
Then reselected only the 2080ti. The result is quite different. I think it's a beta issue at that point because everything else was exactly the same except the time of day.
I cannot see the cause of the difference. The run before the bad one was also a 53-second one. I thought at first it was because I was changing scenes without restarting the application, but this time I'm 100% sure nothing was modified (the scene was not reloaded), except for rendering the file "Render 5.jpg" between those two runs with the 1080ti.
Also, I tried to see why the beta is faster on the same content: I created a scene with 4 characters close to the camera, saved it, loaded it in 4.10 and rendered, then did the same in 4.11.
I specifically wanted to see why, in the past, I was seeing less memory used, because I thought at the time (I was on Win7) that that was the reason for the faster rendering in the beta...
Here's the interesting part:
4.10 Texture memory consumption: 5.64942 GiB (device 1)
becomes
4.11 Texture memory consumption: 1.69287 GiB for 106 bitmaps (device 1)
on the beta.
So I would like to know if I made a mistake in the config (which I didn't really touch) and what these lambdas are, because it might affect the 2080ti too and may have given me false results.
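Setting the lambdas question aside, the two texture-memory log lines quoted above work out to roughly a 3x reduction; a minimal sketch using just those two numbers:

```python
# Ratio of the two texture-memory figures quoted above (GiB, device 1).
mem_410 = 5.64942   # DAZ Studio 4.10
mem_411 = 1.69287   # 4.11 beta
print(f"4.11 beta reports {mem_411 / mem_410:.0%} of the 4.10 figure, "
      f"i.e. about {mem_410 / mem_411:.1f}x less texture memory")
```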
So, I just found out what was going on for me. To reload the right data to the right cards while doing my tests, I got into the habit of setting the Aux Viewport to Texture, changing the cards, then switching back to Iray. That is what was causing me a problem, as the camera seems to change to Perspective without the interface showing it.
Outrider42 scenes, 2080ti alone OptiX On 5000 iterations
First : Total Rendering Time: 5 minutes 28.1 seconds
Second : Total Rendering Time: 5 minutes 30.45 seconds
Third test with 2080ti+1080ti+1080 (with the right camera) : Total Rendering Time: 2 minutes 44.34 seconds
Don't know why I didn't notice it before, but sometimes the frame changes to Perspective instead of Default Camera, so I was rendering the wrong thing, just from changing the card selection without touching anything else. I was too focused on the log, the numbers and my notes to see the camera change!
Nope, I was wrong, I don't have any record, and yes, 2 x 2080ti beats my 3 cards...
Real result SickleYield Bench:
2080ti+1080ti+1080 Total Rendering Time: 35.58 seconds
2080ti only Total Rendering Time: 1 minutes 12.81 seconds
Ok, some more data.
This time I'm testing optix and Nvidia SLI setting.
Running Outrider42's 2018 Test Scene, which I believe I previously only ran without NVLink.
Two RTX 2080 Ti NVlink On, for ALL renders.
SLI setting ON
Optix On: Total Rendering Time: 2 minutes 51.30 seconds
Optix off: Total Rendering Time: 2 minutes 44.5 seconds
CUDA device 0 (GeForce RTX 2080 Ti): 2481 iterations, 1.222s init, 160.364s
CUDA device 1 (GeForce RTX 2080 Ti): 2519 iterations, 1.216s init, 160.628s. This corresponds to what I saw with the 1080's earlier, and is what I believe is due to the 16x and 8x slots.
SLI OFF
Optix On: Total Rendering Time: 2 minutes 47.62 seconds
Optix Off: Total Rendering Time: 2 minutes 43.87 seconds
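One way to read the two CUDA device lines from the SLI ON run is as per-device throughput; a minimal sketch using only the iteration counts and render seconds quoted above:

```python
# Per-device throughput from the two CUDA lines in the SLI ON run
# (iterations and render seconds as reported; init time excluded).
devices = {
    "device 0 (RTX 2080 Ti)": (2481, 160.364),
    "device 1 (RTX 2080 Ti)": (2519, 160.628),
}
for name, (iters, secs) in devices.items():
    print(f"{name}: {iters / secs:.2f} iterations/s")
# ~15.5 vs ~15.7 iterations/s -- only a ~1-2% gap, consistent with the
# x16 vs x8 slot (and airflow) difference mentioned earlier.
```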
Well, it's not a contest so my cards beat nothing. But I'm still impressed, that is some serious hardware! ;)
I just ran the Sickleyield again with my 1080ti + 1070, and the render time is 1 minute 16 seconds, a few seconds longer than a single 2080ti, and also about the same as 2 x 1080ti based on the benchmarks posted here. Personally, I'd allow maybe 10-15 seconds either way on these tests to allow for various differences, but it seems like it's reasonable to say a 2080ti is in the same ballpark as a 1080ti + 1070, or 2 x 1080ti, at least at this early stage.
Especially since a different scene might give different relative performance based on the scene contents. And I think this will be more apparent when the software gets finished for the RTX cards, since they should perform much better where the scene's ray-tracing, de-noising, physics-sim, etc. needs match the specific RTX architecture and software.
I'm guessing that will improve significantly when all the software gets finalized.
BTW it would be great if someone could generate a spreadsheet to summarize the RTX results for the outrider scene like I did below for the Sickleyield. It gets awful confusing trying to keep everything straight. Also, I think it's important that everyone using that scene has identical settings, so I think that needs to be clarified if it hasn't been already.
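In case it helps get a summary started, here is a minimal sketch of what such a table could look like as a CSV; the rows are a handful of outrider-scene times already posted in this thread, and the column layout and file name are only suggestions:

```python
import csv

# A few outrider42-scene times already posted in this thread; the column
# layout and file name are only suggestions.
rows = [
    # (gpus, optix, total seconds)
    ("2x RTX 2080 Ti, NVLink, SLI on",  "on",  2 * 60 + 51.30),
    ("2x RTX 2080 Ti, NVLink, SLI off", "off", 2 * 60 + 43.87),
    ("RTX 2080 Ti alone",               "on",  5 * 60 + 28.10),
    ("2080 Ti + 1080 Ti + 1080",        "on",  2 * 60 + 44.34),
]

with open("outrider42_bench.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["gpus", "optix", "seconds"])
    writer.writerows(rows)
```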
Also, I have a sneaking suspicion that when we're getting down to the 1 minute range of render times the Sickleyield scene is starting to age, and might not be giving a good idea of actual render times but rather some other setup-related times. Not sure, but something to keep in mind.
And I also think we really need to take the results on RTX now with a huge grain of salt since all the pieces aren't in place yet software-wise.
Right, didn't study it that closely, the other option is that it is because the monitor is connected to #2!?
Hear hear!
And I agree, because so many things can be different between builds, that one computer in southern Hoth will Not score the same as another in the Atacama, lol. GTX10 starts to back off on boost clocks over 40c or 50c (at the die, not room temp), and there is also power limits per plug, and if the 12v rail is a bit lower voltage than another comp the Nvidia card will think it's pulling more power than it actually is and back off the clocks as well. RTX20 is just as picky from the looks of it. That isn't even looking at things like driver or studio version, extra soft running in the background, etc. The real contest is the ratio of render time difference between the RTX20 and GTX10 in the same build at the same location.
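For anyone who wants to watch for that clock back-off during a benchmark run, something along these lines should do it; a minimal sketch that just polls nvidia-smi (which ships with the driver) every few seconds:

```python
import subprocess

# Poll temperature, SM clock and power draw every 5 seconds while a render
# runs, to spot boost clocks backing off from heat or power limits.
# nvidia-smi ships with the NVIDIA driver; stop with Ctrl+C.
subprocess.run([
    "nvidia-smi",
    "--query-gpu=index,name,temperature.gpu,clocks.sm,power.draw",
    "--format=csv",
    "-l", "5",
])
```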
Daz studio sometimes will not clear out texture data, so eventually, it will run out of memory on the GPU, just a thought. (sorry, not awake enough to grasp the rest of the numbers.)
sorry, I fell asleep on the KB, long day. I'm not so worried about the dif, because as I hinted in my reply to Jonas's, so many things can affect different builds in different locations, that absolute numbers are not really directly comparable between different systems. I'm not that worried myself about absolute scores.
I'm thrilled it's working at all, and surprised there is a dif between GTX and RTX given CUDA counts. (oh, I got to fix that Iray now thing, lol)
(sips some coffee and rereads things) "What does my bench do", I should go back a few pages and look I guess, I only know of the SY bench.
What 'bench' do you speak of? lol. (How many of the former scores are not the SY bench, and are they all clearly indicated as such?) Is this the 'bench' in question? That's odd, the DL button just brings me to the Daz Gallery; the page HTML must be broken... hmmm, OK, the Daz Gallery is just being its normal flaky self. Just keep doing right-click-and-save until it stops going back to the root gallery and gives you a .duf to save.
Wow, I am impressed. Goes with what DazSpooky and I have been saying for some time now. 4GB min for Iray, lol. Not a real test, I have other stuff open, we are at over 16min at only 85% so far... 23min 32.71sec with a dozen other apps going. FX8350 and GTX1050ti, room temp 23.3333333c (75F).
(note to self, 4.10.0.123 4.11.0.196 pending test)
OK, run on 'this comp' with a fresh reboot for each run, and nothing in the background.
It would appear that the 2018 bench is exercising something the old test may not, something that may still have debugging code enabled in the beta. (I will guess there is a newer beta than what I have; I am not oblivious to the fact that that comp is off the net 99.9% of the time, to save me from random Win10 update reboots in the middle of renders, lol.) Oh, and the R7 CPU is at 3.2GHz, thanks to my laziness after a few BIOS updates reset the OC settings.
That is why I tried to make an updated benchmark in the first place. Users were already breaking under a minute with multiple GPUs, and it seemed to me like the scene was hitting a speed limit of Iray itself. The original uses a Genesis 2 model, and the surfaces are converted to Iray, as G2 never shipped with Iray out of the box. I used Genesis 8 and made a few alterations to the surfaces; I think it has the new dual lobe settings. I gave the outfit weighted top coat glossy settings. I wanted to make the camera closer to the model, since closeups take longer (less texture compression), and the ball of light is giving off a bright light right next to her skin for SSS calculations (also why her nails kind of glow... I just liked it, it's magic, LOL). So I tried to give the engine more stuff to do.
And then I added bloom, mostly because I thought it looked nice, but it also adds a bit more for the engine to calculate. The room overall is still quite dark, which Iray has always hated; it takes longer for Iray to calculate the light bounce in a darker scene. If you add a light to the scene or brighten it up, it runs much faster. The same goes for the original SY scene. I tried to list a number of things to do for the bench in my post, but I didn't think about it getting buried in the thread. I think I will write this into the gallery post, and maybe put it in a sig.
It shouldn't be too much of a task to make a table of this scene since it only goes back a few pages. There are times people don't list what scene they use.
Generally speaking though, Iray seems to scale very well. I have run a variety of GPUs at this point, and the percentage gained with one card over another is pretty consistent across the board, regardless of the scene makeup. If you have a GPU that runs one scene 30% faster than another GPU, it will probably maintain that 30% advantage for nearly all scenes. However, that was while all Iray GPUs ran purely off CUDA. Soon we will add the RT cores to the mix, and I am still convinced Tensor is doing something in the render process; otherwise, how do we explain times that are nearly double the 1080ti's?
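That consistency already shows up in numbers posted earlier in this thread; a minimal sketch comparing the same two configurations on both scenes, with times taken from the posts above:

```python
# Relative speedup of the same two configurations on both scenes,
# using times posted earlier in this thread (seconds).
times = {
    # scene: (RTX 2080 Ti alone, 2080 Ti + 1080 Ti + 1080)
    "SickleYield":     (1 * 60 + 12.81, 35.58),
    "outrider42 2018": (5 * 60 + 28.10, 2 * 60 + 44.34),
}
for scene, (single, combo) in times.items():
    print(f"{scene}: combo is {single / combo:.2f}x the single card")
# ~2.05x and ~2.00x -- the relative speedup is nearly identical on both scenes.
```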
Speaking of Tensor, I still want to see how Turing handles the AI denoiser. I would expect the AI denoiser to be much faster with a Turing card. Both my scene and the SY scene have noticeable grain in some spots. Would anybody like to test out the denoiser on one or both of these? I don't have the current beta myself.
Make sure you check the latest in the "OT: New Nvidia Cards to come in RTX and GTX versions?! The Reviews are Trickling In.." thread. Looks like the RTX cards' Turing architecture isn't supported by Iray and CUDA.