Comments
I haven't seen it mentioned that the 20 series needs the beta, and there is a fix implemented in the beta that slows renders down.
Today you are my hero Ribanok, thank you so much! :D
That's a good idea!
Great! :D
Thank you, even one sample is more useful than none!
My GTX 1060 3GB was actually over 12 min!
UPDATE: I've taken the test again. I don't know what went wrong, but this time Ryzen 5 1600 (one core and two threads disabled) + GTX 1060 3GB, OptiX Prime Acceleration enabled, 4.11 Public Build, driver 419.35: 4m 9s.
In any case, I think a parameter for the next render test could be GPU only, because the CPU part adds a lot of variability.
I know there is no Turing support for Iray. The beta came out before Turing did, and it would be impossible for the SDK that the beta has to contain Turing drivers. But the beta did add Volta support.
And of course Iray is Nvidia only. It is based on CUDA, which is Nvidia's API. Whether or not it could in theory run on other hardware is not the point; in practice it does not. Nor does the standard version of OptiX, which only runs on the GPU. OptiX is not OptiX Prime, and they are not even considered to be under the same umbrella.
"My question is whether OptiX Prime counts under the umbrella of OptiX for this purpose."
No. We have no plans to support RTX acceleration with OptiX Prime.
That comes from the Nvidia forum; the people who mod that forum are all actual developers on the project.
Whether something can or cannot be done does not mean it will be. The people at Octane tried for years to make CUDA run on AMD. They made this promise I believe about 4 years ago. They are still claiming that they want to make it happen, but after 4 years, I cannot say my hopes are very high. Not to mention, they claimed to be "close" when they made that announcement 4 years ago. So while CUDA may just be general purpose processing...it sure has been tough to crack.
That is quite an assumption there. It is pretty hard to render in less than an hour unless you just render at tiny resolutions or use no backgrounds. Almost everything coming to the store these days is extending render times because of its design. Many new characters are using chromatic SSS, which seriously balloons render times. Simply switching from a non-chromatic skin to chromatic can easily double render times, especially if you want to do a proper close-up.
Then you have 3D environments. Using one will dramatically increase render times. Toss in just 2 or 3 Genesis 8 characters with any decent 3D environment and you're looking at a render that will take hours even with a 1080ti. Considering that new environments are released pretty much every day, and there are PAs who ONLY make environments, I think it is safe to say many people are rendering for hours.
Have you stopped to consider that maybe Iray has a speed limit? That perhaps the SY scene can reach a point where it renders so fast that Iray itself cannot go faster? If that were to happen, then the results would certainly be skewed. This happens in gaming. Some older games actually have limits to how fast they can run, and some reach CPU limits; in either case this can cause GPUs to post very similar results even when the GPUs may be quite far apart in raw power. This skews results and can make cards look much closer in ability than they really are. We are reaching render times in the 30 second range when people stack cards. So I have to wonder if the results are hitting a limit that is causing the data to look closer than it should.
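To illustrate the ceiling argument with a toy model (the overhead and compute numbers below are invented purely to show the shape of the effect, not measured from Iray): if every render carries some fixed setup cost, reported times converge toward that cost as cards get faster, which compresses the apparent gap between them.

```python
# Toy model of a benchmark "speed limit": an assumed fixed overhead per render
# plus a compute portion that scales with GPU speed. All numbers are invented.
FIXED_OVERHEAD_S = 15.0   # assumed per-render overhead (scene load, setup)
BASE_COMPUTE_S = 120.0    # assumed pure compute time on a 1x baseline GPU

for relative_speed in (1, 2, 4, 8, 16):
    total = FIXED_OVERHEAD_S + BASE_COMPUTE_S / relative_speed
    print(f"{relative_speed:>2}x GPU -> {total:6.1f} s reported")

# The 16x setup reports ~22.5 s vs 135 s for the 1x card -- only ~6x apart,
# because the fixed overhead dominates once compute drops into that range.
```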
That is why you have different benchmarks. If Iray had never changed, then perhaps you might have a point. But Iray has changed. The version of Iray in Daz today is quite different from what it launched with back in 2015. In fact, Daz 4.8 Iray had a very well known bug with SS that caused some things to appear redder than they should. New versions have added dual lobe spec and chromatic ss. There is also dforce...you know I'd like to see a dforce bench.
The RTX Titan has 24GB of VRAM. Your 1070 only has 8. The instant you exceed that, the 1070 becomes a brick. And if you exceed the 11 in the 1080ti, it becomes a very expensive paperweight. The Titan has its purpose, and it is also just one GPU. That person could add any other card to their machine and render even faster. It is indeed perhaps too expensive, and the Titan is only a bit more powerful than the 2080ti. But it may not be wise to justify your belief based on one 4 year old benchmark. The Titan has always been a hybrid between gaming cards and Quadro, and as such the Titan has additional features. One being TCC, Tesla Compute Cluster. This requires having another video card to drive the video, as the Titan will become a dedicated compute card. This has two perks: it makes every ounce of VRAM available, so Windows does not take any VRAM from it, and it can increase performance. The Titan V shows a marked improvement in several rendering apps when using TCC mode. I have never seen anyone test TCC with Iray, so I have no idea if it makes a difference. Perhaps our Titan user can test this. But I have a feeling it can impact performance some.
If I bought a 2080ti, it would render as fast or faster than my two 1080ti's together. I vividly recall you said that was not possible, and yet bluejante proved it. So even without being "ready for prime time", the cards are still very fast. And it will be even faster in the future. OptiX 6.0 has released, and the results from it are spot on with the claims made months ago.
This comes from here http://boostclock.com/show/000250/gpu-rendering-nv-rtxon-gtx1080ti-rtx2080ti-titanv.html
Here's some additional testing from a fellow on the Nvidia forums. https://devtalk.nvidia.com/default/topic/1047464/optix/rtx-on-off-benchmark-optix-6/
Turning RTX on speeds up the render by anywhere from 2.5x to over 9x the speed of rendering with RTX off. And this is before considering the fact that 2080ti is already faster than everything but the Titan with RTX off.
I don't know if Iray will get OptiX 6.0. It is very hard to say. But even if it does not, there are other render engines taking advantage of RTX, like Octane. The 2019 Octane Preview is something you can download and try today. In the scene below, the 2080ti already enjoys a pretty large lead over the 1080ti, but turning RTX on causes it to just explode.
Tomorrow Arnold 5.3 for GPU will go to open beta. Apparently they already support RTX so maybe some interesting benchmarks will come out of that. Or I guess we could get a trial of the standalone and see what a 2080 TI has over a 1080 TI.
Wow guys...those benchmarks are great! You're basically suggesting I wait for an RTX 2060 rather than a GTX 1070/1660 Ti!
When do you think RTX support will come to Iray? Because it will come, right?
There was already a fully RTX accelerated demo of Iray more than a month ago. Its release is most likely imminent (see this conversation over at the Iray developers forum).
It's on my to-do list. It goes without saying that 3d rendering isn't the only thing I use my computer for. And using TCC means having to revert back down to CPU integrated graphics for primary display output and acceleration of any non-TCC compatible graphics workloads.
Also, FWIW, there are already benchmarks from at least one user in this thread with TCC enabled on multiple Titan Xs. Although, if memory serves, they didn't post any sort of with/without performance comparison.
UPDATE: found it. It actually does have some relative performance info. But I don't know how useful it is, since it was in the very early days of TCC Titan support and with mixtures of TCC supported/unsupported cards.
I recently bought an RTX 2080. My Previous card was a 980 ti. I am using the 4.11 beta.
Zotac Geforce GTX 980 Ti AMP! Edition
OptiX On: 2:12 (132 seconds)
OptiX Off: 2:43 (163 seconds)
Gigabyte Geforce RTX 2080 Gaming OC
OptiX on 1:15 (75 seconds)
OptiX off 1:47 (107 seconds)
The delta (OptiX on versus off) with the 980 Ti is 19.01% and the delta with the 2080 is 29.90%. I have no idea if that is attributable to the RT cores alone or just the newer GPU. The 2080 is 43.18% faster at rendering than the 980 Ti with OptiX on.
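For reference, here is how those percentages fall out of the posted times (small rounding differences aside): the deltas are taken relative to each card's OptiX-off time, and the 43% figure compares the two cards with OptiX on.

```python
# Re-deriving the quoted percentages from the posted render times (seconds).
times = {
    "980 Ti":   {"optix_on": 132, "optix_off": 163},
    "RTX 2080": {"optix_on": 75,  "optix_off": 107},
}

for card, t in times.items():
    delta = (t["optix_off"] - t["optix_on"]) / t["optix_off"] * 100
    print(f"{card}: OptiX saves {delta:.2f}% of the OptiX-off time")
# 980 Ti:   (163 - 132) / 163 = 19.0%
# RTX 2080: (107 - 75)  / 107 = 29.9%

faster = (times["980 Ti"]["optix_on"] - times["RTX 2080"]["optix_on"]) \
         / times["980 Ti"]["optix_on"] * 100
print(f"2080 vs 980 Ti with OptiX on: {faster:.2f}% faster")  # 43.18%
```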
Just to clarify, D-A Kross: which test scene are you using? Are you rendering to 100% convergence? Are you starting from the Iray preview mode or from scratch?
I ask because my 2080 takes more than twice your time to render.
I have the same question. However, Nvidia claims they are going to bring Iray support for RTX cards. According to their post on the following site, it renders faster with RTX cards than GTX ones; Nvidia says that's thanks to AI and real-time ray tracing, which are only available on RTX cards. All in all, it seems like the RTX 2060 is going to be worth it in the future, but at the moment the GTX 1660 Ti is the better value. We can't predict the future; we don't know whether Nvidia will bring real-time ray tracing and AI denoising features to consumer cards too, since the post only talks about Quadro cards. So I suggest you wait a couple of months to see how it turns out; if you are in a rush to buy, then go ahead and get the GTX 1660 Ti. I don't trust Nvidia, and I feel like it will take at least a couple of years to see a significant boost in rendering speed with RTX cards. However, if you use Octane, then it's a different story.
https://blogs.nvidia.com/blog/2019/02/10/quadro-rtx-powered-solidworks-visualize/
Sickle Yield's starter scene, rendered to 5000 samples at 400 x 520 resolution (the default settings that came with the scene).
Nice to know, thank you! :D
I'm not in a hurry to buy that RTX 2060 anyway! :)
It wouldn't hurt if Daz 3D talked about the Nvidia RTX support in their new update (presumably the new Iray version with RT core support/OptiX 6.0).
https://nvidianews.nvidia.com/news/nvidia-rtx-ray-tracing-accelerated-applications-available-to-millions-of-3d-artists-and-designers-this-year
Yeah so, kudos to NVIDIA. I was one of the guys saying Iray feels like an afterthought to them but it doesn't really look that way now. Eating my words
GPUs: Dual Nvidia RTX 2080 TI
CPU: Intel i9-9900K
Rendered with CPU on and both cards, Optix on: 34.4 seconds
Rendered with GPUs only, CPU off, Optix on: 38.6 seconds.
To sort of follow on from outrider42's last post, Nvidia recently published the following two graphics as part of its announcement of RTX backwards compatibility for most Pascal 10xx series cards, which, if you study them carefully, shed a great deal of light on (and confirm) what people have so far noticed about Iray rendering performance on Turing-based GPUs. Particularly as regards the use of OptiX Prime acceleration (namely that it actually takes longer to render with it enabled than not in scenes featuring more advanced material/lighting interactions). First the graphics:
Caveat: both these graphics are illustrations of Frametime performance patterns in a gameplay oriented biased rendering engine (in this case the game engine for Metro Exodus) and not a best-visual-quality oriented unbiased rendering engine like Iray. This means that the same graphics for an Iray iteration would most likely feature several times longer processing times in the post ray-tracing portion of the rendering process (the combination of the parts labeled "INT + FLOAT" and "TENSOR CORE" in the 2nd graphic.) With this said, the main useful takeaways are:
This explains why:
Cuda core-only Iray rendering is significantly faster on Turing hardware than equivalent Cuda core count Pascal hardware. Because Turing features concurrent INT and FLOAT operations.
Cuda core-only Iray rendering using OptiX Prime acceleration sometimes (scene dependent) leads to measurably longer rendering times on Turing hardware. Because OptiX Prime ray-tracing acceleration is based in part on having to process INT operations without the benefit of INT-specific Cuda cores (since INT-specific Cuda cores didn't exist as a thing prior to Turing.) Meaning that rendering scenes needing a significant number of INT operations with OptiX Prime enabled will lead to less efficient rendering times, since the usual performance gains of OptiX Prime are already being achieved at a hardware level - leaving you with just the added overhead of OptiX Prime essentially spinning its wheels for nothing.
Turing GPUs will be orders of magnitude faster at Iray rendering overall (not just in terms of the ray-tracing part of the process) with the utilization of RTCores than without. The key question of course is by how much. My current best educated guess is 3x more efficient on average, with specific scenes varying widely both down and up from that number depending on scene content.
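For what it's worth, one back-of-the-envelope way to frame that guess (the ray-tracing fraction and the 10x RT core factor below are assumptions, not Iray measurements) is Amdahl's law: only the ray-tracing share of each iteration gets accelerated, while the shading and everything else does not.

```python
# Amdahl's-law sketch: overall speedup when only the ray-tracing fraction of
# the work benefits from RT cores. Both parameters below are assumptions.
def rt_speedup(rt_fraction, rt_core_factor):
    """Overall speedup if only `rt_fraction` of the work gets `rt_core_factor`x faster."""
    return 1.0 / ((1.0 - rt_fraction) + rt_fraction / rt_core_factor)

for frac in (0.6, 0.8, 0.9):
    print(f"ray-tracing share {frac:.0%}: overall ~{rt_speedup(frac, 10):.1f}x")
# 60% -> ~2.2x, 80% -> ~3.6x, 90% -> ~5.3x; an 80% share with a 10x RT core
# factor lands near the 3x educated guess above, and scene content would move
# the real number up or down exactly as described.
```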
I don't think the INT gain for gaming can be of any use. From what I know, in 3D rendering you do the calculations in FP32.
A potential gain could come from FP16, but even that may not be applicable to 3D rendering, since if anything we tend to need more precision.
For me there are two factors for acceleration:
CUDA vs RTCore ray tracing, which you can see in the RTCore part
New vs old CUDA cores: the new CUDA cores are bigger than the Pascal ones and allow more operations to be done. I think that for the same CUDA core count you can expect a 30% gain
That explains why the RTX 2060 beats the GTX 1070 in actual benchmarks without RTCore acceleration
From that I would venture some conjecture: the GTX 1660 could well be on par with a GTX 1070 for Iray despite having fewer CUDA cores (rough numbers in the sketch below)
* Edit for precision
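To put rough numbers on that conjecture (a sketch only: the ~30% per-core figure is the guess from the post above, and the CUDA core counts are the commonly published ones, so treat everything here as approximate):

```python
# Crude "Pascal-equivalent core" estimate: CUDA cores times an assumed ~30%
# per-core Turing gain. Core counts are the commonly published figures.
PER_CORE_GAIN = 1.3  # assumed Turing-vs-Pascal per-core gain

cards = {
    "GTX 1070 (Pascal)":    (1920, 1.0),
    "RTX 2060 (Turing)":    (1920, PER_CORE_GAIN),
    "GTX 1660 Ti (Turing)": (1536, PER_CORE_GAIN),
    "GTX 1660 (Turing)":    (1408, PER_CORE_GAIN),
}

for name, (cores, gain) in cards.items():
    print(f"{name}: ~{cores * gain:.0f} Pascal-equivalent cores")
# ~2496 for the 2060 vs the 1070's 1920, and ~1997 / ~1830 for the 1660 Ti /
# 1660 -- consistent with the 2060 winning and the 1660 cards landing close.
```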
I suppose another approach to speculating on the RTX performance when it finally is ready for prime time is to look at the prices and assume that NVIDIA understands that its customers will pay somewhat proportionally more for proportional performance increases. Kinda like it was for the GTX series, as I recall. And I'm assuming NVIDIA has a real good idea of how the RTX cards will actually perform in 6 months or so when all of this is ironed out, which is how they set the pricing for the various models.
Seemed like with GTX they charged something like 30-40% more for what ended up being something like a 30-40% decrease in Iray render times (e.g., going from 1050 to 1060 to 1070, etc.). Unless my rememberer is broken...
And with RTX the price differences at this point seem to be similar between 2060, 2070, and 2080...like a 40% increase in price for each successive RTX model ($350, $500, $700). And at $1200 the 2080ti is like a 70% increase above the 2080.
So right now the 2060 gives like a 25% improvement (ie, decrease) in render times over a 1070. And maybe when this all gets ironed out it might increase to say 40-50% decrease in render times.
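As a quick sanity check of those price ratios (using only the MSRPs quoted in the post above):

```python
# Successive price bumps between the RTX models, from the MSRPs quoted above.
prices = {"RTX 2060": 350, "RTX 2070": 500, "RTX 2080": 700, "RTX 2080 Ti": 1200}

names = list(prices)
for lower, higher in zip(names, names[1:]):
    bump = (prices[higher] - prices[lower]) / prices[lower] * 100
    print(f"{lower} -> {higher}: +{bump:.0f}%")
# 2060 -> 2070: +43%, 2070 -> 2080: +40%, 2080 -> 2080 Ti: +71%
```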
So here's my speculation based solely on prices as an indicator of future performance:
Personally, I guess I prefer this approach rather than speculating on future performance based on stuff like INTs and FLOATs.
Ray-traced rendering requires a significant percentage of INT operations because intersections are calculated using bounding volume hierarchies, which are a form of discrete math structure ("discrete math" is just a fancy term for mathematical constructs that DON'T use continuous, aka floating point, data.) So for real-time ray-traced gaming graphics as well as not-necessarily-real-time ray-traced 3D production graphics like Iray, having dedicated INT processors would indeed significantly improve performance.
It's been a while, but last year I wrote a super simple ray tracer from scratch in C# (it could use either the CPU or GPU for rendering), and I recall that most of it was Vector3's with XYZ values as floats for the rays and hit points, etc. But I did simple spheres, not bounding boxes. And the only INTs were the description of the image-related stuff in pixels.
Which raises the question of how you can have float vectors interacting with INT stuff... seems like a mess.
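For anyone curious, here is a rough, generic sketch of where the float and INT work lives in a BVH-style tracer (this is not the C# tracer mentioned above and not how OptiX or the RT cores actually implement it, just an illustration): the geometry tests stay in floating point, while the traversal itself is mostly integer bookkeeping with node indices and stack offsets.

```python
# Generic sketch: float math for the geometry, integer math for BVH traversal.
from dataclasses import dataclass

@dataclass
class BVHNode:
    bounds_min: tuple  # float (x, y, z): bounding-box min corner
    bounds_max: tuple  # float (x, y, z): bounding-box max corner
    left: int          # int: index of left child node (-1 marks a leaf)
    right: int         # int: index of right child node
    first_prim: int    # int: offset into the primitive list (leaves only)
    prim_count: int    # int: number of primitives in the leaf

def hit_aabb(origin, inv_dir, node):
    """Float-only slab test: does the ray hit this node's bounding box?
    (Edge cases such as axis-aligned rays are ignored for brevity.)"""
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, node.bounds_min, node.bounds_max):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        tmin, tmax = max(tmin, min(t0, t1)), min(tmax, max(t0, t1))
    return tmin <= tmax

def traverse(nodes, origin, inv_dir):
    """Integer-heavy part: walking node indices with an explicit stack."""
    hits, stack = [], [0]              # node 0 is the root
    while stack:
        idx = stack.pop()              # index arithmetic = INT work
        node = nodes[idx]
        if not hit_aabb(origin, inv_dir, node):   # float work
            continue
        if node.left < 0:              # leaf: note which primitives to test
            hits.extend(range(node.first_prim, node.first_prim + node.prim_count))
        else:
            stack.extend((node.left, node.right))
    return hits
```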
Oh wow, I forgot I even did a Windows Forms version that reads from a CSV file which defines the scene parameters.
Okay, to get back on track, it will be nice when we finally get all of this RTX stuff ironed out and we can actually put render times to all of this rather than having to rely on vague speculation.
Some thoughts:
Why is it that we hear confirmation of RTX for Daz 3D from Nvidia before we hear it from Daz themselves???? And a date on top of that; at least 2019 is something. The secrecy around here... it's ridiculous. Is there a reason why Daz does not wish to inform their customers of what is being planned? Ever. Do you have any idea how concerned customers have been about the state of Iray, and therefore Daz Studio itself, ever since RTX came out? You see, when you do not inform your customers of what you are doing, your customers will speculate about it. And often in a bad light.
What's even nuttier is that there is a comment about RTX from Daz in the Nvidia presentation.
Daz 3D will support RTX in 2019: “Many of the world’s most creative 3D artists rely on Daz Studio for truly amazing photorealistic creations. Adding the speed of NVIDIA RTX to our powerful 3D composition & rendering tools will be a game changer for creators.” — Steve Spencer, GM & VP of Marketing, Daz 3D.
Why is this statement not here, somewhere, anywhere on this website?
But....because nobody wants to talk to us customers, there is still something to speculate. Take a look at the list of software getting RTX in 2019:
The first software providers debuting acceleration with NVIDIA RTX technology in their 2019 releases include:
Notice something...missing? OK, now this is what I see. I think Daz is the only one on this list using Iray. For example, iClone, which also has an Iray plugin now, is not listed here. And...there is no mention of Iray anywhere in this presentation, nor in Steve's statement.
So what does this mean? Will RTX be coming to Iray...or will there be a whole new render plugin for Daz?
How about somebody from Daz stepping up to the plate and confirming something to its customers here on the forum. The cat is already out of the bag for RTX, so how about telling us where the RTX is coming from, and if Iray will continue to be a part of Daz Studio moving forward if indeed Daz is getting a new render engine. Will there be a Legacy Iray for users? There are a LOT of questions to be had.
Asus RTX 2060 / Studio 4.11 Beta
GPU only OptiX ON: 2min 14sec
GPU only OptiX OFF: 2min 57sec
Thank you for posting your results! :D
I'm going to buy an RTX 2060... when I have the money, so not any time soon! xD (Especially if those guys at Daz keep putting out great sales xD)
That's actually about on par with my Titan RTX - 1:08 with OptiX Prime on vs 1:27 with it off. That's a 19 second difference, or roughly 25% worse performance.
Not really what I've seen
https://www.daz3d.com/forums/post/quote/53771/Comment_4104011
https://www.daz3d.com/forums/post/quote/53771/Comment_4234711
https://www.daz3d.com/forums/post/quote/53771/Comment_4272246
I think the SickleYield scene may favor OptiX ON.
I didn't run any bench, so I don't know the difference between the scenes; I can't form any hypothesis, and there may not be enough precise data for that anyway.
I tested the OptiX setting on a different scene and it didn't show as much variation as the benchmark scene.
OptiX ON: Total Rendering Time: 1 minutes 47.14 seconds*
OptiX OFF: Total Rendering Time: 1 minutes 56.79 seconds*
*Note: These times are not for the benchmark scene
Has anyone benched the Threadripper 2990WX by itself yet? I can extrapolate from the 16 core Threadripper benches, but the 2990WX has a 'unique' memory config, hence why I'm still curious.
Yes, it WILL be slower than most GPUs, but it might be handy for really big scenes with lots of characters, if you aren't feeling like rendering different portions of the scene in multiple passes, hence my continued curiosity...
I wonder why the benchmark scene behaves that way?
It also validates the reasoning behind using multiple and different benchmarks.
I'd like to see if anybody has one of those Threadrippers as well. Also, I'd really like to see Threadripper used in multiGPU systems. The old Puget benchmarks claimed that having more CPU cores boosted multiGPU speeds, even when the CPU was not being used to actively render. They were using Xeons, as that test was long before Ryzen launched.
So I would love to see not only how the new Threadrippers perform solo, but how they affect speeds in multiGPU setups. If they do, then there is double incentive for buying as many cores as you can.
It's true that it would be more reliable with more benchmark scenes, and that we need just a handful of people willing to run them properly... but don't you think we already have a few people willing to take the test the right way, and that making it even more complex would just kill this benchmark thing? Isn't there any way to develop a new test scene that takes those new variables into account?