Comments
From the info I linked to above, it shows that support for the Titan V (Volta) was implemented in OptiX 5, but I haven't been able to find anything specific about the AI Denoiser being only for the Titan V (though it probably is their best platform for it). OptiX 5 is simply the next version of their OptiX SDK (optimized path tracing), with the new features/enhancements I noted above.
Redshift will have the denoiser in its next update. If the results are what I expect, I'll be dropping it. That's my render times cut in half. The card will pay for itself.
This denoising thing has me very interested. I found a tech paper written by the guys (from S. Korea) who, I think, developed the denoising algorithm used by the guy who introduced it in Blender. Aside from being a brain-exploding tech paper with incomprehensible math, from what I can extract from it, as well as some discussions on the Blender site, it seems to go something like this:
Y'know how ambient occlusion is just a fake shadow, determined by a smart guess based on the geometry? Basically, if the geometry/normals indicates a corner (like where the base of a building meets the ground), then it throws a shadow there. It's not an actual ray-traced shadow, but a fake using an intelligent guess from the normals pass or whatever. At least that's how I understand it.
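Just to make that "smart guess from the geometry" idea concrete, here's a toy sketch (Python/NumPy, entirely my own illustration, not how any actual renderer implements AO): darken a pixel when lots of its neighbours in the depth buffer sit noticeably closer to the camera, since that usually means it's tucked into a corner or crevice. The radius and bias values are arbitrary placeholders.

```python
import numpy as np

def toy_ssao(depth, radius=4, bias=0.02):
    """Toy screen-space occlusion: depth is an HxW array (smaller = closer).
    Returns an HxW shading factor in [0.2, 1.0]."""
    h, w = depth.shape
    ao = np.ones((h, w))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            neighbours = depth[y0:y1, x0:x1]
            # Fraction of nearby pixels that sit in front of this one.
            occluders = np.mean(neighbours < depth[y, x] - bias)
            ao[y, x] = 1.0 - 0.8 * occluders   # more occluders -> darker
    return ao
```

No ray tracing anywhere in there, just a guess from the geometry buffers, which is the point.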
Well it seems this de-noising is similar (if my hunch is correct). Instead of bringing the final render into Photoshop and applying a blur, it's a bit like bringing the render PLUS a bunch of the associated render passes into Photoshop and using the information from those passes to figure out how best to apply the blur.
I mean, at the end of the day, it's all about figuring out what color (RGB) each pixel should be. So if you can take all the information in the important passes and extract enough of it to figure out what the color of the pixel should be, you are basically applying a very intelligent post-processing filter. And you can get a much better result without having to actually calculate it with time-consuming ray tracing. Which is why it sometimes works great, but other times the result looks too blurry: the only way to really get the correct color of each pixel is to crank up the samples and render it for another 10 hours, using the ACTUAL geometry and materials and lights.
If you think about it, say you have a sphere sitting on the ground, and it should have a shadow underneath. But you have speckly noise, which is bright pixels that don't belong. So if you're smart, you can use the normals to determine that the surface is facing down, that it should be in shadow, that there is no indirect light there (the indirect pass shows nothing), and so on, so you can kinda guess that the outlying bright pixel doesn't belong, so you blur it out. I'm guessing that's what de-noising is doing.
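If that hunch is roughly right, the mechanics would look something like this minimal sketch (Python/NumPy, my own illustration rather than Iray's or Blender's actual code) of a cross-bilateral blur: the blur weights come from the normal and depth passes instead of from the noisy color itself, so pixels on the same surface get averaged together while edges are preserved. All the parameter values here are made up for the example.

```python
import numpy as np

def cross_bilateral_denoise(color, normals, depth, radius=3,
                            sigma_s=2.0, sigma_n=0.3, sigma_d=0.1):
    """color: HxWx3 noisy render, normals: HxWx3 pass, depth: HxW pass."""
    h, w, _ = color.shape
    out = np.zeros_like(color, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Spatial weight: nearby pixels count more.
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            # Feature weights: a neighbour with a similar normal and depth is
            # probably on the same surface, so its color is trustworthy here.
            dn = normals[y0:y1, x0:x1] - normals[y, x]
            w_n = np.exp(-np.sum(dn ** 2, axis=-1) / (2 * sigma_n ** 2))
            dd = depth[y0:y1, x0:x1] - depth[y, x]
            w_d = np.exp(-dd ** 2 / (2 * sigma_d ** 2))
            wgt = w_s * w_n * w_d
            out[y, x] = np.sum(color[y0:y1, x0:x1] * wgt[..., None],
                               axis=(0, 1)) / wgt.sum()
    return out
```

The speckle under the sphere gets averaged with its dark, same-normal, same-depth neighbours and disappears; a real edge survives because the weights drop to near zero across it.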
Now I need to figure out what "linear regression" really means, cuz apparently they're using a lot of it.
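For what it's worth, the "linear regression" in those papers seems to boil down to fitting a simple linear model that predicts a pixel's color from its feature-pass values over a small window, then using the model's smooth prediction in place of the noisy sample. A toy NumPy example with completely fabricated data, just to show what the fit itself looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 49                                   # pixels in a 7x7 window
features = rng.normal(size=(n, 4))       # stand-ins for normal xyz + depth
true_w = np.array([0.2, -0.5, 0.1, 0.8])
noisy = features @ true_w + rng.normal(scale=0.3, size=n)  # noisy color channel

# Least-squares fit: find weights that best explain the color from the features.
X = np.column_stack([np.ones(n), features])     # constant term + features
w, *_ = np.linalg.lstsq(X, noisy, rcond=None)

denoised = X @ w           # smooth per-pixel prediction replaces the noise
print("fitted weights:", np.round(w, 2))
```

The actual denoisers do this per window with a lot of extra machinery (weighting, regularization, deciding which features to trust), but that's the regression part in a nutshell.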
Without those tensor cores, it will probably behave just like an ordinary denoiser. The Pascal cards don't have a lot of compute power for AI.
...crikey I could build a pretty nice system for 3 grand even given the spike in consumer GPU prices.
My render target is 30 sec/frame (24fps) on 8 GTX 1080 Ti's. My system only has enough slots for 3 double-wide cards, so I was going to be in the market for Supermicro's 8-GPU server. Those things are almost 4 grand for the bare-bones box. Not counting the purchase of 7 GPUs. Not counting the purchase of 2 Xeons and RAM. Now, if the Titan performs as I expect with the AI denoiser, it should be the equivalent of 3 1080 Ti's. So, magically, I will have room in my workstation for the equivalent of 9 1080 Ti's, thereby exceeding my rendering target and saving a considerable amount of money and trouble.
...granted you do animation and from the sound of it, as more than just a hobby.
I'm looking to head back to 3DL for my illustration work, particularly with the release of IBL Master, as Iray just takes far too long when one doesn't have an up-to-date GPU. Given the ongoing shortage of mid-to-high-range consumer GPUs and the resulting high prices being demanded (in some cases more than double what a card cost at rollout), I'm not going to be in the market for a GPU upgrade in the foreseeable future unless I come into a big pile of money.
.... sadly, just a little more than a hobby. Certainly not for money at this point. But I have learned that many opportunities come to those who are prepared for them, and unfortunately, I don't have much patience for compromise. If I'm going to do it, I will swing for the fences. Lots of noodle and dumpling meals in my future, but as your Lombardi quote says, "... chase perfection."
...on the backside of life here, what with being retired. Oh, I occasionally land a commission for a book cover or other illustration, which gives me a few extra zlotys in the pocket now and then, but for myself, I'm primarily doing this for the love of it and to illustrate my writings. I've already had music taken away from me by advancing arthritis; with this I still have a creative outlet.
"on the backside of life here ..."
To the contrary, my friend! As long as you have a love to chase, you are never on the backside. "Though our outside is withering away, the inside is being renewed from day to day."
Just for the sake of completeness: Iray does have a denoiser filter in the render settings, but AFAIK it just doesn't work at all. No matter what you try, it does nothing. So if anybody is able to get something out of this thing, you're very welcome.
Mine is real-time rendering on an average card. I believe this will be possible with EEVEE. Of course it will not be exactly the same as a "real" render, but given the usual compromises in animation, I feel it will be very close anyway. Toward this goal, I guess an option for DAZ Studio would be to support Iray's real-time mode, even if I don't know how good it is and/or if it can be integrated into the viewport.
After some tests I can say that:
1080p - 250 iterations - Noise Filter enabled (5,5,0.5 thanks to Padone)
These are really good settings; obviously far from the original quality, but absolutely acceptable for the moment, and it takes "only" about 2 minutes per frame.
Pity that Daz reloads the scene for each frame, otherwise the time would be much lower, almost half.
Buy iClone 7 and 3DXchange, import your Daz models and scenes into iClone, and animate to your heart's content. Iray inside of Daz Studio is not really set up to do animation! Daz Iray is great for stills but not animation.
AFAIK the noise filter just does nothing; you can turn it off and you'll notice no difference. You can use After Effects or similar software for post-denoising. As for the scene reload, you can try keeping the viewport in Iray mode while rendering; that way it shouldn't reload the scene for each frame.
Also I agree with Silver Dolphin that iClone 7 is a good option for animation.
There is a trick to stop Daz from reloading the scene for each frame of an animation: before you render, switch the viewport to Iray mode. That's it! Doing this keeps the scene in memory instead of dumping it and starting over for every single frame. A massive time saver!
Why this hasn't been advertised more is beyond me.
Now my question: what is the difference in render time without that noise filter setting?
For that reason I'll give up most of my 3D hobby; even ZBrush or Modo can't bring me joy anymore, and I'm putting my money back into hardware synths & Ableton... welcome back, music world.
And the preliminary benchmarks are in... great news for animation renderers! The Titan V trounces the 1080 Ti in V-ray and Furryball.
Vray renders are over 60% faster, and, more important for me, Furryball RT is almost twice as fast with Volta power. This is a dream come true. Things are finally coming together on the hardware decision side. Here is a link: https://www.pugetsystems.com/blog/2017/12/12/A-quick-look-at-Titan-V-rendering-performance-1083/
Why spend $3,000 though? Why not just buy two 1080ti's? Of course not at these prices, but if you can find some for cheaper I'd think the benefit/cost would be huge over the Titan V wouldn't it?
So... If you're wanting to render animations then you have a different cost model. Correct? You need efficiency any way you can get it. Correct?
Octane is used in movies. It's FAST... The render Dusty showed would take about 10 minutes in Octane (I use a single GTX 980 for my regular work). The latest push in Octane is for 360 VR renders. There are also no texture RAM restrictions in Octane, so I can do 8.5 GB renders on 4 GB video cards.
BLR got rid of their render farm in favor of a single video card and Octane; here's some of their work:
2 reasons:
1. Two 1080 Ti's = two slots and twice the power usage. I only have 3 double-wide slots and I don't want to waste them.
2. Volta has tensor cores, which could be useful with the forthcoming AI denoiser. GTX cards are practically useless for AI.
3. Only Titans, Quadros, and Teslas can turn off that VRAM-consuming bug in Windows that everyone's complaining about, so I can make use of all 12 GB on the card.
Oh, that's 3 reasons.
I'm curious about the VRAM consuming bug. I keep looking for real info about it but can't find anything definite. Where did you see the info on it?
BTW, wow, you're right about the power usage. Looks like a Titan V and a 1080 Ti use almost exactly the same power (a 250-watt TDP each). So yeah, two 1080 Ti's will draw an extra ~250 watts over a single Titan V. Guess they must have made some major efficiency advances.
Yeah, Octane is some good stuff. Hopefully it will support the Volta chips soon. I use Redshift, which is even faster. Nice animation there. GPU renderers like Octane are in heavy use in small productions, but not in Hollywood movies yet. Besides being VRAM deficient (even with out-of-core render options), GPUs still can't do all of the simulation work that is required by CGIFX. For example, Octane is not compatible with many of Maya's dynamic simulations. Octane is barely on the radar in the Maya world (Maya world = Hollywood CGIFX). I feel the GPU renderers' time is coming, but for now, at least in major FX films, the CPU renderers rule. The good news is that GPU renderers are ideal for the little guys. Which is why the Volta is so valuable.
The reason for the VRAM suckage is that manufacturers have to comply with the Windows Display Driver Model (WDDM). Nvidia provides software that allows you to disengage WDDM on cards that aren't driving a monitor.
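For anyone curious how that's actually done: on Windows, Nvidia's nvidia-smi tool can report which driver model a card is using and, on cards that support it and with admin rights, switch a non-display card from WDDM to TCC. A rough sketch in Python; the query fields and -dm flag are from nvidia-smi's documented interface, but double-check nvidia-smi --help on your own system before relying on the exact syntax.

```python
import subprocess

def current_driver_model():
    """Ask nvidia-smi which driver model each GPU is currently using."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=index,name,driver_model.current",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

print(current_driver_model())

# To switch GPU 1 to TCC (elevated prompt required; 1 = TCC, 0 = WDDM;
# only works on cards not driving a monitor, e.g. Titans/Quadros/Teslas):
# subprocess.run(["nvidia-smi", "-i", "1", "-dm", "1"], check=True)
```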
Yeah, that's what I've heard on some forums. But I'm looking for actual manufacturer references explaining the issue and solution, because I've never seen anything official, only some forum posts. I'm not convinced it's real. Even on the GeForce forums some are saying it's just a misunderstanding of how things work. Not sure who to believe.
drZap: That video was not done with Octane... They did however convert their 3D components over to Octane and test it in VR.
Yeah, I understand, but I can clearly see it's not a big film like Hollywood makes. It's a budget independent film, which is the perfect niche for Octane and Redshift right now.
This issue doesn't matter to me. The only thing that matters to me is my performance in Redshift and they officially recommend users to disable WDDM for more memory access and better performance. Can't do that with a GeForce card so the Titan is for me.
Oh, never mind. I misunderstood your meaning. I thought you were showing an Octane render.
No one has posted a solution for the WDDM issue on any of the other forums; I just did a Google search. On one Microsoft forum, the company guy danced around the issue. Either they don't know exactly what it is (his reply was BS), or they do and don't know how to fix it. I'm on Windows 7 so it doesn't bother me, and besides, my GPU for rendering Iray is my second one. But they should fix it.
BTW, I just looked at my Task Manager and GPU-Z to see what the VRAM situation is. Prior to loading Studio, the Dedicated GPU Memory usage was 0 GB out of the total 11 GB of VRAM, and that was shown in both apps. And GPU "Hardware reserved memory" is only 137 MB (whatever that is). But as soon as I open Studio, both apps show that number jump up to 2 GB out of the 11 GB on the GTX 1080 Ti. Which makes me think that the Studio software is allocating 2 GB on the GPU, not Windows 10. And when I load a relatively small scene with just one G3, that number goes up to 3.3 GB, but only when the Iray preview gives the first grainy image. Which kinda makes sense, since that's when the GPU is doing its thing and grabbing data for the render off its VRAM.
Seems to me that if Windows' WDDM were allocating memory, it would do it prior to opening Studio. Of course, the numbers I'm reading may not mean what I think they mean.
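If anyone wants to watch the same numbers from outside Task Manager / GPU-Z, a quick way is to poll nvidia-smi for used vs. total VRAM before and after opening Studio and loading the scene. The field names are from nvidia-smi's --query-gpu list; the polling interval here is arbitrary.

```python
import subprocess
import time

def vram_usage():
    """Return 'used, total' VRAM per GPU as reported by nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

for _ in range(12):                      # sample for about a minute
    print(time.strftime("%H:%M:%S"), vram_usage())
    time.sleep(5)
```

If the used number only jumps when Studio opens (and again when the Iray preview kicks in), that would back up the idea that it's Studio allocating the memory rather than Windows reserving it up front.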