AI slashes Iray rendering times with interactive denoising
The ray tracing process generates highly realistic imagery but is computationally intensive and can leave a certain amount of noise in an image. Removing this noise while preserving sharp edges and texture detail is known in the industry as denoising. Using NVIDIA Iray, Huang showed how NVIDIA is the first to make high-quality denoising operate in real time by combining deep learning prediction algorithms with Pascal architecture-based NVIDIA Quadro GPUs.
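To see why there is noise to remove in the first place: a path tracer estimates each pixel by averaging random light-path samples, so the residual noise shrinks only with the square root of the sample count. A toy sketch of that behaviour (illustrative numbers only, not Iray code):

```python
import numpy as np

# Toy Monte Carlo estimator: a path tracer averages random light-path samples
# per pixel, so the noise (standard deviation of the estimate) falls off only
# as 1/sqrt(samples). Halving the noise costs 4x the render time, which is why
# cleaning up a low-sample image with a denoiser is such a large speedup.
rng = np.random.default_rng(0)
for samples in (16, 256, 4096):
    # 10,000 independent pixel estimates, each averaging `samples` random samples
    estimates = rng.uniform(0.0, 1.0, size=(10_000, samples)).mean(axis=1)
    print(f"{samples:5d} samples -> noise ~ {estimates.std():.4f}")
```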
It’s a complete game changer for graphics-intensive industries like entertainment, product design, manufacturing, architecture, engineering and many others.
The technique can be applied to ray-tracing systems of many kinds. NVIDIA is already integrating deep learning techniques into its own rendering products, starting with Iray.
With Iray, there’s no need to worry about how the deep learning functionality works. We’ve already trained the network and use GPU-accelerated inference on Iray output. Creatives just click a button and get interactive rendering with improved image quality on any Pascal or later GPU.
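In other words, the expensive part (training) has already happened offline; what runs on the artist's machine is only inference. A minimal sketch of what that inference step looks like, using a hypothetical model file and function names (this is not the actual Iray SDK API):

```python
import torch

# Hypothetical sketch (not the real Iray SDK): load an already-trained
# denoising network and run a single forward pass on a partially converged
# render. Training happened offline; this inference step is cheap enough
# to run interactively on a single GPU.
denoiser = torch.jit.load("pretrained_denoiser.pt").eval().cuda()

def denoise_frame(noisy_rgb: torch.Tensor) -> torch.Tensor:
    """noisy_rgb: H x W x 3 float tensor from a low-sample-count render."""
    x = noisy_rgb.permute(2, 0, 1).unsqueeze(0).cuda()   # -> 1 x 3 x H x W
    with torch.no_grad():                                 # inference only, no training
        clean = denoiser(x)
    return clean.squeeze(0).permute(1, 2, 0).cpu()
```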
More here: https://blogs.nvidia.com/blog/2017/05/10/ai-for-ray-tracing/
Can't wait for this to show up for Daz3D :D

Comments
...interesting process. Just left a post asking how accessible this new function will be. Apparently only the Titan, Quadro, and Tesla cards are designated for deep learning work (though it would make the Titan XP more worth the price).
Of course it also depends on whether Daz is one of the software companies in line to get the SDK that supports this.
Yeah, I was thinking the same - like how much time would be required for the learning process. It looks like that part has already been done (implied by "We’ve already trained the network and use GPU-accelerated inference on Iray output")? Idk
If it is Quadro only, it won't matter much. People already complain about buying a decent GTX.
An interesting idea, but if they've trained the network using cars in showrooms, on open roads and product design pictures, how well would it apply to other scenes? Maybe Daz would have to do more deep learning sessions with Vicky 7 in temples with swords?
If it is a software solution then there is no reason it would not work on any cards, although they may make it only work on the top-end cards to help push sales of those. We can only hope they relax those restrictions further down the road so that it runs on consumer-level cards.
...yeah VRAM could still be an issue. If the scene drops from the GPU to the CPU, then it would mean deep learning will not be able to assist the process.
Interesting that Nvidia also has made no mention of updating the VCA to the new 24 GB Quadro P6000. Makes that $50,000 price tag a bit less attractive. They may as well use Titan XPs instead and drop the cost to $10,000.
In reading the blog post, they are using an AI network and used the NVIDIA DGX-1 AI supercomputer to train a neural network. This does not appear to be a card-level solution but a distributed neural network solution. The DGX-1 is a monster: dual 20-core Xeons, 8 Tesla V100 GPUs with a total of 40,960 CUDA cores, 128 GB of GPU memory, 512 GB of 2133 DDR4 memory, 4x 1.92 TB SSDs in RAID 0 for storage, and two 10 GbE network connections. It pulls 3,200 watts of power, fits in a 3U rack case and weighs 134 lbs. This thing has 960 TFLOPS of computing power.
Be sweet if AMD integrated a similar AI into its open-source ProRender and matched it up with its new 16 GB video card with 2 TB of onboard SSD.
Nvidia should extend denoising to work in CPU mode too. It would still speed up convergence and improve quality.
You use a much bigger machine to train a neural network than you use to run it. Look at a human: it takes anywhere from 12 to 25 years to train the neural net to create art, but once trained, art can be created in minutes, hours, days, or months, depending on the art. It only takes 1/200 to 1/20,000 the processing power to create art as it does to train up a creator.
Your phone's camera has denoising: not as good as the neural net under discussion, but real time on a processor that runs on a few grams of batteries.
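For comparison, the kind of lightweight, non-neural denoising a camera pipeline can do in real time is a simple edge-preserving filter. A small sketch using OpenCV's bilateral filter, just to contrast the classical approach (actual phone pipelines vary, and this is not what NVIDIA is doing):

```python
import cv2
import numpy as np

# Classical edge-preserving denoising: the bilateral filter averages nearby
# pixels only when they are also similar in brightness, so flat areas are
# smoothed while sharp edges survive. Cheap enough for real time, but far
# less capable than the trained network discussed above.
rng = np.random.default_rng(0)
clean = np.zeros((256, 256), np.float32)
clean[:, 128:] = 1.0                                      # one hard edge
noisy = np.clip(clean + rng.normal(0.0, 0.1, clean.shape), 0, 1).astype(np.float32)

denoised = cv2.bilateralFilter(noisy, d=9, sigmaColor=0.2, sigmaSpace=5)
print("mean error before:", float(np.abs(noisy - clean).mean()),
      "after:", float(np.abs(denoised - clean).mean()))
```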
...sounds like time to go buy a lotto ticket.
Implemented support for NVIDIA Iray Deep Learning Denoiser; added “Post Denoiser Available”, “Post Denoiser Enable” and “Post Denoiser Start Iteration” properties to the “Filtering/Post Denoiser” property group; “Post Denoiser Available” must be enabled prior to starting a render in order to cause the other “Post Denoiser” properties to be revealed and have meaning
So it will be in the 4.11 Beta