AI slashes Iray rendering times with interactive denoising

The ray tracing process generates highly realistic imagery, but it is computationally intensive and can leave a certain amount of noise in an image. Removing this noise while preserving sharp edges and texture detail is known in the industry as denoising. Using NVIDIA Iray, Huang showed how NVIDIA is the first to make high-quality denoising operate in real time, by combining deep learning prediction algorithms with Pascal architecture-based NVIDIA Quadro GPUs.

It’s a complete game-changer for graphics-intensive industries like entertainment, product design, manufacturing, architecture, engineering and many others.

The technique can be applied to ray-tracing systems of many kinds. NVIDIA is already integrating deep learning techniques into its own rendering products, starting with Iray.

With Iray, there’s no need to worry about how the deep learning functionality works. We’ve already trained the network and use GPU-accelerated inference on Iray output. Creatives just click a button and enjoy interactive rendering with improved image quality on any Pascal or later GPU.
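
To make "GPU-accelerated inference on Iray output" a bit more concrete, here is a minimal sketch of pretrained-network denoising in PyTorch. The toy architecture, the weight file name and the tensor sizes are all illustrative; NVIDIA has not published the actual Iray denoiser:

    # Illustrative only: a toy pretrained-CNN denoiser applied to a noisy render.
    # NVIDIA's real Iray denoiser architecture and weights are not public.
    import torch
    import torch.nn as nn

    class ToyDenoiser(nn.Module):
        """Small convolutional network that predicts a residual noise image."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, kernel_size=3, padding=1),
            )

        def forward(self, noisy):
            # Predict the noise and subtract it from the input (residual learning).
            return noisy - self.net(noisy)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = ToyDenoiser().to(device).eval()
    # A real pipeline would load trained weights here, e.g.:
    # model.load_state_dict(torch.load("denoiser_weights.pt"))

    noisy_render = torch.rand(1, 3, 256, 256, device=device)  # stand-in for an Iray frame
    with torch.no_grad():                                     # inference only, no training
        clean = model(noisy_render)

The key point from the quote above is that only the cheap step (a single forward pass) runs on the user's GPU; the expensive training step was done once by NVIDIA.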

More here: https://blogs.nvidia.com/blog/2017/05/10/ai-for-ray-tracing/

Can't wait for this to show up in Daz3D :D

Comments

  • kyoto kid Posts: 41,847

    ....interesting process.  Just left a post asking how accessible this new function will be. Apparently only the Titan, Quadro, and Tesla cards are designated for deep learning work (though that would make the Titan Xp more worth the price).

    Of course, it also depends on whether Daz is one of the software companies in line to get the SDK that supports this.

  • artphobe Posts: 97

    Yeah, I was thinking the same - like how much time would be required for the learning process? Apparently that part has already been done (implied by "We’ve already trained the network and use GPU-accelerated inference on Iray output"). Idk

  • fastbike1 Posts: 4,078

    If it is Quadro only, it won't matter much. People already complain about buying a decent GTX.

  • Peter Wade Posts: 1,666

    An interesting idea, but if they've trained the network using cars in showrooms, cars on open roads and product design pictures, how well would it apply to other scenes? Maybe Daz would have to do more deep learning sessions with Vicky 7 in temples with swords?

  • Havos Posts: 5,574
    fastbike1 said:

    If it is Quadro only, it won't matter much. People already complain about buying a decent GTX.

    If it is a software solution, then there is no reason it would not work on any card, although they may make it work only on the top-end cards to help push sales of those. We can only hope they relax those restrictions further down the road so that it runs on consumer-level cards.

  • outrider42 Posts: 3,679
    Since he spoke of Pascal, it is purely software, as Pascal has finished launching new cards. So all they need to do is deliver driver updates. But since he made no mention of GTX, I believe this is exclusive to the workstation and Titan cards. Hey, they have to keep some excuse to charge $1,000+ for a Titan versus a 1080 Ti.

    And while this is cute, it doesn't address Iray's biggest failing: its intense hunger for VRAM. If they can use AI to come up with denoising, then surely that AI could also learn to optimize the textures in a given scene. It should be able to analyze the scene, drop textures that are not visible, and compress textures that are distant from the camera. Pascal already does something like this for VR gaming.
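
    Purely as a sketch of that idea (Iray exposes nothing like this today, and every name below is made up for illustration): pick a texture resolution from its distance to the camera, and skip textures whose objects are not visible at all.

        # Hypothetical camera-aware texture budgeting; not a real Iray API.
        from dataclasses import dataclass

        @dataclass
        class TextureRef:
            name: str
            resolution: int   # current width/height in pixels (square, power of two)
            distance: float   # distance from the camera, in scene units
            visible: bool     # whether any object using it is in view

        def budget_textures(textures, full_res_within=5.0, min_res=256):
            """Drop invisible textures; halve resolution per distance doubling."""
            kept = []
            for tex in textures:
                if not tex.visible:
                    continue              # never uploaded to VRAM at all
                res, d = tex.resolution, tex.distance
                while d > full_res_within and res > min_res:
                    res //= 2             # one mip level per distance doubling
                    d /= 2.0
                kept.append((tex.name, res))
            return kept

        scene = [
            TextureRef("skin_4k", 4096, 2.0, True),
            TextureRef("wall_4k", 4096, 40.0, True),
            TextureRef("offscreen_prop", 2048, 10.0, False),
        ]
        print(budget_textures(scene))
        # [('skin_4k', 4096), ('wall_4k', 512)] -- distant and unseen textures cost far less VRAM
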
  • kyoto kid Posts: 41,847

    ...yeah, VRAM could still be an issue.  If the scene drops from the GPU to the CPU, then deep learning will not be able to assist the process.

    Interesting that NVIDIA also has made no mention of updating the VCA to the new 24 GB Quadro P6000.  Makes that $50,000 price tag a bit less attractive. They may as well use Titan Xps instead and drop the cost to $10,000.

  • In reading the blog post, they used the NVIDIA DGX-1 AI supercomputer to train a neural network.  This does not appear to be a card-level solution but a distributed neural network solution.  The DGX-1 is a monster: dual 20-core Xeons, 8 Tesla V100 GPUs with a total of 40,960 CUDA cores, 128 GB of GPU memory, 512 GB of 2133 DDR4 memory, and 4× 1.92 TB SSDs in RAID 0 for storage. It has two 10 GbE network connections, pulls 3,200 watts of power, fits in a 3U rack case and weighs 134 lbs.  This thing has 960 TFLOPS of computing power.

  • nonesuch00 Posts: 18,714

    Be sweet if AMD integrated a similar AI into its open-source ProRender and matched it up with its new 16 GB video card with the 2 TB onboard SSD.

    NVIDIA should extend denoising to work in CPU mode too. It would still speed up convergence and improve quality.

  • wiz Posts: 1,100

    In reading the blog post, they used the NVIDIA DGX-1 AI supercomputer to train a neural network.  This does not appear to be a card-level solution but a distributed neural network solution.

    You use a much bigger machine to train a neural network than you need to run it. Look at a human: it takes anywhere from 12 to 25 years to train the neural net to create art, but once trained, it can create art in minutes, hours, days, or months, depending on the piece. It takes only 1/200 to 1/20,000 of the processing power to create art as it does to train up a creator.
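
    Rough numbers on that gap, just to make the point (only the 960 TFLOPS spec comes from the post above; the per-image figures are invented for illustration):

        # Back-of-the-envelope only; per-image training counts are invented.
        dgx1_tflops = 960.0        # DGX-1 peak, from the spec quoted above
        pascal_tflops = 12.0       # single Pascal Titan Xp, FP32, approximate

        # Training sees millions of images, each needing forward + backward passes;
        # denoising one frame at render time is a single forward pass.
        training_images = 1_000_000
        passes_per_image = 3       # forward + backward + weight update, roughly
        training_work = training_images * passes_per_image

        print(f"training vs denoising one frame: {training_work:,}x the forward passes")
        print(f"DGX-1 vs one Pascal card: {dgx1_tflops / pascal_tflops:.0f}x raw throughput")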

    Your phone's camera has denoising: not as good as the neural net under discussion, but real time on a processor that runs on a few grams of batteries.

  • kyoto kid Posts: 41,847

    In reading the blog post, they used the NVIDIA DGX-1 AI supercomputer to train a neural network.  This does not appear to be a card-level solution but a distributed neural network solution.  The DGX-1 is a monster: dual 20-core Xeons, 8 Tesla V100 GPUs with a total of 40,960 CUDA cores, 128 GB of GPU memory, 512 GB of 2133 DDR4 memory, and 4× 1.92 TB SSDs in RAID 0 for storage. It has two 10 GbE network connections, pulls 3,200 watts of power, fits in a 3U rack case and weighs 134 lbs.  This thing has 960 TFLOPS of computing power.

    ...sounds like time to go buy a lotto ticket.

    • Removed deprecated “Noise Filter Enable” property from NVIDIA Iray Render Settings; option was redundant with “Noise Degrain Filtering” being 0 (off) or >0 (on)
    • Implemented support for NVIDIA Iray Deep Learning Denoiser; added “Post Denoiser Available”, “Post Denoiser Enable” and “Post Denoiser Start Iteration” properties to the “Filtering/Post Denoiser” property group; “Post Denoiser Available” must be enabled prior to starting a render in order to cause the other “Post Denoiser” properties to be revealed and have meaning   

    So it will be in the 4.11 Beta
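
    Reading those property names, the plausible behavior is: accumulate iterations normally, then start passing frames through the denoiser once the iteration count reaches the threshold. A rough sketch of that control flow (function names and behavior are placeholders, not the actual Daz or Iray API):

        # Placeholder names throughout; this only illustrates how the three
        # "Post Denoiser" settings from the changelog plausibly interact.
        post_denoiser_available = True       # must be set before the render starts
        post_denoiser_enable = True
        post_denoiser_start_iteration = 8

        def render_one_iteration(image):     # stub: each pass reduces noise a bit
            return 1.0 if image is None else image * 0.9

        def deep_learning_denoise(image):    # stub standing in for the neural pass
            return image * 0.5

        image = None
        for i in range(1, 17):
            image = render_one_iteration(image)
            if (post_denoiser_available and post_denoiser_enable
                    and i >= post_denoiser_start_iteration):
                display = deep_learning_denoise(image)   # neural cleanup kicks in
            else:
                display = image                          # early passes shown raw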
