OT: New Nvidia Cards to come in RTX and GTX versions?! RTX Titan first whispers.


Comments

  • kyoto kid Posts: 40,589

    ...the mathematical stuff is over my head. 

    The Octane Bench seems to be only for evaluating card performance with Octane, which doesn't help as I don't use Octane, nor do I have the resources to purchase different cards to test (let alone build a test rig).

    I am more interested in just seeing actual reviews that include videos of the process in operation or stills of finished images along with the performance data.  A bunch of graphs and charts are pretty meaningless without seeing what the end result actually is.  It would also be nice if it wasn't done on some "uberworkstation" that few of us could afford.

    __________

    @ Aala ...these new Nvidia cards will be applying different technologies than we have previously encountered.  Anything that sounds like a "shortcut" to further speed a process up (such as AI denoising) could have an impact on final render quality.  I've been in the traditional fine arts most of my life and unfortunately have developed some pretty high standards when it comes to quality of either image or sound production/reproduction.

  • ebergerly Posts: 3,255
    edited September 2018

    I tend to agree with Kyoto Kid. At the end of the day, what most of us are (presumably) interested in is an answer to the question "My GTX-1080ti (or whatever) card renders my Studio scene today in 10 minutes. If I buy a 2080 (or 2080ti), how long will it take that scene to render with the same results/quality?" Numbers representing performance relative to a GTX-980, on a different platform with different software, specified in megasamples/sec with Turing functionality not yet enabled, seem, while perhaps interesting, somewhat meaningless for answering that question.
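    As a rough back-of-the-envelope sketch of that question in Python: the 10-minute baseline is the example above, and the 1.5x ratio is a made-up benchmark figure, not a measured Iray number, which is exactly the problem.

    ```python
    # Naive estimate of a new card's render time from a relative benchmark ratio.
    # The ratio is a placeholder, not a measured Iray result; ratios taken from
    # other software (OctaneBench, game FPS, megasamples/sec) may not translate
    # to a Studio Iray scene at all.

    def estimate_render_time(baseline_minutes, relative_speed):
        """baseline_minutes: render time on the card you own.
        relative_speed: how many times faster the new card scores on some benchmark."""
        return baseline_minutes / relative_speed

    baseline = 10.0       # minutes on a GTX-1080ti (example from the post)
    assumed_ratio = 1.5   # hypothetical "new card is 1.5x faster" figure

    print(f"Naive estimate: {estimate_render_time(baseline, assumed_ratio):.1f} min")
    # ~6.7 min, but only if the benchmark workload behaves like your scene.
    ```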

    And I also agree with his response to Aala's statement that "Quality is the same with all cards. Different cards don't render images differently. Only the speed at which those tasks are accomplished differs."

    If I render a scene today on my GTX-1080ti, there is no AI de-noising enabled. In fact, none of my software has a clue what that even means. Nor does it know what the Turing-related shader changes are. Nor does it know about the CUDA, NGX, OptiX, etc., changes. And if I moved that scene over to a computer with another 1080ti and all the new, "updated for Turing" software (at least what little is available right now), I have no clue whether it will apply AI de-noising to that same scene even though the 1080ti doesn't have the hardware to speed it up. And so on with all the other Turing-related software updates.

    So yeah, de-noising can affect image quality. Doesn't mean it will, but it can. And presumably different shader algorithms can change your results too. So yeah, I think Aala is probably right under very strictly-controlled conditions where all the variables are known and it's truly an apples-to-apples comparison. But at this point, when the software isn't even ready yet, I'm not sure it's even possible to make an apples-to-apples comparison. 

    Like I've said from the beginning, until we can add another entry in the attached chart of Sickleyield benchmark results (or an equivalent Iray benchmark) I think any conclusions are very premature.  

    But on a bright note, it sounds like a number of folks here have said they pre-ordered some RTX cards, so I'm hoping that we start to see some preliminary results for Studio Iray renders (even though we don't have the software available yet) in the next few weeks. 

    BenchmarkNewestCores.jpg
    Post edited by ebergerly on
  • Aala Posts: 140
    kyoto kid said:

    @ Aala ...these new Nvidia cards will be applying different technologies than we have previously encountered.  Anything that sounds like a "shortcut" to further speed a process up (such as AI denoising) could have an impact on final render quality.  I've been in the traditional fine arts most of my life and unfortunately have developed some pretty high standards when it comes to quality of either image or sound production/reproduction.

    Different technologies on a hardware level, yes, but not on a software level, at least not at the level above the drivers. The algorithms used by the AI Denoiser integrated in Daz Studio are the same for both the 10 series and the 20 series. The newer generation of course has the new Tensor and RT cores to speed up the process, but that's just it: they only speed things up, they don't run anything fundamentally differently.

    It would be a fundamental design flaw if an Iray Uber Shader applied to a surface, lit by a certain light, rendered out differently on different cards. That would mean the physics of ray tracing worked differently on different cards, and that's a total no-go. The same goes for the AI Denoiser.

    What the new cards possess are cores (the Ray Tracing and Tensor cores) with a different architectural design compared to the CUDA cores, tailored so that certain algorithms can be processed faster and with less energy. The same goes for CUDA cores compared to traditional CPU cores, but the quality you get from a CPU render is still the same; it just takes much longer. So far, the RT cores aren't usable in Iray because someone needs to write drivers that recognize those ray-tracing workloads and delegate them to the RT cores instead of CUDA. As far as I know, this has already been done but will have to be delivered in a new patch via Windows 10. Then it remains to be seen whether we can use them immediately with Iray, or whether the Iray devs will have to integrate it too.
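    To illustrate that point, here is a toy Python sketch (not Iray's dispatch code; the backend names and speed factors are invented): the same ray-sphere intersection returns the identical hit no matter which backend label it runs under, and only the made-up cost changes.

    ```python
    # Toy illustration: the ray-tracing math is the same regardless of which
    # hardware runs it; a faster backend only changes how long it takes, not the
    # answer. The backends and their speed factors are made up for illustration.
    import math

    def ray_sphere_hit(origin, direction, center, radius):
        """Distance to the nearest ray/sphere hit, or None (direction normalized)."""
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0:
            return None
        return (-b - math.sqrt(disc)) / 2.0

    ASSUMED_SPEED = {"cuda_cores": 1.0, "rt_cores": 8.0}  # hypothetical throughput factors

    for backend, speed in ASSUMED_SPEED.items():
        hit = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
        print(f"{backend}: hit at t={hit:.3f}, relative cost={1.0 / speed:.3f}")
    # Both backends report the identical hit distance; only the (made-up) cost differs.
    ```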

     

     

  • drzap said:

    I think this is a sensible expectation based on what we know with a caveat:  The Optix denoiser needs pre-training before it can be effective in Daz Studio.  This means that although you could very well see a 3x speedup in denoising, it could be nullified by poor inferencing due to lack of proper training.... and the onus for the deep learning will be on Daz3D for this, not Nvidia.  Each deep learning task is application specific, with the results passed to the video card as a driver update for use by the tensor cores in realtime.  From what I have heard, Optix denoising is terrible right now in Daz Studio.  This is likely due to Daz hasn't finished the training yet.

    The denoiser should at least ship with Nvidia's training package. Since we don't have any implementation details, it's a bit hard to know what the problem would be.

    The only thing I can think of would be that Nvidia tends to train its neural network mostly on architectural renders, and since DS renders are not in that field, the denoiser would fail. In that case you'd be right and the denoiser would lack proper training. I also don't know whether the training would be difficult or not, because there are a lot of different render types in the DS community (e.g. toons vs. realistic vs. NPR, etc.).

    In the worst case you could at least get acceleration from the two other fields to get the beginning of a comparison, but that would be incomplete.

    @Kyoto Kid: the denoising feature is already in DS. That you may not be using it is a personal choice, but for people aiming at real-time rendering it is an obligation. Now the difference with Turing is that the denoising is done through the RT cores and not the CUDA cores. The feature won't be different just by changing the hardware, as Aala says.

    @Linvanchene: You're in a DAZ forum, so lots of users don't really care about "outside" software. ebergerly is also not totally wrong in the sense that Octane is way quicker than Iray, so the speedup will certainly not fully translate; a 1.76x speed increase may become only 1.5x inside DS (see the sketch below). I've also seen a few videos of Octane 4. The engine is quite different from a classical path tracer, as it now integrates Brigade and has many features that are not in Iray. Even without Turing features it is already close to real time. Don't get me wrong, your numbers are interesting as they give an order of magnitude for the potential speedup, but that's all they can represent.
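    A minimal sketch of that caveat, with an assumed "carry-over" factor chosen only so it reproduces the ~1.5x guess above:

    ```python
    # A speedup measured in Octane probably shrinks when carried over to Iray
    # inside DS. The carry-over factor below is a guess picked to match the
    # ~1.5x figure in the post, not a measured value.

    def translate_speedup(octane_speedup, carry_over=0.66):
        """Scale a speedup measured in another engine by an assumed carry-over factor.
        1.0 would mean the gain translates fully; lower means it shrinks."""
        return 1.0 + (octane_speedup - 1.0) * carry_over

    octane_gain = 1.76  # example figure from the thread
    print(f"{octane_gain:.2f}x in Octane -> ~{translate_speedup(octane_gain):.2f}x in DS (guess)")
    # With the assumed 0.66 carry-over this lands at about 1.50x.
    ```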

  • kyoto kid Posts: 40,589
    Aala said:
    kyoto kid said:

    @ Aala ...these new Nvidia cards will be applying different technologies than we have previously encountered.  Anything that sounds like a "shortcut" to further speed a process up (such as AI denoising) could have an impact on final render quality.  I've been in the traditional fine arts most of my life and unfortunately have developed some pretty high standards when it comes to quality of either image or sound production/reproduction.

    Different technologies on a hardware level, yes, but not on a software level, at least not at the level above the drivers. The algorithms used by the AI Denoiser integrated in Daz Studio are the same for both the 10 series and the 20 series. The newer generation of course has the new Tensor and RT cores to speed up the process, but that's just it: they only speed things up, they don't run anything fundamentally differently.

    It would be a fundamental design flaw if an Iray Uber Shader applied to a surface, lit by a certain light, rendered out differently on different cards. That would mean the physics of ray tracing worked differently on different cards, and that's a total no-go. The same goes for the AI Denoiser.

    What the new cards possess are cores (the Ray Tracing and Tensor cores) with a different architectural design compared to the CUDA cores, tailored so that certain algorithms can be processed faster and with less energy. The same goes for CUDA cores compared to traditional CPU cores, but the quality you get from a CPU render is still the same; it just takes much longer. So far, the RT cores aren't usable in Iray because someone needs to write drivers that recognize those ray-tracing workloads and delegate them to the RT cores instead of CUDA. As far as I know, this has already been done but will have to be delivered in a new patch via Windows 10. Then it remains to be seen whether we can use them immediately with Iray, or whether the Iray devs will have to integrate it too.

     

     

    ...drivers can also be downloaded directly from the Nvidia site and installed manually, so one doesn't need to wait for the MS patch. I could install it in W7, but it would serve no purpose, as my system is most likely too old to really get much out of the new technologies (it has PCIe 2.0 x16 slots and only supports DirectX 11).

  • kyoto kid Posts: 40,589
    drzap said:

    I think this is a sensible expectation based on what we know with a caveat:  The Optix denoiser needs pre-training before it can be effective in Daz Studio.  This means that although you could very well see a 3x speedup in denoising, it could be nullified by poor inferencing due to lack of proper training.... and the onus for the deep learning will be on Daz3D for this, not Nvidia.  Each deep learning task is application specific, with the results passed to the video card as a driver update for use by the tensor cores in realtime.  From what I have heard, Optix denoising is terrible right now in Daz Studio.  This is likely due to Daz hasn't finished the training yet.

     

    @Kyoto Kid: the denoising feature is already in DS. That you may not be using it is a personal choice, but for people aiming at real-time rendering it is an obligation. Now the difference with Turing is that the denoising is done through the RT cores and not the CUDA cores. The feature won't be different just by changing the hardware, as Aala says.

     

    ...so what purpose will the Tensor cores serve in a consumer grade GPU card?   Not many of us or those in the gaming community are scientists looking to run geologic, climate modelling simulations or become involved in deep learning development.  Makes more sense that Tensor cores would be more efficient for functions like AI denoising than CUDA or RT ones.

    As to choice, yes this is part of why I dropped away from using LuxRender as I was not impressed with the render quality when using the speed boost option.   At normal speed to get the best quality, Lux made geologic time look quick.

  • Aala Posts: 140
    kyoto kid said:
    drzap said:

    I think this is a sensible expectation based on what we know with a caveat:  The Optix denoiser needs pre-training before it can be effective in Daz Studio.  This means that although you could very well see a 3x speedup in denoising, it could be nullified by poor inferencing due to lack of proper training.... and the onus for the deep learning will be on Daz3D for this, not Nvidia.  Each deep learning task is application specific, with the results passed to the video card as a driver update for use by the tensor cores in realtime.  From what I have heard, Optix denoising is terrible right now in Daz Studio.  This is likely due to Daz hasn't finished the training yet.

     

    @Kyoto Kid: the denoising feature is already in DS. That you may not be using it is a personal choice, but for people aiming at real-time rendering it is an obligation. Now the difference with Turing is that the denoising is done through the RT cores and not the CUDA cores. The feature won't be different just by changing the hardware, as Aala says.

     

    ...so what purpose will the Tensor cores serve in a consumer grade GPU card?   Not many of us or those in the gaming community are scientists looking to run geologic, climate modelling simulations or become involved in deep learning development.  Makes more sense that Tensor cores would be more efficient for functions like AI denoising than CUDA or RT ones.

    As to choice, yes this is part of why I dropped away from using LuxRender as I was not impressed with the render quality when using the speed boost option.   At normal speed to get the best quality, Lux made geologic time look quick.

    For AI-related stuff where speed really matters. The AI denoiser for Iray isn't something we need to run multiple times per second for a final image, but it could certainly help when we preview things with Iray in the viewport.

    The real reason it's been placed on the new GeForce line is primarily to support DLSS:

    https://wccftech.com/nvidia-dlss-explained-nvidia-ngx/

    Personally, though, I fully expect the RT cores to help speed up our renders 2x to 6x once everything's fully integrated. CUDA cores are optimized for rasterization, but the RT cores, on paper, should give us a huge performance boost on top of that.

  • bbaluchon Posts: 34
    edited September 2018

    Rendered the test scene (from DAZ_Rawb on page 17) in 8 minutes 30 seconds on a GTX-1080 OC (stock). That is consistent with the various numbers I found around here. ^^

    About the RTX cards: going by the official numbers, all the CUDA cores on a 1080Ti together are capable of ~1 GigaRay/sec, whereas a 2080Ti's RT cores alone can deliver about 10 GigaRays/sec, for a total horsepower of 11-12 GigaRays/sec (RT + CUDA cores), give or take.

    So in theory, the RTX cards will be around 10x more powerful for ray-traced renders in DAZ (see the sketch below)?

    I'm waiting for drivers and DAZ software to fully support the thing, but it sounds very promising IMHO... We may be able to pose our models during a constant Iray render, which is very useful when trying subtle shader changes, testing lights, etc.

    Basically it's around the ray-tracing power of a Quadro card, minus the VRAM capacity (which is disappointingly low on the RTX models, in my opinion).
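    The arithmetic behind that ~10x figure, using the GigaRay numbers quoted above (marketing figures, so treat the result as a theoretical ceiling for the ray-casting part of the work only):

    ```python
    # Quick arithmetic on the quoted GigaRay figures. These are headline numbers,
    # so the real-world Iray gain will likely be smaller.

    gtx_1080ti_grays = 1.0         # ~1 GigaRay/sec via CUDA cores (quoted figure)
    rtx_2080ti_rt_grays = 10.0     # ~10 GigaRays/sec from the RT cores alone
    rtx_2080ti_total_grays = 11.5  # ~11-12 GigaRays/sec RT + CUDA combined, give or take

    print(f"RT cores alone vs 1080Ti: ~{rtx_2080ti_rt_grays / gtx_1080ti_grays:.0f}x")
    print(f"Whole card vs 1080Ti:     ~{rtx_2080ti_total_grays / gtx_1080ti_grays:.1f}x")
    # Roughly a 10-12x theoretical ceiling for ray casting only; shading,
    # texturing and denoising don't scale with GigaRays.
    ```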

    Post edited by bbaluchon on
  • drzap Posts: 795
    Aala said:
    kyoto kid said:
    drzap said:

    I think this is a sensible expectation based on what we know with a caveat:  The Optix denoiser needs pre-training before it can be effective in Daz Studio.  This means that although you could very well see a 3x speedup in denoising, it could be nullified by poor inferencing due to lack of proper training.... and the onus for the deep learning will be on Daz3D for this, not Nvidia.  Each deep learning task is application specific, with the results passed to the video card as a driver update for use by the tensor cores in realtime.  From what I have heard, Optix denoising is terrible right now in Daz Studio.  This is likely due to Daz hasn't finished the training yet.

     

    @Kyoto Kid: the denoising feature is already in DS. That you may not be using it is a personal choice, but for people aiming at real-time rendering it is an obligation. Now the difference with Turing is that the denoising is done through the RT cores and not the CUDA cores. The feature won't be different just by changing the hardware, as Aala says.

     

    ...so what purpose will the Tensor cores serve in a consumer grade GPU card?   Not many of us or those in the gaming community are scientists looking to run geologic, climate modelling simulations or become involved in deep learning development.  Makes more sense that Tensor cores would be more efficient for functions like AI denoising than CUDA or RT ones.

    As to choice, yes this is part of why I dropped away from using LuxRender as I was not impressed with the render quality when using the speed boost option.   At normal speed to get the best quality, Lux made geologic time look quick.

    For AI-related stuff where speed really matters. The AI denoiser for Iray isn't something we need to run multiple times per second for a final image, but it could certainly help when we preview things with Iray in the viewport.

    The real reason it's been placed on the new GeForce line is primarily to support DLSS:

    https://wccftech.com/nvidia-dlss-explained-nvidia-ngx/

    Personally, though, I fully expect the RT cores to help speed up our renders 2x to 6x once everything's fully integrated. CUDA cores are optimized for rasterization, but the RT cores, on paper, should give us a huge performance boost on top of that.

    Actually, denoisers greatly decrease the time needed for a renderer to complete an image, because the renderer doesn't need to fully converge the image. This is a crucial part of attaining the real-time ray-tracing goal. Normally, denoising is not a slow process, taking not much more than a second (if that), but if your goal is to produce 30 to 60 frames a second, that is still much too slow. Tensor cores, relying on deep learning and neural networks, are able to process denoising as a post operation and finish a noisy render in a fraction of the time. Granted, in Iray (or most 3D workflows) this is of smaller value compared to the other aspects of rendering a frame, but in real-time applications every millisecond counts. So, provided that game developers take the time to train the denoiser properly with renders of their actual gameplay, denoising can greatly aid them in achieving the real-time ray-tracing dream.
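    Putting that timing argument into numbers (the ~1 second denoise estimate comes from the post; the rest is plain arithmetic):

    ```python
    # A ~1 second denoise is fine for a still render but hopeless for real-time
    # frame rates, which is the case for hardware-accelerated denoising.

    classic_denoise_ms = 1000.0  # "not much more than a second" per the post

    for fps in (30, 60):
        frame_budget_ms = 1000.0 / fps
        over = classic_denoise_ms / frame_budget_ms
        print(f"{fps} fps -> {frame_budget_ms:.1f} ms per frame; "
              f"a {classic_denoise_ms:.0f} ms denoise is ~{over:.0f}x over budget")
    # 30 fps leaves ~33 ms and 60 fps ~17 ms per frame for everything (geometry,
    # shading, rays AND denoising), so the denoise step needs to run in a few
    # milliseconds to fit.
    ```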

  • nicstt Posts: 11,714
    drzap said:
    Aala said:
    kyoto kid said:
    drzap said:

    I think this is a sensible expectation based on what we know with a caveat:  The Optix denoiser needs pre-training before it can be effective in Daz Studio.  This means that although you could very well see a 3x speedup in denoising, it could be nullified by poor inferencing due to lack of proper training.... and the onus for the deep learning will be on Daz3D for this, not Nvidia.  Each deep learning task is application specific, with the results passed to the video card as a driver update for use by the tensor cores in realtime.  From what I have heard, Optix denoising is terrible right now in Daz Studio.  This is likely due to Daz hasn't finished the training yet.

     

    @Kyoto Kid: the denoising feature is already in DS. That you may not be using it is a personal choice, but for people aiming at real-time rendering it is an obligation. Now the difference with Turing is that the denoising is done through the RT cores and not the CUDA cores. The feature won't be different just by changing the hardware, as Aala says.

     

    ...so what purpose will the Tensor cores serve in a consumer grade GPU card?   Not many of us or those in the gaming community are scientists looking to run geologic, climate modelling simulations or become involved in deep learning development.  Makes more sense that Tensor cores would be more efficient for functions like AI denoising than CUDA or RT ones.

    As to choice, yes this is part of why I dropped away from using LuxRender as I was not impressed with the render quality when using the speed boost option.   At normal speed to get the best quality, Lux made geologic time look quick.

    For AI-related stuff where speed really matters. The AI denoiser for Iray isn't something we need to run multiple times per second for a final image, but it could certainly help when we preview things with Iray in the viewport.

    The real reason it's been placed on the new GeForce line is primarily to support DLSS:

    https://wccftech.com/nvidia-dlss-explained-nvidia-ngx/

    Personally, though, I fully expect the RT cores to help speed up our renders 2x to 6x once everything's fully integrated. CUDA cores are optimized for rasterization, but the RT cores, on paper, should give us a huge performance boost on top of that.

    Actually, denoisers greatly decrease the time needed for a renderer to complete an image, because the renderer doesn't need to fully converge the image. This is a crucial part of attaining the real-time ray-tracing goal. Normally, denoising is not a slow process, taking not much more than a second (if that), but if your goal is to produce 30 to 60 frames a second, that is still much too slow. Tensor cores, relying on deep learning and neural networks, are able to process denoising as a post operation and finish a noisy render in a fraction of the time. Granted, in Iray (or most 3D workflows) this is of smaller value compared to the other aspects of rendering a frame, but in real-time applications every millisecond counts. So, provided that game developers take the time to train the denoiser properly with renders of their actual gameplay, denoising can greatly aid them in achieving the real-time ray-tracing dream.

    For characters, I find the denoiser of no use until close to normal convergence anyway. The exception is probably toon-like as the skin textures in those products and style have far fewer skin details (blemeshes for example).

    It does help for hard surface and background items - sometimes including characters depending on the amount of DoF.

  • Takeo.Kensei Posts: 1,303
    edited September 2018
    kyoto kid said:
    drzap said:

    I think this is a sensible expectation based on what we know with a caveat:  The Optix denoiser needs pre-training before it can be effective in Daz Studio.  This means that although you could very well see a 3x speedup in denoising, it could be nullified by poor inferencing due to lack of proper training.... and the onus for the deep learning will be on Daz3D for this, not Nvidia.  Each deep learning task is application specific, with the results passed to the video card as a driver update for use by the tensor cores in realtime.  From what I have heard, Optix denoising is terrible right now in Daz Studio.  This is likely due to Daz hasn't finished the training yet.

     

    @Kyoto Kid: the denoising feature is already in DS. That you may not be using it is a personal choice, but for people aiming at real-time rendering it is an obligation. Now the difference with Turing is that the denoising is done through the RT cores and not the CUDA cores. The feature won't be different just by changing the hardware, as Aala says.

     

    ...so what purpose will the Tensor cores serve in a consumer grade GPU card?   Not many of us or those in the gaming community are scientists looking to run geologic, climate modelling simulations or become involved in deep learning development.  Makes more sense that Tensor cores would be more efficient for functions like AI denoising than CUDA or RT ones.

    As to choice, yes this is part of why I dropped away from using LuxRender as I was not impressed with the render quality when using the speed boost option.   At normal speed to get the best quality, Lux made geologic time look quick.

    You don't need to develop anything just because you have Tensor cores. There are two parts to AI: training and inference.

    Somebody makes an AI-driven app and trains it. You just use that app (inference), and with the help of Tensor cores the app will run faster.

    There are many fields where AI is being developed. For gamers, DLSS, as mentioned, is one example; for graphics artists, Adobe is making something: https://petapixel.com/2018/01/23/select-subject-photoshop-now-ai-powered-one-click-selections/

    Any task that you find pretty annoying could be automated better with some AI help. There is a lot of AI-driven research and many apps being developed for image classification/recognition. You could imagine AI-driven DAZ library classification. Add some AI speech recognition/assistant and you could dream of driving DS with just your voice. The application field is limitless. What you need is a workforce for coding and lots of data for training.
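    A minimal sketch of that training/inference split, using a trivial stand-in "model" instead of a real neural network; the point is only that the expensive fitting happens once on the vendor's side, while users run the cheap inference step that Tensor cores accelerate:

    ```python
    # The "model" here is a single blend weight standing in for a neural network.
    # The vendor fits it once on their data; end users only run inference.

    def train(noisy_samples, clean_samples):
        """Pretend training: pick one weight that best maps noisy -> clean."""
        weights = [c / n for n, c in zip(noisy_samples, clean_samples) if n != 0]
        return sum(weights) / len(weights)  # the "shipped model"

    def infer(model_weight, noisy_pixel):
        """Pretend inference: apply the shipped weight. Fast, no training data needed."""
        return model_weight * noisy_pixel

    # Vendor side (done once, offline, on lots of data):
    model = train(noisy_samples=[0.8, 1.2, 0.9], clean_samples=[1.0, 1.0, 1.0])

    # User side (done every time, quickly):
    print(infer(model, 0.85))  # end users never touch the training step
    ```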

    @Drzap: I tested the denoiser a bit with DS beta 4.11.0.171.

    I set the Post Denoiser Start Iteration to 1 to check. At least it seems to work, and I get a noise-free preview in less than 1 min.

    Before the preview gets enough samples, it looks like an impressionist painting.

    Perhaps it is better with the beta?

    Post edited by Takeo.Kensei on
  • Perhaps it is better with the beta?

    The new denoiser is only in the beta at this time.

  • Aala Posts: 140
    drzap said:
    Aala said:
    kyoto kid said:
    drzap said:

    I think this is a sensible expectation based on what we know with a caveat:  The Optix denoiser needs pre-training before it can be effective in Daz Studio.  This means that although you could very well see a 3x speedup in denoising, it could be nullified by poor inferencing due to lack of proper training.... and the onus for the deep learning will be on Daz3D for this, not Nvidia.  Each deep learning task is application specific, with the results passed to the video card as a driver update for use by the tensor cores in realtime.  From what I have heard, Optix denoising is terrible right now in Daz Studio.  This is likely due to Daz hasn't finished the training yet.

     

    @Kyoto Kid: the denoising feature is already in DS. That you may not be using it is a personal choice, but for people aiming at real-time rendering it is an obligation. Now the difference with Turing is that the denoising is done through the RT cores and not the CUDA cores. The feature won't be different just by changing the hardware, as Aala says.

     

    ...so what purpose will the Tensor cores serve in a consumer grade GPU card?   Not many of us or those in the gaming community are scientists looking to run geologic, climate modelling simulations or become involved in deep learning development.  Makes more sense that Tensor cores would be more efficient for functions like AI denoising than CUDA or RT ones.

    As to choice, yes this is part of why I dropped away from using LuxRender as I was not impressed with the render quality when using the speed boost option.   At normal speed to get the best quality, Lux made geologic time look quick.

    For AI-related stuff where speed really matters. The AI denoiser for Iray isn't something we need to run multiple times per second for a final image, but it could certainly help when we preview things with Iray in the viewport.

    The real reason it's been placed on the new GeForce line is primarily to support DLSS:

    https://wccftech.com/nvidia-dlss-explained-nvidia-ngx/

    Personally, though, I fully expect the RT cores to help speed up our renders 2x to 6x once everything's fully integrated. CUDA cores are optimized for rasterization, but the RT cores, on paper, should give us a huge performance boost on top of that.

    Actually, denoisers greatly decrease the time needed for a renderer to complete an image, because the renderer doesn't need to fully converge the image. This is a crucial part of attaining the real-time ray-tracing goal. Normally, denoising is not a slow process, taking not much more than a second (if that), but if your goal is to produce 30 to 60 frames a second, that is still much too slow. Tensor cores, relying on deep learning and neural networks, are able to process denoising as a post operation and finish a noisy render in a fraction of the time. Granted, in Iray (or most 3D workflows) this is of smaller value compared to the other aspects of rendering a frame, but in real-time applications every millisecond counts. So, provided that game developers take the time to train the denoiser properly with renders of their actual gameplay, denoising can greatly aid them in achieving the real-time ray-tracing dream.

    Ah yeah, that's definitely true. Again, I'm not 100% sure, but from what I've seen I'm assuming that they're not ray-tracing the entire scene. It's probably a combination of rasterization and ray-tracing where they turn off the shadows and reflections on the traditional renderer and add them via ray-tracing. It's basically a biased ray-tracing method, unlike Iray.

  • ebergerly Posts: 3,255
    Aala said:

     

    Personally, though, I fully expect the RT cores to help speed up our renders 2x to 6x once everything's fully integrated. CUDA cores are optimized for rasterization, but the RT cores, on paper, should give us a huge performance boost on top of that.

    I'm sure you're right, depending on what you're comparing it to. I assume you mean you're expecting to cut render times in half, or maybe to 1/6 the time. So a 6-minute render on an existing device might drop to 3 minutes, or even down to only 1 minute. Of course, if you're comparing that to an existing render on my GTX-745, then a 6x speedup is pretty easy to achieve. So I assume you mean relative to a 1080ti?

    Since the 2080ti is roughly twice the price of a 1080ti, I assume the minimum improvement in render times from a 2080ti has to be at least 2x a 1080ti, or nobody would buy one. So yeah, at least 2x a 1080ti.
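    That break-even argument as arithmetic, with placeholder prices chosen only to match "roughly twice the price":

    ```python
    # Placeholder street prices for illustration; plug in what the cards actually cost.
    assumed_price = {"1080ti": 600.0, "2080ti": 1200.0}  # hypothetical USD figures

    def required_speedup_to_match_value(old_card, new_card):
        """Speedup the new card needs just to equal the old card's performance per dollar."""
        return assumed_price[new_card] / assumed_price[old_card]

    needed = required_speedup_to_match_value("1080ti", "2080ti")
    print(f"A 2080ti must render at least {needed:.1f}x faster than a 1080ti "
          "just to break even on performance per dollar (with these assumed prices).")
    ```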

    As far as what the final numbers will be, I think that's FAR too complicated to guess at this point. 

    BTW, Gamers Nexus did some NVLink testing yesterday (?) on two 2080ti's and decided he'd still recommend sticking with a single GPU rather than going the SLI/NVLink route. He said for some games NVLink did help SLI performance, so you actually can get a doubling in framerate with dual GPUs, but for other games it's all over the place, and in some cases it makes little difference to the SLI performance (alternate frame rendering). It all depends on the software and how they have it configured/optimized. Which is one reason I say that guessing render times at this point is way too complicated.

  • ebergerly Posts: 3,255
    edited September 2018

    By the way, regarding the de-noiser, there are some good videos out there describing the Blender de-noiser (introduced last year?), which is really pretty nice. However, as with any post-processing function it has some limitations. Whether those limitations matter in any particular render is up to the user. I think the intent with the Blender de-noiser was to improve the realtime preview as well as game performance, not necessarily to make much difference in professional renders. Doesn't mean it won't end up making a big difference in renders, but as you can see from some of the videos there are cases where there is a clear limitation in what the de-noiser algorithm can do. Though I'm guessing that the NVIDIA version will probably be a significant improvement on the Blender version. 

    Here's an example video showing some of the benefits and limitations. Again, it may or may not apply directly to the new RTX de-noising, but it shows that de-noising isn't necessarily a perfect solution. Also, note the complex interaction of other render settings in order to achieve the optimum results for a particular scene. So maybe with the RTX cards there's also more to it than just checking "Apply De-Noising" if you want optimum results:

    https://www.youtube.com/watch?v=2yhqIyagDQM 

    Post edited by ebergerly on
  • drzap Posts: 795
    ebergerly said:

    By the way, regarding the de-noiser, there are some good videos out there describing the Blender de-noiser (introduced last year?), which is really pretty nice. However, as with any post-processing function it has some limitations. Whether those limitations matter in any particular render is up to the user. I think the intent with the Blender de-noiser was to improve the realtime preview as well as game performance, not necessarily to make much difference in professional renders. Doesn't mean it won't end up making a big difference in renders, but as you can see from some of the videos there are cases where there is a clear limitation in what the de-noiser algorithm can do. Though I'm guessing that the NVIDIA version will probably be a significant improvement on the Blender version. 

    Here's an example video showing some of the benefits and limitations. Again, it may or may not apply directly to the new RTX de-noising, but it shows that de-noising isn't necessarily a perfect solution:

    https://www.youtube.com/watch?v=2yhqIyagDQM 

    The intent of the AI denoiser was to minimize the typical limitations of traditional denoisers. Unfortunately, outside of tech demos, I haven't seen a good use for the AI denoiser other than for previews in the viewport. It's too new a technology, and the training requirements for it to work as advertised are a little steep. In theory, it should be simple to find a few thousand pictures to feed it and allow the network to do its work. But where are they going to get tens of thousands of renders to feed the deep learning server? I think that's the rub. If I remember correctly, one of the advertised features was that it could learn as you continue to use it. In other words, it would train on our renders as we work. But that would take some time. All I know is Arnold's implementation of it is awful, Daz Studio's definitely could use some work, and I prefer my traditional denoiser, Altus, over the Optix one in Redshift. So this technology has some promise, but there is a ways to go.

  • ebergerly Posts: 3,255
    edited September 2018

    Yeah, I think what gets lost in the discussion of stuff like this is that something like de-noising is a balance between speed (aided by the Tensor core hardware) and quality (a function of the software and AI stuff). Just because the hardware exists doesn't mean the software is going to give some magical results. Yeah, it will be faster due to the hardware designed to speed up that function, but the extent to which it has great quality is a function of how the software is written and optimized, and the balancing act of how much post-processing you're willing to accept.

    I'm guessing that it's not just a case of "Select RTX Tensor Core De-Noising" and your render is now super fast and high quality. I assume that even with the AI stuff you'll need to do something like the balancing act described in the above video. And that balancing act will also affect render times and render time comparisons.  

    Post edited by ebergerly on
  • kyoto kid Posts: 40,589
    edited September 2018
    Aala said:
    kyoto kid said:
    drzap said:

    I think this is a sensible expectation based on what we know with a caveat:  The Optix denoiser needs pre-training before it can be effective in Daz Studio.  This means that although you could very well see a 3x speedup in denoising, it could be nullified by poor inferencing due to lack of proper training.... and the onus for the deep learning will be on Daz3D for this, not Nvidia.  Each deep learning task is application specific, with the results passed to the video card as a driver update for use by the tensor cores in realtime.  From what I have heard, Optix denoising is terrible right now in Daz Studio.  This is likely due to Daz hasn't finished the training yet.

     

    @Kyoto Kid: the denoising feature is already in DS. That you may not be using it is a personal choice, but for people aiming at real-time rendering it is an obligation. Now the difference with Turing is that the denoising is done through the RT cores and not the CUDA cores. The feature won't be different just by changing the hardware, as Aala says.

     

    ...so what purpose will the Tensor cores serve in a consumer grade GPU card?   Not many of us or those in the gaming community are scientists looking to run geologic, climate modelling simulations or become involved in deep learning development.  Makes more sense that Tensor cores would be more efficient for functions like AI denoising than CUDA or RT ones.

    As to choice, yes this is part of why I dropped away from using LuxRender as I was not impressed with the render quality when using the speed boost option.   At normal speed to get the best quality, Lux made geologic time look quick.

    For AI-related stuff where speed really matters. The AI denoiser for Iray isn't something we need to run multiple times per second for a final image, but it could certainly help when we preview things with Iray in the viewport.

    The real reason it's been placed on the new GeForce line is primarily to support DLSS:

    https://wccftech.com/nvidia-dlss-explained-nvidia-ngx/

     

    ...OK, so this sounds more beneficial for games and animation, not so much for rendering high-quality static images. So again, likely not worth the extra cost if that is all one does.

    While Turing comes with a variety of performance-oriented shading improvements like Mesh Shading, Variable Rate Shading, and Texture-space Shading, so far DLSS is the one that’s seeing widespread adoption with 25 games already confirmed to adopt it and developers like Phoenix Labs talking positively of its benefits.

     

    It’s indeed promising to say the least. By cross-referencing NVIDIA’s own benchmarks, the GeForce RTX 2080 with DLSS enabled should jump to 57.6 FPS in Shadow of the Tomb Raider, almost catching up with the base 59 FPS registered by the RTX 2080Ti. Which, in turn, could soar to well over 70FPS at 4K resolution with DLSS enabled

    Post edited by kyoto kid on
  • After reading and watching a number of the early reviews, the price/bang-for-the-buck issue seems to be a major thing. A question: at this moment, which would be better in terms of both performance and cost: one 2080 or two 1080s? I'll be buying an HP Omen or Origin PC rig around Christmas or the New Year to use for rendering and moderate gameplay, plus some CAD and 3D printing work.

     

    Thanks!!!

     

  • nicstt Posts: 11,714

    After reading and watching a number of the early reviews, the price/bang-for-the-buck issue seems to be a major thing. A question: at this moment, which would be better in terms of both performance and cost: one 2080 or two 1080s? I'll be buying an HP Omen or Origin PC rig around Christmas or the New Year to use for rendering and moderate gameplay, plus some CAD and 3D printing work.

     

    Thanks!!!

     

    At this stage: neither. There are too many unanswered questions still.

  • ebergerly Posts: 3,255

       Yeah, if it was me I'd wait until 1Q of next year. 

  • After reading and watching a number of the early reviews, the price/bang-for-the-buck issue seems to be a major thing. A question: at this moment, which would be better in terms of both performance and cost: one 2080 or two 1080s? I'll be buying an HP Omen or Origin PC rig around Christmas or the New Year to use for rendering and moderate gameplay, plus some CAD and 3D printing work.

     

    Thanks!!!

    Right now there's really no reason to buy a 2080. We don't know enough about ray tracing and DLSS, and RTX 2080 performance in rasterization games is basically 1% faster than the 1080 Ti, which is, well, cheaper than the 2080.

  • Ghosty12 Posts: 1,983
    edited September 2018

    Linus just put up a video on NVLink and what it is capable of, but he does say that the NVLink that comes on the GeForce RTX cards will not be able to do the GPU and memory pooling that the Quadros can do. So if you want that sort of capability, you will need to enter the world of Quadro.

    Post edited by Ghosty12 on
  • kyoto kid Posts: 40,589

    ...that has been my thinking all along.  Most likely it has to do in part with the Quadro drivers.

  • Ghosty12 Posts: 1,983
    edited September 2018

    You'll also notice that the Quadros have two bridges while the GeForce only has one; he even mentions what Nvidia said to him about how the GeForce bridges have been neutered as well.

    Post edited by Ghosty12 on
  • Aala Posts: 140
    kyoto kid said:

    ...that has been my thinking all along.  Most likely it has to do in part with the Quadro drivers.

    Also the price. The Quadro NVLink version is $600.

  • ebergerly Posts: 3,255
    edited September 2018
    I was kinda holding out a dim glimmer of hope that the RTX NVLink/SLI connectors would still allow memory stacking if software developers could make it work (based on Tom Petersons statements), but now I'm wondering if that is ONLY if you ALSO buy a non-neutered $600 NVLink bridge. Geez I wish these reviewers would stop being so negative.
    Post edited by ebergerly on
  • nicstt Posts: 11,714
    ebergerly said:
    I was kinda holding out a dim glimmer of hope that the RTX NVLink/SLI connectors would still allow memory stacking if software developers could make it work (based on Tom Petersons statements), but now I'm wondering if that is ONLY if you ALSO buy a non-neutered $600 NVLink bridge. Geez I wish these reviewers would stop being so negative.

    In principle I would have no objection to buying the $600 NVLink. I'm not about to even consider spending the cash on Quadros that offer decent RAM.

  • kyoto kid Posts: 40,589
    Aala said:
    kyoto kid said:

    ...that has been my thinking all along.  Most likely it has to do in part with the Quadro drivers.

    Also the price. The Quadro NVLink version is $600.

    ...The ones he had were Volta edition cards, which do require two bridges. The new RTX Quadros will only use a single bridge (which most likely has the two-way link integrated into it) and still offer the pooling advantage. Details/pricing of the new Quadro NVLink bridges have not yet been released, as far as I know.

  • kyoto kid Posts: 40,589
    ebergerly said:
    I was kinda holding out a dim glimmer of hope that the RTX NVLink/SLI connectors would still allow memory stacking if software developers could make it work (based on Tom Petersons statements), but now I'm wondering if that is ONLY if you ALSO buy a non-neutered $600 NVLink bridge. Geez I wish these reviewers would stop being so negative.

    ...the Volta cards shown have a different slot configuration for the dual-link bridges, so those would not work with the RTX 20xx series as they wouldn't fit the connector slots. They also would not work with the RTX Quadros (see link):

    https://www.nvidia.com/en-us/design-visualization/nvlink-bridges/
