OT: New Nvidia Cards to come in RTX and GTX versions?! RTX Titan first whispers.


Comments

  • Seven193 Posts: 1,065

    Will Daz Studio get an NVidia Ray-Tracing preview mode, and will it be faster than the NVidia Iray preview mode?

  • kyoto kid Posts: 40,678
    edited August 2018
    kyoto kid said:

    this service was found to be vulnerable to being compromised.

    Everything is... sadly... but protect yourself as best you can and you'll be fine. Everyone is in the same boat.

    ...I just read about this the other morning in one of the tech journals I subscribe to, which tends to be on top of security matters.

    Post edited by kyoto kid on
  • kyoto kid Posts: 40,678
    edited August 2018
    ...the "lesser" consumer/enthusiast versions should be announced in about a week. I'm still expecting the Turing Titan to be upgraded to 16GB but most likely without NVLink so it doesn't step on the Q5000.
    Post edited by kyoto kid on
  • Here are the video demos from the presentation.  The "one card" reference was to the Star Wars Stormtrooper demo.

    Starting at 3:39 are the other two full length demos.

     

  • Sorel Posts: 1,390
    Eek, definitely waiting for the GTX version.
  • kyoto kid Posts: 40,678

    ...from what I discovered on another site, none of the new Quadro RTX cards will have 640 Tensor cores like the Volta models do.  The 6000 and 8000 will have 576 and the 5000, 384, so I would expect the RTX enthusiast models to have even fewer.

  • ebergerly Posts: 3,255
    edited August 2018

    I love how Mr NVIDIA holds up a GPU to the audience that looks no different from an old GTX-1050. Implied awesomeness.

    I wonder if they even have a Turing or whatever GPU.

    Post edited by ebergerly on
  • Bendinggrass Posts: 1,368

    I am new to this information.

    From what I read here the Tensor cores are superior to the Cuda cores. Is that right?

  • ebergerly Posts: 3,255
    edited August 2018

    I am new to this information.

    From what I read here the Tensor cores are superior to the Cuda cores. Is that right?

    Not superior...different.

    They are designed to excel at different tasks. Tensor cores are really good at certain matrix operations. And if the stuff you're doing needs that, it's awesome. Graphics don't generally need those particular operations. 

    Keep in mind most computing hardware is designed for different markets which have different problems to solve. It's very complicated. 

    BTW, here's an explanation from someone at NVIDIA about what exactly the Tensor cores are designed for:

    "Tensor Cores are programmable matrix-multiply-and-accumulate units . Each Tensor Core provides a 4x4x4 matrix processing array which performs the operation D = A * B + C, where A, B, C and D are 4×4 matrices . "

    So we're talking about VERY specific multiply-and-accumulate operations, and on very small 4x4 matrices (the "4x4x4" describes the shape of the multiply-accumulate, not a 3D array). Some problems can be boiled down to just those specific types of operations, others are completely different. 
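
    To make that concrete, here is a tiny sketch (ordinary C++, written purely to illustrate the arithmetic; it is not the hardware path and not any NVIDIA API) of the D = A * B + C operation each Tensor Core performs on 4x4 matrices:

    ```cpp
    // Illustration only: what one Tensor Core computes in a single operation.
    // Real Tensor Cores do this in hardware (typically FP16 inputs with FP32
    // accumulation) and are reached through libraries such as cuBLAS or the
    // CUDA WMMA intrinsics, not through scalar code like this.
    void mma_4x4(const float A[4][4], const float B[4][4],
                 const float C[4][4], float D[4][4])
    {
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j) {
                float acc = C[i][j];            // start from the accumulator C
                for (int k = 0; k < 4; ++k)
                    acc += A[i][k] * B[k][j];   // the "third 4" in 4x4x4: a 4-term dot product
                D[i][j] = acc;                  // 64 multiply-adds per Tensor Core operation
            }
    }
    ```

    Deep-learning workloads are essentially huge piles of exactly these multiply-accumulates, which is why such a specialized unit pays off there.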

    By the way, a "tensor" is a specific thing in mathematics & physics:

    "Tensors are important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as stresselasticityfluid mechanics, and general relativity. In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field." 

     

     

    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    edited August 2018

    Here's another example of how different the tasks can be depending on the problem being solved:

    Let's say you have a 1920x1080 image, and you want to tone down the red in each pixel. The image is represented by a huge matrix of 1920 x 1080 pixels, and each pixel/element in the matrix is a number. And each number in the matrix might be a 24-bit number (8 bits each for R, G, & B). So if you want to tone down the red in every pixel you need to do the same, simple operation on each of the roughly 2 million pixels. So 2 million times you want to do a simple replacement of the "red" part of the number with zero. In other words, you do a very simple multiply operation on 2 million numbers. In a GPU you can do all of those simple multiplies essentially simultaneously if it has enough cores (CUDA cores, which are grouped into "SM's" or Streaming Multiprocessors). You don't need registers to save results from a bunch of steps in the calculation, you just take a number, multiply by a fixed value, and you have your result. That's what GPU's are good at.
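
    As a rough sketch of what that looks like in practice (CUDA, one thread per pixel; the kernel name and the exact pixel packing are made up for illustration):

    ```cpp
    #include <cstdint>

    // Rough sketch: each GPU thread tones down the red channel of one pixel.
    // Pixels are assumed packed as 0x00RRGGBB in a 32-bit word, matching the
    // "8 bits each for R, G, & B" above; the name scale_red is illustrative.
    __global__ void scale_red(uint32_t* pixels, int count, float factor)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's pixel index
        if (i >= count) return;

        uint32_t p = pixels[i];
        uint32_t r = (p >> 16) & 0xFF;                  // pull out the 8-bit red value
        r = static_cast<uint32_t>(r * factor);          // one simple multiply (factor = 0 removes red entirely)
        pixels[i] = (p & 0xFF00FFFFu) | (r << 16);      // write the modified pixel back
    }

    // Launch with enough threads to cover all ~2 million pixels at once:
    //   scale_red<<<(1920 * 1080 + 255) / 256, 256>>>(d_pixels, 1920 * 1080, 0.5f);
    ```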

     

    Post edited by ebergerly on
  • outrider42 Posts: 3,679
    ebergerly said:

    I am new to this information.

    From what I read here the Tensor cores are superior to the Cuda cores. Is that right?

    Not superior...different.

    They are designed to excel at different tasks. Tensor cores are really good at certain matrix operations. And if the stuff you're doing needs that, it's awesome. Graphics don't generally need those particular operations. 

    Keep in mind most computing hardware is designed for different markets which have different problems to solve. It's very complicated. 

    BTW, here's an explanation from someone at NVIDIA about what exactly the Tensor cores are designed for:

    "Tensor Cores are programmable matrix-multiply-and-accumulate units . Each Tensor Core provides a 4x4x4 matrix processing array which performs the operation D = A * B + C, where A, B, C and D are 4×4 matrices . "

    So we're talking about VERY specific multiplication/addition operations on 3-dimensional matrices. And these are very small 4x4 matrices. Some problems can be boiled down to just those specific types of operations, others are completely different. 

    By the way, a "tensor" is a specific thing in mathematics & physics:

    "Tensors are important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as stresselasticityfluid mechanics, and general relativity. In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field." 

     

     

    The simple answer is "yes", yes they are better. For gamers, they wont be a big deal until a big game ships with RTX in it. But for us, Tensor is a big deal. The things that Tensor does great just so happen to be the things Iray does.

    kyoto kid said:

    ...from what I discovered on another site, none of the new Quadro RTX cards s will have 640 Tensor cores like the Volta models do.  The 6000 and 8000 will have 576 and the 5000, 384, so I would expect the RTX enthusiast models would have even less.

    The 5000 will cost $2300. If the next Titan is $3000, then no toes would be stepped on at all; you get what you pay for. Also, just like with CUDA, you cannot directly compare Tensor cores across generations. Every generation sees an improvement. The 576 Tensor cores in the new 6000 and 8000 are better than the 640 in the Titan V.

  • Seven193 Posts: 1,065
    edited August 2018

    I'm still waiting to hear what Daz is going to do with this new technology.  Are they going to include real-time ray-tracing into their product?

    Post edited by Seven193 on
  • SixDs Posts: 2,384
    edited August 2018

    "So are we slowly moving to Real Time Raytraced games"?

    Yeah, really slowly. I expect we'll see that right after we all receive our flying cars. The horsepower required to do interactive, realtime ray tracing in games is so immense, you would need a render farm to even approach playable frame rates. The average gamer will probably never have a system even remotely capable of realtime raytracing, and it is the average systems that games are designed for.

    "Are they going to include real-time ray-tracing into their product"

    Well, they already do, actually. It's just that "realtime" takes a while. If you are referring to instantaneous rendering, particularly for creating animations, then probably never - see the above.

    Whenever someone, especially marketers, starts talking about realtime raytracing in games and similar, you really need to read the fine print in order to understand what they really mean, which often amounts to fakery.

    Post edited by SixDs on
  • kyoto kid Posts: 40,678
    edited August 2018
    ebergerly said:

    I am new to this information.

    From what I read here the Tensor cores are superior to the Cuda cores. Is that right?

    Not superior...different.

    They are designed to excel at different tasks. Tensor cores are really good at certain matrix operations. And if the stuff you're doing needs that, it's awesome. Graphics don't generally need those particular operations. 

    Keep in mind most computing hardware is designed for different markets which have different problems to solve. It's very complicated. 

    BTW, here's an explanation from someone at NVIDIA about what exactly the Tensor cores are designed for:

    "Tensor Cores are programmable matrix-multiply-and-accumulate units . Each Tensor Core provides a 4x4x4 matrix processing array which performs the operation D = A * B + C, where A, B, C and D are 4×4 matrices . "

    So we're talking about VERY specific multiplication/addition operations on 3-dimensional matrices. And these are very small 4x4 matrices. Some problems can be boiled down to just those specific types of operations, others are completely different. 

    By the way, a "tensor" is a specific thing in mathematics & physics:

    "Tensors are important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as stresselasticityfluid mechanics, and general relativity. In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field." 

     

     

    The simple answer is "yes", yes they are better. For gamers, they wont be a big deal until a big game ships with RTX in it. But for us, Tensor is a big deal. The things that Tensor does great just so happen to be the things Iray does.

    kyoto kid said:

    ...from what I discovered on another site, none of the new Quadro RTX cards s will have 640 Tensor cores like the Volta models do.  The 6000 and 8000 will have 576 and the 5000, 384, so I would expect the RTX enthusiast models would have even less.

    The 5000 will cost $2300. If the next Titan is $3000, then no toes would stepped on at all, you get what you ay for. Also, just like with CUDA, you cannot directly compare Tensor cores across generations. Every generation sees an improvement. The 576 Tensor cores in the new 6000 and 8000 are better than the 640 in the Titan V.

    ...but why then spend an extra $700 for a card that does not have NVLink support and probably still only 12 GB of VRAM?  If anything the Titan may be the one on the way out.

    Post edited by kyoto kid on
  • ebergerly Posts: 3,255
    edited August 2018

     

     

    The simple answer is "yes", yes they are better. For gamers, they wont be a big deal until a big game ships with RTX in it. But for us, Tensor is a big deal. The things that Tensor does great just so happen to be the things Iray does.

     

    Ummm, okay...so can you explain exactly how the tensor core architecture of D=A*B+C operations on 4x4 matrices applies directly to Iray, and how "The things that Tensor does great just so happen to be the things Iray does."? 

    Post edited by ebergerly on
  • drzap Posts: 795
    The big story here for Daz 3D users is the potential for realtime raytracing. This is made possible by Nvidia's RTX cores that are dedicated to this task. Of course this will be featured in Iray by means of Optix, but it is not the sole means. Microsoft has a new API and there is another gateway through Vulkan. This doesn't necessarily mean anything for Daz Studio. First, Daz will need to take advantage of the RTX API in its licensed version of iRay. This seems likely, but not a certainty. Daz Studio iRay doesn't always follow in lockstep with the commercial version. As far as tensor cores are concerned, I researched this a while ago with the release of the Titan V. There is no indication that tensor cores have anything to do with realtime raytracing directly, but they are extremely useful for the training of AI denoising and inferencing features. This is nothing new and has been in action since the Volta. Nvidia seems to be expanding the use of AI in the enabling of VFX and that can be nothing but good news for filmmakers and special effects artists, but really nothing much to do with Daz Studio users. I would not be surprised if the consumer GeForce cards didn't even have tensor, since AI is being aimed at content creators and not those who consume the content. At any rate, it seems that realtime is really here. My favorite renderers have already announced support for the RTX protocol and I look forward to getting my hands on a Quadro or two.
  • ebergerly Posts: 3,255
    Drzap, I think your views on this pretty much line up with my research on the subject. And at the end of the day, all of the "enthusiast" hype and speculation boil down to actual benchmark results with Iray scenes. Until that comes we won't really know the facts.
  • bluejaunte Posts: 1,863

    Can someone explain what makes tensor cores better at realtime raytracing but not normal raytracing as in Iray? Aren't these pretty much the same tasks?

  • outrider42 Posts: 3,679
    drzap said:
    The big story here for Daz 3D users is the potential for realtime raytracing. This is made possible by Nvidia's RTX cores that are dedicated to this task. Of course this will be featured in Iray by means of Optix, but it is not the sole means. Microsoft has a new API and there is another gateway through Vulcan. This doesn't necessarily mean anything for Daz Studio. First, Daz will need to take advantage of the RTX API in it's licensed version of iRay. This seems likely, but not a certainty. Daz Studio iRay doesn't always follow in lockstep with the commercial version. As far as tensor cores are concerned, I researched this awhile ago with the release of the Titan V. There is no indication that tensor cores have anything to do with realtime raytracing directly but they are extremely useful for the training of AI denoising and inferencing features. This is nothing new and has been in action since the Volta. Nvidia seems to be expanding the use of AI in the enabling of VFX and that can be nothing but good news for filmmakers and special effects artists but really nothing much to do with Daz Studio users. I would not be surprised if the consumer GeForce cards didnt even have tensor, since AI is being aimed at content creators and not those who consume the content. At any rate, it seems that realtime is really here. My favorite renderers have already announced support for the RTX protocol and I look forward to getting my hands on a Quadro or two.

    Then explain why Titan V more than doubles the Iray rendering speed over Titan Pascal or 1080ti.
  • starionwolf Posts: 3,667

    I don't think the RTX video card will fit in my computer because it might be too long.  Thanks for sharing the photo.

  • drzap Posts: 795
    edited August 2018
    drzap said:
    The big story here for Daz 3D users is the potential for realtime raytracing. This is made possible by Nvidia's RTX cores that are dedicated to this task. Of course this will be featured in Iray by means of Optix, but it is not the sole means. Microsoft has a new API and there is another gateway through Vulcan. This doesn't necessarily mean anything for Daz Studio. First, Daz will need to take advantage of the RTX API in it's licensed version of iRay. This seems likely, but not a certainty. Daz Studio iRay doesn't always follow in lockstep with the commercial version. As far as tensor cores are concerned, I researched this awhile ago with the release of the Titan V. There is no indication that tensor cores have anything to do with realtime raytracing directly but they are extremely useful for the training of AI denoising and inferencing features. This is nothing new and has been in action since the Volta. Nvidia seems to be expanding the use of AI in the enabling of VFX and that can be nothing but good news for filmmakers and special effects artists but really nothing much to do with Daz Studio users. I would not be surprised if the consumer GeForce cards didnt even have tensor, since AI is being aimed at content creators and not those who consume the content. At any rate, it seems that realtime is really here. My favorite renderers have already announced support for the RTX protocol and I look forward to getting my hands on a Quadro or two.

    Then explain why Titan V more than doubles the Iray rendering speed over Titan Pascal or 1080ti.

    There are many differences between Pascal and Volta, both in software and hardware. To conclude that rendering speed improvements are the result of tensor cores without citing proof isn't wise. Volta has more CUDA cores, a different memory architecture as well as vastly improved data bandwidth, among other things, besides the addition of tensor. I have not seen any rendering benchmarks that have recorded anything close to a 2X performance improvement over the previous generation (perhaps you can provide one). The most common number is about 60%, like that experienced by a very dependable source at Puget Systems (link below), who also states that tensor is unlikely to have any effect on rendering. This is in line with what Nvidia itself states. Maybe you can tell us why you are so convinced otherwise? https://www.pugetsystems.com/blog/2017/12/12/A-quick-look-at-Titan-V-rendering-performance-1083/
    Post edited by drzap on
  • nicstt Posts: 11,715

    Yup, the lower priced Quadro might be the way to go

  • ebergerly Posts: 3,255
    edited August 2018

    Like I’ve said many times before, this stuff is REALLY complicated. Trying to simplify it into a “which is better?” discussion is, IMO, a big mistake.

    My very limited understanding thus far is something like this, to be taken with a big grain of salt:

    Hardware and software work together to solve problems. Good software is designed to take advantage of and work together with the hardware, and vice versa. NVIDIA’s approach with all of this is to design a software platform and hardware platform that work together as efficiently as possible to solve specific problems.

    And that’s their goal with the RTX Platform, which is a new combination of hardware and software designed to work together to solve stuff like ray tracing, physical simulations, AI, etc. Unlike CPU’s, which are designed to be usable for every piece of software on your computer, these NVIDIA platforms narrow all of that down to very specific fields and problems such as graphics and ray tracing and AI and so on.

    This new RTX platform includes a new hardware architecture called Turing. A Turing GPU has multiple hardware components, each of which is designed to excel at certain specific tasks.

    • There’s the RT (Ray Tracing) Core, which is great at ray tracing because it’s designed for the specific tasks needed for ray tracing.
    • There’s the CUDA Core, which apparently now excels at physical simulations (bouncing balls, fluids, etc).
    • There’s the Tensor Core, which does AI and related stuff real well. Apparently this may also include stuff like de-noising of renders, which in my view is kind of a shortcut way of speeding up renders by faking/guessing results rather than going to the expense of actually calculating the ray tracing in detail. Almost like a post-processing feature. It’s NOT designed as a raytracing feature in my understanding, but more like an AI/post-processing feature where you apply artificial intelligence to guess what the answer should be. From other de-noising stuff I’ve seen it seems like it uses existing pass data and scene information (like normals, and depth pass, and so on) to smartly guess at shadows etc. In some cases the results are acceptable, others not so much. But in Blender, for example, it’s generally assumed to be used only for pre-visualization and games, not final professional renders since it’s not quite as clean as an actual ray trace. Though this has probably improved with RTX. (See the toy sketch just after this list for one way pass data can guide that kind of guessing.)
    • And, like all other GPU’s, there’s the Streaming Multiprocessor (SM). With RTX these are now generally assigned tasks involving shading/rasterization.
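
    To make the "use pass data to guess" idea above a bit more concrete, here is a toy sketch in CUDA. To be clear, this is NOT NVIDIA's AI denoiser (that uses a trained neural network, which is where the Tensor Cores come in); it only illustrates how auxiliary passes like normals and depth can steer a post-process so smoothing doesn't bleed across edges. All names here are made up for illustration.

    ```cpp
    #include <cuda_runtime.h>

    // Toy edge-aware smoothing guided by normal and depth passes (illustration only).
    // One thread per pixel; color, normal and depth are the render's output passes.
    __global__ void guided_smooth(const float3* color, const float3* normal,
                                  const float* depth, float3* out,
                                  int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        int c = y * width + x;
        float3 sum = make_float3(0.f, 0.f, 0.f);
        float wsum = 0.f;

        // Average a small neighbourhood, but only trust neighbours whose normal
        // and depth look like the centre pixel's (i.e. the same surface).
        for (int dy = -2; dy <= 2; ++dy) {
            for (int dx = -2; dx <= 2; ++dx) {
                int nx = min(max(x + dx, 0), width - 1);
                int ny = min(max(y + dy, 0), height - 1);
                int n  = ny * width + nx;

                float ndot  = normal[c].x * normal[n].x +
                              normal[c].y * normal[n].y +
                              normal[c].z * normal[n].z;   // ~1 when normals agree
                float ddiff = depth[c] - depth[n];
                float w     = fmaxf(ndot, 0.f) * expf(-ddiff * ddiff * 100.f);

                sum.x += color[n].x * w;
                sum.y += color[n].y * w;
                sum.z += color[n].z * w;
                wsum  += w;
            }
        }
        out[c] = make_float3(sum.x / wsum, sum.y / wsum, sum.z / wsum);
    }
    ```

    A real AI denoiser replaces that hand-written weighting with a network trained on pairs of noisy and clean renders, and running that network is exactly the sort of inference work the Tensor Cores are built for.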

    RTX also includes software API’s (Application Programming Interfaces) and SDK’s (Software Development Kits) to allow software developers to access all of this hardware. These include stuff like Optix for ray tracing, a new version of CUDA and PhysX for physical simulations, NGX for Tensor Cores, and so on.   

    So the goal here is for all of these hardware and software components to work together to separate out the tasks and assign them to the most efficient part of the RTX hardware/software platform.

    So presumably rendering of a 3D scene can be separated into different components based on the particular task. The basic ray tracing might be assigned to the RT (ray tracing) cores, and some shading would be assigned to the SM’s, and maybe if the render includes a physical simulation of bouncing balls or water it would go to the CUDA cores, and maybe if there was some de-noising enabled it would assign some work to the Tensor cores. Or maybe if de-noising is disabled the Tensor Cores would be irrelevant. Who knows?

    At the end of the day I doubt that there are many people who truly (or even partially) understand how those assignments would be made, what parts of the RTX platform would be involved for a given scene, and what impact all of that would have on final render speed.

    Does your scene involve fluid sims? If so, maybe RTX would use its new CUDA Cores and PhysX API to speed it up. Does your render involve some final post-processing? If so, then maybe your render would involve Tensor Cores and the NGX API. If it’s doing ray tracing, then presumably the biggest impact would be the performance of the RT Cores and related API’s.

    Again, this stuff is extremely complicated, and without actual render benchmark data we have no clue what the ultimate performance might be. The developer and API could decide to use components that aren’t really designed for a particular function but will work anyway for whatever reason. Who knows? All we can guess is that obviously it will be faster than previous generations. That’s a no-brainer. How much faster for Studio/Iray? Nobody has a clue.

    And like others have said, another unknown is how/if Studio and Iray will implement any or all of this for our needs.

    It’s not simple. Not by a long shot. 

    And again, I'd caution folks not to fall into the "enthusiast" trap, where there's a lot of enthusiasm, but details and fundamentals are missing. The more you know about this stuff, the more you realize you don't know nothin'. That's certainly the case for me...

    Post edited by ebergerly on
  • kyoto kid Posts: 40,678
    edited August 2018

    I don't think the RTX video card will fit in my computer because it might be too long.  Thanks for sharing the photo.

    ...good point. I have a fairly large case and it barely fits my Titan-X.

    Post edited by kyoto kid on
  • outrider42 Posts: 3,679
    drzap said:
    drzap said:
    The big story here for Daz 3D users is the potential for realtime raytracing. This is made possible by Nvidia's RTX cores that are dedicated to this task. Of course this will be featured in Iray by means of Optix, but it is not the sole means. Microsoft has a new API and there is another gateway through Vulcan. This doesn't necessarily mean anything for Daz Studio. First, Daz will need to take advantage of the RTX API in it's licensed version of iRay. This seems likely, but not a certainty. Daz Studio iRay doesn't always follow in lockstep with the commercial version. As far as tensor cores are concerned, I researched this awhile ago with the release of the Titan V. There is no indication that tensor cores have anything to do with realtime raytracing directly but they are extremely useful for the training of AI denoising and inferencing features. This is nothing new and has been in action since the Volta. Nvidia seems to be expanding the use of AI in the enabling of VFX and that can be nothing but good news for filmmakers and special effects artists but really nothing much to do with Daz Studio users. I would not be surprised if the consumer GeForce cards didnt even have tensor, since AI is being aimed at content creators and not those who consume the content. At any rate, it seems that realtime is really here. My favorite renderers have already announced support for the RTX protocol and I look forward to getting my hands on a Quadro or two.

     

    Then explain why Titan V more than doubles the Iray rendering speed over Titan Pascal or 1080ti.

     

    There are many differences between Pascal and Volta both in software and hardware. To conclude that rendering speed improvements are the result of tensor cores without citing proof isn't wise. Volta has more CUDAs, different memory architecture as well as vastly improved data bandwidth, among other things, besides the addition of tensor. I have not seen any rendering benchmarks that have recorded anything close to 2X performance improvement over the previous generation (perhaps you can provide one). The most common number is about 60% like that experienced by a very dependable source at Puget Systems (link below), who also state that tensor is unlikely to have any affects on rendering. This is in line with what Nvidia itself states. Maybe you can tell us why you are so convinced otherwise? https://www.pugetsystems.com/blog/2017/12/12/A-quick-look-at-Titan-V-rendering-performance-1083/

    If you read the thread you would have found my link to a benchmark on the first page, 12th post. I've posted this bench at least 3 times in other threads on this site, all threads covering the future GPUs as well as the Iray Benchmark thread that sickleyield made. I'm not linking it again. It comes from a group that has been benchmarking Iray and other programs for some time. They have covered numerous Iray SDKs.

    And yes, they actually benchmarked IRAY, unlike Puget in your link. I fail to see where Puget has benchmarked Iray. They didn't even test Octane. Why couldn't they wait? The bench you linked is "a quick look". It's not even a full bench. The last time Puget truly benched Iray was 2016, before the Titan V released, before the 2017 Iray SDK released. Again, my link benched everything on the same 2017 Iray SDK. Their bench was done back in January, meaning they allowed time for proper driver support to kick in.

    Another thing, you mentioned you doubted if Daz would get the SDK for this Iray. Why? Nvidia makes only one SDK. It's not like you get to pick and choose. If Daz Studio wishes to keep pace with growing competition, it would be in their own best interest to have the newest SDK plugin as soon as they can. And Daz will need the SDK in order to support Turing at all. This is a highly anticipated release, so a lot of people are not only waiting on Turing, but for Daz to support Turing.

    Tensor cores are good for AI denoising, and hey look, Daz updated its denoiser with the latest beta. Hmm....

    Also, CEO Huang did say Tensor could be useful for gaming, too. Considering he is the CEO, I would be more apt to take his word on that possibility.

    https://www.dvhardware.net/article68217.html

    Quote:

    "The type of work that you could do with deep learning for video games is growing. And that’s where Tensor Core to take up could be a real advantage. If you take a look at the computational that we have in Tensor Core compare to a non optimized GPU or even a CPU, it's now to plus orders of magnitude on greater competition of throughput. And that allows us to do things like synthesize images in real time and synthesize virtual world and make characters and make faces, bringing a new level of virtual reality and artificial intelligence to the video games."

  • drzap Posts: 795
    edited August 2018
    drzap said:
    drzap said:
    The big story here for Daz 3D users is the potential for realtime raytracing. This is made possible by Nvidia's RTX cores that are dedicated to this task. Of course this will be featured in Iray by means of Optix, but it is not the sole means. Microsoft has a new API and there is another gateway through Vulcan. This doesn't necessarily mean anything for Daz Studio. First, Daz will need to take advantage of the RTX API in it's licensed version of iRay. This seems likely, but not a certainty. Daz Studio iRay doesn't always follow in lockstep with the commercial version. As far as tensor cores are concerned, I researched this awhile ago with the release of the Titan V. There is no indication that tensor cores have anything to do with realtime raytracing directly but they are extremely useful for the training of AI denoising and inferencing features. This is nothing new and has been in action since the Volta. Nvidia seems to be expanding the use of AI in the enabling of VFX and that can be nothing but good news for filmmakers and special effects artists but really nothing much to do with Daz Studio users. I would not be surprised if the consumer GeForce cards didnt even have tensor, since AI is being aimed at content creators and not those who consume the content. At any rate, it seems that realtime is really here. My favorite renderers have already announced support for the RTX protocol and I look forward to getting my hands on a Quadro or two.

     

    Then explain why Titan V more than doubles the Iray rendering speed over Titan Pascal or 1080ti.

     

    There are many differences between Pascal and Volta both in software and hardware. To conclude that rendering speed improvements are the result of tensor cores without citing proof isn't wise. Volta has more CUDAs, different memory architecture as well as vastly improved data bandwidth, among other things, besides the addition of tensor. I have not seen any rendering benchmarks that have recorded anything close to 2X performance improvement over the previous generation (perhaps you can provide one). The most common number is about 60% like that experienced by a very dependable source at Puget Systems (link below), who also state that tensor is unlikely to have any affects on rendering. This is in line with what Nvidia itself states. Maybe you can tell us why you are so convinced otherwise? https://www.pugetsystems.com/blog/2017/12/12/A-quick-look-at-Titan-V-rendering-performance-1083/

    If you read the thread you would have found my link to a benchmark on the first page, 12th post. I've posted this bench at least 3 times in other threads on this site, all threads covering the future GPUs as well as the Iray Benchmark thread that sickleyield made. I'm not linking it again. It comes from a group that has been benchmarking Iray and other programs for some time. They have covered numerous Iray SDKs.

    Also, CEO Huang did say Tensor could be useful for gaming, too. Considering he is the CEO, I would be more apt to take his word on that possibilty.

    https://www.dvhardware.net/article68217.html

    Quote:

    "The type of work that you could do with deep learning for video games is growing. And that’s where Tensor Core to take up could be a real advantage. If you take a look at the computational that we have in Tensor Core compare to a non optimized GPU or even a CPU, it's now to plus orders of magnitude on greater competition of throughput. And that allows us to do things like synthesize images in real time and synthesize virtual world and make characters and make faces, bringing a new level of virtual reality and artificial intelligence to the video games."

    I took note of that benchmark. I disregarded it because it failed to include render times, which is the conventional way to measure rendering performance. Instead it uses a unit I have never heard of nor have seen anywhere else on the internet. How does it compare to render times? Does it scale linearly or does it diminish as the number gets higher? Nobody knows, since there is no explanation. There is also nothing said about tensor contributing to the high score, and since it is the only test that has measured such a big difference between GPUs (and I have seen quite a few benchmarks of Volta on various renderers), I'm going to have to discount it. Puget Systems has a much better reputation that I can rely on. Not saying that this megapaths metric isn't legit. I'm just saying it is unheard of and unexplained, thus not a reliable reference. I'll pass.

    "Another thing, you mentioned you doubted if Daz would get the SDK for this Iray. Why? Nvidia makes only one SDK. Its not like you get to pick and chose..."

    Did I say that? Read it again. I said it "seems likely" that at least some derivative of RTX will end up in Iray. That does not sound like doubt, but Daz has a history of omitting features they feel they don't need in Daz Studio (motion blur!). Nothing is certain.

    "Tensor cores are good for AI denoising, and hey look, Daz updated its denoiser with the latest beta. Hmm...."

    Yeah, I said that, but that has very little to do with raytracing except to reduce the passes needed for an acceptable render, and the benchmark you cite mentions nothing about using the denoiser. More than likely, the reason why Volta performs so well is because it uses "Volta optimized CUDAs" (per Nvidia website) and iRay is optimized to use Volta CUDA. Also note that they ran the benchmark on a Linux workstation using commercial iRay. That's about as far away from Daz Studio as you can get. I don't see how that quote from Huang explains how tensors enable realtime raytracing. Tensors are a great thing. But they are not needed to use the AI denoiser (they are only used in the training process by the developer) and there is nothing that I can see that says they are a vital component of raytracing. They are used for AI purposes, which is exciting in itself, but not directly related to realtime. Nvidia has never been shy about promoting itself. If tensor cores had further value in the 3D rendering community, I have a feeling the news would have been plastered across every 3D site on the internets.
    Post edited by drzap on
  • ebergerly Posts: 3,255
    edited August 2018

    I think the need to focus on Tensor cores, or any other component of these GPU's, as "totally super awesome!!" reflects a basic misunderstanding of how this all works, and what a platform is.

    An example that might help illustrate the concept:

    Imagine you have 10 engineers assigned to design an awesome flying car. Each engineer is a specialist in his area, and has a high performance computer with software and hardware that specializes in each aspect of design. So Engineer 1 has a computer that runs software and has specialized hardware that specializes in designing navigation systems. Engineer 2 has a special computer with software and hardware that specializes in designing propulsion systems. So in total there are 10 engineers, 10 sets of hardware, and 10 specialized sets of software. Total of 30 components working on designing a flying car.

    So when the flying car comes off the production line, does anyone jump up and down and say "WOW, Engineer #7's computer hardware is so awesome because it designed the flying car's electrical system super fast!!"? Probably not. It's only one small component that helped the overall effort, but the real brains were in the engineers' heads and the design software they were using. 

    Keep in mind that the real awesomeness here is arguably the software. Hardware is just dumb little chips on a board. Tensor Cores or RT Cores or CUDA Cores do nothing on their own. It's the software that's doing the magic and solving the render and doing it insanely fast. The hardware is just there to carry out the instructions. So why doesn't everyone jump up and down saying "OMG !!! Those NGX API's are so awesome how they do that de-noising !!!" Or "WOW, that new Optix software is a BEAST the way it does the ray tracing so fast !!!".

    I think the reason is the same reason why Mr. NVIDIA holds up a GPU in front of thousands of people. We like hardware. We relate to hardware. We can touch it and feel it and see it. Software not so much. It's hidden and complicated and we can't touch it. And NVIDIA marketing knows that. And that's why they focus on the hardware, and hype it as much as possible. And for those who love to get excited over hardware it's awesome. 

    Anyway, to some extent Tensor Cores are irrelevant. They're just one part of a very complex group of components, hardware and software, that is all working together to solve (in our case) renders. Nobody knows HOW it all works together, but it does. All we know is how fast and how well it renders our scenes. And we won't know that until we actually try it.   

    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    edited August 2018
    BTW I'm obviously NOT downplaying the importance of the amazing hardware developments that are crucial to allowing these zillions of parallel calculations. But whether those hardware elements happen to be arranged in a way that calculates tensors or ray tracing or physics seems somewhat a minor concern and not a reason for being a fanboy of, say, tensor or cuda cores. I'd tend to be more of a fanboy of the entire RTX platform if anything, since it seems to be focused on solving stuff I care about.
    Post edited by ebergerly on