General GPU/testing discussion from benchmark thread


Comments

  • outrider42 Posts: 3,679

    No, we are not. Daz 4.10 has Iray 2017 and the beta has a version from 2018. This migenius article is the only article I have seen that even references Iray 2019's existence. Iray 2019 has not released yet. Nvidia had promised to release it in May, but here we are in June and no announcement from them yet. Migenius may have a bit of an inside scoop on Iray because of their relationship with Nvidia. Unlike Daz they do not play it so "close to the vest".

  • LenioTG Posts: 2,118

    No, we are not. Daz 4.10 has Iray 2017 and the beta has a version from 2018. This migenius article is the only article I have seen that even references Iray 2019's existence. Iray 2019 has not released yet. Nvidia had promised to release it in May, but here we are in June and no announcement from them yet. Migenius may have a bit of an inside scoop on Iray because of their relationship with Nvidia. Unlike Daz they do not play it so "close to the vest".

    Wow, now I'm even happier to have found that article, I hope that's true!! :D

  • LenioTG said:

    Are we currently using this Iray 2019?

    According to https://www.daz3d.com/forums/discussion/330161/daz-studio-pro-4-11-nvidia-iray#latest

    we are currently using  Iray 2018.1.3, build 312200.3564

    Maybe the next build will have the 2019 version.

     

  • outrider42 Posts: 3,679

    I would not expect any dramatic improvement for non-RTX cards, and the article does not give any idea of how much faster it may be. But an improvement is an improvement. RTX is the real star here.

    But the performance you get does depend on the types of scenes you create. I will say this: because RTX will make more complex scenes faster, it will give users the freedom to start thinking about building such scenes. You don't need to be afraid of them anymore! Though obviously VRAM will always be a factor in just how complicated you can get.

    I still think the 2060 is the best value for this. And watch out for possible price drops. If the rumor is true and the 2070 drops to $400, wow, that would be a truly great deal. Not only is it faster, but it has the bigger 8GB capacity. Of course, the 2060 would drop as well, and IF the 2070 hits $400, then the 2060 may drop to about $300 plus or minus a bit. But we'll have to see. I am not sure the 2070 will drop that much. It may only be a $50 drop to ~$450.

    We will find out during E3. I do not believe Nvidia has an actual stage show planned, but I am not sure. Either way, the timing of their announcement will be around the time that AMD reveals Navi. E3 takes place June 8-13. The PC gaming show takes place on June 10, and that may be when AMD talks about Navi, so Nvidia's announcement may come anytime before or after that. It's all part of the corporate head games, as Nvidia wants to overshadow AMD's show. And of course, we the consumers benefit.

  • RayDAnt Posts: 1,121
    edited June 2019
    RayDAnt said:
    LenioTG said:

    So, I've found this recent article: http://blog.smarttec.com.au/category/maxwell-render/

    Is it reliable?

    It talks of this Iray 2019.1.0: does this support tensor cores?

    Because he has also done many tests (in the middle of the article), and the average improvement in standard scenes is 20-30%.

    Sounds reliable and in line with other sources (raytracing vs shading debate). The typical Daz scene is not overly complex so we might have to accept that we won't get immense improvement with RTX. In my own stuff which is mostly portrait style promos with simple backgrounds this will likely be very evident.

     

    LenioTG said:
    LenioTG said:

    So, I've found this recent article: http://blog.smarttec.com.au/category/maxwell-render/

    Is it reliable?

    It talks of this Iray 2019.1.0: does this support tensor cores?

    Because he has also done many tests (in the middle of the article), and the average improvement in standard scenes is 20-30%.

    Sounds reliable and in line with other sources (raytracing vs shading debate). The typical Daz scene is not overly complex so we might have to accept that we won't get immense improvement with RTX. In my own stuff which is mostly portrait style promos with simple backgrounds this will likely be very evident.

     

    I've not understood! So you're saying that we'll see the most improvements in simple scenes? I do 1-2 characters in a room usually.

    There are (at minimum) two distinct ways of evaluating complexity in a Daz Studio scene:
    1. computational complexity of lights/shaders used in each scene object
    2. quantitative complexity (technically quantity) of scene objects visible
    RTCores will accelerate the former, but not necessarily the latter (only if the latter is a bunch of objects best described by the former.) So if you spend your time rendering scenes with indirectly lit, complexly shaded objects like people in closeup (human skin shading is one of the most computationally complex cg tasks out there) you should see substantial gains with RTCore acceleration. If you spend your time rendering lots of directly lit, simply shaded objects (like opaque clothing or rooms filled with office furniture - ie. the sort of stuff this particular tester's rendering tasks seem to be primarily made up of) RTCore gains should be only very small.

    That sort of contradicts what I had heard before and also what the Maxwell article says.

     In general we have found RTX provides the greatest benefit in very complex scenes with millions of triangles 

    That does not sound like a portrait of a character where the skin shader is the most demanding thing. It sounds like lots of object with lots of light bouncing around and hence lots of raytracing. I guess we will just have to wait and see. Shouldn't be too long now.

    The surface of human skin is translucent, with significant reflective and refractive properties on multiple, distinct sub-surface layers. Hence the invention of things like sub-surface scattering. Any time you have realistically shaded skin surfaces showing in closeup in a scene, you are looking at an example of extremely complex light bouncing going on. Which is what the actual workload of raytracing is all about.

    The quoted article's conclusion that "In general we have found RTX provides the greatest benefit in very complex scenes with millions of triangles" is somewhat misleading, because raytracing - which is to say RTCore acceleration - really has nothing directly to do with how many triangles exist in a rendered scene. Raytracing is solely about bouncing light around a scene. Obviously a scene with lots of distinct surfaces is gonna have a much higher potential for light bouncing going on. You can brute-force a high raytracing workload out of even the most simplistically shaded scene if you load the visual field up with enough things. But even the most simplistically modeled object with a complexly shaded multi-level surface is gonna give your raytracing engine a major workout.
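
    As a rough back-of-the-envelope illustration (made-up numbers, nothing taken from Iray's internals): with a BVH acceleration structure, the cost of finding what a ray hits grows only roughly logarithmically with triangle count, while every extra bounce the materials force multiplies how many rays have to be traced at all.

```python
import math

def rays_traced(pixels, samples_per_pixel, avg_path_length):
    """Total rays cast for one frame: every bounce along a path is another ray."""
    return pixels * samples_per_pixel * avg_path_length

def cost_per_ray(triangle_count):
    """Rough BVH traversal cost: ~log2(triangles) node visits per ray."""
    return math.log2(max(triangle_count, 2))

def relative_trace_cost(pixels, spp, path_len, tris):
    return rays_traced(pixels, spp, path_len) * cost_per_ray(tris)

pixels, spp = 1920 * 1080, 512

# "Millions of triangles, simple materials": 100M triangles, short 2-bounce paths
heavy_geometry = relative_trace_cost(pixels, spp, path_len=2, tris=100_000_000)

# "Simple geometry, skin-like materials": 1M triangles, long 8-bounce paths
heavy_shading = relative_trace_cost(pixels, spp, path_len=8, tris=1_000_000)

print(heavy_shading / heavy_geometry)  # ~3x more traversal work despite 100x fewer triangles
```

    The exact numbers are invented; the point is just that path length (how much bouncing the materials force) scales the raytracing workload linearly, while triangle count only nudges it logarithmically once an acceleration structure is in place.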

    This stuff really all comes down to what specific sorts of scene assets are being fed into a given unbiased renderer for output. Daz Studio, with its focus on the depiction of lifelike humans in realistic environments for thoroughly artistic ends, really is a very special use-case of Iray. And the only way to really judge how that particular sort of workload performs in terms of rendering complexity is by testing with it directly.

    Performance statistics from someone else are only as useful to you in judging what your situation is insofar as the workload(s) behind their numbers are similar to your workloads. And since human figures (which is to say, skin surfaces) are such a staple of typical DS workloads, there really isn't much to be gained from performance studies based on workloads where that sort of thing really isn't very present (like Maxwell Render, whose typical workloads are in the architectural design realm.)

    Post edited by RayDAnt on
  • RayDAnt Posts: 1,121
    edited June 2019

    I will say this, because RTX will make more complex scenes faster, this will give users freedom to start thinking about building such scenes. You don't need to be afraid of them anymore! Though obviously the VRAM will always be a factor in just how complicated you can get.

    Fwiw this was actually one of the main factors that drove me to fork over the money for a Titan RTX back in January. Yeah, even now a 24GB framebuffer is still vastly overkill for pretty much anything other than 8k video editing. But further down the road, once RTX acceleration support gets fully sorted out, the sort of scenes that used to be prohibitively time-consuming (like very high resolution tableaus) are gonna go so much faster that you're gonna want to start dropping really high quality textured stuff in there (the sort of content that you'd only ever render in isolation because of memory usage concerns), because that is the next logical step in expanding your scene design. Which in turn greatly ups the ante for memory consumption.

    When it comes to the whole dilemma of memory capacity (GB counts) versus rendering efficiency (RT/Tensor/CUDA core counts), the analogy that comes to my mind is cargo capacity versus pulling power in motor vehicles. Large cargo capacity with high torque (e.g. a Mack truck/Titan RTX) makes sense. So does small capacity with lower torque (e.g. a sports car/RTX 2060.) But middling cargo capacity with high torque (think an overpowered SUV with no trailer hookup/RTX 2080 Ti)? That dynamic simply doesn't fit.

    Post edited by RayDAnt on
  • bluejaunte Posts: 1,861
    RayDAnt said:
    RayDAnt said:
    LenioTG said:

    So, I've found this recent article: http://blog.smarttec.com.au/category/maxwell-render/

    Is it reliable?

    It talks of this Iray 2019.1.0: does this support tensor cores?

    Because he has also done many tests (in the middle of the article), and the average improvement in standard scenes is 20-30%.

    Sounds reliable and in line with other sources (raytracing vs shading debate). The typical Daz scene is not overly complex so we might have to accept that we won't get immense improvement with RTX. In my own stuff which is mostly portrait style promos with simple backgrounds this will likely be very evident.

     

    LenioTG said:
    LenioTG said:

    So, I've found this recent article: http://blog.smarttec.com.au/category/maxwell-render/

    Is it reliable?

    It talks of this Iray 2019.1.0: does this support tensor cores?

    Because he has also done many tests (in the middle of the article), and the average improvement in standard scenes is 20-30%.

    Sounds reliable and in line with other sources (raytracing vs shading debate). The typical Daz scene is not overly complex so we might have to accept that we won't get immense improvement with RTX. In my own stuff which is mostly portrait style promos with simple backgrounds this will likely be very evident.

     

    I've not understood! So you're saying that we'll see the most improvements in simple scenes? I do 1-2 characters in a room usually.

    There are (at minimum) two distinct ways of evaluating complexity in a Daz Studio scene:
    1. computational complexity of lights/shaders used in each scene object
    2. quantitative complexity (technically quantity) of scene objects visible
    RTCores will accelerate the former, but not necessarily the latter (only if the latter is a bunch of objects best described by the former.) So if you spend your time rendering scenes with indirectly lit, complexly shaded objects like people in closeup (human skin shading is one of the most computationally complex cg tasks out there) you should see substantial gains with RTCore acceleration. If you spend your time rendering lots of directly lit, simply shaded objects (like opaque clothing or rooms filled with office furniture - ie. the sort of stuff this particular tester's rendering tasks seem to be primarily made up of) RTCore gains should be only very small.

    That sort of contradicts what I had heard before and also what the Maxwell article says.

     In general we have found RTX provides the greatest benefit in very complex scenes with millions of triangles 

    That does not sound like a portrait of a character where the skin shader is the most demanding thing. It sounds like lots of object with lots of light bouncing around and hence lots of raytracing. I guess we will just have to wait and see. Shouldn't be too long now.

    The surface of human skin is translucent with significant reflective and refractive properties on multiple, distinct sub-surface layers. Hence the invention of things like sub-surface shading. Any time you have realistically shaded skin surfaces in closeup showing in a scene, you are looking at an example of extremely complex light bouncing processes going on. Which is what the actual workload of raytracing is all about.

    I would think so too. But it's a skin shader, right? So how much of that SSS is actual raytracing, and how much is shading? We say this shader is complex, and it was said that shading can take a considerable amount of time in a frame. Even if raytracing were 100x faster on RT cores, that cannot make rendering 100x faster, because the shading may take something like 80% of the time, depending on the scene. So you end up with 20% of the frame pretty much nullified by RT, but the other 80% remains.

    Is SSS actually a simple shader that just causes rays to bounce around in the skin? If so, we could indeed see great improvement with RTX. But what would be an example of a complex shader then?
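
    That reasoning is just Amdahl's law applied to a frame. A quick sketch (with made-up splits, only to illustrate the ceiling):

```python
def frame_speedup(rt_fraction, rt_speedup):
    """Amdahl's law: only the raytraced fraction of the frame time gets faster."""
    return 1.0 / ((1.0 - rt_fraction) + rt_fraction / rt_speedup)

# Shading-heavy frame: 20% raytracing, 80% shading
print(frame_speedup(0.20, 100))  # ~1.25x overall, even with 100x faster raytracing
# Raytracing-heavy frame: 80% raytracing, 20% shading
print(frame_speedup(0.80, 100))  # ~4.8x overall
```

    So if the 80/20 split really holds, even infinitely fast RT cores top out at around 1.25x for that kind of frame; the open question is what the split actually looks like for a typical skin-heavy Daz scene.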

     

  • RayDAnt Posts: 1,121
    edited June 2019
    RayDAnt said:
    RayDAnt said:
    LenioTG said:

    So, I've found this recent article: http://blog.smarttec.com.au/category/maxwell-render/

    Is it reliable?

    It talks of this Iray 2019.1.0: does this support tensor cores?

    Because he has also done many tests (in the middle of the article), and the average improvement in standard scenes is 20-30%.

    Sounds reliable and in line with other sources (raytracing vs shading debate). The typical Daz scene is not overly complex so we might have to accept that we won't get immense improvement with RTX. In my own stuff which is mostly portrait style promos with simple backgrounds this will likely be very evident.

     

    LenioTG said:
    LenioTG said:

    So, I've found this recent article: http://blog.smarttec.com.au/category/maxwell-render/

    Is it reliable?

    It talks of this Iray 2019.1.0: does this support tensor cores?

    Because he has also done many tests (in the middle of the article), and the average improvement in standard scenes is 20-30%.

    Sounds reliable and in line with other sources (raytracing vs shading debate). The typical Daz scene is not overly complex so we might have to accept that we won't get immense improvement with RTX. In my own stuff which is mostly portrait style promos with simple backgrounds this will likely be very evident.

     

    I've not understood! So you're saying that we'll see the most improvements in simple scenes? I do 1-2 characters in a room usually.

    There are (at minimum) two distinct ways of evaluating complexity in a Daz Studio scene:
    1. computational complexity of lights/shaders used in each scene object
    2. quantitative complexity (technically quantity) of scene objects visible
    RTCores will accelerate the former, but not necessarily the latter (only if the latter is a bunch of objects best described by the former.) So if you spend your time rendering scenes with indirectly lit, complexly shaded objects like people in closeup (human skin shading is one of the most computationally complex cg tasks out there) you should see substantial gains with RTCore acceleration. If you spend your time rendering lots of directly lit, simply shaded objects (like opaque clothing or rooms filled with office furniture - ie. the sort of stuff this particular tester's rendering tasks seem to be primarily made up of) RTCore gains should be only very small.

    That sort of contradicts what I had heard before and also what the Maxwell article says.

     In general we have found RTX provides the greatest benefit in very complex scenes with millions of triangles 

    That does not sound like a portrait of a character where the skin shader is the most demanding thing. It sounds like lots of object with lots of light bouncing around and hence lots of raytracing. I guess we will just have to wait and see. Shouldn't be too long now.

    The surface of human skin is translucent with significant reflective and refractive properties on multiple, distinct sub-surface layers. Hence the invention of things like sub-surface shading. Any time you have realistically shaded skin surfaces in closeup showing in a scene, you are looking at an example of extremely complex light bouncing processes going on. Which is what the actual workload of raytracing is all about.

    I would think so too. But it's a skin shader, right? So how much of that SSS is actual raytracing,

    Translucency/transparency is a (literal) matter of color shading, so that would logically fall under the traditional definition of a shading workload. However visual reflection and refraction are scenarios where light beams are being bent at different angles based on a set of environmental criteria - and they are subject to endless feedback to boot. Which would put them squarely in raytracing territory.

    When people talk about "simple" versus "complex" shaders/shading, what they're really talking about is how many of the elemental properties of light (e.g. diffusion, reflection and refraction) the shader is itself attempting to simulate in connection with the various textures and surface shapes associated with it. The simplest form of shader is just dealing with relative levels of darkness - i.e. reflectivity. And calculating straightforward reflections is the least computationally complex raytracing task there is. So "simple" shaders are never gonna see much of a performance boost from raytracing acceleration. However, as you get into more "complex" shaders where more convoluted properties of light like diffusion and refraction are also being explicitly simulated, the raytracing workload is going to increase exponentially. Meaning that raytracing acceleration is gonna make a much bigger difference.

    Keep in mind that shaders and "shading" - although different forms of the same word - refer to distinctly different things. A shader (short for "shader program") is a catch-all term for computer programs designed to perform some sort of specialized function in a graphics rendering pipeline. Whereas "shading" is a term for the age-old method of simulating depth perception in a 3d environment via calculating different levels of darkness. Shader programs originally got their name because, in early computer graphics, performing shading operations was their primary function. But in modern graphics pipelines shading is only one of many things these programs can do. And raytracing is just another tool in the arsenal where shader programs are concerned.
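
    To make the "simple versus complex shader" distinction concrete, here is a toy sketch (nothing like Iray's actual MDL materials, purely an illustration): a diffuse-only material ends a path quickly, while a material that also reflects and refracts spawns extra secondary rays at every hit, so the raytracing work grows with each light property the shader simulates.

```python
def secondary_rays(material, depth=0, max_depth=4):
    """Count the rays one camera ray can spawn under a toy material model.

    `material` is the set of light properties the shader simulates,
    e.g. {"diffuse"} or {"diffuse", "reflect", "refract"}.
    """
    if depth >= max_depth:
        return 0
    rays = 0
    if "diffuse" in material:
        rays += 1 + secondary_rays(material, depth + 1, max_depth)  # bounced GI ray
    if "reflect" in material:
        rays += 1 + secondary_rays(material, depth + 1, max_depth)  # mirror ray
    if "refract" in material:
        rays += 1 + secondary_rays(material, depth + 1, max_depth)  # transmitted ray
    return rays

print(secondary_rays({"diffuse"}))                        # 4 rays
print(secondary_rays({"diffuse", "reflect", "refract"}))  # 120 rays
```

    That blow-up in rays per hit is the part RT cores can chew through in hardware; the per-hit shading math itself still runs on the ordinary CUDA cores.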

     

    Post edited by RayDAnt on
  • bluejaunte Posts: 1,861

    Hmm, if you read this

    https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray

     In general, scenes with simple shaders and lots of geometry spend more time on ray casting and less time on shading and will benefit the most from the RT Cores. In contrast, scenes with complicated shading networks and lots of procedural textures, but relatively simple geometry, may see a smaller speed boost.

    So we won't have procedural textures, but what do they mean by "complex shading networks"?

  • outrider42 Posts: 3,679

    I'm guessing that is when you have multiple shaders acting upon an object. Like say, for glass, perhaps the glass is dirty, so you have shaders for dirt on top of the glass, and maybe there is a liquid inside the glass.

    It is possible that a number of Daz skins would be considered as such. You can have dual lobe gloss working with SSS with translucency, etc.

    It's also worth pointing out that Daz models can get pretty complicated with subdivision. Going back to video games, I showed you guys Quake 2 running with RTX. This is a game from 20 years ago. The models and textures are clearly quite simple. In one of the videos that discussed making RTX work in Quake 2, the discussion turned to how the simple geometry of Quake 2, compared to modern games, was a key factor in its performance when fully ray traced. Keep in mind that the modern games that have featured ray tracing only use one or two aspects of ray tracing; they are not fully ray traced. Quake 2 RTX is FULLY ray traced.

    So geometry can play a big role. The question is: what do they consider a high amount of geometry?

    BTW, Migenius has released their RealityServer 5.3, which now has full RTX support. This was on May 29, the day after the RTX rundown that LenioTG found. So Migenius is the first user of Iray to actually update to the new Iray 2019. https://www.migenius.com/articles/new-in-realityserver-5-3

    Since Migenius has several benchmarks already, I expect them to release a new bench for 2019 that lists a bunch of GPUs. Look for this benchmark, because this will very likely be the first place you are going to see any Iray RTX benchmarks. I expect they will have this bench done before Daz even updates Iray, LOL.

    So there is also another major piece of news then. We have confirmation that Iray 2019 is finished. I find it very strange that Nvidia has not posted this. It is also possible that Migenius has access to Iray 2019 before its official release. That's also why in the past I talked about keeping an eye on Migenius.

    At any rate, if Migenius has Iray 2019 in their hands, that means Daz is getting it soon... they may already have access to the new SDK. So the countdown to RTX starts now! I'm holding to my July-August timeline prediction. Time to jump on the hype train, people! WOOOOOOOOOO!

    If anybody wants to contact Migenius with some questions, or maybe you want to rent an Iray server from them, they have offices in Australia, Tokyo, and London, or you can use this page.

    https://www.migenius.com/contact

     

  • LenioTG Posts: 2,118
    LenioTG said:

    Are we currently using this Iray 2019?

    According to https://www.daz3d.com/forums/discussion/330161/daz-studio-pro-4-11-nvidia-iray#latest

    we are currently using  Iray 2018.1.3, build 312200.3564

    Maybe the next build will have the 2019 version.

    Thank you, let's hope this build will come soon! :D

    I would not expect any dramatic improvement for non-RTX, as it does not give any idea of how much faster it may be. But an improvement is an improvement. RTX is the real star here.

    Yes, now I care about RTX, because I'm gonna buy that 2060 soon! :D

    But it does depend on the types of scenes you create as to what that performance will be. I will say this, because RTX will make more complex scenes faster, this will give users freedom to start thinking about building such scenes. You don't need to be afraid of them anymore! Though obviously the VRAM will always be a factor in just how complicated you can get.

    This is the kind of scenes I usually make: https://www.daz3d.com/gallery/users/2444471

    Of course, I have no problems in making more complex scenes now. I've always had 3Gb of VRAM...without integrated graphics in my CPU.

    I still think the 2060 is the best value for this. And watch out for possible price drops. If the rumor is true and the 2070 drops to $400, wow, that would be a truly great deal. Not only is it faster, but it has the bigger 8GB capacity. Of course, the 2060 would drop as well, and IF the 2070 hits $400, then the 2060 may drop to about $300 plus or minus a bit. But we'll have to see. I am not sure the 2070 will drop that much. It may only be a $50 drop to ~$450.

    We will find out during E3. I do not believe Nvidia has an actual stage show planned, but I am not sure. Either way, the timing of their announcement will be around the time that AMD reveals Navi. E3 takes place June 8-13. The PC gaming show takes place on June 10, that may be when AMD talks about Navi, so Nvidia's announcement may come anytime before or after that. Its all part of the corporate head games as Nvidia wants to overshadow AMD's show. And of course, we the consumers benefit.

    In Italy prices are a little bit higher, but we're starting to see the 2070 at 450€ and the 2060 at 320€!

    With all that political stuff, the Euro should go to around $1.20 in the upcoming months.

    RayDAnt said:

    I will say this, because RTX will make more complex scenes faster, this will give users freedom to start thinking about building such scenes. You don't need to be afraid of them anymore! Though obviously the VRAM will always be a factor in just how complicated you can get.

    Fwiw this was actually one of the main factors that drove me to fork up the money for a Titan RTX back in January. Yeah, even now a 24GB framebuffer is still vastly overkill for pretty much anything other than 8k video editing. But further down the road, once RTX acceleration support gets fully sorted out, the sort of scenes that used to be prohibitively time-consuming (like very high resolution tableaus) are gonna go so much faster that you're gonna want to start dropping really high quality textured stuff in there (the sort of content that you'd only ever render in isolation for memory usage concerns)  because that is the next logical step in expanding your scene design. Which in turn greatly ups the ante for memory consumption. 

    When it comes to the whole dilemma of memory capacity (GB counts) versus rendering efficiency (RT/Tensor/Cuda core counts) the analogy that comes to my mind is cargo capacity versus pulling power in motor vehicles. Large cargo capacity with high torque (eg. a Mac truck/Titan RTX) makes sense. So does small capacity with lower torque (eg. a sportscar/RTX 2060.) But midling cargo capacity with high torque (think an overpowered SUV with no trailer hookup/RTX 2080ti)? That dynamic simply doesn't fit.

    Wow, what a GPU! :O

    Well, I don't know much about GPUs, but I definitely know less about cars...so I'll just ignore the last part xD

    I could have decided to wait for a 2070, but having such a costly possession would make me anxious, and it seems that with these Daz sales it's just impossible to save so much money.

    So there is also another major piece of news then. We have confirmation that Iray 2019 is finished. I find it very strange that Nvidia has not posted this. It is also possible that Migenius has access to Iray 2019 before its official release. That's also why in the past I talked about keeping an eye on Migenius.

    At any rate, if Migenius has Iray 2019 in their hands, that means Daz is getting it soon... they may already have access to it to the new SDK. So the countdown to RTX starts now! I'm holding to my July-August timeline prediction. Time to jump on the hype train, people! WOOOOOOOOOO!

    I hope he's telling the truth then, I don't know him! :D

    Indeed in his examples there are no humans, but just plain objects with simple lights, so maybe we'll have a more noticeable improvement in actual scenes!

     

  • fred9803 Posts: 1,562

    This is all very exciting, no sarcasm intended. Although I'm completely satisfied with the 2080 I bought last Christmas, the expectation of it doing more wonderful things with RT cores and Iray is killing me. Maybe tomorrow, I keep telling myself.

  • RayDAnt Posts: 1,121
    edited June 2019

    The only thing RTX and RTX Ti lack compared to Titan V is number of CUDA cores

    It's not so much the quantity of CUDA cores but the type that sets the Titan V apart from all its RTX counterparts. Turing cards give up a massive amount of the Volta architecture's native FLOAT64 computing performance to make physical space for their purpose-built RTCores. Which is great for computing applications that need raytracing, but not so great for many other (mostly academic research oriented computing tasks, I grant you) situations. In terms of universally applicable, higher mathematical order computational power, Titan V's are indeed far superior to all RTX Turing cards.

     

    As for Iray 2019 with RTX support I told you already that it was finished and available back in March but you wouldn't believe me.

    As of this morning (June 5, 2019) Iray with full RTX RTCore support has yet to make it past the prerelease beta stage. Partial RTX support in Iray has already been present in production level versions of Iray since before Turing's official release last fall.

    Post edited by RayDAnt on
  • LenioTG Posts: 2,118
    RayDAnt said:
    As of this morning (June 5, 2019) Iray with full RTX RTCore support has yet to make it past the prerelease beta stage. Partial RTX support in Iray has already been present in production level versions of Iray since before Turing's official release last fall.

    Oh no...so we have to continue waiting? surprise

    Why is this taking so long? It has been 8.5 months already crying

  • Robinson Posts: 751
    LenioTG said:

    Oh no...so we have to continue waiting? surprise

    Why is this taking so long? It has been 8,5 months already crying

    Unfortunately yes. These renderers are quite complex (it's probably a rewrite rather than tweaks in shaders here and there). Solidworks showed a preview of the Iray renderer with RT support back in April though, so betas are out there.

  • outrider42 Posts: 3,679
    edited June 2019

    This tells us something about the pure computational power of the Titan V, and I mentioned before that the RTX actually is lacking some of the features that the Titan V has.

    The only thing RTX and RTX Ti lack compared to Titan V is number of CUDA cores because only Titan has the all SMs enabled, but it is otherwise technologically inferior to Turing and RTX.

     Its too bad there is no such thing as a "Titan V RTX".

    You are terribly out of date on your facts:
    https://www.nvidia.com/en-us/titan/titan-rtx/

    As for Iray 2019 with RTX support I told you already that it was finished and available back in March but you wouldn't believe me.

     

    You need to do your research. ALL RTX cards have seriously gimped the FP64. The Titan V has over THIRTEEN TIMES the FP64 compute capability of Titan RTX. Yes, thirteen (13).

    Additionally, the Migenius benchmark shows the Titan V not only beating the 2080ti, but totally destroying the 2080ti. The Titan V even tops the Quadro GV100, the 48GB monster that originally cost $10,000 (which now can be bought at the discount price of just $5,500).

    The Titan V achieves a staggering 12.07 Megapaths/second. The 2080ti only manages 8.97. This is a massive difference by any measure.

    While the Titan RTX is not on the Migenius benchmark list, we already know how it performs thanks to the few people who own it here. And the times are basically the same in our own benchmarks. If the Titan RTX is any faster than the 2080ti, it is only slightly faster. So its performance should still fall around 9 Megapaths/second, which is drastically less than 12.

    So...if the Titan RTX has everything the Titan V has, then why can't it match the 12 Megapaths/second benchmark of the Titan V??? Answer this question.

    If you consider what Migenius wrote about RT core acceleration, that the speed gain may be anywhere from just 5% to double, then it stands to reason that the Titan V will actually beat the 2080ti and Titan RTX even after RTX gets enabled for most scenes. Only the most geometrically demanding scenes might allow the Titan RTX to best the Titan V in rendering speed. The Titan V is an Iray beast, pure and simple.
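
    Using the Migenius numbers above, the break-even point is easy to work out (a rough calculation, assuming the benchmark scales linearly with Megapaths/second and that any RT core gain applies on top of the 2080ti's current score):

```python
titan_v    = 12.07  # Megapaths/second, Migenius benchmark (pre-RTX Iray)
rtx_2080ti = 8.97

# RT core uplift the 2080ti would need just to tie the Titan V
print(titan_v / rtx_2080ti)               # ~1.35, i.e. a ~35% speedup

# Migenius' quoted range for RT core gains, roughly 5% to double
print(rtx_2080ti * 1.05, rtx_2080ti * 2)  # ~9.4 to ~17.9 Megapaths/second
```

    So on scenes where RT cores only add a few percent, the Titan V stays comfortably ahead, while a scene that really leans on raytracing could push the 2080ti past it; which side of that ~35% line a typical Daz scene lands on is exactly what we do not know yet.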

    So no, there is no such thing as the Titan V RTX. Notice the "V"? Just think of the beast that the Titan VOLTA would be if it had RT cores.

    The Titan V has just 12GB of memory, but wait, there is the special "CEO Edition" of the Titan V that has 32GB of VRAM. Yes...that's even more than the Titan RTX. Granted this special card is super rare and cannot be purchased from Nvidia. Like only 20 of these cards exist in the world. So good luck finding one, though one is listed on ebay...for $12,000. Hey, it is signed by the CEO of Nvidia himself, too, LOL.

    Obviously the correct answer is that the Titan RTX is indeed missing something that the Titan Volta has. You can argue all you want, but unless you can bring up a benchmark for Iray that disproves this, your claims have no value. Meanwhile, I do have a benchmark that shows there is a massive difference between these cards that favors the Titan V.

    And the same goes for Iray 2019. If you can provide documentation from Nvidia that shows that Iray 2019 is complete and the plugin is shipping, OK then. But just saying it was done in March has no value. There is no source for this. Nvidia will always release documentation for Iray and other products when they release them. I have been looking and I have not found anything.

    I know I sometimes say some things with little evidence behind them, but those are predictions and speculation, and I usually have some reasoning behind it. But these two points about the Titan V and Iray 2019 are not speculation. They are facts.

    Post edited by outrider42 on
  • LenioTG Posts: 2,118
    Robinson said:
    LenioTG said:

    Oh no...so we have to continue waiting? surprise

    Why is this taking so long? It has been 8,5 months already crying

    Unfortunately yes.  These renderers are quite complex (it's probably a rewrite rather than tweaks in shaders here and there).  Solidworks showed a preview of iRay renderer with RT support back in April though, so betas are out there.

    I didn't know it was that hard!

    I wish they had done it in advance, since it's their product, and we know it doesn't look good to pay for a technology you can't use (who cares about Battlefield 5), but I guess they had other things on their minds!

    Thanks for the explanation :D

  • Illidanstorm Posts: 655

    Can someone give me an answer whether Daz 3D's Iray is using those Tensor Cores now or not?

  • Can someone give me an answer whether Daz 3D's Iray is using those Tensor Cores now or not?

    I thought Tensor cores were used by the denoiser?

  • LenioTG Posts: 2,118

    I have finally got an RTX 2060!! :D

    Nice GPU, I have to say!

    It produces twice the iterations per minute of my old GTX 1060 3GB.

    I'm also amazed by its GDDR6 memory: it starts renders very fast!!

  • outrider42 Posts: 3,679
    edited June 2019
    LenioTG said:
    So, I finally got my RTX 2060!!!! laughlaugh

    It's crazy fast in starting renders, I guess that's because of the GDDR6 memory.

    It does double the iterations per minute compared to my previous 1060 3Gb.

     

    I'm happy for you! You did your research and made the best choice for you. That's a nice upgrade, and double the vram over what you had. If you don't mind, could you post a benchmark to see how it compares if you have time?

    Post edited by Richard Haseltine on
  • outrider42 Posts: 3,679

    Can someone give me an answer whether Daz 3D's Iray is using those Tensor Cores now or not?

    The latest beta is supposed to be using Tensor cores for denoising. Check the beta patch notes.
  • LenioTG Posts: 2,118
    edited June 2019
    LenioTG said:
    So, I finally got my RTX 2060!!!! laughlaugh

    It's crazy fast in starting renders, I guess that's because of the GDDR6 memory.

    It does double the iterations per minute compared to my previous 1060 3Gb.

     

    I'm happy for you! You did your research and made the best choice for you. That's a nice upgrade, and double the vram over what you had. If you don't mind, could you post a benchmark to see how it compares if you have time?

    Yes, I'll do it! :)

    Now I'm trying out my workflow with the new GPU: I was really limited by the old one!

    Can't imagine what a system with 4x 2080 Ti would be like xD

    Post edited by Richard Haseltine on
  • ArtAngel Posts: 1,514

    I have a recently released 

    This tells us something about the pure computational power of the Titan V, and I mentioned before that the RTX actually is lacking some of the features that the Titan V has.

    The only thing RTX and RTX Ti lack compared to Titan V is number of CUDA cores because only Titan has the all SMs enabled, but it is otherwise technologically inferior to Turing and RTX.

     Its too bad there is no such thing as a "Titan V RTX".

    You are terribly out of date on your facts:
    https://www.nvidia.com/en-us/titan/titan-rtx/

    As for Iray 2019 with RTX support I told you already that it was finished and available back in March but you wouldn't believe me.

     

    You need to do your research. ALL RTX cards have seriously gimped the FP64. The Titan V has over THIRTEEN TIMES the FP64 compute capability of Titan RTX. Yes, thirteen (13).

    Additionally, the Migenius benchmark shows the Titan V not only beating the 2080ti, but totally destroying the 2080ti. The Titan V even tops the Quadro GV100, the 48GB monster that originally cost $10,000 (which now can be bought at the discount price of just $5,500).

    The Titan V achieves a staggering 12.07 Megapaths/second. The 2080ti only manages 8.97. This is a massive difference by any measure.

    While the Titan RTX is not on the Migenius benchmark list, we already know how it performs thanks to the few people who own it here. And the times are basically the same in our own benchmarks. If the Titan RTX is any faster than the 2080ti, it is only slightly faster. So its performance should still fall around 9 Megapaths/second, which is drastically less than 12.

    So...if the Titan RTX has everything the Titan V has, then why can't it match the 12 Megapaths/second benchmark of the Titan V??? Answer this question.

    If you consider what Migenius wrote about RT core acceleration, that the speed gain may be anywhere from just 5% to double, then it stands to reason that the Titan V will actually beat the 2080ti and Titan RTX even after RTX gets enabled for most scenes. Only the most geometrically demanding scenes might allow the Titan RTX to best the Titan V in rendering speed. The Titan V is an Iray beast, pure and simple.

    So no, there is no such thing as the Titan V RTX. Notice the "V"? Just think of the beast that the Titan VOLTA would be if it had RT cores.

    The Titan V has just 12GB of memory, but wait, there is the special "CEO Edition" of the Titan V that has 32GB of VRAM. Yes...that's even more than the Titan RTX. Granted this special card is super rare and cannot be purchased from Nvidia. Like only 20 of these cards exist in the world. So good luck finding one, though one is listed on ebay...for $12,000. Hey, it is signed by the CEO of Nvidia himself, too, LOL.

    Obviously the correct answer is that it is that Titan RTX is indeed missing something that the Titan Volta has. You can argue all you want, but unless you can bring up a benchmark for Iray that disproves this, your claims have no value. Meanwhile, I do have a benchmark that shows there is a massive difference between these cards that favors the Titan V.

    And the same goes for Iray 2019. If you can provide documentation from Nvidia that shows that Iray 2019 is complete and the plugin is shipping, OK then. But just saying it was done in March has no value. There is no source for this. Nvidia will always release documentation for Iray and other products when they release them. I have been looking and I have not found anything.

    I know I sometimes say some things with little evidence behind them, but those are predictions and speculation, and I usually have some reasoning behind it. But these two points about the Titan V and Iray 2019 are not speculation. They are facts.

    Here is a benchmark for 2080ti's.

    Different 2nd Benchmark.PNG
  • fred9803 Posts: 1,562
    LenioTG said:

    I have finally got a RTX 2060!! :D

    Nice GPU, I have to say!

    It creates twice the iterations per minute of my old GTX 1060 3Gb.

    I'm also amazed by its GDDR6 memory: it starts the renders very fastly!!

    I went from a 960 to a 2080 and it is fantastic to get that extra grunt to kick DS in the pants. What used to take over an hour now takes only 10-15 minutes.

    I'm pleased you're happy with your GPU upgrade. I certainly am.

  • LenioTG Posts: 2,118
    fred9803 said:
    LenioTG said:

    I have finally got a RTX 2060!! :D

    Nice GPU, I have to say!

    It creates twice the iterations per minute of my old GTX 1060 3Gb.

    I'm also amazed by its GDDR6 memory: it starts the renders very fastly!!

    I went from a 960 to a 2080 and it is fantastic to get that extra grunt to kick DS in the pants.What used to take over an hour now takes only 10-15 minutes.

    I pleased you're happy with your GPU upgrade. I certainly am.

    I would have too if I had the possibility, nicely done! :D

    Yes, I'm experiencing something similar as well! Now I basically render a scene right away, instead of closing, reloading, render batching etc.

  • fred9803 Posts: 1,562
    LenioTG said:

    I would have too if I had the possibility, nicely done! :D

    Yes, I'm experiencing something similar as well! Now I basically render a scene right away, instead of closing, reloading, render batching etc.

    Not forgetting that those of us with RT cores are also promised a happy-clapping party when Iray and DS get their act together to use those currently useless cores for Iray rendering.

  • LenioTG Posts: 2,118
    fred9803 said:
    LenioTG said:

    I would have too if I had the possibility, nicely done! :D

    Yes, I'm experiencing something similar as well! Now I basically render a scene right away, instead of closing, reloading, render batching etc.

    Not forgeting that us with RT cores are also promised a happy-clapping party when Iray and DS gets their act together to use those now useless cores with Iray rendering.

    Indeed!! :D

    I feel like the support is not coming very soon though

  • bluejaunte Posts: 1,861

    So my 2080 Ti is pretty much dying. It has gotten progressively worse, to the point where I had to underclock it to even start a game. Now it has started to hang even just in Windows, or "resetting" itself to a lower clock right after reboot. Guess I'm gonna have to go through warranty procedures.

This discussion has been closed.