OT: New Nvidia Cards to come in RTX and GTX versions?! RTX Titan first whispers.


Comments

  • hphoenix Posts: 1,335
    drzap said:
    drzap said:
    drzap said:
     

    I took note of that benchmark. I disregarded it because it failed to include render times, which is the conventional way to measure rendering performance. Instead it uses a unit I have never heard of nor have seen anywhere else on the internet.

    FLOPS may be on the internet, IPS won't be.  It's a bit niche.  (The "T" on the front is just the Tera- prefix, or Trillions)

    FLOPS are FLoating-point Operations Per Second.  So 1 TFLOPS is 1,000,000,000,000 floating point operations each second.

    IPS are Intersections Per Second.  It's a raytracing performance indicator.  Raytracing performance is primarily bound by the code which calculates ray-object intersections.  Every camera ray (and every spawned ray for refraction and/or reflection from a recursion) has to be checked for intersection with EVERY OBJECT IN THE SCENE.  So the number of Ray-Object Intersection tests is very large (and for polygonal models, you have to test against EACH polygon....implicit objects it's a single test.)

    Now realize that in pure ray-tracing, one ray is shot through EVERY PIXEL in the image.  When you do supersampling to anti-alias, you also shoot MULTIPLE rays per pixel.  So in a single 4k image, 3840x2160 = 8,294,400 pixels.  So that's over 8 million primary rays.  Simple supersampling uses one extra for each pixel, so about 16.5 million.  Now if ANY rays hit a reflective or refractive (or both) surface, NEW rays will be spawned to trace.  So to render SIMPLE raytracing with no reflection/refraction in it, at 4k resolution, at 30fps, with simple antialiasing, is about 497.664 MILLION intersections per second.....or about 0.5 GIPS.
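
    To sanity-check those numbers, here's a quick Python sketch (nothing measured, just the arithmetic above):

        pixels = 3840 * 2160                   # 8,294,400 pixels in one 4K frame
        rays_per_frame = pixels * 2            # simple supersampling: one extra ray per pixel
        rays_per_second = rays_per_frame * 30  # 30 frames per second
        print(rays_per_second)                 # 497,664,000 -> ~0.5 GIPS at one intersection test per ray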

    So when your Ray-Tracing Hardware is rated in TIPS, that's Trillions of Intersections Per Second.  

    Now, ray-tracing is not a flat parallel process.  Recursion is required to handle reflection/refraction, so it's more of a multi-pass problem, where each pass involves fewer operations than before.  Raytrace all the primary rays.  Any that strike reflective/refractive surfaces are then queued for tracing, and any of THOSE that strike reflective/refractive surfaces will get queued for tracing, etc., until all rays have struck non-reflective and non-refractive surfaces, have gone outside the world-space, or you hit a ray-depth recursion limit.
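
    A minimal Python sketch of that multi-pass idea (the scene/intersect/reflect/refract helpers are hypothetical placeholders, not any real API):

        def trace_wavefront(primary_rays, scene, max_depth=8):
            # Trace the current wave of rays, queue any spawned reflection/refraction
            # rays, and repeat until nothing is left or the ray-depth limit is reached.
            queue, depth = list(primary_rays), 0
            while queue and depth < max_depth:
                spawned = []
                for ray in queue:
                    hit = scene.intersect(ray)        # the per-ray test that IPS counts
                    if hit is None:
                        continue                      # ray left the world-space
                    if hit.material.reflective:
                        spawned.append(hit.reflect(ray))
                    if hit.material.refractive:
                        spawned.append(hit.refract(ray))
                queue, depth = spawned, depth + 1     # each pass traces fewer rays than the last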

    Even so, having performance in the multiple TIPS range is pretty impressive.  There's more involved than just Intersections Per Second, of course.  But it's the biggest bottleneck in the ray-tracing pipeline.

  • drzap Posts: 795
    edited August 2018
    hphoenix said:
    drzap said:
    drzap said:
    drzap said:
     

    I took note of that benchmark. I disregarded it because it failed to include render times, which is the conventional way to measure rendering performance. Instead it uses a unit I have never heard of nor have seen anywhere else on the internet.

    FLOPS may be on the internet, IPS won't be.  It's a bit niche.  (The "T" on the front is just the Tera- prefix, or Trillions)

    FLOPS are FLoating-point Operations Per Second.  So 1 TFLOPS is 1,000,000,000,000 floating point operations each second.

    IPS are Intersections Per Second.  It's a raytracing performance indicator.  Raytracing performance is primarily bound by the code which calculates ray-object intersections.  Every camera ray (and every spawned ray for refraction and/or reflection from a recursion) has to be checked for intersection with EVERY OBJECT IN THE SCENE.  So the number of Ray-Object Intersection tests is very large (and for polygonal models, you have to test against EACH polygon....implicit objects it's a single test.)

    Now realize that in pure ray-tracing, one ray is shot through EVERY PIXEL in the image.  When you do supersampling to anti-alias, you also shoot MULTIPLE rays per pixel.  So in a single 4k image, 3840x2160 = 8,294,400 pixels.  So that's over 8 million primary rays.  Simple supersampling uses one extra for each pixel, so about 16.5 million.  Now if ANY rays hit a reflective or refractive (or both) surface, NEW rays will be spawned to trace.  So to render SIMPLE raytracing with no reflection/refraction in it, at 4k resolution, at 30fps, with simple antialiasing, is about 497.664 MILLION intersections per second.....or about 0.5 GIPS.

    So when your Ray-Tracing Hardware is rated in TIPS, that's Trillions of Intersections Per Second.  

    Now, ray-tracing is not a flat parallel process.  Recursion is required to handle reflection/refraction, so it's more of a multi-pass problem, where each pass involves fewer operations than before.  Raytrace all the primary rays.  Any that strike reflective/refractive surfaces are then queued for tracing, and any of THOSE that strike reflective/refractive surfaces will get queued for tracing, etc., until all rays have struck non-reflective and non-refractive surfaces, have gone outside the world-space, or you hit a ray-depth recursion limit.

    Even so, having performance in the multiple TIPS range is pretty impressive.  There's more involved than just Intersections Per Second, of course.  But it's the biggest bottleneck in the ray-tracing pipeline.

    Those are nice explanations for some common industry expressions. Now, can you explain megapaths? Because that is what I was referring to. I don't know what the heck a megapath is.
    Post edited by drzap on
  • outrider42 Posts: 3,679
    drzap said:
    drzap said:
    drzap said:
    The big story here for Daz 3D users is the potential for realtime raytracing. This is made possible by Nvidia's RTX cores that are dedicated to this task. Of course this will be featured in Iray by means of Optix, but it is not the sole means. Microsoft has a new API and there is another gateway through Vulkan. This doesn't necessarily mean anything for Daz Studio. First, Daz will need to take advantage of the RTX API in its licensed version of iRay. This seems likely, but not a certainty. Daz Studio iRay doesn't always follow in lockstep with the commercial version. As far as tensor cores are concerned, I researched this a while ago with the release of the Titan V. There is no indication that tensor cores have anything to do with realtime raytracing directly, but they are extremely useful for the training of AI denoising and inferencing features. This is nothing new and has been in action since the Volta. Nvidia seems to be expanding the use of AI in the enabling of VFX, and that can be nothing but good news for filmmakers and special effects artists, but really has nothing much to do with Daz Studio users. I would not be surprised if the consumer GeForce cards didn't even have tensor, since AI is being aimed at content creators and not those who consume the content. At any rate, it seems that realtime is really here. My favorite renderers have already announced support for the RTX protocol and I look forward to getting my hands on a Quadro or two.

     

    Then explain why Titan V more than doubles the Iray rendering speed over Titan Pascal or 1080ti.

     

    There are many differences between Pascal and Volta, both in software and hardware. To conclude that rendering speed improvements are the result of tensor cores without citing proof isn't wise. Volta has more CUDAs, different memory architecture as well as vastly improved data bandwidth, among other things, besides the addition of tensor. I have not seen any rendering benchmarks that have recorded anything close to 2X performance improvement over the previous generation (perhaps you can provide one). The most common number is about 60%, like that experienced by a very dependable source at Puget Systems (link below), who also state that tensor is unlikely to have any effect on rendering. This is in line with what Nvidia itself states. Maybe you can tell us why you are so convinced otherwise? https://www.pugetsystems.com/blog/2017/12/12/A-quick-look-at-Titan-V-rendering-performance-1083/

    If you read the thread you would have found my link to a benchmark on the first page, 12th post. I've posted this bench at least 3 times in other threads on this site, all threads covering the future GPUs as well as the Iray Benchmark thread that sickleyield made. I'm not linking it again. It comes from a group that has been benchmarking Iray and other programs for some time. They have covered numerous Iray SDKs.

    Also, CEO Huang did say Tensor could be useful for gaming, too. Considering he is the CEO, I would be more apt to take his word on that possibility.

    https://www.dvhardware.net/article68217.html

    Quote:

    "The type of work that you could do with deep learning for video games is growing. And that’s where Tensor Core to take up could be a real advantage. If you take a look at the computational that we have in Tensor Core compare to a non optimized GPU or even a CPU, it's now to plus orders of magnitude on greater competition of throughput. And that allows us to do things like synthesize images in real time and synthesize virtual world and make characters and make faces, bringing a new level of virtual reality and artificial intelligence to the video games."

     

    I took note of that benchmark. I disregarded it because it failed to include render times, which is the conventional way to measure rendering performance. Instead it uses a unit I have never heard of nor have seen anywhere else on the internet. How does it compare to render times? Does it match linearly or does it diminish as the number gets higher? Nobody knows, since there is no explanation. There is also nothing said about tensor contributing to the high score, and since it is the only test that has measured such a big difference between GPUs (and I have seen quite a few benchmarks of Volta on various renderers), I'm going to have to discount it. Puget Systems has a much better reputation that I can rely on. Not saying that this megapaths metric isn't legit. I'm just saying it is unheard of and unexplained, thus not a reliable reference. I'll pass.

    "Another thing, you mentioned you doubted if Daz would get the SDK for this Iray. Why? Nvidia makes only one SDK. Its not like you get to pick and chose..."

    Did I say that? Read it again. I said it "seems likely" that at least some derivative of RTX will end up in Iray. That does not sound like doubt, but Daz has a history of omitting features they feel they don't need in Daz Studio (motion blur!). Nothing is certain.

    "Tensor cores are good for AI denoising, and hey look, Daz updated its denoiser with the latest beta. Hmm...."

    Yeah, I said that, but that has very little to do with raytracing except to reduce the passes needed for an acceptable render, and the benchmark you cite mentions nothing about using the denoiser. More than likely, the reason why Volta performs so well is because it uses "Volta optimized CUDAs" (per Nvidia website) and iRay is optimized to use Volta CUDA. Also note that they ran the benchmark on a Linux workstation using commercial iRay. That's about as far away from Daz Studio as you can get. I don't see how that quote from Huang explains how tensors enable realtime raytracing. Tensors are a great thing. But they are not needed to use the AI denoiser (they are only used in the training process by the developer) and there is nothing that I can see that says they are a vital component of raytracing. They are used for AI purposes, which is exciting in itself, but not directly related to realtime. Nvidia has never been shy about promoting itself. If tensor cores had further value in the 3D rendering community, I have a feeling the news would have been plastered across every 3D site on the internets.

    And I discounted the Puget bench because 1) It was right after the V launched, so who can say what condition the drivers are in. 2) It's not even Iray. 3) It was literally a quickie, and even stated as much in the title. It is not a full benchmark in any capacity. I don't care what their reputation is, the simple fact of the matter is they have not even tested Iray on Titan V. At all. Until you can bring up an Iray benchmark of the Titan V, you have nothing to suggest otherwise. Vray is similar. Octane is similar. But you know they are not exactly the same. And note who makes Iray: Nvidia. It would be logical for Nvidia to make their hardware work best with their software and vice versa. The newest Iray is specifically designed to take advantage of the new hardware that Volta offers. I would be very much willing to bet that 3rd party renderers do not take full advantage of the RTX features and Tensor because of just how new they are. For them, they are probably running almost purely on CUDA alone, and just like the non ray traced video games, there would only be around a 25% boost. But since Iray is built by Nvidia, they know exactly how to use the features they are building into these new cards for their own gain. They sell Iray, so ensuring it has an advantage over other renderers is something they would likely do.

    Tensor cores are one piece of the puzzle, and the CEO briefly mentioned how they would benefit gaming. All of those things he mentioned can be applied to rendering.

    Another reason why Tensor could be in consumer cards: good old Hairworks could've been powered by AI.

    https://wccftech.com/nvidia-gpus-ai-rendering-power-hairworks-aaa-games/

    And when you look up AI on Nvidia's website, it says that AI can benefit all industries. It lists a few, and one of them is gaming.

    Another quote "Turing is designed to boost that kind of imaging, with dedicated RT Cores working together with Tensor Cores that use AI inferencing to enable real-time ray tracing."

    From    https://newatlas.com/nvidia-turing-gpu-ray-tracing/55900/

    This statement shows that Tensor and RTX work hand in hand.

    If Nvidia has no intention of using Tensor in gaming cards, it sure is weird that they keep mentioning gaming as benefiting from it.

    Megapaths, 1 million paths per second. No, I don't know exactly what it means, but all the GPUs line up to their expected performance levels relative to each other. So it would be surprising if their Titan V was really benched wrong or something.

    We shall see in a few days.

    New rumors point to a 2080ti already. I think it's possible. Nvidia has delayed this launch a bit, so it stands to reason they have already produced enough dies to start on the Ti earlier. I now believe the "2080+" mentioned previously is in fact the 2080ti. So if true, the 2080ti will launch a month or so after the 2080. Also note the rumor says it has Tensor cores.

  • drzap said:
    hphoenix said:
    drzap said:
    drzap said:
    drzap said:
     

    I took note of that benchmark. I disregarded it because it failed to include render times, which is the conventional way to measure rendering performance. Instead it uses a unit I have never heard of nor have seen anywhere else on the internet.

    FLOPS may be on the internet, IPS won't be.  It's a bit niche.  (The "T" on the front is just the Tera- prefix, or Trillions)

    FLOPS are FLoating-point Operations Per Second.  So 1 TFLOPS is 1,000,000,000,000 floating point operations each second.

    IPS are Intersections Per Second.  It's a raytracing performance indicator.  Raytracing performance is primarily bound by the code which calculates ray-object intersections.  Every camera ray (and every spawned ray for refraction and/or reflection from a recursion) has to be checked for intersection with EVERY OBJECT IN THE SCENE.  So the number of Ray-Object Intersection tests is very large (and for polygonal models, you have to test against EACH polygon....implicit objects it's a single test.)

    Now realize that in pure ray-tracing, one ray is shot through EVERY PIXEL in the image.  When you do supersampling to anti-alias, you also shoot MULTIPLE rays per pixel.  So in a single 4k image, 3840x2160 = 8,294,400 pixels.  So that's over 8 million primary rays.  Simple supersampling uses one extra for each pixel, so about 16.5 million.  Now if ANY rays hit a reflective or refractive (or both) surface, NEW rays will be spawned to trace.  So to render SIMPLE raytracing with no reflection/refraction in it, at 4k resolution, at 30fps, with simple antialiasing, is about 497.664 MILLION intersections per second.....or about 0.5 GIPS.

    So when your Ray-Tracing Hardware is rated in TIPS, that's Trillions of Intersections Per Second.  

    Now, ray-tracing is not a flat parallel process.  Recursion is required to handle reflection/refraction, so it's more of a multi-pass problem, where each pass involves fewer operations than before.  Raytrace all the primary rays.  Any that strike reflective/refractive surfaces are then queued for tracing, and any of THOSE that strike reflective/refractive surfaces will get queued for tracing, etc., until all rays have struck non-reflective and non-refractive surfaces, have gone outside the world-space, or you hit a ray-depth recursion limit.

    Even so, having performance in the multiple TIPS range is pretty impressive.  There's more involved than just Intersections Per Second, of course.  But it's the biggest bottleneck in the ray-tracing pipeline.

     

    Those are nice explanations for some common industry expressions. Now, can you explain megapaths? Because that is what I was referring to. I don't know what the heck a megapath is.

    Iray is a "path tracer" rather than a ray tracer - see the Max Path Length property in Render Settings and "Light Path Expressions". I'm hazy on the difference between paths and rays, but presumably "megapath" is counting the number that can be handled (by the millions), which is potentially a useful figure for users (like giving the number of rays or samples), though given that paths can vary in complexity I'm still not entirely sure what they would mean.
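
    If that reading is right, the conversion would be trivial; a rough sketch, assuming a "megapath" is just a million completed paths (the sample count and render time below are made-up illustrative inputs):

        def megapaths_per_second(paths_traced, seconds):
            return paths_traced / seconds / 1_000_000

        # e.g. a 3840x2160 frame at 100 samples (paths) per pixel, finished in 60 seconds:
        print(megapaths_per_second(3840 * 2160 * 100, 60))  # ~13.8 "megapaths" per second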

  • Can someone explain what makes tensor cores better at realtime raytracing but not normal raytracing as in Iray? Aren't these pretty much the same tasks?

     

    The RTX ray tracing is actually a hybrid rendering process.  It uses the texture-mapped rendering from live game engines and blends it with dynamic lighting from the ray tracing calculations.  Iray is doing full scene ray tracing, so it's slow compared to real time.  They are using new "ray" cores for the RTX, separate from tensor cores.

     

  • Mendoman Posts: 401

    Well, when a benchmark uses some random/unknown unit I'm always a little sceptical that somebody has probably influenced that benchmark. If you develop your own unit, and don't tell how that unit is really made or what it really represents, you can make anything look really good.

     

    For example, if I make a new benchmark for cars, and my spreadsheets clearly show that the old FIAT 127 has 45.5 rabbitpowers while the new Ferrari 812 only has 3.8 rabbitpowers, the FIAT 127 must be better? My benchmark could potentially be useful for some users, but nobody really knows if I don't tell how I calculate those rabbitpowers. In case anybody is interested, my new rabbitpower is calculated as 3000 / horsepower. I just chose that so that it looks like the FIAT 127 has a bigger number than the Ferrari. Bigger is always better, right?
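
    For anyone who wants to check the joke's arithmetic, a tiny Python sketch using the formula above (the horsepower figures are approximate):

        def rabbitpower(horsepower):
            return 3000 / horsepower           # the made-up unit defined above

        print(round(rabbitpower(66), 1))       # FIAT 127, roughly 66 hp   -> 45.5 rabbitpowers
        print(round(rabbitpower(789), 1))      # Ferrari 812, roughly 789 hp -> 3.8 rabbitpowers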

     

    Anyways, if those benchmarkers really wanted to show some meaningful comparisons for Iray, then iterations in a given time or final render time or anything like that would have actually meant something. Now we just have some random number and random unit, and nobody really knows what those mean. After that kind of benchmark I think it's really bold to make a statement that Volta is 2x faster/better than some other card in Iray rendering. It could be that Volta cards can just handle more rays or paths or whatever in a scene, and those megapaths have nothing to do with render time, which is what most users are interested in.

     

    Also, I do agree that quick tests for other render engines don't really tell the whole truth about how well, for example, Volta cards render Iray scenes. I assume that very few gamers bothered to buy a $3000 Volta card, so there's very little interest in doing benchmarks for those cards either. Now that Nvidia is releasing a real new family of gamer cards, hopefully we'll start to get proper benchmarks as well, and finally get some real information.

  • hphoenix Posts: 1,335
    drzap said:
    hphoenix said:
    drzap said:
    drzap said:
    drzap said:
     

    I took note of that benchmark. I disregarded it because it failed to include render times, which is the conventional way to measure rendering performance. Instead it uses a unit I have never heard of nor have seen anywhere else on the internet.

    FLOPS may be on the internet, IPS won't be.  It's a bit niche.  (The "T" on the front is just the Tera- prefix, or Trillions)

    FLOPS are FLoating-point Operations Per Second.  So 1 TFLOPS is 1,000,000,000,000 floating point operations each second.

    IPS are Intersections Per Second.  It's a raytracing performance indicator.  Raytracing performance is primarily bound by the code which calculates ray-object intersections.  Every camera ray (and every spawned ray for refraction and/or reflection from a recursion) has to be checked for intersection with EVERY OBJECT IN THE SCENE.  So the number of Ray-Object Intersection tests is very large (and for polygonal models, you have to test against EACH polygon....implicit objects it's a single test.)

    Now realize that in pure ray-tracing, one ray is shot through EVERY PIXEL in the image.  When you do supersampling to anti-alias, you also shoot MULTIPLE rays per pixel.  So in a single 4k image, 3840x2160 = 8,294,400 pixels.  So that's over 8 million primary rays.  Simple supersampling uses one extra for each pixel, so about 16.5 million.  Now if ANY rays hit a reflective or refractive (or both) surface, NEW rays will be spawned to trace.  So to render SIMPLE raytracing with no reflection/refraction in it, at 4k resolution, at 30fps, with simple antialiasing, is about 497.664 MILLION intersections per second.....or about 0.5 GIPS.

    So when your Ray-Tracing Hardware is rated in TIPS, that's Trillions of Intersections Per Second.  

    Now, ray-tracing is not a flat parallel process.  Recursion is required to handle reflection/refraction, so it's more of a multi-pass problem, where each pass involves fewer operations than before.  Raytrace all the primary rays.  Any that strike reflective/refractive surfaces are then queued for tracing, and any of THOSE that strike reflective/refractive surfaces will get queued for tracing, etc., until all rays have struck non-reflective and non-refractive surfaces, have gone outside the world-space, or you hit a ray-depth recursion limit.

    Even so, having performance in the multiple TIPS range is pretty impressive.  There's more involved than just Intersections Per Second, of course.  But it's the biggest bottleneck in the ray-tracing pipeline.

     

    Those are nice explanations for some common industry expressions. Now, can you explain megapaths? Because that is what I was referring to. I don't know what the heck a megapath is.

    Path Tracing is a form of raytracing.  It uses a Monte-Carlo randomized subsampling of rays for each pixel.  More info here:  https://chunky.llbit.se/path_tracing.html

    MegaPaths would be similar to TIPS, but seems an odd metric.  Since a given path may or may not involve multiple reflection and refraction rays, no 'path' is of definite or deterministic length or number of rays.  A ray could never intersect any objects, and thus the initial rays would never recurse.  Or it could strike glass, resulting in two new rays being recursively traced (one for the reflected ray, and one for the refracted ray).....and those could also strike things that could spawn new rays.  And so on.  Until it reaches max recursion depth, intersects an object with no reflection or refraction, or fails to intersect another object.  Without some 'average rays per path' defined, MegaPaths is a bit of an undefined quantity.  And given that scenes can vary wildly in content, that value would vary dramatically as well.

    TIPS is simply ray-object intersection tests per second (in trillions; GIPS would be in billions, MIPS would be confusing as it's already an acronym for something else...) and is scene independent.  More complex scenes will require more such tests for a given resolution image.  But the performance is a valid metric.
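
    To make that concrete, a small sketch; the averages below are exactly the assumptions that would have to be stated for MegaPaths to be comparable across scenes (all numbers are illustrative only):

        def intersection_tests_per_second(paths_per_sec, avg_rays_per_path, avg_tests_per_ray):
            # paths/sec only maps onto intersection tests/sec via assumed averages
            return paths_per_sec * avg_rays_per_path * avg_tests_per_ray

        print(intersection_tests_per_second(10e6, 3, 50))  # 1.5e9 tests/sec, i.e. 1.5 GIPS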

     

  • ebergerly Posts: 3,255
    edited August 2018

    While I get the enthusiasm for new technology, I again caution folks not to jump to conclusions based on a very few, intentionally vague and inconclusive statements.

    Statements such as "would be", "could be", "can be", "can benefit", "it's possible", "it stands to reason", "working hand in hand", etc., are unfortunately virtually meaningless, and certainly not even close to proof of anything.

    Anything "can" be. Working "hand in hand" means nothing. 

    The code behind de-noising software may have progressed to the point where it is a legitimate way to speed up renders AND maintain high quality comparable to a raytraced render, while cutting down the raytraced overhead. Or maybe it's only usable with games, where very high quality isn't needed with moving images. Or maybe it's a bit like Blender's de-noising, usable in certain lighting situations, but far too noticeable in others. And maybe for some renders we all might decide to disable it. Or maybe it's the greatest thing since the invention of the wheel. And sure, tensor cores are part of that. Just as the other components I mentioned before, which may or may not play significant roles in our renders. Maybe de-noising and tensor cores will be somewhat irrelevant compared to the other components, such as SM's for rastering and RT cores for raytracing, etc. Maybe the Optix with the new RT cores and the other architectural hardware and software API components are what will be the major factor in boosting performance.  

    But to assume any of this is a done deal and will bring us all into render heaven is premature to say the least.

    Again, until we have render benchmarks for Studio Iray scenes that show actual render times, all of this is just baseless speculation. Having seen the results in some Blender renders which used de-noising, I can say it speeds up renders a lot and gives nice results. In other cases I turn it off because it's extremely noisy and obvious. Maybe now the de-noising advances and faster tensor calcs will improve that quality. But keep in mind that you don't get something for nothing. Higher quality = lower speed, and lower quality = higher speed. 

    It depends. It's complicated.   

    Post edited by ebergerly on
  • kyoto kid Posts: 40,678

    ....and why I am still progressing with the configuration and design of my dual HCC Xeon high memory render box.  I know the quality it will be able to produce.  It may take a bit longer to do the job, true, but no guesswork or speculation involved.

  • nicstt Posts: 11,715

    I'm inclined to agree with ebergerly.

    Nvidia wants to sell cards; they need to do so.

    Sure, there will be an improvement on previous cards. Previous experience suggests we'll wait months for Iray support.

    ... I'll be waiting until I know what I'm getting... Not think, hope, presume, etc. I also hope it won't be months. :)

    Speculation (or what is all-too-often guessing) can be fun, but the risk is that it becomes fact, without it being anything close.

  • kyoto kid Posts: 40,678
    edited August 2018

    ...yeah, I still remember the "8 GB 980 Ti" all the gaming and tech sites were talking about.

    Post edited by kyoto kid on
  • Ghosty12 Posts: 1,995
    edited August 2018

    Here is a demo video called Project Sol that shows off the power of RTX cards, though this would probably be a Quadro RTX 8000 doing the work in this video.

     

    Post edited by Ghosty12 on
  • ebergerly Posts: 3,255
    edited August 2018

    WOW!!! If I buy an RTX card all my renders will look professional and perfect like that with perfect materials and lighting and animation and models !!! And it will all be done in real time !!! laugh

    Post edited by ebergerly on
  • ghosty12 said:

    Here is a demo video called Project Sol that shows off the power of RTX cards, though this would probably be a Quadro RTX 8000 doing the work in this video.

    Mostly shiny, hard surfaces though - and some strategic mist when they might have needed more. I'd be more impressed with a human or creature sequence of similar quality.

  • nicstt Posts: 11,715
    ebergerly said:

    WOW!!! If I buy an RTX card all my renders will look professional and perfect like that with perfect materials and lighting and animation and models !!! And it will all be done in real time !!! laugh

    I don't know what it is, but I just can't help thinking I'm detecting a modicum of sarcasm here. :)

  • ebergerly Posts: 3,255

    I'm guessing it took a team of 10 or more artists and animators about 6 months or more to develop that video. And the purpose of the video is to show you how awesome and fast and professional your work will be with an RTX card? 

    I don't get it. 

  • outrider42 Posts: 3,679
    Mendoman said:

    Well, when a benchmark uses some random/unknown unit I'm always a little sceptical that somebody has probably influenced that benchmark. If you develop your own unit, and don't tell how that unit is really made or what it really represents, you can make anything look really good.

     

    For example, if I make a new benchmark for cars, and my spreadsheets clearly show that the old FIAT 127 has 45.5 rabbitpowers while the new Ferrari 812 only has 3.8 rabbitpowers, the FIAT 127 must be better? My benchmark could potentially be useful for some users, but nobody really knows if I don't tell how I calculate those rabbitpowers. In case anybody is interested, my new rabbitpower is calculated as 3000 / horsepower. I just chose that so that it looks like the FIAT 127 has a bigger number than the Ferrari. Bigger is always better, right?

     

    Anyways, if those benchmarkers really wanted to show some meaningful comparisons for Iray, then iterations in a given time or final render time or anything like that would have actually meant something. Now we just have some random number and random unit, and nobody really knows what those mean. After that kind of benchmark I think it's really bold to make a statement that Volta is 2x faster/better than some other card in Iray rendering. It could be that Volta cards can just handle more rays or paths or whatever in a scene, and those megapaths have nothing to do with render time, which is what most users are interested in.

     

    Also, I do agree that quick tests for other render engines don't really tell the whole truth about how well, for example, Volta cards render Iray scenes. I assume that very few gamers bothered to buy a $3000 Volta card, so there's very little interest in doing benchmarks for those cards either. Now that Nvidia is releasing a real new family of gamer cards, hopefully we'll start to get proper benchmarks as well, and finally get some real information.

    How is the megapath stat really different from time as a bench? Think about it. All of the things you said about different scenes changing how the bench may run hold true for a timed bench as well. If you load up on a lot of reflections and translucent surfaces, that will jack up the time in the same way. Just look at the chart. When you factor all the numbers, all the cards we know other benchmarks for line up exactly where we would expect them to relative to each other. The only one we don't have experience with here is Titan V. This is not comparable to making up your weird stat nobody knows. Because again, we have all of those other cards on that chart to reference. Whether it be time or megapath, both stats show the same differences in rendering ability in all the cards we know of that have been benched right here in our own benchmark thread. This company has been benchmarking Iray since 2014, and has benches for each of the major Iray SDKs released. This isn't some fly by night operation designed to mislead people (or customers). And most importantly, they are the only group I know of that has benchmarked Iray this much. Not other applications, but Iray, and they have benched a wide variety of GPUs, and even servers. They even benched the monstrous DGX machines. Again, no one else has done this. I am inclined to believe them.
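
    To put it concretely: for a fixed workload, time is just workload divided by throughput, so the relative differences come out the same either way. A tiny sketch with made-up numbers, not benchmark results:

        def render_time_seconds(total_megapaths, megapaths_per_second):
            return total_megapaths / megapaths_per_second

        workload = 5000                             # a fixed scene/sample budget, in "megapaths"
        print(render_time_seconds(workload, 10))    # slower card: 500 seconds
        print(render_time_seconds(workload, 20))    # double the throughput -> half the time: 250 seconds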

    The 6 times faster tagline is obviously for very specific situations. It is exactly the same tagline Nvidia claimed with Pascal when they said it was 2.5 times faster than Maxwell...at VR. And this claim was largely true. Pascal is much faster for VR applications. Which leads to Turing, and this 6 times faster claim. The claim is 6 times faster at ray tracing. This claim is obviously hyped up like the 2.5 times faster VR claim. A pure ray tracing application is the only thing that will see a boost anywhere near this...which just happens to be Iray. And again, Nvidia created Iray, so if any application sees this benefit, it would almost certainly be Iray. These features are literally tailor made for Iray.

    So Daz didn't include motion blur in their release. But think about this. To use motion blur you need to create the settings and tools for them. That is a different beast than the passive use of Turing, Tensor, and RTX. These are things that are built into Iray, and not at all comparable to the option of motion blur. I believe Daz Iray has motion blur, but they simply didn't give us that option menu, the tools to use it. I'm guessing they have their reasons, and perhaps that is something to come in a future update. But I do not believe that Daz could refuse the ability to use Tensor or RTX. That doesn't even make sense. The only real question is how long it will take for the SDK to release, and for Daz to release it in Studio. I would imagine they want to get it done ASAP as this SDK will add Turing support, and they don't want a repeat of Pascal, which took months to see an update for support.

    In bigger news, PNY has basically confirmed the 2080 and 2080ti specs by accidentally releasing their promo materials for them. They do not directly mention Tensor, but they do have a very big bullet point for "AI enhanced graphics", which would seem to indicate some Tensor cores are present. The cards have a very high TDP. We haven't seen such power requirements in a long time. So anyone looking at these needs to make sure they have a good PSU, especially if they have any desire to run multiple GPUs. The bullet points also make the same "up to 6 times faster" claim, so the 6 times claim is not just being used for Quadro cards. It's just as I said, ray tracing and AI are massive benefits to gamers as well, so it only makes sense to include some of this for gaming. This does not step on Quadro much, because Quadro still has massive VRAM, vastly better support for CAD software, ECC memory, and TCC mode. Gaming and pro are still very much separated. Thankfully we as Daz Iray users don't need any of those things for Iray, well maybe except VRAM. But are you really going to shell out thousands of dollars just for more VRAM? https://wccftech.com/pny-nvidia-geforce-rtx-2080-ti-and-rtx-2080-listed-for-pre-order/
  • bluejaunte Posts: 1,863
    ebergerly said:

    I'm guessing it took a team of 10 or more artists and animators about 6 months or more to develop that video. And the purpose of the video is to show you how awesome and fast and professional your work will be with an RTX card? 

    I don't get it. 

    The goal here was to show what can be done with realtime raytracing. It's no different than any fancy game engine demo before it, except for the raytracing. Realtime is the key here. Clearly they failed to impress you though. laugh

    Try this: ask yourself how long it would have taken to render something of similar quality in Iray at 25 frames per second. Disregard how long it took to make, that's not the point. Any CG movie, any game will take an obscene amount of time to produce.

  • ebergerly Posts: 3,255
    edited August 2018

    Yeah, I get it. It's like the Blender Eevee stuff with realtime previews that look almost like final renders, using existing GPU's. 

    https://www.youtube.com/watch?v=Fwk1cYA3G1U

    https://www.youtube.com/watch?v=arehBE6g08s

    But I think in real life, much of it is a standard marketing ploy called "association" (okay, that's my term, but you get the idea). You associate your product with something that is cool and awesome and funny, and people get it in their heads that "RTX = cool, awesome and funny professional realtime rendering with great animation and materials and lighting", when in fact it's, at the end of the day, an inevitable and incremental improvement in software and hardware to speed things up. 

    Personally, I love the idea of speeding up renders. But again, at the end of the day, it comes down to two numbers;
     

    • How much the card costs
    • How much it speeds up my Studio/Iray renders at the same quality 

    Videos like that tell me nothing about those.

     

    Post edited by ebergerly on
  • bluejaunte Posts: 1,863

    I didn't enjoy this demo much by the way. Have a look at this Unity demo with no raytracing and it still looks and feels way more interesting.

    Realtime raytracing makes sense though as a next step while we move toward really anything being rendered in realtime. Who wants to wait for a render to finish, right?

  • ebergerly Posts: 3,255

    My only point is that existing GPU's, when running the latest software, can get near the point that's being touted so much by RTX. Which is why I'm so much more impressed with the software developments, not necessarily the hardware developments. 

    I LOVE realtime ray tracing. But it's not an RTX thing. It's not a tensor cores thing. 

  • Ghosty12 Posts: 1,995
    edited August 2018
    ghosty12 said:

    Here is a demo video called Project Sol that shows off the power of RTX cards, though this would probably be a Quadro RTX 8000 doing the work in this video.

    Mostly shiny, hard surfaces though - and some strategic mist when they might have needed more. I'd be more impressed with a human or creature sequence of similar quality.

    Yeah, the main problem with it was going overboard with the shiny surfaces, but I gather that was the main purpose of the video. A small amount of transparency in the helmet visor would also have been good.

    And to me there was just enough mist/smoke/fog in the room, as having too much can ruin a scene if it is not the type of scene that requires a lot of mist/smoke/fog. What impressed me the most, funnily enough, was the laser part, and a couple of scenes before the man came into the movie. But that was about it.

    ebergerly said:

    Yeah, I get it. It's like the Blender Eevee stuff with realtime previews that look almost like final renders, using existing GPU's. 

    https://www.youtube.com/watch?v=Fwk1cYA3G1U

    https://www.youtube.com/watch?v=arehBE6g08s

    But I think in real life, much of it is a standard marketing ploy called "association" (okay, that's my term, but you get the idea). You associate your product with something that is cool and awesome and funny, and people get it in their heads that "RTX = cool, awesome and funny professional realtime rendering with great animation and materials and lighting", when in fact it's, at the end of the day, an inevitable and incremental improvement in software and hardware to speed things up. 

    Personally, I love the idea of speeding up renders. But again, at the end of the day, it comes down to two numbers;
     

    • How much the card costs
    • How much it speeds up my Studio/Iray renders at the same quality 

    Videos like that tell me nothing about those.

     


    Well, the cost and specs of the Quadro RTX range are as below:

    https://nvidianews.nvidia.com/news/nvidia-unveils-quadro-rtx-worlds-first-ray-tracing-gpu

    GPU             | Memory | Memory with NVLink | Ray Tracing     | CUDA Cores | Tensor Cores
    Quadro RTX 8000 | 48GB   | 96GB               | 10 GigaRays/sec | 4,608      | 576
    Quadro RTX 6000 | 24GB   | 48GB               | 10 GigaRays/sec | 4,608      | 576
    Quadro RTX 5000 | 16GB   | 32GB               | 6 GigaRays/sec  | 3,072      | 384

     

    Pricing
    Quadro RTX 8000 with 48GB memory: $10,000 estimated street price
    Quadro RTX 6000 with 24GB memory: $6,300 ESP
    Quadro RTX 5000 with 16GB memory: $2,300 ESP

     

    And in just over 48 hours from now Nvidia will supposedly reveal the consumer grade cards..

    Post edited by Ghosty12 on
  • ebergerly Posts: 3,255

    Again, specs are meaningless. Only actual render time with actual Studio/Iray scenes matters. 

    And if those are the street prices, why are we even discussing? 

  • Kevin Sanderson Posts: 1,643
    edited August 2018
  • Ghosty12 Posts: 1,995
    edited August 2018
    ebergerly said:

    Again, specs are meaningless. Only actual render time with actual Studio/Iray scenes matters. 

    And if those are the street prices, why are we even discussing? 

    You wanted to know, so I told you, and the only pricing and specs I could find were for the professional-level cards; you will have to wait two days for the consumer cards to be announced/released. Also don't forget that it all depends on when Nvidia updates iRay for the new cards and on when Daz implements it in Daz Studio, so getting iRay render speed tests and so on will be a long way off.


    What is interesting is rumor has it that they are going to release the Ti version rather quickly, after they release the 2080..

    Post edited by Ghosty12 on
  • Kevin Sanderson Posts: 1,643
    edited August 2018

    Let's hope they quickly release to DAZ a new version of Iray that will work with them.

    Post edited by Kevin Sanderson on
  • ebergerly said:

    My only point is that existing GPU's, when running the latest software, can get near the point that's being touted so much by RTX. Which is why I'm so much more impressed with the software developments, not necessarily the hardware developments. 

    I LOVE realtime ray tracing. But it's not an RTX thing. It's not a tensor cores thing. 

     

    They rely on tricks to do that.  For all the various cases they have to manipulate textures and lighting methods, because they are approximations. When you use general-purpose equations based on physics, you consistently get the same real-world result.  

  • outrider42 Posts: 3,679
    edited August 2018
    ebergerly said:

    WOW!!! If I buy an RTX card all my renders will look professional and perfect like that with perfect materials and lighting and animation and models !!! And it will all be done in real time !!! laugh

     

    If you build all of it in that quality, yes, you really can. You cannot do that with any previous generation GPU. This demo would be impossible on anything else, or at least it would run at a much lower frame rate. It would probably be too low for a proper video game expected to run at 60 FPS or more.

    ebergerly said:

    My only point is that existing GPU's, when running the latest software, can get near the point that's being touted so much by RTX. Which is why I'm so much more impressed with the software developments, not necessarily the hardware developments. 

    I LOVE realtime ray tracing. But it's not an RTX thing. It's not a tensor cores thing. 

    Current GPUs can sort of do ray tracing, because real time ray tracing is being done with DX12. That is not a secret, but it is not the point of this demo. Current cards cannot play the RTX demo you saw, or if they can, not at a high frame rate.

    The Unity Adam demo has always been very impressive, and I have linked it myself before as an example of what gaming engines can do. It's funny how I got so much pushback from that back then, as so many users seem to almost take offense at the idea of a game engine coming anywhere close to Iray. In the end, the result is all that matters, so whether it is RTX or current game engines, whatever works works. But RTX does this in a way that ensures the light is realistic. Other lighting in games might look good, but it may not be accurate to real lighting. We have become spoiled by games and movies that have fantastical special effects, but in the end many of these effects are nothing like how real light would behave.

    RTX can add a new sense of reality to video games, and it will absolutely become a huge deal in the VFX industry. The only limiting factor is VRAM, and with NVLink and Quadro, studios now have massive banks of memory to pool from. NVLink is quite a game changer itself.

    Speaking of NVLink, the PNY leak proves that NVLink will be a feature for the 2080ti and 2080. It is limited to 2-way, but that is a big deal if it pools the VRAM. That could mean two 2080ti's in NVLink would have 22 GB of VRAM. That would be a game changer. I wonder how much the NVLink bridge costs; I bet it is not cheap.

    Post edited by outrider42 on
  • ebergerly Posts: 3,255
    edited August 2018

     

     

    Let's hope they quickly release to DAZ a new version of Iray that will work with them.

    I'm guessing that will take quite a while. It sounds like this is a major re-work of the APIs and drivers for this new architecture. And depending on the final price for the consumer versions, I'm not sure if there will be more delay waiting for Studio/Iray to get updated, or for someone to decide to shell out the money for the new GPU. laugh

    They're talking something like $1,000 for a 2080ti? 

    My totally uneducated and off-the-wall guess is that it will be next year before Studio is updated for it (the 2080ti I guess they're calling it?), and I'll take a wild guess that the actual improvement in benchmark (Sickleyield) render times will be something like 20-30% over a 1080ti.

     

     

    Post edited by ebergerly on