OT: New Nvidia Cards to come in RTX and GTX versions?! RTX Titan first whispers.


Comments

  • kyoto kid Posts: 41,847

    ...I know when I used it in CPU rendering, renders seemed to take longer.

  • Rashad Carter Posts: 1,830

    Fairly certain that if we just look at raw CUDA performance, the higher price of these cards would not be justified. But that's a little bit beside the point when the evermotion link just posted has official quotes from numerous producers of renderers making bold statements like "8x faster" because of RTX.

    Yes, but as I reread it just now I saw that OptiX was a key factor, especially for the AI denoising. That's why I'm curious about how widely OptiX has been adopted in the current state of affairs, ahead of RTX-specific rewrites.

  • Kevin Sanderson Posts: 1,643
    edited August 2018

    Yes, I believe that is part of it. OptiX is and has been part of Iray. Some people have had issues with it, but I'm not sure why, or whether they reported everything correctly. Some have seen speeds increase and render times drop with OptiX turned on. https://developer.nvidia.com/optix

    Post edited by Kevin Sanderson on
  • bluejaunte Posts: 1,990

    Maybe I'm naive, but I don't see what the devs of Arnold, Octane and others would get out of spreading lies about what RTX can do for GPU rendering. I do think it's a given that the software needs to support this stuff first, though. That's my biggest worry for Iray, as NVIDIA so far cannot even be bothered to mention it at all. That's not a good sign; they could be out there hyping the performance of RTX in Iray, and probably would be if they had anything to show yet? Not just for these new game cards but for the RTX Quadros that are already out too.

  • bluejaunte Posts: 1,990

    Fairly certain that if we just look at raw CUDA performance, the higher price of these cards would not be justified. But that's a little bit beside the point when the evermotion link just posted has official quotes from numerous producers of renderers making bold statements like "8x faster" because of RTX.

    Yes, but as I reread it just now I saw that OptiX was a key factor, especially for the AI denoising. That's why I'm curious about how widely OptiX has been adopted in the current state of affairs, ahead of RTX-specific rewrites.

    OptiX is just an API that probably makes it somewhat more accessible for third parties to implement this NVIDIA-specific functionality. Iray is an NVIDIA product, and so naturally it already supports OptiX. What's in Daz Studio as an option is 'OptiX Prime Acceleration'. Here's a wiki entry describing that part, which I'm assuming is OptiX Prime:

    https://en.wikipedia.org/wiki/OptiX

  • ebergerly Posts: 3,255
    edited August 2018

    Here's what NVIDIA says about Optix:

    "The NVIDIA® OptiX™ ray tracing engine is a programmable system designed for NVIDIA GPUs and other highly parallel architectures. The OptiX engine builds on the key observation that most ray tracing algorithms can be implemented using a small set of programmable operations. Consequently, the core of OptiX is a domain-specific just-in-time compiler that generates custom ray tracing kernels by combining user-supplied programs for ray generation, material shading, object intersection, and scene traversal. This enables the implementation of a highly diverse set of ray tracing-based algorithms and applications, including interactive rendering, offline rendering, collision detection systems, artificial intelligence queries, and scientific simulations such as sound propagation. OptiX achieves high performance through a compact object model and application of several ray tracing-specific compiler optimizations. For ease of use it exposes a single-ray programming model with full support for recursion and a dynamic dispatch mechanism similar to virtual function calls."

    All of this stuff is about optimizing the hardware and software to do the things users need to do most. It's less and less about generalized hardware like CPUs, which can do zillions of very different and very complex types of operations for all kinds of software tasks, and more about software and hardware working together to do simple and specific stuff very quickly and efficiently. It's about focusing on specific tasks and doing them well. You design the software and hardware to work together.

    Post edited by ebergerly on
  • Kevin Sanderson Posts: 1,643
    edited August 2018

    Maybe I'm naive, but I don't see what the devs of Arnold, Octane and others would get out of spreading lies about what RTX can do for GPU rendering. I do think it's a given that the software needs to support this stuff first, though. That's my biggest worry for Iray, as NVIDIA so far cannot even be bothered to mention it at all. That's not a good sign; they could be out there hyping the performance of RTX in Iray, and probably would be if they had anything to show yet? Not just for these new game cards but for the RTX Quadros that are already out too.

    AI Denoise is already in DAZ's beta. >>

    • Added support for Deep Learning Denoiser (see "Post Denoiser" below)
    • Render Settings pane
      • Added "Post Denoiser Available", "Post Denoiser Enable" and "Post Denoiser Start Iteration" properties to the "Filtering/Post Denoiser" property group
        • "Post Denoiser Available" must be enabled prior to starting a render in order to cause the other "Post Denoiser" properties to be revealed and have meaning <<

          NVIDIA didn't promote the changes with Iray on the Pascal cards. So I wouldn't worry. Just hide and watch! :)

          In some of the videos from NVIDIA, they say Tensor cores help out RTX... so that could be a performance improvement there that we are neglecting.
    Post edited by Kevin Sanderson on
  • bluejaunte Posts: 1,990

    Maybe I'm naive, but I don't see what the devs of Arnold, Octane and others would get out of spreading lies about what RTX can do for GPU rendering. I do think it's a given that the software needs to support this stuff first, though. That's my biggest worry for Iray, as NVIDIA so far cannot even be bothered to mention it at all. That's not a good sign; they could be out there hyping the performance of RTX in Iray, and probably would be if they had anything to show yet? Not just for these new game cards but for the RTX Quadros that are already out too.

    AI Denoise is already in DAZ's beta. >>

    • Added support for Deep Learning Denoiser (see "Post Denoiser" below)
    • Render Settings pane
      • Added "Post Denoiser Available", "Post Denoiser Enable" and "Post Denoiser Start Iteration" properties to the "Filtering/Post Denoiser" property group
        • "Post Denoiser Available" must be enabled prior to starting a render in order to cause the other "Post Denoiser" properties to be revealed and have meaning <<

          NVIDIA didn't promote the changes with Iray on the Pascal cards. So I wouldn't worry. Just hide and watch! :)

    Yup, sure is.

  • ebergerly Posts: 3,255
    edited August 2018

    By the way, in my thread explaining my recent efforts to write some very simple ray tracing software, I showed some of the same things that NVIDIA is saying. For example, when you do ray tracing you do the same, fairly simple operations millions and millions of times to render a 1920x1080 image with over 2 million pixels. To determine whether a ray has hit a sphere you need to solve a simple quadratic equation, which means you need to evaluate the following discriminant:

    • B^2 - 4 * A * C

    So if you can set up your hardware with the necessary registers and memory locations and processors to solve that EXACT equation as efficiently and quickly as possible, simultaneously for millions of pixels, you've got a very fast ray tracer. So that's what they do: they extract the key equations used in ray tracing and denoising, etc., and design hardware and software to solve those exact things. That's what NVIDIA is doing with the latest generations of hardware and software. Their RT cores presumably are excellent at solving these ray tracing equations, their Tensor cores are presumably excellent at denoising, and the SMs are good at working with PhysX software to solve physics problems, and so on. Do programmers always use those exact resources the same way for the same task? Probably not. Who knows? Maybe they use RT cores for denoising sometimes.   
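
    To make that concrete, here's a toy CUDA kernel that does nothing but that exact discriminant test, one thread per ray. This is purely my own illustration of "one simple operation, evaluated massively in parallel" - it's nothing like what the RT cores or a real renderer actually run.

    ```
    // Toy sketch: one thread per ray, each evaluating the ray/sphere
    // discriminant B^2 - 4*A*C. Illustration only, not production code.
    #include <cuda_runtime.h>

    struct Ray { float3 o, d; };   // origin and direction

    __global__ void hitSphere(const Ray* rays, int n,
                              float3 center, float radius, int* hits)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float3 d = rays[i].d;
        float ocx = rays[i].o.x - center.x;
        float ocy = rays[i].o.y - center.y;
        float ocz = rays[i].o.z - center.z;

        // Coefficients of the quadratic A*t^2 + B*t + C = 0 for a ray/sphere hit
        float A = d.x * d.x + d.y * d.y + d.z * d.z;
        float B = 2.0f * (ocx * d.x + ocy * d.y + ocz * d.z);
        float C = ocx * ocx + ocy * ocy + ocz * ocz - radius * radius;

        // A hit is simply "discriminant >= 0"
        hits[i] = (B * B - 4.0f * A * C) >= 0.0f ? 1 : 0;
    }

    // Launch example: hitSphere<<<(n + 255) / 256, 256>>>(d_rays, n, c, r, d_hits);
    ```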

    It's incredibly complex, and I'd caution folks not to think they really understand it. The more you know, the more you realize you don't know nuthin'. I sure don't. 

    Post edited by ebergerly on
  • kyoto kid Posts: 41,847
    ...if it weren't for my worsening arthritis, I'd go back to brushes, canvas, and oil paints again. $1,200 would get me a nice setup.
  • Rashad Carter Posts: 1,830
    My excitement over the likely rendering capabilities of these cards is steadily growing. I know what Rashad's getting for Christmas. I now have motivation to behave myself. I promise not to be the least bit disagreeable between now and Dec 25. After that all bets are off, of course!
  • Gator Posts: 1,319
    Sisyphus said:

    I will definitely wait until there are Iray benchmarks and stable driver implementations. I have only found some info about RTX raytracing and Tensor core integration in professional 3D software. Maybe the information is useful for someone here. 

    https://evermotion.org/articles/show/11111/nvidia-geforce-rtx-performance-in-arch-viz-applications

    A really interesting demo off that page: Control.

    https://www.youtube.com/watch?v=PCD_7e0cMv0&feature=youtu.be

    I don't know the breakdown of gamers across the genres, but it appears that the majority following the tech are the twitch-FPS types.  Watching my son play Battlefront 2, RTX technology really falls into the "who cares" category.  It's just global illumination, plus reflections.  It offers all those little details, but you just don't have time to sit around and stare at the environment.  And arguably that may even be better for fast action stuff - it's easy to light without shadows being everywhere.  Nvidia really used the wrong game to showcase the tech - zooming in to show reflections off of a character's eyes in Battlefield V.  Like you have time to look at that sort of thing in a fast-paced shooter.

    But for slower paced games, exploring places or with creepy things... RTX will be VERY cool.  I've never played it, but a suspenseful game like Alien Isolation would be so much better.  A deviation from Iray, but the thread title is OT and about gaming cards anyway. :)

  • bluejaunte Posts: 1,990

    Some fairly in-depth info:

    https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray

    I like one bit especially though:

    NVLink

    NVLink is a technology that allows two or more GPUs to be connected with a bridge and share data extremely fast. This means that each GPU can access the memory of the other GPU and programs like V-Ray GPU can take advantage of that to render scenes that are too large to fit on a single card. Traditionally when rendering on multiple graphics cards, V-Ray GPU duplicates the data in the memory of each GPU, but with NVLink the VRAM can be pooled. For example, if we have two GPUs with 11GB of VRAM each and connected with NVLink, V-Ray GPU can use that to render scenes that take up to 22 GB. This is completely transparent to the user — V-Ray GPU automatically detects and uses NVLink when available. So, while in the past doubling your cards only allowed you to double your speed, now with NVLink you can also double your VRAM.


    NVLink was introduced in 2016 and V-Ray GPU was the first renderer to support it officially in V-Ray 3.6 and newer versions. Until now, the technology has only been available on professional Quadro and Tesla cards, but with the release of the RTX series, NVLink is also available on gaming GPUs - specifically on the GeForce RTX 2080 and GeForce RTX 2080 Ti. Connecting two cards with NVLink requires a special NVLink connector, which is sold separately.

    Doesn't that sound very much like the gaming cards pool just fine?
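
    For what it's worth, the "each GPU can access the memory of the other GPU" part of that quote maps onto CUDA's peer-to-peer access, which is the mechanism NVLink accelerates. Here's a rough sketch of that mechanism, assuming two peer-capable cards as devices 0 and 1 - it's not V-Ray's actual code, just an illustration of what memory "pooling" builds on:

    ```
    // Sketch of CUDA peer-to-peer access, the mechanism NVLink speeds up.
    // Assumes devices 0 and 1 can access each other (e.g. bridged cards).
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void readRemote(const float* remote, float* local, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) local[i] = remote[i];   // GPU 0 reads memory that lives on GPU 1
    }

    int main()
    {
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);
        if (!canAccess) { std::printf("Devices 0 and 1 are not peer-capable.\n"); return 1; }

        const int n = 1 << 20;
        float *bufOnGpu1 = nullptr, *bufOnGpu0 = nullptr;

        cudaSetDevice(1);                     // allocate a buffer on GPU 1
        cudaMalloc(&bufOnGpu1, n * sizeof(float));

        cudaSetDevice(0);                     // GPU 0 does the work
        cudaDeviceEnablePeerAccess(1, 0);     // let GPU 0 see GPU 1's memory
        cudaMalloc(&bufOnGpu0, n * sizeof(float));

        // The kernel on GPU 0 dereferences a pointer that lives on GPU 1.
        // Over NVLink this is much faster than over PCIe, which is what the
        // "pooled VRAM" claims for renderers that support it rely on.
        readRemote<<<(n + 255) / 256, 256>>>(bufOnGpu1, bufOnGpu0, n);
        cudaDeviceSynchronize();

        cudaFree(bufOnGpu0);
        cudaSetDevice(1);
        cudaFree(bufOnGpu1);
        return 0;
    }
    ```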

  • ebergerly Posts: 3,255
    If the implementation in the GeForce cards fully supports all things NVLink then yeah. But I heard it will be SLI over the NVLink channel. It's complicated.
  • bluejaunte Posts: 1,990

    Other important points to take away from the article:

    • Renderers need to first be explicitly programmed to support RTX functionality
    • It won't always be 8x (or any number) faster; it depends on the scene and the mix between raycasting (RTX helps) and shading (RTX does not help)
    •  
  • bluejaunte Posts: 1,990
    edited August 2018
    ebergerly said:
    If the implementation in the GeForce cards fully supports all things NVLink then yeah. But I heard it will be SLI over the NVLink channel. It's complicated.

    SLI is dead. It was a flawed way to combine cards that resulted in micro stutters in many games, or sometimes wasn't supported at all. This statement makes no sense to me; NVLink will be NVLink no matter what. Worst case it won't pool memory, but it has nothing to do with SLI. That's the whole point.

    Post edited by bluejaunte on
  • ebergerly Posts: 3,255
    Here's what the NVIDIA website says: "The GeForce RTX NVLink™ bridge connects two NVLink SLI-ready graphics cards with 50X the transfer bandwidth of previous technologies. This means you can count on super-smooth gameplay at maximum resolutions with ultimate visual fidelity in GeForce RTX 2080 Ti and 2080 graphics cards." And nothing about memory stacking.
  • bluejaunte Posts: 1,990
    ebergerly said:
    Here's what the NVIDIA website says: "The GeForce RTX NVLink™ bridge connects two NVLink SLI-ready graphics cards with 50X the transfer bandwidth of previous technologies. This means you can count on super-smooth gameplay at maximum resolutions with ultimate visual fidelity in GeForce RTX 2080 Ti and 2080 graphics cards." And nothing about memory stacking.

    Hmm OK, so the SLI term is here to stay? Guess I had the wrong idea. SLI has such a negative connotation, I would have thought they would replace it. But maybe the underlying tech is still SLI, just much faster with NVLink.

  • Gator Posts: 1,319
    ebergerly said:
    Here's what the NVIDIA website says: "The GeForce RTX NVLink™ bridge connects two NVLink SLI-ready graphics cards with 50X the transfer bandwidth of previous technologies. This means you can count on super-smooth gameplay at maximum resolutions with ultimate visual fidelity in GeForce RTX 2080 Ti and 2080 graphics cards." And nothing about memory stacking.

    Hmm OK, so the SLI term is here to stay? Guess I had the wrong idea. SLI has such a negative connotation, I would have thought they would replace it. But maybe the underlying tech is still SLI, just much faster with NVLink.

    SLI has been good here except for Maxwell... I figure the link was too slow for the cards & output.  I had micro stutters at 4K and read about other users with micro stutter problems.  Once I upgraded to Pascal-based cards, SLI was super smooth again.  

    NVLink is a new tech that greatly speeds up transfers between the GPUs' memories.

  • ebergerly Posts: 3,255
    edited August 2018

    Other important points to take away from the article:

    • Renderers need to first be explicitly programmed to support RTX functionality
    • It won't always be 8x (or any number) faster; it depends on the scene and the mix between raycasting (RTX helps) and shading (RTX does not help)
    •  

    Yes, as I've posted here approximately 126 times, the reason why GPUs are so fast at rendering is that the software and hardware are specifically designed to work together efficiently to solve specific problems. CPUs are a jack-of-all-trades which can render and do word processing and play videos, but GPUs are specifically designed to do a handful of tasks really well. Especially with RTX, since its components are far more focused on rendering-type stuff than previous generations. So instead of just streaming multiprocessors they now have RT cores and Tensor cores and so on. And the new software APIs are designed to take advantage of that hardware architecture. And that's why, if your specific task doesn't exactly match the hardware and software design, it won't be surprising if performance drops a lot.
    Post edited by ebergerly on
  • bluejaunte Posts: 1,990
    ebergerly said:

    Other important points to take away from the article:

    • Renderers need to first be explicitly programmed to support RTX functionality
    • It won't always be 8x (or any number) faster; it depends on the scene and the mix between raycasting (RTX helps) and shading (RTX does not help)
    •  

     

    Yes, as I've posted here approximately 126 times, the reason why GPUs are so fast at rendering is that the software and hardware are specifically designed to work together efficiently to solve specific problems. CPUs are a jack-of-all-trades which can render and do word processing and play videos, but GPUs are specifically designed to do a handful of tasks really well. Especially with RTX, since its components are far more focused on rendering-type stuff than previous generations. So instead of just streaming multiprocessors they now have RT cores and Tensor cores and so on. And the new software APIs are designed to take advantage of that hardware architecture. And that's why, if your specific task doesn't exactly match the hardware and software design, it won't be surprising if performance drops a lot.

    Any render we do will have a good portion of ray-casting though. I think it's safe to say that in every single render there will be performance enhancements beyond just raw CUDA, if/when Iray gets such support. Note also:

     It should also be noted that the Turing hardware itself is significantly faster than the previous Pascal generation, even when running V-Ray GPU without any modifications.

    So, are you stoked yet about it all, and about perhaps a renewed hope that NVLink might indeed pool for game cards too? Or should we still be "cautious"? :D
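
    To put a rough number on the raycasting-vs-shading point (the 50/50 split here is purely an illustration, not a figure from the article): if half of a render's time goes to ray casting and RTX makes only that half 8x faster, the whole render comes out 1 / (0.5 + 0.5/8) ≈ 1.8x faster, not 8x. The closer a scene is to pure ray casting - heavy geometry, simple shaders - the closer it can get to the headline number.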

  • Mendoman Posts: 404

    My take on those 6-8x faster render times is that they are probably achieved with AI denoising or something similar. If you render cars or something else with easy metallic surfaces, then I'm sure that works like a charm. For characters with hair, SSS and all those little skin details we are so obsessed about, I doubt it works that well. Probably those who have a Pascal card and have tried the new DS beta can tell better. My current Maxwell card doesn't support Iray AI denoising (I think), so I'm only guessing here.

     

    Still, that new 2080 Ti looks like an upgrade to me. The price is high, but over 4,000 CUDA cores, higher clock speeds and new, faster memory are nothing to sneeze at. Despite my scepticism, I do believe that at the least I'll get a nice increase in rendering performance. Who knows, maybe those RT and Tensor cores really are some magic ingredient, and we really will see some huge improvements here, but I'm still a little sceptical about that. Haha, it's so good to be a pessimist, because then you only get nice surprises.

     

     Like I said earlier, if I had a Pascal card I probably wouldn't be upgrading, and definitely not preordering, but for owners of older models I think it'll be a considerable performance boost no matter what.....when the Turing architecture is supported. Meanwhile, I thought I could try Octane. I've never tried it before, but apparently they have DS and Blender plugins, so it's worth a try at some point.

  • Twilight76 Posts: 318

    Sigh, I think the 2070 is in the price range for me.
    My old GTX 970 needs a change :)

    But I'll wait for the first benchmarks to decide.

     

  • kyoto kid Posts: 41,847
    edited August 2018

    Some fairly in-depth info:

    https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray

    I like one bit especially though:

    NVLink

    NVLink is a technology that allows two or more GPUs to be connected with a bridge and share data extremely fast. This means that each GPU can access the memory of the other GPU and programs like V-Ray GPU can take advantage of that to render scenes that are too large to fit on a single card. Traditionally when rendering on multiple graphics cards, V-Ray GPU duplicates the data in the memory of each GPU, but with NVLink the VRAM can be pooled. For example, if we have two GPUs with 11GB of VRAM each and connected with NVLink, V-Ray GPU can use that to render scenes that take up to 22 GB. This is completely transparent to the user — V-Ray GPU automatically detects and uses NVLink when available. So, while in the past doubling your cards only allowed you to double your speed, now with NVLink you can also double your VRAM.


    NVLink was introduced in 2016 and V-Ray GPU was the first renderer to support it officially in V-Ray 3.6 and newer versions. Until now, the technology has only been available on professional Quadro and Tesla cards, but with the release of the RTX series, NVLink is also available on gaming GPUs - specifically on the GeForce RTX 2080 and GeForce RTX 2080 Ti. Connecting two cards with NVLink requires a special NVLink connector, which is sold separately.

    Doesn't that sound very much like the gaming cards pool just fine?

    ...still have my doubts they will actually offer the full NVLink on GeForce cards. Full memory stacking is a huge selling point for their professional Quadro series.  GeForce is aimed mainly towards the gaming community, and while GPU core pooling for improved frame rates is important, memory stacking is not.  The former can be achieved through SLI or the more "limited" form of NVLink they are offering.  Go to the site: the GeForce NVLink bridges have the same configurations as the old SLI ones - 2, 3, 4 slot.  Also, the price for a pair of bridges for the Quadro line is currently $599 while the bridges for the GeForce ones are $79.

    It also sounds like the software needs to support the process of memory stacking as well, since only one render engine apparently does so.

    Post edited by kyoto kid on
  • kyoto kid Posts: 41,847
    ebergerly said:
    Personally, my recommendation is that everyone go and pre-order two RTX cards. Today. Right now. Take your pick....two 2070's, two 2080's, or two 2080 Ti's. And then post detailed benchmarks for all of it so I can decide whether it's worth it. :)

    ...only doable if someone were magnanimous enough to send me a certified bank draft to cover the price of two 2080 Ti's + the NVLink bridges; I'd be happy to give it a go.

  • linvanchene Posts: 1,386
    edited August 2018

    edited post to be more precise

    - - -

    @ proof for possible RTX speed increase

    The 8x speed increase was tested with a Quadro RTX card and showcased in videos on the Otoy Twitter channel with a "new (and unfinished) Vulkan kernel".

     

    https://twitter.com/otoy/status/1030269165318287360?s=21

    https://twitter.com/otoy/status/1030626195669311488?s=21

     

    And yes, I find it somewhat strange that all the new information available about RTX performance is from other render engines and not from Nvidia Iray.

    - - -

    @ customer expectations for new card releases

    To be clear, I do not expect people who use DAZ Studio as a hobby to invest large amounts of money in new graphics cards.

    But keep in mind that some customers who frequent the DAZ3D forum also own other software and use other render engines that will make use of the RTX technology of the cards.

    It will not be possible to install the Turing cards for use with other software and then "quickly" swap them out again for Pascal cards for a render with DAZ Studio Iray.

    Once the new cards are installed they stay in there. Either DAZ Studio and Iray will be ready this time or customers are left with the option not to use DAZ Studio and Iray.

    DAZ Studio and Iray may not yet support RTX features at the launch of the new cards on September 20th. Still, it should be a reasonable request that DAZ Studio Iray is updated in time so that the Turing cards can at least be used for normal CUDA-based calculations, like the previous Pascal cards.
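
    As a purely illustrative aside (my own sketch, not anything from NVIDIA or DAZ): the bare minimum is that the CUDA runtime on the machine recognizes the new architecture at all - Turing reports compute capability 7.5 - and Iray ships its own pre-built kernels on top of that, which is why it still needs its own update. A trivial check of what the runtime sees looks like this:

    ```
    // Minimal device query: prints each GPU's name, compute capability and VRAM.
    // This only shows what the CUDA runtime recognizes; it says nothing about
    // whether a given Iray build has kernels for that architecture.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            std::printf("No CUDA-capable device found.\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            std::printf("GPU %d: %s, compute capability %d.%d, %.1f GB VRAM\n",
                        i, prop.name, prop.major, prop.minor,
                        prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }
    ```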

    - - -

    Post edited by linvanchene on
  • The 8x speed increase was tested and showcased in screenshots on the Otoy Twitter channel with a "new (and unfinished) Vulkan kernel".

     

    https://twitter.com/otoy/status/1030269165318287360?s=21

    https://twitter.com/otoy/status/1030626195669311488?s=21

     

    And yes, I find it somewhat strange that all the new information available about RTX performance is from other render engines and not from Nvidia Iray.

    - - -

    They were silent last time, when the Pascal cards were released. I think it's their way, or they aren't ready yet.

  • By all means discuss the information we have, and try to draw inferences from it, but please avoid discussing others' pre-ordering or not.

  • nicstt Posts: 11,715
    ebergerly said:

    It's interesting...I just took a look at the Sickleyield benchmarks for the 1060 to the 1070 to the 1080ti, and there was almost exactly a 33% improvement in render times in going from the 1060 to the 1070, and the 1070 to the 1080ti. I didn't see a 1080 benchmark, but I assume it was less than 33%. 

    So if we expect the new 2080ti to give the same performance increase over a 1080ti it will only be 33%. I'm assuming it will be significantly more. But even if you double that and get a 66% increase in speed that barely compares to the performance of two 1080ti's at about the same price. 

    Just sayin'....the 2080ti has a lot to prove.

     

    Wow, you really do seem to completely dismiss any possible contributions of the RTX tech when you make predictions about the new cards?

    Just so that I am clear...are you stating that you expect nothing more than a 66% increase in the best case scenario from a 2080 Ti vs a 1080 Ti? I assume this is in regards to CUDA performance specifically? Restated, you are saying that even if RTX offers no additional benefit, we can at least expect a CUDA speed improvement in the range of 33-66%? I think I can agree with predictions about CUDA-related performance, sure. But RTX tech doesn't seem to require updated software to see some degree of benefit, as was demonstrated by the 1995 game with raytracing in one of the videos someone kindly posted somewhere in this now frighteningly specific thread.

    I ask because you say the 2080 Ti has a lot to prove....what if it turns out the 2080 Ti provides a 4x speed improvement in Iray as it exists today - will that be proof enough? My question is quite literal: how much of an improvement in speed alone (since we already know the limits of the VRAM offered with the 2080 Ti) would make the double price of a 1080 Ti worthwhile?

    For me, a mere doubling in speed is a good argument.

    Anything above a 3x improvement is a no brainer for me, I'd go for it immediately.

    So seeing claims of 8x improvements makes me wish I could pre-order the thing.

    I know I'm getting screwed over no matter what kind of deal I broker with these people. I'm gambling against the house...the house will always win in the end. So I might as well focus on having some fun.

    If I DO buy one of these 2080 Ti cards, I will actually ship it to you for a week just so you can test the performance of your new custom-written raytracer (provided you wrote it to use the GPU rather than the CPU) to see if you observe any noteworthy improvements even without specific optimizations of the code.

    This really is the part I'm most curious about. Do the new RTX cards somehow "know" when a raytraced algorithm is presented to them and do they automatically port those tasks to the more efficient RT cores, or do applications need special updated instructions to point raytracing tasks to the new RT cores? I was under the impression that the cards automatically send those tasks to the RT cores, but if not then I'm indeed terrified that apps will require huge updates to make any use of this stuff.

    In such a scenario I can see why you are so cautious. Apologies if my questions seem naive.

    For example, if this turns out to be a real benefit with little to no code updates to applications, I plan to beg Daz3d to SERIOUSLY consider rewriting Bryce (which is a brute force raytracer of sorts) to utilize the GPU. In the past we've assumed GPU rendering would have been faster than CPU rendering for Bryce, but if RTX has specific hardware for raytracing....I simply don't see how or why we would not adopt it. Thinking out loud. 

    Anyhow, what do you think?

    I don't think it's dismissed.

    The issue is that there were no indications; the presentation was completely missing any comparisons with previous-generation cards. None at all: why was that?

    So whilst I am prepared to admit - indeed, I'm expecting - that the new features will aid rendering, that aid (that extra performance) is added to what, exactly?

    Will the new cards be better? Yes.

    How much? Only Nvidia know for sure; the rest of us can (at best) speculate - and largely it's guesswork.

  • nicstt Posts: 11,715

    I was excited about the "8 times faster" claim.

    I was dubious - very dubious - when I saw the 'up to'.

    I can certainly see 8 times faster renders when we're talking about objects such as cars and architectural scenes; but what many of the folks do in Studio involves people, and denoising (as an example) does comparatively poorly there - it basically does not look as good.

    Now if the speed increases are due to other functionality then it should hopefully be much better.

    ... All this speculation is fun, but ultimately pointless.
