General GPU/testing discussion from benchmark thread


Comments

  • ebergerly Posts: 3,255

    So it sounds like there's fairly general consensus that when someone posts an Iray log for a render that says "Total Rendering Time: 3 minutes 33.27 seconds" it's "common sense" that the user wasn't running anything that might affect that, and you can fully rely upon and use the 3:33.27 time to compare with another render on another system? And it's actually accurate to 270 milliseconds for comparison purposes? 

    If that's what you believe, or anything close to that, then I think you are grossly mistaken. And clearly there's nothing I can say to change that belief, so I'll stop trying. 

  • bluejaunte Posts: 1,859
    edited July 2019

    You can never fully rely on any individual result. Look at the big picture and you will surely be able to draw conclusions. That's all that counts. Far more valuable than elaborate benchmark tables to me is when a few people say "I rendered this on a 1080 Ti and on a 2080 Ti and the render time was roughly halved". Just a few separate claims like this should be enough to reasonably assume that a 2080 Ti indeed renders 2x faster in Iray. That is the type of stuff I want to know. Not whether someone benched the same scene at 10 seconds more or less despite having the same hardware. As long as there aren't huge unexplainable differences, nobody cares.

    Post edited by bluejaunte on
  • "Real" benchmarks are conducted on machines with very carefully audited configurations and procedures. The benchmarks in this thread are more informal, and so need to be taken with a pinch or two of salt (especially as DS and Iray versions change), but that doesn't stop them from being of some use as rough guidance.

  • Robert Freise Posts: 4,247

    Something else to consider: I had two identical machines and got different times from each one, as much as 1 minute apart. When I built them I split a 128 meg matched memory kit between them, and all components were bought at the same time and from one place.

  • JamesJAB Posts: 1,760
    edited July 2019

    Plus, it's also reasonable to assume that people are not all using reference-design GPUs with the Founders Edition cooler at stock clock speeds, or identical computer towers...

    This in itself will lead to variations in GPU temperature, causing the clock speed to vary from card to card.
    Plus there can be pretty big base clock and boost clock variation between cards of the same GPU model, even from the same manufacturer.

    Example:
    A GTX 1080 Ti with two 80 mm fans in a cramped tower with no extra cooling fans will not be able to maintain anywhere near its full boost clock...
    A GTX 1080 Ti Founders Edition with the blower-style cooler, in a well-designed tower that pulls air in the front and blows it out the back, with no card in the slot directly under the GPU, will be able to maintain full boost all day and maybe even handle overclocking past the rated boost speed for extended periods.

     

    In my Dell Workstation tower my Quadro P4000 and GTX 1080 ti both render at full boost speed all day and the computer fans never even get as loud as the air conditioner in my house.
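
    If you want to verify what your own card is actually doing during a render, you can poll the clock and temperature while Iray runs. The snippet below is just an illustrative sketch using the third-party nvidia-ml-py (pynvml) Python bindings; the 5-second interval and device index 0 are arbitrary choices, not anything specific to Daz Studio or Iray:

    import time
    from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
                        nvmlDeviceGetClockInfo, nvmlDeviceGetTemperature,
                        NVML_CLOCK_GRAPHICS, NVML_TEMPERATURE_GPU)

    nvmlInit()
    handle = nvmlDeviceGetHandleByIndex(0)  # first GPU; change the index for another card
    try:
        while True:
            clock = nvmlDeviceGetClockInfo(handle, NVML_CLOCK_GRAPHICS)    # current core clock, MHz
            temp = nvmlDeviceGetTemperature(handle, NVML_TEMPERATURE_GPU)  # core temperature, C
            print(f"{time.strftime('%H:%M:%S')}  {clock} MHz  {temp} C")
            time.sleep(5)  # poll every 5 seconds while the render runs
    except KeyboardInterrupt:
        nvmlShutdown()

    If the reported clock sags over the course of a long render, the cooler (or case airflow) is the likely culprit.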

    Post edited by JamesJAB on
  • outrider42 Posts: 3,679
    edited July 2019

    I asked for people to post clock speeds before, because it can make a difference. However, it is not a huge difference in the scope of Iray unless there is a huge difference in clock speed, like more than 100 MHz, a lot more. This mostly comes into play when looking at laptop models of GPUs. Pascal laptops had the very same chip as the desktop models; however, they were vastly downclocked.

    Clock speed is fine because that generally covers the card's cooling as well. Cards that do not clock as well are usually given cheaper coolers and sold cheaper than the higher-clocked variants. So the market takes care of itself here. The exception would be Founders Edition cards, which are typically priced higher, but their clocks still reflect the blower fan they have, as Pascal FE cards have lower clocks than others. Another exception would be if users do their own overclocking.

    It is very easy to test the impact of clock speed. Simply turn your clock speed down and render. You can tune your cards with software like MSI Afterburner and EVGA PrecisionX. Both of these will work with any brand.
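
    If you'd rather script it than click through Afterburner, newer NVIDIA drivers also expose clock locking through nvidia-smi (this needs admin rights, and on some older cards/drivers the option isn't available, so treat this as a sketch rather than a guaranteed recipe). The clock values here are only examples:

    import subprocess

    # Show the current and maximum graphics clocks first.
    subprocess.run(["nvidia-smi", "--query-gpu=name,clocks.gr,clocks.max.gr",
                    "--format=csv"], check=True)

    # Lock GPU 0's core clock to a lower value for an A/B render test (example value only),
    # run the benchmark render, then reset the clocks afterwards.
    # -lgc / --lock-gpu-clocks and -rgc / --reset-gpu-clocks require admin and a recent driver.
    subprocess.run(["nvidia-smi", "-i", "0", "-lgc", "1200,1200"], check=True)
    # ... render the benchmark scene and note the time ...
    subprocess.run(["nvidia-smi", "-i", "0", "-rgc"], check=True)

    Render the same scene once at the lowered clock and once at stock, and the difference (or lack of one) tells you how much clock speed matters for your setup.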

    Other components have little impact on render speed. I can point you to benchmarks made with 1070s installed in a Core 2 Quad from over a decade ago, which ran the SY Bench in practically identical times to a 1070 installed in much newer machines. It just doesn't matter. My own machine has an i5-4690K from 2014, which is only 4 cores and 4 threads. It is showing its age in other software, but for Iray my 1080 Tis match the speeds of users with brand new CPUs. In fact, I have slightly faster times (by a couple of seconds) than some people with brand new machines. So again, it just proves that the rest of the machine doesn't matter. Iray is not a video game, and having an "unbalanced" PC is not an issue, as long as that balance favors the GPU. It has been proven time and time again.

    The Quadro P4000 is in a very weird space. It seems to be right in between the 1060 and 1070. With only 8GB of VRAM, it offers the same amount that the 1070 does, but the 1070 is much faster AND cheaper. The P4000 only has a couple of advantages. You can buy a single-slot version, so you can cram a bunch of them onto a board, and being a Quadro means they can enable TCC (Tesla Compute Cluster). TCC mode allows you to get around the Windows 10 VRAM issue: all video output is disabled, so the card becomes a pure compute device. This allows you to access the full 8GB capacity. It is also possible that TCC mode might enhance the rendering performance of the card. I saw a Twitter post that showed a Titan V running V-Ray with TCC enabled vs disabled, and there was a large difference in speed in favor of TCC enabled. But I have never seen anybody test TCC with Iray. In order to do so, you need another GPU for video output, because again, TCC disables video output.

    I would be very curious to know if TCC mode offers any benefit to Iray rendering, or if it can even work with Iray.
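
    For anyone with a spare Quadro who wants to try it: on Windows the driver model is normally switched with nvidia-smi (admin rights required, GeForce cards are not eligible, and a reboot is needed before the change takes effect). A rough sketch, just wrapping the command line in Python; the device index 1 is an assumption for a secondary card that isn't driving a monitor:

    import subprocess

    # Show the current and pending driver model (WDDM vs TCC) for each GPU.
    subprocess.run(["nvidia-smi",
                    "--query-gpu=index,name,driver_model.current,driver_model.pending",
                    "--format=csv"], check=True)

    # Switch GPU 1 (assumed to be the Quadro, not the display card) to TCC mode.
    # -dm / --driver-model only works on TCC-capable boards; reboot to apply.
    subprocess.run(["nvidia-smi", "-i", "1", "-dm", "TCC"], check=True)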

    However, for the price the P4000 goes for, you could get a 1080 Ti, which would be much faster than two P4000s put together, and have more VRAM than the P4000 even with TCC enabled. There are certain programs that can benefit greatly from using Quadros, but Daz Studio and Iray are not really among those, unless TCC mode actually makes a big difference.

    It should be obvious to run benchmarks without running other software. It's a benchmark.

    Post edited by outrider42 on
  • RayDAnt Posts: 1,120
    edited July 2019

    TCC mode allows you to get around the Windows 10 VRAM issue: all video output is disabled, so the card becomes a pure compute device. This allows you to access the full 8GB capacity. It is also possible that TCC mode might enhance the rendering performance of the card. I saw a Twitter post that showed a Titan V running V-Ray with TCC enabled vs disabled, and there was a large difference in speed in favor of TCC enabled. But I have never seen anybody test TCC with Iray. In order to do so, you need another GPU for video output, because again, TCC disables video output.

    I would be very curious to know if TCC mode offers any benefit to Iray rendering, or if it can even work with Iray.

    Will be investigating this very subject with the Titan RTX in the next couple weeks. God willing and the creek don't rise, that is (literally - finally got the last bits needed to properly watercool this thing. The stock Titan RTX cooler, while undeniably pretty, I'm sorry to say is largely a joke performance-wise.)

     

    ETA: Also, for what it's worth, TCC compatibility with Iray has been tested here as working (if buggily) at least once already. See this post from JD_Mortal.

    Post edited by RayDAnt on
  • Takeo.Kensei Posts: 1,303

    Some info about the denoiser from the OptiX docs:

    You can also create a custom model by training the denoiser with your own set of images and
    use the resulting training data in OptiX, but this process is not part of OptiX itself. To learn
    how to generate your own training data based on your renderer’s images you can attend the
    course Rendered Image Denoising using Autoencoders, which is part of the NVIDIA Deep
    Learning Institute.

    So there is no tool directly available or usable to make new training data.

    Denoiser limitations
    In Optix 5.0 and later, the denoiser has the following limitations:
    • The denoiser runs on the first GPU found by the system. A different GPU can be selected by calling the function cudaSetDevice().
    • There is no CPU fallback for denoising.
    • Objects behind transparent surfaces (for example, simulations of glass) will not denoise
    correctly.
    • Denoising produces flickering in a series of images rendered as an animation.
    • The denoising training set is not tuned for high-frequency imagery, such as hair or other fine detail, and may produce apparently blurred results. => which means loss of detail is unavoidable without another training set

  • RayDAnt Posts: 1,120

    Some info about the denoiser from the OptiX docs:

    You can also create a custom model by training the denoiser with your own set of images and
    use the resulting training data in OptiX, but this process is not part of OptiX itself. To learn
    how to generate your own training data based on your renderer’s images you can attend the
    course Rendered Image Denoising using Autoencoders, which is part of the NVIDIA Deep
    Learning Institute.

    So there is no tool directly available or usable to make new training data.

    Denoiser limitations
    In Optix 5.0 and later, the denoiser has the following limitations:
    • The denoiser runs on the first GPU found by the system. A different GPU can be selected by calling the function cudaSetDevice().
    • There is no CPU fallback for denoising.
    • Objects behind transparent surfaces (for example, simulations of glass) will not denoise
    correctly.
    • Denoising produces flickering in a series of images rendered as an animation.
    • The denoising training set is not tuned for high-frequency imagery, such as hair or other fine detail, and may produce apparently blurred results. => which means loss of detail is unavoidable without another training set

    Here's the course referenced in that, in case anyone wants to check it out (I'm a bit busy with other things at the moment.)

  • RayDAnt Posts: 1,120
    edited July 2019

    Just launched a brand new benchmarking thread, Daz Studio Iray - Rendering Hardware Benchmarking, to coincide with today's fully RTX-compatible DS beta release. I highly recommend checking it out. It's obviously still a work in progress. But the sooner I can get data/feedback from people about it, the better/more useful I can make it!

    Post edited by RayDAnt on
  • bluejaunte Posts: 1,859

    Niiiice. There's no RTX on/off feature in the 4.12 beta right? Optix Prime needs to be on to use RT cores?

  • Takeo.Kensei Posts: 1,303

    Don't know, but I've got this in my logs:

    2019-07-22 21:12:28.510 WARNING: ..\..\..\..\..\src\pluginsource\DzIrayRender\dzneuraymgr.cpp(302): Iray WARNING - module:category(IRAY:RENDER):   1.0   IRAY   rend warn : The 'iray_optix_prime' scene option is no longer supported.

  • bluejaunte Posts: 1,859

    Makes sense I guess, they removed Optix Prime and rewrote it in Optix proper. Or... something?

  • RayDAnt Posts: 1,120

    Niiiice. There's no RTX on/off feature in the 4.12 beta right? Optix Prime needs to be on to use RT cores?

    Based on my current testing with the Titan RTX, there's no control for it (it "just works"™), and selecting/deselecting OptiX Prime Acceleration under Advanced Render Options makes no difference in render times whatsoever. Rendering is also significantly faster in all scenes tested so far.

  • Takeo.Kensei Posts: 1,303

    Makes sense I guess, they removed Optix Prime and rewrote it in Optix proper. Or... something?

    For me that doesn't make sense at all.

    OptiX contains OptiX Prime, from what I've seen.

    Kicking OptiX Prime out means that for non-RTX owners there is no acceleration?

  • bluejaunte Posts: 1,859

    Makes sense I guess, they removed Optix Prime and rewrote it in Optix proper. Or... something?

    For me that doesn't make sense at all.

    OptiX contains OptiX Prime, from what I've seen.

    Kicking OptiX Prime out means that for non-RTX owners there is no acceleration?

    OptiX runs fine on non-RTX too. It's just that OptiX Prime was a coding shortcut but couldn't support RT cores, because reasons. Huge grain of salt though, I really don't know what the hell I'm talking about. This is what I remember someone saying a long time ago about why Iray wouldn't support RT cores for a while.

  • bluejaunte Posts: 1,859
    RayDAnt said:

    Niiiice. There's no RTX on/off feature in the 4.12 beta right? Optix Prime needs to be on to use RT cores?

    Based on my current testing with the Titan RTX, there's no control for it (it "just works"™), and selecting/deselecting OptiX Prime Acceleration under Advanced Render Options makes no difference in render times whatsoever. Rendering is also significantly faster in all scenes tested so far.

    I can't really test anything until I fix my CMOS battery or get a new board. Almost comical how a simple thing such as system time can screw up stuff.

  • Takeo.Kensei Posts: 1,303

     

    Makes sense I guess, they removed Optix Prime and rewrote it in Optix proper. Or... something?

    For me that doesn't make sense at all.

    OptiX contains OptiX Prime, from what I've seen.

    Kicking OptiX Prime out means that for non-RTX owners there is no acceleration?

    OptiX runs fine on non-RTX too. It's just that OptiX Prime was a coding shortcut but couldn't support RT cores, because reasons. Huge grain of salt though, I really don't know what the hell I'm talking about. This is what I remember someone saying a long time ago about why Iray wouldn't support RT cores for a while.


    OptiX runs, but as I said, you don't have Prime acceleration for non-RTX cards. Now I'm curious and I'd like to see some non-RTX benches on this version.

    RayDAnt said:

     

    Niiiice. There's no RTX on/off feature in the 4.12 beta right? Optix Prime needs to be on to use RT cores?

    Based on my current testing with the Titan RTX, there's no control for it (it "just works"™), and selecting/deselecting OptiX Prime Acceleration under Advanced Render Options makes no difference in render times whatsoever. Rendering is also significantly faster in all scenes tested so far.

    I can't really test anything until I fix my CMOS battery or get a new board. Almost comical how a simple thing such as system time can screw up stuff.

    A CMOS battery shouldn't be a big problem?

    Boot, press F1 to accept whatever is asked, change the date and system time, and you should be able to use your computer.

    Or does it cause a bigger problem?

  • bluejaunte Posts: 1,859

     

    Makes sense I guess, they removed Optix Prime and rewrote it in Optix proper. Or... something?

    For me that doesn't make sense at all.

    OptiX contains OptiX Prime, from what I've seen.

    Kicking OptiX Prime out means that for non-RTX owners there is no acceleration?

    OptiX runs fine on non-RTX too. It's just that OptiX Prime was a coding shortcut but couldn't support RT cores, because reasons. Huge grain of salt though, I really don't know what the hell I'm talking about. This is what I remember someone saying a long time ago about why Iray wouldn't support RT cores for a while.


    OptiX runs, but as I said, you don't have Prime acceleration for non-RTX cards. Now I'm curious and I'd like to see some non-RTX benches on this version.

    RayDAnt said:

     

    Niiiice. There's no RTX on/off feature in the 4.12 beta right? Optix Prime needs to be on to use RT cores?

    Based on my current testing with the Titan RTX, there's no control for it (it "just works"™), and selecting/deselecting OptiX Prime Acceleration under Advanced Render Options makes no difference in render times whatsoever. Rendering is also significantly faster in all scenes tested so far.

    I can't really test anything until I fix my CMOS battery or get a new board. Almost comical how a simple thing such as system time can screw up stuff.

    A CMOS battery shouldn't be a big problem?

    Boot, press F1 to accept whatever is asked, change the date and system time, and you should be able to use your computer.

    Or does it cause a bigger problem?

    Yeah the time is lagging behind. Have to set it manually over and over again in Windows laugh

  • Takeo.Kensei Posts: 1,303

     

    Makes sense I guess, they removed Optix Prime and rewrote it in Optix proper. Or... something?

    For me that doesn't make sense at all.

    OptiX contains OptiX Prime, from what I've seen.

    Kicking OptiX Prime out means that for non-RTX owners there is no acceleration?

    OptiX runs fine on non-RTX too. It's just that OptiX Prime was a coding shortcut but couldn't support RT cores, because reasons. Huge grain of salt though, I really don't know what the hell I'm talking about. This is what I remember someone saying a long time ago about why Iray wouldn't support RT cores for a while.


    OptiX runs, but as I said, you don't have Prime acceleration for non-RTX cards. Now I'm curious and I'd like to see some non-RTX benches on this version.

    RayDAnt said:

     

    Niiiice. There's no RTX on/off feature in the 4.12 beta right? Optix Prime needs to be on to use RT cores?

    Based on my current testing with the Titan RTX, there's no control for it (it "just works"™), and selecting/deselecting OptiX Prime Acceleration under Advanced Render Options makes no difference in render times whatsoever. Rendering is also significantly faster in all scenes tested so far.

    I can't really test anything until I fix my CMOS battery or get a new board. Almost comical how a simple thing such as system time can screw up stuff.

    A CMOS battery shouldn't be a big problem?

    Boot, press F1 to accept whatever is asked, change the date and system time, and you should be able to use your computer.

    Or does it cause a bigger problem?

    Yeah the time is lagging behind. Have to set it manually over and over again in Windows laugh

    Have you tried hibernating the computer instead of shutting it down? That may keep the date and time.

    And if you have internet, Windows should be able to keep them synchronized then.

    I know that by default Windows can't sync if the difference is greater than 2 days (not sure from memory)

  • bluejaunte Posts: 1,859

    Tried lots of things, but yeah let's not do silly bluejaunte tech support stuff here. Get benchin' you fools! laugh

  • Takeo.Kensei Posts: 1,303
    edited July 2019

    Tried lots of things, but yeah let's not do silly bluejaunte tech support stuff here. Get benchin' you fools! laugh

    Had a crash at my first test. Just testing stability for the moment

    Renders seem quicker

    And I've read a post in the "Who wants better animation too" thread where DAZ_Steve states that all Nvidia GPUs should render quicker.

    * Edit: Load times seem longer though.

    Post edited by Takeo.Kensei on
  • RayDAnt Posts: 1,120
    edited July 2019

     

    Makes sense I guess, they removed Optix Prime and rewrote it in Optix proper. Or... something?

    For me that doesn't make sense at all.

    OptiX contains OptiX Prime, from what I've seen.

    Kicking OptiX Prime out means that for non-RTX owners there is no acceleration?

    OptiX runs fine on non-RTX too. It's just that OptiX Prime was a coding shortcut but couldn't support RT cores, because reasons. Huge grain of salt though, I really don't know what the hell I'm talking about. This is what I remember someone saying a long time ago about why Iray wouldn't support RT cores for a while.


    OptiX runs, but as I said, you don't have Prime acceleration for non-RTX cards. Now I'm curious and I'd like to see some non-RTX benches on this version.

    On CPU only it now uses Embree instead of Optix Prime:

    2019-07-22 16:29:43.150 Iray INFO - module:category(IRAY:RENDER):   1.11  IRAY   rend info : Using Embree 2.8.0

    Which, according to the Iray changelog, is supposed to result in significantly faster CPU rendering than OptiX Prime. Haven't tested it out to verify for myself yet, though.

    ETA: Gonna try it on my GTX 1050 laptop (as soon as I get it updated) and see what that tells me it's using (whether OptiX Prime or something else.)

     

    UPDATE: This is what I got:

    2019-07-22 16:47:39.248 Iray INFO - module:category(IRAY:RENDER):   1.9   IRAY   rend info : Using OptiX Prime version 5.0.1
    2019-07-22 16:47:39.261 Iray VERBOSE - module:category(IRAY:RENDER):   1.9   IRAY   rend stat : Geometry memory consumption: 35.0846 MiB (device 0), 0 B (host)
    2019-07-22 16:47:39.261 Iray INFO - module:category(IRAY:RENDER):   1.9   IRAY   rend info : Initializing OptiX Prime for CUDA device 0

    So OptiX Prime is still being utilized on non-RTX cards.

     

     

    RayDAnt said:

     

     

    Niiiice. There's no RTX on/off feature in the 4.12 beta right? Optix Prime needs to be on to use RT cores?

    Based on my current testing with the Titan RTX, there's no control for it (it "just works"™), and selecting/deselecting OptiX Prime Acceleration under Advanced Render Options makes no difference in render times whatsoever. Rendering is also significantly faster in all scenes tested so far.

    I can't really test anything until I fix my CMOS battery or get a new board. Almost comical how a simple thing such as system time can screw up stuff.

    A CMOS battery shouldn't be a big problem?

    Boot, press F1 to accept whatever is asked, change the date and system time, and you should be able to use your computer.

    Or does it cause a bigger problem?

    Yeah... incorrect board time can be a huge PITA, especially if you don't realize that's what's at fault. All sorts of weird problems start happening.

    Post edited by RayDAnt on
  • RayDAnt Posts: 1,120

    Some issues to keep in mind while playing around with this beta if you're a Turing owner (copied from the official Iray changelog thread):

    Known Issues and Restrictions
    • This beta release only works with driver version R430 GA1.
    • Multi-GPU support for multiple Turing GPUs is partially broken: Iray Photoreal and Iray Interactive may trigger the fallback to CPU rendering if multiple Turing GPUs are present in a system. It is recommended for the beta to enable only one Turing board.
    • Performance on Turing boards is not yet fully optimized (e.g., compilation overhead on each startup and some scene changes in Iray Photoreal, and geometry overhead and rendering slowdowns in some scenes for Iray Interactive). Performance on non-Turing boards should not be affected.
    • In Iray Interactive, invisible geometry through cutouts remains visible.
    • Support for SM 3.X/Kepler generation GPUs is marked as deprecated, and it will most likely be removed with the next major release.
    • The Deep Learning based postprocessing convergence estimate only works if the Deep Learning based denoiser is disabled.
    • The Deep Learning based postprocessing to predict when a rendering has reached a desired quality is not yet implemented.
    • Mac OS X is not supported.
    • The new hair field in the material structure and the new hair_bsdf type from the MDL 1.5 Specification are not yet supported by the MDL compiler.
    • The new MDL 1.5 distribution function modifier df::measured_factor only supports one dimension of the texture in Iray Interactive.
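
    Since the beta is tied to the R430 GA1 driver branch, it's worth checking what you actually have installed before testing. A quick way, assuming nvidia-smi is on your path (just a sketch, wrapped in Python for easy scripting):

    import subprocess

    # Prints the installed driver version and GPU name, e.g. "430.xx, GeForce RTX 2080 Ti".
    subprocess.run(["nvidia-smi", "--query-gpu=driver_version,name",
                    "--format=csv,noheader"], check=True)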
  • DAZ_Rawb Posts: 817
    RayDAnt said:

    Some issues to keep in mind while playing around with this beta if you're a Turing owner (copied from the official Iray changelog thread):

    Known Issues and Restrictions
    • This beta release only works with driver version R430 GA1.
    • Multi-GPU support for multiple Turing GPUs is partially broken: Iray Photoreal and Iray Interactive may trigger the fallback to CPU rendering if multiple Turing GPUs are present in a system. It is recommended for the beta to enable only one Turing board.
    • Performance on Turing boards is not yet fully optimized (e.g., compilation overhead on each startup and some scene changes in Iray Photoreal, and geometry overhead and rendering slowdowns in some scenes for Iray Interactive). Performance on non-Turing boards should not be affected.
    • In Iray Interactive, invisible geometry through cutouts remains visible.
    • Support for SM 3.X/Kepler generation GPUs is marked as deprecated, and it will most likely be removed with the next major release.
    • The Deep Learning based postprocessing convergence estimate only works if the Deep Learning based denoiser is disabled.
    • The Deep Learning based postprocessing to predict when a rendering has reached a desired quality is not yet implemented.
    • Mac OS X is not supported.
    • The new hair field in the material structure and the new hair_bsdf type from the MDL 1.5 Specification are not yet supported by the MDL compiler.
    • The new MDL 1.5 distribution function modifier df::measured_factor only supports one dimension of the texture in Iray Interactive.

    FYI, those are notes for the Iray 2019 beta; all of the known issues have been resolved (AFAIK) in the final release.

  • TheKD Posts: 2,667
    edited July 2019

    2019-07-22 18:40:01.245 Iray VERBOSE - module:category(IRAY:RENDER):   1.2   IRAY   rend progr: CPU: Processing scene...
    2019-07-22 18:40:01.245 Iray INFO - module:category(IRAY:RENDER):   1.4   IRAY   rend info : Using Embree 2.8.0
    2019-07-22 18:40:01.245 Iray INFO - module:category(IRAY:RENDER):   1.4   IRAY   rend info : Initializing Embree

    2019-07-22 19:29:47.983 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 00653 iterations after 2986.703s.

    I let it render while I ate supper and did the dishes. 653 iterations in 50 minutes with two dressed and posed people as the scene; that's not all that bad as far as CPU-only rendering goes.

    Post edited by TheKD on
  • outrider42 Posts: 3,679
    DAZ_Rawb said:
    RayDAnt said:

    Some issues to keep in mind while playing around with this beta if you're a Turing owner (copied from the official Iray changelog thread):

    Known Issues and Restrictions
    • This beta release only works with driver version R430 GA1.
    • Multi-GPU support for multiple Turing GPUs is partially broken: Iray Photoreal and Iray Interactive may trigger the fallback to CPU rendering if multiple Turing GPUs are present in a system. It is recommended for the beta to enable only one Turing board.
    • Performance on Turing boards is not yet fully optimized (e.g., compilation overhead on each startup and some scene changes in Iray Photoreal, and geometry overhead and rendering slowdowns in some scenes for Iray Interactive). Performance on non-Turing boards should not be affected.
    • In Iray Interactive, invisible geometry through cutouts remains visible.
    • Support for SM 3.X/Kepler generation GPUs is marked as deprecated, and it will most likely be removed with the next major release.
    • The Deep Learning based postprocessing convergence estimate only works if the Deep Learning based denoiser is disabled.
    • The Deep Learning based postprocessing to predict when a rendering has reached a desired quality is not yet implemented.
    • Mac OS X is not supported.
    • The new hair field in the material structure and the new hair_bsdf type from the MDL 1.5 Specification are not yet supported by the MDL compiler.
    • The new MDL 1.5 distribution function modifier df::measured_factor only supports one dimension of the texture in Iray Interactive.

    FYI, those are notes for the Iray 2019 beta, all of the known issues have been resolved (AFAIK) in the final release.

    So to be clear, the 4.12 beta is using the final version of Iray 2019 that fixes these bugs? Because some of these are pretty serious bugs.
  • outrider42 Posts: 3,679

    One thing I wonder about: with Embree being made by Intel, and here it is inside Nvidia software... that might mean we have a chance of seeing Intel's future GPUs support CUDA. Now that would be cool; you would have a genuine choice in the market, assuming the Intel cards are any good. Intel's GPU is quite a wild card, we really have no idea what they are doing. But Intel has the ability to really shake things up if they want to, given they are so much larger than AMD and Nvidia combined. How will they segment their cards? It may just be possible they attempt to do things outside the box to get attention for their new brand.

    That's all for 2020, but I can't help but wonder.

  • jkim5453 Posts: 6

    "Oxford Library" from Daz Store shows off RTX-accelaration quite well for me. "Dome and Scene" lighting, "Camera 6", rendered to 95% convergence, 1280x720 output for quick test.

    4.11:

    CUDA device 0 (GeForce RTX 2080 Ti): 5810 iterations, 2.999s init, 907.709s render

    4.12 beta:

    CUDA device 0 (GeForce RTX 2080 Ti): 5827 iterations, 1.934s init, 574.256s render
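
    (Worked out as a rough rate from those numbers: 5810 iterations / 907.7 s ≈ 6.4 iterations per second under 4.11, versus 5827 / 574.3 ≈ 10.1 iterations per second under the 4.12 beta, i.e. roughly a 1.6x speedup on this scene.)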

  • RayDAnt Posts: 1,120
    edited July 2019

    So... anybody have recommendations for particular Daz Studio compatible scenes/sets (either from the Daz Store or other sources) that are reeeeeaaally big/difficult to render? I'm looking for ways to study RTX acceleration more thoroughly, and I suspect there is a direct correlation between scene asset size/complexity and how much RTX/RT Cores speed up the rendering process (hence why all the freely available benchmarking scenes - including this one just released by me - show so little difference with it versus without.)

    Post edited by RayDAnt on
This discussion has been closed.