Daz Studio Iray - Rendering Hardware Benchmarking


Comments

  • RayDAnt Posts: 1,120
    edited September 2020

    Yeah, I tried on my 3080 even with the latest BETA. Not supported yet :(

    IRAY   rend warn : CUDA device 0 (GeForce RTX 3080): compute capability 8.6: unsupported by this version of iray, please look for an update with your software provider.

    IRAY   rend warn : GPU 1 (GeForce RTX 3080) with CUDA compute capability 8.6 cannot be used by iray photoreal.

    IRAY   rend warn : There is no CUDA-capable GPU available to the iray photoreal renderer.

    Ok, so I was wrong. What's really ridiculous about this is that other render engines are able to use these cards, even if their OptiX is not updated specifically for Ampere. I mean...they have benchmarks, the cards work right now. Yet Nvidia's own Iray does not work with its own GPUs the day it launches.

    This is false. Iray has had full Ampere support since the update to 2020.1.0 beta approximately a month ago. Daz just hasn't recompiled/released an update to any release channel of Daz Studio in that time.

    The one and only time it has ever worked at launch was with Turing and that was the beta (and back then I stated we got lucky).

    Luck had nothing to do with it. The reason Turing was compatible (minus RT Cores) day one is that Turing was designed on purpose to be backwards compatible with Volta, support for which was added to Iray months prior to the Turing release.

     

    The silver lining is that unlike with Pascal, Nvidia has already released an Iray SDK with support for the new GPUs. So it is only a matter of time. With Pascal, it was several months after launch before the SDK was even released.

    Daz already has Ampere supporting versions of DS (4.12.2.31 or later.) They just need to release one. 

  • outrider42 Posts: 3,679
    RayDAnt said:

    Yeah, I tried on my 3080 even with the latest BETA. Not supported yet :(

    IRAY   rend warn : CUDA device 0 (GeForce RTX 3080): compute capability 8.6: unsupported by this version of iray, please look for an update with your software provider.

    IRAY   rend warn : GPU 1 (GeForce RTX 3080) with CUDA compute capability 8.6 cannot be used by iray photoreal.

    IRAY   rend warn : There is no CUDA-capable GPU available to the iray photoreal renderer.

    Ok, so I was wrong. What's really ridiculous about this is that other render engines are able to use these cards, even if their OptiX is not updated specifically for Ampere. I mean...they have benchmarks, the cards work right now. Yet Nvidia's own Iray does not work with its own GPUs the day it launches.

    This is false. Iray has had full Ampere support since the update to 2020.1.0 beta approximately a month ago. Daz just hasn't recompiled/released an update to any release channel of Daz Studio in that time.

    The one and only time it has ever worked at launch was with Turing and that was the beta (and back then I stated we got lucky).

    Luck had nothing to do with it. The reason Turing was compatible (minus RT Cores) day one is that Turing was designed on purpose to be backwards compatible with Volta, support for which was added to Iray months prior to the Turing release.

     

    The silver lining is that unlike with Pascal, Nvidia has already released an Iray SDK with support for the new GPUs. So it is only a matter of time. With Pascal, it was several months after launch before the SDK was even released.

    Daz already has Ampere supporting versions of DS (4.12.2.31 or later.) They just need to release one. 

    It is true in the sense that Iray still needs to be updated just to use the new cards at all. For a lot of software this is not necessary; a simple driver update is often all that is needed, at least for basic support. That Nvidia made an update before release (this time) is good, but it requires every application that uses Iray to replace its previous version with this new one. That carries drawbacks when it comes to legacy support. It also takes time, for example...Daz Studio. Nvidia might build Iray, but they do not build Daz nor any of the other software that uses the Iray plugin.

    I'd say it is more like Turing was the consumer version of Volta. They simply used what they already had; it was not a special effort for backwards compatibility, it was just easier. Nvidia used a new name instead of calling the consumer cards Volta as well. It is like how the A100 is Ampere even though it is on TSMC 7nm and has no ray tracing cores, similar to the Volta V100. And Daz got lucky that Nvidia made that design decision with Turing. The Titan V released a full year before Turing, giving it plenty of time to filter through the dev channels. If they had altered Turing more, that would have been trouble for Iray.

    Daz may have a release in their channel, but it does none of us any good until it is actually released to the public. Until then this information doesn't apply; it is essentially meaningless for us. There is no ETA on when this will actually release, so again, leaving us customers in the dark about these things is pretty lame. It would be nice to give customers some idea. The version in the release channel might not even be the one that gets released next...we don't know, and how would we, since they never tell us anything. For all we know, they could release a version that predates Iray 2020.1. People want to upgrade their hardware; this is the most hyped GPU launch ever, after all. But for Daz Iray users that means they also have to wait on Daz to upgrade in order to use that hardware with their chosen renderer. It is hard enough to score one of these cards, and now the ones who do have another hassle to deal with. Some people do rendering for a living, and these kinds of things can affect their bottom line.

    It will also mean that Ampere owners will no longer be able to use the old versions of Daz at all, unless they still have a previous generation GPU around. Daz will almost certainly release a beta version first, which would leave the full release users in the dark. Sometimes it is a pain to get certain plugins working in betas. And again, since we have no idea what their plans are, we have no idea when a full general release that supports Ampere may come.

  • fred9803 Posts: 1,558
     

    It will also mean that Ampere owners will no longer be able to use the old versions of Daz at all, unless they still have a previous generation GPU around. Daz will almost certainly release a beta version first, which would leave the full release users in the dark. Sometimes it is a pain to get certain plugins working in betas. And again, since we have no idea what their plans are, we have no idea when a full general release that supports Ampere may come.

    Excellent observations outrider! Particularly with regard to broken plugins. For a period of time it might be necessary to add the new card and keep the old one plugged in too, using the 3XXX with the beta and the old card for the previous version of DS. Especially for people who need to keep their workflow uninterrupted.... at least until there's a fix for the plugin/s. I suppose it depends if there's room on your MB for two cards. It will no doubt be messy for a while.

  • outrider42 Posts: 3,679
    fred9803 said:
     

    It will also mean that Ampere owners will no longer be able to use the old versions of Daz at all, unless they still have a previous generation GPU around. Daz will almost certainly release a beta version first, which would leave the full release users in the dark. Sometimes it is a pain to get certain plugins working in betas. And again, since we have no idea what their plans are, we have no idea when a full general release that supports Ampere may come.

    Excellent observations outrider! Particularly with regard to broken plugins. For a period of time it might be necessary to add the new card and keep the old one plugged in too, using the 3XXX with the beta and the old card for the previous version of DS. Especially for people who need to keep their workflow uninterrupted.... at least until there's a fix for the plugin/s. I suppose it depends if there's room on your MB for two cards. It will no doubt be messy for a while.

    There is a fix that involves moving the file locations around so the beta can find the plugins. But a lot of people don't like messing with that stuff. I think it's fair to say that most Daz users skew towards being artists more than tech oriented; Daz's ease of use is one of its biggest selling points. Many people don't even install the beta.

  • Sevrin Posts: 6,300

    It's not called bleeding edge for nothing.

  • I'm curious how close the benchmark should be to the actual iterations achieved with a "typical scene" in everyday use. When I render the benchmark scene with my RTX 2060, I get on average 4.3 iterations per second, which seems to be in the ballpark performance-wise for that card according to the chart on the 1st page, so I don't think I bought a lemon. But my actual iteration rate when rendering many scenes is nowhere close to that figure. 

    For example, the current scene I'm working on uses the i13 Doctor Office environment set, 2 x Genesis 8 figures w/ clothes, hair (non dForce), and simple lighting (the set includes emissive overhead lights), and I used an HDR as well. I only achieve 0.87 iterations per second with this setup, and I have verified the scene is being rendered by the GPU; it is not being dumped to CPU. Is it normal for the iteration rate to vary this much between scenes? That's an 80% drop in iteration rate between the benchmark scene and one of my typical scenes. Also, I've checked other things like overheating, throttling, etc. Everything is running fine, no overheating or anything like that.

  • RayDAnt Posts: 1,120
    edited September 2020

     

    TGSNT said:

    I'm curious how close the benchmark should be to the actual iterations achieved with a "typical scene" in everyday use. When I render the benchmark scene with my RTX 2060, I get on average 4.3 iterations per second, which seems to be in the ballpark performance-wise for that card according to the chart on the 1st page, so I don't think I bought a lemon. But my actual iteration rate when rendering many scenes is nowhere close to that figure. 

    For example, the current scene I'm working on uses the i13 Doctor Office environment set, 2 x Genesis 8 figures w/ clothes, hair (non dForce), and simple lighting (the set includes emissive overhead lights), and I used an HDR as well. I only achieve 0.87 iterations per second with this setup, and I have verified the scene is being rendered by the GPU; it is not being dumped to CPU. Is it normal for the iteration rate to vary this much between scenes? That's an 80% drop in iteration rate between the benchmark scene and one of my typical scenes. Also, I've checked other things like overheating, throttling, etc. Everything is running fine, no overheating or anything like that.

    See the section called Design Factors at the beginning of this thread. In short, the benchmarking scene used in this thread was calibrated to present a computational load intensive enough for the current generation highest end GPU available at the time (the GTX 1080 Ti/Titan Xp) to take long enough in rendering to result in statistically significant data while at the same time not presenting so much of a computational workload that the current generation lowest end GPU available at the time (the GTX 1050 2GB) couldn't also complete it in a reasonable amount of time.

    In other words, on a scale of 1-10 for Iray based scene complexity, I would put this benchmarking scene at around a 2. Meaning that the vast majority of "everyday" scenes are likely to take significantly longer (lower iterations per second) to complete than this one. Case in point - my Titan RTX (which typically gets around 10ips in this benchmark) when set to render Iray scenes taking up not even half its VRAM limit (remember that this is a 24GB card) typically drops down to the 1-3ips completion rate range. Granted, this is with effects like Depth-of-Field turned on and 4K+ render sizes (the kind of rendering I do - hence my decision to go with such a premium priced card in the first place.) So I'd say that the observations you've made with your 2060 are entirely within expectation.

    Because with Iray we are talking about theoretically 100% pathtraced/raytraced rendering, completion rates can vary literally astronomically from scene to scene. It all depends on scene content and composition.
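
    To put those rates in perspective, the same 1800-iteration budget used by this benchmark translates directly into wall-clock time (a minimal sketch using the rates mentioned in this exchange):

    # Time to reach a fixed iteration budget at different per-scene iteration rates.
    iterations = 1800
    for ips in (10.0, 4.3, 0.87):  # Titan RTX on this bench, a 2060 on this bench, a heavier scene
        print(f"{ips:>5} ips -> {iterations / ips / 60:.1f} minutes for {iterations} iterations")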

  • Was able to grab a 3090 FTW3 Ultra Gaming only to find out Daz isn't working...

    So, is there a possibility that Ampere won't be supported for more than a couple of weeks? I don't know how long I can have this sitting in the box while people are paying twice what it costs on ebay/CL.

  • RayDAnt Posts: 1,120
    whaleford said:

    Was able to grab a 3090 FTW3 Ultra Gaming only to find out Daz isn't working...

    So, is there a possibility that Ampere won't be supported for more than a couple of weeks? I don't know how long I can have this sitting in the box while people are paying twice what it costs on ebay/CL.

    Full Ampere support should be coming right with the very next beta release of Daz Studio (it's already been in the development changelogs for weeks.) So I'd say hold your horses and maybe submit a bug report to Daz about it not yet working when it clearly should, to help get the ball rolling.

    In the meantime, fancy doing a quick feature test on your 3090 to help us all know what to expect from it capability-wise going forward? I am talking about Tesla Compute Cluster (TCC) driver mode compatibility (the one way you can get access to all 24GB of VRAM for e.g. rendering in Iray under Windows without having to sacrifice a significant portion of that to Windows itself or other running apps - a potentially extremely useful feature.) The test process goes like this (revised from here):

    1. Open a command-line window with Administrator privileges (Windows key -> type "cmd" and choose "Run as Administrator" under options.)  It is essential that Admin privileges are available as this test will fail otherwise.
    2. Run the following command:

      nvidia-smi.exe -L

      This will output a line like the following for each GPU in your system:

      GPU 0: TITAN RTX (UUID: GPU-a30c09f0-e9f6-8856-f2d0-a565ee23f0bf)

      Take note of the number next to the name of your 3090. You will need this for the rest of the steps in this process.

    3.  Run the following command with the number from step 2 in place of 0 after "-i ":

      nvidia-smi.exe -i 0 --query-gpu=driver_model.current,driver_model.pending --format=csv

      This should output:

      driver_model.current, driver_model.pending
      WDDM, WDDM

      "WDDM" is the name of the Nvidia driver model mode normally active under Windows. In the next step you are going to see whether you can temporarily schedule it to change or not.

    4. Now, run the following command:
      nvidia-smi.exe -i 0 -fdm TCC
      If you get a response like this:
      WARNING: Disconnect and disable the display before next reboot. You are switching to TCC driver model with display active!
      Set driver model to TCC for GPU 00000000:01:00.0.
      All done.
      Reboot required.
      This means that TCC driver mode is currently supported on the 3090. However If you get an error message like this:
      Unable to set driver model for GPU 00000000:01:00.0: Not Supported
      Treated as warning and moving on.
      All done.
      That means it isn't (at least not yet...)
    Test Cleanup (only needed if "TCC" mode change successful above):

    If you were successful in step #4 above, you will need to switch the driver mode of your 3090 back to its default value "WDDM" prior to rebooting your system. Otherwise you will temporarily lose the ability to run any displays off of that card. To do this, run the following in the same command-line window as before (with the number from step 2 in place of the 0 after "-i"):

    nvidia-smi.exe -i 0 -fdm WDDM

    You should get the following response:

    Set driver model to WDDM for GPU 00000000:01:00.0.
    All done.

    Please note that rebooting is unnecessary at this point. To verify everything is back to normal, run (with the number from step 2 in place of the 0 after "-i"):

    nvidia-smi.exe -i 0 --query-gpu=driver_model.current,driver_model.pending --format=csv

    This should output:

    driver_model.current, driver_model.pending
    WDDM, WDDM

    And as long as "WDDM" appears twice you are back to the way you were before.

     

    Based on what we're currently hearing, you should almost certainly find that TCC mode isn't currently available with the 3090. But it never hurts to actually check.
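
    For reference, the read-only checks in steps 2 and 3 can also be scripted. A minimal sketch, assuming Python 3 and that nvidia-smi.exe is on the PATH (it only reads the current state; the actual TCC switch in step 4 still needs an elevated prompt, so it is left out):

    import subprocess

    def run(cmd):
        # Run a command and return its text output (raises if the command fails).
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    # Step 2: list every GPU so you can pick the index of the card you want to test.
    print(run(["nvidia-smi", "-L"]))

    # Step 3: query the current and pending driver model for that index.
    gpu_index = "0"  # replace with the index of your 3090 from the listing above
    print(run(["nvidia-smi", "-i", gpu_index,
               "--query-gpu=driver_model.current,driver_model.pending",
               "--format=csv"]))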

  •  

    I'm sure it's there somewhere, but you can use the help file to find this information. Help>Troubleshooting>View Log File

    Assuming you just rendered, scroll all the way down to find the info.

     

    Sorry, I didn't visit the forum for a while and just found your reply. Yes, you were right - I didn't notice it was referring to the Daz log (hard to find since I customized the file path); I thought it was the Windows log.

    anyway, my result, hope I did it right

    System Configuration
    System/Motherboard: Asrock Extreme4
    CPU: Intel i7 5820k 3.3ghz 
    GPU: EVGA RTX 2060 
    System Memory: Gskill 32G 2400mhz 
    OS Drive: Samsung 500G SSD 
    Asset Drive: WD 4T HDD
    Operating System: Win10 18363.836
    Nvidia Drivers Version: 456.38
    Daz Studio Version: 4.12.1.118
    Optix Prime Acceleration: 7.1.0

    Benchmark Results

    Iteration Rate: 2020-09-24 18:10:01.543 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2060): 1800 iterations, 3.313s init, 473.264s render
    Loading Time: 2020-09-24 20:20:16.233 Total Rendering Time: 7 minutes 51.50 seconds

     

    Interesting, the time on my RTX 2060 is faster than other RTX 2060s, and I don't have a state-of-the-art CPU.

  • System Configuration
    System/Motherboard: ASUS ROG STRIX B360-F GAMING
    CPU: Intel Core i5-8600/stock
    GPU: GFORCE RTX2080/stock
    System Memory: G.Skill Ripjaws V Series 32GB DDR4 2666
    OS Drive: Samsung 970 EVO SSD 500GB - M.2 NVMe
    Asset Drive: Same
    Operating System: Windows 10 Pro 1903 18362.1082
    Nvidia Drivers Version: 456.38
    Daz Studio Version: 4.12.1.118
    Optix Prime Acceleration: STATE (Daz Studio 4.12.1.086 or earlier only)

    Benchmark Results
    2020-09-25 14:17:02.051 Finished Rendering
    2020-09-25 14:17:02.080 Total Rendering Time: 5 minutes 51.56 seconds
    2020-09-25 14:18:53.021 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2020-09-25 14:18:53.021 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2080): 1800 iterations, 3.151s init, 346.064s render
    Iteration Rate: 5.12
    Loading Time: 5.496

  • outrider42 Posts: 3,679
    edited September 2020

    windli3356 said:

    Sorry, I didn't visit the forum for a while and just found your reply. Yes, you were right - I didn't notice it was referring to the Daz log (hard to find since I customized the file path); I thought it was the Windows log.

     

    anyway, my result, hope I did it right

    System Configuration
    System/Motherboard: Asrock Extreme4
    CPU: Intel i7 5820k 3.3ghz 
    GPU: EVGA RTX 2060 
    System Memory: Gskill 32G 2400mhz 
    OS Drive: Samsung 500G SSD 
    Asset Drive: WD 4T HDD
    Operating System: Win10 18363.836
    Nvidia Drivers Version: 456.38
    Daz Studio Version: 4.12.1.118
    Optix Prime Acceleration: 7.1.0

    Benchmark Results

    Iteration Rate: 2020-09-24 18:10:01.543 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 2060): 1800 iterations, 3.313s init, 473.264s render
    Loading Time: 2020-09-24 20:20:16.233 Total Rendering Time: 7 minutes 51.50 seconds

     

    Interesting, the time on my RTX 2060 is faster than other RTX 2060s, and I don't have a state-of-the-art CPU.

     

    "Total rendering time" includes the loading time. Your loading time is a bit faster, but the iteration rate (which is the 1800 iterations divided by the 473.264 render time) is 3.80 iterations per second. That is exactly what Devontaki was getting, so your result indeed falls in line with others. The render time you see beside the iteration count is the number that matters, because that is the actual time it took the GPU to perform the render. So while "total rendering time" is interesting to see, it is not the true benchmark of the GPU, as the loading time can depend on other hardware, which skews results.

    Your CPU has absolutely nothing to do with the render once it starts. The only job it really does here is to help send the scene to the GPU. The loading times are also heavily influenced by the SSD and/or hard drives the data is coming from. But once the scene is loaded, the drives and CPU are basically done with their tasks. That is why the whole scene must fit in VRAM in order to run on the GPU. It is part of Iray's design.

    Also, the drivers can skew a result. Some recent drivers have wrecked render times for some users, which is why you see a noticeably faster time than Lenio did. Notice Lenio's driver is older. So hopefully your result is a sign that things are back to normal with the new drivers. I will have to test myself. If you look back a page or two in this thread, you will see that I noticed a trend of people posting slower bench times with certain drivers, which included myself.

    So I will check back after I do a driver update to see if my speed has returned.

  • chrislb Posts: 95
    edited October 2020

    A little off topic, but I had a few things I was wondering about after looking over the data. 

    I'd be interested to also see performance per watt and performance per dollar with the GPUs.  When is it better to go with several lower powered GPUs over a lower number of higher powered GPUs?  Given a certain total system power or GPU power draw, what GPUs perform the best?

    When the latest Nvidia GPUs reach full scale production and inventory issues are gone, the prices of Turing and Pascal GPUs will likely plummet.  If the prices drop at the same rate as previous generations, we likely will see ~$200 GTX 1080 ti GPUs.  So, you might get 4 GTX 1080 ti GPUS for not much more than a single 3080.

    I buy and resell used PC hardware and build lower priced gaming systems to resell on the side, and I currently have some GPUs that I can add to testing multi-GPU setups.  I'm going to test some 3x and 4x GPU setups with older GPUs.  Currently I have three EVGA GTX 1070 GPUs here that I can test in a 3 GPU test bench.  

    If your primary concern is render times, you might be better off buying a used motherboard and PC that support 3 or 4 PCIe x16 slots and used GPUs, rather than a 3000 series GPU, when Daz adds the new version of Iray.

    Also, have we figured out what version of drivers work best for each generation of GPU?  If people aren't concerned with the latest game support, then we can run older drivers if they consistently perform better.

  • RayDAnt Posts: 1,120
    chrislb said:

    A little off topic, but I had a few things I was wondering about after looking over the data. 

    I'd be interested to also see performance per watt and performance per dollar with the GPUs. 

    The "Overall Value" column in the main results chart is a measurement of performance per dollar (technically performance per dollar/hour since their needs to be a time domain )restriction for that sort of statistic to make any sense.)

    chrislb said:

    When is it better to go with several lower powered GPUs over a lower number of higher powered GPUs?  Given a certain total system power or GPU power draw, what GPUs perform the best?

    There was a lot of discussion leading up to the creation of this thread about which things to include in the metrics. And although a power consumption metric was very commonly called for, the reality is that there simply is no easy way for GPU users without specialized testing hardware on hand to reliably measure the actual power consumption of their PCs - much less just that of their individual GPU(s).

    Overall system power draw (measurable with a relatively inexpensive wall adapter) is subject to so many independent variables outside the scope of this thread (and the already-lengthy list of hardware specs collected in it) that it is effectively a useless quantity where the primary statistic this thread is designed to measure (Iray GPU or CPU rendering performance) is concerned. Meanwhile, actual GPU power usage measurement requires either an open-bench system arrangement with multiple clamp-on power meters on both GPU and motherboard power cables (remember that PCI-E slot power consumption is very much a part of the equation) or an even more specialized tool/hardware setup using something like Nvidia's newly released PCAT.

    Officially published power consumption statistics from card manufacturers are useless because they do not reflect the actual power consumption of specific workloads like Iray rendering (each kind of GPU workload has its own unique pattern of watts consumed versus GPU clock speed maintained, distinct from individual card capabilities.) And often the numbers which are given are purely theoretical, based on a card running at static clock speeds - something no Nvidia non-Quadro GPU has been designed to do for more than 8 years now. Plus it often isn't clear whether these manufacturer-supplied max wattage statistics refer to electrical power consumption at the system power supply end or radiant heat production at the system cooling end (both are measured in watts.) 

    But perhaps most importantly, in the grand scheme of things, unless you are operating an actual server farm out of your attic, the cost of the power consumed by these GPUs simply isn't significant when you consider how much they cost at time of purchase. For example, if I were to run my Titan RTX - which is officially a 280 watt max GPU (although my real-world Iray usage has put it consistently at around only 200 watts) - at its max power usage level continuously for a year, you're looking at a max theoretical cost of:

    280 watts * 24 hours * 365 days * $0.00012 (the average price in the US of a single watt-hour of power) =

    $294.34 per year for continuous use of a GPU which cost $2500. Drop these figures down to more sane levels like 40 usage hours per week (i.e. a full work week) at the 200 watts actually observed, and you're talking about:

    200 watts * 40 hours * 52 weeks * $0.00012 (the average price in the US of a single watt-hour of power) =

    $49.92 per year of energy costs for regular use of a GPU which still costs $2500.

    Needless to say, if someone can afford to buy a $2500 GPU, then a +$300 annual electric bill (much less a +$50 one) isn't realistically gonna matter.
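
    A minimal sketch of the arithmetic above, assuming the same $0.00012 per watt-hour (roughly $0.12/kWh) figure:

    def annual_cost(watts, hours_per_year, price_per_wh=0.00012):
        # Energy cost in dollars: watts * hours gives watt-hours, times price per watt-hour.
        return watts * hours_per_year * price_per_wh

    print(round(annual_cost(280, 24 * 365), 2))  # 24/7 at the 280 W spec: ~294.34
    print(round(annual_cost(200, 40 * 52), 2))   # 40 h/week at the observed 200 W: ~49.92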

     

    chrislb said:

    When the latest Nvidia GPUs reach full scale production and inventory issues are gone, the prices of Turing and Pascal GPUs will likely plummet.  If the prices drop at the same rate as previous generations, we likely will see ~$200 GTX 1080 ti GPUs.  So, you might get 4 GTX 1080 ti GPUS for not much more than a single 3080.

    I buy and resell used PC hardware and build lower priced gaming systems to resell on the side, and I currently have some GPUs that I can add to testing multi-GPU setups.  I'm going to test some 3x and 4x GPU setups with older GPUs.  Currently I have three EVGA GTX 1070 GPUs here that I can test in a 3 GPU test bench.  

    By all means - if you have additional combos you can bench, throw them in! Generally speaking, Iray scales almost perfectly under multi-GPU setups. So there really isn't much of a need for actual benchmarks of card multiples (somewhere along the way there's a post I did comparing all the similar card-multiple results we had at the time, and the takeaway was that performance gains are essentially all linear.) But having actual real-world numbers to demonstrate that is always better.

  • outrider42 Posts: 3,679
    edited October 2020
    chrislb said:

    A little off topic, but I had a few things I was wondering about after looking over the data. 

    I'd be interested to also see performance per watt and performance per dollar with the GPUs.  When is it better to go with several lower powered GPUs over a lower number of higher powered GPUs?  Given a certain total system power or GPU power draw, what GPUs perform the best?

    When the latest Nvidia GPUs reach full scale production and inventory issues are gone, the prices of Turing and Pascal GPUs will likely plummet.  If the prices drop at the same rate as previous generations, we likely will see ~$200 GTX 1080 ti GPUs.  So, you might get 4 GTX 1080 ti GPUS for not much more than a single 3080.

    I buy and resell used PC hardware and build lower priced gaming systems to resell on the side, and I currently have some GPUs that I can add to testing multi-GPU setups.  I'm going to test some 3x and 4x GPU setups with older GPUs.  Currently I have three EVGA GTX 1070 GPUs here that I can test in a 3 GPU test bench.  

    If your primary concern is render times, you might be better off buying a used motherboard and PC that support 3 or 4 PCIe x16 slots and used GPUs, rather than a 3000 series GPU, when Daz adds the new version of Iray.

    Also, have we figured out what version of drivers work best for each generation of GPU?  If people aren't concerned with the latest game support, then we can run older drivers if they consistently perform better.

    Having different bench numbers for new combos is always welcome. But there is a point of diminishing returns over a single card. For one, you probably know that VRAM does not stack. That hurts a lot.

    The RTX line also largely negates many combos of non RTX cards. Take the 1080ti. The 2080ti nearly doubles the speed of a 1080ti; in fact in some situations it goes beyond that. According to the Iray dev team, the 3080 is doubling the performance of the top tier Quadro RTX 6000, and the 6000 should itself be slightly faster than a 2080ti. So just doing back of napkin math here, a single 3080 could very well be QUADRUPLE the speed of a 1080ti. So even though a 3080 uses a lot of power, roughly 320 W, a single 1080ti already uses 250 W on its own.

    Even the lowest tier 2060 is faster than a 1080ti at Iray! These numbers are on the charts.

    Thus it kind of makes trying any combo of non RTX cards somewhat futile. The power of the ray tracing cores for Iray is just too much for non RTX cards to overcome.

    So IMO the more interesting multi-GPU combinations are going to be with Turing. Turing is going to be unique in that it offers Nvlink on several different models, while Ampere does not. The 2080ti, 2080, 2080 Super, and 2070 Super all have the ability to Nvlink with an equal card and combine their VRAM. If Turing cards drop low enough, that certainly raises some intriguing possibilities. The 2070 Super in Nvlink would offer almost 16gb of VRAM. How well does this combo compare to a 3070 or 3080?

    Of course, if Nvidia does release the fabled double VRAM versions of the 3070 and 3080, then that totally blows the doors open. A 3080 with 20gb should beat any of these possible Nvlink combinations, even the 2080ti, since again, it may offer double the performance of a 2080ti. So having two 2080tis with Nvlink would be using 500 W compared to the 3080's 320 W. They would only offer 2 additional GB, but a large portion of this would be negated by the fact that some data is still duplicated across the cards. For Iray, the mesh data is duplicated, only texture data is pooled. I imagine that if somebody is building a scene that uses 20GB, then the mesh data is very likely over that 2gb portion anyway, meaning the overall VRAM pool is actually less than what a 3080 with 20gb would have.
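
    A rough sketch of that pooling argument, assuming (as described above) that geometry is duplicated on each card while textures are pooled over Nvlink, and using a made-up 3 GB geometry figure; actual Iray behavior may differ:

    def nvlink_effective_vram(per_card_gb, geometry_gb):
        # Two linked cards: texture data can be split across both,
        # but the geometry has to live on each card.
        return 2 * per_card_gb - geometry_gb

    print(nvlink_effective_vram(11, 3))  # two 2080 Tis with a hypothetical 3 GB of geometry -> 19 GB usable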

     

  • chrislb Posts: 95
    edited October 2020
    outrider42 said:
    ...

    So IMO the more interesting multi-GPU combinations are going to be with Turing. Turing is going to be unique in that it offers Nvlink on several different models, while Ampere does not. The 2080ti, 2080, 2080 Super, and 2070 Super all have the ability to Nvlink with an equal card and combine their VRAM. If Turing cards drop low enough, that certainly raises some intriguing possibilities. The 2070 Super in Nvlink would offer almost 16gb of VRAM. How well does this combo compare to a 3070 or 3080?

     

     

    My RTX 2080 Supers in SLI equal a single overclocked and water cooled RTX 3080 in the 3DMark Port Royal benchmark.  If that carries over to Iray performance after Daz finally updates to the newer version of Iray to support the 3000 series cards, then 2080 Supers might only be worth it if they are ~$300 USD used and the NVLink bridge is still in stock at an affordable price.

    It would be interesting to see the correlation between other benchmarks and Daz Iray performance.

  • outrider42 Posts: 3,679
    edited October 2020
    chrislb said:
    outrider42 said:
    ...

    So IMO the more interesting multi-GPU combinations are going to be with Turing. Turing is going to be unique in that it offers Nvlink on several different models, while Ampere does not. The 2080ti, 2080, 2080 Super, and 2070 Super all have the ability to Nvlink with an equal card and combine their VRAM. If Turing cards drop low enough, that certainly raises some intriguing possibilities. The 2070 Super in Nvlink would offer almost 16gb of VRAM. How well does this combo compare to a 3070 or 3080?

     

     

    My RTX 2080 Supers in SLI equal a single overclocked and water cooled RTX 3080 in the 3DMark Port Royal benchmark.  If that carries over to Iray performance after Daz finally updates to the newer version of Iray to support the 3000 series cards, then 2080 Supers might only be worth it if they are ~$300 USD used and the NVLink bridge is still in stock at an affordable price.

    It would be interesting to see the correlation between other benchmarks and Daz Iray performance.

    I am not sure if Port Royal flexes Ampere fully. Even though it uses ray tracing, it still does not fully stretch the RT cores. If you examine the benchmarks for gaming and Iray, you will see a trend where Iray benefits far more from RTX than gaming does.

    It is interesting how things scale with Iray. When things were pure CUDA, the scaling was just about perfect. For example, if GPU A was twice as fast at a given scene over GPU B, then you could pretty much assume that GPU A would be twice as fast at just about every possible scene you could create for them.

    But the ray tracing cores are different. They accelerate the calculations of complex geometry. What we tend to see is that the more complex a scene is, the bigger the gap in performance between RTX and non RTX. So RTX actually provides better benefits the more complex a scene becomes. When you pair that with lots of VRAM, that becomes a killer combo. Somebody tried to test this with an Iray scene using a big ball of dForce strand hair. Unfortunately that thread died out, but we saw some really cool results from the people that tried the bench. The gains over non RTX cards were getting absurd, and the bench provided a solid proof of concept of just how much RTX hardware can power through geometry versus pure CUDA.

    I found that thread in case you wanted to check it out. I am pretty sure nobody tried Nvlink 2080 Supers. https://www.daz3d.com/forums/discussion/344451/rtx-benchmark-thread-show-me-the-power/p1

    The Iray Dev Team posted some of their results for a variety of scenes they created using the 3080, Quadro RTX 6000, and Quadro P6000. The monster A100 is also included. The A100 does not include any ray tracing cores. The P6000 would be a hair faster than a 1080ti, as the P6000 used the uncut top Pascal chip. The RTX 6000 is the top Turing chip, so would be like a Titan RTX. So these tests are comparing the 3080 to the last two generation flagships. The RTX 6000 is used as the baseline for the performance differential.

    In these tests the 3080 ranged from a low of 1.7x to a high of 2.6x faster than the RTX 6000, for an average 2.1x performance gain. That to me is quite stunning, and also quite exciting. This also lines up with the bold 2x performance claims that Jensen Huang made during the Ampere reveal...they got those numbers from software like Iray. The performance gains are much higher than in gaming benchmarks. Several gaming reviewers have noted that the 3080 and 3090 appear to extend their leads as the tasks grow harder. The 1080p gains are middling, the 1440p gains are better, and then at 4k the performance gains are at their highest. So that makes sense...Iray is going to be one of the most complex tasks you can throw at GPUs, thus seeing the largest gains with Iray is logical. The comparisons to the P6000, one of the fastest GPUs in the world in 2016, are frankly shocking. The range is from a 4x to a staggering 10x boost in performance. Again, this is a GPU using the full Pascal chip, and it is in fact a little faster than a 1080ti. So yeah...there are scenes you can create in Iray where the 3080 will be 10 TIMES faster than a 1080ti. That is just wild. And the 3090 should be even faster, though not drastically so, probably around 20% over the 3080 at Iray.

    Link to Iray Dev Blog: https://blog.irayrender.com/post/628125542083854336/after-yesterdays-announcement-of-the-new-geforce

    The only drawback, and it is an obvious one, is that the 3080 only has 10gb of VRAM, and it has no Nvlink capability at all. So really, it is only Nvlink that makes the outgoing Turing cards interesting, since their VRAM can be doubled. Thus many Daz users wait with bated breath for any news of a 3080 20gb release.

    Daz should be getting Iray 2020.1 soon. Hopefully. It is in their beta channels, so it should be hitting the public beta in time.

  • chrislb Posts: 95

    I tested the same system with 1, 2, and 3 of the same EVGA GTX 1070 graphics cards (1070 SC GAMING 08G-P4-5173-KR) with GPU only rendering.

    Also, I tried raising the GPU power limits to 15% more power than stock and the maximum allowed voltage over the manufacturer default.  Increasing power and voltage did not significantly decrease render times.  The total reduction in render time was between 7 seconds and 22 seconds.  I did not include those times in the results.  Adding an additional ~25 watts of power draw to each of the cards only saved a few seconds (less than 1%) of render time at most.

    I currently have two motherboards with 3 or more PCIe x16 slots and only one of them wasn't in use, so I had to use the older Z77 chipset motherboard for this test.  The Z77 chipset board is PCIe 2.0 and not PCIe 3.0 or PCIe 4.0, which may increase the render time a little, especially with multiple GPUs.  With two GPUs, they both operate in PCIe x8 mode.  With three GPUs, two of them operate in PCIe x4 mode and one operates in PCIe x8 mode due to the limit of PCIe lanes on the motherboard.  GPU temperatures hit a peak of 64C to 70C during the render, which is within normal limits.

    System Configuration
    System/Motherboard: MSI Z77A G45 Gaming
    CPU: i7-3770K @ stock speed
    GPU: EVGA GeForce GTX 1070 SC GAMING 08G-P4-5173-KR @ Stock speed and stock power/voltage limits
    System Memory: G.SKILL Sniper Series 16GB (2 x 8GB) 240-Pin DDR3 2400 (PC3 19200) Model F3-2400C11D-16GSR
    OS Drive: Samsung 840 SATA 500 GB
    Asset Drive: Samsung 840 SATA 500 GB
    Operating System: Windows 10 Pro version 2004
    Nvidia Drivers Version: 456.55
    Daz Studio Version: 4.12.1.118
    Optix Prime Acceleration: N/A

    Benchmark Results 1 GPU Only No CPU
    2020-10-06 17:05:52.033 Finished Rendering
    2020-10-06 17:05:52.079 Total Rendering Time: 14 minutes 14.76 seconds

    2020-10-06 17:22:51.529 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2020-10-06 17:22:51.529 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1070): 1800 iterations, 4.909s init, 846.210s render


    Iteration Rate: (1800 / 846.210) = 2.127 iterations per second
    Loading Time: (854.76 - 846.210) = 8.55 seconds

    Benchmark Results 2 GPUs No CPU
    2020-10-06 15:00:32.340 Finished Rendering
    2020-10-06 15:00:32.371 Total Rendering Time: 7 minutes 28.68 seconds

    2020-10-06 15:01:47.128 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2020-10-06 15:01:47.128 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1070): 915 iterations, 4.868s init, 439.885s render

    2020-10-06 15:01:47.128 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1070): 885 iterations, 6.874s init, 437.721s render

    Iteration Rate: (1800 / 439.885) = 4.0919 iterations per second
    Loading Time: ((448.68) - 439.885) = 8.795 seconds

    Benchmark Results 3 GPUs No CPU
    2020-10-06 15:29:45.067 Finished Rendering
    2020-10-06 15:29:45.099 Total Rendering Time: 5 minutes 4.84 seconds

    2020-10-06 15:33:46.363 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2020-10-06 15:33:46.363 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1070): 606 iterations, 5.311s init, 295.191s render

    2020-10-06 15:33:46.363 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce GTX 1070): 591 iterations, 6.093s init, 294.744s render

    2020-10-06 15:33:46.363 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 2 (GeForce GTX 1070): 603 iterations, 5.829s init, 294.495s render

    Iteration Rate: (1800 / 304.84) = 5.905 iterations per second
    Loading Time: ((304.84) - 296.616) = 8.224 seconds
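
    A small sketch that turns the three iteration rates above into scaling efficiency relative to a single card, using the numbers as reported:

    # Scaling efficiency of the 1/2/3-GPU results above, relative to one GTX 1070.
    rates = {1: 2.127, 2: 4.0919, 3: 5.905}  # iterations per second as reported
    # (note: the 3-GPU figure above was computed against the total time, as posted)
    single = rates[1]
    for gpus, rate in rates.items():
        efficiency = rate / (single * gpus)
        print(f"{gpus} GPU(s): {rate:.3f} ips, {efficiency:.0%} of perfect linear scaling")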

  • DeathKnight Posts: 96
    edited October 2020

    Daz Public Beta just got updated to a version with Iray that supports Ampere. Here are my 3080 results

    System Configuration
    System/Motherboard: Gigabyte X570 Aorus Elite
    CPU: AMD Ryzen 7 3700X @ stock speed
    GPU: MSI GeForce RTX 3080 VENTUS 3X 10G OC @ stock speed
    System Memory: 2 x 16GB G.SKILL Ripjaws V DDR4-3200 (14-14-14-34) Samsung B-Die @ stock speed
    OS Drive: Samsung 970 EVO 500GB M.2 NVMe SSD
    Asset Drive: Seagate Barracuda 2TB HDD (ST2000DM006-2DM164)
    Operating System: Windows 10 Pro 2004
    Nvidia Drivers Version: 456.55
    Daz Studio Version: 4.12.2.51
    Optix Prime Acceleration: N/A

    Benchmark Results
    2020-10-13 19:08:02.475 Finished Rendering
    2020-10-13 19:08:02.498 Total Rendering Time: 2 minutes 38.53 seconds
    2020-10-13 19:08:16.425 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2020-10-13 19:08:16.425 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3080):      1800 iterations, 6.477s init, 149.234s render
    Iteration Rate: (1800 / 149.234) = 12.062 iterations per second
    Loading Time: ((0 + 120 + 38.53) - 149.234) = 9.296 seconds

     

    3080 looks to be the new single card king until someone benches a 3090.

  • RayDAnt Posts: 1,120
    edited October 2020

    Some updated Titan RTX numbers for comparison:

     

    System Configuration
    System/Motherboard: Gigabyte Z370 Aorus Gaming 7
    CPU: Intel i7-8700K @ stock (MCE enabled)
    GPU: Nvidia Titan RTX @ stock (watercooled)
    System Memory: Corsair Vengeance LPX 32GB DDR4 @ 3000Mhz
    OS Drive: Samsung Pro 970 512GB NVME SSD
    Asset Drive: Sandisk Extreme Portable SSD 1TB
    Operating System: Windows 10 Pro 1909
    Nvidia Drivers Version: 456.38


    Daz Studio Version: 4.12.2.006 Beta x64
    Benchmark Results: Titan RTX (TCC)
    Total Rendering Time: 4 minutes 10.42 seconds
    CUDA device 0 (TITAN RTX): 1800 iterations, 6.357s init, 241.502s render
    Iteration Rate: (1800 / 241.502) = 7.453 iterations per second
    Loading Time: ((0 + 4 * 60 + 10.42) - 241.502) = 8.918 seconds

     

    Daz Studio Version: 4.12.2.051 Beta x64
    Benchmark Results: Titan RTX (TCC)
    Total Rendering Time: 3 minutes 56.53 seconds
    CUDA device 0 (TITAN RTX): 1800 iterations, 6.074s init, 227.867s render
    Iteration Rate: (1800 / 227.867) = 7.899 iterations per second
    Loading Time: ((0 + 3 * 60 + 56.53) - 227.867) = 8.663 seconds

    Benchmark Results: Titan RTX (WDDM, used for display)
    Total Rendering Time: 3 minutes 59.52 seconds
    CUDA device 0 (TITAN RTX): 1800 iterations, 2.626s init, 234.389s render
    Iteration Rate: (1800 / 234.389) = 7.680 iterations per second
    Loading Time: ((0 + 3 * 60 + 59.52) - 234.389) = 5.131 seconds

     

     

  • Firlan Posts: 8
    edited October 2020

    System Configuration
    System/Motherboard: MSI x570 tomahawk
    CPU: AMD Ryzen 5 3600X @ 4.3 Ghz
    GPU: MSI GeForce RTX™ 3090 VENTUS 3X 24G OC @ stock
    System Memory: Crucial Ballistix 64GB Kit (2 x 32GB) DDR4-3600 16-18-18-36
    OS Drive: Crucial MX500 2TB
    Asset Drive: Western Digital WD Blue 2TB
    Operating System: Windows 10 Pro 2004
    Nvidia Drivers Version: 456.38
    Daz Studio Version: 4.12.2.51

    Benchmark Results:

    Total Rendering Time: 2 minutes 12.94 seconds
    CUDA device 0 (GeForce RTX 3090): 1800 iterations, 2.356s init, 127.220s render
    Iteration Rate: 14.148 iterations per second
    Loading Time: 5.720 seconds

  • RayDAnt Posts: 1,120

    FYI the 3080 and 3090 are now in the tables at the beginning of the thread.

  • Finally! Not going to lie, this is a hair under what I was expecting based off of the Iray blog's benchmarks for the 3080, but maybe that's due to some other factor. I should check how my 1080Ti holds up between this beta and the previous version of Daz. Also, my CPU etc is a bit slow. 

    System Configuration
    System/Motherboard: ASRock B450 ProM
    CPU: AMD Ryzen 7 1700x @ Stock
    GPU: NVIDIA RTX 3090 FE @ Stock
    System Memory: 48GB DDR4 @ 2400 Mhz (Various)
    OS Drive: Adata SX8200NP 512GB M.2 NVMe SSD
    Asset Drive: WDC WD10EZEX-00RKKA0
    Operating System: Windows 10  Pro 
    Nvidia Drivers Version: 456.71
    Daz Studio Version: DAZ Studio 4.12.2.51 64 BITS
    Optix Prime Acceleration: N/A

    Benchmark Results

    2020-10-13 20:42:02.046 Finished Rendering
    2020-10-13 20:42:02.083 Total Rendering Time: 2 minutes 19.29 seconds
    2020-10-13 20:42:36.374 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2020-10-13 20:42:36.374 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 2.596s init, 132.854s render


    Iteration Rate: (1800 / 132.854) = 13.549 iterations per second
    Loading Time: ((0 + 120 + 19.29) - 132.854) = 6.436 seconds

  • Decided to throw my hat in the ring  with my major system update (new MB, CPU, & memory).  Still waiting on the right GPU (hopefully a 3070 16GB) to be released.  This is with my GTX 1070 with the updated system build and the latest released Geforce Studio driver.

    System Configuration
    System/Motherboard: Gigabyte X570 Aorus PRO WIFI
    CPU: AMD Ryzen 7 3700X @ 3.6 (boosts to 4.3)
    GPU: EVGA GTX 1070 GAMING @ 1506 MHz
    System Memory: G.Skill 32 GB Trident Z Neo DDR-4 3200 memory @ SPEED
    OS Drive: Samsung 860 EVO 1TB SSD
    Asset Drive: 2 x Western Digital Blue WD20EZRZ 2TB HDD in mirrored array
    Operating System: Windows 10 Professional 64 1909
    Nvidia Drivers Version: 456.38 Studio Driver
    Daz Studio Version: 4.12.1.118
    Optix Prime Acceleration: N/A

    Benchmark Results
    2020-10-14 09:34:06.371 Finished Rendering
    2020-10-14 09:34:06.403 Total Rendering Time: 14 minutes 37.83 seconds
    2020-10-14 09:37:43.976 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:
    2020-10-14 09:37:43.976 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1070):                 1800 iterations, 2.427s init, 872.484s render
    Iteration Rate: 2.0630750822 iterations per second
    Loading Time: ((0 + 840 + 37.83) - 872.484) = 5.346 seconds

     

  • johndoe_36eb90b0 Posts: 235
    edited October 2020

    @RayDAnt: Just checked, TCC mode is not available for 3090 RTX.

    System Configuration
    System/Motherboard: ASUS PRIME X299-A
    CPU: Intel Core i7-9800X @ 4.0 GHz
    GPU: MSI GeForce RTX 3090 VENTUS 3X 24G OC @ stock
    System Memory: Kingston Hyper-X Predator 64 GB DDR4 @ 3200 MHz
    OS Drive: Samsung 970 Pro 1 TB
    Asset Drive: Same
    Operating System: Windows 10 Enterprise 10.0.19041.572
    Nvidia Drivers Version: 456.38 Studio Driver WDDM
    Daz Studio Version: 4.12.2.51 Public Build
    Optix Prime Acceleration: N/A

    Benchmark Results

    2020-10-14 22:32:30.819 Finished Rendering
    2020-10-14 22:32:30.849 Total Rendering Time: 2 minutes 11.50 seconds
    2020-10-14 22:32:39.440 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Device statistics:
    2020-10-14 22:32:39.440 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 2.472s init, 126.296s render

    Iteration Rate: 14.25223284981314 iterations per second
    Loading Time: 5.204 seconds

  • RayDAnt Posts: 1,120

    @RayDAnt: Just checked, TCC mode is not available for 3090 RTX.

    As I expected. Thanks for checking!

  • RayDAnt Posts: 1,120
    cellsase said:

    Finally! Not going to lie, this is a hair under what I was expecting based off of the Iray blog's benchmarks for the 3080, but maybe that's due to some other factor. I should check how my 1080Ti holds up between this beta and the previous version of Daz. Also, my CPU etc is a bit slow. 

    Which Windows version are you on? So far, everyone else who's benched the latest beta has done it on W10 2004. That could be the source of the difference.

  • nonesuch00 Posts: 17,890
    edited October 2020

    Specifications

    System/Motherboard: Gigabyte B450M Ds3I WiFi
    CPU: AMD Ryzen 7 2700
    GPU: PNY GeForce GTX 1650 Super 4GB
    System Memory: 32 GB (2x16GB 2666 MHz) Patriot 
    OS Drive: Crucial 2TB Sata III SSD
    Operating System: Windows 10 build 2004 64bit
    Nvidia Drivers Version: Gaming 456.71 (20201008)
    Daz Studio Version: DAZ Studio Pro Public Beta 4.12.2.51
    Optix Prime Acceleration: N/A

    +++++ Benchmark +++++

    2020-10-14 18:27:08.309 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2020-10-14 18:27:08.309 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1650 SUPER): 1800 iterations, 0.286s init, 908.227s render

    +++++ Benchmark +++++

    So the render took about 15 minutes & 8.227 seconds.

  • Stuff Posts: 8
    edited October 2020

    3090 + 2080 Ti + Threadripper 3960X
    System/Motherboard: Aorus Extreme TRX 40
    CPU: Threadripper 3960X in Precision Boost Overdrive mode
    GPU: Inno3d iChill x3 3090 (stock settings and cooler) + Galax 1-Click OC 2080 Ti (stock settings and cooler)
    System Memory: 48Gb various G.Skill modules at 3200 XMP (2x 16Gb and 2x 8Gb in quad channel)
    OS Drive: Samsung 960 Pro NVMe SSD 1Tb
    Asset Drive: Same
    Operating System: Windows 10 64bit
    Nvidia Drivers Version: Studio Driver v 456.38
    Daz Studio Version: 4.12.2.51 (64-bit) Public Build
    Optix Prime Acceleration: Nope
     

    Benchmark Results - 3090 ONLY
    2020-10-15 10:12:02.728 Finished Rendering
    2020-10-15 10:12:02.769 Total Rendering Time: 2 minutes 14.6 seconds

    2020-10-15 10:12:16.733 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2020-10-15 10:12:16.733 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1800 iterations, 1.663s init, 129.815s render


    Iteration Rate: (DEVICE_ITERATION_COUNT [1800] / DEVICE_RENDER_TIME [129.815]) iterations per second = 13.86 ips
    Loading Time: ((TRT_HOURS * 3600 + TRT_MINUTES * 60 + TRT_SECONDS) - DEVICE_RENDER_TIME) seconds = 4.785 seconds

    Benchmark Results - 3090 + 2080 Ti + Threadripper 3960X
    2020-10-15 10:20:25.510 Finished Rendering
    2020-10-15 10:20:25.556 Total Rendering Time: 1 minutes 29.44 seconds
     

    2020-10-15 10:20:39.693 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : Device statistics:

    2020-10-15 10:20:39.693 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 0 (GeForce RTX 3090): 1066 iterations, 2.595s init, 84.021s render

    2020-10-15 10:20:39.696 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CUDA device 1 (GeForce RTX 2080 Ti): 567 iterations, 2.446s init, 84.195s render

    2020-10-15 10:20:39.696 Iray [INFO] - IRAY:RENDER ::   1.0   IRAY   rend info : CPU: 167 iterations, 1.952s init, 84.585s render


    Iteration Rate: (DEVICE_ITERATION_COUNT / DEVICE_RENDER_TIME) iterations per second = 21.28 ips
    Loading Time: ((TRT_HOURS * 3600 + TRT_MINUTES * 60 + TRT_SECONDS) - DEVICE_RENDER_TIME) seconds = 4.855 seconds

     

    Thanks for going to the trouble of setting this up RayDAnt.

  • outrider42 Posts: 3,679
    cellsase said:

    Finally! Not going to lie, this is a hair under what I was expecting based off of the Iray blog's benchmarks for the 3080, but maybe that's due to some other factor. I should check how my 1080Ti holds up between this beta and the previous version of Daz. Also, my CPU etc is a bit slow. 

     

    The benefits of RTX grow the more complex a scene is. This benchmark scene is kept somewhat small in order to work on lower end machines. If you have more geometry in your scenes, you should see a better uplift. 

    There was another benchmark thread with dforce strand hair where the RTX cards showed much bigger gains over non RTX cards. It is older now, but if you feel like trying it, I would be curious to see just what your 3090 can do here. https://www.daz3d.com/forums/discussion/344451/rtx-benchmark-thread-show-me-the-power/p1

    RayDAnt benched his Titan RTX with that scene, and hit over 22 iterations per second. We can no longer test this scene without RTX, as you know your 3090 will not run on previous versions.
