General GPU/testing discussion from benchmark thread


Comments

  • talidesadetalidesade Posts: 71
    edited July 2019

    I have one. "Bend of the River". https://www.daz3d.com/bend-of-the-river

    There are millions of vertices in the fully loaded environment. It is so heavy that the scene loads with most of the foliage turned off by default; otherwise most people's PCs cannot handle it, including mine. When fully loaded, Daz Studio comes to a crawl. The following scene barely fits my 1080ti. If I have a browser open, it will fall to CPU. I have to close everything to render. HOWEVER, the new 4.12 beta has made this scene so much more responsive. Even with everything visible, DS is now usable again. And I also noticed that I can now have a browser open and still render; it's like it doesn't use as much VRAM as it did before.

    Here's the scene info:

    Here is the image I made. The environment is very large; there is a lot there that is not in my render. You could probably subdivide the rocks to get even more angles.

    Post edited by talidesade on
  • talidesadetalidesade Posts: 71

    And like, here's my bench for 4.12 beta:

    i5 4690

    32 GB RAM

    Samsung 500GB SSD, Western Digital 1TB External

    1080ti EVGA SC2 Black 1947 mhz

    1800 iterations, 5.913s init, 457.867s render

    Total Rendering Time: 7 minutes 47.61 seconds

  • Takeo.KenseiTakeo.Kensei Posts: 1,303

    One thing I wonder about: with Embree being made by Intel, and here it is inside Nvidia software... that might mean we have a chance of seeing Intel's future GPUs support CUDA. Now that would be cool; you would have a genuine choice in the market, assuming the Intel cards are any good. Intel's GPU is quite a wild card; we really have no idea what they are doing. But Intel has the ability to really shake things up if they want to, given they are so much larger than AMD and Nvidia combined. How will they segment their cards? It may just be possible they attempt to do things outside the box to get attention for their new brand.

    That's all for 2020, but I can't help but wonder.

    Being a larger company doesn't warrant anything. Intel had their ass kicked by AMD lately. I don't know what to expect from them. Remember they made the Xeon Phi, and I'm not sure it was successful. At least they are a bit less prone than Nvidia to use closed tech. So in terms of software support you can at least expect multi-GPU DXR support, and certainly Vulkan too. The big question is whether they are going to concentrate on gaming performance in order to build up, or whether they will extend their hardware and build a software ecosystem for high-performance computing and bring an open alternative to CUDA (they support OpenCL 2.0, for example).

     

    RayDAnt said:

    So... anybody have recommendations for particular Daz Studio compatible scenes/sets (either from the Daz Store or other sources) that are reeeeeaaally big/difficult to render? I'm looking for ways to study RTX acceleration more thoroughly, and I suspect there is a direct correlation between scene asset size/complexity and how much RTX/RT Cores speed up the rendering process (hence why all the freely available benchmarking scenes - including this one just released by me - show so little difference with it versus without.)

    Didn't have time yet, but check Nvidia's ORCA scenes: https://developer.nvidia.com/orca

    I was thinking of eventually building a scene with some SpeedTree assets if necessary, but the NVIDIA Emerald Square city scene could be a good bet.

  • outrider42outrider42 Posts: 3,679
    That's because they got lazy as the market leader, and while it's easy to praise what AMD has done lately, AMD did no favors by not competing for, oh, a decade. Intel didn't need to do anything, so they stayed the course. Ryzen came out of nowhere and caught Intel napping.

    But even so, one thing that gets forgotten is that even with all the good things the new Ryzen does, Intel STILL holds the performance crown in both gaming and workstation. Even with AMD's very best 12 core Ryzen, Intel's 9900k is still the best CPU for gaming. Every gaming benchmark shows the 9900k win most of the time. AMD does score some wins, and in other games they have closed the gap, but AMD still cannot claim to be the best gaming CPU. Price to performance? Sure. And lots of productivity tasks, too. But for hardcore gamers who want the most frames per second possible, the 9900k is still the best choice. And we must remember that this is with AMD on 7nm, while the 9900k is only on 14nm. If Intel still holds a performance edge with a 14nm chip... AMD might be in trouble if Intel ever gets 10nm out the door.

    Even in workstations, while it is true AMD has made massive strides and has much cheaper chips with tons of cores, the high end Xeons still hold a pretty big IPC advantage and still beat the best stuff AMD has to offer. The only reason why Intel is losing here is because of their high prices.

    So while AMD is gaining tons of momentum and selling quite well, they really aren't doing that great. AMD has a node advantage against both Intel and Nvidia. Yet AMD cannot touch Nvidia in the GPU space, and AMD still cannot outright beat the 14nm 9900k. Turing is on 12nm.

    Let's also remember that Intel's node is regarded as superior; Intel's 10nm should be as good as or better than AMD's 7nm. And that is exactly what the Intel GPU may be based on. If the Intel GPU is based on 10nm, then Intel will have no problem dispatching AMD in the GPU market. They may have a tougher time competing against Nvidia, but we have to be honest and realize that AMD GPUs are at least a generation behind Nvidia even with Navi. And we have to realize that while things are good for AMD right now on the CPU front, that could all change the instant Intel finally reaches 10 or 7nm. And I say that as someone who wants to buy a Ryzen 3900x soon.
  • Takeo.KenseiTakeo.Kensei Posts: 1,303
    That's because they got lazy as the market leader, and while it's easy to praise what AMD has done lately, AMD did no favors by not competing for, oh, a decade. Intel didn't need to do anything, so they stayed the course. Ryzen came out of nowhere and caught Intel napping.

    AMD has already been outselling Intel for two years now. They're still napping.

    But even so, one thing that gets forgotten is that even with all the good things the new Ryzen does, Intel STILL holds the performance crown in both gaming and workstation. Even with AMD's very best 12 core Ryzen, Intel's 9900k is still the best CPU for gaming. Every gaming benchmark shows the 9900k win most of the time. AMD does score some wins, and in other games they have closed the gap, but AMD still cannot claim to be the best gaming CPU. Price to performance? Sure. And lots of productivity tasks, too. But for hardcore gamers who want the most frames per second possible, the 9900k is still the best choice. And we must remember that this is with AMD on 7nm, while the 9900k is only on 14nm. If Intel still holds a performance edge with a 14nm chip... AMD might be in trouble if Intel ever gets 10nm out the door.

    It's not about the performance crown. On price-to-performance ratio they're flat-out beaten, and that's why AMD has made a comeback. The first Ryzen was further behind in gaming performance, but the price/performance balance convinced people to buy it. If you have to pay $1000+ more to get a few more fps, very few people will do it.

    Plus, you're the one also advising the AMD platform because of its compatibility in the long run. The FPS crown is not the ultimate selling factor.

    Even in workstations, while it is true AMD has made massive strides and has much cheaper chips with tons of cores, the high end Xeons still hold a pretty big IPC advantage and still beat the best stuff AMD has to offer. The only reason why Intel is losing here is because of their high prices.

     

    So while AMD is gaining tons of momentum and selling quite well, they really aren't doing that great. AMD has a node advantage against both Intel and Nvidia. Yet AMD cannot touch Nvidia in the GPU space, and AMD still cannot outright beat the 14nm 9900k. Turing is on 12nm.

     

    Let's also remember that Intel's node is regarded as superior; Intel's 10nm should be as good as or better than AMD's 7nm. And that is exactly what the Intel GPU may be based on. If the Intel GPU is based on 10nm, then Intel will have no problem dispatching AMD in the GPU market. They may have a tougher time competing against Nvidia, but we have to be honest and realize that AMD GPUs are at least a generation behind Nvidia even with Navi. And we have to realize that while things are good for AMD right now on the CPU front, that could all change the instant Intel finally reaches 10 or 7nm. And I say that as someone who wants to buy a Ryzen 3900x soon.

    In the GPU field, I already told you last year that AMD is far behind Nvidia. That's nothing new, 2020 won't change anything, and at least you realised that. Intel being new, they'll have to be utterly good and impress everybody to be able to shift people's decisions, but that will be hard. People don't buy Nvidia's GPUs just for gaming (that's why I talked about ecosystem). And there is also the question of their motivation: what is their ultimate goal? Is it to take the ultimate performance crown (which I doubt), or do they have a certain market as a target (e.g. consoles)?

  • LCJDLCJD Posts: 13
    RayDAnt said:

    So... anybody have recommendations for particular Daz Studio compatible scenes/sets (either from the Daz Store or other sources) that are reeeeeaaally big/difficult to render? I'm looking for ways to study RTX acceleration more thoroughly, and I suspect there is a direct correlation between scene asset size/complexity and how much RTX/RT Cores speed up the rendering process (hence why all the freely available benchmarking scenes - including this one just released by me - show so little difference with it versus without.)

    I've found this article about the new Iray RTX performance impact. It could help your study.

    https://www.migenius.com/articles/rtx-performance-explained

  • Daz Jack TomalinDaz Jack Tomalin Posts: 13,119
    DAZ_Rawb said:
    RayDAnt said:

    Some issues to keep in mind while playing around with this beta if you're a Turing owner (copied from the official Iray changelog thread):

    Known Issues and Restrictions
    • This beta release only works with driver version R430 GA1.
    • Multi-GPU support for multiple Turing GPUs is partially broken: Iray Photoreal and Iray Interactive may trigger the fallback to CPU rendering if multiple Turing GPUs are present in a system. It is recommended for the beta to enable only one Turing board.
    • Performance on Turing boards is not yet fully optimized (e.g., compilation overhead on each startup and some scene changes in Iray Photoreal, and geometry overhead and rendering slowdowns in some scenes for Iray Interactive). Performance on non-Turing boards should not be affected.
    • In Iray Interactive, invisible geometry through cutouts remains visible.
    • Support for SM 3.X/Kepler generation GPUs is marked as deprecated, and it will most likely be removed with the next major release.
    • The Deep Learning based postprocessing convergence estimate only works if the Deep Learning based denoiser is disabled.
    • The Deep Learning based postprocessing to predict when a rendering has reached a desired quality is not yet implemented.
    • macOS is not supported.
    • The new hair field in the material structure and the new hair_bsdf type from the MDL 1.5 Specification are not yet supported by the MDL compiler.
    • The new MDL 1.5 distribution function modifier df::measured_factor only supports one dimension of the texture in Iray Interactive.

    FYI, those are notes for the Iray 2019 beta, all of the known issues have been resolved (AFAIK) in the final release.

     

    So to be clear, the 4.12 Beta is using the final version of Iray 2019 that fixes these bugs? Because some of these are pretty serious bugs.

    Yeah, I saw the multi-GPU bug here myself with the 4.11 general release; I will be testing the 4.12 beta out later today to see if it's fixed.

  • outrider42outrider42 Posts: 3,679
    That's because they got lazy as the market leader, and while it's easy to praise what AMD has done lately, AMD did no favors by not competing for, oh, a decade. Intel didn't need to do anything, so they stayed the course. Ryzen came out of nowhere and caught Intel napping.

    AMD has already been outselling Intel for two years now. They're still napping.

    But even so, one thing that gets forgotten is that even with all the good things the new Ryzen does, Intel STILL holds the performance crown in both gaming and workstation. Even with AMD's very best 12 core Ryzen, Intel's 9900k is still the best CPU for gaming. Every gaming benchmark shows the 9900k win most of the time. AMD does score some wins, and in other games they have closed the gap, but AMD still cannot claim to be the best gaming CPU. Price to performance? Sure. And lots of productivity tasks, too. But for hardcore gamers who want the most frames per second possible, the 9900k is still the best choice. And we must remember that this is with AMD on 7nm, while the 9900k is only on 14nm. If Intel still holds a performance edge with a 14nm chip... AMD might be in trouble if Intel ever gets 10nm out the door.

    It's not about the performance crown. On price-to-performance ratio they're flat-out beaten, and that's why AMD has made a comeback. The first Ryzen was further behind in gaming performance, but the price/performance balance convinced people to buy it. If you have to pay $1000+ more to get a few more fps, very few people will do it.

    Plus, you're the one also advising the AMD platform because of its compatibility in the long run. The FPS crown is not the ultimate selling factor.

    Even in workstations, while it is true AMD has made massive strides and has much cheaper chips with tons of cores, the high end Xeons still hold a pretty big IPC advantage and still beat the best stuff AMD has to offer. The only reason why Intel is losing here is because of their high prices.

     

    So while AMD is gaining tons of momentum and selling quite well, they really aren't doing that great. AMD has a node advantage against both Intel and Nvidia. Yet AMD cannot touch Nvidia in the GPU space, and AMD still cannot outright beat the 14nm 9900k. Turing is on 12nm.

     

    Let's also remember that Intel's node is regarded as superior; Intel's 10nm should be as good as or better than AMD's 7nm. And that is exactly what the Intel GPU may be based on. If the Intel GPU is based on 10nm, then Intel will have no problem dispatching AMD in the GPU market. They may have a tougher time competing against Nvidia, but we have to be honest and realize that AMD GPUs are at least a generation behind Nvidia even with Navi. And we have to realize that while things are good for AMD right now on the CPU front, that could all change the instant Intel finally reaches 10 or 7nm. And I say that as someone who wants to buy a Ryzen 3900x soon.

    In the GPU field, I already told you last year that AMD is far behind Nvidia. That's nothing new, 2020 won't change anything, and at least you realised that. Intel being new, they'll have to be utterly good and impress everybody to be able to shift people's decisions, but that will be hard. People don't buy Nvidia's GPUs just for gaming (that's why I talked about ecosystem). And there is also the question of their motivation: what is their ultimate goal? Is it to take the ultimate performance crown (which I doubt), or do they have a certain market as a target (e.g. consoles)?

    Of course the market is more than just gaming, but the gaming performance is what often gets noticed. Price to performance does not always win, otherwise AMD would be winning against Nvidia right now. But they aren't. AMD has several very strong price to performance victories in the low to mid range, but that hasn't done them a whole lot of good as Nvidia still outsells them at nearly every level, even in segments where AMD has better value. It hasn't been a big secret, either, as you can find many benchmarks across different media that discuss how good AMD's value is vs Nvidia. The problem for AMD is mind share, and they are having a tough time convincing people to buy their GPUs.

    Even Ryzen had trouble selling at first. It took time for Ryzen to start selling well. And now things have snowballed. But again, with all of Ryzen's apparent success, AMD has only made a small dent in their overall market share vs Intel. Intel at one point had over 80% of the entire market according to multiple sources. Even right now with how popular Ryzen has been, it still very strongly favors Intel.

    You mention the market is more than just gaming; well, the market is more than just desktop, too. Laptops still very strongly favor Intel, and AMD has not done nearly as well with mobile Ryzen. And in the workstation space, AMD is only scratching at Intel's monstrous lead.

    Price to performance only gets you so far; you have to win the mind share of people first. It seems silly, but that's how it is. AMD's comeback has been a perfect storm of things. While Ryzen has been good, Intel has had its strange struggle with 10nm. They've been behind schedule for years. And ultimately this has allowed AMD to jump in and do what they are doing. But make no mistake here: if Intel was at 10nm, NONE of this would be happening. If Intel was at 10nm, they could have responded quickly with better chips, but as they are stuck at 14nm, this has made it difficult for them. But even with 14nm, the 9900k still has a performance lead in many single-threaded tasks, and in tasks that value latency, like video games. AMD has not solved their latency issue, which is why they still cannot beat the 9900k at gaming.
  • Takeo.KenseiTakeo.Kensei Posts: 1,303

    Then we'll agree that there are many factors and even with the best product and the best price you could still fail at selling it

    That's why I don't think it will be that easy for Intel

  • LenioTGLenioTG Posts: 2,118
    edited July 2019

    I've checked it out myself: it's noticeably faster with RTX GPUs!

    It's a superficial test, I know, but it works for me.

    I've used the same scene (attached), which is one that I had actually rendered some weeks ago. This is that version, but now I've rendered it GPU only (RTX 2060) until iteration 500 in both 4.11 and 4.12; everything else is the same.

    • 4.11 - 14min 57s
    • 4.12 - 11min 16s

    SPEED IMPROVEMENT: 33%

    PS: please don't explain to me why this test is wrong on who knows how many levels, I don't want to join the aggressive discussion. I just wanted to check it out on an actual scene myself, as the noob I am xD
    I hope this data will be helpful to other people who don't want to spend half an hour this way, or who don't have an RTX card yet.
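
    As a rough cross-check of the figure above, here is a minimal sketch (just arithmetic on the two reported times, nothing Daz-specific; the variable names are my own):

    # Minimal sketch: convert the two reported render times to seconds and compare.
    t_411 = 14 * 60 + 57   # 4.11: 14 min 57 s -> 897 s
    t_412 = 11 * 60 + 16   # 4.12: 11 min 16 s -> 676 s

    speedup = t_411 / t_412          # ~1.33x, i.e. roughly 33% faster
    time_saved = 1 - t_412 / t_411   # ~25% less wall-clock time

    print(f"Speedup: {speedup:.2f}x, time saved: {time_saved:.0%}")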

    098_Denoised.png
    2560 x 1440 - 3M
    Post edited by LenioTG on
  • outrider42outrider42 Posts: 3,679

    Then we'll agree that there are many factors and even with the best product and the best price you could still fail at selling it

    That's why I don't think it will be that easy for Intel

    If they don't overtake AMD in GPU market share in 2-3 years I would be very surprised. There are only 2 reasons why they would fail to do so: they overprice their products (which wouldn't be too surprising), or AMD pulls a rabbit out of a hat and actually sells a top-tier GPU. AMD just doesn't have the brand power in GPUs anymore. And it took AMD a couple of years to rebuild their brand power in CPUs. That's also why Intel is still in good shape for now. And nearly every gaming benchmark you find for GPUs will use the 9900k as its test bench, because it is the fastest CPU. They do this to remove as much bottleneck as possible. But what this also does is keep Intel's brand in the public eye. If popular tech outlets start to switch to Ryzen for GPU benchmarking, then we'll know for sure that the market has truly swung. That may yet happen... but for now it is still Intel.

    It is not so far-fetched to think that in 2020 or 2021 one of the best PC configurations you can build might just be an AMD CPU with an Intel GPU. Just a few years ago this idea was unthinkable. AMD was dead in the water a few years ago, and the only thing that kept them afloat (there are people from AMD on record who say this) was their contracts with Sony and Microsoft for consoles. AMD has been supplying both CPUs and GPUs to Sony and MS for some time, and these guaranteed sales are what helped AMD fund the development of Ryzen. Without Sony and Microsoft, AMD may not even be in business right now. There is a lot of speculation that Sony even helped directly finance some of Navi's development.

    That's just how bad things were for AMD; I don't think many people understand just how dire things were looking before Ryzen. That's also why I am not sold on AMD's continued success beyond 2021. They have to take full advantage of the golden situation in front of them right now, or they might still very well fall out of the GPU market. And once Intel reaches 10nm, AMD will have a very difficult battle on the CPU side as well. Don't get me wrong, I am rooting for AMD to stay competitive because I want competition. This market needs competition; the lack of it is why we have $1200 2080tis. It is in all of our best interest that competition be strong.

  • Robert FreiseRobert Freise Posts: 4,260

    AMD's 3rd-gen Ryzen is going to be a 7nm chip.

  • outrider42outrider42 Posts: 3,679

    Yes, it is. But Intel's 10nm is as good as or even better than AMD's 7nm; this is no idle speculation, this has been reported on. Not all process nodes are equal; there is no standard for how they measure this, and each manufacturer may be different in their definition. Intel's process has been superior for years. And even so, any chip is only as good as its design. A perfect example is AMD's Radeon VII. The Radeon VII was the world's first 7nm GPU, and yet it could only match a 2080 at best, and the 2080 is on a 12nm node. To be frank, that should be embarrassing for AMD. They put everything they had into the VII, and that was the best they could do? Match a 2080, which BTW matches a 1080ti. A 1080ti that was over 2 years old. It took AMD over 2 years to reach 1080ti performance! There is no better evidence at hand to demonstrate just how far behind AMD is than the Radeon VII. Now they have Navi, but once again, with 7nm and a new GPU architecture, they still cannot beat a 2080. By the time AMD releases their so-called "big Navi" GPUs, Nvidia will already be prepping its next-generation follow-up to Turing. So if AMD finally catches up to the 2080ti, Nvidia will be ready to top it by a mile. That is not good for AMD at all.

    Plus, as I already said, AMD's very best 7nm CPU still cannot beat Intel's 14nm 9900k in single-threaded tasks that rely on low latency. It gets worse in the data center market. The only reason AMD is doing better here is their lower prices.

  • Robert FreiseRobert Freise Posts: 4,260

    Yes, it is. But Intel's 10nm is as good as or even better than AMD's 7nm; this is no idle speculation, this has been reported on. Not all process nodes are equal; there is no standard for how they measure this, and each manufacturer may be different in their definition. Intel's process has been superior for years. And even so, any chip is only as good as its design. A perfect example is AMD's Radeon VII. The Radeon VII was the world's first 7nm GPU, and yet it could only match a 2080 at best, and the 2080 is on a 12nm node. To be frank, that should be embarrassing for AMD. They put everything they had into the VII, and that was the best they could do? Match a 2080, which BTW matches a 1080ti. A 1080ti that was over 2 years old. It took AMD over 2 years to reach 1080ti performance! There is no better evidence at hand to demonstrate just how far behind AMD is than the Radeon VII. Now they have Navi, but once again, with 7nm and a new GPU architecture, they still cannot beat a 2080. By the time AMD releases their so-called "big Navi" GPUs, Nvidia will already be prepping its next-generation follow-up to Turing. So if AMD finally catches up to the 2080ti, Nvidia will be ready to top it by a mile. That is not good for AMD at all.

    Plus, as I already said, AMD's very best 7nm CPU still cannot beat Intel's 14nm 9900k in single-threaded tasks that rely on low latency. It gets worse in the data center market. The only reason AMD is doing better here is their lower prices.

    True, but one also has to consider industry support, which Intel has had by the balls for years.

  • RayDAntRayDAnt Posts: 1,120
    edited July 2019
    ebergerly said:

    I’ve been kinda scratching my head trying to figure out what we’re trying to accomplish with this effort. Apparently it was mentioned in another thread that we’re trying to make a comparison guide for users to see how well the cards perform to help in purchasing decisions?

    But the excellent migenius article basically says that Iray/RTX performance is totally dependent upon scene content (“complexity”), and the only way to benchmark RTX/Iray performance is by each user trying RTX/Iray out on scenes that represent their typical complexity (whatever that means) to get a rough idea. And they’re saying the performance range they’ve seen is a staggering 5% - 300%, depending on scene content.

    So why would we gather so much detailed hardware/software info to come up with a highly accurate render time comparison, down to a fraction of a second, for a single scene (which may or may not take much advantage of the RTX technology), when users’ actual results for their personal scenes could be anywhere in that 5% - 300% range? We get GPU render time comparisons accurate to 0.01 second for this benchmark scene using all of this detailed data, but a user may find their results for their particular scenes are not even close to what these benchmarks indicate. “Hey, you said there’s 300% improvement, how come I’m only getting 5% on my scenes?”.

    And it sounds like we’re trying to find the most “complex” scene in the store as a benchmark, and I doubt we even know what scene characteristics are needed to get the most RTX/Iray performance gain. And why do we even care about the complex scene and the most performance gain, when it may be irrelevant to the average user?

    Personally, I think the migenius article puts the nail in the coffin of standard, single scene benchmarks like this. Like I said long ago, this stuff is incredibly complex, and as you start to add other RTX-related features into the mix (Tensor de-noising, materials, physics), it becomes virtually impossible to find a single (or two, or three) benchmarks to be of much (any) use. My biggest surprise with the migenius article was that I wasn’t conservative enough in suggesting these benchmarks are, at best, ballparks within maybe +/- 15 seconds or 10% or whatever for comparison purposes. Wow, 5% to 300% variation? Amazing.

    There's a very simple answer to the general question: "Why do people run benchmarks?"

    Because it's fun!

    You are free to participate - or not - as you see fit. Although I highly encourage you/people to give it a whirl regardless if you've got the time. It gives those of us who find analyzing this stuff interesting/useful more to play with.

    Post edited by RayDAnt on
  • ebergerlyebergerly Posts: 3,255
    edited July 2019

    Nothing wrong with having fun. I just don't want unsuspecting users here being misled into thinking that any of this data actually means something when they're looking for advice on what GPU to buy.

    As a matter of fact, I recall someone recently referred to posting conflicting render times that disagreed by only 20 seconds as making "a grave mistake and doing everybody injustice". 

    I tend to agree. And if now we're talking about render times disagreeing by 5% to 300% depending on content, that's like really unjust, IMO. Heck, that could be MANY MINUTES of difference, not just seconds. 

     

    Post edited by ebergerly on
  • RayDAntRayDAnt Posts: 1,120
    edited July 2019
    ebergerly said:

    Nothing wrong with having fun. I just don't want unsuspecting users here being misled into thinking that any of this data actually means something when they're looking for advice on what GPU to buy.

    ...it does mean something. Just not everything. Something I'm pretty sure anyone with half a brain knows. ESPECIALLY if they're in the habit of reading discussion threads like this one.

    A benchmark is just a type of ruler (the measurement device - not the type of person who rules over other people.) Telling people they shouldn't make use of benchmarking information in buying decisions because the information they give might be inaccurate makes exactly as little sense as telling people they shouldn't make use of rulers in cutting wood because the measurements they give might not be accurate.

    Smart woodcutters make multiple redundant measurements and cut once. Smart graphics cards buyers peruse multiple redundant benchmarks and buy once.

    Post edited by RayDAnt on
  • outrider42outrider42 Posts: 3,679
    ebergerly said:

    Nothing wrong with having fun. I just don't want unsuspecting users here being misled into thinking that any of this data actually means something when they're looking for advice on what GPU to buy.

    As a matter of fact, I recall someone recently referred to posting conflicting render times that disagreed by only 20 seconds as making "a grave mistake and doing everybody injustice". 

    I tend to agree. And if now we're talking about render times disagreeing by 5% to 300% depending on content, that's like really unjust, IMO. Heck, that could be MANY MINUTES of difference, not just seconds. 

     

    Ah, so you will update your list to show the correct render times instead of one bad anomaly, then! :D

    Also, nobody here has ever said or claimed that you will always get X as the result. If you can kindly point us to the quote where somebody said that all users and all scenes would see the same performance gain from RTX, then OK, you may have a point. But otherwise, it's time to drop this very tired subject.

    Can you blame people for trying to find answers for how RTX performs for Daz Studio? While migenius has done good work, and I bring them up all the time, they do not run Daz Studio. Migenius has nothing to do with the Daz asset store. When they say it takes a complicated scene to see the impact of RTX, well the obvious question is "how complicated?" To narrow that question down, "how complicated by Daz standards?" I don't believe the attempt to pursue this is pointless. Outside of migenius, nearly every search of Iray on the internet will lead you to these forums. Much of the knowledge gained has come directly from people in these forums performing a variety of tests for themselves and sharing it with the world.

    Nothing ventured, nothing gained.

    Also, while the new ray tracing core performance may fluctuate, Iray CUDA performance scales extremely well. A card's CUDA performance will scale across different scenes, and the difference between one GPU and the next is very consistent. Thus there is still a baseline for the performance from these benchmarks. If it is shown that RTX has little to no impact on the benchmark scene, well, that makes it quite easy then. That would provide a baseline level of performance one can expect from RTX. And then they may see extra performance on top of that depending on the scene complexity.

    That doesn't seem so complicated to me.

  • Takeo.KenseiTakeo.Kensei Posts: 1,303

    Can someone test this scene in DS 4.11 and DS 4.12 please ?

    I get 20x speed improvement between the two versions and would like to see how it goes on other cards especially with non RTX cards

    It's basically the SY scene modified with a strand hair asset which is configured to be cached on RAM for rendering (don't know if that may be a problem)

    thanks

    https://www.sendspace.com/file/b78062

     

  • CinusCinus Posts: 118

    @RayDAnt

    I agree that the benchmarks have value, and the more people post their results the more useful the data becomes, but if you ask for too much info people will not bother to post results. I have run your benchmark test but have not posted my results because I think you are asking for way too much irrelevant information and I think your test has a serious flaw (more about that below).

    Most people are probably not looking to buy a complete system; they are just looking for video card data, so the test should be as simple as possible, eliminate other factors as much as possible, and test the video card performance only.

    I think the data points that matter for this kind of benchmark are:

    Video Card (make and model)
    Nvidia Driver Version
    Daz Studio Version
    Optix (on or off)
    CPU (Make, model and clock speed)
    Amount of System Ram (I actually doubt this plays any major factor, so can probably be left off)
    Total render time (As reported by the log file)

    As suggested by outrider42 it would be better to start the test from the Iray preview (after it starts the actual render) instead of the Texture Shaded preview to eliminate as much of the system overhead as possible.

    Now for the serious flaw. Using a set number of iterations to terminate the test is not a good idea. The amount of work done per iteration can change with each driver version (and it has happened in the past, see link below), so it's possible that a new driver can achieve a certain level of convergence with a lot fewer (or more) iterations. What matters is not the number of iterations per second, but the time it takes to render a scene to a certain level of convergence (with a given quality setting).

    https://blog.irayrender.com/post/150402147761/iterations-and-sample-counts-vs-convergence-speed
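
    For anyone who does collect the data points listed above, here is a minimal sketch of pulling the total render time out of the log. It only assumes a line in the "Total Rendering Time: X minutes Y.YY seconds" form quoted earlier in this thread, and the helper name is my own:

    import re

    # Minimal sketch: convert a "Total Rendering Time" log line into seconds.
    # Assumes the "X minutes Y.YY seconds" form seen earlier in this thread;
    # logs with an hours component would need an extra group.
    def total_render_seconds(log_text):
        m = re.search(r"Total Rendering Time:\s*(?:(\d+)\s*minutes?\s*)?([\d.]+)\s*seconds?", log_text)
        if not m:
            return None
        minutes = int(m.group(1)) if m.group(1) else 0
        return minutes * 60 + float(m.group(2))

    print(total_render_seconds("Total Rendering Time: 7 minutes 47.61 seconds"))  # 467.61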

  • LenioTGLenioTG Posts: 2,118

    Can someone test this scene in DS 4.11 and DS 4.12 please ?

    I get 20x speed improvement between the two versions and would like to see how it goes on other cards especially with non RTX cards

    It's basically the SY scene modified with a strand hair asset which is configured to be cached on RAM for rendering (don't know if that may be a problem)

    thanks

    https://www.sendspace.com/file/b78062

    Done!

    OMG: not only did it render much faster, but it also opened the scene in a fraction of the time!

    • 4.11 - 9 minutes 54.12 seconds
    • 4.12 - 1 minute 9.44 seconds
    • Speed Improvement: 8.6x

    I have a RTX 2060.

    Test.png
    400 x 520 - 276K
  • RayDAntRayDAnt Posts: 1,120
    edited July 2019
    Cinus said:

    I have run your benchmark test but have not posted my results because I think you are asking for way too much irrelevant information

    The new benchmarking thread/scene/reporting method I constructed and posted here is designed very specifically to address the various major shortcomings of Sickleyield's venerable first benchmarking thread. One of the most significant of those has turned out to be the general lack of basic system data (mainly regarding OS/driver/DS versions, but also certain core system hardware elements) that has since UNEXPECTEDLY proven key to causing MAJOR rendering performance differences (and confusion about accurate measurement) over the years.

    One of the most drastic of these has been the general adoption of solid state storage devices (SATA/NVMe drives) over physical spinning hard drives. Having your operating system/programs/3D assets always on solid state majorly speeds up the pre-rendering initialization time needed by Iray to merely start rendering. I personally read and analyzed each and every single post in Sickleyield's original thread as prep for this. And in its early posts especially you see a lot of focus on "pre-loading" scene content into VRAM via the use of Iray preview in an effort to get more accurate pure rendering times. This is because, back in 2015, the double threat of spinning disks and pre-Windows 10 disk-to-memory caching schemes meant that you could easily see up to half a minute or more of extra time disproportionately being lumped into people's measured rendering times (as indicated by Total Rendering Time in the log file). Which, in the case of older GPU hardware where the benchmarking scene could easily take an hour or more to complete, wasn't such a big deal (30 seconds of slippage is less than 1% of an hour, and a margin of error of 1% is more than acceptable). However, as cards complete faster, that potential 30 seconds extra becomes an increasingly bigger deal statistically (e.g. for a 10 minute render, 30 seconds of slippage is now a margin of error of 5% - entering questionable territory).

    With the exception of very high-end current GPUs like the 2080Ti or Titan RTX (which I myself own), SSDs and current Windows almost completely resolve these memory loading time issues adversely influencing rendering times. Because, as borne out by the "Loading Time" column stats on the far right of each table in this post, asset loading times even for the scene I created (which is significantly larger than Sickleyield's original) are 5 seconds or less.

    But what about all those 6-10 second Loading Time results in those same tables, you say? Check the Contributor name column. Then scroll down to those posts and notice what all those posters' "irrelevant" information has in common. That's right - they're all using multi-terabyte spinning disks to store their 3D assets.

    Hence why I'm asking for all that "irrelevant" information. So that - when it inevitably becomes relevant - it's already there. I can't make you report it. But it'd be smart/nice of you, for the sake of future DS/Iray newbies, to do so...

     

    As suggested by outrider42 it would be better to start the test from the Iray preview (after it starts the actual render) instead of the Texture Shaded preview to eliminate as much of the system overhead as possible.

    There is no system overhead whatsoever in the measured rendering time if you are going by the last of the following set of statistics:

    2019-03-15 00:32:31.455 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Device statistics:
    2019-03-15 00:32:31.455 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : CUDA device 0 (GeForce GTX 1050):   1000 iterations, 6.813s init, 1584.264s render

    as reported after every completed render by Iray in your Daz log file. Because these numbers come directly from the Iray kernel itself - not Daz Studio or your operating system. outrider42's suggestion is a solution for a problem that, in this particular data reporting design, doesn't exist. Hence why I chose to use it.
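
    As an aside, that per-device line is also easy to pull out of the log automatically. A minimal sketch, assuming only the "CUDA device N (name): X iterations, Ys init, Zs render" format shown above (multi-GPU rigs simply produce one such line per active device):

    import re

    # Minimal sketch: parse Iray's per-device statistics lines from a Daz log.
    LINE = re.compile(r"CUDA device (\d+) \(([^)]+)\):\s+(\d+) iterations, ([\d.]+)s init, ([\d.]+)s render")

    def parse_device_stats(log_text):
        return [
            {"device": int(dev), "name": name, "iterations": int(iters),
             "init_s": float(init_s), "render_s": float(render_s)}
            for dev, name, iters, init_s, render_s in LINE.findall(log_text)
        ]

    sample = "CUDA device 0 (GeForce GTX 1050):   1000 iterations, 6.813s init, 1584.264s render"
    print(parse_device_stats(sample))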

     

    What matters is not the number of iterations per second, but the time it takes to render a scene to a certain level of convergence (with a given quality setting).

    First a quote:

    • Render Stop Condition: By default, Iray simultaneously enforces three different stop conditions by which to judge the "completion" point of a scene. These are: time, convergence ratio, and iteration count. Since the first two of these conditions are less granularly calculated by Iray than the third [...] a scene must be configured to use iteration count and iteration count only as its rendering stop condition in order to exhibit identical rendering behavior across a diverse set of testing hardware platforms

    Take any scene (such as Sickleyield's original benchmarking scene) that renders to completion based on a convergence limit (in this case, the Iray default of 95%) and render it in any version of Daz Studio with a single graphics card active for rendering. Now, open your log file and scroll down to the last render progress update before the "Convergence threshold reached" message, e.g.:

    2019-07-26 03:36:44.976 Iray VERBOSE - module:category(IRAY:RENDER):   1.0   IRAY   rend progr: 95.16% of image converged
    2019-07-26 03:36:44.976 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 04971 iterations after 55.015s.
    2019-07-26 03:36:44.987 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Convergence threshold reached.

    Notice how it finished at slightly more than 95% (95.16% to be exact) despite it being configured to stop at 95%. Now, render the same scene again on a completely different computer/graphics card setup and look at the same log file data (these code snippets come from my Titan RTX equipped main rendering rig and my lowly GTX 1050 2GB equipped Surface Book 2 respectively):

    2019-07-26 04:19:08.489 Iray VERBOSE - module:category(IRAY:RENDER):   1.0   IRAY   rend progr: 95.05% of image converged
    2019-07-26 04:19:10.311 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 04905 iterations after 1303.462s.
    2019-07-26 04:19:10.318 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Convergence threshold reached.

    Notice how iterations and convergence percentage BOTH end up being different values for each render despite 95% convergence being configured in both cases.

     

    Now, do the same exercise with a time-limited scene like Aala's 2-minute Cornell box (limited to 120 seconds). Titan RTX rig:

    2019-07-26 04:35:17.171 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 01635 iterations after 120.411s.
    2019-07-26 04:35:17.175 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Maximum render time exceeded.

    And GTX 1050 2GB Surface Book:

    2019-07-26 04:36:14.690 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 00178 iterations after 120.304s.
    2019-07-26 04:36:14.690 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Maximum render time exceeded.

    Notice how - once again - iterations and time (the supposedly fixed value here) both end up being different values despite identical render completion configurations.

     

    Finally, do the same exercise with a scene limited to iterations (in this case, 1800.) Titan RTX rig:

    2019-07-26 04:51:06.102 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 01800 iterations after 233.821s.
    2019-07-26 04:51:06.106 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Maximum number of samples reached.

    And the Surface Book w/ GTX 1050 2GB:

    2019-07-26 05:23:40.868 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 01800 iterations after 2347.403s.
    2019-07-26 05:23:40.868 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Maximum number of samples reached.

    And what do we have? Finally, the SAME VALUE SOMEWHERE/ANYWHERE between identically configured renders.

    Why does this matter? Because in comparative testing a fixed value such as convergence, time, or iteration count (naming each of the preceding sets of examples) is what is officially known as a Control value. And if a Control value is free to change outside of your control under different testing conditions (such as a change in underlying computing hardware) the whole test/benchmark categorically becomes invalid.

    Hence why the benchmarking scene included with Iray Server itself is iteration-count limited only. Hence why the benchmarking scene I created is iteration-count limited only. So that it's actually a statistically valid benchmark.
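
    To make the comparison concrete, a minimal sketch using the two 1800-iteration log excerpts above: once the iteration count is fixed, render time alone determines the result, and it can be expressed as an iterations-per-second rate for easy cross-hardware comparison:

    # Minimal sketch: turn the two fixed-iteration results quoted above into rates.
    runs = {
        "Titan RTX":    (1800, 233.821),
        "GTX 1050 2GB": (1800, 2347.403),
    }

    for name, (iterations, seconds) in runs.items():
        print(f"{name}: {iterations / seconds:.2f} iterations/s")

    print(f"Relative speed: {2347.403 / 233.821:.1f}x")   # ~10x in favour of the Titan RTX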

     

    Post edited by RayDAnt on
  • ebergerlyebergerly Posts: 3,255
    RayDAnt, I like your wood-measuring analogy. But in the RTX case I think we're talking about measuring wood whose length changes depending on who is measuring the wood.
  • Takeo.KenseiTakeo.Kensei Posts: 1,303
    LenioTG said:

    Can someone test this scene in DS 4.11 and DS 4.12 please ?

    I get 20x speed improvement between the two versions and would like to see how it goes on other cards especially with non RTX cards

    It's basically the SY scene modified with a strand hair asset which is configured to be cached on RAM for rendering (don't know if that may be a problem)

    thanks

    https://www.sendspace.com/file/b78062

    Done!

    OMG: not only did it render much faster, but it also opened the scene in a fraction of the time!

    • 4.11 - 9 minutes 54.12 seconds
    • 4.12 - 1 minute 9.44 seconds
    • Speed Improvement: 8.6x

    I have a RTX 2060.

    Thanks. Could you do the same test again, but this time render twice in a row for each DS version and note the render time for each render, please?

    I rendered twice in a row by accident with 4.11 and the second render was 4x quicker than the first

  • Takeo.KenseiTakeo.Kensei Posts: 1,303
    RayDAnt said:
     
    • Render Stop Condition: By default, Iray simultaneously enforces three different stop conditions by which to judge the "completion" point of a scene. These are: time, convergence ratio, and iteration count. Since the first two of these conditions are less granularly calculated by Iray than the third [...] a scene must be configured to use iteration count and iteration count only as its rendering stop condition in order to exhibit identical rendering behavior across a diverse set of testing hardware platforms

     

    RayDAnt said:

    And what do we have? Finally, the SAME VALUE SOMEWHERE/ANYWHERE between identically configured renders.

    Why does this matter? Because in comparative testing a fixed value such as convergence, time, or iteration count (naming each of the preceding sets of examples) is what is officially known as a Control value. And if a Control value is free to change outside of your control under different testing conditions (such as a change in underlying computing hardware) the whole test/benchmark categorically becomes invalid.

    Hence why the benchmarking scene included with Iray Server itself is iteration-count limited only. Hence why the benchmarking scene I created is iteration-count limited only. So that it's actually a statistically valid benchmark.

     

    That's your interpretation, based on the false premise that an iteration is something fixed.

    I'm going to quote too, from https://docs.substance3d.com/spdoc/iray-settings-162005200.html

    Iterations: The number of computation passes done by Iray over the maximum defined in the settings.

    The number of iterations will define the final quality of the render: more iterations = better quality.
    However, iterations can take some time; that's why it is possible to define a maximum time. An iteration is defined by the number of samples.

    Time per iteration and samples per iteration are not fixed. Since the nvoptix.dll is now shipped with Nvidia's driver, the sample count per iteration can vary with each driver change. There are also min samples and max samples parameters that will greatly influence the sample count.

    Your control value is not fixed at all.

    To make an analogy with wood: you're measuring the speed of two workers transporting pieces of wood of different sizes (as ebergerly noticed), where one has effectively transported 10 cubic meters and the other 9.5, but they carried the same number of pieces of wood.

    I've been thinking about the best way to measure and have no definitive answer, but I'm more inclined towards the convergence ratio for the moment.

  • RayDAntRayDAnt Posts: 1,120

    Time per iteration and samples per iteration are not fixed. Since the nvoptix.dll is now shipped with Nvidia's driver, the sample count per iteration can vary with each driver change.

    Which is why my reporting method requires that everyone state Windows, display driver, and Daz Studio versions (which is to say, Iray version) along with each test they run. Because having a fixed value stated for something that can impact your end result transforms it into another Control value. Thereby eliminating the issue you just described.

    There are also min samples and max samples parameters that will greatly influence the sample count.

    Hence why both min samples and max samples have effectively been DEACTIVATED by me in the benchmarking scene I designed (check the values they're set to, as well as the rest of the parameters under the Progressive Rendering section. Every parameter there comes pre-tweaked by me to provide consistent results.)

    This scene ALWAYS completes as expected. By design.

  • Takeo.KenseiTakeo.Kensei Posts: 1,303
    edited July 2019
    RayDAnt said:

    Time per iteration and samples per iteration are not fixed. Since the nvoptix.dll is now shipped with Nvidia's driver, the sample count per iteration can vary with each driver change.

    Which is why my reporting method requires that everyone state Windows, display driver, and Daz Studio versions (which is to say, Iray version) along with each test they run. Because having a fixed value stated for something that can impact your end result transforms it into another Control value. Thereby eliminating the issue you just described.

    There are also min samples and max samples parameters that will greatly influence the sample count.

    Hence why both min samples and max samples have effectively been DEACTIVATED by me in the benchmarking scene I designed (check the values they're set to, as well as the rest of the parameters under the Progressive Rendering section. Every parameter there comes pre-tweaked by me to provide consistent results.)

    This scene ALWAYS completes as expected. By design.

    None of this changes the fact that the sample count for each iteration varies.

    Post edited by Takeo.Kensei on
  • LenioTGLenioTG Posts: 2,118
    LenioTG said:

    Can someone test this scene in DS 4.11 and DS 4.12 please ?

    I get 20x speed improvement between the two versions and would like to see how it goes on other cards especially with non RTX cards

    It's basically the SY scene modified with a strand hair asset which is configured to be cached on RAM for rendering (don't know if that may be a problem)

    thanks

    https://www.sendspace.com/file/b78062

    Done!

    OMG: not only did it render much faster, but it also opened the scene in a fraction of the time!

    • 4.11 - 9 minutes 54.12 seconds
    • 4.12 - 1 minute 9.44 seconds
    • Speed Improvement: 8.6x

    I have a RTX 2060.

    Thanks. Could you do the same test again, but this time render twice in a row for each DS version and note the render time for each render, please?

    I rendered twice in a row by accident with 4.11 and the second render was 4x quicker than the first

    You're welcome :D

    Not now, sorry!

  • RayDAntRayDAnt Posts: 1,120
    RayDAnt said:

    Time per iteration and samples per iteration are not fixed. Since the nvoptix.dll is now shipped with Nvidia's driver, the sample count per iteration can vary with each driver change.

    Which is why my reporting method requires that everyone state Windows, display driver, and Daz Studio versions (which is to say, Iray version) along with each test they run. Because having a fixed value stated for something that can impact your end result transforms it into another Control value. Thereby eliminating the issue you just described.

    There are also min samples and max samples parameters that will greatly influence the sample count.

    Hence why both min samples and max samples have effectively been DEACTIVATED by me in the benchmarking scene I designed (check the values they're set to, as well as the rest of the parameters under the Progressive Rendering section. Every parameter there comes pre-tweaked by me to provide consistent results.)

    This scene ALWAYS completes as expected. By design.

    None of this changes the fact that the sample count for each iteration varies.

    Sample count statistics stay EXACTLY COMPUTATIONALLY CONSISTENT between benchmarking runs if driver version and Iray version are kept constant. Hence why the result tables in this post have a column for Driver version (so that performance outliers stemming from that being different are readily identifiable) and the overall summary tables in this post are broken up by DS/Iray version.

    All of your points/concerns are ABSOLUTELY VALID AND CRITICAL to take into account if the goal is to achieve precise results in any sort of DS/Iray rendering benchmarking attempt. It's just that the benchmark scene/reporting method/analysis thread I created was designed from the get-go to specifically take THESE VERY SPECIFIC ISSUES into account (since they were obvious flaws of past benchmarking attempts.) Making them non-issues in this specific case.

  • Takeo.KenseiTakeo.Kensei Posts: 1,303
    edited July 2019
    RayDAnt said:
    RayDAnt said:

    Time per iteration and samples per iteration are not fixed. Since the nvoptix.dll is now shipped with Nvidia's driver, the sample count per iteration can vary with each driver change.

    Which is why my reporting method requires that everyone state Windows, display driver, and Daz Studio versions (which is to say, Iray version) along with each test they run. Because having a fixed value stated for something that can impact your end result transforms it into another Control value. Thereby eliminating the issue you just described.

    There are also min samples and max samples parameters that will greatly influence the sample count.

    Hence why both min samples and max samples have effectively been DEACTIVATED by me in the benchmarking scene I designed (check the values they're set to, as well as the rest of the parameters under the Progressive Rendering section. Every parameter there comes pre-tweaked by me to provide consistent results.)

    This scene ALWAYS completes as expected. By design.

    None of this changes the fact that the sample count for each iteration varies.

    Sample count statistics stay EXACTLY COMPUTATIONALLY CONSISTENT between benchmarking runs if driver version and Iray version are kept constant. Hence why the result tables in this post have a column for Driver version (so that performance outliers stemming from that being different are readily identifiable) and the overall summary tables in this post are broken up by DS/Iray version.

    All of your points/concerns are ABSOLUTELY VALID AND CRITICAL to take into account if the goal is to achieve precise results in any sort of DS/Iray rendering benchmarking attempt. It's just that the benchmark scene/reporting method/analysis thread I created was designed from the get-go to specifically take THESE VERY SPECIFIC ISSUES into account (since they were obvious flaws of past benchmarking attempts.) Making them non-issues in this specific case.

    It seems you don't understand the problem. It's not just Iray and DLL versions.

    You can't have taken any measure against this specific issue, because of the nature of path tracing and, furthermore, of a bidirectional path tracer. There is no Iray parameter that can make iterations consistent, whatever you think you have set in the Iray settings. Even in the hypothetical case where the Iray and driver versions were frozen, there's still variance, or you'll have to prove that you can limit it (and I checked: there is nothing in the settings).

    I'll pass on the fact that your benchmark scene is limiting hardware capability, as RT Cores are barely used, which aggravates the problem because with the exhibited results no one sees any RTX advantage (not to say that some people conclude that they'll get a performance regression with DS 4.12).

    The convergence ratio is actually more reliable. You can't predict how many rays you need to calculate to achieve a certain ratio. And for each iteration pass, a maximum number of rays is calculated within a defined maximum time. You can't tell the hardware to disable some cores to achieve the x% ratio with 0.00000001 precision, but you can try to change the max time in the Iray settings to reduce the over-calculation and thus reduce the convergence variance (though I personally don't see any problem with over-calculation). Something you certainly overlooked because it doesn't fit your choice.

    And if you want zero variance, the easy solution is to choose 100% convergence. This way you'll really have a fixed number that won't change regardless of driver/Iray version/settings.

    Post edited by Takeo.Kensei on
This discussion has been closed.