General GPU/testing discussion from benchmark thread

Comments

  • ebergerly Posts: 3,255


    Outrider is right. Iray performance has always scaled with core counts. Saying that it suddenly shouldn't be taken into account, out of the blue, makes no sense.

    The GTX 1060 has 400 fewer cores than the GTX 970, but does the Iray benchmark in the same time.

    The GTX 1070 has 900 fewer cores than the GTX 980ti, but renders the benchmark in the same time.

    The Titan X has 500 fewer cores than the 1080ti, but renders in the same time.

  • RayDAnt Posts: 1,158
    edited July 2019
    ebergerly said:


    Outrider is right. Iray performance has always scaled with core counts. Saying that it suddenly shouldn't be taken into account, out of the blue, makes no sense.

    The GTX 1060 has 400 fewer cores than the GTX 970, but does the Iray benchmark in the same time.

    The GTX 1070 has 900 fewer cores than the GTX 980ti, but renders the benchmark in the same time.

    The Titan X has 500 fewer cores than the 1080ti, but renders in the same time.

    That's because they come from different GPU architectures... CUDA core counts are only directly comparable across GPUs based on the same core architecture.

    Post edited by RayDAnt on
  • ebergerly Posts: 3,255
    edited July 2019

    Okay, well I'll leave it to you guys to spend your money as you like. I really don't care what you do. I've tried to provide some rational decision points for those who want to decide if they should buy something, but if you don't like them then that's up to you. It's what businesses do so that they don't waste money. 

    If you want to buy an expensive card solely because you're a "power user" and the card is a "beast", be my guest.

    Post edited by ebergerly on
  • drzap Posts: 795
    ebergerly said:

     

    drzap said:

    Your assessment then was that the 1080ti was not the best performance for the price (how quickly we forget). Nevertheless, you eventually joined the bandwagon and got yourself one.

    Yes. I was the first person here to even mention the concept of cost/benefit. At the time I was shooting for a cost/benefit of 10, then later realized that historically (based on the chart I developed above) a slightly different ratio of around 13 or 14 is a more reasonable expectation, so I accepted that. I moved 3 points, based on facts. What's your point? Here we're talking about accepting almost twice that (ie, 24). 

    If your analysis of an RTX 2080ti says it delivers half the cost/benefit of the last generation, then my point is the same as RayDAnt's. Most power users will not agree with your chart. Our priorities are elsewhere, as are those of most early adopters of technology. A shopper for budget hardware should go directly to the middle of the range, where manufacturers always position their price/performance leaders. A power user would never even consider it.

  • bluejaunte Posts: 1,991
    ebergerly said:

    Okay, well I'll leave it to you guys to spend your money as you like. I really don't care what you do. I've tried to provide some rational decision points for those who want to decide if they should buy something, but if you don't like them then that's up to you. It's what businesses do so that they don't waste money. 

    If you want to buy an expensive card solely because you're a "power user" and the card is a "beast", be my guest.

    Your points aren't really rational when you have a 1080 TI and conclude that a 2080 TI isn't enough bang for the buck, even though its bang for the buck is either exactly the same (if you take 1080 TI used prices of about $600) or even better if you take the "new" price some have mentioned (let's say $700). It sounds more like you have convinced yourself that 1,200 bucks is too much for a graphics card rather than looking at actual price vs. performance.

  • RayDAnt Posts: 1,158
    edited July 2019
    ebergerly said:

    I've tried to provide some rational decision points for those who want to decide if they should buy something,

    Yes, but only rational from a hobbyist's perspective. And, as previously stated, most of the posters in a thread like this are clearly not going to be just hobbyists.

     

    Post edited by RayDAnt on
  • Takeo.Kensei Posts: 1,303
    ebergerly said:

    Okay, well I'll leave it to you guys to spend your money as you like. I really don't care what you do. I've tried to provide some rational decision points for those who want to decide if they should buy something, but if you don't like them then that's up to you. It's what businesses do so that they don't waste money. 

    If you want to buy an expensive card solely because you're a "power user" and the card is a "beast", be my guest.

    There is something you still don't understand. The logic for buying a top card is not based on a biased cost/benefit calculation. People who buy a Ti or a Titan buy it for the additional memory first, and that's where your calculation fails.

  • fred9803 Posts: 1,565

    The other element here is the benefit of upgrading from a much lower-spec GPU. Going from a 1080ti to a 2080 may not make much sense, but going from a GTX 960 to an RTX 2080 would seem to be a sensible decision with regard to cost/benefit. The pay-off for amateurs like me moving from a low-spec card isn't the same as for those already running a more recent GPU.

  • outrider42 Posts: 3,679
    ebergerly said:


    Outrider is right. Iray performance has always scaled with core counts. Saying that it suddenly shouldn't be taken into account, out of the blue, makes no sense.

    The GTX 1060 has 400 fewer cores than the GTX 970, but does the Iray benchmark in the same time.

    The GTX 1070 has 900 fewer cores than the GTX 980ti, but renders the benchmark in the same time.

    The Titan X has 500 fewer cores than the 1080ti, but renders in the same time.

    You of all people know that CUDA cores are not equal across generations, so I am not sure why you are suddenly trying to pull this. At this point you are just dragging this out for your own benefit.

    And like others have said, your cost analysis completely ignores VRAM as a spec. VRAM is just as important, even more so for many people. While the 1080ti and 2080ti have the same 11GB, the other cards do not. The 2080 Super only has 8GB. You have no way to add VRAM as a variable into your little equation...how would you place a number on that in a way that makes sense? What weight would it have? What would it be based on?

    You know what would be a more sensible cost analysis? For gaming, they do "frames per dollar", so for rendering software, it should be "iterations per dollar". This might be a little tough as the iteration count can be different, but this would be a much more interesting analysis in my opinion. "Iterations per dollar" would be a figure that anyone could understand, rather than this weird stat you came up with. But even so, it still comes down to people's own choice. Let them make it. If somebody wants the best, OK then; the best is going to cost money.
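
    Just to illustrate the idea (the card names, prices and benchmark numbers below are made-up placeholders, not real results), an "iterations per dollar" figure is just the card's iteration rate divided by its price:

    # "Iterations per dollar" sketch - all numbers below are made-up placeholders.
    cards = [
        # (name, iterations completed, render time in seconds, price in USD)
        ("Card A", 1800, 300.0, 700.0),
        ("Card B", 1800, 150.0, 1200.0),
    ]
    for name, iterations, seconds, price in cards:
        rate = iterations / seconds  # iterations per second
        print(f"{name}: {rate:.2f} it/s, {rate / price:.4f} it/s per dollar")

    Same idea as frames per dollar, just using the renderer's own unit of work.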

    That's also why somebody would pay $2000 for a Titan RTX, because it has 24GB of VRAM packed into one very fast card. That's also why a lot of us are very interested in NVLink and would like to see if it really does work for Daz Iray.

    Obviously, everyone has their limit. I don't have a Titan RTX, let alone a 2080ti. Not many people do, and not many can run out and buy one on a whim.

    Certainly there can be benefits to cost analysis, but the thing with any such figure is that you leave the ultimate decision up to the people who see it. It is not your job to just tell them not to buy something. You can recommend something, but there is no need to argue with somebody about what they may or may not buy. I know I have suggested the new AMD Ryzen for people, but I am not going to chastise somebody for buying Intel even if I don't agree with it (usually). I try to bite my tongue and keyboard with Apple users, though I am not always successful, LOL. But that is kind of where this is heading: it is becoming an argument of ideals rather than a discussion of hardware. So let's just cut this off now before it gets too silly.

  • RayDAnt Posts: 1,158
    edited July 2019

    You know what would be a more sensible cost analysis? For gaming, they do "frames per dollar", so for rendering software, it should be "iterations per dollar". This might be a little tough as the iteration count can be different, but this would be a much more interesting analysis in my opinion.

    For what it's worth, the New & Improved © benchmarking thread I've been periodically mentioning having put together but then not actually posting* has had this built into it (as part of a periodically updated overall benchmark results table sorted by Overall Value) for quite some time now. The calculation method I've been going with thus far is:
    Overall Value = (IR * MC) / MSRP
    Where:
    IR = Iteration Rate (ie. average Iterations completed per second) of a standardized benchmarking scene. As computed by taking a specific Cuda device's total successfully completed iterations and dividing by that same device's total time spent rendering (two extremely useful performance statistics most people seem to have little to no idea exist in their DS log files.)
    MC = Memory Capacity (in Gigabytes)
    And
    MSRP = Manufacturer's suggested retail price (in USD.)

    The only thing I can't decide on is whether it makes more sense to simply sort the table first into categories based on RAM capacity and then sort by Overall Value. In which case the calculation would just be:
    Overall Value = IR / MSRP
    With the reader then needing to have the wherewithal to pick out the right memory capacity portion of the table to peruse for their use cases.

    This way you don't have potential situations where incredibly cheap/fast/low-VRAM-capacity cards end up coming out equivalent to incredibly expensive/less fast/huge-VRAM-capacity cards in the tables/graphs, because that equivalency makes no sense from an actual usability/usefulness perspective.
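
    To make both options concrete, here's a rough Python sketch of the two calculations (the card entries are placeholder numbers, not actual benchmark results):

    from collections import defaultdict

    # Placeholder data: (card, completed iterations, render seconds, VRAM in GB, MSRP in USD)
    benchmarks = [
        ("Card A", 1800, 420.0, 8, 500.0),
        ("Card B", 1800, 260.0, 11, 1200.0),
    ]

    def iteration_rate(iterations, seconds):
        # IR = total completed iterations / total time spent rendering
        return iterations / seconds

    # Option 1: fold memory capacity into the score.
    for card, its, secs, vram, msrp in benchmarks:
        print(card, "Overall Value =", round(iteration_rate(its, secs) * vram / msrp, 4))

    # Option 2: segment by VRAM capacity first, then rank by IR / MSRP within each group.
    groups = defaultdict(list)
    for card, its, secs, vram, msrp in benchmarks:
        groups[vram].append((card, round(iteration_rate(its, secs) / msrp, 4)))
    for vram, entries in sorted(groups.items()):
        print(f"{vram} GB:", sorted(entries, key=lambda e: e[1], reverse=True))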

     

    Still need to write up an explanation about the benchmarking scene it's gonna be based around. Plus I've been waiting this long for full RTX support to come out so as to avoid needing to pester people for retested numbers once it does.

    Post edited by RayDAnt on
  • outrider42 Posts: 3,679
    Honestly I would just leave VRAM out of the equation altogether, but the info should be part of the list. Everyone has their own judgement on how much they need and how valuable it is to them. Plus it complicates the equation; placing too much or too little weight would skew results. Gaming benchmarks don't put VRAM into any frames per dollar equations. But they often show the VRAM in the GPU name so people can see it, and may make a note about outliers. Like if one card looks like a great value but only has 2GB, that's an obvious problem. So I would still list VRAM. You could sort the list by VRAM; I think that would be a great way to segment the cards. And then sort the same-VRAM cards by either speed or cost per iteration.

    What most of them do is make a graph with 2 colors. They show the overall performance in one color, like the average framerate across a variety of games, with frames per dollar in another color. The card's price is listed alongside its name. So you have all this information right in front of you in a single chart.

    Personally, I would prefer iterations per minute as the baseline rather than per second. The reason is that the numbers for iterations per dollar will be larger and easier to tell apart.
  • RayDAnt Posts: 1,158
    Gaming benchmarks don't put VRAM into any frames per dollar equations.

    Yeah, but that's only because modern gaming is generally a limited-VRAM-usage affair. GPU rendering is anything but that. With that said, I am also leery of over-emphasizing VRAM in any sort of price/performance equation (since VRAM isn't a scalable statistic, whereas measured rendering performance is). So I'm thinking more and more that segmenting by VRAM capacity and then ordering by performance is the way to go (although it slightly complicates one thing that I already had worked out - integrating both relative GPU and CPU rendering performance statistics into a single graph/table in a way that actually makes sense).

     

    What most of them do is make a graph with 2 colors. They show the overall performance in one color, like the average framerate across a variety of games, with frames per dollar in another color. The card's price is listed alongside its name. So you have all this information right in front of you in a single chart.

    Yeah, the thread I'm drafting is pretty much all that, although it's going to include some admittedly wonky formatting workarounds in order to survive this forum's outdated HTML implementation with its legibility intact.

    Personally, I would prefer iterations per minute as the baseline rather than per second. The reason is that the numbers for iterations per dollar will be larger and easier to tell apart.

    It all depends on what the typical value ranges end up being for the benchmarking scene being used. The scene prototype I am currently working on is a lower-count, more processing-intensive one iteration-wise - much like the DAZ_Rawb benchmarking scene, and basically the exact opposite of the Sickleyield one (and your benchmark scene to a certain extent). That would actually work in favor of using iterations per minute rather than per second. However, I think that keeping scale parity with gaming terminology is extremely important here (iterations per second for rendering is the direct equivalent of frames per second for gaming, and no one ever talks in terms of frames per minute in gaming) in order to avoid confusing people any more than necessary.

    Plus Iray's performance data is all natively reported in milliseconds, and going from milliseconds to seconds to whole minutes while retaining precision will produce its own share of counter-intuitive numbers. But I'll play around with it. Perhaps introduce a shift from iterations per second to iterations per minute in the Overall Value calculation itself - i.e.
    Overall Value = (IR * 60) / MSRP
    rather than in the original Iteration Rate column itself. I still have some playing around to do.
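
    For example (placeholder numbers again), the shift is just a factor of 60 applied in the value calculation rather than in the reported rate:

    # Iray reports timings in milliseconds; placeholder numbers, not real results.
    render_ms = 420_000                     # total render time in milliseconds
    iterations = 1800                       # total completed iterations
    msrp = 700.0                            # USD

    ir = iterations / (render_ms / 1000.0)  # iterations per second (unchanged in the table)
    overall_value = (ir * 60) / msrp        # iterations per minute per dollar
    print(round(ir, 3), round(overall_value, 3))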

  • Mendoman Posts: 404
    edited July 2019

    Well, buying a flagship GPU is never really a smart choice if you only consider bang per buck. The customer always pays a lot for that premier status, and it was the same with the 1080ti. A normal 1080 or 1070 would have been a much "smarter" choice, but many people were still willing to pay for that little extra push the Ti version gave. When it comes to the 2080ti, I do agree that it is expensive and hasn't really lived up to the hype about those RTX cores. At least in my opinion, Nvidia sure has taken its time utilizing those RTX cores: now, almost a year later, we still don't have a working renderer that uses them, so that RTX added value is pretty much nonexistent currently. Do I regret buying the 2080ti? Not really. My old card was a Maxwell Titan X, so I got a pretty nice performance boost. Still, I did expect those RTX cores to do something more than just gather dust.

    Post edited by Mendoman on
  • ebergerly Posts: 3,255
    edited July 2019
    It all depends on what the typical value ranges end up being for the benchmarking scene being used. The scene prototype I am currently working on is a lower-count, more processing-intensive one iteration-wise - much like the DAZ_Rawb benchmarking scene, and basically the exact opposite of the Sickleyield one (and your benchmark scene to a certain extent).

    Yeah, but what about the aforementioned "power users", who might be doing stuff like physics simulations, which might be using the new RTX PhysX/Flex functionality when/if it gets included? Even if it's not in Studio, they might be gamers and want to know how their games might improve. Or what about the tensor core/AI/denoising functionality? Shouldn't you include that in your benchmarks and calculations? Otherwise it makes no logical sense. And what about the MDL improvements? Are you including that in your benchmark scene design? And VRAM is super important for those folks, so certainly you need to include some weight for VRAM values. Like some weighting formula where the importance of VRAM is input by the user (like a "1" if they don't care, or a "10" if it's really important), and it assigns a weight in the analysis of benefit. Actually, you could have the user input their "weight" for each of those (physics, denoising, etc.) depending on how important they are. So you make a benchmark scene that fully utilizes all those features, and have a spreadsheet where the user inputs how important each of those is, and the spreadsheet calculates all of the contributions to render time and assigns an overall benefit number. And for those who only need raytracing, it will spit out only that portion of the render time so people can decide. And if someone just wants a card that's the latest "beast", they just enter a "1" for beast and it spits out how much of a beast it is.
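
    Conceptually, something like this (pure sketch; the workload names, scores and weights are just placeholders I made up):

    # User-weighted "benefit" sketch - workload scores and weights are placeholders.
    # Hypothetical per-workload benchmark scores for one card (higher = better).
    scores = {"raytracing": 8.0, "denoising": 6.5, "physics": 4.0, "mdl": 7.0}

    # User-supplied importance, from 1 ("don't care") to 10 ("really important").
    weights = {"raytracing": 10, "denoising": 3, "physics": 1, "mdl": 5}

    overall_benefit = sum(scores[k] * weights[k] for k in scores) / sum(weights.values())
    print(round(overall_benefit, 2))  # weighted-average benefit score for this card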

    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    edited July 2019

    Okay, well after all the excellent comments explaining how illogical/irrational and overly simplistic my cost/benefit analyses were, and how we need to be more broad, comprehensive and inclusive of other types of users' needs, I jumped over to Visual Studio and spent only about 15 minutes developing the attached conceptual Windows user interface for what I think might be the start of a far more rational benchmarking approach. All that's needed is for someone who is far more expert at this than I am to develop the underlying algorithms, develop an inclusive benchmarking scene, and manage the results. After that I'll be glad to build a user interface like the attached. I suppose something similar could be accomplished with an Excel spreadsheet and some macros. Of course all of this is subject to everyone's input.

    Also, since I suspect that with all the various permutations and combinations that will come with all this benchmarking data - so many cards, so many scenes, and so many user needs - the data and computations will be fairly massive, I can certainly employ some CUDA code to have all of it calculated on a GPU.

    [Attachment: Beast.JPG, 817 x 591]
    Post edited by ebergerly on
  • bluejaunte Posts: 1,991
    Mendoman said:

    Well, buying a flagship GPU is never really a smart choice if you only consider bang per buck. The customer always pays a lot for that premier status, and it was the same with the 1080ti. A normal 1080 or 1070 would have been a much "smarter" choice, but many people were still willing to pay for that little extra push the Ti version gave. When it comes to the 2080ti, I do agree that it is expensive and hasn't really lived up to the hype about those RTX cores. At least in my opinion, Nvidia sure has taken its time utilizing those RTX cores: now, almost a year later, we still don't have a working renderer that uses them, so that RTX added value is pretty much nonexistent currently. Do I regret buying the 2080ti? Not really. My old card was a Maxwell Titan X, so I got a pretty nice performance boost. Still, I did expect those RTX cores to do something more than just gather dust.

    We have to keep in mind that these are gamer cards first and foremost. When they released RTX we didn't even know if any of that stuff was gonna enhance Iray at all. Later we learned it enhanced other renderers, so we became hopeful, but we still suspected that NVIDIA might not even bother touching Iray, as it has always been a very second-class citizen to them. A 2x increase in performance over a 1080 TI even without RTX features was a big surprise. It turned out we Iray users got a way bigger bump in performance than gamers did. And we know RTX is coming to Iray.

    I'm pretty happy with how things turned out overall. 2x faster is normally the stuff of fairy tales.

  • stigg Posts: 6
    Render time is inversely proportional to GPU performance. Greater and greater performance will only decrease render times by smaller and smaller amounts. Comparing a GPU's cost directly to how much it reduces render time is completely flawed. A GPU's throughput is what should be compared to price - either iterations per second, complete renders per hour, etc.
  • ebergerly Posts: 3,255
    edited July 2019
    stigg said:
    Render time is inversely proportional to GPU performance. Greater and greater performance will only decrease render times by smaller and smaller amounts. Comparing a GPU's cost directly to how much it reduces render time is completely flawed. A GPU's throughput is what should be compared to price - either iterations per second, complete renders per hour, etc.

    I suppose that depends on how you define "performance", right?

    Isn't RTX (at least in theory) an attempt to completely redesign the rendering solution with an entirely new architecture and software that approaches the problem differently, and therefore can, hypothetically, make much greater advances? From what I've seen on the software side, RTX is still in its infancy in terms of figuring out how best to utilize all that nifty hardware for all those cool things. It's kinda like how going from single-core to multicore processors lets you vastly improve performance for selected types of problems that benefit from multicore: you break the problem down into different components and find optimal ways to solve those specific problems.

    BTW, it's also redefining in some sense what "render" even means. It's no longer raw raytracing; it now has the option of stuff like denoising and other technologies to make a render sufficient for the task, not necessarily perfect.

    Post edited by ebergerly on
  • RayDAnt Posts: 1,158
    edited July 2019
    ebergerly said:
    It all depends on what the typical value ranges end up being for the benchmarking scene being used. The scene prototype I am currently working on is a lower-count, more processing-intensive one iteration-wise - much like the DAZ_Rawb benchmarking scene, and basically the exact opposite of the Sickleyield one (and your benchmark scene to a certain extent).

    Yeah, but what about the aforementioned "power users", who might be doing stuff like physics simulations, which might be using the new RTX PhysX/Flex functionality when/if it gets included? Even if it's not in Studio, they might be gamers and want to know how their games might improve. Or what about the tensor core/AI/denoising functionality? Shouldn't you include that in your benchmarks and calculations?

    Absolutely not. Because - putting on my professional academic researcher hat for a moment - testing for more than one independent variable at any given time in a study/benchmarking context summarily leads to confounded data and useless results. For each distinct type of computing workload you want to study, you have to have a completely separate test workload. The present thread/benchmarking conversation is about Iray Photoreal rendering performance specifically. If, e.g., AI denoising performance were to be the focus instead (another area I've already begun separately investigating), that is an entirely different set of benchmarking methodologies/relevant performance statistics/set of threads to report and discuss results in. Eventually you could end up with a master table of relative performance for each card in different areas of DS-relevant computing (e.g. rendering, AI denoising, physics modeling, etc.), each in a separate column. But that only happens after effective strategies for benchmarking specific workloads individually are established first.

     

    ebergerly said:

    Isn't RTX (at least in theory) an attempt to completely redesign the rendering solution with an entirely new architecture and software that approaches the problem differently, and therefore can, hypothetically, make much greater advances?

    Actually no, it isn't. It's a platform-wide attempt at bringing real-time game engine rendering into the same era of technological advancement as existing professional 3D production software solutions like Iray. We aren't gamers here. Raytracing is old hat to us. For us, RTX just makes specific aspects of workloads already being done work better/faster.

    Post edited by RayDAnt on
  • nicstt Posts: 11,715
    ebergerly said:

    Like I said, the 1080ti was only around $700, and had a 56% improvement in render times over a GTX 1060. That's a real good value, IMO, and at the time it was surely cutting edge. And a GTX 1070 was only $450 and gave a 33% improvement when it was cutting edge. Again, a similar value (both price/performance around 13-14). 

    I'm kinda scratching my head trying to figure out why so many are suddenly suggesting we toss any consideration of price for what most here consider a hobby. So if you can afford $2,000 for a GPU, just pay it, even though it only gives a 10-20% improvement? I don't get it. At a minimum I think a rational person would AT LEAST start by considering price/performance as part of the equation, and then decide whether or not to toss it if there are some overriding factors.

    Got to agree; I can afford a Titan. I just can't convince myself it is worth it yet.

    I agree that the 2080ti offers good rendering for its cost when compared against other options.

    No matter how you look at it, though, it costs over $1,000/£1,000 - and a decent amount over, at that.

  • nicstt Posts: 11,715
    ebergerly said:

    Okay, well I'll leave it to you guys to spend your money as you like. I really don't care what you do. I've tried to provide some rational decision points for those who want to decide if they should buy something, but if you don't like them then that's up to you. It's what businesses do so that they don't waste money. 

    If you want to buy an expensive card solely because you're a "power user" and the card is a "beast", be my guest.

    Your points aren't really rational when you have a 1080 TI and conclude that a 2080 TI isn't enough bang for the buck, even though its bang for the buck is either exactly the same (if you take 1080 TI used prices of about $600) or even better if you take the "new" price some have mentioned (let's say $700). It sounds more like you have convinced yourself that 1,200 bucks is too much for a graphics card rather than looking at actual price vs. performance.

    I'm of that opinion; I think it is ridiculous, despite the fact that I will sooner or later spend much more getting my rendering system up to the latest spec, be it Nvidia or something else.

    What I really see is a lack of competition, and I can vote for that with my wallet.

  • nicstt Posts: 11,715
    RayDAnt said:

    You know what would be a more sensible cost analysis? For gaming, they do "frames per dollar", so for rendering software, it should be "iterations per dollar". This might be a little tough as the iteration count can be different, but this would be a much more interesting analysis in my opinion.

    For what it's worth, the New & Improved © benchmarking thread I've been periodically mentioning having put together but then not actually posting* has had this built into it (as part of a periodically updated overall benchmark results table sorted by Overall Value) for quite some time now. The calculation method I've been going with thus far is:
    Overall Value = (IR * MC) / MSRP
    Where:
    IR = Iteration Rate (ie. average Iterations completed per second) of a standardized benchmarking scene. As computed by taking a specific Cuda device's total successfully completed iterations and dividing by that same device's total time spent rendering (two extremely useful performance statistics most people seem to have little to no idea exist in their DS log files.)
    MC = Memory Capacity (in Gigabytes)
    And
    MSRP = Manufacturer's suggested retail price (in USD.)

    The only thing I can't decide on is whether it makes more sense to simply sort the table first into categories based on RAM capacity and then sort by Overall Value. In which case the calculation would just be:
    Overall Value = IR / MSRP
    With the reader then needing to have the wherewithal to pick out the right memory capacity portion of the table to peruse for their use cases.

    This way you don't have potential situations where incredibly cheap/fast/low-VRAM-capacity cards end up coming out equivalent to incredibly expensive/less fast/huge-VRAM-capacity cards in the tables/graphs, because that equivalency makes no sense from an actual usability/usefulness perspective.

     

    Still need to write up an explanation about the benchmarking scene it's gonna be based around. Plus I've been waiting this long for full RTX support to come out so as to avoid needing to pester people for retested numbers once it does.

    Iterations would only be comparable for the same version of Studio. 4.11 is different to 4.10, or at least it seems to be that way.

  • Richard Haseltine Posts: 108,735

    Keep the discussion civil and focussed on the topic please. Some posts addressing and sniping at others have been trimmed.

  • RayDAnt Posts: 1,158
    nicstt said:
    RayDAnt said:

    You know what would be a more sensible cost analysis? For gaming, they do "frames per dollar", so for rendering software, it should be "iterations per dollar". This might be a little tough as the iteration count can be different, but this would be a much more interesting analysis in my opinion.

    For what it's worth, the New & Improved © benchmarking thread I've been periodically mentioning having put together but then not actually posting* has had this built into it (as part of a periodically updated overall benchmark results table sorted by Overall Value) for quite some time now. The calculation method I've been going with thus far is:
    Overall Value = (IR * MC) / MSRP
    Where:
    IR = Iteration Rate (ie. average Iterations completed per second) of a standardized benchmarking scene. As computed by taking a specific Cuda device's total successfully completed iterations and dividing by that same device's total time spent rendering (two extremely useful performance statistics most people seem to have little to no idea exist in their DS log files.)
    MC = Memory Capacity (in Gigabytes)
    And
    MSRP = Manufacturer's suggested retail price (in USD.)

    The only thing I can't decide on is whether it makes more sense to simply sort the table first into categories based on RAM capacity and then sort by Overall Value. In which case the calculation would just be:
    Overall Value = IR / MSRP
    With the reader then needing to have the wherewithal to pick out the right memory capacity portion of the table to peruse for their use cases.

    This way you don't have potential situations where incredibly cheap/fast/low-VRAM-capacity cards end up coming out equivalent to incredibly expensive/less fast/huge-VRAM-capacity cards in the tables/graphs, because that equivalency makes no sense from an actual usability/usefulness perspective.

     

    Still need to write up an explanation about the benchmarking scene it's gonna be based around. Plus I've been waiting this long for full RTX support to come out so as to avoid needing to pester people for retested numbers once it does.

    Iterations would only be comparable for the same version of Studio. 4.11 is different to 4.10, or at least it seems to be that way.

    Yeah, each Daz Studio release (beta or production) covered is gonna get its own performance table.

  • ebergerly Posts: 3,255
    edited July 2019

    I'm trying to understand why anyone would care about "average iterations per second". Isn't that like saying "when travelling from NY to Miami my average speed (miles per hour) was 60 mph"? But it doesn't tell you how long it took to get there. Nor does it give the user any idea of how long their scene will render. Why not just use the "total time spent rendering", which is already used to calculate "average iterations per second"? 

    Also, why "hard code" the memory capacity in the "value" formula, so that you can't determine how much of the "value" is memory and how much is performance. Why not just allow the user to see how much memory it has so they can determine if it's enough? For example, if they want to compare to the existing system RAM to make sure they can fully utilize the GPU VRAM (ie, system RAM must be 2-3 times GPU VRAM to be fully utilized). Or maybe the GPU VRAM capacity isn't really an issue they care about, as long as it's above a certain value (like me). 

    BTW, the more I think about it the more confused I am. Looks like the units of the "Value" equation are "(iterations*GB/second)/$$"

    That doesn't really mean anything to the average person, does it? Compared to, say, "render time per $$", which IMO makes a lot more sense.

    Post edited by ebergerly on
  • stigg Posts: 6
    ebergerly said:

    In my analyses of Studio/Iray benchmark results for a long list of GTX and other cards, comparing their benefit-to-cost numbers, I've found that it's fairly common and reasonable for a "good deal" to be a cost/benefit around 10-15 (cost in $$ divided by percent improvement in Iray render times). I've found that the more expensive options are up over 20, which I consider to be overpriced.

    The RTX 2080ti, for example, was hovering around $1,200. The normal price for a GTX 1080ti was under $700 (which is what I paid). And assuming the RTX 2080ti cut the 1080ti render times in half (i.e., a 50% improvement), the cost/benefit comes out to 1200/50, or 24 (relative to the 1080ti). Personally, that seemed quite high to me compared to the other GTX family improvements since the GTX 1060. Doesn't mean it's bad, just that for me personally it seemed like a significant difference from most of the previous NVIDIA GPU releases (usually a $200-300 increase in price for a 30% improvement in render times). Now if it were $700 for a 50% improvement, that would be right in the ballpark with a cost/benefit of 14. I'm hoping the $700 2080 Super is closer to 14 than 24.

    Render time does not scale linearly with iteration rate; it is inversely proportional to it.

    Going by your formula of "cost / percent decrease in render time", a GPU that costs $3,000 but is a million or even a billion times faster than a 1080ti would have a higher (i.e., supposedly worse) cost/benefit than the RTX 2080ti in your example:

    3000/99.9999 = ~30 vs 1200/50 = 24

    This is because you can never decrease the render time by 100%, let alone more.
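
    You can see the saturation directly: the percent reduction in render time is 100 * (1 - 1/speedup), so it approaches but never reaches 100% no matter how fast the card gets:

    # Percent reduction in render time as a function of throughput speedup.
    for speedup in (1.5, 2, 10, 1_000_000):
        pct = 100 * (1 - 1 / speedup)
        print(f"{speedup}x faster -> render time {pct:.4f}% shorter")

    print(1200 / (100 * (1 - 1 / 2)))          # your 2080ti example: 24.0
    print(3000 / (100 * (1 - 1 / 1_000_000)))  # a million-times-faster card: ~30.0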

  • ebergerly Posts: 3,255
    edited July 2019

    Wow, stigg, thanks!!! You're right. You just exploded my brain.

    So what's the right way to determine cost/benefit for something like this, where the benefit is render time improvement? Or maybe just take the inverse? 

    Hold on...I'm thinking maybe instead of % improvement you just use seconds of improvement (GPU1 render time - GPU2 render time)... 

    And maybe the % improvement metric only works at low values of % improvement, where it's more linear? So maybe up to around 40-50% improvement it gives a reasonable number, but beyond that it gets too nonlinear?

    Post edited by ebergerly on
  • stigg Posts: 6
    edited July 2019
    Never mind
    Post edited by stigg on
  • RayDAnt Posts: 1,158
    edited July 2019
    ebergerly said:

    I'm trying to understand why anyone would care about "average iterations per second". Isn't that like saying "when travelling from NY to Miami my average speed (miles per hour) was 60 mph"? But it doesn't tell you how long it took to get there. Nor does it give the user any idea of how long their scene will render. Why not just use the "total time spent rendering", which is already used to calculate "average iterations per second"? 

    It's best practice from a statistical standpoint. Using a rate based on the time statistic (e.g. rendered iterations / render length = iterations per second) rather than just the raw statistic (e.g. render length) autocorrects for unexpected variances in iteration count (something that plagues Sickleyield's benchmarking scene, for example), thereby guarding against faulty benchmarking results on certain hardware/software configurations.
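
    For example, two runs of the "same" benchmark that happen to stop at different iteration counts (placeholder numbers) still produce directly comparable rates:

    # Placeholder numbers: same card, but the second run converged at a higher iteration count.
    run_a = {"iterations": 1800, "seconds": 600.0}
    run_b = {"iterations": 2000, "seconds": 667.0}

    # Raw render times differ by about 11% and look like a performance gap...
    print(run_b["seconds"] / run_a["seconds"])

    # ...but the iteration rates are essentially identical.
    print(run_a["iterations"] / run_a["seconds"], run_b["iterations"] / run_b["seconds"])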

    Post edited by RayDAnt on
  • ebergerly Posts: 3,255
    edited July 2019
    Stigg, practically the best you could ever expect is a 100% improvement at a cost of $1. So if you just use the inverse of what I had (i.e., benefit/cost), the best you could expect is a ratio of 100. A 100% improvement for $100 would be a ratio of 1. And so on. A 10% improvement for $1,000 would be 0.01. A 1% improvement for $10,000 would be 0.0001. I think that works linearly, though it's ugly.
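
    Running those numbers (benefit/cost = percent improvement in render time divided by price in dollars):

    # benefit/cost = percent improvement / price (USD)
    examples = [(100, 1), (100, 100), (10, 1000), (1, 10000)]
    for pct_improvement, price in examples:
        print(pct_improvement / price)  # -> 100.0, 1.0, 0.01, 0.0001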
    Post edited by ebergerly on
This discussion has been closed.