General GPU/testing discussion from benchmark thread


Comments

  • drzap Posts: 795
    edited July 2019

    RTX cards aren't completely useless.  They offer more compute power and more CUDA cores, so even without the RT cores, they offer some value for power users.  I'm particularly grateful for the early adopters because they help finance the progress of this technology.  Hopefully, Nvidia will have a Super 2080ti or Titan with a better cost-value ratio to make it worth my while.  In the meantime, I can't recommend them, but I'm not going to hate on those who bought them because 2x performance upgrade in one card ain't too shabby.

    Post edited by drzap on
  • ebergerly Posts: 3,255
    drzap said:

     but I'm not going to hate on those who bought them because 2x performance upgrade in one card ain't too shabby.

    It has nothing to do with hate, it's about concern. Some of us were advising folks 9 months ago to wait, and base their buying decisions on facts, not hype and emotion. We were trying to keep folks from wasting money. And the response was we were mocked, and chided for being too negative. As a result, some folks spent upwards of $1,400 on a card with probably the same performance as a card you can buy now for 1/2 that price. 

  • drzap Posts: 795
    edited July 2019
    ebergerly said:
    drzap said:

     but I'm not going to hate on those who bought them because 2x performance upgrade in one card ain't too shabby.

    It has nothing to do with hate, it's about concern. Some of us were advising folks 9 months ago to wait, and base their buying decisions on facts, not hype and emotion. We were trying to keep folks from wasting money. And the response was we were mocked, and chided for being too negative. As a result, some folks spent upwards of $1,400 on a card with probably the same performance as a card you can buy now for 1/2 that price. 

    LOL, I wouldn't waste my "concern" on those early adopters if I were you.  Like I said, they got a tremendous performance boost and apparently they've been taking advantage of it for up to 10 months.  There is always going to be a cheaper, faster graphics card in the future.  No need to put off a buy if you need the hardware and you have the cash.  I know I wouldn't have hesitated to buy if I were in the market for new GPUs.

    Post edited by drzap on
  • fred9803 Posts: 1,559

    Yeh I got a huge boost from my 2080, but I could have got the same from a much lower-priced 1080ti. Things will be different when NVIDIA gets its act together with Iray support for its RTX GPUs. Then I can say Nah Nah Ne Nah Nah to the GTX 10 series people.

     

  • outrider42 Posts: 3,679
    ebergerly said:

    So sometime following the introduction last year of the RTX series GPUs in September (?), I got pretty much fed up with what seemed to be nonstop marketing handwaving and hype, with no relevant data and facts, so I pretty much dropped any occasional checking in on any of the YouTube tech channels. I pretty much lost all interest in PC hardware. 

    So this morning I noticed one about a new NVIDIA GPU release and learned that apparently they're now releasing a "Super" version of the RTX 2080, 2070, and 2060. And, for example, the 2080 Super is supposedly, performance-wise, essentially an RTX 2080TI, but with a few GB less VRAM, at a price of only around $700. Now as I recall, not long ago an RTX 2080TI was selling in the $1,200 - 1,400 range. And this chart from PCPartPicker shows that maximum prices in recent months shot up to around $1,900:

    https://cdn.pcpartpicker.com/static/forever/images/trends/trend.gpu.chipset.geforce-rtx-2080-ti.9468dd94339aac76a7b86aaa2562c72e.png

    Huh? And apparently the Supers will replace the non-Super variants? 

    Just when the optimist in me was expecting we might get this RTX/Iray performance stuff ironed out in the next few months, now we've got this insanity. So if you include all of the GTX and RTX and Super variants that will be out there this year, and we try to come up with a benchmark that will have any meaning whatsoever, and we wait until all the software gets updated for all of this, I honestly think that benchmarking anything RTX/Iray related in the foreseeable future will be somewhat useless. 

    And I hope we all tuck this away in the back of our minds for the next time we want to run out and buy the latest new technology right away. Sometimes negativity is the smart approach.

    I covered the Super launch a couple pages ago with a full price and spec chart. The 2080 Super specs only released officially after my post. But the 2080 Super will NOT be a cheaper 2080ti, not even close. The gap between the two cards is way too large. The 2080 Super has 3072 CUDA cores. The 2080ti has 4352 CUDA cores, well over 1,000 more CUDA cores! So no, the 2080 Super isn't going to change anything. As it stands right now the 2080ti will not receive a Super version. It would be kind of hard to do that. The Titan RTX has 4608 CUDA cores, and that is the most CUDA Nvidia can place on the Turing chip. That's only about 250 cores more. So there isn't any space for a 2080ti Super unless they hand out more VRAM with it. A 16GB 2080ti would certainly be cool.

    And I'm not sure what you are talking about. I've always said to wait for benchmarks, and the 2080ti has proven itself in every one of them, even before ray tracing cores get support. The 2080ti is as fast as two 1080tis in one system, and as stated before, costs less than buying two 1080tis. Power users will pay that price.

    Take notice that I didn't rush out to buy a 2080ti. I still have two 1080tis. However, I am thinking about it. But I want to build a new PC first, with a new Ryzen. Probably the 12-core, which looks very sexy. By the time I get around to the GPU, the next generation may be just around the corner. And 2020 looks to be a really wild year with AMD, Nvidia, and even Intel offering new GPUs. The competition will be fierce, which should translate to better prices across the board.
  • ebergerly Posts: 3,255
    If you look at the Techpowerup estimates on the 2080 Super vs the 2080ti (and others), they estimate the 2080ti offers only a 5% performance gain over the 2080 Super. Not sure how this will shake out relative to Iray, but those are interesting numbers. And far more useful, IMO, than merely quoting core specs.
  • outrider42 Posts: 3,679
    edited July 2019

    If you go by that chart, it would also indicate that the 2080ti is only 21% faster than the last generation 1080ti...hmmm. So how did that turn out? I can't recall too many cases where the 1080ti is ONLY 21% slower than the 2080ti at anything, LOL. Fail. And for Iray it's more like 5 TIMES that. So um, no, I am not buying this 5% estimate by a long shot. This estimate is relative game performance, anyway. Last I checked...Iray is not a game. If we want to do some funny math, then 5 times 5% is 25%. So the 2080ti will be 25% faster than the 2080 Super at Iray.

    They use the same chipset, so only two specs matter: core count and clock speed. So with a 1,280 core disadvantage, the 2080 Super would have to be overclocked like crazy to make up that difference. There is no other magical way to make it up. The 2080ti FE boosts to 1710 MHz, while the 2080 Super boosts to 1815 MHz. I'm not so sure that 105 MHz is going to make up for 1,280 cores. Keep in mind the existing 2080 is also clocked higher than the 2080ti, at 1800 MHz. The upgraded GDDR6 is faster, which might matter for video games but is not so much of a factor with Iray.
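
    As a sketch of that argument in Python (assuming the naive model that throughput scales with CUDA cores times boost clock, which ignores memory bandwidth, scene composition, and driver behavior; the figures are the ones quoted above):

        # Naive throughput model: CUDA cores x boost clock (MHz).
        # Real Iray scaling also depends on memory, scene, and drivers.
        def throughput(cores, boost_mhz):
            return cores * boost_mhz

        ti = throughput(4352, 1710)    # RTX 2080 Ti Founders Edition
        sup = throughput(3072, 1815)   # RTX 2080 Super

        print(f"2080 Ti vs 2080 Super: {ti / sup:.2f}x")      # ~1.33x
        print(f"Clock needed to match: {ti / 3072:.0f} MHz")  # ~2422 MHz

    By that model the Super would need to boost past 2.4 GHz to close the gap, which is the "overclocked like crazy" territory above.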

    So let me get this straight: you find an ESTIMATED performance gain completely unrelated to Iray or rendering in general to be more interesting and useful than the data sheet on the card itself? I find your statement funny given how you regularly scoff at the very real rendering benchmarks I posted. You can take the specs and do the math yourself.

    The biggest deal with the Super releases is that they kill the Founder's Edition pricing. It was very hard to find cards near the actual base price rather than the Founder's Edition price. So the 2080 Super is going to be $700, and not $800 like the original. That is effectively a $100 price cut on top of a performance boost. But don't go dreaming that the 2080 Super is somehow a 2080ti "Light" now.

    Post edited by outrider42 on
  • ebergerly Posts: 3,255
    You may be right. But we probably won't know the facts relative to actual Iray/RTX performance for a few months. If we're lucky.
  • rrward Posts: 556
    The 2080ti is as fast as two 1080tis in one system, and as stated before, costs less than buying two 1080tis. Power users will pay that price.

    That's what I'm seeing with my system. And yes, for me the two 2080 tis I have are worth the money.

    I'm not getting people saying "it's all so complicated". It isn't. The 20x0 cards outperform the 10x0 cards regardless of RTX support. As for the Supers, they're replacing the 2080 and the 2070 at the same price, and Nvidia is killing off the $100 (US) premium for Founders Edition cards.

  • ebergerly Posts: 3,255
    edited July 2019

    As I've mentioned before, some people approach major purchase decisions in the same way a business would...by using a Benefit/Cost analysis. And if the benefits don't outweigh the cost to a certain extent (at least in the eyes of that particular business), then the purchase isn't made. 

    In my analyses of Studio/Iray benchmark results for a long list of GTX and other cards, and comparing their Benefit to Cost numbers, I've found that it's fairly common and reasonable for a "good deal" to be a cost/benefit around 10-15 (cost in $$ divided by percent improvement in Iray render times). I've found that the more expensive options are up over 20, which I consider to be overpriced. 

    The RTX 2080ti, for example, was hovering around $1,200. The normal price for a GTX 1080ti was under $700 (which is what I paid). And assuming the RTX 2080ti cut the 1080ti render times in half (i.e., a 50% improvement), the Cost/Benefit comes out to 1200/50, or 24 (relative to the 1080ti). Personally, that seemed quite high to me compared to the other GTX family improvements since the GTX 1060. Doesn't mean it's bad, just that for me personally it seemed like a significant difference from most of the previous NVIDIA GPU releases (usually a $200-300 increase in price for a 30% improvement in render times). Now if it was $700 for a 50% improvement, that would be right in the ballpark with a cost/benefit of 14. I'm hoping the $700 Super 2080 is closer to 14 than 24.
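
    For anyone who wants to run the same metric on their own shortlist, here it is as a minimal sketch (the values are the ones from this post):

        # Cost/benefit as used in this thread: price in USD divided by
        # percent improvement in Iray render time vs. a baseline card.
        def cost_benefit(price_usd, improvement_pct):
            return price_usd / improvement_pct

        print(cost_benefit(1200, 50))  # 2080ti vs 1080ti: 24.0 (over my ~14 bar)
        print(cost_benefit(700, 50))   # hoped-for Super 2080: 14.0 (a "good deal")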

    However, without knowing the final Iray render times once (if) Iray fully implements all the RTX-related technology, it's not fair to assume only a 50% improvement. Hence part of the complications. Until RTX is fully implemented we can't fill in that % improvement number. And if the Super variety ends up giving a much better Cost/Benefit, personally I'd consider that rather than an RTX 2080ti. But that's just me. Whatever works. 

    Post edited by ebergerly on
  • RayDAnt Posts: 1,120
    edited July 2019
    ebergerly said:
    If you look at the Techpowerup estimates on the 2080 Super vs the 2080ti (and others), they estimate the 2080ti offers only a 5% performance gain over the 2080 Super.

    Yeah... my advice is to STAY AWAY from those auto-generated relative performance tables TechPowerUp has when it comes to drawing conclusions (at least where Nvidia cards are concerned). The way those charts are calculated is based on the officially published base and boost clock speeds for each card. And due to the way Nvidia's GPU Boost technology works, the officially published numbers have virtually nothing to do with what the cards typically exhibit under actual use (check pretty much any major outlet's reviews of their cards for ample illustrations of this).

    Although it's interesting - if you look at the published specs for all the SUPER (sic) cards, the base and boost clocks listed look a lot like the numbers the cards they are replacing get in real world use. So perhaps Nvidia has finally decided to stop lying about clock speeds (pretending they're lower) and just go with the numbers they actually get.
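
    For anyone wanting to check this themselves, a minimal sketch that polls the actual SM clock against the card's rated maximum while a render runs (clocks.sm and clocks.max.sm are standard nvidia-smi query fields; this assumes nvidia-smi is on your PATH):

        import subprocess
        import time

        # Print actual vs. maximum SM clock once per second during a render.
        # Stop with Ctrl+C.
        while True:
            out = subprocess.run(
                ["nvidia-smi",
                 "--query-gpu=name,clocks.sm,clocks.max.sm",
                 "--format=csv,noheader"],
                capture_output=True, text=True, check=True,
            ).stdout.strip()
            print(out)
            time.sleep(1)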

    Post edited by RayDAnt on
  • ebergerly Posts: 3,255
    edited July 2019
    I would also suggest we stay away from assuming that raw hardware specs like number of cores and clock speed and on and on have much real-world relevance to actual Iray render times. I think we already showed that core counts in the GTX series are not indicative of render times. And I suspect that with RTX RT cores and tensor cores and other architecture variations, as well as the very uncertain implementations on the software side, as well as scene composition, the only real indication of Iray render times is actual Iray render times. And we should also not assume that more is better. If a faster clock speed shaves 10.36 seconds off a render, does anyone really care?
    Post edited by ebergerly on
  • drzap Posts: 795
    edited July 2019
    ebergerly said:
    I would also suggest we stay away from assuming that raw hardware specs like number of cores and clock speed and on and on have much real-world relevance to actual Iray render times. I think we already showed that core counts in the GTX series are not indicative of render times. And I suspect that with RTX RT cores and tensor cores and other architecture variations, as well as the very uncertain implementations on the software side, as well as scene composition, the only real indication of Iray render times is actual Iray render times. And we should also not assume that more is better. If a faster clock speed shaves 10.36 seconds off a render, does anyone really care?

    I would.... that is, if I were using Iray.   10 seconds x 24 frames per second = 4 minutes per second of animation produced = a whopping 4 hours per minute of animation.  So a 10-second improvement would save me 60 hours on rendering a 15-minute episode. That's a whole work week. If rendering is your hobby, yeah, 10 seconds doesn't mean an awful lot.  But if you are working, those seconds add up and they turn into dollars of lost income.  An upgrade to the fastest hardware is money in the pocket.
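
    Spelled out as a quick sketch (same numbers as above, at 24 fps):

        # Savings from shaving 10 s off each frame of a 24 fps animation.
        saved_per_frame = 10                      # seconds saved per frame
        fps = 24
        per_second = saved_per_frame * fps        # 240 s = 4 min per animation second
        per_minute = per_second * 60              # 14,400 s = 4 h per animation minute
        episode_hours = per_minute * 15 / 3600    # 15-minute episode
        print(episode_hours)                      # 60.0 hours -- a full work week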

    Post edited by drzap on
  • ebergerly Posts: 3,255
    Good point. While I'm fairly sure it doesn't apply to most users, stuff like that is important to consider.
  • RayDAnt Posts: 1,120
    edited July 2019
    ebergerly said:
    I would also suggest we stay away from assuming that raw hardware specs like number of cores and clock speed and on and on have much real-world relevance to actual Iray render times.

    This makes no logical sense. Iray GPU-based rendering performance IS directly determined by a card's PHYSICAL HARDWARE specs. The issue with the TechPowerUp graphs is that they are inaccurate since they include both physical hardware and SOFTWARE specs in their calculation (clock speeds aren't a physical hardware spec since they can be changed in software).

    If a faster clock speed shaves 10.36 seconds off a render, does anyone really care?

    Perhaps you don't, but you have to remember that your particular use case is only one out of many. For anyone doing more than one render at any one time, that 10+ second difference becomes a significant change in overall RATE rather than a small addition in time.

    Post edited by RayDAnt on
  • ebergerly Posts: 3,255
    edited July 2019
    Don't forget that just because the hardware exists doesn't mean the specific problem and the software are able to take full advantage of it. Hopefully, if your GPU has 50 RT cores, your particular scene and raytracing algorithms can be configured at runtime to use all cores effectively, and not be waiting around for some to finish doing other stuff. There's a whole universe of software design and memory management and so on that have to come together to have it all working at peak efficiency. And often it just ain't possible. Imagine a big factory with a lot of production lines. You need the right amount of orders, the right amount of workers, the right amount of resources, and have it all choreographed.
    Post edited by ebergerly on
  • outrider42 Posts: 3,679
    Looking at a cost analysis is not always a good way to shop for video cards where speed is the ultimate factor. You pay a premium for a premium product, simple as that. The x80ti or Titan class of cards have never really been a "good" value in such an analysis. The gaming equivalent is "cost per frame", and no top end card has ever topped that chart to my knowledge. The most "cost effective" cards are always in the low to mid range. But no matter what the analysis says, if you can't run the more demanding games at high frame rates, it's pointless. If you want the fastest, you have to pay for it.

    The 2080ti is for those who place a premium on time and performance. For some people, time is money, and making that investment pays off.

    Everyone has to make their own decision for what works best for them. Dissing the 2080ti at every step just because you don't like it is not doing anybody any good, because now you are injecting an opinion into a cost analysis you made up yourself. While I do agree the 2080ti is priced higher than perhaps it could be, the performance is still there. The 2080ti can outperform two 1080tis in Iray, which is quite a jump in performance.

    If you really want to gripe about the 2080ti price, you should send your consternation towards AMD for not competing for so many years. If AMD had something, anything, that could compete with the 2080ti, I guarantee the price situation would be very different. Just look at Super: Nvidia shifted 3 main cards just days before AMD launched Navi. That certainly is no coincidence. It shouldn't take an elaborate analysis to figure out why this happened. And AMD responded to Super by dropping the announced Navi prices before they even launched. In doing so, AMD tweeted how they "embrace competition". This is how the market is supposed to work.

    So if you want to see prices drop, you want to root for competition. This is why I keep pointing to 2020 as potentially a fantastic year for GPU hardware. Nvidia will release 7nm next gen cards, AMD will release their larger and faster Navi on full RDNA, and Intel will be jumping into the fray and potentially really shaking things up. With 3 big companies all releasing GPU hardware in one year, 2020 will be a wild one.

    But until then, this is what we've got. The 2080ti is all alone at the top. I would be thrilled to see its price drop, but I'm not holding my breath. I think the most we may get is the Founder's Edition price dropping to match the base price.
  • outrider42 Posts: 3,679
    ebergerly said:
    Don't forget that just because the hardware exists doesn't mean the specific problem and the software are able to take full advantage of it. Hopefully, if your GPU has 50 RT cores, your particular scene and raytracing algorithms can be configured at runtime to use all cores effectively, and not be waiting around for some to finish doing other stuff. There's a whole universe of software design and memory management and so on that have to come together to have it all working at peak efficiency. And often it just ain't possible.

    And don't forget that the 2080 and 2080ti are using the exact same chip, with the exact same software. This is not like comparing hardware from different generations. And when you compare cards with the same hardware and software...the specs tell the difference. The 2080 Super doesn't have the specs to come close to a 2080ti.

    For a good example, just look down the line at the 2060 and its Super. The 2060 Super is actually clocked slower than the 2060, but it has more CUDA cores and powers ahead in performance, and the CUDA difference between them is not even as great as the one between the 2080 Super and 2080ti.
  • ebergerly Posts: 3,255
    Thanks outrider. So what number do I use for Iray render time improvement for the RTX 2080 Super over the 1080ti?
  • nicstt Posts: 11,714
    ebergerly said:
    drzap said:

     but I'm not going to hate on those who bought them because 2x performance upgrade in one card ain't too shabby.

    It has nothing to do with hate, it's about concern. Some of us were advising folks 9 months ago to wait, and base their buying decisions on facts, not hype and emotion. We were trying to keep folks from wasting money. And the response was we were mocked, and chided for being too negative. As a result, some folks spent upwards of $1,400 on a card with probably the same performance as a card you can buy now for 1/2 that price. 

    I'm still recommending that folks wait and see; sure, that performance improvement is nice, but it does come at a cost. For me that cost is too high. It's too high because the product available NINE MONTHS after release is still not supported.

  • drzap Posts: 795
    edited July 2019

    Cutting edge technology has never been at the apex of price/performance value.  Never.  And it never will be, because it isn't good business sense for suppliers.  Those who are early adopters of the latest technology should already know this.  Businesses that choose to buy into it do it because they can't afford not to.  I agree with nicstt, most users should probably wait and see because of the flaky RTX issue.  But artists have been buying them in droves because it just makes sense to get the fastest thing you can afford if it will help you produce better work, faster.

    Post edited by drzap on
  • ebergerly Posts: 3,255
    edited July 2019

    Like I said, the 1080ti was only around $700, and had a 56% improvement in render times over a GTX 1060. That's a real good value, IMO, and at the time it was surely cutting edge. And a GTX 1070 was only $450 and gave a 33% improvement when it was cutting edge. Again, a similar value (both price/performance around 13-14). 

    I'm kinda scratching my head trying to figure out why so many are suddenly suggesting we toss any consideration of price for what most here consider a hobby. So if you can afford $2,000 for a GPU, just pay it, even though it only gives a 10-20% improvement? I don't get it. As a minimum I think a rational person would AT LEAST start with considering Price/Performance as part of the equation, and then decide whether to toss it or not if there are some overriding factors. 

    Post edited by ebergerly on
  • RayDAnt Posts: 1,120
    edited July 2019
    ebergerly said:

    I'm kinda scratching my head trying to figure out why so many are suddenly suggesting we toss any consideration of price for what most here consider a hobby.

    There's your disconnect right there. Most of the people you are conversing with here (myself included) are professional artists of one sort or another. Professionals have very different cost/performance considerations than hobbyists do when it comes to things that count as both income generators and tax-deductible business expenses. This is an advanced level hardware discussion thread. Obviously most of the people posting in it are going to be coming at it from the professional (or at the very least, hardware enthusiast) side of things rather than the hobbyist side. Expecting otherwise makes no logical sense.

    Post edited by RayDAnt on
  • ebergerly Posts: 3,255
    edited July 2019

    So you DO consider cost and benefit in your purchase analysis? It's just that some people's definition of "benefit" might include stuff that others might not care about? Professionals might consider a smaller speed improvement as having more value than a hobbyist might? So in my cost/benefit definition, where I set a "bar" of around 14 as a good value, others might decide upon a "good value" being something higher?

    Just like I mentioned at the beginning, it varies among the companies that employ this method. Depending on your needs and requirements, you determine what the costs and benefits are as they apply to your business or hobby. Like I said, this is standard practice among (successful) businesses. You're obviously free to define cost and benefits as you see fit, but I encourage folks to use this standard cost/benefit technique when considering what to buy.

    BTW, here's a copy of a chart I posted a year or two ago here with some basic cost/benefit numbers for the GTX line of GPUs, assuming the benefit to most users is Iray render time improvement.  

     

    [Attachment: Iray Benchmark Price Performance.jpg]
    Post edited by ebergerly on
  • drzap Posts: 795
    edited July 2019
    ebergerly said:

    Like I said, the 1080ti was only around $700, and had a 56% improvement in render times over a GTX 1060. That's a real good value, IMO, and at the time it was surely cutting edge. And a GTX 1070 was only $450 and gave a 33% improvement when it was cutting edge. Again, a similar value (both price/performance around 13-14). 

    I'm kinda scratching my head trying to figure out why so many are suddenly suggesting we toss any consideration of price for what most here consider a hobby. So if you can afford $2,000 for a GPU, just pay it, even though it only gives a 10-20% improvement? I don't get it. As a minimum I think a rational person would AT LEAST start with considering Price/Performance as part of the equation, and then decide whether to toss it or not. 

    GTX 1070 was never cutting edge.  It was always slotted as middle-high end and considered the best price-performance option, and I remember a lot of folks complaining about the price of the 1080ti when it first came out, including you. https://www.daz3d.com/forums/discussion/comment/2621976/#Comment_2621976  Your estimation then was that the 1080ti was not the best performance for the price (how quickly we forget?).  Nevertheless, you eventually joined the bandwagon and got you one.  But today, we can get twice the performance from a 2080ti for less than twice the price, so the cost/benefit quotient remains about the same as the last generation.  A $2,000 GPU with only 20% improvement from generation to generation would die on the shelves.  Today's $2K card (you must only be talking about a Titan RTX) is twice as good as the last generation.

    Post edited by drzap on
  • Robert Freise Posts: 4,269

    As a quote hobbyist unquote I try to buy on sale whenever possible, BUT I look at all options so that I get the most bang for my buck even if the item isn't on sale.

  • ebergerly Posts: 3,255

     

    drzap said:

    Your estimation then was the 1080ti was not the best performance for the price (how quickly we forget?).  Nevertheless, you eventually joined the bandwagon and got you one.  

    Yes. I was the first person here to even mention the concept of cost/benefit. At the time I was shooting for a cost/benefit of 10, then later realized that historically (based on the chart I developed above) a slightly different ratio of around 13 or 14 is a more reasonable expectation, so I accepted that. I moved 3 points, based on facts. What's your point? Here we're talking about accepting almost twice that (i.e., 24). 

  • RayDAnt Posts: 1,120
    edited July 2019
    ebergerly said:

    So you DO consider cost and benefit in your purchase analysis? It's just that some people's definition of "benefit" might include stuff that others might not care about?

    Yes. All of your cost/benefit analyses are woefully inadequate from anything but a very simplistic hobbyist perspective because they don't take into account different capacities of VRAM.

    ETA: Performance limitations stemming from inadequate onboard video memory are a much more important issue to someone working with professional GPU computing workloads than speed (I can get by with a GPU that performs 10% slower than I'd ideally like. I can't get by with a GPU with 10% less memory than needed for what I'm doing.)
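
    One way to bolt that onto the cost/benefit sketch from earlier: treat minimum VRAM as a hard filter before ranking anything. The prices and the 2080 Ti's 50% figure come from this thread; the other improvement percentages are placeholders for illustration only:

        # VRAM is a hard requirement: a scene that doesn't fit in VRAM drops
        # Iray to CPU rendering (or fails), so filter on VRAM before ranking.
        cards = [
            # (name, price USD, % improvement vs GTX 1080 Ti, VRAM GB)
            ("RTX 2080 Ti",   1200, 50, 11),
            ("RTX 2080 Super", 700, 35,  8),   # improvement is a placeholder
            ("Titan RTX",     2500, 55, 24),   # improvement is a placeholder
        ]

        def viable(min_vram_gb):
            return [c for c in cards if c[3] >= min_vram_gb]

        # With a 10 GB scene, the "best value" 8 GB card drops out entirely.
        for name, price, gain, vram in sorted(viable(10), key=lambda c: c[1] / c[2]):
            print(f"{name}: cost/benefit {price / gain:.1f}, {vram} GB")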

    Post edited by RayDAnt on
  • ebergerly Posts: 3,255
    RayDAnt said:
    ebergerly said:

    So you DO consider cost and benefit in your purchase analysis? It's just that some people's definition of "benefit" might include stuff that others might not care about?

    Yes. All of your cost/benefit analyses are woefully inadequate from anything but a very simplistic hobbyist perspective because they don't take into account different capacities of VRAM. 

    Awesome. So if you don't like the numbers, feel free to add your considerations to the cost/benefit analysis for those in a similar position to yours. 

  • Takeo.Kensei Posts: 1,303

     

    ebergerly said:
    Thanks outrider. So what number do I use for Iray render time improvement for the RTX 2080 Super over the 1080ti?


    Outrider is right. Iray performance has always scaled with core counts. Saying out of the blue that it suddenly shouldn't be taken into account has no logic.

    And your render time improvement is specific to one scene (three at most), whose reliability as a future benchmark has been questioned for a few months now.

    Even if the render-time hierarchy follows core count and frequency within a single generation, it is clear that the SY scene is unable to really differentiate the top cards.

    Now if you want a number, I'd bet on 1:15, but I warn you, that number is meaningless.

    ebergerly said:

    Like I said, the 1080ti was only around $700, and had a 56% improvement in render times over a GTX 1060. That's a real good value, IMO, and at the time it was surely cutting edge. And a GTX 1070 was only $450 and gave a 33% improvement when it was cutting edge. Again, a similar value (both price/performance around 13-14). 

    I'm kinda scratching my head trying to figure why so many are suddenly suggesting we toss any consideration of price for what most here consider a hobby. So if you can afford $2,000 for a GPU, just pay it, even though it only gives a 10-20% improvement? I don't get it. As a minimum I think a rational person would AT LEAST start with considering Price/Performance as part of the equation, and then decide whether to toss it or not if there are some overriding factors. 

    I think what is not understandable is why a 2080Ti at $1,200 would be a worse value than a 1080Ti at $700. You get double the performance for less than double the price, and all that without having RT core or NVLink support in DS.

    The problem is your cost/benefit measurement: it varies according to your starting point. If you don't have any GPU, the same card has a better value than if you start from an existing card.

    If you want to make a measurement, then take performance per dollar, and the 2080Ti should suddenly appear better.
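
    The two metrics side by side, as a sketch using the thread's round numbers (performance normalized to the 1080 Ti: 2x at $1,200 vs. 1x at $700):

        # Cost per % improvement vs. performance per dollar, same two cards.
        cards = {"GTX 1080 Ti": (700, 1.0), "RTX 2080 Ti": (1200, 2.0)}

        for name, (price, perf) in cards.items():
            print(f"{name}: {perf / price * 1000:.2f} performance per $1,000")
        # 1080 Ti -> 1.43, 2080 Ti -> 1.67: measured against its whole price,
        # the 2080 Ti comes out ahead, even though the thread's
        # price-per-%-improvement metric rates it as expensive.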

This discussion has been closed.