General GPU/testing discussion from benchmark thread
Comments
RTX cards aren't completely useless. They offer more compute power and more CUDA cores, so even without the RT cores they offer some value for power users. I'm particularly grateful for the early adopters, because they help finance the progress of this technology. Hopefully Nvidia will have a Super 2080 Ti or Titan with a better cost-to-value ratio to make it worth my while. In the meantime I can't recommend them, but I'm not going to hate on those who bought them, because a 2x performance upgrade in one card ain't too shabby.
It has nothing to do with hate; it's about concern. Some of us were advising folks 9 months ago to wait and base their buying decisions on facts, not hype and emotion. We were trying to keep folks from wasting money. And in response we were mocked and chided for being too negative. As a result, some folks spent upwards of $1,400 on a card with probably the same performance as a card you can now buy for half that price.
LOL, I wouldn't waste my "concern" on those early adopters if I were you. Like I said, they got a tremendous performance boost, and apparently they've been taking advantage of it for up to 10 months. There is always going to be a cheaper, faster graphics card in the future. No need to put off a purchase if you need the hardware and you have the cash. I know I wouldn't have hesitated to buy if I were in the market for new GPUs.
Yeah, I got a huge boost from my 2080, but I could have gotten the same from a much lower-priced 1080 Ti. Things will be different when NVIDIA gets its act together with Iray support for its RTX GPUs. Then I can say Nah Nah Ne Nah Nah to the GTX 10 series people.
If you go by that chart, that same chart would indicate that the 2080 Ti is only 21% faster than the last-generation 1080 Ti... hmmm. So how did that turn out? I can't recall too many cases where the 1080 Ti is ONLY 21% slower than the 2080 Ti at anything, LOL. Fail. And for Iray it's more like 5 TIMES that. So um, no, I am not buying this 5% estimate by a long shot. This estimate is relative game performance, anyway. Last I checked... Iray is not a game. If we want to do some funny math, then 5 times 5% is 25%. So the 2080 Ti will be 25% faster than the 2080 Super at Iray.
They use the same architecture, so only two specs really matter: core count and clock speed. So with a roughly 1,280-core disadvantage, the 2080 Super would have to be overclocked like crazy to make up that difference. There is no other magical way to make it up. The 2080 Ti FE boosts to 1710, while the 2080 Super boosts to 1815. I'm not so sure that 105 MHz is going to make up for 1,280 cores. Keep in mind the existing 2080 is also clocked higher than the 2080 Ti, at 1800 MHz. The upgraded GDDR6 is faster, which might matter for video games but is not so much of a factor with Iray.
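A back-of-the-envelope sketch of that argument, assuming Iray throughput scales roughly with cores times clock (a simplification that ignores memory and RT/Tensor hardware). The core counts are the published specs; the boost clocks are the ones quoted above:

```python
# Rough throughput comparison: cores x boost clock, in arbitrary units.
# Core counts are the published specs; boost clocks are the ones quoted above.
cards = {
    "RTX 2080 Ti":    {"cores": 4352, "boost_mhz": 1710},
    "RTX 2080 Super": {"cores": 3072, "boost_mhz": 1815},
}

for name, spec in cards.items():
    spec["throughput"] = spec["cores"] * spec["boost_mhz"]
    print(f"{name}: {spec['throughput']:,}")

ratio = cards["RTX 2080 Ti"]["throughput"] / cards["RTX 2080 Super"]["throughput"]
print(f"2080 Ti theoretical advantage: {ratio:.2f}x")  # roughly 1.33x
```

By that simple measure, the higher clock only claws back a fraction of the core deficit.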
So let me get this straight, you find an ESTIMATED performance gain completely unrelated to Iray or rendering in general to be more interesting and useful than the data sheet on the card itself? I find your statement funny given how you regularly scoff at the very real rendering benchmarks I posted. You can take the specs and do the math yourself.
The biggest deal with the Super releases is that they kill the Founders Edition pricing. It was very hard to find cards near the actual base price rather than the Founders Edition price. So the 2080 Super is going to be $700, and not $800 like the original. That is effectively a $100 price cut on top of a performance boost. But don't go dreaming that the 2080 Super is somehow a 2080 Ti "Light" now.
That's what I'm seeing with my system. And yes, for me the two 2080 tis I have are worth the money.
I'm not getting people saying "it's all so complicated." It isn't. The 20x0 cards outperform the 10x0 cards regardless of RTX support. As for the Supers, they're replacing the 2080 and the 2070 at the same price, and Nvidia is killing off the $100 (US) premium for Founders Edition cards.
As I've mentioned before, some people approach major purchase decisions in the same way a business would...by using a Benefit/Cost analysis. And if the benefits don't outweigh the cost to a certain extent (at least in the eyes of that particular business), then the purchase isn't made.
In my analyses of Studio/Iray benchmark results for a long list of GTX and other cards, comparing their cost-to-benefit numbers, I've found that it's fairly common and reasonable for a "good deal" to be a cost/benefit around 10-15 (cost in dollars divided by percent improvement in Iray render times). I've found that the more expensive options are up over 20, which I consider overpriced.
The RTX 2080 Ti, for example, was hovering around $1,200. The normal price for a GTX 1080 Ti was under $700 (which is what I paid). And assuming the RTX 2080 Ti cut the 1080 Ti render times in half (i.e., a 50% improvement), the cost/benefit comes out to 1200/50, or 24 (relative to the 1080 Ti). Personally, that seemed quite high compared to the other GTX family improvements since the GTX 1060. Doesn't mean it's bad, just that for me personally it seemed like a significant departure from most of the previous NVIDIA GPU releases (usually a $200-300 increase in price for a 30% improvement in render times). Now if it were $700 for a 50% improvement, that would be right in the ballpark, with a cost/benefit of 14. I'm hoping the $700 Super 2080 is closer to 14 than 24.
However, without knowing the final Iray render times once (if) Iray fully implements all the RTX-related technology, it's not fair to assume only a 50% improvement. Hence part of the complications. Until RTX is fully implemented we can't fill in that percent-improvement number. And if the Super variety ends up giving a much better cost/benefit, personally I'd consider that rather than an RTX 2080 Ti. But that's just me. Whatever works.
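For anyone who wants to plug in their own numbers, here's a minimal sketch of the ratio described above. The prices and the 50% improvement figure are the assumptions from that post, not measured results:

```python
def cost_benefit(price_usd, render_time_improvement_pct):
    """Purchase price divided by the percent reduction in Iray render time
    versus the card being replaced. Lower is better; ~10-15 is treated as
    a good deal in this thread, 20+ as overpriced."""
    return price_usd / render_time_improvement_pct

# RTX 2080 Ti at ~$1,200, assuming it halves 1080 Ti render times (50%):
print(cost_benefit(1200, 50))  # 24.0 -- high by the ~14 yardstick
# The same 50% improvement at a $700 price point:
print(cost_benefit(700, 50))   # 14.0 -- in line with earlier GTX releases
```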
Yeah... my advice is to STAY AWAY from those auto-generated relative performance tables TechPowerUp has when drawing conclusions (at least where Nvidia cards are concerned). The way those charts are calculated is based on the officially published base and boost clock speeds for each card. And due to the way Nvidia's GPU Boost technology works, their officially published numbers have virtually nothing to do with what the cards typically exhibit under actual use (check pretty much any major outlet's reviews for ample illustrations of this).
Although it's interesting - if you look at the published specs for all the SUPER (sic) cards, the base and boost clocks listed look a lot like the numbers the cards they are replacing get in real world use. So perhaps Nvidia has finally decided to stop lying about clock speeds (pretending they're lower) and just go with the numbers they actually get.
I would... that is, if I were using Iray. 10 seconds x 24 frames = 4 minutes per second of animation produced, which works out to a whopping 4 hours per minute of animation. So a 10-second improvement would save me 60 hours on rendering a 15-minute episode. That's a whole work week. If rendering is your hobby, yeah, 10 seconds doesn't mean an awful lot. But if you are working, those seconds add up and they turn into dollars of lost income. An upgrade to the fastest hardware is money in the pocket.
This makes no logical sense. Iray GPU-based rendering performance IS directly determined by a card's PHYSICAL HARDWARE specs. The issue with the TechPowerUp graphs is that they are inaccurate because they include both physical hardware and SOFTWARE specs in their calculation (clock speeds aren't a physical hardware spec, since they can be changed in software).
Perhaps you don't, but you have to remember that your particular use case is only one out of many. For anyone doing more than one render at a time, that 10+ second difference becomes a significant change in overall RATE rather than a small addition in time.
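Spelled out as a quick sketch, assuming a flat 10-second-per-frame saving and 24 fps (the figures from the post above):

```python
# Time saved by a 10-second-per-frame improvement at 24 fps,
# assuming every frame benefits equally.
seconds_saved_per_frame = 10
fps = 24
episode_minutes = 15

frames = episode_minutes * 60 * fps                      # 21,600 frames
hours_saved = frames * seconds_saved_per_frame / 3600    # 60.0 hours
print(f"{hours_saved:.0f} hours saved per {episode_minutes}-minute episode")
```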
I'm still recommending that folks wait and see; sure, that performance improvement is nice, but it does come at a cost. For me that cost is too high. It's too high because the product available NINE MONTHS after release is still not supported.
Cutting-edge technology has never been at the apex of price/performance value. Never. And it never will be, because it isn't good business sense for suppliers. Those who are early adopters of the latest technology should already know this. Businesses that choose to buy into it do so because they can't afford not to. I agree with nicstt, most users should probably wait and see because of the flaky RTX issue. But artists have been buying them in droves, because it just makes sense to get the fastest thing you can afford if it helps you produce better work, faster.
Like I said, the 1080ti was only around $700, and had a 56% improvement in render times over a GTX 1060. That's a real good value, IMO, and at the time it was surely cutting edge. And a GTX 1070 was only $450 and gave a 33% improvement when it was cutting edge. Again, a similar value (both price/performance around 13-14).
I'm kinda scratching my head trying to figure out why so many are suddenly suggesting we toss any consideration of price for what most here consider a hobby. So if you can afford $2,000 for a GPU, just pay it, even though it only gives a 10-20% improvement? I don't get it. At a minimum I think a rational person would AT LEAST start by considering price/performance as part of the equation, and then decide whether to toss it if there are some overriding factors.
There's your disconnect right there. Most of the people you are conversing with here (myself included) are professional artists of one sort or another. Professionals have very different cost/performance considerations than hobbyists do when it comes to things that count as both income generators and tax-deductible business expenses. This is an advanced-level hardware discussion thread. Obviously most of the people posting in it are going to be coming at it from the professional (or at the very least, hardware enthusiast) side of things rather than the hobbyist side. Expecting otherwise makes no logical sense.
So you DO consider cost and benefit in your purchase analysis? It's just that some people's definition of "benefit" might include stuff that others might not care about? Professionals might consider a smaller speed improvement as having more value than a hobbyist would? So where my cost/benefit definition sets a "bar" of around 14 as a good value, others might decide that a "good value" is something higher?
Just like I mentioned at the beginning, it varies among companies that employ this method. Depending on your needs and requirements, you determine what the costs and benefits are as they apply to your business or hobby. Like I said, this is standard practice among (successful) businesses. You're obviously free to define costs and benefits as you see fit, but I encourage folks to use this standard cost/benefit technique when considering what to buy.
BTW, here's a copy of a chart I posted a year or two ago with some basic cost/benefit numbers for the GTX line of GPUs, assuming the benefit to most users is the improvement in Iray render times.
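(The chart itself isn't reproduced here. As an illustration only, here's a sketch of how its numbers would be computed, using the prices and GTX 1060-relative improvements quoted earlier in this thread rather than the original chart data:)

```python
# Illustrative reconstruction of the kind of chart described above, using the
# prices and render-time improvements (vs. a GTX 1060) quoted in this thread.
gtx_line = [
    # (card, approx. street price in USD, % Iray render-time improvement over a GTX 1060)
    ("GTX 1070",    450, 33),
    ("GTX 1080 Ti", 700, 56),
]

print(f"{'Card':<12} {'Price':>6} {'Improv.':>8} {'Cost/Benefit':>13}")
for card, price, improvement in gtx_line:
    print(f"{card:<12} {price:>6} {improvement:>7}% {price / improvement:>13.1f}")
# Both land around 12-14, the range treated as a good deal above.
```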
The GTX 1070 was never cutting edge. It was always slotted as middle-high end and considered the best price-performance option, and I remember a lot of folks complaining about the price of the 1080 Ti when it first came out, including you. https://www.daz3d.com/forums/discussion/comment/2621976/#Comment_2621976 Your estimation then was that the 1080 Ti was not the best performance for the price (how quickly we forget). Nevertheless, you eventually joined the bandwagon and got you one. But today we can get twice the performance from a 2080 Ti for less than twice the price, so the cost/benefit quotient remains about the same as the last generation. A $2,000 GPU with only a 20% improvement from generation to generation would die on the shelves. Today's $2K card (you must be talking about a Titan RTX) is twice as good as the last generation's.
As a quote hobbyist unquote, I try to buy on sale whenever possible, BUT I look at all options so that I get the most bang for my buck, even if the item isn't on sale.
Yes. I was the first person here to even mention the concept of cost/benefit. At the time I was shooting for a cost/benefit of 10, then later realized that historically (based on the chart I developed above) a slightly different ratio of around 13 or 14 is a more reasonable expectation, so I accepted that. I moved 3 points, based on facts. What's your point? Here we're talking about accepting almost twice that (i.e., 24).
Yes. All of your cost/benefit analyses are woefully inadequate from anything but a very simplistic hobbyist perspective because they don't take into account different capacities of VRAM.
ETA: Performance limitations stemming from inadequate onboard video memory are a much more important issue for someone working with professional GPU computing workloads than speed is (I can get by with a GPU that performs 10% slower than I'd ideally like; I can't get by with a GPU that has 10% less memory than what I'm doing needs).
Awesome. So if you don't like the numbers, feel free to add your considerations to the cost/benefit analysis for those in a similar position as you.
Outrider is right. Iray performance has always scaled with core count. Saying out of the blue that it suddenly shouldn't be taken into account makes no sense.
And your render-time improvement figures are specific to one scene (three at most), and the reliability of that scene as a future benchmark has been questioned for a few months now.
Even if the render-time hierarchy follows core count and frequency within a single generation, it is clear that the SY scene is unable to really differentiate the top cards.
Now if you want a number, I'd bet on 1:15, but I warn you, that number is meaningless.
I think what's hard to understand is why a 2080 Ti at $1,200 would be a worse value than a 1080 Ti at $700. You get double the performance for less than double the price, and all that without RT core or NVLink support in DS yet.
The problem is your cost/benefit measurement: it varies according to your starting point. If you don't have any GPU, the same card has a better value than if you start from an existing card.
If you want a measurement, take performance per dollar, and the 2080 Ti should suddenly look better.
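To make the contrast concrete, here's a small sketch using the figures quoted in this thread (a ~$700 1080 Ti as the baseline and a ~$1,200 2080 Ti at roughly double the speed; both numbers are assumptions, not benchmarks):

```python
# Two ways to score the same cards, using figures quoted in this thread.
price_1080ti, speed_1080ti = 700, 1.0    # baseline card, relative speed 1.0
price_2080ti, speed_2080ti = 1200, 2.0   # assumed ~2x the 1080 Ti's speed

# Performance per dollar does not depend on what card you start from.
print(f"1080 Ti: {speed_1080ti / price_1080ti * 1000:.2f} relative speed per $1,000")  # 1.43
print(f"2080 Ti: {speed_2080ti / price_2080ti * 1000:.2f} relative speed per $1,000")  # 1.67

# The cost/benefit ratio (price / % render-time reduction) does depend on it.
improvement_vs_1080ti = (1 - speed_1080ti / speed_2080ti) * 100   # 50%
print(f"2080 Ti cost/benefit vs. a 1080 Ti: {price_2080ti / improvement_vs_1080ti:.0f}")  # 24
```

By one yardstick the 2080 Ti comes out ahead; by the other it looks expensive, which is the disagreement in a nutshell.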