OT: New Nvidia Cards to come in RTX and GTX versions?! RTX Titan first whispers.


Comments

  • ebergerly Posts: 3,255
    edited August 2018

    My hunch is that the Iray preview will be pretty much realtime and very responsive. So the preview will be wonderful. The big question for me is the actual renders, and how close they'll be to a full raytrace render, and how fast they will render with the same convergence. Maybe it will be like Blender's Eevee, where it gives you a nice realtime preview, but it's not the same quality as a real render.

    If it's no different than Eevee, which is free with Blender and works with existing GPUs, why bother spending all those $$$ on a new GPU?

    Maybe that's why the response to the RTX cards was so blah. People have seen what Eevee can do with existing cards, and Unity can do as well or better with existing cards, so why get all excited over a $1,200 card?

    Post edited by ebergerly on
  • bluejaunte Posts: 1,863

    What else should I get excited about? The new iPhone?

  • bluejaunte Posts: 1,863

    OK, on my search for the new iPhone I found this: https://www.reddit.com/r/RenderToken/comments/97hhz9/gpu_path_tracing_performance_comparisons_quadro/

    That really doesn't sound too bad. You know, it may just turn out that we GPU renderers are going to be the happiest of all, because games aren't really gonna profit from RTX for a while as they need to support it first, plus in all honesty games already look fantastic today without raytracing (which kinda explains gamers' reactions to the demos). It wasn't mentioned much in the keynote, but it's common today that GI is pre-baked in games. This is essentially baked raytracing, an annoying and lengthy process that is calculated on the CPU, and the dev needs to wait a while until it's finished. Sometimes hours for a full bake. Obviously this can't work for moving objects, but the room with the table and light through the window they showed? Absolutely, unless there was a day/night cycle going on outside, of course.

    So in a sense, a lot of games already had raytracing. Just not realtime. And not for reflections, because those need to change with the player's view.

  • kyoto kid Posts: 40,678
    ...I'd only be happy if I had at least 16 GB of VRAM (32 would be better)
  • bluejaunte Posts: 1,863

    Yeah, 16 would have been nice. Still, I don't have that now either, so I'll take 5-8x faster Iray renders and the same amount of VRAM I already have any day. If that actually were the case, that is smiley

  • kyoto kid Posts: 40,678
    ...for the scenes I create, VRAM is a speed factor, as once the process dumps to the CPU, all those fancy Tensor and RTX cores are worthless.
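
    A quick way to see how close a scene is to that cliff is to check free VRAM before rendering. This is only a rough sketch, assuming the optional pynvml Python bindings (NVIDIA Management Library) are installed; the scene-size figure is a made-up example, not something Iray reports.

```python
# Minimal sketch: check free VRAM before a render to guess whether the renderer
# is likely to fall back to the CPU. Assumes the optional "pynvml" package.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)      # first GPU in the system
info = pynvml.nvmlDeviceGetMemoryInfo(handle)      # total/used/free in bytes

free_gb = info.free / 1024**3
print(f"Free VRAM: {free_gb:.1f} GB")

# Hypothetical, made-up scene footprint for illustration only.
estimated_scene_gb = 9.5
if estimated_scene_gb > free_gb:
    print("Scene likely won't fit on the GPU; expect a CPU fallback.")
pynvml.nvmlShutdown()
```
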
  • JazzyBear Posts: 798

    So I can grab a new card right NOW and will get a new system late NEXT year (end of 2019). It is only to help with Iray for Daz, so 1070, 1070 Ti, or 1080? What is the difference between the standard version and the Ti?

    Also, does the Card Manufacturer really matter? Recommendations?

     

  • outrider42 Posts: 3,679
    edited August 2018
    ebergerly said:

    My hunch is that the Iray preview will be pretty much realtime and very responsive. So the preview will be wonderful. The big question for me is the actual renders, and how close they'll be to a full raytrace render, and how fast they will render with the same convergence. Maybe it will be like Blender's Eevee, where it gives you a nice realtime preview, but it's not the same quality as a real render.

    If it's no different than Eevee, which is free with Blender and works with existing GPUs, why bother spending all those $$$ on a new GPU?

    Maybe that's why the response to the RTX cards was so blah. People have seen what Eevee can do with existing cards, and Unity can do as well or better with existing cards, so why get all excited over a $1,200 card?

    The reason why gamers are not so excited is because they play video games. RTX is so new that not a single video game in existence right now takes any advantage of RTX. None. So for the vast overwhelming majority of video games, the performance impact comes down strictly to what IPC improvements Nvidia has made with CUDA. Games might be getting more RTX in the future, but by the time RTX becomes a mainstream feature, there will be another round of GPUs out that are powerful enough to tackle it. For all the RTX glory talked about, Shadow of the Tomb Raider was not playing at 60 frames per second when RTX was enabled. While they did not have a frame counter on (for obvious reasons, it seems), it was painfully obvious to gamers that the frame rate dropped when it was turned on. It was later confirmed that the game was only running at around 55 frames per second. That is a deal breaker for many gamers. Plus the demo was only 1080p, so don't even think of using RTX for 4K content! The game developers claim that RTX support is early and that it will be polished up. But the game launches next month! RTX is being added at a later date. This is where people start shaking their heads.

    For gamers, especially PC gamers, it is all about the frame rates. They care about frame rates more than kyoto kid cares about VRAM. That is how serious this is. On Steam, there is a watch group called "The Framerate Police" whose only goal is pointing out any and all video games that are locked to 30 FPS, or cannot be played at 60 FPS or better. It doesn't matter how great the game is, or if the game is a classic getting re-released, or whatever, they will trash the game if it does not have 60 FPS. Playing below 60 FPS is just not what players want, and not from a $1,200 GPU. Odds are most players are going to disable RTX because of the extreme hit to performance, just like they did with Nvidia Hairworks. Hairworks also made a ridiculous impact on performance, and most players disable it. And actually, 60 FPS is being kind. Many players want more than that. If you play competitively, you want your FPS as high as possible. So you have a lot of gamers who want 4K at 144 FPS with all settings at Ultra (or whatever the max settings are called). Right now the 1080 Ti can play some games at 4K, and some at high settings, but not a lot. And some players want ultrawide support on top of that, which means roughly 40% more pixels need to be generated.
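
    The pixel arithmetic behind those resolution complaints is easy to sanity-check. A minimal sketch, using common example resolutions rather than anything quoted in the post:

```python
# Rough pixel-count comparison behind the 4K / ultrawide framerate complaints.
# Resolutions are common examples; the exact "more pixels" percentage depends on
# which panels you compare.
resolutions = {
    "1080p (1920x1080)":     1920 * 1080,
    "1440p (2560x1440)":     2560 * 1440,
    "ultrawide (3440x1440)": 3440 * 1440,
    "4K (3840x2160)":        3840 * 2160,
}

base = resolutions["1440p (2560x1440)"]
for name, pixels in resolutions.items():
    print(f"{name}: {pixels / 1e6:.1f} MP, {pixels / base:.2f}x the pixels of 1440p")

# At a 60 FPS target, 4K means shading roughly 3840*2160*60 ≈ 498 million pixels/second.
```
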

    So for people like this, seeing that Nvidia is shipping this massive chip, and then finding that some two-thirds of it is actually used for other things, RT and Tensor cores, is kind of a downer. Nvidia is going to have to prove to these kinds of players that the new cards can offer a performance boost worthy of their massive price. $1,200 is a tough pill to swallow.

    There are a lot of gaming people saying not to buy these cards, or at the very least to wait for the reviews. And of course, I am waiting for a review. I am excited for these cards, but I won't buy until I see some reviews. Sadly, Iray does not get reviewed that often. And we are at the mercy of Nvidia for releasing the SDK.

    Gamers do not care about Blender, let alone Daz Studio Iray. I am something of a gamer, but I am hardly typical. Most players have never even heard of Daz Studio. Go ahead, try it: mention Daz Studio on a gaming forum, and they might think you are talking about porn.

    Which brings me back to the video production crowd. As you can see from the links that bluejaunte and I gave, people in these fields are very much excited about RTX, and they have a very good reason to be. Octane has been messing with RTX for some time; they have actual experience with it. If they are getting this 5-8 times performance, then things are looking very good for Iray. Just like I said, these cards are practically made for Iray; Iray is exactly the special use case that Nvidia touted as getting the best benefit.

    It remains to be seen what happens with NVLink. I was looking up Quadro variants, and some of them come down to basically the same speed as the gaming NVLink. So it would seem that VRAM pooling is still possible, because all Quadros can pool VRAM even on the slowest NVLink. For what it is worth, the Wikipedia article on the new 20 series says NVLink pools memory. That doesn't make it true, of course. It would be great if Nvidia or somebody would clarify it. Has anybody gone on their forums and social media and asked directly?

    Post edited by outrider42 on
  • kyoto kid Posts: 40,678
    edited August 2018

    ...yeah considering the inaccuracies on Wikipedia, I don't take their word as "gospel" on a number of topics (I've even corrected a few articles on subjects like science, music, aviation and astronomy). 

    Not a gamer in the least, save for P&P tabletop RPGs (I kind of grew weary of the push to always be "competitive" in everything I did, along with seeing so many one-person shooter/slasher/whacker games where the only option to succeed was by fighting the opposition, which got old after a short while).

    That said, by all rights, I'm probably the "ideal" customer for the Quadro line, save for the fact I do not have an income or budget to afford one.

    As to gamers' thoughts of programmes like Daz and Poser being for creating porn, it isn't totally unfounded.  Go to DA and check out a number of the 3D art galleries there (you need to sign up to do so though).

     

    Post edited by kyoto kid on
  • outrider42 Posts: 3,679

    Finally, some real information on what is in the 2080 Ti. For the first time, we have the actual Tensor and RT core counts listed. They are very close to the rumored specs, which were apparently based on the full die. The 2080 Ti, being slightly cut down, has slightly lower core counts.

    https://wccftech.com/nvidia-turing-gpu-geforce-rtx-2080-ti-ipc-block-diagram-detailed/

    In summary:

    SMs: 68

    CUDA: 4352

    Tensor: 544

    RT cores: 68

    Geometry Units: 34

    TMUs: 272

    ROPS: 88

    This article also claims 50% better performance per CUDA core over Pascal, which is pretty large if true. Considering the 2080 Ti has over 700 more CUDA cores than the 1080 Ti, AND they are 50% faster, and the base clock is roughly the same, the 2080 Ti should be quite powerful for "regular" non-RTX applications. There is also an article saying the 2080 (not the Ti) is 50% faster than the 1080 in regular games.
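
    A back-of-the-envelope version of that estimate, taking the article's per-core claim at face value and assuming roughly equal clocks (both assumptions, not confirmed numbers):

```python
# Naive 2080 Ti vs 1080 Ti throughput estimate for non-RTX work, using the CUDA
# core counts on the spec sheets and the article's unverified +50% per-core claim.
cuda_1080ti = 3584
cuda_2080ti = 4352
per_core_gain = 1.5        # the claimed 50% per-core improvement, taken at face value

relative_throughput = (cuda_2080ti / cuda_1080ti) * per_core_gain
print(f"Naive relative throughput: {relative_throughput:.2f}x a 1080 Ti")
# ≈ 1.82x — well short of "5-8x", so any larger gain would have to come from the
# RT and Tensor cores rather than raw CUDA muscle.
```
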

    So this is some decent news. But again, the price of the cards is still a big obstacle for a lot of people. And a lot of gamers feel like RTX is a tax that they should not have to pay extra for. But people here might feel quite differently, especially once we get to see real Iray benchmarks. It will be quite interesting to see what they look like.

    The Daz comment was just a joke. Most gamers would have absolutely no idea what Daz is at all. They might be more familiar with Garry's Mod, a well known and rather old piece of software. Or maybe XNA, which also uses game models. Both are often used for..."evil" more so than good, LOL. That doesn't mean porn, just...bad things. Some people really do have too much time on their hands.

  • kyoto kid Posts: 40,678

    ...yeah, I'll be sticking with my Titan X for a while. Blazing core speed is nice, but having that extra memory "overhead" is nicer.

    ...oh, and there's a lot of really "bad" (as in taste, not quality) 3D stuff on DA as well, a fair portion of which is done in Daz. I've actually been looking for another place to host my work (one that doesn't cater to "mature" [*cough*] material) and closing my account there, as it has been getting worse since I joined years ago. Really don't feel comfortable about my characters (particularly my teen ones) being faved into collections that predominantly feature such work. Just gives me an "icky" feeling, if you know what I mean.

     

     

  • ebergerly Posts: 3,255
    I'm confused. I've heard the 2080 Ti is up to 6 times the performance. Or is it 8 times? And that's compared to what, a 1080 Ti? Which implies a 24-minute render on a 1080 Ti would render in like 3 or 4 minutes on a 2080 Ti. But every demo I've seen implies it renders in real time. That's a lot faster than 3 or 4 minutes. And like I say, if it's the same as what Eevee or Unity does now with existing GPUs, why even bother with a 2080 Ti? What am I missing?
  • ebergerly said:
    I'm confused. I've heard the 2080 Ti is up to 6 times the performance. Or is it 8 times? And that's compared to what, a 1080 Ti? Which implies a 24-minute render on a 1080 Ti would render in like 3 or 4 minutes on a 2080 Ti. But every demo I've seen implies it renders in real time. That's a lot faster than 3 or 4 minutes. And like I say, if it's the same as what Eevee or Unity does now with existing GPUs, why even bother with a 2080 Ti? What am I missing?

    I think the real-time is not universal (well, duh) but within certain limits of complexity.

  • Ghosty12 Posts: 1,995
    edited August 2018

    Here is some more info on the new cards, and bear in mind that these are Nvidia's numbers and aimed at games, so take them with a grain of salt. But the biggest part of this video is that the RTX 2070 does not have NVLink capabilities. NVLink will only be available on the RTX 2080, RTX 2080 Ti, and the Quadro RTX cards.

    Post edited by Ghosty12 on
  • Rashad Carter Posts: 1,799

    So then to answer my own silly questions I've come up with the following:

    1. Render speed should mean more than VRAM. If a new card offers the same VRAM but double the performance, it is in my opinion easily worth double the price of the previous card. Deep down, for every still render I've ever made, I had a full movie in my head, or at least a short scene. Getting render times to the point that still scenes can become animations is a goal many of us assumed was impossible long ago and are unwilling to reconsider. But the time may be coming for us to embrace animation workflows rather than disregard them. Double the rendering speed means at least a few more people willing to tackle animations, and that positions us to be more professional overall, especially here at Daz3d.

    2. VRAM limitations are not going to be an issue forever. Octane and many other apps of this nature have out-of-core memory options that Iray might also get someday, perhaps even "Daz soon." For the current atmosphere, however, VRAM is a central consideration. But still secondary to speed.

    3. Obviously Nvidia isn't only interested in the gamers and miners; they've clearly considered the 3D rendering niche and see a potential for profit there, if nothing more than an offshoot of the gaming crowd. Let's not punish them by rejecting their preliminary efforts as not good enough. They'll just switch focus to a more grateful audience and officially forget about us.

    Conclusion:

    There is no such thing as a "smart" time to buy one of these damned chips. You are going to waste money no matter how you do it. One decides to either ride the front of the wave or the back. These chips aren't like a pair of shoes that either fit or don't. All of these card products "work" and perform their duties acceptably. If you buy a Turing chip today (no matter how much you paid for it), it will be irrelevant within 2 years, because Nvidia makes money from selling new chips and will certainly improve on both hardware and software. But at least it's new, has at least a 2-year life under warranty, and could probably last you up to 8 years. Waiting 2 years to buy last generation's cards means you're always a step behind, which could be significant, such as pre-RTX vs post-RTX. 6-8x performance is game changing. Warranties expire... applications become more advanced and move away from your legacy card's niche...

    We know that cards are always overpriced when they are first released, so I see no reason to be offended by the price. I paid roughly $1K for each of my two Titan Blacks... still paying them off on the credit card... and they are a cruel joke now compared to what other, more advanced cards can offer. I had no idea that a month after purchasing my Blacks, Nvidia would release a brand new architecture with cards cheaper than mine that were double the performance. Without knowledge of the future, one will almost certainly waste money in this venture. I wasted money, but I get that this is how the game operates, and I for some reason still find myself wanting to play.

     

     

  • ebergerly Posts: 3,255
    Reminds me of Dr Evil.....mojo
  • Thread lightened by several posts, please keep the tone civil, avoid addressing the poster rather than the topic, and avoid snide comments.

  • kyoto kid Posts: 40,678

    ...yeah, as with Genesis 8, having read a lot about the new cards over the last couple of days, they just don't offer the type of improvement that would entice me to shell out the higher price (along with the cost of possibly having to upgrade other hardware to support them), even if I had the funds available. I only create single-frame scenes, so a major boost in render speed is not as big a factor as it is for those who create animations. Again, for my purposes, having enough memory overhead to reduce the chance of a large render job dumping to the CPU is more important.

    I'm sure that back in the Maxwell days, those early adopters who dropped $5,000 on the release (12 GB) version of the M6000 were not pleased when, afterwards, the card's memory was doubled to 24. If they moved up to the more powerful version, they pretty much had to eat the cost of the older card, as who would buy a used 12 GB M6000 for even half the original retail price when a brand new Titan X was every bit as good (and still cost less)? For many pros and smaller production studios, $5,000 was likely not an easy lump to swallow.

    So yes "wait and see" is good advice. Maybe there will finally be some benchmarks that are meaningful to us as Nvidia's primary audience with GeForce is still the gaming community, which as far as what I have seen, is rather underwhelmed.

  • Rashad Carter Posts: 1,799

     

    kyoto kid said:

    ...yeah, as with Genesis 8, having read a lot about the new cards over the last couple of days, they just don't offer the type of improvement that would entice me to shell out the higher price (along with the cost of possibly having to upgrade other hardware to support them), even if I had the funds available. I only create single-frame scenes, so a major boost in render speed is not as big a factor as it is for those who create animations. Again, for my purposes, having enough memory overhead to reduce the chance of a large render job dumping to the CPU is more important.

    I'm sure that back in the Maxwell days, those early adopters who dropped $5,000 on the release (12 GB) version of the M6000 were not pleased when, afterwards, the card's memory was doubled to 24. If they moved up to the more powerful version, they pretty much had to eat the cost of the older card, as who would buy a used 12 GB M6000 for even half the original retail price when a brand new Titan X was every bit as good (and still cost less)? For many pros and smaller production

    We agree on this. I'm just stating that depending on one's point of view... whether they value saving money on an item, or getting the most usage out of that item... there are benefits and costs to the idea of waiting.

    This makes me miss the dog days of scotch-taped render farms. Just add more crap, be it new or old, to the rig and it will still improve overall performance in some way. Not true anymore. Argh.

  • bluejaunte Posts: 1,863

    Yeah, so there's quite a bit of negativity swirling around on the internet. It's mostly understandable from a gamer perspective. As mentioned, games need to support raytracing first, and even then, once you enable it, the framerate is still going to drop massively. Probably too much to be worth it. While it's still almost a miracle this stuff renders at playable framerates at all, overall gamers aren't too stoked about the combination of raytracing being unsupported in most games, framerates dropping too much when you do enable it, and a higher price of admission for something they will probably not quite use yet. And NVIDIA has omitted benchmarks for the cards when they do normal rasterization. Probably the usual 25ish% coming from more CUDA cores.

    This could really turn out to be an absolute dream for GPU renderers. We rely on raytracing. If it really were to pan out that we can render 5-8 times faster, and gamers aren't too interested in these cards yet? More for us. Maybe plenty second-hand from disappointed gamers. A sea of video cards. Replacing my two 1080 Tis with two 2080 Tis and, let's say optimistically/naively, now rendering 16x faster. A 10 minute render now taking 37.5 seconds?
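
    For what it's worth, the naive arithmetic behind that "37.5 seconds" figure looks like this; both the 8x per-card gain and the perfect two-card scaling are pure speculation:

```python
# The naive arithmetic behind the "10 minutes -> 37.5 seconds" daydream.
# Both factors are speculative: an 8x per-card gain (top of the rumored 5-8x
# range) and perfect 2x scaling across two cards.
current_render_minutes = 10
per_card_speedup = 8
card_scaling = 2

new_seconds = current_render_minutes * 60 / (per_card_speedup * card_scaling)
print(f"{current_render_minutes} min render -> {new_seconds:.1f} s")   # 37.5 s
```
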

  • Gator Posts: 1,268
    ghosty12 said:

    Here is some more info on the new cards, and bear in mind that these are Nvidia's numbers and aimed at games, so take them with a grain of salt. But the biggest part of this video is that the RTX 2070 does not have NVLink capabilities. NVLink will only be available on the RTX 2080, RTX 2080 Ti, and the Quadro RTX cards.

    It does make sense for them to reduce overlap (they may have too dang many models already)... for example, if two 2070s are roughly equal to a 2080 Ti, the retail cost is close, and it allows them to save on the cost to build and support the lower-end models.

    I'm guessing they are bumping up prices to encourage gamers to spend more on single cards. SLI is becoming less and less common, and maybe they predict it will decline further with single cards able to handle 4K.

  • kyoto kid Posts: 40,678

     

    kyoto kid said:

    ...yeah, as with Genesis 8, having read a lot about the new cards over the last couple of days, they just don't offer the type of improvement that would entice me to shell out the higher price (along with the cost of possibly having to upgrade other hardware to support them), even if I had the funds available. I only create single-frame scenes, so a major boost in render speed is not as big a factor as it is for those who create animations. Again, for my purposes, having enough memory overhead to reduce the chance of a large render job dumping to the CPU is more important.

    I'm sure that back in the Maxwell days, those early adopters who dropped $5,000 on the release (12 GB) version of the M6000 were not pleased when, afterwards, the card's memory was doubled to 24. If they moved up to the more powerful version, they pretty much had to eat the cost of the older card, as who would buy a used 12 GB M6000 for even half the original retail price when a brand new Titan X was every bit as good (and still cost less)? For many pros and smaller production

    We agree on this. I'm just stating that depending on one's point of view... whether they value saving money on an item, or getting the most usage out of that item... there are benefits and costs to the idea of waiting.

    This makes me miss the dog days of scotch-taped render farms. Just add more crap, be it new or old, to the rig and it will still improve overall performance in some way. Not true anymore. Argh.

    ....still have plans in the works for putting one of those "old school" render boxes together using older dual Xeon CPUs and a bunch of ECC memory (128 GB to start). I can just let it crunch away on a big render job while I work on other projects.

    Really looking forward to Octane 4 though; $20 a month for the subscription track makes it very accessible.

  • kyoto kid Posts: 40,678

    This could really turn out to be an absolute dream for GPU renderers. We rely on raytracing. If it really were to pan out that we can render 5-8 times faster, and gamers aren't too interested in these cards yet? More for us. Maybe plenty second-hand from disappointed gamers. A sea of video cards. Replacing my two 1080 Tis with two 2080 Tis and, let's say optimistically/naively, now rendering 16x faster. A 10 minute render now taking 37.5 seconds?

    ....again I see this primarily as a major advantage to animators who require extremely short render times (seems that's what they've been touting with their video promos).

  • bluejaunte Posts: 1,863
    kyoto kid said:

    This could really turn out to be an absolute dream for GPU renderers. We rely on raytracing. If it really were to pan out that we can render 5-8 times faster, and gamers aren't too interested in these cards yet? More for us. Maybe plenty second-hand from disappointed gamers. A sea of video cards. Replacing my two 1080 Tis with two 2080 Tis and, let's say optimistically/naively, now rendering 16x faster. A 10 minute render now taking 37.5 seconds?

    ....again I see this primarily as a major advantage to animators who require extremely short render times (seems that's what they've been touting with their video promos).

    I guess this is me speaking as a PA. Do you wanna know how much time I spend rendering promos? Or even creating promos, Iray preview non-stop. Actually, even way earlier while testing how stuff looks, non-stop Iray preview and test render after test render. Why is the speed you have now good enough, but 5 to 8 times faster puts it into territory that only animators need? The argument is slightly bewildering. There is never enough speed until we are fully realtime. Nobody wants to wait around for stuff. Maybe you don't mind, but the threshold you're setting here seems arbitrary. You don't have to buy now, you know? Imagine in a year or two when new stuff comes out and these land on the 2nd-hand market, supposedly where they may not be super desired by gamers.

  • kyoto kid Posts: 40,678

    ...yeah, you have me there concerning PAs and promos. Didn't think about that at the time.

    For my purposes, memory overhead is still a "speed" factor for large format, highly detailed, high quality render jobs. So even if/when/should the 20xx cards drop in price after the next generation is released and/or the gaming community doesn't generate the sales hoped for, I still likely won't bother unless there is an NVLink hack that opens memory stacking with the 2080 Ti (or I can afford a pair of RTX5000s with NVLink bridges).

    More likely by then, I'll have my "render farm in a box" up and running.

  • Mendoman Posts: 401
    kyoto kid said:

    This could really turn out to be an absolute dream for GPU renderers. We rely on raytracing. If it really were to pan out that we can render 5-8 times faster, and gamers aren't too interested in these cards yet? More for us. Maybe plenty second-hand from disappointed gamers. A sea of video cards. Replacing my two 1080 Tis with two 2080 Tis and, let's say optimistically/naively, now rendering 16x faster. A 10 minute render now taking 37.5 seconds?

    ....again I see this primarily as a major advantage to animators who require extremely short render times (seems that's what they've been touting with their video promos).

    I guess this is me speaking as a PA. Do you wanna know how much time I spend rendering promos? Or even creating promos, Iray preview non-stop. Actually, even way earlier while testing how stuff looks, non-stop Iray preview and test render after test render. Why is the speed you have now good enough, but 5 to 8 times faster puts it into territory that only animators need? The argument is slightly bewildering. There is never enough speed until we are fully realtime. Nobody wants to wait around for stuff. Maybe you don't mind, but the threshold you're setting here seems arbitrary. You don't have to buy now, you know? Imagine in a year or two when new stuff comes out and these land on the 2nd-hand market, supposedly where they may not be super desired by gamers.

     

    Although I'm not a PA, and I don't do 3D stuff for a living, I'm still in a similar situation where speed is my main concern. To be honest, I'd love to have realtime render speeds if possible. I have a full-time job in another city, so traveling and the job take a good chunk of my day. With what little time I have left for hobbies (including 3D), render time is my main priority when I'm looking for a new GPU. I also play games and do other stuff with my computer, so newer and faster is always a nice bonus. I do understand that when people are retired and have 24/7 to do whatever they like, their priorities are probably different. In my case, it's just that if I have a maximum of a couple of hours per day to play with DS, I want to get the most out of that time.

     

    Also, I'm really surprised to see this kind of response from a 3D rendering community for a GPU that is mainly focused on rendering (ehm, of course we don't know that for sure yet, but according to rumours). I mean, what difference does it make for us whether the gaming community is excited or not, when Nvidia finally releases consumer-grade GPUs that actually help our hobby the most? I do get it that the price is totally out of whack, and it sucks if you can't afford it on launch day, but prices will drop eventually like they always do. Also, if I had a new 1080 Ti I wouldn't upgrade either, since that is still a very respectable GPU for 3D rendering. The only thing you lose when a new flagship card is released is the "fastest GPU" status. Your GPU is still just as fast, or maybe even faster after SDK updates. I also totally understand if people are skeptical about those 6-8 times faster render times, because so am I... It just sounds too good to be true, so I do believe that at least some of it is marketing like always, but I still don't get all the negativity. Well, or actually I do. Maybe it's the same as when Daz releases a new generation of models. I admit, I was a little bitter that I had spent tons of money on characters and other stuff, and now suddenly those were second grade. It took me some time to realise that my G3 stuff did not actually get any worse because G8 was released. I could still use my stuff, and I could just happily skip the new generation since I didn't see much point in upgrading.

     

    And when it comes to buying a new flagship GPU, personally I think there's really never a good time. If you want to wait, you can always bet that there's a newer and faster card coming. My current card is a Maxwell Titan X, and I hoped that if I bought a top-of-the-line card, I could skip the next generation if I wanted to. I had the best consumer-grade GPU for 2 years, and even after the Pascal release I had a better-than-average card for the next 2 years. Sure, next-generation flagship cards will always be better, but my old card was still pretty close to normal 1080 performance, which is not a bad card at all. I even had some extra memory on top of that, so I could easily use it over 2 generations. I don't know if it's the same deal this time around, but I hope so. If I get a new 2080 Ti now, I can probably skip the next generation again if I want to. So like I said, if you want to wait, that is perfectly fine, since new and better cards are probably going to be released after a year or so (a Ti upgraded version or maybe a Titan or something), but I hope you understand that does not really change the situation at all. If you buy then, new and better cards are still going to be released a year or so later (the next generation, probably), so eventually there's always going to be a faster and better GPU available, and you just have to choose when you want to lose your money.

  • outrider42 Posts: 3,679
    edited August 2018

    Here's a cool story. Quake, a classic 90's video game, has had an ongoing fan project with the goal of adding real-time ray tracing to it. Obviously they do not have a new 2080 Ti to play it on yet, so keep this in mind. (And maybe bookmark this to see if performance really changes with Turing.) For now, you have the game running on a 980 Ti, so not even a Pascal. But what is really interesting is the noise. The noise is why we have not had ray tracing in games until now. That is where Tensor comes in. Yes, Tensor. The ray tracing cores do their thing, while Tensor denoises in real time, and that is how you get real-time ray tracing in 2018. You have two new technologies working together. And it so happens these are things that Iray can take full advantage of. Anyway, here is the video, and you can see the dramatic change ray tracing makes to a very old video game.
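
    As a toy illustration of that two-stage idea (a fast, noisy render followed by a separate denoising pass), here is a minimal NumPy sketch. The box blur stands in for the Tensor-core AI denoiser purely for illustration; it is not how Iray or RTX actually denoises:

```python
# Toy two-stage pipeline: a low-sample "path traced" image is noisy, and a
# separate denoising pass cleans it up. NumPy only; the box blur is a crude
# stand-in for a learned denoiser.
import numpy as np

rng = np.random.default_rng(0)

# Pretend ground-truth radiance for a 64x64 image (a smooth gradient).
truth = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 1))

def render(samples_per_pixel):
    """Monte Carlo estimate: average a few noisy samples per pixel."""
    noise = rng.normal(0.0, 0.5, size=(samples_per_pixel, *truth.shape))
    return np.clip((truth + noise).mean(axis=0), 0.0, 1.0)

def denoise(img, k=5):
    """Simple box blur as a stand-in for a learned denoiser."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

noisy = render(samples_per_pixel=4)      # few samples = fast but noisy
clean = denoise(noisy)
print("RMSE noisy vs truth:   ", np.sqrt(((noisy - truth) ** 2).mean()))
print("RMSE denoised vs truth:", np.sqrt(((clean - truth) ** 2).mean()))
```

    The point is only that a handful of samples plus a denoising pass can land much closer to the converged image than the raw samples do, which is the trick the Quake project leans on.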

     

    Post edited by outrider42 on
  • fastbike1 Posts: 4,075

    Look at the specs on Nvidia's website. Check CUDA cores, VRAM, memory speed, and memory bandwidth. Those will tell you all you need to know about the differences for Studio work.

    JazzyBear said:

    So I can grab a new card right NOW and will get a new system late NEXT year (end of 2019). It is only to help with Iray for Daz, so 1070, 1070 Ti, or 1080? What is the difference between the standard version and the Ti?

    Also, does the Card Manufacturer really matter? Recommendations?

     

     

  • ebergerly Posts: 3,255
    edited August 2018

    Fastbike1, I agree with you 100%. My only slight difference is I would replace "will tell you all you need to know" with "will tell you almost nothing" about the differences. laugh

    [Attached image: BenchmarkNewestCores.jpg]
    Post edited by ebergerly on
  • bluejaunte Posts: 1,863

    Here's a cool story. Quake, a classic 90's video game, has had an ongoing fan project with the goal of adding real-time ray tracing to it. Obviously they do not have a new 2080 Ti to play it on yet, so keep this in mind. (And maybe bookmark this to see if performance really changes with Turing.) For now, you have the game running on a 980 Ti, so not even a Pascal. But what is really interesting is the noise. The noise is why we have not had ray tracing in games until now. That is where Tensor comes in. Yes, Tensor. The ray tracing cores do their thing, while Tensor denoises in real time, and that is how you get real-time ray tracing in 2018. You have two new technologies working together. And it so happens these are things that Iray can take full advantage of. Anyway, here is the video, and you can see the dramatic change ray tracing makes to a very old video game.

     

    Ha, that's pretty cool.
