OT: New Nvidia Cards to come in RTX and GTX versions?! RTX Titan first whispers.


Comments

  • kyoto kid Posts: 41,847
    ebergerly said:

     

    Why did I preorder?

    - When Nvidia CEO Jensen Huang stands on the stage and tells the world that they worked 10 years on that RTX ray tracing technology, then I trust him.

     

    He also said, "The more GPU's you buy, the more money you save". surprise 

    Although, to his credit, he didn't crack up laughing while saying it. 

    ....yes

  • Rashad Carter Posts: 1,830
    edited August 2018
    The world needs risk takers, and those risk takers shouldn't have to endure ridicule. Playing it safe has its own traps. So then we should all just wait? But if we all decide to be "smart" and wait for someone else to boldly go where no one has gone before, then we will never get the very benchmarks we've been saying we need to base our decisions on.

    He did state, quite clearly, that RTX is a game changer for the Pixar sorts, which includes us at Daz3d, and that we don't really matter because we aren't the majority of card buyers. The smugness annoys me slightly. I'm a minority by any number of metrics, but I still matter, and every once in a while something good happens that directly affects me and people like me. RTX, from all observations so far, is a MAJOR step for us in this niche. These RTX cards offer something good for us.

    For gamers the question is whether games will catch up to RTX tech soon enough, before RTX tech shifts again. But as CG artists we already know it's a solid step for us: even if RTX stalled from today forward, we'd still be in much better shape than we have been. So then there it is. RTX tech is almost certain to die if games don't adopt it, and fast. Bad enough that gamers are hating; even worse that the CG crowd is crying too. This is a tech we should be supporting, in my view, because it's good for us.

    I will not wait around for an Iray bench. I've got Octane benches already, and since I've obviously chosen power over price in the past (paid Octane vs free Iray), I'm certain I will continue to do so. If you use Octane, you can be confident RTX is worth the upgrades based on Otoy's statements. I'm not stunting my own development waiting for Iray updates and benchmarks that just take too darned long.
    Post edited by Rashad Carter on
  • kyoto kid Posts: 41,847
    edited August 2018

    ...once the subscription version of Octane 4 goes live, that may very well change the situation for artists like myself who don't have a lot of resources for high-end software and hardware.  Until then, Iray is the most affordable PBR solution that many of us have available and know (and yes, I am well aware there will be a learning curve with Octane with regards to converting Iray/3DL materials).

    Still, having 12 GB of VRAM, even though it is an older generation Titan, I'd be somewhat reluctant to step down to 8 GB with the 2070, which is pretty much the best I could afford on my budget once more reasonably priced competitive models hit the stores.

    Post edited by kyoto kid on
  • nicstt Posts: 11,715
    edited August 2018
    The world needs risk takers, and those risk takers shouldn't have to endure ridicule. Playing it safe has its own traps. So then we should all just wait? But if we all decide to be "smart" and wait for someone else to boldly go where no one has gone before, then we will never get the very benchmarks we've been saying we need to base our decisions on.

    He did state, quite clearly, that RTX is a game changer for the Pixar sorts, which includes us at Daz3d, and that we don't really matter because we aren't the majority of card buyers. The smugness annoys me slightly. I'm a minority by any number of metrics, but I still matter, and every once in a while something good happens that directly affects me and people like me. RTX, from all observations so far, is a MAJOR step for us in this niche. These RTX cards offer something good for us.

    For gamers the question is whether games will catch up to RTX tech soon enough, before RTX tech shifts again. But as CG artists we already know it's a solid step for us: even if RTX stalled from today forward, we'd still be in much better shape than we have been. So then there it is. RTX tech is almost certain to die if games don't adopt it, and fast. Bad enough that gamers are hating; even worse that the CG crowd is crying too. This is a tech we should be supporting, in my view, because it's good for us.

    I will not wait around for an Iray bench. I've got Octane benches already, and since I've obviously chosen power over price in the past (paid Octane vs free Iray), I'm certain I will continue to do so. If you use Octane, you can be confident RTX is worth the upgrades based on Otoy's statements. I'm not stunting my own development waiting for Iray updates and benchmarks that just take too darned long.

    Risk-taking should be based on measurable risks.

    It should also be based on something meaningful.

    The first is not in any way measurable for these new cards.

    ... And imo, it sure as hell isn't meaningful.

    Edit (including spelling and typos):

    BSOD as I was doing an edit.

    Now don't get me wrong: from what I understand of all the marketing presentations, it appears as though we renderers may actually benefit more than gamers (unless said games are made with RTX in mind). So I am hopeful, but am I prepared to just take the word of a company that recently demonstrated it is not averse to sharp practices (Partner Program)? No, I am not.

    They produce some great tech; I am hoping this is some more great tech.

    But how soon will we renderers be able to use it? No idea how quickly drivers will be ready to utilise the new cards, never mind take advantage of the new features.

    Post edited by nicstt on
  • ebergerly Posts: 3,255
    The world needs risk takers, and those risk takers shouldn't have to endure ridicule. Playing it safe has its own traps. So then we should all just wait?

    All I've seen are some suggestions that maybe folks might want to consider waiting a few weeks before spending their hard-earned money.

    And I agree, being bold and taking risks is an excellent attitude if you're trying to cure cancer and make the world a better place. But when it comes to personal finances and merely rendering a G8, not so much laugh

  • The world needs risk takers, and those risk takers shouldn't have to endure ridicule. Playing it safe has its own traps.

    This is true.

    nicstt said:

    Risk-taking should be based on measurable risks.

    I don't see the RTX 2080 TI as "risky" anyhow.  Who is thinking that with any amount of seriousness? 

    I do see it as a long overdue path forward for my graphic arts hobby as well as my protein folding interests, so I'm looking forward to seeing what actually becomes available and how well it works.  Ultimately, I'll want two cards and I don't want to run up the credit cards to get them, so I will start budgeting for them now.  I have a lot of things to do with money in real life first, so this will afford me ample time to watch how things go, particularly with the drivers.

    So I'm "waiting", but that's for finance reasons.

  • nicstt Posts: 11,715
    edited August 2018

    I was responding to a post, and they seemed to think there was risk-taking involved. To some limited extent there is, so my statement applies: the risks can't be measured. And whilst it isn't a risk to you, that in no way stops it being a risk to others.

    I agree, the steps forward are long-overdue; and whilst we are fairly safe in our assumptions that it does what is stated, I still want to know how it applies to my own personal use, and what those improvements are.

    I am hoping a lot, but hope is a funny beast.

    An awful lot of time was devoted to providing very little in the way of useful data, never mind concrete information.

    ... No comparisons with the previous generation, unless you count the useless chart quantifying (sort of) the level of improvement.

    Post edited by nicstt on
  • ebergerly Posts: 3,255
    edited August 2018

    Of course there's a risk. If you spend $1,200 today without knowing what the performance will be in Studio/Iray, or whether the initial implementation will be optimized to take advantage of everything it was designed to do, or whether the initial software implementations have some bugs/limitations, and so on, you're taking a risk. 

    Keep in mind there are a lot of hardware architecture and software changes required to make all of these new features work together. We like to make believe everything is so simple, but it's incredibly complex. There's new NGX software, there's a new CUDA version, there are new PhysX versions, and so on. If I spend $1,200 today and next month find out I could have gotten almost the same performance by buying a 1080ti today at half the price, I'd be kicking myself for a very long time. 

    We can assume everything is awesome because it's new technology, but anyone who's been around new technology for any amount of time knows that the devil is in the details. It may turn out that it's as awesome as everyone desperately wants to believe, but why not wait a few weeks to make sure?

    And keep in mind that what people are seeing in Blender Eevee (for free) and Unity and others with existing hardware might not be much different from the next year of RTX results. Maybe Iray will evolve to do that kind of stuff with existing hardware, and people will kick themselves for spending all that money.    

    Post edited by ebergerly on
  • Taoz Posts: 10,248
    ebergerly said:

    Any argument about how the lower one may be enough seems to serve more as a demonstration of one's technical knowledge rather than actual reasonable advice?

    I think reasonable advice is advice that helps you not spend money you don't need to spend, isn't it? 

     

    Sure. But putting the added cost of another 500 W in relation to the other stuff I need to buy, say a $1,000 GPU, and adding the other caveats that were mentioned, like wanting to add another GPU later, or two (after all, we are talking about GPU rendering here), or even just being on the safe side for really very little money? Then no, I don't think it's reasonable advice anymore.

    I agree, the last thing I'd save on is the PSU; it's the heart of the system, and you don't want your system to die of a heart attack.

  • bluejaunte Posts: 1,990

    Risk is a strange word here, though. If you spend $1,200 you probably had enough money that it doesn't bother you, let alone pose a risk. Even if it turned out to be complete crap, you could still sell it, probably at not much of a loss, since availability will be low and there's always someone who will take it off your hands. Or you can keep it, since it will be faster no matter what (more CUDA cores), and sell your previous card.

    At most I'd say slight uncertainty?

  • kyoto kid Posts: 41,847

    ...I still think of how those "risk takers" back in the early Maxwell days felt when they realised they paid $4,000 more for what was basically a "Titan X" with different drivers and branding.

    I've seen first generation M6000s selling for around half their original price (which is still at least two times more than a newer Titan-Xp).

  • ebergerly Posts: 3,255

    3 weeks. 

  • Gator Posts: 1,319
    ebergerly said:

    3 weeks. 

    Can't wait to see some reviews!
  • kyoto kid Posts: 41,847

    ...for what?

  • Ghosty12 Posts: 2,080
    kyoto kid said:

    ...for what?

    For the new cards to come out and for the tech reviewers to do the testing..

  • nicstt Posts: 11,715
    ebergerly said:

    Sounds like some folks here have pre-ordered cards, so hopefully they will post some benchmark numbers. I think 3 or 4 users so far have decided to boldly go where nobody has gone before, so we should have some good info. Based on prices, the 2080ti will have to render in better than half the render time of a 1080ti, which is asking a lot. I think historically the improvements have been more like 25-30% between generations. And with only 11GB, and no NVLink, I think there's good reason for some skepticism.

    (Taken the quote from the closed thread. Makes sense that we don't spin off. :) Could have been merged, though?)

    I'm certainly hoping for user benchmarks, more so really than the reviews out there.

    I think the value of upgrading may end up being dependent on what we are upgrading from: a 980ti vs a 1080ti, for example; there the leap is bigger. I waited for the 1080ti, then decided it wasn't that much of an improvement; the RAM was a plus, but Windows stealing RAM made me think of other options.

    If NVLink allowed the sharing of RAM, it would be worth it - no doubt at all.

  • Gator Posts: 1,319
    nicstt said:
    ebergerly said:

    Sounds like some folks here have pre-ordered cards, so hopefully they will post some benchmark numbers. I think 3 or 4 users so far have decided to boldly go where nobody has gone before, so we should have some good info. Based on prices, the 2080ti will have to render in better than half the render time of a 1080ti, which is asking a lot. I think historically the improvements have been more like 25-30% between generations. And with only 11GB, and no NVLink, I think there's good reason for some skepticism.

    (Taken the quote from the closed thread. Makes sense that we don't spin off. :) Could have been merged, though?)

    I'm certainly hoping for user benchmarks, more so really than the reviews out there.

    I think the value of upgrading may end up being dependent on what we are upgrading from: a 980ti vs a 1080ti, for example; there the leap is bigger. I waited for the 1080ti, then decided it wasn't that much of an improvement; the RAM was a plus, but Windows stealing RAM made me think of other options.

    If NVLink allowed the sharing of RAM, it would be worth it - no doubt at all.

    I really hope I'm wrong, but I doubt we'll see memory pooling with the GeForce line.  At least not these first gen cards.  Only if the competition forces them to.

  • I will definitely wait until there are Iray benchmarks and stable driver implementations. I have only found some information on RTX ray tracing and Tensor core integration in professional 3D software. Maybe the information is useful for someone here. 

    https://evermotion.org/articles/show/11111/nvidia-geforce-rtx-performance-in-arch-viz-applications

  • Robinson Posts: 751
    Sisyphus said:

    I will definitely wait until there are Iray benchmarks and stable driver implementations. I have only found some information on RTX ray tracing and Tensor core integration in professional 3D software. Maybe the information is useful for someone here. 

    https://evermotion.org/articles/show/11111/nvidia-geforce-rtx-performance-in-arch-viz-applications

    Me too. I don't think I'll be upgrading for a while. The prices right now are terrible for a hobbyist. I'd be very interested to see Iray benches on release, but I presume there will be a new Iray version to go with it and therefore a new Daz release.

  • nicstt Posts: 11,715

    There wasn't a new Studio release with the 10 series; at least iirc.

  • kyoto kid Posts: 41,847
    ghosty12 said:
    kyoto kid said:

    ...for what?

    For the new cards to come out and for the tech reviewers to do the testing..

    ...ah, I was hoping for the commercial release of Octane 4.

  • kyoto kid Posts: 41,847
    nicstt said:
    ebergerly said:

    Sounds like some folks here have pre-ordered cards, so hopefully they will post some benchmark numbers. I think 3 or 4 users so far have decided to boldly go where nobody has gone before, so we should have some good info. Based on prices, the 2080ti will have to render in better than half the render time of a 1080ti, which is asking a lot. I think historically the improvements have been more like 25-30% between generations. And with only 11GB, and no NVLink, I think there's good reason for some skepticism.

    (Taken the quote from the closed thread. Makes sense that we don't spin off. :) Could have been merged, though?)

    I'm certainly hoping for user benchmarks, more so really than the reviews out there.

    I think the value of upgrading may end up being dependent on what we are upgrading from: a 980ti vs a 1080ti, for example; there the leap is bigger. I waited for the 1080ti, then decided it wasn't that much of an improvement; the RAM was a plus, but Windows stealing RAM made me think of other options.

    If NVLink allowed the sharing of RAM, it would be worth it - no doubt at all.

    I really hope I'm wrong, but I doubt we'll see memory pooling with the GeForce line.  At least not these first gen cards.  Only if the competition forces them to.

    ..my thoughts as well. Whatever the new Titan turns out to be (if there even is one) may allow for it.  The $3,000 Titan V doesn't.

     

  • nicstt said:

    There wasn't a new Studio release with the 10 series; at least iirc.

    There wasn't a new release coincident with the 10x0 series, no. Since we were all waiting for an Iray update, it wouldn't have mattered, from the perspective of using the new GPUs, if there was. But of course we did get an update once there was an updated Iray version.

  • ebergerly Posts: 3,255

    It's interesting... I just took a look at the Sickleyield benchmarks for the 1060, the 1070, and the 1080ti, and there was almost exactly a 33% improvement in render times going from the 1060 to the 1070, and from the 1070 to the 1080ti. I didn't see a 1080 benchmark, but I assume it was less than 33%. 

    So if we expect the new 2080ti to give the same performance increase over a 1080ti, it will only be 33%. I'm assuming it will be significantly more. But even if you double that and get a 66% increase in speed, that barely compares to the performance of two 1080tis at about the same price. 

    Just sayin'....the 2080ti has a lot to prove.
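
    As a rough worked example of that comparison (purely illustrative: it assumes a hypothetical 20-minute 1080ti render, that render time scales inversely with speed, and that a second identical card roughly halves render time, none of which are measured figures), a few lines of Python show how speed-up factors translate into render-time savings:

    # Illustrative only: hypothetical baseline and assumed linear scaling,
    # not benchmark results.
    baseline_minutes = 20.0  # assumed 1080ti render time for some scene

    def render_time(speedup):
        """Render time for a card `speedup` times faster than the baseline."""
        return baseline_minutes / speedup

    for label, speedup in [
        ("1080ti (baseline)", 1.0),
        ("2080ti, +33% (guess)", 1.33),
        ("2080ti, +66% (guess)", 1.66),
        ("two 1080tis (~2x assumed)", 2.0),
    ]:
        t = render_time(speedup)
        saved = 100 * (1 - t / baseline_minutes)
        print(f"{label:28s} {t:5.1f} min  ({saved:4.1f}% less time)")

    # A 66% speed increase cuts render time by about 40%, while two 1080tis
    # at a similar total price cut it by roughly 50%, which is the comparison above.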

     

  • ebergerly said:

    It's interesting... I just took a look at the Sickleyield benchmarks for the 1060, the 1070, and the 1080ti, and there was almost exactly a 33% improvement in render times going from the 1060 to the 1070, and from the 1070 to the 1080ti. I didn't see a 1080 benchmark, but I assume it was less than 33%. 

    So if we expect the new 2080ti to give the same performance increase over a 1080ti, it will only be 33%. I'm assuming it will be significantly more. But even if you double that and get a 66% increase in speed, that barely compares to the performance of two 1080tis at about the same price. 

    Just sayin'....the 2080ti has a lot to prove.

    But, as you yourself have pointed out, we don't yet have any figures for the new GPUs.

  • Rashad Carter Posts: 1,830
    ebergerly said:

    It's interesting... I just took a look at the Sickleyield benchmarks for the 1060, the 1070, and the 1080ti, and there was almost exactly a 33% improvement in render times going from the 1060 to the 1070, and from the 1070 to the 1080ti. I didn't see a 1080 benchmark, but I assume it was less than 33%. 

    So if we expect the new 2080ti to give the same performance increase over a 1080ti, it will only be 33%. I'm assuming it will be significantly more. But even if you double that and get a 66% increase in speed, that barely compares to the performance of two 1080tis at about the same price. 

    Just sayin'....the 2080ti has a lot to prove.

     

    Wow, you really do seem to completely dismiss any possible contribution of the RTX tech when you make predictions about the new cards.

    Just so that I am clear... are you stating that you expect nothing more than a 66% increase in the best case from a 2080ti vs a 1080ti? I assume this is in regard to CUDA performance specifically? Restated, you are saying that even if RTX offers no additional benefit, we can at least expect a CUDA speed improvement in the range of 33-66%? I think I can agree with predictions about CUDA-related performance, sure. But RTX tech doesn't seem to require updated software to see some degree of benefit, as was demonstrated by the 1995 game with ray tracing in one of the videos someone kindly posted somewhere in this now frighteningly specific thread.

    I ask because you say the 2080ti has a lot to prove... what if it turns out the 2080ti provides a 4x speed improvement in Iray as it exists today, will that be proof enough? My question is quite literal: how much of an improvement in speed alone (since we already know the limits of the VRAM offered with the 2080ti) would make double the price of a 1080ti worthwhile?

    For me, a mere doubling in speed is a good argument.

    Anything above a 3x improvement is a no brainer for me, I'd go for it immediately.

    So seeing claims of 8x improvements makes me wish I could pre-order the thing.

    I know I'm getting screwed over no matter what kind of deal I broker with these people. I'm gambling against the house...the house will always win in the end. So I might as well focus on having some fun.

    If I DO buy one of these 2080ti cards, I will actually ship it to you for a week just so you can test the performance of your new custom-written raytracer (provided you wrote it to use the GPU rather than the CPU), to see if you observe any noteworthy improvements even without specific code optimizations.

    This really is the part I'm most curious about. Do the new RTX cards somehow "know" when a ray tracing algorithm is presented to them and automatically route those tasks to the more efficient RT cores, or do applications need specially updated instructions to point ray tracing tasks to the new RT cores? I was under the impression that the cards automatically send those tasks to the RT cores, but if not, then I'm indeed terrified that apps will require huge updates to make any use of this stuff.

    In such a scenario I can see why you are so cautious. Apologies if my questions seem naive.

    For example, if this turns out to be a real benefit with little to no code updating of applications, I plan to beg Daz3d to SERIOUSLY consider rewriting Bryce (which is a brute force raytracer of sorts) to utilize the GPU. In the past we've assumed GPU rendering would have been faster than CPU for Bryce, but if RTX has specific hardware for ray tracing... I simply don't see how or why we would not adopt it. Thinking out loud. 

    Anyhow, what do you think?

  • ebergerly Posts: 3,255

     

    But, as you yourself have pointed out, we don't yet have any figures for the new GPUs.

    True, but isn't it important (or at least useful) to define market expectations based on past history of price/performance? It's just setting the bar that has to be met for us users to consider the new GPU's as providing a reasonable price/performance ratio. My fuzzy recollection is that historically we've been paying something like $200 additional to go from a 1060 to a 1070, and another $200 for a 1070 to 1080ti. Or something like that (fill in the correct numbers if I'm way off base). So I think we expect something like a $200 price tag for a 33% increase in performance. So when we do the numbers for the 2080ti we'll know what's reasonable. Of course there are some who will pay top dollar just to be on the cutting edge, but there are others who'll look at return on investment first.   
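
    To put that yardstick in code (again purely illustrative: the roughly $200 per ~33% performance step is the fuzzy recollection above, not verified price history), here is a small sketch of the implied price/performance ratio:

    # Illustrative only: assumed $200 premium per ~33% performance step,
    # taken from the rough recollection above rather than actual price data.
    dollars_per_step = 200.0
    gain_per_step = 0.33

    def justified_premium(expected_gain):
        """Premium that keeps the same historical dollars-per-performance ratio."""
        return (expected_gain / gain_per_step) * dollars_per_step

    for gain in (0.33, 0.66, 1.0, 3.0):
        print(f"{gain * 100:5.0f}% faster -> about ${justified_premium(gain):,.0f} premium at the historical ratio")

    # By this measure, a 2080ti costing several hundred dollars more than a 1080ti
    # would need well over a 33% improvement just to match the old ratio.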

     

  • bluejaunte Posts: 1,990

    Fairly certain that if we just look at raw CUDA performance, the higher price of these cards would not be justified. But that's a little beside the point when the evermotion link just posted has official quotes from numerous producers of renderers making bold claims like 8x speed-ups because of RTX.

  • ebergerly Posts: 3,255
    edited August 2018

    I never said they won't give some amazing performance on our Iray scenes. Never. Not once.

    I said we don't know. Because we don't. If someone WANTS to believe they're awesome, then fine. I will believe they're awesome when I see actual numbers on actual DAZ/Iray scenes. If they cut the render time for a 20 minute scene down to 1/5th that time (ie, 4 minutes), I'll be thrilled. But based on a very long history of previous GPU's, that at least SEEMS unlikely, for many, many reasons. Not the least of which is the long past history of performance increases in subsequent generations of GTX cards. 

    Can this be a new universe of GPU performance? Absolutely, anything is possible. But just because you believe something doesn't make it true. All I'm saying is the past history of price/performance with GPU's has been fairly consistent. So yeah, this could be a new universe, but to suddenly go from a 33% decrease in render time to an 80%+ decrease in render time is a lot to expect. 

    And the more you know about how incredibly complex all of this hardware and software is, working together, needing to be optimized, requiring so many elements to work together efficiently, and so on, the more you tend to be skeptical. 

    And especially when you look at the incredible realtime performance that's already out there with Eevee and Unity on existing GPUs, your skepticism about needing a new, expensive GPU might grow.  

    Post edited by ebergerly on
  • Rashad Carter Posts: 1,830

    So then the key to accessing the RTX features is OptiX? Is OptiX already a part of Iray or even Octane versions that exist today? Does anyone know? Thanks
