OT: New Nvidia Cards to come in RTX and GTX versions?! RTX Titan first whispers.


Comments

  • nicsttnicstt Posts: 11,715

    Wondering when IRAY drivers will be available.

  • hphoenixhphoenix Posts: 1,335
    nicstt said:

    Wondering when IRAY drivers will be available.

That's REALLY questionable.  Given the new architecture (with Turing, Ray-Tracing Cores, Tensor Cores, etc.) that could take quite a while.  Pascal wasn't even a huge change (API-wise) though the architecture internally was different.  Now we have a whole new set of algorithms and shading/rendering technologies to incorporate.  I think it's going to take Nvidia a few months at least.  Of course, they could just provide standard CUDA support and support it immediately, but without the benefits of all the new hardware features you'd see very little improvement in render times and quality.

     

  • bluejauntebluejaunte Posts: 1,990

Yeah, so Iray performance (and support) will be key for us. There was no word about that. They increased prices over the previous generation quite a bit, which, given the tech and the whole mining situation a while ago, may be understandable. 11GB max is somewhat disappointing for us Irayers again. I guess the question then is: how much faster is a 2080 Ti in Iray than 2x 1080 Ti, if at all? As prices of the latter are going to drop even more, two of those will probably cost less for a while. No mention of NVLink for the consumer cards either.

  • nicstt said:

    Wondering when IRAY drivers will be available.

    Or even, any reviews?

It's a shame we're in a pre-order generation - I mean, I love new hardware as much as the next person, but there's no way I'm paying £1000+ for something I have no clue about.

  • kyoto kidkyoto kid Posts: 41,847

...OK, roughly $200 more for the reference 2070 and 2080, and an almost $500 boost in price for the Ti version over the preceding generation.

NVLink bridges are priced $40 more than the old SLI ones, but far under the ones for the Quadro/Tesla line.  At that price it doesn't sound like it will have the full capability (e.g. memory pooling), or they will have seriously undercut their pro-grade line if it does.  $2,560 (two Tis and dual NVLink bridges) for 22GB of pooled VRAM, roughly 1,150 combined Tensor cores, and over 8,600 CUDA cores is less than half the cost of a single 24 GB RTX 6000, with 4,600 CUDA and 576 Tensor cores (and just a couple hundred more than the 16 GB RTX 5000 with 3,072 CUDA and 384 Tensor cores).  The only advantage of a single RTX 6000 would be total power consumption, but is that alone worth $3,800 more?

In any event, I expect sales of lottery tickets to increase, as well as a lot of people content to stay with Pascal.
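To put the cost comparison above on one footing, here's a quick back-of-the-envelope sketch. The prices are the approximate figures quoted above ($2,560 for the dual-Ti setup; roughly $6,360 for the RTX 6000, inferred from the "$3,800 more" remark); the 4,352 CUDA cores per 2080 Ti is Nvidia's published spec; and the whole dual-card column assumes consumer NVLink memory pooling works, which is unconfirmed.

```python
def per_unit_costs(price_usd, vram_gb, cuda_cores):
    """Return (dollars per CUDA core, dollars per GB of VRAM)."""
    return price_usd / cuda_cores, price_usd / vram_gb

# Two 2080 Tis + NVLink bridges (22 GB pooled VRAM assumed, unconfirmed):
dual_ti = per_unit_costs(2560, 22, 2 * 4352)
# One Quadro RTX 6000 (price inferred from the $3,800 gap quoted above):
rtx6000 = per_unit_costs(6360, 24, 4608)

print(f"Dual 2080 Ti: ${dual_ti[0]:.2f}/CUDA core, ${dual_ti[1]:.0f}/GB")
print(f"RTX 6000:     ${rtx6000[0]:.2f}/CUDA core, ${rtx6000[1]:.0f}/GB")
```

On these (illustrative) numbers the dual-Ti setup comes out several times cheaper per CUDA core and per GB, which is the poster's point; the whole comparison collapses if consumer NVLink doesn't actually pool memory.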

  • InkuboInkubo Posts: 745

    Does anybody around here post benchmarking results from taking one complex scene and rendering it on various NVIDIA cards and on the fastest top-of-the-line CPUs? When these new cards come out, it would be good to see what the comparative rendering times are, and whether there's really anything for us to gain from them.

    I bought a PC instead of a Mac because I wanted an NVIDIA GTX 1080 card for fast rendering, but Iray slows down and/or craps out and reverts to CPU mode often enough for me to wonder if it wouldn't make more sense to get the fastest possible CPU in my next machine and just stop caring about GPU performance. I don't care about games... all I want is a responsive UI and fast rendering in DS, iClone, and Blender.

    Clearly a GPU should vastly outrun a CPU for Iray rendering--but what's the real-world difference when a GPU is pitted against a really screaming multicore CPU?

  • kyoto kid said:
     The only advantage of a single RTX6000 would be total power consumption, but is that alone worth 3,800$ more?

In any event, I expect sales of lottery tickets to increase, as well as a lot of people content to stay with Pascal.

The top-end cards aren't aimed at casual gamers or hobbyists though.  In the enterprise sector, it is all about density, so you want as much as you can in as little space as possible; that is the real premium for those people.  So yeah, it is worth the extra $4k.

     

  • ebergerlyebergerly Posts: 3,255
    Inkubo said:

    Does anybody around here post benchmarking results from taking one complex scene and rendering it on various NVIDIA cards and on the fastest top-of-the-line CPUs?

This is based on the results posted for the Sickleyield benchmark scene. And yes, even though the render times are getting very fast, it does seem to scale to more complex scenes quite nicely.

[Attachment: BenchmarkNewestCores.jpg, 514 x 524]
Looking at the specs of these new 2080 Tis, a lot of them seem to be 2.5 slots wide, so be aware! I don't know whether the Nvidia Founders Editions are only two slots though.

    S.K.

     

  • nicsttnicstt Posts: 11,715
    nicstt said:

    Wondering when IRAY drivers will be available.

    Or even, any reviews?

It's a shame we're in a pre-order generation - I mean, I love new hardware as much as the next person, but there's no way I'm paying £1000+ for something I have no clue about.

    +1, well +10, +100 better still +1,000,000

  • kyoto kidkyoto kid Posts: 41,847
    kyoto kid said:
     The only advantage of a single RTX6000 would be total power consumption, but is that alone worth 3,800$ more?

In any event, I expect sales of lottery tickets to increase, as well as a lot of people content to stay with Pascal.

The top-end cards aren't aimed at casual gamers or hobbyists though.  In the enterprise sector, it is all about density, so you want as much as you can in as little space as possible; that is the real premium for those people.  So yeah, it is worth the extra $4k.

     

...still, it comes down to cost vs. simplicity.  If I were running, say, a modest-sized production studio, a dual 2080 Ti + NVLink setup would be more attractive.  Having twice the CUDA/Tensor cores would also be a production advantage, as that controls render speed. Of course, again, this is only contingent on the consumer-grade NVLink supporting full memory pooling (which I am beginning to doubt more and more, considering its comparatively low cost versus the professional link bridges).

    PSUs are also not that terribly expensive.

  • kyoto kidkyoto kid Posts: 41,847

Looking at the specs of these new 2080 Tis, a lot of them seem to be 2.5 slots wide, so be aware! I don't know whether the Nvidia Founders Editions are only two slots though.

    S.K.

     

...interesting observation.  So those with smaller form factor motherboards and/or a limited number of PCIe 3.0 x16 slots may not be able to install two side by side to use the NVLink bridges.

  • bluejauntebluejaunte Posts: 1,990

    This page has a description of NVLINK:

    https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/

No mention of memory pooling. Surely they would tout that if it were a feature. So yeah, I think it'll just act as a better SLI and not double memory, hence the lower cost too.

  • kyoto kidkyoto kid Posts: 41,847
    edited August 2018
...yup, sounds like a souped-up version of SLI, as they are still selling 2-, 3-, and 4-card bridges. All the more reason I'll stick with my Titan X. For $79, I didn't think it would do the same as the much more expensive Quadro/Tesla bridges do. So $200 - $500 more for the same amount of VRAM and maybe a bit better rendering performance, *yawn*. With memory prices expected to drop and older-generation workstation Xeons becoming more affordable, that big dual-CPU, high-memory render box is sounding like a better investment.
    Post edited by kyoto kid on
  • Ghosty12Ghosty12 Posts: 2,080
    edited August 2018

    Added in another video of information we already know now.. :)

    kyoto kid said:
...yup, sounds like a souped-up version of SLI, as they are still selling 2-, 3-, and 4-card bridges. All the more reason I'll stick with my Titan X. For $79, I didn't think it would do the same as the much more expensive Quadro/Tesla bridges do. So $200 - $500 more for the same amount of VRAM and maybe a bit better rendering performance, *yawn*. With memory prices expected to drop and older-generation workstation Xeons becoming more affordable, that big dual-CPU, high-memory render box is sounding like a better investment.


It is a shame really, but thinking about it, I guess they had to have something to differentiate the Quadro and GeForce cards this time around.  So it looks like if one wants memory pooling and "maybe" GPU pooling, then one has to go for a Quadro card.

    Post edited by Ghosty12 on
  • fredmusicfredmusic Posts: 29
    edited August 2018

    I don't expect any of this new RTX stuff to be in Daz any time soon.  Could be wrong, but it would require a different engine to be added for realtime use.

    Anyway, this demo wasn't commonly seen.  The tech demo for the upcoming game Control by Remedy.

     

    Post edited by fredmusic on
  • Ghosty12Ghosty12 Posts: 2,080
    fredmusic said:

    I don't expect any of this new RTX stuff to be in Daz any time soon.  Could be wrong, but it would require a different engine to be added for realtime use.

    Anyway, this demo wasn't commonly seen.  The tech demo for the upcoming game Control by Remedy.

     

    Also Metro Exodus is supposedly going to have RTX capabilities..

  • kyoto kidkyoto kid Posts: 41,847

...just came from Tom's Hardware, and this is getting a pretty lukewarm response there.

For us, I don't see much of a benefit save for slightly faster render times.  All three cards have the same VRAM limits as their 10xx predecessors, and what they call "NVLink" is, as I mentioned above, pretty much a souped-up version of SLI that doesn't support memory pooling like it does for the Quadro cards.  So forget having 16 or 22 GB of VRAM for rendering unless you can cough up the zlotys for one of the new Quadros.

  • outrider42outrider42 Posts: 3,679

    This page has a description of NVLINK:

    https://www.nvidia.com/en-us/geforce/graphics-cards/rtx-2080-ti/

    No mention of memory pooling. Surely they would tout that if it was a feature. So yeah I think it'll just act as a better SLI and not double memory, hence the lower cost too. 

     

That page and almost every other preorder page also fails to mention anything about Tensor cores. But we now know that Tensor cores are indeed part of the die, and they have a big role in what is taking place. So just because this page lacks that info is not an indication of a lack of memory pooling. If you watch the conference, the CEO mentions NVLink and memory pooling right before jumping to the prices of the RTX cards. That's some weird timing to talk about NVLink if it isn't going to actually do what he says for the gaming cards. He pretty much said the same words I did, that the NVLink'ed GPUs are like a single big GPU. We don't have confirmation, but we also don't have a denial. We need to get this question answered outright.

Right now the reception is mixed. I'm seeing gamers asking why they really need RTX if it costs this much and few games use it. It is a bit of a tough sell in that regard. Outside of RT and Tensor, the jump is just above average. It will be interesting to see how this goes. The prices are pretty high, and Nvidia's conference was not very well put together. They listed 2 different prices and confused everybody. But you could just about buy two 1080 Tis for the price of the 2080 Ti. Now that prices are dropping, 1080 Tis are fast approaching $600. And with the Founder's Edition of the 2080 Ti being a cool $1200, well, the question is why gamers should pay twice the price when they are not getting twice the performance in "normal" games. Some gamers didn't even think the ray tracing looked better! Some people like their fake lighting, which shouldn't be surprising when some people still prefer 3DL even though Iray has been in Daz Studio since 2015.

I believe there will be an impact on Iray. Iray has already been updated to support Tensor cores, as they helped power the AI denoiser. But Turing goes a step further. Watch the conference: he goes into some detail about how the Tensor cores work with the RT engine. Tensor is NOT just about denoising here; the cores take an active role in rendering the image itself. The Titan V does not have the RT engine, and this is where Turing gains its advantage over it.

     

    ebergerly said:
    Inkubo said:

    Does anybody around here post benchmarking results from taking one complex scene and rendering it on various NVIDIA cards and on the fastest top-of-the-line CPUs?

    This is based on the results posted for the Sickleyield benchmark scene. And yes, even though the render times are getting very fast, it does seem to scale well to more complex scenes quite nicely. 

How old are the numbers in that chart? This is important because Nvidia tweaked the way both quality and convergence work in different updates of Iray. If you read the migenius testing method you would find this. That means older benchmarks are not going to be as accurate, because what Iray decides is 95% convergence is slightly different in each SDK. That's also why, when I made a benchmark, I used the iteration cap to stop the render at a set number and made sure the render hits it. This provides a somewhat better baseline; though it really should be retested with each new SDK, it is better than using a set convergence as the cap.

    "Note that these benchmarks are not performed in a way that they can be compared to the previous series of benchmarks migenius conducted which is why we are retesting even the older cards where possible. This is due to changes in Iray itself, new Iray versions often change the relationship between iteration count and quality which can affect our absolute measurements. However all relative measurements between cards within the benchmark are valid."

Also, using time as the guideline can be problematic, especially here with how the old SY bench is timed to 95% convergence. It also gets compounded by super-fast GPUs. As the bench gets faster and faster, the differences from one GPU to the next become just seconds, and it can be hard to gauge what performance gain there may be when a few seconds might be within the margin of error.
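The iteration-capped methodology described above boils down to comparing iterations per second rather than time-to-convergence. A minimal sketch of that comparison follows; the card names and timings are made up purely for illustration:

```python
def iterations_per_second(iterations, seconds):
    """Throughput for one benchmark run at a fixed iteration cap."""
    return iterations / seconds

def relative_speed(results, baseline):
    """Express each card's throughput relative to a chosen baseline card.

    results maps card name -> (iteration cap, render time in seconds).
    """
    base = iterations_per_second(*results[baseline])
    return {card: iterations_per_second(it, s) / base
            for card, (it, s) in results.items()}

# Hypothetical runs: both cards render the same 1800-iteration cap.
results = {"card_a": (1800, 240.0), "card_b": (1800, 150.0)}
print(relative_speed(results, "card_a"))  # card_b comes out 1.6x card_a
```

Because every run stops at the same iteration count, the ratio is unaffected by how a given Iray SDK defines 95% convergence, which is exactly why migenius retests older cards instead of reusing old numbers.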

  • Ghosty12Ghosty12 Posts: 2,080

Another video among many; what is interesting is how unimpressed the first impressions of these new cards are, along with the thoughts of the reviewer. It seems we won't know much about how good they are until near the end of next month.

Get those 1080 Tis while you still can, folks!

    S.K.

  • I was looking so forward to this and expected at least the 2080 ti to have 16 GB VRAM. But 11??? Guess they saved me a lot of money because I'll still be sitting on the 1080 ti for a while it seems.

  • AalaAala Posts: 140

    I was looking so forward to this and expected at least the 2080 ti to have 16 GB VRAM. But 11??? Guess they saved me a lot of money because I'll still be sitting on the 1080 ti for a while it seems.

Didn't it take Pascal like 6 months or so before you could use it in DAZ? Iray will need to be updated for Turing too, and it might take even longer because now there's a totally new architecture on top of the traditional one to consider.

That being said, unlike gamers, we Iray users might actually really get that 6x performance boost, because Iray is all about ray-tracing.

  • MendomanMendoman Posts: 404
    edited August 2018

Oh well, I took the leap of faith and preordered a 2080 Ti from EVGA. I usually order from EVGA because I get an overclocked version (with warranty) and cheaper delivery from Germany than the base version here in Finland. I don't know if it's the VAT or greed, but prices here are just insane. Anyway, in Europe they promise delivery in 4-6 weeks, but I have no idea if DS is able to use it yet, so I'm not in a rush anyway.

     

My old Maxwell Titan X is getting overburdened because my scenes are getting more and more complex, so even if the RTX/Tensor cores don't do anything, at least the new CUDA cores and higher memory and GPU core speeds (funnily enough, even EVGA doesn't know the core and boost clocks yet) should speed up my renders considerably. I'd sure love to see that 6x speedup over Pascal they advertise (if that RTX stuff is supposed to work with Iray), but I do have some serious doubts.

     

Still, to be honest, if my current system could handle it, I probably would have ordered 2x 1080 Tis. These prices for Turing are quite insane, since you can almost get two 1080 Tis for the price of one 2080 Ti, and I'm (almost) sure those 1080 Tis would easily be faster than one 2080 Ti... but then I'd have to upgrade my entire computer, starting from the power supply, ventilation or even water cooling, and case, and probably the motherboard and CPU as well if I wanted the full benefits, so a single card it is again.

     

I was hoping Nvidia would release a Titan something with 16GB memory instead of a Ti version (the price is certainly already at Titan level), but I suppose this is what we get this time around. That being said, I'm still sure that in a year or so we'll get a new upgraded version with more memory or something, but this should still do fine for my use for the next 3-4 years.

    Post edited by Mendoman on
  • ebergerlyebergerly Posts: 3,255
    Mendoman, wow. I suppose we owe you a thank you for hopefully being the first render benchmark guinea pig for Studio/Iray.
This is probably a more relevant video: Autodesk Arnold.  It updates in a few seconds, so it runs the ray trace as long as needed.


  • AalaAala Posts: 140
    Mendoman said:

Oh well, I took the leap of faith and preordered a 2080 Ti from EVGA. I usually order from EVGA because I get an overclocked version (with warranty) and cheaper delivery from Germany than the base version here in Finland. I don't know if it's the VAT or greed, but prices here are just insane. Anyway, in Europe they promise delivery in 4-6 weeks, but I have no idea if DS is able to use it yet, so I'm not in a rush anyway.

     

My old Maxwell Titan X is getting overburdened because my scenes are getting more and more complex, so even if the RTX/Tensor cores don't do anything, at least the new CUDA cores and higher memory and GPU core speeds (funnily enough, even EVGA doesn't know the core and boost clocks yet) should speed up my renders considerably. I'd sure love to see that 6x speedup over Pascal they advertise (if that RTX stuff is supposed to work with Iray), but I do have some serious doubts.

     

Still, to be honest, if my current system could handle it, I probably would have ordered 2x 1080 Tis. These prices for Turing are quite insane, since you can almost get two 1080 Tis for the price of one 2080 Ti, and I'm (almost) sure those 1080 Tis would easily be faster than one 2080 Ti... but then I'd have to upgrade my entire computer, starting from the power supply, ventilation or even water cooling, and case, and probably the motherboard and CPU as well if I wanted the full benefits, so a single card it is again.

     

I was hoping Nvidia would release a Titan something with 16GB memory instead of a Ti version (the price is certainly already at Titan level), but I suppose this is what we get this time around. That being said, I'm still sure that in a year or so we'll get a new upgraded version with more memory or something, but this should still do fine for my use for the next 3-4 years.

In my honest opinion, I would cancel the preorder if all you're going to use it for is DAZ Studio. From experience, DAZ3D is really slow to update Iray with the newest features, and since there isn't anything new yet about RTX support in Iray itself, I expect support for Iray in DAZ Studio to be even further in the future.
     

Also, since Nvidia released the 2080 Ti along with the 2080, I fully expect there to be a Titan version in the near future too, possibly announced by the end of the year. It should be released just as RTX-supported Iray in DAZ Studio comes out, and I also expect it to have at least 16 gigs of VRAM.

  • jardinejardine Posts: 1,215

    evga's got their versions of the rtx 2080 and the rtx 2080ti available for presale now. 

    they're estimating a ship date in 4 to 6 weeks.

  • JazzyBearJazzyBear Posts: 805

So I am slogging around with a GTX 745 with like 384 cores... so if I went to just a GTX 1070, should it fit in my system and give me like a 5x-7x speed jump?

    24G Main RAM and i7 processor.
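A naive core-count estimate actually lands right at the low end of that 5x-7x guess. The 384 cores come from the post above; 1,920 is the GTX 1070's published CUDA core count. This ignores clock speed and the per-core architectural gains from Maxwell to Pascal, so treat it as a rough floor rather than a prediction:

```python
# Back-of-the-envelope Iray speedup estimate from CUDA core counts alone.
def naive_speedup(old_cores, new_cores):
    """Ratio of core counts; ignores clocks and architecture differences."""
    return new_cores / old_cores

print(naive_speedup(384, 1920))  # 5.0
```

Clock and architecture improvements would push the real-world number above this ratio, which is consistent with the 5x-7x range the poster suggests.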
