OT: New Nvidia Cards to come in RTX and GTX versions?! RTX Titan first whispers.


Comments

  • bluejaunte Posts: 1,990

    Could it be a reverse marketing tactic? Like, people are a bit underwhelmed, but then it turns out that VRAM stacks on gamer cards too, and they're positively surprised? laugh

  • nicstt Posts: 11,715

    Wow, great find nicstt - I called that we wouldn't get memory pooling with the GeForce gaming series.  I would LOVE to be wrong, sounds like I may be!  laugh

     

     

    I can't claim the credit for that. It was Outrider42.

  • nicstt said:
    The NDA Nvidia wants signed is, in a nutshell, nothing special. It doesn't indefinitely gag journalists or prevent them from writing/recording a detrimental review.

    Perhaps they can write a negative review for *this* card, but will they end up being removed from Nvidia's "approved reviewers" list as a result? Nvidia has done that to other sites in the past. Will that knowledge cause current reviewers to pull their punches just so they can ensure they'll be able to review future products (which in turn drives page views)?

    The weird thing about this, in my opinion, is that at the moment Nvidia seems to have the best-performing products anyway, so why play hardball like this and risk bad publicity?

  • kyoto kid Posts: 41,845

    Wow, great find nicstt - I called that we wouldn't get memory pooling with the GeForce gaming series.  I would LOVE to be wrong, sounds like I may be!  laugh

     

     

    ...I'll wait, but I'm sticking with my original call and not putting any money down on whether it will or won't happen (too poor to do so).

  • kyoto kid Posts: 41,845

    As to NVLink configuration, here is the difference between the Quadro and GeForce series.

    First image: dual RTX 2080 Tis with NVLink.

    Second image: dual Quadros with NVLink.

    Notice that only one bridge is employed with the GeForce series (functioning much like an SLI bridge, though with a faster transfer rate), while two bridges are used on Quadro/Tesla cards (both needed to also support memory pooling).  The article the first image is from only mentioned faster data transfer rates between dual 2080 Tis with NVLink, and nothing about memory stacking.

    RTX2080 NVLink.jpg
    650 x 365 - 60K
    Quadro NVLink.jpg
    700 x 480 - 43K
  • ebergerly Posts: 3,255
    nicstt said:
    Perhaps they can write a negative review for *this* card, but will they end up being removed from Nvidia's "approved reviewers" list as a result? Nvidia has done that to other sites in the past. Will that knowledge cause current reviewers to pull their punches just so they can ensure they'll be able to review future products (which in turn drives page views).

    This is nothing new. NVIDIA or any company can decide not to send sample units to any reviewer if they don't like anything about that reviewer. That's always been the case. And not getting sample units to test means the reviewer needs to buy their own hardware, probably after the "approved" reviewers have already scored the big news with the early benchmarks. And how many reviewers would be willing to spend over $1,000 of their own money on a card that's old news? And if you're gonna review GPUs and CPUs and motherboards and water cooling and hard drives, etc., it gets real expensive real quick.

  • ebergerly Posts: 3,255
    edited August 2018
    From Nvidia’s Director of Technical Marketing, Tom Peterson: "With Quadro RTX cards, NVLink combines the memory of each card to create a single, larger memory pool. Petersen explained that this would not be the case for GeForce RTX cards. The NVLink interface would allow such a use case, but developers would need to build their software around that function."
    Post edited by ebergerly on
  • bluejaunte Posts: 1,990

    Hmm, so will Iray be built around that or not? Sounds like not, at least for GeForce. What a confusing bunch of people.

  • kyoto kid Posts: 41,845
    edited August 2018

    ...finally confirmation on memory stacking. 

    Maybe for the full Iray standalone accessible to pro-grade software users; not sure about the version integrated into Daz.  (Though if someone can afford a subscription to 3DS Max or C4D, they can probably afford a pair of RTX 5000s and the NVLink bridges to have 32 GB, about $5,200 total.)

    If Otoy is in a position to build that function in for the 2080/2080 Ti (maybe already working on it), then yeah, that would make a big difference.

    Looks like it's time to make a big bowl of popcorn and get a beer.

    Post edited by kyoto kid on
  • Takeo.Kensei Posts: 1,303
    ebergerly said:
    From Nvidia’s Director of Technical Marketing, Tom Peterson: "With Quadro RTX cards, NVLink combines the memory of each card to create a single, larger memory pool. Petersen explained that this would not be the case for GeForce RTX cards. The NVLink interface would allow such a use case, but developers would need to build their software around that function."

    I thought so. The software must support it. The question then is which software will support it.

    With NVLink, Windows 10 may finally have an advantage over Windows 7, because I think only WDDM 2.x can stack memory, and only DirectX 12 will have Microsoft's real-time ray tracing API. However, I'm sure a chunk of the stacked memory will still be unusable.

    Otherwise, the two other choices would be using the Nvidia SDK and maybe Vulkan (not sure about that one).

    My wish would be Vulkan, because that is cross-platform and hardware-agnostic, but there is a bigger probability that Nvidia's API will be used in most implementations for rendering apps, including Iray.

    Now I'm rather curious about AMD's answer (Radeon ProRender is announced to get real-time ray tracing).

  • kyoto kid Posts: 41,845
    ebergerly said:
    From Nvidia’s Director of Technical Marketing, Tom Peterson: "With Quadro RTX cards, NVLink combines the memory of each card to create a single, larger memory pool. Petersen explained that this would not be the case for GeForce RTX cards. The NVLink interface would allow such a use case, but developers would need to build their software around that function."

    I thought so. The software must support it. The question then is which software will support it

    With Nvlink Windows 10 may finally have an a advantage over Windows 7 because I think only wddm 2.x can stack memory and only directx 12 will have Microsoft Real Time raytracing API. However I'm sure a chunk of the stacked memory will still be unusable

    ...hopefully that won't be the case, or then I'm for sure not going to bother, as W10 has other less-than-desirable issues in my book...

  • prixat Posts: 1,616
    edited August 2018
    kyoto kid said:

    ...finally confirmation on memory stacking. 

    Maybe for the full Iray standalone accessible to pro grade software users, not sure about the version integrated into Daz.  (though if someone can afford a subscription to 3DSMax or C4D, they probably can afford a pair of RTX5000s and the NVLink bridges to have 32 GB (about 5,200$ total).

    If Otoy is in the position to build that function in for the 2080/2080Ti (maybe already working on it), then yeah, that would make a big difference.

    looks like time to make a big bowl of popcorn and get a beer.

    FYI:
    Iray for C4D was discontinued in November 2017. It was generally rejected as immature and not ready for production compared to the other 8 or 9 renderers available to C4D users. (Though the final straw was probably the adoption of AMD ProRender as the 3rd internal renderer.)

    Post edited by prixat on
  • Gator Posts: 1,319
    kyoto kid said:

    ...finally confirmation on memory stacking. 

    Maybe for the full Iray standalone accessible to pro grade software users, not sure about the version integrated into Daz.  (though if someone can afford a subscription to 3DSMax or C4D, they probably can afford a pair of RTX5000s and the NVLink bridges to have 32 GB (about 5,200$ total).

    If Otoy is in the position to build that function in for the 2080/2080Ti (maybe already working on it), then yeah, that would make a big difference.

    looks like time to make a big bowl of popcorn and get a beer.

    Times may have changed, but back in my early days contracting with small- to medium-sized businesses I worked at quite a few metalworking and fab shops.  They all balked at the cost of workstations and would always cheap out.  If things were the same there today, it's highly doubtful I could convince them to spend $5K on video cards alone for a CAD workstation.

  • 7thOmen Posts: 47
    ebergerly said:
    From Nvidia’s Director of Technical Marketing, Tom Peterson: "With Quadro RTX cards, NVLink combines the memory of each card to create a single, larger memory pool. Petersen explained that this would not be the case for GeForce RTX cards. The NVLink interface would allow such a use case, but developers would need to build their software around that function."

    This guy's answer is kind of ambiguous.

    Sentence 1: Quadro + NVLink pooling = yes

    Sentence 2: RTX + NVLink pooling = no

    Sentence 3: NVLink pooling = maybe, but no GPU family defined.

    So, how would the GeForce RTX series be capable of pooling if it has already been determined that the RTX family is a no-go? I would expect that this is a firmware switch whose state only Nvidia is capable of changing. Can the single GeForce RTX NVLink bridge even support pooling? If so, why are there two bridges on Quadros?

    It is surprising that this sort of information wasn't revealed at (or soon after) the announcement, but other important facts were omitted as well. So, we get to keep on guessing.

    BTW, it seems that the clearest performance picture can be found by examining the Titan V, since the Titan is the closest relative on the family tree. I suspect that there was an algorithm change to the Tensor cores that allows for the ray tracing feature on the RTX. I wonder if the Titan V will get ray tracing?

    Omen

     

  • 7thOmen said:
    ebergerly said:
    From Nvidia’s Director of Technical Marketing, Tom Peterson: "With Quadro RTX cards, NVLink combines the memory of each card to create a single, larger memory pool. Petersen explained that this would not be the case for GeForce RTX cards. The NVLink interface would allow such a use case, but developers would need to build their software around that function."

    This guy's answer is kind of ambiguous.

    Sentence 1: Quadro + NVLink pooling = yes

    Sentence 2: RTX + NVLink pooling = no

    Sentence 3: NVLink pooling = maybe, but no GPU family defined.

    So, how would The RTX series be capable of pooling if it has already be determined that the RTX family is a no-go? I would expect that this is a firmware switch that only nVidia is capable of state changes. Can the single RTX NVLink even support pooling? If so, why are there two bridges on Quadros?

    It is suprising that this sort of information wasn't revealed at (or soon after) the announcement, but other important facts were ommitted as well. So, we get to keep on guessing.

    BTW, it seems that the clearest performance picture can be found by examining the Titan V since the Titan is the closest relative on the family tree. I suspect that there was an algorithm change to the Tensor cores that allow for the ray tracing feature on the RTX. I wonder if the Titan V will get ray tracing?

    Omen

     

    I wonder if "developers" here means third-party card developers rather than software developers? It's a very strange rider; it looks to me like one of the later sentences is left over from an earlier draft of the reply.

  • Rashad Carter Posts: 1,830

    What they talkin' 'bout, Willis?

  • ebergerly Posts: 3,255
    edited August 2018
    If you look at the datasheet for the previous-generation Quadro (GP100), it says the same thing. Two 16 GB GP100s can access the total 32 GB of VRAM via NVLink, but "access to 32GB of memory via NVLink requires specific application support". I imagine they make you jump through hoops with the cheaper cards, but the expensive cards do it automatically. And maybe the $600 NVLink connectors on the Quadro RTX cards help, compared to the $50 ones on the GeForce RTX cards...
    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    edited August 2018
    I also imagine they try to make their boards as uniform as possible so they don't need a bunch of expensive custom production lines. I'm inclined to believe all the NVLink hardware is there on the boards, but functionally enabled and disabled by the connectors and drivers as desired. Maybe the latest CUDA 10 detects an expensive Quadro RTX with a $600 NVLink connector and fires up its fancy memory-stacking code to do it automatically. Otherwise you gotta do a bunch of limited low-level stuff on your own.
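    For what it's worth, the hooks for this already exist in the public CUDA API: the "application support" everyone keeps mentioning generally means the peer-access calls below. A minimal sketch (the device numbering and buffer size are my own assumptions, and whether the driver actually grants peer access over the GeForce RTX bridge is exactly the open question):

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        // Ask the driver whether GPU 0 can map GPU 1's memory directly
        // (over NVLink if a bridge is present, otherwise over PCIe).
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, /*device=*/0, /*peerDevice=*/1);
        if (!canAccess) {
            printf("GPU 0 cannot map GPU 1's memory directly.\n");
            return 0;
        }

        // Opt in: kernels running on GPU 0 may now dereference
        // pointers that were allocated on GPU 1.
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(/*peerDevice=*/1, 0);

        // Allocate a buffer that lives in GPU 1's VRAM.
        cudaSetDevice(1);
        float *buf = nullptr;
        cudaMalloc(&buf, 1 << 20);

        // A kernel launched on GPU 0 could now read/write 'buf'. That
        // mapping is the building block of a "pooled" memory scheme,
        // but the renderer itself must decide what data lives where.
        cudaSetDevice(0);
        // ... launch kernels that take 'buf' as an argument ...

        cudaFree(buf);
        return 0;
    }
    ```

    Whether Iray or any other renderer actually spreads a scene across both cards this way is up to its developers; the API only makes the mapping possible.
    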
    Post edited by ebergerly on
  • linvanchene Posts: 1,386
    edited August 2018

    This is the information Otoy developers shared for their estimate on Turing and NV Link support on Aug 31, 2018:

     

    "1. We do have to recompile Octane for CUDA 10 for it to work on Turing, but unlike previous releases, NVIDIA has given us the CUDA 10 SDK and we have a V4 build on Turing already in house.

    2. We should at minimum get V4 and possibly V3 updated in September before the cards launch with support. We are planning for V4 stable in September, assuming that RC3 fixes all known issues reported in RC1 and RC2 and no new issues are reported within a week or two following its release.

    3. NVLink on Turing GeForce RTX cards has not been tested by us yet, and we don't have all the details on how it works differently or is limited relative to Quadro/Tesla NV Link. We want to get our hands on the cards before we release this feature publicly, as we expect many Octane users will be looking to use it on 2080 Ti and 2080."

    Source:

    https://render.otoy.com/forum/ucp.php?i=pm&mode=compose&action=quotepost&p=345030

    Note: RC = Release Candidate.

    Post edited by linvanchene on
  • nicstt Posts: 11,715

    My understanding from watching the video interview is that the RTX 2080 and 2080 Ti cards can share resources, including RAM; it needs to be coded for by the application. The caveat I noticed was that one shouldn't presume it will be a pool of RAM encompassing both cards that works seamlessly at the same level of performance (for want of a better word).
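    That caveat matches the published link speeds. A rough sketch of how one might measure the cross-card penalty with CUDA events (device numbers and buffer size are assumptions; the figures in the comments are ballpark Turing specs, not measurements):

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        // Copies that cross the NVLink bridge are bounded by link
        // bandwidth (tens of GB/s on GeForce Turing), while local
        // GDDR6 bandwidth is in the hundreds of GB/s.
        const size_t bytes = 256u << 20;  // 256 MiB test buffer
        float *a = nullptr, *b = nullptr;
        cudaSetDevice(0); cudaMalloc(&a, bytes);
        cudaSetDevice(1); cudaMalloc(&b, bytes);

        cudaSetDevice(0);
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        // Time a direct GPU-to-GPU copy.
        cudaEventRecord(start);
        cudaMemcpyPeer(a, /*dstDevice=*/0, b, /*srcDevice=*/1, bytes);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("GPU1 -> GPU0: %.1f GB/s\n", (bytes / 1e9) / (ms / 1e3));

        cudaSetDevice(0); cudaFree(a);
        cudaSetDevice(1); cudaFree(b);
        return 0;
    }
    ```

    So even with pooling, memory reached through the other card would always be the slower tier; seamless it is not.
    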

  • Gator Posts: 1,319

    Yeah, I would expect a performance hit with memory pooling.

    Would be nice if we can get that in Iray with all the Turing cards, and be able to turn it on and off in the Iray render settings in Studio.  Wishful thinking anyway.  smiley

  • outrider42 Posts: 3,679
    kyoto kid said:

    ...the gaming community needs faster communication between cards for smoother frame rates, not more VRAM.  The NVLink for GeForce cards will deliver the former, but not the latter.  We 3D enthusiasts are still a very small niche in comparison to the gaming community or professional 3D production studios, which is why I don't see them offering memory stacking in their consumer line.

    Also, if these cards are indeed that wide, it would require people to purchase a new compatible motherboard if they want to use more than one card, as pretty much all motherboards today can at best handle dual-width cards.

    Three-slot video cards have existed for a very long time, as have motherboards that support them with extra space between the card slots. That is exactly why you have 3-slot-wide NVLink bridges.
  • kyoto kid Posts: 41,845
    prixat said:
    kyoto kid said:

    ...finally confirmation on memory stacking. 

    Maybe for the full Iray standalone accessible to pro grade software users, not sure about the version integrated into Daz.  (though if someone can afford a subscription to 3DSMax or C4D, they probably can afford a pair of RTX5000s and the NVLink bridges to have 32 GB (about 5,200$ total).

    If Otoy is in the position to build that function in for the 2080/2080Ti (maybe already working on it), then yeah, that would make a big difference.

    looks like time to make a big bowl of popcorn and get a beer.

    FYI:
    Iray for C4D was discontinued in Nov, 2017. It was generally rejected as immature and not ready for production compared to the other 8 or 9 renderers available to C4D users. (Though the final straw was probably the adoption of AMD Pro-Render as the 3rd internal renderer.)

    ...thank you for the update.

  • kyoto kid Posts: 41,845
    edited August 2018
    7thOmen said:
    ebergerly said:
    From Nvidia’s Director of Technical Marketing, Tom Peterson: "With Quadro RTX cards, NVLink combines the memory of each card to create a single, larger memory pool. Petersen explained that this would not be the case for GeForce RTX cards. The NVLink interface would allow such a use case, but developers would need to build their software around that function."

    This guy's answer is kind of ambiguous.

    Sentence 1: Quadro + NVLink pooling = yes

    Sentence 2: RTX + NVLink pooling = no

    Sentence 3: NVLink pooling = maybe, but no GPU family defined.

    So, how would The RTX series be capable of pooling if it has already be determined that the RTX family is a no-go? I would expect that this is a firmware switch that only nVidia is capable of state changes. Can the single RTX NVLink even support pooling? If so, why are there two bridges on Quadros?

    It is suprising that this sort of information wasn't revealed at (or soon after) the announcement, but other important facts were ommitted as well. So, we get to keep on guessing.

    BTW, it seems that the clearest performance picture can be found by examining the Titan V since the Titan is the closest relative on the family tree. I suspect that there was an algorithm change to the Tensor cores that allow for the ray tracing feature on the RTX. I wonder if the Titan V will get ray tracing?

    Omen

     

    ...from what I understand, the Titan V does not support NVLink. Even if it did, one would be better off with two $2,300 RTX 5000s and a $39 bridge (still not sure why the bridge for the Quadro RTX cards costs less than the GeForce ones).  If the Titan were able to make use of NVLink, you would need a pair of the Volta bridges, which cost $600, making the total cost for 24 GB $6,600: $300 more than a single RTX 6000, and almost $2,000 more than dual RTX 5000s, which would give you 32 GB of pooled memory, double the CUDA cores, more Tensor cores (768), and likely more RT cores (should they rededicate some of the Titan's Tensor cores to RT).

    Post edited by kyoto kid on
  • outrider42 Posts: 3,679
    ebergerly said:
    nicstt said:
    Perhaps they can write a negative review for *this* card, but will they end up being removed from Nvidia's "approved reviewers" list as a result? Nvidia has done that to other sites in the past. Will that knowledge cause current reviewers to pull their punches just so they can ensure they'll be able to review future products (which in turn drives page views).

    This is nothing new. NVIDIA or any company can decide not to send sample units to any reviewer if they don't like anything about that reviewer. That's always been the case. And not getting sample units to test means the reviewer needs to buy their own hardware, probably after the "approved" reviewers have already scored the big news with the early benchmarks. And how many reviewers would be willing to spend over $1,000 of their own money on a card that's old news? And if you're gonna review GPU's and CPU's and motherboards and water cooling and hard drives, etc., it gets real expensive real quick.   

    All the cool YouTubers get people to give them stuff for free. Pretty much every video I see, the reviewer is thanking somebody for the hardware, whether it is a storefront, a manufacturer, or their own fans. And none of those are Nvidia, as 3rd-party AIBs often do this. But some of these people actually do buy things with their own money. There are game reviewers who always buy their own games and outright refuse free copies (or donate the free copies to fans) on principle. When you factor in how many video games release, that is quite an expense to swallow.

    The NDA only affects pre-launch. Nvidia cannot stop anybody from posting reviews after launch, when anybody can buy the cards off the shelf. They would have to have the most restrictive consumer EULA in history to do so, where every single person buying would not be able to post a benchmark. Actually, it is interesting: Intel had a really weird restrictive EULA recently, but backlash quickly changed that.

    So while Nvidia can refuse to send certain reviewers units to test before launch, after the launch everything is fair game, and that is exactly what some reviewers are doing. This does mean that for some people their reviews will be later than others, due to having to wait to get the cards. But some of these people have enough respect from their audience that the audience will wait.

    If Nvidia is doing anything dirty, it will come back on them one way or another. They are taking a big risk with Turing. The cost is very high, and the performance is just not that improved. You are absolutely paying for ray tracing here, but gamers are not too excited about the idea. NOBODY buys a $1,200 GPU to play games at 1080p (with ray tracing). If it is doing ray tracing, they want it at 4K. Maybe 1440p at worst. But 1080p is like a joke. We might be happy, but gamers are not so excited. So Nvidia already has a problem. If they are doing things behind the scenes to restrict reviewers and whatnot, that will only compound it. And then you have the simple fact that they have a monster chip, but less than half of that chip is CUDA cores. This opens the door for AMD and Intel to catch up, and if they can compete at all, gamers will be willing to buy from them before Nvidia because of the bridges Nvidia is burning.

    Just think: AMD or Intel could release a 7nm card, make it smaller and thus cheaper, and fill it with enough streaming processors to beat the 2080 Ti in raw performance. If they dedicate the full die to streaming processors, they could do it. If Nvidia counters with 7nm, they have a problem. They cannot just abandon RT and Tensor so soon. They still have to dedicate half of the chip to these cores, and this cuts down how many CUDA cores they can place on any given die. Unless they build an expensive monster chip again, they would be faced with a disadvantage in "regular" games. In the blink of an eye, the whole GPU market can flip depending on how things roll.

    2019 and 2020 could get wild.

  • Taoz Posts: 10,236
    $600 for one NVLink sounds ridiculous - you can get a 1080Ti for that price.
  • kyoto kid Posts: 41,845
    edited September 2018

    ...well that's actually for two bridges as required for Volta cards.

    Post edited by kyoto kid on
  • nicstt Posts: 11,715
    ebergerly said:
    nicstt said:
    Perhaps they can write a negative review for *this* card, but will they end up being removed from Nvidia's "approved reviewers" list as a result? Nvidia has done that to other sites in the past. Will that knowledge cause current reviewers to pull their punches just so they can ensure they'll be able to review future products (which in turn drives page views).

    This is nothing new. NVIDIA or any company can decide not to send sample units to any reviewer if they don't like anything about that reviewer. That's always been the case. And not getting sample units to test means the reviewer needs to buy their own hardware, probably after the "approved" reviewers have already scored the big news with the early benchmarks. And how many reviewers would be willing to spend over $1,000 of their own money on a card that's old news? And if you're gonna review GPU's and CPU's and motherboards and water cooling and hard drives, etc., it gets real expensive real quick.   

    All the cool youtubers get people to give them stuff free. Pretty much every video I see, the reviewer is thanking somebody for the hardware, whether it is a storefront, a manufacturer, or their own fans. And none of those are Nvidia, as 3rd party AIBs often do this. But some of these people actually do buy things with their own money. There are game reviewers who always buy their own games and outright refuse free copies (or donate the free copies to fans) on priciple. When you factor how many video games release, that is quite an expense to swallow.

    The NDA only effects pre launch. Nvidia cannot stop anybody from posting reviews after launch when anybody can buy the cards off the shelf. They would have to have the most restrictive consumer EULA in history to so, where every single person buying would not be able to post a benchmark. Actually it is interesting, Intel had a really weird restrictive EULA recently, but backlash quickly changed that.

    So while Nvidia can refuse to send certain reviewers units to test before launch, after the launch everything is fair game, and that is exactly what some reviwers are doing. This does mean that for some people their reviews will be later than others due to having to wait to get the cards. But for some of these people, they have enough respect from their audience that they will wait.

    If Nvidia is doing anything dirty, it will come back on them one way or another. They are taking a big risk with Turing. The cost is very high, and the performance is just not that improved. You are absolutely paying for ray tracing here, but gamers are not too excited about the idea. NOBODY buys a $1200 GPU to play games at 1080p (with ray tracing). If it is doing ray tracing, they want it at 4K. Maybe 1440p at worst. But 1080p is like a joke. We might be happy, but gamers are not so excited. So Nvidia already has a problem. If they are doing things behind the scenes to restrict reviewers and whatever, that will only compound it. And then you have the simple fact that they have a monster chip, but less than half of that chip has CUDA cores on it. This opens the door for AMD and Intel to catch up, and if they can compete at all, gamers will be willing to buy from them before Nvidia because of the bridges Nvidia is burning.

    Just think, AND or Intel could release a 7nm card, make it smaller than thus cheaper, and fill it with enough streaming processors to beat the 2080ti in raw performance. If they dedicate the full die to streaming processors they could do it. If Nvidia counters with 7nm, they have a problem. They cannot just abandon RT and Tensor so soon. They still have to dedicate half of the chip to these cores, and this cuts down how many CUDA cores they can place on any given die. Unless they build an expensive monster chip again, they would be faced with a disadvantage in "regular" games. In the blink of an eye, the whole GPU market can flip depending on how all things roll.

    2019 and 2020 could get wild.

    NDAs (particularly the one from Nvidia) don't apply once the information the NDA protects is generally available: either once the date the NDA is lifted is reached, or if the info is released early and becomes generally available.

  • Taoz Posts: 10,236
    kyoto kid said:

    ...well that's actually for two bridges as required for Volta cards.

    Ok, still pretty expensive though.
