OT: New Nvidia Cards to come in RTX and GTX versions?! RTX Titan first whispers.


Comments

  • While most folks have been talking about the new high-end (and expensive) RTX cards, I have been more interested in finding out about the budget-priced cards more in my (and many people's) price range - the 2060 and 2050 - and what sort of rendering performance you could get compared to current cards. If the speculation that entry-level RTX 2060 cards may not support ray tracing is correct, that would be unfortunate for many DS Iray users. I hope it doesn't turn out that way.

    https://www.digitaltrends.com/computing/nvidia-geforce-rtx-2060-launch-next-year-without-ray-tracing/

     

  • outrider42 Posts: 3,679

    I don't think there is any chance at all for the 2060 and below to have ray tracing. Sorry. This goes a long way back to when the "RTX" branding was first rumored. They had the 2070 and up as RTX, while the 2060 and below were GTX. So this all pretty much confirms it. Plus, for gaming these cards would be totally useless for ray tracing. After all, the 2080ti struggles to hit 1080p at 60 fps. If the biggest card struggles to hit that spec, then how will the 2080 and 2070 fare? The 2070 has 40% less ray tracing ability than the 2080ti. So if the 2080ti only hits 60 fps with RTX on, then the 2070 is probably barely playable at around 35 fps (roughly 60 fps × 0.6 ≈ 36 fps). Gamers do not want to play under 60 fps, let alone 30. And given this, if the 2060 had any RT cores, there is no way that it could play any RTX game at a reasonable frame rate. So it is pointless to include RT cores when they add so much to the cost of the card. It is largely because of the RT and Tensor additions that the 2070 and up are so super expensive. The 2080ti is a 752 mm² chip; it is absolutely massive. That die size costs a lot to produce. The 2060 is generally the mass market GPU at around $250. It would not be possible to keep the card at that price point with RT and Tensor cores.

    You may get lucky and get Tensor cores though. The rumored die size in the 2060 is still kind of large, which makes me think that it may still have some Tensor cores aboard. If it does, you will still get a little boost from Tensor.

  • outrider42 Posts: 3,679

    I think I may be right about the 2060 at least getting Tensor cores. More details on the chips have just been released, and Tensor cores are a big part of the SMs Nvidia is making, while the RT cores are in their own section.

    As the block diagram shows, Tensor is deeply embedded into the Turing chip, so it looks to me like many other Turing cards will retain Tensor while they may not have ray tracing. However, the RT core sits all alone at the bottom, so I imagine it can easily be left out. Also, if the diagram is anywhere near to scale, that RT core is massive! Tensor cores take up a chunk of space, too. That also explains the huge size of these chips.

    The reason why Tensor would stay is easy: Tensor can still make a big impact with the new "DLSS" tech, which, when enabled, can boost frame rates by quite a bit in games that support it. Actually, this feature will probably be much better received by gamers... boosting frame rates is always a big win. DLSS is basically denoising in real time: the Tensor cores draw upon machine learning to fill in missing detail, allowing the frame rate to stay higher than it would normally. Do games need to be updated for this feature? Probably, but I believe this feature will get a lot more support than ray tracing as it can enhance every game. Those extra frames can make the difference in whether a game is playable.

    Tensor can also aid with physics calculations and similar things, so Tensor can be made to do a lot of different tasks. You could in theory even write a game solely on Tensor; it's just a matter of code.
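    For anyone curious what "writing code on Tensor" actually looks like, here is a minimal CUDA sketch using the WMMA intrinsics Nvidia exposes for Tensor cores. It needs a Volta/Turing-class GPU and something like nvcc -arch=sm_75, and the all-ones 16x16 matrices are just a made-up test case, not anything from a real workload:

    #include <cstdio>
    #include <mma.h>
    #include <cuda_fp16.h>
    using namespace nvcuda;

    // One warp multiplies a pair of 16x16 half-precision tiles on the Tensor
    // cores and accumulates the result in FP32.
    __global__ void tile_matmul(const half *a, const half *b, float *c) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

        wmma::fill_fragment(c_frag, 0.0f);
        wmma::load_matrix_sync(a_frag, a, 16);           // leading dimension = 16
        wmma::load_matrix_sync(b_frag, b, 16);
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // the actual Tensor core op
        wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
    }

    int main() {
        half *a; half *b; float *c;
        cudaMallocManaged((void**)&a, 256 * sizeof(half));
        cudaMallocManaged((void**)&b, 256 * sizeof(half));
        cudaMallocManaged((void**)&c, 256 * sizeof(float));
        for (int i = 0; i < 256; ++i) { a[i] = __float2half(1.0f); b[i] = __float2half(1.0f); }

        tile_matmul<<<1, 32>>>(a, b, c);   // a single warp drives the unit
        cudaDeviceSynchronize();
        printf("c[0] = %.1f (expect 16.0)\n", c[0]);

        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

    Anything that can be phrased as lots of these little matrix multiplies (denoising, DLSS-style upscaling, some physics) can in principle ride on that hardware.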

  • When you speak of new drivers and a new Iray to support the RTX cards, would that mean that an RTX card would not work at all for Studio when it is released? Or could you still use it and benefit from the boost in CUDA core count and the fact that the cores are of a newer generation, with all the flashy RT stuff implemented later?

     

  • nicstt Posts: 11,715

    No idea, but when the 10 series was released it took months for Nvidia to release drivers for it. Hopefully the 20 series will be quicker but we don't actually know.

  • ebergerly Posts: 3,255
    I'm guessing that NVIDIA wouldn't go to the expense of setting up a separate, complicated, and expensive production line to fabricate a special version of the RTX chip for a low-end card. It seems more reasonable to just use the same hardware and disable whatever functionality for the cheap cards.
  • Ghosty12 Posts: 2,080
    edited September 2018

    Here is one of the first videos giving a look at the 2080 Founders Edition. What is interesting is that it is quite hefty: the bare card weighs 238 grams while the fans and heatsink weigh 936 grams, since almost all of the card is aluminium.

    Post edited by Ghosty12 on
  • nicstt Posts: 11,715

    I love my nerd porn, but I'm getting bored by it; too much without any completion.

  • kyoto kid Posts: 41,845
    edited September 2018

    ...interesting, he made a reference to the NVLink as being "SLI".

    Post edited by kyoto kid on
  • outrider42 Posts: 3,679
    kyoto kid said:

    ...interesting, he made a reference to the NVLink as being "SLI".

    NVlink is backward compatible with SLI, and you will probably hear many people refer to it as SLI for a very long time because that is what they are used to. Gamers have had the term SLI for about 20 years. Plus SLI is just easier to say and type.

    Gamers Nexus has an excellent technical breakdown, and they also have a teardown video. Their teardown video is interesting because he goes into a lot more detail about the parts on the board. And best of all, GN does not do any goofy unboxing scene. I really don't care for unboxing videos that much, just show me the freakin' product already!

    Fair warning, these videos are kind of long! One is 25 minutes, the other 30. If you watch just one, watch the breakdown vid. But the teardown is actually kind of funny as he curses the overly complex construction with over 50 screws.

    And again, it is very important to remember that Daz Studio and Iray will almost certainly need an update to use these new cards. We had to wait for Pascal to get supported for a long time. I don't think we will need to wait as long, but be prepared for that. Don't go buying a Turing card and trading your old card in, because you will probably not be able to run Daz Studio at all! If you do buy Turing, keep your old card so you can run Daz until it gets the Turing update.

    Nvidia needs to release the updated Iray SDK first, and then Daz needs to integrate the plug-in. It will show up in the Daz Studio beta first, before the general release. Watch the site for updates.

    So when you do a render that falls back to just the CPU, is it better to have a 4-core CPU with a faster clock speed or a 6-core CPU with a slower clock speed?

    I am asking because I intend to grab a "cheap rig" (1070 or 1080) before the end of this year and then get a "real rig" next year when things have had a chance to shake out.

  • kyoto kid Posts: 41,845

    ..well, you definitely want as many CPU cores as you can afford.

  • outrider42 Posts: 3,679
    edited September 2018
    It depends on what you are comparing there, but in general you want more cores. But a modern 4 core will probably cream an old AMD 6 core Bulldozer, so that's not always the case. Don't forget there is a benchmark thread here, it is mostly for GPU, but there are CPU marks in it, too. It may take some digging, but you may find something there to give you an idea.
    Post edited by outrider42 on
  • Ghosty12 Posts: 2,080
    edited September 2018

    I found the deep dive more informative and interesting than the teardown.

    Post edited by Ghosty12 on
  • ebergerly Posts: 3,255
    edited September 2018
    kyoto kid said:

    ...interesting, he made a reference to the NVLink as being "SLI".

    NVlink is backward compatible with SLI, and you will probably hear many people refer to it as SLI for a very long time because that is what they are used to. Gamers have had the term SLI for about 20 years. Plus SLI is just easier to say and type.

     

    Or maybe because, as Tom Petersen of NVIDIA said, "today we're using NVLINK to make SLI better", and effectively the gaming-card NVLink is implemented as a higher-bandwidth SLI connection. And as the NVIDIA page for the 2080 says, "The GeForce RTX NVLink bridge connects two NVLink SLI-ready graphics cards". Which is probably why the Quadro version of the NVLink bridge costs $600, and the 2080/2080ti version is more like $60. Like I always say, it's very complicated.

    It's not just the extent to which the full hardware NVLink interconnect is implemented/enabled on the gaming cards, it's the fact that none of them implement a true NVLink connection to the CPU/RAM to bypass the much slower PCIe. And that's very important if your particular application relies on transferring back and forth between GPU and CPU/RAM, as is the intent with CUDA's "Unified Memory" model I've mentioned before (there's a small sketch of that model at the end of this post).

    And it's the extent to which the all-important SOFTWARE side of things is implemented, as Petersen clearly stated and is quite obvious. Just because a connector is connected doesn't mean that all the functionality is there, or is implemented, in any particular software. It requires drivers and CUDA and software apps to be written and OPTIMIZED to take advantage of all of this. Which is why I think it's gonna be a very long time before we see final, or even significant, results from many of these new Turing technologies. Most software out there right now has no clue what an RT core is.  

    My simple view of SLI is that it has one GPU working on one frame of a game, and the other GPU working on another frame simultaneously. And the gaming NVLINK sounds like it will just improve the bandwidth of those interactions between GPU's. Does that matter to anyone? Who knows? Maybe some will get a faster frame rate in games, as long as the GPU's calculating each frame isn't the bottleneck.  

    But for rendering, until a more complete NVLink implementation, both hardware and software, is available that allows us to stack VRAM and minimize page faults back to slow system RAM over the PCIe bus, and until we can afford to buy two or more cards with full NVLink like that, it seems to me that NVLink is kinda useless for most of us.
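    For reference, here is a bare-bones sketch of that Unified Memory model; the kernel and sizes are made up for illustration. One pointer is visible to both CPU and GPU, and the driver pages it back and forth on demand. Without an NVLink path to the CPU, every one of those migrations travels over PCIe, which is the bottleneck I keep pointing at:

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, int n, float k) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= k;
    }

    int main() {
        const int n = 1 << 20;
        float *data;
        // Managed allocation: one pointer, migrated between system RAM and
        // VRAM by the driver as the CPU and GPU touch it in turn.
        cudaMallocManaged((void**)&data, n * sizeof(float));
        for (int i = 0; i < n; ++i) data[i] = 1.0f;        // CPU writes (pages live in RAM)

        scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);    // GPU touches the same pointer
        cudaDeviceSynchronize();

        printf("data[0] = %.1f (expect 2.0)\n", data[0]);  // CPU reads the result back
        cudaFree(data);
        return 0;
    }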

    Post edited by ebergerly on
  • ebergerly Posts: 3,255
    JazzyBear said:

    So when you do a render that falls back to just CPU, is it better to have a 4xFasterClockSpeed or a 6xSlowerClockSPeed CPU?

    I am asking because I intend to grab a "cheap rig" (1070 or 1080) before the end of this year and then get a "real rig" next year when things have had a chance to shake out.

    I have an 8-core, 16-thread Ryzen 7 1700. It renders a scene in 20 minutes that a 1070 renders in 3 minutes. And when all cores of the CPU are rendering, my computer becomes unresponsive (unless I disable some cores for the rendering process), and having the CPU render alongside makes no significant difference in render times compared to the 1070 alone. Unless you have some other software that uses a powerful CPU, I'd focus on the GPU and not worry too much about the CPU; CPUs are becoming less and less relevant for rendering. Unless, that is, you have monster scenes that use at least 16-24 GB of system RAM (which is what's required to overload a GTX 1070's 8 GB of VRAM) and don't mind relying on painfully slow CPU rendering instead of managing your scenes or doing some post-production to minimize the VRAM requirement.

  • nicstt Posts: 11,715
    edited September 2018

    I wouldn't bother with AMD prior to Ryzen.

    If your scenes drop to CPU fairly often, then it is a consideration. I have a 1950X; a scene that renders on my 980ti in five minutes takes about 15 on the CPU. I tend to remove a core and/or thread so I can still continue to use the system. I have other software that uses the CPU cores, otherwise I'm not sure I would have gone for the Threadripper, although I do have Photoshop open, and other programs, which take a lot of RAM; I'm regularly at the 50GB mark, but they aren't particularly CPU-taxing. I can have multiple instances of Studio open, though, and render on the 980ti, the 970 (which is used for displays), and even on the CPU as well; rendering a scene, testing a scene and working on a new one. I do tend to avoid rendering on the display card though; it just gets annoying.

    More cores and more cards open up options.

    Post edited by nicstt on
  • outrider42 Posts: 3,679
    ebergerly said:
    kyoto kid said:

    ...interesting, he made a reference to the NVLink as being "SLI".

    NVlink is backward compatible with SLI, and you will probably hear many people refer to it as SLI for a very long time because that is what they are used to. Gamers have had the term SLI for about 20 years. Plus SLI is just easier to say and type.

     

    Or maybe because, as Tom Petersen of NVIDIA said, "today we're using NVLINK to make SLI better", and effectively the gaming-card NVLink is implemented as a higher-bandwidth SLI connection. And as the NVIDIA page for the 2080 says, "The GeForce RTX NVLink bridge connects two NVLink SLI-ready graphics cards". Which is probably why the Quadro version of the NVLink bridge costs $600, and the 2080/2080ti version is more like $60. Like I always say, it's very complicated.

    It's not just the extent to which the full hardware NVLINK interconnection is implemented/enabled with the gaming cards, it's the fact that none of them also implement a true NVLINK interconnection with the CPU/RAM to bypass the much slower PCIe. And that's very important if your particular application relies on transferring back and forth between GPU and CPU/RAM, as is the intent with CUDA's "Unified Memory" model I've mentioned before.

    And it's the extent to which the all-important SOFTWARE side of things is implemented, as Petersen clearly stated and is quite obvious. Just because a connector is connected doesn't mean that all the functionality is there, or is implemented, in any particular software. It requires drivers and CUDA and software apps to be written and OPTIMIZED to take advantage of all of this. Which is why I think it's gonna be a very long time before we see final, or even significant, results from many of these new Turing technologies. Most software out there right now has no clue what an RT core is.  

    My simple view of SLI is that it has one GPU working on one frame of a game, and the other GPU working on another frame simultaneously. And the gaming NVLINK sounds like it will just improve the bandwidth of those interactions between GPU's. Does that matter to anyone? Who knows? Maybe some will get a faster frame rate in games, as long as the GPU's calculating each frame isn't the bottleneck.  

    But for rendering, until a more complete NVLink implementation, both hardware and software, is available that allows us to stack VRAM and minimize page faults back to slow system RAM over the PCIe bus, and until we can afford to buy two or more cards with full NVLink like that, it seems to me that NVLink is kinda useless for most of us.

    Or they use the term SLI because of the simple fact that it is what people have called it for 20 years. If you say NVlink, most people are going to wonder what that is. But if you say it is the new kind of SLI, they will instantly know what it is. And again, NVlink is back compatible with SLI, so using the term is important. And technically the term is correct, because all it stands for is 'scalable link interface'.

    More info https://www.techpowerup.com/reviews/NVIDIA/GeForce_Turing_GeForce_RTX_Architecture/14.html

    Note there is another difference between the 2080ti and 2080. The 2080ti in fact has 2 links, while 2080 has only 1. So the 2080 is getting scaled back...but the 2080ti is not.

    Using price to justify a point has no meaning. A Quadro link costs more because it is Quadro. Do you honestly believe that a link cable really has $600 worth of parts in it? It costs $600 for the Quadro NVlink because that is what Nvidia wants to charge for it, simple as that.
  • ebergerly Posts: 3,255

    Or they use the term SLI because of the simple fact that it is what people have called it for 20 years. If you say NVlink, most people are going to wonder what that is. But if you say it is the new kind of SLI, they will instantly know what it is. And again, NVlink is back compatible with SLI, so using the term is important. And technically the term is correct, because all it stands for is 'scalable link interface'.

     

    Just don't be surprised if it doesn't have anywhere near the functionality everyone is expecting from a true NVLink, and in practice (at least for the gaming cards) amounts to not much more than a bandwidth-improved SLI connector.

  • kyoto kid Posts: 41,845

    .

    ebergerly said:
    kyoto kid said:

    ...interesting, he made a reference to the NVLink as being "SLI".

    NVlink is backward compatible with SLI, and you will probably hear many people refer to it as SLI for a very long time because that is what they are used to. Gamers have had the term SLI for about 20 years. Plus SLI is just easier to say and type.

     

    Or maybe because, as Tom Petersen of NVIDIA said, "today we're using NVLINK to make SLI better", and effectively the gaming-card NVLink is implemented as a higher-bandwidth SLI connection. And as the NVIDIA page for the 2080 says, "The GeForce RTX NVLink bridge connects two NVLink SLI-ready graphics cards". Which is probably why the Quadro version of the NVLink bridge costs $600, and the 2080/2080ti version is more like $60. Like I always say, it's very complicated.

    It's not just the extent to which the full hardware NVLINK interconnection is implemented/enabled with the gaming cards, it's the fact that none of them also implement a true NVLINK interconnection with the CPU/RAM to bypass the much slower PCIe. And that's very important if your particular application relies on transferring back and forth between GPU and CPU/RAM, as is the intent with CUDA's "Unified Memory" model I've mentioned before.

    And it's the extent to which the all-important SOFTWARE side of things is implemented, as Petersen clearly stated and is quite obvious. Just because a connector is connected doesn't mean that all the functionality is there, or is implemented, in any particular software. It requires drivers and CUDA and software apps to be written and OPTIMIZED to take advantage of all of this. Which is why I think it's gonna be a very long time before we see final, or even significant, results from many of these new Turing technologies. Most software out there right now has no clue what an RT core is.  

    My simple view of SLI is that it has one GPU working on one frame of a game, and the other GPU working on another frame simultaneously. And the gaming NVLINK sounds like it will just improve the bandwidth of those interactions between GPU's. Does that matter to anyone? Who knows? Maybe some will get a faster frame rate in games, as long as the GPU's calculating each frame isn't the bottleneck.  

    But for rendering, until a more complete NVLink implementation, both hardware and software, is available that allows us to stack VRAM and minimize page faults back to slow system RAM over the PCIe bus, and until we can afford to buy two or more cards with full NVLink like that, it seems to me that NVLink is kinda useless for most of us.

    ...I've been saying this for a while now.

    Unfortunately, as I mentioned, the current NVLink motherboards are targeted towards data centre servers and research workstations/supercomputers for deep learning purposes. The only card that is fully NVLink enabled (both between cards and with the CPU) is the Tesla V100. Even the new RTX Quadros are still PCIe.

  • outrider42 Posts: 3,679
    ebergerly said:

    Or they use the term SLI because of the simple fact that it is what people have called it for 20 years. If you say NVlink, most people are going to wonder what that is. But if you say it is the new kind of SLI, they will instantly know what it is. And again, NVlink is back compatible with SLI, so using the term is important. And technically the term is correct, because all it stands for is 'scalable link interface'.

     

    Just don't be surprised if it doesn't have anywhere near the functionality everyone is expecting from a true NVLink, and in practice (at least for the gaming cards) amounts to not much more than a bandwidth-improved SLI connector.

    What part of Tom's interview, where he confirmed memory pooling, is incorrect?

  • ebergerly Posts: 3,255
    ebergerly said:

    Or they use the term SLI because of the simple fact that it is what people have called it for 20 years. If you say NVlink, most people are going to wonder what that is. But if you say it is the new kind of SLI, they will instantly know what it is. And again, NVlink is back compatible with SLI, so using the term is important. And technically the term is correct, because all it stands for is 'scalable link interface'.

     

    Just don't be surprised if it doesn't have anywhere near the functionality everyone is expecting from a true NVLink, and in practice (at least for the gaming cards) amounts to not much more than a bandwidth-improved SLI connector.

    What part of Tom's interview, where he confirmed memory pooling, is incorrect?

    It did not confirm memory pooling in the GeForce RTX cards. In fact, it said:

    "With Quadro RTX cards, NVLink combines the memory of each card to create a single, larger memory pool. Petersen explained that this would not be the case for GeForce RTX cards. The NVLink interface would allow such a use case, but developers would need to build their software around that function." 

    Which clearly sounds like, while there is hardware capability for memory pooling, it is not nearly as easy to implement as it is with the Quadro cards. It would take developers writing software that does the memory pooling, and there's no clear answer on how hard that might be or how optimized and useful you can make it. Presumably there's not only hardware capability with the Quadro cards, but also a lot of software that makes memory pooling much easier. And presumably that's why they're charging $700 for the Quadro version.
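    To give a rough idea of what "developers would need to build their software around that function" means in practice, here is a small CUDA sketch that turns on peer access between two cards so work running on one GPU can reach the other GPU's memory directly (over NVLink when a bridge is present, otherwise over PCIe). The device IDs and buffer size are invented, and this is just the low-level plumbing, not anything Iray-specific:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int canAccess = 0;
        // Ask the driver whether GPU 0 can read/write GPU 1's memory directly.
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);
        if (!canAccess) { printf("No peer access between GPU 0 and GPU 1\n"); return 1; }

        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);       // GPU 0 may now dereference GPU 1 pointers

        // One buffer on each card; a kernel on GPU 0 could now read buf1 in place
        // instead of the application first copying everything into GPU 0's VRAM.
        float *buf0; float *buf1;
        cudaSetDevice(0); cudaMalloc((void**)&buf0, 1 << 20);
        cudaSetDevice(1); cudaMalloc((void**)&buf1, 1 << 20);

        // Explicit card-to-card copy as a simple demonstration.
        cudaMemcpyPeer(buf0, 0, buf1, 1, 1 << 20);
        cudaDeviceSynchronize();
        printf("peer copy done\n");

        cudaFree(buf1);
        cudaSetDevice(0);
        cudaFree(buf0);
        return 0;
    }

    Whether a renderer actually splits its scene that way, and how well it performs over a single bridge, is exactly the part that still has to be written and optimized.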

  • kyoto kid Posts: 41,845

    ... the prices people are seeing for Quadro NVLink are for the dual bridge sets that only work with the GP100  and GV100.  I don't believe Nvidia has yet published a price for the new single Quadro NVLink bridges, but yes, they will likely cost more than the ones for the GeForce series.

  • nicstt Posts: 11,715

    On the memory pooling there seems to be disagreement; the RTX cards have the functionality, but the software needs to be written to allow it. That seems to me like the cards have it. If it couldn't be implemented in software because the cards did not have the feature, then it would be correct to say they did not have memory pooling.

    Seems to me though that there is an argument about something we know nothing about as yet, except what Nvidia has chosen to tell us; I'd sooner wait.

  • ebergerly Posts: 3,255
    nicstt said:

    On the memory pooling there seems to be disagreement; the RTX cards have the functionality, but the software needs to be written to allow it. That seems to me like the cards have it. If it couldn't be implemented in software because the cards did not have the feature, then it would be correct to say they did not have memory pooling.

    Seems to me though that there is an argument about something we know nothing about as yet, except what Nvidia has chosen to tell us; I'd sooner wait.

    Don't assume that because the hardware capability exists, the feature exists. There are two parts to the equation: hardware and software. Writing software to implement a feature can be very difficult and time-consuming. And maybe it's not something that some/many/most software companies catering to lower-end gaming users are willing to spend time and money on. A lot depends on how the latest CUDA and the drivers are configured, and how easy they make it for developers to implement VRAM stacking. Instead of having, say, an entire scene located in each GPU's VRAM, you now have to configure it so that the scene information is spread across both GPUs' VRAM. And how do you handle "page migration" and "page faults", where you run out of VRAM? There's a lot of coordination and scheduling that needs to take place. And if gamers don't really care, why bother spending time developing software to implement it?
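    Just to make that concrete, here is a speculative sketch of the kind of plumbing "spreading a scene across both GPUs" implies on the CUDA side. The 8 GB "scene", the 50/50 split, and the device IDs are all invented for illustration; a real renderer would have to decide what lives where and keep the kernels fed:

    #include <cuda_runtime.h>

    int main() {
        const size_t total = (size_t)8 << 30;       // pretend the scene data is 8 GB
        const size_t chunk = total / 2;

        float *scene;
        cudaMallocManaged((void**)&scene, total);   // one pointer, migratable pages

        // Hint the driver where each half should live, then prefetch it there so
        // kernels don't stall on page faults the first time they touch the data.
        cudaMemAdvise(scene, chunk, cudaMemAdviseSetPreferredLocation, 0);
        cudaMemAdvise((char*)scene + chunk, chunk, cudaMemAdviseSetPreferredLocation, 1);
        cudaMemPrefetchAsync(scene, chunk, 0, 0);
        cudaMemPrefetchAsync((char*)scene + chunk, chunk, 1, 0);

        // ... launch kernels on each GPU against its half of the scene ...
        // Any access that misses the local half still works, but it costs a page
        // migration or a remote read -- the coordination problem described above.

        cudaDeviceSynchronize();
        cudaFree(scene);
        return 0;
    }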

  • nicstt Posts: 11,715

    I differentiated between software and hardware; however, the 'mere' fact that the functionality exists in hardware means it boils down to whether software is written to support it, and, as has been said by Nvidia, 3rd parties can write said software.

    Software implements functionality; whether or not it is currently supported by software doesn't alter the fact that it could be. Nvidia and others can choose to disable features of the hardware, and do so. In this instance, it hasn't been disabled.

  • Too bad Nvidia doesn't have the same capability AMD has of mixing different generations in CrossFire/SLI... Bah, I'll be waiting for several years anyway before my 1080 becomes obsolete and the RTX prices drop!

  • kyoto kid Posts: 41,845

    ...same here with my Titan X.

  •  

    kyoto kid said:

    ...same here with my Titan X.

    I WISH I had one of those, but they came out only after I bought my 1080! ; )

  • drzap Posts: 795
    edited September 2018

    This article gives the most concise and straightforward breakdown of RTX that I have found thus far. It explains the role of each of the chip's cores in the rendering pipeline while also delving into the NVLink feature in the consumer cards. It even gives some insight that might explain why the Volta cards are so much faster in Iray and other renderers. Also take note of the issues raised in the comments below the article. This guy seems to know his stuff. Good read.

    https://www.pcper.com/reviews/Graphics-Cards/Architecture-NVIDIAs-RTX-GPUs-Turing-Explored

    Post edited by drzap on