OT: The New Nvidia Titan V, Feel The Power

Ghosty12 Posts: 1,955
edited December 2017 in The Commons

Do you want an awesome, powerful and speedy video card? Then the new Nvidia Titan V is for you. The one problem is the eye-watering cost of $3,000 USD for the card.. So you will have the power but go broke at the same time. For now, here are the specs of the new card..

Specs:

MSRP $3000

Architecture: Volta

Process: TSMC 12 nm FFN

Die Size: 815 mm²

Cuda Cores: 5120

Tensor Cores: 640

Core / Boost Clock: 1.2 GHz/1.45 GHz

Memory Clock: 1.7 Gbps HBM2

Memory Bus Width: 3072-bit

Memory Bandwidth: 653 GB/s

VRAM Capacity: 12 GB HBM2

L2 Cache: 4.5 MB

Single Precision: 13.8 TFLOPS

Double Precision: 6.9 TFLOPS

So a rather impressive card to say the least, but very expensive, as it is in the realms of Quadro card pricing.. Would love to have one but could buy an entire computer system for that price.. :)  What is interesting is that the Titan V is a slightly cut-down version of the extremely expensive Nvidia Tesla V100 card..
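As a quick sanity check on those figures, the bandwidth number falls straight out of the bus width and memory clock, and the double precision figure is exactly half the single precision rate. A minimal Python sketch (the constants are just the spec values above, nothing official):

# Back-of-envelope check of the listed specs.
bus_width_bits = 3072      # HBM2 bus width from the list above
data_rate_gbps = 1.7       # per-pin data rate (the "memory clock" figure)

# Peak bandwidth = bus width (bits) x data rate (Gb/s per pin) / 8 bits per byte
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"Memory bandwidth: {bandwidth_gb_s:.1f} GB/s")          # ~652.8 GB/s, i.e. the ~653 GB/s listed

fp32_tflops = 13.8
fp64_tflops = 6.9
# Volta runs FP64 at half the FP32 rate, which the two listed numbers reflect
print(f"FP64 to FP32 ratio: {fp64_tflops / fp32_tflops:.2f}")  # 0.50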

Post edited by Ghosty12 on

Comments

  • Mattymanx Posts: 6,877

    It appears to be the equivalent of two 980 Tis in a single card.  While I have no doubts that it will perform better in certain aspects, it would be less expensive to buy two of these instead of one of those - https://www.zotac.com/us/product/graphics_card/gtx-980-ti-amp-extreme

     

  • nonesuch00 Posts: 17,890

    Well, the 12GB HBM2 makes the card & its price sort of a crass joke really. It's nothing more than a minor speed improvement designed to bilk the insecure out of their money. Had they put 32GB or, dare I say it, even 64GB of RAM in that card at that price, then they could have been taken seriously.

  • hacsart Posts: 2,024

    I've owned cars that cost less....

  • ebergerly Posts: 3,255
    edited December 2017

    It's designed for scientific calculations, artificial intelligence, and data operations, not graphics stuff. Presumably a big deal for scientists and engineers (I think the "tensor cores" are the big deal), but not something game players and renderers will care about. And obviously not targeted at the general public at that price, only large businesses and research facilities I assume. But I'm sure it will give a lot of tech folks in the computer forums a lot to get all giggly about. And of course we'll see the "wow it demolishes the 1080ti because it gives 10% better performance in this benchmark" type of stuff. 

    Post edited by ebergerly on
  • Ghosty12 Posts: 1,955
    edited December 2017
    Mattymanx said:

    It appears to be the equivalent of two 980 Tis in a single card.  While I have no doubts that it will perform better in certain aspects, it would be less expensive to buy two of these instead of one of those - https://www.zotac.com/us/product/graphics_card/gtx-980-ti-amp-extreme

     

    Yeah, you could get more for less of course with two previous-gen cards, and it does make sense, since the Titan V is aimed more at AI and science work; that is what the Tensor Cores are for..

    ebergerly said:

    It's designed for scientific calculations, artificial intelligence, and data operations, not graphics stuff. Presumably a big deal for scientists and engineers (I think the "tensor cores" are the big deal), but not something game players and renderers will care about. And obviously not targeted at the general public at that price, only large businesses and research facilities I assume. But I'm sure it will give a lot of tech folks in the computer forums a lot to get all giggly about. And of course we'll see the "wow it demolishes the 1080ti because it gives 10% better performance in this benchmark" type of stuff. 

    Yeah, that is what it is meant for, though I'm not sure why they would release a cut-down version of the Tesla V100, which the Titan V is, since the specs of the two cards are nearly the same.. And since it is aimed at that market, those buyers would go for the $10000 Tesla V100, since they would have the funds..

    Post edited by Ghosty12 on
  • Ghosty12 Posts: 1,955
    edited December 2017

    nonesuch00 said:

    Well, the 12GB HBM2 makes the card & its price sort of a crass joke really. It's nothing more than a minor speed improvement designed to bilk the insecure out of their money. Had they put 32GB or, dare I say it, even 64GB of RAM in that card at that price, then they could have been taken seriously.

    Nah, they would not put any more memory on the card, as it would directly compete against its more expensive brother the Tesla V100, which only has 16 GB of RAM on board..

    Post edited by Ghosty12 on
  • KindredArts Posts: 1,217

    Ok, they "say" it's for scientists, engineers, machine learning and all that stuff ... why gold? It's like putting rims on a microscope, what scientist is going to care? My guess is they geared it towards scientists but they know thats only a small market - but idiots with lots of money, well, that's a much bigger market. People will buy it because its gold, expensive and just must be "the best". Don't get me wrong, it looks like a great card, but the price tag is waaay out there for what we're doing. I've still got the maxwell gen Titan x's, and they do a great job. You can probably get them pretty cheap on the second hand market too.smiley

  • Gator Posts: 1,268

    Well, the 12GB HBM2 makes the card & its price sort of a crass joke really. It's nothing more than a minor speed improvement designed to bilk the insecure out of their money. Had they put 32GB or, dare I say it, even 64GB of RAM in that card at that price, then they could have been taken seriously.

    I assume for the deep learning that the memory isn't needed... otherwise 12 GB would be a really, really dumb design decision at $3000.  And I don't think they are dumb.

    Interesting the Titan V is mo' betta than the Titan X.  At Tweaktown, they suspect they may continue with the Titan Xx line.  If not, looks like we'll have to go with the 2080 Ti.

  • JamesJAB Posts: 1,760

    Isn't it obvious?

    This card exists as a stepping stone... or as a way to sell defective Volta chips...

    What is a manufacturer supposed to do with a complete chip that fails testing with one bad memory module?  Disable the bad one and sell it as a lesser chip.  This is standard practice in the GPU and CPU business. Where do you think the six-core AMD CPUs come from?  They are fabbed as eight-core dies, then two cores get disabled because of a defect in those two cores. Same story with the Volta chips, but this time the VRAM is part of the GPU package, so I'm betting that all of the Titan V cards physically have that fourth HBM2 memory stack on the package.  Just one of the four is disabled, and it varies from chip to chip which one it is.
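    For what it's worth, the numbers line up neatly with the spec list. A rough Python sketch of that salvage idea (assuming 4 GB and a 1024-bit interface per HBM2 stack, which is how the V100 package is laid out; the function name is just for illustration):

    # A full GV100 package carries four HBM2 stacks; fusing one off gives the Titan V.
    STACKS_FULL = 4
    STACK_CAPACITY_GB = 4
    STACK_BUS_BITS = 1024

    def salvage_config(disabled_stacks):
        stacks = STACKS_FULL - disabled_stacks
        return stacks * STACK_CAPACITY_GB, stacks * STACK_BUS_BITS

    print(salvage_config(0))  # (16, 4096) -> Tesla V100: 16 GB, 4096-bit bus
    print(salvage_config(1))  # (12, 3072) -> Titan V:   12 GB, 3072-bit bus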

  • nonesuch00 Posts: 17,890

     

    Well, the 12GB HBM2 makes the card & its price sort of a crass joke really. It's nothing more than a minor speed improvement designed to bilk the insecure out of their money. Had they put 32GB or, dare I say it, even 64GB of RAM in that card at that price, then they could have been taken seriously.

    I assume for the deep learning that the memory isn't needed... otherwise 12 GB would be a really, really dumb design decision at $3000.  And I don't think they are dumb.

    Interesting the Titan V is mo' betta than the Titan X.  At Tweaktown, they suspect they may continue with the Titan Xx line.  If not, looks like we'll have to go with the 2080 Ti.

    No, they are not dumb, I didn't say that, but they are taking advantage of the psychology of those who feel they must always have the best, and that is pure profit for a business selling 'feeling' instead of function. One may also salt their sunny-side-up eggs with gold in some very expensive restaurants too. At least they are not selling bad feelings. My house is decorated with pretty things that aren't functional. Should I buy a Keurig coffee maker instead of the cheapy when I would only drink maybe 3 dozen coffees a year? Those that can afford it and want it should feel free to buy one, as it helps to fund the future cards with 32GB and 64GB built in. Although, like you, I think from what I have read on computer technology trade sites that mass storage will eventually become so fast that all these I/O processes will be able to stream directly off of mass storage, and even use mass storage as a RAM disk, with no significant time delays; but that is still quite a ways off, I think.

  • ebergerly Posts: 3,255

    Maybe part of it is NVIDIA's way of getting people in forums to keep talking about them and get all excited about NVIDIA stuff... :)

  • The price is silly. I'll just keep my two 1080 Tis.

  • kyoto kid Posts: 40,515

    Well, the 12GB HBM2 makes the card & its price sort of a crass joke really. It's nothing more than a minor speed improvement designed to bilk the insecure out of their money. Had they put 32GB or, dare I say it, even 64GB of RAM in that card at that price, then they could have been taken seriously.

    ...even 16 GB to go up against the Vega Frontier, though at 3x the cost of the AMD counterpart.  Aside from universities and independent scientific research labs, I'm not sure how many takers there would be at that price. Nvidia might have been better off just making this an "entry level" Tesla compute card with no video outputs, as that is how it is being marketed.

    As I mentioned on the Nvidia Pr0n thread, Nvidia isn't about to threaten sales of their professional Quadro line, which is most likely why they kept the Titan V at 12 GB (the $2,500 Quadro P5000 and $7,000 GP100 each have 16 GB, the former GDDR5X, the latter HBM2).

    It remains to be seen which direction Nvidia takes the Quadro line in. They could go 16 GB for the "x5000" replacement and 32 GB for the "x6000", most likely assigning them a new designation as well (like GV100 and GV200?).  Both cards would be NVLink compatible (like the GP100), which would support memory pooling (there's your 64 GB of VRAM, just hope you have very deep pockets or match all numbers in the big lotto, because most likely they will cost more than their P5000/P6000 counterparts today, given the price increase between the Titan Xp and Titan V).

  • kyoto kid Posts: 40,515
    ghosty12 said:

    nonesuch00 said:

    Well, the 12GB HBM2 makes the card & its price sort of a crass joke really. It's nothing more than a minor speed improvement designed to bilk the insecure out of their money. Had they put 32GB or, dare I say it, even 64GB of RAM in that card at that price, then they could have been taken seriously.

    Nah, they would not put any more memory on the card, as it would directly compete against its more expensive brother the Tesla V100, which only has 16 GB of RAM on board..

    ...the difference with the V100 is that it is the first compute GPU to offer an NVLink motherboard interface option which supports linking more than 2 cards to pool VRAM.  This is technology geared exclusively to large datacentre servers and supercomputers using nodes built around dual IBM Power9 CPUs, which can support 4, 6, or even 8 V100s (up to 128 GB of VRAM per node).  Don't expect to see any of this hardware on sale at Newegg or Tiger Direct any time soon, as the price tag per node is in the six-digit range.
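    The per-node VRAM figure is just the GPU count times the 16 GB on each V100. A quick sketch (assuming the 16 GB variant and full pooling over NVLink):

    V100_VRAM_GB = 16
    for gpus_per_node in (4, 6, 8):
        print(f"{gpus_per_node} x V100: {gpus_per_node * V100_VRAM_GB} GB pooled VRAM")
    # 4 x V100: 64 GB, 6 x V100: 96 GB, 8 x V100: 128 GB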

  • JamesJAB Posts: 1,760

    My guess is that we will see the same pattern that happened with the Quadro M6000 (initial release was 12GB, then a 24GB version was released later).
    They will probably release a 32GB version of the Tesla V100 as soon as 8GB HBM2 stacks enter production.  Right now HBM capacity is restricted because the memory shares space on the GPU package.
    This image highlights what I'm talking about.

    Assuming that both GPUs in the above image are 16GB:
    The one on the left makes use of 16 x 1GB GDDR5 chips, whereas the one on the right only has space for 4 x 4GB HBM2 stacks because they share the package with the main GPU.

     

  • kyoto kid Posts: 40,515

    ...HBM2 can go up to "8-Hi" (eight 1 GB dies per stack on the interposer) which would allow for a total of 32 GB.  HBM 3 (in development) will double that by increasing the density of the individual dies to 2 GB.
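    Putting rough numbers on that (a sketch, assuming four stacks on the interposer as on GV100; the helper function is just for illustration):

    # Package capacity = stacks x dies per stack x GB per die
    def package_capacity_gb(stacks, dies_per_stack, gb_per_die):
        return stacks * dies_per_stack * gb_per_die

    print(package_capacity_gb(4, 8, 1))  # HBM2, 8-Hi, 1 GB dies       -> 32 GB
    print(package_capacity_gb(4, 8, 2))  # HBM3 (projected), 2 GB dies -> 64 GB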

    The issue here is that with NVLink one could achieve 32 GB using two 16 GB cards, though the cost would be more than an individual 32 GB card as, like I mentioned, the NVLink "widgets" retail for $600 apiece and a pair are required for memory pooling.

    Volta consumer cards will more likely get GDDR6 memory instead of HBM2 because of cost, and will most likely continue to be connected together through SLI links (about $30 ea), so there will be no ability for memory pooling.

    For a hardware geek like myself, yeah, this is all incredibly fascinating.  In reality it will only be what I refer to as, "a lottery dream".

    Maybe when HBM3 is introduced in the Tesla/Quadro lines, the cost for HBM2 might come down enough to appear in top-end consumer cards like an xx80 and xx80 Ti.  The higher memory bandwidth certainly would be a boost for games and VR.

  • Ghosty12 Posts: 1,955
    edited December 2017
    kyoto kid said:

    ...HBM2 can go up to "8-Hi" (eight 1 GB dies per stack on the interposer) which would allow for a total of 32 GB.  HBM 3 (in development) will double that by increasing the density of the individual dies to 2 GB.

    The issue here is that with NVLink one could achieve 32 GB using two 16 GB cards, though the cost would be more than an individual 32 GB card as, like I mentioned, the NVLink "widgets" retail for $600 apiece and a pair are required for memory pooling.

    Volta consumer cards will more likely get GDDR6 memory instead of HBM2 because of cost, and will most likely continue to be connected together through SLI links (about $30 ea), so there will be no ability for memory pooling.

    For a hardware geek like myself, yeah, this is all incredibly fascinating.  In reality it will only be what I refer to as, "a lottery dream".

    Maybe when HBM3 is introduced in the Tesla/Quadro lines, the cost for HBM2 might come down enough to appear in top-end consumer cards like an xx80 and xx80 Ti.  The higher memory bandwidth certainly would be a boost for games and VR.

    Yeah I think that one of the main costs of the card is the use of HBM2 memory.. But what will be interesting with the release of this card is what comes out next year for the average consumer/gamer..

    Post edited by Ghosty12 on
  • RFB532 Posts: 94

    I wonder how hot it runs since it's air cooled.

     

  • nicstt Posts: 11,714

    Ok, they "say" it's for scientists, engineers, machine learning and all that stuff ... why gold? It's like putting rims on a microscope, what scientist is going to care? My guess is they geared it towards scientists but they know thats only a small market - but idiots with lots of money, well, that's a much bigger market. People will buy it because its gold, expensive and just must be "the best". Don't get me wrong, it looks like a great card, but the price tag is waaay out there for what we're doing. I've still got the maxwell gen Titan x's, and they do a great job. You can probably get them pretty cheap on the second hand market too.smiley

    I had a good laugh at this.

  • ebergerly Posts: 3,255
    edited December 2017

    I wonder how hot it runs since it's air cooled.

     

    Yeah, if you notice, it's the same "Founder's Edition single small fan inside a box" cooling. But whatever the temps, I'm sure it's fine for continuous use, or else the engineers who designed it should be fired.

    Post edited by ebergerly on
  • JamesJAB Posts: 1,760

    I wonder how hot it runs since it's air cooled.

     

    The card is rated as a 250 W card, just like the GTX 1080 Ti, GTX 980 Ti, GTX 780 Ti, and all of the previous Titan cards.
    My GTX 1080 Ti idles at 30°C and never goes above 75°C under full load.
     

     

    ghosty12 said:
    kyoto kid said:

    ...HBM2 can go up to "8-Hi" (eight 1 GB dies per stack on the interposer) which would allow for a total of 32 GB.  HBM 3 (in development) will double that by increasing the density of the individual dies to 2 GB.

    The issue here is that with NVLink one could achieve 32 GB using two 16 GB cards, though the cost would be more than an individual 32 GB card as, like I mentioned, the NVLink "widgets" retail for $600 apiece and a pair are required for memory pooling.

    Volta consumer cards will more likely get GDDR6 memory instead of HBM2 because of cost, and will most likely continue to be connected together through SLI links (about $30 ea), so there will be no ability for memory pooling.

    For a hardware geek like myself, yeah, this is all incredibly fascinating.  In reality it will only be what I refer to as, "a lottery dream".

    Maybe when HBM3 is introduced in the Tesla/Quadro lines, the cost for HBM2 might come down enough to appear in top-end consumer cards like an xx80 and xx80 Ti.  The higher memory bandwidth certainly would be a boost for games and VR.

    Yeah I think that one of the main costs of the card is the use of HBM2 memory.. But what will be interesting with the release of this card is what comes out next year for the average consumer/gamer..

    Honestly there is no way that the HBM2 memory is the primary reason for the $3000 price tag. 
    The new AMD Vega cards have HBM2
    Radeon RX Vega 56 = 8GB (2x 4GB)  MSRP of $400
    Radeon RX Vega 64 = 8GB (2x 4GB)  MSRP of $500
    Radeon Vega Frontier = 16GB (2x 8GB)  MSRP of $1000

  • kyoto kid Posts: 40,515
    edited December 2017

    ...it's the new Volta processor with 640 tensor cores, and that is a major reason why the price is so high. The Titan V uses the Volta GV100 core (same as the $8,000 Tesla V100).  It also has dedicated FP64 cores along with the Tensor AI cores and, like I mentioned earlier, is NVLink compatible.

    While some reviews speculate about its graphics potential, it really is more of an "entry level" compute/deep learning GPU rather than a graphics GPU.

    Post edited by kyoto kid on
  • Visuimag Posts: 547
    edited December 2017

    I'll have one in a few months. Moving on from two Titan Xp's.

    And for those expecting NVLink functionality, it doesn't support it, nor SLI. Also, Single Precision is 14.89 TF (~15), not 13.8.
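    For anyone wondering where the ~14.9 TF number comes from, it is the usual peak-throughput formula applied to the boost clock from the opening post (a sketch only; the 13.8 figure simply corresponds to a lower effective clock):

    cuda_cores = 5120
    boost_clock_ghz = 1.455   # Titan V boost clock (~1.45 GHz in the spec list)

    # Peak FP32 = cores x 2 FLOPs per core per clock (FMA) x clock
    fp32_tflops = cuda_cores * 2 * boost_clock_ghz / 1000
    print(f"Peak FP32: {fp32_tflops:.2f} TFLOPS")   # ~14.90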

    Post edited by Visuimag on
  • nonesuch00 Posts: 17,890
    kyoto kid said:

    ...it's the new Volta processor with 640 tensor cores, and that is a major reason why the price is so high. The Titan V uses the Volta GV100 core (same as the $8,000 Tesla V100).  It also has dedicated FP64 cores along with the Tensor AI cores and, like I mentioned earlier, is NVLink compatible.

    While some reviews speculate about its graphics potential, it really is more of an "entry level" compute/deep learning GPU rather than a graphics GPU.

    Oh, so that explains it - that card is aimed at the people artificially inflating the cryptocurrency markets. Well, I can't say that I'm too worried about financial losses for those involved in that pyramid scheme.

    I can't wait till scenes stream fast enough right off of mass storage, so the only thing driving the price of these video cards will be technical performance and computer speed, and not scene size.

  • kyoto kid Posts: 40,515

    ...just read that on several sites as well.

    So much for ever having "Big VRAM" for rendering purposes (felt from the beginning it was too good to be true).  Better off to just stick to my original plan of building a big memory dual Xeon CPU based render system.

    ----------

    One of the posts I saw on a tech forum about this hits the nail on the head:

    so its like a 1060 of ai cards ok big deal

    Wish I could have upvoted that, but I'm not a participant on that site's forums.

    The part that gets me, though, is that the ability to pool memory and cores even between two cards is still extremely useful for deep learning purposes, so they are kind of shooting the Titan V in the foot by essentially making it a standalone card.

  • Ghosty12 Posts: 1,955
    kyoto kid said:

    ...just read that on several sites as well.

    So much for ever having "Big VRAM" for rendering purposes (felt from the beginning it was too good to be true).  Better off to just stick to my original plan of building a big memory dual Xeon CPU based render system.

    ----------

    One of the posts I saw on a tech forum about this hits the nail on the head:

    so its like a 1060 of ai cards ok big deal

    Wish I could have upvoted that, but I'm not a participant on that site's forums.

    The part that gets me, though, is that the ability to pool memory and cores even between two cards is still extremely useful for deep learning purposes, so they are kind of shooting the Titan V in the foot by essentially making it a standalone card.

    Yeah, this is the problem: with there being no NVLink ability, it is potentially hurt. One thing, though, is that Linus Tech Tips have said they plan on purchasing one of these cards to do testing on, since Nvidia are not sending out review samples because the card is not aimed at the average user market..  So it will be interesting to see what the card does when they test it..

  • kyoto kid Posts: 40,515
    edited December 2017

    ...yeah, I will be looking forward to that, particularly if they bench it against the 1080 Ti and Titan Xp. Hopefully they also do render performance as well as game performance comparisons.  Too many times these reviews are just focused on games, which doesn't tell us 3D artists very much.

    Post edited by kyoto kid on
  • ebergerly Posts: 3,255
    edited December 2017
    Gamers Nexus did some testing and posted a video yesterday. As expected, it's designed for non-graphics calculations, so its graphics performance in games isn't close to worth the price. Presumably somewhat similar for rendering. Four times the price of a 1080 Ti would require renders four times as fast. 16 min on a 1080 Ti would have to be 4 min on a Titan V? Not even close. Not sure why the interest. Maybe next year something reasonable will show up.
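    The break-even arithmetic in that post, spelled out (prices and times are the ones quoted above, nothing measured):

    price_ratio = 4               # Titan V at roughly 4x the price of a 1080 Ti
    baseline_render_min = 16      # example 1080 Ti render time from the post

    required_render_min = baseline_render_min / price_ratio
    print(required_render_min)    # 4.0 -> a Titan V would need ~4-minute renders just to break even per dollar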
    Post edited by ebergerly on
  • Ghosty12 Posts: 1,955
    ebergerly said:
    Gamers Nexus did some testing and posted a video yesterday. As expected, it's designed for non-graphics calculations, so its graphics performance in games isn't close to worth the price. Presumably somewhat similar for rendering. Four times the price of a 1080 Ti would require renders four times as fast. 16 min on a 1080 Ti would have to be 4 min on a Titan V? Not even close. Not sure why the interest. Maybe next year something reasonable will show up.

    Well, looking at that video, they have yet to upload a productivity video of the card and what Volta can do.. The main thing to take away from all of this is what Nvidia will bring with Volta and what they plan on doing next year.. Once Gamers Nexus and Linus Tech Tips release their productivity results, then we will see how good or bad Volta is..

  • drzap Posts: 795
    edited December 2017

    It has 5120 CUDA cores compared to 3840 in its predecessor.   I have never seen such a massive generational jump before.  It should thrash everything else in GPU rendering.  Nevertheless, at this price point, the lack of memory pooling makes it a no-buy for most people.  Two Titan Xps have more CUDA cores, the same memory, and roughly two-thirds the cost.   On the other hand, those who are looking for the most performance per watt or per PCIe slot (farms) would probably bite on these or the Volta Quadros that are coming down the line.

    Post edited by drzap on