GeForce 1080 Ti and Titan Xp - What the **** were they thinking?

GatorGator Posts: 1,320

The 1080 Ti and the Titan Xp... Titan Xp has 1 GB more RAM, roughly 10% more memory bandwidth and cores.

The difference between a 980 Ti and the original Titan X was huge, from 6GB to 12GB.

Honestly, I'm scratching my head here with the 1000 series... the difference between their high-end gaming 1080 Ti and their prosumer Titan Xp is pretty thin.  Kinda seems like for the price difference they want, they should have bumped up the RAM on the Titan Xp, or put a bit less on the 1080 Ti (in every game I've monitored, 11 or 12 GB is total overkill for gaming, even at 4K).


Comments

  • CybersoxCybersox Posts: 9,576

    The 1080 Ti and the Titan Xp... Titan Xp has 1 GB more RAM, roughly 10% more memory bandwidth and cores.

    The difference between a 980 Ti and the original Titan X was huge, from 6GB to 12GB.

    Honestly, I'm scratching my head here with the 1000 series... the difference between their high-end gaming 1080 Ti and their prosumer Titan Xp is pretty thin.  Kinda seems like for the price difference they want, they should have bumped up the RAM on the Titan Xp, or put a bit less on the 1080 Ti (in every game I've monitored, 11 or 12 GB is total overkill for gaming, even at 4K).

    At the bleeding high end of any technology, the dollar price for each notch of performance improvement gets progressively steeper.  Does a Porsche 911 go five times as fast as a VW Jetta or Passat?  Is a Bugatti Veyron 15 times faster than the Porsche?  Nope. Although the Porsche is arguably a much superior car to the Passat, there's only a fractional improvement in performance when stepping up to the Veyron... and all four cars came out of Volkswagen factories. 

    Yeah, the bump up on the 980s was unexpected, but I think that's mainly because the amount of power required to handle modern games just keeps going up and the manufacturers felt that the 980 line was in danger of falling out of favor with the market niche it was aimed at. 

  • kyoto kidkyoto kid Posts: 42,135

    ...the Titan Xp uses the same processor chip as the Quadro P6000 (GP102).  To bump up the memory would mean doubling it to 24 GB, which would then threaten sales of their flagship Quadro P6000 card, as the Titan Xp also has all 30 SMs unlocked, giving it the same core count as its pro-grade sibling. 

    I was actually expecting the 1080 Ti to have 10 GB and was totally surprised they settled on the odd number of 11, particularly when they priced it $500 less than the original Pascal Titan X.

  • TaozTaoz Posts: 10,299
    kyoto kid said:

    I was actually expecting the 1080 Ti to have 10 GB and was totally surprised they settled on the odd number of 11, particularly when they priced it $500 less than the original Pascal Titan X.

    They said there is a technical reason for the 11 GB, but I don't recall what it is. Maybe Google can answer it.

  • nicsttnicstt Posts: 11,715
    Taoz said:
    kyoto kid said:

    I was actually expecting the 1080 Ti to have 10 GB and was totally surprised they settled on the odd number of 11, particularly when they priced it $500 less than the original Pascal Titan X.

    They said there is a technical reason for the 11 GB, but I don't recall what it is. Maybe Google can answer it.

    Marketing folks said there was a technical reason...

    Right!

  • kyoto kidkyoto kid Posts: 42,135
    edited June 2017
    ...yeah, the next full step up for the GP104 chip (used in both the GTX 1080 and Quadro P5000) is 16 GB, which would eclipse the Titan Xp. Just unlocking the last two SMs still doesn't justify the extra $500 on the price tag over the 1080 Ti. Crikey, for $200 more than a single Titan Xp I could have two 1080 Tis, giving me better viewport and render performance.
    Post edited by kyoto kid on
  • GatorGator Posts: 1,320

    The 1080 Ti and the Titan Xp... Titan Xp has 1 GB more RAM, roughly 10% more memory bandwidth and cores.

    The difference between a 980 Ti and the original Titan X was huge, from 6GB to 12GB.

    Honestly, I'm scratching my head here with the 1000 series... the difference between their high-end gaming 1080 Ti and their prosumer Titan Xp is pretty thin.  Kinda seems like for the price difference they want, they should have bumped up the RAM on the Titan Xp, or put a bit less on the 1080 Ti (in every game I've monitored, 11 or 12 GB is total overkill for gaming, even at 4K).

    At the bleeding high end of any technology, the dollar price for each notch of performance improvement gets progressively steeper.  Does a Porsche 911 go five times as fast as a VW Jetta or Passat?  Is a Bugatti Veyron 15 times faster than the Porsche?  Nope. Although the Porsche is arguably a much superior car to the Passat, there's only a fractional improvement in performance when stepping up to the Veyron... and all four cars came out of Volkswagen factories. 

    Yeah, the bump up on the 980s was unexpected, but I think that's mainly because the amount of power required to handle modern games just keeps going up and the manufacturers felt that the 980 line was in danger of falling out of favor with the market niche it was aimed at. 

    I understand paying more dollars for higher performance...  I have Titan X cards.

    Playing a bunch of 4K games at Ultra settings, they don't come anywhere close to using the 12 GB (or 11 GB).

  • GatorGator Posts: 1,320
    edited June 2017
    kyoto kid said:
    ...yeah, the next full step up for the GP104 chip (used in both the GTX 1080 and Quadro P5000) is 16 GB, which would eclipse the Titan Xp. Just unlocking the last two SMs still doesn't justify the extra $500 on the price tag over the 1080 Ti. Crikey, for $200 more than a single Titan Xp I could have two 1080 Tis, giving me better viewport and render performance.

    THIS is where I think they boned themselves on the sales of Titan Xp cards.  With the 1080 Ti you're only losing 1 GB of RAM, making it suitable as a prosumer card.

    Face it, the number of gamers or hobbyists that run two Titan cards in SLI (or two cards without SLI enabled) is really, really small.  You're talking about spending $2,400.

    For just a few hundred more, you can get two 1080 Ti cards far surpassing a single Titan Xp.

     

    Edited to add: To confirm, all cards are used for viewport performance, right?

    Post edited by Gator on
  • Peter WadePeter Wade Posts: 1,682
    nicstt said:
    Taoz said:
    kyoto kid said:

    I was actually expecting the 1080 Ti to have 10 GB and was totally surprised they settled on the odd number of 11, particularly when they priced it $500 less than the original Pascal Titan X.

    They said there is a technical reason for the 11 GB, but I don't recall what it is. Maybe Google can answer it.

    Marketing folks said there was a technical reason...

    Right!

    I think it was: there were these two rows of holes on the circuit board, so we thought we should solder a chip in there... hmm, one of these "RAM" things looks like it will fit....

  • TaozTaoz Posts: 10,299
    nicstt said:
    Taoz said:
    kyoto kid said:

    I was actually expecting the 1080 Ti to have 10 GB and was totally surprised they settled on the odd number of 11, particularly when they priced it $500 less than the original Pascal Titan X.

    They said there is a technical reason for the 11 GB, but I don't recall what it is. Maybe Google can answer it.

    Marketing folks said there was a technical reason...

    Right!

    Well, they actually explained what the reason was, but I don't recall it (some tech stuff beyond my understanding). I'm sceptical too, though; it might just be an excuse to cover up the real reason. I just found this when trying to find the mentioned explanation:

    "Such high bandwidth memory does wonders for games running at 4K and above, where high-resolution textures need to be pushed into memory, but there is a small compromise for GTX 1080 Ti buyers: you only get 11GB of memory instead of 12GB. Nvidia has simply removed one of the 12 VRAM chips surrounding the GPU, resulting in the odd 352-bit memory interface and 88 ROPs. That's an understandable point of differentiation from a business standpoint: if Nvidia opted to strap 12GB of RAM to the 1080 Ti it would have a massive 528GB/s of bandwidth, completely blowing the Titan XP out of the water."

    https://arstechnica.com/gadgets/2017/03/nvidia-gtx-1080-ti-review/
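    The arithmetic in that quote is easy to check: each GDDR5X chip contributes 32 bits of bus width, so dropping one of the 12 chips takes the interface from 384-bit to 352-bit. A quick sketch (the 11 Gbps figure is the 1080 Ti's published effective memory data rate):

```python
# Peak memory bandwidth from chip count: bus width (bits) / 8 * data rate.
def bandwidth_gb_s(chips, data_rate_gbps=11, bits_per_chip=32):
    """Peak bandwidth in GB/s for `chips` GDDR5X chips at `data_rate_gbps`."""
    bus_width_bits = chips * bits_per_chip
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(11))  # 1080 Ti: 352-bit bus -> 484.0 GB/s
print(bandwidth_gb_s(12))  # full GP102: 384-bit bus -> 528.0 GB/s
```

    The 484 GB/s result matches the 1080 Ti's published spec, and the 528 GB/s figure is the one the article says would have blown past the Titan Xp.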

  • kyoto kidkyoto kid Posts: 42,135

    ...true.  If, like I mentioned, they ever uprated the Titan Xp to 24 GB, it would pose serious issues for their top-of-the-line pro card, as both have the same number of unlocked SMs and CUDA threads as well as the same base FP32 floating-point performance.  The P6000 would still have one little edge in that it can take advantage of NVLink, offering a wider pipeline between multiple cards, but that would be it.  Look at it this way: how many independent pro CG artists would continue to be willing to pay $5,000 to get the same performance from a prosumer card priced at, say, even $1,800-2,000?

  • outrider42outrider42 Posts: 3,679
    Pretty sure they were thinking of AMD Vega when they finalized the 1080 Ti spec. Remember, AMD had just released Ryzen, and Ryzen surprised everybody by being actually good and not just empty hype. So if Ryzen is for real... then Vega might be for real, too. And then Nvidia would be stuck between generations for a while without a card to best Vega. They want the performance crown. Intel is scrambling, and all of a sudden we are seeing crazy multicore counts on future Intel chips. Amazing what competition can do. So keep rooting for AMD, as it makes Nvidia better as well. We may finally see Titan break past the 12 GB barrier this time.
  • kyoto kidkyoto kid Posts: 42,135
    edited June 2017

    ...again, that would most likely mean going to 24 GB as I mentioned above, which I am not certain Nvidia would do, as it would compete directly with their Quadro P6000 in every category but at a lower price.  Even if they could add just 4 GB to bring it to 16, it would then totally blow away their Quadro P5000 in all respects, as it would offer a much higher thread count as well as performance at 1/2, or even around 2/3, the cost if they increased the price to, say, $1,500.

    As to AMD, I'm looking forward to the single-socket 32-core Epyc 7501.  One could build a nice little render farm in a single box with that, 128 GB of memory, and several 1080 Tis, and have both GPU and CPU rendering covered.

    Now that is a monster workstation.

    Post edited by kyoto kid on
  • drzapdrzap Posts: 795
    kyoto kid said:

    ...true.  If, like I mentioned, they ever uprated the Titan Xp to 24 GB, it would pose serious issues for their top-of-the-line pro card, as both have the same number of unlocked SMs and CUDA threads as well as the same base FP32 floating-point performance.  The P6000 would still have one little edge in that it can take advantage of NVLink, offering a wider pipeline between multiple cards, but that would be it.  Look at it this way: how many independent pro CG artists would continue to be willing to pay $5,000 to get the same performance from a prosumer card priced at, say, even $1,800-2,000?

    If I am not mistaken, only the top-of-the-line GP100 has the faster memory required for NVLink.  It only has 16 GB of memory, but it is more expensive because multiple GPUs can combine and share memory, unlike the other cards.

  • LilithVXLilithVX Posts: 36

    I have two Titan X's. And I guess getting two Titan Xp's to replace those is pointless? Looking at the specs, the differences seem quite small. I'm still waiting for cards that will give me closer to, say, a 50% boost in Iray. Or more.

  • kyoto kidkyoto kid Posts: 42,135
    edited June 2017
    drzap said:
    kyoto kid said:

    ...true.  If, like I mentioned, they ever uprated the Titan Xp to 24 GB, it would pose serious issues for their top-of-the-line pro card, as both have the same number of unlocked SMs and CUDA threads as well as the same base FP32 floating-point performance.  The P6000 would still have one little edge in that it can take advantage of NVLink, offering a wider pipeline between multiple cards, but that would be it.  Look at it this way: how many independent pro CG artists would continue to be willing to pay $5,000 to get the same performance from a prosumer card priced at, say, even $1,800-2,000?

    If I am not mistaken, only the top-of-the-line GP100 has the faster memory required for NVLink.  It only has 16 GB of memory, but it is more expensive because multiple GPUs can combine and share memory, unlike the other cards.

    ...really? So two 16 GB cards would stack?  Didn't think that was possible.

    Post edited by kyoto kid on
  • eric suscheric susch Posts: 135

    I had a Titan X Maxwell before upgrading to 2 Titan X Pascal cards.  The Pascal cards are significantly faster.

  • GatorGator Posts: 1,320
    LilithVX said:

    I have two Titan X's. And I guess getting two Titan Xp's to replace those is pointless? Looking at the specs, the differences seem quite small. I'm still waiting for cards that will give me closer to, say, a 50% boost in Iray. Or more.

    Maxwell or Pascal?  The difference between the Pascals is not that great.  Like 8-10%.

    The 1080 Ti is pretty close to the Titan X Pascal too.

  • GatorGator Posts: 1,320

    I had a Titan X Maxwell before upgrading to 2 Titan X Pascal cards.  The Pascal cards are significantly faster.

    I have one box with two Titan X Maxwells, and one with two Titan X Pascals.  Yeah, from what I've checked with my scenes the Pascal Titan is about 60% faster.

  • tj_1ca9500btj_1ca9500b Posts: 2,057

    So, a 4-5% performance bump for almost double the cost?  Oh, and 12 GB of VRAM instead of 11 GB.  Hmmmm, kinda sounds like Intel...

    http://www.tomshardware.com/reviews/nvidia-titan-xp,5066-18.html

    Seems to me you'd get a LOT more performance out of 2 SLI'd 1080 Tis.  Assuming you have the two PCIe x16 slots for them...

  • ebergerlyebergerly Posts: 3,255
    edited June 2017

    Since you guys are talking all techy and stuff, maybe someone can give me some insight....

    I have a single GTX 1070. And since you can never have too much computer stuff, I'm thinking of adding a second GTX 1070. But I'd have to get a new motherboard cuz my existing one only fits one full PCIe x16. 

    So is it reasonable to assume that if I have two, my Iray rendering time will be cut in half, or doesn't it work that way?

    Thanks. 

    Post edited by ebergerly on
  • GatorGator Posts: 1,320
    edited June 2017

    So, a 4-5% performance bump for almost double the cost?  Oh, and 12 GB of VRAM instead of 11 GB.  Hmmmm, kinda sounds like Intel...

    http://www.tomshardware.com/reviews/nvidia-titan-xp,5066-18.html

    Seems to me you'd get a LOT more performance out of 2 SLI'd 1080 Tis.  Assuming you have the two PCIe x16 slots for them...

    That's what I'm sayin'... although it's more like 8%  laugh

    I'm thinking it's probably because you're basically getting the same card, just with a RAM chip removed.  To bump the Titan Xp to compete with Vega's 16 GB, they'd have to re-architect a new card.

    [marketing] "Screw it, just sell it as is and someone will buy it."  [/marketing]

    Post edited by Gator on
  • TheKDTheKD Posts: 2,711

     

    ebergerly said:

    Since you guys are talking all techy and stuff, maybe someone can give me some insight....

    I have a single GTX 1070. And since you can never have too much computer stuff, I'm thinking of adding a second GTX 1070. But I'd have to get a new motherboard cuz my existing one only fits one full PCIe x16. 

    So is it reasonable to assume that if I have two, my Iray rendering time will be cut in half, or doesn't it work that way?

    Thanks. 

    Cut in half? No, don't count on it. The gains these days, think more like a 1-10% cut in render time. I wish I weren't serious. The gains I got from going from a 960 to a 1070 were not much at all (other than the VRAM, which I noticed a little). The gains I got going from an i5 2500 with 12 GB DDR3 RAM to an i5 7500 with 32 GB DDR4 were not much either; I've only noticed a difference so far in 7-Zip compressing on max settings and in benchmarks, not so much in everyday use.   

  • LilithVXLilithVX Posts: 36
    edited June 2017

    I had a Titan X Maxwell before upgrading to 2 Titan X Pascal cards.  The Pascal cards are significantly faster.

    I have one box with two Titan X Maxwells, and one with two Titan X Pascals.  Yeah, from what I've checked with my scenes the Pascal Titan is about 60% faster.

    Good! I have the older ones, so upgrading to the newer Titan Xp would give me a big boost. Unfortunately I prefer the water-cooled hybrid model, like the older Titan X EVGA made. So I guess I have to wait until EVGA or someone else has gotten their hands on them. The temp on the non-water-cooled Titan cards just gets too high for my liking.

    Edit: I should've read up on this beforehand, but it looks like I have to mod the card myself since Nvidia is the only one that can sell the Titan Xp. Which means having to build a custom water-cooling system in my computer, instead of like now, when I just use a CPU water cooler and the two on my two Titan X's. Oh well.

    Post edited by LilithVX on
  • ebergerlyebergerly Posts: 3,255
    TheKD said:

     

    ebergerly said:

     

    So is it reasonable to assume that if I have two, my Iray rendering time will be cut in half, or doesn't it work that way?

     

    Cut in half? No, don't count on it. The gains these days, think more like a 1-10% cut in render time.

    Huh? Are you serious? If I add a second GTX 1070 I'll get maybe a 10% speedup in render time? Are you sure about that? I saw a YouTube video of a guy with four of them showing how fast the render time was with some of the Stonemason scenes. And I was amazed. Almost realtime response, to the point where it looked like the render was maybe 50% complete. 

  • ben98120000ben98120000 Posts: 469
    ebergerly said:

    Since you guys are talking all techy and stuff, maybe someone can give me some insight....

    I have a single GTX 1070. And since you can never have too much computer stuff, I'm thinking of adding a second GTX 1070. But I'd have to get a new motherboard cuz my existing one only fits one full PCIe x16. 

    So is it reasonable to assume that if I have two, my Iray rendering time will be cut in half, or doesn't it work that way?

    Thanks. 

    Yes, for the second card, almost in half. And if you have a MB slot to plug the second card into, even if it's not full x16, just go ahead. It won't load textures into the card as fast as x16, but the difference is in seconds, not hours, so it will still cut render times almost in half. You can change the MB later.

    Besides, it's nice to see that double set of iteration numbers when rendering. Like, you will see iteration 1000 (from the first card) and right after, iteration 1100 (from the second card). laugh

  • ebergerlyebergerly Posts: 3,255

    Cool, thanks.

    I have two other PCIe slots, but both are wired for x4. Is that what you're talking about, that it won't make much difference from x16?

  • ben98120000ben98120000 Posts: 469
    edited June 2017

    Yes. To be clear, they are all PCIe x16 slots (the physical size of them), but some of them have x16 lanes, some x8 lanes, some x4 lanes. The theoretical maximum bandwidth of a PCIe 2.x lane is 500 MB/s (PCIe 3.x is 985 MB/s), so for PCIe 2.x, x4 is 2 GB/s and x16 is 8 GB/s.

    So, for our usage, how big in GB is scene data? Not that big (nor is the maximum bandwidth needed all the time during rendering), so the difference would be, IMO, a matter of seconds and not significant. On the other hand, doubling the amount of CUDA cores would significantly reduce render time (regardless of the x4 "bottleneck"). 

    Post edited by ben98120000 on
  • ebergerlyebergerly Posts: 3,255

    AAhhhhhh....thanks. I think the lightbulb finally went off in my understanding of this. Cool.

    So it kinda comes down to loading vs. processing. Once the scene data is loaded using the PCIe bandwidth, the CUDA cores do their thing to process the render. So even if it took 10 minutes to load the scene data, once that's done the PCIe bandwidth is no longer the limiting factor; it's the number of CUDA cores processing it all. And if you double that, you should cut the render in half. Or at least somewhere near there. 
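    That loading-vs-processing reasoning can be written as a toy model (the load time and total compute work below are made-up illustrative numbers, not measurements):

```python
# Toy model: total render time = one-time scene load + compute work
# split across GPUs. All numbers are illustrative, not measured.
def render_seconds(load_s, compute_work_s, gpus):
    """Fixed load cost plus compute that parallelizes across GPUs."""
    return load_s + compute_work_s / gpus

one = render_seconds(load_s=10, compute_work_s=3600, gpus=1)
two = render_seconds(load_s=10, compute_work_s=3600, gpus=2)
print(one, two)  # 3610.0 1810.0 -- a second GPU nearly halves the total
```

    The fixed load term is one reason doubling the cards never quite halves the wall-clock time, and real Iray scaling has additional overheads on top of this.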

    Excellent. Now the problem is whether I can actually fit a second card into my machine. The first one was a real tight fit, and I'm not sure a second one will squeeze in there. Though I did buy a 750w power supply so I should be good to go if it fits.

  • swordkensiaswordkensia Posts: 348
    ebergerly said:

    AAhhhhhh....thanks. I think the lightbulb finally went off in my understanding of this. Cool.

    So it kinda comes down to loading vs. processing. Once the scene data is loaded using the PCIe bandwidth, the CUDA cores do their thing to process the render. So even if it took 10 minutes to load the scene data, once that's done the PCIe bandwidth is no longer the limiting factor; it's the number of CUDA cores processing it all. And if you double that, you should cut the render in half. Or at least somewhere near there. 

    Excellent. Now the problem is whether I can actually fit a second card into my machine. The first one was a real tight fit, and I'm not sure a second one will squeeze in there. Though I did buy a 750w power supply so I should be good to go if it fits.

     

    I run two 1070s.  In my render tests a single 1070 is only 10% slower than the Titan X Maxwell (which I have in my other machine along with a Titan Z!!!).  A second 1070 improves render speed by 60-70%, so not quite twice as fast; unfortunately, adding cards in Iray does not give linear scaling of render speeds.

    Also, BIG ADVICE: go with a 1000-watt power supply. I was running my two on a 750-watt and popped the supply; fortunately the damage stopped there. A 1000-watt supply is the MINIMUM I would suggest for any dual-card setup.
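    For what it's worth, a rough power-budget sketch shows why the nominal numbers can mislead: on paper two 1070s fit under 750 W, but transient spikes and PSU aging eat the margin. The CPU and "other" figures here are assumed values, not measurements:

```python
# Nominal power budget for a dual-GPU box. TDPs are published board
# power; transient spikes can exceed them, hence the headroom factor.
GTX_1070_TDP_W = 150  # published board power per card
CPU_TDP_W = 95        # assumed typical desktop CPU
OTHER_W = 75          # assumed drives, fans, RAM, motherboard

def psu_watts(gpu_count, headroom=0.5):
    """Suggested PSU size: nominal load plus a 50% safety margin."""
    nominal = gpu_count * GTX_1070_TDP_W + CPU_TDP_W + OTHER_W
    return nominal * (1 + headroom)

print(psu_watts(2))  # two 1070s: 470 W nominal -> 705.0 W suggested
```

    Even this conservative sketch lands uncomfortably close to 750 W, which fits the popped-supply experience above.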

    S.K.

  • ebergerlyebergerly Posts: 3,255

    Oh really? I saw a video of a guy who was measuring power usage with a 1070 on the same computer as mine, and running some games he said he'd never seen total usage for his computer go above about 250 watts. So I assumed even two 1070s would still be under my 750 W.

    Just goes to show you can't always believe what you see on youtube smiley

    Thanks much !!
