OT: 1080ti vs Titan XP vs Volta
Now that I have a GTX 1080 Ti, I'm starting to get interested in the next generation of cards as well as the other cards in the same class. I never knew much about the Titan XP until I recently saw a Gamers Nexus video where they benchmarked the Titan XP against the 1080 Ti for game performance. Surprisingly, the results showed the Titan XP performing either identically to the 1080 Ti or only slightly ahead of it. He said the price per performance was something astronomical, like $100 per 1% improvement. Yikes. I recall the 1080 Ti vs. the 1070 is more like $15 or $20 per 1% improvement, which is much more reasonable, IMO.
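For rough context (using the launch MSRPs as I remember them, about $1,200 for the Titan Xp and $699 for the 1080 Ti, and treating the Titan as roughly 5% faster): ($1,200 - $699) / 5 is about $100 per 1% of extra performance. The exact figure obviously shifts with street prices and the games tested.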
So I'm curious why some have decided on a Titan XP. Did it come out prior to the 1080 Ti?
And apparently the next to arrive is the Volta series? It sounds like it was supposed to arrive this year, but now it will be sometime next year? I assume that with 10 or so models being sold right now, there's no reason for NVIDIA to bring anything new to market quite yet.

Comments
The Titan Xp came out after the 1080 Ti. More CUDA cores (3,840 vs. 3,584) and 12GB of VRAM vs. 11GB. You have to overclock the Ti to get to the same level of performance as the Xp. I still have a 780 Ti, so it's something I can dream about.
Sounds like they are comparing overclocked 1080 Tis vs the Titan XP.
Keep in mind many 1080 Tis ship overclocked out of the box, whereas the Titan Xp is sold by Nvidia only. At Nvidia stock speeds, a Titan Xp should be faster.
Also, are you sure it was a Titan Xp, vs. a Pascal-architecture Titan X? My EVGA Hybrid 1080 Tis render a little faster than my Pascal Titan X's, by a small margin of a few percent. But I've only recently watercooled my Titans and haven't tried overclocking them yet.
Scott,
Here's the link if you're interested:
https://www.youtube.com/watch?v=Pw1Q0JLyG6g
He tested stock and overclocked cards across a whole bunch of models. His summary is that gamers should not buy the Titan XP.
It was my understanding that the Pascal-architecture Titan X was called the Titan Xp to differentiate it from the older Titan X, which was still available in stores at the time.
It's obvious it's not for gamers; you don't need to watch a YouTube vid for that. :D
I don't think there are any games that come close to using even the 11GB on the 1080 Ti. The price difference is too great for the performance difference; they should have dropped the price of the Xp after releasing the 1080 Ti. Nvidia probably doesn't care: they may piss off previous Titan X buyers, but they'll likely sell a buttload more 1080 Tis to make up for the small Titan sales.

I'm not a gamer, or at least only very rarely. For rendering, more RAM is more RAM; of course, until Windows stops stealing VRAM, I'm not bothering with any more cards.
Well, it sure wasn't obvious to me. I assumed at a price difference like that it had to be MUCH better.
But I'm curious: since gaming performance presumably should be at least somewhat similar to render performance, why did you decide to spend all that money on a Titan XP?
Well, I'm pricing out a new machine and am having a bit of sticker shock. Now, it seems, many of us hobbyists have to spend our money on hardware instead of more content. Used to be, in my decades doing this and stuff like it, that each new machine actually cost less than the prior one. This new one is going to more than double the cost of my previous one.
As is, I'm only going for a 1080 Ti.
I thought the Titan XP was a bit earlier than the 1080 Ti.
http://www.techadvisor.co.uk/new-product/pc-components/nvidia-titan-x-pascal-release-date-price-specs-2016-uk-3643933/ indicates August last year for the Titan XP.
http://www.techradar.com/news/nvidia-geforce-gtx-1080-ti-release-date-news-and-rumors gives March 2017 for the 1080 Ti.
Yeah, Awesome Hardware (?) just did a video describing how bad it is right now for people wanting to build a PC. Costs for memory have skyrocketed, GPUs are costing almost $800, and so on. I'm guessing a lot of it is companies charging what they think gamers will pay. There's so much excitement in the gaming community about faster and more awesome hardware that people will get all spun up and pay exorbitant prices. And then there's nerds like me who pay almost $800 for a GPU they don't really need. Geesh.
Yep. The 1080 Ti didn't exist when I bought my Titan X Pascals (and they're Titan X Pascals, not Titan Xp's).
They are indeed different.
https://www.nvidia.com/en-us/geforce/products/10series/titan-x-pascal/
https://www.nvidia.com/en-us/design-visualization/products/titan-xp/
I think that's really due to some staggeringly powerful hardware being available. Sure, you can render Iray without a 1080 Ti, but it's slower. It may double the cost, but you're probably getting an increase in computing power beyond what Moore's law alone would predict.
The Titan class of GPUs has always been a hybrid of sorts. It is not really intended as a gaming card, though it has gaming drivers. It is more of a bridge between the gaming and professional series Nvidia offers. As such, the Titan has more features in line with Nvidia's professional cards. This is the main difference between it and the GTX 1080 Ti. That, along with it being a "pony car" of sorts, leads to its enormous price. (I still think the price is outrageous, but this is Nvidia's justification for it.)
One "pro" feature the Titan has is that the user can take it out of the normal Windows display driver mode (WDDM) and run it in Nvidia's compute-only TCC mode. The Windows display driver model is why Windows reserves a chunk of VRAM on GTX cards. So you can use all 12GB of a Titan for Iray if you have a different GPU driving the video. The 1080 Ti cannot use all of its 11GB to render in Windows 10, even if you use a different card to run the display.
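If you're curious how big that Windows reservation actually is on your own cards, here's a minimal CUDA sketch (illustrative only) that asks each GPU how much VRAM the runtime sees versus how much is free:

```cpp
// Minimal sketch: report free vs. total VRAM as CUDA sees it on each card.
// On Windows 10, a GTX card in the normal WDDM display mode will show
// noticeably less free memory than the spec sheet, because the OS
// reserves a slice of VRAM for itself.
// Compile with: nvcc -o vramcheck vramcheck.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        size_t freeBytes = 0, totalBytes = 0;
        cudaMemGetInfo(&freeBytes, &totalBytes);  // free vs. total VRAM
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("GPU %d (%s): %.2f GB free of %.2f GB total\n", dev, prop.name,
               freeBytes / 1073741824.0, totalBytes / 1073741824.0);
    }
    return 0;
}
```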
Let me sort the release dates out for everyone starting with the Kepler architecture GTX XX80 and up cards.
...the big game changer with the Volta-generation Tesla cards is NVLink. NVLink creates a much wider pipeline between the cards, as well as between the CPU and GPUs, compared to PCIe and SLI. So far, this technology is pretty much only being employed in ultra-high-speed/capacity supercomputers (such as Oak Ridge's Summit system), which are designed around multiple Volta Tesla GPU nodes with IBM's new Power9 CPUs. The next step will most likely see the motherboard technology adopted in high-end engineering workstations and deep learning systems. It could be quite a while before you'll have one on your desk.
Nvidia's Quadro GP100 cards are the first commercially available ones to be compatible with NVLink technology, replacing the traditional SLI connectors between GPU cards with new NVLink ones (though the cards still use a standard PCIe expansion slot). According to Nvidia, using a two-way NVLink in this manner will allow for something they refer to as "Memory Pooling."
I don't know; the way I read this, it sounds "too good to be true," as combining memory between multiple GPU cards for VRAM-based tasks (like rendering) has so far not been possible. In any case, such a setup would be prohibitively expensive, roughly $15,000-$16,000 for two GP100 cards and dual NVLinks. Not something any of us will ever be able to afford without receiving some form of major windfall.
Pooling the memory with a high speed link between the cards is a logical step in the right direction. Look at how long ago dual and quad CPU computers started sharing memory in one big pool.
NVLink comes on the PCIe versions of both the Tesla V100 and Quadro GP100. Both are 16GB HBM2 cards that can pool GPU and memory resources to act as one GPU with double the cores and 32GB of VRAM.
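For the technically inclined, the building block this kind of pooling rests on is exposed in CUDA as peer-to-peer access. Here's a hedged sketch (an illustration of the public P2P API, not Nvidia's actual driver-level pooling) in which a kernel running on GPU 0 works directly on a buffer that physically lives on GPU 1; on NVLink systems that peer traffic rides the wide link instead of PCIe:

```cpp
// Sketch of CUDA peer-to-peer access: GPU 0 runs a kernel over memory
// allocated on GPU 1. Assumes a 64-bit system with at least two CUDA GPUs
// that have a P2P path between them.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;  // touches memory resident on the other GPU
}

int main() {
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev < 2) { printf("Need at least two GPUs\n"); return 1; }

    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);  // can GPU 0 reach GPU 1?
    if (!canAccess) { printf("No P2P path between GPU 0 and 1\n"); return 1; }

    const int n = 1 << 20;
    float* buf = nullptr;
    cudaSetDevice(1);
    cudaMalloc(&buf, n * sizeof(float));      // buffer physically on GPU 1

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);         // let GPU 0 map GPU 1's memory
    scale<<<(n + 255) / 256, 256>>>(buf, n);  // GPU 0 kernel, GPU 1 memory
    cudaDeviceSynchronize();
    printf("Status: %s\n", cudaGetErrorString(cudaGetLastError()));
    return 0;
}
```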
...so then, they really did it, and if you have the money, you can have 32GB for rendering. Not expecting that to trickle down anytime soon, though.
Where's that lotto ticket from yesterday?
You'd be surprised... With AMD putting HBM2 on their consumer-level cards, that will drive production costs down and bring yield quality up.
I will not be the slightest bit surprised if the GTX 1180 and up include HBM2 memory, and NVLink debuts as a consumer-level "next generation" replacement for SLI.
All this in the name of maintaining their dominance in the gaming GPU market. (Also, adding NVLink to consumer-level cards will help offset the R&D cost.)
Honestly, NVLink may end up being a requirement for getting Volta cards above 60fps at 8K resolutions in next year's AAA video game releases.
I'm not so sure about that. Nvidia needs to continue to provide a reason for professionals to buy professional cards. The line between GTX and Quadro is getting blurrier each generation. Keeping HBM in Quadro territory would be a good way to solidify that line. Profits from their high end cards are what really help Nvidia's bottom line.
Sure about that? Their gaming market is a lot bigger. I don't know their internal costs; sure, the sales price on a gaming card is a lot less, but the R&D and production costs are spread out over a much larger number of units, with some R&D shared across the lines.
http://files.shareholder.com/downloads/AMDA-1XAJD4/5478721354x0x891747/E0723D98-DACA-4107-952A-F6E67B637A76/Rev_by_Mkt_Qtrly_Trend_Q117.pdf
It's not a matter of which market is bigger. When you have your flagship Quadro selling at 10x the price of its GTX equivalent, there had better be some darned good reasons why, or people are going to switch over (and they are). They have tried to stave this off by pricing their render software cheaper if you have a professional card, but there are a lot of choices of rendering software (though it's probably true that V-Ray has the architectural rendering market cornered). It's a matter of stopping the bleeding from your highest-margin products.
Keep in mind there are tooling costs and whatnot for changing production lines. Without knowing their costs, it's impossible to say where they are generating more profit.
If it were a problem or becomes a problem for them, then I'd expect prices to drop.
Yes, and the main selling points of the Quadro line are larger amounts of VRAM (usually double what the consumer card has) and drivers that are tailored for CAD and other professional workloads. You also get cards that are not allowed to deviate from the reference design, plus a higher level of support/warranty service.
Current-generation examples:
The Titan Xp 12GB is the same GPU as the Quadro P6000 24GB.
The GTX 1080 8GB is the same GPU as the Quadro P5000 16GB.
The Quadro P4000 is the same GPU as the GTX 1070 with one SM disabled, power usage dropped to 105W, and a single-slot cooling setup.
Wow Scott, I'm impressed! You brought some real good data to the table. In fact, I was guessing/assuming the professional market was even smaller than what they said. Not sure why, but I just assumed there were 200 kajillion gamers around the world, but not that many pros who relied on NVIDIA cards. Maybe stuff like AutoCAD and some fancy visualization tools in various industries use GPUs? I know Matlab definitely uses GPUs, and I've done some programming with that.
Y'know, that reminded me... for the techies out there, Matlab is an engineering application that does all kinds of high-tech simulations and calculations across many industries. In any case, it will use one of your GPUs to do math calculations. And there's a way to compare how it performs on very large math operations versus your CPU and other GPUs. I just did that for my new 1080 Ti to compare with my Ryzen 7 1700.
The chart below shows how the GPU and my CPU (both in bold) compare to other devices. The numbers show how many calculations were done per second, presumably on some insanely large arrays (like 10 million elements or so). But clearly the Tesla devices are vastly faster than my 1080 Ti for stuff like that.
And the graph shows the GPU vs CPU for increasingly larger matrices.
BTW, "GFLOPS" means giga-FLOPS: billions of floating-point operations per second.
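If anyone wants to reproduce this kind of number without Matlab, here's a rough CUDA sketch of the same measurement (assumptions: the cuBLAS library is installed, and the 4096 x 4096 matrix size is an arbitrary choice; an N x N matrix multiply costs about 2*N^3 floating-point operations, so GFLOPS = 2*N^3 / time):

```cpp
// Rough GFLOPS benchmark: time one large single-precision matrix multiply
// with cuBLAS. The matrices are left uninitialized since only the timing
// matters here, not the numerical result.
// Compile with: nvcc -o gemmbench gemmbench.cu -lcublas
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const int n = 4096;  // arbitrary benchmark size
    const size_t bytes = (size_t)n * n * sizeof(float);
    float *a, *b, *c;
    cudaMalloc(&a, bytes); cudaMalloc(&b, bytes); cudaMalloc(&c, bytes);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;

    // Warm-up call so one-time setup cost doesn't pollute the timing.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, a, n, b, n, &beta, c, n);

    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);
    cudaEventRecord(start);
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, a, n, b, n, &beta, c, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double gflops = 2.0 * n * n * (double)n / (ms * 1e6);
    printf("%dx%d SGEMM: %.1f ms, %.0f GFLOPS\n", n, n, ms, gflops);

    cublasDestroy(handle);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```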
...this has been my point as well.
Maybe we'll see a Titan Pascal variant with HBM2 in a 4 x 3-Hi stack configuration. That would maintain the Quadro line's position as having the most total memory of any Nvidia GPU card available. I'd expect it to be priced somewhere in the $1,400 range for the air-cooled version.
What still gets me is that the Quadro 5000 and 6000 series have not seen an increase in MSRP over the last couple of generations, in spite of memory and performance improvements, while prices for other cards have gone up.
Take the Titan series, for example. The Maxwell Titan X with 12GB had a base MSRP of $999. When the Pascal version was released, the price was increased by $201 for pretty much the same specs. True, there was a slight bump in CUDA cores over the Maxwell version, GDDR5X replaced GDDR5, and the processor was upgraded to the GP102 (the same as in the Quadro P6000) but with a couple of SMs locked. For the most part, though, it was still seen as a Titan X with 12GB and a $200 bigger price tag than its predecessor, which many found a bit difficult to swallow, especially when...
...the 1080 Ti came along, posting better performance numbers, albeit with 1GB less memory than the Titan X (Pascal), for $500 less. So at the time, it appeared Nvidia had kind of shot their flagship enthusiast card in the foot. A few months later, they upgraded the Titan by unlocking the two remaining SMs, giving it the same core count and floating-point performance as its 24GB "big brother" and the ability to dedicate all 12GB to rendering or compute purposes (similar to the Quadro line). OK, so that sort of makes the extra $200 over the Maxwell version a little more worth it.
In spite of this, for most hobbyists and enthusiasts, the 1080 Ti (which also has GDDR5X) is still seen as more attractive because of its price, while the Titan Xp is seen as something of a status symbol ("I have the most VRAM and cores of any consumer-grade card").
...well, a few weeks ago, AMD seriously laid that claim to rest by releasing the Vega Frontier Edition with 16GB of HBM2 and 4,096 stream processors at an MSRP $200 less than the Titan Xp. True, it is useless for Iray, and like most of AMD's GPUs, it is power hungry (odd, as HBM2 requires almost half the power of GDDR5/5X), but there we have it: a card with the same amount of memory as a P5000 and more processing "cores" than the P6000 for a lot less scratch. For those of us who work with Iray, not a big deal, but for gamers and others who use OpenCL render engines, a significant boost. As pointed out above, the raw memory clock is no higher than a 1080 Ti's, but the memory bus is several times wider, which is how HBM2 keeps total bandwidth up despite much lower clocks.
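To put rough numbers on that (using the published specs as I understand them): total bandwidth = bus width x effective data rate / 8. For the Vega FE's HBM2, that's 2,048 bits x ~1.9 Gbps / 8 = ~483 GB/s; for the 1080 Ti's GDDR5X, 352 bits x 11 Gbps / 8 = ~484 GB/s. The HBM2 bus is nearly six times wider, but the much lower clock lands the totals in roughly the same place; the real win is power efficiency, not raw bandwidth.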
So yes, it does remain to be seen how Nvidia is going to react to AMD's raising the bar without compromising sales of their professional GPU cards. Keeping Volta (as well as whatever its follower will be) and NVLink technology exclusive to their Quadro/Tesla lines would definitely be one way. Increasing the capability of NVLink to handle more than two cards would be another. Imagine linking four, six, or even eight 16GB (or 32GB Quadro "V6000") cards together with a pipeline that is between 5 and 12 times wider. That might actually attract the big film studios to move to GPU rendering, as it would be so much faster, reducing production times and thereby production costs.
My main reason for believing we won't see HBM2 in consumer cards soon is that one of the main features of the architecture, memory pooling, is practically useless to gamers. Maybe gamers can take advantage of the memory bandwidth, but 32GB of VRAM? Maybe 8GB HBM cards, but those would be less attractive to me. In the end, the ball is in AMD's court. They are pushing Nvidia to release products to fill tiny little market niches (the 1070 Ti) and such, so they may force Nvidia's hand on this decision. Much in the same way that AMD is shaking up the CPU business, they have the potential to quicken the product cycle in the GPU segment as well.