GTX 1180 Preliminary Specs Update..
Ghosty12
Posts: 2,080
As the title suggests, the first supposed specs of the new GTX 1180 have been "leaked", and they do look interesting, as it seems to put the card in the range of a Titan Xp in terms of power..
The supposed specs for the new card:
https://wccftech.com/nvidia-geforce-gtx-1180-specs-performance-price-release-date-prelimenary/
NVIDIA GeForce GTX 1180 Preliminary Specifications
| Wccftech | GeForce GTX 1180 | GTX 1080 |
|---|---|---|
| Architecture | Turing | Pascal |
| Lithography | 12nm FinFET | 16nm FinFET |
| GPU | GT104 | GP104 |
| Die Size | ~400mm² | 314mm² |
| CUDA Cores | 3584 | 2560 |
| TMUs | 224 | 160 |
| ROPs | 64 | 64 |
| Core Clock | ~1600MHz | 1607MHz |
| Boost Clock | ~1800MHz | 1733MHz |
| FP32 Performance | ~13 TFLOPS | 8.7 TFLOPS |
| Memory Interface | 256-bit | 256-bit |
| Memory | 8-16 GB GDDR6 | 8GB GDDR5X |
| Memory Speed | 16Gbps | 10Gbps |
| Memory Bandwidth | 512GB/s | 320GB/s |
| TDP | 170-200W | 180W |
| Launch | Q3 (July) 2018 | May 27 2016 |
| Launch MSRP | ~$699 | $599 ($699 Founders Edition) |
So from looking at those specs it looks to be a beast of a card, will have to wait and see when they are released on how good they really are..
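Those memory numbers in the table are easy to sanity-check: peak bandwidth is just the bus width (in bytes) times the per-pin data rate. A quick illustrative sketch (the function name is mine, not from any spec):

```python
def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    # peak bandwidth in GB/s: bytes transferred per cycle times per-pin rate
    return bus_width_bits / 8 * data_rate_gbps

print(mem_bandwidth_gbs(256, 16))  # rumored 1180: 256-bit GDDR6 @ 16Gbps -> 512.0 GB/s
print(mem_bandwidth_gbs(256, 10))  # GTX 1080: 256-bit GDDR5X @ 10Gbps -> 320.0 GB/s
```

So the 512GB/s figure is at least internally consistent with a 256-bit bus and 16Gbps GDDR6.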
Some more info on the 1180, and it looks like it could be based on the Volta architecture..



Comments
look nice, but I cannot afford either one. bring back the sub $200 super card.
Yeah pretty much one of those love to have type of moments..
RIP Titan XP, LMAO!!! If anything, hopefully I'll be picking up a Titan once this is released
So does the Turing architecture mean a new MOBO?
Wow. Looks like a big jump from the GTX 1080. Look at those CUDA cores, memory bandwidth, and teraflops.
I'll wait to see how speculation and rumour hold up to the actuality before I start wondering what I'll do.
...still not sold on the VRAM boost. I remember when these "leaks" mentioned the 970 Ti and 980 Ti would have 8 GB (the 980 Ti was stepped up to 6 GB and there never was a 970 Ti).
It will be interesting to see the real specs. Wonder if there will be a 1180ti? I want to wait to see just how well Octane 4 does in terms of making high VRAM less important (i.e how much does this slow down renders) before considering another card.
Maybe the 1180ti will have 24GB and an MSRP of $899 or less. : ) (yeah, right)
In one respect, I can see it. The 1080 Ti has a die size of 471 mm². That fits 3584 CUDA cores. The Titan V has a die size of 815 mm², which also jams Tensor cores on there. This spec says the card would be about 400 mm², with the same 3584 core count as the 1080 Ti. So given the process went from 16 nm to 12 nm, that seems logical. Still, that is over 1,000 more CUDA cores than the 1080 has, which would be a huge jump. We haven't seen a jump in CUDA cores like that since Fermi to Kepler back in 2012.
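The ~13 TFLOPS figure in the leak also falls straight out of that core count: FP32 throughput is conventionally 2 FLOPs per CUDA core per clock (one fused multiply-add), times the boost clock. A rough check (function name is mine):

```python
def fp32_tflops(cuda_cores, boost_ghz):
    # 2 FLOPs per core per clock (fused multiply-add counted as two operations)
    return 2 * cuda_cores * boost_ghz / 1000

print(round(fp32_tflops(3584, 1.8), 1))    # rumored 1180: 12.9, i.e. "~13 TFLOPS"
print(round(fp32_tflops(2560, 1.733), 1))  # GTX 1080 at its rated boost clock: 8.9
```

So the leaked TFLOPS number is just the core count and boost clock restated, not independent information.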
But any chance at 16 GB VRAM sounds like crazy talk to me. That spec would be higher than the just-released $3000 Titan V. It doesn't make sense for any x80 to beat a fresh new Titan in any spec. It's true that Nvidia frequently touts new generations as being better than the previous gen Titan. For Pascal, not only did Huang say the new 1080 was a Titan Maxwell killer, but even the step-down 1070 was just as fast or faster than the Titan Maxwell. But at that time the Titan Maxwell was old hardware from the previous generation. The Titan V just released in December on the wholly different Volta architecture, not Pascal. So I find it hard to believe the 1180 would have more VRAM than the 12 GB the Titan V has. My bet is that there will be an 8 GB model. If they make variants higher than 8, then I bet those variants will cost a very high premium, maybe another hundred or two more just for that bump in VRAM.
On a side note, I also think it is seriously messed up for Nvidia to code name their gaming SKU "Turing" right after the big crypto boom locked many gamers out of being able to buy GPUs. If Nvidia truly hates crypto miners, why on Earth would they choose a name associated with cryptography? That almost feels like trolling to me; somebody at Nvidia must have had a good laugh about that. (And do note, this is an opinion of Outrider42, not associated with Daz Studio nor Nvidia in any way, and is not intended to be taken as some kind of baseless accusation. *rolls eyes*) It would be different if we had known this name sooner, before the mining craze shook up the GPU market. Volta was announced a long time ago and has been featured on Nvidia's road map for years. Then all of a sudden they switch and decide to use Turing, right in the middle of this mining boom? It makes no sense. If Nvidia decided they needed to branch off the architectures more, Volta should have remained on the gamer side, and Turing would have been logical for research and workstations. Or maybe a whole different name from Turing, which they could have used for a later generation.
There has been an x80ti for several generations now, but that release is almost always around 8 or 9 months after the initial launch of the new generation and tends to mark the end of that generation's run. So you are looking at almost certainly 2019 for the 1180ti. Whatever the specs are, they may depend on whatever threat AMD poses in 2019 (if any), as their new cards might be coming then or in 2020.
I understand what you are saying about Alan Turing. But Alan Turing is more than the guy who cracked the German encryption/ciphers. He is the grandfather of modern computer science, the guy who created the Turing Test and the Turing Machine. The digital bits of 0's and 1's behind computing in the CPU/GPU. Binary, 0 & 1, True or False, Yes or No. Leading to the billions/trillions of transistors we have in the CPU/GPU today.
I believe Nvidia calls it Turing because of AI and Machine Learning and passing the Turing Test, rather than it having to do with cryptography. You know, those Tensor cores you are talking about.
I wonder if Iray will need updates to support this new architecture; if you recall, we had to wait a year for Iray Pascal support
Pretty much this^ I think it is more likely a homage to Alan Turing than anything else.. Since as you put it he is the father of modern computing..
...I'm totally with you on the VRAM estimate. Not only would that outdo the $3,000 Titan V (albeit without the Tensor cores), but it would match the VRAM of the $2,000 Quadro P5000. Not sure Nvidia is willing to sacrifice those cash cows.
Well they have done it in the past with other Quadro cards; even now you can get the Quadro P6000 with 3840 CUDA cores and 24GB GDDR5X, and the Quadro GP100 with 3584 CUDA cores and 16GB HBM2.. The P5000 has 2560 CUDA cores and 16GB GDDR5X, so in a sense it is sort of already obsolete..
https://www.nvidia.com/en-au/design-visualization/quadro-desktop-gpus/#
So I would not put it past Nvidia to do it again. There is also the rumored Ampere as well; whether it is even real is something we will have to wait and see..
..however I'm talking about VRAM, not cores. This is why the 1080 Ti was limited to the odd amount of 11 GB, so as not to overstep the Titan Xp's VRAM (a card which at the time cost about $500 more). They certainly did not want to compete against their high-end Quadro cards like the P5000 and P6000 either.
I don't consider the P5000 to be obsolete until there is a Volta successor in its class with HBM2 memory and NVLink support. If I could afford one, it would be my card of choice right now. As it is, I am very pleased about my forthcoming Maxwell Titan X, even if it is a generation older, as for my purposes VRAM also equates to render speed: it allows a big scene to remain in GPU memory rather than dumping the render to CPU mode. Once that happens, all the CUDA cores in the world would be of no help.
I remember many people were convinced the mainstream Pascal cards were going to have HBM2, with the 1080 Ti likely to have 32 GB of it. We are now moving towards the next generation, and it still seems there is not going to be HBM2.
That would be due to the high cost of HBM2, and GDDR6 is just around the corner and somewhat cheaper than HBM2..
Daz had to wait for a working version of Iray.
Yeah HBM is a lot better to be sure; I even read an article about how it all works, and it is quite interesting. It is just a shame that what it takes to produce HBM2 memory is so convoluted.. Either way, what is even more interesting is how they are already working on HBM3, so HBM and HBM2 are already on the way out, so to speak. But as with everything, I think we won't see HBM on consumer cards due to cost, not while there is GDDR5X and the soon to be released GDDR6, which is so much cheaper to produce and doesn't need a completely different PCB design to accommodate HBM..
What is interesting is Nvidia boosting RAM:
https://www.anandtech.com/show/12576/nvidia-bumps-all-tesla-v100-models-to-32gb
...makes sense for GPUs that are now being used primarily as the basis for supercomputers. Each node of Summit uses (I believe) a cluster of 8 Tesla V100s. Even at 16 GB, that's 128 GB per node, and with something like 4,600 nodes that yields 588.8 TB of HBM2 memory alone. With the added DDR4 memory, the entire system has 10 PB of memory resources.
If Summit were upgraded to the 32 GB V100s, that would be about 1.18 PB of HBM2 memory.
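Those totals check out, taking the poster's figures of 8 GPUs per node and ~4,600 nodes at face value (the function name and the 1000-based unit convention are mine):

```python
def total_hbm_tb(nodes, gpus_per_node, gb_per_gpu):
    # aggregate GPU memory across all nodes, in TB (using 1000 GB = 1 TB)
    return nodes * gpus_per_node * gb_per_gpu / 1000

print(total_hbm_tb(4600, 8, 16))                 # 588.8 TB with 16 GB V100s
print(round(total_hbm_tb(4600, 8, 32) / 1000, 2))  # 1.18 PB with 32 GB V100s
```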
What does the 8-16 GB mean? Is it 8 or 16 GB?
Why are we comparing this to the 1080 and not the 1080 TI?
The 1080 (not TI) is so 2016. May 2016, to be precise. The 1080 TI came out in March 2017, with more than a third more transistors...and stuff.
8GB to 16GB means 8GB to 16GB, depending on the model. But I'm guessing it will come in multiples of 4GB, so 8GB, 12GB, and 16GB.
AI and Machine Learning. Image editing with Nvidia
https://www.youtube.com/watch?time_continue=132&v=gg0F5JjKmhA
...this just shows the insanity of the tech curve today. Two years old and people are already talking "legacy" hardware.
Crikey, I had a 1964 Buick Special when I was in college into the 70s. Drove and rode fine, was easy to maintain, got decent economy for the day, and actually looked "smart" (particularly compared to some of the odd if not outlandish 70s designs). I currently ride a 36 year old Specialized Stumpjumper (the first trail/mountain bike) that I bought at a yard sale (for $35) and modified for commuting, as it's rugged and durable, has a nice stiff frame, yet doesn't weigh a tonne. Those new ones with carbon/composite cantilever frames, shock absorbers (and several thousand dollar price tags)? Bah, can't sprint out of the way of some speeding motorist on one of those, and you'd need a lock setup that weighed at least twice as much as the bike to keep it from being stolen.
There is something still to be said for homely and "old school"; sadly, it seems the tech world is not the place for it.
...I could see 12 GB for whatever the top-end 11xx card will be, as the Titan-V would still offer more cores (5120) along with the 640 Tensor cores. Unless there will also be an upgrade of VRAM (24 GB?) for whatever the P5000's Volta successor is, I don't think a 16 GB GTX 11xx will be a reality. I also really don't feel Nvidia would offer a GTX card with more VRAM than the Titan-V unless the latter was also upgraded to 16 GB (which with HBM2 would be easy to do, as all that is needed is adding 1 more memory chip to each of the 4 stacks) and/or given NVLink support.
In the Maxwell days the Titan-X and Quadro M6000 both had 12 GB (the latter upgraded to 24 GB about a year before cards based on Pascal architecture were introduced).
Again based on past experiences with such "speculation", I don't think we will really know what the 11xx cards will have until they actually appear in the "silicon and plastic".