GTX 1180 Preliminary Specs Update..

Ghosty12 Posts: 2,080
edited May 2018 in The Commons

As the title suggests, the first supposed specs of the new GTX 1180 have been "leaked", and they do look interesting, as they seem to put the card in the range of a Titan Xp in terms of power..

The supposed specs for the new card:

https://wccftech.com/nvidia-geforce-gtx-1180-specs-performance-price-release-date-prelimenary/

NVIDIA GeForce GTX 1180 Preliminary Specifications

Wccftech            GeForce GTX 1180     GTX 1080
Architecture        Turing               Pascal
Lithography         12nm FinFET          16nm FinFET
GPU                 GT104                GP104
Die Size            ~400mm²              314mm²
CUDA Cores          3584                 2560
TMUs                224                  160
ROPs                64                   64
Core Clock          ~1600MHz             1607MHz
Boost Clock         ~1800MHz             1733MHz
FP32 Performance    ~13 TFLOPS           8.7 TFLOPS
Memory Interface    256-bit              256-bit
Memory              8-16 GB GDDR6        8GB GDDR5X
Memory Speed        16Gbps               10Gbps
Memory Bandwidth    512GB/s              320GB/s
TDP                 170-200W             180W
Launch              Q3 (July) 2018       July 20 2016
Launch MSRP         ~$699                $599 / $699 (Founder's)

So from looking at those specs it looks to be a beast of a card; we will have to wait until they are released to see how good they really are..
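For what it's worth, the leaked throughput and bandwidth figures are internally consistent. Peak FP32 is cores × boost clock × 2 (one fused multiply-add per clock), and peak bandwidth is bus width × per-pin data rate. A quick sketch checking both cards against the table:

```python
# Sanity-check the leaked GTX 1180 numbers with the standard formulas.

def fp32_tflops(cuda_cores, boost_mhz):
    """Peak single-precision throughput in TFLOPS (2 ops/clock per core)."""
    return cuda_cores * boost_mhz * 1e6 * 2 / 1e12

def bandwidth_gbs(bus_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s (bus width in bits / 8 * data rate)."""
    return bus_bits / 8 * gbps_per_pin

# Rumored GTX 1180
print(fp32_tflops(3584, 1800))   # ~12.9 TFLOPS -> matches the "~13" in the table
print(bandwidth_gbs(256, 16))    # 512.0 GB/s   -> matches the table

# Known GTX 1080 for comparison
print(fp32_tflops(2560, 1733))   # ~8.87 TFLOPS -> the table's 8.7 rounds from this
print(bandwidth_gbs(256, 10))    # 320.0 GB/s
```

So at least the leak's arithmetic hangs together, whatever the actual card turns out to be.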

Some more info on the 1180, and it looks like it could be based on the Volta architecture..


Comments

  • daveso Posts: 7,769

    Looks nice, but I cannot afford either one. Bring back the sub-$200 super card.

  • Ghosty12 Posts: 2,080

    Yeah pretty much one of those love to have type of moments..

  • FSMCDesigns Posts: 12,843

    RIP Titan XP, LMAO!!! If anything, hopefully I'll be picking up a Titan once this is released

    So does the Turing architecture mean a new MOBO?

  • hyteckit Posts: 167

    Wow. Looks like a big jump from the GTX 1080. Look at those CUDA cores, memory bandwidth, and TeraFlops.

  • nicstt Posts: 11,715

    I'll wait to see how speculation and rumour hold up to the actuality before I start wondering what I'll do.

  • kyoto kid Posts: 41,847
    edited April 2018

    ...still not sold on the VRAM boost.  I remember when these "leaks" mentioned the 970 Ti and 980 Ti would have 8 GB (the 980 Ti was stepped up to 6 GB and there never was a 970 Ti).

  • Greymom Posts: 1,139

    It will be interesting to see the real specs.  Wonder if there will be a 1180ti?  I want to wait to see just how well Octane 4 does in terms of making high VRAM less important (i.e. how much does this slow down renders) before considering another card.

    Maybe the 1180ti will have 24GB and an MSRP of $899 or less.  : )      (yeah, right)

  • outrider42 Posts: 3,679

    In one respect, I can see it. The 1080ti has a die size of 471 mm². That fits 3584 CUDA cores. The Titan V has a die size of 812 mm², which also jams Tensor cores on there. This spec says the card would have about 400 mm², and the same 3584 core count as the 1080ti. So given the process went from 16 to 12 nm, that seems logical. Still, that is over 1,000 more CUDA cores than the 1080 has, which would be a huge jump. We haven't seen this kind of jump in CUDA cores since Fermi to Kepler back in 2012.
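Taking the leaked figures at face value, the density argument checks out: the rumored die would pack the 1080 Ti's core count into roughly 18% less area, about what a 16nm-to-12nm optimized shrink could plausibly deliver.

```python
# Rough density check: CUDA cores per mm^2, using the leaked/known die sizes.
gp102_cores, gp102_mm2 = 3584, 471   # GTX 1080 Ti die (16nm, known)
gt104_cores, gt104_mm2 = 3584, 400   # rumored GTX 1180 die (12nm, leaked)

print(gp102_cores / gp102_mm2)  # ~7.6 cores/mm^2
print(gt104_cores / gt104_mm2)  # ~9.0 cores/mm^2, i.e. ~18% denser
```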

    But any chance at 16 GB VRAM sounds like crazy talk to me. That spec would be higher than the just released $3000 Titan V. It doesn't make sense for any x80 to beat a fresh new Titan in any spec. It's true that Nvidia frequently touts new generations as being better than the previous gen Titan. For Pascal, not only did Huang say the new 1080 was a Titan Maxwell killer, but even the step down 1070 was just as fast or faster than the Titan Maxwell. But at that time Titan Maxwell was old hardware from the previous generation. The Titan V just released in December on the wholly different Volta, not Pascal. So I find it hard to believe the 1180 would have more VRAM than the 12 GB the Titan V has. My bet is that there will be an 8 GB model. If they make variants higher than 8, then I bet those variants will cost a very high premium, maybe another hundred or two more just for that bump in VRAM.

    On a side note, I also think it is seriously messed up for Nvidia to code name their gaming SKU as "Turing" right after the big crypto boom locked many gamers out of being able to buy GPUs. If Nvidia truly hates crypto miners, why on Earth would they choose a name associated with cryptography? That almost feels like trolling to me; somebody at Nvidia must have had a good laugh about that. (And do note, this is an opinion of Outrider42, not associated with Daz Studio nor Nvidia in any way, and is not intended to be taken as some kind of baseless accusation. *rolls eyes*) It would be different if we had known this name sooner, before the mining craze shook up the GPU market. Volta was announced a long time ago and has been featured on Nvidia's road map for years. Then all of a sudden they switch and decide to use Turing, right in the middle of this mining boom? It makes no sense. If Nvidia decided they needed to branch off the architectures more, Volta should have remained on the gamer side, and Turing would have been logical for research and workstations. Or maybe a whole different name from Turing, which they could have used at a later generation.

    There has been an x80ti for several generations now, but that release is almost always around 8 or 9 months after the initial launch of the new generation and tends to mark the end of that generation's run. So you are looking at almost certainly 2019 for the 1180ti. Whatever the specs are, they may depend on whatever threat AMD poses in 2019 (if any), as their new cards might be coming then or in 2020.

  • hyteckit Posts: 167
    edited April 2018

     

    On a side note, I also think it is seriously messed up for Nvidia to code name their gaming SKU as "Turing" right after the big crypto boom locked many gamers out of being able to buy GPUs. If Nvidia truly hates crypto miners, why on Earth would they choose a name associated with cryptography? That almost feels like trolling to me, somebody at Nvidia must have had a good laugh about that. (And do note, this is an opinion of Outrider42, not associated with Daz Studio nor Nvidia in any way, and is not any intended to be taken as some kind of baseless accusation. *rolls eyes*) It would be different if we had known this name sooner, before the mining craze shook up the GPU market. Volta was announced a long time ago and has been featured on Nvidia's road map for years. Then all of the sudden they swap up and decide to use Turing, right in the middle of this mining boom? It makes no sense. If Nvidia decided they needed to branch off the architectures more, Volta should have remained on the gamer side, and Turing would have been logical for research and workstations. Or maybe a whole different name from Turing, which they could have used at a later generation.

    There has been an x80ti for several generations now, but that release is almost always around 8 or 9 months after the initial launch of the new generation and tends to mark the end of that generation's run. So you are looking at almost certainly 2019 for the 1180ti. Whatever the specs are, they may depend on whatever threat AMD poses in 2019 (if any,) as their new cards might be coming then or 2020.

    I understand what you are saying about Alan Turing. But Alan Turing is more than the guy who cracked the German ciphers. He is the grandfather of modern computer science: the guy who created the Turing Test and the Turing Machine, the digital bits of 0's and 1's behind computing in the CPU/GPU. Binary, 0 and 1, true or false, yes or no, leading to the billions/trillions of transistors we have in CPUs/GPUs today.

    I believe Nvidia calls it Turing because of AI and machine learning and passing the Turing Test, rather than it having anything to do with cryptography. You know, those Tensor cores you are talking about.
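For anyone curious, the Turing Machine mentioned above is simple enough to sketch in a few lines: a tape, a read/write head, and a table of state transitions. This toy example (which increments a binary number) is purely illustrative; the machine and rule table are invented for this post.

```python
# A toy Turing machine: tape + head + state-transition table.
# Each rule maps (state, symbol) -> (symbol to write, move L/R, next state).

def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = dict(enumerate(tape))   # sparse tape, extends in both directions
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Binary increment: move right to the end, then carry 1s leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "done"),
    ("carry", "_"): ("1", "R", "done"),
    ("done",  "0"): ("0", "R", "done"),
    ("done",  "1"): ("1", "R", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", rules))  # 1100  (11 + 1 = 12 in binary)
```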

  • Havos Posts: 5,576

    I wonder if Iray will need updates to support this new architecture; if you recall, we had to wait a year for Iray Pascal support

  • Ghosty12 Posts: 2,080
    hyteckit said:

     

    On a side note, I also think it is seriously messed up for Nvidia to code name their gaming SKU as "Turing" right after the big crypto boom locked many gamers out of being able to buy GPUs. If Nvidia truly hates crypto miners, why on Earth would they choose a name associated with cryptography? That almost feels like trolling to me, somebody at Nvidia must have had a good laugh about that. (And do note, this is an opinion of Outrider42, not associated with Daz Studio nor Nvidia in any way, and is not any intended to be taken as some kind of baseless accusation. *rolls eyes*) It would be different if we had known this name sooner, before the mining craze shook up the GPU market. Volta was announced a long time ago and has been featured on Nvidia's road map for years. Then all of the sudden they swap up and decide to use Turing, right in the middle of this mining boom? It makes no sense. If Nvidia decided they needed to branch off the architectures more, Volta should have remained on the gamer side, and Turing would have been logical for research and workstations. Or maybe a whole different name from Turing, which they could have used at a later generation.

    There has been an x80ti for several generations now, but that release is almost always around 8 or 9 months after the initial launch of the new generation and tends to mark the end of that generation's run. So you are looking at almost certainly 2019 for the 1180ti. Whatever the specs are, they may depend on whatever threat AMD poses in 2019 (if any,) as their new cards might be coming then or 2020.

    I understand what you are saying about Alan Turing. But Alan Turing is more than the guy who crack the German encryption/ciphers. He is the grandfather of modern computer science. The guy who created the Turing Test and Turing Machine. The digital bits of 0's and 1's behind computing in the CPU/GPU. Binary, 0 & 1, True or False, Yes or No. Leading to the billions/trillions of transistors we have in the CPU/GPU today.

    I believe Nvidia calls it Turing because of AI and Machine Learning and passing the Turing Test, rather than it having to do with cryptography. You know, those Tensor cores you are talking about.

    Pretty much this^. I think it is more likely an homage to Alan Turing than anything else, since, as you put it, he is the father of modern computing..

  • kyoto kid Posts: 41,847

    In one respect, I can see it. The 1080ti has a die size of 471 mm. That fits 3584 CUDA cores. The Titan V has a die size of 812 mm, which also jams Tensor cores on there. This spec says the card would have about 400 mm, and the same 3584 core count as the 1080ti. So given the process went from 16 to 12 nm, that seems logical. Still, that is over 1,000 more CUDAs than the 1080 has, which would be a huge jump. We haven't seen this kind of jump in CUDA since Fermi to Kepler back in 2012.

    But any chance at 16 GB VRAM sounds like crazy talk to me. That spec would be higher than the just released $3000 Titan V. It doesn't make sense for any x80 to beat a fresh new Titan in any spec. Its true that Nvidia frequently touts new generations as being better than the previous gen Titan. For Pascal, not only did Huang say the new 1080 was a Titan Maxwell killer, but even the step down 1070 was just as fast or faster than the Titan Maxwell. But at that time Titan Maxwell was old hardware from the previous generation. Titan V just released in December on the wholly different Volta, not Pascal. So I find it hard to believe the 1180 would have more than 12 GB of VRAM that the Titan V has. My bet is that there will be an 8 GB model. If they make variants higher than 8, then I bet those variants will cost a very high premium, maybe another hundred or two more just for that bump in VRAM.

    On a side note, I also think it is seriously messed up for Nvidia to code name their gaming SKU as "Turing" right after the big crypto boom locked many gamers out of being able to buy GPUs. If Nvidia truly hates crypto miners, why on Earth would they choose a name associated with cryptography? That almost feels like trolling to me, somebody at Nvidia must have had a good laugh about that. (And do note, this is an opinion of Outrider42, not associated with Daz Studio nor Nvidia in any way, and is not any intended to be taken as some kind of baseless accusation. *rolls eyes*) It would be different if we had known this name sooner, before the mining craze shook up the GPU market. Volta was announced a long time ago and has been featured on Nvidia's road map for years. Then all of the sudden they swap up and decide to use Turing, right in the middle of this mining boom? It makes no sense. If Nvidia decided they needed to branch off the architectures more, Volta should have remained on the gamer side, and Turing would have been logical for research and workstations. Or maybe a whole different name from Turing, which they could have used at a later generation.

    There has been an x80ti for several generations now, but that release is almost always around 8 or 9 months after the initial launch of the new generation and tends to mark the end of that generation's run. So you are looking at almost certainly 2019 for the 1180ti. Whatever the specs are, they may depend on whatever threat AMD poses in 2019 (if any,) as their new cards might be coming then or 2020.

    ...I'm totally with you on the VRAM estimate.  Not only would that outdo the $3,000 Titan V (albeit without Tensor cores) but match the VRAM of the $2,000 Quadro P5000.  Not sure Nvidia is willing to sacrifice those cash cows.

  • Havos said:

    I wonder if Iray will need to updates to support this new architecture, if you recall we had to wait a year for Iray Pascal support

    Actually, we had to wait until DAZ integrated the SDK version that supported Pascal; the current closed Beta supports Volta already, and I'm honestly not sure how different Turing is from that.
  • Ghosty12 Posts: 2,080
    edited April 2018
    kyoto kid said:

    In one respect, I can see it. The 1080ti has a die size of 471 mm. That fits 3584 CUDA cores. The Titan V has a die size of 812 mm, which also jams Tensor cores on there. This spec says the card would have about 400 mm, and the same 3584 core count as the 1080ti. So given the process went from 16 to 12 nm, that seems logical. Still, that is over 1,000 more CUDAs than the 1080 has, which would be a huge jump. We haven't seen this kind of jump in CUDA since Fermi to Kepler back in 2012.

    But any chance at 16 GB VRAM sounds like crazy talk to me. That spec would be higher than the just released $3000 Titan V. It doesn't make sense for any x80 to beat a fresh new Titan in any spec. Its true that Nvidia frequently touts new generations as being better than the previous gen Titan. For Pascal, not only did Huang say the new 1080 was a Titan Maxwell killer, but even the step down 1070 was just as fast or faster than the Titan Maxwell. But at that time Titan Maxwell was old hardware from the previous generation. Titan V just released in December on the wholly different Volta, not Pascal. So I find it hard to believe the 1180 would have more than 12 GB of VRAM that the Titan V has. My bet is that there will be an 8 GB model. If they make variants higher than 8, then I bet those variants will cost a very high premium, maybe another hundred or two more just for that bump in VRAM.

    On a side note, I also think it is seriously messed up for Nvidia to code name their gaming SKU as "Turing" right after the big crypto boom locked many gamers out of being able to buy GPUs. If Nvidia truly hates crypto miners, why on Earth would they choose a name associated with cryptography? That almost feels like trolling to me, somebody at Nvidia must have had a good laugh about that. (And do note, this is an opinion of Outrider42, not associated with Daz Studio nor Nvidia in any way, and is not any intended to be taken as some kind of baseless accusation. *rolls eyes*) It would be different if we had known this name sooner, before the mining craze shook up the GPU market. Volta was announced a long time ago and has been featured on Nvidia's road map for years. Then all of the sudden they swap up and decide to use Turing, right in the middle of this mining boom? It makes no sense. If Nvidia decided they needed to branch off the architectures more, Volta should have remained on the gamer side, and Turing would have been logical for research and workstations. Or maybe a whole different name from Turing, which they could have used at a later generation.

    There has been an x80ti for several generations now, but that release is almost always around 8 or 9 months after the initial launch of the new generation and tends to mark the end of that generation's run. So you are looking at almost certainly 2019 for the 1180ti. Whatever the specs are, they may depend on whatever threat AMD poses in 2019 (if any,) as their new cards might be coming then or 2020.

    ...I'm totally with you on the VRAM estimate.  Not only would that outdo the 3,000$ Titan V (however no tensor cores) but match the VRAM of the 2,000$ Quadro P5000.  Not sure Nvidia is willing to to sacrifice those cash cows.

    Well they have done it in the past with other Quadro cards; even now you can get the Quadro P6000 with 3840 CUDA cores and 24GB GDDR5X, and the Quadro GP100 with 3584 CUDA cores and 16GB HBM2.. The P5000 has 2560 CUDA cores and 16GB GDDR5X, so in a sense it is sort of already obsolete..

    https://www.nvidia.com/en-au/design-visualization/quadro-desktop-gpus/#

    So I would not put it past Nvidia to do it again. There is also the rumored Ampere as well; whether it is even real is something we will have to wait and see..

  • kyoto kid Posts: 41,847

    ..however I'm talking about VRAM, not cores. This is why the 1080 Ti was limited to the odd amount of 11 GB, so as not to overstep the Titan XP's VRAM (which at the time cost about $500 more).  They certainly did not want to compete against their high end Quadro cards like the P5000 and P6000 either.

    I don't consider the P5000 to be obsolete until there is a Volta successor in its class with HBM2 memory and NVLink support.  If I could afford one it would be my card of choice right now. As is, I am very pleased about my forthcoming Maxwell Titan X, even if it is a generation older, as for my purposes VRAM also equates to render speed, since it allows a big scene to remain in GPU memory rather than dumping to CPU mode.  Once that happens, all the CUDA cores in the world would be of no help.

  • outrider42 Posts: 3,679
    hyteckit said:

     

    On a side note, I also think it is seriously messed up for Nvidia to code name their gaming SKU as "Turing" right after the big crypto boom locked many gamers out of being able to buy GPUs. If Nvidia truly hates crypto miners, why on Earth would they choose a name associated with cryptography? That almost feels like trolling to me, somebody at Nvidia must have had a good laugh about that. (And do note, this is an opinion of Outrider42, not associated with Daz Studio nor Nvidia in any way, and is not any intended to be taken as some kind of baseless accusation. *rolls eyes*) It would be different if we had known this name sooner, before the mining craze shook up the GPU market. Volta was announced a long time ago and has been featured on Nvidia's road map for years. Then all of the sudden they swap up and decide to use Turing, right in the middle of this mining boom? It makes no sense. If Nvidia decided they needed to branch off the architectures more, Volta should have remained on the gamer side, and Turing would have been logical for research and workstations. Or maybe a whole different name from Turing, which they could have used at a later generation.

    There has been an x80ti for several generations now, but that release is almost always around 8 or 9 months after the initial launch of the new generation and tends to mark the end of that generation's run. So you are looking at almost certainly 2019 for the 1180ti. Whatever the specs are, they may depend on whatever threat AMD poses in 2019 (if any,) as their new cards might be coming then or 2020.

    I understand what you are saying about Alan Turing. But Alan Turing is more than the guy who crack the German encryption/ciphers. He is the grandfather of modern computer science. The guy who created the Turing Test and Turing Machine. The digital bits of 0's and 1's behind computing in the CPU/GPU. Binary, 0 & 1, True or False, Yes or No. Leading to the billions/trillions of transistors we have in the CPU/GPU today.

    I believe Nvidia calls it Turing because of AI and Machine Learning and passing the Turing Test, rather than it having to do with cryptography. You know, those Tensor cores you are talking about.

    And that's the problem, GTX is the gaming arm, not the machine learning arm. It would be more sensible to use his name that way than for just gaming. And the name came out of nowhere. Nvidia placed Volta on the roadmap all the way back in March 2013. So for over 5 years the name to succeed Pascal has always been Volta. But now we are suddenly getting Turing. Nvidia never once revealed Turing; in fact, the only reason we have this name is because of financial reports and rumors. Nvidia has not actually confirmed this name officially! For all we know, they might just pop out Ampere instead. Or surprise everybody and just call it Volta after all. Either way, I have nothing against Turing as a name, it's great to give him that recognition. But it's the timing and sudden swapping that is just plain strange. Many people were thinking Nvidia would try dedicated mining cards again with that name. And who knows, maybe they were, but they changed their minds.

    Another thing to remember, Nvidia updated their user agreements to totally prohibit GTX cards from being used in workstations. IMO that's kind of scummy, and probably not something that can be enforced. They really want to separate the workstation products from the gaming side. So even if the 1180 had more VRAM than Quadros, workstation owners are prohibited from using them. I find this funny because Titans used to carry the GTX brand, and even the Titan V still gets GTX gaming drivers. I get the feeling Nvidia isn't really sure what to do with the Titan.

    It would be great if by supporting Volta that Turing was included. But I have a feeling that is not the case. I certainly would not rush out to buy an 1180 without knowing if Daz Iray has the drivers yet. That would be a massive bummer, especially if someone sold their old cards to do so, leaving them without any Iray supported GPU. You know it's going to happen. We had a thread or two from someone who was not happy the Titan V wasn't supported yet. If the 1100 series isn't supported quickly after launch, the support threads will blow up, possibly worse than with Pascal. Daz is seriously going to have to stay on top of informing people about whether the 1100s are supported or not. It may be out of their control, but customers might not see it that way. And when they do get the SDK, it had better get fast tracked, LOL.
  • joseft Posts: 310

    I remember many people were convinced the mainstream Pascal cards were going to have HBM2, with the 1080Ti likely to have 32gb of it. We are now moving towards the next generation, and it still seems there is not going to be HBM2.

     

  • Ghosty12 Posts: 2,080
    joseft said:

    I remember many people were convinced the mainstream Pascal cards were going to have HBM2, with the 1080Ti likely to have 32gb of it. We are now moving towards the next generation, and still seems there is not going to be HBM2. 

     

    That would be due to the high cost of HBM2, and GDDR6 is just around the corner and somewhat cheaper than HBM2..

  • Richard Haseltine Posts: 107,953
    Havos said:

    I wonder if Iray will need to updates to support this new architecture, if you recall we had to wait a year for Iray Pascal support

     

    Actually, we had to wait til DAZ integrated the SDK version that supported Pascal; the current closed Beta supports Volta already and I'm honestly not sure how much different Turing is from that.

    Daz had to wait for a working version of Iray.

  • outrider42 Posts: 3,679
    ghosty12 said:
    joseft said:

    I remember many people were convinced the mainstream Pascal cards were going to have HBM2, with the 1080Ti likely to have 32gb of it. We are now moving towards the next generation, and still seems there is not going to be HBM2. 

     

    That would be due to the high cost of HBM2, and well GDDR6 is just around the corner and somewhat cheaper over HBM2..

    HBM2 proved to have major production issues and costs went up. Sticking with it would have been a problem; AMD had big trouble keeping their production up. Then add in a worldwide RAM chip shortage and it compounds. At this point GDDR6 is more than suitable, especially for use in rendering. Memory speed is more important for gaming, which needs to render new frames in less than 16 ms to maintain a proper framerate. HBM's impact could be seen in its first products from AMD: the R9 Nano's performance improved at higher resolutions compared to Nvidia counterparts. I believe that had more to do with memory speed than the actual chip itself, as HBM was able to feed those large 4K frames much faster than GDDR5.
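To put rough numbers on that frame-budget point (the figures below are illustrative, not from any benchmark): a 60 fps game has about 16.7 ms per frame, a 4K framebuffer alone is around 32 MB, and the bus bandwidth caps how much data can move within each frame.

```python
# Frame-budget arithmetic behind "less than 16 ms per frame" at 60 fps.
frame_budget_ms = 1000 / 60                # ms available per frame at 60 fps
bytes_per_4k_frame = 3840 * 2160 * 4       # one 4K framebuffer, 32-bit RGBA
mb_per_frame = bytes_per_4k_frame / 2**20  # bytes -> MiB

print(round(frame_budget_ms, 1))           # 16.7 ms per frame
print(round(mb_per_frame, 1))              # ~31.6 MB per 4K framebuffer

# Total data a 320 GB/s bus (GTX 1080) can move inside one frame budget:
print(round(320 * frame_budget_ms / 1000, 2))  # ~5.33 GB of traffic per frame
```

A renderer like Iray, by contrast, iterates for seconds or minutes per image, so raw bandwidth per frame matters much less than keeping the scene in VRAM at all.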
  • Ghosty12 Posts: 2,080
    edited April 2018
    ghosty12 said:
    joseft said:

    I remember many people were convinced the mainstream Pascal cards were going to have HBM2, with the 1080Ti likely to have 32gb of it. We are now moving towards the next generation, and still seems there is not going to be HBM2. 

     

    That would be due to the high cost of HBM2, and well GDDR6 is just around the corner and somewhat cheaper over HBM2..

     

    HBM2 proved to have major productions issues and costs went up. Sticking with it would have been a problem. AMD had big trouble keeping their production up. Then add in a world wide RAM chip shortage and it compounds. At this point GGDR6 is more than suitable, especially for use in rendering. Memory speed is more important for gaming, which needs to render new frames in less than 16 ms to maintain a proper framerate. HBM's impact could be seen in its first products from AMD. The R9 Nano's performance improved at higher resolutions compared to Nvidia counterparts. I believe that had more to do with memory speed than than actual chip itself, as HBM was able to feed those large 4k frames much faster than GDDR5.

    Yeah HBM is a lot better to be sure; I even read an article about how it all works, and it is quite interesting. It is just a shame that what it takes to produce HBM2 memory is so convoluted.. Either way, what is even more interesting is how they are already working on HBM3, so HBM and HBM2 are already on the way out, so to speak.  But as with everything, I think we won't see HBM on consumer cards due to cost, not while there is GDDR5X and the soon to be released GDDR6 that is so much cheaper to produce, and that doesn't need a completely different PCB design to accommodate HBM..

  • kyoto kid Posts: 41,847

    ...makes sense for GPUs that are now being used primarily as the basis for supercomputers. Each node of Summit uses (I believe) a cluster of 8 Tesla V100s. Even at 16 GB that's 128 GB per node, and with something like 4,600 nodes that yields 588.8 TB of HBM2 memory alone.  With the added DDR4 memory, the entire system has 10 PB of memory resources.

    If Summit was upgraded to the 32 GB V100s, that would be about 1.18 PB of HBM memory.
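The arithmetic above checks out, using the poster's own figures of 8 GPUs per node and roughly 4,600 nodes:

```python
# Summit HBM2 totals from the figures quoted in the post.
gpus_per_node, nodes = 8, 4600

hbm_tb_16gb = 16 * gpus_per_node * nodes / 1000   # GB -> TB (decimal)
print(hbm_tb_16gb)       # 588.8 TB with 16 GB V100s

hbm_pb_32gb = 32 * gpus_per_node * nodes / 1e6    # GB -> PB (decimal)
print(hbm_pb_32gb)       # ~1.18 PB with 32 GB V100s
```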

  • Toonces Posts: 919

    What does the 8-16 GB mean? Is it 8 or 16 GB?

  • Why are we comparing this to the 1080 and not the 1080 TI?

    The 1080 (not Ti) is so 2016.  May 2016, to be precise.  The 1080 Ti came out in March 2017, with more than a third more transistors...and stuff.

  • hyteckit Posts: 167
    Toonces said:

    What does the 8-16 GB mean? Is it 8 or 16 GB?

    8GB to 16GB means 8GB to 16GB.  Depends on the architecture. But I'm guessing it comes in multiples of 4GB. So 8GB, 12GB, and 16GB.
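One caveat on the multiples-of-4 guess: with GDDR memory, capacity usually follows the bus width. A 256-bit bus means eight 32-bit channels, and with 8 Gb or 16 Gb GDDR6 chips (one per channel) that naturally gives 8 GB or 16 GB, while 12 GB configurations typically come from 192-bit or 384-bit buses. A sketch of that arithmetic, assuming one chip per channel:

```python
# Plausible VRAM sizes from bus width: one 32-bit GDDR chip per channel.
def vram_options_gb(bus_bits, chip_gbit=(8, 16)):
    """Capacities in GB for each available chip density (in gigabits)."""
    channels = bus_bits // 32           # each GDDR chip drives a 32-bit channel
    return [channels * d // 8 for d in chip_gbit]   # Gb per chip -> GB total

print(vram_options_gb(256))  # [8, 16] -> the leaked "8-16 GB" range
print(vram_options_gb(384))  # [12, 24] -> where 12 GB cards come from
```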

  • hyteckit Posts: 167

    AI and Machine Learning. Image editing with Nvidia

    https://www.youtube.com/watch?time_continue=132&v=gg0F5JjKmhA

  • kyoto kid Posts: 41,847

    ...this just shows the insanity of the tech curve today.  2 years old and people are already talking "legacy" hardware.

    Crikey, I had a 1964 Buick Special when I was in college into the 70s.  Drove and rode fine, was easy to maintain, got decent economy for the day, and actually looked "smart" (particularly compared to some of the odd if not outlandish 70s designs).  I currently ride a 36 year old Specialized Stumpjumper (the first trail/mountain bike) that I bought at a yard sale (for $35) and modified for commuting, as it's rugged and durable, has a nice stiff frame, yet doesn't weigh a tonne.  Those new ones with carbon/composite cantilever frames, shock absorbers (and several thousand dollar price tags)? Bah, you can't sprint out of the way of some speeding motorist on one of those, and you would need a lock setup that weighed at least twice as much as the bike to keep it from being stolen.

    There is something still to be said for homely and "old school", sadly, it seems the tech world is not the place.

  • Can't wait for the release. If it is sub 1000€ then I'll definitely buy it as soon as it is available
  • kyoto kid Posts: 41,847
    edited April 2018

    ...I could see 12 GB for whatever the top end 11xx card will be, as the Titan V would still offer more cores (5120) along with the 640 Tensor cores.  Unless there is also an upgrade of VRAM (24 GB?) for whatever the P5000's Volta successor is, I don't think a 16 GB GTX 11xx will be a reality.  I also really don't feel Nvidia would offer a GTX card with more VRAM than the Titan V unless the latter was also upgraded to 16 GB (which with HBM2 would be easy to do, as all that is needed is adding one more memory chip to each of the 4 stacks) and/or including NVLink support.

    In the Maxwell days the Titan-X and Quadro M6000 both had 12 GB (the latter upgraded to 24 GB about a year before cards based on Pascal architecture were introduced).

    Again based on past experiences with such "speculation", I don't think we will really know what the 11xx cards will have until they actually appear in the "silicon and plastic".
