GTX 1180 Preliminary Specs Update..

2456 Comments

  • kyoto kid Posts: 41,859
    edited April 2018

    ...one of the issues with having as much performance in a consumer card as a pro-grade one is that people will look for bargains, even pros.  It wasn't unusual for some to use $1,000 Titan Xs rather than $5,000 Quadro M6000s, as both cards had pretty much the same specs (save for drivers).  This no doubt had some impact on sales of the M6000 (as well as the 8 GB M5000) and was likely part of why its VRAM capacity was doubled to 24 GB without a price increase (GDDR5 had been around for a while and was pretty reasonable in both price and supply, so even though the upgrade cost more on the production end, it was a good move if it lured back customers willing to pay top dollar for the highest-VRAM card available).  Again, the decision to market the 1080 Ti with such an odd amount of VRAM (I am surprised it wasn't 10 GB) most likely had to do with not hurting sales of the Titan Xp, even though there was about a $500 price difference between the two and the 1080 Ti performed better in some benchmarks.

    Granted, there seems to be more of a divergence happening today, with Volta, HBM2, Tensor cores, and NVLink pretty much being reserved for the pro/compute lines, and the new flagship, the Quadro GV100 (basically the Volta successor of the P6000), getting another memory upgrade to 32 GB, a boost in core count, and NVLink support. 

    The odd "stepchild" in all this is the Titan-V, no longer branded "GTX" but not elevated to the "Quadro" line either.  It has the basic architecture and features of the new Quadros/Teslas but still relies on GTX drivers and does not support NVLink.  However, it is priced pretty much out of the gaming/enthusiast users' market, being 1,000$ more than the 16 GB Quadro P5000. It's big marketing point is being an "entry level" card for deep learning an AI, but without the ability to pool memory resources, it seems somewhat limited in this respect compared to its Quadro/Tesla siblings. 3,000$ can purchase a pretty decent gaming  or even rendering rig (pre- or custom built), particularly if prices come down to earth somewhat when the 11xx cards are released.

    Post edited by kyoto kid on
  • Kitsumo Posts: 1,221
    kyoto kid said:

    ...this just shows the insanity of the tech curve today.  2 years old and people are already talking "legacy" hardware.

    Crikey I had a 1964 Buick Special when I was in college into the 70s.  Drove and rode fine, was easy to maintain, got decent economy for the day, and actually looked "smart" ...

    There is something still to be said for homely and "old school", sadly, it seems the tech world is not the place.

    Hey, if you're into "old school", I've got a drawer full of PCI cards. Sound cards, winmodems, maybe even a joystick card with MIDI adapter. I'll let 'em go for half what I paid for them. And if you need a serial mouse(or several), I got you covered, lol.

  • outrider42 Posts: 3,679
    Toonces said:

    What does the 8-16 GB mean? Is it 8 or 16 GB?

    The answer is they don't know what Nvidia will do. But it's also common to have variants of the same board with different VRAM configurations. The 680 and 670 offered versions with 2GB and 4GB, so for that generation they had a spec with double the VRAM. I would be shocked if they went with 16GB for any GTX this soon. They will certainly have 8GB, since the 1070 and 1080 had that. The question is will there be a variant with more...and how much will it cost. As we get closer to launch, we will almost certainly get closer to the true spec of the card. There is always somebody willing to talk; you just can't keep a secret anymore, LOL.

    The reason the rumors suggest a 16GB possibility is because that is among the configurations listed for GDDR6. It can be stacked that way.

    I forgot to mention earlier that one of the big advantages of HBM is that it stacks very well. They can build large stacks of it on top of each other, saving space. That is why AMD was able to make the Nano so small, yet so powerful. HBM completely changed how they could approach board design, and it was a stroke of brilliance. I think HBM2 is probably why Nvidia was able to make the Titan V such a monster-size chip. Otherwise, the VRAM could have taken up too much space! They were able to devote more physical space to the larger chip.

  • kyoto kid Posts: 41,859

    ...HBM's main advantage is a larger memory bus and higher memory bandwidth than GDDR: instead of a 352-bit bus and 484 GB/s of bandwidth (1080 Ti), you are looking at a 3072-bit bus with 652 GB/s of bandwidth (Titan-V) or a 4096-bit bus with 717 GB/s (Quadro GP100).

    The memory interface is sort of like a highway: the higher the rating, the more "traffic lanes" available between the memory and the processor.
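    As a back-of-envelope check, those figures are just bus width (in bytes) times the per-pin data rate. A quick sketch; the per-pin rates below are my assumed values for each card's memory type, so treat them as assumptions:

    ```python
    def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
        """Theoretical peak memory bandwidth in GB/s: bus width in bytes
        multiplied by the per-pin data rate in Gbit/s."""
        return bus_width_bits / 8 * gbps_per_pin

    # 1080 Ti: 352-bit GDDR5X at 11 Gbps per pin
    print(peak_bandwidth_gbs(352, 11.0))    # 484.0
    # Titan-V: 3072-bit HBM2 at ~1.7 Gbps per pin
    print(peak_bandwidth_gbs(3072, 1.7))    # ~652.8
    # Quadro GP100: 4096-bit HBM2 at ~1.4 Gbps per pin
    print(peak_bandwidth_gbs(4096, 1.4))    # ~716.8
    ```

    The "traffic lanes" analogy maps directly: bus width is the number of lanes, and the per-pin rate is the speed limit.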

  • Petercat Posts: 2,321
    edited April 2018
    kyoto kid said:

    ...this just shows the insanity of the tech curve today.  2 years old and people are already talking "legacy" hardware.

    Crikey I had a 1964 Buick Special when I was in college into the 70s.  Drove and rode fine, was easy to maintain, got decent economy for the day, and actually looked "smart" (particularly compared to some of the odd if not outlandish 70s designs).  I currently ride a 36-year-old Specialised Stump Jumper (the first trail/mountain bike) that I bought at a yard sale (for $35) and modified for commuting, as it's rugged and durable, has a nice stiff frame, yet doesn't weigh a tonne.  Those new ones with carbon/composite cantilever frames, shock absorbers (and several-thousand-dollar price tags)? Bah, you can't sprint out of the way of some speeding motorist on one of those, and you'd need a lock setup that weighed at least twice as much as the bike to keep it from being stolen.

    There is something still to be said for homely and "old school", sadly, it seems the tech world is not the place.

    Heh. My first car (in the 70's) was a 1963 Pontiac Lemans with half a V8 and a rope drive instead of a driveshaft. Most comfortable seats I've ever had in a car, and the weirdest hubcaps. Drove it cross-country several times. That was when I was in my early 20's.
    Until then I had been strictly motorcycles, year round. In Missouri and Massachusetts. Good times.
    But then, I've always had a taste for the weird. I once owned a Suzuki RE-5 with the Wankel engine.
    Now I drive an '87 Cadillac Hearse. Good mileage... for a battleship. 0-60? Someday...

    Post edited by Petercat on
  • outrider42 Posts: 3,679


    kyoto kid said:

    ...HBM's main advantage is a larger memory bus and higher memory bandwidth than GDDR: instead of a 352-bit bus and 484 GB/s of bandwidth (1080 Ti), you are looking at a 3072-bit bus with 652 GB/s of bandwidth (Titan-V) or a 4096-bit bus with 717 GB/s (Quadro GP100).

    The memory interface is sort of like a highway, the higher the rating, the more "traffic lanes" available between the memory and processor.

    That was until GDDR6 came along, which can hit 864 GB/s on a 384-bit bus. On a 256-bit bus it can still do 576 GB/s, so that advantage is getting wiped out. The Titan V uses HBM2 mainly because GDDR6 was not being manufactured at the time.

    GDDR6 can beat HBM2 in several areas, so if the 1180 has it, it will be pretty darn fast.

    However, Iray is different from a video game. A video game swaps data in and out constantly as it draws new frames at very rapid rates. Iray does not do this. Iray loads the entire scene into VRAM, where it stays while the calculations are made. The scene itself does not change like it does in a video game, because you are rendering one frame. Thus super high bandwidth is not necessary for best performance. And if you run multi-GPU or CPU+GPU setups, then PCIe would serve as the bottleneck, not the VRAM bandwidth. Even the NVLink taps out at 300 GB/s, which even GDDR5X beats with ease. 
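    To put rough numbers on that bottleneck point, here's a small sketch comparing approximate peak rates. The VRAM figures are the ones quoted in this thread; the link rates are published theoretical peaks as I understand them, so take them as assumptions:

    ```python
    # Approximate peak transfer rates in GB/s. VRAM numbers are each card's
    # theoretical memory bandwidth; link numbers are what data moving between
    # GPUs (or between CPU and GPU) has to squeeze through.
    rates_gbs = {
        "PCIe 3.0 x16": 15.8,         # ~985 MB/s per lane x 16 lanes
        "NVLink 2.0 (total)": 300.0,
        "GDDR5X (1080 Ti)": 484.0,
        "HBM2 (Titan-V)": 652.8,
    }

    # Sort from slowest to fastest: any multi-GPU traffic is capped by the
    # link rate, which even GDDR5X comfortably exceeds.
    for name, rate in sorted(rates_gbs.items(), key=lambda kv: kv[1]):
        print(f"{name:20} {rate:7.1f} GB/s")
    ```

    Once the scene is resident in VRAM, Iray reads it at the VRAM rate; only the initial upload (and any cross-device traffic) crosses the much slower link.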

  • Toonces Posts: 919
    Toonces said:

    What does the 8-16 GB mean? Is it 8 or 16 GB?

    The answer is they don't know what Nvidia will do.

    Ah, ok, that's what I was wondering. Seems odd to have such a huge unknown factor, as it would be the deciding factor for most folks running the latest 10xx 8GB cards, especially for Iray users. Double memory is so much more important than the speed increase imo.

    Guess I'll just have to wait and see. :)

  • tj_1ca9500b Posts: 2,057

    Team Red has big plans over the next year...They have working 7nm GPU silicon in their labs now, and will be sampling by the end of the year, with market release to follow in 2018...

    https://wccftech.com/amd-has-working-7nm-gpus-running-in-labs-sampling-later-this-year/

    Note that 7nm Vega's preliminary specs are at 16GB to 32GB.  While this card is currently targeted at the high end, since AMD's primary target for this card is the AI market, the point is that AMD is also aiming high with the RAM.  I certainly expect NVidia to pre-empt AMD's thunder as much as they can, which is why I can certainly see a 16GB consumer card coming from NVidia shortly... for a premium of course!

    The E3 Expo begins on June 6th, and Team Blue, Red, and Green pretty much always have big announcements at E3, so we don't have too much longer to wait to see what they have in store... I'll be interested to see if AMD brings 7nm Vega to their keynote to tease us with its potential...

  • outrider42 Posts: 3,679

    Indeed, if AMD is threatening to push 16 GB, that is the main reason why Nvidia would have 16 GB versions of the 1180. Nvidia really hates getting one-upped at anything. Look at the 1070ti: Nvidia had never released a Ti version of the x70 series before. All it took was Vega 56.

    E3 could be when Nvidia makes their announcement, but its hard to say.


    Toonces said:
    Toonces said:

    What does the 8-16 GB mean? Is it 8 or 16 GB?

    The answer is they don't know what Nvidia will do.

    Ah, ok, that's what I was wondering. Seems odd to have such a huge unknown factor, as it would be the deciding factor for most folks running the latest 10xx 8GB cards, especially for Iray users. Double memory is so much more important than the speed increase imo.

    Guess I'll just have to wait and see. :)

    The specs are probably not finalized. Sometimes these things don't get nailed down until right before production. Part of that could be competition, as they always play games with each other to see what the other may be doing (like I said above). They are probably waiting to see what info they can dig up on AMD's next series. If it looks like AMD will go 16 GB, then they will respond. My thinking is they will launch multiple versions. There will almost certainly be an 8 GB model. There may be 10 or 12 GB versions to launch with it. They may release a 16 GB model later on to address AMD 7nm.

    You have to remember this is only a rumor, and should not be accepted as fact yet. Nvidia has not released any info on the 1100 series. And you also have to consider who the biggest customer base is, and that is gamers. GTX is focused on gaming. Iray is very niche, though other rendering applications are on the rise. I am betting Nvidia will push the 1180 as the ultimate 4K card. 4K can push VRAM, so they might release a 10 or 12 GB model.

  • tj_1ca9500b Posts: 2,057
    edited April 2018

    The other thing to keep in mind is that the GTX 1080Ti has been an incredible success for NVidia.  While yes, part of that was due to cryptomining, retailers simply couldn't keep those cards in stock.

    Our hobby/profession (depending on which category your Daz usage falls into) is amongst that group of people that helped make the 1080Ti such a success.  Simply put, consumers have demonstrated that they are willing to pay $700 or more for a high end consumer graphics card, so said design has worked out very well for Nvidia.

    Since the market forces for a high-end consumer card with lots of RAM clearly exist, I don't see NVidia settling for anything less than 12GB (I'm thinking 16GB is more likely).  And since they increased the total amount of RAM between the 9xx and 10xx series high-end cards by 33% or more, IMHO they will do so again this generation, ASSUMING that they can offer it at a compelling price point.

    I don't see Nvidia resting on their laurels this time.  They KNOW AMD is seriously nipping at their heels again, and they want to protect their market share. They can't simply coast like they were doing over the last few years, right up until Vega showed up last year.

    As for 7nm, Nvidia is currently playing their cards very close to their chest.  No news of any 7nm Nvidia GPU tapeouts that I've heard of or can find so far, so right now it appears that AMD will beat Nvidia to market with the first 7nm GPU offerings...

    Post edited by tj_1ca9500b on
  • kyoto kid Posts: 41,859
    edited April 2018


    kyoto kid said:

    ...HBM's main advantage is a larger memory bus and higher memory bandwidth than GDDR: instead of a 352-bit bus and 484 GB/s of bandwidth (1080 Ti), you are looking at a 3072-bit bus with 652 GB/s of bandwidth (Titan-V) or a 4096-bit bus with 717 GB/s (Quadro GP100).

    The memory interface is sort of like a highway, the higher the rating, the more "traffic lanes" available between the memory and processor.

    That was until GDDR6 came along, which can hit 864 GB/s on a 384-bit bus. On a 256-bit bus it can still do 576 GB/s, so that advantage is getting wiped out. The Titan V uses HBM2 mainly because GDDR6 was not being manufactured at the time.

    GDDR6 can beat HBM2 in several areas, so if the 1180 has it, it will be pretty darn fast.

    However, Iray is different from a video game. A video game swaps data in and out constantly as it draws new frames at very rapid rates. Iray does not do this. Iray loads the entire scene into VRAM, where it stays while the calculations are made. The scene itself does not change like it does in a video game, because you are rendering one frame. Thus super high bandwidth is not necessary for best performance. And if you run multi-GPU or CPU+GPU setups, then PCIe would serve as the bottleneck, not the VRAM bandwidth. Even the NVLink taps out at 300 GB/s, which even GDDR5X beats with ease. 

    ...however, NVLink lets you do something PCIe or SLI cannot: pool memory between cards for large-scale computational sets (such as rendering).

    There are MBs with NVLink slots, but they are expensive and primarily marketed towards data centres and high-level workstations (currently only the Quadro GP100 and Tesla V100 offer both PCIe and NVLink versions; odd that for the $9,000 GV100 they only offer a PCIe version, kind of like putting a restrictor plate on a high-performance engine).

    Post edited by kyoto kid on
  • kyoto kid Posts: 41,859
    edited April 2018

    Team Red has big plans over the next year...They have working 7nm GPU silicon in their labs now, and will be sampling by the end of the year, with market release to follow in 2018...

    https://wccftech.com/amd-has-working-7nm-gpus-running-in-labs-sampling-later-this-year/

    Note that 7nm Vega's preliminary specs are at 16GB to 32GB.  While this card is currently targeted at the high end, since AMD's primary target for this card is the AI market, the point is that AMD is also aiming high with the RAM.  I certainly expect NVidia to pre-empt AMD's thunder as much as they can, which is why I can certainly see a 16GB consumer card coming from NVidia shortly... for a premium of course!

    The E3 Expo begins on June 6th, and Team Blue, Red, and Green pretty much always have big announcements at E3, so we don't have too much longer to wait to see what they have in store... I'll be interested to see if AMD brings 7nm Vega to their keynote to tease us with its potential...

    ...again, unless NVidia does some major upgrading to their other high-end pro-grade cards, I really don't see them releasing a 16 GB consumer-grade one that could compete with their even more expensive pro/compute-grade ones.  That happened with the Maxwell Titan-X and Quadro M6000, and I think they realised the mistake.

    Anyway, does one really need 16 GB or 32 GB of VRAM to run a game?  I thought it was core count and memory speed/bus size (which translates to memory bandwidth) that affects frame rate. For us, yes, it would be wonderful, but again, we are a small subset of the total enthusiast community.  Don't get me wrong, if they actually did it I would be the first to admit the error of my shortsightedness. It's just that I've heard such claims before and found myself disappointed, which is why I take these predictions with a 50# salt lick.

    Like I mentioned, what they need to do is either add another layer to the memory stack of the Titan-V (for 16 GB) or give it card-to-card NVLink compatibility, then do like they did when they upgraded the M6000's VRAM and core count: keep the price the same. That way there would still be an incentive for the Titan if a 12 GB 1180 were released.

    Post edited by kyoto kid on
  • tj_1ca9500b Posts: 2,057
    edited May 2018

    More Vega 20 news:

    https://wccftech.com/amd-vega-20-vega-12-gpus-leaked-in-compiler-patch-vega-20-features-support-for-intrinsic-ai-instructions/

    While Vega 20 is aimed in part at the AI market, consumers are also part of Vega's target market.  Note the 32GB of HBM2...

    There are some leaked benches of a prototype Vega 20 linked to in the thread.  As with initial tests/leaked results of previous GPUs, the prototype was tested at just 1 GHz, but production GPUs are usually clocked much higher.  Vega 20 already looks very promising, even at that low speed.

    NVidia's top high-end card going any less than 16 GB would put them at a disadvantage, but we only have about a month until CES, at which point we'll know where things will fall memory-wise on the NVidia high end this year.  12 GB?  16GB?  Not much longer to wait now...

    Post edited by tj_1ca9500b on
  • kyoto kid Posts: 41,859

    ...the issue with the Vega cards is heat.  When the Vega Frontier (16 GB) was benchmarked, it was found to produce a lot of waste heat that couldn't even be mitigated by current liquid cooling technology. Maybe fine for an enterprise server farm or supercomputer that is kept in a refrigerated environment, but not for the system on top of your desk.

  • nicstt Posts: 11,715
    kyoto kid said:

    Team Red has big plans over the next year...They have working 7nm GPU silicon in their labs now, and will be sampling by the end of the year, with market release to follow in 2018...

    https://wccftech.com/amd-has-working-7nm-gpus-running-in-labs-sampling-later-this-year/

    Note that 7nm Vega's preliminary specs are at 16GB to 32GB.  While this card is currently targeted at the high end, since AMD's primary target for this card is the AI market, the point is that AMD is also aiming high with the RAM.  I certainly expect NVidia to pre-empt AMD's thunder as much as they can, which is why I can certainly see a 16GB consumer card coming from NVidia shortly... for a premium of course!

    The E3 Expo begins on June 6th, and Team Blue, Red, and Green pretty much always have big announcements at E3, so we don't have too much longer to wait to see what they have in store... I'll be interested to see if AMD brings 7nm Vega to their keynote to tease us with its potential...

    ...again, unless NVidia does some major upgrading to their other high-end pro-grade cards, I really don't see them releasing a 16 GB consumer-grade one that could compete with their even more expensive pro/compute-grade ones.  That happened with the Maxwell Titan-X and Quadro M6000, and I think they realised the mistake.

    Anyway, does one really need 16 GB or 32 GB of VRAM to run a game?  I thought it was core count and memory speed/bus size (which translates to memory bandwidth) that affects frame rate. For us, yes, it would be wonderful, but again, we are a small subset of the total enthusiast community.  Don't get me wrong, if they actually did it I would be the first to admit the error of my shortsightedness. It's just that I've heard such claims before and found myself disappointed, which is why I take these predictions with a 50# salt lick.

    Like I mentioned, what they need to do is either add another layer to the memory stack of the Titan-V (for 16 GB) or give it card-to-card NVLink compatibility, then do like they did when they upgraded the M6000's VRAM and core count: keep the price the same. That way there would still be an incentive for the Titan if a 12 GB 1180 were released.

    Agree.

    Seems crazy to release something that competes with their own pro cards; sure, they have upped the RAM on some products. Gamers are fine with 8GB, and whilst that might increase for a percentage of them, most gamers don't need or want to pay for more. Rendering on consumer cards is more niche.

    It would be great to see 16, but I just don't see it.

  • outrider42 Posts: 3,679

    4k changes things. The amount of memory needed to run 4k is drastically higher than 1080p. So if they are going to position the new line as the ultimate 4k cards, they will need the VRAM to claim that. Even 8gb is pushing it for 4k gaming. And they pretty much have to stress 4k, the 1070 is already overkill for most 1080p gaming. A new lineup has little purpose unless 4k is a big focus.

    Nvidia is pushing GPUs to do everything. Non gaming uses of GPU are growing at a rapid rate, that's where Nvidia's largest growth is at. While they have pro series Quadro, even they know most people use GTX instead of Quadro because of costs. While gaming is their bread and butter, it would be foolish to ignore non gamers. AMD does a lot of non gaming tasks very well, so Nvidia has to address that.

    However, I am positive they will have 8gb models at launch. The question is if they will offer larger models at a premium. They might do this. That possibility increases if it looks like AMD will launch cards with more than 8gb.

  • kyoto kid Posts: 41,859

    ...well we are just over a month or so away, we will see what comes out in the wash when it does.  Again not pinning high hopes on anything being forecast by tech blogs and sites.

  • Ghosty12 Posts: 2,080
    edited May 2018

    Added an update on the supposed specs of the GTX 1180.

    Post edited by Ghosty12 on
  • Greymom Posts: 1,140

    Just saw this:

    https://www.digitaltrends.com/computing/nvidia-gtx-1180-graphics-card-database/

    So, 16 GB GDDR6 VRAM, 3584 shader units, 200 watts, if this is correct.

  • nicstt Posts: 11,715
    edited May 2018
    Greymom said:

    Just saw this:

    https://www.digitaltrends.com/computing/nvidia-gtx-1180-graphics-card-database/

    So, 16 GB GDDR6 VRAM, 3584 shader units, 200 watts, if this is correct.

    I don't believe it; almost sounds more like they're trying to increase site traffic.

    Rumoured specs, such as those listed, can prevent folks from buying the competition; we've had such info change before iirc.

    16GB sure would be nice; can't see it happening though.

    Post edited by nicstt on
  • Greymom Posts: 1,140
    nicstt said:
    Greymom said:

    Just saw this:

    https://www.digitaltrends.com/computing/nvidia-gtx-1180-graphics-card-database/

    So, 16 GB GDDR6 VRAM, 3584 shader units, 200 watts, if this is correct.

    I don't believe it; almost sounds more like they're trying to increase site traffic.

    Rumoured specs, such as those listed, can prevent folks from buying the competition; we've had such info change before iirc.

    16GB sure would be nice; can't see it happening though.

    Yeah, at this stage "vaporspecs" are almost the rule, but I am hoping for 16 GB...and a winning Powerball ticket  : )

  • kyoto kid Posts: 41,859
    nicstt said:
    Greymom said:

    Just saw this:

    https://www.digitaltrends.com/computing/nvidia-gtx-1180-graphics-card-database/

    So, 16 GB GDDR6 VRAM, 3584 shader units, 200 watts, if this is correct.

    I don't believe it; almost sounds more like they're trying to increase site traffic.

    Rumoured specs, such as those listed, can prevent folks from buying the competition; we've had such info change before iirc.

    16GB sure would be nice; can't see it happening though.

    ...exactly.  Remember all the talk on these sites of a 4 GB boost to the 980 and 970 Ti versions?  The former wound up at 6 GB and the latter never saw production.

    I'll put my money on maybe a 12 GB 1180 with possibly a boost to 16 GB for the Titan-V.

  • kyoto kid Posts: 41,859
    Greymom said:
    nicstt said:
    Greymom said:

    Just saw this:

    https://www.digitaltrends.com/computing/nvidia-gtx-1180-graphics-card-database/

    So, 16 GB GDDR6 VRAM, 3584 shader units, 200 watts, if this is correct.

    I don't believe it; almost sounds more like they're trying to increase site traffic.

    Rumoured specs, such as those listed, can prevent folks from buying the competition; we've had such info change before iirc.

    16GB sure would be nice; can't see it happening though.

    Yeah, at this stage "vaporspecs" are almost the rule, but I am hoping for 16 GB...and a winning Powerball ticket  : )

    ...for myself it's a winning Megabucks ticket.  Yeah, not as big a jackpot, but then my name doesn't get plastered all over the nation (or world), nor will I suddenly see emails from purported long-lost family members, friends, or spouses (never been married) I never knew I had, from countries I never set foot in.

    At my age, the current MB jackpot for tomorrow's draw ($5.7 million) would be more than sufficient, and because it is a state lotto and considered a "voluntary tax", it is only subject to federal taxes.

  • tj_1ca9500b Posts: 2,057

    4k changes things. The amount of memory needed to run 4k is drastically higher than 1080p. So if they are going to position the new line as the ultimate 4k cards, they will need the VRAM to claim that. Even 8gb is pushing it for 4k gaming. And they pretty much have to stress 4k, the 1070 is already overkill for most 1080p gaming. A new lineup has little purpose unless 4k is a big focus.

    Nvidia is pushing GPUs to do everything. Non gaming uses of GPU are growing at a rapid rate, that's where Nvidia's largest growth is at. While they have pro series Quadro, even they know most people use GTX instead of Quadro because of costs. While gaming is their bread and butter, it would be foolish to ignore non gamers. AMD does a lot of non gaming tasks very well, so Nvidia has to address that.

    However, I am positive they will have 8gb models at launch. The question is if they will offer larger models at a premium. They might do this. That possibility increases if it looks like AMD will launch cards with more than 8gb.

    Agreed! (particularly the 4K observation thing).

  • outrider42 Posts: 3,679
    kyoto kid said:
    nicstt said:
    Greymom said:

    Just saw this:

    https://www.digitaltrends.com/computing/nvidia-gtx-1180-graphics-card-database/

    So, 16 GB GDDR6 VRAM, 3584 shader units, 200 watts, if this is correct.

    I don't believe it; almost sounds more like they're trying to increase site traffic.

    Rumoured specs, such as those listed, can prevent folks from buying the competition; we've had such info change before iirc.

    16GB sure would be nice; can't see it happening though.

    ...exactly.  Remember all the talk on these sites of a 4 GB boost to the 980 and 970 Ti versions?  The former wound up at 6 GB and the latter never saw production.

    I'll put my money on maybe a 12 GB 1180 with possibly a boost to 16 GB for the Titan-V.

    Do note this same site was on the money with the 1080 specs, right down to the TDP of 150 watts that NOBODY would believe. When I posted that rumor, people here universally rejected the idea of such a low TDP. But it came to be true. As did the 1070ti. Nobody bats 1.000, but the rumor mills have been a lot more accurate in the last couple of years. Nobody can keep a secret in tech in our modern era. Apple can't stop the leaks even though they straight up tell their employees they will be fired AND sued. The bigger the company, the leakier they get.

    Also, the reports have stated "up to 16GB", not that they absolutely will have 16. The boards being produced have the ability to accept that amount, and it is very possible they are testing configurations with that much VRAM. As I said, my thinking is they want to have the option to use 16 if they want to. I believe firmly that they will release with 8, and perhaps have a 2nd model with 12 or maybe 16.

    The idea of teasing this high of an amount to stop people from buying the competition doesn't really work here. For one, AMD Navi is not coming until next year at the very earliest, and Vega hasn't exactly set the world on fire, so it would be pointless. But more so, teasing 16GB only to release with 8 could backfire: people led to expect 16 would be quite disappointed, and that could actually cause them to wait on Navi and give AMD a chance to strike back by releasing a 16GB card. You gotta think the long game here. Nvidia is not just looking at their spec; they are trying to predict what AMD will do, and now possibly even Intel. If they think there is a strong chance either will go 16, they will do the same. And this could also cause issues for AMD and Intel obtaining VRAM stocks. There's a lot of things in play here.

  • kyoto kid Posts: 41,859
    edited May 2018

    ...that "teasing" is what soured me from purchasing a 980 Ti when it was released after all the 8 GB talk.  I was pretty disappointed as the only other Nvidia card with 8 GB was the 2,000$ Quadro M5000  (there was an AMD GPU by Sapphire that had 8 GB priced at around 450$, but it was useless for Iray).

    Again, the only way I see Nvidia releasing a 16 GB enthusiast card is if they amp up all their pro/semi-pro (Titan) offerings so that it doesn't compete against them.

    Post edited by kyoto kid on
  • nicstt Posts: 11,715

    I would love to be wrong.

    The one thing pushing RAM for gaming is multi-monitor support at 4K, and of course cards able to deal with even a single 4K monitor aren't common atm.

  • hyteckit Posts: 167
    Greymom said:

    Just saw this:

    https://www.digitaltrends.com/computing/nvidia-gtx-1180-graphics-card-database/

    So, 16 GB GDDR6 VRAM, 3584 shader units, 200 watts, if this is correct.

    Wow. Amazing if true and around the $1000 price range.

  • outrider42 Posts: 3,679

    The 1080ti can play most games at 4k settings, though maybe not at max settings. Most benchmarks these days start with 4k performance, because that's really where the differences are going to be. Almost everything can play 1080p games beyond 60 fps now, so 1080p benches are getting silly, though you do find players who have 120 and 144 Hz monitors. For those players 1440p is the "sweet spot", as the 1080ti can deliver 1440p at those high frame rates, too. Even the consoles have moved to 4k, with both Sony and Microsoft shipping 4k capable machines, bringing 4k to the masses on top of the many 4k capable streaming boxes. 4k is not niche anymore. In a short period of time in my line of business, 4k went from an occasional sighting to about half of what I personally see, and that ratio seems to grow by the month. So I expect the 1100 series marketing to be all about 4k. I fully expect Jensen Huang to call the 1180 the "Ultimate 4K GPU". If he doesn't say those exact words, I'll be shocked.

    Pascal was heavily promoted as being VR-ready at its launch, but VR has sort of fallen short in popularity. However, 4k is here to stay. But as I said, the 1080ti can already do 4k decently, so the 1180 will have to be much faster, especially if it is going to sell for a high price. The 1080ti launched at $700. The 1180 would be competing with its own older sibling.

  • kyoto kid Posts: 41,859
    nicstt said:

    I would love to be wrong.

    ...same here of course, but not holding my breath.
