Leaked pics of GTX 1080?

At least as far as rumors go, many of these specs are not new. But now we have a picture of what is supposed to be the 1080's board: http://wccftech.com/nvidia-pascal-gp104-gpu-leaked/

8 GB of GDDR5X VRAM will be the new standard. 2560 CUDA cores. Powered by a single 8-pin connector. June 2016!

Are you hyped?


Comments

  • Ghosty12 Posts: 1,995
    edited April 2016

    It will also depend on the transistor count and what the bus interface will be, as this could be the bottom end of the 1000 series cards, guesstimating from some of the specs, since it has fewer cores than the 980 Ti. The one to look out for is whatever card ends up using the Nvidia GP100, with a whopping 16/32 GB of the new HBM2 RAM, a huge 3840 cores, and a massive 4096-bit bus. That would be the wet dream of anyone wanting raw rendering power, and it would probably cost the same as a decent car.

  • FSMCDesigns Posts: 12,640
     

    Are you hyped?

    Not really. While this might be great news for those with deep pockets whose main focus is Iray, it is overkill at the moment for my uses. Now, when they come out with a stock 6 GHz CPU (I expected more from Skylake), then I will get excited! Thanks for the info, though.

  • Ghosty12 Posts: 1,995
     

    Are you hyped?

    Not really. While this might be great news for those with deep pockets whose main focus is Iray, it is overkill at the moment for my uses. Now, when they come out with a stock 6 GHz CPU (I expected more from Skylake), then I will get excited! Thanks for the info, though.

    This might be interesting: http://www.techpowerup.com/219275/intel-readies-a-5-1-ghz-xeon-chip-based-on-the-broadwell-architecture.html They are getting there, slowly but surely.

  • outrider42 Posts: 3,679

    I am primarily interested in what is coming because my current GPU is about to turn 4 years old and needs upgrading. I'm probably more likely to buy the 1070 than the 1080 myself. It will be a long time before that full-size GP100 card with all 32 GB of HBM gets released, and it will probably be a server-only card. HBM2 is in short supply right now, which is why both AMD and Nvidia are looking at GDDR5X for their current models.

    Of course, there is that $130,000 DGX-1 supercomputer they are releasing that has eight Tesla GP100 GPUs, each with 16 gigabytes of memory. I'm sure a few of us could use one of those. I wanna play Solitaire on that machine. http://www.gizmag.com/nvidia-dgx1-supercomputer/42652/

  • FSMCDesigns Posts: 12,640

    Ha, great. I was trying to decide between Haswell and Skylake, and now they come up with a new in-between, LOL.

  • kyoto kid Posts: 40,688

    ...I'm waiting for the GP series that uses HBM2 instead of GDDR5 memory. If what they claim comes true, it will be a major breakthrough for GPU-based rendering.

  • mike9 Posts: 69

    Users here usually recommend using CUDA core count as an important reference for Iray performance. The 2560 CUDA cores of the 1080 are fewer than the 2816 of the 980 Ti. The performance situation could be similar to the 780 Ti vs the 980, where the 780 Ti holds up well. It looks like no big Iray performance jump is coming in June.

  • Havos Posts: 5,321

    It depends on the pricing. If the 1080 were the new mid-range card, priced at say 300-400 dollars, then there would be takers, even if the performance were only similar to the current 980 Ti. A later top-of-the-range 1000+ dollar card would then have superior performance to any existing card.

  • nicstt Posts: 11,715

    Unlikely to be the name; calling a card a 1080 when the card can display not just 1080p but 4K seems unlikely.

    But who knows, folks screw up all the time. :)

  • nicstt Posts: 11,715
    mike9 said:

    Users here usually recommend using CUDA core count as an important reference for Iray performance. The 2560 CUDA cores of the 1080 are fewer than the 2816 of the 980 Ti. The performance situation could be similar to the 780 Ti vs the 980, where the 780 Ti holds up well. It looks like no big Iray performance jump is coming in June.

    Only within the same architecture; using CUDA cores to compare different architectures is not valid. Pascal is new.

  • Renomista Posts: 921

    Not really. It is very probable that this will come out in the $1000 area, and at least for rendering I see no reason to choose this one over a Titan X.

  • mike9 Posts: 69
    nicstt said:

    Only within the same architecture; using CUDA cores to compare different architectures is not valid. Pascal is new.

    The rumors suggest they went down on power consumption and core count for the 1080 but up on the base clock. This thread: http://www.daz3d.com/forums/discussion/53771/iray-starter-scene-post-your-benchmarks/p9 suggests a 980 is about 15% faster than a 780 Ti in Iray. If it's the same with the first Pascal GPUs, IMO that's not a big jump for a new generation.
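    To see why raw core counts mislead across generations, here is a quick back-of-the-envelope sketch in Python. The core counts and base clocks below are the published figures for each card; the 15% Iray figure is the one from the benchmark thread linked above.

    ```python
    # Naive cross-generation comparison: CUDA cores x base clock.
    cards = {
        "GTX 780 Ti (Kepler)": {"cores": 2880, "clock_mhz": 875},
        "GTX 980 (Maxwell)": {"cores": 2048, "clock_mhz": 1126},
    }

    naive = {name: c["cores"] * c["clock_mhz"] for name, c in cards.items()}
    ratio = naive["GTX 780 Ti (Kepler)"] / naive["GTX 980 (Maxwell)"]
    print(f"Naive core x clock ratio (780 Ti / 980): {ratio:.2f}")  # ~1.09

    # Naive math says the 780 Ti should be ~9% faster, yet the benchmark
    # thread above measures the 980 about 15% faster in Iray. That ~25%
    # swing is the per-core architectural gain that raw core counts hide.
    ```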

  • Ghosty12 Posts: 1,995
    edited April 2016

    One interesting thing about the GP100 series is the use of HBM2 RAM, the number of CUDA cores, and so on. To give an idea of price, though, you would be looking at Titan X territory. The reason I suspect this is that over here in Australia, one place I look at is selling the AMD R9 Nano, which uses the new HBM memory architecture, for about $1100 AUD for a 4 GB card.

    A Titan X with 12 GB is over $1500 AUD. The major difference with the Nano is the length of the card: whereas most current video cards need a ton of room in the case, the Nano is not much longer than the PCI slot. And because it uses the new HBM memory, the memory bus is 4096-bit (currently you have 256- or 384-bit), the same as the Nvidia GP100 in that link. Imagine how it will be with 4096-bit.

    http://www.centrecom.com.au/gigabyte-radeon-r9-nano-4gb-hbm-memory-gaming-graphics-card

    http://www.centrecom.com.au/gigabyte-geforce-gtx-titan-x-12gb-monster-graphics-card

    As you can see, the price is rather steep considering the amount of RAM.

  • kyoto kid Posts: 40,688
    edited April 2016
    mike9 said:

    Users here usually recommend using CUDA core count as an important reference for Iray performance. The 2560 CUDA cores of the 1080 are fewer than the 2816 of the 980 Ti. The performance situation could be similar to the 780 Ti vs the 980, where the 780 Ti holds up well. It looks like no big Iray performance jump is coming in June.

    ...the benefit I see is the higher GPU memory limit. This looks to be the first 8 GB GTX GPU (which is what rumours said the 980 Ti would have before its release). Having less chance of the render file exceeding GPU memory and dumping to CPU mode is, by itself, an effective speed increase. Also, if the GPU chip itself is faster at calculations, that should come into play as well. Another deciding factor is the GPU interface. If a new, faster version of PCIe comes along, it will also affect performance. Nvidia's NVLink (to be used in their forthcoming mini supercomputer) has a pipeline between GPU and CPU, as well as between the linked GPUs, that is 5-12 times faster than PCIe 3.0.

    ...oh, and about that $129,000 supercomputer: don't count on using it for Daz or even 3DS, as it most likely will not be running Windows, considering its target is AI development and neural network research.

  • kyoto kid Posts: 40,688

    ...I expect the GP series to be priced above the Titan X, possibly somewhere between the GTX and Quadro series. 16 GB is a lot of memory, more than sufficient for even fairly complex scenes. The 32 GB versions are said to be geared toward professional use and will most likely become the new "Quadro" line (the first 16 GB GP100 version is to be a Tesla compute GPU).

  • mtl1 Posts: 1,501

    I have a hard time believing that it can be powered by a single 8-pin connector... that's a little ludicrous, isn't it?

  • Petercat Posts: 2,318
    mtl1 said:

    I have a hard time believing that it can be powered by a single 8-pin connector... that's a little ludicrous, isn't it?

    Oh, I don't know. If it's an 8-pin connector that uses 8 automotive 6 gauge jumper cables it ought to work just fine.

  • outrider42 Posts: 3,679
    kyoto kid said:

    ...I'm waiting for the GP series that uses HBM2 instead of GDDR5 memory. If what they claim comes true, it will be a major breakthrough for GPU-based rendering.

    Well, this should be GDDR5X, which should be twice as fast as regular GDDR5. Both AMD and Nvidia are kind of stuck waiting for HBM2 production to ramp up to mass-market levels, as rumors say the first new cards from AMD will also use GDDR5X instead of HBM2 as originally planned. I'm thinking HBM2 may come out for the 1080 Ti, whenever it comes down the pipe. I've heard that Pascal has controllers for both kinds of RAM. And I agree with those claims for rendering; I think the memory speed will be a big game changer for GPU rendering. I think if you had two GPUs that were exactly the same except for GDDR5 vs HBM2, the HBM2 card would offer a big advantage in GPU render speeds. When it comes to gaming, most people would not see a big difference unless they are playing in 4K or VR.

    Last year AMD released the first ever HBM cards. The testing of those cards was quite interesting. While a similar Nvidia card might run 1080p better, the AMD cards with HBM really shone at 4K. I think this proves the pixel-pushing power of HBM, because those cards were likely inferior to their Nvidia counterparts otherwise. The memory alone was what made those cards better at 4K. And that was just the first version of HBM. GDDR5X actually has higher bandwidth than what the AMD Fury offered, so cards using it will still see great gains.

    Also, like someone said, you cannot compare CUDA counts from different generations. The newer cards might have fewer CUDA cores, but those cores are more efficient at what they do.

    The 8-pin connector could be true. Maxwell was already pretty energy efficient, and Pascal is a major die shrink. The first Maxwell card released, the 750 Ti, surprised people by running purely on bus power. That wasn't a flagship, but the point was made.
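    For what it's worth, the single 8-pin rumor is easy to sanity-check against the PCIe spec's power limits (up to 75 W from the slot itself, 150 W from an 8-pin connector). A minimal sketch:

    ```python
    # PCIe power budget: the slot supplies up to 75 W and an 8-pin
    # connector adds 150 W (a 6-pin would add only 75 W).
    SLOT_W, EIGHT_PIN_W = 75, 150

    budget = SLOT_W + EIGHT_PIN_W
    print(f"Slot + single 8-pin budget: {budget} W")  # 225 W

    # Known TDPs for scale: the 980 is rated 165 W, the 980 Ti 250 W.
    for card, tdp in {"GTX 980": 165, "GTX 980 Ti": 250}.items():
        fits = "fits" if tdp <= budget else "does not fit"
        print(f"{card} ({tdp} W) {fits} in a slot + single 8-pin budget")

    # So the rumor only requires the 1080's TDP to land under ~225 W,
    # i.e. below the 980 Ti -- plausible after a 28 nm -> 16 nm shrink.
    ```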

  • kyoto kid Posts: 40,688
    edited April 2016

    ...I'm looking at the form-factor benefit of HBM2 vs GDDR5. Half the size of a 980 Ti or Titan X would be a huge bonus, as it would leave more "open" space inside the case so that components are not so crowded together. More "breathing space" means better airflow.

    Also, 8 GB is somewhat borderline for me. If the 1080 costs nearly as much as a Titan X, I'd opt for the latter to get the extra 4 GB of overhead, even if it is "older tech". This is why I have little interest in the 980 Ti: it would at best handle 55-60% of my scenes in memory, as opposed to around 85-90% with a Titan X and pretty much 100% with a GP series card that has 16 GB of HBM2. Not having your scene dump to CPU mode is a major speed advantage when rendering.
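    To put rough numbers on that "dump to CPU" penalty, here is an illustrative sketch using the fit percentages above. The 6x GPU-over-CPU speedup and the one-hour CPU render time are assumed figures purely for illustration, not measurements:

    ```python
    # Expected render time vs. how often a scene fits in VRAM.
    # Assumed for illustration only: a CPU render takes 60 minutes and
    # the GPU is 6x faster when the scene fits. Fit rates are the
    # percentages quoted above.
    CPU_MIN = 60.0
    GPU_SPEEDUP = 6.0

    fit_rates = {
        "980 Ti (6 GB)": 0.575,    # 55-60% of scenes fit
        "Titan X (12 GB)": 0.875,  # 85-90%
        "GP100 (16 GB)": 1.0,      # ~100%
    }

    for card, p in fit_rates.items():
        expected = p * (CPU_MIN / GPU_SPEEDUP) + (1 - p) * CPU_MIN
        print(f"{card}: ~{expected:.0f} min expected per render")

    # -> ~31, ~16, and 10 min: extra VRAM cuts the expected render time
    #    roughly in half even though all three GPUs are assumed equally
    #    fast, which is the point being made above.
    ```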

  • mike9 Posts: 69

    kyoto kid said:

    ...the benefit I see is the higher GPU memory limit. This looks to be the first 8 GB GTX GPU (which is what rumours said the 980 Ti would have before its release). Having less chance of the render file exceeding GPU memory and dumping to CPU mode is, by itself, an effective speed increase. Also, if the GPU chip itself is faster at calculations, that should come into play as well. Another deciding factor is the GPU interface. If a new, faster version of PCIe comes along, it will also affect performance. Nvidia's NVLink (to be used in their forthcoming mini supercomputer) has a pipeline between GPU and CPU, as well as between the linked GPUs, that is 5-12 times faster than PCIe 3.0.

    ...oh, and about that $129,000 supercomputer: don't count on using it for Daz or even 3DS, as it most likely will not be running Windows, considering its target is AI development and neural network research.

    Does NVLink even have the potential to change much? My guess is Iray loads (given enough memory) the whole scene into GPU RAM at the start of the render. If that's the case, NVLink shouldn't improve much beyond the initial load time.
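    A back-of-the-envelope check supports that guess. Assuming Iray uploads the scene once over PCIe 3.0 x16 (roughly 16 GB/s), and taking the 5-12x NVLink figure quoted earlier in the thread:

    ```python
    # One-off scene upload time: PCIe 3.0 x16 moves roughly 16 GB/s.
    PCIE3_GBS = 16.0
    SCENE_GB = 8.0  # assumed: a scene that fills an 8 GB card

    pcie_s = SCENE_GB / PCIE3_GBS
    print(f"PCIe 3.0 upload of {SCENE_GB:.0f} GB: ~{pcie_s:.2f} s")

    for factor in (5, 12):  # the 5-12x NVLink figure quoted above
        print(f"NVLink at {factor}x: ~{pcie_s / factor:.2f} s")

    # Shaving half a second off a render that runs for minutes or hours
    # is noise, so a faster link mostly matters for multi-GPU compute,
    # not for a single upfront scene upload.
    ```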

    outrider42 said:

    I think if you had to gpus that were exactly the same except for GDDR5 vs HBM2, the HBM2 card would offer a big advantage in gpu render speeds.

    HBM2 is interesting, but from my understanding it mainly improves the transfer rate via a higher bus width. It uses a lower clock but a wide bus. Is it known whether Iray works close to the memory bus limit? My guess is Iray will mainly benefit from lower transfer/access times, but I don't know how HBM2 compares there with GDDR5 or GDDR5X.
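    The bandwidth side is easy to put numbers on with the standard peak-bandwidth formula (bus width times effective clock, divided by 8 to get bytes). Using the rumored and official figures from this thread:

    ```python
    # Peak memory bandwidth = bus width (bits) x effective clock (GHz) / 8.
    def bandwidth_gbs(bus_bits, effective_ghz):
        return bus_bits * effective_ghz / 8

    configs = {
        "GDDR5, 384-bit @ 7 GHz (980 Ti)": (384, 7.0),
        "GDDR5X, 256-bit @ 12 GHz (1080, rumored)": (256, 12.0),
        "HBM2, 4096-bit @ 1.4 GHz (Tesla P100)": (4096, 1.4),
    }

    for name, (bus, ghz) in configs.items():
        print(f"{name}: {bandwidth_gbs(bus, ghz):.0f} GB/s")

    # -> 336, 384, and ~717 GB/s: HBM2's slow clock is more than offset
    #    by the very wide bus. Whether Iray is actually bandwidth-bound,
    #    and how the latencies compare, this simple formula cannot say.
    ```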

  • kyoto kid Posts: 40,688
    edited April 2016

    ...reading more, NVLink seems more suited to purely computational functions (e.g., supercomputers built with Tesla GPUs), so most likely it would not impact rendering all that much, since in rendering only one GPU's memory ever comes into play.

    The newer GPU core, which is capable of faster processing, may however make having more than, say, 2 GPUs for rendering unnecessary, even though the increase in the number of CUDA cores is more modest than some had speculated. Some articles I read were claiming 5,000-6,000 CUDA cores for the HBM2 Pascal GPUs.

    The advantage of HBM is that more memory can be stacked in a smaller form factor. The base GP100 series has twice the memory the 1080 will have, in a card about half the length, as the memory chips are arranged in an array of four stacks of 4 x 1 GB chips (16 GB total).

  • StratDragon Posts: 3,167

    So I should be able to get a 970 for about $99 in three months?

  • outrider42 Posts: 3,679

    Having 8 GB as the new standard only means that is the new minimum amount of VRAM a card will have. You will not see any 4 GB variants of the 1070/1080. I wouldn't be surprised at all if there are larger variants available, though possibly not at launch. But the 1080 is not a Titan, so don't expect Titan-size numbers. I'm sure the Ti and Titan versions of Pascal will come along in 2017 with those kinds of stats, just like they always do, along with a Titan-sized price to match. I'm thinking the 1080 will be a $500-600 card at launch. You have to consider this is replacing the 980, which launched at $500. People who own Titans already will likely shrug at such peasantry. This card is not for them.

    Nvidia cards retain their value well. Even a 4-year-old 670 is still going for $130-170 (I saw some asking $200!) on eBay, even as the 960 is available for $200 brand new. There will be price drops like always, but nothing extreme.

    If Iray does not make use of the bandwidth that will be available with the new memory now, I'm sure Nvidia will update it to do so later, it being Nvidia-based software.

  • nicstt Posts: 11,715
    mike9 said:
    nicstt said:

    Only within the same architecture; using CUDA cores to compare different architectures is not valid. Pascal is new.

    The rumors suggest they went down on power consumption and core count for the 1080 but up on the base clock. This thread: http://www.daz3d.com/forums/discussion/53771/iray-starter-scene-post-your-benchmarks/p9 suggests a 980 is about 15% faster than a 780 Ti in Iray. If it's the same with the first Pascal GPUs, IMO that's not a big jump for a new generation.

    I agree, if the difference is the same; but Nvidia seem to be indicating a ten-times increase. We'll have to wait and see.

  • StratDragon Posts: 3,167
    edited April 2016

    Asking price and getting price are not the same. My GTS250 goes for more now than it did when I bought it used, but I don't know who's buying them, if anyone is.

  • StratDragon said:

    Asking price and getting price are not the same. My GTS250 goes for more now than it did when I bought it used, but I don't know who's buying them, if anyone is.

    Perhaps people who, for some reason (such as space or specialised software), cannot use more recent cards in their systems, and so have to source second-hand when they need to replace a part.

  • AndyGrimm Posts: 910

    Used 6 GB Titans go for 300-350 USD (CHF) here in Switzerland. They were at Titan X prices 2-3 years ago.

  • mtl1 Posts: 1,501
    kyoto kid said:

    ...I'm waiting for the GP series that uses HBM2 instead of GDDR5 memory. If what they claim comes true, it will be a major breakthrough for GPU-based rendering.

    outrider42 said:

    Well, this should be GDDR5X, which should be twice as fast as regular GDDR5. Both AMD and Nvidia are kind of stuck waiting for HBM2 production to ramp up to mass-market levels, as rumors say the first new cards from AMD will also use GDDR5X instead of HBM2 as originally planned. I'm thinking HBM2 may come out for the 1080 Ti, whenever it comes down the pipe. I've heard that Pascal has controllers for both kinds of RAM. And I agree with those claims for rendering; I think the memory speed will be a big game changer for GPU rendering. I think if you had two GPUs that were exactly the same except for GDDR5 vs HBM2, the HBM2 card would offer a big advantage in GPU render speeds. When it comes to gaming, most people would not see a big difference unless they are playing in 4K or VR.

    Last year AMD released the first ever HBM cards. The testing of those cards was quite interesting. While a similar Nvidia card might run 1080p better, the AMD cards with HBM really shone at 4K. I think this proves the pixel-pushing power of HBM, because those cards were likely inferior to their Nvidia counterparts otherwise. The memory alone was what made those cards better at 4K. And that was just the first version of HBM. GDDR5X actually has higher bandwidth than what the AMD Fury offered, so cards using it will still see great gains.

    Also, like someone said, you cannot compare CUDA counts from different generations. The newer cards might have fewer CUDA cores, but those cores are more efficient at what they do.

    The 8-pin connector could be true. Maxwell was already pretty energy efficient, and Pascal is a major die shrink. The first Maxwell card released, the 750 Ti, surprised people by running purely on bus power. That wasn't a flagship, but the point was made.

    I remember there being some discussion on the power savings from Pascal. I guess I'm just skeptical that a card that *may* surpass the 980 Ti or even the Titan X will only need a single 8-pin connector to operate, because that means the power savings will be out of this world.

  • kyoto kid Posts: 40,688

    outrider42 said:

    Having 8 GB as the new standard only means that is the new minimum amount of VRAM a card will have. You will not see any 4 GB variants of the 1070/1080. I wouldn't be surprised at all if there are larger variants available, though possibly not at launch. But the 1080 is not a Titan, so don't expect Titan-size numbers. I'm sure the Ti and Titan versions of Pascal will come along in 2017 with those kinds of stats, just like they always do, along with a Titan-sized price to match. I'm thinking the 1080 will be a $500-600 card at launch. You have to consider this is replacing the 980, which launched at $500. People who own Titans already will likely shrug at such peasantry. This card is not for them.

    Nvidia cards retain their value well. Even a 4-year-old 670 is still going for $130-170 (I saw some asking $200!) on eBay, even as the 960 is available for $200 brand new. There will be price drops like always, but nothing extreme.

    If Iray does not make use of the bandwidth that will be available with the new memory now, I'm sure Nvidia will update it to do so later, it being Nvidia-based software.

    ...because of the configuration GDDR5 requires, I see 12 GB as probably the maximum limit for the xx80 line. The 12 GB Titan X is already a large-form-factor "double width" unit that requires a fair amount of room.

    This is where stackable HBM2 memory will again have the advantage: because of the thinner dies and different configuration, the cards can be single-width, half-length units. The 32 GB units would most likely be wider, as they will employ four stacks of eight 1 GB chips (unless HBM3 becomes a reality with 2 GB chips, though that may be a couple more years down the road).
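    The arithmetic behind that ceiling is straightforward if you assume, as was true at the time, 32-bit GDDR5 chips topping out around 1 GB each; HBM2 instead gets its capacity from die stacking. A rough sketch:

    ```python
    # GDDR5(X) capacity is pinned by the bus: each chip has a 32-bit
    # interface, and chips topped out around 1 GB each at the time.
    def gddr5_gb(bus_bits, gb_per_chip=1):
        return (bus_bits // 32) * gb_per_chip

    print(f"256-bit bus: {gddr5_gb(256)} GB")  # the 1080's 8 GB
    print(f"384-bit bus: {gddr5_gb(384)} GB")  # the 12 GB ceiling
    # (Clamshell mounting can double the chip count, at the cost of an
    # even more crowded board.)

    # HBM2 gets capacity from stacking dies on-package instead:
    print(f"4 stacks x 4 dies x 1 GB: {4 * 4 * 1} GB")  # Tesla P100
    print(f"4 stacks x 8 dies x 1 GB: {4 * 8 * 1} GB")  # rumored 32 GB parts
    ```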

  • outrider42 Posts: 3,679
    edited April 2016

    Maxwell saved a lot of power without a real die shrink. Now they have gone from 28 nm to 14/16 nm, roughly half the fabrication size of just last gen, so I do believe there will be some big power gains. 8 pins does sound very optimistic, but again, this is the first launch of the series, not a Titan. The Ti and Titan versions will crank up the power use and probably need more. I am betting we will see versions of the card with additional pins from vendors for overclocking purposes, but the OG board could be 8-pin. AMD is on record saying their 14 nm Polaris offers 2.5x the performance per watt of their previous 28 nm fabs. That's really something.

    This is just the launch model. I read somewhere that Pascal has controllers for both kinds of memory, so they could switch to HBM2 for their Ti/Titan models when it becomes readily available. While AMD is way behind in sales, they are still competing, and Nvidia cannot let AMD have too many "wins" in direct competition. Gamers will easily jump to AMD if they offer better cards at better prices. So Nvidia cannot afford to coast just because Maxwell pushed them to 80% market share; in fact, AMD made some slight gains in market share recently, and those gains are clearly due to AMD offering 8 GB models with strong performance against Nvidia's 4 GB models. AMD just announced their 490 and 490X will release in June as well, practically at the same time the 1080 is supposed to drop! The competition in the gaming market is fantastic for us customers, even if AMD cards can't do Iray. Don't worry, you will probably get your 16 GB Ti or Titan next year. It will likely command a serious premium, but I believe it will be produced in small numbers. That's why I am not worried about what the Ti or Titan will offer: I am certain they will be well beyond my price range. 8 GB will be enough for me; I'm used to making do with 2 GB right now! 8 GB would make me jump for joy.

    EDIT: An update to the original article is up now, and the picture that was shown is now believed to be a 1070 board, NOT a 1080. They are saying the 1070 will use GDDR5, while the 1080 will use the GDDR5X.

    Another tidbit in the article: fewer, faster CUDA cores with significantly higher per-thread throughput. So again, pay no attention to CUDA core counts, as they do not translate across generations.

    WCCF         | GTX 980 Ti | GTX 980        | GTX 1080     | GTX 1070     | Tesla P100 (GP100)
    GPU          | GM200      | GM204          | GP104        | GP104        | GP100
    Process Node | 28 nm      | 28 nm          | 16 nm FinFET | 16 nm FinFET | 16 nm FinFET
    Transistors  | 8 Billion  | 5.2 Billion    | TBA          | TBA          | 15.3 Billion
    CUDA Cores   | 2816       | 2048           | 2048?        | 1664?        | 3840
    VRAM         | 6 GB GDDR5 | 4 GB GDDR5     | 8 GB GDDR5X  | 8 GB GDDR5   | 16 GB HBM2
    Memory Bus   | 384-bit    | 256-bit        | 256-bit      | 256-bit      | 4096-bit
    Memory Speed | 7 GHz      | 7 GHz          | 12 GHz       | 8 GHz        | 1.4 GHz
    Bandwidth    | 336 GB/s   | 224 GB/s       | 384-320 GB/s | 256 GB/s     | 720 GB/s
    TDP          | 250 W      | 165 W          | TBA          | TBA          | 300 W
    Launch Date  | May 2015   | September 2014 | June 2016    | June 2016    | Q1 2017

    Source: http://wccftech.com/rumor-nvidia-pascal-gtx-1080-gddr5x-gtx-1070-f-gddr5/
