Semi-OT - nVidia Pascal cards debut: GTX 1080 and GTX 1070 announced!
http://arstechnica.com/gadgets/2016/05/nvidia-pascal-gtx-1080-details-watch-livestream/
Tonight, at 9pm Eastern, 6pm Pacific US time, is supposedly when nVidia will reveal the new Pascal-based cards. Iray users rejoice! The event is supposed to be livestreamed on Twitch.TV/NVIDIA (and they have a link for those outside the USA on the page here: http://www.geforce.com/whats-new/articles/nvidia-livestream-today-at-6pm-pst )
Let the speculation begin!

Comments
Iray users rejoice, or should it be: those who can afford the first-generation cards, which will be excluded from new functionality when the second-generation Pascal cards addressing Nvidia's shortfalls are released, rejoice? :-)
I thought the consumer cards were not due until late this year or early next; I just bought a GTX 970 two weeks ago.
Well, according to the article, the release will premiere the 1080 and 1070 cards. They're expected to use GDDR5X memory (not the HBM2) and to enter at similar price-points to the existing 980Ti and 970.
I wouldn't expect HUGE jumps in performance (without the HBM2) though they are expected to clock a good bit faster than the 9xx series. But they are expecting 8GB memory on the 1080.
DRIVER issues are a whole different story. I'm expecting driver issues with the new pascal-based cards for at least 6-8 months after release......(edit: mostly on the gaming side, but there may be some more fundamental issues to resolve....)
And whether we see 1080Ti cards or not......probably, but we'll have to wait and see. And these are just the reference designs. Wait until a few other manufacturers get ahold of them, then we'll see some fun.....
(And since they are entering at the same basic price-point as current 9xx series, I'd expect to see a drop in prices for the 4xx-9xx series cards. Maybe not a huge one, but they'll have to drop them some......)
I'm definitely going to want to get the 1080s, will have to set aside a few months' worth of paychecks though lol
hahaha Wait for it....
You sound like you work in IT like I do. Reminds me of every Apple OS or patch release... ever....
You're in the wrong field: either you were cut out to be a detective, or, like me, you've been doing this far too long.
Far too long. I always wait for the early adopters to sort out the worst of the bugs before I upgrade.
Patiently awaiting price drops on current hi end cards...
Unfortunately (for us) I think Nvidia will continue to make those cards, or keep a flow of chips available for manufacturers, so prices might not budge. If they do drop, I will update my 6-year-old GPU though.
I've been trying to put off getting a new computer until the end of the year so I can see how Pascal shakes things out.
I'm kind of curious whatever happened to Moore's law? It's been several years since I bought my last computer, and now that Oculus Rift is coming out I'm finally upgrading to a new computer, but I was surprised that even being willing to spend for top end equipment that the power/speed hasn't increased that much; I'll be going from an 8 core to another 8 core, and yes I'll go from a 2GB Nvidia to a 4GB Nvidia, but still it isn't as much of a jump as I would have expected.
Always before, whenever I bought a new computer it would much more than double the power/speed of the one it was replacing. Has technology development started to plateau?
Indeed; I'm excited, but not so excited I'm going to be buying.
CPU power has not been making the jumps it was 10 years ago, but GPU tech seems to have taken up that mantle.
Moore's Law still somewhat applies if you look at it from a price perspective. That is, the cost of a chip should fall by half every 18 months due to yield improvements, efficiencies, technological advances, etc. This extrapolation corresponds closely to the die shrinks that have dominated the industry for the past few decades -- and to the chips that stay on the previous nodes.
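That cost-halving framing is easy to sketch numerically. This is purely an illustration of the assumed 18-month halving period described above, not real pricing data:

```python
# Illustration only: project chip cost under an assumed
# 18-month cost-halving period (the "price perspective" on Moore's Law).

def projected_cost(initial_cost: float, months: float,
                   halving_months: float = 18.0) -> float:
    """Cost after `months`, if cost halves every `halving_months`."""
    return initial_cost * 0.5 ** (months / halving_months)

# A hypothetical $600 chip after 18 and 36 months:
print(projected_cost(600, 18))  # 300.0
print(projected_cost(600, 36))  # 150.0
```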
Also, don't leave out the chips in phones, tablets and all that stuff...
It's not that 16-core CPUs don't exist; there is probably not much of a market for them outside the niche they currently occupy: servers. So a 'what can be done' vs. 'what sells' factor is now driving things more than just advances in tech, and right now tablets/phones are 'hot'.
GPU-accelerated computing is becoming a very big deal, though, as GPUs become more and more integrated into PC architecture. I think this is one of the more interesting aspects going forward. If you factor in all of the different elements in a PC, there have been some big evolutions in just the past year or so. Not exactly Moore's Law, but the advancements are happening.
Also, the new cards are expected to ship with 8 GB VRAM. Both AMD and Nvidia were going to use the newer HBM2 VRAM, but they both ran into supply and cost problems. AMD's next cards are also rumored to be mostly GDDR5X, with only the highest of the high end using HBM, like the new dual-GPU card they just released for $1,500.
...the newest supercomputer being built is using the first HBM2 Pascal Tesla compute cards with 16 GB and a new interface developed by Nvidia called NVlink. NVLink offers an extremely fat pipeline between both GPU and CPU as well as directly between multiple GPUs compared to PCI 3.0.
It's official. GTX 1080 comes out May 27th worldwide, MSRP $599. GTX 1070 comes out worldwide June 10th, MSRP $379.
Both have 8GB GDDR5X. The 1070 is touted as STILL being faster than a Titan-X.
Simultaneous Multi-Projection tech, so VR/stereoscopy is virtually free computationally.
Wow.
They were showing units rendering OC'd at 2.1GHz core clock and over 4GHz memory clock. Holy crap! And those weren't the ones they were using as their benchmarks......the benchmarks were at stock speeds.
I'm officially drooling.....and my wallet is trying to squirm away, knowing it will soon be squeezed dry......
(and wondering what the OC'ing on an EVGA Classified GTX1080 will be.....)
Yes, but that's not a consumer card. You aren't going to see consumer cards carry HBM without a premium price attached. And looking at the specs for the 1080, I'm not sure that G5X is holding it back at all. The 1080 is a beast. 8 GB is still VERY Daz friendly. Nearly everyone who uses it has been working with less than that, considering the 980 Ti only has 6. Maybe next year, but I can't afford to wait that long with an aging 670 with only 2 GB. I managed to deal with that limitation, I think I can deal with "only" 8 GB.
Anyway, the reveal just wrapped.
A single 1080 is faster than two Titan X in SLI. The 1070 is also faster than the Titan X. I didn't expect that.
They also had their card running AIR cooled, clocked at over 2100MHz and running at 67C. Pascal is looking pretty sweet.
GTX 1080, twice the power of a Titan X and 3 times more efficient, $599
GTX 1070, $379
More info here http://wccftech.com/nvidia-geforce-gtx-1080-launch/
Now decision time. My 670 is starting to sweat bullets after this show. He's been a good friend for some time, we made it through the good times and the bad times. But its about time to move on.
yeah that 1080 is as sweet as I had hoped. Time to start saving @_@
I feel kinda bad for Tom though. Man that part was awkward.
If the graphs they were showing were to be believed, the TDP for the 1080 is running right at 180W. I'm thinking my 970GTX will stay, and drive my monitors and most of my regular gaming. The 1080GTX will be Iray only....
....maybe.
Oh, and IIRC, the 1080GTX was being said to be faster than two 980GTX in SLI. NOT two Titan-X in SLI......
Edit: Oh, and the stock clocks are supposed to be 1607MHz core (1733MHz turbo), but it was showing as air-cooled stable at 2100MHz core. That's about a 30% overclock, stable, and air-cooled at 67C. The third-party solutions are going to be insane......EVGA Classified with ACX3.0 cooling anyone?
Oh, it was 980 Ti in SLI. But still, the 1080 is much faster than a Titan X, not just a little. I really wonder where the 1070 stacks up here. They didn't give much info on it, other than price. That price is rather interesting too, because AMD very recently stated they want to compete hard at the $330 "mass market" price point. So I wonder just what AMD has in store in that range. This is important because it could influence Nvidia to alter that $380 price tag.
Can't wait to see the benchmarks. Some people were carrying 1080s out of the show. Lucky.
Well, in both the 1080 and 1070 cases, the MSRP launch prices are precisely $50 USD above the historical launch prices for the 980 and 970. They did say in the presentation that the 1070 is faster than a Titan-X, but not by how much. Even if it's just a tiny bit, that's like getting a Titan-X (with 2/3rds the memory) for almost 1/3rd the price!
Now, I'm betting most of the 'benchmarking' was based on gaming-style performance. Raw-compute-wise (as for Iray), it will probably be a little less impressive. The Titan-X has a larger memory bus (384 vs 256 bit) and a few more cores (3072 vs 2560), though the Pascal architecture and core/memory speeds may easily overcome that, since the Titan-X only clocks at 1000MHz, about 62% of the STOCK 1080GTX's core clock, and less than 50% of the overclocked speed demonstrated. That's a BIG performance gain. The 1070GTX has only 2048 cores, and is probably clocked very similarly to the 1080GTX. So I'd expect it to perform a little better than a Titan-X, even at stock speed. With OC'ing, it can probably get close to the performance of a stock 1080GTX.
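The bus-width comparison can be made concrete with the usual back-of-the-envelope formula: peak bandwidth = (bus width in bytes) x (effective data rate per pin). The data rates below are the commonly reported figures for these cards, so treat the numbers as approximate:

```python
# Rough peak memory bandwidth: (bus_bits / 8) bytes * effective data rate (Gbps).
# Ignores real-world efficiency; upper bound only.
def peak_bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

# Commonly reported figures (approximate):
titan_x  = peak_bandwidth_gbs(384, 7.0)   # GDDR5  @ 7 Gbps  -> 336.0 GB/s
gtx_1080 = peak_bandwidth_gbs(256, 10.0)  # GDDR5X @ 10 Gbps -> 320.0 GB/s
print(titan_x, gtx_1080)
```

So despite the narrower bus, GDDR5X's higher data rate keeps the 1080 within a few percent of the Titan-X on raw bandwidth.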
Pascal is a BEAST.
(and the ones carrying out the 1080s were probably the ones who won the 'order of 10' contest and got free tickets to the event. But agreed, lucky buggers....)
It's twice as fast as 2 980's...not 980 Ti's. Big difference.
True. However, the performance is still getting CLOSE to two 980 Ti in SLI. The 980 Ti has about 300 more cores, but runs at a much lower core/memory clock (1000MHz vs 1600MHz stock speeds for the 980 Ti and 1080 respectively.)
With OC'ing the 1080, I could easily see it approaching the same raw clock-cores of two stock-speed 980 Ti. (OC'd 1080: 2560 cores * 2100 MHz = 5376 GHzCores. Two 980 Ti: 2 * 2816 cores * 1000MHz = 5632 GHzCores. So pretty darn close. (Yes, I know it doesn't scale linearly and directly like that, but it is a good approximation.))
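That clock-times-cores arithmetic can be written out as a tiny script. As the post itself concedes, it ignores architectural differences and imperfect SLI scaling, so it is only a crude upper-bound comparison:

```python
# Crude throughput proxy: cores * clock. Ignores per-core IPC and
# architecture differences, and assumes perfect SLI scaling.
def ghz_cores(cores: int, clock_mhz: float) -> float:
    return cores * clock_mhz / 1000.0  # "GHzCores"

oc_1080   = ghz_cores(2560, 2100)       # overclocked GTX 1080
two_980ti = 2 * ghz_cores(2816, 1000)   # two stock 980 Ti in SLI
print(oc_1080, two_980ti)  # 5376.0 vs 5632.0 -> "pretty darn close"
```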
Just looking at the preliminary numbers, two 1070s might make more sense than one 1080. It depends on the price point and clocking of the third party cards.
As someone who has very recently bought a GTX970 and cannot see a way to upgrade within a year or two, I'm hoping that DAZ and the PA community don't shift into the 8GB space too soon. I'm already finding 4GB to be borderline and have to be careful with things like HD/SubD. IRay came along and with it a new generation of content specifically tailored for the NVidia render engine. I had to wait a long time before I could manage to scramble together the funds for a new PC with that GTX970 (just two weeks ago). How people have the patience to use IRay in CPU mode is still a complete mystery to me - I didn't even try because I stuck with Reality/Luxrender until the moment the new PC arrived.
By the way, I did consider trying to sell the GTX 970, but I doubt I will, for a couple of reasons: I'm expecting problems with Pascal drivers, etc., when it comes to Iray in DAZ Studio (I've already had to back out the latest GeForce drivers on the 970). Secondly, the second-hand price of the 970 is going to drop like a stone.
Pascal has been known to be on the Summer 2016 release schedule since at least last year. The rumor mill kicked into overdrive a few months ago about how special Pascal could be. I personally posted one of the rumor threads here. Nearly every spec revealed last night was in line with what rumors had been posting for at least a month. Even the single 8-pin rumor is true, and some people thought that was impossible! Prices are almost exactly in line with my own expectations, though I thought the 1070 would be more like $330-350. If anyone just bought a GPU, maybe it can be returned/exchanged? Check store policies. I never expected the 1080 to cost more than $600. Nvidia is trying to deal a knockout blow to AMD, and they can't do that if they price themselves too high. That said, I really want AMD to compete, and compete well. Competition is good for us all, and the 1000 series is a direct result of that. That is why AMD matters, even for us Daz users.
Yeah, thinking it over, two 1070s could prove to be an excellent value when it comes to Daz, and gaming in general, giving you over 4,000 extremely fast CUDA cores! However, one important difference between the 1070 and 1080 is that the 1070 still uses older GDDR5 memory, while the 1080 is using the new GDDR5X, or "G5X" as Huang called it. So there is a large difference in memory bandwidth between them. That could impact Daz, but I'm not sure.
As for CUDA counts, I'm pretty confident that won't be an issue at all. The only advantage the Titan X has is 12 GB, but unless a scene devours more than 8 GB, the 1080 will smoke it at rendering speed, and the 1070 might come very close. Oh yeah, and it uses a single 8-pin connector.
Here is more info on the 1070 specs.
http://wccftech.com/nvidia-geforce-gtx-1070-launch/
Another thing that excited me in a nerdy way was the talk about clean power. I love clean power; this is something that is SO overlooked by many PC users, even experienced ones. Huang did a decent job explaining it, but only scratched the surface. This is also why you want a good-quality power supply, not just some cheap PSU you found for $15. It is not just wattage, and it is not even the efficiency rating. It is about maintaining a voltage as perfectly steady as possible. This will help prevent some oddball and unexplained errors, and could have a direct benefit for rendering. Long story short: use a quality PSU, and have your PC and anything plugged into it connected to an excellent power conditioner, or better yet, a battery backup with power conditioning. A power conditioner is NOT just a surge protector. Don't be an idiot using a $20 surge "protector" on a $1000 computer. Protect your equipment. I guarantee you will see fewer errors and random crashes if you get the right stuff.
Anyway, we have to revive that rendering test thread when Daz users start getting their Pascal cards. I went and dug it up:
http://www.daz3d.com/forums/discussion/53771/iray-starter-scene-post-your-benchmarks
This way we can get standardized results and see just how much faster these cards are.