Slightly OT - (Rumor) Nvidia GTX-11 series launch imminent?
tj_1ca9500b
Posts: 2,057
in The Commons
Just saw this on WCCFTech - citing Tweaktown, the article states that we may see a GTX-11 launch within the next few hours...
https://wccftech.com/nvidias-upcoming-lineup-will-be-called-geforce-gtx-11-series/
The article also leads with the statement that the GTX-10 series is now listed as 'out of stock'....
While this article is tagged as 'rumor', the rumors that the WCCFTech staff have chosen to post as of late have been pretty spot on...

Comments
Link to NVidia GTC Keynote livestream here: It is happening very shortly (less than 45 minutes as I type this)...
https://wccftech.com/nvidia-gtc-2018-livestream/
Quadro GV100 is the new 'pro' card coming down the pike. 32 GB per card, and the memory is supposedly 'stackable' via NVLink 2.
Availability is supposedly immediate (now)...
Looks pretty sweet.
Daz3d is shown on the list of companies on board with RTX. Hmmm
New Volta will have 32GB HBM2 VRAM. Wow.
$399,000 for the world's largest GPU (DGX-2), which is essentially 16 tightly interlinked GPUs acting as one. 2 petaflops of computational capacity. You'd need something like a 100 Amp+ circuit to run it though...
Well, they say it draws 10 kW. So about 80 amps @ 120 VAC. Three 30 amp circuits could do it. Use 208 VAC and you could do it at only about 50 amps.
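If anyone wants to sanity-check that kind of math, it's just amps = watts / volts (a trivial sketch; the 10 kW figure is from above, and the voltages are typical US circuits):

```python
# Back-of-the-envelope current draw: amps = watts / volts.
# 10 kW is the DGX-2 figure quoted above; the voltages are typical US circuits.
def amps(watts, volts):
    return watts / volts

print(round(amps(10_000, 120)))  # ~83 A @ 120 VAC (hence "about 80 amps")
print(round(amps(10_000, 208)))  # ~48 A @ 208 VAC (hence "about 50 amps")
```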
Big CryptoMiners will actually LOOK at this. That cost is about the same as 250 1080Ti cards. For a LOT more performance. The smaller fry won't be able to afford it, but the bigger ones who are buying GPUs in bulk WILL.
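(That "250 cards" figure checks out if you assume a ballpark street price of roughly $1,600 per 1080 Ti, which is my guess, not an official number:)

```python
# Rough cost comparison; the 1080 Ti street price here is a guess on my part.
dgx2_price = 399_000  # USD, as announced
card_price = 1_600    # USD, assumed 1080 Ti street price
print(round(dgx2_price / card_price))  # ~249 cards for the same money
```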
Much appreciated on the clarification/for fine-tuning my seat-of-the-pants ballpark estimate... I couldn't remember the exact number (10 kW)... and 2400 watts per 20 amps @ 120 V is a number I have rattling around in my head (music industry amps thing...), which is what I was using as my loose guideline.
NVidia has a research team in Salt Lake City?
Kind of makes sense that Daz (which is based in SLC) likes Iray so much...
Just looked up what I THINK might be the aforementioned team's website... which seems to be more 'science' oriented.
https://www.sci.utah.edu/nvidia-coe.html
---
Also, that Quadro GV100 with the 5120 Cuda cores and 32GB of HBM 2 and RTX technology is apparently retailing for $9,000...
https://wccftech.com/nvidia-quadro-gv100-gpu-32-gb-hbm2-memory-announced/
For comparison, the Quadro P6000, with its 32 GB of GDDR5X, is around $5,000 these days.
Well, the keynote at GTC 2018 from nVidia has ended, and no mention of an 1100 series release.
OK, so nothing in the keynote about GTX 11 (or GTX 20, also a possible series name). That's not to say that it couldn't be announced separately today, but I'm guessing not.
That Star Wars oriented real-time raytracing demo at the beginning of the presentation was pretty sweet though. If we could get a fraction of that capability to cut our still and animation render times, that'd be pretty sweet...
... but all we want to know is "Will it run DAZ Studio?"
Redshift dev weighed in on real-time raytracing, puts it in perspective a bit.
I kind of suspected from the comments section that they wouldn't announce any gaming products in that speech, which makes sense. But if that's the case, why take all of the 10 series out of their store? Hopefully they're switching their full production to the new stuff.
That confirmed my suspicion that the raytracing everyone's talking about isn't real ray tracing, but just a normal 3D scene with some ray tracing effects added for realism. Kind of like how hoverboards turned out not to be hover-boards and AI isn't actual artificial intelligence. Marketing.
No one ever said it was 100% full ray tracing. This has been covered several times. But regardless, this hybrid engine creates far more lifelike images that run in real time, and at the end of the day, it is always the end result that matters. The Star Wars demo shown today had reflections bouncing off the shiny armor in multiple paths. They didn't just show one reflection; they showed the gun reflecting the armor, the armor reflecting the gun, and you could zoom in to see many more reflections between them on top of that, not to mention the reflections of the environment. Then they had colored lights moving, and all of these reflections reacted as reflections should. In real time. For all intents and purposes, the scene being rendered looked like it could have been right out of the movie. Unreal has also shown videos of digital human actors, and these were done on the same box.
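For what it's worth, the core math behind a single ray-traced reflection is dead simple; the hard part is doing it millions of times per frame, in real time. A minimal sketch, using the textbook mirror-reflection formula (nothing RTX-specific):

```python
# Mirror reflection of a ray direction d about a unit surface normal n:
# r = d - 2 (d . n) n  -- the textbook formula, nothing RTX-specific.
def reflect(d, n):
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray heading straight down at a floor bounces straight back up:
print(reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (0.0, 1.0, 0.0)
```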
So, about that box: the Star Wars demo was run on the older DGX, as was confirmed both by Unreal and by Huang during today's presentation. While the DGX is still a very expensive machine, it is only $68,000, NOT $158,000 like the DGX-1. The DGX-1 and the new, ultra-ridiculous DGX-2 got plenty of play today, but the big demo everyone geeked out over was actually running on the older and cheaper machine. And the DGX also has a deal on it where you can buy it for as cheap as $49,000. So Nvidia is willing to knock $19,000 off this box for this deal. I think that maybe points to what their markup is on these things.
Anyway, the point here is that this tech is not quite AS FAR off as some people are saying. Look at what happened today. Nvidia just doubled the VRAM in Volta cards, and went bananas on CUDA and Tensor cores on this gold-plated Quadro. They unveiled the new "biggest GPU ever" just a year after the Titan V. Let's keep in mind that the original Kepler Titan released in 2013, and now you can outclass it with an everyday 1070 or even 1060. That happened in 5 years. Now we have Volta GPUs powering that DGX box, which contains 4 of them. If they can keep up this pace, in 5 years we might see x70s that can beat the Titan V, and thus 4 of them would match the power seen in the original DGX box. Now that doesn't sound so far off into the future, does it? I think we have good reason to be excited, even if today is not the day we get to use this tech ourselves.
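Just to put rough numbers on that pace (the TFLOPS figures are approximate public specs, and the extrapolation is pure speculation on my part):

```python
# Pure speculation: annual growth needed for a hypothetical 2023 "x70"
# to catch the Titan V, the way the 1070 caught the 2013 Kepler Titan.
# FP32 TFLOPS figures are approximate public specs.
titan_v_tflops = 13.8
gtx_1070_tflops = 6.5
years = 5
growth = (titan_v_tflops / gtx_1070_tflops) ** (1 / years)
print(f"~{(growth - 1) * 100:.0f}% per year")  # roughly 16% per year
```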
But in reference to us: as I posted above, Daz3D's logo was on the list of companies working with RTX, and there was no mention of Iray by name, so this is noteworthy. There have been rumors of Iray getting some kind of quicker, maybe instant, mode that dials back some of the process-intensive tasks. That, and the AI denoiser thing. I do not believe these things are unrelated to the tech that Nvidia revealed this week, as these features filter around the Nvidia circle. So these things could have an impact on us this year.
...I stopped following such speculation on those sites back when the 980 Ti was being talked about; most kept predicting it would have 8 GB of VRAM.
...the Quadro P6000 has 24 GB of GDDR5X, as that is as much as the GP102 chip supports. Also, as I pointed out in the other thread, it falls woefully short in several performance departments compared to the GV100.
Also, the VRAM is stackable for pure compute purposes only, not for rendering.
Well, we have an update on the GTX 11 timeframe...
https://wccftech.com/nvidia-geforce-11-series-launching-around-july-gddr6-mass-production-timeline-confirms/
July, GDDR6 memory (roughly double the speed of GDDR5, at 1.35 V), and it looks like we will be looking at 8 GB and 16 GB variants.
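The "roughly double" claim is easy to sanity-check from the per-pin data rates, something like this (the 8 and 14 Gbps rates and the 256-bit bus are typical values I'm assuming, not figures from the article):

```python
# Peak bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
# The 8 and 14 Gbps rates and the 256-bit bus are assumed typical values.
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(256, 8))   # GDDR5:  256.0 GB/s
print(bandwidth_gb_s(256, 14))  # GDDR6:  448.0 GB/s -- roughly double
```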
....I can easily buy the GDDR6, as that's been in the discussion pipeline for about a year now, but 16 GB? Remember when these sites said the 980 Ti was going to include an update from 4 GB to 8 GB? Based on that, I see the 1180 being stepped up to 12 GB, for the sole reason that they still look to sell a mid-range Quadro, like a P5000 successor, which will most likely replace the current GP100 but with a Volta GPU processor.
Yeah, 16 GB sounds a little high to me. Nvidia is really careful about not letting their gaming cards cannibalize their professional and scientific card sales. It would be nice, though.
Keep in mind that the Ti series jumped from the 6 GB 980 Ti to the 11 GB 1080 Ti. So the report of 16 GB for an 1180 Ti makes sense to me... or maybe 15 GB if they want to be weird again...
Note that the article talks about the memory chips coming in two flavors (which would favor the 8 vs 16 argument). And of course they could vary the number of stacks a bit...
Looking over Micron's specs, GDDR5 comes in 2 Gb, 4 Gb, and 8 Gb densities, GDDR5X has 8 Gb modules, and GDDR6 comes in 8 Gb and 16 Gb, and it looks like it will have 32 Gb density modules at some point...
https://www.micron.com/~/media/documents/products/technical-note/dram/tned03_gddr6.pdf
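Note those densities are per chip and in gigabits, so total card capacity works out like this (the 8-chip count is my assumption for a typical 256-bit card):

```python
# Card capacity = number of chips * per-chip density; Micron's densities
# are in gigabits, so divide by 8 for gigabytes. 8 chips is an assumption
# (typical for a 256-bit bus with 32-bit chips).
def card_capacity_gb(num_chips, density_gbit):
    return num_chips * density_gbit / 8

print(card_capacity_gb(8, 8))   # 8 chips of 8 Gb  -> 8.0 GB
print(card_capacity_gb(8, 16))  # 8 chips of 16 Gb -> 16.0 GB
```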
We have a few months yet to speculate though. July is months from now...
...NVLink memory pooling is only useful for pure compute purposes, not rendering.
Hopefully something good will come out for rendering in iray.
Just pointing out (again) that NVidia said in the keynote (and in their press release) that, using NVLink(2), the memory can be combined for rendering larger, more complex models...
https://nvidianews.nvidia.com/news/nvidia-reinvents-the-workstation-with-real-time-ray-tracing
If it IS creating a combined memory pool for rendering purposes (as implied above), no doubt there will be a hit to latency. Anyways, that's what was said (and highlighted) in the Keynote...
Of course, the relevant rendering application has to be set up to take advantage of this, and apparently only the GP100 and (by extension, based on the keynote) GV100 support this interlinking (see here, scroll down to GPU rendering):
https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/documents/quadro-pascal-gpu-render-data-sheet-octane-us-FNL-hr.pdf
From the .pdf:
So the better question might be, which of the rendering engines out there are set up to take advantage of this? And will Daz3D even bother to implement this capability inside of Studio? I obviously have no idea how easy it would be to implement, or whether the Daz software team would find it worthwhile to spend time implementing a capability for essentially just 2 graphics cards out of NVidia's numerous catalog...
...from discussions I've read on other tech forums, NVLink only adds VRAM for compute purposes to assist a process like rendering; the scene itself must still fit within the VRAM of a single GPU in the pair. So should NVLink filter down to the GTX line (I doubt it will, as the connectors alone cost $450 each and you need two for both cards to exchange data back and forth), and say you have two 1180s with 12 GB each, the memory resources of the second card will aid in the computations, thus speeding up the process, but the size of the scene itself will still be limited to the memory of one card.
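In other words, something like this (a sketch of the distinction as I understand it, reusing the hypothetical two 12 GB 1180s from above; the names and logic are mine, not Nvidia's):

```python
# If NVLink pooling only helps compute, the scene must still fit on ONE card.
# Numbers reuse the hypothetical two 12 GB 1180s from above.
def scene_fits(scene_gb, vram_per_card_gb, pooled_geometry=False, cards=2):
    if pooled_geometry:  # what Nvidia claims for NVLinked GP100/GV100
        return scene_gb <= vram_per_card_gb * cards
    return scene_gb <= vram_per_card_gb  # the usual multi-GPU limit

print(scene_fits(20, 12))                        # False: 20 GB scene, one 12 GB card
print(scene_fits(20, 12, pooled_geometry=True))  # True, IF pooling covered geometry
```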
That doesn't change the fact that NVidia is advertising NVLinked GP100 and GV100 cards as being able to hold even larger scenes for rendering (specifically) than the 24 GB Quadro P6000. Note that I linked NVidia's own .pdfs on this subject.
They do note that the software needs to be able to support NVLink, hence my comment r.e. whether it was worth a given company's time and resources to actually implement this feature for just one, and as of this week two, cards...
So it boils down to: either everyone else has it wrong and rendering software companies simply aren't bothering, or Nvidia is lying about being able to combine the memory to hold larger scenes using NVLink with these two cards...
And on that note:
Apparently, the Chaos Group's V-Ray is able to use this capability...
https://www.chaosgroup.com/blog/v-ray-gpu-benchmarks-on-top-of-the-line-nvidia-gpus
In the above link, they cite an example using a scene whose geometry alone wouldn't fit into a single 16 GB card (GP100), and ended up with 26 GB of scene split between the two cards...
It's mentioned earlier on in the article that HBM2 makes this memory configuration quite feasible, due to HBM2's suitability for this sort of thing r.e. transfer rates across the NVLink...
Of course, at roughly $7,000 a card, I'm not surprised that we aren't seeing a lot of real world usage as of yet, although the GP100 has been around for a bit now...
...the GP100 has been out for just under a year, and yes, at that price, none of us will ever have one, let alone two, in our systems (and certainly not the GV100 at nearly $9,000). If HBM2 + NVLink is what makes memory stacking for rendering possible (and I still have some reservations on that), then the whole idea is moot for us anyway, as the forthcoming 11xx or 20xx cards will be getting GDDR6. Interestingly, the $3,000 Titan V doesn't have NVLink capability even though it has both Volta technology and HBM2.
https://www.techpowerup.com/239519/nvidia-titan-v-lacks-sli-or-nvlink-support
Here is some more info on the next cards (sort of) and the new GDDR6 memory... Looks like the new GDDR6 memory won't be available till June/July, so I should imagine that is when we will see the announcement of the new cards...