Comments
Are you sure about that?
https://www.anandtech.com/show/16137/nvidia-announces-ampere-rtx-a6000-a40-cards-for-pro-viz
"In terms of performance, NVIDIA is promoting the A6000 as offering nearly twice the performance (or more) of the Quadro RTX 8000 in certain situations, particularly tasks taking advantage of the significant increase in FP32 CUDA cores or the similar performance increase in RT core throughput."
I thought both the A40 and A6000 had 84 RT cores.
- Greg
Interesting read.
He's talking about racks, not graphics cards.
What card is in the racks?
A100 and yes I'm 100% sure.
Oh, OK... no big deal then, not like I will ever own one of those anyway :)
Just be aware these things are being sold as Tesla cards. Eventually older Teslas show up on the used market with regularity. With 40GB of VRAM these things are going to look really appealing in about 5 years when they are selling for $500 or so on eBay.
Yeah, the lack of RT is odd in those, but I was more referring to 64GB and more of video RAM trickling down to us via these gamer/3D hobbyist video cards that have the RT and tensor cores (is that what the AI cores are called?). That could be downright exciting as AI and physics simulation software improves to use that hardware.
All RTX cards have tensor cores. Nvidia is never going to give consumer cards huge amounts of VRAM.
First, the stuff really does cost money, and they have already wiped out the profit margin of the AIBs with the shit they pulled with this launch; notice how there are no AIB cards selling for the supposed MSRP of the 3080 or 3090?
Second, there is no call for it. PC gamers are not gaming at 4K, and textures just aren't that large.
+1
Although I'm trying to decide if it's fishy or incompetence. The 3000 release seems rushed and/or glaringly incompetent. I'm a paranoid conspiracy theorist, but I do like some evidence I can run with, and there is so little. A botched release can't be taken as evidence imo; it's the result of something, not the something in itself.
No. I think this all boils down to Jensen and Nvidia's arrogance.
Look at the performance figures for Ampere again more closely. The rasterization numbers compared to Pascal and Turing are anemic, and that's being nice. This was a die shrink with a massive increase in transistors, and rather than increasing the thing that consumers pay for, drawing graphics on screen faster, they put most of the "real estate" they gained into RT cores and tensor cores. RT cores matter to us and will be one of those things people turn on in single-player games if they have high enough fps, but it will always be the first thing to turn off in competitive games, if it is ever even possible to turn on. Tensor cores are amazing for AI, since they are specialized for doing matrix math, but they really don't have anything at all to do with computer graphics. Someone pretty high up made that decision and I'd put a lot of money on it being Jensen.
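If it helps to picture why tensor cores are great for AI but don't draw pixels: the one operation they accelerate is a small fused matrix multiply-accumulate, D = A*B + C, on half-precision tiles. Here's a rough numpy sketch of that operation purely as an illustration; the 4x4 tile size is just an example, not how the hardware is actually programmed.

```python
# Illustration only: the multiply-accumulate pattern tensor cores are built for,
# D = A @ B + C, with half-precision inputs and single-precision accumulation.
import numpy as np

A = np.random.rand(4, 4).astype(np.float16)   # half-precision input tile
B = np.random.rand(4, 4).astype(np.float16)   # half-precision input tile
C = np.random.rand(4, 4).astype(np.float32)   # accumulator tile

# Multiply the FP16 inputs, accumulate in FP32 -- pure matrix math,
# nothing to do with drawing triangles on screen.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D)
```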
Nvidia is in the same jam as all the fabless silicon companies. TSMC is the best foundry going but their capacity is being sucked up by the 800lb gorilla that is Apple. So they went looking for another source and Samsung wasn't a bad choice but that 8nm process had been stalled since 2018. I understand something has to be the first commercial product out of a new fab but I'm not sure I would have given the green light if I was in Jensen's shoes.
Choosing to go with Micron's GDDR6X VRAM, which they are clearly having production issues with, is another troubling point of this rollout. I get that they wanted higher-bandwidth VRAM than GDDR6 while avoiding the cost and supply issues of HBM2, so instead they have VRAM that can't sustain the rated bandwidth and overheats even at the lowered frequencies.
As to a sudden glut of games using 16GB of textures? LOL. Don't count on it. 4K monitors have been around for a while, as have 11GB and higher GPUs. There are plenty of games that really dazzle at 4K (take a look at the PC version of Horizon Zero Dawn if you don't believe me) which still don't go above even 6GB of VRAM. You might, might, see a few games creep up to 8GB, but going above 10? We get really blasé about 2GB of data, but that really is an immense amount. Do keep in mind that these console games would need to be downloaded, and for many people a 12GB game would be a several-hour, if not overnight, download.
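Quick back-of-the-envelope on that download; the connection speeds are just assumed examples, since everyone's line is different.

```python
# Rough download-time estimate for a 12GB game. The 10 and 25 Mbps figures are
# assumed example connection speeds, not anything from the thread.
game_size_gb = 12

for speed_mbps in (10, 25):
    megabits = game_size_gb * 8 * 1000            # GB -> megabits (decimal)
    hours = megabits / speed_mbps / 3600
    print(f"{speed_mbps} Mbps: about {hours:.1f} hours")
# ~2.7 hours at 10 Mbps, ~1.1 hours at 25 Mbps -- slower lines are where
# the "overnight download" scenario comes from.
```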
At the end of the day the crashing issues had nothing to do with the capacitor layout (no design was inherently immune, but it didn't affect everyone). It was due to erratic boost clock behavior which Nvidia fixed in a driver update. Just yet another example of the Internet taking something before all the facts are known and running with it.
The difference here is that this will change next year when there are millions of next-gen consoles out with 12GB of usable memory and I/O systems that are faster than Windows with current SSDs. No one is quite sure how much PC graphics memory will be enough. But looking back at last gen, the consoles had 5.5GB usable and terrible hard disk drives. 4GB on PC struggled a bit but was more or less OK at the beginning of the gen. 10GB ought to be fine on that basis for at least the next 2 years, but who knows how soon games designed around the new SSDs will start cropping up. Everything so far still looks to have half a leg in the last gen, even that Spider-Man game.
With the Xbox looking to have more 'standard' SSD performance, I think a gamer on a 10GB 3080 with a PCIe 4 drive will probably be OK for a couple of years. It might not even be 2 years before Nvidia's next push. When AMD have caught them before, they have come back quicker with their next release. 8 months between the 480 and 580 releases. 8 months! 15 months from the 680 to the 780. 16 months between the FX 5800 - their worst ever flagship graphics card - and the FX 5900 remake. Someone buying a 3080 could always just sell it in 2 years if it started to bother them.
The parallel with last gen was that 1GB graphics cards were enough right until 2013, Skyrim HD mods aside. I had to ditch my CrossFire 5850s because they ran out of memory before they ran out of performance. Then 1GB cards instantly became obsolete within a year of the new consoles releasing. And not long after, my CrossFire 7970s just couldn't cope with 3GB as the games really started pushing what the consoles could do. I've been thinking about my graphics card purchases since the mid-90s, and in more than half of cases I had to upgrade due to lack of memory before lack of performance.
They do make a little difference; to prevent cards without any from having issues, a slightly lower boost clock would work, as stability is a matter of just a few MHz.
They basically do a better (and I think different) job of cleaning up the signal.
The Igor's Lab post is an excellent article; it contains many facts. Also pertinent is that only cards without the specific components on the back exhibited the behaviour; according to Linus Tech Tips, a card supplier (I forget which one) noticed this aberrant behaviour with the card they supplied and advised him of it; kudos to the company for saying that - it also turned out to be a card without the expensive parts.
Cards with every single capacitor layout had reports of crashes (all SP-CAP, 5 SP-CAP/1 MLCC array, 4 SP-CAP/2 MLCC array, 6 MLCC array, etc).
You should watch this Hardware Unboxed video for a full explanation: https://www.youtube.com/watch?v=lhyCdraz54s
[edit] Had capacitor numbers transposed. Lysdexia?
From every report, they do not. The issue was that Nvidia did not provide a fully functional driver to the AIBs with the reference PCBs. They provided what is called a FurMark driver. It would only run FurMark, which doesn't let the cards boost for long or very high, as they are fully loaded from the very start of the run. This meant none of the AIBs could test their cards properly and catch the issue before launch. Reports are that they didn't get the real drivers till Nvidia put them out for the public.
There is precisely one party at fault for this: Nvidia. They could and should have known about this, and certainly should have released a reasonable beta driver to their own partners so they could test their cards fully. Nvidia's claim about not wanting leaks is laughable, since Nvidia leaks like a sieve. It strongly appears that what Nvidia wanted was to hamstring the AIBs in a launch cycle where Nvidia wants to sell their own cards, even though Nvidia is at a disadvantage in card and cooler design and is reliant on third-party manufacturing.
Very interesting; his vid says the following.
Engineers have said that mixing the chips would give better stability, but that this should not be taken on its own, as other factors come into play. The reason mixing them is a good idea is that they deal with different frequencies.
He also referenced the video I posted above, stating that one of the MLCC (?) chips affected performance above the stock clocks.
I don't know how that is any different from what I said, other than I was unaware that some cards I had previously thought were free of issues also had them.
So it does look like it's a driver issue on its own? However, it does appear that the best balance is to use both chips (your vid). I wonder if we'll get anything to substantiate that. Well, whatever, I won't be overclocking the card anyway - presuming that I go for Nvidia.
I haven't bought a card yet; being a beta tester isn't my thing when I have to pay for the privilege.
The real reason there is a mix of capacitor types on different AIB cards is that a lot of them are factory overclocked, and some capacitors are better equipped to handle higher frequencies. Higher frequencies do not automatically equal instability and crashing though. The instability and crashing was from bad Windows drivers. The same cards that were crashing under Windows 10 had zero issues in Linux. The ultimate conclusion here is that the capacitor type/layout was falsely blamed when the actual issue was faulty drivers. From my own observations with the new driver vs the old on my 3080, the core clock can still boost just as high, but there seems to be less spiking going on (which was likely the erratic behavior). Performance is identical. FYI, I myself had zero crash or instability issues on the old driver.
I agree that the issue was the driver; to claim, however, that higher frequencies have nothing to do with any particular crash is to ignore what overclocking does.
A stable overclock can come down to a matter of a few MHz. That isn't down to the driver but to the frequency. It will vary between cards, even from card to card of the same type; the silicon lottery is a trope for a reason.
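If anyone wants to watch that spiking (or the lack of it) for themselves, here's a small sketch using the nvidia-ml-py (pynvml) bindings to log the core clock once a second; the one-second interval and GPU index 0 are just arbitrary choices on my part.

```python
# Minimal clock logger, assuming `pip install nvidia-ml-py` (the pynvml module).
# Run a game or benchmark in another window and watch how the boost clock moves.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)        # first GPU in the system
try:
    while True:
        mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
        print(f"core clock: {mhz} MHz")
        time.sleep(1)                             # sample once a second
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```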
... It remains to be seen if the new drivers prevent all stability issues or only most.
I didn't claim that. I said it doesn't automatically lead to instability. I know all about overclocking and its perils, as I've done it for many years. Many people were led to believe that the 3080 crashes were the result of a particular capacitor layout coupled with high boost clock frequencies. We now know that's entirely false. It has nothing to do with any particular capacitor layout or high boost clock frequencies.
Also, these cards don't auto-boost to unstable frequencies out of the box. That's not how they're designed. There are multiple variables that govern them (BIOS limits, power limits, temperature limits, etc... which can be bypassed if you want to push the card further with a custom overclock).
Overclocking will always lead to instability if it doesn't have a cut-off; the point is to stop just before that instability occurs. It will vary from card to card, even between the same type, due to the silicon lottery; components have an effect, as some cards overclock more than others.
If the various components didn't have an effect, then all cards would possess exactly the same components - and I mean exactly.
We don't know it is entirely false; it is too early to say. I agree it is likely, but we still don't know.
Edit:
Consider these two posts. They both cite drivers as the reason for the large improvement; they both also state we don't know yet if they resolve the issue completely. The second also indicates that at least one manufacturer is making changes; how accurate this information is I don't know, but one thing I am confident about is that we don't know yet. I hope it is resolved, because if I do get a 3000 card, I want it to be reliable.
https://www.pcmag.com/news/nvidia-releases-driver-to-address-stability-issues-with-rtx-3080-cards
https://www.extremetech.com/gaming/315629-new-nvidia-drivers-may-improve-rtx-3080-stability-by-tweaking-voltages-boost-clocks
One article (both?) also cites customers claiming that the clock speed has been reduced, whereas others report an increase.
This story doesn't yet appear to be over.
More likely ... "to be continued".. seeing more reports of maybe January before cards are in stock again. The pre-built computers that have the new cards are also pretty much sold out.
This has all been very interesting, and I appreciate the civility amongst the differing opinions.
I am curious about Nvidia's play here... did they come out of the gate too soon because they were afraid of something, or had some insider corporate espionage on what AMD is releasing and wanted to get their product out first?
If they "knew" that Big Navi was far behind their offering, I can't believe they would step on their own "hotdog" with golf cleats like they did.
It is hard to say really, isn't it?
All we know is that they announced the cards, released them, and don't appear to have the stock.
Why? Either of your options seems odd from Nvidia's perspective. Their cards are great, especially compared to the ill will the 2000 series generated. Yet this seems to be a shot in the foot, with shortages, driver issues and even potentially hardware issues - although that remains unknown and may not actually be a real issue.
Yep, looks like Nvidia really tripped over their tie on this release. Insufficient product to cover the demand. Possible issues with the Samsung 8nm process yield that may have contributed to the insufficient product. Memory issues with the GDDR6X memory overheating at the planned clock speeds, causing Nvidia to clock the memory at a lower speed than 21.5 Gbps. Driver issues where Nvidia did not supply proper beta test drivers, and now there are card issues and finger-pointing all around. If Big Navi is really big, then it will not bode well for Nvidia, but it may be better for us, as Nvidia may release higher-memory cards earlier and hopefully at a lower price. It's probably not going to happen, but my wish is for an RTX 3070 Super Content Creator Edition with NVLink connections. Maybe they could call it a "RTX 3070 DAZ Edition".
I still think the bigger problem is the bots and scalpers. AMD will likely have the same issues on top of their own.
I don't think the shortages or the scalpers work against Nvidia. They haven't hurt Apple any. People jonesing for Nvidia cards are unlikely to switch to AMD.
In fun news: MSI subsidiary scalps 3080s.
https://www.gamespot.com/articles/msi-partner-caught-scalping-nvidia-rtx-3080-cards/1100-6483032/