What's even more interesting is the map download... 150GB apparently, more or less according to LTT's weekly WAN show broadcast
Flight sim discussion starts at 3:30
You knew more video memory was coming with real-time ray tracing. In 5 years' time it will be normal for your video card to have twice the RAM your CPU has, and much more.
What's the number one complaint with DAZ Studio? Renders being kicked out to the CPU because the video card ran out of RAM, or DAZ Studio crashing because the system ran out of RAM. They can talk about 3D rendering all they like, but eventually the GPUs have to be able to do the scenes on the GPU card.
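If you want to see how close a scene is to getting kicked to the CPU before you hit render, here is a rough sketch using the pynvml Python package to check free VRAM on the first GPU. This is just an illustration; you'd need pynvml installed, and the 8 GB threshold is a number I made up, not anything Iray actually uses.

```python
import pynvml  # NVIDIA management library bindings (pip install pynvml)

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)     # first GPU in the system
info = pynvml.nvmlDeviceGetMemoryInfo(handle)     # values are in bytes
free_gb = info.free / 1024**3
total_gb = info.total / 1024**3
print(f"GPU 0: {free_gb:.1f} GB free of {total_gb:.1f} GB")

# Made-up threshold: a heavy Iray scene can easily need more than this.
if free_gb < 8:
    print("Scene may not fit in VRAM -- the render could drop to CPU.")
pynvml.nvmlShutdown()
```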
Yes, ray tracing adds a lot to the VRAM requirements. In current games, the rays can only travel so far before they simply disappear because there is not enough VRAM. If you are going to ray trace large areas, you need a boatload of VRAM.
MS Flight Sim has been destroying PCs for years, well at least it used to, it has been ages since they released a new one. Every time a new version is released it requires big upgrades, and this new one is by far the most ambitious one yet. I mean, dude, they actually have a map of the entire Earth in this game. That is why the download is so massive, and that download is not even the entire thing. There would be no way to actually pull all the data of the Earth into this, so the way the game works is to stream in data of the area you are flying in while you play. This data comes from actual map data, and then the game adds in trees and things. So it is not 100% accurate, but pretty darn close. The game can even pull in the weather from that part of the world. It really is amazing. So this game will also use a lot of streaming data.
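Just to make the streaming idea concrete, here is a toy sketch of how a sim could keep only the terrain tiles around the aircraft in memory. The tile size, cache limit, and fetch function are all invented for illustration; this is not how the actual game does it.

```python
from collections import OrderedDict

TILE_DEG = 0.25        # pretend each terrain tile covers 0.25 x 0.25 degrees
CACHE_TILES = 256      # keep only the most recently used tiles resident

cache = OrderedDict()  # (lat_idx, lon_idx) -> tile data

def fetch_tile(lat_idx, lon_idx):
    # Placeholder for a network request to the map/photogrammetry servers.
    return f"tile({lat_idx},{lon_idx})"

def tiles_needed(lat, lon, radius=2):
    # All tiles within `radius` tiles of the aircraft's current position.
    base_lat, base_lon = int(lat // TILE_DEG), int(lon // TILE_DEG)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yield (base_lat + dy, base_lon + dx)

def update_streaming(lat, lon):
    for key in tiles_needed(lat, lon):
        if key not in cache:
            cache[key] = fetch_tile(*key)   # stream in what we don't have yet
        cache.move_to_end(key)              # mark as recently used
    while len(cache) > CACHE_TILES:         # evict tiles left far behind
        cache.popitem(last=False)

update_streaming(40.64, -73.78)             # e.g. somewhere around JFK
print(len(cache), "tiles resident")
```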
And actually, this game is our competition in a way. Flight Simmers can be very hard core. This might be the only game they even play, and they build everything around it. So yeah, the hard core flight sim crowd will be lining up for a 3090 ASAP. Considering the price and power required, they will likely be one of the groups buying 3090s first, so you may be competing with them to get your hands on these cards depending on how available they are.
It is interesting how these cards are perceived by different people. You will find some who hate on them because of their price and perhaps power use, or think that 24gb is silly. Then you have Daz Iray and other renderers that see these cards a bit differently because they want every drop of VRAM and performance they can get. And of course the flight sim crowd, and others like them who may use these cards for other things.
MSFS is very CPU limited at the moment though. Better GPU won't even do that much.
https://www.tomshardware.com/features/microsoft-flight-simulator-benchmarks-performance-system-requirements
This is true at 1080p and to some extent at 1440p, but at 4K the bottleneck indeed becomes the GPU.
A big issue is that the game runs DirectX 11 for some crazy reason. It is limited to basically 4 cores, so having faster cores matters more than having lots of cores. Then you have memory speed as a big factor. But again, these bottlenecks are removed when playing at 4K resolution, where the GPU becomes the limiting factor. I would imagine many would prefer playing at 4K to get the cleanest visuals in this stunning game. The biggest upgrade will be when it gets support for DirectX 12.
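To see why the bottleneck flips at 4K, here is a toy model. The millisecond figures are invented, not measured Flight Simulator numbers; the point is just that the frame time is set by whichever side is slower, and only the GPU cost grows with resolution.

```python
def fps(cpu_ms, gpu_ms_1080p, pixels, pixels_1080p=1920 * 1080):
    gpu_ms = gpu_ms_1080p * (pixels / pixels_1080p)  # GPU cost ~ pixel count
    frame_ms = max(cpu_ms, gpu_ms)                   # slower side sets the pace
    return 1000 / frame_ms

cpu_ms = 18.0       # hypothetical main-thread cost per frame (resolution independent)
gpu_ms_1080p = 8.0  # hypothetical GPU cost per frame at 1080p

for name, pixels in [("1080p", 1920*1080), ("1440p", 2560*1440), ("4K", 3840*2160)]:
    print(f"{name}: {fps(cpu_ms, gpu_ms_1080p, pixels):.0f} fps")

# At 1080p and 1440p the CPU's 18 ms dominates, so a faster GPU changes little.
# At 4K the GPU cost (~32 ms in this toy model) takes over and the GPU finally matters.
```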
One last thing could be PCIe 4.0. Some games have shown differences with 4.0, but so few cards support it right now that it is hard to truly see the impact it has. It is possible that this game may be affected by bus speed, and I think this game might take advantage of it. That would further enhance Ampere on AMD platforms (currently only AMD CPUs support PCIe 4.0). On top of that, the next wave of AMD CPUs promise much better single core performance, which might pull them ahead of Intel in gaming...and in Flight Simulator as well.
We are now 6 days away from the announcement. Strap yourselves in!
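For reference, the usual theoretical numbers for the bus, per direction on an x16 link (real-world throughput is lower):

```python
# GB/s per lane after encoding overhead: PCIe 3.0 runs 8 GT/s, PCIe 4.0 runs 16 GT/s.
per_lane_gbs = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}

for gen, lane in per_lane_gbs.items():
    print(f"{gen} x16: ~{lane * 16:.1f} GB/s")

# PCIe 3.0 x16: ~15.8 GB/s, PCIe 4.0 x16: ~31.5 GB/s -- roughly double, which mostly
# matters for a game that streams a lot of asset data over the bus while you play.
```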
Not entirely. I play at 1440p, and to have my 2080 Ti run at over 50% at all I had to increase the render scaling to 150. Even so it seems to be mostly limited by the CPU still, but I only have an i7-4770K so there's that. It's a damn slog overall, that's for sure, but damn is it beautiful. Best geography teacher ever as well.
It is, apparently, the new Crysis.
And at 8k, it's hilarious.
3090 24GB, 3080 10GB with possibility of 20GB model later
https://videocardz.com/newz/confirmed-nvidia-geforce-rtx-3090-has-24gb-memory-rtx-3080-gets-10gb
Thanks @volpier11, was just about to share that link! Only six days left to go!
Here's Nvidia's official sneak peek of their latest cards, discussing the cooling solutions and power connectors and such.
Basically hype, but if you are interested in such things, then the above video might interest you.
The official Nvidia event countdown clock is here:
https://www.nvidia.com/en-us/geforce/special-event/
Update - Exactly 6 Days to go (see screenshot)...
Oh yeah, that 4770K is bottlenecking hard. At 1440p the game does bottleneck, but only for a few GPUs. Then at 4K the 2080 Ti sits alone compared to other GPUs, which indicates that the bottleneck is gone. Otherwise the other GPUs would hit the same frame rate.
I am baffled why they used DirectX 11. I guess this game has been in development for so many years that they never migrated to 12. But still...this is a Microsoft game, it should be using their newest API! LOL. DirectX 12 is not even that new. At any rate, they have promised an update to add DX12 sometime in the future, though who knows how long that might be. Once they do, it should help a lot with that bottleneck, though your 4770K will still be an issue.
I am in the same boat. I have a 4690K, which is roughly the same, and age is catching up fast on it. I plan on getting a new Ryzen 4000 when they release this winter. I have high hopes that next-gen Ryzen matches Intel at gaming at long last. They've been killing it at everything else and have gotten closer at gaming. The next one should get them there, plus it will have PCIe 4.0, which I think will matter in a year or two. I think the consoles are going to make use of the better throughput.
But yeah, the 24GB 3090, it is a thing! It is just a question of price. CEO Huang is probably holding the real price in his leather jacket just for the reveal. And keep in mind, the door is still open for a 20GB 3080, even if they do not announce one. The same goes for the 3080ti. I strongly believe the 3080ti is being kept aside as their ace card. If AMD beats the 3080 by a decent margin then I expect the 3080ti to come out in the spring. And if all these cards are on Samsung 8nm, then a Super refresh could come out next summer or fall with TSMC 7nm variants of several cards.
Yeah, DX11 is weird. Then again, DX12 hasn't been enormously great, has it? The best performing games seem to be using Vulkan.
That's what Unity & UE4 use to handle platform compatibility and targeting. Microsoft, Android, and Apple have all decided they want to be the non-console choice for gaming, but with Apple's hardware platform costs they've ruled themselves out of serious competition almost immediately, leaving Vulkan/Android and DX12/Microsoft. I don't see a lot of 'serious tech gamer' gaming happening on ChromeOS/Android either, but Unity/UE4's transparent targeting of Vulkan (and Metal) will help.
There are Android TV boxes and Apple TV boxes that are reasonably priced as far as holding your expenses down, but in essence what you are getting is a stripped-down iOS/Android phone attached directly to your television, so not something seriously competitive with dedicated gaming consoles or Windows gaming PCs.
There were more leaks today of course, but nothing really new to add, so I'm not going to bother to post them.
Apparently the 10 GB vs 20 GB 3080 thing will be a 'per vendor' kinda thing - 10 GB will be the new standard, but vendors may choose to offer a 20 GB enhanced model as well. That's the latest rumor anyways...
The 3090 leaks are almost all saying 24 GB at this point, as for the price point I saw an article where they were listed overseas for over 2 grand in USD.
Just under 4 days to go!
Well, with a price over 2K, that is clearly meant to be the replacement for the Titan, aimed at consumers rich enough and enthusiastic enough to buy Titans. I guess nVidia felt the Titan name was a bit confusing to consumers actually just trying to find the fastest and most expensive consumer oriented gaming / 3D card.
And a 20GB mfg option doesn't sound like we will be getting any reasonably priced 20GB nVidia GPUs without AMD forcing the issue with their own ultra competitively priced and performing GPUs.
The article was speculating that the higher pricing was in part due to anticipated demand allowing them to hike the price over MSRP a bit. And, of course, you KNOW that the hardware scalpers are going to buy up everything and sell it at a huge markup, so unless you are lucky enough to grab one on pre-order or something, I'm guessing the prices will be more than a bit over MSRP anyways for the first few weeks, based on other recent new computer hardware releases...
BTW, here's some handsome leaked pics of the Zotac 30xx lineup...
https://wccftech.com/zotac-geforce-rtx-3090-geforce-rtx-3080-geforce-rtx-3070-custom-graphics-cards-pictured/
Take into account that if "overseas" means this side of the pond (Europe), that's the price you can actually buy it for (VAT included).
I don't understand why DirectX 12 wouldn't be backwards compatible. Why can't the software just check the version number and run if that number isn't lower than what it needs?
There shouldn't be much if any difference between Vulkan and DX12, except insofar as studios spend more time with one than the other. They're both thin layers on top of the hardware. If there are differences it's in the drivers or the game source, not the API layer.
You cannot program for DX11 and then just flip to DX12. They are quite different, as are all versions of DX. It is sort of like how on phones each app still has to update for every new version of Android in order to take advantage of any new Android features (or to fix any new problems the new Android brings.)
There sure are lots of 3070 leaks going on, so Nvidia may be giving specifics on it as well.
At this point, nearly every single detail about the cards is already known, except for the exact pricing. Jensen may as well just walk out on stage and say "Ampere...it just works!" And then he does a mic drop as the price reveals behind him, LOL.
But it should still be an interesting show, as he is going to give his best sales pitch on these cards. I am sure there has to be something we don't know yet that might surprise us.
And then it will be AMD's turn. Many people are hoping AMD will bring something to compete and force better prices. I think AMD absolutely must make some kind of noise within the week of Nvidia's reveal. If AMD still stays silent in the week or so after this Sept 1st event, I would be really worried about whether they actually have something or not. Just look at all these Ampere leaks, they have been going like crazy. Meanwhile the leaks on AMD have been very quiet. Most of the few leaks we have were a long time ago at that. Supposedly AMD cleaned house to stop leaks, but there is more to it than that.
I really am hoping that Big NAVI is really big and can compete or even beat RTX 3000. I would like to see a 16 GB 3070 or 20 GB 3080 sooner rather than later.
As soon as I started hearing the power rumors, even on a node process with smaller geometry, I instantly thought that they must be using that power savings to up the frequency because they can feel Big Navi breathing down their necks. I really hope that that's the explanation for these crazy power numbers we're seeing: NVidia cannot be seen as being even challenged on performance.
I don't think there is any question about it. AMD and 'Big Navi' have already made a huge impact on Ampere from a design standpoint. I don't think Nvidia would be doing a 3090 with 24GB of VRAM otherwise, that's for sure. They have always stayed around 250 Watts for their top cards. If these TDPs are correct, they will bust down that barrier with not one, but two different cards at once...and by a large margin at that. There is absolutely no way they would do that without feeling threatened. I have argued about competition for a long time. These companies are very well aware of what the other may be doing.
So I believe we can already thank AMD. I am sure that Nvidia was focusing on ray tracing and Tensor performance regardless, but the overall performance of the top card was probably going to be a more standard 30% above the 2080ti, with a 3080 that matched the 2080ti. Instead the 3080 should be a good bit faster than a 2080ti, and the 3090 may be 40-50% faster at standard performance. And that is on top of the vastly improved ray tracing and Tensor cores, and a hefty VRAM boost. It should all translate to a great boost for Iray.
The VRAM capacity is very interesting if it really is up to the vendors to double it on the 3080 and 3070. It seems like Nvidia would offer the option on their cards, too.
The one thing that strikes me as odd is that they are saying the 3090 will be 350 Watts and the 3080 will be 320 Watts. That doesn't make a lot of sense. The extra VRAM in the 3090 alone can account for the extra 30 Watts. But if that is correct, then the 3080's GPU would be drawing nearly the exact same power as the much bigger 3090's. That doesn't seem right. The 3080 is a cut down 3090, and it is heavily cut down. It shouldn't need the same amount of power. It may be clocked higher than the 3090, but I feel that should not cause it to draw the same power as the 3090. Unless the 3080 is really leaky on voltage, it doesn't seem to add up, and even more so if these are actually on TSMC 7nm.

However, I will point out that AMD's 16 core Ryzen is actually more efficient than its 12 core. That sounds crazy, but it comes down to binning. AMD saves the best chips for the 16 core parts. The ones that have defects are cut down to 12 cores, so even though such a chip has 4 fewer cores, it leaks just a little more and thus uses slightly more power than the 16 core part does. So I suppose that could explain this, but like I said, it still seems odd. It certainly makes the 3080 look very inefficient, and reviewers will notice that.
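A back-of-envelope check on the VRAM explanation (the watts-per-gigabyte figure this implies for GDDR6X is only a rough guess, not a spec):

```python
vram_3090, vram_3080 = 24, 10     # GB, per the leaks above
tdp_3090, tdp_3080 = 350, 320     # Watts, rumored

extra_vram = vram_3090 - vram_3080        # 14 GB
extra_power = tdp_3090 - tdp_3080         # 30 W
print(f"{extra_power} W / {extra_vram} GB = {extra_power / extra_vram:.1f} W per GB")

# ~2 W per GB is a plausible ballpark for GDDR6X, so nearly all of the 3090's extra
# TDP could be memory -- which would leave the cut-down 3080 GPU drawing about as
# much as the full 3090 GPU. That is exactly the part that looks odd.
```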
So that leaves me wondering if the power draw numbers are being exaggerated a bit. I would not be surprised if some things being leaked are done so to intentionally throw off leakers. Nvidia has done a remarkable job keeping everybody guessing what the fab is on these things, so they could have another secret for us.
...as great as the new Flight Sim looks, it would still lag and stall even if I had a killer system with dual Titan RTXs NVLinked together, as my connection is the fail point. In my part of town we don't have gigabit fibre optic, and the building I live in still has copper wire.
@outrider42
There should be a significant power savings by going to a smaller node, even with the performance bump. We've already seen this in Picasso vs Renoir on the AMD end. I need to put a caveat on this, though: if Nvidia is indeed using a Samsung node for the new cards, it's not a simple apples to apples comparison of a larger node vs a smaller node. Kind of like trying to compare GloFo silicon to TSMC and Intel silicon.
A more linear comparison would be seeing how much Intel has eked out of their 14nm node over the years, but that's been incremental improvements at best, and maybe comparing that to the Intel 10nm node.
I did see in one of the rumors in the last day or so that the core count bump with the new cards, along with the clock speed improvements, may mean as much as a 30%-50% improvement vs. the previous gen cards, but of course until we see independent benchmarks, grain of salt and all that. If we were talking say a 50% improvement with the 3090 vs the Titan RTX, that's pretty significant from a content creation standpoint. Even 30% is a huge jump. At that point you are talking about getting away with just 2 newer cards vs 3 older cards, but of course the power budget situation and heat management need to be taken into account.
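Putting rough numbers on the two-versus-three-cards point, assuming Iray scales close to linearly across cards and taking the rumored uplift at face value:

```python
old_card = 1.0        # normalize a previous-gen card to 1.0
new_card_high = 1.5   # rumored +50% per card (unverified)
new_card_low = 1.3    # rumored +30% per card (unverified)

print("3 old cards:", 3 * old_card)              # 3.0
print("2 new cards (+50%):", 2 * new_card_high)  # 3.0 -- matches three old cards
print("2 new cards (+30%):", 2 * new_card_low)   # 2.6 -- falls a bit short
```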
On the leak/rumor front, here's a leak featuring the Gainward 3090 and 3080 cards:
https://videocardz.com/newz/gainward-geforce-rtx-3090-and-rtx-3080-phoenix-leaked-specs-confirmed
The WCCFTech article goes into a bit more detail r.e. the power connectors on the Gainward cards:
https://wccftech.com/gaiwanrd-geforce-rtx-3090-geforce-rtx-3080-phoenix-custom-graphics-cards-pictured-specifications-detailed/
Two 8 pin power connectors sound nice for people not wanting to have to track down that new 12 pin connector that'll work with their PSU... If NVLink is indeed being left off of the 3080s, though, that's definitely something that a few people around here might care about. Grain of salt and all that.
Remaining silent, even knowing you should be able to compete, is a good tactic, providing you're confident Nvidia doesn't know what is about to hit them.
Alternatively, you know you're screwed.
I'm hoping for the former.
What Nvidia giveth, there is likely to be some taking away too.