Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
Frankly, all the talk about power and speed, while perhaps interesting, is of no consequence to me. My only concern is VRAM. It doesn't matter how fast the card is if the scene I want to render won't fit and drops to CPU. What's the point of spending over $1,000 on a piece of high technology that is just not up to the task? And it looks increasingly probable that the VRAM increase, if it happens at all, will be significantly delayed. All of which prompts the search for alternatives, such as rendering in other software or hoping for something like Google Filament to be a gift from the gods.
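[Editor's note: the "fits in VRAM or falls back to CPU" concern above can be sketched as simple arithmetic. All numbers below are hypothetical illustrations, not Iray measurements, and the helper name and overhead figure are invented for the sketch.]

```python
# Rough, hypothetical sketch of the "does my scene fit in VRAM?" question.
# Sizes are illustrative estimates, not measurements from Iray.

def renders_on_gpu(scene_gb: float, card_vram_gb: float, overhead_gb: float = 1.5) -> bool:
    """True if the scene plausibly fits alongside driver/framework overhead."""
    return scene_gb + overhead_gb <= card_vram_gb

# A hypothetical multi-character scene estimated at 9 GB:
scene = 9.0
print(renders_on_gpu(scene, 8.0))   # 8 GB card: would fall back to CPU
print(renders_on_gpu(scene, 24.0))  # 24 GB card: stays on the GPU
```

The point of the sketch: raw speed is irrelevant once the left side of that inequality exceeds the right, which is why the commenter cares only about the VRAM figure.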
All the rumor sites are now saying the 3090 will have 24 GB VRAM.
Seriously, try Blender.
I don't care what they're saying; I will wait until I see what is released. After the release of the last cards, and how long it took Nvidia to deliver what they promised, I'll be taking Nvidia's claims with a pinch of salt too.
I'm not in the 3090 league, I'm afraid. I had been hoping that the 3070 16GB rumour had some truth to it, but I'm pretty much doubting that now.
Well, they did a GTX 960 with 4GB back when the maximum memory available on Maxwell was 4GB of GDDR5, so if RDNA 2 does some 16GB midrange cards then Nvidia should too. I just hope to see them soon.
I think whether the 3070 gets 16 GB of VRAM will depend on what AMD does, too. Looks like sometime in October, according to some. Maybe you can save up some more money by then and get a slightly better Nvidia card, and perhaps we'll know how they work with Iray by then.
I wish. I don't have spare cash as I'm on a fixed state pension so my annual holiday comes out of my life savings and if I choose not to take a holiday (meaning a trip somewhere), I use that money for a luxury such as a computer upgrade. With the pandemic, travel is limited so this year I'm spending on the computer.
I've a feeling the 30XX series will not be fast enough for someone on my small budget to justify spending $1,400 on a 3090 with 24GB that I'd get only a year or two of use from before a 24GB-or-more card that is actually fast enough comes out. For that reason I've made up my mind to buy either a 3060 (6GB/12GB) or a 3070 (8GB/16GB), depending on cost. It might come down to the 3060 12GB or the 3070 8GB; I'm not sure.
I'd wait for some user reviews, then, before making the leap. You've had continuous problems with Blender, so I don't recommend it. Likewise, the jury is still out on UE: though it shows some promise with high-quality renders, it is still rather complicated in some ways, but reports suggest that FBX export with the Daz rig works better than the Epic rig. The current bridge seems focused on games rather than the film-production side of UE, which may be closer to Iray in results. Anything game-related isn't going to get you the quality you've said you want. Filament gives me pause for the same reason: the game focus. It's still early, though.
I'm 64 and, like you, have been trying to do some quality work before I die. My stroke last year has slowed me down, but I'm still wary of the fool's errands we see a lot of around here. I'm fortunate that I still have a job, but who knows how long it will last. Good luck, Marble!
Very much why I've cherished my TITAN RTX and will probably add another over buying one of these new cards. Of course, I'd need to see the new cards' NVLINK performance, but I'm willing to bet I'd be set with two TITAN RTXs for a good while!
You might find them going up in price if it turns out they're actually good at what they do while the new 3000 series disappoints.
I have the cash for one, but after moving my rendering to Blender, I decided to wait and see. My Threadripper outperforms my 980ti, and it's a first-gen Threadripper.
Re: The Remarkable Art & Science of Modern Graphic Card Design. I'm interested!
I'm building a machine, gradually, learning the newer technology as I go. One concern I've had is whether or not the new GPUs will require anything new or different in the other components I need to get.
So according to the video at 6:18, the GPUs will include an adapter to make their 12-pin connectors compatible with PCIe 8-pin cables. If I'm understanding that correctly, hurray: one concern resolved.
[Also planning: ASRock x570 Taichi, R7 3700x]
My current machine can no longer render, so I'm eager to get the new build built. But it hasn't been easy. A person can't study for what doesn't exist yet.
Oh! Just wondering: does DS usually have the ability to render in Iray, albeit poorly, with newly released cards, or will the 30xx cards be totally unable to render in Iray until a new version of DS comes out?
Anyway, it's nice to have some good news these days, isn't it?
-Ken
This is my hobby in retirement. I was a techie for my whole working career, supporting computer hardware and networking in the latter years. So I'm not afraid of technology but I find that I'm slow to grasp new techniques these days so learning Blender, for example, has become a slog and my comfort zone is still DAZ Studio. If DAZ could improve the timeline, speed up dForce, add soft body physics and allow me to render a scene with 4 characters for the cost of a 3070, I would look forward to enjoying this hobby for my remaining years. I didn't have a stroke but I did need a quadruple bypass in my mid-40's so I know about being slowed down.
Of course, that is always the case; that is why clock speeds often increase. My 670 had a clock of right around 1000 MHz. That was good back then, but today it would be considered quite low. You get more performance by increasing clock speeds, and it is often considered "free" performance since it requires no additional hardware besides better cooling. But Nvidia has generally capped their top GPUs at around 250 watts, and that has been the norm for quite a long time. The top Ampere breaks that cap: at 350 watts, it exceeds it by a full 100 watts. The second-tier 3080 breaks the barrier too, at 320 watts. For reference, the 2080 Ti was rated at 260 watts and the Titan RTX at 280 watts. That 20-watt difference is largely due to the additional VRAM; the Titan RTX's extra 13GB (24GB versus the 2080 Ti's 11GB) will use about that much. So the 3090 is using even more energy than the Titan RTX, but it will be a lot faster.
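[Editor's note: a quick arithmetic sketch of the board-power figures quoted above. The per-GB figure at the end is a crude estimate derived from the post's own comparison, not an Nvidia specification.]

```python
# Official TDP ratings quoted in the post, in watts.
tdp = {
    "historical norm": 250,
    "2080 Ti": 260,
    "Titan RTX": 280,
    "RTX 3080": 320,
    "RTX 3090": 350,
}

norm = tdp["historical norm"]
for card, watts in tdp.items():
    print(f"{card}: {watts} W ({watts - norm:+d} W vs the ~250 W norm)")

# The Titan RTX draws ~20 W more than the 2080 Ti while carrying
# 13 GB more VRAM (24 GB vs 11 GB), which suggests, very roughly:
watts_per_gb = (tdp["Titan RTX"] - tdp["2080 Ti"]) / (24 - 11)
print(f"~{watts_per_gb:.1f} W per extra GB of VRAM (crude estimate)")
```

This makes the post's point concrete: the 3090's 350 W is 100 W over the long-standing ~250 W ceiling, far more than memory alone would explain.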
That shows Nvidia is working very hard to add additional performance to these GPUs, beyond what they normally would do.
There is also something that I have seen no tech people really discussing: that Nvidia is using 3 different types of cores in their GPUs now, while AMD might still be using just one type of core.
Think about that. Back in the PS5 deep dive, they mentioned that the PS5 GPU has no dedicated ray tracing cores. This GPU is doing ray tracing through its existing cores. The PS5 is using a modified RDNA2, and this indicates that AMD is not using dedicated cores.
So what's the big deal? Well, that means AMD can use their entire die space for their regular cores and run ray tracing and upscaling over those same cores. Nvidia cannot do this: they have to dedicate a large amount of die space to the ray tracing and tensor cores they created. So while Nvidia will likely have a big performance advantage at ray tracing and upscaling/denoising, AMD might actually have an advantage at traditional raster rendering. Most video games today still rely on traditional raster performance, so that is bad news for Nvidia.
Thus, to make up for this potential disadvantage, Nvidia made large GPUs and overclocked them as much as possible to get every drop of performance out of them. So even though this is a smaller process node, the end result is a GPU that might use a lot more power than normal.
So ultimately I believe a lot of what we are seeing comes down to Nvidia's design decision to create these dedicated ray tracing and tensor cores. This design choice is a double-edged sword for them.
A 1660 also outperforms a 980ti in the Iray benchmark thread. That is not the 1660 Ti or the 1660 Super; that is the plain 1660, a sub-$200 GPU. The 980ti is quite old and outdated now. It is great that a Threadripper can beat a 980ti, but the entire RTX lineup does as well, all the way down to the 2060, which utterly destroys the 980ti in Iray.
They might, which is why many folks have been saying wait; don't make decisions based on rumour.
How would anyone but Nvidia know what is happening? If Daz have any early info, they won't be able to say due to NDA. The lack of comment from Daz means either NDA or they don't know.
...And what good news? Speculation and guesswork are not news.
Hopefully both Nvidia and Daz learned from last time and have an update ready to roll out on release, lol.
..for me it's one of these:
There is no information about Iray on Ampere yet, as Ampere has not even been officially announced. Most of what has been shown is not official Nvidia material.
However, some things have come straight from third-party AIBs, so many pieces of info are basically as confirmed as confirmed can be. But what you will not hear about on Sept 1 is Iray.
As I have said before, the changeover to Iray RTX changed how Iray itself is updated. It now uses the full OptiX 6, not the old OptiX Prime. Prime needed to be recompiled for every new GPU architecture; OptiX 6 does not. So Ampere *should* work with Iray pretty quickly, if not on launch day. I am willing to bet that Ampere will work with Daz Iray right away. However, don't take my word for it; wait for somebody to confirm. There is always somebody who gets one of these at launch and posts in the forums.
As for me, I am keeping my 1080tis, so if by chance I am wrong, I have a backup plan. And I can just play Cyberpunk 2077 on my Ampere. If you do not have a backup plan, then just wait.
Good news that they are actually coming out next week, (unless that's also speculation), so that I can finally get an idea of what the hell I can put into my new machine.
Good news that I might be back to accomplishing something I enjoy instead of sitting here in this damn apartment listening to crappy world news, trying to deal with damn cataracts and other body, & computer, failures while my bit of time left on this ridiculous planet is wasting away.
Damn! I'll take any bit of good news that I can freaking get these days.
If I have to, I'll forgo the new cards and get whatever's left in the 20xx's. I just want to know ASAP what my options are.
Gainward GeForce RTX 3090 Phoenix Golden Sample
I was about to post this article, which features the above card:
https://wccftech.com/nvidia-geforce-rtx-3090-geforce-rtx-3080-custom-graphics-cards-leak-out/
In any case, yeah when official product pages go online with the specs, that's a bit more than just a rumor I'd say...
So, if the latest leaks hold true, we are looking at a 24GB and a 10GB option at launch, but of course we have about a day to go until Nvidia's official presentation. Gee, I wonder what a few people around here might be doing tomorrow around that time... I still say Dr. Su looks better in a leather jacket than Jensen does!
I'm waiting patiently for the watercooled variants to surface.
For the 10xx and 20xx cards, new versions of Iray and DS were needed (and IIRC it took quite a bit of time for Iray to support them). No one but Nvidia can say whether that will be needed for the 30xx, but I personally wouldn't bet on them being supported at release.
The only thing that seems 100% true is that Nvidia have a clock counting down; they will be announcing something when said clock reaches zero - or so we're being led to believe.
Whatever card I end up getting, presuming it's an Nvidia option, it will be a Strix variant or perhaps a watercooled one; and if I go AMD and Strix is an option there, the same will be true.
The 10 series took ages, IIRC; the 20 series was, if not immediate, then close.
Where's that meme of a big salt factory...
OK, HUGE Grain of salt (article even says this), but if true, there may be some happy campers around here tomorrow. If not, well you were warned, don't get your hopes up yet!
https://wccftech.com/rumor-nvidia-rtx-3090-performance-slides-leaked/
Almost 2x performance over 2080 Ti if leaked slides pan out. If not, well that's why it's tagged RUMOR.
One day plus one hour to go, plus factor in the initial blah blah at the beginning of the presentation...
The 20x0 cards gained support in two steps: basic support came a lot quicker than for the 10x0s, but not instantly by any means, while full support taking advantage of the RTX cores took quite a bit longer (though I can't recall how that compared with the 10x0 release; I know we had it before I bought my 2080Ti at the end of last year).