Comments
These are new products, so no one has actually tested them yet to know, but since they lack CUDA they are useless for Iray.
Digital Foundry also observed this issue in their video. So something is wonky with 2 independent reviewers then! I'm sorry, but I would trust HU before 99% of "the rest of the tech world"...or you.
At any rate, I'll agree to disagree.
But remember how I also said this was a chicken and egg situation? And that once 16gb cards enter the more mainstream market we will start to see games using more VRAM? Especially from any AMD sponsored titles? Remember that? Well, just one week after the AMD reveal we have a game saying it will be REQUIRING 12gb of VRAM for its highest settings. Oh snap! Not only does this step over the 3080, but even the 2080ti.
https://videocardz.com/newz/godfall-requires-12gb-of-vram-for-ultrahd-textures-at-4k-resolution
Of course this title also happens to be an AMD sponsored title. AMD is going to leverage this advantage real hard, and actively encourage game devs to take advantage of this VRAM. But even without marketing this is something that needs to happen. I've said it before: VRAM is actively restricting video game design. Now that we have 16gb cards entering the market at multiple price points, the time has come. While tech that pulls data from SSD, like the PS5 uses, will eventually be a thing, it is still much easier to design a game around VRAM since it is right there on the GPU. That, and most people will not have the PC hardware to do this for some time. The fact that AMD is even releasing 16gb cards is a sign that AMD itself understands this.
If that is not enough, the lesser AMD cards are rumored to have some large VRAM capacities as well. Rumors say the 6700 XT could have 12gb itself, meaning a possibly $400 card would have more VRAM than the 3080.
As for the console advantage for AMD, it will be a factor this time. Sure, providing the hardware for the previous generation didn't help AMD...that is because AMD sucked so bad. AMD had nothing to compete with Nvidia, but the bigger issue was adoption. Nvidia had total command of the GPU market. Game developers are always going to target the larger markets, and since AMD could never gain any traction, why would devs support them? That changes now. AMD is poised to take at least some market share from Nvidia, almost by default, as long as they can simply supply stock. So with AMD cards becoming more popular, it is only logical that devs will start to specifically target them more. AMD now has some of the fastest hardware and they are involved with consoles. They have a serious advantage thanks to Nvidia's mistakes.
Poser's press release for Poser 12 has some sick burns aimed directly at Iray, LOL. Poser made what I think is the best choice they possibly could by teaming up with Blender's Cycles, since it can support different hardware. Now maybe we will see a Poser comeback; all of this stuff can be tied together. If customers become frustrated with Nvidia, they may move to different software, software like Poser and Blender Cycles, which can use AMD hardware. If that happens, those customers may start buying models from elsewhere, too. If I were Daz, I'd be at least a little bit nervous about this. The Daz to Blender "export" is just a script, and it is limited in what it can do. Poser actually has SuperFly built right in, no messy and time consuming export process needed. This is something Daz has a genuine problem with. A lot of people are not fond of needing to export everything.
As PCGamer pointed out in regards to RE3, until it's actually tested, what publishers say is required is irrelevant.
Did you read the reddit thread you linked about the HU test? I wasn't alone in pointing out that the test wasn't showing the results they claimed.
Reading through the feature list for Poser 12, it looks to me like they have only implemented the CUDA rendering capability, which requires an Nvidia GPU.
"Another bonus is that Cycles doesn't restrict users to one type of GPU. Cycles supports GPU rendering with three GPU rendering modes: CUDA, which is the preferred method for older Nvidia graphics cards; OptiX, which utilizes the hardware ray-tracing capabilities of Nvidia's Turing architecture & Ampere architecture; and OpenCL, which supports rendering on AMD graphics cards. This means multiple GPUs are also supported, unlike Iray that only functions with Nvidia hardware."
https://www.posersoftware.com/article/488/poser-12-update-how-the-new-superfly-improves-art-render-time
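For anyone who wants to check which backend their own Blender install is actually using, here's a minimal sketch against Blender's Python API (assuming a 2.8x/2.9x build; the three device modes shown are the ones the Poser article lists). Run it from Blender's Scripting workspace:

    import bpy

    # The Cycles add-on preferences hold the compute backend selection.
    prefs = bpy.context.preferences.addons["cycles"].preferences
    prefs.compute_device_type = "OPTIX"  # or "CUDA" / "OPENCL" for AMD
    prefs.get_devices()  # refresh the detected device list

    # Enable every non-CPU device for rendering.
    for dev in prefs.devices:
        dev.use = (dev.type != "CPU")
        print(dev.name, dev.type, "enabled" if dev.use else "disabled")

    # Tell the current scene to render on the GPU.
    bpy.context.scene.cycles.device = "GPU"

If setting "OPENCL" throws an error, the build or driver isn't exposing an OpenCL device, which lines up with the patchy AMD support mentioned further down.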
Somehow I feel the upgrading game has not become easier for people who do GPU rendering at the hobbyist/student level. It would be interesting to see how Octane and Cycles would fare with AMD GPUs. I've read people claim that OpenCL is slower than OptiX, so it would be interesting to see how it fares in Blender.
I have an AMD Radeon RX 570 8GB and, while it's a good GPU, Blender 2.9's Cycles support for it is a bit patchy and buggy. That won't stop me from buying one of those Big Navi cards next year, after I buy an obligatory 3060 or 3070 card, because I do expect AMD to make big inroads catching up to nVidia in SW over the next couple of years as well. At least that's my hope, because without it, AMD exceeding nVidia in HW quality and speed is pretty useless.
For those of you that may want to try to snag a Ryzen 5000 series CPU today...
https://wccftech.com/amd-ryzen-9-5950x-ryzen-9-5900x-ryzen-7-5800x-ryzen-5-5600x-zen-3-ryzen-5000-desktop-cpus-available-today-heres-where-to-buy-them/
The scalper bots will probably still win, but best of luck!
Yes, there appears to be hope for a decent rendering card.
I have a friend who works in sales at a Microcenter and they do work on commission so take this advice with that knowledge.
Microcenter is only selling to actual persons in their actual stores. If you live near one and you want one of these CPU's you might want to consider that as an option.
Reviewers love 'em!
https://arstechnica.com/gadgets/2020/11/hands-on-zen-3-testing-with-amds-ryzen-9-5900x-and-5950x/
https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-dive-review-5950x-5900x-5800x-and-5700x-tested
I checked at the one nearest me and the 3070s are all sold out, but they do have 3080s. It's probably like that at all their stores. I guess people really are buying for need and not to have the crème de la crème top card. If you search on Amazon the prices are silly price gouging, and this time the fake justification that the markups are due to folks building lots of cryptocurrency rigs isn't flying. Even so, the Microcenters and every established chain or online business I've seen has markups between $30 and $60 over the MSRP of nVidia's models.
My MB's manufacturer (Gigabyte) told me in a support ticket that the B450 MB I bought, originally designed way back for the Ryzen 2000 CPU generation, will get a BIOS upgrade to support the Ryzen 9 5950X. Pretty sweet, even though the earliest I am likely to buy is autumn 2021!
I got a 5600X, plus all the bits and pieces for a new system for the SO, at Microcenter. There was no significant rush on them as far as I could tell but they were doing more business than usual so who knows how long they'll stay in stock.
Everything went together smoothly and I am now waiting through the Windows update process.
According to what I've seen, the 400 series mobos won't get BIOS updates to support the new CPUs till next year.
Congrats on the new CPU!
It's not mine. It's the wife's. It's to get her off my system and off the laptop she's been using since her desktop died back in August. I'm staying on my R7 2700 for a while.
It is up and running now, and some quick benchmarks before she kicked me off show it to be at least 50% faster than my 2700 in games and in DS CPU renders (even with 2 fewer cores).
She seems happy which means I'm happy.
Ah, you've discovered the secret to true happiness. Happy wife, happy life.
I am thinking about adding another 32GB of memory and maybe upgrading my Ryzen 3700X to a 5000 series CPU after I upgrade my GPU. It will be a 3080 Ti if they have 20 GB of memory, or I'll go the full monty and get a 3090 if the 3080 Ti is released with a subpar quantity of memory for rendering.
Now let's hope the current 3080 Ti rumors are true (3090 chips that failed the bin tests for 3090 speeds, paired with 20 GB of memory).
The problem with that rumor is that the 3080 and 3090 are the same chip, so 3090s that fail QA become 3080s. If yields were high enough then I'm sure they could split the yields up more, but right now, with no 3080s on the shelves, I really cannot see Nvidia splitting the product stack up. Next spring, or whenever they get the yields up, maybe, but you guys counting on some imminent release are, I think, going to be disappointed.
Ha nice
Think it's finally time to upgrade my 1950X Threadripper; it is now outclassed. But it has been a great CPU, and still is in all fairness; there is just better out there.
Not sure whether to get a Ryzen 9 or wait for the next Threadripper. I'm also expecting to go AMD GPU if the performance lift from pairing them plays out. I render in Cycles, so I don't care, presuming I'm not taking a Cycles hit using AMD.
Looks like EPYC Milan is starting to make appearances in the benchmark databases:
https://wccftech.com/amd-3rd-gen-epyc-7713-milan-zen-3-cpu-spotted-64-cores-spotted-benchmarked/
They've been in some AWS data centers for at least a month.
I am hoping that the reason we are not seeing regular 3080s right now is that Nvidia is saving the GPUs they have for a 3080 Ti release before the end of the year. I think Nvidia is realizing they may have goofed with the 3080 only having 10 GB of memory. The perception is that the 3080, with only 10 GB, will not have long enough legs to future proof buyers, whereas the 16 GB on the RX 6000 series cards will look like the better option. It doesn't matter if it is true or not, and we also do not know what game memory requirements will be in Q3 and Q4 of next year. With 16 GB of memory in the Xbox and PlayStations and on the RX 6000 series cards, game developers will figure out a way to use it at some point, probably very soon, leaving the 10 GB cards out in the cold. Again, this is all just wishful thinking and rumors.
It was a weird decision in the first place to have less VRAM than the previous generation; it would have been enough if they had even gone up by just 1GB.
That's post hoc stuff. They'd annoy the heck out of their own consumers, plus they're losing sales every day by not having units on the shelf. This fantasy, held by pretty much only DS users, that the console devs will suddenly start producing these games is just absurd.
Console game devs are mostly the big publishers, and they produce cheap games. They don't spend money they don't have to. To add textures they'd have to jump the prices, which they won't do ($60 US is the sweet spot), or do them as a separate paid DL, which is very unlikely to get past the EU regulators.
Beyond that is the DL size. Textures do not compress well. Going from 4 GB of textures to 10 means a much bigger DL. For someone in an urban/suburban area with a decent high speed connection that's not a problem, but for anyone else...
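To put rough numbers on that (the connection speeds here are just illustrative assumptions, not survey data):

    # Back-of-envelope download times for a texture package.
    texture_sizes_gb = [4, 10]                      # per the post above
    speeds_mbps = {"100 Mbps cable": 100, "10 Mbps DSL": 10}
    for conn, mbps in speeds_mbps.items():
        for gb in texture_sizes_gb:
            minutes = gb * 8000 / mbps / 60         # GB -> megabits -> minutes
            print(f"{gb} GB over {conn}: ~{minutes:.0f} min")

That works out to roughly 5 vs 13 minutes on a decent cable line, but closer to an hour vs over two hours on slow DSL, which is exactly the gap being pointed at.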
You might see some games push up over 6, but there simply isn't going to be some huge rush of them to 12, and it will not force Nvidia to release cheap cards with lots of VRAM. Even if 4K gaming does start calling for more than 8 (I'll bet serious money it won't), 1080p definitely won't, and most gamers are not leaving 1080p. Nvidia is not stupid, and they mostly sell cards to 1080p gamers.
FYI, the idea that both the PlayStation 5 and the Xbox Series X consoles have 16GB of video memory is a PR marketing ploy. They both only actually have 10GB of VRAM wired in such a way as to be usable for live graphics rendering. The other 6GB sits behind a roughly half-throughput bus interface and is only really usable for the OS and non-graphics functions (like audio processing).
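A quick sketch of why that split matters (the 10 GB @ 560 GB/s and 6 GB @ 336 GB/s figures are Microsoft's published Series X numbers; the blending model below is a deliberate simplification):

    # If a frame's working set spills past the fast pool, effective
    # bandwidth drops toward a weighted average of the pools it touches.
    fast_gb, fast_bw = 10, 560    # GB, GB/s (Series X "GPU optimal" pool)
    slow_gb, slow_bw = 6, 336     # GB, GB/s (slower pool)
    working_set = 12              # GB, hypothetical game working set
    spill = max(0, working_set - fast_gb)
    eff_bw = (min(working_set, fast_gb) * fast_bw + spill * slow_bw) / working_set
    print(f"~{eff_bw:.0f} GB/s effective vs {fast_bw} GB/s staying under {fast_gb} GB")

Under this toy model a 12 GB working set averages around 523 GB/s instead of 560, which is one plausible reason devs would treat 10 GB as the real graphics budget.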
It isn't a weird decision at all if you understand the full ramifications of high-speed direct SSD access on graphics processing. That's why you see everyone (except AMD on the desktop end, undoubtedly because bigger number = better) targeting the exact same number (10GB) in hardware designs meant to take advantage of that tech in the near future.
What's the near future? They should demo it, shouldn't they, with an nVidia GPU.
The tech demos are coming. I'm not as confident as some that they'll be on the desktop in the next few years but Nvidia is promising them to enterprise users in an experimental form this generation and, hopefully, as a standard part of the "quadro" lineup for the next generation.
But until it is actually deployed, who knows? Personally I think this could all be rendered moot. PCIe gen 5 and 6 are on the horizon, along with DDR5, so the throughput of modern systems may increase to the point where it just may not matter any more. Where will that leave things like Iray? Who really knows?
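For rough scale on that throughput point, here's a quick sketch of one-direction x16 link bandwidth by PCIe generation (the per-lane transfer rates are the published ones; the encoding efficiency factors are approximations):

    # Approximate one-direction bandwidth of an x16 link per PCIe generation.
    gens = {
        "PCIe 3.0": (8,  128 / 130),   # GT/s per lane, 128b/130b encoding
        "PCIe 4.0": (16, 128 / 130),
        "PCIe 5.0": (32, 128 / 130),
        "PCIe 6.0": (64, 0.95),        # PAM4 + FLIT framing, rough efficiency
    }
    for name, (gt, eff) in gens.items():
        gb_s = gt * eff / 8 * 16       # bits -> bytes, times 16 lanes
        print(f"{name}: ~{gb_s:.0f} GB/s")

So each generation roughly doubles the pipe (about 16, 32, 63, and 122 GB/s), which is why streaming data to the GPU on demand keeps looking more plausible.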
So, Steve at GN noticed something about 2 sticks of RAM vs 4 sticks on Zen 3...
Ryzen has 2 channels of memory, of course ('little' Threadrippers have 4, 'big' Threadrippers and EPYCs have 8). If I'm remembering correctly, up to this point going with 1 stick per channel was preferred for overclocking purposes, so this is a significant change. Short form: with Zen 3, it may be a better idea to fill all of your slots. Steve measured a 2-8% increase using 4 sticks vs 2.
I wish he had had a set of 2x16 sticks to test against the 4x8 sticks, with the same timings, but Steve was pretty sure that his systems were using less than 16 GB of memory in the tests he ran. Nonetheless, it's a significant increase, so worth noting.
There's also another nuance, which is how many 'ranks' are on each stick, if I understood what Wendell was getting at (as Steve relayed it) correctly; Wendell may do a followup on that subject.
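A toy illustration of the rank math (the DIMM sizes and rank counts here are assumptions for the example; real rank counts depend on the specific modules):

    # Ranks per channel on a dual-channel Ryzen board.
    channels = 2
    configs = {
        "2 x 8GB single-rank": (2, 1),   # (sticks, ranks per stick)
        "4 x 8GB single-rank": (4, 1),
        "2 x 16GB dual-rank":  (2, 2),
    }
    for name, (sticks, ranks) in configs.items():
        print(f"{name}: {sticks * ranks // channels} rank(s) per channel")

More ranks per channel give the memory controller more opportunities to interleave accesses, which would be one guess at where the 2-8% uplift comes from, and why a 2x16 dual-rank kit might match the 4-stick result.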
The weird thing is the memory controller is supposed to be unchanged from Zen 2, so I just don't get this. When I go to work Monday I'll have to see what I have in the way of a matched set of 4 sticks of DDR4 RAM to bring home to test this myself.
Also, this will be somewhat awkward for the average user to take advantage of. 4 x 4GB kits are essentially unavailable right now (I could not find any), so if you just want 16GB of RAM you'd be out of luck.
This may be a result of a number of improvements; the one jumping out at me ATM is that they increased the number of table walkers from 2 to 6, which reduces that bottleneck. They also increased the store queue depth from 48 to 64. There are other improvements as well.
https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-dive-review-5950x-5900x-5800x-and-5700x-tested/4
Speculating here, but AMD MAY have already built the IO die in such a way that the CPU can directly handle either slot in a channel, with the IO die just being the 'bridge'. The cross-CCX thing in Zen 1-2 may have incurred some sort of penalty here before, which was resolved somehow when the CCX grew to 8 cores. I honestly don't know, but I look forward to Wendell's investigation into this subject.