Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
THIS! is AMAZING!
I am on the verge of buying a 5090. I have it in my cart, but I am wondering if I can use it to render in Daz 4.24.
I do not want to use two cards in my PC; I would rather have only the 5090. But will it do what the 4090 does with the current Daz version?
What are the drawbacks of anyone using one already?
Some people report that it does not work at all to render. My CPU and motherboard are fairly new, PCIe 5.0.
I do not want to buy a card that will have to sit on a shelf for months. But if it works I will give it a try.
I am not gonna spend $11,000 on a card that might work, but I may spend almost $4,000.
The version of Iray used in DS 4.24 doesn't support 50xx cards, so no. They only work to render in DS2025.
I know what you all need. You all need a dual-CPU motherboard fitted with two AMD EPYC™ 9965 CPUs.
Each AMD EPYC™ 9965 CPU will give you 192 Cores and 384 Threads; a total of 384 Cores and 760 threads running at 4GHz without a sweat in a DUAL system.
That's a top-of-the-line CPU, so the bad news is they cost around fifteen thousand dollars each. But the good news is that it's still a better option than paying eleven thousand dollars for an Nvidia GPU which, if my confidence in Nvidia is anything to go by, will fry its own brains out and brick itself anyway (ouch).
Check the spec of AMD EPYC™ 9965:
https://www.amd.com/en/products/processors/server/epyc/9005-series/amd-epyc-9965.html
Way to go AMD, that's pretty cool and I'll take two then. Pricing sarcasm aside, I would just like to point out to AMD that if only they could lower the price of those processors to say, £250 each, then I would definitely buy two of them, and I would even be happy to live on stir-fry noodles for a while to pay for them. I would like to take this opportunity to thank all at AMD for hearing my suggestion, and look forward to picking a few up later in the year once my request has been approved at AMD head office.
I asked Copilot this:
Can 2 x AMD EPYC™ 9965 out render a 5090?
Copilot responded:
It depends on the workload! The AMD EPYC 9965 is a server-grade CPU designed for multi-threaded tasks, while the NVIDIA RTX 5090 is a high-end GPU optimized for parallel processing.
For CPU-based rendering, like ray tracing in Blender's Cycles engine, two EPYC 9965 processors would likely outperform a single RTX 5090, thanks to their high core count and multi-threading capabilities. However, for GPU-accelerated rendering, such as OctaneRender or Redshift, the RTX 5090 would be significantly faster due to its massive CUDA core count and AI-driven optimizations.
If you're working with CPU-heavy rendering tasks, a dual EPYC 9965 setup could be a powerhouse. But for real-time graphics and GPU-accelerated rendering, the RTX 5090 is the better choice.
Comment: It seems Nvidia's $3-$4000 GPU can, for the most part, hold its own and even exceed AMD's $30,000 CPU.
2x5090s in an AMD dual EPYC 9965 setup would fly!
Who could have guessed that a GPU would be better than a CPU at GPU-accelerated rendering?
Why stop at 2? Get an updated version of this 6 x 4090 workstation.
x2 Epyc 9965 - total cost: 19,996 USD (Newegg); total threads: 768; total power consumption: 1,000 W (minimum PSU 1,800 W); boost clock 3.7 GHz
Cooling: x2 SilverStone XE360-SP5 closed-loop cooling systems @ 444.50 USD ea.
Motherboard: Gigabyte MZ73-LM2 SP5 DDR5 Server Motherboard: 2,138 USD
Memory: Whatever you can afford.
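For what it's worth, those totals are easy to sanity-check. A quick sketch, using the prices and specs as quoted in this thread (not independently verified):

```python
# Sanity check of the dual-EPYC 9965 numbers quoted above.
# Prices/specs are as listed in the post, not independently verified.

CORES_PER_CPU = 192
THREADS_PER_CPU = 384
PRICE_PER_CPU_USD = 9_998   # 19,996 USD for the pair (Newegg figure above)
COOLER_USD = 444.50         # per SilverStone XE360-SP5 loop
MOBO_USD = 2_138            # Gigabyte MZ73-LM2

n_cpus = 2
total_cores = n_cpus * CORES_PER_CPU       # 384
total_threads = n_cpus * THREADS_PER_CPU   # 768 (not 760)
core_platform_cost = n_cpus * PRICE_PER_CPU_USD + n_cpus * COOLER_USD + MOBO_USD

print(total_cores, total_threads, round(core_platform_cost, 2))
```

That's about 23,000 USD before memory, drives, case, or a display GPU.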
Makes even a system built around an RTX Pro 6000, AMD Threadripper 7945WX (12c/24t) CPU, Gigabyte TRX50 AI MB and 384 GB DDR5 4800 (8 x 48) ECC RAM seem like a bargain.
It feels like a trick question because, unlike your 'nope' answer in the later part of your message, the answer is positive for both situations: it's faster to simulate on my rendering computer compared to my MacBook Pro M1 (and it's not because my rendering machine has a better CPU than the M1 chip, quite the opposite in fact).
And it's faster to render too, even accounting for the need to fit everything in VRAM (which means, at worst, having to do a couple of renders, each with only what fits in VRAM, before merging them in Affinity Photo).
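The "split the scene into VRAM-sized renders" approach described above can be sketched as a simple first-fit packing. The asset names and sizes here are hypothetical, purely for illustration:

```python
# A minimal sketch of splitting a scene into VRAM-sized render passes.
# Asset names and VRAM footprints below are made up for illustration.

VRAM_BUDGET_GB = 24.0  # e.g. a 4090-class card

assets = {
    "environment": 14.0,
    "figure_A": 6.5,
    "figure_B": 6.0,
    "hair_and_cloth": 4.5,
    "props": 2.0,
}

def plan_passes(assets, budget):
    """Greedy first-fit: pack assets (largest first) into as few passes as possible."""
    passes = []
    for name, size in sorted(assets.items(), key=lambda kv: -kv[1]):
        for p in passes:
            if p["used"] + size <= budget:
                p["names"].append(name)
                p["used"] += size
                break
        else:
            passes.append({"names": [name], "used": size})
    return passes

for i, p in enumerate(plan_passes(assets, VRAM_BUDGET_GB), 1):
    print(f"pass {i}: {', '.join(p['names'])} ({p['used']:.1f} GB)")
```

With these numbers it comes out to two passes, which is exactly the "couple of renders" workflow before merging in an image editor.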
Where to start? Well, how about that useless Copilot (which was duly banished and disabled as soon as I found it on my system). It wasn't even able to detect that I intentionally mis-specced the CPU: for starters, that model does not reach 4 GHz, nor does the total thread count come to 760; it comes to 768.
There's no way on earth that dForce simulation is better done on a GPU than that EPYC processor, either. Not unless dForce has been rewritten to take advantage of GPU cores. And as for Kyoto Kid, lol, well we all know you have a couple stashed away to pair up with your stashed GPU, just as soon as you source the dual CPU motherboard you'll need to link 'em all up - lol
I'm just teasing. But on a more serious note, the point I am trying to make is scalability and flexibility. None of this would be a problem if Daz decided that Genesis X is going to mark a permanent change from Iray to Cycles, therefore opening Daz Studio to the entire AMD userbase too, and making everyone much happier due to the huge drops in price that such a move would trigger. We need to remember that Blender users don't have to worry about Nvidia price scalping monopolies, simply because Blender users get to use AMD GPU acceleration, thanks to Cycles!
Personally, I think the sooner Daz switches to Cycles, the better for everyone involved, including Daz themselves. Cycles is open source, Iray is proprietary, so personally I hope that when Genesis X arrives, it marks a permanent change-over to what they should have been using all along ... Cycles!
Believe it or not, I say that with a kind of sadness, since although I absolutely despise Nvidia, I'm nevertheless quite fond of the Iray render engine. Cycles is the way forward, though, since it is the renderer of choice of the biggest name in the business (Blender), it is open source, and it works on BOTH major brands of GPU. For me, supporting both brands instead of one would be an absolute no-brainer. Unfortunately, I don't get to make the decisions. But one thing is for sure: unless Daz switches to Cycles, their userbase will simply dry up over time due to the level of inaccessibility brought about by the sheer cost of the hardware.
I asked Copilot:
Is Nvidia's new renderer that replaces Iray more of an industry standard that traverses most rendering engines?
Copilot answered:
NVIDIA has deprecated its RTX – Accurate (Iray) renderer in favor of RTX Pathtracing / Interactive, which offers significantly faster rendering. While Iray was widely used, the newer RTX rendering technologies are becoming more of an industry standard due to their speed and real-time capabilities.
The RTX rendering pipeline integrates well with USD Composer and other NVIDIA Omniverse tools, making it more adaptable across different rendering engines. However, full compatibility depends on the specific software and workflows being used.
I then asked Copilot:
Is Nvidia's pathtracing more like Blender's cycles?
Copilot answered:
Great question! NVIDIA's RTX Pathtracing and Blender's Cycles are both path-tracing render engines, but they have key differences:
If you're working in Blender, Cycles is still a fantastic choice, but NVIDIA's path tracing is pushing the boundaries of real-time rendering.
I then asked Copilot:
RTX Pathtracing / Interactive uses a PBR model, is that model the favored shader model of the industry?
Copilot answered:
Yes, Physically Based Rendering (PBR) has become the industry standard for modern rendering engines, including RTX Pathtracing / Interactive. PBR ensures realistic material properties by simulating how light interacts with surfaces, making it widely adopted across game engines, VFX, architectural visualization, and product design.
Most major rendering engines—including Blender Cycles, Unreal Engine, Unity, Omniverse, and V-Ray—use PBR workflows to achieve consistent, physically accurate results across different platforms. The Metal-Roughness and Specular-Glossiness models are the most common PBR shading approaches.
The 5090 is coming tomorrow!
I have C4D R25 with Vray
And Maya With Arnold.
Cycles is very slow compared to these two
However, we have two very good bridges from Daz Studio to Blender (the official bridge and Diffeomorphic), so nothing is stopping DS users from rendering Daz content in Cycles.
IMHO Daz should do what Reallusion has done for iClone: dump the deprecated Iray engine and implement an Omniverse connector for DS 2025.
A Daz V2025 to Omniverse bridge would be of interest to me. One that would also export Ultra Scenery, VDBs, all manner of hair, transparencies and figures (with no baking of shaders) etc...
As far as I'm aware, DS has no ability to generate VDBs outside of Mesh to Volume, so you should just be able to import your Daz VDB products directly into Omniverse.
The gist is, a bridge that would export the whole scene, like Diffeomorphic. I am almost done paying for the JangaFX suite (EmberGen etc...). Even Diffeomorphic does not know what to do with VDBs (Blender), some metal textures/marble (shaders), dual-lobe hair, fire/transparencies, glass, and the figure materials often do not export well (while Ultra Scenery is completely out of the question)... etc.
Even manual exports as FBX do not really work.
Perhaps the new Nvidia rendering engine (standard) will make importing/exporting across programs/platforms more doable.
"Nvidia RTX Pathtracing/Interactive" how can we abbreviate this and give it a name like IRay?
RPath?
Different programs and different rendering engines have different capabilities. Custom shaders will need to be rebuilt in whatever program you're exporting to. "Dual lobe" is not a type of hair, but a shader that can be used for hair in DS; the strand-based hair can be exported, but seemingly not through (Daz's implementation of) FBX. Some programs/rendering engines use cutout opacity, while others use alpha. Every time I export something to a different program, I do so with the full expectation that I will need to rebuild shaders, and I might need to export multiple times, sometimes using different file formats.
USD export would address many if not most of these issues, so that's what we should really be pushing for, rather than a specific Omniverse bridge. You're also referring to Diffeomorphic rather than Daz's own Blender bridge, so you seem to be aware of the limitations of Daz's own solutions.
The whole point I am trying to make is: wouldn't it be nice if we did not have to rebuild shaders? If moving things from one app to another followed a unified standard?
JPGs, word document formats, and video formats are generally standardized.
Without these standards the internet would not be possible. We need a 3D internet with standardized formats as well.
Not just objs and vdbs but scenes too.
A lot of things would be nice.
Shaders, rendering engines and 3D programs are generally standardized, but "generally standardized" means less than you seem to be suggesting. Not all text documents are created equal, and not every program can fully represent all the features of any given document. JPG is only one image format; different formats have different capabilities, and not every format can be used for every purpose. The same is true of video. Any standardization would have to come from the major industry actors: if Otoy and Autodesk, for example, adopted the same standard shader language, then maybe other rendering engine companies would follow suit, but maybe not. Nothing Daz could do will have any impact on this, though, because Daz doesn't have its own rendering engine. Even if it did, that engine would have to be an industry-wide standard on the level of Arnold, Redshift or Octane to have any hope of establishing such a standard. MaterialX is one system that has that potential, but it's not guaranteed to reach that point.
FBX, OBJ, Collada, Alembic, and so on are all standardized formats, and yet here we are. USD is becoming, as its name suggests, a universal format. DS currently can't export USD, so that would be the most important step toward your vision.
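For anyone curious what that "universal" format looks like, USD's text flavour (.usda) is just plain text. Here is a hypothetical hand-written stage containing a single quad (built as a string, not generated with the official pxr library):

```python
# A hypothetical, hand-written .usda stage: USD's ascii flavour is plain
# text, so any tool can at least inspect it. Not produced with pxr.

usda = """\
#usda 1.0
(
    upAxis = "Y"
)

def Xform "World"
{
    def Mesh "Quad"
    {
        int[] faceVertexCounts = [4]
        int[] faceVertexIndices = [0, 1, 2, 3]
        point3f[] points = [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)]
    }
}
"""

with open("quad.usda", "w") as f:
    f.write(usda)

print(usda.splitlines()[0])  # the "#usda 1.0" magic line
```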
Word documents are more standardized than most think; you can copy and paste words from nearly all documents into Notepad.
I remember typing my songs out on one of those old double-decker typewriters in triplicate with carbon paper.
I remember when I had all my song documents typed in a Commodore 64 word processor app that my mom bought at a department store.
I tried to get them converted to Windows text format and the technician sent me back a disk with cryptic programming characters on it. In other words, I had to retype my songs again.
I typed them all out in WordStar (from dot matrix printouts) and I was able to eventually get them into Microsoft Word.
We have come a long way since then.
K-Lite Mega Codec Pack, Adobe Media Encoder, and Any Video Converter Ultimate pretty much give me the ability to read and write just about any video format known to science.
I remember when Sound Forge used to make my quad core Pentium computer crash... And when Windows 3.1 (with my 1gb HD) sound recorder used to only allow me to record 60 seconds of audio, and when 320x240 video resolution was all the rage. My Cakewalk DAW for MSDOS only had MIDI. My Commodore 64 computer had a cartridge slot in the keyboard that I plugged in a midi cartridge called Master Tracks. A friend of mine built a box he designed and hand-made the circuit board for FSK so I could sync my Fostex 8 track reel-to-reel to the Commodore with time code for an early DAW (he made) named Score Point. lol
Passport Midi Interface « My Commodore 64
You're not talking about "word documents", but basic text, and not only would the internet not work without that level of standardization, COMPUTERS wouldn't work. You can write HTML in notepad, but it needs a web browser to be viewable as a website. You can write C# or Python or whatever other programming language into notepad, but it needs a program that knows how to execute the code. If you were determined enough, you could probably write a series of binary color values that perfectly describe an image, but that image wouldn't be viewable in notepad.
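That binary-image aside can be made literal: a handful of raw colour values plus a standard header really do form a viewable image. A minimal sketch writing a 2x2 24-bit BMP using only the standard library (pixel values chosen arbitrarily):

```python
# Writes a tiny 2x2 uncompressed 24-bit BMP by hand: raw BGR colour
# values plus the standard 14-byte file header and 40-byte info header.
import struct

width, height = 2, 2
# BMP stores rows bottom-up, pixels as BGR: red, green / blue, white
rows = [
    [(0, 0, 255), (0, 255, 0)],
    [(255, 0, 0), (255, 255, 255)],
]

row_pad = (4 - (width * 3) % 4) % 4  # each row padded to a 4-byte multiple
pixel_data = b"".join(
    b"".join(bytes(px) for px in row) + b"\x00" * row_pad for row in rows
)

file_size = 14 + 40 + len(pixel_data)
header = struct.pack("<2sIHHI", b"BM", file_size, 0, 0, 54)      # file header
dib = struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24,      # BITMAPINFOHEADER
                  0, len(pixel_data), 2835, 2835, 0, 0)

bmp = header + dib + pixel_data
with open("tiny.bmp", "wb") as f:
    f.write(bmp)

print(file_size)  # 70 bytes for a complete 2x2 image
```

Open tiny.bmp in an image viewer and it's a picture; open it in Notepad and it's gibberish, which is exactly the point about standards living at the format level.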
I was actually talking about various word documents "before" basic text was standardized. Before MSDOS and Windows.
Allow me to quote you back to yourself:
In a single sentence, you talked about word documents being standardized, then pointed out that you can copy and paste text from one program to another, as if those are the same thing. I can copy text from Chrome and paste it into a wrangle in Houdini, and neither of those are "word documents". Here's a snippet from my resume, saved in .docx, opened in notepad:
I hope I don't need to state that my resume isn't an unintelligible string of Chinese, Korean, Arabic and at least one other script I don't recognize. That's how "standardized" word documents are: a text program made by Microsoft can't correctly represent a document created in a different text program made by Microsoft.
.obj file format is close to what you want. It supports vertex position in x,y,z coordinates, UV mapping and colour, anything above that is renderer specific.
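To illustrate how little .obj actually carries, here is a hypothetical single quad written in the format and re-parsed with nothing but the standard library; everything beyond these records is up to the renderer:

```python
# A minimal .obj round-trip: vertex positions, UVs and faces are all
# the format carries here; shading is renderer-specific. Hypothetical quad.

obj_text = """\
# a single quad
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 1.0 1.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 1.0 1.0
vt 0.0 1.0
f 1/1 2/2 3/3 4/4
"""

def parse_obj(text):
    verts, uvs, faces = [], [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0] == "#":
            continue
        if parts[0] == "v":
            verts.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "vt":
            uvs.append(tuple(float(x) for x in parts[1:3]))
        elif parts[0] == "f":
            # each face corner is "v/vt" with 1-based indices
            faces.append([tuple(int(i) for i in c.split("/")) for c in parts[1:]])
    return verts, uvs, faces

verts, uvs, faces = parse_obj(obj_text)
print(len(verts), len(uvs), faces[0][0])  # 4 4 (1, 1)
```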
Asking for a universal scene exporter that includes universal shader support is like asking for a universal spoken language. ( does anyone actually speak Esperanto anymore ?? ).
A scene converter seems no more complex than what Microsoft and Google have done with Phone Link, where I can run over a million Android apps/phone calls/messages/pictures from within Windows.
There is this thread:
https://www.daz3d.com/forums/discussion/719226/the-usd-exporter-hidden-in-daz-studio
I remember complaining to Microsoft over 20 years ago that word processing is a human right, along with freedom of speech. (OpenOffice was at that time given converters for Word documents; Word can now load and edit PDFs, and I believe Adobe Acrobat can now load Word documents.)
I also complained that operating systems should not be device dependent. A phone device should be able to adopt (communicate with) a different OS on-the-fly... (still waiting on that one)
Currently there is antitrust litigation with Apple aimed at democratizing phone operating systems.
Consumers should not be subjected to proprietary forms of communication. (Just a thought or idea.)
Music has VSTs, a plethora of apps and instruments that work across all digital audio workstations. (Some of the VSTs that I own: Native Instruments Collector's Edition, Omnisphere, Nexus, Falcon Synth, Antares products, Waves Audio, VPS Avenger, Scaler 2, EastWest Sounds, and a multitude of others; plus Reason, which I do not own but has since become a DAW as well.) Most DAWs export tracks as "stems"...
It is helpful to consumers when content is placed in a universally available format. Universal formats have not hindered content creation. And indeed, graphics cards should also play nice together.
...OK back to the RTX Pro 6000 for a moment.
Here is a basic system specification built around the RTX Pro 6000 (a revision of the core specs I posted earlier).
GPU: Nvidia RTX Pro 6000: 11,000 USD
CPU: AMD EPYC 9135, 16c/32t, base clock 3.65 GHz / boost 4.3 GHz, TDP 200 W [total memory support 9 TB DDR5]: 1,388 USD
CPU Cooler: Noctua NH-U14S TR5-SP6: 220 USD
MB: Gigabyte MZ33-AR1 Rev. 3.x Server Motherboard [single socket]: 759 USD
Memory: Micron DDR5 4800 288 GB ECC (6 x 48 GB): 1,436 USD (this leaves 6 of the 12 DIMM slots empty)
PSU: EVGA Supernova 1200 P2 80+ Platinum 1200 W: 300 USD
This is actually a fairly basic core specification to support the RTX Pro 6000.
Case, drives, displays, keyboard, pointing device etc. not included for this discussion
I primarily drafted this up to illustrate that workstation/server grade components (particularly CPUs and motherboards) are necessary to handle the Pro 6000's 96 GB of VRAM, as it would require 288 GB of system memory (using the 3x VRAM formula) to support rendering. Ideally one would want some overhead, so 384 GB (12 x 32 GB) would be more desirable (add 449 USD to substitute a Nemix DDR5 4800 memory kit).
For all of the above (again not counting drives, full tower case, and peripherals): 15,103 USD (15,552 USD with 384 GB memory).
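Summing the listed components (prices as quoted above), plus the 3x-VRAM rule of thumb for system memory:

```python
# Adding up the RTX Pro 6000 build components listed above
# (prices as quoted in the post), plus the 3x-VRAM memory rule of thumb.

components_usd = {
    "RTX Pro 6000": 11_000,
    "EPYC 9135": 1_388,
    "Noctua NH-U14S TR5-SP6": 220,
    "Gigabyte MZ33-AR1": 759,
    "288 GB DDR5 ECC": 1_436,
    "EVGA 1200 P2 PSU": 300,
}

total = sum(components_usd.values())
print(total)        # core build total in USD
print(total + 449)  # with the 384 GB memory substitution

vram_gb = 96
print(3 * vram_gb)  # 288 GB minimum system RAM by the 3x rule
```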
So compared to a dual Epyc 9965 system, about two-thirds the cost (projected), though still a pricey endeavour (and that dual 9965 system would still need a GPU to support the display(s)).
It's available for $8,998 on Amazon.
https://www.amazon.com/Blackwell-Professional-Workstation-Simulation-Engineering/dp/B0F7Y644FQ
Looks like it has the same single 12V power connector. A shame for such an expensive and otherwise great card! If you need it for work it would make sense, and it can also be deducted as a business expense.