WHEN... will DAZ3D learn...
gniiial
how to use as many of the resources of the computer we run it on?
Is it too hard to learn from other free and open-source programs, like Blender or similar, which load faster, render faster, and work faster overall?!
Is it really impossible to check the available CPU cores and split the loading between them? Like the Genesis figure with the morphs on core one, hair only on core two, clothing only on core three, and so on... That would at least shrink the loading time of heavy scenes A LOT! Cores 4, 5, 6 could do other, different things, like nature-oriented stuff, props, static environments and such... IF there are processors, RAM and overall 90% more power available, it should be used! Right?
I wonder why it is not possible already! While I load a heavy scene, my resource usage looks like this:

Comments
Not everything is suitable for multi-tasking; some things are inherently sequential. Blender has a lot of great features, but it doesn't - that I know of - have a flexible and extensible system for adding morphs and other data to a scene element after creation, which the DS architecture does allow - and which is one of the things that is more-or-less sequential/serial in nature.
It should still be possible in 2026 to think of a way, and realize it, to really use the full advantages of a high-end computer system. I mean, my scene (8 Gen 9 figures plus hair and clothes) needs 10+ minutes to come up in full. Not even the rendering takes that long to get the crisp result I want. And it's no wonder, if everything is just dumped onto one core that, for example, is already partially in use by standard programs that need that first core.
Surely it should be possible to split this load and reduce the loading time without somehow conflicting with the program itself. And if everything depends on the Genesis figure, for example, then at least that could be loaded first, and the other features - hair, clothing, props, or whatever else has its own category - could be loaded once the figure is "available" to fit stuff on.
midgard229 said:
Blender saves the entirety of a scene into one enormous file... My Blender scenes are 4 GB for something simple and I despise it, whereas my Daz scene takes 40 seconds to load and weighs 20 MB. Sure, my drive is full with 3 TB worth of Daz assets, but a 40-second load time isn't bad IMO. Now the old versions of Daz, where it'd take an hour to load... yes, that was a problem xD
Blender is by far the fastest program on the market. Maybe you should consider updating your hardware, or set things straight in Blender through its settings and all the possibilities to tweak it properly. And I don't know why your files are this big (4 GB, really?!). I transfer a lot of stuff into Blender, and it loads like a charm. Not even 5 seconds and it's there. And we are talking program start, just from hitting a .blend file! I just used the lounge (the nightclub https://www.daz3d.com/night-lounge-iray ), where I added some stuff. The .duf file is 11.8 MB. I hit the file:
Daz loads the entry screen: 5 seconds.
Then reading assets: 3-5 seconds...
Then clearing the scene & deleting objects: it's not even done after 2 minutes of waiting, and we are talking one camera, an empty scene... so basically NOTHING! (3 minutes!)
Then the lounge appears and the login pops up (I don't know why that even has to happen... but that's not the problem...).
Clearing the scene runs on one core only, and loading the stuff does too, for sure. Not even a bigger spike on my SSDs, RAM, or in the CPU or GPU performance graphs... which means this hardware only gets used when DAZ3D renders.
Hit the render button: 2 minutes 40 seconds (not even crisp, and with fireflies after 350 iterations).
The .blend file of the same scene: 71.8 MB!
Hit the .blend file: 5 seconds, scene there!
Hit render: 27 seconds (250 iterations, crisp and usable, no need for more! Even though it's a brighter light).
But I cannot do that with all the character stuff. That's way too much to transfer to achieve an overall smooth workflow.
Look at Blender, look at Unreal: they achieve things one can only dream of using in DAZ3D directly. And for real, if you create a character once, that could also be a very fast process... if, for example, we had all the features we want to use in one place and could just click through them.
And don't get me wrong here. We have Turbo Loader out there, which basically forces DAZ3D to ignore all these Genesis-oriented features, and it does a great job of making the program faster. But that cannot be all...
For example, why not add a feature where one hooks in a library (let's say clothing), and only then are the features that are needed actually loaded?!
I have already split my libraries into nature, environments, hair and other categories, so I can dump the load manually if I want to (via the preferences: deleting a folder, adding it back when I need it). But that is tedious! Turbo Loader gives you the opportunity to save settings with, for example, only female-oriented morphs, characters and such.
Why is that feature not part of literally everything as an automatic process?! Or even as a manual one: the folders kept just as "links", and only when accessed would the features be loaded. I assume that would even make starting a saved file faster, since there could be a "link to a lib", like it's done in programming. Okay, this is a scene with only nature, only some buildings, some cars... nothing else needs to be loaded. So the result is faster for sure.
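Roughly what I have in mind, as a little sketch (the class, folder names and .duf filter here are all made up by me for illustration; this is not how DS works internally): a library folder stays a plain "link" until the first time it is actually browsed.

```python
import os

class LazyLibrary:
    """A content folder that is only indexed the first time it is accessed."""

    def __init__(self, category, path):
        self.category = category   # e.g. "clothing", "hair", "nature"
        self.path = path           # just a link to a folder until used
        self._index = None         # nothing scanned yet

    @property
    def index(self):
        # First access triggers the scan; every later access is free.
        if self._index is None:
            print(f"indexing {self.category} ...")
            self._index = [
                os.path.join(root, f)
                for root, _, files in os.walk(self.path)
                for f in files if f.endswith(".duf")
            ]
        return self._index

libraries = {
    "clothing": LazyLibrary("clothing", "/content/clothing"),
    "hair":     LazyLibrary("hair", "/content/hair"),
}

# Opening a nature-only scene never touches these folders; only when
# the user actually browses "clothing" does the scan happen:
# files = libraries["clothing"].index
```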
Blender does not even load any library, because - what for - when the user hasn't even started to do anything?!
If we open a file, the libraries are shaders, textures, stuff that is actually really used, and nothing else.
If we then add a chair, that chair's data gets loaded.
The linking and appending features are also very interesting... I mean, these are great ideas, right?! Adapt them somehow...
Look at Unreal, how it deals with its camera now (you can do that easily with Blender too!) and, for example, trees, bushes, grass and nature in general... It's incredible what is possible there.
You can clearly see the industry's future there...
Btw, the first image is DAZ3D, the second is Blender... notice the difference when rendering. Even though it's not a heavy scene, the difference is pretty clear: the GPU is used more, and the CPU is quite involved in the process.
Windows will, as long as there are enough cores, spread the actual work across multiple cores even if it is a single thread of activity.
Come up with an architecture to do that while maintaining the extensibility of the figures. "Surely it should be possible" isn't an argument, or evidence to support an argument.
Turbo Loader does the same, in effect, as uninstalling some content - which, yes, will reduce load time. But that isn't modifying the way DS loads content; it just means you don't have the disabled properties available.
In general a DS scene will be small, because it points to lots of external files for the figure, its morphs, UVs etc., and maps. Blender, as far as I know, embeds everything, and so can't add newly installed content the way DS can. If your scene files are 4 GB then they must be embedding a lot of data (custom morphs, models, UV sets, simulations, or other things that don't point back to an asset file).
Well, the .blend files are large when porting a Daz character that has hair, clothing and so on. My PC is fine; I have an Nvidia 3080 10 GB with 64 GB of RAM. Also a hint: when you save a scene in Daz, just force-close it out via the Task Manager. I've been doing it for years.
My scene with G8 + G9 and everything takes less than a minute to boot up, so maybe something is up with your system.
If you're talking about Daz Studio 4.24, Daz Studio 4 was first published in 2011, and all subsequent sub-versions of DS 4 have ultimately been derived from that framework.
In 2011, a fancy consumer level CPU might have four cores, but two cores was entirely normal.
So, at the time, a multi-core CPU mostly meant "your computer can run more things at once without getting bogged down" rather than "your computer can split one task into many jobs". Multi-threading has a performance overhead (particularly early on, when people were still learning how to go about it), so when you've only got a couple of cores to play with, much of the time saved by doing the job in multiple threads was often largely lost in splitting up the task in the first place.
As such, software was often still being developed as single threaded; the idea that consumer level hardware would come to have 12, 16, even 24 cores... that was a fantasy.
Rebuilding a single-threaded application into a multi-threaded application is a hugely non-trivial task. Even more so when you multiply it by a difficulty peculiar to Daz: they don't really control the entire infrastructure, but they are responsible for it - that is to say, there are a great many plug-ins and scripts that vendors have made. Daz can't force an update to those plug-ins (indeed, they may not even have the source code), but angry customers *WILL* come to Daz if their plug-ins break, because Daz is the storefront.
So Daz has long had to be very careful about what changes it can and can't make to the infrastructure - we can never expect such an update to DS4. However, Daz Studio 6 is on the way, and in many respects it is a lot faster, presumably being built to support more multi-threading.
It will never, however, be able to be as fast as Blender; if I look in my library, I have over 22,000 morphs for Genesis 8 Female in just my main library of Daz3D purchases, totalling more than 15 GB; all of my custom assets and things from other stores have their own libraries and add on top of that.
DS has to go through and index all of that each time I load a scene with G8F in, so obviously it's going to take longer than Blender opening a Blend file with a few dozen MB of data in it.
Blender is a professional package, and when something as obvious as this arises, one really benefits from blaming oneself first. You might want to investigate Library Overrides.
You can append a shapekey from another blendfile, just like any other object. The problem is that there is no one common topology to rule them all, like the Genesis frameworks, but rather several good ones. That's not actually better.
No, a single thread is the most fundamental execution context there is. The OS can't spread a single thread across cores. The kernel can migrate the thread from core to core to keep it close to the memory it accesses - the related concept is called processor affinity - but it can't divide a thread any further to parallelize it.
But you already identified the problem... not all problems are parallelizable; some are inherently sequential, with the input of one part of the process depending on the output of the previous one. 128 cores will not do you any good if the structure of the problem doesn't lend itself to them.
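As a toy sketch of the difference (plain Python with sleeps standing in for real work; nothing here is DS-specific): the independent loads scale with cores, the dependency chain does not.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_asset(name):
    """Stand-in for reading and parsing one independent file."""
    time.sleep(0.1)            # pretend I/O + parsing
    return name

# Independent items: trivially parallel, so more cores help.
with ThreadPoolExecutor(max_workers=8) as pool:
    assets = list(pool.map(load_asset, [f"prop_{i}" for i in range(8)]))

# A dependency chain: each step needs the previous step's output,
# so no number of cores can shorten it.
state = "base figure"
for step in ["apply morphs", "fit hair", "fit clothing"]:
    time.sleep(0.1)            # must finish before the next step starts
    state = f"{state} -> {step}"
print(assets, state)
```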
But there's pretty damning evidence in the logs to support the assertion that this is a simple matter of an architecture that cannot provide an acceptable user experience. This is not conjecture. You can literally see all the things that DS loads that the scene does not even reference. It is reasonable to suggest that this is suboptimal, especially when the concept of Lazy Loading is a well-known technique.
While developing Sagan/Hitchens, I've crashed DS probably thousands of times. I've never lost data or corrupted anything. I cannot imagine what DS is doing for such a long time, to "clear" a scene.
I wasn't meaning to suggest otherwise, just that the execution of a single thread does not tie up a single physical core. I wasn't quite sure what the previous poster was worried about, but it might have been the physical effect of running one specific CPU core flat out, so I was pointing out that that isn't how it works.
Well, it has to read all the morphs, UV sets and so on to have them appear in the options for the figure - unless it read them when the option list was opened, which would speed up the initial load at the expense of slower operation while working (which would probably be the worse user experience for most of us). That pre-loading of all the options was what I was saying Blender doesn't do, rather than claiming that there was no way to add elements explicitly (but having to go through UI interactions to do so).
And that is exactly what I think @gniiial and I are referring to when we say that the design of the application does not provide an acceptable user experience. Why force the user to wait for dials that the user has expressed no interest in, instead of initializing them into some kind of inactive state and only loading them if/when they are actually used? That part is not speculation but empirically observed, and it is a mild criticism in all events. The dev team gets a lifetime pass for the Genesis framework in my book :) Say what you may, the thing otherwise works. Well. Projection morphs are an ingenious innovation.
Here's my speculation: it is difficult to believe that the same devs who can make something as complicated as a 3D application like DAZ Studio just didn't know about the concept of lazy loading. What is believable, though, is that no one suspected DAZ Studio would be the smash success that it is, and the combinatoric operation of linking all the morphs and their dependencies was manageable when the catalog was small. It's a victim of its own success, so to speak. But whatever, it has improved greatly in any case.
Just my two cents.
But surely the main point of the Genesis system is its extensibility - people can add new morphs, new UVs, projection templates etc. within the existing framework, and without breaking other morphs, UV sets, projection templates etc. It may be that there is a non-linear way of handling those, and it may be that the net effect of that would be a removal of existing bottlenecks without introducing new ones - I just don't think that should be blithely assumed with a "surely it must be possible"; it is something that needs to be shown, at least in outline, as a potential possibility.
Because, for very important practical reasons, control links have to be able to be defined in either direction.
That is to say that if you have a Controller A, with child morphs B, C & D, then the code can be in A to say "when A is dialled in, dial in B, C & D". Or the code can be in each of B, C & D to say "Go and look at A. If it's dialled in, dial me in too, thanks". Or various combinations.
Being able to link in either direction is very important, both practically and legally.
Let's say someone creates some new expressions, but *also* wants to have correctives for those expressions to make them look their absolute best on Victoria 9. So, they need to link the correctives to Victoria 9's morph so those correctives only activate on her shape. But they cannot update Victoria 9's morph because a) if everyone who needed to link to Victoria 9's morph had to alter it, then only one person could ever link to it, b) it would stupidly bloat everything if you needed to send around updated morphs for everything you wanted to link to, and c) they do not have the copyright to alter and redistribute Victoria 9's morph. So the link to Victoria 9's morph needs to be saved in the expression corrective file, not Victoria 9's.
As such, an on-demand top-down indexing of the morph library of "only look in the files once you know you need them" is impossible, because a lot of morph links are defined bottom-up and thus you don't know for sure whether you do or don't need a file until you've looked in it.
Now, possibly there are better ways to handle this infrastructure. Maybe the links could exist as files themselves, and their file names are encoded such that DS can tell for sure "Hey, that link file has a name that matches this morph file's name, so I know that it tells me about a link I will need to use if I ever dial this morph in, but I don't need to look in it right now", but it would not be possible to migrate an existing figure base to a new system like that.
For now, Daz Studio is forced to work the way it does, opening and checking every morph file it can find in the library folders for that base figure to find all possible links before proceeding.
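To make the bottom-up problem concrete, here's a toy illustration (file names and structure invented for the example; this is not the real DSF format). Because the link lives in the corrective's own file, the only way to discover it is to open every file:

```python
# Links live in the *child* files (bottom-up), so there is no way to
# know which files matter without opening all of them.
morph_files = {
    "Victoria9.dsf":       {"links_to": []},
    "SmileCorrective.dsf": {"links_to": ["Victoria9.dsf"]},  # vendor's file points at V9
    "ElfEars.dsf":         {"links_to": []},
}

def files_needed_for(dialled, files):
    needed = {dialled}
    # Every file must be opened and checked, even the ones that turn
    # out to be irrelevant (ElfEars.dsf here).
    for name, data in files.items():
        if any(target in needed for target in data["links_to"]):
            needed.add(name)
    return needed

print(files_needed_for("Victoria9.dsf", morph_files))
# -> Victoria9.dsf plus SmileCorrective.dsf
```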
That doesn't necessarily have to be the case. There are ways to do such things in parallel. It's really a matter of breaking the process into separate tasks, each assigned to a worker thread. How easy or hard that is to do really depends on the language/framework used.
One way to handle that is for each file to have a header with all of the metadata about its dependencies. That can be read without having to process the entire file. The rest can be postponed until the file is required, or handled as a background task. Waiting until a dial is used can trigger 50 or more dependencies being loaded, which can cause the app to freeze while that's being processed.
That, of course, can be sped up if loading and decoding files is done in parallel; for example, each file can be processed by a separate thread. The best way to do that depends on where the bottlenecks in the pipeline are.
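Something like this, as a rough sketch (the one-header-line-per-file format is invented for illustration; it's not how DSF files are actually laid out):

```python
import glob
import json
from concurrent.futures import ThreadPoolExecutor

def read_header(path):
    """Read only the dependency metadata, not the whole (possibly huge) file."""
    with open(path, "r", encoding="utf-8") as f:
        header = json.loads(f.readline())   # assumed: one JSON header line per file
    return path, header.get("depends_on", [])

# Scan all the headers in parallel; the file bodies are postponed
# until a dial actually needs them.
paths = glob.glob("/content/morphs/*.dsf")
with ThreadPoolExecutor(max_workers=8) as pool:
    dependency_map = dict(pool.map(read_header, paths))
```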
This is already how it is done. The formulas for any morph are stored near the start of its DSF file, immediately after the basic parameters of said morph. The morph deltas are further through the file, and indeed do not get processed and loaded until the morph is actually activated.
However, looking in the morph file has reminded me of something I'd forgotten. The basic parameters of a morph include things like what the morph's minimum and maximum limits are. Unless the file is opened and its header sections processed, then DS does not know whether a morph's slider should be displayed to the user with a range of 0% to 100%, -100% to 100%, -50% to 100%, 0 to 200%, or whatever.
This makes speeding up the processing of morphs a considerably more complicated problem than I initially considered, and I'm now not actually sure I can see a better way that actually achieves the aims Daz needs to meet.
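For a concrete picture of that two-stage layout, here's a toy sketch (heavily simplified; not the real DSF schema): the slider range and the formulas sit up front and the deltas behind them, but even the range can only be known by opening the file.

```python
import io
import json

# Simplified stand-in for a morph file: one header line (basic
# parameters + formulas), then the heavy delta payload on line two.
fake_file = io.StringIO(
    json.dumps({"id": "SmileHD", "min": 0.0, "max": 1.0,
                "formulas": [{"input": "Victoria9", "op": "mult"}]}) + "\n" +
    json.dumps({"deltas": [[i, 0.001 * i, 0.0, 0.0] for i in range(10000)]})
)

# Stage 1, at figure load: just the header - enough for the slider's
# range and the control links, but the file still had to be opened.
header = json.loads(fake_file.readline())

# Stage 2, only when the dial is actually moved: decode the deltas.
def activate(f=fake_file):
    return json.loads(f.readline())["deltas"]
```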
I believe Daz Studio 6 is already moving towards being able to parallelise these things more, but it's not an option we can expect to see added to DS4, which has reached the end of its development life and would have been extremely impractical to rebuild anyway.
Either the range info would have to be moved to the metadata section, which would start to bloat things a bit, or the slider would have to be displayed without specific info until it's completely loaded. That's where using a background task would help: things the user tries to access can be moved to the front of the queue.
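A minimal sketch of that kind of background queue (plain Python; the morph names are invented): everything starts at a low priority, and whatever the user touches jumps ahead of the rest.

```python
import queue
import threading
import time

tasks = queue.PriorityQueue()

def background_loader():
    # Drains the queue lowest-priority-number-first, forever.
    while True:
        priority, name = tasks.get()
        time.sleep(0.05)                  # pretend to decode the file
        print(f"loaded {name} (priority {priority})")
        tasks.task_done()

threading.Thread(target=background_loader, daemon=True).start()

# Everything starts out in the low-priority background queue...
for morph in ["BrowUp", "CheekPuff", "JawOpen"]:
    tasks.put((10, morph))

# ...but the dial the user just touched jumps ahead of whatever
# is still waiting.
tasks.put((0, "SmileHD"))
tasks.join()
```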
I figured as much. Much of the framework for DS4 was set when CPUs only had a few cores and most people still relied on spinning hard drives. Older drives didn't handle multitasking well; command queuing helped, but the drive was still a massive bottleneck. Processing files in parallel would have required investing time that would have benefited few users back then.
I'm sure such things were a "next gen" feature. I doubt they ever planned for that next major release to be pushed back this long.
I believe the Poser origins are the issue here. Or, more likely, backward compatibility is the issue.
I think keeping the extensibility and having fast response would require redoing how morphs are set up. For example:
You create a figure from scratch and get the basic morphs. You want to use a morph pack you purchased called "Heads Vol 2", so you go to the library and "attach" the morphs within it to the figure. Now the figure is "base morphs" plus "Heads Vol 2" (and whatever else Heads Vol 2 loads). Later you save and reload this figure. When it reloads, it loads "Heads Vol 2" automatically, because the scene/character file tells it to.
I'm sure there are problems with my scheme as well. But if load times were basically "instant", maybe those other problems wouldn't have as bad an impact. After all, if all those morphs are loaded, they take up memory. Saving time and memory can't be too bad.
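A toy version of the scheme (all names invented): the character file records which packs were attached, and a reload pulls in only those.

```python
import json

# Hypothetical character file recording which morph packs were
# "attached", so a reload only pulls those in, not the whole library.
character = json.loads("""{
    "figure": "Genesis9",
    "attached_packs": ["base_morphs", "heads_vol_2"]
}""")

# Stand-in for everything installed on disk.
library = {
    "base_morphs":   ["Height", "BodySize"],
    "heads_vol_2":   ["HeadShape01", "HeadShape02"],
    "creature_pack": ["Fangs", "Tail"],   # installed, but never attached
}

loaded = {pack: library[pack] for pack in character["attached_packs"]}
print(loaded)   # creature_pack never leaves the disk
```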
A design that creates "very important practical reasons" why it must be inefficient is not a very good design. There is no degree of rationalization or explanation that is more important than the user experience, which is the reason the design exists in the first place. Knowing why it is inefficient is no consolation while I'm twiddling my thumbs waiting for a scene to load; I'd vastly prefer to simply be ignorant of why it takes so long rather than knowing all the exact design decisions, as you've carefully explained, that make it painfully slow.
What's the alternative, removing product features that people use? There are trade-offs sometimes. Some features create a benefit, but create a cost elsewhere. That cost might be considered small at the time, or go unnoticed for years until circumstances change. The world is complicated like that.
The feature you're complaining about wasn't a problem at the time. It only became a problem later. The current design wasn't intended to handle 5000+ morph files. And yet Daz artists keep turning out characters with loads of custom morphs and such.
Now that it has indeed become a problem, Daz is going to have to deal with it. As has already been discussed, there are ways to speed up loading that aren't going to be added to the current DS platform; we'll have to wait for the next major version release. There are other ways to deal with the issue in the meantime, the main one being removing or disabling some of the figure morphs to take the load off a system that wasn't designed to handle it.
You can't put a 2-ton load in a half-ton truck and expect it to get up to 70+ on the freeway. You can either reduce the load, or wait for Daz to redesign the truck with a better load capacity.
Which is, to an extent, what the ExP system used with Victoria/Michael 4 did - it added its own data files to /Runtime/Libraries/!DAZ/Figurename, and in order to use them we had to run a script that added those files to the master list of files the figure read in at load. People hated this; hence the Genesis system.
Hating the experience of the current system doesn't mean that an alternative system's experience would be better. There are always trade-offs, as I noted in reference to ExP above, and Daz chose what they thought would be the lesser evil. It is possible that history has shown a different approach would have been a lesser evil, but it can't be assumed that that is the case. It is possible that DS 6 will try to adjust the system, but there will probably be downsides to that too.
To say that the current system is bad does not actually mean that a better option is known, or is even possible at this stage.
As I've already said, Daz has a very unenviable infrastructure. Unless it's a Daz Original (that is, a product where Daz either commissioned it or bought the rights outright), Daz does not have permission to alter the products it distributes. The copyrights still lie with the vendors, and the most Daz has is a "you have to update this or we'll have to remove it" ultimatum. And that's not going to work for the not-inconsiderable number of vendors who have passed away in the last few years.
As such, to be practical, any change to the system would have to work with the files as they are currently encoded.
Maybe, when we look back at it in hindsight, Daz made a poor decision in implementing this system, not foreseeing the trouble that might come with figure generations lasting many years and accruing tens of thousands of morphs... but there were good reasons to implement it in the first place, and changing it at this stage would still be a problem. It would be like removing a house's foundations and expecting it to stay up.
Perhaps Genesis 10 when it comes along will come with a new and more efficiently encoded file format, but Genesis 8 and 9 are going to have to continue to struggle along with the file formats they have, and the best Daz can do is try to come up with ways for the software to more efficiently process that architecture.
TL;DR: the software's fine, the content is the issue.
Let's be honest: while this system isn't brilliant, it's better than what we had. Around here somewhere I still have a couple of characters for Posette that are nothing more than a stripped-out OBJ and a couple of textures.
I've known for a long time that there are three things that will cause DS to get "bogged down" when it's loading a figure: the number of DSF files it's having to load, the number of formulas in each file, and the number of errors being reported. The errors are the real "kick in the nuts", as they can turn a 5-minute load time into a 30-minute load time. One user I helped had a 65-minute load time for G8F; I checked his log and told him to uninstall one product. He wasn't happy about removing it, but he did, and his load time dropped to just over 21 minutes - that one product was causing so many errors it was crippling his system. I showed him how to edit his files to fix the remaining errors, and by the end his load time was down to just under 10 minutes.
If I'd had direct access to his system I could have gotten it down even further, but that would have required me to rebuild damn near every morph/character pack he had.
I make my own characters, but I build them as one file rather than a separate head and body with a controller. In my characters for G8F there are 1020 formulas in each DSF file; in many characters made by others you will find two formulas in the controller, roughly 500+ in the head morph and roughly 900+ in the body morph (I've checked; that's how I know). That's a lot of extra formulas that shouldn't be there, and on top of that you are moving multiple bones twice, which is not good for the rigging.
"You need to use the Adjust Rigging to Shape comand, then most likely use Joint Editor to tweak the results"
That is a quote from Richard from a week or two back, just how many content creators actually "tweak" the rigging ?
With G8/8.1/9 the eyelids are controlled by bones rather than morphs, adjust rigging to shape normally results in wonky eyelids when you close them, tweaking the bones will fix the bone placements, and baking the changes into the head morph means no extra files/formula to worry about, but if you check your content you will find corrective morphs everywhere for closing the eyelids.
Rob and the rest of the devs are busting their asses trying find ways to speed up loading, and there's only so much they can do with that especially as it's the content that's the main issue, get rid of the extra files and formula and load times will speed up a lot.
No, but the design created the situation. And if there is no solution, Richard M. Stallman coined a great phrase: "Broken By Design".
It is their design that made this so, not a supernatural being, not the Universe, it was DAZ.
Because of the design.
A cogent example I can give is the decision to base the SDK on static header files instead of a component architecture. It's made worse by the fact that Qt, on which DS is based, has a very well-regarded component architecture. That's why DAZ cannot update the SDK in any substantive way. There is a growing list of cool DAZ features where you have no choice but to parse the JSON and try to figure out what DAZ does with it at run time, because it couldn't be put into the SDK without breaking binary compatibility. It is of absolutely zero consolation when someone chimes in to try to explain why that bad decision was made. Explanations don't update header files.
That's all I've been saying. I think that many non-engineers perceive such a criticism as an ad hominem. It's not. It's difficult to think things through 100% when there are others who just want your code in production yesterday. There's another saying in software: "Technical Debt". It's when you borrow from the future to get code working today. At the time you have every intention to go back and do things the "right" way, but of course, reality always seems to intervene and code you honestly meant to be a stop-gap for less than a sprint ends up in production for 30 years.
The DAZ guys built a pretty darned good 3D ecosystem. But they exist and work in the same environment as all software engineers do, and I have never known an exception to it. No PM has ever come to me and said "No, no, don't worry about the delivery date... take all the time you need to write your unit tests because, of course, at this company quality trumps profitability."
Your words are true, but I don't have much hope. The same pressures continue to exist, and are even stronger now than in the past, for the reasons you cited. So I see no reason to expect the outcome to be any different.
This is pointless. You have yet to propose any alternative solution that meets the design requirements, which can be retrofitted at the current stage, and which hasn't already been tried and proven unsuitable.
If there is no solution, there is nothing for "Daz to learn" as the topic title puts it, because they couldn't have done any better. Sometimes there is no perfect answer. Sometimes there was never one to begin with.
I'm sure if there was a good solution, Daz would love to hear it, because obviously it's not in their interests to have users getting picky about buying new content because they don't want it to slow down their library.
For the record, the developers do not agree with the assessment of the relative superiority of "a component architecture". I lack even a fragment of the knowledge required to form my own judgement, but your assertion should not go without at least a note that it does not command universal assent.
With a better design we would never have been "at the current stage". But now we are stuck with it, and that throws into sharp relief the importance of a forward-thinking design in the first place - my, and presumably the OP's, valid point. An argument based on the fact that an outsider, not involved in the product's development at all, doesn't have an alternative is an exceedingly weak one. And I still have not heard a convincing argument against the one that says that if the user experience is unsatisfactory, there was a deficiency in the design. And I sincerely doubt I ever will.
Again, that's "Broken by Design." There are always tradeoffs to be made. That's the process of Engineering in a nutshell. If there is no tradeoff to be made, the design cut off certain avenues to be explored. I have never seen that not to be the case. No one will ever convinve me that when there was ultimate freedom to determine how the framework would work, that the only way to make an extensible framework was to load all the morphs. At some point, that was designed in by a decision that, in retrospect, turned out to be a bad decision. Happens to the best of us.
That's probably the best argument for Open Source-ing DAZ Studio that I've ever heard.