Comments
OMG. I'd be like "nope, forgetaboutit!" lol
I'm in that boat. Good things are worth waiting for.... but there are some limits. By the time you've converted your lossless original image to jpg to post it, most of the wonderful clarity and intricate details have been lost anyway. You'll just end up with a spotty, slightly blurred jpg for all those hours of effort.
...@ Outrider 42, with Iray, many of my scenes end up going into swap mode, exceeding 11 GB (which is the available memory my system has after Windows). The issue with Iray is that both the Daz programme and the scene file also need to stay open, which uses up more memory on top of the render file. With Lux or Octane, once the scene file has been submitted to the render engine, both it and the Daz programme can be closed, freeing up the memory they use.
My compositing skill is abysmal, particularly when it comes to dealing with shadows that would cover multiple render layers. The only way to deal with that is to paint them in and due to serious arthritis, I no longer have a steady hand to do that. As I mentioned before, Octane has a unique hybrid system that splits the load if the file gets too large for the GPU's memory. It still renders pretty quick, much faster than Iray in pure CPU mode.
Again I am not looking for two minute renders, I am fine with 2, 3, or 4 hours. I just don't want my system tied up for several days rendering with only 8 CPU threads compared to over 3,000 that a Titan-X has (or 6,000 for two Titan Xs).
...yeah and instead of 2,000 - 3,000 cores (or more with multiple GPUs), it has to do it with only 8 or 12. I don't understand how Octane can do it then as it also uses CUDA on the GPU.
I also prefer the 3Delight shaders as they work better in Carrara as duf imports and with Octane.
If I use DAZ Studio I prefer Octane to Iray too, and when exporting OBJ and FBX the 3Delight shaders get more of the maps right.
Because Octane uses only the GPU and CUDA to render. It's not hybrid rendering. If needed it can use system RAM to store the texture data when it exceeds the limits of GPU memory. Otoy has developed an extremely efficient means for Octane to pull (pre-fetch) the texture data needed from system RAM without the GPU needing to wait for the slower system RAM to "feed" the GPU.
Thus the reason for the very slight performance hit, unlike the typical performance hit experienced when using a hybrid renderer, where both the CPU and the GPU are actively rendering. There is so much overhead involved in trying to keep the rendered data from the CPU and the GPU "in sync" that typically hybrid rendering is hugely slower than GPU rendering, and sometimes slower than CPU-only rendering. Otoy avoided this problem completely by not implementing a hybrid renderer, and just using system RAM to store texture data and doing all of the rendering on the GPU.
The "overhead" doesn't have to be huge. There are well known methods to keep synchronization bookkeeping down to as little as 8 bits across hundreds of machines/processes. Otoy decided that they just didn't want to deal with CPUs. One reason is that to use the CPU you must "play by the OS's rules", which means that in many cases the render process(es) can be stalled/starved for significant periods of time, especially on Windows. When using the GPUs, the OS has little overt control, and the processes can be run pretty much "unencumbered." HOWEVER, there is a drawback to this which many who use GPU rendering have experienced: rendering on the GPUs can make the video card "unresponsive". If the GPUs are your main display card, then you're hosed on getting anything done until the render completes. This is why a great number of GPU render users have a "low cost" card for display, and a "high cost" card to be used only for rendering.
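To make the prefetch idea a bit more concrete, here's a tiny CPU-only toy I put together (nothing to do with Otoy's actual code; all the names and timings are invented): while one bucket renders from an already-staged buffer, a background thread fetches the textures the next bucket will need, so the renderer hardly ever waits on system RAM.

```python
# Toy illustration of overlapping texture fetches with rendering.
# Nothing here is Octane code; it just mimics the prefetch idea.
import threading
import time

def fetch_from_system_ram(bucket):
    time.sleep(0.05)                       # pretend this is a PCIe/RAM transfer
    return f"textures for bucket {bucket}"

def render_bucket(bucket, textures):
    time.sleep(0.10)                       # pretend this is GPU work
    print(f"rendered bucket {bucket} using {textures}")

buckets = [0, 1, 2, 3]
staged = fetch_from_system_ram(buckets[0])   # prime the first staging buffer
for i, bucket in enumerate(buckets):
    prefetched = {}
    prefetcher = None
    if i + 1 < len(buckets):
        # Start fetching the *next* bucket's textures while this one renders.
        prefetcher = threading.Thread(
            target=lambda n=buckets[i + 1]: prefetched.update(tex=fetch_from_system_ram(n)))
        prefetcher.start()
    render_bucket(bucket, staged)
    if prefetcher is not None:
        prefetcher.join()
        staged = prefetched["tex"]
```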
Kendall
...yeah, this is what I deal with a lot rendering full scenes in Iray on the CPU. That is also after I have manually converted all the surfaces to Iray. Even when 3DL optimises surfaces, the longest I've seen it take was maybe 1 - 2 minutes (and I can run that process separately from the render process).
If I had the room in my tiny living space I'd probably set up a second computer, but it's just too much and takes up too much room. I could let a render bake forever and not have it bog down my other processes, but that's just not going to happen here! lol
...thank you for the explanation of how this actually works. Sounds very elegant in comparison. If I only had the resources for it, the plugin, and a GTX 780 TI 6 GB (which would work on my older system).
So again I ask, why can't Iray be set up the same way rather than dumping everything to the CPU when the GPU runs out of memory? Unless it is just there to sell more expensive cards and their VCA.
..exactly, I'd use the existing card for the Display and the 780 TI for rendering.
The 1060, 1070 and 1080 are pretty good value compared to my Kepler-gen Titan
Thanks to NVidia's multi-billion-dollar research, efficiency has increased like crazy and things will get cheaper and faster
Not having to buy a new power supply to install the non-Titan Pascal cards, very grateful for that!
The future is geared towards realtime-everything, esp VR
What takes 3 hours today will be 3.5 seconds tomorrow...very exciting
The real test for Iray in Daz Studio, specifically, is beyond the shiny: SSS, and even better, realtime interactivity, especially the particle animation potential
Posing our actors in dynamic environments with fluttering leaves, flowing river streams, and actual snow and rain showers... so much more artistic and inspiring!
Have you been listening in on me? :)
In any case, most users here (on DAZ forums) are not taking the necessary steps to use their equipment properly. The problem with running out of VRAM is that there is this crazy effort to use 4Kx4K image maps for every frikin item in a scene. THIS IS UNNECESSARY! It wastes time in the render process (in many ways) and causes scenes to use too much VRAM and system RAM. Before rendering, one needs to look at the items in the scene and decide whether each is close enough or visible enough to use a 4Kx4K texture map+displacement map+transmap+specular map+.....
If the area/item is covered by clothing (legs), then remove the image maps for the legs from the surfaces tab or turn the invisible part(s) off. If they are "ON" or have textures applied to them, those WILL get loaded into VRAM because the render engine DOES NOT KNOW if clothing is translucent, or has open gaps, etc. Another thing: the Texture Atlas is your friend. Downsize the image maps on items that are not in primary focus.
These are just 2 things that can make your VRAM go much, much farther.
Kendall
What would be cool is if someone set up some sort of automated way to resize the image maps dynamically, depending on the figure's or prop's distance to the camera. I'm sure someday that sort of thing will be a reality, but I've often wondered why some brilliant code person hasn't tried to create a script to run on a still scene when it's ready to render. Generating maps takes seconds, but if you have hundreds in a scene it may take some extra time to set all that up. Still, with all the coming tech around speed, with faster processors and vid cards... who knows how long a script like that would really take...
In some cases, it may be necessary to use large imagemaps on things that are farther away -- it depends on what is happening. So running a script like that could conceivably cause more problems than it would solve.
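Just to sketch what that could look like (this is not an existing DAZ feature or script; the distance-to-resolution rule, file names, and distances are all made up), a batch pass could pick a target resolution per texture from each item's distance to the camera and let Pillow do the resizing:

```python
# Hypothetical distance-based downsizing pass using Pillow (pip install pillow).
from PIL import Image
import math
import os

def target_size(distance_m, full_res=4096, reference_m=2.0, floor=512):
    """Halve the resolution for every doubling of distance past the reference."""
    if distance_m <= reference_m:
        return full_res
    halvings = int(math.log2(distance_m / reference_m))
    return max(full_res >> halvings, floor)

# Invented example data: texture file -> distance of its item from the camera.
scene = {"hero_torso.jpg": 1.5, "car_mid.jpg": 8.0, "street_far.jpg": 40.0}

os.makedirs("resized", exist_ok=True)
for name, distance in scene.items():
    side = target_size(distance)
    img = Image.open(name)
    img.thumbnail((side, side), Image.LANCZOS)      # in place, keeps aspect ratio
    img.save(os.path.join("resized", name))
    print(f"{name}: {distance} m away -> capped at {side}px")
```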
Really, we all used to turn off covered/hidden/unseen body parts as a matter of course - it kept pokethru to a minimum if nothing else. Somewhere along the line we got lazy. Maybe it was due to autofit or smoothing. Now we can go so far as completely removing polygons (not just hiding them) using the PGE. Iray has no problems with lots and lots of polys, but it sure doesn't like imagemaps very much. 3DL is somewhat the reverse. It loves the maps but doesn't like the polys.
Here's a funny thing... the maps do get scaled when the render engine determines that a hit has occurred. This is funnier... the scaled image that was created isn't saved or cached -- in order to save memory. So each time a hit occurs the imagemaps are re-scaled. So I hear "why not cache them?" Well, we're already low on VRAM. Imagine if the scaled imagemaps got cached for EVERY thread that has a slightly different factor? There's no possible way. Also, how long do you keep them cached? If the map is only hit once, do you keep it cached indefinitely just "in case"? There's no way for each separate GPU core to know what the other cores have done. The communications to keep it all sane would negate the purpose of using the GPUs in the first place. This is why we, as creators, need to take the initiative to determine this stuff for the engine. *We* know what is and isn't going to be important; the render engine doesn't.
Kendall
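Some rough, back-of-the-envelope numbers (mine, uncompressed, ignoring whatever compression the engine really uses) on why keeping scaled copies around adds up so fast:

```python
# Back-of-the-envelope VRAM math (uncompressed, 8 bits per channel).
bytes_per_texel = 4                                  # RGBA8
one_map = 4096 * 4096 * bytes_per_texel
print(f"one 4K RGBA map: {one_map / 2**20:.0f} MB")  # ~64 MB

# Keeping every power-of-two downscale around (a full mip chain) costs ~1/3 more.
mip_chain = sum((4096 >> level) ** 2 * bytes_per_texel for level in range(13))
print(f"with a full chain of scaled copies: {mip_chain / 2**20:.0f} MB")  # ~85 MB

# A modest scene: 30 surfaces, each with diffuse + bump + spec + trans maps.
scene = 30 * 4 * mip_chain
print(f"30 surfaces x 4 maps: {scene / 2**30:.1f} GB")  # ~10 GB, past most cards
```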
Ah, OK. Seems like there is a solution there somewhere though!
You can completely avoid these problems in Octane with out-of-core textures and the render priority settings (see attached image). I use a single card setup and just keep the render priority at medium and response is great. If I'm working on something that will be GPU memory intensive, I just set things so I have enough free video memory to use Gimp, browse the web, or whatever else I want while rendering, and use system RAM for the additional memory needed for rendering.
PS: While we're "talking", I thought I'd put in another vote for strand based hair from DS in Octane.
As I said, the OS doesn't have control, the program running the GPUs does. If the render engine's scheduler allows for "backing off" then that's great. If it allows for "full on kill-the-card" mode and the user wants to use it, that's great as well. This is the reason that Otoy went the way they did. To do out-of-core textures, what is necessary is to stall the process that needs the external texture, replace it with one that doesn't (so you don't have wasted GPU core time), then unload an unneeded texture (when available), load the necessary texture from system memory, then restart the process.
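Here's a tiny CPU-only toy of that swap loop, with invented names (it's not Octane's scheduler): when a work item needs a texture that isn't resident, park it, run something whose texture is already loaded, and only evict and upload when nothing else is renderable.

```python
from collections import deque

VRAM_SLOTS = 2                          # pretend the card can hold two textures
resident = {}                           # textures currently "in VRAM"
system_ram = {f"tex{i}": f"<pixels of tex{i}>" for i in range(5)}
work = deque([("tileA", "tex0"), ("tileB", "tex3"), ("tileC", "tex0"),
              ("tileD", "tex4"), ("tileE", "tex1")])

def render(tile, tex):
    print(f"render {tile} with {tex}")

while work:
    tile, tex = work.popleft()
    if tex not in resident:
        # Stall this tile and look for work whose texture is already resident,
        # so the "GPU" doesn't idle during the upload.
        swap = next((item for item in work if item[1] in resident), None)
        if swap is not None:
            work.remove(swap)
            work.appendleft((tile, tex))        # retry the stalled tile later
            tile, tex = swap
        else:
            # Nothing else is renderable: evict something, upload from system RAM.
            if len(resident) >= VRAM_SLOTS:
                resident.pop(next(iter(resident)))
            resident[tex] = system_ram[tex]
    render(tile, tex)
```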
If absolute speed isn't necessary 100% of the time, one can open the PCI-e->System Memory window (settable in many BIOS) and read/write directly to system memory. This requires hardware support by the GPU card (and driver) to allow the CUDA/Stream cores to access the memory being exposed by the window instead of only allowing access to the bus controller chip.
Kendall
One problem with not "rendering" things that the camera can't "see" is that you lose the effects that those objects might have on the scene (i.e. shadows, reflections, and bounced light). This type of scene optimization is usually best left for the end user ..... though I guess maybe some sort of quick and easy toggle switch could be implemented to intentionally "turn off" everything not seen by the camera. This would definitely affect bounced light though; for example, I often use a reflector for passive fill lighting (just like a photographer would).
IMHO someone already has done it better than Nvidia/Iray, but it's supported by a plugin and not fully integrated into DS like Iray. Octane Render already has a whole host of features that Iray doesn't, and there is a plugin to use it in DS. DAZ writing their own render engine would be a huge undertaking. Version 1 of Octane Render was released on November 28, 2012, and was in beta for quite a while before that (2-3 years???), and that is all they were working on. Disney wrote their own ray trace rendering software, and IIRC it took them about 2 years to get it fully production ready (not Pixar - the Disney animation studios), and ray trace rendering is the "easiest" to implement. I doubt that DAZ could even come close to the resources ($$$$$$$$$$$) Disney had for writing their new renderer. However, I really don't know how much Disney invested - so maybe it could be affordable for DAZ, but I doubt it. I think that SM implementing Cycles is extremely telling in that they didn't choose to incorporate PBR into FireFly, but integrate a totally different render engine. It also shows just how popular PBR rendering is right now, or SM wouldn't have felt the need to implement a PBR render engine in Poser.
Keep in mind that it isn't DAZ "depending on Iray so much", it's the user base and the resulting response from the PA's. DAZ just implemented it in DS, and it immediately became the favorite render engine for the majority of the user base, and the PA's have followed that trend because that is what sells.
Hi Spit; no patronizing here, I understand your frustration. I feel it, too, although it's a completely irrational thing, to tell the truth. Iray this, Iray that, so annoying, as if it's the next best thing since sliced bread, but...
I'm with people like Seliah who say that it's better to get Iray only stuff and convert materials by hand, than get half-baked second-thought "materials" that do a major disservice to the reputation of 3Delight in the hobbyist community.
It's a production renderer. It was designed for studio use, for a number of people, not for a lonely freelancer. Hence, a steep learning curve for this lonely freelancer - in the sense that you _must_ learn how to code, if you want to have it your own way and not depend on anyone else. Shaders; custom interactions with your scene setup software (not just DS - 3Delight is so powerful that even its Maya integration doesn't cover everything 100% out of the box) - simple stuff actually, but daunting for a casual hobbyist (unless coding is a hobby of yours already).
Yes, 3Delight loves the maps. Like, adores. It takes care to preprocess and mipmap them (the tdlmake background tasks do just that). Then it decides on the fly (if you animate the camera, it becomes a literal expression) which mip level to use depending on the distance yadda yadda. What RAMWolff describes. The shader author can even specify "lerp" - additional interpolation between mips that might give better quality (I never tried, though).
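For the curious, the textbook rule that this kind of on-the-fly selection boils down to (a generic illustration, not 3Delight's actual code): pick the mip level where one texel covers roughly one pixel of the on-screen footprint, and optionally lerp between the two nearest levels.

```python
import math

def mip_level(texture_res, texels_per_pixel):
    """texels_per_pixel: how many texels one screen pixel's footprint spans."""
    level = math.log2(max(texels_per_pixel, 1.0))
    return min(level, math.log2(texture_res))        # clamp to the coarsest mip

# A 4K map seen from far enough away that one pixel covers ~37 texels:
lvl = mip_level(4096, 37)
lo, hi, t = int(lvl), int(lvl) + 1, lvl - int(lvl)
print(f"sample between mip {lo} and mip {hi}, lerp factor {t:.2f}")
```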
I'm surprised to hear Iray doesn't have this mipmapping support. Games have had it for years, and a GPU-oriented tool doesn't? That's strange. Same as no geometry culling. If it's true, it just about explains a lot of efficiency issues.
Now, 3Delight and polys - no, these it does dislike, true. It prefers subdivisions.
// now, the tidbit that follows you might find useful //
But here's the catch: as of right now, 3Delight supports Catmull-Clark only. The DS default algo is not this anymore. Mjc1016 brought it to my attention that this new Catmark algo will render noticeably slower in DS than Catmull-Clark will.
Generally, there are not that many cases where the "legacy" algo loses to the default one regarding how the mesh looks. On organic shapes they are almost identical.
But if you use HD morphs, you have to use Catmark default, period. They are lost with the "legacy" one.
We suspected that the new (3DL-unsupported) algo gets sent to 3Delight as polys, hence the slowdown.
However, when exporting to RIB, the conversion goes fine, and you get a correct subdivision surface.
Hi R :) Thank you for remembering me.
Yeah I am indeed the one who promises a suite of new RSL shaders and "scripted renderer" stuff for using the 3Delight physically plausible path tracing core straight in DS. For free. But DAZ Soon (tm) cuz real life.
Not sure about the potential popularity, though - I am building a preset library and a handful of useful scripts, but other than that, it's a strictly manual conversion of every material in the scene. C'mon, you've seen my kit - it is like a new render engine =) And it's grown since that first alpha iteration.
And using "gamma correction" is a must with my stuff.
In my experience, manual conversion doesn't take much time once you have an idea which types of materials need which values. But!! If you have a model whose surface naming makes no sense... and there are a gazillion surfaces... now, then it might take quite some time to figure out which surface is supposed to look like which material.
Surfaces that mix materials via mapping "metallicity" or whatever aren't a problem, as long as you do have maps.
Didn't I just see a very oldschool-styled fake iridescence product in the store for Iray?
Thin film is built into 3Delight today. I put it into my shaders, and it's fun to use and surprisingly versatile. Here are a couple of test renders (even a tiny animation):
http://daz3d.com/forums/discussion/comment/1006364/#Comment_1006364
http://www.daz3d.com/forums/discussion/comment/1019213/#Comment_1019213
Studio moved from internal code to using a standard dynamic library. Catmull-Clark is very easy to optimize and I'd be very surprised if it wasn't inlined and optimized as much as DAZ programmers could get it. However, now they are calling a dll/dylib (OpenSubdiv) which has much higher overhead than optimized inline code. Once the subdivisions are made, regardless of method, redraws will be the same since the verts and facets are cached. This leads to the question of whether Studio is passing a base mesh to 3DL along with a subdivision level and letting 3DL subdivide it by itself. I cannot see a situation where a render engine would fare worse given static polygons than when having to subdivide on its own.
Kendall
3Delight always "subdivides on its own" - it has no subdiv level, only rendering the limit surface. What I meant is that DS might be passing a pre-subdivided mesh to it, as polys. And then, 3Delight may well slow down - the 3DL docs have been stating it for years that subdivs are more efficient to use than high-poly meshes.
There is no mention of OpenSubDiv support anywhere in the 3DL docs or forums. Hence this suspicion of ours.
And they work just as well in the 3Delight version, super easy to use with lots of different lighting to choose from. And he added the 3Delight version for free. So some vendors are still trying to support 3Delight.
...as I understand, editing map resolution needs to be done in a 2D programme. I wouldn't want to do that with something like Stonemason's Urban Sprawl 3, which can have dozens of different textures, each with several map channels to deal with.
Many free 2D packages give you the ability to batch change things like image size and resolution while saving to a new name. Then you have 2 options:
1) load the item (US3) and change the surfaces and save as a new duf with a name corresponding to what you did. This is a one time effort.
2) load the .duf file into a text editor (may need to uncompress it first -- many ways to do this) and do a search and replace on the names of the surfaces to use your newly created images.
Once the new duf is saved, you need not worry about doing it again.
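A minimal sketch of both steps, under made-up assumptions (the paths, folder names, and the halve-everything choice are mine; .duf files are usually gzipped JSON, though some installs save them uncompressed). Back the original up first:

```python
import gzip
import os
import shutil
from PIL import Image

src, dst = "textures/UrbanSprawl3", "textures/UrbanSprawl3_half"
os.makedirs(dst, exist_ok=True)

# Step 1: batch-downsize every map (diffuse, bump, trans, etc. all get the same pass).
for name in os.listdir(src):
    if name.lower().endswith((".jpg", ".png", ".tif")):
        img = Image.open(os.path.join(src, name))
        img.resize((img.width // 2, img.height // 2), Image.LANCZOS) \
           .save(os.path.join(dst, name))

# Step 2: point a copy of the preset at the new folder via search and replace.
duf_in, duf_out = "US3_scene.duf", "US3_scene_halfres.duf"
shutil.copy(duf_in, duf_in + ".bak")                 # keep the original safe
with gzip.open(duf_in, "rt", encoding="utf-8") as f: # uncompresses on read
    text = f.read()
with gzip.open(duf_out, "wt", encoding="utf-8") as f:
    f.write(text.replace("UrbanSprawl3/", "UrbanSprawl3_half/"))
```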
Kendall
...one would still need to load each individual texture (from another folder location so as not to overwrite the originals). Also, what about displacement, bump, and transparency maps? Don't those need to have the same resolution as the diffuse?
That last step sounds like a recipe for trouble.
Also, I don't see the purpose of saving the settings for future use, as I probably wouldn't do another scene from the exact same camera position/angle. It does however sound handy for animation (which I do not bother with due to system resources).
OBJs exported from LAMH, Garibaldi or ZBrush are all the same kind of geometry files, not fibre hair (the fibre is the actual rendered curve in each plugin or program before exporting to OBJ). Garibaldi and ZBrush hair export have a 3-sided polygon option, which works best in DS because it can be subdivided pretty well once into round hair tubes; more than 3 sides is not recommended. I don't have LAMH so I can't tell what the export settings are there, but go for 3-sided polys for the best result. That way you can use the hair unsubdivided far away in a scene, or subdivided close up, and you keep the hair at the lowest poly level, since you don't need more sides.