iRay only I'm tired of it.


Comments

  • RAMWolffRAMWolff Posts: 10,343
    kyoto kid said:
    RAMWolff said:

    I love iRay, it's a breath of fresh air, but there are some weird things about it I hope in time will be ironed out.  I mean, we've only had it for, what, a year now?  Already we have folks discovering ways to bring special FX that we were told could only be done in 3Delight, so there is hope. 

    One kvetch I have is that using older environments, even after conversion, sometimes brings my renders to a crawl; it takes upwards of 2 minutes to even see the preliminary render on screen.  Usually with a character, dressed, with hair and a few props, the render is visible in a few moments, but with a small environment loaded and converted to iRay I find the wait concerning - and then it's there (when I come back from washing up a few dishes in the sink, LOL). 

    ...my railway station scene takes over 40 min before anything shows up in the render window.

    OMG.  I'd be like "nope, forgetaboutit!"  lol

  • fred9803fred9803 Posts: 1,565
    RAMWolff said:
    kyoto kid said:

    ...my railway station scene takes over 40 min before anything shows up in the render window.

    OMG.  I'd be like "nope, forgetaboutit!"  lol

    I'm in that boat. Good things are worth waiting for... but there are some limits. By the time you've converted your lossless original image to JPG to post it, most of the wonderful clarity and intricate details have been lost anyway. You'll just end up with a spotty, slightly blurred JPG for all those hours of effort.

  • kyoto kidkyoto kid Posts: 41,854

    ...@ Outrider 42, with Iray, many of my scenes end up going into swap mode, exceeding 11 GB (which is the memory my system has available after Windows).  The issue with Iray is that both the Daz programme and the scene file also need to stay open, which uses up more memory on top of the render file.  With Lux or Octane, once the scene file has been submitted to the render engine, both it and the Daz programme can be closed, freeing up the memory they use.

    My compositing skill is abysmal, particularly when it comes to dealing with shadows that would cover multiple render layers.   The only way to deal with that is to paint them in and due to serious arthritis, I no longer have a steady hand to do that.  As I mentioned before, Octane has a unique hybrid system that splits the load if the file gets too large for the GPU's memory.  It still renders pretty quick, much faster than Iray in pure CPU mode.

    Again I am not looking for two minute renders, I am fine with 2, 3, or 4 hours. I just don't want my system tied up for several days rendering with only 8 CPU threads compared to over 3,000 that a Titan-X has (or 6,000 for two Titan Xs).

  • kyoto kidkyoto kid Posts: 41,854
    kyoto kid said:

    Still, to really make good use of Iray and not risk waiting days for a render to complete, I will need a new system with more up-to-date components.  Again, it would be nice if they could work out a hybrid mode like Octane uses; then I could get by with a $250 6 GB 1060 instead of a $$$$ 16 GB Titan P.

    If I understand how Iray works correctly, the reason CPU mode is slower than GPU mode is because they are emulating CUDA in software; hybridized mode may not gain as much speed as it could for that reason.

    ...yeah and instead of 2,000 - 3,000 cores (or more with multiple GPUs), it has to do it with only 8 or 12.  I don't understand how Octane can do it then as it also uses CUDA on the GPU.

  • WendyLuvsCatzWendyLuvsCatz Posts: 40,076
    edited July 2016

    I also prefer the 3Delight shaders as they work better in Carrara as duf imports and with Octane.

    If I use DAZ Studio I prefer Octane to iRay too, and when exporting OBJ and FBX the 3Delight shaders get more of the maps right.

  • DustRiderDustRider Posts: 2,880
    edited July 2016
    kyoto kid said:
    kyoto kid said:

    Still, to really make good use of Iray and not risk waiting days for a render to complete, I will need a new system with more up-to-date components.  Again, it would be nice if they could work out a hybrid mode like Octane uses; then I could get by with a $250 6 GB 1060 instead of a $$$$ 16 GB Titan P.

    If I understand how Iray works correctly, the reason CPU mode is slower than GPU mode is because they are emulating CUDA in software; hybridized mode may not gain as much speed as it could for that reason.

    ...yeah and instead of 2,000 - 3,000 cores (or more with multiple GPUs), it has to do it with only 8 or 12.  I don't understand how Octane can do it then as it also uses CUDA on the GPU.

    Because Octane uses only the GPU and CUDA to render. It's not hybrid rendering. If needed, it can use system RAM to store the texture data when it exceeds the limits of GPU memory.  Otoy has developed an extremely efficient means for Octane to pull (pre-fetch) the texture data needed from system RAM without the GPU needing to wait for the slower system RAM to "feed" the GPU.

    Hence the very slight performance hit, unlike the typical hit experienced with hybrid renderers, where both the CPU and the GPU are actively rendering. There is so much overhead involved in trying to keep the rendered data from the CPU and the GPU "in sync" that hybrid rendering is typically hugely slower than GPU rendering, and sometimes slower than CPU-only rendering. Otoy avoided this problem completely by not implementing a hybrid renderer, instead just using system RAM to store texture data and doing all of the rendering on the GPU.
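The out-of-core scheme DustRider describes — a fixed VRAM working set backed by system RAM, with textures pulled in on demand — can be sketched as a simple LRU cache. This is a toy model for intuition only, not Otoy's actual pre-fetch machinery; the class and method names are made up for illustration.

```python
from collections import OrderedDict

class OutOfCoreTextureCache:
    """Toy model of out-of-core texturing: a fixed VRAM budget backed by
    system RAM. Texture tiles are pulled into the GPU-side cache on demand,
    and the least-recently-used tile is evicted when the budget is full."""

    def __init__(self, vram_budget_bytes):
        self.budget = vram_budget_bytes
        self.used = 0
        self.cache = OrderedDict()  # tile_id -> size in bytes ("in VRAM")
        self.fetches = 0            # uploads over PCIe (the slow path)

    def sample(self, tile_id, size_bytes):
        if tile_id in self.cache:
            self.cache.move_to_end(tile_id)   # mark as recently used
            return "hit"
        # Miss: evict LRU tiles until the new tile fits in the budget.
        while self.used + size_bytes > self.budget and self.cache:
            _, evicted_size = self.cache.popitem(last=False)
            self.used -= evicted_size
        self.cache[tile_id] = size_bytes
        self.used += size_bytes
        self.fetches += 1
        return "miss"
```

Because rays tend to hit the same surfaces repeatedly, the hit rate of a scheme like this stays high and the PCIe transfer penalty stays small — which matches the "very slight performance hit" described above.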

  • Kendall SearsKendall Sears Posts: 2,995
    DustRider said:
    kyoto kid said:
    kyoto kid said:

    Still, to really make good use of Iray and not risk waiting days for a render to complete, I will need a new system with more up-to-date components.  Again, it would be nice if they could work out a hybrid mode like Octane uses; then I could get by with a $250 6 GB 1060 instead of a $$$$ 16 GB Titan P.

    If I understand how Iray works correctly, the reason CPU mode is slower than GPU mode is because they are emulating CUDA in software; hybridized mode may not gain as much speed as it could for that reason.

    ...yeah and instead of 2,000 - 3,000 cores (or more with multiple GPUs), it has to do it with only 8 or 12.  I don't understand how Octane can do it then as it also uses CUDA on the GPU.

    Because Octane uses only the GPU and CUDA to render. It's not hybrid rendering. If needed, it can use system RAM to store the texture data when it exceeds the limits of GPU memory.  Otoy has developed an extremely efficient means for Octane to pull (pre-fetch) the texture data needed from system RAM without the GPU needing to wait for the slower system RAM to "feed" the GPU.

    Hence the very slight performance hit, unlike the typical hit experienced with hybrid renderers, where both the CPU and the GPU are actively rendering. There is so much overhead involved in trying to keep the rendered data from the CPU and the GPU "in sync" that hybrid rendering is typically hugely slower than GPU rendering, and sometimes slower than CPU-only rendering. Otoy avoided this problem completely by not implementing a hybrid renderer, instead just using system RAM to store texture data and doing all of the rendering on the GPU.

    The "overhead" doesn't have to be huge.  There are well-known methods to keep synchronization bookkeeping down to as little as 8 bits across hundreds of machines/processes.  Otoy decided that they just didn't want to deal with CPUs.  One reason is that to use the CPU you must "play by the OS's rules," which means that in many cases the render process(es) can be stalled/starved for significant periods of time, especially on Windows.  When using the GPUs, the OS has little overt control, and the processes can be run pretty much "unencumbered."  HOWEVER, there is a drawback to this which many who use GPU rendering have experienced: rendering on the GPUs can make the video card "unresponsive."  If the GPU is your main display card, then you're hosed on getting anything done until the render completes.  This is why a great number of GPU render users have a "low cost" card for display, and a "high cost" card to be used only for rendering.

    Kendall

  • kyoto kidkyoto kid Posts: 41,854
    RAMWolff said:
    kyoto kid said:
    RAMWolff said:

    I love iRay, it's a breath of fresh air, but there are some weird things about it I hope in time will be ironed out.  I mean, we've only had it for, what, a year now?  Already we have folks discovering ways to bring special FX that we were told could only be done in 3Delight, so there is hope. 

    One kvetch I have is that using older environments, even after conversion, sometimes brings my renders to a crawl; it takes upwards of 2 minutes to even see the preliminary render on screen.  Usually with a character, dressed, with hair and a few props, the render is visible in a few moments, but with a small environment loaded and converted to iRay I find the wait concerning - and then it's there (when I come back from washing up a few dishes in the sink, LOL). 

    ...my railway station scene takes over 40 min before anything shows up in the render window.

    OMG.  I'd be like "nope, forgetaboutit!"  lol

    ...yeah, this is what I deal with a lot rendering full scenes in Iray on the CPU. That is also after I have manually converted all the surfaces to Iray. Even when 3DL optimises surfaces, the longest I've seen it take was maybe 1 - 2 minutes (and I can run that process separately from the render process). 

  • RAMWolffRAMWolff Posts: 10,343

    If I had the room in my tiny living space I'd probably set up a second computer, but it's just too much, and takes up too much room.  I could let a render bake forever and not have it bog down my other processes, but that's just not going to happen here!  lol

  • kyoto kidkyoto kid Posts: 41,854
    DustRider said:
    kyoto kid said:
    kyoto kid said:

    Still, to really make good use of Iray and not risk waiting days for a render to complete, I will need a new system with more up-to-date components.  Again, it would be nice if they could work out a hybrid mode like Octane uses; then I could get by with a $250 6 GB 1060 instead of a $$$$ 16 GB Titan P.

    If I understand how Iray works correctly, the reason CPU mode is slower than GPU mode is because they are emulating CUDA in software; hybridized mode may not gain as much speed as it could for that reason.

    ...yeah and instead of 2,000 - 3,000 cores (or more with multiple GPUs), it has to do it with only 8 or 12.  I don't understand how Octane can do it then as it also uses CUDA on the GPU.

    Because Octane uses only the GPU and CUDA to render. It's not hybrid rendering. If needed, it can use system RAM to store the texture data when it exceeds the limits of GPU memory.  Otoy has developed an extremely efficient means for Octane to pull (pre-fetch) the texture data needed from system RAM without the GPU needing to wait for the slower system RAM to "feed" the GPU.

    Hence the very slight performance hit, unlike the typical hit experienced with hybrid renderers, where both the CPU and the GPU are actively rendering. There is so much overhead involved in trying to keep the rendered data from the CPU and the GPU "in sync" that hybrid rendering is typically hugely slower than GPU rendering, and sometimes slower than CPU-only rendering. Otoy avoided this problem completely by not implementing a hybrid renderer, instead just using system RAM to store texture data and doing all of the rendering on the GPU.

    ...thank you for the explanation of how this actually works.  Sounds very elegant in comparison.  If only I had the resources for it, the plugin, and a GTX 780 Ti 6 GB (which would work on my older system).

    So again I ask, why can't Iray be set up the same way rather than dumping everything to the CPU when the GPU runs out of memory? Unless it is just there to sell more expensive cards and their VCA.

  • kyoto kidkyoto kid Posts: 41,854
    DustRider said:
    kyoto kid said:
    kyoto kid said:

    Still, to really make good use of Iray and not risk waiting days for a render to complete, I will need a new system with more up-to-date components.  Again, it would be nice if they could work out a hybrid mode like Octane uses; then I could get by with a $250 6 GB 1060 instead of a $$$$ 16 GB Titan P.

    If I understand how Iray works correctly, the reason CPU mode is slower than GPU mode is because they are emulating CUDA in software; hybridized mode may not gain as much speed as it could for that reason.

    ...yeah and instead of 2,000 - 3,000 cores (or more with multiple GPUs), it has to do it with only 8 or 12.  I don't understand how Octane can do it then as it also uses CUDA on the GPU.

    Because Octane uses only the GPU and CUDA to render. It's not hybrid rendering. If needed, it can use system RAM to store the texture data when it exceeds the limits of GPU memory.  Otoy has developed an extremely efficient means for Octane to pull (pre-fetch) the texture data needed from system RAM without the GPU needing to wait for the slower system RAM to "feed" the GPU.

    Hence the very slight performance hit, unlike the typical hit experienced with hybrid renderers, where both the CPU and the GPU are actively rendering. There is so much overhead involved in trying to keep the rendered data from the CPU and the GPU "in sync" that hybrid rendering is typically hugely slower than GPU rendering, and sometimes slower than CPU-only rendering. Otoy avoided this problem completely by not implementing a hybrid renderer, instead just using system RAM to store texture data and doing all of the rendering on the GPU.

    The "overhead" doesn't have to be huge.  There are well-known methods to keep synchronization bookkeeping down to as little as 8 bits across hundreds of machines/processes.  Otoy decided that they just didn't want to deal with CPUs.  One reason is that to use the CPU you must "play by the OS's rules," which means that in many cases the render process(es) can be stalled/starved for significant periods of time, especially on Windows.  When using the GPUs, the OS has little overt control, and the processes can be run pretty much "unencumbered."  HOWEVER, there is a drawback to this which many who use GPU rendering have experienced: rendering on the GPUs can make the video card "unresponsive."  If the GPU is your main display card, then you're hosed on getting anything done until the render completes.  This is why a great number of GPU render users have a "low cost" card for display, and a "high cost" card to be used only for rendering.

    Kendall

    ...exactly, I'd use the existing card for the display and the 780 Ti for rendering.

  • MythmakerMythmaker Posts: 606
    edited July 2016
    kyoto kid said:
    DustRider said:
    kyoto kid said:
    kyoto kid said:

     

    So again I ask, why can't Iray be set up the same way rather than dumping everything to the CPU when the GPU runs out of memory? Unless it is just there to sell more expensive cards and their VCA.

    1060, 1070, 1080 are pretty good value compared to my Kepler-gen Titan

    Thanks to NVidia's multi-billion research, efficiency increased like crazy, and things will get cheaper and faster

    Not having to buy a new power supply to install the non-Titan Pascal cards, very grateful for that!

    The future is geared towards realtime-everything, esp VR

    What takes 3 hours today will be 3.5 seconds tomorrow...very exciting

    The real test for iRay in Daz Studio, specifically, goes beyond the shiny: SSS, and even better, realtime interactivity, especially the particle animation potential

    Posing our actors in dynamic environments with fluttering leaves, flowing river streams, and actual snow and rain showers... so much more artistic and inspiring!

     

  • Kendall SearsKendall Sears Posts: 2,995
    Mythmaker said:
    kyoto kid said:
    DustRider said:
    kyoto kid said:
    kyoto kid said:

     

    So again I ask, why can't Iray be set up the same way rather than dumping everything to the CPU when the GPU runs out of memory? Unless it is just there to sell more expensive cards and their VCA.

    1060, 1070, 1080 are pretty good value compared to my Kepler-gen Titan

    Thanks to NVidia's multi-billion research, efficiency increased like crazy, and things will get cheaper and faster

    Not having to buy a new power supply to install the non-Titan Pascal cards, very grateful for that!

    The future is geared towards realtime-everything, esp VR

    What takes 3 hours today will be 3.5 seconds tomorrow...very exciting

    The real test for iRay in Daz Studio, specifically, goes beyond the shiny: SSS, and even better, realtime interactivity, especially the particle animation potential

    Posing our actors in dynamic environments with fluttering leaves, flowing river streams, and actual snow and rain showers... so much more artistic and inspiring!

     

    Have you been listening in on me?  :)

    In any case, most users here (on DAZ forums) are not taking the necessary steps to use their equipment properly.  The problem with running out of VRAM is that there is this crazy effort to use 4Kx4K image maps for every frikin item in a scene.  THIS IS UNNECESSARY!  It wastes time in the render process (in many ways) and causes scenes to use too much VRAM and system RAM.  Before rendering, one needs to look at the items in the scene and decide if it is close enough or visible enough to use a 4Kx4K texture map+displacement map+transmap+specular map+.....

    If the area/item is covered by clothing (legs), then remove the image maps for the legs from the surfaces tab or turn the invisible part(s) off.  If they are "ON" or have textures applied to them, those WILL get loaded into VRAM because the render engine DOES NOT KNOW if clothing is translucent, or has open gaps, etc.  Another thing:  the Texture Atlas is your friend.  Downsize the image maps on items that are not in primary focus.

    These are just 2 things that can make your VRAM go much, much farther.

    Kendall
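The arithmetic behind Kendall's advice is easy to check. A minimal sketch, assuming uncompressed 8-bit maps (real engines may compress, so treat these as upper-bound figures); the function name is made up for illustration:

```python
def map_bytes(resolution, channels=4, bytes_per_channel=1):
    """Uncompressed size of one square image map, in bytes."""
    return resolution * resolution * channels * bytes_per_channel

MB = 1024 * 1024

# One surface carrying a full 4K map set -- diffuse, bump, normal,
# specular, transparency (an assumed five-map set, for illustration):
full_4k_set = 5 * map_bytes(4096)   # 5 x 64 MB
down_1k_set = 5 * map_bytes(1024)   # 5 x 4 MB

print(full_4k_set // MB, "MB vs", down_1k_set // MB, "MB")  # 320 MB vs 20 MB
```

A single 4Kx4K RGBA map is 64 MB before any mip chain; downsizing background items to 1K cuts each map by a factor of 16, which is why the Texture Atlas trick stretches VRAM so far.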

  • RAMWolffRAMWolff Posts: 10,343

    What would be cool is if someone set up some sort of automated way to resize the image maps dynamically, depending on the figure or prop's distance to the camera.  I'm sure someday that sort of thing will be a reality, but I've often wondered why some brilliant code person hasn't tried to create a script to run on a still scene when ready to render.  Generating maps takes seconds, but if you have hundreds in a scene it may take some extra time to set all that up; still, with all the coming tech around speed, with faster processors and vid cards... who knows how long a script like that would really take...
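The idea RAMWolff describes can be prototyped in a few lines. This is a back-of-envelope sketch, not a real DS script; the function name, the default FOV, and the five-map assumption are all hypothetical, and (as the reply below notes) a real tool would need overrides for cases like mirrors where distant objects still need full-resolution maps.

```python
import math

def target_resolution(map_res, object_size_m, distance_m,
                      render_width_px=1920, fov_deg=54.4):
    """Hypothetical distance-based map resizing: estimate how many pixels
    an object covers on screen, then pick the nearest power-of-two map
    size that still gives roughly one texel per pixel, clamped to the
    original map resolution."""
    half_fov = math.radians(fov_deg) / 2.0
    view_width_m = 2.0 * distance_m * math.tan(half_fov)  # scene width at that depth
    footprint_px = render_width_px * object_size_m / view_width_m
    if footprint_px < 1.0:
        return 64                                         # floor for distant specks
    needed = 2 ** math.ceil(math.log2(footprint_px))
    return max(64, min(map_res, needed))

# A 2 m figure with 4K maps: full resolution is wasted once it's far away.
print(target_resolution(4096, 2.0, 2.0))    # 2048 -- near the camera
print(target_resolution(4096, 2.0, 20.0))   # 256  -- mid-distance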

  • Kendall SearsKendall Sears Posts: 2,995
    edited July 2016
    RAMWolff said:

    What would be cool is if someone set up some sort of automated way to resize the image maps dynamically, depending on the figure or prop's distance to the camera.  I'm sure someday that sort of thing will be a reality, but I've often wondered why some brilliant code person hasn't tried to create a script to run on a still scene when ready to render.  Generating maps takes seconds, but if you have hundreds in a scene it may take some extra time to set all that up; still, with all the coming tech around speed, with faster processors and vid cards... who knows how long a script like that would really take...

    In some cases, it may be necessary to use large imagemaps on things that are farther away -- it depends on what is happening.  So running a script like that could conceivably cause more problems than it would solve.

    Really, we all used to turn off covered/hidden/unseen body parts as a matter of course - it kept pokethru to a minimum if nothing else.  Somewhere along the line we got lazy.  Maybe it was due to autofit or smoothing.  Now we can go so far as completely removing polygons (not just hiding them) using the PGE.  Iray has no problems with lots and lots of polys, but it sure doesn't like imagemaps very much.  3DL is somewhat the reverse.  It loves the maps but doesn't like the polys.

    Here's a funny thing... the maps do get scaled, when the render engine determines that a hit has occurred.  Here's a funnier one... the scaled image that was created isn't saved or cached -- in order to save memory.  So each time a hit occurs, the imagemaps are re-scaled.  So I hear "why not cache them?"  Well, we're already low on VRAM. Imagine if the scaled imagemaps got cached for EVERY thread that has a slightly different factor?  There's no possible way.  Also, how long do you keep them cached?  If the map is only hit once, do you keep it cached indefinitely just "in case"?  There's no way for each separate GPU core to know what the other cores have done.  The communications to keep it all sane would negate the purpose of using the GPUs in the first place.  This is why we, as creators, need to take the initiative to determine this stuff for the engine.  *We* know what is and isn't going to be important; the render engine doesn't.

    Kendall
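Kendall's "there's no possible way" claim about per-thread caching checks out with simple arithmetic. A worst-case sketch under assumed numbers (uncompressed 8-bit RGBA, one cached copy per CUDA core; the function name is made up):

```python
def per_thread_cache_gb(num_cores, map_resolution=4096, channels=4):
    """Worst case of the 'why not cache them?' scenario: every GPU core
    caches its own rescaled copy of a single uncompressed 8-bit map.
    (Back-of-envelope arithmetic only.)"""
    one_map_bytes = map_resolution ** 2 * channels   # 64 MB at 4K RGBA
    return num_cores * one_map_bytes / 1024 ** 3

# ~3,000 CUDA cores (Titan X class), one 4K map: about 187.5 GB of VRAM
# would be needed for the cached copies alone -- for a single texture.
print(per_thread_cache_gb(3000))
```

Even one shared copy per distinct scale factor would multiply quickly across a scene with hundreds of maps, which is why the engine rescales on the fly instead.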

     

  • RAMWolffRAMWolff Posts: 10,343

    Ah, OK.  Seems like there is a solution there somewhere though! 

  • Oso3DOso3D Posts: 15,085
    Yeah, I run into problems with compression if I use Iray superworlds, because THOSE maps are 12000x3000 and generally need to not be compressed at all.
  • DustRiderDustRider Posts: 2,880
    edited July 2016

      

     When using the GPUs, the OS has little overt control, and the processes can be run pretty much "unencumbered."  HOWEVER, there is a drawback to this which many who use GPU rendering have experienced: rendering on the GPUs can make the video card "unresponsive."  If the GPU is your main display card, then you're hosed on getting anything done until the render completes.  This is why a great number of GPU render users have a "low cost" card for display, and a "high cost" card to be used only for rendering.

    Kendall

    You can completely avoid these problems in Octane with out-of-core textures and the render priority settings (see attached image). I use a single-card setup and just keep the render priority at medium, and response is great. If I'm working on something that will be GPU-memory intensive, I just set things so I have enough free video memory to use Gimp, browse the web, or whatever else I want while rendering, and use system RAM for the additional memory needed for rendering.

    PS: While we're "talking", I thought I'd put in another vote for strand-based hair from DS in Octane

    OctaneSystemTab.JPG
    646 x 896 - 95K
  • Kendall SearsKendall Sears Posts: 2,995
    DustRider said:

      

     When using the GPUs, the OS has little overt control, and the processes can be run pretty much "unencumbered."  HOWEVER, there is a drawback to this which many who use GPU rendering have experienced: rendering on the GPUs can make the video card "unresponsive."  If the GPU is your main display card, then you're hosed on getting anything done until the render completes.  This is why a great number of GPU render users have a "low cost" card for display, and a "high cost" card to be used only for rendering.

    Kendall

    You can completely avoid these problems in Octane with out-of-core textures and the render priority settings (see attached image). I use a single-card setup and just keep the render priority at medium, and response is great. If I'm working on something that will be GPU-memory intensive, I just set things so I have enough free video memory to use Gimp, browse the web, or whatever else I want while rendering, and use system RAM for the additional memory needed for rendering.

    PS: While we're "talking", I thought I'd put in another vote for strand-based hair from DS in Octane

    As I said, the OS doesn't have control, the program running the GPUs does.  If the render engine's scheduler allows for "backing off" then that's great.  If it allows for "full on kill-the-card" mode and the user wants to use it, that's great as well.  This is the reason that Otoy went the way they did.  To do out-of-core textures, what is necessary is to stall the process that needs the external texture, replace it with one that doesn't (so you don't have wasted GPU core time), then unload an unneeded texture (when available), load the necessary texture from system memory, then restart the process.

    If absolute speed isn't necessary 100% of the time, one can open the PCI-e->System Memory window (settable in many BIOS) and read/write directly to system memory.  This requires hardware support by the GPU card (and driver) to allow the CUDA/Stream cores to access the memory being exposed by the window instead of only allowing access to the bus controller chip.

    Kendall

  • DustRiderDustRider Posts: 2,880

         

    The fact is Iray is still extremely hard on hardware, even high-end hardware. And that is the problem. It so badly needs to be better optimized. It needs to be able to split duty with standard RAM and delegate tasks between GPU and CPU. It needs to understand that pixels which are not visible do not need to be rendered! Like a car with an engine in it: even though the hood is closed, that engine is still taking up memory. So to render the car faster, you need to delete or hide that engine. What nonsense is that? With VR taking off, look at how Nvidia worked to build a better VR experience. The 1000 series cards are so much faster at VR, way faster than their actual specs suggest they should be. Why? Because of better software. When the 1000 series runs a VR app, it takes into account which pixels are not visible, and it dumps those pixels. It does this for every single frame, at over 90 frames per second. Now I know gaming tech does not transfer directly to Iray, but my point here is that if Nvidia can figure out VR like this, they need to apply a technique like this to Iray. When pushed, Nvidia is capable of delivering an answer.

     

    One problem with not "rendering" things that the camera can't "see" is that you lose the effects those objects might have on the scene (i.e. shadows, reflections, and bounced light). This type of scene optimization is usually best left to the end user..... though I guess some sort of quick and easy toggle switch could be implemented to intentionally "turn off" everything not seen by the camera. This would definitely affect bounced light though; for example, I often use a reflector for passive fill lighting (just like a photographer would).

    So if Nvidia ever gets off their sorry butts to fix Iray, then we'll see dramatic improvements in render times for everybody. Iray is more than just specs; it is software. My hope is that somebody will come along and do it better than Nvidia. Until then, I really wish Daz would work on their own rendering engine. That way they would have full control over it and not need to depend on Nvidia. I just think it's a bad idea for Daz to depend on Iray so much.

    IMHO someone already has done it better than Nvidia/Iray, but it's supported by a plugin and not fully integrated into DS like Iray. Octane Render already has a whole host of features that Iray doesn't, and there is a plugin to use it in DS. DAZ writing their own render engine would be a huge undertaking. Version 1 of Octane Render was released on November 28, 2012, and was in beta for quite a while before that (2-3 years???), and that is all they were working on. Disney wrote their own ray-trace rendering software, and IIRC it took them about 2 years to get it fully production-ready (not Pixar - the Disney animation studios), and ray-trace rendering is the "easiest" to implement. I doubt that DAZ could even come close to the resources ($$$$$$$$$$$) Disney had for writing their new renderer. However, I really don't know how much Disney invested - so maybe it could be affordable for DAZ, but I doubt it. I think that SM implementing Cycles is extremely telling in that they chose not to incorporate PBR into FireFly, but to integrate a totally different render engine. It also shows just how popular PBR rendering is right now, or SM wouldn't have felt the need to implement a PBR render engine in Poser.

    Keep in mind that it isn't DAZ "depending on Iray so much", it's the user base and the resulting response from the PA's. DAZ just implemented it in DS, and it immediately became the favorite render engine for the majority of the user base, and the PA's have followed that trend because that is what sells.

  • Mustakettu85Mustakettu85 Posts: 2,933

    Hi Spit; no patronizing here, I understand your frustration. I feel it, too, although it's a completely irrational thing, to tell the truth. Iray this, Iray that, so annoying, as if it's the best thing since sliced bread, but...

    I'm with people like Seliah who say that it's better to get Iray only stuff and convert materials by hand, than get half-baked second-thought "materials" that do a major disservice to the reputation of 3Delight in the hobbyist community.

    It's a production renderer. It was designed for studio use, for a number of people, not for a lonely freelancer. Hence, a steep learning curve for this lonely freelancer - in the sense that you _must_ learn how to code if you want to have it your own way and not depend on anyone else. Shaders; custom interactions with your scene setup software (not just DS - 3Delight is so powerful that even its Maya integration doesn't cover everything 100% out of the box) - simple stuff actually, but daunting for a casual hobbyist (unless coding is a hobby of yours already).

    RAMWolff said:

    What would be cool is if someone set up some sort of automated way to resize the image maps dynamically, depending on the figure or prop's distance to the camera. 

    3DL is somewhat the reverse.  It loves the maps but doesn't like the polys.

     

    Yes, 3Delight loves the maps. Like, adores. It takes care to preprocess and mipmap them (the tdlmake background tasks do just that). Then it decides on the fly (if you animate the camera, it becomes a literal expression) which mip level to use depending on the distance yadda yadda. What RAMWolff describes. The shader author can even specify "lerp" - additional interpolation between mips that might give better quality (I never tried, though).
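The mip-level decision described above reduces to a small calculation. A sketch for intuition only — not 3Delight's actual tdlmake/shader code, and the function names are made up:

```python
from math import floor, log2

def mip_level(texels_per_pixel, num_levels):
    """Pick a mip level from the screen-space footprint: if one screen
    pixel covers t x t texels of the base map, then level log2(t) has
    roughly one texel per pixel. Clamped to the available mip chain."""
    if texels_per_pixel <= 1.0:
        return 0                      # magnification: use the base level
    return min(num_levels - 1, floor(log2(texels_per_pixel)))

def lerp_levels(texels_per_pixel, num_levels):
    """The 'lerp' option: return the two adjacent levels plus a blend
    weight, for interpolating between mips instead of snapping to one."""
    t = log2(texels_per_pixel) if texels_per_pixel > 1.0 else 0.0
    lo = min(num_levels - 1, int(t))
    hi = min(num_levels - 1, lo + 1)
    return lo, hi, t - int(t)
```

As the camera pulls back, texels_per_pixel grows and the renderer samples ever-smaller mips, which is exactly the "decides on the fly depending on the distance" behavior described.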

    I'm surprised to hear Iray doesn't have this mipmapping support. Games have had it for years, and a GPU-oriented tool doesn't? That's strange. Same as no geometry culling. If it's true, it just about explains a lot of efficiency issues.

    Now, 3Delight and polys - no, these it does dislike, true. It prefers subdivisions.

    // now, the tidbit that follows you might find useful //

    But here's the catch: as of right now, 3Delight supports Catmull-Clark only. The DS default algo is not this anymore. Mjc1016 brought it to my attention that this new Catmark algo will render noticeably slower in DS than Catmull-Clark will.

    Generally, there are not that many cases where the "legacy" algo loses to the default one regarding how the mesh looks. On organic shapes they are almost identical.

    But if you use HD morphs, you have to use Catmark default, period. They are lost with the "legacy" one.

    We suspected that the new (3DL-unsupported) algo gets sent to 3Delight as polys, hence the slowdown.

    However, when exporting to RIB, the conversion goes fine, and you get a correct subdivision surface.

  • Mustakettu85 Posts: 2,933

     

    Its an impressive engine; but like any impressive engine, you need dedicated people designing unique shaders and lights. We're not getting them for Studio anymore, haven't for a long time. Maybe, when Ketu releases hers, there will be a resurgence in popularity.

     

     

    Hi R :) Thank you for remembering me.

    Yeah I am indeed the one who promises a suite of new RSL shaders and "scripted renderer" stuff for using the 3Delight physically plausible path tracing core straight in DS. For free. But DAZ Soon (tm) cuz real life.

    Not sure about the potential popularity, though - I am building a preset library and a handful of useful scripts, but other than that, it's a strictly manual conversion of every material in the scene. C'mon, you've seen my kit - it is like a new render engine =) And it's grown since that first alpha iteration.

    And using "gamma correction" is a must with my stuff.

    In my experience, manual conversion doesn't take much time once you have an idea which types of materials need which values. But!! If you have a model whose surface naming makes no sense... and there are a gazillion surfaces... now, then it might take quite some time to figure out which surface is supposed to look like which material.

    Surfaces that mix materials via mapping "metallicity" or whatever aren't a problem, as long as you do have maps.

     

  • Mustakettu85 Posts: 2,933
    Khory said:

    Thin film = thin film is arguably a PBR setting that is not, at this time, useful in 3DL. Like many settings, it is something people have been faking for years by coloring specularity settings.

    Didn't I just see a very oldschool-styled fake iridescence product in the store for Iray?

    Thin film is built into 3Delight today. I put it into my shaders, and it's fun to use and surprisingly versatile. Here are a couple of test renders (even a tiny animation):

    http://daz3d.com/forums/discussion/comment/1006364/#Comment_1006364

    http://www.daz3d.com/forums/discussion/comment/1019213/#Comment_1019213

  • Kendall Sears Posts: 2,995

    Hi Spit; no patronizing here, I understand your frustration. I feel it, too, although it's a completely irrational thing, to tell the truth. Iray this, Iray that, so annoying, as if it's the next best thing since sliced bread, but...

    I'm with people like Seliah who say that it's better to get Iray only stuff and convert materials by hand, than get half-baked second-thought "materials" that do a major disservice to the reputation of 3Delight in the hobbyist community.


    Studio moved from internal code to using a standard dynamic library.  Catmull-Clark is very easy to optimize and I'd be very surprised if it wasn't inlined and optimized as much as the DAZ programmers could get it.  However, now they are calling a dll/dylib (OpenSubdiv) which has much higher overhead than optimized inline code.  Once the subdivisions are made, regardless of method, redraws will be the same since the verts and facets are cached.  This leads me to ask whether Studio is passing a base mesh to 3DL along with a subdivision level and letting 3DL subdivide it by itself.  I cannot see a situation where a render engine would fare worse given static polygons than when having to subdivide on its own.

    Kendall

  • Mustakettu85 Posts: 2,933

    3Delight always "subdivides on its own" - it has no subdiv level, only rendering the limit surface. What I meant is that DS might be passing a pre-subdivided mesh to it, as polys. And then, 3Delight may well slow down - the 3DL docs have been stating it for years that subdivs are more efficient to use than high-poly meshes.

    There is no mention of OpenSubDiv support anywhere in the 3DL docs or forums. Hence this suspicion of ours.

     

  • IceDragonArt Posts: 12,757
    edited July 2016
    GKDantas said:

    I think a lot of people still use 3Delight; that's why I updated my GIDome pack for Iray to work with 3Delight, and even with good settings it's pretty fast http://www.daz3d.com/gidome-iray

    But yes you can see in the image that the result is very particular for every engine.

    And they work just as well in the 3Delight version; super easy to use, with lots of different lighting to choose from. And he added the 3Delight version for free, so some vendors are still trying to support 3Delight.

    Post edited by IceDragonArt on
  • kyoto kid Posts: 41,854
    Mythmaker said:
    kyoto kid said:
    DustRider said:
    kyoto kid said:
    kyoto kid said:

     

    So again I ask, why can't Iray be set up the same way rather than dumping everything to the CPU when the GPU runs out of memory? Unless it is just there to sell more expensive cards and their VCA.

    The 1060/1070/1080 are pretty good value compared to my Kepler-gen Titan.

    Thanks to NVidia's multi-billion research, efficiency increased like crazy, and things will get cheaper and faster.

    Not having to buy a new power supply to install the non-Titan Pascal cards; very grateful for that!

    The future is geared towards realtime-everything, especially VR.

    What takes 3 hours today will be 3.5 seconds tomorrow... very exciting.

    The real test for iRay in Daz Studio, specifically, is beyond the shiny: SSS, even better realtime interactivity, and especially the particle animation potential.

    Posing our actors in dynamic environments with fluttering leaves, flowing river streams, and actual snow and rain showers... so much more artistic and inspiring!

     

    Have you been listening in on me?  :)

    In any case, most users here (on DAZ forums) are not taking the necessary steps to use their equipment properly.  The problem with running out of VRAM is that there is this crazy effort to use 4Kx4K image maps for every frikin item in a scene.  THIS IS UNNECESSARY!  It wastes time in the render process (in many ways) and causes scenes to use too much VRAM and system RAM.  Before rendering, one needs to look at the items in the scene and decide if it is close enough or visible enough to use a 4Kx4K texture map+displacement map+transmap+specular map+.....

    If the area/item is covered by clothing (legs), then remove the image maps for the legs from the surfaces tab or turn the invisible part(s) off.  If they are "ON" or have textures applied to them, those WILL get loaded into VRAM because the render engine DOES NOT KNOW if clothing is translucent, or has open gaps, etc.  Another thing:  the Texture Atlas is your friend.  Downsize the image maps on items that are not in primary focus.

    These are just 2 things that can make your VRAM go much, much farther.
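    To put numbers on what downsizing buys: here is a rough, uncompressed-RGBA estimate (actual Iray usage depends on its own compression settings, so treat these figures as ballpark only):

    ```python
    def map_bytes(side_px, channels=4, bytes_per_channel=1, mipmapped=True):
        """Approximate in-memory size of one square texture map.

        A full mip chain adds about one third on top of the base
        level (1 + 1/4 + 1/16 + ... -> 4/3 of the base size).
        """
        base = side_px * side_px * channels * bytes_per_channel
        return int(base * 4 / 3) if mipmapped else base

    MB = 1024 * 1024
    # One 4K map vs the same map downsized to 1K
    print(map_bytes(4096, mipmapped=False) / MB)  # 64.0
    print(map_bytes(1024, mipmapped=False) / MB)  # 4.0
    ```

    So dropping a single off-focus 4K map to 1K frees roughly 60 MB; across a few dozen surfaces, that is the difference between fitting in VRAM and falling back to the CPU.
    
    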

    Kendall

    ...as I understand it, editing map resolution needs to be done in a 2D programme.  I wouldn't want to do that with something like Stonemason's Urban Sprawl 3, which can have dozens of different textures, each with several map channels to deal with. 

  • Kendall Sears Posts: 2,995
    kyoto kid said:

    ...as I understand it, editing map resolution needs to be done in a 2D programme.  I wouldn't want to do that with something like Stonemason's Urban Sprawl 3, which can have dozens of different textures, each with several map channels to deal with. 

    Many free 2D packages give you the ability to batch-change things like image size and resolution while saving to a new name.  Then you have 2 options:

    1) load the item (US3) and change the surfaces and save as a new duf with a name corresponding to what you did.  This is a one time effort.

    2) load the .duf file into a text editor (you may need to uncompress it first -- many ways to do this) and do a search-and-replace on the image names in the surfaces to point at your newly created images.

    Once the new duf is saved, you need not worry about doing it again.
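    Option 2 can even be scripted: a compressed .duf is just gzipped JSON text, so the standard library is enough. A sketch (paths below are hypothetical; plain-text .duf files should be opened with `open()` instead of `gzip.open()`, and always keep the original):

    ```python
    import gzip

    def retarget_duf_textures(src_path, dst_path, old_dir, new_dir):
        """Rewrite texture paths inside a gzip-compressed DAZ .duf file.

        Assumption: the file is gzipped JSON text. We do a plain string
        replace so every map reference under old_dir is redirected to
        the downsized copies in new_dir.
        """
        with gzip.open(src_path, "rt", encoding="utf-8") as f:
            text = f.read()
        text = text.replace(old_dir, new_dir)
        with gzip.open(dst_path, "wt", encoding="utf-8") as f:
            f.write(text)

    # Hypothetical usage:
    # retarget_duf_textures("US3.duf", "US3_small.duf",
    #                       "/Textures/US3", "/Textures/US3_small")
    ```

    Saved under a new name, the retargeted .duf loads like any other preset, so the one-time effort Kendall describes really is one-time.
    
    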

    Kendall

  • kyoto kid Posts: 41,854

    ...one would still need to load each individual texture (from another folder location, so as not to overwrite the originals).  Also, what about displacement, bump and transparency maps? Don't those need to have the same resolution as the diffuse?

    That last step sounds like a recipe for trouble.

    Also, I don't see the purpose of saving the setting for future use, as I probably wouldn't do another scene from the exact same camera position/angle.  It does, however, sound handy for animation (which I do not bother with due to system resources).

  • MEC4D Posts: 5,249

    LAMH, Garibaldi or ZBrush exported OBJs are all the same kind of geometry file, with no fiber or fibre (the fiber is the actual rendered curve in each plugin or program before exporting to OBJ). Garibaldi and ZBrush exported hair have a 3-sided polygon option, which works best in DS, as it can be subdivided pretty well once into round hair tubes; more than 3 sides is not recommended. I don't have LAMH so I can't tell what the export settings are there, but go for 3-sided polys for the best result. That way you can use the hair far away in a scene not subdivided, or close up subdivided, and you keep the hair at the lowest poly level, as you don't need more sides.

    kyoto kid said:
    kyoto kid said:
    hphoenix said:
    kyoto kid said:

    ...again Fibremesh hair is very memory intensive. It would pretty much require the memory resources of a Titan-X or even Quadro M6000 to hold a complete scene. especially with multiple characters.

    It is too bad Iray doesn't have a system similar to Octane, where it splits the geometry and texture load.  It wouldn't be as lightning quick as pure GPU rendering, but much faster than pure CPU rendering, and wouldn't require a GPU with tonnes of memory at a heavyweight price.

    Actually, fibermesh hair isn't that bad.  While it is a LOT of geometry, geometry itself is a lot cheaper than image maps.  Geometry is around 32 bytes per vertex, plus Face lists (3 or 4 vertex indices per face, so about 14 bytes average).  This means that STATIC geometry (no morphs/bones) works out to about 96 bytes per face or less.  So a 100,000 face mesh would only occupy about 10MB of VRAM.  Compare this to a SINGLE 4k x 4k image map, which even at 50% compression in VRAM is 32MB.

    Compared to high-res textures, geometry is cheap.  So fibermesh hair is NOT that huge of a hit, memory wise.  However, the more polys in a scene, the more ray-tests and such have to be done, so it will slow down rendering.  Complex big texture maps are just table-lookups, so are pretty fast, even with decompression.  So fibermesh hair will slow your render down, but take up only a small amount of VRAM.  Strip-based textured hair will be faster to render, but takes up a lot more VRAM.  Trade-offs.  Of course, the moment you start using fancy shaders on the hair, including transparency, special specular calculations, translucency, and more, even strip-based textured hair will start to slow down the render as well.......

    But in general, geometry is a lot cheaper (memory-wise) but slower (render-wise.)
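    The arithmetic above can be sanity-checked in a few lines, under the same assumptions (roughly 96 bytes per static face, uncompressed 4-channel maps; the 0.5 factor stands in for the 50% VRAM compression mentioned):

    ```python
    def mesh_mb(faces, bytes_per_face=96):
        """Static-geometry VRAM estimate using the ~96 B/face figure."""
        return faces * bytes_per_face / 2**20

    def texture_mb(side_px, channels=4, compression=1.0):
        """Uncompressed square-map size, scaled by a compression factor."""
        return side_px * side_px * channels * compression / 2**20

    print(round(mesh_mb(100_000), 1))          # ~9.2 MB for 100k faces
    print(texture_mb(4096, compression=0.5))   # 32.0 MB for one 4K map
    ```

    So one half-compressed 4K map costs more VRAM than three 100k-face meshes, which is the trade-off hphoenix describes: geometry is memory-cheap but ray-test-expensive.
    
    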

     

    ...if you are stuck with only CPU rendering, those render times can become glacial.  Also, I need to make use of what I have: being on a fixed income, I cannot afford a whole new library of fibremesh hair content to suit various different styles and lengths.  Personally I like tools such as Garibaldi and LAMH, as they give me the flexibility to create any style or length I need.  The drawback with Iray is that they need to be imported as an .obj, and that really does pump up the polycount. So, with textures, specularity, and translucency, you get the worst of both worlds: more memory use and longer render times, whereas in 3DL they render rather quickly.

    LAMH FiberHair is significantly smaller than OBJ; using default settings, it is as much as 40-60% smaller on average.  Some hair styles can get upwards of 80% smaller.  But FiberHair does require the full plugin, not just the free player.

    Kendall

    ...LAMH fibre? As I recall, LAMH is also a strand-based system (like Garibaldi), which is why it also needs to be converted to .obj to render in Iray.

     
