3Delight Surface and Lighting Thread


Comments

  • wowie Posts: 2,029


    Mustakettu85 said:

    I also did some Oren-Nayar roughness tests, and I have to say I am not convinced it's worth it, sticking it inside subsurface computations (although the 3Delight guys do it in their Maya shaders). I diffed the images carefully against each other and didn't see any meaningful difference apart from the darkening that increasing roughness gives in Oren-Nayar.

    Actually, it can make quite a bit of difference, just maybe not in your test scenario. If I understand correctly, they are using roughness to control the distribution of the SSS/diffuse response. This is the main reason I went for a two-layer diffuse in my preset - set the SSS/diffuse layer really flat so it covers most of the object's surface (right up to the edge), but keep the primarily non-translucent diffuse close to the middle.

    A good scenario for testing this is several strong diffuse lights coming from sufficiently different directions. If your diffuse lighting is primarily an ambient light, the effect won't show up much, because the light is spread evenly across the surface.

    Below are examples using the first diffuse layer (the one with SSS) at a roughness of 1, but the non-SSS diffuse layer at 3. In the second render, both diffuse layers are set to 1. Notice how you get more saturation from the SSS with varying roughness between layers. Of course, the way you implement SSS (adding or multiplying it with the diffuse) can play a part.
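    In shader terms, the idea is roughly this - a rough RSL sketch built on the textbook qualitative Oren-Nayar model, not my actual preset code (and the roughness scale here won't map one-to-one onto the dials I used):

    /* Textbook Oren-Nayar diffuse; sigma = 0 reduces it to plain Lambert. */
    color orenNayar( normal Nf; vector V; float sigma )
    {
      float sigma2 = sigma * sigma;
      float A = 1 - 0.5 * sigma2 / (sigma2 + 0.33);
      float B = 0.45 * sigma2 / (sigma2 + 0.09);
      float thetaV = acos( Nf . V );
      vector Vperp = normalize( V - Nf * (Nf . V) );
      color C = 0;
      extern point P;
      illuminance( P, Nf, PI/2 )
      {
        /* globals must be declared extern inside a function */
        extern vector L;
        extern color Cl;
        vector Ln = normalize( L );
        float cosThetaL = Ln . Nf;
        float thetaL = acos( cosThetaL );
        float alpha = max( thetaL, thetaV );
        float beta = min( thetaL, thetaV );
        float cosPhiDiff = max( 0, Vperp . normalize( Ln - Nf * cosThetaL ) );
        C += Cl * cosThetaL * (A + B * cosPhiDiff * sin(alpha) * tan(beta));
      }
      return C;
    }

    surface twoLayerDiffuse( float KdSSS = 0.5, Kd = 0.5,
                             sssRoughness = 1, diffRoughness = 3 )
    {
      normal Nf = faceforward( normalize(N), I );
      vector V = normalize( -I );
      /* flat layer standing in for the SSS response - reaches the edge */
      color sssLayer = KdSSS * orenNayar( Nf, V, sssRoughness );
      /* rougher layer - darkens at grazing angles, hugs the middle */
      color diffLayer = Kd * orenNayar( Nf, V, diffRoughness );
      Ci = Os * Cs * (sssLayer + diffLayer);
      Oi = Os;
    }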

    [Attachments: 1.jpg, 13.jpg]
  • wowie Posts: 2,029

    Here's another look illustrating the example. I rotated the lights to 45 degrees. Notice how there's more surface variation from the varying SSS/diffuse roughness. If you went really crazy, you could probably add cavity maps or subdermal details with microstructures to really bring out the underlying skin detail. I've been toying with the idea, but haven't really thought about how to approach it yet.

    The main drawback of this approach is that it's harder to get good results with textures that have baked-in blushing or darkened areas (which shouldn't be done anyway).

    [Attachments: 1.jpg, 13.jpg]
  • Mustakettu85 Posts: 2,933

    wowie said:

    Actually, it can make quite a bit of difference, just maybe not in your test scenario. If I understand correctly, they are using roughness to control the distribution of the SSS/diffuse response. This is the main reason I went for a two-layer diffuse in my preset...
    ... Of course, the way you implement SSS (adding or multiplying it with the diffuse) can play a part.

    Yes, you are right - it will definitely play a part when layering diffuse with SSS. And thank you for taking the time to post renders!

    What I mean is that I didn't see that much of a difference when we're talking "naked" SSS (I haven't implemented diffuse layering yet): inside every SSS calculation there is a diffuse one for the pre-pass (that's how the model works).

    Take a look at this example RSL code (basic "naked" SSS) from the 3Delight manual ( http://www.3delight.com/en/uploads/docs/3delight/3delight_40.html ):

    
    surface simple( float Ks = .7, Kd = .6, Ka = .1, roughness = .04 )
    {
      normal Nf = faceforward( normalize(N), I);
      vector V = normalize(-I);
    
      uniform string raytype = "unknown";
      rayinfo( "type", raytype );
    
      if( raytype == "subsurface" )
      {
        /* no specular is included in subsurface lighting ... */
        Ci = Ka*ambient() + Kd*diffuse(Nf);
      }
      else
      {
        Ci = subsurface(P) + Ks * specular(Nf, V, roughness);
      }
    }
    

    This is an oldschool example; notice how in the prepass (when raytype is "subsurface") there is a basic Lambertian diffuse() shadeop. In the new Maya shaders, they calculate Oren-Nayar there. It doesn't seem to make much difference in the SSS result.
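    So a version of that example with Oren-Nayar in the prepass would differ only in the "subsurface" branch - something like this, reusing the orenNayar() helper sketched in wowie's post above, with sigma standing in for a roughness parameter:

      if( raytype == "subsurface" )
      {
        /* Oren-Nayar instead of the Lambertian diffuse() in the SSS prepass;
           sigma = 0 falls back to the original behaviour */
        Ci = Ka*ambient() + Kd*orenNayar( Nf, V, sigma );
      }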

  • wowie Posts: 2,029


    Mustakettu85 said:

    What I mean is that I didn't see that much of a difference when we're talking "naked" SSS (I haven't implemented diffuse layering yet): inside every SSS calculation there is a diffuse one for the pre-pass (that's how the model works).

    This is an oldschool example; notice how in the prepass (when raytype is "subsurface") there is a basic Lambertian diffuse() shadeop. In the new Maya shaders, they calculate Oren-Nayar there. It doesn't seem to make much difference in the SSS result.

    Well, if they do compute roughness there, it might be possible to have different roughness for the actual diffuse and for the SSS part. If such flexibility exists, there's no need to go through what I did - you could apply the SSS to the diffuse directly and not rely on a second diffuse layer.

  • Mustakettu85 Posts: 2,933

    wowie said:

    Well, if they do compute roughness there, it might be possible to have different roughness for the actual diffuse and for the SSS part. If such flexibility exists, there's no need to go through what I did - you could apply the SSS to the diffuse directly and not rely on a second diffuse layer.

    I'll see if the UberSurface-type roughness (most likely power-law based) actually makes the sort of "naked SSS" difference you get with layering.

    As of now, take a look at these two tests with Oren-Nayar roughness at 0 (when it becomes Lambertian) and at 0.58. If you diff them (layer them in an image editor and set the top layer to "difference"), you will see a simple uniform decrease in lightness. Not what you'd expect, knowing how different Lambertian and Oren-Nayar tend to look by themselves (when not used for SSS): http://en.wikipedia.org/wiki/Oren–Nayar_reflectance_model#Results

    This is why I am tempted to just stick a Lambertian in there. It's less expensive to evaluate, to boot.

    [Attachments: evannatex_oren0-58.png, evannatex_oren0.png]
  • wowie Posts: 2,029


    Mustakettu85 said:

    As of now, take a look at these two tests with Oren-Nayar roughness at 0 (when it becomes Lambertian) and at 0.58. If you diff them (layer them in an image editor and set the top layer to "difference"), you will see a simple uniform decrease in lightness. Not what you'd expect, knowing how different Lambertian and Oren-Nayar tend to look by themselves (when not used for SSS): http://en.wikipedia.org/wiki/Oren–Nayar_reflectance_model#Results

    Actually, I could see a pretty big difference just by looking at the thumbnails. The one with the roughness looks more in line with what I expect skin to look like - a little darker in the areas not facing the light directly.

  • Mustakettu85 Posts: 2,933

    wowie said:

    Actually, I could see a pretty big difference just by looking at the thumbnails. The one with the roughness looks more in line with what I expect skin to look like - a little darker in the areas not facing the light directly.

    Would you use it? Then we can have both options, if you wish (and a control for artistically tweaking the Lambertian response as well). Oh, and render time turns out to be the same for Lambertian and bsdf()-based Oren-Nayar, which is cool.

    More on render times and related features:

    - "new" raytraced SSS only works this way in the raytracer; in REYES, it still behaves in the "oldschool" way (so no real reason to use it there), so whatever I say next is applicable to raytracer only.

    - as of right now, "new" SSS will be noticeably slower than equivalent-quality "oldschool" SSS - but only for large CLOSE-UPs.
    For long shots, it's faster than "oldschool": a) there is no shading rate to think of, and hence far less time is spent in the pre-pass (it's hard to notice there is any); b) you can use only 128 samples instead of, say, 256. So it will be faster the farther away your figure is, without caveats. // and since the 3Delight team is constantly optimising things, the algorithm is likely to become faster overall with time //

    - "new" SSS won't halt the progressive mode to do prepass! It renders iteratively and seamlessly, so previewing is much more efficient.

    - another feature is correct GI interaction (uses "irradiance" channel for SSS, so even "naked" scattering will not be black in the GI cache).

    I don't really want "feature creep" (otherwise it will take me forever to document it all), so here's what it has right now:

    - SSS (diffuse colour map, colour multiplier, diffuse roughness, quality controls, scatter/absorption depths, scatter/absorption multiplier sliders);
    - two specular layers (a colour multiplier and a strength map slot for each; no coloured specular mapping because it's a shader for dielectrics);
    - a cheat-y specular for "velvet" (off by default);
    - blurred reflection (off by default);
    - bump;
    - two optional ways of finetuning the diffuse texture used in the SSS calculation: replacing one colour with the pure SSS result, like the Poser manual suggests, and/or hue-saturation-lightness colour correction (sketched below) - the latter may add up to 3+ minutes for a large closeup with 8 pixel samples in the render settings.
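    The colour correction is built around RSL's ctransform(); a bare-bones sketch of the idea, with made-up dial names (not the actual shader code):

    /* hue-saturation-lightness finetune for the SSS input colour */
    color hslCorrect( color c; float hueShift; float satMult; float lightMult )
    {
      color hsl = ctransform( "rgb", "hsl", c );
      float h = mod( comp(hsl, 0) + hueShift, 1 );
      float s = clamp( comp(hsl, 1) * satMult, 0, 1 );
      float l = clamp( comp(hsl, 2) * lightMult, 0, 1 );
      return ctransform( "hsl", "rgb", color( h, s, l ) );
    }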

    What I will definitely add:
    - displacement;
    - an ability to add the diffuse calculation on top of SSS (same map, but with an option to tweak its roughness/falloff separately from the one used for SSS);
    - a separate diffuse-only colour overlay (mix() via a transparency mask, for makeup/eyebrows);
    - visibility attribute controls.

    What I don't want to add right now:
    - anisotropic specular/reflection;
    - refraction;
    - transparency;
    - no-thickness translucency. // right now, UberSurface 1 and 2 handle all these things well //

    I'm not sure Westin's velvet is necessary, either.

  • Mustakettu85 Posts: 2,933

    PS A sort of benchmark: on my machine, my Fiery Genesis scene with GI cache renders in 15 minutes with 8x8 pixel samples and max diffuse and specular bounces of 2. So when I post render times, you'll know how they compare to what you would likely get on your machine.

  • wowie Posts: 2,029


    Mustakettu85 said:

    Would you use it? Then we can have both options, if you wish (and a control for artistically tweaking the Lambertian response as well). Oh, and render time turns out to be the same for Lambertian and bsdf()-based Oren-Nayar, which is cool.

    On the SSS side, having backscatter like US2's is indispensable for skin. Different roughness controls for the SSS and for both diffuse layers (the base and the overlay) are a plus. As for how the SSS is applied to the diffuse, sticking to add or multiply would simplify things. I generally don't use diffuse/SSS maps with SSS. Pure, naked SSS, when set up right, is much more portable from texture set to texture set.

    There's no need to substitute colors or do color correction. Let artists handle that themselves by fine-tuning the colors or maps. For strength/gain/multiplier controls, a clamp function should suffice. You could implement this as a single value or as color values.

    I'm assuming there will be two layers of specular as well, with an IOR-based Fresnel, as noted in your previous posts.

  • Kevin Sanderson Posts: 1,643

    Did you guys ever see this in your searches? Seems to me there's another page somewhere that shows the different image layers.

    http://kissb.cgsociety.org/art/kissb-mudbox-photoshop-hillbilly-renderman-xsi-3d-866980

    http://kissb.cgsociety.org/art/renderman-xsi-lambert-shading-hillbilly-3d-868644


    EDIT: This might be what I remember seeing a few years ago:

    http://forums.cgsociety.org/showthread.php?t=866980

  • Mustakettu85 Posts: 2,933

    wowie said:

    On the SSS side, having backscatter like US2's is indispensable for skin.

    I'm not sure I can make it work. I've tried, but got confused; I'll try again. The raytraced algorithm seems to be quite sensitive to backlighting anyway. With a simple trace()-based GI shader, it will even respond to an HDR envmap (no directional lights - see attached). This does not work with UE2, but I plan to release that GI light too, along with finetuned render scripts.
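    The GI light I mean is built on this sort of pattern - a bare-bones sketch, not the version I'll release (indirectdiffuse() is trace()-based under the hood):

    /* minimal GI light: gathers indirect/environment light at each shaded
       point - HDR-mapped env geometry included */
    light giSketch( float intensity = 1; float samples = 64 )
    {
      normal Nn = normalize( Ns );   /* normal of the surface being lit */
      illuminate( Ps + Nn )          /* "from" just above the surface */
      {
        Cl = intensity * indirectdiffuse( Ps, Nn, samples );
      }
    }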

    wowie said:
    As for how the SSS is applied to the diffuse, sticking to add or multiply would simplify things.


    The mix overlay is a "bonus" for those who don't want to mess around with setting up geometry shells but want sharp black makeup, or maybe second skins, or whatever. It's more limited in scope than a shell, but simpler to use. I'm going to stick it at the bottom of the interface, anyway =)

    wowie said:
    I generally don't use diffuse/SSS maps with SSS.

    Then the additive diffuse will get its own map slot, so that you'd be able to keep it from affecting the SSS (but those who want an SSS map will have their way, too).

    How useful in your experience do you find _multiplying_ diffuse with SSS?

    wowie said:
    There's no need to substitute colors or do color correction.

    Well, this all stays because I already did it, for myself (and if I like it, there may be others who would want it, too). It's all off by default, so you can just leave those controls alone and their code won't be invoked.


    wowie said:
    For strength/gain/multiplier controls, a clamp function should suffice. You could implement this as a single value or as color values.

    I'm sorry, I don't follow. Which strength etc. controls? What exactly should get clamped, and within what boundaries?


    wowie said:
    I'm assuming there will be two layers of specular as well, with an IOR-based Fresnel, as noted in your previous posts.

    Certainly, that's a given. I just didn't feel like typing out things I remembered mentioning before =)

    [Attachment: backlight_maponly_tracegi.png]
  • Mustakettu85 Posts: 2,933

    Kevin Sanderson said:

    Did you guys ever see this in your searches? Seems to me there's another page somewhere that shows the different image layers.

    http://kissb.cgsociety.org/art/kissb-mudbox-photoshop-hillbilly-renderman-xsi-3d-866980

    http://kissb.cgsociety.org/art/renderman-xsi-lambert-shading-hillbilly-3d-868644


    EDIT: This might be what I remember seeing a few years ago:

    http://forums.cgsociety.org/showthread.php?t=866980

    Lovely work! Softimage users are (were?) a super creative bunch.
    Render passes are a powerful technique, but not every hobbyist is willing to bother with manual passes (and AFAIK there's no DS tool these days to assist with that).

  • wowie Posts: 2,029


    Mustakettu85 said:

    I'm not sure I can make it work. I've tried, but got confused; I'll try again. The raytraced algorithm seems to be quite sensitive to backlighting anyway. With a simple trace()-based GI shader, it will even respond to an HDR envmap (no directional lights - see attached). This does not work with UE2, but I plan to release that GI light too, along with finetuned render scripts.

    Well, it would help a lot if you're just using pure SSS and one diffuse layer. The problem with HSS and US2 was that to get somewhat strong backscatter, you had to boost either SSS scale or strength (or both), and that results in waxy-looking SSS. To compensate, you then use SSS strength maps or put the diffuse map in the SSS strength/color map slot. With backscatter, you can keep the SSS quite low and still get translucency on earlobes, fingers, etc.

    US2 does have backscatter, but it lacks controls to limit the translucency to just the outer areas (i.e., it's pretty uniform across the fingers, though thicker parts like forearms look OK).

    Below is a real-life example image I've found which shows that you can still see translucency even in a well-lit scene, much like what you generally have with HDRI images. With backscatter and falloff, I think we can get reasonably similar results.


    Mustakettu85 said:
    How useful in your experience do you find _multiplying_ diffuse with SSS?

    It should help solve your black-eyebrows problem, or any other part of the surface where you don't want SSS to show. I'm generally more concerned about the nostrils than the eyebrows, though.


    Mustakettu85 said:
    I'm sorry, I don't follow. Which strength etc. controls? What exactly should get clamped, and within what boundaries?

    I'm thinking that when you do diffuse + SSS, you can do a clamp operation on the result, so you can still have a fairly good (high) amount of diffuse, or SSS, or both, but have the result clamped to a certain limit so you won't end up with overblown diffuse or SSS. As a general rule, the value used for the clamp should be the albedo value of the material. In effect, this also acts as a strength dial.
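    In rough RSL terms, something like this (made-up names, just to illustrate - min() works componentwise on colors):

    surface clampedMix( float Kd = 0.8, Ksss = 0.8 )
    {
      normal Nf = faceforward( normalize(N), I );
      color diffTerm = Kd * Cs * diffuse( Nf );
      color sssTerm = Ksss * subsurface( P );
      /* clamp the combined response to the albedo (Cs here), so cranking
         either dial can't push the result past it - which is also what
         makes it act as a strength dial */
      Ci = min( diffTerm + sssTerm, Cs );
      Oi = Os;
    }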

    [Attachment: SubScattering.sm_.jpg]
  • Mustakettu85 Posts: 2,933

    wowie said:

    US2 does have backscatter, but it lacks controls to limit the translucency to just the outer areas (i.e., it's pretty uniform across the fingers, though thicker parts like forearms look OK).

    Below is a real-life example image I've found which shows that you can still see translucency even in a well-lit scene, much like what you generally have with HDRI images. With backscatter and falloff, I think we can get reasonably similar results.
    ...
    I'm thinking that when you do diffuse + SSS, you can do a clamp operation on the result, so you can still have a fairly good (high) amount of diffuse, or SSS, or both, but have the result clamped to a certain limit so you won't end up with overblown diffuse or SSS.

    I see - thanks for the photo, I understand now. I can't 110% promise it will work, but I think I have a pretty clear idea of the maths that could theoretically do this. If I don't get nice results with it within a couple of weeks, I will postpone it until some future update.



    wowie said:
    It should help solve your black-eyebrows problem, or any other part of the surface where you don't want SSS to show. I'm generally more concerned about the nostrils than the eyebrows, though.

    Mathematically, it seems to me, mix() is better suited for this type of task, but let's do it this way... When I release the shader, you can play with the mix overlay and see if it works for you, nostrils and all. If it does not, I'll stick a multiplicative option in there.
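    To illustrate the difference (a toy sketch, not the actual shader code):

    surface overlayToy( color overlayColour = color(0.05, 0.05, 0.05);
                        float mask = 0 )   /* would come from a map */
    {
      color base = subsurface( P );
      /* mix() replaces the SSS result wherever the mask is 1 ... */
      color mixed = mix( base, overlayColour, mask );
      /* ... while multiplying can only darken/tint what's underneath */
      color multiplied = base * overlayColour;
      Ci = mixed;
      Oi = Os;
    }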

  • Takeo.Kensei Posts: 1,303

    I'm not a big fan of backscattering. It should be handled by the shader calculation, but you need a correct light calculation for that. At most, it's there to give some artistic control.

  • Mustakettu85 Posts: 2,933

    Takeo.Kensei said:
    I'm not a big fan of backscattering. It should be handled by the shader calculation, but you need a correct light calculation for that. At most, it's there to give some artistic control.

    Artistic, exactly. The new raytraced algo looks good enough to me, but other people have different eyes. So I'll try to add it.

    Okay, now volumes. They weren't supported in 2013, that's for sure (I saw folks complaining on the 3Delight forums). Good that they fixed it quicker than I could hope for.

    Now, I've managed to get RiFog working with the "progressive" switch (albeit there's still something weird going on with the eyes - see attached), but not in the scripted mode. Looking at the RIBs, the example render script won't see atmospheres.

    This is the function for it, from the DS3 script docs:

    void DzScriptedRenderer::riAtmosphere( String name,
                                           Array tokens,
                                           Array params )

    DAZ Script implementation of the RiAtmosphereV() function

    Parameters:
      name    The name of the volume shader
      tokens  An Array of String token names passed to the shader
      params  An Array of corresponding values for tokens

    But how do I get the atmosphere name automatically?

    [Attachments: reyesrifog.png, rifog_test_prog.jpg]
  • Takeo.Kensei Posts: 1,303

    As far as I know, volumes have been supported since before 3Delight 8.5, which means DS3.x.

    I don't really know why you have white eyes. Hard to guess, but it is a cool effect.

    And I am not sure I understand your last question: you should know the name of the shader you want to use, so unless your computer can read minds, automatic naming will be difficult??? (cool idea though :) )

  • Mustakettu85 Posts: 2,933

    Takeo.Kensei said:

    As far as I know, volumes have been supported since before 3Delight 8.5, which means DS3.x.

    I don't really know why you have white eyes. Hard to guess, but it is a cool effect.

    And I am not sure I understand your last question: you should know the name of the shader you want to use, so unless your computer can read minds, automatic naming will be difficult??? (cool idea though :) )

    There was something wrong with volumes in the raytrace hider (only there, not in the default one) in 2013, when they were transitioning to path tracing - Berto Durante said there was no proper support yet: http://www.3delight.com/en/modules/PunBB/viewtopic.php?id=3878

    The question is how to make the render script read the name from the scene's atmosphere shader automatically.
    In DS, we attach these to cameras. It looks like I have to query whether the active camera is a normal camera or a DzShaderCamera, and if it's a DzShaderCamera, get its DzRSLShader.
    But how do I then get its name and tokens to pass to Renderer.riAtmosphere()? I can see no straightforward methods for this listed in the docs.
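    This is as far as I can get (a DAZ Script sketch; the commented part is exactly the missing piece):

    var cam = Scene.getPrimaryCamera();
    if( cam && cam.inherits( "DzShaderCamera" ) )
    {
      // ...here I would need a documented way of pulling the DzRSLShader's
      // name and token values out, to hand them to Renderer.riAtmosphere()
    }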

  • kyoto kid Posts: 41,845

    ...been directed here from the Renderman thread. Crikey, there's so much to go through - more than I can handle in one session, as I no longer have Net access from home.

    On the other thread I brought up AoA's atmospheric cameras relating to volumetrics and scripted rendering.

    One of the responses included an example of a render using UberVolume, as it was mentioned that the pure raytracer had issues with AoA's shader, not with the ability to interpret volumetrics:

    Everything works. Atmosphere, interior, and whatever. See the attached testscene

    My question is: how long did the render process take? I know that with regular 3Delight, UberVolume brings the process to an excruciatingly slow crawl.

    My other concern (as I mentioned in the other thread) is the whole business of scripting. I am no longer a programmer, have not been for decades, and am way out of step with today's languages. Therefore I am totally dependent on pre-written scripts being made available which I can download and install into the 4.6 scripts folder.

    Another question: does scripted rendering not support AoA's Advanced lights as well? Two of their features I really like and find very useful are surface flagging and the spotlight's squared (natural) falloff setting, neither of which is available with the normal DAZ lights or with any of the UberLight options. I also do not have any of the Omnifreaker shaders or lights other than those built into 4.6 (budget reasons).

  • Takeo.Kensei Posts: 1,303

    Takeo.Kensei said:

    As far as I know, volumes have been supported since before 3Delight 8.5, which means DS3.x.

    I don't really know why you have white eyes. Hard to guess, but it is a cool effect.

    And I am not sure I understand your last question: you should know the name of the shader you want to use, so unless your computer can read minds, automatic naming will be difficult??? (cool idea though :) )


    Mustakettu85 said:

    There was something wrong with volumes in the raytrace hider (only there, not in the default one) in 2013, when they were transitioning to path tracing - Berto Durante said there was no proper support yet: http://www.3delight.com/en/modules/PunBB/viewtopic.php?id=3878

    The question is how to make the render script read the name from the scene's atmosphere shader automatically.
    In DS, we attach these to cameras. It looks like I have to query whether the active camera is a normal camera or a DzShaderCamera, and if it's a DzShaderCamera, get its DzRSLShader.
    But how do I then get its name and tokens to pass to Renderer.riAtmosphere()? I can see no straightforward methods for this listed in the docs.

    Like any program, you can have bugs at some point in development. I'm sure I never saw it, and just in case, I always keep old 3Delight and DS versions. It also takes time to get everything correct. I suppose the problem was with a particular version.

    For the other question, I am not sure, as I didn't try, but there is a getParameter in DzShaderDescription in the SDK.


    Kyoto Kid said:
    ...been directed here from the Renderman thread. Crikey, there's so much to go through - more than I can handle in one session, as I no longer have Net access from home.

    On the other thread I brought up AoA's atmospheric cameras relating to volumetrics and scripted rendering.

    One of the responses included an example of a render using UberVolume, as it was mentioned that the pure raytracer had issues with AoA's shader, not with the ability to interpret volumetrics:

    Everything works. Atmosphere, interior, and whatever. See the attached testscene

    My question is: how long did the render process take? I know that with regular 3Delight, UberVolume brings the process to an excruciatingly slow crawl.


    Some other test scenes attached:
    First scene: an UberVolume cone with a Depth Cue camera, one spotlight creating the godray and one distant light in the back. REYES render time = 2 min 18 s. Progressive rendering (aka raytrace) = 10+ min.
    I wouldn't advise using raytrace with volumes. It seems that REYES is way more efficient for that.
    The second render is the same scene, but with a sphere as a dome to make a closed environment. REYES = 4 min. Your computer should render the same scene a bit quicker than mine.


    Kyoto Kid said:
    My other concern (as I mentioned in the other thread) is the whole business of scripting. I am no longer a programmer, have not been for decades, and am way out of step with today's languages. Therefore I am totally dependent on pre-written scripts being made available which I can download and install into the 4.6 scripts folder.

    I am no programmer and Kettu isn't either. I'm a mechanical engineer, but I work in production IT, so I do script and program. But what is done here is not really difficult if you follow Kettu's tutorial. It's mostly copy/paste. I can understand some people are afraid, but once you get the trick, I guess you can customize things your way, like Kettu does. She's doing well considering she has no IT background. Just read her post: http://www.daz3d.com/forums/discussion/21611/P495/#642793

    Kyoto Kid said:
    Another question: does scripted rendering not support AoA's Advanced lights as well? Two of their features I really like and find very useful are surface flagging and the spotlight's squared (natural) falloff setting, neither of which is available with the normal DAZ lights or with any of the UberLight options. I also do not have any of the Omnifreaker shaders or lights other than those built into 4.6 (budget reasons).

    I don't have AoA's products - only the standard SSS shader, which I don't use either. So I can only guess: scripting should be supported, but I remember reading somewhere that AoA's products don't support point clouds. I don't know if that is still the case. No clue for the raytrace hider either. Guessing why it doesn't work in some specific case is something AoA should answer - it's hard to guess when you don't have the source code.

    But you don't really need scripting, in my view. Just try progressive raytrace in your render settings to see if you get some speed-up from progressive rendering (though, as said, not with volumes).

    Last note: UberAreaLight has a control for the falloff: it is called Falloff Decay and has a default value of 2, which means inverse-square falloff.

    [Attachments: volumetest02.jpg, Volumetest01.jpg]
  • Mustakettu85 Posts: 2,933

    Kyoto Kid said:
    Crikey, there's so much to go through - more than I can handle in one session, as I no longer have Net access from home.


    Well, here's a post of mine with links to:
    a) a "benchmark" type of scene;
    b) main post with copy & paste instructions;
    c) further small edits.

    http://www.daz3d.com/forums/discussion/21611/P570/#650631


    Kyoto Kid said:

    Another question: does scripted rendering not support AoA's Advanced lights as well? Two of their features I really like and find very useful are surface flagging and the spotlight's squared (natural) falloff setting

    Actually "scripted rendering" is simply a rather unwieldy interface for passing specific parameters to the render engine (allowing the user to use more calls than DS does "by default"). The example script DAZ devs have been so kind to provide for us, which is used as the basis for most other scripts, has no issues handling lights. What the example script does not provide is a ready-made construct for parsing shaders attached to cameras (like atmosphere shaders are in DS).

    TL;DR: feel free to use AoA's lights (see attached - AoA's distant light illuminates only the sphere, AoA's ambient only the cone).

    // But AoA's ambient will not do bounce light GI. Use UE2 bounce mode for this //

    PS Actually, falloff controls (and some other useful things) _are_ included with the so-called "shader" lights found in the "DS Defaults" folder in either the "Lights" or "Light Presets" folder in "Your Library". They won't do fancy "surface flagging" or cookies/gobos/whatever the term for slide projecting is these days, but they can use DSMs, so if you ever find you need one... Just remember to always CTRL-click and "Add" them, so as not to overwrite all your other lights.

    ------


    Takeo.Kensei said:
    For the other question, I am not sure, as I didn't try, but there is a getParameter in DzShaderDescription in the SDK.

    Okay thanks, I'll see what this method can return.


    Takeo.Kensei said:
    But what is done here is not really difficult if you follow Kettu's tutorial. It's mostly copy/paste. I can understand some people are afraid, but once you get the trick, I guess you can customize things your way, like Kettu does. She's doing well considering she has no IT background.

    Yup, it's all copy & paste once you have the constructs. Technically, as part of my local mech eng/applied physics degree, I did have a semester of basic programming and a semester of numerical modelling, but you don't want to know what glorious bugs I was able to conjure XD

    [Attachment: AmbientCone_DistantSphere_NonePlane.png]
  • Mustakettu85 Posts: 2,933

    Takeo.Kensei said:
    For the other question, I am not sure, as I didn't try, but there is a getParameter in DzShaderDescription in the SDK.

    Tried, and this method, along with its getNumParameters() friend, seems to be defined only for DzShader, not DzRSLShader.

    On the bright side, getShaderFile() gives me a path like "ShaderBuilder/Volume/RiSpec/RI_Fog" - which I think will work as the "name" for the RiAtmosphere call.

    There is a findTokenByProperty() method which may or may not be useful... if I find a way to extract properties from the definition file, the path to which I can get from getDefinitionFile() - but then how do I fill the tokens with user-defined values from the scene interface?

    Convoluted much.

    I'm thinking about making a render script tuned for a specific atmosphere (we don't have many for DS right now, do we?) - say, RI_Fog. Then its name will be passed automatically, but the tokens will be filled in by the user in the render settings (not in the camera settings). The downside is that I'm not sure these values will get saved per-scene.

    I hope to see if this idea works in the next few days.
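    Roughly what I have in mind - a sketch only; it assumes the standard RiSpec fog parameters ("distance" and "background"), and I'm guessing at the value types the params array expects:

    // somewhere in the render script, before the world block;
    // fogDistance and fogColor would come from my render-settings dials
    var fogDistance = 25.0;
    var fogColor = new Color( 128, 128, 128 );
    Renderer.riAtmosphere( "ShaderBuilder/Volume/RiSpec/RI_Fog",
                           [ "float distance", "color background" ],
                           [ fogDistance, fogColor ] );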

    PS About UberVolumes: I noticed the omSimpleSurface enclosing the volume is always visible to diffuse (GI/AO) rays! With either on, this makes the volume render slowly, but it does not seem to change its appearance much compared to non-GI renders. I wish the UberVolume scripts weren't encrypted! Then it would be possible to kill the diffuse ray visibility...

  • Kevin Sanderson Posts: 1,643

    Kettu's script works magic! I bailed on a 60%-done render after 14 minutes using GI and Progressive Render. Kettu's script, which allows tweaks, comes in at around 6 minutes 30 seconds.

  • Mustakettu85 Posts: 2,933

    Kevin Sanderson said:
    Kettu's script works magic! I bailed on a 60%-done render after 14 minutes using GI and Progressive Render. Kettu's script, which allows tweaks, comes in at around 6 minutes 30 seconds.

    Thank you, but the real magic is done by 3Delight (which was just waiting for us to "unlock" it, haha)

  • Mustakettu85 Posts: 2,933

    Hi again everyone,

    I may disappear for some time, maybe a few weeks - real life an' stuff. I may pop in once in a while for a quick check, but not much more. But I will be working on the shaders, scripts etc.

    So here's what I've managed so far re: backscattering:

    - "noback" - just the default shader mode, no adjustments to the subsurface() shadeop result;
    - "back_falloff 1" - backscatter multiplier of 5, no falloff correction;
    - "back_falloff 0.15" - same backscatter mutiplier, but a 0.15 parameter passed to falloff calculation.

    [Attachments: back_falloff0-15.png, back_falloff1.png, noback_.png]
  • Kevin Sanderson Posts: 1,643

    The .15 image looks more correct to me.

  • wowie Posts: 2,029

    I suggest also adding a sharpness value for the falloff.

  • Mustakettu85 Posts: 2,933

    wowie said:
    I suggest also adding a sharpness value for the falloff.

    I'm not sure how to do it. I will try, if I find an RSL resource with an explanation.

  • Kevin Sanderson Posts: 1,643

    Wowie, I read the one on WETA's render engine. It's interesting how they have found it better to match full-spectrum light rendering to the real scenes they shot. I have felt that way for a while, but there must be a way to fake it or come really close.

    I used your Photo Studio Kit lights and SSS to test a render battle between Progressive Render and Kettu's script. Maybe Progressive adds time; I have the Kettu script set to Progressive off. Kettu's is faster, and consistently so. 21 seconds a frame faster can save time for animation... almost like buying a new processor.

    Progressive Render 2 minutes 58.7 seconds

    Bucket Order Horizontal
    Bucket Size 16
    Max Ray Trace Depth 2
    Pixel Samples X 4
    Pixel Samples Y 4
    Shadow Samples 8
    Gain 1
    Gamma Correction On
    Gamma 2.2
    Shading Rate 1.0
    Pixel Filter Sinc (but goes to Box)
    Pixel Width X 6
    Pixel Width Y 6

    Kettu's script (no GI source in scene but GI Pass still enabled) 2 minutes 37.6 seconds

    Max Diffuse Bounce 2
    Max Specular Bounce Depth 2
    Bucket Order Horizontal
    Bucket size 16
    Max Ray Depth 2
    Pixel Samples X 4
    Pixel Samples Y 4
    Shadow Samples 8
    Gain 1
    Gamma 2.2
    Shading Rate 1
    Pixel Filter Sinc
    Pixel Filter Width X 6
    Pixel Filter Width Y 6


    AMD 8350 8-core CPU and 16GB RAM

    [Attachments: WowiePhotoStudioKit_ProgressiveRender.jpg, WowiePhotoStudioKit_KettuScript_No_GI_On_In_Scene_Though_GI_Pass_On.jpg]