Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
Actually, it can make quite a bit of difference, just maybe not in your test scenario. If I understand correctly, they are using roughness to control the distribution of the SSS/diffuse response. This is the main reason I went for two-layer diffuse in my preset - set the SSS/diffuse layer really flat so it covers most of the object's surface (up to the edge), but have the primarily non-translucent diffuse close to the middle.
A good scenario to test this out is with several strong diffuse lights coming from sufficiently different directions. If your diffuse lighting is primarily ambient, it won't show up much, because the light is spread evenly across the surface.
Below are examples of using the first diffuse layer with SSS at a roughness of 1, but the non-SSS diffuse layer at 3. In the second render, both diffuse layers are set to 1. Notice you get more saturation from the SSS with varying roughness between layers. Of course, the way you implement SSS (adding or multiplying it with the diffuse) can play a part.
Here's another look illustrating the example. I rotated the lights to 45 degrees. Notice how there's more surface variation from the varying SSS/diffuse roughness. If you went really crazy, you could probably add cavity maps or subdermal details with microstructures to really bring out underlying skin details. I've been toying with the idea, but haven't really thought about how to approach it yet.
The main drawback with this approach is that it's harder to get good results with textures that have baked-in blushes or darkened areas (which shouldn't be done anyway).
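The two-layer idea above can be sketched numerically (a minimal Python sketch; `wrap` and `power` are my stand-in controls for the layers' roughness/falloff, not the shader's actual parameters): a flat, wrapped layer reaches all the way to the edge of the object, while a tighter cosine-power layer stays near the lit middle.

```python
def wrapped_lambert(cos_theta, wrap):
    """Flat, edge-reaching diffuse: wrap=0 is plain Lambert; larger wrap
    pushes light past the terminator (hypothetical 'flatness' control)."""
    return max(0.0, (cos_theta + wrap) / (1.0 + wrap))

def tight_diffuse(cos_theta, power):
    """Tighter diffuse concentrated toward the light direction (power > 1)."""
    return max(0.0, cos_theta) ** power

def two_layer(cos_theta, sss_weight=0.6, wrap=0.5, power=3.0):
    """Blend a flat SSS-ish layer with a tighter non-translucent one."""
    flat = wrapped_lambert(cos_theta, wrap)
    tight = tight_diffuse(cos_theta, power)
    return sss_weight * flat + (1.0 - sss_weight) * tight
```

At a grazing angle the flat layer still contributes while the tight layer has almost vanished, which is the kind of extra surface variation the two renders show.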
Yes you are right, it will definitely play a part when layering diffuse with SSS. And thank you for taking the time to post renders!
What I mean is that I didn't see that much of a difference when we're talking "naked" SSS (I haven't yet implemented diffuse layering): inside every SSS calculation there is a diffuse one for the pre-pass (that's the way the model works).
Take a look at this example RSL code (basic "naked" SSS) from the 3Delight manual (http://www.3delight.com/en/uploads/docs/3delight/3delight_40.html):
This is an oldschool example, and notice how in the prepass (when raytype is "subsurface") there is a basic Lambertian diffuse() shadeop. In the new Maya shaders, they calculate Oren-Nayar there. Doesn't seem to make much difference in the SSS result.
Well, if they do compute roughness, it might be possible to have different roughness between the actual diffuse and the SSS part. If such flexibility exists, there's no need to go through what I did. You could apply the SSS to the diffuse directly and not rely on a second diffuse layer.
I'll see if the UberSurface-type roughness (most likely to be power law based) is going to actually make a "naked SSS" difference of the sort you get with layering.
As of now, take a look at these two tests with Oren-Nayar roughness at 0 (when it becomes Lambertian) and 0.58. If you diff them (layer them in an image editor and set the top one to "difference"), you will see a simple uniform decrease in lightness. Not what you'd expect, knowing how different Lambertian and Oren-Nayar tend to look by themselves (not when used for SSS): http://en.wikipedia.org/wiki/Oren–Nayar_reflectance_model#Results
This is why I am tempted to just stick a Lambertian in there. Less expensive to evaluate, to boot.
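For reference, the Lambert vs. Oren-Nayar comparison can be sketched with the simplified two-term Oren-Nayar model (a qualitative Python sketch; angles in radians, function names mine). At sigma = 0 it collapses to plain Lambert, which is why the roughness-0 render is Lambertian:

```python
import math

def lambert(albedo, cos_theta_i):
    # Standard Lambertian diffuse (1/pi normalisation omitted)
    return albedo * max(0.0, cos_theta_i)

def oren_nayar(albedo, theta_i, theta_r, phi_diff, sigma):
    """Simplified two-term Oren-Nayar; sigma is roughness in radians.
    sigma = 0 gives A = 1, B = 0, i.e. plain Lambert."""
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return albedo * math.cos(theta_i) * (
        A + B * max(0.0, math.cos(phi_diff)) * math.sin(alpha) * math.tan(beta))
```

With light and view at the same grazing angle, the rough version comes out brighter than Lambert (the classic Oren-Nayar retro-reflection), which is the sort of difference the Wikipedia comparison images show - and what a uniform lightness offset in the SSS prepass would fail to capture.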
Actually, I could see a pretty big difference by just looking at the thumbnails. The one with the roughness looks more in line with what I expect skin to look like - a little bit darker in areas not facing the light directly.
Would you use it? Then we can have both options, if you wish (and a control for artistically tweaking the Lambertian response as well). Oh, and render time turns out to be the same between Lambertian and bsdf()-based Oren-Nayar, which is cool.
More on render times and related features:
- "new" raytraced SSS only works this way in the raytracer; in REYES, it still behaves in the "oldschool" way (so no real reason to use it there), so whatever I say next is applicable to raytracer only.
- as of right now, "new" SSS will be noticeably slower than equivalent-quality "oldschool" SSS - but for large close-ups only.
For long shots, it's faster than "oldschool": a) there is no shading rate to think of, and hence way less time spent in the pre-pass (it's hard to notice if there is any); b) you can use only 128 samples instead of, say, 256. So - it will be faster the farther away your figure is, without caveats. // and since the 3Delight team is constantly optimising things, the algorithm is likely to become faster overall with time. //
- "new" SSS won't halt the progressive mode to do prepass! It renders iteratively and seamlessly, so previewing is much more efficient.
- another feature is correct GI interaction (uses "irradiance" channel for SSS, so even "naked" scattering will not be black in the GI cache).
I don't really want "feature creep" (otherwise it will take me forever to document it all), so here's what it has right now:
- SSS (diffuse colour map, colour multiplier, diffuse roughness, quality controls, scatter/absorption depths, scatter/absorption multiplier sliders);
- two specular layers (colour multiplier and strength map slot for each; no coloured specular mapping because it's a shader for dielectrics);
- a cheat-y specular for "velvet" (off by default);
- blurred reflection (off by default);
- bump;
- two optional ways of fine-tuning the diffuse texture used in the SSS calculation: replacing one colour with the pure SSS result (like the Poser manual suggests), and/or hue-saturation-lightness colour correction - the latter may add 3+ minutes to a large close-up at 8 pixel samples in render settings.
What I will definitely add:
- displacement;
- an ability to add the diffuse calculation on top of SSS (same map, but with an option to tweak its roughness/falloff separately from the one used for SSS);
- a separate diffuse-only colour overlay (mix() via a transparency mask, for makeup/eyebrows);
- visibility attribute controls.
What I don't want to add right now:
- anisotropic specular/reflection;
- refraction;
- transparency;
- no-thickness translucency. // right now, UberSurface 1 and 2 handle all these things well //
I'm not sure Westin's velvet is necessary, either.
PS A sort of benchmark: on my machine, my Fiery Genesis scene with GI cache renders in 15 minutes with 8x8 pixel samples and max diffuse and specular bounces of 2. So when I post render times, you'll know how they compare to what you would likely get on your machine.
On the SSS side, having backscatter like US2 is indispensable for skin. Different roughness controls for SSS and both diffuse layers (the base and the overlay) are a plus. As for how the SSS is applied to the diffuse, sticking to add or multiply would simplify things. I generally don't use diffuse/SSS maps with SSS. Pure, naked SSS, when set up right, is much more portable from texture set to texture set.
There's no need to substitute colors or do color correction. Let the artists handle that themselves by fine-tuning colors or maps. For strength/gain/multiplier controls, a clamp function should suffice. You could implement this as a single value or as color values.
I'm assuming there will be two layers of specular as well, with an IOR based fresnel as noted in your previous posts.
Did you guys ever see this in your searches? Seems to me there's another page somewhere that shows the different image layers.
http://kissb.cgsociety.org/art/kissb-mudbox-photoshop-hillbilly-renderman-xsi-3d-866980
http://kissb.cgsociety.org/art/renderman-xsi-lambert-shading-hillbilly-3d-868644
EDIT: This might be what I remember seeing a few years ago:
http://forums.cgsociety.org/showthread.php?t=866980
I'm sorry I don't follow. Which strength etc controls? What exactly should get clamped and in what boundaries?
Certainly, that's a given. I just didn't feel like typing down things I remembered mentioning before =)
Lovely work! Softimage users are (were?) a super creative bunch.
Render passes are a powerful technique, but not every hobbyist is willing to bother with manual passes (and AFAIK there's no DS tool these days to assist in that).
It should help solve your black-eyebrows problem, or any other part of the surface where you don't want SSS to show. I'm generally more concerned about the nostrils than the eyebrows, though.
I'm thinking when you do diffuse + SSS, you can do a clamp operation on the result, so you can still have a fairly high amount of diffuse, or SSS, or both, but have the result clamped to a certain limit so you won't end up with overblown diffuse or SSS. As a general rule, the value used for the clamp should be the albedo value of the material. In effect, this also acts as a strength dial.
Mathematically, it seems to me, mix() is better suited for this type of task, but let's do it this way... When I release the shader, you can play with the mix overlay and see if it works for you, nostrils and stuff. If it does not, I'll stick a multiplicative option in there.
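In scalar form, the two combine modes being discussed - add-then-clamp versus mix() - look like this (a minimal Python sketch; function names are mine):

```python
def clamp_combine(diffuse, sss, albedo):
    """Add diffuse and SSS, then clamp to the material albedo so the
    sum can't blow out (the 'strength dial' idea from above)."""
    return min(diffuse + sss, albedo)

def mix_combine(diffuse, sss, t):
    """Linear interpolation, like RSL's mix(): t=0 gives pure diffuse,
    t=1 pure SSS; the result never exceeds the brighter of the two."""
    return diffuse * (1.0 - t) + sss * t
```

Clamping to the albedo caps the total at a physically plausible level while letting both terms stay strong, whereas mix() trades one term for the other and so never overshoots by construction.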
I'm not a big fan of backscattering. It should be handled by the shader calculation, but you need correct light calculation for that.
Possibly just to give some artistic control.
Artistic, exactly. The new raytraced algo looks good enough for me, but other people have different eyes. So I'll try to add it.
Okay, now volumes. They weren't supported in 2013, that's for sure (I saw folks complaining on the 3Delight forums). Good they fixed it quicker than I could hope for.
Now, I've managed to get RiFog working with the "progressive" switch (albeit there's still something weird going on with the eyes - see attached), but not in the scripted mode. Looking at the RIBs, the example render script doesn't see atmospheres.
This is the function to get it, from the DS3 script docs:
But how to get the atmosphere name automatically?
As far as I know, volumes have been supported since before 3Delight 8.5, which means DS3.x.
I don't really know why you have white eyes. Hard to guess, but that is a cool effect.
And I am not sure I understand your last question: you should know the name of the shader you want to use, so unless your computer can read minds, that will be difficult to do automatically ??? (cool idea though :) )
There was something wrong with volumes in the raytrace hider (only there, not in the default one) in 2013, when they were transitioning to path tracing - Berto Durante said there was no proper support yet: http://www.3delight.com/en/modules/PunBB/viewtopic.php?id=3878
The question is, how do I make the render script read the name from the scene atmosphere shader automatically?
In DS, we attach these to cameras. It looks like I have to query whether the active camera is a normal camera or a DzShaderCamera, and if it's a DzShaderCamera, get its DzRSLShader.
But how do I then get its name and tokens to pass to Renderer.riAtmosphere()? I can see no straightforward methods for this listed in the docs.
...been directed here from the Renderman thread. Crikey there's so much to go through, more than I can handle in one session as I no longer have Net access from home.
On the other thread I brought up AoA's atmospheric cameras relating to volumetrics and scripted rendering.
One of the responses included an example of a render using UberVolume, as it was mentioned the pure raytracer had issues with AoA's shader, not with the ability to interpret volumetrics.
My question is, how long did the render process take? I know with the regular 3Delight, UberVolume brings the process to an excruciatingly slow crawl.
My other concern (as I mentioned in the other thread) is the whole business of scripting. I am no longer a programmer, have not been for decades, and am way out of step with any of today's languages. Therefore I am totally dependent on pre-written scripts that are made available which I can download and install into the 4.6 scripts folder.
Another question: does scripted rendering not support AoA's Advanced lights as well? Two of their features I really like and find very useful are surface flagging and the spotlight having a squared (natural) falloff setting, neither of which is available with the normal Daz lights or with any of the UberLight options. I also do not have any of the other Omnifreaker shaders/lights than those which were built into 4.6 (budget reasons).
Some other test scenes attached:
First scene: UberVolume cone with a Depthcue camera, one spotlight creating the godray, and one distant light in the back. Reyes render time = 2 min 18 s. Progressive rendering (aka raytrace) = 10+ min.
I wouldn't advise using raytrace with volumes. It seems that Reyes is way more efficient for that.
The second render is the same as the first scene, but with a sphere as a dome to make a closed environment. Reyes = 4 min. Your computer should render the same scene a bit quicker than mine.
I don't have AoA's product, only the standard SSS shader, which I don't use either. So I can only guess: scripting should be supported, but I remember reading somewhere that AoA's products don't support point clouds. Don't know if that is still the case. No clue for the raytrace hider either. Guessing why it doesn't work in some specific case is something AoA should answer - hard to guess when you don't have the source code.
But you don't really need scripting, in my view. Just try progressive raytrace in your render settings to see if you get some speed-up from progressive rendering (not with volumes, though, as noted).
Last note: UberAreaLight has a control for the falloff: it is called Falloff Decay and has a default value of 2, which means inverse-square falloff.
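For reference, a decay exponent works like this (a scalar Python sketch, not UberAreaLight's actual code): decay 2 is the physically natural inverse-square law, decay 0 disables distance falloff entirely.

```python
def light_falloff(intensity, distance, decay=2.0):
    """Distance falloff with an adjustable decay exponent:
    decay=2 is inverse-square, decay=1 is linear, decay=0 is none.
    The distance is floored to avoid division by zero at the source."""
    return intensity / max(distance, 1e-6) ** decay
```

So with inverse-square decay, a light at twice the distance delivers a quarter of the intensity - the "natural" behaviour of real spotlights.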
Okay thanks, I'll see what this method can return.
Yup, it's all copy & paste when you have the constructs. Technically, as part of my local mech eng/applied physics degree, I did have a semester of basic programming and a semester of numerical modelling, but you don't want to know what glorious bugs I was able to conjure XD
Tried, and this method, along with its getNumParameters() friend, seems to be defined only for DzShader, not DzRSLShader.
On the bright side, getShaderFile() gives me the path like "ShaderBuilder/Volume/RiSpec/RI_Fog" - which I think will work for the RiAtmosphere call as "name".
There is a findTokenByProperty() method which may or may not be useful... if I find a way to extract properties from the definition file, a path to which I can get from getDefinitionFile() - but then how do I fill the tokens with user-defined values from the scene interface??
Convoluted much.
I'm thinking about making a render script that will be tuned for a specific atmosphere (we don't have many right now for DS, do we?) - say, RI_Fog. Then its name will be passed automatically, but the tokens will be filled in by the user in the render settings (not in the camera settings). The downside is that I'm not sure these values will get saved per-scene.
I hope to see if this idea works in the next few days.
PS About UberVolume, I noticed the omSimpleSurface enclosing the volume is always visible to diffuse (GI/AO) rays! When rendering with either GI or AO on, it makes the volume render slowly, but does not seem to change its appearance much compared to non-GI renders. I wish the UberVolume scripts were not encrypted! Then it would be possible to kill the diffuse ray visibility...
Kettu's script works magic! I bailed on a 60% done render after 14 minutes using GI and Progressive Render. Kettu's script which allows tweaks comes in around 6 minutes 30 seconds.
Thank you, but the real magic is done by 3Delight (which was just waiting for us to "unlock" it, haha)
Hi again everyone,
I may disappear for some time, maybe a few weeks - real life an'stuff. I may be popping in once in a while for a quick check, but not much more. But I will be working on the shaders, scripts etc.
So here's what I managed so far to get re:backscattering:
- "noback" - just the default shader mode, no adjustments to the subsurface() shadeop result;
- "back_falloff 1" - backscatter multiplier of 5, no falloff correction;
- "back_falloff 0.15" - same backscatter mutiplier, but a 0.15 parameter passed to falloff calculation.
The .15 image looks more correct to me.
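The thread doesn't show the actual falloff formula, so purely as an illustration of what a backscatter multiplier plus a falloff parameter could do, here is one plausible shape (the function name, parameters, and the exponential form are all my assumptions, not the shader's code): a smaller falloff value extinguishes the boosted scattering faster with thickness, which would be consistent with 0.15 looking tighter and more correct than 1.

```python
import math

def backscatter(base_sss, multiplier=5.0, thickness=0.0, falloff=0.15):
    """Hypothetical backscatter boost: scale the subsurface result and
    attenuate it exponentially with surface thickness; smaller 'falloff'
    values kill the effect sooner. NOT the shader's actual formula."""
    return base_sss * multiplier * math.exp(-thickness / max(falloff, 1e-6))
```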
I suggest also adding a sharpness value for the falloff.
I'm not sure how to do it. I will try, if I find an RSL resource with explanation.
Haven't actually read this, but maybe it will give you ideas about volumetric lights/shadows:
http://www.sfb716.uni-stuttgart.de/uploads/tx_vispublications/espmss10.pdf
Some nice insight - do's and don'ts - on skin rendering:
http://c0de517e.blogspot.jp/2011/12/three-skin-rendering-horrors-you-want.html
Btw, anybody read this yet?
http://www.fxguide.com/featured/manuka-weta-digitals-new-renderer/
http://www.fxguide.com/featured/rendermanris-and-the-start-of-next-25-years/
Wowie, I read the one on WETA's render engine. Interesting how they found that full-spectrum light rendering matches real filmed scenes better. I have felt that way for a while, but there must be a way to fake it or come really close.
I used your Photo Studio Kit Lights and SSS to test a render battle between Progressive Render and Kettu's script. Maybe Progressive adds time; I have the Kettu script set to Progressive off. Kettu's is faster, and consistently so. 21 seconds a frame saved can add up for animation... almost like buying a new processor.
Progressive Render 2 minutes 58.7 seconds
Bucket Order Horizontal
Bucket Size 16
Max Ray Trace Depth 2
Pixel Samples X 4
Pixel Sample Y 4
Shadow Samples 8
Gain 1
Gamma Correction On
Gamma 2.2
Shading Rate 1.0
Pixel Filter Sinc (but goes to Box)
Pixel Width X 6
Pixel Width Y 6
Kettu's script (no GI source in scene but GI Pass still enabled) 2 minutes 37.6 seconds
Max Diffuse Bounce 2
Max Specular Bounce Depth 2
Bucket Order Horizontal
Bucket size 16
Max Ray Depth 2
Pixel Samples X 4
Pixel Samples Y 4
Shadow Samples 8
Gain 1
Gamma 2.2
Shading Rate 1
Pixel Filter Sinc
Pixel Filter Width X 6
Pixel Filter Width Y 6
AMD 8350 8-core CPU and 16GB RAM