Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
The DAZ primitives use a simple projection mapping (cubic for the cube, etc.) to give them a UV map.
But in the case of the hair...one possible cause...it's like you've got the shader set for the U direction, but the way the mapping is, you are getting the V direction..or more specifically, what you would EXPECT the V direction to be, if it were mapped correctly. I guess the best way to describe it...you are seeing a phase shift because the mesh is mapped 'crosswise' instead of 'lengthwise'...
And it's compounded by the 'twisting' of the planes that make up the hair. (and on that one, it looks like they may be 'cupped' too...)
I've done some more tests, and as far as I can tell, something is definitely up with the glossy metallic option for me in the default brick.
The two objects below use the exact same shader, and both objects are subdivided. I personally deformed and borked the UV mapping of a sphere in Blender. Everything other than basic objects goes disco.
Can someone test to see whether it's just something messed up on my computer?
Can you post a screenshot of your network?
I'm in the middle of what is going to be about a 10 hr render, or I'd run some tests, now.
Here's my current test. One nice thing that's come out of this is that I updated Studio. Apparently there's a new specular block. Notice how the sphere in the test scene on the left doesn't have the weird specular squares, but the instant the shader is applied to a more interesting object in a scene (not that interesting, it's a plane I draped in Blender, but its base is just a plane) it goes weird.
What happens if you set the tile repeats to 1?
Absolutely nothing in terms of the specular. My working hypothesis is that it has something to do with subdivision; the squares of the specular line up exactly with the edges of the base mesh. But beyond that I don't know. I've tested different render settings... pretty much anything I can think of. Also, the anisotropic shader in UberSurface works, so there must just be something somewhere I'm missing.
Please make a bug report, with as much information as possible.
Slightly off topic, but still related.
I've been playing around with progressive rendering and other render options. Then I started noticing that hair renders aren't just faster, but textures that have seam issues (needing the use of lower shading rates) don't have seam issues at all with progressive rendering. At first I thought it was the use of the box filter, but I tried rendering the usual way with the box filter (and the same 1:1 pixel filter width) and the progressive render still finishes much quicker.
This is only for renders with hair or a lot of transmapped surfaces though. Progressive rendering still takes longer to finish with just figures and not a lot of transmapped surfaces.
Anybody else notice this?
Thanks! I didn't recognise it from the side view.
That's cool!
Hi everyone,
Two questions for today; if anyone has even the vaguest idea, I'd be very grateful:
1) How to write a shadow catcher? I know there is a brick in the SMixer, but I want to write a non-"black box" one. Google doesn't really seem to help. I don't need exact code, but just the general procedure - I'm drawing a blank completely as to the algorithm.
2) How to write proper gamma controls into the point-cloud script?
I'll be nice and try to answer.
Q1/ There could be many techniques, depending on how you calculate your shadow.
I don't know how the DS brick works, but it could be simple message passing to the surface shader. Calculating the shadow catcher could then be pretty simple: you just set the color to black and the opacity to 1 when in shadow, and set the opacity to 0 otherwise.
Or you could go a step further and use some linear function for the opacity and color, with the same technique.
Or you can calculate the occlusion on the surface and set the opacity equal to the occlusion.
These are simple examples, and you could use other derivative techniques.
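The occlusion-based version above can be sketched in a few lines of Python, just to illustrate the compositing math. (A real shadow catcher would be written as an RSL surface shader; the `catcher_pixel` and `composite_over` names here are made up for the example.)

```python
# Shadow-catcher sketch: the catcher surface is pure black, and its
# opacity is set to the occlusion amount, so only the shadowed areas
# darken whatever backplate the render is composited over.

def catcher_pixel(occlusion):
    """Return (color, alpha) for a shadow-catcher surface sample.
    occlusion is 0.0 (fully lit) .. 1.0 (fully shadowed)."""
    color = (0.0, 0.0, 0.0)   # catcher renders black
    alpha = occlusion          # opaque only where shadowed
    return color, alpha

def composite_over(fg, alpha, bg):
    """Standard 'over' compositing of a single channel."""
    return fg * alpha + bg * (1.0 - alpha)

# A fully lit sample leaves the backplate untouched...
color, a = catcher_pixel(0.0)
print(composite_over(color[0], a, 0.8))   # -> 0.8

# ...while a fully shadowed sample goes black.
color, a = catcher_pixel(1.0)
print(composite_over(color[0], a, 0.8))   # -> 0.0
```

In-between occlusion values give the "linear function" variant mentioned above for free, since the shadow just scales the backplate.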
Q2/ I don't see the need for a gamma control in the point-cloud script. You just have to adjust your gamma slider in the render panel. The only thing missing is the Gamma ON/OFF button, but you can set that in the normal render panel and then switch to ptc rendering, and it seems to keep the parameter, as it recalculates all textures with tdlmake each time I toggle Gamma ON/OFF and render in ptc mode. (I didn't dig further than that, but you could check whether the tdlmake'd textures are gamma corrected or not.)
I have a question. How do you make chromatic aberration in DS?
You fake it...and do it in post.
Dispersion can be done with some fancy shader work, but I haven't seen anything for aberration.
I was afraid that was going to be the answer. While doable for stills, it's hard to do convincingly in animation.
You could do it in AE with a plugin:
http://www.redgiant.com/videos/quicktips/item/145/
Oh, does it? *checks again* Yeah, apparently it does... it's just that on my machine(s) the scripted renderer resets its parameters to default whenever I save a render under a new name (is it just me?), so I thought it was resetting the on/off switch along with the gamma value.
I'm not sure how to compare .tdl files to the original textures properly - i-display opens .tdl but not jpeg, and IrfanView does not open .tdl... the .tdl in i-display looks darker to me than the jpeg in IrfanView, but not as dark as the jpeg looks if I correct it to 0.45 gamma...
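For reference, the 0.45 figure above is just 1/2.2, so the brightness comparison can be sanity-checked with a couple of lines of plain Python (nothing DS-specific here):

```python
# Gamma encode/decode: "correcting to 0.45 gamma" raises linear values
# to the power 1/2.2 for display; applying 2.2 decodes back to linear.

def encode(v, gamma=2.2):
    return v ** (1.0 / gamma)   # linear -> display (the ~0.45 curve)

def decode(v, gamma=2.2):
    return v ** gamma           # display -> linear

mid = 0.5
print(round(encode(mid), 3))          # mid grey brightens to -> 0.73
print(round(decode(encode(mid)), 3))  # round-trips back to -> 0.5
```

So a texture that looks darker in i-display than in IrfanView, but not as dark as a 0.45-corrected copy, could be sitting anywhere between the two curves; the viewers themselves may also apply their own display gamma.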
I haven't had any problem with it being reset so far, so I can't really say much about that. We'd have to ask other people to test whether they have the problem with the Gamma ON/OFF knob.
To open a TDL file you need an image editor that can read TIFF files, if I'm correct.
You need an unbiased renderer with spectral light and a shader that will calculate the colors on the decomposed wavelengths.
Not doable inside DAZ with actual shaders. But you could do that with Luxrender or Octane.
I never tried but I guess it is also doable in Blender with Cycles.
I find Cycles very powerful and it is very affordable in term of price, and you can get quick renders depending on your Graphic card
With Renderman it is possible to fake it but you have to write the shader depending on what you want to get and what your animation is.
I find it kind of funny that in the world of photography they go to such lengths to reduce/eliminate it, while we in the '3D world' try to create it.
There is one YouTube video showing a RenderMan shader-based CA, but it is described as an 'in-house' shader that is not going to be publicly released, with no mention of anything else. So it's either a very 'expensive' shader that will take a very long time to render, or another way of 'faking' it.
And there is a way to do it in Maya, but I think it involves some Maya specific camera shaders/code.
Unbiased, physically based renderers... yeah, they'll do it.
I've just seen the video comments. It seems there is some TCL involved, from the author's comment, and not just pure RSL. And we don't know how he feeds his shader or how the TCL was involved in that. He is also outputting AOVs, so there could be some post-compositing as well. All these things would make it pretty much unusable for DS.
I'm not sure, but I think I'd try the batch image processing route after some experiments in a 2D picture editor.
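A minimal sketch of what that batch step could do, in pure Python (the "image" is faked as a single row of RGB tuples, and `shift_channels` is an invented name; a real batch tool or a 2D editor's channel-offset feature would do the same thing per pixel):

```python
# Fake lateral chromatic aberration in post by sliding the red and
# blue channels a pixel or two in opposite directions, leaving green
# in place. Here an "image" is just one row of (r, g, b) tuples.

def shift_channels(row, offset=1):
    out = []
    n = len(row)
    for x in range(n):
        # clamp the sampling index at the image edges
        r = row[min(max(x - offset, 0), n - 1)][0]  # red image shifted right
        g = row[x][1]                                # green untouched
        b = row[min(max(x + offset, 0), n - 1)][2]  # blue image shifted left
        out.append((r, g, b))
    return out

# A single white pixel on black picks up a blue fringe on one side
# and a red fringe on the other.
row = [(0, 0, 0), (0, 0, 0), (255, 255, 255), (0, 0, 0), (0, 0, 0)]
print(shift_channels(row))
```

For animation you would simply run the same offset over every frame, which is exactly what the AE plugin linked earlier automates (with a radial falloff instead of a constant offset).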
Theoretically...it is all doable in DS, but like so many features of 3DL, there's no easy way to get there from here. Take the pointcloud script...it came into being long after PC was a feature of 3DL. Same with the hair plugins...the interface to be able to access the feature doesn't yet exist.
And while the DS scripting (essentially Qt) is probably up to the task, I have no idea where to begin to get from point A to point B.
So back to my original statement...do it in post (and Kevin's recommendation of that AE plugin is probably one of the easier ways...if you've got AE).
The more commercially viable D/S becomes, the more we'll see the kind of plugins that take advantage of 3DL. It'll take commercials and, possibly, a film to be produced to get there....
3Delight is already used to make movies. The DS approach is for 3D amateurs, very far from a usable movie pipeline. I don't see that happening in the near future. Many things are lacking to get there; it is not only the shaders but everything else.
Blender is closer to what I'd expect for creating a movie as it has fluid, particles, smoke, dynamics, hair, etc...
https://www.youtube.com/watch?v=ZMBPuuqTk_M
Made with Poser which, like D/S, is for amateurs.
Don't like those two. Luxrender is way too slow, and Octane has a lot of hardware restrictions (proprietary hardware/API, memory limitations). Plus a GPU is very power hungry.
I've been playing with this - http://www.fluidray.com/ - and so far I like it, a lot. There are practically none of Octane's restrictions (resolution and hardware requirements) for the free beta download. Noise and fireflies disappear much, much quicker than in Luxrender 1.3. If I go the hardware route, I'd probably go with Caustics - http://www.caustic.com/visualizer/maya/
Here's a video of it in action - http://www.youtube.com/watch?v=h5mEEho0ago
Notice the time it took the image to clear up. Now that's FAST!
Okay....
I wrote this a while back. Some of you don't seem to understand what you have with the Render Engine D/S Gives you:
http://wancow.deviantart.com/journal/3Delight-The-2-Render-Engine-in-Hollywood-366276410
Please read it so I don't have to keep repeating its content... setting all y'all straight is tiresome...
And Takeo: D/S can do just about anything any other software can do IF the plugin is written to do it. It's a platform. A very nice, simple platform that is infinitely customizable. It simply requires some investment... now, piracy and a whole lot of other factors make that an economic gamble, but it could still be profitable.
I was just watching this:
http://www.dream-lounge.com/dreamlight/sfx_Pro_overview_final_editl.wmv
That's been available since D/S2...
Okay, yeah, Maya is the granddaddy of all platforms... Blender is great and all, but has anyone used it to produce a feature film? You say they can, but has it been done? Its biggest obstacle has been its less-than-intuitive interface.
Yes, we need a new particle plugin and a water plugin and a dynamics plugin and a decent third party cloth plugin...
But you guys simply don't seem to understand what you already have.
@Wowie : Have a look at that http://www.blendernation.com/2014/03/18/super-fast-real-time-rendering-with-combined-rendering-technologies/
Just have to wait a bit to know how it was made
@Wancow: I know what 3Delight is capable of. I'm just saying DS is a limiting factor. It's very different when used in another 3D package. There are examples on the web you can see from Maya or Softimage.
For movies made with Blender you can watch
Sintel : https://www.youtube.com/watch?v=HomAZcKm3Jo
Tears of Steel https://www.youtube.com/watch?v=R6MlUcmOul8
Caminandes https://www.youtube.com/watch?v=XCejtdKK40o&hd=1
Big Buck Bunny https://www.youtube.com/watch?v=t4gjl-uwUHc&hd=1
Elephants dream https://www.youtube.com/watch?v=MFg8mBaMoPs&hd=1
If you search for "blender movie" in Youtube you'll see a lot of others.
Quite nice.
I think it's pretty similar to Furryball for Maya (http://furryball.aaa-studio.eu), which combines ray tracing and a real-time OpenGL renderer. You can pretty much do AO and SSS in real time now (on a high-end GPU), but reflections/refractions are trickier (outside of baking everything). Doing them with ray tracing makes the code simpler, and since it's an embarrassingly parallel problem, it's well suited to GPUs.
Unfortunately, because GPUs don't have true unified memory addressing with the CPU, you're still going to have memory limitations (keeping all the data in local memory) or performance hits (i.e. moving data between the GPU and the CPU). The most interesting pure GPU renderer I've seen so far is Redshift3D (https://www.redshift3d.com/), which allows you to put only scene geometry data in local GPU memory and pretty much stream texture data from main memory. Although it also uses CUDA, they have plans for an OpenCL port, which means it should be portable to other hardware.
ImgTech's Caustic approach works a little differently. If I understand correctly, they're using the accelerator boards just to do ray intersections. The shading is still done on the CPU side. The end effect is the same - you only need to send geometry data to the accelerator's local memory and keep everything else on the CPU. So technically, you can still use the same code you're using for materials.
The thing I love about ImgTech's approach is that their entire board (the high-end one with two chips and 16 GB RAM) consumes only 60 watts at most. They claim typical usage will only push it to 40 watts. Compare that to high-end GPUs, which can consume something like 250 or 300 watts. It still has the drawback of not having unified memory with the CPU, but it uses less power, outputs less heat and has a lot more memory than your typical consumer GPU.
@Wowie: The two renderers are very interesting. However, it's a pity that they are again limited to Autodesk products.
OK, I know that Autodesk is pretty much the only 3D package provider in the industry.
I thought of using the GPU a long time ago to speed up renders inside DS but didn't have time to try anything
In terms of hardware, because ATI isn't pushing OpenCL to be better, you end up choosing Nvidia, because almost every GPU renderer is made for CUDA.
The OpenCL Standard changed last year in order to catch up with Cuda, if I'm correct, but it takes time for the implementation.
I'd like it to be better because OpenCL would work on both AMD and Nvidia
Btw, if you can use Blender a bit, you can try Corona render. I did a quick test with it the other day and it was blazing fast on the CPU. I don't know yet how it would go if I tried chromatic aberration or more complicated scenes. There are a bit too many new things to play with for me these days. Don't know how I could make the days longer.
Those Corona renders are very nice!
Actually, my new HDRI light maps create chromatic aberration in DS as a result of indirect/direct shadows.
However, chromatic aberration is the result of a camera lens.
I faked it before in Carrara by grouping three lights together, red-green-blue (for a fake full spectrum), with a nice result; I haven't tried it in DS.