displacement and subdivision?

Mistara Posts: 38,675
in The Commons

why does displacement painting need subdivision?

I thought, from the old days :) , displacement was meant to save memory by keeping geometry detail low?

Comments

  • Richard Haseltine Posts: 97,762

    That depends on the tool, as far as rendering goes; Lightwave and Carrara used to need actual mesh to displace, while Poser and DAZ Studio have supported sub-polygon displacement for as long as they've had displacement at all. Sculpting displacement usually does require actual mesh detail - certainly in all the tools I have; I'm not sure about Mudbox.

  • cwichura Posts: 1,042

    There are two main approaches to implementing displacement:

    1) Use the displacement map to modify the mesh. This requires that the mesh have sufficient vertex density relative to the detail level of your displacement map. E.g., if you have a plane that is a single quad (like many walls in environment props), applying a displacement map to it will give very poor results, as the only vertices are the four corners, and displacement can only push existing vertices around. Hence, you often have to subdivide the surface to get the necessary density. The advantage of modifying the mesh is that the displacement is applied once during scene prep, making render time quicker. It also means all the new resulting faces have full, proper normals, which gives more accurate shading. The downside is that it requires more memory to store the denser mesh. (There's a sketch of this approach after the list.)

    2) Calculate the displacement "on the fly" at ray intersection time. This allows almost any granularity, without increasing memory. But because the displacement is calculated on the fly, it slows down rendering. Also, some render engines (e.g., LuxRender's microfacet mode) don't calculate the new normal -- they use the normal of the original face. So you can get some slightly odd shadows. (I assume 3Delight is doing some form of microfacet displacement like this, but it may have more advanced support for it. I leave it to a 3Delight expert to chime in here.)
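
    For the curious, here's a rough numpy sketch of approach 1. It's not any particular engine's code, just the idea of subdividing a single quad and baking a height map into real vertices; the function name, the nearest-neighbour sampling, and the defaults are all illustrative. Approach 2 would do the same height lookup per ray hit instead of storing the dense grid.

    import numpy as np

    def displace_quad(height_map, subdivisions=64, strength=0.1):
        """Approach 1: bake displacement into real geometry.

        Subdivide a unit quad (facing +Z) into a grid, push each vertex
        along the quad's normal by the sampled height, then recompute
        shading normals so lighting matches the new shape.
        """
        n = subdivisions
        # (n+1) x (n+1) vertices over the quad's UV space; this denser
        # grid is exactly the extra memory cost mentioned above.
        u, v = np.meshgrid(np.linspace(0, 1, n + 1), np.linspace(0, 1, n + 1))
        h, w = height_map.shape
        # Nearest-neighbour sample of the height map at each vertex's UV.
        heights = height_map[(v * (h - 1)).astype(int), (u * (w - 1)).astype(int)]
        # Displace along the undisplaced normal (0, 0, 1).
        verts = np.dstack([u, v, heights * strength])
        # Recompute shading normals from the height gradient:
        # normal ~ normalize(-dz/du, -dz/dv, 1).
        dz_dv, dz_du = np.gradient(verts[..., 2], 1.0 / n)
        normals = np.dstack([-dz_du, -dz_dv, np.ones_like(dz_du)])
        normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
        return verts, normals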

    When it comes to 'high resolution, low geometry', I believe normal maps are what you're thinking of there, not displacement. Normal maps (and bump maps, which you can think of as a simplified version of a normal map that only points straight out along the face's normal) work by changing the angle of incidence on a ray as if the surface was actually displaced. They are fast and don't require any more memory than the texture map requires. Their main drawback is that they do not support self-shadowing. For things like skin pores, that's not a big deal. For cobblestones, brick, or other things with large differences in height, the lack of self-shadowing becomes quite noticeable and breaks the illusion. In that case, you need to use displacement, since displacement fully supports self-shadowing.

  • wiz Posts: 1,100
    edited February 2015

    Mistara said:
    why does displacement painting need subdivision?

    Displacement works by subdivision. It just does so inside the 3D program and/or render engine, in a way that's reasonably invisible to the user.
    Mistara said:
    I thought, from the old days :) , displacement was meant to save memory by keeping geometry detail low?


    Primarily, it's to allow detail to be added via fairly easy tools. With a "friendly" UV mapping, displacement can be painted on in Photoshop. Any artist who can texture can be displacing like a pro with minimal training. It moves the detailing work to a different desk, lol.

    Secondarily, it's really useful for animation. Some programs (not DS, unfortunately) support animating anything that can be mapped: texture, specularity, bump, even displacement. I've animated pulsing veins, insects crawling under skin, aging, etc. that way, by animating the displacement maps.
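
    Since DS won't do it for you, here's a made-up numpy sketch of the idea: generate one displacement map per frame and let the renderer pick up the sequence. Everything here (the function name, the ridge shape, the one-beat-per-second pulse) is illustrative, not any program's actual feature.

    import numpy as np

    def vein_displacement(frame, size=256, fps=24):
        """One displacement map per frame: a soft ridge whose height
        pulses over time, for a rough 'pulsing vein' effect."""
        u = np.tile(np.linspace(0, 1, size), (size, 1))      # per-pixel U coord
        ridge = np.exp(-((u - 0.5) ** 2) / 0.002)            # ridge along u = 0.5
        pulse = 0.5 + 0.5 * np.sin(2 * np.pi * frame / fps)  # ~1 beat per second
        return ridge * pulse  # grayscale heights in [0, 1]

    # Two seconds of maps at 24 fps; a renderer that supports animated
    # textures (or re-reads the map each frame) picks up the motion.
    frames = [vein_displacement(f) for f in range(48)]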

  • Richard Haseltine Posts: 97,762

    cwichura said:
    Also, some render engines (e.g., LuxRender's microfacet mode) don't calculate the new normal -- they use the normal of the original face. So you can get some slightly odd shadows. (I assume 3Delight is doing some form of microfacet displacement like this, but it may have more advanced support for it. I leave it to a 3Delight expert to chime in here.)

    Yes, the normal is updated - you can see that if you look at a Shader Mixer shader with displacement.

    cwichura said:
    Normal maps (and bump maps, which you can think of as a simplified version of a normal map that only points straight out along the face's normal) work by changing the angle of incidence on a ray as if the surface was actually displaced.

    A bump map is a height map, like displacement. The renderer calculates the new normal direction from the heights and uses that for lighting, so the end result would be the same as a normal map (which directly encodes the direction in which the surface faces). The main advantage of normal maps, especially for games and other time-critical tasks, is that they don't need to have their normals calculated on use.
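
    For anyone who wants to see that step, here's a minimal numpy sketch of deriving normals from a bump (height) map via finite differences. It assumes tangent space, and the names and gradient scheme are illustrative, not 3Delight's actual code.

    import numpy as np

    def bump_to_normals(bump_map, strength=1.0):
        """Derive shading normals from a bump map's height differences.
        A normal map skips this step by storing the directions directly."""
        dh_dv, dh_du = np.gradient(bump_map.astype(float) * strength)
        normals = np.dstack([-dh_du, -dh_dv, np.ones_like(dh_du)])
        normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
        # A tangent-space normal map would store these as RGB = 0.5 * n + 0.5,
        # which is why games can skip the per-use calculation entirely.
        return normals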

  • cwichura Posts: 1,042

    That was me poorly saying that various engines these days internally convert bump maps to normal maps, as the engine only processes normals.
