Sorry for the heated debate in my absence. I'm not able to contribute as much as I would like; apologies to all. It's all pretty simple when you boil it down.
I'm still without internet service at home, so I am posting from work again…
I guess the only thing I can say is that I have already tried things the way Len would do them. The problem is that it doesn't look right, as evidenced by my renders. I will explain, and I might jump around a bit, so bear with me.
The basic problem is that where the sand and the water converge there is a phase change. Phase changes are always a challenge.
The size or detail of the scene is not itself the issue; it has more to do with the multiplicity of viewpoints. That's what leads to trouble.
If I were to do it the simple way, building the scene to be rendered from only one viewpoint, I might be able to pull off what Len suggests. But because specular and reflection are both view-dependent, when I move the camera my specular and reflection move too. Often this causes a polygon that once displayed lots of specular to show none at all, simply because the camera no longer aligns with the key light source the way it used to.
Relying on specular and reflection to marry the sand to the water is flawed in this case for several reasons. If the water is a flat plane, you can assume that all normals face the same direction, giving the same specular highlights to the camera view. But if the terrain surface is dynamic, many of the polygons will face in different directions, meaning there is a lot less specular to hide behind. It's the same idea as how a smooth surface reflects more than a matte or bumpy one.
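The view- and normal-dependence described above isn't Bryce-specific; it falls straight out of the standard specular math. Here's a minimal sketch using the Blinn-Phong half-vector model (plain Python, illustrative vectors only): move the camera off the light's mirror direction, or tilt the polygon's normal, and the highlight collapses even though the material never changes.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def blinn_phong_specular(normal, to_light, to_camera, shininess=64):
    """Specular intensity depends on BOTH the surface normal and the
    camera direction (Blinn-Phong half-vector model)."""
    l = normalize(to_light)
    c = normalize(to_camera)
    h = normalize(tuple(a + b for a, b in zip(l, c)))  # half vector
    n = normalize(normal)
    return max(0.0, sum(a * b for a, b in zip(n, h))) ** shininess

# Flat water plane, camera sitting on the light's mirror direction: full highlight.
flat = blinn_phong_specular((0, 1, 0), (0, 1, 1), (0, 1, -1))

# Same surface, same light, but the camera moves off-axis: the highlight collapses.
camera_moved = blinn_phong_specular((0, 1, 0), (0, 1, 1), (1, 1, 0))

# Same camera as `flat`, but the polygon's normal is tilted (wavy terrain water):
# the highlight collapses again, with no change to the material at all.
normal_tilted = blinn_phong_specular((0.5, 1, 0), (0, 1, 1), (0, 1, -1))
```

That last case is the terrain-water problem in miniature: a dynamic surface scatters its normals, so most polygons never line up camera, light, and normal at once.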
Terrain water, even with the same material settings as an infinite plane, will look totally different because the specular, reflection, and refraction are all chopped up by the waves. And refraction is another consideration entirely.
Sometimes we are concerned only with the surface of the water, but that too is an error; there is a certain amount of volume to consider. As the real-world examples showed, water changes color based on its depth: the deeper the water, the more light it absorbs and redirects. So I have used a terrain but set it to "solid", so that the water has volume at depth just like real water.

The problem in Bryce is with the Volume and Transparent Color channels. These channels do not care whether a swatch of water is a millimeter deep or twelve feet deep; they color both the same way, like colored glass. Once again Bryce's lack of absorption ruins my plans for ultimate water. (FYI Len, that in-scattering example David made with the jade dragon is not actually possible to repeat. He literally "broke" the Material Lab to pull it off. Needless to say it was a hack; it's not native to Bryce. Maybe in the next version, but not today.)

To force some degree of in-scattering, I applied an altitude filter that is white at the top and dark blue near the base. It should create a sinking feeling. But surprisingly this still did not bleach my coastline. For that I had to do a top-view render where I could blend the water along the edges to be clearer than the water further out. But as evidenced, it's still not enough.
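The absorption being described here is the Beer-Lambert law: transmitted light falls off exponentially with path length through the water. A quick sketch (plain Python, not Bryce; the absorption coefficient is an assumed, illustrative value) shows why one fixed "transparent color" cannot serve both a millimeter-deep shoreline film and twelve feet of open water:

```python
import math

def transmittance(depth_m, k_per_m):
    """Beer-Lambert law: fraction of light surviving `depth_m` meters of
    water with absorption coefficient `k_per_m` (per meter)."""
    return math.exp(-k_per_m * depth_m)

K = 0.4  # assumed coefficient; real water absorbs differently per wavelength

shallow = transmittance(0.001, K)  # millimeter-deep film at the coastline
deep = transmittance(3.66, K)      # roughly 12 feet of water column
```

A fixed transparent color behaves as if `transmittance` ignored `depth_m` entirely, applying the same tint to both cases, which is exactly the colored-glass behavior complained about above. The shallow film should come out nearly clear while the deep column is strongly tinted.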
Now let's go back for a moment to Len's exercise that has led to the debate. Len is not wrong in his proposal; the problem is that I already tried it and it doesn't work. The basic idea of Len's post is that I create a water surface, place an object into the water, then apply filters to the object so that it reaches the proper level of specular and reflection at the point where the water touches it. In theory this should create a seamless progression from dry sand to wet sand to full water.
But at the scale I am working at, the wave pattern of the water surface prevents me from matching these ideals perfectly. Even after creating a top-view mask, I am still unsatisfied with the way it looks up close.
Here are a few hand-drawn examples made in basic MS Paint to explain why the mask idea fails, at least when it comes to blending the water into the sand.
You’ll either believe me or not when I tell you this, but that’s pretty much what I suspected you were getting at.
Thing is, though, looking at your diagram still tells me it should work. You've confirmed (and conveyed) perfectly what you're getting at, but surely all you have to do to fix it is ensure that the gaussian-blended part doesn't start until you're past that safe zone?
In other words, don't just blend it; make sure the blend starts far enough out to allow for the differences in geometry height that are causing the problems. Think of that allowance as the safe zone. Because the surface of the water has exactly the same surface properties as the closest edge of the sand, you shouldn't have any problem as long as there's sufficient distance to allow for the highest point that would otherwise give you trouble.
Your diagram needs another part, and it falls somewhere along the area you have marked as the "gaussian progression". Where exactly depends on how high the highest problem point is (the part of the water that deviates the most).
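One way to make the safe-zone idea concrete (a plain Python sketch, not Bryce; the wave height, beach slope, and blend width are hypothetical numbers, not values from the actual scene): hold the material fully matched until you are past the horizontal reach of the tallest deviating wave, and only then begin the gaussian-style progression.

```python
import math

def blend_weight(dist_from_shore, max_wave_height, slope, blend_width):
    """0 = shore-matched material, 1 = full open-water material.
    The blend does not begin until we are past the 'safe zone': the
    shoreline distance within which the highest deviating wave could
    still break the match between water surface and sand."""
    safe_zone = max_wave_height / slope  # horizontal reach of the tallest wave
    t = (dist_from_shore - safe_zone) / blend_width
    t = min(1.0, max(0.0, t))
    # gaussian-style smooth progression across the blend region
    return 1.0 - math.exp(-4.0 * t * t)

# hypothetical numbers: 0.2 m waves on a 1:20 beach slope, 5 m blend width
inside = blend_weight(1.0, 0.2, 0.05, 5.0)   # still inside the safe zone
outside = blend_weight(9.0, 0.2, 0.05, 5.0)  # well past the safe zone
```

The point Len's diagram comment is making falls out of `safe_zone`: taller deviation means the blend's start point has to move further out before the progression begins.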
Attached Diagrams By Rashad: