I’m not getting defensive; a bit annoyed, perhaps, because it seemed no one understood a simple question.
The reason I knew it wouldn’t be any different without trying it is that the outcome was obvious; there was no need to test it. It wouldn’t have mattered whether it was a terrain, a plane, naked Vickie or whatever else he was intersecting: if one is white and one is black, then you have a template when you hit render. There was nothing to figure out there, or even to test. I wasn’t thinking about Rashad’s mega-scene stuff, and there was no mention of it in his post.
Anyway, forget I posted, back to normality - lol
Sorry for the heated debate in my absence. I’m not able to contribute as I would like; apologies to all. It’s all pretty simple when it boils down.
I’m still without internet service at home, so I am posting from work again...
I guess the only thing I can say is that I have already done things in the manner Len would do it. The problem is that it doesn’t look right, as evidenced by my renders. I will explain, and I might jump around a bit, so bear with me.
The basic problem is that where the sand and the water converge there is a phase change. Phase changes are always a challenge.
The size or detail of the scene is not itself the issue; it has more to do with the multiplicity of viewpoints. That’s what leads to trouble.
If I were to do it the simple way, building the scene to be rendered from only one viewpoint, I might be able to pull off what Len suggests. But because specular and reflection are both view-dependent, when I move the camera my specular highlights and reflections will also move. Oftentimes this will cause a polygon that once displayed lots of specular to appear without any, simply because the camera no longer aligns with the key light source as it used to.
It is a flaw, to my thinking, to rely on specular and reflection to do the marrying of sand to water in this case, for several reasons. If the water is a flat plane, you can assume that all normals face the same direction, giving the same specular highlights to the camera view. But if the terrain surface is dynamic, then many of the polygons will face in different directions, meaning there will be a lot less specular to hide behind. It’s like how a smooth surface reflects more than a matte or bumpy one; it’s the same idea.
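To make the view dependence concrete, here is a minimal Blinn-Phong-style sketch in Python. This is not Bryce’s actual shading code, just the standard half-vector specular formula with illustrative numbers: the same flat-water normal gives a full-strength highlight when the camera sits in the light’s mirror direction, essentially none once the camera moves aside, and a wave-tilted normal scatters the highlight even for the well-placed camera.

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def specular(normal, to_light, to_camera, shininess=64):
    """Blinn-Phong specular term: depends on BOTH light and camera direction."""
    half = normalize(tuple(l + c for l, c in zip(to_light, to_camera)))
    n_dot_h = max(0.0, sum(n * h for n, h in zip(normal, half)))
    return n_dot_h ** shininess

flat  = (0.0, 1.0, 0.0)              # flat water plane: all normals agree
wave  = normalize((0.4, 1.0, 0.0))   # a wave-tilted polygon normal
light = normalize((0.0, 1.0, 1.0))
cam_a = normalize((0.0, 1.0, -1.0))  # camera in the light's mirror direction
cam_b = normalize((1.0, 0.3, 0.0))   # camera moved off to the side

print(specular(flat, light, cam_a))  # strong highlight (1.0)
print(specular(flat, light, cam_b))  # highlight effectively vanishes
print(specular(wave, light, cam_a))  # tilted normal: highlight scattered
```

Moving the camera, or perturbing the normal, changes the specular term by many orders of magnitude; nothing about the material itself changed.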
Terrain water, even with the same material settings as an infinite plane, will look totally different because the specular, reflection and refraction are chopped up by the waves. Now, refraction is another consideration.
Sometimes we are concerned only with the surface of the water, but this too is an error. There is a certain amount of volume to consider. As the real-world examples showed, water changes color based on its depth: the deeper the water, the more light it absorbs and redirects. Thus I have used a terrain but set it to “solid” so that the water has volume at greater depths, just like real water.

The problem in Bryce is with the Volume and Transparent Color channels. These channels do not care whether a swatch of water is a millimeter deep or 12 feet deep; they will still color it the same way, like colored glass. Once again, Bryce’s lack of absorption ruins my plans for ultimate water. (FYI Len, that in-scattering example David made with the jade dragon is not actually possible to repeat. He literally “broke” the Material Lab to pull that off. Needless to say, he had to hack it; it’s not yet native to Bryce. Maybe in the next version, but not today.)

To force some degree of in-scattering, I have applied an altitude filter that is white at the top and dark blue near the base. It should create a sinking feeling. But surprisingly, this still did not bleach my coastline. For that I had to do a top-view render where I could blend the water along the edges to be clear compared to further out. But as evidenced, it’s still not enough.
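The missing ingredient is depth-dependent absorption. A back-of-the-envelope Beer-Lambert sketch in Python (the per-channel coefficients are made up for illustration, not measured values) shows why a millimeter of water and twelve feet of it should not be tinted the same way, which is exactly what Bryce’s depth-blind channels get wrong:

```python
import math

# Beer-Lambert: transmitted light = exp(-k * depth), per color channel.
# Red is absorbed fastest in water, blue slowest (illustrative coefficients).
ABSORB = {"r": 0.45, "g": 0.12, "b": 0.05}   # per metre, made-up values

def transmitted(depth_m):
    """Fraction of each channel surviving a path of depth_m through the water."""
    return {ch: round(math.exp(-k * depth_m), 3) for ch, k in ABSORB.items()}

print(transmitted(0.001))  # ~1 mm: essentially no tint at all
print(transmitted(3.7))    # ~12 ft: red mostly absorbed, water reads blue-green
```

A "colored glass" transparency channel applies the same tint regardless of path length; the exponential above is what makes shallow water read clear and deep water read blue.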
Now let’s go back for a moment to Len’s exercise that has led to debate. Len is not wrong in his proposal; the problem is that I already tried it and it doesn’t work. The basic idea of Len’s post is that I create a water surface, place an object into the water, then apply filters to the object so that it reaches the proper level of specular and reflection at the point where the water touches it. In theory this should create a seamless progression from dry sand to wet sand to full water.
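For what it’s worth, the altitude-driven blend Len describes can be sketched like this. It is a toy Python version with hypothetical material numbers, not Bryce’s filter editor: specularity sits at a water value below the waterline, then ramps from wet-sand down to dry-sand over a narrow band above it.

```python
def lerp(a, b, t):
    return a + (b - a) * t

def smoothstep(edge0, edge1, x):
    """Clamped cubic ramp from 0 at edge0 to 1 at edge1."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3 - 2 * t)

# Illustrative specularity values (not Bryce's actual channel numbers).
DRY_SAND_SPEC, WET_SAND_SPEC, WATER_SPEC = 0.05, 0.35, 0.9

def shore_specularity(height, waterline=0.0, wet_band=0.5):
    """Blend specularity by altitude: water below, a wet-sand band just above."""
    if height <= waterline:
        return WATER_SPEC
    t = smoothstep(waterline, waterline + wet_band, height)
    return lerp(WET_SAND_SPEC, DRY_SAND_SPEC, t)

for h in (-0.2, 0.1, 0.25, 0.6):
    print(h, round(shore_specularity(h), 3))
```

This is the “in theory” part; as noted, a wavy water surface means the true waterline is not a single altitude, which is where the ideal breaks down.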
But due to the scale I am working at, the wave pattern of the water surface prevents me from matching these ideals perfectly. Even after creating a top-view mask, I am still unsatisfied with the way it looks when viewed up close.
Here are a few hand-drawn examples made in basic MS Paint to explain why the mask idea fails, at least when it comes to blending the water into the sand.