I’ve been thinking about this lately, comparing what I see in real life with what I would expect to achieve in artwork, and have noticed something. One should really break down the distance of the viewed object into at least six levels: extreme distant, distant, mid, near, closeup, and extreme closeup. What’s necessary at each level varies vastly, of course, but what surprised me on observation is how even ‘near’ doesn’t need as much detail as we might expect, since our eyes don’t pick up the detail of the bark or the leaves, for instance. The trick, it appears, is that in real life a small amount of blurring is going on even at near distances compared to closeup (30 feet vs. 10, for example; it varies a bit from person to person). At 30 feet one could get away with a decent texture and a small amount of bump mapping. At 10, one really needs a displacement map to look good.
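The idea of matching the detail technique to the distance band can be sketched roughly like this. To be clear, the band boundaries and technique names below are my own illustrative guesses, not measured values; only the 30-foot and 10-foot points come from the observation above:

```python
# Rough sketch: pick a surface-detail technique from viewing distance.
# Band thresholds are illustrative guesses, not perceptual measurements.

def detail_level(distance_ft: float) -> str:
    """Map a viewing distance (feet) to one of six distance bands."""
    if distance_ft > 1000:
        return "extreme distant"
    if distance_ft > 300:
        return "distant"
    if distance_ft > 75:
        return "mid"
    if distance_ft > 10:
        return "near"
    if distance_ft > 2:
        return "closeup"
    return "extreme closeup"

def surface_technique(distance_ft: float) -> str:
    """Suggest the cheapest detail technique that still reads well."""
    return {
        "extreme distant": "flat color / billboard",
        "distant": "texture only",
        "mid": "texture + normal map",
        "near": "texture + small bump",    # ~30 ft: decent texture, a little bump
        "closeup": "displacement map",     # ~10 ft: real geometric detail needed
        "extreme closeup": "modeled geometry / micro-displacement",
    }[detail_level(distance_ft)]

print(surface_technique(30))  # texture + small bump
print(surface_technique(10))  # displacement map
```

The point of ordering it this way is that each step down in distance only buys the next-cheapest technique, rather than jumping straight to full displacement everywhere.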
3D software puts everything in sharp detail unless one uses DOF in the camera, and this isn’t exactly the same as DOF. Rather, this blurring runs in conjunction with DOF. Add to this motion blur, which kicks in at small amounts even with slow motion (like walking), and we see how it can add up. Our brains average all of this out, and we don’t notice it unless we consciously pay attention to it.
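To make the “it adds up” point concrete, here is a toy model of my own. It is purely illustrative (real perceptual blur is far more complex): it treats the distance blur, DOF, and motion blur as independent Gaussian blurs whose radii combine in quadrature, and every constant in it is an assumed placeholder, not measured data:

```python
import math

# Toy model: treat each blur source as a Gaussian and combine the radii
# in quadrature (sqrt of sum of squares), as independent Gaussians do.
# All constants are illustrative guesses, not measured perceptual data.

def distance_blur(distance_ft: float) -> float:
    """Small perceptual blur that grows gently with distance (pixels)."""
    return 0.05 * math.sqrt(max(distance_ft, 0.0))

def dof_blur(distance_ft: float, focus_ft: float) -> float:
    """Crude circle-of-confusion stand-in: blur grows away from focus."""
    return 0.1 * abs(distance_ft - focus_ft) / max(focus_ft, 1.0)

def motion_blur(speed_ft_per_s: float) -> float:
    """Even walking speed (~4 ft/s) contributes a little blur."""
    return 0.2 * speed_ft_per_s

def total_blur(distance_ft: float, focus_ft: float,
               speed_ft_per_s: float) -> float:
    """Combine the three sources into one effective blur radius."""
    parts = (distance_blur(distance_ft),
             dof_blur(distance_ft, focus_ft),
             motion_blur(speed_ft_per_s))
    return math.sqrt(sum(p * p for p in parts))
```

For example, walking past an object 30 feet away while focused at 10 feet gives a larger total blur than any one source alone, which is the averaging effect the brain smooths over.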
It would be interesting if we all went out over the next few days, consciously noted how much blur we perceive under various circumstances, and compared notes.
A final note on this: I think instancing and LOD are two areas with a lot of promise in this regard. Having said that, extreme closeups will probably always be their own beast unless one is animating into them.