OK, we are still working on this, but here’s an example, seen from the overhead view.
1. The lens/filter system that combines the two images.
2. A target sphere that rides a path and can be slid back and forth. Wherever it is placed is where the viewing surface (for example, your monitor) will be.
3. The blue specks are the Stanford dragon model.
4. The red blob is Vicky, seen from above.
The lens system is parented to the camera, so it goes wherever the camera goes. The elements within the system are set to track the target sphere, and changing their relationship determines whether objects appear in front of, or behind, the viewing surface. The aim is, of course, to make the experience as simple and intuitive as possible for the user. Once we have completed testing, we will start on tutorials and documentation. It has proved a bit more complicated than expected, but we are getting there.
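To sketch the idea in pseudo-rig terms (all names and distances below are illustrative, not taken from our actual setup): the target sphere’s distance from the camera defines the zero-parallax plane, so anything nearer than the sphere appears to pop out in front of the screen, and anything farther recedes behind it:

```python
# Illustrative sketch only: classify scene objects relative to the viewing
# surface (zero-parallax plane) set by the target sphere's distance from
# the camera. Names and numbers are made up for the example.

def apparent_depth(object_distance: float, target_distance: float) -> str:
    """Where an object appears relative to the viewing surface."""
    if object_distance < target_distance:
        return "in front of screen"   # negative parallax: pops out toward viewer
    if object_distance > target_distance:
        return "behind screen"        # positive parallax: recedes into the scene
    return "on screen plane"          # zero parallax: sits at the monitor

# Example: target sphere slid to 5 units in front of the camera.
target = 5.0
scene = {"dragon": 8.0, "Vicky": 3.0}  # hypothetical distances from camera
for name, dist in scene.items():
    print(name, "->", apparent_depth(dist, target))
```

Sliding the sphere along its path just changes `target`, which is what moves objects across the screen plane without touching the objects themselves.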