Settings and parameters for rendering 360° stereoscopic images in DS iRay

Aim: render a 360° stereoscopic image to be viewed in VR headsets, using iRay in DS.

Question: how should the cameras and their parameters be set up? What are the parameters supposed to do, and how do they influence the result?

So far, by experimenting, I came up with a basic solution: two cameras, one parented and zeroed to the other, with Lens distortion type = spherical and Lens stereo offset (mm) = ±IPD/2 (half the interpupillary distance). For example, with a 67 mm IPD, the "right" camera gets +33.5 mm and the "left" one gets -33.5 mm.

The thing I learned is that the cameras themselves are not to be displaced relative to each other; it is the Lens stereo offset that does the computation. The difference is remarkable: displaced cameras (no offset) yield a correct result only when, in VR, we look with our head pointing forward and not tilted (assuming the displacement was in the x direction), but with both cameras in the same place and using the offset, the result is practically correct for "any" orientation and tilt.
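
My mental model of what the offset does (an assumption on my part, not something I have from the documentation): instead of two fixed viewpoints, each rendered direction gets its own eye position on a circle of radius IPD/2, with the baseline kept perpendicular to the current view direction. This is the standard omnidirectional stereo (ODS) construction; a minimal Python sketch of the geometry:

```python
import math

def ods_eye_origin(yaw_rad, ipd_mm=67.0, eye=+1):
    """Per-direction eye position for an omnidirectional stereo (ODS)
    projection: the stereo baseline rotates with the viewing yaw so it
    stays perpendicular to every ray.

    yaw_rad -- horizontal view angle (0 = straight ahead)
    eye     -- +1 for the right eye, -1 for the left eye
    Returns an (x, z) ground-plane position in mm (the exact axis and
    sign conventions depend on the application's coordinate system).
    """
    r = eye * ipd_mm / 2.0  # the per-camera "stereo offset"
    return (r * math.cos(yaw_rad), r * math.sin(yaw_rad))

# Looking forward vs. 90 degrees to the side: the eye swings around the
# rig centre, which is why head rotation stays correct in the headset.
print(ods_eye_origin(0.0))            # (33.5, 0.0)
print(ods_eye_origin(math.pi / 2.0))  # (~0.0, 33.5)
```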

The offset should be related to the IPD to yield a correct sense of scale. This is especially relevant at short distances: there is an expectation of how big a figure (this being DS, let's say a human figure) should appear, and it relates to how much our eyes converge on it. When this relation is off, our brain "corrects" by estimating a modified size for the object. If it looks too big, the offset (i.e. the rendered IPD) is too small; if it looks too small, the offset is too big.
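
A rough triangulation argument for why this happens (back-of-the-envelope reasoning, not measured data): the headset shows the disparity produced by the rendered baseline, but the brain interprets it with the viewer's real eye separation, so perceived distance, and with it perceived size, scales by the ratio of the two. A small Python illustration:

```python
def perceived_scale(viewer_ipd_mm, rendered_ipd_mm):
    """Small-angle triangulation estimate: the same on-screen disparity,
    read with the viewer's real IPD, makes perceived distance (and so
    perceived size, since the angular size is unchanged) scale by the
    ratio of the two baselines.  A crude model, not vision science."""
    return viewer_ipd_mm / rendered_ipd_mm

# A 3.35 mm offset (6.7 mm baseline) viewed with 67 mm eyes:
# everything reads as roughly 10x too far away and 10x too big.
print(perceived_scale(67.0, 6.7))  # 10.0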

Another giveaway that the offset is wrong is a sense of distortion/curvature. I noticed this when I tested an offset of 3.35 mm. Look at a standing human figure in front of you at close distance (e.g. 75 cm to 1 m) from eye-level height: if you look down at its legs and they seem to curve toward you, the offset is too small (and the figure will also look too big).

One thing I noticed is that with the offset set to half the IPD (67 mm should be close to my own value), the scale "felt" almost right, but with a displacement of the same value between the cameras, combined with no offset, the figures looked too small.

I use Stereo Photo Maker to combine the images.
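
(For anyone who prefers scripting this step, the same over/under packing that most VR photo viewers accept can be done with a few lines of Python and Pillow. The file names below are hypothetical, and whether the left eye goes on top depends on what your viewer expects.)

```python
from PIL import Image

def stack_over_under(left_path, right_path, out_path):
    """Pack two equirectangular renders into one over/under (top/bottom)
    stereo image, left eye on top."""
    left, right = Image.open(left_path), Image.open(right_path)
    assert left.size == right.size, "render both eyes at the same resolution"
    w, h = left.size
    combined = Image.new("RGB", (w, 2 * h))
    combined.paste(left, (0, 0))   # top half    = left eye
    combined.paste(right, (0, h))  # bottom half = right eye
    combined.save(out_path)

stack_over_under("render_left.png", "render_right.png", "stereo_ou.png")
```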

At this point, what I wonder is whether there are other parameters (under "Lens"?) that influence the results.

What do you think, and what is your experience?

Thanks

Comments

  • marble Posts: 7,449
    edited October 2019

    Sorry, I can't offer any advice because I'm very new to VR, having just acquired an Oculus Rift S. However, I am interested to see where this discussion goes and would appreciate any further help with settings and software. For example, I would have no clue as to how to view any image created using your method on my VR headset. I am guessing that perhaps I might need to buy some kind of Virtual Desktop software? Another question would be: would it be possible to wander around an environment such as the excellent Stonemason sets? What a great VR experience that would be, and I'd love to know how, now that I have the hardware.

    [EDIT] On thinking about it, wandering around an environment would probably require real-time rendering, wouldn't it? Ah well, IRay is certainly a long way from that. Even for a single object/character, I was hoping to be able to view it from all angles and perspectives, but, again, I'm thinking that the stereoscopic view we are talking about here is just a static 3D image, like those you look at with red/blue glasses.

  • Hi, I think I'm using a similar setup to you - I have two cameras parented to a central one, with lens mode set to spherical and a relative stereo offset for each 'eye'. The main issue I find with this approach (as opposed to physically moving the cameras apart, which has the problem of stereo only being correct in one direction) is that you get extreme distortion of straight vertical lines when looking upwards (e.g. in a city scene, the edges of buildings look wobbly) -- I've not experimented with different IPD values, though. The other problem I've run into is that, because you're rendering a 360 degree scene, you have to render each frame at a very high resolution to get an acceptable result, which makes animation expensive. In many respects, what I'd prefer is to be able to render a 180 scene, as for me that would offer a better trade-off between image quality and resolution, but I haven't found a way to do so with the lens distortion settings.
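
    One thing that does work after the fact (it saves storage and viewer bandwidth, though sadly not render time) is cropping the finished 360° render down to the front 180°: in a standard 2:1 equirectangular image, the middle half of the width spans yaw -90° to +90°. A Pillow sketch, with hypothetical file names:

    ```python
    from PIL import Image

    def crop_equirect_to_180(src_path, out_path):
        """Keep the front hemisphere of a 2:1 equirectangular render:
        the central half of the width covers yaw -90 to +90 degrees."""
        img = Image.open(src_path)
        w, h = img.size
        img.crop((w // 4, 0, 3 * w // 4, h)).save(out_path)

    crop_equirect_to_180("render_360.png", "render_180.png")
    ```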

  • Pinkus Posts: 26

    In many respects, what I'd prefer is to be able to render a 180 scene, as for me that would offer a better trade-off between image quality and resolution, but I haven't found a way to do so with the lens distortion settings.

    For this, and until there is a specific setting to render 180° images, a workaround could be to place a disk behind the camera (matte black material, as simple to render as it can get) to mask the part of the environment that is not relevant. Admittedly, it is a very rough solution, but it reduces render time a bit. One point to keep in mind is that the disk must be placed further behind the camera than the stereoscopic offset, or the render would see past it at some angles.
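
    To put a number on "further behind than the stereoscopic offset": the per-direction eye positions swing on a circle whose radius equals the offset, so the disk needs more clearance than that radius. A trivial check, with a hypothetical safety margin:

    ```python
    def disk_is_safe(disk_distance_mm, stereo_offset_mm, margin_mm=5.0):
        """The eye origins sit on a circle of radius = stereo offset, so a
        masking disk closer than that can end up in front of an eye at
        some view angles.  margin_mm is an arbitrary safety buffer."""
        return disk_distance_mm > stereo_offset_mm + margin_mm

    print(disk_is_safe(100.0, 33.5))  # True: 10 cm behind a 33.5 mm rig
    print(disk_is_safe(20.0, 33.5))   # False: an eye can cross the disk
    ```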

  • Paintbox Posts: 1,633

    Instead of a black disk, there is also the Iray Section Plane which might be of use.

  • FirePro9 Posts: 455

    I believe there is currently no way to do a good 360 stereo image from DS. The full version of Octane Render, I understand, does it. I've also seen discussion about using a "slit camera" and stitching multiple images together (see the sketch below). Currently, a 360 stereo image from DS will render good results looking straight forward and get progressively worse as your view turns to the rear. I've submitted to DAZ a new feature request to add Nvidia Ansel support for DS, just so we can get the 360 stereo capabilities in Ansel.

    If someone has a solution for good single-pass 360 stereo from a DS render, I would love to learn about it.
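
    For what it's worth, the stitching half of that slit-camera approach is conceptually simple: render the stereo pair at many evenly spaced yaw angles, then keep only a narrow centre band of each frame, so every column is used close to the yaw at which its baseline was correct. A rough Python (Pillow) sketch of the assembly step, with hypothetical file names and angles (the result approximates a panorama rather than a true equirectangular projection):

    ```python
    from PIL import Image

    def stitch_slit_renders(paths, hfov_deg, step_deg, out_path):
        """Assemble renders taken at evenly spaced yaw angles into a
        panorama, keeping only the centre band of each frame.  Each band
        spans step_deg of the frame's hfov_deg horizontal field of view."""
        first = Image.open(paths[0])
        w, h = first.size
        band_w = max(1, round(w * step_deg / hfov_deg))
        canvas = Image.new("RGB", (band_w * len(paths), h))
        for i, path in enumerate(paths):
            frame = Image.open(path)
            left = (w - band_w) // 2
            canvas.paste(frame.crop((left, 0, left + band_w, h)),
                         (i * band_w, 0))
        canvas.save(out_path)

    # e.g. one eye's pass: 72 renders, 5 degrees apart, 60 degree FOV each
    stitch_slit_renders([f"slit_right_{i:03d}.png" for i in range(72)],
                        hfov_deg=60.0, step_deg=5.0,
                        out_path="pano_right.png")
    ```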


  • Pinkus, hello. If you want to look around at props and characters in VR, you need a game-engine render platform: think Unity or Unreal. Import your props and characters and switch the Iray shading to what looks best to you. Game engines don't use Iray because it is too slow. I would look at YouTube or in these forums for guides on importing stuff into these two platforms. If you are looking for ultra-realism, I would look into getting VR goggles with higher-resolution screens and a greater field of view, but these specialty VR goggles are not plug-and-play and are very expensive.

    Good luck
