Hi Jesse
Ok, so if I understand correctly,
(from:
With the rectilinear lens option the rays converge into a point; the rayvec is calculated as normalized(vec3(x*FOV,y*FOV,zconst)).
Where x and y are the image coords with 0,0 in the center.
and
With pinhole camera I meant especially that all rayvecs start from the same point, which can then be regarded as the camera location.
)
I should use the rectilinear lens option to have the rays converge in (= start from) a single point, which essentially produces pinhole camera behaviour (as stated in my previous post, where the two figures shown below result in essentially the same kind of projection).
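Just to check that I read the formula right, here is a minimal sketch (in Python; the variable names, the default zconst and the exact FOV scaling are my assumptions) of how I picture the per-pixel ray:

```python
import numpy as np

def rectilinear_ray(x, y, fov, zconst=1.0):
    """Ray direction as quoted above: normalized(vec3(x*FOV, y*FOV, zconst)),
    with (x, y) being image coordinates centered on (0, 0)."""
    v = np.array([x * fov, y * fov, zconst])
    return v / np.linalg.norm(v)

# Only the direction changes per pixel; all rays share one origin
# (the focus point), which is exactly the pinhole camera behaviour.
```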
This also points me to a fault in my stereo calculation, because I used the camera plane as the position for an eye, not the focus point. I wonder how much difference this makes and whether it can be corrected by simply setting a larger distance to the viewing screen in the stereo mode settings. *Edit: the viewing screen distance must be lowered, not increased!
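If I get the geometry right, the correction would be to offset the two eyes from the focus point (the common ray origin) instead of from the camera plane. A rough sketch of what I mean (all names and the parallel-axis setup are my own assumptions, not m3d's internals):

```python
import numpy as np

def stereo_eye_positions(focus_point, right_vec, eye_separation):
    """Place the left/right pinholes symmetrically around the focus point,
    along the camera's right vector. Earlier I had offset them from the
    camera plane, which sits in front of the focus point."""
    r = right_vec / np.linalg.norm(right_vec)
    half = 0.5 * eye_separation
    return focus_point - half * r, focus_point + half * r
```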
Did you have trouble with stereo renderings in m3d?
Maybe this can be corrected when you do an update / upgrade of Mandelbulb 3D. I must admit that it's been a while since I tested the implemented stereo function of m3d, which I used for cross-eyed viewing on my monitor. However, what works in cross-eyed view does not seem to work (at least not problem-free) when switching to a (significantly larger) stereoscopic display. Here, for example, the horizontal separation of the two images (which can be assembled into an mpo file using the tool StereoPhotoMaker) has to be well-defined, taking into account the screen size (for my 55-inch display, the horizontal width is 122 cm) as well as the interocular distance (for me ~5.4 cm, corresponding to 1920 pixels x (5.4 cm / 122 cm) ≈ 85 pixels of L/R image separation).
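In plain numbers, the separation is just the interocular distance converted at the panel's pixel pitch; a quick check with my values:

```python
width_px = 1920    # horizontal resolution of the panel
width_cm = 122.0   # physical width of the 55-inch display
ipd_cm = 5.4       # my interocular distance

separation_px = ipd_cm * width_px / width_cm
print(round(separation_px))   # -> 85
```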
My current approach is to render two images of the same scene after sliding ~5-10 clicks sideways in the navigator (with fixed zoom & steps) at 4010 x 2160, then scale them down by a factor of two to 2005 x 1080, then cut the left-eye image by 85 pixels on the left and the other image by 85 pixels on the right (resulting in two 1920 x 1080 images), which are assembled into an mpo file. This has to be done so that points in the fractal image that lie close to infinity are projected behind the screen.
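The cropping step itself could look roughly like this (a sketch using Python/Pillow; the filenames are placeholders, and the mpo assembly is still done in StereoPhotoMaker):

```python
from PIL import Image

SEP = 85  # horizontal L/R separation in pixels, from the calculation above

left = Image.open("left_2005x1080.png")    # placeholder filenames
right = Image.open("right_2005x1080.png")
w, h = left.size                           # 2005 x 1080

# Cut SEP pixels off the left edge of the left-eye image and off the
# right edge of the right-eye image -> two 1920 x 1080 frames.
left.crop((SEP, 0, w, h)).save("left_1920x1080.png")
right.crop((0, 0, w - SEP, h)).save("right_1920x1080.png")
```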
This seems to work fairly well in most instances; however, changing the field of view from 90° to 60° or 30° does not seem to correlate well with the actual viewing angle I try to achieve by changing my distance to the TV panel, and pop-out effects in particular become quite eye-straining, possibly indicating a discrepancy between the horizontal separation (which defines the point of fixation and thereby the perceived position of an object relative to the viewer) and the angle from which the object was actually captured (the 'side views')...
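My guess is that for the rendered FOV to match the real viewing angle, the distance to the panel would have to be roughly (screen width / 2) / tan(FOV / 2); this assumes that m3d's FOV value is the full horizontal angle, which I am not sure about:

```python
import math

width_cm = 122.0  # horizontal width of the panel
for fov_deg in (90, 60, 30):
    d = (width_cm / 2) / math.tan(math.radians(fov_deg) / 2)
    print(f"{fov_deg} deg -> viewing distance ~{d:.0f} cm")
# 90 deg -> ~61 cm, 60 deg -> ~106 cm, 30 deg -> ~228 cm
```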