Welcome to Fractal Forums

Fractal Software => Mandelbulb 3d => Topic started by: cytotox on December 19, 2011, 02:43:41 PM




Title: FOV vs. Zoom
Post by: cytotox on December 19, 2011, 02:43:41 PM
Hi Jesse

As I recently bought a new TV with stereoscopic display properties (so-called 3D, with shutter glasses), I started to experiment with combining M3D-generated images into mpo-files.

I have a question now: if I understand correctly, FOV is the (horizontal?) viewing angle, so a value of, say, 60 means that the image represents the fractal as seen through an opening that limits the field of view to a total of 60°. But when I increase the zoom in the navigator (after checking the box 'Fixed zoom and steps'), the image is enlarged as if the field of vision had been reduced, and as a result the center of the fractal gets scaled (zoomed) to fill the image. Since the FOV value does not change with this operation (it still reads 60, but is it really still 60°?), is this effect actually achieved by stepping closer to the fractal?
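For an ordinary perspective camera, the relation between the viewing angle and the extent visible at a given distance could be sketched like this (a rough illustration only; as the reply below explains, M3D does not use exactly this model):

import math

def visible_width(distance, fov_degrees):
    # Width covered by a perspective camera with the given horizontal
    # viewing angle, measured at the given distance from the viewpoint.
    return 2.0 * distance * math.tan(math.radians(fov_degrees) / 2.0)

print(visible_width(1.0, 60.0))   # about 1.15 units wide at distance 1
print(visible_width(1.0, 30.0))   # about 0.54 units - a smaller FOV 'zooms in'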



Title: Re: FOV vs. Zoom
Post by: Jesse on December 19, 2011, 03:20:17 PM
M3D still works with a camera view plane, and the zoom determines its size.
The FOVy (vertical) then propagates from this view plane, so you could even give FOVy negative values! (disabled in the navi though)
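A minimal sketch of that idea (my own reading, not the actual M3D code; the inverse zoom-to-plane-size relation and the function names are assumptions):

import math

def plane_height_for_zoom(base_height, zoom):
    # Assumed inverse relation: increasing the zoom shrinks the view plane.
    return base_height / zoom

def fovy_from_viewplane(plane_height, distance_to_plane):
    # Vertical viewing angle implied by a view plane of the given height
    # at the given distance from the convergence point behind it.
    return 2.0 * math.degrees(math.atan((plane_height / 2.0) / distance_to_plane))

print(fovy_from_viewplane(plane_height_for_zoom(2.0, 1.0), 1.0))  # ~90 degrees
print(fovy_from_viewplane(plane_height_for_zoom(2.0, 2.0), 1.0))  # ~53 degrees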

Sometime, when I have a lot of time, I want to change it to the usual pinhole camera behaviour, but I would have to change most functions of the program with a lot of testing... that is why I have not done it yet.

If you get some object cut off at the image edge in the navi, you can choose the fixed steps option and increase the zoom to make this camera view plane smaller.  It is usually scaled automatically with respect to the local distance estimation, but this can fail on some wild formulas.


Title: Re: FOV vs. Zoom
Post by: cytotox on December 20, 2011, 01:46:57 PM
Hi Jesse

Thanks for the reply. So the FOV / angle refers to the vertical (then how is the horizontal angle determined? In proportion to the image aspect ratio?).

Do I understand you correctly that the projection of the fractal onto the camera view plane works as outlined in the first figure (3a), or, if negative values were allowed, as outlined in the second figure (3b)? There is probably a mistake in my figures, as I have drawn the lines from the object to the view plane parallel to the outer lines. However, if this is true, then I do not really understand the principal difference between the current implementation of the camera, which should be equivalent to what is outlined in the third figure (all projection rays meet at a single point behind the view plane; only true if 'rectilinear lens' is switched off?), and a pinhole camera (where all projection rays pass through a single point), as shown in the final figure (2).

(The only difference would be that, in the current implementation, parts of the fractal lying between the view plane ("screen") and the "point of view" (convergence point) are not visible, whereas they would be visible if a pinhole camera were placed at that former point of view.)

All in all, the main question for me is whether it is possible to obtain a camera setting from which I can derive two images that can be combined into an artefact-free stereoscopic projection (e.g., may I use the "rectilinear lens" in this case?)...


Title: Re: FOV vs. Zoom
Post by: Jesse on December 20, 2011, 08:11:10 PM
With the rectilinear lens option the rays converge into a point; the rayvec is calculated as normalized(vec3(x*FOV, y*FOV, zconst)), where x and y are the image coords with 0,0 in the center.
In the default option the rayvec is calculated this way: normalized(vec3(sin(x), sin(y), cos(x)*cos(y))), where x and y are scaled to fit the FOV.
Dunno how much sense this makes; it just works with arbitrary FOVs.
But this does not create a focus point behind the projection plane!
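Written out as a small Python sketch (my own reading of the two formulas above, not the actual M3D source; the function names and the zconst default are assumptions):

import math

def normalized(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def ray_rectilinear(x, y, fov, zconst=1.0):
    # Rectilinear lens: direction built directly from the centered image
    # coordinates, so all rays converge toward a single point.
    return normalized((x * fov, y * fov, zconst))

def ray_default(x, y, fov):
    # Default lens: x and y are first scaled to fit the FOV (treated here
    # as an angle in radians), then mapped to a sphere-like direction.
    # Works for arbitrary FOVs, but there is no single focus point behind
    # the projection plane.
    ax, ay = x * fov, y * fov
    return normalized((math.sin(ax), math.sin(ay), math.cos(ax) * math.cos(ay)))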

By pinhole camera I meant especially that all rayvecs start from the same point, which can then be regarded as the camera location.
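To illustrate the difference (hypothetical names, not M3D code): with a pinhole camera every ray shares one origin, while with a view-plane camera each ray starts from its own point on the plane, so there is no single position one could call the eye.

def pinhole_ray_origin(camera_pos, x, y):
    # Pinhole camera: every ray starts from the same point.
    return camera_pos

def viewplane_ray_origin(plane_center, right_vec, up_vec, x, y):
    # View-plane camera: each ray starts from its own point on the plane.
    cx, cy, cz = plane_center
    rx, ry, rz = right_vec
    ux, uy, uz = up_vec
    return (cx + x * rx + y * ux, cy + x * ry + y * uy, cz + x * rz + y * uz)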

This also points me to a fault in my stereo calculation, because I used the camera plane as the position for an eye, not the focus point.  I am wondering how much difference this makes and whether it can be corrected by just giving a higher distance to the viewing screen in the stereo mode settings. *Edit: the viewing screen distance must be lowered, not increased!

Did you have trouble with stereo renderings in m3d?


Title: Re: FOV vs. Zoom
Post by: cytotox on December 21, 2011, 11:43:06 AM
Hi Jesse

Ok, so if I understand correctly,
(from:
With the rectilinear lens option the rays converge into a point; the rayvec is calculated as normalized(vec3(x*FOV, y*FOV, zconst)), where x and y are the image coords with 0,0 in the center.
and
By pinhole camera I meant especially that all rayvecs start from the same point, which can then be regarded as the camera location.
)
I should use the rectilinear lens option to have the rays converge at (= start from) a single point, which essentially produces pinhole camera behaviour (as stated in my previous post, the two figures shown below result in essentially the same kind of projection).
(from:
This also points me to a fault in my stereo calculation, because I used the camera plane as the position for an eye, not the focus point.  I am wondering how much difference this makes and whether it can be corrected by just giving a higher distance to the viewing screen in the stereo mode settings. *Edit: the viewing screen distance must be lowered, not increased!

Did you have trouble with stereo renderings in m3d?
)

Maybe this can be corrected when you do an update / upgrade of Mandelbulb 3D. I must admit that it's been a while since I tested the implemented stereo function of m3d, which I used for cross-eyed viewing on my monitor. However, what works in cross-eyed view does not seem to work (at least not problem-free) when switching to a (significantly larger) stereoscopic display. Here, for example, the horizontal separation of the two images (which can be assembled into an mpo file using the tool StereoPhotoMaker) has to be well-defined, taking into account the screen size (for my 55 inch display, the horizontal width is 122 cm) as well as the interocular distance (for me, that is ~5.4 cm, corresponding to 1920 pixels x (5.4 cm / 122 cm) ≈ 85 pixels of L/R image separation).
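As a small Python sketch of that separation calculation (the function name is mine; the screen and eye-distance figures are just my own measurements from above):

def pixel_separation(image_width_px, screen_width_cm, eye_separation_cm):
    # Horizontal shift (in pixels) so that points 'at infinity' end up
    # separated by the viewer's eye distance on the physical screen.
    return image_width_px * eye_separation_cm / screen_width_cm

print(pixel_separation(1920, 122.0, 5.4))   # ~85 pixels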

My current approach is to render two images of the same scene, after sliding ~5-10 clicks sideways in the navigator (with fixed zoom & steps), at 4010 x 2160, scale them down by a factor of two to 2005 x 1080, and then cut 85 pixels off the left of the left-eye image and 85 pixels off the right of the other image (resulting in two 1920 x 1080 images), which are assembled into an mpo file. This has to be done so that points in the fractal image that are very far away end up projected close to infinity behind the screen.
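A sketch of that cropping step in Python (assuming the Pillow library and hypothetical file names; the 85-pixel value comes from the calculation above):

from PIL import Image

SEP = 85  # horizontal L/R separation in pixels

left = Image.open("left_2005x1080.png")     # hypothetical file names
right = Image.open("right_2005x1080.png")
w, h = left.size                            # 2005 x 1080 after downscaling

left_cropped = left.crop((SEP, 0, w, h))        # cut 85 px off the left edge
right_cropped = right.crop((0, 0, w - SEP, h))  # cut 85 px off the right edge

left_cropped.save("left_1920x1080.png")     # both results are 1920 x 1080
right_cropped.save("right_1920x1080.png")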

This seems to work fairly well in most instances. However, changing the field of view from 90° to 60° or 30° does not seem to correlate well with the actual viewing angle I try to achieve by changing my distance to the TV panel, and pop-out effects in particular become quite eye-straining, possibly indicating a discrepancy between the horizontal separation (which defines the point of fixation and thereby the perceived position of an object with respect to the viewer) and the angle from which the object was actually captured (the 'side views')...