Thanks Buddhi, congrats on another groundbreaking version! I'm currently rendering my first tests for the HTC Vive VR headset. I think I understand how to determine the correct distance between eyes for the intended scale to be perceived, but how do I calculate the correct "infinite distance correction" for equirectangular images for VR? Also, is there a formula for infinite distance correction based on screen size and distance to the screen? I will be displaying anaglyph art on large projector screens.
Monte Carlo is beautiful, and reflections are now focus-correct and much more convincing. So glad for NetRender though, and looking forward to the day my hardware supports OpenCL! I will probably opt for the GTX 1070 when possible.
My test image is rendering now and I think I see a bug. I am rendering stereoscopic in top-bottom mode via NetRender on a 4-core host and an 8-core client. The image from the host is correct, but the image from the client is stretched. I can see the image on the client screen is flattened to a 2:1 aspect ratio, though it should be 1:1, and the image data is being cropped to the left side...
Ok, that's weird. I stopped the render to see if stereo was enabled on the host (it was), restarted the render, and now it's fine: the host image is in 1:1 aspect.
Thank you for the feedback.
About "infinite distance correction": it's actually a little difficult for me to give a strict definition of this value. This parameter creates an offset between the two images which doesn't depend on distance. This offset is needed to "move" the background far away in the 3D appearance. In a perfect situation this offset, measured on a given screen, should equal the real distance between the eyes (about 60 mm). In that situation a viewer (with 3D glasses) will see the background at infinite distance.

The problem is that you never know the size of the display a viewer will use. You can create an image which looks perfect on your 24-inch display, but if a viewer uses a 50-inch display, the offset becomes too big for him (his eyes would need to diverge, a reversed squint, which is not possible). That's why I decided to make it just a number between 0 and 1. A value of 1 would be the optimal offset for a 24-inch display, but the recommended value has to be lower because somebody may have a bigger display. That's why the tooltip says to use a value of 0.4 (which I found by experimenting with different displays). If you want to use a projector with an extremely big screen (e.g. 10 meters wide), then this value should be close to zero, because the screen is already close to infinite distance from an optical point of view.

The rule you can use is the following: if you get a good result on your computer display (you know the size of your display) and you know the size of the destination display (you mentioned large projector screens), then divide the "infinite distance correction" by the ratio between the destination screen size and the size of your display.
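The scaling rule in that last sentence can be sketched as a tiny helper. This is only an illustration of the arithmetic, not part of Mandelbulber; the function name, and the assumption that 0.4 was tuned on a roughly 0.53 m wide (24-inch) monitor, are mine:

```python
def scaled_infinite_distance_correction(tuned_value, tuned_screen_width, target_screen_width):
    """Scale the 'infinite distance correction' tuned on one screen
    to a different (e.g. much larger) destination screen.

    The parameter shifts the two stereo images by a fixed on-screen
    offset; the physical offset that places the background at infinity
    is constant (~60 mm, the inter-eye distance), so the normalized
    parameter must shrink proportionally as the screen gets wider.
    """
    return tuned_value * tuned_screen_width / target_screen_width

# Example (assumed numbers): 0.4 looks right on a 0.53 m wide monitor;
# for a 10 m wide projection screen the value becomes much smaller:
print(scaled_infinite_distance_correction(0.4, 0.53, 10.0))  # ~0.0212
```

As the developer notes, for very large screens the result lands close to zero, which matches the intuition that a huge screen is already near "optical infinity".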
I'm glad you like Monte Carlo DOF, even if it's extremely slow.
About GPU hardware: the current version of Mandelbulber v2 still cannot utilize the GPU, as OpenCL is not yet implemented. If you want to play with GPU support, you have to use the old Mandelbulber v1.21.
About the NetRender problem with stereo top-bottom: thank you for pointing out this problem. I will check what is wrong and create a fix for it. I'm creating an issue for it on our GitHub page (https://github.com/buddhi1980/mandelbulber2/issues).