Author Topic: Better Pseudo-Depth of Field  (Read 6003 times)
Description: Algorithm could be improved - light wins
philovivero
Forums Newbie
Posts: 1
« on: November 18, 2012, 12:04:09 PM »

I'm using Mandelbulber. I turn on the depth of field post-processing, and it always looks wrong. I finally figured out why.

In actual lens blur, light wins over dark every time. Take a blurry picture of a perfect black-and-white checkerboard with a camera: the white squares bleed into the black, never the other way around. But Mandelbulber (and, I assume, other depth-of-field implementations) just uses Gaussian blurring, in which dark colours and light ones are simply smeared together.

If, before blurring, you apply a convolution matrix of positive numbers, e.g.:

[
0.1 0.3 0.1
0.3 0.5 0.3
0.1 0.3 0.1
]

then the entire image will have its light colours propagated, and if you then blur it, you will get something more closely approximating lens blur. It's still very imperfect, but it should look more natural than the Gaussian-only method.

I tested this out by pulling up GIMP and doing a generic convolve on my image before blurring. In that case it's a 5x5 matrix rather than the 3x3 I've illustrated here, but the principle is the same: you put the highest value in the centre, and the values get proportionally smaller as you move away from the centre of the matrix (probably on a quadratic basis, but I haven't taken my analysis that far yet).

I think to do this 100% properly, you'd need the convolve matrix to be larger the further from the focal plane the voxel is. But again, as a first approximation, just doing it with a static matrix would be an improvement over what's there currently.
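As a sketch of the idea above (the kernel values are the ones from the post; interpreting "light wins" as taking the per-pixel maximum of the original and the spread image is my own addition, so treat it as one possible reading, not the post's exact algorithm):

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

# Kernel values from the post (deliberately unnormalised: it only
# spreads brightness outward, it never darkens).
LIGHT_SPREAD = np.array([[0.1, 0.3, 0.1],
                         [0.3, 0.5, 0.3],
                         [0.1, 0.3, 0.1]])

def pseudo_dof(image, sigma=3.0):
    """Spread light values with the positive kernel, then Gaussian-blur.
    `image` is a 2-D float array in [0, 1]."""
    spread = convolve(image, LIGHT_SPREAD, mode='nearest')
    # "Light wins": a pixel can only get brighter, never darker.
    lightened = np.maximum(image, np.clip(spread, 0.0, 1.0))
    return gaussian_filter(lightened, sigma=sigma)
```

On a black/white checkerboard this makes the white squares bleed into the black before the blur ever runs, which is the asymmetry the Gaussian alone can't produce.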

Does anyone have opinions on this matter?
kram1032
Fractal Senior
Posts: 1863
« Reply #1 on: November 18, 2012, 12:55:41 PM »

While you're right that bright values usually dominate over dark ones, that effect is usually called bloom.
DoF instead needs a bokeh blur (convolution with a circular cutout kernel) rather than a Gaussian.

http://en.wikipedia.org/wiki/Bokeh

http://en.wikipedia.org/wiki/Light_bloom
Ideally, for a perfect lens, you want to have an Airy-disk kernel: http://en.wikipedia.org/wiki/Airy_disk
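A minimal disk ("circular cutout") kernel of the kind described can be sketched with NumPy/SciPy; an Airy-disk kernel would replace the hard cutout with the Airy pattern:

```python
import numpy as np
from scipy.ndimage import convolve

def disk_kernel(radius):
    """Uniform weight inside a circle, zero outside, normalised to sum 1."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = ((x * x + y * y) <= radius * radius).astype(float)
    return k / k.sum()

def bokeh_blur(image, radius=5):
    """Convolve with the disk: out-of-focus highlights become discs,
    which is the characteristic bokeh look a Gaussian cannot produce."""
    return convolve(image, disk_kernel(radius), mode='nearest')
```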
« Last Edit: November 18, 2012, 01:31:11 PM by kram1032 »
richardrosenman
Conqueror
Posts: 104
« Reply #2 on: November 22, 2012, 09:08:30 PM »

90% of the time, users apply standard Gaussian and box blurs to simulate depth of field. This couldn't be more incorrect. As a result, I spent a lot of time studying correct depth of field and then creating a Photoshop plugin that does it correctly.

You should check out the technical info as it's interesting, especially when you break down what a correct bokeh looks like:

http://richardrosenman.com/shop/dof-pro/

Specular bloom, as you point out, is something different though, and coincidentally enough, I created a plugin for that too: http://www.lumierefilter.com

Cheers,
-Rich
« Last Edit: December 17, 2017, 07:01:47 PM by richardrosenman »
kram1032
Fractal Senior
Posts: 1863
« Reply #3 on: November 23, 2012, 01:29:37 AM »

Stuff like this always makes me wonder:
Do you have to apply DoF first and then bloom, or the other way round, or does it not matter? (That is, if you're trying to keep it realistic.)
richardrosenman
Conqueror
Posts: 104
« Reply #4 on: November 23, 2012, 05:57:55 AM »

Hi Kram;

That's a really good and important question. The order absolutely matters. For instance, in my field we have to deal with motion blur and depth of field regularly. Rendering these in 3D is computationally expensive, so we often rely on applying them as a post process. However, adding motion blur first and then depth of field yields incorrect results, and adding depth of field first and then motion blur is also incorrect. The bottom line is that they both happen concurrently in the camera, yet there are no tools out there that let you do both in one shot as a post process.

Often enough we can cheat it and you won't notice, but the order is definitely an ongoing headache we deal with every day.

With phenomena like bloom, I think it's easier to cheat, because a bloom is equivalent to a gel or filter over the lens. As a result, it's OK to add motion blur and/or depth of field first and then add the bloom. Likewise for vignettes. At least, that's my opinion.

-Rich
kram1032
Fractal Senior
Posts: 1863
« Reply #5 on: November 23, 2012, 05:55:57 PM »

Heh, yeah, I thought so...
Though couldn't you "just" write an algorithm that does both (or all three, if you add bloom) blurs at the same time in post-processing?
lycium
Fractal Supremo
Posts: 1158
« Reply #6 on: November 23, 2012, 06:46:56 PM »

Quote
That's a really good and important question. The order absolutely matters. For instance, in my field, we have to deal with motion blur and depth of field regularly. Rendering these in 3D is computationally exhaustive so we often rely on applying them as a post process. However, adding motion blur first and then depth of field yields incorrect results. Likewise, adding depth of field first and then motion blur is also incorrect. The bottom line is they both happen concurrently in the camera yet there's no tools out there that allow you to do both in one shot as a post process.

Often enough, we can cheat it and you won't notice but the order is definitely an ongoing headache we deal with every day.

That's why, as computers get faster, physically correct renderers are increasingly attractive: human effort isn't getting any cheaper, but FLOPs most certainly are...

Also, while you're taking extra samples, it doesn't really cost any more to incorporate depth of field, motion blur, area lights, etc. In other words, it's not like it takes twice as long to do DOF and motion blur, compared to just DOF alone. So that extra sampling effort to do things correctly is quite justifiable, since it covers all effects if you do it right.
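That shared-sampling argument can be sketched as a Monte Carlo loop in which every sample jitters both the lens position and the shutter time at once; `trace` here is a hypothetical callback standing in for a real ray tracer:

```python
import random

def render_pixel(trace, x, y, n_samples=64, aperture=0.1, shutter=1.0):
    """Average n_samples rays for one pixel. Each sample picks a random
    point on the lens (depth of field) AND a random time inside the
    shutter interval (motion blur), so both effects share the same
    sampling budget instead of doubling it."""
    total = 0.0
    for _ in range(n_samples):
        lens_u = (random.random() - 0.5) * aperture  # square aperture for brevity
        lens_v = (random.random() - 0.5) * aperture
        t = random.random() * shutter                # time within the exposure
        total += trace(x, y, lens_u, lens_v, t)
    return total / n_samples
```

Adding area lights or any other distributed effect is just another random dimension per sample; the loop itself doesn't get any longer.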
« Last Edit: November 23, 2012, 06:49:56 PM by lycium »
kram1032
Fractal Senior
Posts: 1863
« Reply #7 on: November 23, 2012, 06:56:21 PM »

The most exciting part of that, to my mind, is that basically full-quality raytracers become almost usable in real time...
Of course, navigable scenes aren't strictly new anymore. But I wonder how much longer it will take until the first game engines pop up that do full-fledged real-time ray tracing with essentially no noise, on what will by then be "normal" hardware...
richardrosenman
Conqueror
Posts: 104
« Reply #8 on: November 23, 2012, 07:08:56 PM »

Quote
Also, while you're taking extra samples, it doesn't really cost any more to incorporate depth of field, motion blur, area lights, etc. In other words, it's not like it takes twice as long to do DOF and motion blur, compared to just DOF alone. So that extra sampling effort to do things correctly is quite justifiable, since it covers all effects if you do it right.

Exactly. BUT, and there's always a but: motion blur, DoF, etc. are free in unbiased renderers (Maxwell, Octane, Arion, etc.), and unfortunately those are still not viable options. They still take way too long compared to biased renderers, and those that utilize GPU processing have some serious limitations which prohibit their use for serious commercial production. For instance, depending on the project, it is not uncommon to have scenes with millions or even billions of triangles, which is immediately impossible on GPUs. Likewise, using 4K textures, or many of them, is also a limitation due to GPU RAM. And there's much more.

So standard commercial production renderers like VRay, Mental Ray, etc. do cheat, and that makes turning on a feature like depth of field quite a bit more taxing. For this reason, I'd say about 80% of all projects still use post depth of field. VRay and Mental Ray do have GPU extensions (VRay RT GPU, IRay), but like I said, they are limited and more useful for single-frame production than animation.

It's also interesting to note that film production renderers weren't even raytracers until recently. RenderMan and 3Delight, from what I know, didn't even do raytracing, in order to achieve the speeds required to render at film resolutions quickly and efficiently. I believe films like The Matrix were among the first to use commercial raytracers like Mental Ray to render with global illumination and such.

Quote
Though couldn't you "just" write an algorithm that sort of does all two (or three if you add bloom) blurs at the same time in post-processing?

You probably could, but it would be extremely complicated. For instance, you need to supply a depth map, a frame that describes the distance of the scene from the camera:

[image: depth map]
With this you can effectively apply and modify the depth of field:

[image: depth of field applied using the depth map]
You would have to do the same for motion blur: supply a vector map that describes the direction and length of the motion vectors. This exists, and you can do it with select apps. So imagine having to supply two frames of depth and motion information, both of which also have to be computationally derived, just to process one frame. And then imagine doing it for a sequence. Nahhhhhh...
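As an illustration of the depth-map idea, a cheap slice-based post-process (a generic approximation only, not how DOF PRO works internally) might look like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_of_field(image, depth, focal_depth=0.5, max_sigma=6.0, n_slices=8):
    """Blur strength grows with distance from the focal plane.
    `depth` is a normalised depth map (0 = near, 1 = far). The image is
    quantised into defocus slices, each blurred by its own sigma."""
    # circle-of-confusion proxy: 0 at the focal plane, 1 at maximum defocus
    coc = np.abs(depth - focal_depth) / max(focal_depth, 1.0 - focal_depth)
    coc = np.clip(coc, 0.0, 1.0)
    edges = np.linspace(0.0, 1.0, n_slices + 1)
    out = np.zeros_like(image, dtype=float)
    for i in range(n_slices):
        lo, hi = edges[i], edges[i + 1]
        blurred = gaussian_filter(image, sigma=max_sigma * (lo + hi) / 2.0)
        mask = (coc >= lo) & ((coc < hi) if i < n_slices - 1 else (coc <= hi))
        out[mask] = blurred[mask]
    return out
```

A naive version like this bleeds colour across depth discontinuities (a sharp foreground edge smears into a blurred background); proper implementations go to considerable lengths to prevent exactly that.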

-Rich
« Last Edit: November 24, 2012, 12:11:36 AM by richardrosenman »
richardrosenman
Conqueror
Posts: 104
« Reply #9 on: November 23, 2012, 07:23:10 PM »

I have also created a motion blur plugin (funnily enough, I've created software for all three topics that have come up). It is called Motion Blur Lab PRO: http://www.mblpro.com

I'm posting it because it's a cool example of extending Photoshop's motion blur capabilities to production-environment requirements. Check out the gallery - some of the examples are pretty neat.

[images: gallery examples]
But in order to make this plugin really powerful, you would want to implement the velocity maps I mentioned. This is what a velocity map looks like:

[image: velocity map]
Basically, the RGB values encode the motion vector: its x and y direction plus its length.
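The post doesn't spell out the exact packing, but one plausible scheme (names hypothetical) stores the normalised x/y direction in R and G and the length in B:

```python
import numpy as np

def encode_velocity(vx, vy, max_len=32.0):
    """Pack per-pixel motion vectors into an RGB image in [0, 1].
    R, G: direction components remapped from [-1, 1]; B: length."""
    length = np.hypot(vx, vy)
    safe = np.where(length > 0, length, 1.0)     # avoid divide-by-zero
    r = (vx / safe + 1.0) / 2.0
    g = (vy / safe + 1.0) / 2.0
    b = np.clip(length / max_len, 0.0, 1.0)
    return np.stack([r, g, b], axis=-1)

def decode_velocity(rgb, max_len=32.0):
    """Recover the motion vectors from the RGB encoding."""
    dx = rgb[..., 0] * 2.0 - 1.0
    dy = rgb[..., 1] * 2.0 - 1.0
    length = rgb[..., 2] * max_len
    norm = np.hypot(dx, dy)
    safe = np.where(norm > 0, norm, 1.0)
    return dx / safe * length, dy / safe * length
```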

-Rich
« Last Edit: November 24, 2012, 12:07:08 AM by richardrosenman »
cbuchner1
Fractal Phenom
Posts: 443
« Reply #10 on: November 23, 2012, 11:25:05 PM »


Richard, I am delighted that you are a member of fractal forums. Reading your postings really ups one's IQ.


richardrosenman
Conqueror
Posts: 104
« Reply #11 on: November 24, 2012, 12:05:28 AM »

Quote
Richard, I am delighted that you are a member of fractal forums. Reading your postings really ups one's IQ.

Awwww - thanks! I just wish I could keep up with the fractal programming in here... most of it is beyond me!

-Rich
M Benesi
Fractal Schemer
Posts: 1075
« Reply #12 on: November 24, 2012, 05:42:26 AM »

  Why'd you blur the sky so much?   (joking) 

So is a velocity map the same thing as a distance map, used to taper motion blur on more distant objects? You could implement a linear map to decrease motion blur with distance: the mountains, farther down the track, the sky...
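The linear taper suggested here is easy to sketch: scale each motion vector down as the (normalised) depth increases. All names are hypothetical:

```python
import numpy as np

def taper_vectors(vectors, depth):
    """Linearly shorten per-pixel motion vectors with distance.
    vectors: (H, W, 2) motion vectors; depth: (H, W) in [0, 1],
    0 = nearest (full blur), 1 = farthest (no blur)."""
    scale = 1.0 - np.clip(depth, 0.0, 1.0)
    return vectors * scale[..., None]
```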
« Last Edit: November 24, 2012, 05:44:00 AM by M Benesi »
richardrosenman
Conqueror
Posts: 104
« Reply #13 on: November 24, 2012, 06:01:20 AM »

Quote
You could implement a linear map to decrease motion blur with distance

That's a pretty neat idea actually...

My filter uses motion vectors which you can shorten or lengthen. This lets you decrease the amount of motion blur in some areas and increase it in others, which is super important in examples such as a vehicle turning a corner, and it also allows you, as you pointed out, to vary motion blur depending on depth:

[image: train example]
I suppose the above train example isn't a good one, as it's using radial blur, but the custom motion blur gives you a ton of control:

[image: custom motion blur example]
-Rich
kram1032
Fractal Senior
Posts: 1863
« Reply #14 on: November 24, 2012, 02:55:00 PM »

Hmm... the train example seems to only take into account blurring in the x-y plane, judging by the velocity map, which apparently lacks a blue channel (which, I'd assume, would store z-velocities).
If that's "good enough", why not store the z-depth information in the blue channel and apply DoF effects accordingly?
Or am I wrong in assuming there are no z-velocities?