Author Topic: Best filtering strategy?  (Read 1751 times)
Syntopia
« on: April 14, 2012, 10:55:18 PM »

I'm working on improving the raytracer in Fragmentarium. I've implemented Image-Based Lighting and use panoramic HDR images to create specular and diffuse lighting (for diffuse lighting a blurred, or rather convolved, image is used).

I (progressively) cast multiple rays per pixel and accumulate weighted samples in 32-bit float buffers. After accumulation I normalize each pixel by its total weight, gamma correct, and tonemap. To get anti-aliasing I choose random samples uniformly from a disc centered at each pixel (no stratification); the sampling radius is usually chosen larger than the pixel area, typically a 2-pixel radius. But what is the best way to filter (weight) the samples?
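A minimal CPU-side sketch of that accumulation scheme (illustrative Python, not Fragmentarium's actual GLSL; the Gaussian `alpha` is an assumed parameter): each sample is weighted by a filter function of its offset from the pixel center, and the pixel is normalized by its total weight when resolved.

```python
import math

def gaussian_weight(dx, dy, alpha=2.0):
    """Gaussian filter weight for a sample offset (dx, dy) from the pixel center."""
    return math.exp(-alpha * (dx * dx + dy * dy))

def accumulate(sample_color, dx, dy, accum):
    """Add one weighted RGB sample; accum is [r, g, b, total_weight]."""
    w = gaussian_weight(dx, dy)
    for i in range(3):
        accum[i] += w * sample_color[i]
    accum[3] += w
    return accum

def resolve(accum):
    """Normalize by the total weight after all passes."""
    return [c / accum[3] for c in accum[:3]]
```

The offsets (dx, dy) would be drawn uniformly from the 2-pixel disc described above; swapping `gaussian_weight` for a box or triangle function gives the other filters.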

I've tried box filtering, triangle filtering, and Gaussian filtering. But I still seem to get better results by creating a high-resolution image using the same number of samples, and then downsizing the image.

I have been examining the SunFlow filters, and to be honest I think the triangle filter looks just as good as the others: http://sfwiki.geneome.net/index.php5?title=Image#Filters
Many of the filters seem soft or have ringing.

Here is an example of a progressively sampled image:


and a downsized large-resolution image:


(Maps: Creative Commons licensed from http://www.smartibl.com/sibl/archive.html)

The downsized image looks best - even though all weighting in the progressively sampled image was done in 32-bit floats, and the downsized image is 8-bit. This is most noticeable on the specular highlights (which have very high brightness values).

I think it should be possible to get at least as good accumulated anti-aliasing, but how?
marius
« Reply #1 on: April 15, 2012, 12:20:43 AM »

The downsized image looks best - even though all weighting in the progressively sampled image was done in 32-bit floats, and the downsized image is 8-bit. This is most noticeable on the specular highlights (which have very high brightness values).

I think it should be possible to get at least as good accumulated anti-aliasing, but how?

Is the downsizing algorithm that you used gamma-aware? The color seems different and the highlights somewhat muted.
Syntopia
« Reply #2 on: April 15, 2012, 01:21:25 AM »

Is the downsizing algorithm that you used gamma-aware? The color seems different and the highlights somewhat muted.

Actually, I don't know - I use Paint.NET. But it is probably not.

But my own filtering should be okay - I make no changes to the samples before adding them (because I expect the HDR maps to be in linear gamma) and then gamma correct afterwards. The problem is that my own approach looks worse. I think the strong specular highlights become too dominant and saturate the neighboring pixels.

Perhaps my final tonemapping is too simple: I use an exponential mapping: color = vec3(1.0)-exp(-color*Exposure). I'll try out some other mappings tomorrow.
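For reference, that exponential mapping in standalone form (Python used for illustration; `exposure` stands in for the `Exposure` uniform): it maps [0, inf) into [0, 1), so no HDR value ever clips, but very bright values all compress toward 1.

```python
import math

def tonemap_exp(color, exposure=1.0):
    """Exponential tone mapping: 1 - exp(-c * exposure), per channel.
    Monotone, maps 0 to 0, and asymptotically approaches 1 for bright input."""
    return [1.0 - math.exp(-c * exposure) for c in color]
```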
Syntopia
« Reply #3 on: April 15, 2012, 11:15:04 AM »

Here is a better example.

Here is a Mandelbulb lit by a (purely reflective) specular light (where the samples are tonemapped before summing):


Here is the same image with HDR lighting (tonemapping after averaging - the correct approach):


It is lighter, of course, but has terrible artifacts, and jaggies.

100 rays were shot per pixel. I think what happens is that if even a single one of these rays reflects into one of the strong HDR lights, it will completely dominate the sum.

You would not get the same artifacts if rendering a high-resolution HDR image and downscaling (because the pixels would influence neighboring pixels). But I render on a GPU, so the samples for each pixel are completely independent and are not "re-used" between pixels - leading to these isolated high-intensity pixels.

I'm not sure what to do about this. A bloom filter would probably fix it, but I also need to be able to do tile rendering, and this complicates such an approach.



David Makin
« Reply #4 on: April 15, 2012, 01:31:27 PM »

If using the GPU, render internally at the larger resolution but include the downsizing in the GPU code, i.e. so the output resolution is as it would be after downsizing the conventional way: calculate 2*2 or 3*3 etc. rays and combine them into one output pixel.
In other words do the oversampling the way you'd have to do it in Ultra Fractal if you wanted to do it yourself.
I do not believe any other strategy could improve on that either in quality or efficiency no matter which algorithms are used for lighting etc.
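The combine step is just a box average over each 2*2 block; a CPU-side sketch (illustrative Python - the real thing would live in the shader):

```python
def downsample_2x2(img):
    """Box-downsample an image (list of rows of [r, g, b] pixels) by
    averaging each 2x2 block into one output pixel."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - 1, 2):
        row = []
        for x in range(0, w - 1, 2):
            px = [0.0, 0.0, 0.0]
            for dy in (0, 1):
                for dx in (0, 1):
                    for i in range(3):
                        px[i] += img[y + dy][x + dx][i]
            row.append([c / 4.0 for c in px])
        out.append(row)
    return out
```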
The meaning and purpose of life is to give life purpose and meaning.

http://www.fractalgallery.co.uk/
"Makin' Magic Music" on Jango

subblue
« Reply #5 on: April 15, 2012, 02:14:14 PM »

I found a similar thing in my WebGL renderer, which uses a multipass 32-bit float accumulation buffer for the super-sampling. I don't have a good solution yet either, but I did find I could minimise the effect by limiting the maximum accumulated brightness based on the number of super-samples I am taking.
I've also tried a more involved 'filmic' tone-mapping approach, as linked from this page: http://mynameismjp.wordpress.com/2011/12/06/things-that-need-to-die/ which does give you fine control, but is probably too much to give end users as a set of controls.
www.subblue.com - a blog exploring mathematical and generative graphics

Syntopia
« Reply #6 on: April 15, 2012, 04:33:11 PM »

If using the GPU, render internally at the larger resolution but include the downsizing in the GPU code, i.e. so the output resolution is as it would be after downsizing the conventional way: calculate 2*2 or 3*3 etc. rays and combine them into one output pixel.
In other words do the oversampling the way you'd have to do it in Ultra Fractal if you wanted to do it yourself.
I do not believe any other strategy could improve on that either in quality or efficiency no matter which algorithms are used for lighting etc.

Yes, that is certainly a solution, but I need many samples, 100+, for doing the Monte Carlo raytracing (for getting high-quality soft shadows, DOF, AA, AO) - and I don't want to specify the number in advance. So I'm working on a progressive, interactive solution, where I can see the image converge, while still being able to interact with the settings.
Syntopia
« Reply #7 on: April 15, 2012, 04:51:49 PM »

I found a similar thing in my WebGL renderer which uses a multipass 32bit float accumulation buffer for the super-sampling. I don't have a good solution yet either but I did find I could minimise the effect by limiting the maximum accumulated brightness based on the number of super samples I am taking.
I don't know the number of samples in advance, but I'm thinking of applying some kind of compression before averaging the samples and then doing an expansion afterwards - similar to doing the tonemapping before averaging. However, I think it should be possible to do a complete HDR pipeline. These problems must also exist for ordinary raytracers, but of course the problem is amplified because of the rapidly varying surface normals on fractals and the large dynamic range of the light maps.
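That compress/average/expand round trip could be sketched like this, assuming a Reinhard-style x/(1+x) curve as the compression function (my assumption - the thread doesn't settle on a particular curve). Because the compressed values all lie in [0, 1), a single extreme sample can no longer dominate the mean:

```python
def compress(c):
    """Range compression x/(1+x): maps [0, inf) into [0, 1)."""
    return [x / (1.0 + x) for x in c]

def expand(c):
    """Inverse mapping y/(1-y), recovering the original range."""
    return [y / (1.0 - y) for y in c]

def compressed_average(samples):
    """Average RGB samples in the compressed domain, then expand the
    result, damping the contribution of extreme outliers."""
    n = len(samples)
    acc = [0.0, 0.0, 0.0]
    for s in samples:
        cs = compress(s)
        for i in range(3):
            acc[i] += cs[i]
    return expand([a / n for a in acc])
```

For example, averaging 99 samples of 0.1 with one firefly of 1000 stays near 0.1 instead of jumping above 10.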

Quote
I've also tried a more involved 'filmic' tone-mapping approach as linked to from this page: http://mynameismjp.wordpress.com/2011/12/06/things-that-need-to-die/ which does give you fine control, but is probably too much to give end users as a set of controls.

The author also links to an interesting blog entry about 'Specular Aliasing', which I'll take a look at.

Btw, Tom, how do you average your samples? Some funky filtering or just box filtering?
subblue
« Reply #8 on: April 15, 2012, 10:01:21 PM »

Btw, Tom, how do you average your samples? Some funky filtering or just box filtering?
I'm using stratified jittering, which accumulates by a known factor as I have a fixed number of supersample passes (see p. 347 of Physically Based Rendering). In your case, batching into a set number of passes might help.
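Stratified jittering in its simplest form looks like this (an illustrative Python sketch, not subblue's actual code): one random sample inside each cell of an n-by-n grid over the pixel, so samples are well spread while remaining random.

```python
import random

def stratified_samples(n_per_axis):
    """Jittered sample offsets in [0,1)^2: one uniform random sample
    inside each cell of an n x n grid (stratified jittering)."""
    step = 1.0 / n_per_axis
    pts = []
    for j in range(n_per_axis):
        for i in range(n_per_axis):
            pts.append(((i + random.random()) * step,
                        (j + random.random()) * step))
    return pts
```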
Syntopia
« Reply #9 on: April 15, 2012, 10:26:47 PM »

I'm using stratified jittering, which accumulates by a known factor as I have a fixed number of supersample passes (see p. 347 of Physically Based Rendering). In your case, batching into a set number of passes might help.

Yes, I've also tried grid stratification. With my current progressive setup, though, I think I'll try Halton sequences for generating samples - here you do not need to specify the count in advance.
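The Halton sequence is just the radical inverse of the sample index in a fixed base, so samples can be generated one at a time without fixing the total count (Python sketch; bases 2 and 3 are the standard choice for 2D):

```python
def halton(index, base):
    """Radical inverse of `index` in `base`: the 1D Halton
    low-discrepancy sequence (index starts at 1)."""
    f, result = 1.0, 0.0
    i = index
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def halton2d(index):
    """2D sample point using bases 2 and 3."""
    return halton(index, 2), halton(index, 3)
```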

But what about the weights for the samples - do you just assign the same weight to each sample, or do you apply a function based on the distance from the pixel center? I'm still a little disappointed that I see so little difference between box, Gaussian, triangle, ... filtering.

Btw, I tried the filmic tonemapping (I found his shader examples here: http://filmicgames.com/archives/75), which works nicely.
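That operator is Hable's Uncharted 2 curve; a Python transcription for reference (constants as published in that post, to the best of my recollection - the shader version is structured the same way):

```python
def hable(x, A=0.15, B=0.50, C=0.10, D=0.20, E=0.02, F=0.30):
    """Hable's filmic curve from the Uncharted 2 talk (filmicgames.com)."""
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

def filmic_tonemap(color, exposure_bias=2.0, white=11.2):
    """Apply the curve per channel and normalize so `white` maps to 1.0."""
    scale = 1.0 / hable(white)
    return [hable(exposure_bias * c) * scale for c in color]
```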
 
hobold
« Reply #10 on: April 17, 2012, 12:36:06 AM »

I think what happens is, that if even a single of these ray reflect into one of the strong HDR lights it will completely dominate the sum.
This is the usual source of this kind of aliasing.

As far as I know, it is provably impossible to remove these artifacts _correctly_ unless you have a display device capable of showing HDR imagery. Your only freedom is in choosing a failure mode. The bloom filter that you have suggested yourself would emulate the behaviour of good ole' analog film. If film is hit by more photons than can be absorbed, the photons are scattered, and some hit the film around the actual focal point.
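A minimal 1D sketch of such a bloom (illustrative Python; the threshold and the 3-tap kernel are assumptions): energy above the threshold is scattered to the neighbors, mimicking that photon scattering, while the total energy is conserved away from the borders.

```python
def bloom_1d(pixels, threshold, kernel=(0.25, 0.5, 0.25)):
    """Minimal 1D bloom: brightness above `threshold` is redistributed
    to neighboring pixels with a small blur kernel, then added back."""
    n = len(pixels)
    excess = [max(p - threshold, 0.0) for p in pixels]
    base = [min(p, threshold) for p in pixels]
    out = list(base)
    for i in range(n):
        for k, w in zip((-1, 0, 1), kernel):
            j = i + k
            if 0 <= j < n:
                out[j] += w * excess[i]
    return out
```

A 2D version would use a separable Gaussian instead of the 3-tap kernel; the principle is the same.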

Another way to incorrectly fix this would be to saturate and tone map the result of each ray separately before summing. This is what rendering a larger image with subsequent downsampling does. This gives you good antialiasing without blooming, but at the cost of "wrong" tones and intensities in overly bright pixels. On the other hand, you cannot show this extreme brightness on ordinary displays anyway.

Choose your poison ...
kram1032
« Reply #11 on: April 17, 2012, 09:24:32 AM »

There is this wavelet rasterization that uses limit box-filtering...
http://josiahmanson.com/research/wavelet_rasterization/
It's obviously not ideal in being box filtering, but on the other hand, its equivalent-to-infinite sample depth might outweigh that? I'm not sure...
However, they stated that they used Haar wavelets (corresponding to box filtering) for the sake of simplicity, while they could have used any other discrete normal wavelet out there.
If you get that to work for fractals, you are probably as close to ideal as possible.
It also works for voxel data. Or any hypercube-xel for that matter...
Syntopia
« Reply #12 on: April 17, 2012, 05:45:30 PM »

There is this wavelet rasterization that uses limit box-filtering...
http://josiahmanson.com/research/wavelet_rasterization/
It's obviously not ideal in being box filtering, but on the other hand, its equivalent-to-infinite sample depth might outweigh that? I'm not sure...
However, they stated that they used Haar wavelets (corresponding to box filtering) for the sake of simplicity, while they could have used any other discrete normal wavelet out there.
If you get that to work for fractals, you are probably as close to ideal as possible.
It also works for voxel data. Or any hypercube-xel for that matter...

If I understand it correctly, the paper is about alias-free rasterization of triangles, so I don't think it can be used. We have no triangles to rasterize, and my problem arises from aliasing of the specular light sources.
Syntopia
« Reply #13 on: April 17, 2012, 05:57:09 PM »

This is the usual source of this kind of aliasing.

As far as I know, it is provably impossible to remove these artifacts _correctly_ unless you have a display device capable of showing HDR imagery. Your only freedom is in choosing a failure mode. The bloom filter that you have suggested yourself would emulate the behaviour of good ole' analog film. If film is hit by more photons than can be absorbed, the photons are scattered, and some hit the film around the actual focal point.

Another way to incorrectly fix this would be to saturate and tone map the result of each ray separately before summing. This is what rendering a larger image with subsequent downsampling does. This gives you good antialiasing without blooming, but at the cost of "wrong" tones and intensities in overly bright pixels. On the other hand, you cannot show this extreme brightness on ordinary displays anyway.

Choose your poison ...

I tried importing the HDR map from above into Keyshot 3 (which I believe is a pretty advanced raytracer) to see how it would handle such sharp lights.

Here is a render:



As is evident, the specular light aliases a lot on the top of the bump-sphere. It doesn't help to turn up the number of samples: new, completely white pixels keep creeping in, without affecting neighboring pixels.

Keyshot actually has a bloom filter, so I tried this to see if it could remove the jagged specular artifacts:



It does remove some artifacts, but there is still aliasing present, and the image looks a bit artificial to me.

I think - for now - the best strategy is limiting the specular intensity before averaging, even though it kind of spoils the purpose of introducing an HDR pipeline to begin with.
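That limiting step could be sketched like this (illustrative Python; the clamp value is an assumed parameter): with each sample clamped, a single firefly can shift an N-sample pixel by at most max_value/N.

```python
def clamped_average(samples, max_value):
    """Average RGB samples after clamping each channel, bounding the
    influence of rare very bright ('firefly') samples on the pixel."""
    n = len(samples)
    out = [0.0, 0.0, 0.0]
    for s in samples:
        for i in range(3):
            out[i] += min(s[i], max_value)
    return [c / n for c in out]
```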
hobold
« Reply #14 on: April 17, 2012, 07:09:28 PM »

I think - for now - the best strategy is limiting the specular intensity before averaging, even though it kind of spoils the purpose of introducing an HDR pipeline to begin with.
Having an HDR pipeline does have its benefits. If you were not confined to real time, you could tone-map an image at several different "exposure" levels, and then re-combine those pictures into an unrealistic but very detailed image that captures detail across the whole dynamic range - exactly like the tricks done with so-called HDR photography.