Pages: 1 [2] 3 4

Topic: Antialiasing fractals - how best to do it?  (Read 19933 times)
twinbee
« Reply #15 on: June 02, 2009, 03:28:57 AM »

Quote
Currently I use a median filter to convert the supersampled data to the final image count value. It seems a little "better" (very subjective) than a simple average,
One way to compare the two types is to render a very highly anti-aliased image (say oversampled to 32*32), and use that to compare to the other two. You'd measure the differences (perhaps something like: abs(red1-red2) + abs(green1-green2) + abs(blue1-blue2) ), and see which picture was more different to the near-perfect oversampled one.
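The yardstick comparison described above - summing abs(red1-red2) + abs(green1-green2) + abs(blue1-blue2) over every pixel against a near-perfect oversampled reference - could be sketched like this (a hypothetical NumPy helper, not any poster's actual code):

```python
import numpy as np

def image_difference(img_a, img_b):
    """Sum of absolute per-channel differences between two RGB images.

    img_a, img_b: (H, W, 3) arrays. A lower score means the candidate
    render is closer to the heavily oversampled reference image.
    """
    # Widen to int64 first so uint8 subtraction cannot wrap around.
    a = img_a.astype(np.int64)
    b = img_b.astype(np.int64)
    return int(np.abs(a - b).sum())

# Usage sketch: render a 32x32-oversampled reference, then score each
# filter's output against it; the filter with the lower score wins.
# score_mean = image_difference(reference, mean_filtered)
# score_median = image_difference(reference, median_filtered)
```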

Quote
Pseudo-Poisson grid
Isn't that just the Monte Carlo method?
HPDZ
« Reply #16 on: June 02, 2009, 03:43:56 AM »

Well, about the first thing, how to compare the two: you'd still have to choose a filtering method even for the 32x32 oversampling. You would think that as the number of oversampling points gets bigger maybe the details of the filter wouldn't matter as much, but it's not clear to me that's necessarily true.

About the second question: I think "Monte Carlo" is not a precisely defined term. True, the Poisson Grid is a random method, but it's not quite the same as just picking N points independently within the pixel to be supersampled. Doing that leaves each point's location unconstrained, while the pseudo-Poisson grid keeps them sort of evenly spaced from each other.
lycium
« Reply #17 on: June 02, 2009, 03:36:47 PM »

my recommendation for sample placement is to use the "best candidate" algorithm to approximate a poisson distribution; basically you take a whole bunch of random candidate sampling points, keeping track of the one with the largest minimum distance to all previously computed points. strictly deterministic sampling methods suck for fractals because they fail to hide the massive aliasing caused by self-similarity with noise (which the eye actually loves, i have in time come to appreciate).
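The best-candidate scheme described above could be sketched roughly as follows (hypothetical Python, not lycium's actual code; `n_candidates` is an assumed tuning knob - more candidates gives a closer approximation to a Poisson-disc distribution at higher cost):

```python
import random

def best_candidate_samples(n_samples, n_candidates=20, rng=None):
    """Approximate a Poisson-disc distribution in the unit square.

    For each new sample, draw n_candidates uniformly random points and
    keep the one whose nearest existing sample is farthest away.
    """
    rng = rng or random.Random()
    samples = [(rng.random(), rng.random())]
    while len(samples) < n_samples:
        best, best_dist = None, -1.0
        for _ in range(n_candidates):
            cx, cy = rng.random(), rng.random()
            # Squared distance to the nearest already-accepted sample.
            d = min((cx - sx) ** 2 + (cy - sy) ** 2 for sx, sy in samples)
            if d > best_dist:
                best, best_dist = (cx, cy), d
        samples.append(best)
    return samples
```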

as for which filter to use, again i have a fractal-specific recommendation: anything except sharpening filters (eg your cubic filter family, and especially ones based on the "ideal" sinc function), these will exacerbate the noise/ringing problem with additional sampling. my personal preference here is the triangle/tent filter (basically 1 - abs(x) over [-1,1] interval), being sharper than the usual gaussian but not sufficiently so to get in the way of making a clean image at a reasonable sampling rate.
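The triangle/tent filter described above - 1 - abs(x) over [-1, 1], applied separably in x and y - might look like this (an illustrative sketch under those assumptions, not anyone's production filter):

```python
def tent_weight(dx, dy):
    """Separable triangle/tent filter: 1 - |x| on [-1, 1], 0 outside."""
    wx = max(0.0, 1.0 - abs(dx))
    wy = max(0.0, 1.0 - abs(dy))
    return wx * wy

def filter_samples(samples):
    """Weighted average of (dx, dy, value) samples taken around a
    pixel centre, where (dx, dy) is the offset from that centre."""
    total_w = total = 0.0
    for dx, dy, value in samples:
        w = tent_weight(dx, dy)
        total += w * value
        total_w += w
    return total / total_w if total_w > 0 else 0.0
```

Unlike sinc-derived kernels, every weight here is non-negative, which is why this filter cannot ring.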

deary me, what an antialiasing-fetishist i have become...
twinbee
« Reply #18 on: June 03, 2009, 12:18:23 AM »

Quote
Well, about the first thing, how to compare the two: you'd still have to choose a filtering method even for the 32x32 oversampling. You would think that as the number of oversampling points gets bigger maybe the details of the filter wouldn't matter as much, but it's not clear to me that's necessarily true.

Well then just use 64x64, as I'm willing to bet that would beat 32x32, no matter what the filtering type.
At 64x64, the noise is so small that it really should provide a practically ideal yardstick image. If it's still not enough, then of course there's 128x128 oversampling. Each level has four times as many samples as the last, so when the comparison consistently produces the same ranking (filter type A always beats filter type B according to the yardstick), you know the yardstick is good enough.

Unless I'm mistaken, this seems like a great way to quantitatively compare filtering types.

Quote
You would think that as the number of oversampling points gets bigger maybe the details of the filter wouldn't matter as much, but it's not clear to me that's necessarily true.

Hmm... I very much would think so. The differences would surely get smaller and smaller, converging to no difference for the super-high oversampling versions. Even the 16x version is almost perfect in the last pic from this thread.
« Last Edit: June 03, 2009, 12:34:00 AM by twinbee »
cKleinhuis
« Reply #19 on: June 03, 2009, 12:52:17 AM »

ehrm, you have to take the result into account. are you really proposing that a 128x128 sub-image used to calculate one tiny pixel of the final image gives a considerably different result from a 64x64 sub-image covering the same area?!

when talking about such big sub-images you very quickly reach visible limits (considering an rgb pixel with 8 bits per channel), because the differences become very small very fast. my experience is that a 4x4 sub-pixel grid (16x the calculation time!) already leads to very good results

....had to say something
twinbee
« Reply #20 on: June 03, 2009, 01:22:06 AM »

That's what I'm saying, yes - at such massive resolutions the quality is so close to perfect that going any deeper just doesn't make a difference, no matter what the filtering algorithm is. I was just making the point that one could go to deeper resolutions if the filtering algorithms do give noticeably different results (which I bet wouldn't be the case beyond, say, 16x16 or 32x32).
« Last Edit: June 03, 2009, 01:24:20 AM by twinbee »
HPDZ
« Reply #21 on: September 30, 2009, 03:36:17 AM »

Well, the forum is recommending I start a new topic since nobody's discussed this in over 90 days. That's probably because everyone's been waiting for their 256x256 oversampled test images to render...ha ha

I did in fact make two test images with 256x256 oversampling ... yes, that is 65536 samples per image pixel! ... one with median filtering and one with mean filtering. I also did this at 16x16 and 32x32.

I'll start a new thread and post the images to my gallery. They will be JPG images, to keep the sizes reasonable; unfortunately, that means a lot of the detail that differentiates the mean filter result from the median filter result is obscured by the JPG compression. So I have put the original uncompressed BMP files on www.hpdz.net. The link is in the new topic thread.
HPDZ
« Reply #22 on: September 30, 2009, 04:13:59 AM »

BAH! Right after I clicked "Post" on that last message, I decided to just go ahead and keep the thread going as it is.

So here's the thing: I believe one of these two methods is superior. I won't bias anyone more than I already have (review the thread) by saying which one I think is superior, but I think if you look closely at even the JPG images, with all their artifacts, you can tell. Download the BMP images (almost 3 MB each!) if you really want to scrutinize them.

I further believe that the difference between filtering methods persists even at huge oversampling levels like the 256x256 I have done here. One of these methods is just inherently not suited to dealing with the kind of skewed, non-Gaussian noise that we have here (more data on that is coming), and it doesn't matter how much oversampling you do. No matter how large the oversampling is, these two methods do NOT converge to a common "perfect" image as our intuition might lead us to believe. The filtering method definitely matters, even at extreme levels of oversampling like this.

When comparing these test images, don't be distracted by the slightly different coloring; these images were both colorized by the same method, but this method uses the distribution of fractal count data to generate the color map, and since the different filtering techniques generate slightly different count distributions in the final, filtered images, the colorings are slightly different. The important thing is to compare the level of detail between the two. Not in the very central white spot, which is overwhelmed by moire. Check out the peripheral areas to see where more fine structure is evident.

The median filtered images do take longer to render. It is easier to add a whole bunch of elements in a list than it is to find the median element in that list. As the list gets longer, this problem gets larger. It took about 95 hours to render the median filtered image and only about 35 hours for the mean filtered one.
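The cost asymmetry described above - a running sum suffices for the mean, while the median requires storing and selecting from all samples - can be illustrated with a minimal sketch (hypothetical helper names, not HPDZ's renderer code):

```python
import statistics

def reduce_mean(samples):
    """O(n), and in practice only a running sum and count are needed -
    the per-pixel sample list never has to be kept in memory."""
    return sum(samples) / len(samples)

def reduce_median(samples):
    """Requires keeping every sample and selecting the middle one:
    O(n log n) with a sort, O(n) average with a selection algorithm."""
    return statistics.median(samples)
```

The robustness difference also shows up directly: one extreme escape-count outlier drags the mean far off while barely moving the median.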

I will post the 16x16 and 32x32 images later. I also want to generate histograms of the two 256x256 images and also try maybe a 2D fourier transform to see if the obvious visual noise can be demonstrated on a power spectrum. And of course, comparing the 16x16 and 32x32 images to the 256x256 images will be helpful too. If there isn't a huge difference between the lower degrees of oversampling and the extreme oversampling, it may not be worth going too crazy with this. These things typically obey the 80-20 rule since some kind of relationship like Performance = sqrt(Effort) typically shows up somewhere.



This is the mean filtered oversampled image. The raw BMP file is at http://www.hpdz.net/images/TechPics/AA3-256x256-Mean.bmp


This is the median filtered oversampled image. The raw BMP file is at http://www.hpdz.net/images/TechPics/AA3-256x256-Median.bmp
« Last Edit: September 30, 2009, 04:16:49 AM by HPDZ, Reason: typo »
HPDZ
« Reply #23 on: September 30, 2009, 04:21:11 AM »

Quote from: lycium
my recommendation for sample placement is to use the "best candidate" algorithm to approximate a poisson distribution; basically you take a whole bunch of random candidate sampling points, keeping track of the one with the largest minimum distance to all previously computed points.

The pseudo-Poisson grid is almost as good, and far simpler to implement: divide the region to be oversampled into NxN subregions. Within each subregion pick a randomly located point to evaluate. This turns out to give very nearly the same spectral properties as the ideal Poisson grid with vastly less computational effort.
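The pseudo-Poisson (jittered) grid described above can be sketched in a few lines, assuming a unit pixel (hypothetical code, not HPDZ's implementation):

```python
import random

def jittered_grid_samples(n, rng=None):
    """Pseudo-Poisson ('jittered') sampling of a unit pixel.

    Split the pixel into an n x n grid of cells and place one uniformly
    random sample inside each cell, giving n*n points that are random
    yet roughly evenly spaced - far cheaper than best-candidate search.
    """
    rng = rng or random.Random()
    cell = 1.0 / n
    return [((i + rng.random()) * cell, (j + rng.random()) * cell)
            for i in range(n) for j in range(n)]
```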
lycium
« Reply #24 on: September 30, 2009, 05:20:15 AM »

the superior spectral noise properties of the best candidate algorithm are easily seen in comparison to jittered sampling
HPDZ
« Reply #25 on: September 30, 2009, 05:43:22 AM »

Jittered sampling will only reduce moire, and only "reduce" it in the sense that it turns it into white noise. Otherwise, jittered supersampling has no particular advantage over regular grid supersampling, at least none that I know of.

If you have an example of the effect you are referring to, I would like to see it.

Revision: I see now that you are referring to something different from what my comment addresses. I think that for most applications the distribution of the supersampling points doesn't matter; for dealing with moire-ridden areas, it matters a lot. Still, I think the pseudo-Poisson jittered grid is pretty darn close to a true Poisson set of supersampling points - close enough that there is almost no perceptible difference in actual images (though maybe some spectral analysis could distinguish them).

I still invite and encourage a comparison of the jittered pseudo-Poisson grid to the best-candidate grid.

And I also still think that the filtering method is critical, as an independent matter from the arrangement of the supersampling points.

I would love to hear any further opinions or analysis on this.
« Last Edit: September 30, 2009, 05:58:26 AM by HPDZ, Reason: Total misunderstanding. »
lycium
« Reply #26 on: September 30, 2009, 06:25:16 AM »

i think we've both done the tests (just had a look around your site)

my comment should have been prefaced with "for most imaging purposes": the superior reconstruction attained by sampling patterns with blue noise properties is most prominent in low frequency regions, whereas fractal imaging is usually full of very high frequencies.
Duncan C
« Reply #27 on: September 30, 2009, 12:31:12 PM »

Quote from: HPDZ on September 30, 2009, 04:13:59 AM
So here's the thing: I believe one of these two methods is superior. [...] The filtering method definitely matters, even at extreme levels of oversampling like this.
Can you post the coordinates of those images, and the number of iterations used to render them? They both seem pretty noisy for all the effort put into them. I'd like to try a crack at them using a different coloring scheme that greatly reduces the appearance of noise.
HPDZ
« Reply #28 on: September 30, 2009, 05:10:28 PM »

Quote from: Duncan C
Can you post the coordinates of those images, and the number of iterations used to render them? They both seem pretty noisy for all the effort put into them. I'd like to try a crack at them using a different coloring scheme that greatly reduces the appearance of noise.

Sure. I will post the coordinates and iteration count tonight when I get back home. I have a few other demo images too from different locations.

They are really noisy, yes, but you should see the unfiltered version!! Since there is an infinite amount of noise in the underlying "signal" (the "true" image of the Mandelbrot set in infinite detail) no amount of filtering is going to make a noiseless image, but I do agree that it is disappointing to see how MUCH noise there is even after this much filtering.

My main purpose here is to contrast the median filter with a simple average as the way of reducing the supersampled data set. I think the median filter does a better job.

What different coloring scheme were you going to try?
Duncan C
« Reply #29 on: October 01, 2009, 03:58:21 AM »

Quote from: HPDZ on September 30, 2009, 05:10:28 PM
What different coloring scheme were you going to try?

My app, FractalWorks, offers both log color change and color change based on a histogram of the plot. For histogram based color tables, iteration values that occur frequently get a large color change, and iteration values that occur infrequently get a small color change. Color change is proportional to a color's popularity in a plot.

This causes areas with very rapid change in iteration value to use smaller color changes so you don't get a riot of color in a small space.

Histogram-based colors have the effect of reducing the amount of color change in areas with "high frequency", which lowers the amount of visual noise.

FractalWorks will also use fractional iteration values to interpolate colors between individual iteration values to avoid color bands.

Finally, I can introduce color change based on the distance estimate (DE) value of a pixel. The closer a pixel is to the Mandelbrot/Julia set, the more weight I apply to a "close to the set" color.
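The histogram-based color assignment described above - frequent iteration values getting proportionally larger color steps - amounts to histogram equalization of the escape-count distribution. A rough sketch of the idea (hypothetical code, not FractalWorks itself):

```python
from collections import Counter

def histogram_color_positions(iteration_grid):
    """Map each escape count to a position in [0, 1] along the gradient.

    The gradient step after each count is proportional to how often
    that count occurs, so popular iteration bands get wide color spans
    while rare bands barely advance the gradient - histogram
    equalization of the count distribution.
    """
    flat = [it for row in iteration_grid for it in row]
    counts = Counter(flat)
    total = len(flat)
    position, mapping = 0.0, {}
    for it in sorted(counts):
        mapping[it] = position
        position += counts[it] / total
    return mapping
```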

Here is a sample image from a similar area of the Mandelbrot set. This image doesn't use ANY supersampling or antialiasing. It is a pure 1000x1000 pixel plot using histogram based colors (and distance estimates to create a rapid change in color for pixels very near the Mandelbrot set.) It is saved as a JPEG directly from FractalWorks.

Usually for images I am posting to a forum I'll render them at 2x the target size and then downsample them using Photoshop's bicubic method, which does a decent job of antialiasing. I wanted to show a "pure" image for this thread however.

This plot took less than a second to render on my 8-core Intel Xeon Macintosh. If I turn off multithreading it takes a little less than 4 seconds.




Document name:   Fractal Forums sample
Fractal type:   mandelbrot
Plot size (w,h):   1000,   1000
Maximum iterations:   50000
Center Point (real, imaginary):   -0.74886,   0.069278 i
Plot Width (real):   0.00222

The max iteration count of 50000 was overkill for this image. The average number of iterations was about 400, and the plot doesn't show any blobs in the middle even with a max iteration count of 5000. I just slapped in a high number to make sure there weren't any artifacts from using too low a max iteration value.
« Last Edit: October 01, 2009, 04:05:41 AM by Duncan C »