Author Topic: Antialiasing fractals - how best to do it?  (Read 8341 times)
Duncan C
« on: March 06, 2008, 04:55:59 PM »
I've got my app, FractalWorks, rendering 3D height maps now, and am overall pretty pleased with the results. Directional lighting and specular highlights add a lot to the look of the plots. I've posted a few of the images to the samples gallery.

Here is a sample image for those that haven't seen the other post:


[image omitted; a much larger "original" version is on pbase]

I'm using the distance estimate method (DEM) as the basis of my height values (inverted, logged, scaled and stretched, etc.).
Very close to the set, the 3D texture of the height map changes so much that it doesn't look good.

Simple supersampling doesn't work for fractals because there is an infinite amount of detail.

What approaches have others used to get smooth lines and cleaner textures? I should be able to apply an approach used for antialiasing 2D plots to my 3D plots.

I'm thinking of calculating a weighted average of the current pixel and its neighbors in a 3x3 grid:
((the sum of the 8 neighboring pixels / 8) + the current pixel) / 2.
Or I could do a simple average of all 9 pixels in a 3x3 grid centered on the current pixel. Either approach would tend to smooth out the "spiky" nature of the areas closest to the Mandelbrot/Julia set.
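[Editor's note: the two averaging schemes above can be sketched directly. These are hypothetical helpers for illustration, not FractalWorks code; the 3x3 neighborhood of height values is simply passed in as an array.]

```cpp
// Sketch of the two averaging schemes described above. Hypothetical
// helper functions, not FractalWorks code; `n` is the 3x3 neighborhood
// of height values centered on the current pixel.
float smooth_weighted(const float n[3][3])
{
    float neighbors = 0.0f;
    for (int y = 0; y < 3; ++y)
        for (int x = 0; x < 3; ++x)
            if (!(x == 1 && y == 1))
                neighbors += n[y][x];
    // ((sum of 8 neighbors / 8) + current pixel) / 2
    return (neighbors / 8.0f + n[1][1]) / 2.0f;
}

float smooth_box(const float n[3][3])
{
    float sum = 0.0f;
    for (int y = 0; y < 3; ++y)
        for (int x = 0; x < 3; ++x)
            sum += n[y][x];
    // all nine pixels weighted equally
    return sum / 9.0f;
}
```

The difference matters for spiky data: the weighted form gives the center pixel weight 1/2, so an isolated spike survives at half height, while the plain box average knocks the same spike down to 1/9.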

Does anybody else have a good suggestion as to how to do this?

Here's a sample image that shows the effect I want to avoid:

This is my attempt to duplicate the cover of the book "The Beauty of Fractals":



Note how the areas in the peaks closest to the baby mandelbrot are too chaotic (funny, that!) to make much sense visually.


Duncan

Regards,

Duncan C
cKleinhuis (formerly known as 'Trifox')
« Reply #1 on: March 06, 2008, 05:20:33 PM »

i am using a gaussian blur filter to smooth out my details; at very high resolutions this leads to very good results
http://en.wikipedia.org/wiki/Gaussian_blur

but all in all it wrinkles away too much of my detail, and i haven't tried a selective filter yet

since i am using this method for animations, the chaos gets even worse. to make smooth animations i would have to apply the blur filter in the time direction as well, so to speak: take the previous blursize and next blursize images into account when smoothing. but i have not done this yet, because it would have strange effects on my movies, like motion blur, and that isn't the result i want to achieve ...
« Last Edit: March 06, 2008, 05:25:52 PM by Trifox »

---

divide and conquer - iterate and rule - chaos is No random!
lycium
« Reply #2 on: March 07, 2008, 12:20:38 AM »

duncanc, you've hit the nail on the head: what you want to do is antialias your heightmap. blurring is something completely different.

a very simple and effective way to antialias the heightmap is via supersampling: instead of just using 1 sample per heightmap value, divide each "region" of your M x N map into K x K subregions (effectively enlarging it by a factor of K in each dimension, but you don't store all those values) which you evaluate and average to produce your final heightmap value. this is a simple way to bandlimit your signal, discarding all high frequency data that cannot be represented at the current sampling frequency (i.e. above the nyquist limit).

trifox, this method will give you much better results than blurring.

Duncan C
« Reply #3 on: March 07, 2008, 01:36:37 AM »

Quote from: lycium on March 07, 2008, 12:20:38 AM
> duncanc, you've hit the nail on the head: what you want to do is antialias your heightmap. blurring is something completely different. [...]

lycium,

My app is written to render a 2D fractal, then create a 3D view of that fractal, with one height map (distance estimate) value for each pixel. The easiest thing for me to do will be to create a 2D fractal that's a multiple of the size of the target 3D mesh, then average the height values to create a smaller mesh.

Do you think a 4x average would be enough to get significant antialiasing? If not, how many extra samples would you suggest? And should the K x K subregions overlap between points, or should they tile?


Duncan
lycium
« Reply #4 on: March 07, 2008, 09:10:01 AM »

yes, what you in effect (but not in practice!) do is simply render a higher resolution version and then downsample. there is no reason to store all that intermediate data.

it's really simple (bashing it out right here without ide, so there might be mistakes and it's obviously not optimised):

Code:
for (int y = 0; y < yres; ++y)
for (int x = 0; x < xres; ++x)
{
    float r = 0.0f, g = 0.0f, b = 0.0f;

    // evaluate a K x K grid of subpixel samples for this pixel
    for (int v = 0; v < supersample_factor; ++v)
    for (int u = 0; u < supersample_factor; ++u)
    {
        float fx = float(x) + float(u) / float(supersample_factor);
        float fy = float(y) + float(v) / float(supersample_factor);

        colour eval = pixel_function(fx, fy);

        r += eval.r;
        g += eval.g;
        b += eval.b;
    }

    // average the K*K samples down to one pixel value
    r /= float(supersample_factor * supersample_factor);
    g /= float(supersample_factor * supersample_factor);
    b /= float(supersample_factor * supersample_factor);

    imagedata[y * xres + x].set(r, g, b);
}

regarding how much supersampling is enough, that comes down to your taste and the function you're sampling. fractals are particularly tricky to antialias because they are infinitely convoluted at all scales, so it's not theoretically clear whether supersampling will reduce aliasing or increase it! in practice, however, it seems to do a good job; 3x3 samples should produce a decently crisp reconstruction in most cases. obviously the higher the resolution you use, the less aa you can get away with, so trifox will need a lot more for his 200x200 heightmap than you will for your 2000x2000, because there is much greater function variation to account for accurately.

i should also say that signal theory is a very deep rabbit hole and this description doesn't even begin to scratch the surface. it's also largely irrelevant for fractal rendering, because fractals don't fit the classical assumptions of signal theory.

some time ago i taught a guy on deviantart, chaos5, the basics of sampling and reconstruction (you can look through his various deviations and journals for my posts if you're interested). the art of sampling comes into play when you start attempting to properly evaluate the signal reconstruction integral (of function times bandlimiting kernel)...

cKleinhuis
« Reply #5 on: March 07, 2008, 03:28:30 PM »

i also use supersampling, with random points scattered over the subgrid area, and smooth it afterwards as well.

without the blurring, it significantly increases the image quality, but for 3d ...

i have some more questions. i've heard about the nyquist frequency, but where does it come into play here?

the high frequency noise increases towards the mandelbrot area; how can one possibly determine
a good subsampling for these areas? you said a 3x3 raster serves in most cases, but i am not satisfied here.
basically it comes down to limiting the iteration depth?

iteration depth = an indicator for frequencies?

wouldn't it be possible to use this as a factor for smoothing?
e.g. make a histogram over the occurring iteration depths?

just thoughts ...

another thing to mention: what kind of grid do you use?

have you experimented with triangle grids? or even hexagonal grids?
http://mathworld.wolfram.com/TriangleTiling.html

i thought this could improve the smoothness of the 3d grid ...
lycium
« Reply #6 on: March 08, 2008, 05:00:15 AM »

the nyquist criterion is an element of classical sampling theory that, as i mentioned, has no part to play in reconstructing fractal functions. basically, if you have some audio signal, say, with a maximum frequency content of 20khz, then you should sample it at a minimum of 40khz to be able to reconstruct it perfectly. when we don't know the frequency content of our function, this theory is of little use.

however, as you pointed out, if we are given a sampling rate we can sometimes use this to adjust how much detail we add to our function. a nice example of this is the fractal perlin noise function, where interpolated noise is added in "octaves", each term having twice the frequency of the previous; if we have a given sample frequency we can then decide how many octaves are added, blending the final octave with an average value (which is usually zero).
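[Editor's note: a minimal sketch of that octave-counting logic, assuming octave k has frequency base_freq * 2^k; the names are illustrative and not from any particular noise library.]

```cpp
// hypothetical sketch: count how many noise octaves can be added before
// exceeding the nyquist limit for a given sampling rate. octave k is
// assumed to have frequency base_freq * 2^k.
int octaves_below_nyquist(double sample_rate, double base_freq)
{
    const double nyquist = 0.5 * sample_rate; // highest representable frequency
    int octaves = 0;
    for (double f = base_freq; f <= nyquist; f *= 2.0)
        ++octaves;
    return octaves;
}
```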

unfortunately, i don't see how we can relate this spectral interpretation to fractals. it's clear that the function's frequency increases the more we iterate, but whether this is linear, exponential, quadratic, ... i don't know. maybe a real mathematician can help us out here.

in any case, it doesn't actually matter too much to consider these things in theoretical perfection; simpler methods will work just fine. for example, we could for each pixel see how much the function varies and then do a second sampling pass based on that; it ought to work well.

about which irregular sampling patterns i use, that depends on the application. typically for ray tracing purposes i use quasi monte carlo sequences which effectively fill the first 5 or 6 sampling dimensions, whereas for straightforward image sampling i'll usually use a rank-1 lattice if i can afford it.

here's a good book on monte carlo and quasi monte carlo methods you might be interested to look at (no use hiding from the fact that we're approximating integrals): http://www.cs.fsu.edu/~mascagni/Hammersley-Handscomb.pdf

lycium
« Reply #7 on: March 08, 2008, 11:35:55 AM »

here are some good notes on signal processing: http://www.cs.virginia.edu/~gfx/Courses/2003/ImageSynthesis/papers/Sampling/Notes%20on%20Signal%20Processing.pdf

lycium
« Reply #8 on: March 11, 2008, 08:09:35 PM »

trifox, have you tried this antialiasing instead of your current blurring approach?

i think that, together with higher resolutions like 1024x1024 or more, it will bring a tremendous quality improvement to your renders. it also makes zooms a lot easier, since you don't need to worry about continuity between different blurred images and the like.

cKleinhuis
« Reply #9 on: March 12, 2008, 10:18:02 AM »

yes, the current version of mutatorkammer has the oversample feature, which is also used
when calculating pixels for heightmaps. as i said before, i am using a randomized grid
as a base, so when using 8x oversampling, 64 pixels are taken into account for one
resulting pixel.

i need the blurring for other things, because no matter how well you do the oversampling,
you will always get high peaks in some areas, e.g. the minibrots in some spiral arms of
the mandelbrot ...

because of this, rendering one single image takes me more than a minute ;(
i always calculate everything on the fly: the antialiased heightmap, the blurring,
and then the rendering. once that is done, i use opengl buffers to render the scene,
so one can move the camera around ...

but you are right, the blurring costs me far too much, because of the continuity at
the borders ...
lycium
« Reply #10 on: March 12, 2008, 10:28:45 AM »

you can speed up the blur a number of ways (in order of complexity):

1. make a special case for the majority of the pixels in the centre region which are guaranteed to not have problems at the edges, to remove all those if-checks. actually to avoid inaccuracies you should compute the fractal map BEYOND the edges of what will be displayed, so that the full image you work with is using correct data.

2. the gaussian filter kernel is separable, which means f(x,y) = g(x) * g(y). a basic consequence of this is that you can convert your O(N^2) filtering loop to O(N) by first blurring in one dimension, then the other.

3. from some spline theory (specifically that of cascaded convolutions) we know that the gaussian is the limit of multiple box-convolutions. so if you box-convolve a box function, you get the linear (so-called "tent") filter, if you box-convolve it again you get a piecewise quadratic, then a cubic, etc... in the limit it becomes the gauss filter. photoshop uses 3x box filters to approximate the gaussian, and it works quite well. why would you want to use the box filter? because it can blur an image for any filter size in constant time if coded cleverly.

ahhh, old demoscene tricks
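[Editor's note: point 2 above in code, as a rough sketch; the clamp-to-edge border policy and all names are editorial choices, not from the post.]

```cpp
#include <cmath>
#include <vector>

// sketch of separable gaussian blur: one 1D pass per axis instead of a
// full 2D kernel. borders clamp to the nearest edge pixel.
std::vector<float> gaussian_blur_separable(const std::vector<float>& img,
                                           int w, int h, float sigma)
{
    // build the normalised 1D kernel g; the 2D kernel is g(x) * g(y)
    int radius = (int)std::ceil(3.0f * sigma);
    std::vector<float> k(2 * radius + 1);
    float sum = 0.0f;
    for (int i = -radius; i <= radius; ++i) {
        k[i + radius] = std::exp(-(float)(i * i) / (2.0f * sigma * sigma));
        sum += k[i + radius];
    }
    for (float& v : k) v /= sum; // weights sum to 1

    auto clamp = [](int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); };

    std::vector<float> tmp(img.size()), out(img.size());
    // horizontal pass
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float acc = 0.0f;
            for (int i = -radius; i <= radius; ++i)
                acc += k[i + radius] * img[y * w + clamp(x + i, 0, w - 1)];
            tmp[y * w + x] = acc;
        }
    // vertical pass over the horizontally blurred data
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float acc = 0.0f;
            for (int i = -radius; i <= radius; ++i)
                acc += k[i + radius] * tmp[clamp(y + i, 0, h - 1) * w + x];
            out[y * w + x] = acc;
        }
    return out;
}
```

this does 2 * (2r+1) kernel taps per pixel instead of (2r+1)^2, which is where the O(N^2) to O(N) saving comes from.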


all of this aside though, what you actually want to do is reduce the dynamic range of your heightmap, which the blurring does only as a side effect of destroying most of the detail. so what you really want, to make the image more "regular", is to apply a range compression function like the logarithm to your map. this is similar to what is done in astronomical imaging, because star intensities don't follow a nice linear ramp; they go up in geometric scales (10x, 100x, 1000x, ...).

have a look at this page for more infos on this range compression operator: http://homepages.inf.ed.ac.uk/rbf/HIPR2/pixlog.htm
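[Editor's note: a tiny sketch of that operator, normalised so the largest height maps to 1.0; the scale constant c is an arbitrary editorial choice.]

```cpp
#include <cmath>

// sketch of logarithmic range compression: v -> c * log(1 + v), with c
// chosen so the largest height value vmax maps to 1.0. illustrative only.
float log_compress(float v, float vmax)
{
    const float c = 1.0f / std::log(1.0f + vmax);
    return c * std::log(1.0f + v);
}
```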

cKleinhuis
« Reply #11 on: March 12, 2008, 11:09:00 AM »

1. huh? this doesn't sound good to me. are you talking about the black hole in the middle of the mandelbrot?

2. don't worry, i am already using the gaussian O(n^2) -> O(n) trick

3. any links on the constant-time version?

but the logarithm compression is new to me; i had not thought about it. all in all i am already using an HDR method, but only for creating the color ranges ...
lycium
« Reply #12 on: March 12, 2008, 12:07:03 PM »

1. i'm talking about taking those if-checks out of the innermost loops by special casing the central region where the filter will never extend past the image boundaries. if you compute extra pixels on the edges specially for this, then you also get a more correct image.

2. ok

3. basically you keep a running sum.

consider the following 1d data to be box-blurred:

2 4 6 8 ...

let's say you're using a 3-tap box filter. your first value is:

(2+4+6) / 3

then your next value is:

(4+6+8) / 3

which can be computed as:

(s - 2 + 8) / 3, where s = 2+4+6 is the previous sum.

so to blur a scanline, you first initialise your running sum, then each time you move a pixel you subtract the leftmost value and add the new one.

clearly the processing speed is independent of the size of your filter; note that this method only works for the box filter.
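[Editor's note: the running sum in code, roughly; border clamping is an editorial addition, the post doesn't specify how edges are handled.]

```cpp
#include <vector>

// sketch of the constant-time box blur described above: blur one
// scanline with a box filter of odd width `taps` using a running sum,
// so the cost per pixel is independent of the filter size.
std::vector<float> box_blur_scanline(const std::vector<float>& line, int taps)
{
    int n = (int)line.size(), r = taps / 2;
    // clamp out-of-range indices to the nearest edge sample
    auto at = [&](int i) { return line[i < 0 ? 0 : (i >= n ? n - 1 : i)]; };

    float s = 0.0f;
    for (int i = -r; i <= r; ++i) s += at(i); // initialise the running sum

    std::vector<float> out(n);
    for (int i = 0; i < n; ++i) {
        out[i] = s / (float)taps;
        s += at(i + r + 1) - at(i - r); // slide window: add new, drop old
    }
    return out;
}
```

on the example data 2 4 6 8 with 3 taps, the interior outputs are (2+4+6)/3 = 4 and (4+6+8)/3 = 6, exactly as derived above.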

twinbee
« Reply #13 on: September 24, 2008, 07:20:21 PM »

An old topic, but an interesting one.

I've recently found that for some fractals with sharply contrasted edges, one needs to sample at 16x16 times the desired resolution. This makes it 256 times slower than usual, but the results are definitely worth it. It makes such a difference to the noise level in very detailed areas.
HPDZ
« Reply #14 on: December 18, 2008, 04:51:05 PM »

This is an area I too am very interested in, although for the kind of animations I do, I can't afford the 4X or 16X or 256X rendering time increase that anti-aliasing imposes.

First, I think the image on the cover of the book clips the height at some upper limit; if you look closely, everything around the little mini-brot is flat. I think they do something like if (count > max) count = max, which will obviously remove a lot of the jaggedness.

For anti-aliasing, I think the most important thing is that since the fractal does indeed have infinite spatial detail, there's no perfect way to do it. Some ways are better or faster than others, but none is perfect. There will always be unfilterable noise above the Nyquist frequency (as determined by the size of the sampling grid of your image pixels) in this situation.

When I do use AA, what I do, like everyone else, is supersample and apply a filter (average, windowed average, or median filter) to the supersampled data to generate the final count value for a pixel. I filter the actual count values before they are converted to colors rather than trying to filter the image colors (a much harder problem).

Some variations I've tried:

  • Pseudo-Poisson grid: With areas of extreme detail, this can help reduce moire somewhat. The technique is to divide each image pixel into an NxN grid, but instead of calculating the fractal values at the center of each grid box, choose a random location within each grid box. This randomness spreads out the spatial frequencies that would turn into moire and turns them into random noise instead. The Poisson grid is the theoretically ideal way to do this, but it's a hassle to generate a perfect Poisson grid and I've found that the pseudo-Poisson grid is good enough.
  • Selective supersampling: The idea here is to select the noisiest pixels and supersample only them. There are a million variations on this, but the basic idea is to compare each pixel to its neighbors, and if it's different enough, it's a candidate for further oversampling. This can reduce the number of pixels that need oversampling quite a bit.
  • Different filter types: Currently I use a median filter to convert the supersampled data to the final image count value. It seems a little "better" (very subjective) than a simple average, especially in areas near the edge of the set, where there is a lot of detail. I've tried averaging and various windowing schemes, and the difference is too subtle to matter much.
  • The main thing: The more points in the supersampling grid, the better. I almost never go beyond 4x4, but in some images there is a noticeable difference between 4x4 and 16x16. The finer the grid, the more the high-frequency noise gets filtered and the cleaner the image will look. This affects the final result far more than which type of filter you apply to the supersampled data (for any reasonable filter).
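[Editor's note: the pseudo-Poisson grid in the first bullet might look something like this sketch. fractal_count stands in for whatever iteration function produces a pixel's count value; all names here are illustrative, not HPDZ's code.]

```cpp
#include <random>

// sketch of jittered ("pseudo-Poisson") supersampling: one random
// sample inside each of the N x N sub-cells of a pixel, averaged into
// a single count. fractal_count is a stand-in for the caller's
// iteration function.
template <typename F>
double jittered_pixel(int px, int py, int n, F fractal_count, std::mt19937& rng)
{
    std::uniform_real_distribution<double> jitter(0.0, 1.0);
    double sum = 0.0;
    for (int v = 0; v < n; ++v)
        for (int u = 0; u < n; ++u) {
            // random offset within the (u,v) sub-cell, in pixel units
            double fx = px + (u + jitter(rng)) / n;
            double fy = py + (v + jitter(rng)) / n;
            sum += fractal_count(fx, fy);
        }
    return sum / (n * n);
}
```

Because each sample stays inside its own sub-cell, samples can never clump the way fully random points can, which is what makes the jittered grid a cheap stand-in for a true Poisson distribution.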

Something else to consider is to apply a filter to the final image itself instead of or in addition to oversampling. This can remove a lot of sparkle noise but also can remove a lot of detail.

Another thing that I haven't tried (but I think is done in many animations) is to undersample and interpolate. That is, calculate image points on, say, a 160x120 grid, then use something like bilinear interpolation (or bicubic, or Lanczos -- pick your favorite method) to get a 640x480 image. The undersampling is kind of like applying a low-pass filter, so you get less noise in the final image.
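[Editor's note: a sketch of that undersample-and-interpolate step using plain bilinear interpolation; sizes and names are illustrative.]

```cpp
#include <vector>

// sketch of bilinear upsampling: scale a small sw x sh grid of values
// (e.g. 160x120) up to dw x dh (e.g. 640x480). illustrative only.
std::vector<float> bilinear_upsample(const std::vector<float>& src,
                                     int sw, int sh, int dw, int dh)
{
    std::vector<float> dst(dw * dh);
    for (int y = 0; y < dh; ++y)
        for (int x = 0; x < dw; ++x) {
            // map the destination pixel back into source coordinates
            float fx = (float)x * (sw - 1) / (dw - 1 > 0 ? dw - 1 : 1);
            float fy = (float)y * (sh - 1) / (dh - 1 > 0 ? dh - 1 : 1);
            int x0 = (int)fx, y0 = (int)fy;
            int x1 = x0 + 1 < sw ? x0 + 1 : x0;
            int y1 = y0 + 1 < sh ? y0 + 1 : y0;
            float tx = fx - x0, ty = fy - y0;
            // blend the four surrounding source values
            float top = src[y0 * sw + x0] * (1 - tx) + src[y0 * sw + x1] * tx;
            float bot = src[y1 * sw + x0] * (1 - tx) + src[y1 * sw + x1] * tx;
            dst[y * dw + x] = top * (1 - ty) + bot * ty;
        }
    return dst;
}
```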
« Last Edit: December 29, 2008, 06:28:18 PM by HPDZ, Reason: Moire is not a name and should not be capitalized! »

Zoom deeply.
www.hpdz.net