Author Topic: Why do Julia calculations fall apart at lower magnification than Mandelbrot?  (Read 9651 times)
Duncan C
« on: April 05, 2007, 06:49:53 PM »

The program I'm developing, FractalWorks, so far generates "conventional" Mandelbrot and Julia set plots using complex numbers. It uses double precision (64 bit) floating point math to do its calculations.

For Mandelbrot sets, I can get down to a plot with (max_real - min_real) of around 4e-14 before floating point errors start to degrade the resulting images.

For Julia sets, though, my plots fall apart at a much lower magnification. Here is a sample plot at about 2.5e-7 that is very badly distorted from floating point errors.

The same routines calculate both Mandelbrot and Julia sets. For Mandelbrot plots the routine uses zero for the initial value of z and the point coordinates for c, and for Julia sets it uses the Julia origin point as the value of c and the point coordinates for the initial value of z.

{in the iterative equation z(x+1) = z(x)^2 + c}
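In code terms, the shared routine Duncan describes might look something like this (a hypothetical sketch, not FractalWorks code; the `cplx` type and function names are made up for illustration):

```c
typedef struct { double re, im; } cplx;

/* Iterate z <- z^2 + c until escape (|z| > 2) or max_iter is reached;
   returns the iteration count. The same routine serves both set types --
   only the starting values differ. */
static int iterate(cplx z, cplx c, int max_iter)
{
    int i;
    for (i = 0; i < max_iter && z.re*z.re + z.im*z.im <= 4.0; ++i) {
        const double zr = z.re*z.re - z.im*z.im + c.re;
        const double zi = 2.0*z.re*z.im + c.im;
        z.re = zr;
        z.im = zi;
    }
    return i;
}

/* Mandelbrot: z0 = 0, c = the pixel's coordinate. */
static int mandel_iters(cplx pixel, int max_iter)
{
    cplx zero = { 0.0, 0.0 };
    return iterate(zero, pixel, max_iter);
}

/* Julia: z0 = the pixel's coordinate, c = the fixed Julia parameter. */
static int julia_iters(cplx pixel, cplx c, int max_iter)
{
    return iterate(pixel, c, max_iter);
}
```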

I don't understand why this is. Can somebody enlighten me?

Here is a sample Julia plot that shows the distortion I'm talking about. Click on the image to see a full sized image including the plot coordinates:
 
 http://www.pbase.com/image/76699423/original

Duncan C





« Last Edit: April 14, 2007, 01:20:54 AM by Nahee_Enterprises »

Regards,

Duncan C
lycium
« Reply #1 on: April 05, 2007, 08:30:11 PM »

interesting, i had no idea that the two iterations had such differing numerical properties! this is of course surprising because they use the exact same formula (the c for the julia-iteration is just the starting point for a mandelplot).

the only thing i can think of that might be responsible for such different behaviour is the dynamics of the iteration (ie. the orbit of points z_n); a way to verify this would be to sum the distance (squared should be fine) between subsequent iterations and the c-value, since floating point computations are sensitive to the relative magnitudes of their operands (this is especially true of addition and subtraction). since the mandelbrot's c-value is usually closer to the orbit those differences could well be smaller, leading to less cumulative roundoff.
« Last Edit: April 05, 2007, 08:32:43 PM by lycium »

Duncan C
« Reply #2 on: April 06, 2007, 03:29:38 PM »

interesting, i had no idea that the two iterations had such differing numerical properties! this is of course surprising because they use the exact same formula (the c for the julia-iteration is just the starting point for a mandelplot).

the only thing i can think of that might be responsible for such different behaviour is the dynamics of the iteration (ie. the orbit of points z_n); a way to verify this would be to sum the distance (squared should be fine) between subsequent iterations and the c-value, since floating point computations are sensitive to the relative magnitudes of their operands (this is especially true of addition and subtraction). since the mandelbrot's c-value is usually closer to the orbit those differences could well be smaller, leading to less cumulative roundoff.

lycium,

You suggest keeping a running total of the cumulative (square) difference in distance between an iteration's value and its initial c value, for each point on a Julia graph and the corresponding point on a Mandelbrot graph? If your theory holds true, I gather you'd expect the differences to be much larger for the Julia set.

I'm trying to think of a meaningful way to view the results. At the simplest I could average the distance value for all points on both a Mandelbrot and Julia plot, but I'm not sure that would give a meaningful result. I could also compare corresponding total difference values between points on a Julia and Mandelbrot plot, and color them with one gradient when the Julia distance is larger, and a contrasting gradient when the Mandelbrot distance is greater. The problem with that is that there's not really a meaningful correspondence between individual points on a Mandelbrot and Julia plot.

Hmm... My brain isn't working very well this morning. I'll have to drink more coffee and think about this further.


Duncan C
 

Regards,

Duncan C
lycium
« Reply #3 on: April 06, 2007, 04:24:39 PM »

You suggest keeping a running total of the cumulative (square) difference in distance between an iteration's value and it's initial c value, for each point on a Julia graph and the corresponding point on a Mandelbrot graph? If your theory holds true, I gather you'd expect the differences to be much larger for the Julia set.

hmm, needs a bit of clarification i think (i also wrote my suggestion woefully undercaffeinated): basically what we'll do is render a "debug image", which just visualises some pixelwise info. so you do your zoom where the julia breaks up and the mandel doesn't, then you hit debug vis. for each pixel, it computes the sum of the (sqr) distance between the last iteration pos and the c-value, over all iterations. so for each pixel you'll get a distance-sum, which you visualise however is best (mapping to a colour, just a linear rescale and clamp, whatever).

thinking about this a little further, it's not quite right to use the squared distance, since that's not a linear metric (which is desirable for comparisons).

debug images are always interesting, hope you'll post yours here :)

Duncan C
« Reply #4 on: April 06, 2007, 05:42:25 PM »

[snip]
...so you do your zoom where the julia breaks up and the mandel doesn't, then you hit debug vis. for each pixel, it computes the sum of the (sqr) distance between the last iteration pos and the c-value, over all iterations. so for each pixel you'll get a distance-sum, which you visualise however is best (mapping to a colour, just a linear rescale and clamp, whatever).

Question on terms: You say "..computes the sum of the square distance between the last iteration pos and the c-value."

If I compare the distance between the last iteration's position with the c-value, what sum are you referring to?

I think you're right that the square value would be hard to compare. I might take the square root of the distance measurements and just let the debug plot run overnight. I wonder how much error there is in the square root function in my math library, at very low

By the way, for the plots where the Julia image breaks down, I use a very large max iteration value. (60,000 iterations in the example.)

I wonder if the breakdown is due to magnification or high iteration count. I find it easier to find Julia sets (vs Mandelbrot plots) that have interesting detail at high iteration counts but fairly low magnification. It might be a function of the topology of the different sets.



Duncan C

Regards,

Duncan C
lycium
« Reply #5 on: April 06, 2007, 06:00:05 PM »

[snip]
...so you do your zoom where the julia breaks up and the mandel doesn't, then you hit debug vis. for each pixel, it computes the sum of the (sqr) distance between the last iteration pos and the c-value, over all iterations. so for each pixel you'll get a distance-sum, which you visualise however is best (mapping to a colour, just a linear rescale and clamp, whatever).

Question on terms: You say "..computes the sum of the square distance between the last iteration pos and the c-value."

If I compare the distance between the last iteration's position with the c-value, what sum are you referring to?

something like this:

Code:
double total_len = 0.0;
for (i = 0; i < max_iter; ++i)
{
  // z <- z^2 + c
  const double zr = z.re*z.re - z.im*z.im + c.re;
  const double zi = 2.0*z.re*z.im + c.im;
  z.re = zr;
  z.im = zi;

  const double dx = z.re - c.re;
  const double dy = z.im - c.im;

  total_len += sqrt(dx*dx + dy*dy);
}

// scale total_len to some (mostly) displayable range and plot that to the debug image

I think you're right that the square value would be hard to compare. I might take the square root of the distance measurements and just let the debug plot run overnight. I wonder how much error there is in the square root function in my math library, at very low

at double precision, even with something like 10k terms in the sum (iteration) the total should be quite accurate, and definitely accurate enough for visual display (remember that we can only show 255 intensity levels!).

as for "overnight", damn dude, how slow is this app? for one thing, we don't need a poster resolution image, just some sort of comparison... 512x512 is more than sufficient, which you should be able to render in realtime i guess? a few seconds at most!

By the way, for the plots where the Julia image breaks down, I use a very large max iteration value. (60,000 iterations in the example.)

ah well, there's something i haven't considered. that's one hell of a chaotic system, that it can orbit for 60k iterations without ever having radius > 2!

I wonder if the breakdown is due to magnfication or high iteration count. I find it easier to find Julia sets (vs Mandelbrot plots) that have interesting detail at high iteration counts but fairly low magnification.  It might be a function of the topology of the different sets.

i'm not sure what you mean by "function of the topology", but definitely 60k iterations of anything numerical will have significant drift.

btw, at double precision you're pushing common computers pretty much to their limit (if you consider speed as well as precision, since there's a dramatic speed dropoff after double precision). i know i say it too much, but there are other very good looking fractals (and types of fractals) that are far less numerically demanding...

gandreas
« Reply #6 on: April 06, 2007, 06:15:46 PM »

OK, think of it this way.

Each calculation takes two starting values - the coordinate, and a constant fixed value.  Obviously, the fixed value is just that - fixed.  The coordinate, however, is going to vary on the screen - what's more, the value you use as a coordinate is not going to exactly represent the location on the screen (there are going to be slight rounding errors going from screen to the zoomed coordinate space - we'll call that "E" since a nice unicode epsilon won't show up here).

M has z0 = constant, and c = pixel +/- E.  J has z0 = pixel +/- E, and c = fixed.

Now at each step, you take z^2 + c.

We'll also pick outrageously large simple number (and lets make them 1D as well).  Z0 = 10, C = 10.

If E = 1, then M(Z0) = 11, M(C) = 10, and J(Z0) = 10, J(C) = 11

Then at one step we've got M(z1) = 10 * 10 + 11 = 111 (off by 1)
J(z1) = 11 * 11 + 10 = 131 (off by 21)
If E = 0, then M(z1) = J(z1) = 110.  As you can see the Julia set is already more "off"

Another step
E=0, z2 = 12110
M(z2) = 111 * 111 + 11 = 12332 (off by 222)
J(z2) = (131) * (131) + 10 = 17171 (off by 5061)

Even if the error were smaller (E=0.1) we get:
E0(z1) = 110, M(z1) = 110.1, J(z1) = 112.01 (.1 vs 2.01)
E0(z2) = 12110, M(z2) = 12132.11, J(z2) = 12556.24 (22.11 vs 446.24)

So as you can see, the error caused by converting screen coordinates to the complex plane accumulates faster in the Julia set, because the initial step squares the error.  And since that error is going to vary from pixel to pixel, you get that weird blockiness.

It has nothing to do with how you calculate the color or anything like that - the problem is that there is going to be error when you convert from screen pixel coordinates to your complex number.
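gandreas's 1D arithmetic above is easy to check mechanically. A quick snippet (his Z0 = C = 10, E = 1 example; the `step` helper is just for illustration):

```c
/* One step of the 1D analogue of z <- z^2 + c. */
static double step(double z, double c) { return z*z + c; }

/* Exact orbit with Z0 = C = 10:        z1 = step(10, 10)  = 110
                                        z2 = step(110, 10) = 12110
   Mandelbrot-style (error E=1 in c):   step(10, 11)  = 111   (off by 1)
                                        step(111, 11) = 12332 (off by 222)
   Julia-style (error E=1 in z0):       step(11, 10)  = 131   (off by 21)
                                        step(131, 10) = 17171 (off by 5061) */
```

All of these values are small integers, so they are exact in double precision; the only "error" being tracked is the deliberate perturbation E.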
Duncan C
« Reply #7 on: April 06, 2007, 06:27:19 PM »

If I compare the distance between the last iteration's position with the c-value, what sum are you referring to?

something like this:

Code:
double total_len = 0.0;
for (i = 0; i < max_iter; ++i)
{
  // z <- z^2 + c
  const double zr = z.re*z.re - z.im*z.im + c.re;
  const double zi = 2.0*z.re*z.im + c.im;
  z.re = zr;
  z.im = zi;

  const double dx = z.re - c.re;
  const double dy = z.im - c.im;

  total_len += sqrt(dx*dx + dy*dy);
}

// scale total_len to some (mostly) displayable range and plot that to the debug image

The code sample helps clarify things. You DO mean to sum the C to Z difference for all iterations.



I think you're right that the square value would be hard to compare. I might take the square root of the distance measurements and just let the debug plot run overnight. I wonder how much error there is in the square root function in my math library, at very low

at double precision, even with something like 10k terms in the sum (iteration) the total should be quite accurate, and definitely accurate enough for visual display (remember that we can only show 255 intensity levels!).

My app uses as many colors as there are iteration values. I don't limit myself to 255 colors. I support up to 64k iterations, and 64k unique colors (I store a 16 bit iteration value.) That way you can use any arbitrary coloring that creates a clear plot.

as for "overnight", damn dude, how slow is this app? for one thing, we don't need a poster resolution image, just some sort of comparison... 512x512 is more than sufficient, which you should be able to render in realtime i guess? a few seconds at most!

You're right that a small plot should be enough to get an idea of what's going on. And I'd only be taking a square root once for each pixel, after doing all the iterations.


By the way, for the plots where the Julia image breaks down, I use a very large max iteration value. (60,000 iterations in the example.)

ah well, there's something i haven't considered. that's one hell of a chaotic system, that it can orbit for 60k iterations without ever having radius > 2!
That's why they call this stuff chaotic!


Duncan C

Regards,

Duncan C
lycium
« Reply #8 on: April 06, 2007, 06:29:26 PM »

<snip lots of not-so-relevant-1d-stuff>

So as you can see, the error caused by converting screen coordinates to the complex plane accumulates faster in the Julia set, because the initial step squares the error.  And since that error is going to vary from pixel to pixel, you get that weird blockiness.

the error introduced by moving from pixel co-ords to the complex plane is once-off, it's never "accumulated". what's more, the procedure for going from screen space to the complex plane is the same for both mandel and julia, so there's no distinction to be made there. as i suggested in my initial post, i think the error comes from the different iteration dynamics (orbit) in the complex plane, and conjectured that this may be because of differing magnitudes in the z- and c-values since ieee fp has to convert to a common exponent when adding or subtracting, which chops off a lot of bits if the operands have significantly different magnitudes. that's why it's important to do the full complex number simulation instead of something 1d, you'd like to actually measure what you're interested in ;)

It has nothing to do with how you calculate the color or anything like that - the problem is that there is going to be error when you convert from screen pixel coordinates to your complex number.

i don't think anyone suspected colour computation as the source of the blockiness.

edit: reply to duncan...

My app uses as many colors as there are iteration values. I don't limit myself to 255 colors. I support up to 64k iterations, and 64k unique colors (I store a 16 bit iteration value.) That way you can use any arbirary coloring that creates a clear plot

my point about 256 distinct intensity levels still stands (and it's linearly less once you take gamma correction into account).

You're right that a small plot should be enough to get an idea of what's going on. And I'd only be taking a square root once for each pixel, after doing all the iterations.

no, you need to do it each time. sqrt(a + b) != sqrt(a) + sqrt(b) ;)
« Last Edit: April 06, 2007, 06:32:33 PM by lycium »

gandreas
« Reply #9 on: April 06, 2007, 06:56:15 PM »

<snip lots of not-so-relevant-1d-stuff>

So as you can see, the error caused by converting screen coordinates to the complex plane accumulates faster in the Julia set, because the initial step squares the error.  And since that error is going to vary from pixel to pixel, you get that weird blockiness.

the error introduced by moving from pixel co-ords to the complex plane is once-off, it's never "accumulated". what's more, the procedure for going from screen space to the complex plane is the same for both mandel and julia, so there's no distinction to be made there. as i suggested in my initial post, i think the error comes from the different iteration dynamics (orbit) in the complex plane, and conjectured that this may be because of differing magnitudes in the z- and c-values since ieee fp has to convert to a common exponent when adding or subtracting, which chops off a lot of bits if the operands have significantly different magnitudes. that's why it's important to do the full complex number simulation instead of something 1d, you'd like to actually measure what you're interested in ;)

Except,  of course, 1d is a proper subset of complex numbers (and I picked it as something that could be easily demonstrated), so my example is actually completely valid (though a terribly boring image).  More importantly, however, the errors aren't "one off" - once error is introduced into a system, it only grows - that's basic error analysis (this is especially true in a chaotic system where even minute errors cause radical changes).

The point is that both J and M do identical calculations, the only real difference is where the error in the initial values is. Any additional errors in calculations are comparable between the two (i.e., anything caused by the various addition and multiplication).  And claiming that there are differences in magnitudes of the numbers isn't valid either, because numbers that are too large are treated as "escaped" so the numbers remain in similar ranges.  It's just that the error in J is squared at the first step, while M is added.

lycium
« Reply #10 on: April 06, 2007, 07:05:34 PM »

Except,  of course, 1d is a proper subset of complex numbers (and I picked it as something that could be easily demonstrated), so my example is actually completely valid (though a terribly boring image).  More importantly, however, the errors aren't "one off" - once error is introduced into a system, it only grows - that's basic error analysis (this is especially true in a chaotic system where even minute errors cause radical changes).

yes it's a proper subset, but you still have to do the same dynamics, which means using complex numbers. otherwise you're studying something different, that much is clear no? you can't properly study wind turbulence by farting into a jar, even if that's a "proper subset" of wind's behaviour. in this instance it's trivial to get the "big picture", directly from the phenomenon in question, no need to resort to measuring something else, so why change the issue?

also, the error introduced by the screen->complex plane IS once off, because you only do it once! that it grows is obvious, no one is disputing that, but the fact that it only happens once is really, really, really clear. the roundoff introduced at every iteration IS cumulative, because it happens many times, and that's where (i hypothesise) the main source of error is. compare 60k roundoffs (using very different numbers) to that introduced by a single operation, and tell me which is more likely to be responsible for error...

The point is that both J and M do identical calculations, the only real difference is where the error in the initial values is.

that's completely untrue, the value of the c-constant (and the resulting different orbit) is what makes the radically different images! you don't really expect the two to have similar numerical properties if they have wildly different orbits, do you?

Any additional errors in calculations are comparable between the two (i.e., anything caused by the various addition and multiplication).  And claiming that there are differences in magnitudes of the numbers isn't valid either

it was a hypothesis, and until there's evidence one way or the other your saying "isn't valid" isn't valid :P

edit: actually, there is a plain error in your statement. try adding 10000000 + 0.000001 - 10000000 100k times (probably you'll get 0) and compare the answer to adding just 0.000001 100k times. you say the computations are "comparable", but fail to mention that that's only true in an algebraic sense - yes yes we square and add in both cases, but we both know that numerically the actual order of operations matters a great deal, and in general fp arithmetic does not follow the laws of algebra; not only that, but the actual numbers in both cases are probably completely different.
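lycium's absorption example is easiest to demonstrate at single precision, where the effect is total (at double precision, 1e-6 against 1e7 is still partially representable). A small sketch, with illustrative helper names:

```c
/* At single precision, 0.000001 is far below the rounding step of
   numbers near 10000000 (the float ulp there is 1.0), so the addition
   is absorbed completely and the small term is lost. */
static float absorbed(void)
{
    float big  = 10000000.0f;
    float tiny = 0.000001f;
    float sum  = big + tiny;   /* rounds back to 10000000.0f */
    return sum - big;          /* 0.0f: the contribution vanished */
}

/* Summing the small term on its own keeps it (up to ordinary roundoff). */
static float kept(int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += 0.000001f;
    return sum;                /* roughly n * 1e-6 */
}
```

This is exactly the "different order of the same algebraic operations gives different numerical answers" point: `absorbed()` and `kept(1)` compute the same quantity algebraically, but only one of them survives the rounding.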

that the relative sizes of the operands involved in addition and subtraction affect the precision of the result is a FACT[1], not established by me and studied worldwide. this is why i suspected that the cumulative distance between the z- and c-values for an orbit is greater for julia iteration than mandelbrot iteration, and suggested that it be measured. should those numbers not be significantly different, i'm more than happy to throw out my 2-second hypothesis, but we shouldn't jump the gun with explanations until we have the data to explain...

it would be interesting to do the iterations in double precision as a reference, then compute the length-sum as an error estimate, then do the iterations again in single precision to get a real error term. if there is a pixelwise correlation between the length-sum and the double-to-single precision error, i'd say that constitutes strong evidence in favour of my hypothesis. how will you test the claims you made, or even just prove that the claims i made are invalid?

[1] http://docs.sun.com/source/806-3568/ncg_goldberg.html#680
« Last Edit: April 06, 2007, 07:51:56 PM by lycium »

lycium
« Reply #11 on: April 06, 2007, 08:49:53 PM »

i just realised that there's a better metric for testing my hypothesis; will code it up after some (very rare) tv watching ;)

gandreas
« Reply #12 on: April 06, 2007, 08:56:50 PM »

You're getting caught up in the implementation details and failing to notice the basics here.

Given two numbers Z0 and C and their "error" Z0E = Z0 +/- E, CE = C +/- E (where Z0 and C are comparable numbers, so that Z0E and CE have similar uncertainties), the result of:

JZ1 = Z0E * Z0E + C

is going to have a higher degree of uncertainty (in general*) than:

MZ1 = Z0 * Z0 + CE

(*one can come up with specific cases where is not true, but error analysis is much like "big O" analysis - there's a lot of things that appear non-obvious at first on these things)
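gandreas's claim can be restated as first-order error propagation: for z1 = z0^2 + c, a perturbation e in c shifts z1 by exactly e, while a perturbation e in z0 shifts z1 by 2*z0*e + e^2, i.e. amplified by roughly 2|z0|. A tiny 1D check (helper names are illustrative only):

```c
/* z1 = z0^2 + c.  Perturbing c by e moves z1 by exactly e;
   perturbing z0 by e moves z1 by 2*z0*e + e^2 -- amplified whenever the
   orbit sits at magnitude above 1/2, which is why the Julia-style seed
   error generally grows faster. */
static double shift_from_c_error(double z0, double e)
{
    return ((z0*z0) + e) - (z0*z0);    /* == e */
}

static double shift_from_z_error(double z0, double e)
{
    return (z0 + e)*(z0 + e) - z0*z0;  /* == 2*z0*e + e^2 */
}
```

With z0 = 10 and e = 1 this reproduces the earlier worked numbers: a shift of 1 for the Mandelbrot-style error versus 21 for the Julia-style one.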

Arguments about mixing precision (adding 1E9 + 1e-6 and the like) aren't applicable to this example because the problem occurs in standard escape time algorithm rendering of both the M and J set, and when numbers get large like that, the algorithm is terminated.

The original poster wanted to know why the Julia set broke down under magnification sooner than the Mandelbrot set did, given that the same basic routine calculates both.  And this holds true not just for something like Zn+1 = Zn * Zn + C, but for a wide range of formulas (there are also formulas where the exact opposite is true, and some, such as (Z + C) / (Z - C), will break down at roughly the same point).

lycium
« Reply #13 on: April 06, 2007, 11:12:08 PM »

You're getting caught up in the implementation details and failing to notice the basics here.

specifics yes, implementation details no; i was very clearly getting at basics (60k roundoffs versus one, relative magnitudes affecting precision, mandel and julia iterations are numerically different) too, but i'm not going to put more convincing behind my conjecture because it's only that, and you're a lot more adamant than i am ;)

anyway, i look forward to a definite answer to this. i'll probably still write that app, but only to put forth evidence.

lycium
« Reply #14 on: April 07, 2007, 01:31:35 AM »

M has z0 = constant, and c = pixel +/- E.  J has z0 = pixel +/- E, and c = fixed.

btw, it's important to be clear about what error you're measuring. in that example, you're seeing how the error of the initial pixel->complex plane computation (and only that error!) grows. an equally valid view is to take c as being exact in value*, but not as the value you'd algebraically get. the rationale behind this is that we don't really care how much that first error grows (which we can't exactly say anyway except for broad big-oh generalisations which don't take the differing iterations into account), but rather how the subsequent 60k iterations behave given the differing c-value. i'm sure you'll agree, that's a much more difficult problem, and to me that's really the heart of the matter, not how some error in a constant value behaves under iteration.


* why is this valid? instead of one arbitrary constant, x, you have two - x+y. but since they're arbitrary, we might as well take x+y (where y in this case is the roundoff error) to be w, and call that exact.
