Welcome to Fractal Forums

Fractal Software => Programming => Topic started by: Duncan C on April 05, 2007, 06:49:53 PM




Title: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: Duncan C on April 05, 2007, 06:49:53 PM
The program I'm developing, FractalWorks, so far generates "conventional" Mandelbrot and Julia set plots using complex numbers. It uses double precision (64 bit) floating point math to do its calculations.

For Mandelbrot sets, I can get down to a plot with (max_real - min_real) of around 4e-14 before floating point errors start to degrade the resulting images.

For Julia sets, though, my plots fall apart at a much lower magnification. Here is a sample plot at about 2.5e-7 that is very badly distorted from floating point errors.

The same routines calculate both Mandelbrot and Julia sets. For Mandelbrot plots the routine uses zero for the initial value of z and the point coordinates for c, and for Julia sets it uses the Julia origin point as the value of c and the point coordinates for the initial value of z.

{in the iterative equation z(n+1) = z(n)^2 + c}
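A minimal sketch of such a shared routine (names and structure are illustrative, not FractalWorks' actual code):

```c
typedef struct { double re, im; } complex_t;

/* Shared escape-time iteration: for a Mandelbrot pixel pass z0 = 0 and
   c = the pixel's coordinates; for a Julia pixel pass z0 = the pixel's
   coordinates and c = the (fixed) Julia origin point. */
static int iterate(complex_t z, complex_t c, int max_iter)
{
    for (int i = 0; i < max_iter; ++i) {
        /* bail out once |z| > 2; compare squared magnitudes to avoid sqrt */
        if (z.re * z.re + z.im * z.im > 4.0)
            return i;

        /* z <- z^2 + c */
        const double re = z.re * z.re - z.im * z.im + c.re;
        const double im = 2.0 * z.re * z.im + c.im;
        z.re = re;
        z.im = im;
    }
    return max_iter; /* never escaped: presumed inside the set */
}
```

The only difference between the two fractal types is which argument carries the pixel coordinate.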

I don't understand why this is. Can somebody enlighten me?

Here is a sample Julia plot that shows the distortion I'm talking about. Click on the image to see a full sized image including the plot coordinates:
 http://www.pbase.com/image/76699423/original.jpg
 http://www.pbase.com/image/76699423/original

Duncan C







Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: lycium on April 05, 2007, 08:30:11 PM
interesting, i had no idea that the two iterations had such differing numerical properties! this is of course surprising because they use the exact same formula (the c for the julia-iteration is just the starting point for a mandelplot).

the only thing i can think of that might be responsible for such different behaviour is the dynamics of the iteration (ie. the orbit of points z_n); a way to verify this would be to sum the distance (squared should be fine) between subsequent iterations and the c-value, since floating point computations are sensitive to the relative magnitudes of their operands (this is especially true of addition and subtraction). since the mandelbrot's c-value is usually closer to the orbit those differences could well be smaller, leading to less cumulative roundoff.


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: Duncan C on April 06, 2007, 03:29:38 PM
interesting, i had no idea that the two iterations had such differing numerical properties! this is of course surprising because they use the exact same formula (the c for the julia-iteration is just the starting point for a mandelplot).

the only thing i can think of that might be responsible for such different behaviour is the dynamics of the iteration (ie. the orbit of points z_n); a way to verify this would be to sum the distance (squared should be fine) between subsequent iterations and the c-value, since floating point computations are sensitive to the relative magnitudes of their operands (this is especially true of addition and subtraction). since the mandelbrot's c-value is usually closer to the orbit those differences could well be smaller, leading to less cumulative roundoff.

lycium,

You suggest keeping a running total of the cumulative (square) difference in distance between an iteration's value and its initial c value, for each point on a Julia graph and the corresponding point on a Mandelbrot graph? If your theory holds true, I gather you'd expect the differences to be much larger for the Julia set.

I'm trying to think of a meaningful way to view the results. At the simplest I could average the distance value for all points on both a Mandelbrot and Julia plot, but I'm not sure that would give a meaningful result. I could also compare corresponding total difference values between points on a Julia and Mandelbrot plot, and color them with one gradient when the Julia distance is larger, and a contrasting gradient when the Mandelbrot distance is greater. The problem with that is that there's not really a meaningful correspondence between individual points on a Mandelbrot and Julia plot.

Hmm... My brain isn't working very well this morning. :-\ I'll have to drink more coffee and think about this further.


Duncan C
 


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: lycium on April 06, 2007, 04:24:39 PM
You suggest keeping a running total of the cumulative (square) difference in distance between an iteration's value and its initial c value, for each point on a Julia graph and the corresponding point on a Mandelbrot graph? If your theory holds true, I gather you'd expect the differences to be much larger for the Julia set.

hmm, needs a bit of clarification i think (i also wrote my suggestion woefully undercaffeinated): basically what we'll do is render a "debug image", which just visualises some pixelwise info. so you do your zoom where the julia breaks up and the mandel doesn't, then you hit debug vis. for each pixel, it computes the sum of the (sqr) distance between the last iteration pos and the c-value, over all iterations. so for each pixel you'll get a distance-sum, which you visualise however is best (mapping to a colour, just a linear rescale and clamp, whatever).

thinking about this a little further, it's not quite right to use the squared distance, since that's not a linear metric (which is desirable for comparisons).

debug images are always interesting, hope you'll post yours here :)


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: Duncan C on April 06, 2007, 05:42:25 PM
[snip]
...so you do your zoom where the julia breaks up and the mandel doesn't, then you hit debug vis. for each pixel, it computes the sum of the (sqr) distance between the last iteration pos and the c-value, over all iterations. so for each pixel you'll get a distance-sum, which you visualise however is best (mapping to a colour, just a linear rescale and clamp, whatever).

Question on terms: You say "..computes the sum of the square distance between the last iteration pos and the c-value."

If I compare the distance between the last iteration's position with the c-value, what sum are you referring to?

I think you're right that the square value would be hard to compare. I might take the square root of the distance measurements and just let the debug plot run overnight. I wonder how much error there is in the square root function in my math library, at very low

By the way, for the plots where the Julia image breaks down, I use a very large max iteration value. (60,000 iterations in the example.)

I wonder if the breakdown is due to magnification or high iteration count. I find it easier to find Julia sets (vs Mandelbrot plots) that have interesting detail at high iteration counts but fairly low magnification.  It might be a function of the topology of the different sets.



Duncan C


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: lycium on April 06, 2007, 06:00:05 PM
[snip]
...so you do your zoom where the julia breaks up and the mandel doesn't, then you hit debug vis. for each pixel, it computes the sum of the (sqr) distance between the last iteration pos and the c-value, over all iterations. so for each pixel you'll get a distance-sum, which you visualise however is best (mapping to a colour, just a linear rescale and clamp, whatever).

Question on terms: You say "..computes the sum of the square distance between the last iteration pos and the c-value."

If I compare the distance between the last iteration's position with the c-value, what sum are you referring to?

something like this:

Code:
double total_len = 0.0;
for (int i = 0; i < max_iter; ++i)
{
  // z <- z^2 + c
  const double new_re = z.re*z.re - z.im*z.im + c.re;
  const double new_im = 2.0*z.re*z.im + c.im;
  z.re = new_re;
  z.im = new_im;

  const double dx = z.re - c.re;
  const double dy = z.im - c.im;

  total_len += sqrt(dx*dx + dy*dy);
}

// scale total_len to some (mostly) displayable range and plot that to the debug image
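The rescale-and-clamp step mentioned in that comment could be as simple as this (a sketch; the scale factor is an arbitrary knob to be tuned per image):

```c
/* Map a distance-sum to a displayable 0..255 grey level by linear
   rescale and clamp. */
static unsigned char to_grey(double total_len, double scale)
{
    double v = total_len * scale;
    if (v < 0.0)   v = 0.0;
    if (v > 255.0) v = 255.0;
    return (unsigned char)v;
}
```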

I think you're right that the square value would be hard to compare. I might take the square root of the distance measurements and just let the debug plot run overnight. I wonder how much error there is in the square root function in my math library, at very low

at double precision, even with something like 10k terms in the sum (iteration) the total should be quite accurate, and definitely accurate enough for visual display (remember that we can only show 256 intensity levels!).

as for "overnight", damn dude, how slow is this app? for one thing, we don't need a poster resolution image, just some sort of comparison... 512x512 is more than sufficient, which you should be able to render in realtime i guess? a few seconds at most!

By the way, for the plots where the Julia image breaks down, I use a very large max iteration value. (60,000 iterations in the example.)

ah well, there's something i haven't considered. that's one hell of a chaotic system, that it can orbit for 60k iterations without ever having radius > 2!

I wonder if the breakdown is due to magnification or high iteration count. I find it easier to find Julia sets (vs Mandelbrot plots) that have interesting detail at high iteration counts but fairly low magnification.  It might be a function of the topology of the different sets.

i'm not sure what you mean by "function of the topology", but definitely 60k iterations of anything numerical will have significant drift.

btw, at double precision you're pushing common computers pretty much to their limit (if you consider speed as well as precision, since there's a dramatic speed dropoff after double precision). i know i say it too much, but there are other very good looking fractals (and types of fractals) that are far less numerically demanding...


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: gandreas on April 06, 2007, 06:15:46 PM
OK, think of it this way.

Each iteration takes two starting values - the coordinate, and a constant fixed value.  Obviously, the fixed value is just that - fixed.  The coordinate, however, is going to vary on the screen - what's more, the value you use as a coordinate is not going to exactly represent the location on the screen (there are going to be slight rounding errors going from screen to the zoomed coordinate space - we'll call that "E" since a nice unicode epsilon won't show up here).

M has z0 = constant, and c = pixel +/- E.  J has z0 = pixel +/- E, and c = fixed.

Now at each step, you take z^2 + c.

We'll also pick outrageously large, simple numbers (and let's make them 1D as well).  Z0 = 10, C = 10.

If E = 1, then M(Z0) = 11, M(C) = 10, and J(Z0) = 10, J(C) = 11

Then at one step we've got M(z1) = 10 * 10 + 11 = 111 (off by 1)
J(z1) = 11 * 11 + 10 = 131 (off by 21)
If E = 0, then M(z1) = J(z1) = 110.  As you can see the Julia set is already more "off"

Another step
E=0, z2 = 12110
M(z2) = 111 * 111 + 11 = 12332 (off by 222)
J(z2) = (131) * (131) + 10 = 17171 (off by 5061)

Even if the error were smaller (E=0.1) we get:
E0(z1) = 110, M(z1) = 110.1, J(z1) = 112.01 (.1 vs 2.01)
E0(z2) = 12110, M(z2) = 12132.11, J(z2) = 12556.24 (22.11 vs 446.24)

So as you can see, the error caused by converting screen coordinates to complex coordinates accumulates faster in the Julia set, because the initial step squares the error.  And since that error is going to vary from pixel to pixel, you get that weird blockiness.

It has nothing to do with how you calculate the color or anything like that - the problem is that there is going to be error when you convert from screen pixel coordinates to your complex number.
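The 1-D arithmetic above can be checked mechanically; a tiny helper (illustrative, not part of any renderer) reproduces the figures:

```c
/* One step of the 1-D analogue z -> z*z + c. Injecting the error E into c
   models the Mandelbrot case; injecting it into z0 models the Julia case. */
static double step(double z, double c)
{
    return z * z + c;
}
```

With E = 1: step(10, 11) gives M(z1) = 111 and step(11, 10) gives J(z1) = 131; one more step gives 12332 and 17171, matching the numbers above.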


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: Duncan C on April 06, 2007, 06:27:19 PM
If I compare the distance between the last iteration's position with the c-value, what sum are you referring to?

something like this:

Code:
double total_len = 0.0;
for (int i = 0; i < max_iter; ++i)
{
  // z <- z^2 + c
  const double new_re = z.re*z.re - z.im*z.im + c.re;
  const double new_im = 2.0*z.re*z.im + c.im;
  z.re = new_re;
  z.im = new_im;

  const double dx = z.re - c.re;
  const double dy = z.im - c.im;

  total_len += sqrt(dx*dx + dy*dy);
}

// scale total_len to some (mostly) displayable range and plot that to the debug image

The code sample helps clarify things. You DO mean to sum the C to Z difference for all iterations.



I think you're right that the square value would be hard to compare. I might take the square root of the distance measurements and just let the debug plot run overnight. I wonder how much error there is in the square root function in my math library, at very low

at double precision, even with something like 10k terms in the sum (iteration) the total should be quite accurate, and definitely accurate enough for visual display (remember that we can only show 256 intensity levels!).

My app uses as many colors as there are iteration values. I don't limit myself to 255 colors. I support up to 64k iterations, and 64k unique colors (I store a 16 bit iteration value.) That way you can use any arbitrary coloring that creates a clear plot.

as for "overnight", damn dude, how slow is this app? for one thing, we don't need a poster resolution image, just some sort of comparison... 512x512 is more than sufficient, which you should be able to render in realtime i guess? a few seconds at most!

You're right that a small plot should be enough to get an idea of what's going on. And I'd only be taking a square root once for each pixel, after doing all the iterations.


By the way, for the plots where the Julia image breaks down, I use a very large max iteration value. (60,000 iterations in the example.)

ah well, there's something i haven't considered. that's one hell of a chaotic system, that it can orbit for 60k iterations without ever having radius > 2!
That's why they call this stuff chaotic!


Duncan C


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: lycium on April 06, 2007, 06:29:26 PM
<snip lots of not-so-relevant-1d-stuff>

So as you can see, the error caused by converting screen coordinates to complex coordinates accumulates faster in the Julia set, because the initial step squares the error.  And since that error is going to vary from pixel to pixel, you get that weird blockiness.

the error introduced by moving from pixel co-ords to the complex plane is once-off, it's never "accumulated". what's more, the procedure for going from screen space to the complex plane is the same for both mandel and julia, so there's no distinction to be made there. as i suggested in my initial post, i think the error comes from the different iteration dynamics (orbit) in the complex plane, and conjectured that this may be because of differing magnitudes in the z- and c-values since ieee fp has to convert to a common base when adding or subtracting, which chops off a lot of bits if the operands have significantly different magnitudes. that's why it's important to do the full complex number simulation instead of something 1d, you'd like to actually measure what you're interested in ;)

It has nothing to do with how you calculate the color or anything like that - the problem is that there is going to be error when you convert from screen pixel coordinates to your complex number.

i don't think anyone suspected colour computation as the source of the blockiness.

edit: reply to duncan...

My app uses as many colors as there are iteration values. I don't limit myself to 255 colors. I support up to 64k iterations, and 64k unique colors (I store a 16 bit iteration value.) That way you can use any arbitrary coloring that creates a clear plot.

my point about 256 distinct intensity levels still stands (and it's linearly less once you take gamma correction into account).

You're right that a small plot should be enough to get an idea of what's going on. And I'd only be taking a square root once for each pixel, after doing all the iterations.

no, you need to do it each time. sqrt(a + b) != sqrt(a) + sqrt(b) ;)


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: gandreas on April 06, 2007, 06:56:15 PM
<snip lots of not-so-relevant-1d-stuff>

So as you can see, the error caused by converting screen coordinates to complex coordinates accumulates faster in the Julia set, because the initial step squares the error.  And since that error is going to vary from pixel to pixel, you get that weird blockiness.

the error introduced by moving from pixel co-ords to the complex plane is once-off, it's never "accumulated". what's more, the procedure for going from screen space to the complex plane is the same for both mandel and julia, so there's no distinction to be made there. as i suggested in my initial post, i think the error comes from the different iteration dynamics (orbit) in the complex plane, and conjectured that this may be because of differing magnitudes in the z- and c-values since ieee fp has to convert to a common base when adding or subtracting, which chops off a lot of bits if the operands have significantly different magnitudes. that's why it's important to do the full complex number simulation instead of something 1d, you'd like to actually measure what you're interested in ;)

Except,  of course, 1d is a proper subset of complex numbers (and I picked it as something that could be easily demonstrated), so my example is actually completely valid (though a terribly boring image).  More importantly, however, the errors aren't "one off" - once error is introduced into a system, it only grows - that's basic error analysis (this is especially true in a chaotic system where even minute errors cause radical changes).

The point is that both J and M do identical calculations; the only real difference is where the error in the initial values is. Any additional errors in calculations are comparable between the two (i.e., anything caused by the various additions and multiplications).  And claiming that there are differences in magnitudes of the numbers isn't valid either, because numbers that are too large are treated as "escaped", so the numbers remain in similar ranges.  It's just that the error in J is squared at the first step, while in M it is added.



Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: lycium on April 06, 2007, 07:05:34 PM
Except,  of course, 1d is a proper subset of complex numbers (and I picked it as something that could be easily demonstrated), so my example is actually completely valid (though a terribly boring image).  More importantly, however, the errors aren't "one off" - once error is introduced into a system, it only grows - that's basic error analysis (this is especially true in a chaotic system where even minute errors cause radical changes).

yes it's a proper subset, but you still have to do the same dynamics, which means using complex numbers. otherwise you're studying something different, that much is clear no? you can't properly study wind turbulence by farting into a jar, even if that's a "proper subset" of wind's behaviour. in this instance it's trivial to get the "big picture", directly from the phenomenon in question, no need to resort to measuring something else, so why change the issue?

also, the error introduced by the screen->complex plane IS once off, because you only do it once! that it grows is obvious, no one is disputing that, but the fact that it only happens once is really, really, really clear. the roundoff introduced at every iteration IS cumulative, because it happens many times, and that's where (i hypothesise) the main source of error is. compare 60k roundoffs (using very different numbers) to that introduced by a single operation, and tell me which is more likely to be responsible for error...

The point is that both J and M do identical calculations; the only real difference is where the error in the initial values is.

that's completely untrue, the value of the c-constant (and the resulting different orbit) is what makes the radically different images! you don't really expect the two to have similar numerical properties if they have wildly different orbits, do you?

Any additional errors in calculations are comparable between the two (i.e., anything caused by the various addition and multiplication).  And claiming that there are differences in magnitudes of the numbers isn't valid either

it was a hypothesis, and until there's evidence one way or the other your saying "isn't valid" isn't valid :P

edit: actually, there is a plain error in your statement. try adding 10000000 + 0.000001 - 10000000 100k times (probably you'll get 0) and compare the answer to adding just 0.000001 100k times. you say the computations are "comparable", but fail to mention that that's only true in an algebraic sense - yes yes we square and add in both cases, but we both know that numerically the actual order of operations matters a great deal, and in general fp arithmetic does not follow the laws of algebra; not only that, but the actual numbers in both cases are probably completely different.
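that 10000000 + 0.000001 example is easy to reproduce; in single precision the small term is absorbed entirely, so the repeated add-and-subtract yields exactly zero (a sketch):

```c
/* Repeatedly compute (big + small - big) in single precision versus
   accumulating the small term alone in double precision. In float,
   10000000.0f + 0.000001f rounds back to 10000000.0f, so each term is 0. */
static void absorption_demo(float *absorbed, double *separate)
{
    const float big = 10000000.0f;
    const float small = 0.000001f;

    *absorbed = 0.0f;
    *separate = 0.0;
    for (int i = 0; i < 100000; ++i) {
        float t = big + small;     /* rounds to 10000000.0f exactly */
        *absorbed += t - big;      /* adds 0 every time */
        *separate += 0.000001;     /* accumulates to about 0.1 */
    }
}
```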

that the relative sizes of the operands involved in addition and subtraction affect the precision of the result is a FACT[1], not established by me and studied worldwide. this is why i suspected that the cumulative distance between the z- and c-values for an orbit is greater for julia iteration than mandelbrot iteration, and suggested that it be measured. should those numbers not be significantly different, i'm more than happy to throw out my 2-second hypothesis, but we shouldn't jump the gun with explanations until we have the data to explain...

it would be interesting to do the iterations in double precision as a reference, then compute the length-sum as an error estimate, then do the iterations again in single precision to get a real error term. if there is a pixelwise correlation between the length-sum and the double-to-single precision error, i'd say that constitutes strong evidence in favour of my hypothesis. how will you test the claims you made, or even just prove that the claims i made are invalid?

[1] http://docs.sun.com/source/806-3568/ncg_goldberg.html#680


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: lycium on April 06, 2007, 08:49:53 PM
i just realised that there's a better metric for testing my hypothesis; will code it up after some (very rare) tv watching ;)


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: gandreas on April 06, 2007, 08:56:50 PM
You're getting caught up in the implementation details and failing to notice the basics here.

Given two numbers Z0 and C and their "error" Z0E = Z0 +/- E, CE = C +/- E (where Z0 and C are comparable numbers, so that Z0E and CE have similar uncertainties), the result of:

JZ1 = Z0E * Z0E + C

is going to have a higher degree of uncertainty (in general*) than:

MZ1 = Z0 * Z0 + CE

(*one can come up with specific cases where this is not true, but error analysis is much like "big O" analysis - there are a lot of things that appear non-obvious at first)

Arguments about mixing precision (adding 1E9 + 1e-6 and the like) aren't applicable to this example because the problem occurs in standard escape time algorithm rendering of both the M and J set, and when numbers get large like that, the algorithm is terminated.

The original poster wanted to know why the Julia set broke down under magnification sooner than the Mandelbrot set did when the same routine is used to calculate both.  And this holds true not just for something like Zn+1 = Zn * Zn + C, but for a wide range of formulas (there are also formulas where the exact opposite is true, and some, such as (Z + C) / (Z - C), will break down at roughly the same point).



Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: lycium on April 06, 2007, 11:12:08 PM
You're getting caught up in the implementation details and failing to notice the basics here.

specifics yes, implementation details no; i was very clearly getting at basics (60k roundoffs versus one, relative magnitudes affecting precision, mandel and julia iterations are numerically different) too, but i'm not going to put more convincing behind my conjecture because it's only that, and you're a lot more adamant than i am ;)

anyway, i look forward to a definite answer to this. i'll probably still write that app, but only to put forth evidence.


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: lycium on April 07, 2007, 01:31:35 AM
M has z0 = constant, and c = pixel +/- E.  J has z0 = pixel +/- E, and c = fixed.

btw, it's important to be clear about what error you're measuring. in that example, you're seeing how the error of the initial pixel->complex plane computation (and only that error!) grows. an equally valid view is to take c as being exact in value*, but not as the value you'd algebraically get. the rationale behind this is that we don't really care how much that first error grows (which we can't exactly say anyway except for broad big-oh generalisations which don't take the differing iterations into account), but rather how the subsequent 60k iterations behave given the differing c-value. i'm sure you'll agree, that's a much more difficult problem, and to me that's really the heart of the matter, not how some error in a constant value behaves under iteration.


* why is this valid? instead of one arbitrary constant, x, you have two - x+y. but since they're arbitrary, we might as well take x+y (where y in this case is the roundoff error) to be w, and call that exact.


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: gandreas on April 07, 2007, 05:28:37 PM
M has z0 = constant, and c = pixel +/- E.  J has z0 = pixel +/- E, and c = fixed.

btw, it's important to be clear about what error you're measuring. in that example, you're seeing how the error of the initial pixel->complex plane computation (and only that error!) grows.
Exactly - that's the error that causes this.
Quote
an equally valid view is to take c as being exact in value*, but not as the value you'd algebraically get.
Yes and no.  C is an exact value in J, but not in M.  In J you type in a value, and yes, you'll get something slightly different for most cases, but that error will be constant in the entire image.

In the pixel->plane conversion, that initial error will be different at different pixels for J, which will distort the final image (and for M, Z0 will be exact, and the different error values will be introduced in smaller amounts, since uncertainty(a + b) < uncertainty(a * b) where uncertainty(a) ~= uncertainty(b))  The fact that error is different at different pixels helps to explain why the "fall apart" looks like it does.


And you don't need 60K worth of calculations to see this - switching to single precision you can see it with 100 iterations and the right amount of zoom with the right C values.

Here's two easy experiments to get a feel for it.

In your favorite renderer which uses double precision, take the lines that say something to the effect of:

Code:
double zreal = screenX / magnification + centerX;
double zimag = screenY / magnification + centerY;
and change them to:

Code:
double zreal = float(screenX / magnification + centerX);
double zimag = float(screenY / magnification + centerY);
which will convert that specific calculation to single precision (and leave everything else as it is).  This results in increasing the uncertainty of z0[j] and c[m].  You'll see these errors occur at a much lower magnification.

You can also play with different formulas - if you switch from z * z + c to z * z + c * c * c, you should see M fall apart slightly sooner on the average than J.


The truly sad part is that due to the very chaotic nature of the fractal, this early error is wiping out all the precision that you've got through the rest of the calculation.

One idea (and this would work well for basic z * z + c style fractals) is to use a pair of variables and treat them as rational numbers - that would all but completely remove that initial error, though at the cost of roughly 2x-3x speed.  Proper use of C++ templates might even leave the code readable...
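One reading of that rational-number idea, with illustrative names (a sketch of the general approach, not gandreas's actual design): hold the coordinate as an exact integer numerator over an integer denominator, so the screen-to-plane conversion itself introduces no roundoff at all.

```c
/* A coordinate held exactly as num/den. If the magnification and the
   centre offset are integers (the centre expressed in units of
   1/magnification), then screenX / magnification + centerX is exact. */
typedef struct { long long num, den; } ratio_t;

static ratio_t pixel_coord(long long screenX,
                           long long centerX_num,  /* centre in units of 1/magnification */
                           long long magnification)
{
    ratio_t r;
    r.num = screenX + centerX_num;  /* exact integer arithmetic, no initial E */
    r.den = magnification;
    return r;
}

/* Fall back to double only when a value is finally needed for iteration
   or display; this is where the first (and only) conversion error enters. */
static double ratio_to_double(ratio_t r)
{
    return (double)r.num / (double)r.den;
}
```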


Title: The issue seems to be zooming in on 0,0
Post by: Duncan C on April 09, 2007, 12:44:02 AM
Dennis De Mars, author of "Fractal Domains", offered the following in an email to me:
------
I believe you are seeing the results of a loss of precision that can result when you have near-cancellation in an intermediate term. This is the same sort of thing that can cause "ill-conditioned" equation systems to be difficult to solve numerically.

For instance, if in the course of calculation the orbit comes very near zero, you probably had a situation where near cancellation occurred in the previous iteration. For instance, in the real part suppose you had 0.123456123456 - 0.123456000000 = 0.000000123456. In this case, the coordinates were known to 12 significant figures in the previous iteration but the new result is known only to 6 significant figures in the result. I believe that when you zoom in on the origin in the Julia case, you are picking points that are all the result of this kind of near cancellation, so you see the precision break down quicker in that region than in other regions.
------
That makes perfect sense to me, and seems like the best explanation. Zooms that are well off of the origin do not break down as quickly.

We don't zoom in on 0,0 on the Mandelbrot set because it's pure black. Unless you're doing some sort of orbit plot, it's as boring as can be. Julia sets, on the other hand (or at least their neighborhoods) can be visually fascinating.


Duncan C



Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: keldor314 on July 02, 2008, 08:47:03 AM
Actually, that's incorrect - floating point numbers use a form of scientific notation to store their values - i.e. .0425 would be represented as 4.25*10^-2 (or rather, the base 2 equivalent).  This means that 425323887533 will have the same number of digits of accuracy as 0.00425323887533.  As a result, near the origin the accuracy gets very high indeed - in fact, the only limit there is the number of digits in the exponent.  Near any other point, you would only get accuracy starting from the first non-zero digit, so while you might be able to exactly express 0.000000000000028335 as different from zero, 1.000000000000028335 might be indistinguishable from 1.0
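That contrast is easy to demonstrate in single precision, where the effect is large enough to test directly (a sketch):

```c
/* Near zero, a float can represent tiny magnitudes just fine; near 1.0,
   anything much smaller than the machine epsilon (~1.2e-7 for float)
   is absorbed without a trace. */
static int distinguishable_from_zero(float x)
{
    return x != 0.0f;
}

static int distinguishable_from_one(float x)
{
    float s = 1.0f + x;  /* store to force rounding to float precision */
    return s != 1.0f;
}
```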


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: Duncan C on August 09, 2008, 04:51:19 PM
Actually, that's incorrect - floating point numbers use a form of scientific notation to store their values - i.e. .0425 would be represented as 4.25*10^-2 (or rather, the base 2 equivalent).  This means that 425323887533 will have the same number of digits of accuracy as 0.00425323887533.  As a result, near the origin the accuracy gets very high indeed - in fact, the only limit there is the number of digits in the exponent.  Near any other point, you would only get accuracy starting from the first non-zero digit, so while you might be able to exactly express 0.000000000000028335 as different from zero, 1.000000000000028335 might be indistinguishable from 1.0

keldor314,

It's my understanding that the loss of precision comes when you subtract two numbers that are very near zero. I'll illustrate with an example using decimal math, understanding that floating point is actually done in binary.

If you perform the calculation 0.12345678901 - 0.12345678900, the answer you'd get would be 0.00000000001, or 1 e-11. However, if your calculations use 12 significant digits, then the resulting answer would only have ONE significant digit of precision. That's because the first 11 significant digits collapse to zero. The result has room for 12 significant digits, but that precision was lost "off the end" of the original terms because of the near cancellation of the two values.
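The same effect can be shown directly in code. In single precision the two values below even round to the same number, so the difference collapses to exactly zero (a sketch of the decimal example above, not FractalWorks code):

```c
/* Subtracting two nearly equal values: in double, the difference is about
   1e-11 but carries far fewer correct digits than its operands did; in
   float (~7 decimal digits), both operands round to the SAME value and
   every digit of the difference is lost. */
static double double_cancel(void)
{
    const double a = 0.12345678901;
    const double b = 0.12345678900;
    return a - b;  /* roughly 1e-11, precision sharply reduced */
}

static float float_cancel(void)
{
    const float a = 0.12345678901f;
    const float b = 0.12345678900f;
    return a - b;  /* exactly 0.0f: total cancellation */
}
```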


Regards,

Duncan C


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: David Makin on August 10, 2008, 03:12:13 PM
Hi,

Duncan:
The problem with adding/subtracting is not restricted to numbers near zero, it also happens for other numbers that vary in magnitude greatly.
e.g. 1e40 + 1 will probably produce 1e40, or 1e60 + 1e20 == 1e60 etc.

Having said that, when rendering normal escape-time fractals the results when zooming will always be better around the origin, because the step size and the values are of similar magnitude; if you zoom in at location (2,2) or somewhere like that, the values are much larger relative to the steps.


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: Duncan C on August 10, 2008, 04:04:43 PM
Hi,

Duncan:
The problem with adding/subtracting is not restricted to numbers near zero, it also happens for other numbers that vary in magnitude greatly.
e.g. 1e40 + 1 will probably produce 1e40, or 1e60 + 1e20 == 1e60 etc.

Having said that when rendering normal escape-time fractals the results when zooming will always be better around the origin because the relative size of the steps and the values is similar, whereas if you zoom in at location (2,2) or something like that then the relative size of the values compared to the steps is larger.

David,

I said:

Quote
It's my understanding that the loss of precision comes when you subtract two numbers that are very near zero. I'll illustrate with an example using decimal math, understanding that floating point is actually done in binary.

I should have said that "...the loss of precision comes when you subtract two numbers who's difference is very near zero."


Title: Re: Why do Julia calculations fall apart at lower magnification than Mandelbrot?
Post by: lycium on August 11, 2008, 11:45:19 AM
*whose

relevant at this juncture is what every scientist should know about floating point arithmetic (http://www.physics.ohio-state.edu/~dws/grouplinks/floating_point_math.pdf).