Author Topic: Why do Julia calculations fall apart at lower magnification than Mandelbrot?  (Read 9709 times)
gandreas
Explorer
****
Posts: 53



« Reply #15 on: April 07, 2007, 05:28:37 PM »

M has z0 = constant, and c = pixel +/- E.  J has z0 = pixel +/- E, and c = fixed.

Quote
btw, it's important to be clear about what error you're measuring. in that example, you're seeing how the error of the initial pixel->complex plane computation (and only that error!) grows.
Exactly - that's the error that causes this.
Quote
an equally valid view is to take c as being exact in value*, but not as the value you'd algebraically get.
Yes and no.  c is an exact value in J, but not in M.  In J you type in a value, and yes, you'll get something slightly different in most cases, but that error will be constant across the entire image.

In the pixel->plane conversion, that initial error will be different at different pixels for J, which distorts the final image (for M, z0 is exact, and the differing error values are introduced in smaller amounts, since uncertainty(a + b) < uncertainty(a * b) when uncertainty(a) ~= uncertainty(b)).  The fact that the error differs from pixel to pixel helps explain why the "fall apart" looks the way it does.


And you don't need 60K worth of calculations to see this - switching to single precision, you can see it with 100 iterations and the right amount of zoom with the right c values.

Here's two easy experiments to get a feel for it.

In your favorite renderer that uses double precision, take the lines that say something to the effect of:

Code:
double zreal = screenX / magnification + centerX;
double zimag = screenY / magnification + centerY;
and change them to:

Code:
double zreal = float(screenX / magnification + centerX);
double zimag = float(screenY / magnification + centerY);
which converts that specific calculation to single precision (and leaves everything else as it is).  This increases the uncertainty of z0 (for J) and c (for M).  You'll see these errors occur at a much lower magnification.

You can also play with different formulas - if you switch from z * z + c to z * z + c * c * c, you should see M fall apart slightly sooner on average than J.


The truly sad part is that, due to the very chaotic nature of the fractal, this early error wipes out all the precision you have through the rest of the calculation.

One idea (and this would work well for basic z * z + c style fractals) is to use a pair of variables and treat them as rational numbers - that would all but completely remove that initial error, though at the cost of roughly 2x-3x speed.  Proper use of C++ templates might even leave the code readable...
Duncan C
Fractal Fanatic
****
Posts: 348



« Reply #16 on: April 09, 2007, 12:44:02 AM »

Dennis De Mars, author of "Fractal Domains," offered the following in an email to me:
------
I believe you are seeing the results of a loss of precision that can result when you have near-cancellation in an intermediate term. This is the same sort of thing that can cause "ill-conditioned" equation systems to be difficult to solve numerically.

For instance, if in the course of calculation the orbit comes very near zero, you probably had a situation where near-cancellation occurred in the previous iteration. For instance, in the real part suppose you had 0.123456123456 - 0.123456000000 = 0.000000123456. In this case, the coordinates were known to 12 significant figures in the previous iteration, but the new result is known only to 6 significant figures. I believe that when you zoom in on the origin in the Julia case, you are picking points that are all the result of this kind of near-cancellation, so you see the precision break down quicker in that region than in other regions.
------
That makes perfect sense to me, and seems like the best explanation. Zooms that are well off of the origin do not break down as quickly.

We don't zoom in on (0,0) on the Mandelbrot set because it's pure black. Unless you're doing some sort of orbit plot, it's as boring as can be. Julia sets, on the other hand (or at least their neighborhoods), can be visually fascinating.


Duncan C

keldor314
Guest
« Reply #17 on: July 02, 2008, 08:47:03 AM »

Actually, that's incorrect - floating point numbers use a form of scientific notation to store numbers - i.e. 0.0425 would be represented as 4.25*10^-2 (or rather, the base-2 equivalent).  This means that 425323887533 will have the same number of digits of accuracy as 0.00425323887533.  This means that near the origin, the accuracy gets very high indeed - in fact, the only limit there is the number of digits in the exponent.  Near any other point, you would only get accuracy starting from the first non-zero digit, so while you might be able to exactly express 0.000000000000028335 as different from zero, 1.000000000000028335 might be indistinguishable from 1.0
Duncan C
Fractal Fanatic
****
Posts: 348



« Reply #18 on: August 09, 2008, 04:51:19 PM »

Quote
Actually, that's incorrect - floating point numbers actually use a form of scientific notation in the way they store numbers - i.e. .0425 would be represented as 4.25*10^-2 (or rather, the base 2 equivalent).  This means that 425323887533 will have the same number of digits of accurately as 0.00425323887533.  This means that near the origin, the accuracy gets very high indeed - in fact, the only limit there is the number of digits in the exponent.  Near any other point, you would only get accuracy starting from the first non-zero digit, so while you might be able to exactly express 0.000000000000028335 as different from zero, 1.000000000000028335 might be indistinguishable from 1.0

keldor314,

It's my understanding that the loss of precision comes when you subtract two numbers that are very near zero. I'll illustrate with an example using decimal math, understanding that floating point is actually done in binary.

If you perform the calculation 0.12345678901 - 0.12345678900, the answer you'd get would be 0.00000000001, or 1e-11. However, if your calculations use 12 significant digits, the resulting answer would have only ONE significant digit of precision. That's because the first 11 significant digits cancel to zero. The result has room for 12 significant digits, but that precision was lost "off the end" of the original terms because of the near-cancellation of the two values.


Regards,

Duncan C
David Makin
Global Moderator
Fractal Senior
******
Posts: 2286



Makin' Magic Fractals
« Reply #19 on: August 10, 2008, 03:12:13 PM »

Hi,

Duncan:
The problem with adding/subtracting is not restricted to numbers near zero; it also happens when the two numbers differ greatly in magnitude.
e.g. 1e40 + 1 will produce 1e40, and 1e60 + 1e20 == 1e60, etc.

Having said that, when rendering normal escape-time fractals the results when zooming will always be better around the origin, because the relative sizes of the steps and the values are similar, whereas if you zoom in at a location like (2,2), the values are large compared to the steps.

The meaning and purpose of life is to give life purpose and meaning.

http://www.fractalgallery.co.uk/
"Makin' Magic Music" on Jango
Duncan C
Fractal Fanatic
****
Posts: 348



« Reply #20 on: August 10, 2008, 04:04:43 PM »

Quote
Hi,

Duncan:
The problem with adding/subtracting is not restricted to numbers near zero, it also happens for other numbers that vary in magnitude greatly.
e.g. 1e40 + 1 will probably produce 1e40, or 1e60 + 1e20 == 1e60 etc.

Having said that when rendering normal escape-time fractals the results when zooming will always be better around the origin because the relative size of the steps and the values is similar, whereas if you zoom in at location (2,2) or something like that then the relative size of the values compared to the steps is larger.

David,

I said:

Quote
It's my understanding that the loss of precision comes when you subtract two numbers that are very near zero. I'll illustrate with an example using decimal math, understanding that floating point is actually done in binary.

I should have said that "...the loss of precision comes when you subtract two numbers who's difference is very near zero."
lycium
Fractal Supremo
*****
Posts: 1158



« Reply #21 on: August 11, 2008, 11:45:19 AM »

*whose

relevant at this juncture is "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
