Welcome to Fractal Forums

Fractal Software => Programming => Topic started by: Duncan C on March 06, 2010, 04:44:47 AM




Title: How much more useable precision will I get using quad precision?
Post by: Duncan C on March 06, 2010, 04:44:47 AM
My app, FractalWorks, currently uses double precision (64 bit) floating point. This is the native precision in the floating point hardware on modern Intel processors, so it is quite fast.

I have been thinking about adding support for quad precision (128 bit). That will slow down calculations, but the math libraries support it, and it will still use hardware floating point - it will just do multiple operations for each quad primitive.

Have any of you worked with 128 bit floating point, and can you give me an idea of how many more useable decimal places this will give me?

Currently I can calculate Mandelbrot fractals down to a width of around 6e-14 before the cumulative rounding errors cause the resulting fractal to fall apart completely. Somewhere around 2e-13 the 3D shape (based on distance estimate (DE) values) starts to show a striped texture that tells me I am getting close to the limit.

How much of an improvement can I expect by converting to quad precision?

I have double precision data structures peppered throughout my code, so making it use quad precision is a pretty big job - especially if I want to make it a checkbox option (since double precision is much faster).

My app has logic to do boundary following, identify symmetric areas of a plot and render only the unique bits, cut the plot into chunks and hand those chunks to "worker threads" for multi-threaded rendering, etc. All of those features pass around data structures that are currently double precision, and the files are saved to disk using doubles. Thus it's not a simple matter of changing a couple of variables to a different type and letting 'er rip.


Duncan C


Title: Re: How much more useable precision will I get using quad precision?
Post by: Botond Kósa on April 15, 2010, 12:16:56 PM
Do you mean using two doubles to store the high and low parts of a floating point number, the way it is described in this topic:

http://www.fractalforums.com/programming/%28java%29-double-double-library-for-128-bit-precision/

That isn't exactly IEEE quad precision, because it has only about 106 bits of significand (2 x 53) instead of 112. Nevertheless, it is quite useful for fractal calculation. I implemented a double-double library in my Mandelbrot generator, and it allows zooming about twice as deep (because it has roughly twice as many significand bits), while being about 10x slower than double precision. It is still about 4x faster than arbitrary (software-based fixed-point) precision, though.

An interesting side-effect is mentioned in that topic:
Quote
Even though the double-double has less precision than quad precision, it is able to store numbers quad precision can't. Consider this number:
1.00000000000000000000000000000000001
which in quad precision would just be 1.0, but can be expressed in double-double precision by the tuple (1.0, 1.0e-35)

This makes it possible to zoom much more deeply toward the points [0,1] and [0,-1]. These values of c are preperiodic: for c = i the orbit is 0, i, -1+i, -i, -1+i, ... (and symmetrically for c = -i), so after the first step it settles into a cycle of period 2, and every iterate has real and imaginary parts of 0, 1, or -1. Hence the high part of each double-double component is always 0, 1, or -1, and the low part can be arbitrarily small; it is limited only by the smallest representable double, about 5e-324 counting subnormals.

Botond