Title: downsybrot
Post by: quaz0r on December 02, 2014, 11:46:49 PM
(http://i1.someimage.com/Qr4EE4X.png) (http://i1.someimage.com/CfIW1A7.png)

here is an example of some sort of failure in the perturbation / series approximation / floating-point sorcery, produced by my Mandelbrot program.  Initially I was using full precision for lots of things, until I could get around to making things more efficient the way Kalles Fraktaler does with its floatexp type.  I just tried a floatexp-style routine for the series coefficient calculations and such, and now these weird anomalies happen much more often than before.

There are still some things I'm not sure about with all this newfangled floating-point sorcery.  For example, Pauldelbrot described using the magic number 0.001 for glitch detection, so that is easy enough to drop in place.  For determining when the series approximation falls off the cliff, though, I have not found any mention here of a magic number, formula, or other specifics, so I initially settled on checking a few pixels for when the series approximation and the regular perturbation formula differ by more than some arbitrary magic number; I settled on 0.000001 or so.  Making this magic number smaller makes the anomalies less egregious, though the problem never seems to totally go away, and it is much worse since moving from full precision to the floatexp-style routine.

Also, I now can't render past e300 or so, whereas I could when using full precision for these things.  I noticed the floatexp type has a toDouble function that takes a scaling argument, and I guess that must fix these issues, but I'm not sure what the idea is exactly, why it is needed, or how to implement it.  If you are already applying a huge exponent to a double, I don't understand what this additional scaling is supposed to accomplish or how it helps anything.
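To be concrete, here is roughly the shape of that probe check (a minimal sketch with made-up names and a three-term series, not lifted from anyone's actual code):

Code:
#include <complex>

typedef std::complex<double> cplx;

// A handful of probe deltas are carried forward two ways: by the series
// polynomial and by plain perturbation. When the two estimates drift
// apart by more than 'tol' (my magic 0.000001), the series
// approximation is considered dead at that iteration.
bool seriesStillOK(const cplx &A, const cplx &B, const cplx &C,
                   const cplx *probeDelta,   // deltas iterated by perturbation
                   const cplx *probeDelta0,  // each probe's initial delta
                   int nProbes, double tol)
{
    for (int i = 0; i < nProbes; ++i) {
        cplx d0 = probeDelta0[i];
        cplx series = A*d0 + B*d0*d0 + C*d0*d0*d0;  // series estimate
        if (std::abs(series - probeDelta[i]) > tol)
            return false;  // series has fallen off the cliff
    }
    return true;
}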

Anyone have any tips or thoughts about any of this?


Title: Re: downsybrot
Post by: 3dickulus on December 03, 2014, 05:38:00 AM
I can offer some things I've learned from hacking at SFTC :)

see SFTC/Engine/approximation.cpp for details regarding this...

in the CalculateIterations() routine, with some help from Knighty and Pauldelbrot's idea for glitch detection, plus a lot of twiddling on my part...

Code:
#include <cmath>     // M_PI, fabs
#include <algorithm> // std::min, std::max

long double EPSILON = 1.0E-13 * M_PI; /// E-13 too big, E-14 too small, something in between works best
long double BIGNUM  = 1.0 / EPSILON;

the first test is...
(in the approximation?)
Code:
// x, xi   = current reference orbit value (real, imaginary)
// dx, dxi = this pixel's perturbation delta
long double zr = x + dx, zi = xi + dxi;
// Pauldelbrot's criterion: when the full orbit |z|^2 collapses relative
// to the reference |x|^2, the delta has lost its significant bits.
if ((zr*zr + zi*zi) < EPSILON * (x*x + xi*xi)) return aDetails->GetIterationLimit() + 1;

the second test is...
(if the approximation needs more iterations?)
Code:
// c, ci = this pixel's offset from the reference point
// If the ratio of the pixel's own terms to its delta becomes extreme in
// either direction, the arithmetic is running out of bits.
if ((std::min(fabs(c - dx), fabs(ci - dxi)) / std::max(fabs(dx), fabs(dxi)) < EPSILON) ||
    (std::max(fabs(c + dx), fabs(ci + dxi)) / std::min(fabs(dx), fabs(dxi)) > BIGNUM)) {
    return iterlim + 1;
}

where iterlim+1 is the value used to flag glitch pixels for re-rendering after picking a new reference point, i.e. pass 2, 3, 4, ...
this seems to catch all glitches
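In outline, the pass loop looks something like this (a hypothetical sketch, not the actual SFTC code; chooseReference(), calculateIterations() and storeResult() stand in for the real routines):

Code:
#include <vector>

struct Pixel { int x, y; };

void chooseReference(const std::vector<Pixel> &candidates); // pick a new reference point (hypothetical)
int  calculateIterations(const Pixel &p);                   // perturbation render, returns iterlim+1 on glitch (hypothetical)
void storeResult(const Pixel &p, int iterations);           // write to the image (hypothetical)

void renderPasses(int width, int height, int iterlim) {
    std::vector<Pixel> todo;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            todo.push_back({x, y});

    // Pass 1 renders everything; each later pass re-renders only the
    // pixels the glitch tests flagged, against a fresh reference.
    // (A real renderer would also cap the number of passes.)
    while (!todo.empty()) {
        chooseReference(todo);
        std::vector<Pixel> glitched;
        for (const Pixel &p : todo) {
            int n = calculateIterations(p);
            if (n > iterlim) glitched.push_back(p);  // flagged for the next pass
            else             storeResult(p, n);
        }
        todo.swap(glitched);  // pass 2, 3, 4, ...
    }
}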



see SFTC/Engine/approximation.cpp and SFTC/GUI/glwidget.cpp for details regarding this...

for scaling re: exponent, I have set 2.225E-308 as the limit where scaling starts. I found a couple of references to the limits of type double:
1. smallest double without losing precision = 2.2250738585072014e-308 (DBL_MIN, the smallest normal double)
2. smallest accurately representable number = MaxExponent - mantissaBits = 272

I think 2 is the limit where you start to lose bits, while 1 still has a tolerance of 16 decimal places over the absolute smallest value the double type can hold, 5e-324; that last value uses only 1 bit of mantissa, so it is not good for anything more than acting as a constant.

scaling is applied to c and ci when calculating the initial reference data, and then in the CalculateIterations() routine to delta and deltai.
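My reading of why that helps (a guess at the mechanism, sketched with made-up names, not KF's or SFTC's actual floatexp code): a plain double cannot hold values below about 2.225e-308 without going subnormal and shedding mantissa bits, so when converting an extended-exponent value down to a double for the fast inner loop, you fold in one common scale factor that lifts every delta back into the normal range. Since every quantity gets the same factor, the ratios and comparisons are unchanged.

Code:
#include <cmath>

// Sketch of a floatexp-style type: a double mantissa plus a separate
// integer power of two, so magnitudes far below DBL_MIN (~2.225e-308)
// still keep a full 53-bit mantissa.
struct FloatExp {
    double mantissa;  // held in a normalized range, e.g. [0.5, 1)
    int    exponent;  // extra power of two carried alongside

    // Convert to double, folding in a caller-chosen scaling exponent.
    // Without scaleExp, anything below 2^-1022 would underflow; scaling
    // the whole image by 2^scaleExp keeps the deltas normal, and the
    // escape and glitch tests are unaffected because every quantity in
    // them carries the same common factor.
    double toDouble(int scaleExp) const {
        return std::ldexp(mantissa, exponent + scaleExp);
    }
};

At least that is how I make sense of the toDouble(scale) signature mentioned above.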



The voodoo magic is something of a mystery to me; I take it largely for granted that it works, and thank folks like Kevin Martin, Pauldelbrot, Kalles, and Knighty for sharing their discoveries and hard work. As my math skills are rudimentary at best, I can't offer any real insight or depth on the idea, and I appreciate any correction to my assumptions and guesses ;)


Title: Re: downsybrot
Post by: quaz0r on December 08, 2014, 12:54:15 AM
Thanks for the response; trying to figure all this stuff out is a real journey.  I decided to try what I originally thought would be ridiculously extreme values for how much the series approximation is allowed to differ from the perturbation, and everything seems to be rendering properly now; the series approximation stops a little earlier, but not by a whole lot.  I am sure there must be a more intelligent approach altogether, though.  I believe Pauldelbrot described doing a different sort of check for when to bail on the series approximation, and I don't think it involved checking against a magic number, so his approach is probably the smartest.  I'll have to take a look at that again sometime.


Title: Re: downsybrot
Post by: 3dickulus on December 08, 2014, 01:16:56 AM
I think the "magic" is detecting when operations are going to start losing precision due to the limits of the data type, like when the difference between two values is smaller than the bit resolution allows, so they round to values that amplify the error?

the tests I refer to are both applied after the reference calculation.
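A toy demonstration of the kind of precision loss I mean (generic C++, nothing to do with SFTC; behaviour assumes ordinary IEEE-754 doubles):

Code:
#include <cstdio>

int main() {
    // Cancellation: a and b are each accurate to ~16 digits, but their
    // difference snaps to the nearest representable step (ulp) of 1.0,
    // so it prints ~2.22e-16 rather than 2e-16.
    double a = 1.0 + 2e-16, b = 1.0;
    printf("a - b = %.17g (exact answer is 2e-16)\n", a - b);

    // Underflow: 1e-310 lies below DBL_MIN (~2.225e-308), so it is
    // stored as a subnormal with fewer mantissa bits; scaling back up
    // cannot recover the lost bits, so this typically prints 0 (false).
    double tiny = 1e-300;
    printf("round trip intact? %d\n", (tiny * 1e-10) * 1e10 == tiny);
    return 0;
}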