Author Topic: downsybrot  (Read 775 times)
Description: floating-point woes
quaz0r
Fractal Molossus
Posts: 652

« on: December 02, 2014, 11:46:49 PM »



Here is an example of some sort of failure involving perturbation / series approximation / floating-point sorcery, produced by my Mandelbrot program. Initially I was using full precision for a lot of things, until I got around to making things more efficient the way Kalles Fraktaler does with its floatexp type. I just tried a floatexp-style routine for the series coefficient calculations and such, and now these weird anomalies happen much more often than before.

There are still some things I'm not sure about with all this newfangled floating-point sorcery. For example, Pauldelbrot described using the magic number 0.001 for glitch detection, so that is easy enough to drop in place. For determining when the series approximation falls off a cliff, though, I have not found any mention here of a magic number, formula, or other specifics, so I initially settled on checking a few probe pixels for when the series approximation and the regular perturbation formula differ by more than some arbitrary magic number; I settled on about 0.000001. Making this number smaller makes the anomalies less egregious, but the problem never seems to go away entirely, and again it is much worse after moving from full precision to a floatexp sort of deal. Also, I now can't render past about e300, whereas I could when using full precision for these things.

I noticed his floatexp type has a toDouble function that takes a scaling argument; I guess it must fix these issues, but I'm not sure what the idea is exactly, why it is needed, or how to implement it. If you are already applying a huge exponent to a double, I don't understand what this additional scaling is supposed to accomplish.

Anyone have any tips or thoughts about any of this?
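For reference, here is my rough understanding of the floatexp idea — a guess at the general scheme, not KF's actual implementation, and all names below are made up: a double mantissa kept normalized, plus a separately stored wide exponent, so that squaring values around e-300 doesn't underflow; a scaling argument on toDouble would then let you bring results back into double range relative to a chosen scale.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Sketch of a floatexp-style type: double mantissa normalized into
// [0.5, 1) plus a 64-bit exponent carried separately, so magnitudes far
// beyond double's ~1e+/-308 range survive intermediate arithmetic.
struct FloatExp {
    double  man;  // mantissa, |man| in [0.5, 1), or 0
    int64_t exp;  // binary exponent carried separately

    static FloatExp make(double v) {
        FloatExp f{v, 0};
        f.normalize();
        return f;
    }
    void normalize() {
        if (man == 0.0) { exp = 0; return; }
        int e;
        man = std::frexp(man, &e);  // pulls man into [0.5, 1)
        exp += e;
    }
    FloatExp operator*(const FloatExp& o) const {
        FloatExp r{man * o.man, exp + o.exp};
        r.normalize();
        return r;
    }
    // Convert back to double, optionally applying an extra scaling
    // exponent -- presumably the role of KF's toDouble(scale): the value
    // is returned relative to a chosen scale so it stays representable.
    double toDouble(int64_t scaleExp = 0) const {
        return std::ldexp(man, static_cast<int>(exp + scaleExp));
    }
};
```

The point of the extra scaling argument, as I understand it, is the last step: a quantity like 1e600 cannot be handed back as a plain double at all, but divided by a per-image power of two it can.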
« Last Edit: December 02, 2014, 11:51:17 PM by quaz0r » Logged
3dickulus
Global Moderator
Fractal Senior
Posts: 1558
« Reply #1 on: December 03, 2014, 05:38:00 AM »

I can offer some things I've learned from hacking at SFTC.

see SFTC/Engine/approximation.cpp for details regarding this...

In the CalculateIterations() routine, with some help from Knighty on Pauldelbrot's idea for glitch detection, and a lot of twiddling on my part...

Code:
long double EPSILON = 1.0E-13 * M_PI; /// E-13 too big, E-14 too small, something in between works best
long double BIGNUM = 1.0/EPSILON;

the first test is...
(in the approximation?)
Code:
            long double zr=x+dx, zi=xi+dxi;
            if((zr*zr+zi*zi)<EPSILON*(x*x+xi*xi)) return aDetails->GetIterationLimit()+1;

the second test is...
(if the approximation needs more iterations?)
Code:
if ((std::min(fabs(c - dx), fabs(ci - dxi)) / std::max(fabs(dx), fabs(dxi)) < EPSILON) ||
    (std::max(fabs(c + dx), fabs(ci + dxi)) / std::min(fabs(dx), fabs(dxi)) > BIGNUM)) {
    return iterlim + 1;
}

where iterlim+1 is the value used to flag glitched pixels for re-rendering after picking a new reference point, i.e. pass 2, 3, 4...
This seems to catch all glitches.
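The first test can be pulled out into a tiny self-contained predicate, in case it helps anyone reading later — a sketch with illustrative names, using the EPSILON from the post (M_PI written out as a literal for portability). It flags a pixel when the full orbit z = Z + dz nearly cancels against the reference orbit Z:

```cpp
#include <cassert>

// EPSILON as given above: E-13 too big, E-14 too small
const long double EPSILON = 1.0E-13L * 3.141592653589793238L;

// x, xi: reference orbit Z;  dx, dxi: this pixel's delta dz.
// Glitched when |Z + dz|^2 < EPSILON * |Z|^2, i.e. the pixel's orbit
// has collapsed onto the reference and lost its significant bits.
bool isGlitched(long double x, long double xi,
                long double dx, long double dxi) {
    long double zr = x + dx, zi = xi + dxi;
    return (zr * zr + zi * zi) < EPSILON * (x * x + xi * xi);
}
```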



see SFTC/Engine/approximation.cpp and SFTC/GUI/glwidget.cpp for details regarding this...

For scaling re: the exponent, I have set 2.225E-308 as the limit where scaling starts. I found a couple of references to the limits of type double:
1. smallest normal double = 2.2250738585072014e-308, without losing precision
2. smallest accurately representable number = MaxExponent - mantissaBits = 272

I think 2 is the limit where you start to lose bits, while 1 has a tolerance of 16 decimal places over the absolute smallest value the double type can hold, 5e-324, which uses only 1 bit — not good for anything more than acting as a constant.
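Those limits are easy to verify with plain doubles — a quick sketch (losesPrecision is an illustrative helper, not from SFTC): below DBL_MIN (2.2250738585072014e-308, the smallest normal double) values go denormal and mantissa bits start dropping off, until at about 5e-324 only a single bit remains.

```cpp
#include <cassert>
#include <cfloat>
#include <cmath>

// In the denormal range a relative nudge of 1e-10 falls below half an
// ulp, so the product rounds straight back to v -- the telltale sign
// that the value has already lost most of its mantissa bits.
bool losesPrecision(double v) {
    return v != 0.0 && v * (1.0 + 1e-10) == v;
}
```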

Scaling is applied to c and ci when calculating the initial reference data, and then in the CalculateIterations() routine to delta and deltai.



The voodoo magic is something of a mystery to me; I largely take it for granted that it works, and thank folks like Kevin Martin, Pauldelbrot, Kalle, and Knighty for sharing their discoveries and hard work. As my math skills are rudimentary at best, I can't offer any real insight or depth to the idea, and I appreciate any correction to my assumptions and guesses.
« Last Edit: December 03, 2014, 05:39:49 AM by 3dickulus » Logged

Resistance is fertile...
You will be illuminated!

                            #B^] https://en.wikibooks.org/wiki/Fractals/fragmentarium
quaz0r
Fractal Molossus
Posts: 652

« Reply #2 on: December 08, 2014, 12:54:15 AM »

Thanks for the response; trying to figure all this stuff out is a real journey. I decided to try what I originally thought would be ridiculously extreme values for how much the series approximation is allowed to differ from the perturbation, and everything seems to be rendering properly now — going a little less far with the series approximation, but not by a whole lot. I am sure there must be a more intelligent approach altogether, though. I believe Pauldelbrot described doing a different sort of check for when to bail on the series approximation, and I don't think it involved checking against a magic number, so his approach is probably the smartest. I'll have to take a look at that again sometime.
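For anyone trying the probe-pixel approach, here is a minimal sketch of one way to do it — not Pauldelbrot's method, just the naive comparison described above, using the standard cubic series-coefficient recurrences for z → z² + c (A' = 2ZA + 1, B' = 2ZB + A², C' = 2ZC + 2AB); all names are illustrative:

```cpp
#include <cassert>
#include <complex>

using cplx = std::complex<double>;

// Iterate the series coefficients alongside the plain perturbation
// recurrence at one probe offset; stop trusting the series once the
// two disagree by more than a relative tolerance. Returns how many
// iterations the series stayed within tolerance at this probe.
int validSeriesIterations(cplx refC, cplx probeD0, double tol, int maxIter) {
    cplx Z = 0.0;                    // reference orbit Z_n
    cplx A = 0.0, B = 0.0, C = 0.0;  // series coefficients
    cplx d = 0.0;                    // probe delta, iterated directly
    for (int n = 0; n < maxIter; ++n) {
        cplx A1 = 2.0 * Z * A + 1.0;
        cplx B1 = 2.0 * Z * B + A * A;
        cplx C1 = 2.0 * Z * C + 2.0 * A * B;
        cplx d1 = 2.0 * Z * d + d * d + probeD0;  // perturbation formula
        Z = Z * Z + refC;
        A = A1; B = B1; C = C1; d = d1;
        cplx series = A * probeD0 + B * probeD0 * probeD0
                    + C * probeD0 * probeD0 * probeD0;
        if (std::abs(series - d) > tol * (std::abs(d) + 1e-300))
            return n;                // series no longer trustworthy here
        if (std::abs(Z) > 2.0) break;  // reference escaped first
    }
    return maxIter;
}
```

In a real renderer you would run several probes (e.g. the frame corners) and skip ahead by the minimum count they all agree on.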
Logged
3dickulus
Global Moderator
Fractal Senior
Posts: 1558
« Reply #3 on: December 08, 2014, 01:16:56 AM »

I think the "magic" is detecting when operations are about to start losing precision due to the limits of the data type: when the difference between values is smaller than the bit resolution allows, they round to values that amplify error.

The tests I refer to are both applied after the reference calculation.
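That rounding effect is easy to see in isolation — a tiny sketch: subtract two nearly equal doubles and the leading bits cancel, leaving a result whose relative error is huge compared to the inputs'.

```cpp
#include <cmath>

// 1e-15 is about 4.5 ulps of 1.0, so (1.0 + 1e-15) is stored as
// 1 + 5 ulps; the subtraction then returns exactly 5 ulps (~1.11e-15),
// an ~11% relative error, even though both operands were "exact".
double cancellationDemo() {
    double a = 1.0 + 1e-15;
    double b = 1.0;
    return a - b;
}
```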
Logged
