Topic: *Continued* SuperFractalThing: Arbitrary precision mandelbrot set rendering in Java.

stardust4ever
Fractal Bachius
« Reply #150 on: March 22, 2016, 11:53:08 AM »

Quote
Maybe unchecked "ignore small addends?"
Where is this option? One could have fun deliberately breaking stuff for LOLz... evil
PieMan597
Conqueror
« Reply #151 on: March 22, 2016, 12:33:05 PM »

It shows up in the first image in http://www.fractalforums.com/mandel-machine/mandel-machine/.
Kalles Fraktaler
Fractal Senior
« Reply #152 on: March 22, 2016, 01:12:20 PM »

Quote
@Kalles Fraktaler: Kalles Fraktaler is failing to find the centroid again. I am using latest version.
You should use claude's excellent Newton-Raphson function to locate the minibrot smiley

I agree that perturbation is solid and reliable - for bailout > 2.
I say so because I have found locations that cause glitches even though no approximation is used, with bailout = 2 (attached location).

The bigger problem is instead with series approximation.
My implementation checks the corners and the midpoints of the frame edges of the view, i.e. 8 points.
More points could be added, perhaps also inside the view and not only at the borders.

But MM's glitches are very different from KF's, so I don't know how Botond implemented this...

* glitch48.kfr (2.73 KB - downloaded 79 times.)

Want to create DEEP Mandelbrot fractals 100 times faster than the commercial programs, for FREE? One hour or one minute? Three months or one day? Try Kalles Fraktaler http://www.chillheimer.de/kallesfraktaler
http://www.facebook.com/kallesfraktaler
stardust4ever
Fractal Bachius
« Reply #153 on: March 23, 2016, 05:29:58 AM »

Quote
You should use claude's excellent Newton-Raphson function to locate the minibrot smiley
Ah, so they call it something else then. I'll remember that next time... wink
hapf
Fractal Lover
« Reply #154 on: March 23, 2016, 09:34:53 AM »

Quote
But MM's glitches are very different from KF's, so I don't know how Botond implemented this...
I can't provoke these artifacts either so far, neither with overskipping nor by artificially bit-starving the calculations.
Kalles Fraktaler
Fractal Senior
« Reply #155 on: March 23, 2016, 10:49:54 AM »

Quote
Ah, so they call it something else then. I'll remember that next time... wink
Menu Special->Newton-Raphson zooming, or Ctrl+D smiley
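
For reference, the Newton-Raphson zoom being discussed finds the nucleus of a minibrot of known period p by iterating c <- c - z_p(c) / (dz_p/dc), where z_0 = 0 and z_{k+1} = z_k^2 + c. A minimal double-precision sketch of that iteration (real implementations work in arbitrary precision; all names here are invented, not KF's actual code):

Code:
// Sketch: Newton-Raphson iteration toward the nucleus of a period-p minibrot,
// in plain double precision. c is iterated p times from z = 0; the derivative
// w.r.t. c follows d <- 2*z*d + 1. Illustrative only.
final class NewtonNucleus {
    static double[] findNucleus(double cRe, double cIm, int period, int maxSteps) {
        for (int step = 0; step < maxSteps; step++) {
            double zRe = 0, zIm = 0;      // z_0 = 0
            double dRe = 0, dIm = 0;      // dz/dc, starts at 0
            for (int k = 0; k < period; k++) {
                // d <- 2*z*d + 1
                double ndRe = 2 * (zRe * dRe - zIm * dIm) + 1;
                double ndIm = 2 * (zRe * dIm + zIm * dRe);
                // z <- z^2 + c
                double nzRe = zRe * zRe - zIm * zIm + cRe;
                double nzIm = 2 * zRe * zIm + cIm;
                dRe = ndRe; dIm = ndIm;
                zRe = nzRe; zIm = nzIm;
            }
            // Newton step: c <- c - z_p(c) / z_p'(c)  (complex division by conjugate)
            double denom = dRe * dRe + dIm * dIm;
            if (denom == 0) break;                    // derivative vanished; give up
            double stepRe = (zRe * dRe + zIm * dIm) / denom;
            double stepIm = (zIm * dRe - zRe * dIm) / denom;
            cRe -= stepRe; cIm -= stepIm;
            if (stepRe * stepRe + stepIm * stepIm < 1e-30) break;  // converged
        }
        return new double[] { cRe, cIm };
    }
}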
knighty
Fractal Iambus
« Reply #156 on: March 23, 2016, 02:52:19 PM »

Hi,
In principle, one could address the accuracy problem by using interval/affine arithmetic methods. One of these methods can be defined as:

[z] = z + dz\,t + \frac{1}{2} ddz\,t^2 + \cdots + R\,|t^{n+1}|\,E

where dz is the first derivative of z, ddz the second derivative, etc.
R\,|t^{n+1}| is a positive real number bounding the error in the O(t^{n+1}) term.
E is the closed unit disc centered at 0: E = \{ r\,e^{i\theta} : r \in [0,1],\ \theta \in \mathbb{R} \}.

In the case of the series approximation we have:

Z_n = z_n + a_{1,n}\,t + a_{2,n}\,t^2 + \cdots + a_{m,n}\,t^m

One can verify that a_{i,n} = z_n^{(i)}/i! for all i < m; for i = m it is slightly different.

Let's write Z_n in interval form (take m = 3 for simplicity):

[Z_n] = z_n + a_{1,n}\,t + a_{2,n}\,t^2 + a_{3,n}\,t^3 + R_n\,|t^4|\,E

Or, using the notation of the original SuperFractalThing paper:

[Z_n] = z_n + A_n\,t + B_n\,t^2 + C_n\,t^3 + R_n\,|t^4|\,E

We have:

[Z_{n+1}] = [Z_n]^2 + [Z_0]
(... some algebraic manipulation ...)
[Z_{n+1}] = z_{n+1} + A_{n+1}\,t + B_{n+1}\,t^2 + C_{n+1}\,t^3 + R_{n+1}\,|t^4|\,E

where (hopefully):
A_{n+1} = 2 z_n A_n + 1
B_{n+1} = 2 z_n B_n + A_n^2
C_{n+1} = 2 (z_n C_n + A_n B_n)

and:
R_{n+1} = |B_n|^2 + 2 (|z_n| R_n + |A_n| |C_n|) + 2 |t_{max}| (|A_n| R_n + |B_n| |C_n|) + |t_{max}|^2 (|C_n|^2 + 2 |B_n| R_n) + 2 |t_{max}|^3 |C_n| R_n + |t_{max}|^4 R_n^2

where |t_{max}| is the maximal distance between the reference point and the corner points.
I use "reduction" operations to obtain R_{n+1}. The rules are:
- Replace any complex number by its absolute value.
- Replace |t^k| by |t^{m+1}| |t_{max}|^{k-m-1} whenever k > m+1 (in our case m = 3).
- Remove any occurrence of E in the computation of R_{n+1}; it is added back afterward.

(Remark: obviously, \Delta_n is obtained by dropping the z_n and error terms from [Z_n].)
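
(Filling in the elided manipulation above, for anyone who wants to check the recurrences: squaring the polynomial part of [Z_n], adding [Z_0] = z_0 + t, and collecting powers of t gives

\begin{aligned}
[Z_n]^2 + [Z_0]
  &= \underbrace{z_n^2 + z_0}_{z_{n+1}}
   + \underbrace{(2 z_n A_n + 1)}_{A_{n+1}}\,t
   + \underbrace{(2 z_n B_n + A_n^2)}_{B_{n+1}}\,t^2
   + \underbrace{2 (z_n C_n + A_n B_n)}_{C_{n+1}}\,t^3 \\
  &\quad + (B_n^2 + 2 A_n C_n)\,t^4 + 2 B_n C_n\,t^5 + C_n^2\,t^6 \\
  &\quad + 2\,(z_n + A_n t + B_n t^2 + C_n t^3)\,R_n |t^4| E + R_n^2 |t^4|^2 E^2 .
\end{aligned}

Applying the three reduction rules above to everything past the t^3 term reproduces the R_{n+1} formula term by term.)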

OK! The formula for R_{n+1} is lengthy: it requires m+2 terms. Moreover, it also requires absolute values => square roots... but:
- Some of the terms are very small and could safely be skipped.
- Using a disc interval (which is what requires the absolute values) prevents excessive (actually catastrophic) growth of its size. For example, multiplying a disc interval by \frac{\sqrt{2}}{2}(1+i) doesn't change its size, whereas a square interval's size would be multiplied by \sqrt{2}.
- The computations are done once per reference point. Moreover, they can be done in parallel with the full-accuracy computation of z_n.
Another drawback of (all?) interval arithmetic methods is that they are very conservative: at some point the error term grows extremely fast.

We are not finished yet evil! The question is how to use it.

Say we have a function y(x). For some small enough \Delta x we have:

\Delta y = y'\,\Delta x

Let us call \Delta x and \Delta y the absolute variations; their absolute values will be called the errors.

The question we are asking is: what is the maximal allowed error on y that corresponds to some allowed error on x? In other words:

|\Delta y| \le |y'|\,|\Delta x|

Here |\Delta x| will be some fraction of the size of the area covered by one pixel; it depends on the resolution and the zoom factor.
y corresponds to \Delta_n, and y' to the derivative of \Delta_n (which is A_n + 2 B_n t + 3 C_n t^2).
|\Delta y| is simply given by R_n.
Therefore the accuracy of the series approximation is sufficient if the following condition is met:

R_n \le |\Delta_n'|\,|\Delta t|
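
A sketch of how the guarded recurrence and this validity test could look in code; plain doubles, invented names, and a hand-rolled Complex helper, so treat it as an illustration rather than anyone's actual implementation:

Code:
// Guarded series iteration: alongside A_n, B_n, C_n we carry the error
// radius R_n (a real number), and stop skipping as soon as the validity
// test R_n <= |Delta_n'(t)| * dt fails at some probe point.
record Complex(double re, double im) {
    Complex add(Complex o) { return new Complex(re + o.re, im + o.im); }
    Complex mul(Complex o) { return new Complex(re * o.re - im * o.im, re * o.im + im * o.re); }
    Complex mul(double s)  { return new Complex(re * s, im * s); }
    double abs()           { return Math.hypot(re, im); }
}

class GuardedSeries {
    Complex a = new Complex(0, 0), b = new Complex(0, 0), c = new Complex(0, 0);
    double r = 0;              // R_n: radius of the disc error term
    final double tMax;         // max |t| over the view (reference to corners)
    GuardedSeries(double tMax) { this.tMax = tMax; }

    // One iteration step, given the reference orbit value z_n.
    void step(Complex z) {
        Complex a1 = z.mul(a).mul(2).add(new Complex(1, 0));   // A' = 2 z A + 1
        Complex b1 = z.mul(b).mul(2).add(a.mul(a));            // B' = 2 z B + A^2
        Complex c1 = z.mul(c).add(a.mul(b)).mul(2);            // C' = 2 (z C + A B)
        double az = z.abs(), aa = a.abs(), ab = b.abs(), ac = c.abs();
        // R' from the reduction rules (all complex factors -> absolute values)
        double r1 = ab * ab + 2 * (az * r + aa * ac)
                  + 2 * tMax * (aa * r + ab * ac)
                  + tMax * tMax * (ac * ac + 2 * ab * r)
                  + 2 * tMax * tMax * tMax * ac * r
                  + tMax * tMax * tMax * tMax * r * r;
        a = a1; b = b1; c = c1; r = r1;
    }

    // Validity test at probe offset t: R_n <= |A + 2 B t + 3 C t^2| * dt,
    // where dt is a fraction of the pixel size.
    boolean accurateAt(Complex t, double dt) {
        Complex deriv = a.add(b.mul(t).mul(2)).add(c.mul(t.mul(t)).mul(3));
        return r <= deriv.abs() * dt;
    }
}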
___________________________________________________________________________________

For the detection of the glitches caused by numerical inaccuracies, a similar approach can be used. The question is now: what is the maximal allowed relative error on y that corresponds to some allowed absolute error on x? That is, if

|\Delta y / y| > |y'/y|\,|\Delta x|

holds, then the accuracy is insufficient and a glitch occurs.
y is replaced by \delta_{n+1} = (2 z_n + \delta_n)\,\delta_n + \delta_0.
y' is replaced by \delta_{n+1}' = 2 (z_n + \delta_n)\,\delta_n' + 1.
|\Delta x| will again be some fraction of the size of the area covered by one pixel, depending on the resolution and the zoom factor (it is replaced by \Delta\delta_0).
|\Delta y / y| is replaced by the relative accuracy of the computation of \delta_{n+1}. Ignoring the accumulation of rounding errors, and taking into account only the cancellation that may occur in the computation of 2 z_n + \delta_n, I get:

|\Delta y / y| = \frac{\max(2|z_n|, |\delta_n|)}{|2 z_n + \delta_n|}\,2^{-p_{max}}

where p_{max} is the number of bits in the mantissa.

So the glitch condition becomes:

2^{-p_{max}} \max(2|z_n|, |\delta_n|)\,|\delta_{n+1}| > |2 z_n + \delta_n|\,|\delta_{n+1}'|\,|\Delta\delta_0|
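
Transcribed literally into code, reusing the Complex record from the previous sketch (again just an illustration; Pauldelbrot's criterion, roughly flagging a glitch when |z_n + \delta_n| becomes tiny relative to |z_n|, is far cheaper per iteration):

Code:
// Direct transcription of the glitch condition above. Names invented.
// delta  = delta_n, deltaP = delta_n' (derivative w.r.t. delta_0),
// z = reference orbit value z_n, pixelSpacing = |Delta delta_0|.
static boolean glitchSuspected(Complex z, Complex delta, Complex deltaP,
                               Complex delta0, double pixelSpacing, int mantissaBits) {
    Complex twoZplusD = z.mul(2).add(delta);                  // 2 z_n + delta_n
    Complex deltaNext = twoZplusD.mul(delta).add(delta0);     // delta_{n+1}
    Complex deltaPNext = z.add(delta).mul(deltaP).mul(2)
                          .add(new Complex(1, 0));            // delta_{n+1}'
    double lhs = Math.scalb(1.0, -mantissaBits)               // 2^{-p_max}
               * Math.max(2 * z.abs(), delta.abs()) * deltaNext.abs();
    double rhs = twoZplusD.abs() * deltaPNext.abs() * pixelSpacing;
    return lhs > rhs;  // true => precision insufficient, pixel needs another reference
}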

I was hoping to obtain something as simple and effective as Pauldelbrot's formula embarrass. Nevertheless, it's in fact quite close... Why?
The formula above tells us that a glitch may occur if the floating point type used is not accurate enough, if the derivative of \delta_{n+1} is too small, and/or if |2 z_n + \delta_n| is too small w.r.t. |z_n| or |\delta_n|. It also says there will likely be more glitches at higher resolutions.

For a deep enough zoom, adding \delta_0 is irrelevant (as Pauldelbrot noticed) and corresponds to a null operation. In that case the derivative of \delta_{n+1} shouldn't include the +1 and becomes \delta_{n+1}' = 2 (z_n + \delta_n)\,\delta_n'.

OK! This post is becoming too long. I doubt all the above formulas have much practical use, if only because they require a lot of computation. But provided I didn't make too big a mistake, they show that it is possible to quantify the approximation and accuracy errors.
___________________________________________________________________________________
PS:
The relation between the relative variations \Delta x / x and \Delta y / y is:

\Delta y / y = \frac{x\,y'}{y} \cdot \frac{\Delta x}{x}

Here \sigma = x\,y'/y is the condition "number" of y.
« Last Edit: March 23, 2016, 04:05:53 PM by knighty, Reason: Errors »
hapf
Fractal Lover
« Reply #157 on: March 23, 2016, 04:37:23 PM »

I suspect the results would be too conservative and too costly to compute compared to brute force testing on the image itself.
quaz0r
Fractal Molossus
« Reply #158 on: March 23, 2016, 11:07:27 PM »

Quote
I doubt all the above formulas have much practical use, if only because they require a lot of computation.

Somebody needs to go ahead and try it out and see what the cost actually is, and whether it does what it needs to do. What are we talking about in terms of cost? A couple of minutes? Hours, days, weeks? If it is anything even remotely reasonable, and if it can in fact be used to guarantee correctness, it is huge! One could (would) always include an option in their program to toggle between a fast mode and a safe mode that does this stuff.

Not being a math professor like some of you guys, one thing I have wondered about with regard to this series approximation is the practice of checking against "the farthest point / corner point" or whatever, and then using the result for all points. Is this approach guaranteed to produce correct results for all points? It always seemed kinda half-assed / janky to me.
hapf
Fractal Lover
« Reply #159 on: March 24, 2016, 01:46:48 AM »

Quote
Not being a math professor like some of you guys, one thing I have wondered about with regard to this series approximation is the practice of checking against "the farthest point / corner point" or whatever, and then using the result for all points. Is this approach guaranteed to produce correct results for all points? It always seemed kinda half-assed / janky to me.
No. Distance in the image is not relevant. How much you can skip is a function of the number of coefficients and of the extent to which the reference and the other points are "in sync", so good coverage of points in the image is an advantage. On the other hand, it makes no sense to skip less in order to minimize errors for points that are wrong even with zero skipping and need another reference anyway.
stardust4ever
Fractal Bachius
« Reply #160 on: March 24, 2016, 07:43:11 AM »

Quote
Somebody needs to go ahead and try it out and see what the cost actually is, and whether it does what it needs to do. What are we talking about in terms of cost? A couple of minutes? Hours, days, weeks? If it is anything even remotely reasonable, and if it can in fact be used to guarantee correctness, it is huge! One could (would) always include an option in their program to toggle between a fast mode and a safe mode that does this stuff.

Not being a math professor like some of you guys, one thing I have wondered about with regard to this series approximation is the practice of checking against "the farthest point / corner point" or whatever, and then using the result for all points. Is this approach guaranteed to produce correct results for all points? It always seemed kinda half-assed / janky to me.
I have a pretty solid understanding of even complex math at this point. (By changing the rules for what defines "X^N" you change the fractal type; as some of you may already know, between 2012 and 2013 I catalogued numerous known and previously unknown 2nd and 3rd order fractal formulae by doing little more than inserting abs and sign changes in strategic places.) Certain advanced calculus concepts and infinite series still go beyond my understanding, though I totally get how derivatives, and to an extent integrals, work.

Mandelbrot renders using arbitrary precision require a bit depth just beyond the zoom depth in order to calculate properly. My understanding, in layman's terms, is that perturbation relies on the fact that the orbits of nearby pixels diverge rather slowly, with deltas only becoming large as they approach the escape trajectory.

Thus a floating point value of fixed length, far below the arbitrary precision required, can calculate these deltas with reasonable accuracy, except when it doesn't. Holes in the image created by a higher bailout than the reference pixel, or areas with different periodicity, can be filled in by subsequent references.
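
In code, the delta iteration being described is the standard perturbation recurrence \delta_{n+1} = 2 z_n \delta_n + \delta_n^2 + \delta_0 against a precomputed reference orbit; a bare-bones sketch, reusing the Complex helper from knighty's sketch above (names invented):

Code:
// Bare-bones perturbation inner loop: the reference orbit zRef[] was computed
// once in arbitrary precision and rounded to doubles; each pixel only iterates
// its low-precision delta. Illustrative only.
static int iteratePixel(Complex[] zRef, Complex delta0, int maxIter, double bailout2) {
    Complex delta = delta0;
    for (int n = 0; n + 1 < zRef.length && n < maxIter; n++) {
        Complex z = zRef[n].add(delta);            // full orbit = reference + delta
        if (z.re() * z.re() + z.im() * z.im() > bailout2)
            return n;                              // escaped
        // delta_{n+1} = 2 z_n delta_n + delta_n^2 + delta_0
        delta = zRef[n].mul(delta).mul(2).add(delta.mul(delta)).add(delta0);
    }
    return maxIter;                                // did not escape (or reference ran out)
}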

Yet glitch patterns can and do occur, especially at deep zoom depths: typically references contained within iteration bands immediately preceding highly ornate Julia structures (Mandel Machine), or errors within zoom frames resulting from a zoom path that forks away from the centroid (Kalles Fraktaler). The vast majority of the time these "scaled floats" (as implemented by software) work flawlessly, except for the aforementioned edge cases.

Could these "hard" cases simply be solved by throwing higher precision "arbitrary floats" at them? For instance say a 64-bit float value has 56 bits of precision, a sign, and 7 bits of exponent. Can we not simply create 128-bit, 192-bit or even 256-bit floats? I understand that CPUs may not natively work with high precision floats, but niether do they work with arbitrary bignum integers. By creating an "arbitrary float" library, one could set the number of bits available. When the delta is computed, one could eliminate the zeros at the start value, store that figure in a register, then use arbitrary precision to compute what's left, and scale the resulting value accordingly. The precision of the arbitrary scaled "floats" could still be significantly less than the reference pixel, albeit slower and with more computational overhead than using native CPU floats.

I would even postulate that calculating the deltas to full arbitrary precision, sans the preceding zeros, would yield identical results to full arbitrary precision, but calculations would slow down dramatically once the orbit begins to escape. So the question remains: between the extremes of pure fast float and pure slow arbitrary precision, how many bits of "arbitrary float" precision are safe to discard for a given location? Half? Three quarters? 99 percent? tongue stuck out

Another thought: since most fractal users are burning at least four logical CPU cores, and it is difficult to multithread reference calculations, perhaps additional reference pixels could be calculated in tandem, utilizing idle CPU time. The reference with the highest bailout is given precedence. Test pixels would then be rendered using perturbation against the arbitrary-precision references. If the corresponding orbit of a test pixel differs significantly from the arbitrary full-precision reference orbit of the same pixel, a "glitch" pixel is detected by the software, and the precision of the "arbitrary float" library is increased incrementally until the escape orbits agree. This new "enhanced precision" level is then used to calculate the remainder of the image. So only minor computational overhead (utilizing idle CPU cycles during the reference orbit to calculate multiple "extra reference" pixels) is needed to detect the presence of glitches. An optional "failsafe rendering" checkbox could detect and autocorrect glitch formations with only a minor performance penalty. Of course it is possible a glitched area could slip through the cracks, but it is unlikely.
« Last Edit: March 24, 2016, 08:41:53 AM by stardust4ever »
hapf
Fractal Lover
« Reply #161 on: March 24, 2016, 09:48:26 AM »

Could these "hard" cases simply be solved by throwing higher precision "arbitrary floats" at them? For instance say a 64-bit float value has 56 bits of precision, a sign, and 7 bits of exponent. Can we not simply create 128-bit, 192-bit or even 256-bit floats? I understand that CPUs may not natively work with high precision floats, but niether do they work with arbitrary bignum integers. By creating an "arbitrary float" library, one could set the number of bits available.
I already posted that one can compute the deltas with more bits and avoid glitches this way while still using far less bits than with full arbitrary precision calculations. There are libraries for this and it's straightforward.
Quote
I would even postulate that calculating the deltas to full arbitrary precision, sans the preceding zeros, would yield identical results to full arbitrary precision, but calculations would slow down dramatically once the orbit begins to escape. So the question remains: between the extremes of pure fast float and pure slow arbitrary precision, how many bits of "arbitrary float" precision are safe to discard for a given location? Half? Three quarters? 99 percent? tongue stuck out
The perturbation method does "remove the zeros", as you put it, and I don't think you can do better than that: as few bits as possible for the deltas, plus an optimal skip. Concerning references: if one uses minibrots they never run out of iterations, and only glitches are of concern. References cost more than iterating an ordinary point, but by choosing them well one can minimise the number required.
quaz0r
Fractal Molossus
« Reply #162 on: March 24, 2016, 09:51:07 AM »

There are indeed soft floats of these sizes readily available, so you don't even need to implement it yourself; GCC has libquadmath (https://gcc.gnu.org/onlinedocs/libquadmath.pdf), for instance. It is significantly slower than hardware floating point though, so it is not really useful in this particular application. It is overall faster to use hardware FP and correct the glitches than to use a slow soft float just to incur somewhat fewer glitches; at least that is the general case. Also, useful arbitrary-precision ("bignum") implementations use things like SIMD to maximize performance on different architectures, and would likely be more performant than some cheesy soft-float implementation anyhow.

Quote
I would even postulate that calculating the deltas to full arbitrary precision, sans the preceding zeros, would yield identical results to full arbitrary precision
This is already how arbitrary-precision libraries are implemented. It is essentially the same sort of arrangement as the "extended" floats we are using, with an integer exponent, except that you can keep adding to the size of the mantissa (and you get the performance benefit of using SIMD to manipulate these larger mantissas).
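
The "extended" float mentioned here is essentially a hardware double paired with a separate wide integer exponent, so deltas far outside the roughly 1e±308 range of a plain double survive. A toy sketch of the idea (not any particular library's implementation):

Code:
// Toy "extended float": a double mantissa with a separate 64-bit exponent.
// Illustrative only; real implementations normalize far more carefully
// (subnormals, signs, rounding) and vectorize the hot paths.
record FloatExp(double mantissa, long exp) {
    static FloatExp of(double x) { return new FloatExp(x, 0).normalized(); }

    FloatExp normalized() {
        if (mantissa == 0) return new FloatExp(0, 0);
        int e = Math.getExponent(mantissa);                      // binary exponent
        return new FloatExp(Math.scalb(mantissa, -e), exp + e);  // |mantissa| in [1,2)
    }

    FloatExp mul(FloatExp o) {
        return new FloatExp(mantissa * o.mantissa, exp + o.exp).normalized();
    }

    FloatExp add(FloatExp o) {
        if (o.exp > exp) return o.add(this);     // keep the larger exponent here
        long shift = o.exp - exp;                // <= 0
        if (shift < -100) return this;           // other operand is negligible
        return new FloatExp(mantissa + Math.scalb(o.mantissa, (int) shift), exp).normalized();
    }

    double toDouble() {
        // Saturates to +-Inf / 0 when exp leaves the double range.
        return Math.scalb(mantissa, (int) Math.max(-2000, Math.min(2000, exp)));
    }
}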

Quote
perhaps additional reference pixels could be calculated in tandem
I thought about doing this just for the hell of it, but decided it would be at best of minimal value. Something that might be of greater value would be to utilize the remaining resources to calculate a greater number of series coefficients in parallel. In my brief experiments, the cost of parallel synchronization made it a performance win only with a rather large number of coefficients, but it is potentially useful nonetheless.

hapf
Fractal Lover
« Reply #163 on: March 24, 2016, 11:00:09 AM »

Quote
I thought about doing this just for the hell of it, but decided it would be at best of minimal value. Something that might be of greater value would be to utilize the remaining resources to calculate a greater number of series coefficients in parallel. In my brief experiments, the cost of parallel synchronization made it a performance win only with a rather large number of coefficients, but it is potentially useful nonetheless.
I tested this, and on my system it also depends on the number of bits required for the reference orbit (which can be computed in parallel with the coefficients). The worst case needed about 43 coefficients for parallel to be faster; the best, 13.
knighty
Fractal Iambus
« Reply #164 on: March 24, 2016, 05:07:59 PM »

Quote
I suspect the results would be too conservative and too costly to compute compared to brute force testing on the image itself.
Yes, interval arithmetic is usually too conservative. It also depends on the number of coefficients of the series approximation: IA's error term grows exponentially, and the more coefficients there are, the more rapidly it grows. That also means the error remains very small for most iterations and then blows up at some point. According to my experiments (while writing the Mandelbrot polynomial roots finder), the more coefficients there are the better; for that particular case an IA of the form [z] = z + dz\,t + R\,E\,t^2 was sufficient.
The cost is dominated by the overhead of the square-root evaluations: maybe one order of magnitude slower, depending on the number of coefficients. Compared to brute force it obviously depends on the number of samples smiley. Moreover, with brute force one cannot be 100% sure that there won't be significant errors. That said, the IA method would be useful for testing and when in doubt.

Quote
Somebody needs to go ahead and try it out and see what the cost actually is, and whether it does what it needs to do. What are we talking about in terms of cost? A couple of minutes? Hours, days, weeks? If it is anything even remotely reasonable, and if it can in fact be used to guarantee correctness, it is huge! One could (would) always include an option in their program to toggle between a fast mode and a safe mode that does this stuff.

I agree 100% smiley
By "practical use" I meant "practical for the end user". For testing purposes I believe it would be useful.

I'm not a math prof either. I took (a lot of) shortcuts while deriving the formulas. The main point was to show that it is possible to get guaranteed correct results without user intervention or comparison against full-accuracy results. A better analysis is still needed. Maybe a good exercise (subject?) for a numerical analysis course student. angel

I've just tested my glitch detection formula in 3dickulus's SFTC and it seems to work well, though it doesn't have more benefits than Pauldelbrot's method (and is much more costly).