

Author Topic: Trouble understanding series approximation  (Read 1612 times)
quick yellow whale
Forums Freshman

Posts: 18

 « on: December 12, 2016, 04:31:34 AM »

I'm trying to implement series approximation in my fractal program. I was able to get the perturbation method working, but I can't get the series approximation to work. I'm using this paper as a reference: http://superfractalthing.co.nf/sft_maths.pdf.

From my understanding, you first compute the reference orbit $X_n$, and using those values you compute the A, B, and C coefficients. Then for every pixel in the image you compute:

$\Delta _n = A_n\delta + B_n\delta^2 + C_n\delta^3$

where $\delta$ is the difference between the point at the pixel and the reference point. Now at this point I'm not sure how you are supposed to use $\Delta_n$ to compute $Y_n$.

We are looking for the iteration where $Y_n$ escapes, but we don't want to iterate through all of $X_n$ for each pixel, adding $\Delta_n$ to get $Y_n$ and checking whether it escapes, because that wouldn't save any time. One thing I tried was to first find the first $X_n$ that escapes, then start from that iteration and go backwards until I found the iteration where $Y_n$ escapes, but that didn't work. What could I be missing here?
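For reference, the coefficient recurrences implied by the linked paper follow from substituting the series into the perturbation iteration $\Delta_{n+1} = 2X_n\Delta_n + \Delta_n^2 + \delta$ and matching powers of $\delta$. A sketch in Python (the function names are illustrative, not from the paper):

```python
# Sketch of the coefficient recurrences implied by the sft_maths paper,
# assuming the Mandelbrot perturbation iteration
#   Delta_{n+1} = 2*X_n*Delta_n + Delta_n^2 + delta.
# Function names are illustrative, not from the paper.

def step_coeffs(X, A, B, C):
    """Advance A, B, C one iteration alongside the reference value X_n."""
    return (2 * X * A + 1,          # coefficient of delta
            2 * X * B + A * A,      # coefficient of delta^2
            2 * X * C + 2 * A * B)  # coefficient of delta^3

def eval_delta(A, B, C, delta):
    """Delta_n ~ A_n*delta + B_n*delta^2 + C_n*delta^3."""
    return A * delta + B * delta ** 2 + C * delta ** 3

def reference_with_coeffs(c_ref, n_iters):
    """Iterate the reference orbit and the coefficients together."""
    X = 0j
    A, B, C = 0j, 0j, 0j  # Delta_0 = 0, so all coefficients start at 0
    orbit = []
    for _ in range(n_iters):
        A, B, C = step_coeffs(X, A, B, C)  # uses X_n to produce the n+1 coefficients
        X = X * X + c_ref                  # X_{n+1}
        orbit.append(X)
    return orbit, (A, B, C)
```

Dropping the terms of order $\delta^4$ and higher during the substitution is what makes $\Delta_n$ an approximation rather than exact.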
quaz0r
Fractal Molossus

Posts: 652

 « Reply #1 on: December 12, 2016, 05:10:22 AM »

An approach to implementing perturbation with series approximation might go as follows: iterate the reference point from the beginning along with the coefficients (you don't need to save the reference iterations yet). At some point you need to decide to stop iterating the coefficients (a whole other can of worms). The iteration where you stop is where the perturbed iterations will begin (i.e. if you stop iterating the coefficients at iteration 1000, all your perturbed iterations start counting from 1000). Continue iterating the reference point from here, saving the reference iterations for use with the perturbation formula. The $\Delta_n$ (to start the perturbed iterations) for each point is initialized with the series formula $A_n \delta + B_n \delta^2 + C_n \delta^3 + \dots$
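A minimal sketch of that per-pixel flow, assuming the standard Mandelbrot perturbation recurrence $\Delta_{n+1} = 2X_n\Delta_n + \Delta_n^2 + \delta$; the fixed series stop iteration M and all the names are placeholders, not a definitive implementation:

```python
# A sketch of the flow described above: the series skips the first M
# iterations, then ordinary perturbed iterations take over using the saved
# reference orbit.  Assumes Delta_{n+1} = 2*X_n*Delta_n + Delta_n^2 + delta.

def render_point(delta, ref_orbit, A, B, C, M, max_iter, bailout=4.0):
    """Escape iteration for one pixel, or max_iter if it never escapes.

    ref_orbit[k] holds the reference value X_{M+k}; A, B, C are the series
    coefficients at iteration M; delta = c_pixel - c_ref.
    """
    d = A * delta + B * delta ** 2 + C * delta ** 3  # Delta_M from the series
    n = M
    for X in ref_orbit:
        if abs(X + d) ** 2 > bailout:   # escape test on Y_n = X_n + Delta_n
            return n
        d = 2 * X * d + d * d + delta   # perturbed iteration
        n += 1
        if n >= max_iter:
            break
    return max_iter
```

Note that the escape test always uses the full $Y_n = X_n + \Delta_n$, so the reference iterations from M onward must be saved.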
quick yellow whale
Forums Freshman

Posts: 18

 « Reply #2 on: December 12, 2016, 08:17:56 AM »

Quote from: quaz0r
An approach to implementing perturbation with series approximation might go as follows: iterate the reference point from the beginning along with the coefficients (you don't need to save the reference iterations yet). At some point you need to decide to stop iterating the coefficients (a whole other can of worms). The iteration where you stop is where the perturbed iterations will begin (i.e. if you stop iterating the coefficients at iteration 1000, all your perturbed iterations start counting from 1000). Continue iterating the reference point from here, saving the reference iterations for use with the perturbation formula. The $\Delta_n$ (to start the perturbed iterations) for each point is initialized with the series formula $A_n \delta + B_n \delta^2 + C_n \delta^3 + \dots$

Thanks, that worked. Is the main Mandelbrot set supposed to get distorted even if only the first iteration is skipped? I'm seeing some distortion with the reference point at (0, 0) and just want to make sure it's normal.

quaz0r
Fractal Molossus

Posts: 652

 « Reply #3 on: December 12, 2016, 08:59:20 AM »

I guess I haven't actually tried using series approximation / perturbation while zoomed all the way out like that. One thing I did wrestle with initially is making sure everything is synced up properly. It is really easy to mismatch things by one iteration (or whatever) when trying to piece the algorithm together. If you don't fit everything together perfectly, you will end up with deformations like that.
skychurch
Alien

Posts: 22

 « Reply #4 on: December 13, 2016, 02:45:34 AM »

Probably not worth the candle to try and use SA below e50 magnification. The normal perturbation method will not be noticeably slower.
quaz0r
Fractal Molossus

Posts: 652

 « Reply #5 on: December 13, 2016, 02:54:47 AM »

You can't really make hard-and-fast rules like that for any of this stuff; it always depends on multiple factors.

The general approach taken by current implementations is to use hardware floating point to do the standard $z^2+c$ iteration until the zoom exceeds the limits of hardware floating point, and then use SA + perturbation from there.
skychurch
Alien

Posts: 22

 « Reply #6 on: December 13, 2016, 05:08:14 PM »

Okay, I was generalising from personal experience. On my rig, using SA below that magnification doesn't normally seem to bring much of a speed gain. I suppose if you are trying to render a high-iteration location it may help.

 « Last Edit: December 13, 2016, 07:12:19 PM by skychurch, Reason: qualify »
quick yellow whale
Forums Freshman

Posts: 18

 « Reply #7 on: December 17, 2016, 08:54:34 PM »

Does series approximation offer meaningful performance benefits? I'm asking because I've been playing around with Mandel Machine, and it gives you the option of using series approximation or not, and when I turned series approximation off it didn't cause the rendering to be any slower.
lycium
Fractal Supremo

Posts: 1158

 « Reply #8 on: December 17, 2016, 09:19:55 PM »

Quote from: quick yellow whale
Does series approximation offer meaningful performance benefits?

Depends whether or not you include the human time spent eyeballing the results to make sure they don't contain rendering errors.

quaz0r
Fractal Molossus

Posts: 652

 « Reply #9 on: December 18, 2016, 04:04:10 AM »

If there is no difference in speed between using SA and not using SA, then you must not be zooming very deep at all. Once you get deeper, with higher iteration counts, it makes all the difference.
quick yellow whale
Forums Freshman

Posts: 18

 « Reply #10 on: December 18, 2016, 06:37:21 AM »

Quote from: quaz0r
If there is no difference in speed between using SA and not using SA, then you must not be zooming very deep at all. Once you get deeper, with higher iteration counts, it makes all the difference.

At what iteration level does SA start making a difference? Does it matter where the zoom is centred, or how deep it is? For example, could I just set the max iterations to a very large number while completely zoomed out for testing purposes?

I set the iteration limit to 10,000,000 in Mandel Machine and saw no performance difference between SA and non-SA when completely zoomed out.
quaz0r
Fractal Molossus

Posts: 652

 « Reply #11 on: December 18, 2016, 07:11:09 AM »

I haven't used Mandel Machine, but I would assume it does standard Mandelbrot iterations until you zoom in past the limits of hardware floating point, as I mentioned. In any case, SA relates to the minimum iteration in a render, not the max. At shallow depths the minimum iteration is not very high at all for the most part. For instance, zoomed all the way out on the main Mandelbrot set, the minimum iteration is what, 1 or 2? So SA would literally gain you nothing.

Perhaps it would help to explain again what SA actually does: it allows points to be initialized at a certain iteration which (if everything works correctly) will be less than the minimum escape time for the location, but hopefully close to it. So, for instance, say you set maxIter to 100,000 for a location; you render the image, the maximum escape time is close to that, and the minimum escape time is 80,000. If you used SA with enough terms, maybe SA could initialize all points to the value they would have at iteration 79,000, allowing all points to start iterating from there instead of from the beginning. Let's say your image resolution is 1000x1000: that would be 79 billion iterations you would not have to do that you would have to do without SA. As you can see, this definitely adds up, and it is in fact often a far bigger win than perturbation itself.
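The arithmetic in that example, spelled out:

```python
# The arithmetic from the example above: a 1000x1000 image in which the
# series lets every pixel start at iteration 79,000 instead of 0.
width, height = 1000, 1000
skipped = 79_000
saved = width * height * skipped
print(f"{saved:,}")  # 79,000,000,000 per-pixel iterations avoided
```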
quick yellow whale
Forums Freshman

Posts: 18

 « Reply #12 on: December 19, 2016, 12:19:25 AM »

OK, now I understand, thanks. After zooming in to an area where the minimum iteration was around 27,000, I saw that SA gave almost a 5x speed increase.
skychurch
Alien

Posts: 22

 « Reply #13 on: December 19, 2016, 06:04:29 PM »

Then, once you are satisfied all is working correctly (i.e. you can zoom to the limits of the floating-point hardware with respect to the deltas and series terms), you will need to implement a customised number type that uses a wide exponent, as mentioned in other posts here regarding the SA method (unless you've already done it). Once you've got this, as well as zooming deeper, you can also play around with using more terms in the series for greater accuracy.
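A minimal sketch of the wide-exponent idea: pair an ordinary double mantissa with a separate integer exponent, so the representable range is no longer capped at a double's roughly 1e±308. Real implementations handle sign, zero, and performance far more carefully, and the class name here is my own:

```python
# Wide-exponent number sketch: the pair (m, e) represents m * 2**e.
import math

class FloatExp:
    def __init__(self, m, e=0):
        if m == 0.0:
            self.m, self.e = 0.0, 0
        else:
            fm, fe = math.frexp(m)  # m == fm * 2**fe with 0.5 <= |fm| < 1
            self.m, self.e = fm, fe + e  # renormalize the mantissa

    def __mul__(self, other):
        # Multiplying just adds the exponents -- no overflow even at e ~ 1e6.
        return FloatExp(self.m * other.m, self.e + other.e)

    def __add__(self, other):
        # Align exponents first; a vastly smaller term simply vanishes.
        if self.e < other.e:
            self, other = other, self
        shift = other.e - self.e  # always <= 0 here
        if shift < -100:          # other is negligible at double precision
            return FloatExp(self.m, self.e)
        return FloatExp(self.m + math.ldexp(other.m, shift), self.e)

    def to_float(self):
        return math.ldexp(self.m, self.e)  # only safe while |e| is small
```

The delta and coefficient arithmetic can then run on values whose exponents would overflow or underflow a plain double at deep zooms.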
quick yellow whale
Forums Freshman

Posts: 18

 « Reply #14 on: January 07, 2017, 10:31:42 AM »

I'm trying to implement an automatic way of determining the number of iterations to skip using series approximation, but my implementation is slower than not skipping any iterations.

What I tried: for every pixel, I do a binary search over the iterations to find the highest number of iterations I can skip before the ratio of $B_n\delta^2$ to $C_n\delta^3$ falls below some preset value, currently 1,000,000,000,000. Doing this binary search for every pixel probably adds enough overhead to eliminate any speed gain from skipping iterations.
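For concreteness, here is a sketch of that stopping criterion (the tolerance is the value from this post, and the function name is a placeholder, not a standard API); it finds the last iteration at which the $\delta^2$ term still dominates the $\delta^3$ term by the chosen factor:

```python
# A sketch of the criterion described above, assuming coeffs[n] = (A_n, B_n, C_n)
# was recorded for each reference iteration.  Tolerance and name are placeholders.

TOL = 1e12  # skip only while |B*delta^2| is at least TOL times |C*delta^3|

def max_skippable(coeffs, delta, tol=TOL):
    """Last iteration at which |B*delta^2| / |C*delta^3| is still >= tol."""
    best = 0
    for n, (A, B, C) in enumerate(coeffs):
        c_term = abs(C * delta ** 3)
        if c_term == 0.0 or abs(B * delta ** 2) / c_term >= tol:
            best = n
        else:
            break
    return best
```

One way to avoid running this per pixel: since the ratio $|B\delta^2| / |C\delta^3| = |B| / |C\delta|$ is smallest for the largest $|\delta|$ in the image, evaluating the test only at an image corner gives a worst-case skip count valid for every pixel.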

Is there a better way to automatically determine how many iterations can be skipped without distorting the image?
