The location is in the text file attached to this post; it contains the coordinates for the image I linked to.

My program is able to zoom beyond this, and yes, it is very time-consuming due to the huge number of iterations.

I now recall that the problem I had was that the location simply did not get updated, so maybe it is not relevant.

I would like to ask you again: why the coefficients, and why equations (3), (4) and (5)? I am only using equation (1).

I create tables with all the delta values, and when the zoom level is beyond e300, the ap-library has a parameter for stripping zeroes when converting to double.

The variable m_nScalingOffset is an integer and m_nScaling is a double. These variables are actually applied at all zoom levels; however, I have not noticed any performance difference.

m_nScalingOffset = 0;
m_nScaling = 1;
for (i = 300; i < m_nZoom; i++) {
    m_nScalingOffset++;
    m_nScaling = m_nScaling * 10;
}

The value m_nScalingOffset is used when the delta table is assigned (m_db_cdr and m_db_cdi are tables of type double, cr, ci, rref and iref are ap variables):

m_db_cdr[x][y] = (cr - rref).ToDouble(m_nScalingOffset);
m_db_cdi[x][y] = (ci - iref).ToDouble(m_nScalingOffset);

The standard Mandelbrot function is calculated with the ap-library for the reference point, by default the middle point of the image to be rendered. Each value is stored as a double in the arrays m_db_dxr and m_db_dxi. All other variables are ap-library types, except i and nMaxIter.

for (i = 0; i < nMaxIter; i++) {
    m_db_dxr[i] = xr.ToDouble();
    m_db_dxi[i] = xi.ToDouble();
    xin = (xr*xi).Double() + iref;
    xrn = sr - si + rref;
    xr = xrn;
    xi = xin;
    sr = xr.Square();
    si = xi.Square();
}

Now the delta values are in practice multiplied by the m_nScaling value, so when they are used in function (1) the scaling needs to be divided out again. Any delta*delta product needs an additional division by m_nScaling. Here is my implementation of function (1):

Dnr = (m_db_dxr[antal]*m_dbDr[x][y] - m_db_dxi[antal]*m_dbDi[x][y])*2
    + m_dbDr[x][y]*m_dbDr[x][y]/m_nScaling
    - m_dbDi[x][y]*m_dbDi[x][y]/m_nScaling
    + m_db_cdr[x][y];
Dni = (m_db_dxr[antal]*m_dbDi[x][y] + m_db_dxi[antal]*m_dbDr[x][y]
    + m_dbDr[x][y]*m_dbDi[x][y]/m_nScaling)*2
    + m_db_cdi[x][y];
yr = m_db_dxr[antal] + m_dbDr[x][y]/m_nScaling;
yi = m_db_dxi[antal] + m_dbDi[x][y]/m_nScaling;
m_dbDi[x][y] = Dni;
m_dbDr[x][y] = Dnr;

I hope this can also help Pauldelbrot.

What is the "Series approximation" that my app is lacking?

Unfortunately I still haven't got the SFT Java app working on my machine, so I have never tested it. It would be nice if someone could do a performance comparison. I suspect there isn't any big difference, since Java, or any other language, should not add much overhead to simple arithmetic on hardware datatypes.