Hello,
I have no idea if anybody is still following this thread any more. I am Piers Lawrence, and I can add some of our advances since our first poster.
You can find some of the methodology here:
IAP Poster 2013 or here:
IAP Poster 2013 (alternative).
Let me say something about the progression of this work:
- Lagrange interpolation based approach:
We found the challenge a nice example for testing our Lagrange interpolation based rootfinder, since the recurrence definition p_{k+1}(z) = z p_k(z)^2 + 1 (with p_0(z) = 1) allows one to evaluate the polynomial at O(k) cost rather than the O(2^k) cost of a generic polynomial of degree 2^k - 1. This approach worked up to about k=12, the problem being that the roots get too close together, making the computation of some auxiliary values (the barycentric weights) inaccurate.
- Mandelbrot matrix approach:
Since the polynomials have such a sparse recurrence, I started looking to see if we could form a similarly sparse matrix whose characteristic polynomial is exactly the polynomial p_k whose roots we are after. This led to the construction you can see in both posters on the topic. Because the size of the matrices grows at the same rate as the degree, O(2^k), it was clear that we needed some tricks. Essentially, this boils down to using Krylov-based eigenvalue solvers (namely ARPACK) in shift-and-invert mode to find a small number of roots near some shift. It was particularly nice that the matrices have a very simple and sparse LU decomposition, which made it possible to get over 1 million roots. While the structure of the matrices is quite beautiful, this solution is not really a good one: we ran the code on a parallel cluster (it is trivially parallelizable over the chosen shifts), consuming around 31 serial years of computing resources; I think it took a little under a week on the cluster.
- Homotopy continuation approach:
We started to investigate other ways of computing roots of polynomials, and we discovered that a homotopy based approach is the way of the future! The basic idea is to rewrite the polynomial with a new parameter, t say, such that as we vary t from zero to one, the roots trace a path from the (double) roots of p_{k-1} (plus one extra point at zero) to those of p_k.
From this new equation, we can form a differential equation describing the path of the roots (see the poster): implicitly differentiating p(z(t), t) = 0 gives (∂p/∂z) dz/dt + ∂p/∂t = 0, which we integrate from t = 0 to t = 1. The homotopy continuation method is not without other problems (namely singularities in the path), some of which can be alleviated by integrating in the complex t-domain, but what was surprising was the speed: computing all the roots now takes only around 3 seconds on a desktop, something that would take on the order of hours via the Mandelbrot matrices!
The complication is the singularities: these only get more dense as we increase k, since we have to worry about the singularities due to the higher derivatives of p_k.
I attach the code to compute the roots via the homotopy continuation method; it seems clear that one could get over 1 billion roots (k = 30) and perhaps even further (although I guess higher precision will be needed). There is a shell script run_mandel.sh that will compute all the roots up to a chosen k; the rest is written in C, although not well documented (sorry for that), and it will require the installation of the GNU Scientific Library (GSL).
Anyway, I hope you find this interesting, and perhaps it will be useful for computing other fractals, too, who really knows.
Kind regards,
Dr. Piers Lawrence