Title: giant leaps? fast exponentiation for iteration
Post by: claude on September 26, 2013, 01:09:36 AM

EDIT: there's a fatal flaw that I didn't spot earlier: iterating F gives F^n, which can be accelerated; but the (n+1)th coefficients of the power series in d_0 depend on the nth Z value, so in fact there is a whole family of functions, not just one, and F_n(F_{n-1}(...(F_2(F_1(d_0)))...)) is definitely not the same as F_1^N(d_0), so the acceleration of F^n is irrelevant... oops!

Consider the iteration:

Consider the perturbation method:

Consider the polynomial approximation:

So far, so SFT (SuperFractalThing). Now I got inspired by this paper:

"Parabolic Julia Sets are Polynomial Time Computable"
Mark Braverman
http://arxiv.org/abs/math.DS/0505036

The basic idea is to use fast exponentiation instead of step-by-step iteration:

What about calculating optimized versions of

Non-powers-of-two could be handled like this:
http://hackage.haskell.org/packages/archive/base/latest/doc/html/src/GHC-Real.html#^

Going back to equation (2) with

So, what about calculating

If it can be determined that the approximation with 3 terms is good for, say, 1337 iterations, you'd only need to calculate

As far as I can tell, this reduces the workload from O(N) to O(log N), keeping K fixed (I think it's O(K^2) keeping N fixed). Anyone fancy implementing this (or has anyone already implemented it?) and seeing if the constant factors outweigh the potential gain?

(1) derivation of perturbation method
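To fix notation for what follows: the setup I mean is the usual Mandelbrot perturbation / SFT one, written here in my own notation (Z_n is the high-precision reference orbit, d_0 the pixel offset, d_n the perturbation, a_n/b_n/c_n the series coefficients; the numbering of my equations above may not line up with this):

\begin{align*}
  Z_{n+1} &= Z_n^2 + C \\
  d_{n+1} &= 2 Z_n d_n + d_n^2 + d_0 \\
  d_n &\approx a_n d_0 + b_n d_0^2 + c_n d_0^3, \quad\text{with}\\
  a_{n+1} &= 2 Z_n a_n + 1, \qquad
  b_{n+1} = 2 Z_n b_n + a_n^2, \qquad
  c_{n+1} = 2 Z_n c_n + 2 a_n b_n .
\end{align*}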
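And here's a rough Haskell sketch (untested, all names are mine) of what I was imagining before spotting the flaw in the EDIT: pretend one step is a single fixed cubic map F in d_0, compose cubics while truncating back to degree 3, and build F^n by the same square-and-multiply trick as (^) in GHC.Real.

{-# LANGUAGE BangPatterns #-}
-- Sketch only: assumes one fixed map F, which (per the EDIT) the real
-- perturbation iteration does not give us, since the coefficients depend on Z_n.

import Data.Complex (Complex ((:+)))

type C = Complex Double

-- F(d) = a*d + b*d^2 + c*d^3, a power series in d_0 truncated to 3 terms
data Approx = Approx { coefA, coefB, coefC :: !C }
  deriving Show

-- identity map d |-> d
identityA :: Approx
identityA = Approx 1 0 0

-- composition f . g, truncated back to degree 3
composeA :: Approx -> Approx -> Approx
composeA (Approx a2 b2 c2) (Approx a1 b1 c1) = Approx
  (a2 * a1)
  (a2 * b1 + b2 * a1 * a1)
  (a2 * c1 + b2 * 2 * a1 * b1 + c2 * a1 * a1 * a1)

-- F^n under composition by repeated squaring (cf. the GHC.Real link above)
powA :: Approx -> Int -> Approx
powA f n
  | n < 0     = error "powA: negative exponent"
  | n == 0    = identityA
  | otherwise = go f n identityA
  where
    go !g k !acc
      | k == 1    = composeA g acc
      | even k    = go (composeA g g) (k `div` 2) acc
      | otherwise = go (composeA g g) (k `div` 2) (composeA g acc)

main :: IO ()
main = do
  let f = Approx (0.5 :+ 0.25) (0.125 :+ 0) (0 :+ 0)  -- made-up example coefficients
  print (powA f 1337)                                 -- ~16 compositions, not 1337 steps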
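With K = 3 each composition above is just a handful of complex multiplications, and 1337 = 1024 + 256 + 32 + 16 + 8 + 1, so powA f 1337 does around ten squarings plus a few extra compositions instead of 1337 separate steps; that's the O(log N) versus O(N) claim, but it assumes a single fixed F, which (as per the EDIT) the real perturbation iteration doesn't provide.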