I wrote a tiny program (source at http://wehner.org/tools/fractals/numbers/mandel.asm) which works out the first few computations of Z <- Z^2 + C. First you get Z^2. Then Z^4. Then Z^8, then Z^16, and so on. Look at the numbers it produces (the full listing is at the link given at the end of this section).
That is for just eleven iterations. The eleventh number has 617 decimal digits and begins 32317... - call it "32317...thing". To work out what has happened to X and iY, you just do the four multiplications that make up a "complex multiplication" (i.e. a+ib times c+id gives the four products ac, ad, bc and bd) - but you do them 32317...thing times. That gives you the first term. Then you do it 32317...thing minus 2 times, and multiply by 32317...thing and by the constant, to get the second term. Just keep going until you have resolved the entire polynomial.
Yes, it is Z^2 + C, then Z^4 + 2Z^2C + C^2 + C, and on, and on, until there are (roughly) 32317...thing/2 terms.
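In Python rather than assembler, a sketch of that expansion might look like the following. The dictionary-of-exponents representation and the helper names are illustrative choices, not anything taken from mandel.asm; the point is only to watch the polynomial in Z and C grow as Z <- Z^2 + C is applied:

    from collections import defaultdict

    def poly_mul(p, q):
        # Multiply two polynomials held as {(z_power, c_power): coefficient}.
        out = defaultdict(int)
        for (za, ca), x in p.items():
            for (zb, cb), y in q.items():
                out[(za + zb, ca + cb)] += x * y
        return dict(out)

    def poly_add(p, q):
        out = defaultdict(int, p)
        for k, v in q.items():
            out[k] += v
        return dict(out)

    def show(p):
        # Render e.g. {(4,0):1, (2,1):2, (0,2):1, (0,1):1} as "Z^4 + 2Z^2C + C^2 + C".
        def term(kv):
            (ze, ce), coef = kv
            s = "" if coef == 1 else str(coef)
            if ze:
                s += "Z" if ze == 1 else f"Z^{ze}"
            if ce:
                s += "C" if ce == 1 else f"C^{ce}"
            return s or str(coef)
        ordered = sorted(p.items(), key=lambda kv: (-kv[0][0], -kv[0][1]))
        return " + ".join(term(kv) for kv in ordered)

    C = {(0, 1): 1}        # the constant C
    z = {(1, 0): 1}        # start from Z itself

    for i in range(1, 4):  # a few rounds of Z <- Z^2 + C
        z = poly_add(poly_mul(z, z), C)
        print(f"iteration {i}: {len(z)} terms: {show(z)}")

Iteration 1 prints Z^2 + C, iteration 2 prints Z^4 + 2Z^2C + C^2 + C, and the term count keeps growing from there.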
That resolves one dot after eleven iterations. But what causes the pattern? You do the same job for all the neighbouring dots - to see how they compare in their behaviour.
Suppose you have 256 colours, and therefore 256 iterations. Following the sequence above, which doubles in length (approximately) at every step, you will end up with a number so enormous that the universe is too small to contain it.
And that is the size of the job that must be done to understand one dot.
Such is chaos. Organised chaos. Perfect mathematical determinism, but impossible for a human to predict.
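One way to get a feel for that size, assuming (as the description above suggests) that the count of real multiplications squares at every step, starting from the four in a single complex multiply - an assumed model, not the assembler program itself:

    import math

    count = 4                          # ac, ad, bc, bd: one complex multiply
    for step in range(1, 12):
        digits = len(str(count))
        print(f"number {step:2d}: {digits:3d} digits, begins {str(count)[:5]}")
        count = count * count          # the job squares at every iteration

    # The eleventh number printed has 617 digits and begins 32317.
    # Carrying the same squaring on towards 256 iterations, even the
    # digit count of the result is astronomical:
    digits_at_256 = (2 ** 256) * math.log10(2)
    print(f"around iteration 256: about {digits_at_256:.1e} digits")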
The output of that assembler program is at http://wehner.org/tools/fractals/numbers/scale.txt
I also researched compression (http://www.wehner.org/compress), and came up with the most fundamental string-matching algorithm. Claude Elwood Shannon had invented the word "bit". He put a binary number down as, for example, 10000000001, and counted the unchanging bits - there are nine here, between the 1s. This he defined as "entropy". He predicted that data would compress as the logarithm base 2 of its length. There is the famous expression Pi * Log_2(Pi). No - that Pi is not the circle's pi. It is "blues brother" Claude Elwood having fun.
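Written out in full, the expression is minus the sum of Pi * Log_2(Pi) taken over the symbol probabilities Pi. As a quick worked example on that same string, 10000000001, using Shannon's standard formula rather than the count of unchanging bits:

    from collections import Counter
    from math import log2

    def entropy_per_symbol(s):
        # Shannon's expression: -sum(p_i * log2(p_i)) over symbol frequencies.
        counts = Counter(s)
        n = len(s)
        return -sum((c / n) * log2(c / n) for c in counts.values())

    bits = "10000000001"                   # two 1s, nine 0s
    h = entropy_per_symbol(bits)
    print(f"{h:.3f} bits per symbol")      # about 0.684
    print(f"{h * len(bits):.2f} bits for the whole string")  # about 7.52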
When I produced programs that did indeed compress according to a logarithmic law, I was surprised to find that when data had poor entropy, but repeated en bloc, it compressed to Log_2 of its length; but when it had total entropy - like 1 1 1 1 1 &c. - it compressed to Log_1.618 of its size, as I had expected.
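One way to probe such a law is to compress that kind of "1 1 1 1 1 &c." data at several lengths and compare the sizes. Here zlib stands in for the string-matching programs described above, which is an assumption, so the exact law that emerges may well differ:

    import zlib

    for n in (10, 100, 1_000, 10_000, 100_000):
        block = b"1 " * n                    # totally repetitive data
        packed = zlib.compress(block, 9)
        print(f"{len(block):>7} raw bytes -> {len(packed):>5} compressed bytes")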
Of course, there is another kind of compression nobody ever thinks of. Suppose you write a fractal-generating program in a few bytes or kilobytes. Is that not compression? Running the program generates the image. That is a kind of "unpacking" of the compressed knowledge into a multi-pixel image. How much "compression" do you want? Increase the height and width parameters in the program, and you have more "compression".
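As a toy version of that point, here is a complete Mandelbrot renderer in a dozen or so lines of Python (a stand-in for the assembler original): raise WIDTH and HEIGHT and the same few hundred bytes of source "unpack" into as many pixels as you like.

    WIDTH, HEIGHT, MAX_ITER = 78, 30, 40    # raise these for a bigger image

    for row in range(HEIGHT):
        line = ""
        for col in range(WIDTH):
            c = complex(-2.0 + 2.8 * col / WIDTH, -1.2 + 2.4 * row / HEIGHT)
            z = 0j
            n = 0
            while abs(z) <= 2 and n < MAX_ITER:
                z = z * z + c               # the same Z <- Z^2 + C as above
                n += 1
            line += " .:-=+*#%@"[min(n * 10 // MAX_ITER, 9)]
        print(line)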