I wrote a tiny program (source at
http://wehner.org/tools/fractals/numbers/mandel.asm ) which works out the first few computations of
Z ← Z² + C. Once |Z| is large, each step essentially squares it: first you get
Z². Then
Z⁴. Then
Z⁸, Z¹⁶ and so on. Starting from 2, look at this:
2
4
16
256
65536
4294967296
18446744073709551616
340282366920938463463374607431768211456
115792089237316195423570985008687907853269984665640564039457584007913129639936
134078079299425970995740249982058461274793658205923933777235614437217640300735
46976801874298166903427690031858186486050853753882811946569946433649006084096
179769313486231590772930519078902473361797697894230657273430081157732675805500
963132708477322407536021120113879871393357658789768814416622492847430639474124
377767893424865485276302219601246094119453082952085005768838150682342462881473
913110540827237163350510684586298239947245938479716304835356329624224137216
323170060713110073007148766886699519604441026697154840321303454275246551388678
908931972014115229134636887179609218980194941195591504909210950881523864482831
206308773673009960917501977503896521067960576383840675682767922186426197561618
380943384761704705816458520363050428875758915410658086075523991239303855219143
333896683424206849747865645694948561760353263220580778056593310261927084603141
502585928641771167259436037184618573575983511523016459044036976132332872312271
256847108202097251571017269313234696785425806566979350459972683529986382155251
66389437335543602135433229604645318478604952148193555853611059596230656
That is for just eleven iterations. The eleventh number has 617 decimal digits. To work out what has happened to X and iY, you just do the four multiplications that make up a "complex multiplication" (ie a+ib times c+id gives the four products ac, ad, bc and bd, which combine as (ac - bd) + i(ad + bc)) - but 32317thing (that 617-digit number beginning 32317) times. That gives you the first term. Then you do it 32317thing - 2 times, and multiply by 32317thing and by the constant to get the second term. Just keep going until you have resolved the entire polynomial.
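The four-product recipe can be sketched in a few lines of Python (an illustration only; the function names are mine, not the author's assembly routine):

```python
def complex_mul(a, b, c, d):
    """(a + ib) * (c + id): four real products ac, ad, bc, bd,
    combined as (ac - bd) + i(ad + bc)."""
    return (a * c - b * d, a * d + b * c)

def mandel_step(x, y, cx, cy):
    """One Z <- Z^2 + C step on the real pair (X, iY)."""
    zx, zy = complex_mul(x, y, x, y)
    return zx + cx, zy + cy
```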
Yes, it is
Z² + C, then
Z⁴ + 2Z²C + C² + C, and on, and on until there are (roughly) 32317thing/2 terms.
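The growth of that polynomial can be sketched symbolically (a Python illustration of the expansion, not the author's method); each dictionary key (i, j) stands for the term Zⁱ·Cʲ:

```python
def square_plus_c(p):
    """One Z <- Z^2 + C step on a polynomial.

    p maps (i, j) -> coefficient of the term Z^i * C^j."""
    sq = {}
    for (i1, j1), a in p.items():
        for (i2, j2), b in p.items():
            key = (i1 + i2, j1 + j2)
            sq[key] = sq.get(key, 0) + a * b
    sq[(0, 1)] = sq.get((0, 1), 0) + 1   # the "+ C"
    return sq

p = {(1, 0): 1}                          # start with plain Z
for n in range(1, 5):
    p = square_plus_c(p)
    print(f"iteration {n}: {len(p)} terms, degree {max(i for i, _ in p)}")
```

After two iterations this prints exactly the four terms Z⁴ + 2Z²C + C² + C; the degree doubles every step, and the term count balloons with it.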
That resolves one dot after eleven iterations. But what causes the pattern? You do the same job for all the neighbouring dots - to see how they compare in their behaviour.
Suppose you have 256 colours, and therefore 256 iterations. Following the sequence above, which doubles in length (approximately) at every step, you will end up with a number so enormous that the universe is too small to contain it.
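A back-of-envelope estimate of that enormity (my own, assuming the pure-squaring sequence above): after 256 squarings the value is 2^(2^256), so even its digit count is astronomical.

```python
from math import log10

# after n squarings starting from 2 the value is 2 ** (2 ** n),
# so its decimal digit count is roughly (2 ** n) * log10(2)
digits_after_256 = (2 ** 256) * log10(2)
print(f"{digits_after_256:.3e} digits")
```

That is on the order of 10⁷⁶ digits - not the number itself, just the count of its digits.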
And that is the size of the job that must be done to understand one dot.
Such is chaos. Organised chaos. Perfect mathematical determinism, but impossible for a human to predict.
The output of that program, as shown above, is at
http://wehner.org/tools/fractals/numbers/scale.txt .
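That table can be reproduced in a few lines, assuming each step simply squares the previous value (the "+ C" being invisible at these magnitudes) - a Python sketch, not the author's mandel.asm:

```python
# successive squarings starting from 2; at these magnitudes the
# "+ C" of Z <- Z^2 + C changes nothing visible, so it is omitted
z = 2
values = [z]
for _ in range(11):          # eleven iterations
    z = z * z
    values.append(z)
for v in values:
    print(v)
print("digits in the eleventh iterate:", len(str(values[-1])))
```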
I also researched compression (
http://www.wehner.org/compress ), and came up with the most fundamental string-matching algorithm. Claude Elwood Shannon had popularised the word "bit" - a contraction of "binary digit". He put a binary number down as, for example 10000000001, and counted the unchanging bits - there are nine here, between the 1s. This he defined as "entropy". He predicted that data would compress as the logarithm base 2 of its length. There is the famous expression
−Σ pᵢ·Log₂(pᵢ). No - that is not Pi. The pᵢ is the probability of the i-th symbol; the resemblance is "blues brother" Claude Elwood having fun.
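That expression fits in a few lines of Python (my illustration; the function name is my own):

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(s):
    """Shannon's H = -sum(p_i * log2(p_i)) over symbol frequencies."""
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in Counter(s).values())

print(entropy_bits_per_symbol("10000000001"))   # mostly-zero string, low H
```

A string of all-identical symbols scores 0 bits per symbol; a fifty-fifty mix of 0s and 1s scores the full 1 bit.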
When I produced programs that did indeed compress according to a logarithmic law, I was surprised to find that when data had poor entropy, but repeated en bloc, it compressed to Log₂ of its length, but when it had total entropy - like 1 1 1 1 1 &c. - it compressed to Log₁.₆₁₈ of its size (1.618… being the golden ratio) - as I had expected.
Of course, there is another kind of compression nobody ever thinks of. Suppose you write a fractal-generating program in a few bytes or kilobytes. Is that not compression? Running the program generates the image. That is a kind of "unpacking" of the compressed knowledge into a multi-pixel image. How much "compression" do you want? Increase the height and width parameters in the program, and you have more "compression".
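As an illustration of that idea, here is a hypothetical few-line Python "decompressor" that unpacks into a Mandelbrot image of any requested size - the WIDTH and HEIGHT constants are the knobs the text mentions:

```python
# a tiny fractal generator as "compressed image": a few hundred bytes
# of code unpack into WIDTH * HEIGHT pixels of Mandelbrot set
WIDTH, HEIGHT, MAX_ITER = 60, 24, 40

rows = []
for row in range(HEIGHT):
    cy = -1.2 + 2.4 * row / (HEIGHT - 1)
    line = ""
    for col in range(WIDTH):
        cx = -2.0 + 2.8 * col / (WIDTH - 1)
        zx = zy = 0.0
        inside = True
        for _ in range(MAX_ITER):
            zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
            if zx * zx + zy * zy > 4.0:   # escaped: not in the set
                inside = False
                break
        line += "#" if inside else " "
    rows.append(line)
print("\n".join(rows))
```

Double WIDTH and HEIGHT and the same few bytes of source "decompress" into four times as many pixels.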
Charles