
Author Topic: Generic Algebra (2D) Mandelbrots.  (Read 864 times)
kram1032
Fractal Senior

Posts: 1550

 « on: August 16, 2010, 05:06:10 PM »

If you are not familiar with matrix notation, don't worry: everything gets expanded out over the course of the thread.

Previously, I evilly hijacked a thread  with my idea of doing Matrix-based Mandelbrot sets.

Over the last two days, I finally started playing around with different matrix configurations to find the ones that reproduce the behaviour of the complex, split-complex and dual numbers. Later I looked it up on Wikipedia and found that I was correct, which is already somewhat satisfying.

First I'll show you the three kinds of matrices that do this; then I'll state the obvious and show a way to extend that to a generic algebra.

So here comes the complex matrix which behaves just like complex numbers: $\begin{bmatrix}Re&-Im\\Im&Re\end{bmatrix}$
Here is the Split complex one: $\begin{bmatrix}Re&Im\\Im&Re\end{bmatrix}$
And there you have the dual one: $\begin{bmatrix}Re&0\\Im&Re\end{bmatrix}$

Note that in the upper-right corner, the 1,2-position, you find the imaginary part multiplied by the square of the imaginary unit in each case.
So you could also write these as:
Complex: $\begin{bmatrix}Re&i^2Im\\Im&Re\end{bmatrix}$
Split complex: $\begin{bmatrix}Re&j^2Im\\Im&Re\end{bmatrix}$
Dual: $\begin{bmatrix}Re&\epsilon^2Im\\Im&Re\end{bmatrix}$

So the obvious way to extend this is to do
$\begin{bmatrix}Re&n^2Im\\Im&Re\end{bmatrix}$ - where n² is whatever value your additional dimension's unit should square to.

Adding two matrices is done component-wise, so that's very straightforward.
To multiply them, you take the dot product of each row of the first matrix with each column of the second and place the result in the corresponding position. So in the general 2x2 case, that means:

$\begin{bmatrix}a_1&b_1\\c_1&d_1\end{bmatrix}\begin{bmatrix}a_2&b_2\\c_2&d_2\end{bmatrix}=\begin{bmatrix}a_1a_2+b_1c_2&a_1b_2+b_1d_2\\a_2c_1+c_2d_1&b_2c_1+d_1d_2\end{bmatrix}$

For our general algebra matrix, that means:

$\begin{bmatrix}{Re}_1&n^2{Im}_1\\{Im}_1&{Re}_1\end{bmatrix}\begin{bmatrix}{Re}_2&n^2{Im}_2\\{Im}_2&{Re}_2\end{bmatrix}=\begin{bmatrix}{Re}_1{Re}_2+n^2({Im}_1{Im}_2)&n^2({Re}_1{Im}_2+{Re}_2{Im}_1)\\{Re}_1{Im}_2+{Re}_2{Im}_1&{Re}_1{Re}_2+n^2({Im}_1{Im}_2)\end{bmatrix}$

Note how, for n = i, j or ε, the real part of the multiplication comes out as expected.
Also note how the n²Im pattern is still intact (the n² can be factored out), and how the Im in the lower-left corner is also still intact.
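As a quick sanity check (the helper names here are mine, not from the post), you can verify numerically that the product of two matrices of this shape has that same shape again:

```python
# Check that the product of two matrices of the form [[re, n2*im], [im, re]]
# has that same form again: equal diagonal, top-right = n2 * bottom-left.

def make(re, im, n2):
    """Build the 2x2 representation [[re, n2*im], [im, re]] as nested lists."""
    return [[re, n2 * im], [im, re]]

def matmul(a, b):
    """Plain 2x2 row-by-column matrix product."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

n2 = -1.0                      # complex numbers; try 1.0 or 0.0 as well
p = matmul(make(1.0, 2.0, n2), make(3.0, -1.0, n2))

# The form is preserved; lower-left is the imaginary part, upper-left the real part.
assert p[0][0] == p[1][1]
assert p[0][1] == n2 * p[1][0]
print(p)   # for n2 = -1 this is (1+2i)(3-i) = 5+5i
```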

That means any Mandelbrot set based on an algebra defined this way has the form:
$x \to x^2+n^2y^2+c_x$
$y \to 2xy+c_y$

Here c is the per-dimension constant: it varies across the image for the Mset and is genuinely constant for the Jset.
So you just take the upper-left and lower-left entries of the result and use those to define your iteration.
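The iteration above can be sketched as a minimal escape-time loop (function name, bailout and iteration count are my own choices, not from the post):

```python
# Escape-time sketch of the generic 2D Mandelbrot iteration:
#   x -> x^2 + n2*y^2 + cx,   y -> 2*x*y + cy
# where n2 is what the imaginary unit squares to.

def escapes(cx, cy, n2, max_iter=100, bailout=4.0):
    """Return the iteration count at escape, or None if (cx, cy) stays bounded."""
    x, y = 0.0, 0.0
    for i in range(max_iter):
        x, y = x*x + n2*y*y + cx, 2.0*x*y + cy
        if x*x + y*y > bailout:
            return i
    return None

# n2 = -1 reproduces the ordinary complex Mandelbrot set:
print(escapes(0.0, 0.0, -1.0))   # None: c = 0 is inside the set
print(escapes(2.0, 0.0, -1.0))   # 1: c = 2 escapes immediately
```

Setting n2 to 1 or 0 gives the split-complex and dual-number variants with the exact same loop.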

One of the nice properties of this class of matrices is that they all have a general inverse (whenever the determinant $Re^2-n^2Im^2$ is nonzero):
$\begin{bmatrix}Re&n^2Im\\Im&Re\end{bmatrix}^{-1}={{1}\over{{Re}^2-{{n^2}{Im}^2}}}\begin{bmatrix}Re&-n^2Im\\-Im&Re\end{bmatrix}$
If you just look at the left column of the matrix and compare with the inverse of a complex number, this is easy to verify.
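Here is a small numeric check of that closed form (my own code, with an example unit squaring to 2): multiplying the matrix by its claimed inverse should give the identity.

```python
# Closed-form inverse of [[re, n2*im], [im, re]]:
#   1/(re^2 - n2*im^2) * [[re, -n2*im], [-im, re]]
# It only exists when the determinant re^2 - n2*im^2 is nonzero.

def inverse(re, im, n2):
    det = re*re - n2*im*im
    if det == 0:
        raise ZeroDivisionError("matrix of this form is not invertible")
    return [[re/det, -n2*im/det], [-im/det, re/det]]

n2 = 2.0                       # a unit squaring to 2, just as an example
re, im = 3.0, 1.0
inv = inverse(re, im, n2)

# Multiply back: should give (approximately) the identity matrix.
m = [[re, n2*im], [im, re]]
prod = [[sum(m[r][k]*inv[k][c] for k in range(2)) for c in range(2)]
        for r in range(2)]
print(prod)   # approximately [[1, 0], [0, 1]]
```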

To be continued with even more general, but maybe (maybe, as not yet tried) less interesting extensions.
« Last Edit: August 16, 2010, 05:51:17 PM by kram1032 »
 « Reply #1 on: August 17, 2010, 07:21:52 PM »

Ok, the next thing I tried is exponentiation.
How would you define e^[matrix]? Simple:
Matrix powers are defined and so are matrix additions. So what could be more natural than using the Taylor series?

Apparently, Wolfram Alpha can't really deal with the sum of the powers of a generic 2x2 matrix, but it works perfectly fine for a matrix of the form I showed above.

$\exp\left(\begin{bmatrix}Re&n^2Im\\Im&Re\end{bmatrix}\right)=\begin{bmatrix}e^{Re}\cosh(n\,Im)&e^{Re}\,n\sinh(n\,Im)\\{e^{Re}\sinh(n\,Im)\over n}&e^{Re}\cosh(n\,Im)\end{bmatrix}$

Note how the diagonal is still the same in both entries and how the off-diagonal entries still differ by a factor of n². So it's still the same scheme: the real value is the top-left entry and the imaginary value is the bottom-left entry.

However, previously I defined n² as the value that your additional dimension's unit squares to.
The n in this formula now should be the "usual" sqrt(n²) and not your imaginary unit. I hope. If you want to experiment with actual imaginary units in that place, you probably need to define sinh and cosh for this, and/or replace n by yet another matrix, upgrading this rather simple 2x2 matrix to a 4x4 one.

So, for a generic 2D vector algebra with the imaginary unit squaring to n², your exponentiation is defined by

Real part: $e^{Re}\cosh(n\,Im)$
Imaginary part: ${e^{Re}\sinh(n\,Im)\over n}$

With that, we have a definition of the unit circle, which isn't necessarily an actual circle.
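The two component formulas can be coded up directly (the function name is mine). Using `cmath` means n² < 0 also works: n comes out imaginary, but cosh(n·Im) and sinh(n·Im)/n are still real. The n → 0 dual-number limit of sinh(n·Im)/n is Im, handled as a special case:

```python
import cmath

def generic_exp(re, im, n2):
    """Real and imaginary part of exp() in the algebra whose unit squares to n2."""
    if n2 == 0:
        # dual numbers: cosh(0) = 1, sinh(n*im)/n -> im as n -> 0
        return (cmath.exp(re).real, (cmath.exp(re) * im).real)
    n = cmath.sqrt(n2)
    return ((cmath.exp(re) * cmath.cosh(n * im)).real,
            (cmath.exp(re) * cmath.sinh(n * im) / n).real)

# n2 = -1 should agree with the ordinary complex exponential:
z = cmath.exp(0.5 + 1.2j)
print(generic_exp(0.5, 1.2, -1.0))   # ~ (z.real, z.imag)
```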

If I can fit the logarithm into this picture, we actually have matrix powers of matrices (via x^y = exp(y·ln(x))). The main problem is that ln has a lot of different power series, each of which only converges on part of the complex plane. So depending on the choice I make, there might be different versions of ln...
« Last Edit: August 17, 2010, 09:40:03 PM by kram1032 »
 « Reply #2 on: August 17, 2010, 08:09:35 PM »

Ok, here are the generic sine and cosine, as defined by their Taylor series. Generic tan and ln will hopefully follow soon.

$\sin\left(\begin{bmatrix}Re&n^2Im\\Im&Re\end{bmatrix}\right)=\begin{bmatrix}\sin(Re)\cos(n\,Im)&n\cos(Re)\sin(n\,Im)\\{\cos(Re)\sin(n\,Im)\over n}&\sin(Re)\cos(n\,Im)\end{bmatrix}$

$\cos\left(\begin{bmatrix}Re&n^2Im\\Im&Re\end{bmatrix}\right)=\begin{bmatrix}\cos(Re)\cos(n\,Im)&-n\sin(Re)\sin(n\,Im)\\-{\sin(Re)\sin(n\,Im)\over n}&\cos(Re)\cos(n\,Im)\end{bmatrix}$
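In component form the generic sine looks like this (my helper names; `cmath` again covers n² < 0, where n itself is imaginary):

```python
import cmath

def generic_sin(re, im, n2):
    """Real and imaginary part of sin() in the algebra whose unit squares to n2."""
    if n2 == 0:                       # dual-number limit: sin(n*im)/n -> im
        return (cmath.sin(re).real, (cmath.cos(re) * im).real)
    n = cmath.sqrt(n2)
    return ((cmath.sin(re) * cmath.cos(n * im)).real,
            (cmath.cos(re) * cmath.sin(n * im) / n).real)

# For n2 = -1 this should match the ordinary complex sine:
w = cmath.sin(0.7 + 0.3j)
print(generic_sin(0.7, 0.3, -1.0))   # ~ (w.real, w.imag)
```

The generic cosine works the same way, with the signs from the matrix above.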
 « Reply #3 on: August 17, 2010, 09:31:02 PM »

Ok, ln is a bit more complicated. If |x-1|<1, you get the first definition. If |x-1|>1, you get the second definition, which builds on the first.

So, here you go:
The first definition:
$\ln\left(\begin{bmatrix}Re&n^2Im\\Im&Re\end{bmatrix}\right)=\begin{bmatrix}{1\over2}\left(\ln(Re-n\,Im)+\ln(Re+n\,Im)\right)&{n\over2}\left(\ln(Re+n\,Im)-\ln(Re-n\,Im)\right)\\{\ln(Re+n\,Im)-\ln(Re-n\,Im)\over2n}&{1\over2}\left(\ln(Re-n\,Im)+\ln(Re+n\,Im)\right)\end{bmatrix}$, if $|Re-1+n\,Im|<1$

And the second one:
$\ln\left(\begin{bmatrix}Re&n^2Im\\Im&Re\end{bmatrix}\right)=\ln\left(\begin{bmatrix}Re-1&n^2Im\\Im&Re-1\end{bmatrix}\right)+\begin{bmatrix}{1\over2}\left(\ln\left(1+{1\over Re-1-n\,Im}\right)+\ln\left(1+{1\over Re-1+n\,Im}\right)\right)&{n\over2}\left(\ln\left(1+{1\over Re-1+n\,Im}\right)-\ln\left(1+{1\over Re-1-n\,Im}\right)\right)\\{\ln\left(1+{1\over Re-1+n\,Im}\right)-\ln\left(1+{1\over Re-1-n\,Im}\right)\over2n}&{1\over2}\left(\ln\left(1+{1\over Re-1-n\,Im}\right)+\ln\left(1+{1\over Re-1+n\,Im}\right)\right)\end{bmatrix}$, if $|Re-1+n\,Im|>1$
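The first definition has a tidy component form: with λ± = Re ± n·Im, the real part is (ln λ− + ln λ+)/2 and the imaginary part is (ln λ+ − ln λ−)/(2n). A sketch of that (my transcription; n must be nonzero here, and `cmath` handles n² < 0):

```python
import cmath

def generic_ln(re, im, n2):
    """Real and imaginary part of ln() in the algebra whose unit squares to n2.

    Requires n2 != 0; branch-cut caveats from the post apply.
    """
    n = cmath.sqrt(n2)
    lo, hi = cmath.log(re - n * im), cmath.log(re + n * im)
    return (((lo + hi) / 2).real, ((hi - lo) / (2 * n)).real)

# For n2 = -1 this should match the ordinary complex logarithm:
z = cmath.log(1 + 1j)
print(generic_ln(1.0, 1.0, -1.0))   # ~ (z.real, z.imag) = (ln(sqrt(2)), pi/4)
```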
« Last Edit: August 17, 2010, 09:38:12 PM by kram1032 »
 « Reply #4 on: August 17, 2010, 10:07:04 PM »

Some experiments you could do involve making the unit dependent on the real or imaginary part, or on both...

This works by replacing:

n²Im -> +/- Im³ - the imaginary unit is directly dependent on the current imaginary value

n²Im -> +/- Re²Im - the imaginary unit is directly dependent on the current real value

And now the craziest part:

n²Im -> (Re²+n²Im²)Im - which is equivalent to n² -> Re²+n²Im²
Suddenly the imaginary unit itself becomes an iterated function.
I wonder if that converges in the limit... if it does, the result might be very interesting. It would be a fractional imaginary unit.

The first two are easy to solve and can be written in component form as:

dependent on the imaginary part:
$x \to x^2 \pm y^4 + c_x$
$y \to 2xy + c_y$

dependent on the real part:
$x \to (1 \pm y^2)x^2 + c_x$
$y \to 2xy + c_y$
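Both variants drop straight into the same escape-time loop as before (a sketch; function names and the `sign` parameter for the +/- choice are mine):

```python
# The two variant iterations as escape-time loops.

def escapes_im_dep(cx, cy, sign=1.0, max_iter=100, bailout=4.0):
    """n^2*Im -> sign*Im^3, i.e. x -> x^2 + sign*y^4 + cx, y -> 2xy + cy."""
    x, y = 0.0, 0.0
    for i in range(max_iter):
        x, y = x*x + sign*y**4 + cx, 2.0*x*y + cy
        if x*x + y*y > bailout:
            return i
    return None

def escapes_re_dep(cx, cy, sign=1.0, max_iter=100, bailout=4.0):
    """n^2*Im -> sign*Re^2*Im, i.e. x -> (1 + sign*y^2)*x^2 + cx, y -> 2xy + cy."""
    x, y = 0.0, 0.0
    for i in range(max_iter):
        x, y = (1.0 + sign*y*y)*x*x + cx, 2.0*x*y + cy
        if x*x + y*y > bailout:
            return i
    return None

print(escapes_im_dep(0.0, 0.0))   # None: the origin stays bounded
print(escapes_im_dep(2.0, 0.0))   # 1: escapes immediately
```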

Note, however, that this is only properly defined for pure powers (or, luckily, power series). If you do a "simple" multiplication of two different numbers, you end up with two different possible real values.
So in that case, you need to iterate both real values, which leads to 3 equations instead of 2. And then, when you draw them, you need to decide what to do.
Possibilities include:
- Omit one real part
- Take one of the several possible averages
- Take their vector length ($(r_1^2+r_2^2)^{1/2}$)
- Go 3D and basically keep the 2 different real parts - maybe an actual new candidate for a "true" 3D Mset

That also leads to a possible 3D or 4D Mset from the general 2x2-matrix space.

However, the solution in that case wouldn't be 2 or 3 imaginary parts and 1 real part, but rather 1 or 2 imaginary parts and 2 real parts!
 « Reply #5 on: August 17, 2010, 11:43:03 PM »

Here is an experiment with that.

It's a buddhabrot which randomly samples through every algebra whose additional dimension squares to anything between -1 and 1. So it includes the complex, split-complex and dual numbers, but also everything in between.

 « Reply #6 on: August 21, 2010, 10:02:23 PM »

Here is an experiment with the imaginary unit being dependent on the current imaginary part: