Author Topic: Geometric Algebra, Geometric Calculus  (Read 12671 times)
kram1032 · Fractal Senior · Posts: 1863
« Reply #135 on: October 29, 2014, 12:23:56 AM »

Thanks for those contributions :)

At this point I am just experimenting.
I'm not even concerned with the outer product right now. As said, I only experimented with the geometric product.

However, you are right that the outer product changes quite significantly.
As I already showed in my previous post, when using \sigma_{ij} \neq -1 you don't even get a purely antisymmetric part: you get both a scalar (symmetric) AND an antisymmetric bi-vector component if you use the standard definition of the wedge product.

I did not define the inner product as anti-commutative. I already showed how \sigma_{ii}, i.e. the factor you get from exchanging a base-vector with itself, must always be 1. Hence, it is symmetric.

\frac{e_j e_i}{e_i e_j} = \sigma_{ij}, that is correct. It's a nice alternate method to denote it.

Your orthogonality remark is on point: that's actually why I am investigating this direction.
You do not actually need an orthonormal basis of vectors. You can just have n completely arbitrary base-vectors for an n-dimensional space. They can lie at completely odd angles to each other. The only limitation is that they must not be linearly dependent on each other.
That e_i \cdot e_j \neq 0 would just be a generalization. Nothing unheard of. In fact, in certain cases, for instance in triclinic crystals, it's actually more natural to describe your coordinates in such a non-orthogonal and perhaps even non-normal base.

However, I have now found the problem with my problem. I did nothing wrong in my calculations, but I interpreted the result wrong:

The offending equation was the following:

e_i e_j e_i = \sigma_{ij} e_i e_i e_j = \sigma_{ij} c_{ii} e_j = \sigma_{ji} e_j e_i e_i = \sigma_{ji} c_{ii} e_j

So that means:

 \sigma_{ij} c_{ii} e_j = \sigma_{ji} c_{ii} e_j

\sigma_{ij} = \sigma_{ji}

I previously simply assumed that this means \sigma could only be 1 to fulfill this equation. However, all that is required, in accordance with a different equation I already derived before, is this:

\sigma_{ij} = \frac{1}{\sigma_{ji}} and \sigma_{ij} = \sigma_{ji}

\to

\sigma_{ij} = \frac{1}{\sigma_{ij}}

\sigma_{ij}^2 = 1

\sigma_{ij} = \pm 1
This is required for the sake of consistency.
Thus, the whole system reduces again to precisely what we already had before, as far as commutativity goes.
All that is left is our c_{ij}, which are now restricted slightly further as well.

In particular:

e_i e_j = c_{ij} e_{ij}

e_j e_i = c_{ji} e_{ji}

i \neq j \to \sigma_{ij} = -1

i=j \to \sigma_{ij} = \sigma_{ii} = 1

\to

e_j e_i = \sigma_{ij} e_i e_j = - e_i e_j = - c_{ij} e_{ij}

\to

 c_{ji} e_{ji}=-c_{ij} e_{ij}
From there we could get:
-\frac{c_{ji}}{ c_{ij}}=\frac{e_{ij}}{e_{ji}}

This is definitely consistent if we just set c_{ij} = c_{ji}. I'm not absolutely certain right now that this is the only consistent choice, but it might be.
If it turns out that this is the unique solution, we'd essentially have a symmetric matrix C_{ij}=C_{ji} whose entry (i, j) is the extra factor picked up by the geometric product of the i^{th} and j^{th} base vectors. This would essentially be the metric tensor of our given system. The main diagonal would give the usual signature of the system, while the off-diagonal values i \neq j correspond to rotated components; they only occur if the system is not orthogonal.

As said, I'm not yet fully convinced that this is the only solution here, though it's not unlikely.
But if it is, we'll still have to look into what happens for other vector multiplications. For instance, what happens if we multiply a vector and a bi-vector or, essentially equivalently, three vectors? Things can become really complicated really quickly, but a pattern for multiplying n- with k-vectors should appear sooner rather than later.
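For what it's worth, the sign rules derived above (\sigma_{ii} = 1, \sigma_{ij} = -1 for i \neq j, plus a diagonal c_{ii}) are easy to play with in a few lines of Python. This is just my own toy sketch, not anything from a library, and it assumes an orthogonal basis (all off-diagonal c_{ij} = 0), since the non-orthogonal case is exactly what is still open:

```python
def blade_mul(a, b, metric):
    """Geometric product of two basis blades.

    Blades are tuples of basis indices, e.g. (0, 1) means e_0 e_1.
    `metric` holds the diagonal entries: metric[i] = c_ii = e_i e_i.
    Off-diagonal c_ij are assumed zero (orthogonal basis), so swapping
    two distinct basis vectors just contributes sigma_ij = -1.
    """
    coeff = 1.0
    seq = list(a) + list(b)
    i = 0
    while i < len(seq) - 1:
        if seq[i] > seq[i + 1]:
            # Swap distinct e_i, e_j: picks up a factor sigma_ij = -1.
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            coeff = -coeff
            i = max(i - 1, 0)
        elif seq[i] == seq[i + 1]:
            # e_i e_i contracts to the scalar c_ii.
            coeff *= metric[seq[i]]
            del seq[i:i + 2]
            i = max(i - 1, 0)
        else:
            i += 1
    return coeff, tuple(seq)

# Example: e_0 e_1 e_0 in a Euclidean metric comes out as -e_1,
# matching sigma_01 c_00 e_1 with sigma_01 = -1 and c_00 = +1.
print(blade_mul((0, 1), (0,), [1.0, 1.0]))  # (-1.0, (1,))
```

The `metric` list plays the role of the main diagonal of the C matrix above; a signature like (+,+,+,-) would just be `[1.0, 1.0, 1.0, -1.0]`.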
hermann · Iterator · Posts: 181
« Reply #136 on: October 29, 2014, 04:30:14 AM »

If the e_i do not form an orthogonal base, I think you have to play the game with covariant and contravariant vectors, and you have to define what your metric is.

(e_1 \cdot e_2 \cdot e_3) being the triple ("Spat") product!
I have not tried to write this down with geometric algebra. Or do we have a source where such things are described?
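That covariant/contravariant game is quick to illustrate numerically. Here is my own sketch with a made-up oblique 2D basis (two unit vectors at 60 degrees): the metric g_ij = e_i \cdot e_j and the reciprocal (contravariant) basis come out of plain linear algebra.

```python
import numpy as np

# Hypothetical non-orthogonal basis: the columns are e_1 and e_2,
# two unit vectors at 60 degrees to each other.
E = np.array([[1.0, 0.5],
              [0.0, np.sqrt(3.0) / 2.0]])

g = E.T @ E                # metric tensor g_ij = e_i . e_j
g_inv = np.linalg.inv(g)   # g^{ij}, used to raise indices

# Reciprocal basis e^i = g^{ij} e_j, satisfying e^i . e_j = delta^i_j.
E_recip = E @ g_inv

print(np.round(g, 3))                         # off-diagonal 0.5 = cos(60 deg)
print(np.allclose(E_recip.T @ E, np.eye(2)))  # True
```

Vector components in the e_i basis are the contravariant ones; dotting with the reciprocal basis extracts them, which is exactly the index-raising/lowering bookkeeping of tensor notation.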
Roquen · Iterator · Posts: 180
« Reply #137 on: October 29, 2014, 02:52:26 PM »

The geometric product is defined as follows:
<Quoted Image Removed>
To be anal: no, it's not defined that way. The product rules for the basis elements define the full product; this is an identity showing how these two specific partial products relate to the full product.
All code submitted by me is in the public domain. (http://unlicense.org/)

kram1032 · Fractal Senior · Posts: 1863
« Reply #138 on: October 29, 2014, 10:05:20 PM »

Not really.
You could define a b = a \cdot b + a \wedge b just as well as you could define a \cdot b = \frac{a b+b a}{2} \: \text{and} \: a \wedge b = \frac{a b- b a}{2}

It's almost the same thing as defining:

e^{i z} = \cos{z} + i \sin{z} versus \sin{z} = \frac{e^{i z}-e^{-i z}}{2 i} \: \text{and} \: \cos{z} = \frac{e^{i z}+e^{-i z}}{2}

In both cases I'd argue that the second version is the more fundamental one; however, there is nothing actually wrong with doing it the first way. You just have to pick:
one of the two things is defined, and the other is derived. Which one matters very little.

Really, this can be expressed as:

a b = a \cdot b + a \wedge b \Leftrightarrow a \cdot b = \frac{a b+b a}{2} \: \text{and} \: a \wedge b = \frac{a b- b a}{2}

and

e^{i z} = \cos{z} + i \sin{z} \Leftrightarrow \sin{z} = \frac{e^{i z}-e^{-i z}}{2 i} \: \text{and} \: \cos{z} = \frac{e^{i z}+e^{-i z}}{2}

respectively. In either case it takes simple algebraic manipulations to show that these are correct.
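Both directions of that first equivalence are also easy to check numerically. Below is a throwaway 2D geometric product (signature (+,+); basis 1, e1, e2, e12) that I'm using purely for illustration: the symmetric half of ab reproduces a \cdot b and the antisymmetric half reproduces a \wedge b.

```python
def gp(u, v):
    """Toy 2D geometric product.

    Multivectors are tuples (s, x, y, b) meaning s + x e1 + y e2 + b e12,
    multiplied out with e1^2 = e2^2 = +1, e1 e2 = -e2 e1 = e12, e12^2 = -1.
    """
    s1, x1, y1, b1 = u
    s2, x2, y2, b2 = v
    return (s1*s2 + x1*x2 + y1*y2 - b1*b2,   # scalar part
            s1*x2 + x1*s2 - y1*b2 + b1*y2,   # e1 part
            s1*y2 + y1*s2 + x1*b2 - b1*x2,   # e2 part
            s1*b2 + b1*s2 + x1*y2 - y1*x2)   # e12 part

def half(u, v, sign):
    """(uv + sign * vu) / 2, componentwise."""
    return tuple((p + sign * q) / 2 for p, q in zip(gp(u, v), gp(v, u)))

a = (0.0, 2.0, 3.0, 0.0)   # the vector 2 e1 + 3 e2
b = (0.0, 5.0, -1.0, 0.0)  # the vector 5 e1 - 1 e2

print(half(a, b, +1))  # (7.0, 0.0, 0.0, 0.0): a pure scalar, a.b
print(half(a, b, -1))  # (0.0, 0.0, 0.0, -17.0): a pure bivector, a^b
```

Adding the two halves back together gives gp(a, b) again, which is exactly the a b = a \cdot b + a \wedge b direction of the equivalence.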
« Last Edit: October 29, 2014, 11:33:25 PM by kram1032 »

hermann · Iterator · Posts: 181
« Reply #139 on: October 29, 2014, 10:24:35 PM »

Thanks Kram,

I am a little tired and exhausted this evening, so I can't work it out in detail right now.
But I think your construction has something to do with the metric tensor, which plays an important role in general relativity.

http://de.wikipedia.org/wiki/Metrischer_Tensor
http://de.wikipedia.org/wiki/Krummlinige_Koordinaten

g_{ij}

Hermann
P.S.: Maybe I can work out the details in the next few days.
« Last Edit: October 29, 2014, 10:27:04 PM by hermann »

kram1032 · Fractal Senior · Posts: 1863
« Reply #140 on: October 29, 2014, 11:28:48 PM »

Indeed it does, and I already said so:
Quote
This would essentially be the metric tensor of our given system.
:D

Btw, why do you link to the German wikis? Obviously, you and I know German. But I think the largest part of the community, by language, is actually natively English-speaking and understands German either poorly or not at all.

http://en.wikipedia.org/wiki/Metric_tensor
http://en.wikipedia.org/wiki/Curvilinear_coordinates

(Also, very often, the English wikis are better than their German counterparts, although the German Wikipedia is amongst the best; especially in chemistry it frequently beats the English one in my experience. Still, usually English is better, and it's more universally understood here :))
kram1032 · Fractal Senior · Posts: 1863
« Reply #141 on: October 31, 2014, 11:39:51 PM »

http://wavewatching.net/2014/10/27/the-unintentional-obsfuscation-of-physics/
kram1032 · Fractal Senior · Posts: 1863
« Reply #142 on: November 01, 2014, 01:20:25 AM »

Here's another wild idea. I haven't actually tested it much yet; it could be complete nonsense.
However, as we have seen, \sigma only had the following condition on it: \sigma^2=1.
I assumed \sigma to be a scalar. But what happens if you let it be a full multivector of its own?
For instance, starting with a 2D Geometric Algebra with positive signature:

\sigma^2=(a+bx+cy+di)^2= (a^2 + b^2 + c^2 - d^2) + 2 a b x + 2 a c y + 2 a d i=1

This gives the solution:

\left(d^2+1=b^2+c^2 \wedge a=0\right) \vee \left(b=c=d=0\wedge a^2=1\right)

Anything that sticks to those parameters,
- is its own inverse
- should be a plausible definition of \sigma.

This kind of extension can become very complicated very quickly though. It remains to be seen whether that would even be a valuable addition.
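Both branches of that solution can at least be sanity-checked numerically. This reuses a throwaway 2D geometric product of my own (signature (+,+); (s, x, y, b) stands for s + x e1 + y e2 + b e12, with i = e12):

```python
def gp(u, v):
    """Toy 2D geometric product; (s, x, y, b) = s + x e1 + y e2 + b e12,
    with e1^2 = e2^2 = +1, e1 e2 = -e2 e1 = e12, e12^2 = -1."""
    s1, x1, y1, b1 = u
    s2, x2, y2, b2 = v
    return (s1*s2 + x1*x2 + y1*y2 - b1*b2,
            s1*x2 + x1*s2 - y1*b2 + b1*y2,
            s1*y2 + y1*s2 + x1*b2 - b1*x2,
            s1*b2 + b1*s2 + x1*y2 - y1*x2)

# Branch 1: a = 0 and d^2 + 1 = b^2 + c^2, e.g. b = c = d = 1
# (since 1^2 + 1 = 1^2 + 1^2).
sigma = (0.0, 1.0, 1.0, 1.0)
print(gp(sigma, sigma))  # (1.0, 0.0, 0.0, 0.0), i.e. sigma^2 = 1

# Branch 2: b = c = d = 0 and a^2 = 1, e.g. a = -1.
print(gp((-1.0, 0.0, 0.0, 0.0), (-1.0, 0.0, 0.0, 0.0)))  # also the scalar 1
```

The vector and bivector parts of \sigma^2 cancel exactly as in the 2 a b x, 2 a c y, 2 a d i terms above once a = 0, leaving only the scalar condition.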
kram1032 · Fractal Senior · Posts: 1863
« Reply #143 on: November 01, 2014, 04:57:54 PM »

Here is an awesome paper: Vector Analysis of Spinors (pdf)
It's pretty simple to follow as far as I've seen.
I haven't even reached the meat of the paper yet (I'm just about to get there), but already what happens before that is pretty nice.

It's not very well checked, though. I found a few errors: twice, early on, there is a b which should be a \hat{b}; later down the line, a 1 in an exponent is supposed to be an i; and one of the worst errors I've found thus far is that Heisenberg is falsely called Heisenburg.

There also is this flash video talk:
http://www.worldsci.org/php/DimDimFlashViewer.php?id=336
I wish the quality were a bit higher. He sadly isn't a great talker: he's very clearly nervous. Still, it's a good talk.
The slides for that talk can be found here http://garretstar.com/nfmtalk2010/AMS-MAA2013.pdf
« Last Edit: November 02, 2014, 01:54:54 AM by kram1032 »

kram1032 · Fractal Senior · Posts: 1863
« Reply #144 on: November 02, 2014, 01:29:21 AM »

Apparently, this paper of his is more recent:
Geometry of Spin-\frac{1}{2}-particles

He apparently was a direct grad student of Hestenes.
hermann · Iterator · Posts: 181
« Reply #145 on: November 02, 2014, 08:17:33 PM »

Interesting links!

I hope I have more time next weekend to look on it in detail.

Hermann
kram1032 · Fractal Senior · Posts: 1863
« Reply #146 on: November 02, 2014, 08:58:15 PM »

It's actually pretty timely: My recent experiments partially aimed at answering exactly the questions he answered in those papers (if not in full generality). The gist of his work appears to be to unify matrices and Geometric Algebra to the point where you can do everything with GA.
He's focusing heavily on idempotents (values x with the property that x^2=x; for the reals, this only works for 0 and 1) and nilpotents (x^2=0 with x \neq 0), and, very importantly, he establishes that such idempotents pretty much are the Geometric Algebra version of Eigenvalues.
This is something I have pondered about for a while, actually. Eigenvalues and Eigenvectors are, like, THE most important thing behind matrices. If you have them for a given matrix, you can apply /any/ analytic function to those matrices.
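Concrete examples of both species are easy to exhibit in 2D. These are my own standard-textbook-style checks, not taken from the paper, using a throwaway 2D geometric product (signature (+,+); (s, x, y, b) stands for s + x e1 + y e2 + b e12):

```python
def gp(u, v):
    """Toy 2D geometric product; (s, x, y, b) = s + x e1 + y e2 + b e12,
    with e1^2 = e2^2 = +1, e1 e2 = -e2 e1 = e12, e12^2 = -1."""
    s1, x1, y1, b1 = u
    s2, x2, y2, b2 = v
    return (s1*s2 + x1*x2 + y1*y2 - b1*b2,
            s1*x2 + x1*s2 - y1*b2 + b1*y2,
            s1*y2 + y1*s2 + x1*b2 - b1*x2,
            s1*b2 + b1*s2 + x1*y2 - y1*x2)

# Idempotent: u = 1/2 + (1/2) e1 squares to itself, u^2 = u,
# which is the projection-like behaviour behind the eigenvalue story.
u = (0.5, 0.5, 0.0, 0.0)
print(gp(u, u) == u)  # True

# Nilpotent: n = e1 + e12 squares to zero even though n != 0.
n = (0.0, 1.0, 0.0, 1.0)
print(gp(n, n))  # (0.0, 0.0, 0.0, 0.0)
```

The idempotent condition here is just u_0 = 1/2 plus |vector part|^2 = 1/4, the same shape as the spherical-coordinate condition worked out later in the thread.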
Now what is left for me to figure out is: what about non-square matrices, and how to fully translate arbitrary tensor algebra into this format?
It certainly must be possible, by way of the underlying isomorphisms.
Sadly, he only touches on square matrices, focusing on 2x2 and 3x3 ones, not really saying much about general nxn ones and nothing about nxk matrices.
However, a part of what he did hinted at possible extensions for such arbitrary cases.

Obviously, an nxk matrix represents an either under- or over-determined problem. I was wondering how well GA could deal with such cases. I haven't found a whole lot on that yet.

And the other thing, tensor-algebra, is because we learn it that way at university. Coordinate transformations from one frame to another appear to be half of the bread and butter of a physicist. And even if Tensor-algebra is highly cumbersome at times, especially once you get into Co- vs. Contravariant stuff (damn that stuff is easy to mix up. Luckily the distinction isn't always important), it's still really powerful.
I've seen demonstrations of specific cases, but I haven't yet seen a generic 1:1 translation of something expressed with arbitrary tensors into something expressed in a fitting Geometric Algebra, even if that is allegedly completely possible.

Once we can have arbitrary-dimensioned tensors fully translated in GA and back, we can truly compare the two approaches. (Thus far, what I've seen is mostly "very promising", to the point where I'd love to just do everything this way, but no complete, rigorous, exhaustive 1:1-translation)
« Last Edit: November 02, 2014, 10:46:42 PM by kram1032 »

hermann · Iterator · Posts: 181
« Reply #147 on: November 04, 2014, 10:57:53 PM »

I found this fine article on Bivectors in Wikipedia:
http://en.wikipedia.org/wiki/Bivector

Hermann
kram1032 · Fractal Senior · Posts: 1863
« Reply #148 on: November 06, 2014, 01:16:35 AM »

Thanks Hermann :)
There's also http://en.wikipedia.org/wiki/Multivector

Meanwhile, just for fun, idempotents in spherical coordinates:

u_\pm = u_0 \pm \bold{u}

with

\bold{u} = u_r e_r + u_\theta e_\theta + u_\phi e_\phi

Demanding idempotency, u_\pm^2 = u_\pm:

u_\pm^2 = u_0^2 + u_r^2 e_r^2 + u_\theta^2 e_\theta^2 + u_\phi^2 e_\phi^2 \pm 2 u_0 \left(u_r e_r + u_\theta e_\theta + u_\phi e_\phi\right) = u_\pm \Rightarrow

u_0 = u_0^2 + u_r^2 e_r^2 + u_\theta^2 e_\theta^2 + u_\phi^2 e_\phi^2 = u_0^2 + u_r^2 + r^2 u_\theta^2 + \left(r \sin \theta\right)^2 u_\phi^2
u_r = 2 u_0 u_r
u_\theta = 2 u_0 u_\theta
u_\phi = 2 u_0 u_\phi \Rightarrow

u_0 = \frac{1}{2} \Rightarrow

\frac{1}{2} = \frac{1}{4} + u_r^2 + r^2 u_\theta^2 + \left(r \sin \theta\right)^2 u_\phi^2 \Rightarrow

u_r^2 + r^2 u_\theta^2 + \left(r \sin \theta\right)^2 u_\phi^2 = \frac{1}{4} \Rightarrow

u_\theta^2 + \sin^2 \theta \: u_\phi^2 = \frac{\frac{1}{4} - u_r^2}{r^2}

You could substitute that back into the original, but I think it's nicer to look at with that condition. As long as u_0 = \frac{1}{2} and u_\theta^2 + \sin^2 \theta \: u_\phi^2 = \frac{\frac{1}{4} - u_r^2}{r^2}, we are looking at an idempotent, squaring to itself.
Roquen · Iterator · Posts: 180
« Reply #149 on: November 06, 2014, 07:09:22 AM »

Quote
In both cases I'd argue that the second version is the more fundamental one, however, there is not actually anything wrong with doing it like in the first version. You just have to pick.
One of the two things is defined, and the other is derived. Which one matters very little.
Like I said: I was being anal. But if you define things in terms of the partial products, then the number of axioms explodes. The simplest definition wins.

All code submitted by me is in the public domain. (http://unlicense.org/)