Author Topic: Algebraic Numbers  (Read 354 times)
kram1032
« on: August 13, 2015, 12:00:51 AM »

So I've been interested in the so-called ring of algebraic numbers for quite a while now, but only recently did I find a document that doesn't just talk about them abstractly but actually explains how to do simple arithmetic with them. Once I grasped that description, I got a bit of a feeling for the behavior of this number system, and what I found was interesting, though I'm still learning all the intricacies.

Here is what I found out:

First off, "What are algebraic numbers?"

This number system can represent a large class of complex numbers - even many irrational ones - exactly, that is, without floating-point errors.
To be more precise, anything you can finitely represent by sums of products of nested n-th roots can be expressed neatly with algebraic numbers. They are essentially an extension of the rational numbers, and they also contain a subring which is an extension of the integers.

"How do they do this?"

The trick is to represent every number as a root of some univariate polynomial with integer (or rational) coefficients.
So the general representation of an algebraic number looks like this:
a_0 + a_1 x + a_2 x^2 + \dots + a_{n-1} x^{n-1} + a_n x^n = 0, \quad a_i \in \mathbb{Z} \ \text{for all} \ 0 \leq i \leq n, \quad k \in \left\{1, \dots, n\right\}

A polynomial of degree n, as you probably know, has precisely n complex roots (counted with multiplicity). The k denotes which of those roots you currently care about. It is not strictly necessary, in that all transformations you do on the roots of an algebraic number are done on all of them at once. (Herein actually lies a bit I'm still unsure about. More on this later.)

Example 1:

The monic polynomial x=0, perhaps shockingly, represents the number 0.
The monic polynomial x-n=0, \ n \in \mathbb{Z}, obviously has the solution n and thus represents that value.
The polynomial a - b x = 0, \ a, b \in \mathbb{Z}, \ b \neq 0, is equivalent to the monic polynomial x - \frac{a}{b} = 0, which represents the value \frac{a}{b}.

So far I have only shown degree-1 polynomials with a single, directly visible solution. Since only rational coefficients are allowed, degree-1 polynomials can only represent rational numbers.
The magic starts to happen if we go up a degree.

Example 2:

The monic polynomial -1-x+x^2=0 simultaneously represents the two values \frac{1}{2}\left(1\pm\sqrt{5}\right).
More generally, the monic polynomial r_1 r_2 - \left(r_1+r_2\right) x + x^2 = 0, where r_1 r_2, \ r_1 + r_2 \in \mathbb{Q}, simultaneously represents the two roots r_1, \ r_2.
The monic polynomial x^n-1=0 simultaneously represents n points along the unit circle in the complex plane, each separated from its direct neighbors by \frac{2 \pi}{n} rad. One of the points always lies at 1. (Here the case for n=5 - specifically, look at the plot in the complex plane.)
etc.
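None of this needs code, but a quick numeric sanity check is easy. The snippet below is my own sketch (the post itself contains no code) and uses numpy to confirm the n=5 case:

```python
import numpy as np

# Roots of x^5 - 1 = 0 (numpy.roots wants coefficients highest degree first).
roots = np.roots([1, 0, 0, 0, 0, -1])

# All five roots lie on the unit circle...
print(all(abs(abs(r) - 1) < 1e-9 for r in roots))  # True
# ...and one of them is exactly 1.
print(any(abs(r - 1) < 1e-9 for r in roots))       # True
```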

Ok, I think by now you get the idea. That much I already knew for a long time. So now to the more mysterious bit: How to do math?

Delving into this, I noticed something rather curious:
Aside from rational-valued manipulations of the polynomials (i.e. shifting or multiplying the roots of a given polynomial by a rational constant), inverse operations are generally easier to do than direct ones.

Multiply by rational factor r:

a_0 + a_1 x + a_2 x^2 + ... + a_i x^i + ... + a_{n-1} x^{n-1} + a_n x^n \to a_0 r^n + a_1 r^{n-1} x + a_2 r^{n-2} x^2 + ... + a_i r^{n-i} x^i + ... + a_{n-1} r^1 x^{n-1} + a_n x^n
or, if sum notation is familiar enough, perhaps more readably: \sum_{i=0}^n a_i x^i \to \sum_{i=0}^n a_i r^{n-i} x^i.

Example:

x^3-x^2+x-1=0 has the three roots \left\{\pm i, 1 \right\}.
Let's multiply by 3:

x^3-x^2+x-1 \to 3^{3-3}x^3-3^{3-2}x^2+3^{3-1}x-3^{3-0}\cdot 1 = x^3-3 x^2 +9 x - 27, which has, as wanted, the three roots \left\{\pm 3i, 3 \right\}.
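As a sketch (my own code, not from the post; `scale_roots` is a made-up helper name), the coefficient rule above is a one-liner:

```python
import numpy as np

def scale_roots(coeffs, r):
    """coeffs = [a_0, ..., a_n] of p(x), lowest degree first.
    Returns the coefficients of the polynomial whose roots are
    r times the roots of p, via a_i -> a_i * r**(n - i)."""
    n = len(coeffs) - 1
    return [a * r**(n - i) for i, a in enumerate(coeffs)]

# x^3 - x^2 + x - 1 has roots {i, -i, 1}; multiply them by 3:
q = scale_roots([-1, 1, -1, 1], 3)
print(q)  # [-27, 9, -3, 1], i.e. x^3 - 3x^2 + 9x - 27

# Numeric check that the new roots are {3i, -3i, 3}:
roots = np.roots(q[::-1])  # numpy wants highest degree first
print(all(min(abs(r - t) for t in (3j, -3j, 3)) < 1e-9 for r in roots))  # True
```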

Add a rational constant r:

a_0 + a_1 x + a_2 x^2 + ... + a_i x^i + ... + a_{n-1} x^{n-1} + a_n x^n \to a_0 + a_1 \left(x-r\right) + a_2 \left(x-r\right)^2 + ... + a_i \left(x-r\right)^i + ... + a_{n-1} \left(x-r\right)^{n-1} + a_n \left(x-r\right)^n
or, if sum notation is familiar enough, perhaps more readably: \sum_{i=0}^n a_i x^i \to \sum_{i=0}^n a_i \left(x-r\right)^i.

Example:

Let's take the same polynomial as above but, this time, add 3:

x^3-x^2+x-1\to (x-3)^3-(x-3)^2+(x-3)-1=x^3-10x^2+34x-40 with, as wanted, the three roots \left\{3 \pm i, 4 \right\}.
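The substitution x \to x-r can likewise be scripted; this is my own sketch (the helper name `shift_roots` is made up), expanding each a_i (x-r)^i with binomial coefficients:

```python
from math import comb

def shift_roots(coeffs, r):
    """coeffs = [a_0, ..., a_n] of p(x), lowest degree first.
    Returns the coefficients of p(x - r), whose roots are the
    roots of p shifted by +r."""
    out = [0] * len(coeffs)
    for i, a in enumerate(coeffs):
        # expand a_i * (x - r)^i term by term
        for j in range(i + 1):
            out[j] += a * comb(i, j) * (-r) ** (i - j)
    return out

# x^3 - x^2 + x - 1 has roots {i, -i, 1}; add 3 to all of them:
print(shift_roots([-1, 1, -1, 1], 3))  # [-40, 34, -10, 1], i.e. x^3 - 10x^2 + 34x - 40
```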

Multiplicative inverse (i.e. raising to the power -1):

a_0 + a_1 x + a_2 x^2 + ... + a_i x^i + ... + a_{n-1} x^{n-1} + a_n x^n \to a_n + a_{n-1} x + a_{n-2} x^2 + ... + a_{n-i} x^i + ... + a_{1} x^{n-1} + a_0 x^n
or, if sum notation is familiar enough, perhaps more readably: \sum_{i=0}^n a_i x^i \to \sum_{i=0}^n a_{n-i} x^i.

Example:

x^2-3x+2 has the roots \left\{1,2\right\}
2 x^2-3x+1 has the roots \left\{1,\frac{1}{2}\right\}
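In code this operation is just a list reversal; a minimal sketch of my own (with a hypothetical helper name):

```python
def invert_roots(coeffs):
    """coeffs = [a_0, ..., a_n], lowest degree first, with a_0 != 0.
    Reversing the coefficient order replaces every root r by 1/r."""
    assert coeffs[0] != 0, "0 must not be a root (see Remark 2)"
    return coeffs[::-1]

# x^2 - 3x + 2 has roots {1, 2}:
print(invert_roots([2, -3, 1]))  # [1, -3, 2], i.e. 2x^2 - 3x + 1 with roots {1, 1/2}
```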

Remark 1:

The fact that reversing the order of the coefficients inverts the value of the roots implies that (anti-)palindromic polynomials, if they have a root r, will always also have the inverse \frac{1}{r} as a root. This fact can be very useful for factoring polynomials.

Example:

x^2+x+1 has the roots -\frac{1}{2}\left(1\pm i\sqrt{3}\right). And indeed \left(-\frac{1}{2}\right)\left(1+ i\sqrt{3}\right)\cdot\left(-\frac{1}{2}\right)\left(1-i\sqrt{3}\right)=\frac{1}{4}\left(1+3\right)=1, so the two roots are inverses of each other.

Remark 2:

As mentioned, all operations you do on the polynomial to modify one of the roots are simultaneously applied to all roots. This means that if 0 is one of the roots of the given polynomial, the above operation would be ill-defined. And indeed, if your polynomial has 0 as a root at least once, the constant coefficient a_0 = 0, so reversing the coefficients would force the polynomial to drop a degree, causing you to lose one solution. To avoid this, you can always factor out the 0 roots by simply dividing by x until your polynomial has a nonzero constant term (or until you end up with just x, which would, however, mean that all the roots of the polynomial were 0 and you had a polynomial of the form x^n=0).

Inverse integer powers, i.e. k-th roots \sqrt[k]{r}:

a_0 + a_1 x + a_2 x^2 + ... + a_i x^i + ... + a_{n-1} x^{n-1} + a_n x^n \to a_0 + a_1 x^k + a_2 x^{2k} + ... + a_i x^{i k} + ... + a_{n-1} x^{k\left(n-1\right)} + a_n x^{k n}
or, if sum notation is familiar enough, perhaps more readably: \sum_{i=0}^n a_i x^i \to \sum_{i=0}^n a_i x^{i k}.

This operation won't give you just the principal root but, in fact, all the k complex roots in a single go. We've already seen this in this simple case:

Example:

x-1 \to x^k-1 turns the polynomial that represents 1 into one that represents all kth roots of 1.

x-2 has the root 2.
x^2-2 has the roots \pm\sqrt{2}, both of which square to 2.
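This transformation only spreads the coefficients out; a sketch of my own (hypothetical helper name), substituting x^k for x:

```python
def kth_roots(coeffs, k):
    """coeffs = [a_0, ..., a_n] of p(x), lowest degree first.
    Returns the coefficients of p(x^k), whose roots are all the
    k-th complex roots of the roots of p."""
    out = [0] * (k * (len(coeffs) - 1) + 1)
    for i, a in enumerate(coeffs):
        out[i * k] = a  # a_i moves from degree i to degree i*k
    return out

# x - 2 has the root 2; its two square roots:
print(kth_roots([-2, 1], 2))  # [-2, 0, 1], i.e. x^2 - 2 with roots +-sqrt(2)
```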

More general arithmetic (i.e. adding or multiplying any two algebraic numbers, rather than just rationals) is a bit more involved, though it's a direct generalization of the rational case and makes use of the so-called companion matrix. I will come to that shortly. First, let me demonstrate how, if you know what, exactly, a given root is, you can generate "the best" polynomial for it: the monic one (meaning the leading coefficient a_n equals 1) of smallest degree.

First, a method based on what I already have written about.

Find minimal polynomial 1:

Say our desired root is the (in)famous \frac{1}{2}\left(\sqrt{5}-1\right).

Let's start with the simplest possible polynomial: x=0.
Add 5: x-5=0.
Take the square root (note: in this step, two solutions are generated, only one of them will eventually be the one we actually technically care about):
x^2-5=0.
Subtract 1: (x+1)^2-5=0 \to x^2+2x-4=0.
Divide by 2: x^2+2\cdot\frac{1}{2}x-\left(\frac{1}{2}\right)^2\cdot 4=0 \to x^2+x-1=0.
This polynomial has the desired root as well as a second one:

x=-\frac{1}{2}\left(1\pm\sqrt{5}\right).
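The whole construction can be scripted with exact rational arithmetic. This is my own sketch (helper names are made up), chaining the shift/scale/root transformations described earlier:

```python
from fractions import Fraction
from math import comb

def shift(c, r):   # roots += r  (substitute x -> x - r)
    out = [Fraction(0)] * len(c)
    for i, a in enumerate(c):
        for j in range(i + 1):
            out[j] += a * comb(i, j) * Fraction(-r) ** (i - j)
    return out

def scale(c, r):   # roots *= r
    n = len(c) - 1
    return [a * Fraction(r) ** (n - i) for i, a in enumerate(c)]

def root(c, k):    # roots -> all their k-th roots (substitute x -> x^k)
    out = [Fraction(0)] * (k * (len(c) - 1) + 1)
    for i, a in enumerate(c):
        out[i * k] = a
    return out

# Minimal polynomial of (sqrt(5) - 1)/2, built step by step as in the post:
p = [Fraction(-5), Fraction(1)]  # x - 5        (root: 5)
p = root(p, 2)                   # x^2 - 5      (roots: +-sqrt(5))
p = shift(p, -1)                 # x^2 + 2x - 4 (roots: -1 +- sqrt(5))
p = scale(p, Fraction(1, 2))     # x^2 + x - 1
print([str(a) for a in p])       # ['-1', '1', '1']
```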

Now a method based on removing square-roots:

Find minimal polynomial 2:

x=\frac{1}{2}\left(\sqrt{5}-1\right)
2x=\sqrt{5}-1
2x+1=\sqrt{5}
\left(2x+1\right)^2=5 - this is where the second solution appears
4x^2+4x+1=5
4x^2+4x-4=0
x^2+x-1=0

Now that this is established, let's move on to general multiplication and addition, using the companion matrix.

The Companion Matrix
Given a monic polynomial p of degree n with coefficients c_0, \dots, c_{n-1} (and c_n = 1), the corresponding companion matrix is defined as:
C(p)=\begin{bmatrix} 0 & 0 & \dots & 0 & -c_0 \\ 1 & 0 & \dots & 0 & -c_1 \\ 0 & 1 & \dots & 0 & -c_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \dots & 1 & -c_{n-1} \end{bmatrix}.

This matrix, among other really neat properties, has the original polynomial as its characteristic polynomial. That is, the determinant \left| x \mathbb{1}-C(p) \right| = p(x), where \mathbb{1} is the identity matrix (1s on the main diagonal), of dimensions equal to those of the companion matrix. (If the polynomial p has degree n, then both the companion matrix C(p) and the identity matrix \mathbb{1} are n \times n matrices.)

Thus you have a 1-1 correspondence between characteristic polynomials and companion matrices.

Example: To Be Written
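An example along these lines might look as follows (my own numpy sketch; `np.poly` returns the characteristic polynomial's coefficients, highest degree first):

```python
import numpy as np

def companion(c):
    """Companion matrix of the monic polynomial with coefficients
    c = [c_0, ..., c_{n-1}], lowest degree first (c_n = 1 implied)."""
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)    # 1s on the subdiagonal
    C[:, -1] = [-ci for ci in c]  # last column holds -c_i
    return C

# p(x) = x^2 - 3x + 2, i.e. c_0 = 2, c_1 = -3:
C = companion([2, -3])
# Its characteristic polynomial should recover p:
print(np.poly(C))  # [ 1. -3.  2.]
```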

General Multiplication of Algebraic Numbers

Given two polynomials p_1, \ p_2 with root sets \left\{r_{1,i}\right\}, \ \left\{r_{2,j}\right\} and companion matrices A=C\left(p_1\right), \ B=C\left(p_2\right), the roots that are the products of each root of p_1 with each root of p_2 can be found as the roots of the characteristic polynomial of the tensor (Kronecker) product of the companion matrices:

\left|x \mathbb{1} - \left(A\otimes B\right)\right| = 0 will have the root set \left\{r_{1,i} \, r_{2,j}\right\}_{i,j}.

Example: To Be Written
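A possible example (my own numpy sketch; the polynomials x^2-2 and x^2-3 are my choice): the products of \pm\sqrt{2} and \pm\sqrt{3} are \pm\sqrt{6}, each occurring twice, so the result should be (x^2-6)^2.

```python
import numpy as np

def companion(c):
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = [-ci for ci in c]
    return C

A = companion([-2, 0])  # x^2 - 2, roots +-sqrt(2)
B = companion([-3, 0])  # x^2 - 3, roots +-sqrt(3)

# Characteristic polynomial of the Kronecker product:
p = np.poly(np.kron(A, B))
print(np.round(p).astype(int).tolist())  # [1, 0, -12, 0, 36], i.e. (x^2 - 6)^2
```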

General Addition of Algebraic Numbers

Given two polynomials p_1, \ p_2 with root sets \left\{r_{1,i}\right\}, \ \left\{r_{2,j}\right\} and companion matrices A=C\left(p_1\right), \ B=C\left(p_2\right), the roots that are the sums of each root of p_1 with each root of p_2 can be found as the roots of the characteristic polynomial of the tensor (Kronecker) sum of the companion matrices:

\left|x \mathbb{1} - \left(A\otimes \mathbb{1}_B + \mathbb{1}_A \otimes B \right)\right| = 0 will have the root set \left\{r_{1,i} + r_{2,j}\right\}_{i,j}.

Example: To Be Written
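A possible example (again my own numpy sketch with the same two polynomials): the sums \pm\sqrt{2}\pm\sqrt{3} are exactly the roots of x^4-10x^2+1, the minimal polynomial of \sqrt{2}+\sqrt{3}.

```python
import numpy as np

def companion(c):
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = [-ci for ci in c]
    return C

A = companion([-2, 0])  # x^2 - 2, roots +-sqrt(2)
B = companion([-3, 0])  # x^2 - 3, roots +-sqrt(3)
I = np.eye(2)

# Kronecker sum: its eigenvalues are all sums +-sqrt(2) +- sqrt(3).
S = np.kron(A, I) + np.kron(I, B)
print(np.round(np.poly(S)).astype(int).tolist())  # [1, 0, -10, 0, 1]
```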

Taking integer powers of Algebraic Numbers

If you have paid attention, you will have noticed that the above definition of multiplication won't (just) give rise to powers: since each root of one matrix is paired up with each root of the other, you end up with quadratic growth in the number of values, only a tiny fraction of which are actually powers of specific roots.
To avoid this, you either have to factor out the roots you don't actually want at each step (as far as that is possible - in this particular case, I think it should always work), or you can use the constraints the polynomial sets to get only actual powers (though there might still be duplicates - if you start from x^2-1=0, both solutions \pm 1 square to 1).

An example algorithm is given here

Example (taken from above link)
Suppose you want to find the square s of a root r of the polynomial x^3-x-1, so s=r^2.

The polynomial tells you that x^3-x-1=0 \to r^3-r-1 = 0 \to r^3=r+1.
Then s^2=r^4=r^2+r, \ r^5=r^3+r^2=r^2+r+1, \ s^3=r^6=r^3+r^2+r=r^2+2r+1

So you get:
s=r^2
s^2=r^2+r
s^3=r^2+2r+1

Or

s=r^2
s^2=s+r \to r=s^2-s
s^3=s+2r+1 \to s^3=s+2(s^2-s)+1

So

s^3-2s^2+s-1=0 and thus
x^3-2x^2+x-1=0

This still has only 3 solutions, and it won't include anything twice unless two roots actually square to the same value.
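As a numeric cross-check of this derivation (my own sketch): the squares of the roots of x^3-x-1 should be exactly the roots of x^3-2x^2+x-1.

```python
import numpy as np

# Roots of x^3 - x - 1 (numpy wants coefficients highest degree first):
r = np.roots([1, 0, -1, -1])
# Roots of the derived polynomial x^3 - 2x^2 + x - 1:
s = np.roots([1, -2, 1, -1])

# Compare the two root sets up to ordering and rounding error.
key = lambda z: (round(z.real, 6), round(z.imag, 6))
print(sorted(map(key, r**2)) == sorted(map(key, s)))  # True
```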

Using this technique, juggling around with your coefficients, you can also prove that, in general,
given x^2+px+q=0, the polynomial whose roots are the squares of its roots is x^2+\left(2q-p^2\right)x+q^2=0.

Thus, for instance, a mandelbrot-esque formula on all the algebraic numbers representable by a degree-2 polynomial, where you take only rational values for c=\frac{a}{b}, might look like this:

x^2+p x + q \to {\left(x-c\right)}^2+\left(2q-p^2\right)\left(x-c\right)+q^2 = x^2+\left(2\left(q-c\right)-p^2\right)x+\left(q-c\right)^2+p^2 c

I'm not sure if this case is actually interesting (in particular the fact that in this formula c is always real makes me doubt that a little), but it'd be "correct" in the sense of it always being inside the algebraic number field, as long as you keep c, p and q as rational numbers.
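To make this concrete, here is a small exact-arithmetic sketch of the iteration (my own code; the starting polynomial and the value of c are arbitrary choices of mine):

```python
from fractions import Fraction

def step(p, q, c):
    """One iteration on x^2 + p x + q: square both roots, then add c,
    using the closed-form coefficient update derived above."""
    return 2 * (q - c) - p * p, (q - c) ** 2 + p * p * c

p, q = Fraction(0), Fraction(-1)  # start with x^2 - 1, roots {1, -1}
c = Fraction(-1, 2)
for _ in range(3):
    p, q = step(p, q, c)
    print(p, q)
# -1 1/4
# 1/2 1/16
# 7/8 49/256
```

Each printed pair is the new (p, q); e.g. after the first step the polynomial is x^2 - x + \frac{1}{4} = \left(x-\frac{1}{2}\right)^2, matching 1^2 - \frac{1}{2} = \frac{1}{2} applied to both roots.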

And then there is, of course, always "bending the rules" - what would a complex c do in this case? etc.

The reason I didn't add the case where c itself is represented by a degree-2 polynomial is that adding every value to every value doubles the polynomial degree in each iteration.

________________

So what currently isn't quite clear to me is how I actually "tag" a polynomial root and keep only solutions that apply to it, following how just that one solution evolves.

There is the notion of a "prime polynomial" (more commonly called an irreducible polynomial): a monic polynomial which can't be split further into simpler polynomials with integer coefficients. Each group of solutions belongs to only a single such prime polynomial, so one way (the only way?) to get rid of unnecessary solutions is to factor your polynomial into its simplest parts and to only keep the one you care about. But it's not yet clear to me how, in general, to actually decide which prime factor is the one you care about.

Example:

x^2-1 = \left(x+1\right) \left(x-1\right). Those two polynomials are prime.
Meanwhile, x^2+1= \left(x+i\right) \left(x-i\right) but i is not an integer so that is not an allowed factorization. Thus, x^2+1 already is prime.

This notion of primality isn't completely equivalent to that of prime numbers. For instance, every integer polynomial x-n is prime, whereas, clearly, not all integers n are prime.
« Last Edit: August 13, 2015, 01:24:55 AM by kram1032 »