So I've been interested in the so-called ring of algebraic numbers for quite a while now, but only recently did I find a document that doesn't just talk about them abstractly but actually explains how to do simple arithmetic with them. Once I grasped that description, I got a bit of a feeling for the behavior of this number system, and what I found was interesting - though I'm still learning all the intricacies.
Here is what I found out:
First off,
"What are algebraic numbers?"This number system is able to represent a large number of complex numbers - even a ton of irrational ones - exactly. That is, without floating point errors.
To be more precise, anything you can finitely represent by sums of products of nested n-th roots can be expressed neatly with algebraic numbers. They are essentially an extension of the rational numbers, and they also have a subring (the algebraic integers) which is an extension of the integers.
"How do they do this?"The trick is to represent all numbers as the roots of some integer- or fraction-valued uni-variate polynomial.
So the general representation of an algebraic number might look like this:

$\operatorname{root}_k\left(c_n x^n + c_{n-1} x^{n-1} + \dots + c_1 x + c_0\right)$

A polynomial of degree n, as you probably know, has precisely n roots (over the complex numbers, counted with multiplicity). The k denotes which of those roots you currently care about. It is not strictly necessary, in the sense that all transformations you do on the roots of an algebraic number's polynomial are applied to all of the roots at once. (Herein actually lies a bit I'm still unsure about. More on this later.)
Example 1:
The monic polynomial $x$, perhaps shockingly, represents the number 0.
The monic polynomial $x - n$ obviously has the solution $n$ and thus represents that value.
The polynomial $ax - b$ is equivalent to the monic polynomial $x - \frac{b}{a}$, which represents the value $\frac{b}{a}$.
Thus far I have only shown degree-1 polynomials with a single, directly visible solution. Since only rational numbers are allowed as coefficients, degree-1 polynomials can only represent rational numbers.
The magic starts to happen if we go up a degree.
Example 2:
The monic polynomial $x^2 - 2$ simultaneously represents the two values $\pm\sqrt{2}$.
More generally, the monic polynomial $x^2 - q$, where $q \in \mathbb{Q}$, simultaneously represents the two roots $\pm\sqrt{q}$.
The monic polynomial $x^n - 1$ simultaneously represents $n$ points along the unit circle in the complex plane, each separated from its direct neighbors by $\frac{2\pi}{n}$ rad. One of the points will lie at 1. (Here, the case for $n = 5$ - specifically look at the plot in the complex plane.)
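A quick numerical sketch of the roots-of-unity case (Python, using numpy's root finder):

```python
import numpy as np

# The five roots of x^5 - 1: equally spaced points on the unit circle.
roots = np.roots([1, 0, 0, 0, 0, -1])   # coefficients, highest power first

moduli = np.abs(roots)                  # all 1: the points lie on the unit circle
angles = np.sort(np.angle(roots))       # direct neighbors differ by 2*pi/5
```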
etc.
Ok, I think by now you get the idea. That much I already knew for a long time. So now to the more mysterious bit: How to do math?
Delving into this, I noticed something rather curious:
Aside from rational-valued manipulations of the polynomials (i.e. shifting or multiplying the roots of a given polynomial by a rational constant), inverse operations are generally easier to do than direct ones.
Multiply by rational factor r:
$p(x) \mapsto r^n \, p\!\left(\tfrac{x}{r}\right)$
or, if sum notation is familiar enough, perhaps more readably:
$\sum_{i=0}^{n} c_i x^i \;\mapsto\; \sum_{i=0}^{n} c_i \, r^{n-i} \, x^i$.
Example:
$x^3 - x$ has the three roots $-1, 0, 1$.
Let's multiply by 3:
$x^3 - 9x$ has, as wanted, the three roots $-3, 0, 3$.
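On coefficient lists this is a one-liner (a sketch in Python; coefficients stored low-to-high, function name mine):

```python
from fractions import Fraction

def scale_roots(c, r):
    """p(x) -> r^n * p(x/r): every root a of p becomes r*a.
    c holds the coefficients low-to-high: p(x) = sum_i c[i] * x**i."""
    n = len(c) - 1
    return [ci * Fraction(r) ** (n - i) for i, ci in enumerate(c)]

# p(x) = x^3 - x has the roots -1, 0, 1; tripling them:
q = scale_roots([0, -1, 0, 1], 3)   # x^3 - 9x, roots -3, 0, 3
```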
Add a rational number r:
$p(x) \mapsto p(x - r)$
or, if sum notation is familiar enough, perhaps more readably:
$\sum_{i=0}^{n} c_i x^i \;\mapsto\; \sum_{i=0}^{n} c_i (x - r)^i$.
Example:
Let's take the same polynomial $x^3 - x$ as above but, this time, add 3:
$x^3 - 9x^2 + 26x - 24$ has, as wanted, the three roots $2, 3, 4$.
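On coefficient lists, adding a rational number amounts to a Horner-style composition with $x - r$ (a sketch; coefficients low-to-high, names mine):

```python
from fractions import Fraction

def shift_roots(c, r):
    """p(x) -> p(x - r): every root a of p becomes a + r.
    c holds the coefficients low-to-high: p(x) = sum_i c[i] * x**i."""
    r = Fraction(r)
    res = []                          # the zero polynomial
    for ci in reversed(c):            # Horner: res = res*(x - r) + ci
        res = [Fraction(0)] + res     # multiply res by x
        for i in range(len(res) - 1):
            res[i] -= r * res[i + 1]  # subtract r times the old res
        res[0] += ci
    return res

# p(x) = x^3 - x has roots -1, 0, 1; adding 3 should give roots 2, 3, 4:
q = shift_roots([0, -1, 0, 1], 3)     # x^3 - 9x^2 + 26x - 24
```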
Multiplicative inverse, or raising to the power -1:
Reverse the order of the coefficients: $p(x) \mapsto x^n \, p\!\left(\tfrac{1}{x}\right)$
or, if sum notation is familiar enough, perhaps more readably:
$\sum_{i=0}^{n} c_i x^i \;\mapsto\; \sum_{i=0}^{n} c_{n-i} \, x^i$.
Example:
$2x^2 - 3x + 1$ has the roots $1, \tfrac{1}{2}$.
$x^2 - 3x + 2$ has the roots $1, 2$.
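In code, the inversion really is just reversing the coefficient list (sketch; coefficients low-to-high):

```python
def invert_roots(c):
    """Reverse the coefficient order: every root a becomes 1/a.
    Requires a nonzero constant coefficient (i.e. 0 must not be a root)."""
    assert c[0] != 0, "0 is a root; factor out x first"
    return list(reversed(c))

# 2x^2 - 3x + 1 has roots 1 and 1/2 (coefficients low-to-high):
q = invert_roots([1, -3, 2])    # x^2 - 3x + 2, roots 1 and 2
```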
Remark 1:
The fact that reversing the order of the coefficients inverts the value of the roots implies that (anti-)palindromic polynomials, if they have a root $a$, will always also have the inverse $\tfrac{1}{a}$ as a root. This fact can be very useful for factoring polynomials.
Example:
$x^2 - 3x + 1$ has the roots $\frac{3 \pm \sqrt{5}}{2}$. But $\frac{3 + \sqrt{5}}{2} \cdot \frac{3 - \sqrt{5}}{2} = \frac{9 - 5}{4} = 1$, so the roots are inverses of each other.
Remark 2:
As mentioned, all operations you do on the polynomial to modify one of the roots are simultaneously applied to all roots. This means that, if 0 is one of the roots of the given polynomial, the above operation would be ill-defined. And indeed, if your polynomial has 0 as a root at least once, the constant coefficient satisfies $c_0 = 0$. Reversing the coefficients would then force the polynomial to drop a degree, causing you to lose one solution. To avoid this, you can always factor out the 0 roots by simply dividing by $x$ until the constant coefficient of your polynomial is nonzero (or you end up with just $x$, which would, however, mean that all the roots of the polynomial were 0 and you had a polynomial of the form $x^n$).
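Factoring out the 0 roots is equally mechanical (sketch; coefficients low-to-high):

```python
def strip_zero_roots(c):
    """Divide by x until the constant coefficient is nonzero,
    removing all roots at 0."""
    while len(c) > 1 and c[0] == 0:
        c = c[1:]
    return c

q = strip_zero_roots([0, -9, 0, 1])   # x^3 - 9x  ->  x^2 - 9
```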
Inverse integer powers, or k-th roots:
$p(x) \mapsto p(x^k)$
or, if sum notation is familiar enough, perhaps more readably:
$\sum_{i=0}^{n} c_i x^i \;\mapsto\; \sum_{i=0}^{n} c_i \, x^{ki}$.
This operation won't give you just the principal root but, in fact, all k complex k-th roots in a single go. We've already seen this in a simple case:
Example:
$x - 1 \mapsto x^k - 1$ turns the polynomial that represents 1 into one that represents all $k$-th roots of 1.
$x - 2$ has the root 2.
$x^2 - 2$ has the roots $\pm\sqrt{2}$, both of which square to 2.
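On the coefficient level, taking k-th roots just spreads the coefficients out with zeros in between (sketch; coefficients low-to-high):

```python
def kth_root(c, k):
    """p(x) -> p(x^k): every root a of p gets replaced by all k-th roots of a.
    c holds the coefficients low-to-high."""
    res = [0] * (k * (len(c) - 1) + 1)
    for i, ci in enumerate(c):
        res[k * i] = ci
    return res

q = kth_root([-2, 1], 2)    # x - 2  ->  x^2 - 2, roots +sqrt(2) and -sqrt(2)
```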
More general arithmetic operations (i.e. adding or multiplying any two algebraic numbers together, rather than just rational numbers) are a bit more involved, though they are a direct generalization of the rational case and make use of the so-called companion matrix. I will come to that shortly. First, let me demonstrate how, if you know what, exactly, a given root is, you can generate "the best" polynomial for it - the simplest one, meaning the monic polynomial (leading coefficient equal to 1) of smallest degree.
First, a method based on what I already have written about.
Find minimal polynomial 1:
Say our desired root is the (in)famous $\frac{\sqrt{5} - 1}{2}$, the reciprocal of the golden ratio.
Let's start with the simplest possible polynomial: $x$.
Add 5: $x - 5$.
Take the square root (note: in this step, two solutions are generated; only one of them will eventually be the one we actually care about): $x^2 - 5$.
Subtract 1: $(x + 1)^2 - 5 = x^2 + 2x - 4$.
Divide by 2: $(2x)^2 + 2(2x) - 4 = 4x^2 + 4x - 4$, or, normalized to be monic again, $x^2 + x - 1$.
This polynomial has the desired root as well as a second one: $\frac{-1 - \sqrt{5}}{2}$.
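These steps can be replayed on coefficient lists (a sketch in plain Python; coefficients low-to-high, each line hand-applies one rational operation from above):

```python
from fractions import Fraction

p = [0, 1]                        # x              represents 0
p = [-5, 1]                       # x - 5          after "add 5"
p = [-5, 0, 1]                    # x^2 - 5        after "take the square root"
p = [-4, 2, 1]                    # x^2 + 2x - 4   after "subtract 1"
p = [-4, 4, 4]                    # 4x^2 + 4x - 4  after "divide by 2"
p = [Fraction(c, 4) for c in p]   # normalized: x^2 + x - 1

# sanity check: (sqrt(5) - 1)/2 is indeed a root of x^2 + x - 1
r = (5 ** 0.5 - 1) / 2
assert abs(r * r + r - 1) < 1e-12
```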
Now a method based on removing square roots:
Find minimal polynomial 2:
Set $x = \frac{\sqrt{5} - 1}{2}$ and isolate the root: $2x + 1 = \sqrt{5}$. Squaring both sides - this is where the second solution appears - gives $(2x + 1)^2 = 5$, i.e. $4x^2 + 4x - 4 = 0$, which again normalizes to $x^2 + x - 1$.
Now that this is established, let's move on to general multiplication and addition, using the
companion matrix.
The Companion Matrix
Given a monic polynomial $p(x) = x^n + c_{n-1} x^{n-1} + \dots + c_1 x + c_0$, the corresponding companion matrix is defined as:

$C(p) = \begin{pmatrix} 0 & 0 & \cdots & 0 & -c_0 \\ 1 & 0 & \cdots & 0 & -c_1 \\ 0 & 1 & \cdots & 0 & -c_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & -c_{n-1} \end{pmatrix}$

This matrix, among other really neat properties, happens to have the original polynomial as its characteristic polynomial. That is, the determinant of the matrix $xI - C(p)$, where $I$ is the identity matrix (1s on the main diagonal) of dimensions equal to those of the companion matrix, equals $p(x)$. (If the polynomial $p$ has degree $n$, then both the companion matrix $C(p)$ and the identity matrix $I$ have dimensions $n \times n$.)
Thus you have a one-to-one correspondence between monic polynomials and companion matrices.
Example: To Be Written
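Until then, a sketch of the correspondence in Python (with numpy; the example polynomial $x^3 - 2x^2 + 3x - 4$ is just an arbitrary pick of mine):

```python
import numpy as np

# Companion matrix of the monic polynomial p(x) = x^3 - 2x^2 + 3x - 4,
# with coefficients c0 = -4, c1 = 3, c2 = -2 (low-to-high):
c = [-4, 3, -2]
n = len(c)
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)      # 1s on the subdiagonal
C[:, -1] = np.negative(c)       # last column: -c0, -c1, -c2

# np.poly(C) computes the characteristic polynomial (monic, highest
# power first) and recovers p:
print(np.poly(C))               # approximately [ 1. -2.  3. -4.]
```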
General Multiplication of Algebraic Numbers
Given two polynomials $p, q$ with their root sets $\{\alpha_1, \dots, \alpha_n\}$ and $\{\beta_1, \dots, \beta_m\}$ and companion matrices $C(p), C(q)$, the roots that are the products of each root of $p$ with each root of $q$ can be found by taking the characteristic polynomial of the tensor product of the companion matrices:
$\det\!\left(xI - C(p) \otimes C(q)\right)$ will have the root set $\{\alpha_i \beta_j \mid 1 \le i \le n,\ 1 \le j \le m\}$.
Example: To Be Written
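In the meantime, a numpy sketch (the example polynomials $x - 2$ and $x^2 - 3$ are my own picks):

```python
import numpy as np

def companion(c):
    """Companion matrix of x^n + c[n-1]*x^(n-1) + ... + c[0] (c low-to-high)."""
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = np.negative(c)
    return C

A = companion([-2])             # x - 2,   root 2
B = companion([-3, 0])          # x^2 - 3, roots +sqrt(3) and -sqrt(3)

prod = np.poly(np.kron(A, B))   # char. polynomial of the tensor product
# roots of `prod` are 2*sqrt(3) and -2*sqrt(3), i.e. the polynomial x^2 - 12
```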
General Addition of Algebraic Numbers
Given two polynomials $p, q$ with their root sets $\{\alpha_1, \dots, \alpha_n\}$ and $\{\beta_1, \dots, \beta_m\}$ and companion matrices $C(p), C(q)$, the roots that are the sums of each root of $p$ with each root of $q$ can be found by taking the characteristic polynomial of the tensor sum of the companion matrices:
$\det\!\left(xI - \left(C(p) \otimes I_m + I_n \otimes C(q)\right)\right)$ will have the root set $\{\alpha_i + \beta_j \mid 1 \le i \le n,\ 1 \le j \le m\}$.
Example: To Be Written
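Again a numpy sketch (the example polynomials $x - 1$ and $x^2 - 2$ are mine):

```python
import numpy as np

def companion(c):
    """Companion matrix of x^n + c[n-1]*x^(n-1) + ... + c[0] (c low-to-high)."""
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = np.negative(c)
    return C

A = companion([-1])             # x - 1,   root 1
B = companion([-2, 0])          # x^2 - 2, roots +sqrt(2) and -sqrt(2)
n, m = A.shape[0], B.shape[0]

S = np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)   # tensor (Kronecker) sum
total = np.poly(S)    # x^2 - 2x - 1, roots 1 + sqrt(2) and 1 - sqrt(2)
```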
Taking integer powers of Algebraic Numbers
If you have paid attention, you will have noticed that the above definition of multiplication won't (just) give rise to powers: since each root of one matrix is paired up with each root of the other, you'll end up with a quadratic growth in the number of values, only a tiny fraction of which will actually be powers of specific roots.
To avoid this, you'll either have to factor out the roots you don't actually want at each step (as far as that is possible - in this particular case, I think it should always work), or you can use the constraints the polynomial sets to only get actual powers (though there might still be duplicates - if you start from $x^2 - 1$, both solutions $\pm 1$ will square to 1).
An example algorithm is given here.
Example (taken from above link):
Suppose you want to find the square $s$ of a root $r$ of the polynomial $x^3 - x - 1$, so $s = r^2$.
The polynomial tells you that $r^3 = r + 1$.
Then $s^3 = (r^3)^2 = (r + 1)^2 = r^2 + 2r + 1 = s + 2r + 1$.
So you get: $s^2 = r^4 = r \cdot r^3 = r^2 + r = s + r$.
Or $r = s^2 - s$.
So $s^3 = s + 2(s^2 - s) + 1 = 2s^2 - s + 1$ and thus $s^3 - 2s^2 + s - 1 = 0$.
This still only has 3 solutions and it won't include anything twice unless something actually squares to the same value.
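A numerical spot-check of such a squaring step, using the cubic $x^3 - x - 1$ as a concrete instance (sketch with numpy):

```python
import numpy as np

r = np.roots([1, 0, -1, -1])    # roots of x^3 - x - 1
s = np.roots([1, -2, 1, -1])    # roots of the squared-roots cubic x^3 - 2x^2 + x - 1

# every r**2 appears among the roots s - same count, no extras
for v in r ** 2:
    assert np.min(np.abs(s - v)) < 1e-8
```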
Using this technique, juggling around with your coefficients, you can also prove that, in general, given $x^2 + px + q$, the polynomial whose roots are the squares of its roots will be $x^2 - (p^2 - 2q)x + q^2$.
Thus, for instance, a mandelbrot-esque formula on all the algebraic numbers which are representable with a degree-2 polynomial $x^2 + px + q$, where you take only rational values for $p$, $q$ and $c$, might look like this:
$(p, q) \mapsto \left(-(p^2 - 2q + 2c),\; q^2 + (p^2 - 2q)c + c^2\right)$
(first square the roots, then add $c$ to both).
I'm not sure if this case is actually interesting (in particular, the fact that in this formula c is always real makes me doubt it a little), but it'd be "correct" in the sense of always staying inside the algebraic numbers, as long as you keep c, p and q rational.
And then there is, of course, always "bending the rules" - what would a complex c do in this case? etc.
The reason I didn't add the case where c itself is represented by a degree-2 polynomial is that general addition, by adding every value to every value, doubles the polynomial degree in each iteration.
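For the rational degree-2 case, one iteration step can be written down exactly (a sketch; the update rule follows from the root sum and product of $x^2 + px + q$):

```python
from fractions import Fraction

def step(p, q, c):
    """One z -> z^2 + c step on the root pair of x^2 + p*x + q:
    square both roots, then shift both by c. All arithmetic stays rational."""
    p2 = p * p - 2 * q                           # sum of the squared roots
    return -(p2 + 2 * c), q * q + p2 * c + c * c

# x^2 - 2 has roots +sqrt(2) and -sqrt(2); one step with c = 1/4 sends
# both roots to 2 + 1/4 = 9/4, the double root of x^2 - (9/2)x + 81/16:
p, q = step(Fraction(0), Fraction(-2), Fraction(1, 4))
```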
________________
So what currently isn't quite clear to me is how to actually "tag" a polynomial root and keep only the solutions that apply to it, following how just that one solution evolves.
There is the notion of a "prime polynomial" (usually called an irreducible polynomial): a monic polynomial which can't be split further into simpler polynomials with integer coefficients. Each group of solutions belongs to only a single such prime polynomial, so one way (the only way?) to get rid of unnecessary solutions is to factor your polynomial into its simplest parts and to keep only the one you care about. But it's not yet clear to me how, in general, to actually decide which prime polynomial is the one you care about.
Example:
$x^2 - 1 = (x - 1)(x + 1)$. Those two factor polynomials are prime.
Meanwhile, $x^2 - 2 = (x - \sqrt{2})(x + \sqrt{2})$, but $\sqrt{2}$ is not an integer, so that is not an allowed factorization. Thus, $x^2 - 2$ already is prime.
This definition isn't completely equivalent to prime numbers. For instance, every integer polynomial $x - n$ is prime, whereas, clearly, not all integers $n$ are prime.