Title: Algebraic Numbers
Post by: kram1032 on August 13, 2015, 12:00:51 AM

So I've been interested in the so-called ring of algebraic numbers for quite a while now, but only recently did I find a document that doesn't just talk about them abstractly but actually explains how to do simple arithmetic with them. Once I grasped that description, I got a bit of a feeling for the behavior of this number system, and what I found was interesting, though I'm still learning all the intricacies. Here is what I found out:

First off, "What are algebraic numbers?"

This number system can represent a large class of complex numbers - even a ton of irrational ones - exactly, that is, without floating point errors. To be more precise, anything you can finitely represent by sums of products of nested n-th roots can be expressed neatly with algebraic numbers. They are essentially an extension of the rational numbers, and they contain a subring (the algebraic integers) which is an extension of the integers.

"How do they do this?"

The trick is to represent each number as a root of some integer- or rational-valued univariate polynomial. So the general representation of an algebraic number might look like this:

z = root_k( c_n x^n + c_{n-1} x^{n-1} + ... + c_1 x + c_0 ),  with all c_i rational.

A polynomial of degree n, as you probably know, has precisely n roots (counting multiplicity). The k denotes which of those roots you currently care about. It is not strictly necessary, in the sense that any transformation you do on one root of an algebraic number's polynomial is done on all of them at once. (In this actually lies a bit which I'm still unsure about. More on that later.)

Example 1: The monic polynomial x - 2 has the single root 2. The monic polynomial x - 2/3 has the single root 2/3. The polynomial 3x - 2 has that same root, but with integer coefficients.

Thus far I have only shown degree-1 polynomials with a single, directly visible solution. Since only rational numbers are allowed as coefficients, degree-1 polynomials can only represent rational numbers. The magic starts to happen once we go up a degree.

Example 2: The monic polynomial x^2 - 2 has the two roots ±sqrt(2). More generally, the monic polynomial x^2 - a has the two roots ±sqrt(a). The monic polynomial x^3 - a has the three complex cube roots of a. Etc.

Ok, I think by now you get the idea. That much I had already known for a long time. So now to the more mysterious bit: how to do math?

Delving into this, I noticed something rather curious: aside from rational-valued manipulations of the polynomials (i.e. shifting or multiplying the roots of a given polynomial by a rational constant), inverse operations are generally easier to do than direct ones.

Multiply by a rational factor r:

c_n x^n + c_{n-1} x^{n-1} + ... + c_1 x + c_0  becomes  c_n x^n + c_{n-1} r x^{n-1} + ... + c_1 r^{n-1} x + c_0 r^n

or, if sum notation is familiar enough, perhaps more readably:

sum_{i=0}^{n} c_i x^i  ->  sum_{i=0}^{n} c_i r^{n-i} x^i

Example: Let's multiply the roots of x^2 - 2 by 3: we get x^2 - 2*3^2 = x^2 - 18, with the roots ±3 sqrt(2).

Add a rational number r:

Substitute x - r for x:

sum_{i=0}^{n} c_i x^i  ->  sum_{i=0}^{n} c_i (x - r)^i

Example: Let's take the same polynomial as above but, this time, add 3: (x - 3)^2 - 2 = x^2 - 6x + 7, with the roots 3 ± sqrt(2).

Multiplicative inverse, i.e. z -> z^(-1):

Simply reverse the order of the coefficients:

sum_{i=0}^{n} c_i x^i  ->  sum_{i=0}^{n} c_{n-i} x^i

Example: x^2 - 6x + 7 becomes 7x^2 - 6x + 1, with the roots 1/(3 ± sqrt(2)) = (3 ∓ sqrt(2))/7.

Remark 1: The fact that reversing the order of the coefficients inverts the value of the roots implies that (anti-)palindromic polynomials (https://en.wikipedia.org/wiki/Palindromic_polynomial), if they have a root z, also have the root 1/z.

Example: x^2 - 3x + 1 is palindromic; its roots (3 ± sqrt(5))/2 are reciprocals of each other.

Remark 2: As mentioned, all operations you do on the polynomial to modify one of the roots are simultaneously applied to all roots. This means that, if 0 is one of the roots of the given polynomial, the above operation would be ill-defined. And indeed, if your polynomial has 0 as a root at least once, the coefficient of the constant part is c_0 = 0, so reversing the coefficients would kill the leading term and drop the degree - the inverse of that root simply doesn't exist.

Inverse integer powers, i.e. z -> z^(1/k):

Substitute x^k for x:

sum_{i=0}^{n} c_i x^i  ->  sum_{i=0}^{n} c_i x^{ki}

This operation won't give you just the principal root but, in fact, all k complex roots in a single go. (A code sketch of all four operations follows below.)
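To make these four operations concrete, here is a small Python sketch of my own (the function names are just mine, not from any library or from the document I mentioned); it stores a polynomial as a list of Fraction coefficients, constant term first:

[code]
from fractions import Fraction
from math import comb

def scale_roots(c, r):
    # roots -> r * roots:  c_i -> c_i * r^(n-i)
    n = len(c) - 1
    return [ci * Fraction(r) ** (n - i) for i, ci in enumerate(c)]

def shift_roots(c, r):
    # roots -> roots + r:  substitute x -> x - r and expand (x - r)^i
    out = [Fraction(0)] * len(c)
    for i, ci in enumerate(c):
        for j in range(i + 1):
            out[j] += ci * comb(i, j) * Fraction(-r) ** (i - j)
    return out

def invert_roots(c):
    # roots -> 1 / roots:  reverse the coefficient order
    if c[0] == 0:
        raise ValueError("0 is a root and has no inverse")
    return c[::-1]

def kth_roots(c, k):
    # roots -> all k-th roots of the roots:  substitute x -> x^k
    out = [Fraction(0)] * (k * (len(c) - 1) + 1)
    for i, ci in enumerate(c):
        out[k * i] = ci
    return out
[/code]

For instance, with p = [Fraction(-2), Fraction(0), Fraction(1)] (i.e. x^2 - 2), shift_roots(p, 3) returns [7, -6, 1], i.e. the x^2 - 6x + 7 from the example above, and invert_roots of that gives [1, -6, 7], i.e. 7x^2 - 6x + 1.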
We've already seen the x -> x^k substitution in a simple case above, by the way:

Example: applying x -> x^2 to x - 2 (single root 2) gives x^2 - 2, whose roots ±sqrt(2) are the two square roots of 2.

More general arithmetic operations (i.e. adding or multiplying any two algebraic numbers together, rather than just rational numbers) are a bit more involved, though they are a direct generalization of the rational case and make use of the so-called companion matrix (https://en.wikipedia.org/wiki/Companion_matrix). I will come to that shortly. First, let me demonstrate how, if you know what, exactly, a given root is, you can generate "the best" (as in simplest [as in the one of smallest degree]) monic (as in the coefficient associated with the largest exponent being 1) polynomial that has it as a root - its so-called minimal polynomial.

First, a method based on what I have already written about.

Find minimal polynomial 1:

Say our desired root is the (in)famous (sqrt(5) - 1)/2, the reciprocal of the golden ratio. Let's start with the simplest possible polynomial:

x, with the single root 0.

Add 5:

x - 5, with the single root 5.

Take the square root (note: in this step, two solutions are generated; only one of them will eventually be the one we actually technically care about):

x^2 - 5, with the roots ±sqrt(5).

Subtract 1:

(x + 1)^2 - 5 = x^2 + 2x - 4, with the roots -1 ± sqrt(5).

Divide by 2:

(2x)^2 + 2(2x) - 4 = 4x^2 + 4x - 4, or, made monic again, x^2 + x - 1, with the roots (-1 ± sqrt(5))/2.

This polynomial has the desired root (sqrt(5) - 1)/2 ≈ 0.618 as well as a second one, -(sqrt(5) + 1)/2 ≈ -1.618.

Now a method based on removing square roots:

Find minimal polynomial 2:

x = (sqrt(5) - 1)/2
2x + 1 = sqrt(5)
(2x + 1)^2 = 5
4x^2 + 4x + 1 = 5
x^2 + x - 1 = 0

Same polynomial as before.

Now that this is established, let's move on to general multiplication and addition, using the companion matrix (https://en.wikipedia.org/wiki/Companion_matrix).

The Companion Matrix

Given a monic polynomial p(x) = x^n + c_{n-1} x^{n-1} + ... + c_1 x + c_0, the corresponding companion matrix is defined as:

C(p) =
[ 0 0 ... 0 -c_0     ]
[ 1 0 ... 0 -c_1     ]
[ 0 1 ... 0 -c_2     ]
[ :    .     :       ]
[ 0 0 ... 1 -c_{n-1} ]

This matrix, among other really neat properties, happens to have the original polynomial as its characteristic polynomial. That is, the determinant of the matrix x*I - C(p) is exactly p(x). Thus you have a 1-1 correspondence between monic polynomials and companion matrices.

Example: for p(x) = x^2 + x - 1 from above,

C(p) =
[ 0  1 ]
[ 1 -1 ]

and det(x*I - C(p)) = x(x + 1) - 1 = x^2 + x - 1.

General Multiplication of Algebraic Numbers

Given two polynomials p and q with companion matrices A = C(p) and B = C(q), the Kronecker product A ⊗ B has as its eigenvalues exactly the pairwise products of the roots of p with the roots of q. So the characteristic polynomial of A ⊗ B has all those products as its roots.

Example: to be written (though see the code sketch near the end of this post).

General Addition of Algebraic Numbers

Given two polynomials p and q with companion matrices A and B, the Kronecker sum A ⊗ I + I ⊗ B has as its eigenvalues exactly the pairwise sums of the roots. Its characteristic polynomial accordingly has all those sums as its roots.

Example: to be written (again, see the code sketch near the end of this post).

Taking integer powers of Algebraic Numbers

If you have paid attention, you will have noticed that the above definition of multiplication won't (just) give rise to powers: since each root of one matrix is paired up with each root of the other, you end up with quadratic growth in the number of values, only a tiny fraction of which will actually be powers of specific roots. To avoid this, you either have to factor out the roots you don't actually want at each step (as far as that is possible - in this particular case, I think it should always work), or you can use the constraints the polynomial sets to get only actual powers (though there might still be duplicates - if you start from a polynomial where two roots, say ±sqrt(2), square to the same value, that value will show up as a double root).

An example algorithm is given here (http://math.stackexchange.com/a/216860/49989).

Example (following the above link): Suppose you want to find the square s of a root r of the polynomial x^3 + a x^2 + b x + c.

The polynomial tells you that

r^3 = -(a r^2 + b r + c).

Then, with s = r^2,

r s = r^3 = -(a s + b r + c).

So you get:

r (s + b) = -(a s + c).

Or

r = -(a s + c) / (s + b).

So

s = r^2 = (a s + c)^2 / (s + b)^2,  i.e.  s (s + b)^2 = (a s + c)^2,

which expands to the new cubic

s^3 + (2b - a^2) s^2 + (b^2 - 2ac) s - c^2 = 0.

This still only has 3 solutions, and it won't include anything twice unless something actually squares to the same value.

Using this technique, juggling around with your coefficients, you can also prove that, in general, if z is a root of a degree-2 polynomial x^2 + p x + q with rational p and q, then z^2 + c for rational c is again a root of a rational degree-2 polynomial. Thus, for instance, a mandelbrot-esque formula z -> z^2 + c on all the algebraic numbers which are representable with a degree-2 polynomial, where you take only rational values for c, never leaves that set. I'm not sure if this case is actually interesting (in particular, the fact that in this formula c is always real makes me doubt that a little), but it'd be "correct" in the sense of always staying inside the algebraic number field, as long as you keep c, p and q rational. And then there is, of course, always "bending the rules" - what would a complex c do in this case? Etc.
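Since the examples above are still to be written, here is at least a numerical sanity check of the companion matrix machinery in Python (numpy is my choice here, and np.poly recovers the characteristic polynomial only in floating point, so treat this as a sketch of the idea rather than exact arithmetic):

[code]
import numpy as np

def companion(c):
    # companion matrix of the monic polynomial
    # x^n + c[n-1]*x^(n-1) + ... + c[1]*x + c[0]
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = -np.asarray(c, dtype=float)
    return C

A = companion([-2, 0])   # x^2 - 2, roots +-sqrt(2)
B = companion([-3, 0])   # x^2 - 3, roots +-sqrt(3)

# pairwise products of roots: eigenvalues of the Kronecker product
products = np.kron(A, B)
# pairwise sums of roots: eigenvalues of the Kronecker sum
sums = np.kron(A, np.eye(2)) + np.kron(np.eye(2), B)

print(np.poly(products))  # ~ [1, 0, -12, 0, 36]  ->  (x^2 - 6)^2
print(np.poly(sums))      # ~ [1, 0, -10, 0, 1]   ->  x^4 - 10x^2 + 1
[/code]

So sqrt(2)*sqrt(3) shows up as a (double) root of x^2 - 6, and sqrt(2)+sqrt(3) as a root of x^4 - 10x^2 + 1, together with all the other sign combinations ±sqrt(2) ± sqrt(3).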
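If you have sympy at hand, you can also double-check the squaring derivation symbolically with a resultant, which eliminates r from the two equations p(r) = 0 and s = r^2. This is just my own verification sketch; up to sign conventions it reproduces the cubic in s derived above:

[code]
from sympy import symbols, resultant, expand

r, s, a, b, c = symbols('r s a b c')
p = r**3 + a*r**2 + b*r + c

# eliminate r from { p(r) = 0, s - r^2 = 0 }
print(expand(resultant(p, s - r**2, r)))
# equals s^3 + (2b - a^2)*s^2 + (b^2 - 2ac)*s - c^2
[/code]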
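And here is the degree-2 mandelbrot-esque iteration done exactly. Any root of a rational quadratic can be written as u + v*sqrt(d) with rational u, v, d, and z^2 + c stays in that form (with d fixed along the orbit); this little sketch, again my own illustration, iterates without any floating point error:

[code]
from fractions import Fraction

def step(u, v, d, c):
    # z = u + v*sqrt(d)  ->  z^2 + c = (u^2 + v^2*d + c) + (2*u*v)*sqrt(d)
    return u * u + v * v * d + c, 2 * u * v

u, v = Fraction(1), Fraction(1)    # start at z = 1 + sqrt(2)
d, c = Fraction(2), Fraction(-1)   # fixed sqrt(2), rational c = -1
for _ in range(4):
    u, v = step(u, v, d, c)
    print(f"{u} + {v}*sqrt(2)")
[/code]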
The reason I didn't add the case where c itself is represented by a degree-2 polynomial is that adding every root to every root doubles the polynomial degree in each iteration.

________________

So what currently isn't quite clear to me is how I actually "tag" a single polynomial root and keep only the solutions that belong to it, following how just that one solution evolves. There is the notion of a "prime polynomial": a monic polynomial which can't be split further into simpler polynomials with integer coefficients. Each group of solutions belongs to exactly one such prime polynomial, so one way (the only way?) to get rid of unnecessary solutions is to factor your polynomials into their simplest parts and to keep only the factor you care about. But it's not yet clear to me how, in general, to actually decide which prime polynomial is the one you care about.

Example: x^4 - 5x^2 + 6 splits into the prime polynomials (x^2 - 2)(x^2 - 3), so if the root you're following is sqrt(2), you keep x^2 - 2. Meanwhile, x^4 - 10x^2 + 1 (with the four roots ±sqrt(2) ± sqrt(3)) is itself prime, so factoring alone can't separate sqrt(2) + sqrt(3) from its three conjugates.

This definition isn't completely equivalent to prime numbers. For instance, every monic degree-1 polynomial is automatically prime in this sense, no matter which rational number it has as its root.
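For the factoring step itself, sympy can already do the splitting into prime (irreducible) polynomials, though the "which factor do I care about?" question of course remains. A quick sketch:

[code]
from sympy import symbols, factor_list

x = symbols('x')

# splits into the prime polynomials x^2 - 2 and x^2 - 3
print(factor_list(x**4 - 5*x**2 + 6))

# already prime: the four roots +-sqrt(2) +- sqrt(3) stick together
print(factor_list(x**4 - 10*x**2 + 1))
[/code]

(A common practical trick, for what it's worth: carry a floating-point approximation of the root you're following alongside the exact data, and keep the factor that nearly vanishes on it.)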