Title: The problem with finite-difference
Post by: TruthSerum on June 29, 2014, 08:46:21 PM

This shows the shortcomings of the finite-difference technique:
    d/dx f(x) ≈ [f(x+h) - f(x)] / h

You can never make your step size h small enough to really capture the derivative before cancellation occurs: in floating-point arithmetic, f(x+h) rounds to f(x). The figure makes this quite obvious. Here n_0 is what you want and n_1 is what you get.

(http://i.imgur.com/QslTgW9.png)
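A quick C++ sketch makes the effect visible. This is only an illustration, with sin standing in for f:

Code:
#include <cstdio>
#include <cmath>

// The function under test and its exact derivative.
double f(double x)      { return std::sin(x); }
double fprime(double x) { return std::cos(x); }

int main() {
    const double x = 1.0;
    // Shrinking h: the truncation error falls at first, but once
    // f(x+h) and f(x) agree in most of their bits, cancellation
    // dominates and the estimate gets worse again.
    for (double h = 1e-1; h >= 1e-15; h /= 10.0) {
        double approx = (f(x + h) - f(x)) / h;
        std::printf("h = %.0e   error = %.3e\n", h, std::fabs(approx - fprime(x)));
    }
    return 0;
}

With doubles the error typically bottoms out near h ≈ 1e-8 (roughly the square root of machine epsilon) and then climbs again.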
Title: Re: The problem with finite-difference
Post by: Roquen on July 01, 2014, 03:38:16 PM

Dual numbers are fun.

Title: Re: The problem with finite-difference
Post by: TruthSerum on July 01, 2014, 10:40:00 PM

Automatic differentiation appears to be useful when you have functions that are composed of other functions in a way known only at runtime.
Let's say you want to acquire the derivative of the composition f(g(x)):

    d/dx f(g(x)) = f'(g(x)) g'(x)   by the chain rule

Then instead of trying to compute the derivative by one finite difference,

    d/dx f(g(x)) ≈ [f(g(x+h)) - f(g(x))] / h

you attack the individual functions. Let u = g(x); then

    f'(u) ≈ [f(u+h) - f(u)] / h
    g'(x) ≈ [g(x+h) - g(x)] / h

so that

    f'(u) g'(x) ≈ [f(u+h) - f(u)] / h * [g(x+h) - g(x)] / h

Putting back u = g(x), we find that

    d/dx f(g(x)) ≈ [f(g(x)+h) - f(g(x))] / h * [g(x+h) - g(x)] / h

For a practical example, let's compute the derivative of sin(cos(x)). Instead of

    [sin(cos(x+h)) - sin(cos(x))] / h

we now have

    d/dx sin(cos(x)) ≈ [sin(cos(x)+h) - sin(cos(x))] / h * [cos(x+h) - cos(x)] / h

and at no point did we have to calculate sin(cos(x+h)), so it should be more accurate, at the cost of extra operations. And of course this works for any number of chained functions f_1(f_2(...f_n(x)...)). (A sketch comparing the two estimates is at the end of this post.)

As Syntopia points out in his article on dual numbers (http://blog.hvidtfeldts.net/index.php/2011/12/distance-estimated-3d-fractals-vii-dual-numbers/), this could be implemented using overloaded operators in C++, and I could see how this would be useful in some cases, but you'll still get the original cancellation problem for functions with a large derivative. That is, unless you go down the route of having truncated Taylor series approximations available for every math function that your program uses, which is apparently what some of the libraries listed on the Wikipedia article (http://en.wikipedia.org/wiki/Automatic_differentiation) are doing. If that is the case, why not have the full, exact, analytic derivative available instead?

I do not find the dual number framework especially elegant. In particular, they are not associative:

    ((1/e)e)e = e, but (1/e)(e*e) = 0

Now how about symbolic differentiation? Does any fractal software implement this?
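Here is the sketch mentioned above, comparing the factored estimate with the plain one (the names chain_fd and plain_fd are mine, purely for illustration):

Code:
#include <cstdio>
#include <cmath>

// Factored finite-difference estimate of d/dx sin(cos(x)).
// Each difference quotient is taken per function, so
// sin(cos(x+h)) is never evaluated.
double chain_fd(double x, double h) {
    double u  = std::cos(x);                          // u = g(x)
    double df = (std::sin(u + h) - std::sin(u)) / h;  // ~ f'(u)
    double dg = (std::cos(x + h) - std::cos(x)) / h;  // ~ g'(x)
    return df * dg;
}

// Plain finite-difference estimate, for comparison.
double plain_fd(double x, double h) {
    return (std::sin(std::cos(x + h)) - std::sin(std::cos(x))) / h;
}

int main() {
    const double x = 1.0;
    // Exact derivative: d/dx sin(cos(x)) = -cos(cos(x)) sin(x)
    const double exact = -std::cos(std::cos(x)) * std::sin(x);
    for (double h = 1e-4; h >= 1e-12; h /= 100.0) {
        std::printf("h = %.0e   chain error = %.3e   plain error = %.3e\n",
                    h,
                    std::fabs(chain_fd(x, h) - exact),
                    std::fabs(plain_fd(x, h) - exact));
    }
    return 0;
}

Each factor still cancels in its own difference quotient, so this postpones rather than removes the problem, which is the caveat above about functions with a large derivative.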
Title: Re: The problem with finite-difference
Post by: Roquen on July 02, 2014, 12:05:16 PM

Yeah, AD is mostly for functions unknown at compile time, built from known building blocks. Your description doesn't match how dual numbers work, however. They have the same structure as complex numbers, but with ε² = 0 instead of i² = -1.

A quick web search gave me: https://cwzx.wordpress.com/2014/05/31/automatically-computing-the-derivatives-of-a-vector-field-using-dual-numbers/

Quote: "I do not find the dual number framework especially elegant. In particular, they are not associative"
Humm: I want to think all the identities of real-valued functions hold.

Quote: "This is unless you go down the route of having truncated Taylor series approximations..."
Not needed for dual numbers, but that aside, truncated power series are terrible. Something like minimax is way better.

Title: Re: The problem with finite-difference
Post by: Syntopia on July 02, 2014, 03:58:33 PM

Quote: "I do not find the dual number framework especially elegant. In particular, they are not associative: ((1/e)e)e = e, but (1/e)(e*e) = 0"
Dual numbers are associative. But division is not well-defined for dual numbers. In particular, (1/e) does not exist. See 'division' under: en.wikipedia.org/wiki/Dual_number

Quote: "This shows the shortcomings of the finite-difference technique: d/dx f(x) ≈ [f(x+h) - f(x)] / h. You can never make your step size h small enough to really capture the derivative before cancellation occurs: f(x+h) = f(x)."
I am not sure I understand this. Are you referring to the inherent resolution limit of floating-point numbers on a computer?

Title: Re: The problem with finite-difference
Post by: TruthSerum on July 02, 2014, 06:14:25 PM

Quote: "Are you referring to the inherent resolution limit of floating-point numbers on a computer?"
Yes. Of course it is exact, in theory, in the limit h → 0; the problem only appears in floating point. Is this what you had in mind?

Quote: "Something like minimax is way better."
Thanks, I'll look at this in future.
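For completeness, the machinery Roquen and Syntopia describe fits in a few lines of C++. This is only a minimal sketch, with a hand-rolled Dual type carrying just the operations needed for sin(cos(x)); real AD libraries lift the whole math library this way:

Code:
#include <cstdio>
#include <cmath>

// Minimal dual number a + b*eps with eps^2 = 0. Propagating b
// alongside a gives f(a + b*eps) = f(a) + f'(a)*b*eps exactly.
struct Dual {
    double a;  // value
    double b;  // derivative coefficient
};

Dual operator+(Dual x, Dual y) { return {x.a + y.a, x.b + y.b}; }
Dual operator*(Dual x, Dual y) {
    // (x.a + x.b eps)(y.a + y.b eps) = x.a y.a + (x.a y.b + x.b y.a) eps
    return {x.a * y.a, x.a * y.b + x.b * y.a};
}

// Lifted math functions: value plus exact chain-rule factor.
Dual sin(Dual x) { return {std::sin(x.a), std::cos(x.a) * x.b}; }
Dual cos(Dual x) { return {std::cos(x.a), -std::sin(x.a) * x.b}; }

int main() {
    double x = 1.0;
    Dual d = sin(cos(Dual{x, 1.0}));  // seed b = 1 to get d/dx
    double exact = -std::cos(std::cos(x)) * std::sin(x);
    std::printf("dual: %.15f   exact: %.15f\n", d.b, exact);
    return 0;
}

Note there is no h anywhere: each lifted function applies the chain rule exactly, so the cancellation of the difference quotient never enters; only the ordinary rounding of each floating-point operation remains.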