Complex Step Differentiation

The best way to differentiate an analytic function numerically is not to: differentiate it analytically instead, possibly using computer-assisted algebra, and then evaluate the derivative.

But there is a surprising alternative method which is sometimes useful. It relies on a real function's analytic continuation into the complex plane.

The usual Taylor expansion about a point tells us that

f(x+ε)=f(x)+εf'(x)+(1/2)ε²f''(x)+(1/6)ε³f'''(x)+...

where the primes denote differentiation with respect to x. Now set ε=ih, that is, take a small step from the real line into the complex plane.

f(x+ih)=f(x)+ihf'(x)-(1/2)h²f''(x)-(1/6)ih³f'''(x)+...

Now take the imaginary part of both sides, noting that along the real line f(x) and all of its derivatives are real.

Im(f(x+ih))=hf'(x)+O(h³)

Or simply

f'(x)=Im(f(x+ih))/h+O(h²)
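In code the formula is a one-liner. A minimal sketch (the helper name complex_step is ours, not standard):

```python
import cmath
import math

def complex_step(f, x, h=1e-8):
    # f'(x) ~= Im(f(x + ih)) / h, with truncation error O(h^2)
    return f(complex(x, h)).imag / h

# e.g. the derivative of sin at x=1 should be cos(1)
print(complex_step(cmath.sin, 1.0), math.cos(1.0))
```

The f passed in must accept complex arguments, which is why cmath.sin rather than math.sin appears above.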

Python understands complex numbers, and the cmath module defines the exp function for them, so one can repeat the previous example of differentiating x*exp(-x):

[plot of x*exp(-x)]

#!/usr/bin/python
from cmath import exp

def func(x):
    return x * exp(-x)

h = 0.1
x = 0.5

for i in range(16):
    z = complex(x, h)      # step a distance h off the real axis
    print("%e %.9f" % (h, func(z).imag / h))
    h = h / 10

When run this produces

1.000000e-01 0.300740392
1.000000e-02 0.303240058
1.000000e-03 0.303265077
1.000000e-04 0.303265327
1.000000e-05 0.303265330
1.000000e-06 0.303265330
1.000000e-07 0.303265330
1.000000e-08 0.303265330
1.000000e-09 0.303265330
1.000000e-10 0.303265330
1.000000e-11 0.303265330
1.000000e-12 0.303265330
1.000000e-13 0.303265330
1.000000e-14 0.303265330
1.000000e-15 0.303265330
1.000000e-16 0.303265330
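For contrast, a quick sketch (ours, not part of the original differencing example) of what one-sided differencing does at the same point with a very small step, where subtractive cancellation bites:

```python
from cmath import exp
from math import exp as rexp

def f(z):
    return z * exp(-z)

x, h = 0.5, 1e-12
diff = (f(x + h) - f(x)).real / h   # one-sided difference: cancellation
cstep = f(complex(x, h)).imag / h   # complex step: no cancellation
exact = (1 - x) * rexp(-x)          # analytic derivative of x*exp(-x)
print(diff, cstep, exact)
```

The complex-step value matches the analytic derivative essentially to machine precision even at h=1e-12, while the difference quotient has lost several digits.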

And picking the trickier value of x=0.9999 gives

1.000000e-01 -0.001188431
1.000000e-02 0.000024527
1.000000e-03 0.000036669
1.000000e-04 0.000036790
1.000000e-05 0.000036792
1.000000e-06 0.000036792
1.000000e-07 0.000036792
1.000000e-08 0.000036792
1.000000e-09 0.000036792
1.000000e-10 0.000036792
1.000000e-11 0.000036792
1.000000e-12 0.000036792
1.000000e-13 0.000036792
1.000000e-14 0.000036792
1.000000e-15 0.000036792
1.000000e-16 0.000036792

Not only are these results much better than those from the traditional differencing method, but it is easier to justify the choice of h, as the error is expected to be O(h²). If we want an error of ±1e-9, and we think that f'''(x)/6 is around one in magnitude, then h=1e-5 should do, as indeed it does.
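That error estimate can be checked directly, since the analytic derivative of x*exp(-x) is simply (1-x)exp(-x). A sketch:

```python
from cmath import exp
from math import exp as rexp

x, h = 0.9999, 1e-5
z = complex(x, h)
approx = (z * exp(-z)).imag / h      # complex-step derivative
exact = (1 - x) * rexp(-x)           # analytic derivative of x*exp(-x)
print(abs(approx - exact))           # comfortably below 1e-9
```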

To see how good this method is, let us change the format string to "%e %.15f" and try again, adding the correct result at the end.

1.000000e-01 -0.001188431152619
1.000000e-02 0.000024527258178
1.000000e-03 0.000036668978232
1.000000e-04 0.000036790396647
1.000000e-05 0.000036791610831
1.000000e-06 0.000036791622973
1.000000e-07 0.000036791623094
1.000000e-08 0.000036791623096
1.000000e-09 0.000036791623096
1.000000e-10 0.000036791623096
1.000000e-11 0.000036791623095
1.000000e-12 0.000036791623095
1.000000e-13 0.000036791623096
1.000000e-14 0.000036791623096
1.000000e-15 0.000036791623095
1.000000e-16 0.000036791623096
  Analytic   0.0000367916230955018...

The slight variation in the last digit is unsurprising when the true answer lies so close to halfway between two values of that digit.

The method is not perfect, for it assumes that one can evaluate the imaginary part of f(x+ih) accurately when h<<x. This might, or might not, be true.

Python's cmath module contains the normal and hyperbolic trig functions and their inverses, sqrt, exp, log and log10. So one can try much nastier functions, such as f(x)=sin(1/x).

[plot of sin(1/x)]

#!/usr/bin/python
from cmath import sin
from math import cos

def func(x):
    return sin(1 / x)

h = 0.1
x = 0.001

for i in range(16):
    z = complex(x, h)
    # traditional one-sided difference, then the complex step
    print("%e %14.5f %.11g" % (h, (func(x + h) - func(x)).real / h,
                               func(z).imag / h))
    h = h / 10

print(" Analytic    %14.5f %.11g" % (-cos(1 / x) / (x * x), -cos(1 / x) / (x * x)))

(Note that the cmath version of sin returns a complex value, even for real arguments. The differencing result therefore needs to be converted to real with .real before formatting by %14.5f.)

This code tabulates the traditional differencing method followed by the complex step method for this more ill-behaved function at x=0.001.

1.000000e-01      -12.85296 -109472.71925
1.000000e-02      -63.10571 4.4381018713e+44
1.000000e-03    -1294.65135 6.2028198097e+219
1.000000e-04   -17478.76609 4.38878715e+46
1.000000e-05  -130395.49722 -706466224.51
1.000000e-06  -852341.62903 -661879.1109
1.000000e-07  -602687.57639 -563325.12042
1.000000e-08  -566498.36046 -562388.53196
1.000000e-09  -562791.85902 -562379.17085
1.000000e-10  -562420.36356 -562379.07724
1.000000e-11  -562383.20286 -562379.0763
1.000000e-12  -562379.48831 -562379.07629
1.000000e-13  -562379.56292 -562379.07629
1.000000e-14  -562379.52073 -562379.07629
1.000000e-15  -562436.98587 -562379.07629
1.000000e-16  -561989.34395 -562379.07629
 Analytic     -562379.07629 -562379.07629

For "large" values of h this new method is wildly inaccurate. But it eventually settles to full accuracy, whereas the differencing method struggles, achieving reasonable accuracy over only a fairly narrow range of h.
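The blow-up at large h is easy to reproduce: once h is comparable to x, 1/(x+ih) acquires a huge imaginary part, and sin grows exponentially (like sinh) away from the real axis. A sketch:

```python
import cmath

x, h = 0.001, 0.001
z = 1 / complex(x, h)          # = 500-500j: very far from the real axis
print(abs(cmath.sin(z).imag))  # astronomically large: sin ~ sinh off the axis
```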

This complex step method is surprisingly recent: it is generally attributed to a paper by Squire and Trapp, SIAM Review 40, 110 (1998), which is quite readable.

Note that it works with analytic functions only. |x| (i.e. abs(x)) is the most common function which is analytic nowhere, even though its derivative along the real line is well-defined at every point save the origin.
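A quick sketch of what goes wrong: for z = x+ih, abs(z) = sqrt(x²+h²) is purely real, so the complex step reports a derivative of zero everywhere, regardless of x:

```python
x, h = 2.0, 1e-8
cs = abs(complex(x, h)).imag / h   # abs() of a complex is a real float
print(cs)                          # 0.0, but d|x|/dx at x=2 is 1
```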