5.4 Series expansion, Taylor Series


5.4.1 Coefficients of the Taylor Series

In many cases it is useful to analyze, instead of a function f(x), a series that approximates it. This is especially true if the series converges to the function without restrictions. The partial sums of the series can then be considered as approximations of increasing accuracy.

For the terms of the sequence that make up the series one will preferably use functions that can be differentiated and integrated easily. Especially suitable are series whose terms are powers or trigonometric functions of the variable. The first case leads to the Taylor series, whose coefficients are obtained via differentiation and which we will study more closely in the following. The second case leads to the Fourier series, which we will visualize after treating the integral, since its coefficients are determined via integration.

Another argument for the choice of a particular series expansion can be to use, for the terms of the series, functions that are particularly adapted to the symmetry of the problem described by the function, e.g. Bessel functions for cylindrical symmetry and spherical harmonics for point symmetry.

The Taylor series is an infinite series whose partial sums approximate the function y = f(x): the approximation is exact at the point x0 and approximate in the vicinity of x = x0, and the interval of acceptable approximation becomes larger with increasing index of the partial sum. The terms of the sequence that constitutes the series are powers of the distance (x - x0) from the computation point. Thus the function is approximated by a power series, and the problem consists of finding the coefficients of the individual terms.

To achieve this we first formally equate the function to a power series with terms a_n (x - x_0)^n and unknown coefficients a_n. Then we differentiate both sides repeatedly. After each step we set x = x_0. Thus all powers containing x - x_0 drop out of the power series for the respective derivative, and the coefficient of the remaining term can be read off easily:

\[
\begin{aligned}
\text{ansatz:}\quad f(x) &= \sum_{n=0}^{\infty} a_n (x-x_0)^n = a_0 + a_1(x-x_0) + a_2(x-x_0)^2 + a_3(x-x_0)^3 + \dots,
&\quad x=x_0:\; a_0 &= f(x_0)\\
f'(x) &= a_1 + 2a_2(x-x_0) + 3a_3(x-x_0)^2 + 4a_4(x-x_0)^3 + \dots,
&\quad x=x_0:\; a_1 &= \frac{f'(x_0)}{1}\\
f''(x) &= 1\cdot 2\,a_2 + 2\cdot 3\,a_3(x-x_0) + 3\cdot 4\,a_4(x-x_0)^2 + \dots,
&\quad x=x_0:\; a_2 &= \frac{f''(x_0)}{1\cdot 2}\\
f'''(x) &= 1\cdot 2\cdot 3\,a_3 + 2\cdot 3\cdot 4\,a_4(x-x_0) + \dots,
&\quad x=x_0:\; a_3 &= \frac{f'''(x_0)}{1\cdot 2\cdot 3}\\
f^{(n)}(x) &= n!\,a_n + \dots,
&\quad x=x_0:\; a_n &= \frac{f^{(n)}(x_0)}{n!}
\end{aligned}
\]

Thus the coefficient of the n-th power is proportional to the n-th derivative of the function at the computation point, and the factorial in the denominator simply follows from repeatedly differentiating the n-th power. The Taylor series of the function is then, with 0! = 1, 1! = 1 and f^{(0)}(x_0) = f(x_0):

\[
f(x) = \frac{f(x_0)}{0!} + \frac{f'(x_0)}{1!}(x-x_0) + \frac{f''(x_0)}{2!}(x-x_0)^2 + \frac{f'''(x_0)}{3!}(x-x_0)^3 + \dots
\]

\[
\text{Taylor series:}\quad f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}\,(x-x_0)^n
\]

\[
\begin{aligned}
\text{zeroth approximation:}&\quad f(x) \approx f(x_0)\\
\text{first approximation, linear in } x:&\quad f(x) \approx f(x_0) + f'(x_0)(x-x_0)\\
\text{second approximation, quadratic in } x:&\quad f(x) \approx f(x_0) + f'(x_0)(x-x_0) + \frac{f''(x_0)}{2}(x-x_0)^2
\end{aligned}
\]
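As a small numerical illustration (a minimal Java sketch, not part of the book's simulations; class and method names are chosen here for the example): the partial sums of the Taylor series of sin x around x_0 = 0 can be summed term by term and compared with the library value. The derivatives of sin at 0 cycle through 0, 1, 0, -1.

```java
// Minimal sketch: partial sums of the Taylor series of sin(x) around x0 = 0.
public class TaylorSinDemo {
    // n-th derivative of sin at 0 cycles through 0, 1, 0, -1
    static double sinDerivativeAtZero(int n) {
        switch (n % 4) {
            case 0:  return 0.0;
            case 1:  return 1.0;
            case 2:  return 0.0;
            default: return -1.0;
        }
    }

    // partial sum of the Taylor series up to and including order N
    static double taylorSin(double x, int N) {
        double sum = 0.0;
        double term = 1.0;            // (x - x0)^n / n!, built up incrementally
        for (int n = 0; n <= N; n++) {
            sum += sinDerivativeAtZero(n) * term;
            term *= x / (n + 1);      // next power and next factorial in one step
        }
        return sum;
    }

    public static void main(String[] args) {
        double x = 1.0;
        for (int N = 0; N <= 9; N += 3) {
            System.out.printf("order %d: %.6f (exact %.6f)%n", N, taylorSin(x, N), Math.sin(x));
        }
    }
}
```

Already for N = 9 the partial sum agrees with Math.sin(1.0) to about six decimal places, illustrating how the approximation improves with the order.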

While f'(x) = df/dx(x) describes the slope of a differentiable function at each point x of its domain of definition, f''(x), i.e. d²f/dx²(x), describes the slope of the slope, or the change of the slope of f(x). The slope changes if and only if the curve f(x) has a curvature. Therefore f''(x) is a measure of the curvature of f(x). If one identifies x with the time t and y = f(t) with the distance traveled by an object during the time t, the first derivative is called the velocity and the second derivative the acceleration of the object that is at time t at position x.
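A familiar illustration of these two derivatives (a standard example, not taken from the simulation): for uniformly accelerated motion the distance traveled is quadratic in time,

\[
s(t) = s_0 + v_0\,t + \tfrac{1}{2}\,a\,t^2,
\qquad \dot{s}(t) = v_0 + a\,t \;\;\text{(velocity)},
\qquad \ddot{s}(t) = a \;\;\text{(acceleration)}.
\]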

The first approximation of the Taylor series takes into account the slope of the function at the computation point, the second one in addition its curvature. The higher approximations use higher derivatives and it makes sense to also visualize their meaning.


Figure 5.1: Derivatives of a given fundamental function (blue, chosen on the left-hand side) up to 9th order, drawn in the colours of the choice boxes above. For the red computation point, which can be moved with the mouse, the values of the derivatives are given in the number fields on the left. The picture shows the derivatives of sin(x)/x.

In the simulation of Fig5.1 the derivatives up to 9th order are calculated for a function that can be chosen from 9 given options; they are shown as coloured curves in an abscissa region that depends on the function and may also have a shifted origin. With the choice boxes on top the derivatives to be plotted in addition to the function can be selected; all 9 are shown in the picture. For the red point, which can be moved with the mouse, the local values of the derivatives are recalculated and displayed in the number fields on the left.

The derivatives are approximated numerically as difference quotients using both neighboring points:

\[
y'(x) \approx \frac{y(x+\Delta x) - y(x-\Delta x)}{2\,\Delta x},
\qquad
y''(x) \approx \frac{y'(x+\Delta x) - y'(x-\Delta x)}{2\,\Delta x}
\approx \frac{y(x+2\Delta x) - 2\,y(x) + y(x-2\Delta x)}{4\,\Delta x^2},
\qquad \dots
\]
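These formulas can also be tried outside the simulation. A minimal Java sketch (names chosen here for the example), assuming y(x) = sin x as a test function whose exact derivatives cos x and -sin x are known:

```java
// Minimal sketch of the central-difference formulas quoted above, applied to y(x) = sin(x).
public class CentralDifferenceDemo {
    static double y(double x) { return Math.sin(x); }

    // first derivative: y'(x) ~ [y(x+dx) - y(x-dx)] / (2 dx)
    static double d1(double x, double dx) {
        return (y(x + dx) - y(x - dx)) / (2 * dx);
    }

    // second derivative: y''(x) ~ [y(x+2dx) - 2 y(x) + y(x-2dx)] / (4 dx^2)
    static double d2(double x, double dx) {
        return (y(x + 2 * dx) - 2 * y(x) + y(x - 2 * dx)) / (4 * dx * dx);
    }

    public static void main(String[] args) {
        double x = 0.7, dx = 1e-3;
        System.out.printf("y'(x)  = %.8f (exact %.8f)%n", d1(x, dx), Math.cos(x));
        System.out.printf("y''(x) = %.8f (exact %.8f)%n", d2(x, dx), -Math.sin(x));
    }
}
```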

You will find further details about this in the description pages of the simulation.

Parser

In many simulations contained in this book it is possible to enter formulas for functions directly in mathematical notation. For the program these functions are initially strings without meaning, which have to be interpreted by an additional program, a parser, and translated into Java code. This is a relatively complex process. If the function is only translated once for a simulation, the required time is of no concern, and when using the parser of EJS one has the advantage of being able to change the function or enter a new one without having to open and edit the program itself.

The determination of the higher derivatives with sufficient accuracy requires a considerable computational effort. In the example of Fig5.1 the function has to be evaluated 10 000 times for one computation, which puts a strain on the computing speed of a simple PC. Therefore the functions are predefined in our example. If you want to analyze other functions you may open the simulation with the EJS console and change the simple Java code of the preset functions.

In the upcoming simulations of Fig5.2 and Fig5.3 the approximations for the derivatives are calculated once without and once with the parser, and you will recognize the difference in computation speed from these examples.

Convergence of the Taylor series

It should not be taken for granted that the power series approaches the function also for values of x away from the computation point x0. During the discussion of the exponential function, whose power series bears a great similarity to the Taylor series, we had already established that such a series also converges in the vicinity of the computation point if the factors attached to its terms do not diverge. For the Taylor series these factors are the derivatives at the computation point. If the function can be differentiated as many times as desired, the Taylor series converges for all values of the variable, provided the derivatives at the computation point are suitably bounded: the n-th derivatives must grow more slowly than n!. For sin x, for example, all derivatives are bounded by 1 in absolute value, so each term is bounded by |x - x0|^n / n! and the series converges for every x by comparison with the exponential series.

For many functions that are important in physics, such as polynomials, the exponential function, sine and cosine, the domain of convergence is unlimited. With increasing order, i.e. number of terms, the corresponding Taylor series approximates the original function over a larger and larger interval, and the domain of small deviations becomes larger and larger. In practice one normally uses a partial sum of finite order; the partial sum is then identical to the function at the computation point and deviates increasingly from the function with growing distance from it.

It is amusing to ask for the Taylor series of the exponential function. Since all its derivatives are equal to the function itself, the Taylor series coincides with the exponential series.
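Written out as a short check, using only the series formula above: every derivative of e^x equals e^x, so at x_0 = 0 all f^{(n)}(0) = 1 and

\[
e^x = \sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}\,x^n = \sum_{n=0}^{\infty}\frac{x^n}{n!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \dots
\]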

The power function of degree n has non-vanishing derivatives only up to order n. In this case the Taylor series terminates after the (n+1)-st term. Its Taylor expansion is thus identical to the original function.
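A concrete check, worked out here for illustration: expanding f(x) = x^2 around x_0 = 1 gives f(1) = 1, f'(1) = 2, f''(1) = 2, and all higher derivatives vanish, so the series terminates after three terms:

\[
x^2 = f(1) + f'(1)(x-1) + \frac{f''(1)}{2}(x-1)^2 = 1 + 2(x-1) + (x-1)^2 .
\]

Multiplying out the right-hand side indeed returns x^2; the expansion only rearranges the polynomial in powers of (x - 1).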

The trigonometric functions however have an unlimited number of derivatives that repeat periodically, for example for sin x: cos x, -sin x, -cos x, sin x, ... The approximation becomes the better the more terms of the series expansion are retained.

Among the possible approximation functions the Taylor series is characterized by the fact that its coefficients can be determined from data at the computation point alone, namely all the derivatives of the function at this point. This series has the large practical advantage that its terms are powers and can therefore easily be added to and multiplied with each other and also easily integrated and differentiated; the derivatives of the function at the computation point that appear in the coefficients are constants for these operations. Therefore in physical analysis complex functions are often approximated by a Taylor series with a limited number of terms: linear approximation with two terms and quadratic approximation with three terms.

5.4.2 Approximation Formulas for simple Functions

The linear term of the Taylor series already yields approximations that are often used in practice: for three basic functions the derivation is shown; for other cases you may easily derive them yourself. You may for example use x = x0 for the computation point and determine the next higher derivative.

Expansion around the computation point x_0 = 0, applicable for |x| << 1:

\[
\begin{aligned}
1.)\quad y &= \sqrt{1+x} = (1+x)^{1/2}, & y' &= \tfrac{1}{2}(1+x)^{-1/2}, &
y &\approx 1 + \frac{1}{1!}\,\frac{1}{2}(1+0)^{-1/2}\,x = 1 + \frac{x}{2}\\
2.)\quad y &= \frac{1}{1-x} = (1-x)^{-1}, & y' &= (1-x)^{-2}, & y &\approx 1 + x\\
3.)\quad y &= \sin x, & y' &= \cos x, & y &\approx 0 + 1\cdot x = x
\end{aligned}
\]
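A short Java sketch (not part of the simulations; names chosen here for the example) that compares these linear approximations with the exact values for a small argument:

```java
// Minimal sketch: linear Taylor approximations for |x| << 1 versus exact values.
public class SmallXApproximations {
    public static void main(String[] args) {
        double x = 0.05;
        System.out.printf("sqrt(1+x): exact %.6f, approx 1 + x/2 = %.6f%n",
                Math.sqrt(1 + x), 1 + x / 2);
        System.out.printf("1/(1-x):   exact %.6f, approx 1 + x   = %.6f%n",
                1 / (1 - x), 1 + x);
        System.out.printf("sin(x):    exact %.6f, approx x       = %.6f%n",
                Math.sin(x), x);
    }
}
```

For x = 0.05 the three approximations already agree with the exact values to three or four decimal places, and the agreement improves rapidly as x becomes smaller.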

5.4.3 Derivation of Formulas and error bounds for numerical differentiation

Using the Taylor series one can quickly obtain formulas for the calculation of the first derivative y'. This also yields a measure for the respective accuracy. We show this for the linear approximation; the procedure can easily be extended to higher approximations.

We assume in the following that both y(x) and y(x + Δx) are known.

\[
\begin{aligned}
\text{Taylor series:}\quad y(x+\Delta x) &= y(x) + \frac{y'(x)\,\Delta x}{1} + \frac{y''(x)\,\Delta x^2}{2} + \frac{y'''(x)\,\Delta x^3}{6} + \frac{y^{(4)}(x)\,\Delta x^4}{24} + \dots\\
y'(x) &= \frac{y(x+\Delta x) - y(x)}{\Delta x} - \frac{y''(x)\,\Delta x}{2} - \frac{y'''(x)\,\Delta x^2}{6} - \dots\\
y'(x) &= \frac{y(x+\Delta x) - y(x)}{\Delta x} - O(\Delta x),
\quad\text{with}\quad O(\Delta x) = \frac{y''(x)\,\Delta x}{2} + \frac{y'''(x)\,\Delta x^2}{6} + \dots \approx \frac{y''(x)\,\Delta x}{2}
\end{aligned}
\]

The last line is the usual definition of the difference quotient, supplemented by the term O(Δx) (letter O), which gives the deviation from the differential quotient caused by neglecting the higher terms of the Taylor series. The deviation vanishes in the limit Δx → 0, since all terms contained in O depend at least linearly on Δx. For sufficiently small intervals the higher powers of Δx can be neglected against the linear term, and one obtains the important conclusion that differentiation according to the above formula becomes accurate linearly in Δx: if one halves the width of the interval, the accuracy is doubled.

Using the Taylor series one can easily derive a method with better convergence for the calculation of the derivative. We write down the Taylor series once for a point that lies Δx to the right of the computation point x and once for a point that lies Δx to the left of it. Subtracting the two series from each other, the terms with even powers drop out:

\[
\begin{aligned}
(1)\quad y(x+\Delta x) &= y(x) + y'(x)\,\Delta x + \frac{y''(x)\,\Delta x^2}{2} + \frac{y'''(x)\,\Delta x^3}{6} + \frac{y^{(4)}(x)\,\Delta x^4}{24} + \dots\\
(2)\quad y(x-\Delta x) &= y(x) - y'(x)\,\Delta x + \frac{y''(x)\,\Delta x^2}{2} - \frac{y'''(x)\,\Delta x^3}{6} + \frac{y^{(4)}(x)\,\Delta x^4}{24} - \dots\\
(1)-(2):\quad y(x+\Delta x) - y(x-\Delta x) &= 2\,y'(x)\,\Delta x + \frac{2\,y'''(x)\,\Delta x^3}{6} + \dots\\
y'(x) &= \frac{y(x+\Delta x) - y(x-\Delta x)}{2\,\Delta x} - O(\Delta x^2),
\quad\text{with}\quad O(\Delta x^2) = \frac{y'''(x)\,\Delta x^2}{6} + \dots
\end{aligned}
\]

The formula obtained in this way converges quadratically with the width of the interval; halving the interval Δx improves the accuracy by a factor 4.

One can continue with the above procedure and thus obtain even faster converging approximation formulas; however, one then needs values of the function at more points to calculate the difference quotient. Therefore one often sticks to the above approximation with quadratic convergence.
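The two convergence orders derived above are easy to verify numerically. A minimal Java sketch (names chosen here for the example), assuming y(x) = e^x as a test function, halves Δx repeatedly and prints the errors of the forward and central difference quotients; the forward error roughly halves each time, the central error shrinks by roughly a factor of 4:

```java
// Minimal sketch: convergence orders of forward (O(dx)) and central (O(dx^2)) differences.
public class ConvergenceOrderDemo {
    static double y(double x) { return Math.exp(x); }   // y' = e^x serves as the exact reference
    static double forward(double x, double dx) { return (y(x + dx) - y(x)) / dx; }
    static double central(double x, double dx) { return (y(x + dx) - y(x - dx)) / (2 * dx); }

    public static void main(String[] args) {
        double x = 1.0, exact = Math.exp(x);
        for (double dx = 0.1; dx > 0.01; dx /= 2) {
            System.out.printf("dx=%-7.4f forward error %.2e   central error %.2e%n",
                    dx, Math.abs(forward(x, dx) - exact), Math.abs(central(x, dx) - exact));
        }
    }
}
```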

5.4.4 Interactive Visualization of Taylor expansions

In the following we consider two simulations to visualize Taylor expansions. The first one, in Fig5.2, uses the same setup that was employed for the calculation of derivatives up to 9th order in Fig5.1. The formulas for the preset functions cannot be edited. The computation speed is high enough that the approximating polynomial reacts to moving the computation point with the mouse quasi in real time.

The figure shows the 9th approximation for the Gauss function, which can be selected with the choice boxes above. In the number fields we now have the coefficients of the Taylor series. They differ from the values of the derivatives only by the factor 1/n! for order n.


Figure 5.2: Taylor expansions of the Gaussian (blue, selection on the left-hand side) from zeroth to ninth order around the adjustable red computation point. The Taylor coefficients f_n can be read off on the left.

In the following simulation of Fig5.3 a parser is used to evaluate the function, which can be edited. Using this simulation you can study the Taylor expansion of arbitrary functions, albeit at a slower computation speed. Here the highest order is limited to 7.

The Taylor approximation of the red function is shown in blue and the deviation is plotted in green. Fig5.3 shows a Gaussian function y = f(x) = e^{-x^2/b^2} with the third approximation in the vicinity of the computation point, which is drawn in magenta and can be pulled with the mouse along the function. With the keys +1 and -1 the approximation order can be increased and decreased.


Figure 5.3: Approximation of a function in the vicinity of an adjustable computation point via partial sums of the Taylor series; in the figure the Gaussian is drawn in red, the third-degree approximation in blue and the deviation in green. The computation point in magenta can be pulled with the mouse, and the degree of approximation can be increased or decreased by one with the +1 and -1 keys. Two free parameters a and b can be continuously adjusted with sliders, and a third integer parameter m can be changed in the number field. The formula in the function field can be edited arbitrarily.

This simulation allows for many possible experiments. In the selection field a number of standard functions can be selected (sine, exponential function, power function, Gaussian, hyperbolic functions, sin(x)^2). They contain up to three parameters and can be edited. You can also enter an arbitrary analytical function for the computation.

Using a parser for the evaluation of the editable function slows down the computation considerably. Depending on the configuration of your computer it can take up to a few minutes until the result for the seventh approximation appears.

After opening the simulation you first call a function from the selection list, for which initially the third approximation is calculated at the computation point x = 0.5. Then you can move the computation point and change parameters; the result is still shown practically in real time for the third approximation. The description pages of the simulation contain further details and suggestions for experiments that can be done.