In many cases it is useful to analyze, instead of a function $f\left(x\right)$ itself, a series that approximates it. This is especially true if the series converges to the function without restrictions. The partial sums of the series can then be considered as approximations of increasing accuracy.

For the terms of the sequence that makes up the series one will preferably use functions that can be differentiated and integrated easily. Especially suitable are series whose terms are powers or trigonometric functions of the variables. The first case leads to the Taylor series, whose coefficients are obtained via differentiation and which we will study more closely in the following. The second case leads to the Fourier series, which we will visualize after treating the integral, since its coefficients are determined via integration.

Another argument for the choice of a particular series expansion can be to use functions for the terms of the series that are particularly adapted to the symmetry of the problem described by the function, e.g. Bessel functions for cylindrical symmetry and spherical harmonics for spherical symmetry.

The Taylor series is an infinite series whose partial sums approximate the function y=f(x): the approximation is exact at the point ${x}_{0}$ and approximate in the vicinity of $x={x}_{0}$, and the interval of acceptable approximation becomes larger with increasing index of the partial sum. The members of the sequence that constitutes the series are powers of the distance $\left(x-{x}_{0}\right)$ from the computation point. Thus the function is approximated by a power series, and the problem consists of finding the coefficients of the individual terms.

To achieve this we first formally equate the function to a power series with terms ${a}_{n}{\left(x-{x}_{0}\right)}^{n}$ and parameters ${a}_{n}$. Then we differentiate both sides repeatedly. After each step we set $x={x}_{0}$. Thus all powers containing $x-{x}_{0}$ drop out of the power series for the respective derivative, and the coefficient of the remaining term is easily obtained:

$$\begin{aligned}
\text{ansatz:}\quad f(x) &= \sum_{n=0}^{\infty} a_n (x-x_0)^n = a_0 + a_1(x-x_0) + a_2(x-x_0)^2 + a_3(x-x_0)^3 + \dots \\
x=x_0 &\;\to\; a_0 = f(x_0) \\
f'(x) &= a_1 + 2a_2(x-x_0) + 3a_3(x-x_0)^2 + 4a_4(x-x_0)^3 + \dots \\
x=x_0 &\;\to\; a_1 = \frac{f'(x_0)}{1} \\
f''(x) &= 1\cdot 2\, a_2 + 2\cdot 3\, a_3 (x-x_0) + 3\cdot 4\, a_4 (x-x_0)^2 + \dots \\
x=x_0 &\;\to\; a_2 = \frac{f''(x_0)}{1\cdot 2} \\
f'''(x) &= 1\cdot 2\cdot 3\, a_3 + 2\cdot 3\cdot 4\, a_4 (x-x_0) + \dots \\
x=x_0 &\;\to\; a_3 = \frac{f'''(x_0)}{1\cdot 2\cdot 3} \\
f^{(n)}(x) &= n!\, a_n + (n+1)!\, a_{n+1}(x-x_0) + \dots \;\to\; a_n = \frac{f^{(n)}(x_0)}{n!}
\end{aligned}$$

Thus the coefficient of the $n$-th power is proportional to the $n$-th derivative of the function at the computation point, and the factorial as a factor simply follows from differentiating the $n$-th power. With $0!=1$, $1!=1$ and ${f}^{(0)}(x_0)=f(x_0)$, the Taylor series of the function is then:

$$\begin{aligned}
f(x) &= \frac{f(x_0)}{0!} + \frac{f'(x_0)}{1!}(x-x_0) + \frac{f''(x_0)}{2!}(x-x_0)^2 + \frac{f'''(x_0)}{3!}(x-x_0)^3 + \dots \\
\text{Taylor series:}\quad f(x) &= \sum_{n=0}^{\infty} f^{(n)}(x_0)\,\frac{(x-x_0)^n}{n!} \\
\text{zeroth approximation:}\quad f(x) &\approx f(x_0) \\
\text{first approximation, linear in } x\text{:}\quad f(x) &\approx f(x_0) + f'(x_0)(x-x_0) \\
\text{second approximation, quadratic in } x\text{:}\quad f(x) &\approx f(x_0) + f'(x_0)(x-x_0) + \frac{f''(x_0)}{2}(x-x_0)^2
\end{aligned}$$
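The successive partial sums are easy to check numerically. Below is a minimal sketch (in Python, not part of the book's Java/EJS simulations) comparing the Taylor partial sums of $\sin x$ around $x_0=0$ with the exact value; the name `taylor_sin` is our own illustration:

```python
import math

def taylor_sin(x, order):
    """Partial sum of the Taylor series of sin around x0 = 0, up to the given order."""
    # The derivatives of sin at 0 repeat with period 4: 0, 1, 0, -1.
    cycle = [0.0, 1.0, 0.0, -1.0]
    return sum(cycle[n % 4] * x**n / math.factorial(n) for n in range(order + 1))

x = 0.5
for order in (1, 3, 5, 7):
    error = abs(taylor_sin(x, order) - math.sin(x))
    print(f"order {order}: partial sum {taylor_sin(x, order):.9f}, error {error:.2e}")
```

Each additional pair of terms shrinks the error markedly, as expected for a convergent series.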

While $f'(x)=df/dx$ describes the slope of a differentiable function at each point $x$ of its domain of definition, $f''(x)$, i.e. $df'/dx$, describes the slope of the slope, or the change of the slope of $f\left(x\right)$. The slope changes if and only if the curve $f\left(x\right)$ has a curvature; therefore $f''(x)$ is a measure of the curvature of $f\left(x\right)$. If one identifies $x$ with the time $t$ and y=f(t) with the distance traveled by an object during the time $t$, the first derivative is called the velocity and the second derivative the acceleration of the object.

The first approximation of the Taylor series takes into account the slope of the function at the computation point; the second one in addition its curvature. The higher approximations use higher derivatives, and it makes sense to visualize their meaning as well.

In the simulation of Fig5.1 the derivatives up to the 9-th order are calculated for a function that can be chosen from 9 given options; they are shown as coloured curves in an abscissa region that depends on the function and may also have a shifted origin. With the choice boxes on top, the derivatives to be plotted in addition to the function can be selected; all 9 are shown in the picture. For the red point, which can be moved with the mouse, the local values of the derivatives are recalculated and displayed in the number fields on the left.

The derivatives are approximated numerically as difference quotients using both neighboring points:

$$\begin{aligned}
y'(x) &\approx \frac{y(x+\Delta x)-y(x-\Delta x)}{2\Delta x} \\
y''(x) &\approx \frac{y'(x+\Delta x)-y'(x-\Delta x)}{2\Delta x} \approx \frac{y(x+2\Delta x)-2y(x)+y(x-2\Delta x)}{4(\Delta x)^2} \\
&\;\;\vdots
\end{aligned}$$
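These difference quotients can be sketched directly. The following is a Python illustration, not the EJS implementation; the helper names `d1` and `d2` are our own:

```python
import math

def d1(f, x, h=1e-4):
    """First derivative as a symmetric difference quotient."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    """Second derivative: the symmetric difference quotient applied twice,
    which collapses to (f(x+2h) - 2 f(x) + f(x-2h)) / (2h)^2."""
    return (f(x + 2 * h) - 2 * f(x) + f(x - 2 * h)) / (2 * h) ** 2

print(d1(math.sin, 1.0))  # close to cos(1), about 0.5403
print(d2(math.sin, 1.0))  # close to -sin(1), about -0.8415
```

Higher derivatives follow the same pattern, which is why a single evaluation in Fig5.1 needs so many function calls.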

You will find further details about this in the description pages of the simulation.

Parser

In many simulations contained in this book it is possible to enter formulas for functions directly in mathematical notation. For the program these functions are initially strings without meaning, which have to be interpreted by an additional program, a parser, and translated to Java code. This is a relatively complex process. If the function is translated only once per simulation, the required time is of no concern, and when using the parser of EJS one has the advantage of being able to change the function or enter a new one without having to open and edit the program itself.
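The EJS parser itself is a Java component. As a rough illustration of the idea (compile the formula string once, then evaluate it cheaply many times), the same pattern can be sketched in Python; the helper `parse` and its restricted name table are our own invention, not EJS code:

```python
import math

# Names the formula string may use; everything else is rejected by the
# empty __builtins__ namespace. This table is our own choice.
ALLOWED = {"sin": math.sin, "cos": math.cos, "exp": math.exp,
           "sqrt": math.sqrt, "pi": math.pi}

def parse(formula):
    """Translate the formula string once; return a cheap-to-call function of x."""
    code = compile(formula, "<formula>", "eval")
    return lambda x: eval(code, {"__builtins__": {}}, dict(ALLOWED, x=x))

f = parse("exp(-x**2)")
print(f(0.0), f(1.0))  # 1.0 and exp(-1)
```

The expensive step, `compile`, happens once; each later call of `f` only evaluates the precompiled code object.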

Determining the higher derivatives with sufficient accuracy requires considerable computational effort. In the example of Fig5.1 the function has to be evaluated 10 000 times for one computation, which puts a strain on the computing speed of a simple PC. Therefore the functions are predetermined in our example. If you want to analyze other functions you may open the simulation with the EJS console and change the simple Java code of the preset functions.

In the upcoming simulations of Fig5.2 and Fig5.3 the approximations for the derivatives are calculated once without and once with the parser, and you will recognize the difference in computation speed from these examples.

Convergence of the Taylor series

It should not be taken for granted that the power series approaches the function also for values of $x$ away from the computation point ${x}_{0}$. During the discussion of the exponential function, whose power series bears a large similarity to the Taylor series, we had however already established that such a series converges in the vicinity of the computation point if the factors attached to its terms do not diverge. For the Taylor series these factors are the derivatives at the computation point. If the function can be differentiated as many times as desired and the derivatives at the computation point remain bounded in the sense that the $n$-th derivative grows more slowly than $n!$, the Taylor series converges for all values of the variable.

For many functions that are important in physics, as for example polynomials, the exponential function, sine and cosine, the domain of convergence is unlimited. With increasing order, i.e. number of terms, the corresponding Taylor series approximates the original function over a larger and larger interval, and the domain of small deviations grows. In practice one normally uses a partial sum of finite order; the partial sum is then identical to the function at the computation point and deviates increasingly from the function with growing distance from it.

It is amusing to ask for the Taylor series of the exponential function: since all its derivatives are equal to the function itself, the Taylor series coincides with the exponential series.

The power function of degree $n$ has non-vanishing derivatives only up to order $n$. In this case the Taylor series terminates after the $(n+1)$-st term, the term of order $n$. Its Taylor expansion is thus identical to the original function.
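A quick numerical check (a Python sketch with names of our own choosing): expanding the cubic $y=x^3$ around $x_0=1$ uses only the derivatives up to order 3, and the terminating series reproduces the polynomial exactly:

```python
import math

def p(x):   return x**3        # the polynomial itself
def dp(x):  return 3 * x**2    # first derivative
def d2p(x): return 6 * x       # second derivative
def d3p(x): return 6.0         # third derivative; all higher ones vanish

def taylor_p(x, x0=1.0):
    """Terminating Taylor series of p around x0: exact, not approximate."""
    derivs = [p(x0), dp(x0), d2p(x0), d3p(x0)]
    return sum(d * (x - x0)**n / math.factorial(n) for n, d in enumerate(derivs))

print(taylor_p(2.5), 2.5**3)  # both 15.625
```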

The trigonometric functions, however, have an unlimited number of derivatives that repeat periodically, for example for $\sin x$: $\cos x$, $-\sin x$, $-\cos x$, $\sin x,\dots$ The approximation becomes the better the more terms of the series expansion are retained.

Among the possible approximation functions the Taylor series is characterized by the fact that its coefficients can be determined from data at the computation point alone, namely all the derivatives of the function at this point. This series has the large practical advantage that its terms are powers and can therefore be easily added to and multiplied with each other, and also easily integrated and differentiated; the derivatives of the function at the computation point that appear in the coefficients are constants for the operations listed above. Therefore in physical analysis complex functions are often approximated by a Taylor series with a limited number of terms: linear approximation with two terms and quadratic approximation with three terms.

The linear term of the Taylor series already yields approximations that are often used in practice: for three basic functions the derivation is shown; for other cases you may easily derive them yourself. You may, for example, use a general computation point $x={x}_{0}$ or determine the next higher derivative.

$$\begin{aligned}
&\text{Expansion around the computation point } x_0=0,\ \text{applicable for } |x|\ll 1 \\
&\text{1.)}\quad y=\sqrt{1+x}=(1+x)^{\frac{1}{2}};\quad y'=\tfrac{1}{2}(1+x)^{-\frac{1}{2}} \;\to\; y\approx \sqrt{1+0}+\frac{1}{1!}\cdot\frac{1}{2}(1+0)^{-\frac{1}{2}}\,x = 1+\frac{x}{2} \\
&\text{2.)}\quad y=\frac{1}{1-x}=(1-x)^{-1};\quad y'=(1-x)^{-2} \;\to\; y\approx 1+x \\
&\text{3.)}\quad y=\sin x;\quad y'=\cos x \;\to\; y\approx 0+1\cdot x = x
\end{aligned}$$
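These three linear approximations can be checked with a few lines (a Python sketch):

```python
import math

x = 0.05  # small compared to 1
print(math.sqrt(1 + x), 1 + x / 2)  # sqrt(1+x) vs 1 + x/2
print(1 / (1 - x), 1 + x)           # 1/(1-x)  vs 1 + x
print(math.sin(x), x)               # sin(x)   vs x
```

For $x=0.05$ each pair agrees to three or four decimal places; the residual deviation is of the order of the neglected quadratic term.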

Using the Taylor series one can quickly obtain formulas for the numerical calculation of the first derivative ${y}^{\prime}$; it also yields a measure for the respective accuracy. We show this for the linear approximation; the procedure can easily be extended to higher approximations.

We assume in the following that both $y\left(x\right)$ and $y\left(x+\Delta x\right)$ are known.

$$\begin{aligned}
\text{Taylor series:}\quad y(x+\Delta x) &= y(x) + \frac{y'(x)\,\Delta x}{1} + \frac{y''(x)\,\Delta x^2}{2} + \frac{y'''(x)\,\Delta x^3}{6} + \frac{y^{(4)}(x)\,\Delta x^4}{24} + \dots \;\to \\
y'(x) &= \frac{y(x+\Delta x)-y(x)}{\Delta x} - \left[\frac{y''(x)\,\Delta x}{2} + \frac{y'''(x)\,\Delta x^2}{6} + \dots\right] \\
y'(x) &= \frac{y(x+\Delta x)-y(x)}{\Delta x} - O(\Delta x);\quad \text{with } O(\Delta x) = \frac{y''(x)\,\Delta x}{2} + \frac{y'''(x)\,\Delta x^2}{6} + \dots \approx \frac{y''(x)\,\Delta x}{2}
\end{aligned}$$

The last line is the usual definition of the difference quotient, supplemented by the term $O(\Delta x)$ (letter O), which gives the deviation from the differential quotient due to neglecting the higher terms of the Taylor series. The deviation vanishes in the limit $\Delta x\to 0$, since all terms contained in $O$ depend at least linearly on $\Delta x$. For sufficiently small intervals the higher powers of $\Delta x$ can be neglected against the linear term, and one obtains the important conclusion that differentiation according to the above formula becomes accurate linearly with $\Delta x$: if one halves the width of the interval, the accuracy is doubled.

Using the Taylor series one can easily derive a method with better convergence for the calculation of the derivative. We write down the Taylor series once for a point that is $\Delta x$ to the right of the computation point $x$ and once for a point that is $\Delta x$ to the left of the computation point. Subtracting the two series from each other the terms with even powers drop out:

$$\begin{aligned}
[1]\quad y(x+|\Delta x|) &= y(x) + y'(x)\,|\Delta x| + \frac{y''(x)\,|\Delta x|^2}{2} + \frac{y'''(x)\,|\Delta x|^3}{6} + \frac{y^{(4)}(x)\,|\Delta x|^4}{24} + \dots \\
[2]\quad y(x-|\Delta x|) &= y(x) - y'(x)\,|\Delta x| + \frac{y''(x)\,|\Delta x|^2}{2} - \frac{y'''(x)\,|\Delta x|^3}{6} \pm \dots \\
[1]-[2] \;\to\; y(x+|\Delta x|) - y(x-|\Delta x|) &= 2\,y'(x)\,|\Delta x| + 2\,\frac{y'''(x)\,|\Delta x|^3}{6} + \dots \\
y'(x) &= \frac{y(x+|\Delta x|) - y(x-|\Delta x|)}{2\,|\Delta x|} - O(\Delta x^2);\quad O(\Delta x^2) = \frac{y'''(x)\,|\Delta x|^2}{6} + \dots
\end{aligned}$$

The formula obtained in this way converges quadratically with the width of the interval; halving the interval $\left|\Delta x\right|$ improves the accuracy by a factor of 4.
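The linear convergence of the one-sided formula and the quadratic convergence of the symmetric formula can be observed directly (a Python sketch; `forward` and `central` are our own names):

```python
import math

def forward(f, x, h):
    """One-sided difference quotient, error O(h)."""
    return (f(x + h) - f(x)) / h

def central(f, x, h):
    """Symmetric difference quotient, error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.cos(1.0)  # derivative of sin at x = 1
for h in (0.1, 0.05, 0.025):
    ef = abs(forward(math.sin, 1.0, h) - exact)  # roughly halves with h
    ec = abs(central(math.sin, 1.0, h) - exact)  # roughly quarters with h
    print(f"h = {h:<5}  forward {ef:.2e}  central {ec:.2e}")
```

Each halving of $h$ cuts the one-sided error roughly in half and the symmetric error roughly to a quarter, as the two error terms predict.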

One can continue with the above procedure and thus obtain even faster converging approximation formulas; however, one then needs values of the function at more points to calculate the difference quotient. Therefore one often sticks to the above approximation with quadratic convergence.

In the following we consider two simulations to visualize Taylor expansions. The first, in Fig5.2, uses the same setup that was employed for the calculation of derivatives up to 9-th order in Fig5.1. The formulas for the preset functions cannot be edited. The speed of computation is so high that the approximating polynomial reacts to moving the computation point with the mouse quasi in real time.

The figure shows the 9-th approximation for the Gauss function, which can be selected with the choice boxes above. The number fields now show the coefficients of the Taylor series; they differ from the values of the derivatives only by the factor $\frac{1}{n!}$ for order $n$.

In the following simulation of Fig5.3 a parser is used to evaluate the function, which can be edited. Using this simulation you can study the Taylor expansion for arbitrary functions, albeit at a slower speed of computation. Here the highest order is limited to $7$.

The Taylor approximation of the red function is shown in blue and the deviation is plotted in green. Fig5.3 shows a Gaussian function $y=f\left(x\right)={e}^{-\frac{{x}^{2}}{{b}^{2}}}$ with the third approximation in the vicinity of the computation point, which is drawn in magenta and can be dragged with the mouse along the function. With the buttons +1 and -1 the approximation order can be increased or decreased.

This simulation allows for many experiments. In the selection field a number of standard functions can be chosen (sine, exponential function, power function, Gaussian, hyperbolic functions, sin(x)${}^{2}$). They contain up to three parameters and can be edited. You can also enter an arbitrary analytical function for the computation.

Using a parser for the evaluation of the editable function slows down the computation considerably. Depending on the configuration of your computer it can take up to a few minutes until the result for the seventh approximation appears.

After opening the simulation you first choose a function from the selection list, for which initially the third approximation is calculated at the computation point $x=0.5$. Then you can move the computation point and change parameters; the result is still shown practically in real time for the third approximation. The description pages of the simulation contain further details and suggestions for experiments that can be done.