In the domain of abstract mathematics the following relation holds exactly: $2\cdot 2=4$. Exactly means that, if one were to write all numbers as decimal numbers, there would be an infinite number of zeroes after the decimal point.

There is the old joke of the natural scientist who is supposed to solve the same problem on his slide rule and obtains $2\cdot 2=3.96$. Where does the difference come from?

In mathematics, numbers and the operations between them are defined in such a way that repeating the same procedure yields exactly the same result. When transferring the mathematical rules for operations to the domain of the natural sciences, one often silently assumes that not only the operations are exact and unchangeable, but also the quantities, expressed as numbers, to which the operations are applied.

This is, however, not the case. When repeating an experiment in the natural sciences one cannot assume that the natural situation in which the experiment takes place stays exactly the same^{1}; above all one has to take into account that there are limits to the accuracy of a measurement, so that even under fictitious, perfectly equal conditions the measured values describing the result will not be identical in a mathematical sense.

The achievable relative accuracies of measurement are often in the range of $10^{-6}$ to $10^{-2}$, with a corresponding inaccuracy of the single measurement. The highest accuracy is nowadays reached for the measurement of frequencies using laser spectroscopy, with a relative error of $10^{-16}$. For two consecutive measurements one has to expect a maximum difference of this order between the numbers that represent the result of the measurement. The result of a single measurement is only known with this accuracy.

It is the essential purpose of mathematical physical models to forecast future events from the knowledge of the current state, or to reconstruct the past from this knowledge. That is the content of every formula in which the time $t$ appears. The limited accuracy of measurements puts a natural limit on this goal.

The predictability does, however, not only depend on the accuracy of the measured numbers, but also on the mathematical operation that is applied to them. For a formula such as $a=(b+b\cdot F)^n$, where $b$ is the “true” error-free value and $F$ is the relative measurement error, the result depends, in addition to the error, also on the parameter $n$ that describes the relationship between $a$ and $b$.

For an error that is small relative to the measured value we can estimate the effect of $n$ easily:

$$\begin{aligned} a &= (b + b\cdot F)^n = b^n (1+F)^n = b^n \sum_{k=0}^{n} \binom{n}{k} F^k \\ n &= 1 \;\to\; a = b\,(1+F) \qquad \text{linear relationship} \\ 1 &\le n \ll \frac{1}{F} \;\to\; a \approx b^n (1 + nF) \end{aligned}$$
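The estimate above can be checked numerically; the following short sketch (an illustration added here, not part of the original text) compares the exact relative error $(1+F)^n-1$ with the linear estimate $nF$ for a measurement error of $1$%:

```python
def total_relative_error(F, n):
    """Exact relative error of a = (b*(1 + F))**n with respect to b**n."""
    return (1.0 + F) ** n - 1.0

F = 0.01  # a measurement error of 1 %
for n in (1, 2, 5, 10):
    print(f"n = {n:2d}: exact {total_relative_error(F, n):.4f}, "
          f"linear estimate n*F = {n * F:.4f}")
```

For $n=10$ the exact value lies slightly above $0.10$, in agreement with the observation that a $1$% measurement error leads to a total error of a bit more than $10$% for the tenth power.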

For a linear relationship ($n=1$) and an accuracy of $1$% the result has an uncertainty of $1$% as well. In the 18th century the thinking in the philosophy of natural sciences was dominated by the conviction that the future could be forecast without limit, given sufficiently accurate knowledge of the current state (Laplace’s demon^{2}); this corresponds to linear thinking.

For nonlinear operations the dependence of the result on the measurement error is also nonlinear. For the power function $a=b^n(1+F)^n$ with $n>1$ used in the example, Figure 4.5 shows the dependence of the total error on the measurement error for increasing powers $n$.

The maximum relative^{3} total error grows with the power $n$; for relatively small errors $<10$% the growth is nearly a linear function of the power; a measurement error of $1$% leads, for the $10$th power, to a total error of a bit more than $10$%.

So what? Then one has to make more accurate measurements!

However, many important and fundamental functions of physics, such as the trigonometric functions, the exponential function, and $1/r$-dependencies on the radius, are highly nonlinear if one does not restrict them to a small range of values.

Even relatively small nonlinearities become important if sequences are calculated for which the next term depends on the previous term and its accuracy. This is for example the case if differential equations have to be solved numerically, where easily hundreds of individual calculations are concatenated.
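As a hypothetical illustration of such error amplification (this example is not from the text), consider the nonlinear differential equation $dx/dt = x^2$ solved with many concatenated explicit Euler steps; a $1$% error in the initial value grows to far more than $1$% in the result:

```python
def euler(x0, h, steps):
    """Explicit Euler integration of dx/dt = x**2, starting from x0."""
    x = x0
    for _ in range(steps):
        x += h * x * x
    return x

h, steps = 0.001, 900               # integrate from t = 0 to t = 0.9
reference = euler(1.00, h, steps)   # the "true" initial value
perturbed = euler(1.01, h, steps)   # initial value with a 1 % error
rel_error = abs(perturbed - reference) / reference
print(f"relative error after {steps} concatenated steps: {rel_error:.1%}")
```

The exact solution $x(t)=x_0/(1-x_0 t)$ shows why: near the pole at $t=1/x_0$ the sensitivity to the initial value grows without bound.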

Thus, for reasons of the limited accuracy of measurements alone, one has to be careful about how far into the future one makes predictions with mathematical models based on measured initial data, and one also has to take nonlinearities in the model into account. In addition, one must not lose sight of how accurately the model used describes reality.

When using computational models this caution is easily lost, since the computer treats models and numbers, within the limits of its computational accuracy, as if they were exact in the mathematical sense. One also uses the same initial values for repeated calculations.

Even in the abstract mathematical domain nonlinear functions produce unexpected and sometimes bizarre results. This has nothing to do with limited accuracy; it lies in the nature of the matter. However, the resulting dependence of the calculated numbers on the initial values is so extreme that fundamental limits are imposed on transferring these models to physics or technology. An in-depth discussion of these matters can be found in the essay by Grossmann in volume 2 of the book announced in the preface. We want to visualize two of these phenomena using number sequences: bifurcation and fractals. The first example is concerned with a real sequence, the second one with a complex sequence.

For the sequences with a free parameter $a$ considered so far, the creation law for the terms of the sequence depended linearly on a parameter:

$$\text{geometric sequence: } \frac{A_n}{A_{n-1}}=a; \qquad \text{exponential sequence: } \frac{z_n}{z_{n-1}}=\frac{a}{n}$$

The behaviour of the sequences and of the resulting series was relatively simple and clear. Is this still the case if the creation law is nonlinear? As an example we choose the so-called logistic sequence. It is a model for the development of a population of plants or animals under constant environmental conditions, starting from an arbitrary initial state $x_0$ for a given reproduction rate. (In agreement with the notation in the literature we choose the letter $x$ for the terms of the sequence.):

$${x}_{n+1}=4r{x}_{n}\left(1-{x}_{n}\right)=4r\left({x}_{n}-{x}_{n}^{2}\right)$$

The factor $4$ scales the sequence in such a manner that for parameter values with $0\le r\le 1$ all terms of the sequence satisfy $0\le {x}_{n}\le 1$.

The logistic sequence assumes, firstly, that the population of the next generation is proportional to the population already present. This alone would lead to unbounded exponential growth. At the same time, however, a death rate is assumed that depends quadratically on the population already present ($-4rx_n^2$); note that, due to the definitions given above, we have $x_n<1$ and therefore always $x_n^2<x_n$.
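The creation law is easy to program; a minimal sketch (added here as an illustration, with arbitrary example values for $r$ and $x_0$):

```python
def logistic_terms(r, x0, n):
    """Return the first n terms x_0 .. x_{n-1} of the logistic
    sequence x_{k+1} = 4 r x_k (1 - x_k)."""
    terms = [x0]
    for _ in range(n - 1):
        x = terms[-1]
        terms.append(4 * r * x * (1 - x))
    return terms

# All terms stay in [0, 1] for 0 <= r <= 1, as stated above.
print(logistic_terms(r=0.6, x0=0.2, n=5))
```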

The question that arises is: does the population for a given growth parameter under equal conditions approach a stable limit for an infinite number of generations, and how does this limit depend on the initial value ${x}_{0}$ and the growth parameter $r$?

Population growth only occurs if $x_{n+1}>x_n$, that is for $r>1/(4(1-x_n))$. Since $0\le x_n\le 1$, all populations with $r<0.25$ decay to zero, independently of the initial value. For larger growth rates, i.e. for $r>0.25$, one would therefore expect that the population grows up to an asymptotic value different from zero, or, if initially larger, decays to this asymptotic value.

In the simulation in Figure 4.6, $r$ is increased consecutively by $0.001$ in the interval $0\le r\le 1$. For each constant $r$ a loop calculates $2000$ terms of the sequence; then one proceeds in steps of $0.001$ to the next value of $r$, until $r=1$ is reached. Each calculation starts with a random initial value $0<x_1<1$. The first terms of the sequence still depend on the initial value; therefore the first $999$ iterations are not shown in the picture. The iterations $1000$ to $2000$ are mapped to points in the picture.
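The procedure of the simulation can be sketched as follows (an assumed reconstruction, not the original program; the rounding tolerance `tol` is an illustrative choice used to count distinct accumulation points):

```python
import random

def accumulation_points(r, discard=999, keep=1000, tol=1e-6):
    """Iterate the logistic sequence x_{n+1} = 4 r x_n (1 - x_n) from a
    random initial value, discard the transient, and return the distinct
    values visited afterwards (rounded to the tolerance tol)."""
    x = random.random()
    for _ in range(discard):
        x = 4 * r * x * (1 - x)
    points = set()
    for _ in range(keep):
        x = 4 * r * x * (1 - x)
        points.add(round(x / tol) * tol)
    return sorted(points)

# Below r = 0.75 a single accumulation point remains;
# above it the orbit splits into two branches.
print(len(accumulation_points(0.60)))
print(len(accumulation_points(0.80)))
```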

For $r<0.75$ these points coincide so closely that a limit line as a function of $r$ is seen, comparable to $1/(1-a)$ for the geometric series. Different initial values do not lead to discernible differences for the shown terms of the sequence with high indices.
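The limit line can be made explicit by a short calculation (added here for clarity): the nonzero fixed point of $x_{n+1}=4rx_n(1-x_n)$ satisfies

$$x^* = 4r\,x^*(1-x^*) \quad\Rightarrow\quad x^* = 1-\frac{1}{4r},$$

and it is stable as long as the derivative of the map at $x^*$ is smaller than one in absolute value,

$$\left|\frac{d}{dx}\,4rx(1-x)\right|_{x=x^*} = |2-4r| < 1 \quad\Rightarrow\quad 0.25 < r < 0.75,$$

which reproduces both the decay threshold $r=0.25$ and the onset of bifurcation at $r=0.75$.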

For growth rates $r>0.75$ the asymptotic orbit develops two branches (bifurcation), which means that the iteration creates two different accumulation points. This bifurcation repeats itself until finally no individual accumulation points are visible any more. Since $1000$ iterations are shown, there could be up to $1000$ different values for a given $r$. Thus, in this region no unique limit can exist. Surprisingly, some regions of $r$ follow that again show fewer accumulation points. The determining factor for the growth limitation is the growth rate $r$.

The bifurcation behaviour does not depend on the growth-limiting factor being exactly $1-x_n$. What is essential is the nonlinearity of the operation $x_n-x_n^2$. To make this experimentally accessible, a generalized factor $(1-x_n^k)$ with $k>0$ was chosen:

$$x_{n+1} = 4 r\, x_n \left(1 - x_n^k\right)$$

In the simulation example you can change $k$, after resetting, with the slider between $0.1$ and $2$. The default value is $1$, which leads to the usual quadratic operation.

The left window shows, for the classical case ($k=1$), the total orbit as a function of $r$; the right one shows the bifurcation at larger resolution. For $k\ne 1$ the general character of the bifurcation stays the same, but the characteristic parameter values are shifted relative to those of the logistic sequence, and the abscissa range is adjusted accordingly.

For a more accurate viewing the simulation window can be maximized.

In the total picture of the logistic sequence, compactified structures of accumulation points appear that are not visible if the number of iterations shown is so large that the pixel resolution of the screen does not reveal any holes, and if the resolution along the $x$ axis is small. The simulation in Figure 4.7 therefore shows the structure of the picture with a very large vertical resolution ($\sim 1000$ points in the shown $r$-interval) and a limited number of $250$ iterations shown. Please maximize the window to full screen size before the start of the simulation in order to see the details. The lower and upper boundary of the $r$-range can be adjusted with sliders.

What is the reason for this strange behaviour, which becomes deterministically chaotic for large values of $r$? It becomes evident if one extends the simulation to show the terms of the sequence with low indices, which are suppressed in the above presentation to elucidate the limit of the sequence.

Thus one can consider individual terms of the sequence and investigate how the bifurcation results from jumping between terms with different indices.

The simulation in Figures 4.8 and 4.9, which is a real mathematical experimentation kit, calculates an adjustable number of terms. With a slider, the constant initial value $x_0$ of the sequence for a total $r$-scan can be adjusted. The picture shows an adjustable number of terms; one can also choose the number of terms suppressed in the picture.

Thus you can show the first iterations, as in the left window of Figure 4.8, or you can look at a single iteration with a high index, as in Figure 4.9.

If one considers, for example, the first six terms $x_0$ to $x_5$ (suppressed $=0$, shown $=6$) of the sequence, as in Figure 4.8, one recognizes the different terms from the increasing degree of the polynomial (the initial value as zeroth term is a horizontal straight line, the first term a line with positive slope). If you use different initial values, the pictures differ in their details. In the lower region of $r$, however, one recognizes how already the lower iterations approach a limiting curve. The higher iterations are then superimposed in such a way that nearly empty regions lie close to points that nearly coincide. Here the bifurcations are found at higher indices. For the higher iterations the influence of different initial values becomes smaller and smaller.

If one shows only one term $x_n$ with a large index, as in Figure 4.9, no bifurcation can be seen, but the curve shows kinks at the bifurcation points. If one increases the index by one, the kinks turn in the opposite direction. If one shows two terms $x_n, x_{n+1}$ with consecutive indices, one sees the first bifurcation. This bifurcation is thus the superposition of two $r$-scans with indices whose difference is $1$.

Studying the conditions for lower indices, one realizes that the divergence is caused by the change from even to odd powers that determine the individual terms.

Thus the deeper reason for the strange topology is that, for suitably defined polynomials of high order, limited regions exist for which different orders and initial values lead to practically identical values, while in other regions the values diverge: deterministic chaos reigns. In the contribution by Siegfried Großmann this is analyzed both in general and in detail; we suggest that you study his contribution at this point.
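The extreme sensitivity to the initial values can be demonstrated directly (a hypothetical example added here, not from the text): in the chaotic regime at $r=1$, two initial values differing by only $10^{-10}$ soon yield completely different terms of the sequence.

```python
def iterate(r, x0, n):
    """Return the n-th term of the logistic sequence started at x0."""
    x = x0
    for _ in range(n):
        x = 4 * r * x * (1 - x)
    return x

# At r = 1 the map is deterministically chaotic: a tiny difference
# in the initial value roughly doubles with every iteration.
a = iterate(1.0, 0.2, 60)
b = iterate(1.0, 0.2 + 1e-10, 60)
print(abs(a - b))  # far larger than the initial difference of 1e-10
```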

Remembering the starting point of the discussion, namely that the logistic curve is a model for the development of populations, one can draw for example the following conclusions: For small growth rates the population converges in an oscillating manner to a constant value at which the population and resources are in equilibrium with each other. For a higher growth rate the population exceeds the value that would be compatible with the resources. Therefore the next generation reverts to a lower value, and this jumping back and forth is repeated: the system oscillates between extremes.

The essential practical conclusion is that the result of computations for nonlinear systems can depend so sensitively on the parameters and on the progress of the calculation (here the iteration index) that a forecast is only possible for a limited number of generations. If time is the essential parameter, this applies to forecasts over time.

It is therefore part of the art of engineering to avoid regions and dependencies in which nonlinearities lead to unpredictable or non-unique results. This is no mean feat, since most physical relationships are well determined, but nonlinear.

We want to conclude the chapter on sequences and series with an example of a complex sequence with a nonlinear creation law. Such sequences lead to the aesthetically pleasing structures called fractals, of which the Mandelbrot set is probably the most well known.

Its creation law reads:

$$z_{n+1} = z_n^2 + c, \qquad z_0 = 0; \quad c \text{ a complex number}$$

For every point $c$ of the complex plane within a limited but sufficiently large closed region around the origin, the sequence is calculated and it is checked whether it diverges (in the numerical calculation this is assumed to be the case as soon as the absolute value exceeds $4$; the corresponding points are coloured blue) or converges. Those points for which the sequence converges are coloured red in the graphical representation. The points converging to finite values (the boundary of the red surface) constitute the Mandelbrot set. All points that do not belong to it are painted in different colours, depending on the speed of divergence of the sequence.
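The divergence test described above can be sketched as follows (the escape threshold $|z|>4$ is taken from the text; the iteration limit of $100$ is an assumption of this sketch):

```python
def escape_iteration(c, max_iter=100, threshold=4.0):
    """Return the iteration index at which |z_n| first exceeds the
    threshold, or None if the sequence has not escaped within max_iter
    iterations (such points belong to the red, non-diverging region)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > threshold:
            return n
    return None

print(escape_iteration(0j))       # c = 0: the sequence stays at 0
print(escape_iteration(1 + 1j))   # escapes quickly -> coloured blue
```

The returned index is exactly the "speed of divergence" used to choose the colour of the diverging points.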

The interactive Figure 4.10 provides access to a slightly modified Mandelbrot fractal, for which the initial value $z_0$ can be changed by dragging the white point with the mouse; $z_0=0$ gives the well-known Mandelbrot set, and $-2<z<2$ covers the region in which convergence happens at all.

Resetting leads back to the initial state. In the by now customary manner, detailed regions can be selected, for which the calculation is repeated at larger resolution.

The region of the calculation can be restricted by specifying a region with the mouse; multiple restriction makes it possible to delve into deep regions of the fractal ramifications (see Figure 4.13 as an example).

Figure 4.11 shows the modified Mandelbrot set for $z_0=i$.

The topologically novel feature of the fractal structure is that the boundary of a finite area is infinitely branched and shows self-similarity when delving deeper and deeper, i.e. similar structures are visible on all scales. You will realize this when selecting ever smaller sections.

It is not trivial to understand which mathematical relationship leads to the special form and symmetry of the figure.

To simplify this task we generalize further and use, instead of the quadratic creation rule, an arbitrary power:

$$z_{n+1} = z_n^k + c, \qquad z_0 = 0; \quad c \text{ a complex number}, \quad k \ge 1$$

For $k=2$ the set of $c$-values for which $z_n$ does not diverge to $\infty$ is the Mandelbrot set discussed above.

In the simulation in Figure 4.12 the power $k$ can be changed as a rational number between $1$ and $10$. In the text field, arbitrarily large values can be entered (after the input you have to press the ENTER key and wait until the input field changes colour again!). For this simulation many trigonometric functions have to be calculated, which requires a lot of effort. Thus you have to be patient after the first call or after entering a new value. Depending on the resources of your computer this calculation can take many seconds or even minutes.

Figure 4.12 shows the modified Mandelbrot set of the $c$-values for which the complex point sequence ${z}_{n}$ converges. The region of convergence nearly corresponds to the unit circle (as one expects from the geometric series), but exhibits further fractal branching at the boundary, as shown in Figure 4.13 in higher resolution.

An aesthetically especially interesting variant of a given complex fractal is its Julia set. It is obtained by keeping the point $c$ in the complex plane fixed and asking which points $z$ in the plane lead to a divergent or convergent sequence. Thus for the Mandelbrot set and its Julia sets we have:

$$\begin{aligned} &\text{creation law of the sequence: } z_{n+1} = z_n^2 + c \\ &\text{Mandelbrot set: } z_0 = \text{constant} = 0; \text{ for which points } c \text{ does the sequence converge or diverge?} \\ &\text{corresponding Julia set: } c = \text{constant}; \text{ for which points } z \text{ does the sequence converge or diverge?} \end{aligned}$$

Thus one can map every point $c$ of the Mandelbrot set to its Julia set. In the following simulation a small white point in the left window, which shows the Mandelbrot set, can be moved with the mouse. The program calculates the corresponding Julia set, which is shown in the right window. Its appearance and symmetry change in a characteristic manner as one moves $c$ around the Mandelbrot set. With the slider one can adjust the colour shading for the diverging values.

$c=0$ leads to the sequence $z, z^2, z^4, z^8, \dots$, which, like the geometric sequence, converges inside the unit circle and diverges outside of it. The Julia set is then identical with the inside of the unit circle.
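This special case can be verified directly; the following sketch (an added illustration, with the same escape threshold $|z|>4$ as before and an assumed iteration limit) tests points inside and outside the unit circle for $c=0$:

```python
def julia_escapes(z, c=0j, max_iter=100, threshold=4.0):
    """True if the sequence z_{n+1} = z_n**2 + c, started at z,
    leaves the disc |z| <= threshold within max_iter iterations."""
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > threshold:
            return True
    return False

print(julia_escapes(0.9 + 0j))  # inside the unit circle: stays bounded
print(julia_escapes(1.1 + 0j))  # outside the unit circle: diverges
```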

End of chapter 4.