5.10 Series Expansion (2): the Fourier Series

5.10.1 Taylor Series and Fourier Series

The Taylor series approximates a function f(x) in the vicinity of the expansion point x0 via partial sums of a power series. If one would like to approximate a function over a larger interval, one needs terms of very high order: the polynomial obtained by truncating the Taylor series would have to have at least as many turning points as the function. For periodic functions this would be very tedious for intervals larger than the period.

Periodic functions are of great practical importance in telecommunications and electrical engineering. For such functions the approximation via the superposition of periodic standard functions (sine and cosine) is much better suited. One expands the function into a series that consists of the fundamental tone and the overtones, i.e. of the functions sin(nx) and cos(nx) with integer values of n.

The analogy to the analysis of a vibrating string that is fixed at both ends is immediately obvious: sin(x) describes the vibration of the fundamental tone, sin(2x) that of the octave, sin(3x) that of the fifth above the octave, and so on. The variable x is now the product ωt of the angular frequency ω and the time t:

x = \omega t = 2\pi\nu t = \frac{2\pi t}{T}; \quad \nu: \text{frequency of oscillation}; \quad T: \text{duration of one period}


Depending on the shape of f(t) one superimposes more or fewer of these sine/cosine oscillations, each with a certain strength expressed as a number that determines the amplitude. The set of amplitudes of the overtones, i.e. the coefficients of the series expansion, represents the spectrum of the periodic oscillation. Spectrum and oscillation form are corresponding representations of the same phenomenon. This representation in terms of superimposed sine and cosine functions is called the Fourier series of f(t).

While the partial sums of the Taylor series approximate the function in the proximity of a point, the partial sums of the Fourier series are approximations for the entire interval of the fundamental period and therefore also, because of the periodicity of the functions considered, for an unlimited region of the variable x. The Fourier series does not have to coincide with the function at any point, while this is the case for the Taylor series at the expansion point.

How many overtones have to be superimposed to approximate the function at nearly all points depends on the properties of f(t). If one does not interpret the notion of convergence too strictly, Fourier series converge for all functions, even for discontinuous ones. The convergence is then not necessarily uniform, i.e. it can be better for some values of t, worse for others, and can even fail for some values! At discontinuities one observes overshoots even for higher orders of the series. This is called ringing in telecommunications.

Since the periodic phenomena that we consider here are mostly oscillations in time, the variable is usually x = ωt. To also model the phases of the individual overtones, we use a sum of terms with sin(nx) and cos(nx); such a sum represents a phase-shifted sine or cosine function. Thus the general Fourier series reads

f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(n\omega t) + b_n \sin(n\omega t) \right).

For a given spectrum a0, an, bn (n = 1, 2, …) one can calculate f(t). Conversely, for a given function f(t) all coefficients can be determined and thus the spectrum is known.
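The first direction, synthesizing f(t) from a given spectrum, is straightforward to carry out numerically. The following Python sketch (not part of the simulations of this book; the function name and the example coefficients are illustrative) evaluates such a partial sum:

```python
import numpy as np

def fourier_synthesis(t, omega, a0, a, b):
    """Partial sum f(t) = a0/2 + sum_n [a_n cos(n w t) + b_n sin(n w t)]."""
    f = np.full_like(t, a0 / 2.0)
    for n, (an, bn) in enumerate(zip(a, b), start=1):
        f += an * np.cos(n * omega * t) + bn * np.sin(n * omega * t)
    return f

# Example: the first overtones of a square wave, b_n = 4/(pi*n) for odd n
t = np.linspace(0.0, 4.0 * np.pi, 2000)
f = fourier_synthesis(t, omega=1.0, a0=0.0,
                      a=[0.0, 0.0, 0.0],
                      b=[4 / np.pi, 0.0, 4 / (3 * np.pi)])
```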

5.10.2 Determination of the Fourier coefficients

How do we now obtain the coefficients an and bn?

For the Taylor series we had used the fact that after differentiation all terms that still contain the distance x to the expansion point vanish, such that the coefficient of the remaining constant term gives, up to a factor, the corresponding derivative at the expansion point.

For the Fourier series we instead begin by integrating the product of the function and the overtones cos(mωt) or sin(mωt), m = 1, 2, 3, …, over one period T of the fundamental frequency (m = 1):

\int_0^T \cos(m\omega t)\, f(t)\, dt = \int_0^T \cos(m\omega t) \left( \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(n\omega t) + b_n \sin(n\omega t) \right) dt

\int_0^T \sin(m\omega t)\, f(t)\, dt = \int_0^T \sin(m\omega t) \left( \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n \cos(n\omega t) + b_n \sin(n\omega t) \right) dt

This looks a bit complicated initially; however, it turns out that the integral over the constant, i.e. the first term before the summation symbol, nearly always vanishes, since the integral of cosine or sine over one period is zero. Only for m = 0 does one obtain a contribution, since cos 0 = 1 = const. Therefore the following applies:

\frac{a_0}{2} = \frac{1}{T} \int_0^T f(t)\, dt.

In addition, the integral over the product of an overtone m and a second overtone n is zero if m and n are not equal. This also applies when a cosine and a sine function are multiplied, because the sine functions are odd while the cosine functions are even with respect to x = 0. Therefore we are left only with the integrals over cos²(nωt) or sin²(nωt), which both equal T/2. Thus the coefficients can easily be written down, but evaluating them requires the calculation of integrals, which in general necessitates numerical methods:

a_n = \frac{2}{T} \int_0^T \cos(n\omega t)\, f(t)\, dt; \qquad b_n = \frac{2}{T} \int_0^T \sin(n\omega t)\, f(t)\, dt
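A minimal numerical sketch of these coefficient formulas, assuming a simple Riemann-sum quadrature (the function name and the sawtooth test case are illustrative):

```python
import numpy as np

def fourier_coefficients(f, T, n_max, samples=100_000):
    """Approximate a_n, b_n = (2/T) * integral over one period via Riemann sums."""
    t = np.linspace(0.0, T, samples, endpoint=False)
    dt = T / samples
    omega = 2.0 * np.pi / T
    y = f(t)
    a = [(2.0 / T) * np.sum(y * np.cos(n * omega * t)) * dt for n in range(n_max + 1)]
    b = [(2.0 / T) * np.sum(y * np.sin(n * omega * t)) * dt for n in range(n_max + 1)]
    return a, b  # a[0] is a_0, so a[0]/2 is the mean value of f

# Example: sawtooth f(t) = t - 1/2 on [0, 1); expect a_n ~ 0 and b_n ~ -1/(pi*n)
a, b = fourier_coefficients(lambda t: t - 0.5, T=1.0, n_max=5)
```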

The simulation in Fig. 5.15 visualizes these circumstances that simplify the calculation of the Fourier coefficients. From a selection field one chooses a product of periodic functions of the general form that we are interested in: cos(mx)(a cos(nx) + b sin(nx)). The red curve represents the product of cos(mx) and the adjustable overtone a cos(nx) + b sin(nx); in the figure we have m = 10 and n = 8. The blue curve shows the integral, whose final value (the definite integral over one period) vanishes for m ≠ n. For m = n, integrating a cos(mx) cos(mx) yields the result aπ, while the integral over the mixed term b cos(mx) sin(mx) vanishes. The integration is started by selecting the corresponding option box.

With sliders the parameters a and b and the integers m and n can be chosen. The function is drawn in red. After activating the field entitled Integral, the blue integral function is calculated over one period of the fundamental oscillation from 0 to 2π. Its final value is the definite integral of interest to us.

As a first step we convince ourselves that the integrals over sine and cosine vanish and that the addition of sine and cosine functions results in a phase-shifted sine or cosine function whose integral also vanishes. The calculation of the integral for the product of the function defined above with an overtone of initially unknown order shows that indeed all contributions vanish except for the one where the overtones are identical and the function type (sine or cosine) is the same. One realizes that the symmetry of the different functions with respect to the midpoint of the period on the x-axis is the reason for this specific result. Thus we have:

\int_0^T \cos(m\omega t)\, dt = 0; \qquad \int_0^T \cos(m\omega t)\,\sin(n\omega t)\, dt = 0;

\int_0^T \cos(m\omega t)\,\cos(n\omega t)\, dt = \begin{cases} 0 & \text{for } m \neq n \\ T/2 & \text{for } m = n \end{cases}

(and analogously for the product of two sine functions).

This property of the sine and cosine functions means that they are an example of an orthogonal system of functions. Two functions are called orthogonal if the following applies:

\int_0^T f_1(t)\, f_2(t)\, dt = 0 \quad \text{for } f_1(t) \neq f_2(t)
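This orthogonality is easy to verify numerically. A small Python sketch, assuming ω = 1 so that T = 2π (purely illustrative):

```python
import numpy as np

T = 2.0 * np.pi                          # period for omega = 1
t = np.linspace(0.0, T, 100_000, endpoint=False)
dt = t[1] - t[0]

def integral(g):
    return np.sum(g) * dt                # Riemann sum over one period

m, n = 3, 5
print(integral(np.cos(m * t) * np.cos(n * t)))   # ~0, since m != n
print(integral(np.cos(m * t) * np.cos(m * t)))   # ~pi, i.e. T/2
print(integral(np.cos(m * t) * np.sin(n * t)))   # ~0: cosine times sine always vanishes
```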


Figure 5.15: The simulation visualizes the orthogonality of the trigonometric functions.

In the description pages of the simulation more detailed instructions and hints for experiments are provided. After opening the simulation you choose the function type and press the enter key. The integration process is animated in order for you to see the difference between the integrals more easily when changing the functions.

5.10.3 Visualizing the Calculation of Coefficients and Spectrum

The simulation in Fig. 5.16 visualizes the calculation of the Fourier coefficients for the fundamental tone and the first nine overtones for the following typical periodic functions: sawtooth, square wave, square impulse and Gaussian impulse. To this end the product of the functions under the integral sign is determined and drawn in red, while the definite integral is shown in blue. The final value of the integral is, except for a factor π that was suppressed to obtain more easily readable values, equal to the coefficient of the selected order. The functions are provided with up to three parameters a, b and c that control the amplitude, the point of symmetry and the impulse width. From the simulation the spectra of the functions shown can be obtained in a numerical and experimental manner.


Figure 5.16: Calculation of Fourier coefficients for a choice of functions f(t), here for a sawtooth oscillation.

The interactive figure of the simulation shows the situation for the sine coefficient of tenth order of a symmetrical sawtooth. The simulation is started by choosing a function and pressing the enter key. The description pages and the instructions for experiments contain further details.

5.10.4 Examples of Fourier Expansions

In the next interactive examples (Fig.5.17 to Fig.5.19) the calculation of the coefficients takes place in the background. In the window the function is shown in red and the partial sum of the desired order is shown in blue. The function window is interactive such that many more functions can be entered and a few are suggested in the description. In a text window the order of the analysis can be adjusted; with a slider the approximation order n to be used for the partial sum is selected. The simulation allows using very high orders.

The calculation of the Fourier expansion of n-th order follows immediately after entering the function. The diagram extends beyond the integration region of 2π in order to see the periodic continuation in both directions.

In Fig. 5.17 the Fourier expansion of order 43 is shown as an approximation for the symmetrical and periodic square impulse. For the square wave one recognizes very clearly the typical overshoot at discontinuities (the Gibbs phenomenon), which does not vanish even for very high orders.
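The size of this overshoot can be checked with a short calculation. The following sketch (illustrative; it uses the well-known expansion of the square wave of amplitude 1, whose coefficients are b_n = 4/(πn) for odd n) shows that the maximum of the partial sum stays near 1.18 no matter how high the order:

```python
import numpy as np

def square_partial_sum(t, order):
    """Partial sum of the square wave of amplitude 1: (4/pi) sum over odd k of sin(kt)/k."""
    s = np.zeros_like(t)
    for k in range(1, order + 1, 2):                   # odd overtones only
        s += 4.0 / (np.pi * k) * np.sin(k * t)
    return s

t = np.linspace(0.0, np.pi, 200_000)
for order in (11, 43, 201):
    print(order, square_partial_sum(t, order).max())   # stays near 1.18 for every order
```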


Figure 5.17: Periodic square impulse (red) and its Fourier approximation (blue) of 43rd order. The calculated order n can be chosen.

In Fig. 5.18, using the same simulation, the approximation of 17th order is shown for a sawtooth oscillation that has been modulated in a nonlinear fashion with a high-frequency sine function.


Figure 5.18: Periodic sawtooth, modulated from the middle of the period onward by a high-frequency sine function (red), and its Fourier approximation of 17th order (blue). The modulation frequency can be chosen with the slider. Similarly complicated waveforms are used in synthesizers to produce interesting sounds.

In a second window of the simulation (Fig. 5.19) the spectrum is shown. One can switch between the cosine spectrum (an), the sine spectrum (bn) and the power spectrum (an² + bn²). This figure shows the spectrum of the modulated sawtooth, which is rich in overtones and has a pronounced formant at the sixth and seventh overtone. In acoustics, formants are defined as limited regions of overtones with large amplitude; they significantly determine the tone quality.



Figure 5.19: Frequency spectrum for the Fourier expansion of the modulated sawtooth in Fig. 5.18. The abscissa shows the order n of the overtone (fundamental tone n = 1); on the ordinate one can choose between displaying the individual coefficients or the total power in a given order.

The description of the simulation contains further instructions.

5.10.5 Complex Fourier Series

In the space of complex numbers the Fourier series can be formulated in a very elegant way:

f(t) = \sum_{n=-\infty}^{\infty} c_n e^{in\omega t}; \qquad c_n = \frac{1}{T} \int_0^T f(t)\, e^{-in\omega t}\, dt.

The connection to the real representation is obtained by reordering the sum and combining, starting with n = 1, the terms with −n and n. Taking into account cos(−x) = cos(x) and sin(−x) = −sin(x), we get

f(t) = \sum_{n=-\infty}^{\infty} c_n e^{in\omega t} = \sum_{n=-\infty}^{\infty} c_n \left( \cos(n\omega t) + i \sin(n\omega t) \right) = c_0 + (c_1 + c_{-1})\cos(\omega t) + i\,(c_1 - c_{-1})\sin(\omega t) + \ldots

f(t) = c_0 + \sum_{n=1}^{\infty} \left[ (c_n + c_{-n})\cos(n\omega t) + i\,(c_n - c_{-n})\sin(n\omega t) \right]

As the connection between the real and the complex coefficients we obtain

a_0 = 2c_0; \qquad a_n = c_n + c_{-n}; \qquad b_n = i\,(c_n - c_{-n}).

The complex formulation is especially used in electrical engineering. It has the advantage that calculations with exponentials are in general easier and more transparent than those with trigonometric functions.

For the fast numerical computation of the components of a Fourier series a special algorithm has been developed, which is known as the FFT (Fast Fourier Transform).
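A minimal sketch of how the FFT yields the complex coefficients cₙ and, via the relations above, aₙ and bₙ (using NumPy; the sampling parameters and the square-wave test signal are illustrative choices):

```python
import numpy as np

# Sample one period of f(t) at N equidistant points; the discrete Fourier
# transform then approximates the complex coefficients: c_n ~ F[n]/N
# (for real f one has c_{-n} = conj(c_n), stored at index N - n).
T, N = 1.0, 1024
t = np.arange(N) * T / N
f = np.where(t < T / 2, 1.0, -1.0)       # square wave as test signal

c = np.fft.fft(f) / N
a1 = 2.0 * c[1].real                     # a_n = c_n + c_{-n} = 2 Re c_n
b1 = -2.0 * c[1].imag                    # b_n = i(c_n - c_{-n}) = -2 Im c_n
print(a1, b1)                            # expect a1 ~ 0 and b1 ~ 4/pi = 1.273...
```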

5.10.6 Numerical Solution of Equations and Iterative Methods

In mathematics and physics one often needs to determine the values of a variable for which a function depending on this variable has a certain value C. An identical problem as far as the computation is concerned is to find the value of the variable at which two functions of one variable have the same value. One solves these problems by looking for the zeros of a function.

If we define y1 = f(x) and y2 = g(x):

For which x is y1 = C? Answer: solve f(x) − C = 0.
For which x is y1 = y2? Answer: solve h(x) = f(x) − g(x) = 0.

An analytical solution for finding the zeros of a function can only be found for very simple functions; it is thus the exception. Therefore one needs a numerical method of solution that preferably works for all functions and all parameter values.

This is achieved with iterative methods, which reverse the question. One initially takes a value of the variable that is probably smaller than the estimated first zero in the interval of interest and calculates both the absolute value of the function and its sign. Then one increases the variable by a given step (one can of course also start from the right and decrease the variable step by step). If the sign remains the same, one moves to the next point. If the sign changes, one has obviously crossed a zero. Now the direction of the movement is inverted and the step width is multiplied by a factor < 1. Thus one finds intervals of decreasing size containing the zero, until the deviation of the function value from zero becomes less than a predetermined tolerance. Then one continues with the process in the original direction until all zeros have been found, or until a certain threshold for the value of the variable or of the function itself has been exceeded and one is thus outside the region of interest.
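The procedure just described fits in a few lines of code. The following Python sketch is a minimal version of this scan-and-refine iteration (the step width, tolerance and example polynomial are illustrative choices, not the exact settings of the simulation):

```python
def find_zero(f, x0, step=0.1, tol=1e-10, max_iter=10_000):
    """Scan to the right in steps; on a sign change, reverse with step/10.

    x0 should lie to the left of the zero one is looking for.
    """
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        if abs(fx) < tol:
            return x
        x_new = x + step
        f_new = f(x_new)
        if (fx < 0) != (f_new < 0):      # sign change: a zero was crossed
            step = -step / 10.0          # reverse direction, shrink the step
        x, fx = x_new, f_new
    raise RuntimeError("no zero found within max_iter steps")

# Example with an irrational root: x^4 - 2 = 0 has the zero 2**0.25 = 1.1892...
root = find_zero(lambda x: x**4 - 2.0, x0=0.0)
```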

For this iteration process ready-made algorithms that include further refinements are available in standard numerical computer codes. One can, for example, vary the width of the iteration intervals such that the character of the function is taken into account: with the Newton method one uses the slope, i.e. the first derivative, to adjust these intervals. Given the speed of today's computers these refinements do not matter for simple tasks. The following interactive example in Fig. 5.20 determines the zeros of a function that can be entered at will. The preset function is a polynomial of fourth degree with irrational roots.
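For comparison, a minimal sketch of the Newton method mentioned above, assuming the derivative is known analytically (illustrative, not the simulation's algorithm):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method: jump to the zero of the tangent in each step."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)                  # next iterate: x - f(x)/f'(x)
    raise RuntimeError("Newton iteration did not converge")

root = newton(lambda x: x**4 - 2.0, lambda x: 4.0 * x**3, x0=1.0)
```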

The sequence shows the progression of a very simple iteration algorithm. The speed can be adjusted. The starting point of the iteration (magenta) can be dragged with the mouse. The iteration proceeds with a constant step width to larger x-values until the sign of the function changes. The variable is then reset to the last value before the sign change, the step width is decreased by a factor of 10, and the progression to larger x-values is resumed. This is repeated until the deviation of the y-value from zero falls below a given tolerance. In the simulation one can choose whether it stops after reaching a certain accuracy or whether all zeros in the variable interval are determined in sequence. In a single calculation the magenta point jumps to the calculated value, while the blue dot shows the first iteration value when determining multiple zeros.

To be able to follow the progressing iteration well even when high accuracy has already been achieved, a section of the window is shown in detail in a magnifying glass whose scale adjusts to the increasing accuracy.

From the zoom window of Fig. 5.20 one can see that the curve is always nearly linear close to the root. The regula falsi method uses as the next iteration value for x the intersection with the x-axis of the secant formed from the two previous iteration points. It therefore leads quickly to the final solution. We have however chosen the constant step width so that the process can be observed more easily.
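A minimal sketch of the regula falsi, assuming the zero is bracketed by the two starting values (illustrative):

```python
def regula_falsi(f, a, b, tol=1e-12, max_iter=200):
    """Iterate with the x-intercept of the secant through (a, f(a)) and (b, f(b))."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "the zero must be bracketed by a and b"
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # secant's intersection with the x-axis
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc                  # zero lies in [a, c]
        else:
            a, fa = c, fc                  # zero lies in [c, b]
    return c

root = regula_falsi(lambda x: x**4 - 2.0, 1.0, 2.0)   # ~1.1892
```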


Figure 5.20: Animated iterative calculation of the zeros of a function, in the figure a polynomial of fourth degree. The left window shows the whole calculation interval, the right one a section whose scale conforms to the resolution achieved. The last iteration point is shown in blue in both windows, while its three predecessors are shown in red in the magnifying-glass window; the picture shows the situation after a reversal with the step width divided by 10. The magenta point is the starting point of the iteration; it can be dragged with the mouse. The desired precision delta, the number of time steps per second (speed) and the abscissa range xmax can be chosen. The number fields show the coordinates x, y of the current iteration point and the initial point x0, y0 of the iteration. In the formula window any function whose zeros are to be calculated can be entered.

Further details and hints for experiments can be found on the description pages of the simulation.

End of chapter 5