The partial sums of the Taylor series approximate a function in the vicinity of the computation point via a power series. If one would like to approximate a function over a larger interval, one needs terms of very high order: the polynomial obtained by truncating the Taylor series must have at least as many turning points as the function. For periodic functions this becomes very tedious for intervals larger than the period.
Periodic functions have large practical importance in telecommunications and electrical engineering. For such functions the approximation via the superposition of periodic standard functions (sine and cosine) is much better suited. One expands the function into a series that consists of the fundamental tone and the overtones, i.e. of the functions $\sin(nx)$ and $\cos(nx)$ with integer values of $n$.
The analogy to the analysis of a vibrating string that is fixed at both ends is immediately obvious: $\sin x$ describes the vibration of the fundamental tone, $\sin 2x$ that of the octave, $\sin 3x$ that of the fifth above the octave, and so on. The variable $x$ is now the product $\omega t$ of the angular frequency $\omega$ and the time $t$.
Depending on the shape of $f(x)$ one superimposes more or fewer of these sine/cosine oscillations, each with a certain strength expressed as a number that determines its amplitude. The set of amplitudes of the overtones, i.e. the coefficients of the series expansion, represents the spectrum of the periodic oscillation. Spectrum and oscillation form are corresponding representations of the same phenomenon. This representation in terms of superimposed sine and cosine functions is called the Fourier series of $f(x)$.
While the partial sums of the Taylor series approximate the function in the proximity of a point $x_0$, the partial sums of the Fourier series are approximations for the entire interval of the fundamental period and therefore also, because of the periodicity of the functions considered, for an unlimited region of the variable $x$. The Fourier series does not have to coincide with the function at any single point, while the Taylor series always coincides with it at the computation point.
It depends on the properties of $f(x)$ how many overtones have to be superimposed to approximate the function at nearly all points. If the notion of convergence is not interpreted too strictly, Fourier series converge for all functions, even for discontinuous ones. The convergence is then not necessarily monotone, i.e. it can be better for some values of $x$ and worse for others, and can even fail for individual values! At discontinuities one observes overshoots even for high orders of the series (the Gibbs phenomenon); in telecommunications this is called ringing.
Since the periodic phenomena that we consider here are mostly oscillations in time, the variable $x$ is usually $\omega t$. To also model the phases of the individual overtones, we use a sum of terms with $\cos(n\omega t)$ and $\sin(n\omega t)$; such a pair represents a phase-shifted sine or cosine function. Thus the general Fourier series reads

$$f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\bigl[a_n\cos(n\omega t) + b_n\sin(n\omega t)\bigr].$$
For a given spectrum one can calculate $f(t)$. For a given function $f(t)$ all coefficients can be determined, and thus the spectrum is known.
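The first direction, synthesis from a given spectrum, amounts to evaluating a partial sum. The following Python sketch illustrates this, assuming the $a_0/2$ convention of the series above; the function name `fourier_partial_sum` and the sample spectrum are our own illustration, not part of the book's simulations:

```python
import numpy as np

def fourier_partial_sum(t, a0, a, b, omega=1.0):
    """N-th partial sum a0/2 + sum_{n=1}^{N} [a_n cos(n w t) + b_n sin(n w t)].

    a and b hold the coefficients a_1..a_N and b_1..b_N.
    """
    s = np.full_like(t, a0 / 2.0)
    for n, (an, bn) in enumerate(zip(a, b), start=1):
        s += an * np.cos(n * omega * t) + bn * np.sin(n * omega * t)
    return s

# Synthesis from a given spectrum: b_n = 2/n (the sawtooth spectrum), a_n = 0
t = np.linspace(0.0, 2.0 * np.pi, 500)
N = 10
f_N = fourier_partial_sum(t, 0.0, [0.0] * N, [2.0 / n for n in range(1, N + 1)])
```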
How do we now obtain the coefficients $a_n$ and $b_n$?
For the Taylor series we used the fact that, after differentiation, all terms that still contain the distance to the point of computation become zero, such that the coefficient of the remaining constant term gives, up to a factor, the corresponding derivative at the point of computation.
For the Fourier series we instead begin by integrating the product of the function and the overtones over one period $T = 2\pi/\omega$ of the fundamental frequency:

$$\int_0^T f(t)\cos(m\omega t)\,dt \quad\text{and}\quad \int_0^T f(t)\sin(m\omega t)\,dt.$$
This looks initially a bit complicated; however, it turns out that the integral over the constant, i.e. the first term before the sum symbol, nearly always vanishes, since the integral of cosine or sine over a period is zero. Only for $m = 0$ one obtains a contribution, since $\cos(0) = 1 = \text{const}$. Therefore the following applies:

$$\frac{1}{T}\int_0^T f(t)\,dt = \frac{a_0}{2}.$$
In addition, the integral over the product of an overtone $n$ and a second overtone $m$ is zero if $n$ and $m$ are not equal. The same applies when a cosine and a sine function are multiplied, because the sine functions are odd while the cosine functions are even with respect to the midpoint of the period. Therefore we are left only with the integrals $\int_0^T\cos^2(n\omega t)\,dt$ or $\int_0^T\sin^2(n\omega t)\,dt$, which are both $T/2$. Thus the coefficients can be written down easily:

$$a_n = \frac{2}{T}\int_0^T f(t)\cos(n\omega t)\,dt, \qquad b_n = \frac{2}{T}\int_0^T f(t)\sin(n\omega t)\,dt;$$

however, the determination of these integrals in general necessitates numerical calculation.
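These coefficient formulas translate directly into a short numerical routine. The following Python sketch approximates the integrals with a simple Riemann sum over one period; the name `fourier_coefficients` and the choice of sample count are our own, not the simulation's code:

```python
import numpy as np

def fourier_coefficients(f, N, T=2.0 * np.pi, samples=4096):
    """Approximate a_n, b_n = (2/T) * integral_0^T f(t) cos/sin(n w t) dt."""
    omega = 2.0 * np.pi / T
    t = np.linspace(0.0, T, samples, endpoint=False)
    dt = T / samples
    y = f(t)
    a = [2.0 / T * np.sum(y * np.cos(n * omega * t)) * dt for n in range(N + 1)]
    b = [2.0 / T * np.sum(y * np.sin(n * omega * t)) * dt for n in range(N + 1)]
    return a, b  # the constant term of the series is a[0] / 2

# Check against the square wave, for which b_n = 4/(n pi) for odd n, 0 otherwise
a, b = fourier_coefficients(lambda t: np.sign(np.sin(t)), N=5)
print([round(x, 4) for x in b])  # ~ [0, 1.2732, 0, 0.4244, 0, 0.2546]
```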
The simulation in Fig.5.15 visualizes these circumstances that simplify the calculation of the Fourier coefficients. From a selection field one chooses a product of periodic functions of the general form that we are interested in, e.g. $\sin(nx)\sin(mx)$, $\sin(nx)\cos(mx)$ or $\cos(nx)\cos(mx)$. The red curve represents the product of $\sin x$ and an adjustable overtone; in the figure two different orders have been selected. The blue curve shows the integral, whose final value (the definite integral over one period of the fundamental oscillation) vanishes for $n \ne m$. For $n = m$ the integration over $\sin^2(nx)$ yields the result $\pi$, while the integral over the mixed term $\sin(nx)\cos(nx)$ vanishes. The integration is started via selecting the corresponding option box.
With sliders the amplitude parameters and the integers $n$ and $m$ can be chosen. The function is drawn in red. After activating the field entitled Integral, the blue integral function is calculated over one period of the fundamental oscillation from $0$ to $2\pi$. The final value is the definite integral of interest to us.
As a first step we convince ourselves that the integrals over sine and cosine vanish and that the addition of sine and cosine functions results in a phase-shifted sine or cosine function whose integral also vanishes. The calculation of the integral for the product of the function defined above with an overtone of initially unknown order shows that indeed all contributions vanish except for the one where the overtones are identical and the function type (sine or cosine) is the same. One realizes that the symmetry of the different functions with respect to the midpoint of the period on the $x$-axis is the reason for this specific result. Thus we have, for integers $n, m \ge 1$,

$$\int_0^{2\pi}\sin(nx)\sin(mx)\,dx = \int_0^{2\pi}\cos(nx)\cos(mx)\,dx = \pi\,\delta_{nm}, \qquad \int_0^{2\pi}\sin(nx)\cos(mx)\,dx = 0.$$
This property of the functions sine and cosine means that they are an example of an orthogonal system of functions. Two functions $f$ and $g$ are called orthogonal on an interval $[a, b]$ if the following applies:

$$\int_a^b f(x)\,g(x)\,dx = 0.$$
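The orthogonality relations can also be checked numerically in a few lines. The following Python sketch (our own illustration, mirroring what the simulation shows graphically) evaluates the integrals for a few small orders:

```python
import numpy as np

# Numerical check of the orthogonality relations over one period [0, 2*pi]
t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
dt = 2.0 * np.pi / t.size

for n in range(1, 4):
    for m in range(1, 4):
        ss = np.sum(np.sin(n * t) * np.sin(m * t)) * dt  # pi for n == m, else 0
        sc = np.sum(np.sin(n * t) * np.cos(m * t)) * dt  # always 0
        print(f"n={n}, m={m}: sin*sin = {ss:8.5f}, sin*cos = {sc:8.5f}")
```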
On the description pages of the simulation more detailed instructions and hints for experiments are provided. After opening the simulation you choose the function type and press the Enter key. The integration process is animated so that you can more easily see the difference between the integrals when changing the functions.
The simulation in Fig.5.16 visualizes the calculation of the Fourier coefficients for the fundamental tone and the first nine overtones for the following typical periodic functions: sawtooth, square wave, square impulse and Gaussian impulse. To this end the product of the functions under the integral sign is determined and drawn in red, while the definite integral is shown in blue. The final value of the integral is, except for a factor that was suppressed to obtain more easily readable values, equal to the coefficient of the selected order. The functions are provided with up to three parameters that control the amplitude, the point of symmetry and the impulse width. From the simulation the spectra of the functions shown can be obtained in a numerical, experimental manner.
The interactive figure of the simulation shows the situation for the sine coefficient of tenth order of a symmetrical sawtooth. The simulation is started by choosing a function and pressing the Enter key. The description pages and the instructions for experiments contain further details.
In the next interactive examples (Fig.5.17 to Fig.5.19) the calculation of the coefficients takes place in the background. In the window the function is shown in red and the partial sum of the desired order in blue. The function window is interactive, such that many more functions can be entered; a few are suggested in the description. In a text window the order of the analysis can be adjusted; with a slider the approximation order to be used for the partial sum is selected. The simulation allows using very high orders.
The calculation of the Fourier expansion of $n$-th order follows immediately after entering the function. The diagram extends beyond the integration region of one period, in order to show the periodic continuation in both directions.
In Fig.5.17 the Fourier expansion of finite order is shown as an approximation for the symmetrical, periodic square impulse. For the square wave one recognizes very clearly the typical overshooting at discontinuities, which does not vanish even for very high orders.
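The persistence of the overshoot can be reproduced with the well-known series of the square wave, $f(t) = \frac{4}{\pi}\sum_{n\ \text{odd}} \frac{\sin(nt)}{n}$. The following Python sketch (our own illustration, not the simulation's code) shows that the maximum of the partial sums does not approach the amplitude 1:

```python
import numpy as np

def square_partial_sum(t, N):
    """Partial sum (4/pi) * sum over odd n <= N of sin(n t)/n for the square wave."""
    s = np.zeros_like(t)
    for n in range(1, N + 1, 2):
        s += 4.0 / (np.pi * n) * np.sin(n * t)
    return s

# The maximum overshoot near the jump does not decay with increasing order
t = np.linspace(1e-4, np.pi - 1e-4, 200_000)
for N in (9, 99, 999):
    print(N, square_partial_sum(t, N).max())  # approaches ~1.179, not 1
```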
In Fig.5.18, using the same simulation, the approximation of 17th order is shown for a sawtooth oscillation that has been modulated in a nonlinear fashion with a sine function of high frequency.
In a second window of the simulation (Fig.5.19) the spectrum is shown. One can switch between the sine spectrum ($b_n$), the cosine spectrum ($a_n$) and the power spectrum ($a_n^2 + b_n^2$). This figure shows the spectrum of the modulated sawtooth, which is rich in overtones and has a pronounced formant at the sixth and seventh overtone. In acoustics, formants are defined as limited regions of overtones with large amplitude; they significantly determine the tone quality.
The description of the simulation contains further instructions.
In the space of complex numbers the Fourier series can be formulated in a very elegant way:

$$f(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{in\omega t}.$$
The connection to the real representation is obtained by reordering the sum and combining the terms with $+n$ and $-n$. Taking into account $e^{\pm in\omega t} = \cos(n\omega t) \pm i\sin(n\omega t)$ we get

$$f(t) = c_0 + \sum_{n=1}^{\infty}\bigl[(c_n + c_{-n})\cos(n\omega t) + i\,(c_n - c_{-n})\sin(n\omega t)\bigr].$$
As the connection between real and complex coefficients we obtain

$$a_0 = 2c_0, \qquad a_n = c_n + c_{-n}, \qquad b_n = i\,(c_n - c_{-n}).$$
The complex formulation is especially used in electrical engineering. It has the advantage that calculations with exponentials are in general easier and more transparent than those with trigonometric functions.
For the fast numerical computation of the components of a Fourier series a special algorithm has been developed, which is known as the FFT (Fast Fourier Transform).
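As an illustration (not the book's simulation code), the following Python sketch obtains the complex coefficients $c_n$ of a sampled square wave with NumPy's FFT and converts them to $a_n$ and $b_n$ using the relations above:

```python
import numpy as np

# One period of a square wave, sampled at M points
M = 1024
t = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
y = np.sign(np.sin(t))

c = np.fft.fft(y) / M        # c[n] approximates the complex coefficient c_n
a = 2.0 * c.real             # a_n = c_n + c_{-n}  (c_{-n} = conj(c_n) for real f)
b = -2.0 * c.imag            # b_n = i (c_n - c_{-n})
print(b[1], 4.0 / np.pi)     # b_1 of the square wave is 4/pi
```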
In mathematics and physics one often needs to determine the values of a variable for which a function depending on this variable takes a given value. A computationally identical problem is to find the value of the variable at which two functions of one variable have the same value. One solves both problems by looking for the zeros of a function: $f(x) = y_0$ is equivalent to $f(x) - y_0 = 0$, and $f(x) = g(x)$ to $f(x) - g(x) = 0$.
An analytical solution for finding the zeros of a function exists only for very simple functions and is thus the exception. Therefore one needs a numerical method of solution that preferably works for all functions and all parameter values.
This is achieved with iterative methods, which reverse the question. One initially takes a value of the variable that is presumably smaller than the first zero in the interval of interest and calculates both the absolute value of the function value and its sign. Then one increases the variable by a given step (one can of course also start from the right and decrease the variable step by step). If the sign of the new function value is unchanged, one moves on to the next point. If the sign changes, one has obviously crossed a zero. Now the direction of the movement is inverted and the step width is multiplied by a factor smaller than one. Thus one finds brackets of decreasing size containing the zero, until the deviation of the function value from zero becomes less than a predetermined tolerance. Then one continues the process in the original direction until all zeros have been found, or until a certain threshold for the value of the variable or of the function itself has been exceeded and one is thus outside the region of interest.
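The scanning procedure just described might be sketched as follows in Python. The shrink factor 0.5 and the name `find_zero` are our own choices, since the text leaves the reduction factor unspecified:

```python
def find_zero(f, x0, step, tol=1e-10, shrink=0.5, max_iter=100_000):
    """Scan to the right with a fixed step; on each sign change reverse the
    direction and shrink the step, bracketing the zero ever more tightly."""
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        if abs(fx) < tol:          # function value close enough to zero
            return x
        x_next = x + step
        f_next = f(x_next)
        if (f_next > 0.0) != (fx > 0.0):   # sign change: a zero was crossed
            step = -step * shrink          # turn around with a smaller step
        x, fx = x_next, f_next
    raise RuntimeError("no zero found within max_iter steps")

# Example: the positive root of x^2 - 2, starting left of it
print(find_zero(lambda x: x * x - 2.0, 0.0, 0.1))  # ~ 1.41421356
```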
For this iteration process ready-made algorithms are available in standard numerical computer codes, which include further refinements. One can, for example, vary the width of the iteration intervals such that the character of the function is taken into account: the Newton method uses the slope, i.e. the first derivative, to adjust these intervals. Given the speed of today's computers these refinements do not matter for simple tasks. The following interactive example (Fig.5.20) determines the zeros of a function that can be entered at will. The preset function is a polynomial of fourth degree with irrational roots.
The sequence shows the progression of a very simple iteration algorithm, whose speed can be adjusted. The starting point of the iteration (magenta) can be dragged with the mouse. The iteration proceeds with a constant step width to larger $x$-values until the sign of the function changes. The variable is then reset to the last value before the sign change, the step width is decreased by a constant factor, and the progression to larger $x$-values is resumed. This is repeated until the deviation of the function value from zero falls below a given tolerance. In the simulation one can choose whether it stops after reaching a certain accuracy, or whether all zeros in the variable interval are determined in sequence. In a single calculation the magenta point jumps to the calculated value, while the blue dot shows the first iteration value when determining multiple zeros.
To be able to follow the progress of the iteration even when high accuracy has already been achieved, a section of the window is shown in detail in a magnifying glass whose scale adjusts to the increasing accuracy.
From the zoom window of Fig.5.20 you can see that the curve is always nearly linear close to the root. The regula falsi method uses as the next iteration value the intersection with the $x$-axis of the secant formed from the two previous iteration points. It therefore leads to the final solution quickly. We have, however, chosen the constant step width so that the process can be observed more easily.
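For comparison, here is a minimal regula falsi sketch in Python; it is our own illustration under the assumption of a sign-change bracket, not the simulation's code:

```python
def regula_falsi(f, x_left, x_right, tol=1e-12, max_iter=100):
    """Replace one end of a sign-change bracket by the x-axis intersection
    of the secant through the bracket's end points."""
    f_left, f_right = f(x_left), f(x_right)
    if f_left * f_right >= 0.0:
        raise ValueError("the interval must bracket a zero")
    for _ in range(max_iter):
        # Intersection of the secant with the x-axis
        x_new = x_right - f_right * (x_right - x_left) / (f_right - f_left)
        f_new = f(x_new)
        if abs(f_new) < tol:
            return x_new
        if f_new * f_left < 0.0:           # zero lies in the left part
            x_right, f_right = x_new, f_new
        else:                              # zero lies in the right part
            x_left, f_left = x_new, f_new
    return x_new

print(regula_falsi(lambda x: x * x - 2.0, 1.0, 2.0))  # ~ 1.41421356
```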
Further details and hints for experiments can be found on the description pages of the simulation.
End of chapter 5