Fourier Series, Fourier Transforms and the Delta Function
Michael Fowler, UVa.
Introduction
We begin with a brief review of Fourier series. Any periodic function of interest in physics can be expressed as a series in sines and cosines: we have already seen that the quantum wave function of a particle in a box is precisely of this form. The important question in practice is, for an arbitrary wave function, how good an approximation is given if we stop summing the series after $N$ terms. We establish here that the sum after $N$ terms, $f_N(\theta)$, can be written as a convolution of the original function with the function

$$\delta_N(x)=\frac{1}{2\pi}\,\frac{\sin\left(N+\tfrac{1}{2}\right)x}{\sin\tfrac{1}{2}x},$$
that is,
$$f_N(\theta)=\int_{-\pi}^{\pi}\delta_N(\theta-\theta')\,f(\theta')\,d\theta'.$$
The structure of the function $\delta_N(x)$ (plotted below), when put together with the function $f(\theta)$, gives a good intuitive guide to how good an approximation the sum over $N$ terms is going to be for a given function $f(\theta)$. In particular, it turns out that step discontinuities are never handled perfectly, no matter how many terms are included. Fortunately, true step discontinuities never occur in physics, but this is a warning that it is of course necessary to sum up to some $N$ where the sines and cosines oscillate substantially more rapidly than any sudden change in the function being represented.
We go on to the Fourier transform, in which a function on the infinite line is expressed as an integral over a continuum of sines and cosines (or equivalently exponentials $e^{ikx}$). It turns out that arguments analogous to those that led to $\delta_N(x)$ now give a function $\delta(x)$ such that

$$f(x)=\int_{-\infty}^{\infty}\delta(x-x')\,f(x')\,dx'.$$
Confronted with this, one might well wonder what is the point of a function $\delta(x)$ which on convolution with $f(x)$ gives back the same function $f(x)$. The relevance of $\delta(x)$ will become evident later in the course, when states of a quantum particle are represented by wave functions on the infinite line, like $f(x)$, and operations on them involve integral operators similar to the convolution above. Working with operations on these functions is the continuum generalization of matrices acting on vectors in a finite-dimensional space, and $\delta(x)$ is the infinite-dimensional representation of the unit matrix. Just as in matrix algebra the eigenstates of the unit matrix are a set of vectors that span the space, and the unit matrix elements determine the set of dot products of these basis vectors, the delta function determines the generalized inner product of a continuum basis of states. It plays an essential role in the standard formalism for continuum states, and you need to be familiar with it!
Fourier Series
Any reasonably smooth real function $f(\theta)$ defined in the interval $-\pi<\theta\le\pi$ can be expanded in a Fourier series,

$$f(\theta)=\frac{A_0}{2}+\sum_{n=1}^{\infty}\left(A_n\cos n\theta+B_n\sin n\theta\right),$$
where the coefficients can be found using the orthogonality conditions,

$$\int_{-\pi}^{\pi}\cos m\theta\,\cos n\theta\,d\theta=\pi\,\delta_{mn},\qquad \int_{-\pi}^{\pi}\cos m\theta\,\sin n\theta\,d\theta=0,$$

and the same condition for the sines, $\int_{-\pi}^{\pi}\sin m\theta\,\sin n\theta\,d\theta=\pi\,\delta_{mn}$, to give:

$$A_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\cos n\theta\,d\theta,\qquad B_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\sin n\theta\,d\theta.$$

(Here $\delta_{mn}$ is the Kronecker delta; the $m=n=0$ case, where the integral gives $2\pi$, is taken care of by the factor of $\tfrac{1}{2}$ multiplying $A_0$.)
Note that for an even function only the $A_n$ are nonzero, while for an odd function only the $B_n$ are nonzero.
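As a quick numerical sanity check of these coefficient formulas (not part of the original notes), here is a short Python sketch; the test function $f(\theta)=\theta^2$ and the grid size are arbitrary choices made for illustration:

```python
import numpy as np

def trap(y, h):
    # composite trapezoid rule on a uniform grid of spacing h
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

# Check the coefficient formulas on the test function f(theta) = theta^2 on
# (-pi, pi).  It is even, so only cosine coefficients survive; the exact
# values are A_0 = 2 pi^2 / 3 and A_n = 4 (-1)^n / n^2.
theta, h = np.linspace(-np.pi, np.pi, 20001, retstep=True)
f = theta**2

def A(n):  # A_n = (1/pi) * integral of f(theta) cos(n theta)
    return trap(f * np.cos(n * theta), h) / np.pi

def B(n):  # B_n = (1/pi) * integral of f(theta) sin(n theta)
    return trap(f * np.sin(n * theta), h) / np.pi

assert abs(A(0) - 2 * np.pi**2 / 3) < 1e-4
assert abs(A(1) + 4.0) < 1e-4          # A_1 = -4
assert abs(A(2) - 1.0) < 1e-4          # A_2 = +1
assert abs(B(3)) < 1e-9                # sine coefficients vanish for an even f
```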
How Smooth is “Reasonably Smooth”?
The number of terms of the series necessary to give a good approximation to a function depends on how rapidly the function changes. To get an idea of what goes wrong when a function is not "smooth", it is instructive to find the Fourier sine series for the step function

$$f(\theta)=\begin{cases}-\tfrac{1}{2}, & -\pi<\theta<0,\\ \ \ \tfrac{1}{2}, & \ \ \,0<\theta<\pi.\end{cases}$$
Using the expression for $B_n$ above it is easy to find:

$$f(\theta)=\frac{2}{\pi}\left(\sin\theta+\frac{\sin 3\theta}{3}+\frac{\sin 5\theta}{5}+\cdots\right).$$
Taking the first half dozen terms in the series already gives a recognizable, though wiggly, approximation to the step function.
As we include more and more terms, the function becomes smoother but, surprisingly, the initial overshoot at the step stays at a finite fraction of the step height. However, the function recovers more and more rapidly, that is to say, the overshoot and “ringing” at the step take up less and less space. This overshoot is called Gibbs’ phenomenon, and only occurs in functions with discontinuities.
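The persistence of the overshoot is easy to verify numerically. The following Python sketch (the truncation points chosen are arbitrary) computes partial sums of the series above and checks that the maximum overshoot above the step stays near 9% of the unit jump however many terms are kept:

```python
import numpy as np

# Partial sums of the sine series for the step (-1/2 for theta<0, +1/2 for
# theta>0):  f_N(theta) = (2/pi) * sum over odd n <= N of sin(n theta)/n.
def partial_sum(theta, N):
    n = np.arange(1, N + 1, 2)                 # odd n only
    return (2 / np.pi) * np.sum(np.sin(np.outer(theta, n)) / n, axis=1)

theta = np.linspace(1e-4, 0.5, 4000)           # region just right of the step
for N in (51, 201, 801):
    overshoot = partial_sum(theta, N).max() - 0.5
    # Gibbs: the overshoot tends to about 0.09 of the jump, independent of N
    assert 0.07 < overshoot < 0.11
```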
How the Sum over N Terms is Related to the Complete Function
To get a clearer idea of how a Fourier series converges to the function it represents, it is useful to stop the series at $N$ terms and examine how that sum, which we denote $f_N(\theta)$, tends towards $f(\theta)$.
So, substituting the values of the coefficients

$$A_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta')\cos n\theta'\,d\theta',\qquad B_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta')\sin n\theta'\,d\theta'$$

in the series

$$f_N(\theta)=\frac{A_0}{2}+\sum_{n=1}^{N}\left(A_n\cos n\theta+B_n\sin n\theta\right)$$

gives

$$f_N(\theta)=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta')\left\{\frac{1}{2}+\sum_{n=1}^{N}\cos n(\theta-\theta')\right\}d\theta'.$$
We can now use the trigonometric identity

$$\frac{1}{2}+\sum_{n=1}^{N}\cos nx=\frac{\sin\left(N+\tfrac{1}{2}\right)x}{2\sin\tfrac{1}{2}x}$$

to find

$$f_N(\theta)=\int_{-\pi}^{\pi}\delta_N(\theta-\theta')\,f(\theta')\,d\theta',$$

where

$$\delta_N(x)=\frac{1}{2\pi}\,\frac{\sin\left(N+\tfrac{1}{2}\right)x}{\sin\tfrac{1}{2}x}.$$
(Note that proving the trigonometric identity is straightforward: write $z=e^{ix}$, so $\cos nx=\tfrac{1}{2}\left(z^n+z^{-n}\right)$, and sum the geometric progressions.)
Going backwards for a moment and writing

$$\delta_N(x)=\frac{1}{2\pi}\left(1+2\sum_{n=1}^{N}\cos nx\right),$$

it is easy to check that

$$\int_{-\pi}^{\pi}\delta_N(x)\,dx=1.$$
To help visualize $\delta_N(\theta)$, here is a plot for $N=20$:
We have just established that the total area under the curve $\int_{-\pi}^{\pi}\delta_N(x)\,dx=1$, and it is clear from the diagram that almost all this area is under the central peak, since the areas away from the center are almost equally positive and negative. The width of the central peak is $2\pi/\left(N+\tfrac{1}{2}\right)$, its height $\left(N+\tfrac{1}{2}\right)/\pi$.
Exercise: for large $N$, approximately how far down does the function dip on the first oscillation? (The first minimum is near $\left(N+\tfrac{1}{2}\right)x=3\pi/2$, where the value is approximately $-\tfrac{2}{3\pi}$ of the peak height.)
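These properties of $\delta_N(x)$ can be checked numerically; the sketch below (the choice $N=20$ matches the plot above, and the grid size is arbitrary) verifies the unit area, the peak height, and the approximate depth of the first dip:

```python
import numpy as np

def trap(y, h):
    # composite trapezoid rule on a uniform grid of spacing h
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

# Check the quoted properties of delta_N(x) = sin((N+1/2)x) / (2 pi sin(x/2)):
# unit area over (-pi, pi), peak height (N+1/2)/pi, and a first dip of roughly
# -2/(3 pi) of the peak height.
N = 20
x, h = np.linspace(-np.pi, np.pi, 400001, retstep=True)
with np.errstate(invalid='ignore'):
    dN = np.sin((N + 0.5) * x) / (2 * np.pi * np.sin(x / 2))
dN[len(x) // 2] = (N + 0.5) / np.pi            # fill in the x -> 0 limiting value

area = trap(dN, h)
peak = dN.max()
dip = dN[(x > 0) & (x < 2 * np.pi / (N + 0.5))].min()   # first oscillation

assert abs(area - 1.0) < 1e-6
assert abs(peak - (N + 0.5) / np.pi) < 1e-9
assert abs(dip / peak + 2 / (3 * np.pi)) < 0.02
```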
For functions varying slowly compared with the oscillations of $\delta_N(\theta-\theta')$, the convolution integral

$$f_N(\theta)=\int_{-\pi}^{\pi}\delta_N(\theta-\theta')\,f(\theta')\,d\theta'$$

will give $f_N(\theta)$ close to $f(\theta)$, and for these functions $f_N(\theta)$ will tend to $f(\theta)$ as $N$ increases.
It is also clear why convolving this curve with a step function gives an overshoot and oscillations. Suppose the function is a step, jumping from 0 to 1 at $\theta=0$. From the convolution form of the integral, you should be able to convince yourself that the value of $f_N(\theta)$ at a point $\theta$ is the total area under the curve $\delta_N$ to the left of that point (area below zero, that is, below the $x$-axis, of course counting negative). For $\theta=0$, this must be exactly 0.5 (since all the area under $\delta_N$ adds to 1). But if we want the value of $f_N$ at $\theta=\pi/\left(N+\tfrac{1}{2}\right)$ (that is, the first point to the right of the origin where the curve cuts through the $x$-axis), we must add all the area to the left of that point, which actually adds up to a total area greater than one, since the leftover area to the right of that point is overall negative. That gives the overshoot.
A Fourier Series in Quantum Mechanics: Electron in a Box
The time-independent Schrödinger wave functions for an electron in a box (here a one-dimensional square well with infinite walls) are just the sine and cosine series determined by the boundary conditions. Therefore, any reasonably smooth initial wavefunction describing the electron can be represented as a Fourier series. The time development can then be found by multiplying each term in the series by the appropriate time-dependent phase factor.
Important Exercise: prove that for a function $f(\theta)=\sum_{n=-\infty}^{\infty}a_n e^{in\theta}$, with the $a_n$ in general complex,

$$\frac{1}{2\pi}\int_{-\pi}^{\pi}\left|f(\theta)\right|^2 d\theta=\sum_{n=-\infty}^{\infty}\left|a_n\right|^2.$$
(The physical relevance of this result is as follows: for an electron confined to the circumference of a ring of unit radius, $\theta$ is the position of the electron. An orthonormal basis of states of the electron on this ring is the set of functions $\left(1/\sqrt{2\pi}\right)e^{in\theta}$ with $n$ an integer, and a correctly normalized superposition of these states must have $\sum_{n=-\infty}^{\infty}\left|a_n\right|^2=1$, so that the total probability of finding the electron in some state is unity. But this must also mean that the total probability of finding the electron anywhere on the ring is unity, and that's the left-hand side of the above equation: the $2\pi$'s cancel.)
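A numerical spot check of this identity, using an arbitrary handful of nonzero complex coefficients (the particular values are illustrative only):

```python
import numpy as np

def trap(y, h):
    # composite trapezoid rule on a uniform grid of spacing h
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

# Check (1/2pi) * integral |f|^2 dtheta = sum |a_n|^2 for a superposition
# f(theta) = sum a_n e^{i n theta} with a few nonzero complex a_n.
coeffs = {0: 0.5, 1: 1.0 - 2.0j, -3: 0.25j}
theta, h = np.linspace(-np.pi, np.pi, 100001, retstep=True)
f = sum(a * np.exp(1j * n * theta) for n, a in coeffs.items())

lhs = trap(np.abs(f)**2, h) / (2 * np.pi)
rhs = sum(abs(a)**2 for a in coeffs.values())
assert abs(lhs - rhs) < 1e-8
```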
Exponential Fourier Series
In the previous lecture, we discussed briefly how a Gaussian wave packet in $x$-space could be represented as a continuous linear superposition of plane waves that turned out to be another Gaussian wave packet, this time in $k$-space. The plan here is to demonstrate how we can arrive at that representation by carefully taking the limit of the well-defined Fourier series, going from the finite interval $(-\pi,\pi)$ to the whole line, and to outline some of the mathematical problems that arise, and how to handle them.
The first step is a trivial one: we need to generalize from real functions to complex functions, to include wave functions having nonvanishing current. A smooth complex function can be written in a Fourier series simply by allowing $A_n$ and $B_n$ to be complex, but in this case a more natural expansion would be in powers of $e^{i\theta}$ and $e^{-i\theta}$.
We write:

$$f(\theta)=\sum_{n=-\infty}^{\infty}a_n e^{in\theta},\qquad\text{with}\qquad a_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta)\,e^{-in\theta}\,d\theta,$$
and retracing the above steps

$$f_N(\theta)=\sum_{n=-N}^{N}a_n e^{in\theta}=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta')\sum_{n=-N}^{N}e^{in(\theta-\theta')}\,d\theta'=\int_{-\pi}^{\pi}\delta_N(\theta-\theta')\,f(\theta')\,d\theta',$$

exactly the same expression as before, therefore giving the same $\delta_N(x)$. This isn't surprising, because using $\cos n\theta=\tfrac{1}{2}\left(e^{in\theta}+e^{-in\theta}\right)$ and $\sin n\theta=\tfrac{1}{2i}\left(e^{in\theta}-e^{-in\theta}\right)$, the first $N$ terms in the sine and cosine series can simply be rearranged to a sum over $e^{in\theta}$ terms for $-N\le n\le N$.
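It is easy to verify numerically that the symmetric exponential sum reproduces the same kernel; in the sketch below, $N=7$ and the sample points are arbitrary choices:

```python
import numpy as np

# Check that the symmetric exponential sum gives the same kernel as before:
# sum_{n=-N}^{N} e^{inx} = sin((N+1/2)x) / sin(x/2)   (= 2 pi delta_N(x)).
N = 7
x = np.linspace(0.05, 3.0, 500)    # stay away from x = 0, where both sides -> 2N+1
n = np.arange(-N, N + 1)
lhs = np.exp(1j * np.outer(x, n)).sum(axis=1)
rhs = np.sin((N + 0.5) * x) / np.sin(x / 2)

assert np.allclose(lhs.imag, 0, atol=1e-12)   # the sum is real (conjugate pairs)
assert np.allclose(lhs.real, rhs)
```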
Electron out of the Box: the Fourier Transform
To break down a wave packet into its plane wave components, we need to extend the range of integration from the $(-\pi,\pi)$ used above to $(-\infty,\infty)$. We do this by first rescaling from $(-\pi,\pi)$ to $(-L/2,L/2)$ and then taking the limit $L\to\infty$.
Scaling the interval from $2\pi$ to $L$ (in the complex representation) gives:

$$f(x)=\sum_{n=-\infty}^{\infty}a_n e^{2\pi inx/L},\qquad\text{where}\qquad a_n=\frac{1}{L}\int_{-L/2}^{L/2}f(x)\,e^{-2\pi inx/L}\,dx,$$

the sum in $n$ being over all integers. This is an expression for $f(x)$ in terms of plane waves $e^{ikx}$ where the allowed $k$'s are $2\pi n/L$, with $n=0,\pm 1,\pm 2,\dots$
Retracing the steps above in the derivation of the function $\delta_N(x)$, we find the equivalent function to be

$$\delta_N(x)=\frac{1}{L}\sum_{n=-N}^{N}e^{2\pi inx/L}=\frac{1}{L}\,\frac{\sin\left(\left(2N+1\right)\pi x/L\right)}{\sin\left(\pi x/L\right)}.$$

Studying the expression on the right, it is evident that provided $N$ is much greater than $L$, this has the same peaked-at-the-origin behavior as the $\delta_N(x)$ we considered earlier. But we are interested in the limit $L\to\infty$, and there, for fixed $N$, this function is low and flat.
Therefore, we must take the limit N going to infinity before taking L going to infinity.
This is what we do in the rest of this section.
Provided $L$ is finite, we still have a Fourier series, representing a function of period $L$. Our main interest in taking $L$ infinite is that we would like to represent a nonperiodic function, for example a localized wave packet, in terms of plane-wave components.
Suppose we have such a wave packet, say of length $\ell$, by which we mean the wave is exactly zero outside a stretch of the axis of length $\ell$. Why not just express it in terms of an infinite Fourier series based on some large interval $(-L/2,L/2)$, provided the wave packet length $\ell$ is completely inside this interval? The point is that such an analysis would indeed accurately reproduce the wave packet inside the interval, but the same sum of plane waves evaluated over all the $x$-axis would reveal an infinite string of identical wave packets a distance $L$ apart! This is not what we want.
As a preliminary to taking $L$ to infinity, let us write the exponential plane wave terms in the standard $k$-notation,

$$e^{2\pi inx/L}=e^{ik_nx}.$$
So we are summing over an (infinite $N$) set of plane waves having wave number values

$$k_n=\frac{2\pi n}{L},\qquad n=0,\pm 1,\pm 2,\dots,$$

a set of equally-spaced $k$'s with separation $\Delta k=2\pi/L$.
Consider now what would happen if we double the basic interval from $(-L/2,L/2)$ to $(-L,L)$. The new allowed $k$ values are $k_n=\pi n/L$, so the separation is now $\Delta k=\pi/L$, half of what it was before. It is evident that as we increase $L$, the spacing between successive $k_n$ values gets less and less.
Going back to the interval of length $L$, writing $La_n=a(k_n)$, we have

$$f(x)=\sum_{n=-\infty}^{\infty}a_n e^{ik_nx}=\frac{1}{L}\sum_{n=-\infty}^{\infty}a(k_n)\,e^{ik_nx}=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}a(k_n)\,e^{ik_nx}\,\Delta k.$$
Recall that the Riemann integral can be defined by

$$\int_{-\infty}^{\infty}g(k)\,dk=\lim_{\Delta k\to 0}\ \sum_{n=-\infty}^{\infty}g(k_n)\,\Delta k,$$

with $k_n=n\,\Delta k$.
The expression on the right-hand side of the equation for $f(x)$ has the same form as the right-hand side of the Riemann integral definition, and here $\Delta k=2\pi/L$. That is to say,

$$f(x)=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}a(k_n)\,e^{ik_nx}\,\Delta k\ \longrightarrow\ \frac{1}{2\pi}\int_{-\infty}^{\infty}a(k)\,e^{ikx}\,dk$$
in the limit $\Delta k\to 0$, or equivalently $L\to\infty$. We are of course assuming here that the function $a(k_n)$, which we have only defined (for a given $L$) on the set of points $k_n=2\pi n/L$, tends to a continuous function $a(k)$ in the limit $L\to\infty$.
It follows that in the infinite $L$ limit, we have the Fourier transform equations:

$$f(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}a(k)\,e^{ikx}\,dk,\qquad a(k)=\lim_{L\to\infty}La_n=\int_{-\infty}^{\infty}f(x)\,e^{-ikx}\,dx.$$
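The statement that $La_n$ tends to the continuum transform $a(k)$ can be illustrated numerically; in the sketch below the Gaussian test function and the value of $L$ are arbitrary choices made for illustration:

```python
import numpy as np

def trap(y, h):
    # composite trapezoid rule on a uniform grid of spacing h
    return h * (y.sum() - 0.5 * (y[0] + y[-1]))

# Check that L * a_n, evaluated at k_n = 2 pi n / L, matches the continuum
# transform a(k) = integral f(x) e^{-ikx} dx once L is large enough that the
# packet fits well inside the interval.  Test function: the Gaussian
# f(x) = e^{-x^2}, with exact transform a(k) = sqrt(pi) e^{-k^2/4}.
L = 40.0
x, h = np.linspace(-L / 2, L / 2, 400001, retstep=True)
f = np.exp(-x**2)

for n in (0, 3, 10):
    k_n = 2 * np.pi * n / L
    La_n = trap(f * np.exp(-1j * k_n * x), h)        # this is L * a_n
    exact = np.sqrt(np.pi) * np.exp(-k_n**2 / 4)
    assert abs(La_n - exact) < 1e-8
```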
Dirac’s Delta Function
Now we have taken both $N$ and $L$ to infinity, what has happened to our function $\delta_N(x)$? Remember that our procedure for finding $f_N(\theta)$ in terms of $f(\theta)$ gave the equation

$$f_N(\theta)=\int_{-\pi}^{\pi}\delta_N(\theta-\theta')\,f(\theta')\,d\theta'.$$
Following the same formal procedure with the ($L=\infty$) Fourier transforms, we are forced to take $N$ infinite (recall the procedure only made sense if $N$ was taken to infinity before $L$), so in place of an equation for $f_N(\theta)$ in terms of $f(\theta)$, we get an equation for $f(x)$ in terms of itself! Let's write it down first and think afterwards:

$$f(x)=\int_{-\infty}^{\infty}\delta(x-x')\,f(x')\,dx',$$

where

$$\delta(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{ikx}\,dk.$$
This is the Dirac delta function. This hand-waving approach has given a result which is not clearly defined. The integral over $k$ is linearly divergent at the origin $x=0$, and has finite oscillatory behavior everywhere else. To make any progress, we must provide some form of cutoff in $k$-space; then perhaps we can find a meaningful limit by placing the cutoff further and further away.
From our arguments above, we should be able to recover $\delta(x)$ as a limit of $\delta_N(x)$ by first taking $N$ to infinity, then $L$. That is to say,

$$\delta(x)=\lim_{L\to\infty}\ \lim_{N\to\infty}\ \delta_N(x)=\lim_{L\to\infty}\ \lim_{N\to\infty}\ \frac{1}{L}\,\frac{\sin\left(\left(2N+1\right)\pi x/L\right)}{\sin\left(\pi x/L\right)}.$$
A way to understand this limit is to write $M=\left(2N+1\right)\pi/L$ and let $M$ go to infinity before $L$. (This means as we take $L$ large on its way to infinity, we're taking $N$ far larger!)

So the numerator is just $\sin Mx$. In the limit of infinite $L$, for any finite $x$, the denominator is just $L\sin\left(\pi x/L\right)\to\pi x$, since $\sin\theta\approx\theta$ in the limit of small $\theta$.

From this,

$$\delta(x)=\lim_{M\to\infty}\ \frac{\sin Mx}{\pi x}.$$
This is still a rather pathological function, in that it is oscillating more and more quickly as the infinite $M$ limit is taken. This comes about from the abrupt cutoff in the sum at the frequency $N$, that is, in the $k$-integral at $\pm M$.
To see how this relates to the (also ill-defined) $\delta_N(x)$, recall $\delta_N(x)$ came from the series

$$\delta_N(x)=\frac{1}{2\pi}\left(1+2\sum_{n=1}^{N}\cos nx\right).$$

Expressing the cosines in terms of exponentials, then replacing the sum by an integral in the large $N$ limit, in the same way we did earlier, writing $k_n=n$, so that the interval between successive $k_n$'s is $\Delta k=1$, gives

$$\delta_N(x)=\frac{1}{2\pi}\sum_{n=-N}^{N}e^{inx}\approx\frac{1}{2\pi}\int_{-N}^{N}e^{ikx}\,dk=\frac{\sin Nx}{\pi x}.$$
So it is clear that we're defining $\delta(x)$ as a limit of the integral $\frac{1}{2\pi}\int_{-M}^{M}e^{ikx}\,dk$, which is abruptly cut off at the large values $\pm M$. In fact, this is not very physical: a much more realistic scenario for a real wave packet would be a gradual diminution in contributions from high frequency (or short wavelength) modes, that is to say, a gentle cutoff in the integral over $k$ that was used to replace the sum over $n$. For example, a reasonable cutoff procedure would be to multiply the integrand by $e^{-\Delta^2k^2/2}$, then take the limit of small $\Delta$.
Therefore a more reasonable definition of the delta function, from a physicist's point of view, would be

$$\delta(x)=\lim_{\Delta\to 0}\ \frac{1}{2\pi}\int_{-\infty}^{\infty}e^{ikx}\,e^{-\Delta^2k^2/2}\,dk=\lim_{\Delta\to 0}\ \frac{1}{\Delta\sqrt{2\pi}}\,e^{-x^2/2\Delta^2}.$$

That is to say, the delta function can be defined as the "narrow limit" of a Gaussian wave packet with total area 1. Unlike the function $\sin Mx/\pi x$, this $\delta(x)$ has no oscillating sidebands, thanks to our smoothing out of the upper $k$-space cutoff, so step discontinuities do not generate Gibbs' phenomenon overshoot; instead, a step will be smoothed out over a distance of order $\Delta$.
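The contrast with the sharp-cutoff kernel can be seen numerically: convolving this Gaussian representation with a unit step gives a smooth monotonic rise with no overshoot. (The width $\Delta=0.1$ and the grid are arbitrary choices for illustration.)

```python
import numpy as np

# The Gaussian representation delta_Delta(x) = e^{-x^2/2 Delta^2}/(Delta sqrt(2 pi)):
# convolving it with a unit step gives a smooth, monotonic rise over a distance
# of order Delta, with no Gibbs overshoot (unlike the sin(Mx)/(pi x) kernel).
Delta = 0.1
x, h = np.linspace(-5, 5, 20001, retstep=True)
dDelta = np.exp(-x**2 / (2 * Delta**2)) / (Delta * np.sqrt(2 * np.pi))
assert abs(dDelta.sum() * h - 1.0) < 1e-9         # unit area

step = (x > 0).astype(float)
smoothed = np.convolve(step, dDelta, mode='same') * h
inner = smoothed[np.abs(x) < 4]    # keep clear of the convolution's edge effects

assert inner.max() <= 1.0 + 1e-9               # no overshoot above the step height
assert np.all(np.diff(inner) >= -1e-12)        # monotonic rise through the step
```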
Properties of the Delta Function
It is straightforward to verify the following properties from the definition as a limit of a Gaussian wavepacket:

$$\delta(x)=0\ \ \text{for}\ x\ne 0,\qquad \int_{-\infty}^{\infty}\delta(x)\,dx=1,\qquad \delta(x)=\delta(-x),\qquad \delta(ax)=\frac{1}{|a|}\,\delta(x),$$

$$\int_{-\infty}^{\infty}f(x)\,\delta(x-a)\,dx=f(a).$$
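Two of these properties are spot-checked numerically below, using the Gaussian representation with a small width; the test function and the constants $a$ and $c$ are arbitrary choices:

```python
import numpy as np

# Check the sampling and scaling properties with a narrow Gaussian:
#   integral f(x) delta(x - a) dx = f(a),
#   integral f(x) delta(c x) dx   = f(0) / |c|.
Delta = 1e-3
x, h = np.linspace(-2, 2, 400001, retstep=True)
delta = lambda u: np.exp(-u**2 / (2 * Delta**2)) / (Delta * np.sqrt(2 * np.pi))
f = np.cos(x) + x**2                      # arbitrary smooth test function

a, c = 0.7, -3.0
val1 = (f * delta(x - a)).sum() * h
val2 = (f * delta(c * x)).sum() * h

assert abs(val1 - (np.cos(a) + a**2)) < 1e-4     # picks out f(a)
assert abs(val2 - 1.0 / abs(c)) < 1e-4           # f(0)/|c| with f(0) = 1
```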
Yet Another Definition, and a Connection with the Principal Value Integral
There is no unique way to define the delta function, and other cutoff procedures can give useful insights. For example, the $k$-space integral can be split into two halves and simple exponential cutoffs applied to them, that is, we could take the definition to be

$$\delta(x)=\lim_{\varepsilon\to 0^+}\ \frac{1}{2\pi}\left(\int_{-\infty}^{0}e^{ikx}\,e^{\varepsilon k}\,dk+\int_{0}^{\infty}e^{ikx}\,e^{-\varepsilon k}\,dk\right).$$

Evaluating the integrals,

$$\delta(x)=\lim_{\varepsilon\to 0^+}\ \frac{1}{2\pi}\left(\frac{1}{\varepsilon+ix}+\frac{1}{\varepsilon-ix}\right)=\lim_{\varepsilon\to 0^+}\ \frac{1}{\pi}\,\frac{\varepsilon}{x^2+\varepsilon^2}.$$
It is easy to check that this function is correctly normalized, by making the change of variable $x=\varepsilon\tan\theta$ and integrating from $-\pi/2$ to $\pi/2$. This representation of the delta function will prove to be useful later. Note that regarded as a function of a complex variable $x$, the delta function $\varepsilon/\pi\left(x^2+\varepsilon^2\right)$ has two poles on the pure imaginary axis, at $x=\pm i\varepsilon$.
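A quick numerical check of the normalization, and of the fact that this Lorentzian picks out $f(0)$: the test function $\cos x$ is a convenient (arbitrary) choice because the exact smeared value is known to be $e^{-\varepsilon}$.

```python
import numpy as np

# The Lorentzian representation delta_eps(x) = eps / (pi (x^2 + eps^2)):
# check unit area, and that integrating it against cos(x) gives e^{-eps},
# which tends to cos(0) = 1 as eps -> 0.
eps = 0.05
x, h = np.linspace(-1000, 1000, 1_000_001, retstep=True)
lorentz = eps / (np.pi * (x**2 + eps**2))

area = lorentz.sum() * h
val = (np.cos(x) * lorentz).sum() * h

assert abs(area - 1.0) < 1e-4
assert abs(val - np.exp(-eps)) < 1e-3
```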
The standard definition of the principal value integral is:

$$P\int_{-a}^{a}\frac{f(x)}{x}\,dx=\lim_{\varepsilon\to 0^+}\left(\int_{-a}^{-\varepsilon}\frac{f(x)}{x}\,dx+\int_{\varepsilon}^{a}\frac{f(x)}{x}\,dx\right);$$

this is equivalent to

$$P\int_{-a}^{a}\frac{f(x)}{x}\,dx=\lim_{\varepsilon\to 0^+}\int_{-a}^{a}\frac{x\,f(x)}{x^2+\varepsilon^2}\,dx.$$
Putting this together with the similar representation of the delta function above, and taking the limit $\varepsilon\to 0^+$ to be understood, we have the useful result:

$$\frac{1}{x\mp i\varepsilon}=P\,\frac{1}{x}\pm i\pi\,\delta(x).$$
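This result can be spot-checked numerically; in the sketch below the Gaussian test function and the value of $\varepsilon$ are arbitrary choices. The real part plays the role of the principal value integral (zero here by symmetry), and the imaginary part should approach $\pi f(0)$:

```python
import numpy as np

# Check 1/(x - i eps) = P(1/x) + i pi delta(x) on the even test function
# f(x) = e^{-x^2}: the principal value part vanishes by symmetry, and the
# imaginary part of the integral should approach pi * f(0) = pi.
eps = 1e-3
x, h = np.linspace(-50, 50, 2_000_001, retstep=True)
f = np.exp(-x**2)

integral = (f / (x - 1j * eps)).sum() * h

assert abs(integral.real) < 1e-6            # P-integral of an even f against 1/x
assert abs(integral.imag - np.pi) < 0.01
```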
Important Exercises:
1. Prove Parseval's Theorem:

$$\int_{-\infty}^{\infty}\left|f(x)\right|^2dx=\frac{1}{2\pi}\int_{-\infty}^{\infty}\left|a(k)\right|^2dk.$$
2. Prove the rule for the Fourier transform of a convolution of two functions:

$$h(x)=\int_{-\infty}^{\infty}f(x')\,g(x-x')\,dx'\quad\Longrightarrow\quad c(k)=a(k)\,b(k),$$

where $a(k)$, $b(k)$, $c(k)$ are the Fourier transforms (in the convention above) of $f(x)$, $g(x)$, $h(x)$ respectively.
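On a periodic grid, the discrete Fourier transform obeys the same rule, which gives a quick numerical check (the random test sequences and grid size are arbitrary choices):

```python
import numpy as np

# Check the convolution rule in discrete form: the DFT of a circular
# convolution equals the product of the DFTs.
rng = np.random.default_rng(0)
f = rng.standard_normal(256)
g = rng.standard_normal(256)

# Direct circular convolution h[m] = sum_n f[n] * g[(m - n) mod 256]
h = np.array([sum(f[n] * g[(m - n) % 256] for n in range(256))
              for m in range(256)])

assert np.allclose(np.fft.fft(h), np.fft.fft(f) * np.fft.fft(g))
```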