# Fourier Series, Fourier Transforms and the Delta Function

Michael Fowler, UVa.

### Introduction

We begin with a brief review of Fourier series. Any periodic function of interest in physics can be expressed as a series in sines and cosines: we have already seen that the quantum wave function of a particle in a box is precisely of this form. The important question in practice is, for an arbitrary wave function, how good an approximation is given if we stop summing the series after $N$ terms. We establish here that the sum after $N$ terms, $f_N(\theta)$, can be written as a convolution of the original function with the function

$\delta_N(x)=\frac{1}{2\pi}\,\frac{\sin\left(N+\frac{1}{2}\right)x}{\sin\frac{1}{2}x},$

that is,

$f_N(\theta)=\int_{-\pi}^{\pi}\delta_N(\theta-\theta')\,f(\theta')\,d\theta'.$

The structure of the function $\delta_N(x)$ (plotted below), when put together with the function $f(\theta)$, gives a good intuitive guide to how good an approximation the sum over $N$ terms is going to be for a given function $f(\theta)$. In particular, it turns out that step discontinuities are never handled perfectly, no matter how many terms are included. Fortunately, true step discontinuities never occur in physics, but this is a warning that it is of course necessary to sum up to some $N$ where the sines and cosines oscillate substantially more rapidly than any sudden change in the function being represented.

We go on to the Fourier transform, in which a function on the infinite line is expressed as an integral over a continuum of sines and cosines (or equivalently exponentials $e^{ikx}$). It turns out that arguments analogous to those that led to $\delta_N(x)$ now give a function $\delta(x)$ such that

$f(x)=\int_{-\infty}^{\infty}\delta(x-x')\,f(x')\,dx'.$

Confronted with this, one might well wonder what is the point of a function $\delta(x)$ which on convolution with $f(x)$ gives back the same function $f(x)$. The relevance of $\delta(x)$ will become evident later in the course, when states of a quantum particle are represented by wave functions on the infinite line, like $f(x)$, and operations on them involve integral operators similar to the convolution above. Working with operations on these functions is the continuum generalization of matrices acting on vectors in a finite-dimensional space, and $\delta(x)$ is the infinite-dimensional representation of the unit matrix. Just as in matrix algebra the eigenstates of the unit matrix are a set of vectors that span the space, and the unit matrix elements determine the set of dot products of these basis vectors, the delta function determines the generalized inner product of a continuum basis of states. It plays an essential role in the standard formalism for continuum states, and you need to be familiar with it!

### Fourier Series

Any reasonably smooth real function $f(\theta)$ defined in the interval $-\pi<\theta\le\pi$ can be expanded in a Fourier series,

$f(\theta)=\frac{A_0}{2}+\sum_{n=1}^{\infty}\left(A_n\cos n\theta+B_n\sin n\theta\right)$

where the coefficients can be found using the orthogonality condition,

$\int_{-\pi}^{\pi}\cos m\theta\,\cos n\theta\,d\theta=\pi\,\delta_{m,n}$

and the same condition for the sines, to give:

$A_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\cos n\theta\,d\theta,\qquad B_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\sin n\theta\,d\theta.$

Note that for an even function only the $A_n$ are nonzero; for an odd function only the $B_n$ are nonzero.
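As a concrete check, the coefficient formulas are easy to evaluate numerically. The following sketch (a minimal illustration with made-up helper names, using uniform-grid Riemann sums in place of the integrals) computes $A_n$ and $B_n$ for the even function $f(\theta)=\theta^2$, whose exact coefficients are $A_0=2\pi^2/3$, $A_n=4(-1)^n/n^2$, and all $B_n=0$:

```python
import numpy as np

def fourier_coefficients(f, n_max, samples=200000):
    """A_n, B_n on (-pi, pi): the integral (1/pi) * int f(t) cos(n t) dt is
    approximated by 2 * mean over a uniform full-period grid."""
    theta = np.linspace(-np.pi, np.pi, samples, endpoint=False)
    y = f(theta)
    A = np.array([2 * np.mean(y * np.cos(n * theta)) for n in range(n_max + 1)])
    B = np.array([2 * np.mean(y * np.sin(n * theta)) for n in range(n_max + 1)])
    return A, B

def partial_sum(A, B, theta):
    """Evaluate A_0/2 + sum_{n>=1} (A_n cos n theta + B_n sin n theta)."""
    total = np.full_like(theta, A[0] / 2)
    for n in range(1, len(A)):
        total += A[n] * np.cos(n * theta) + B[n] * np.sin(n * theta)
    return total

# The even function theta^2: exact coefficients are A_0 = 2 pi^2 / 3,
# A_n = 4 (-1)^n / n^2, and every B_n = 0.
A, B = fourier_coefficients(lambda t: t**2, n_max=8)
```

As claimed above, the computed $B_n$ for this even function come out at machine-precision zero.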

### How Smooth is “Reasonably Smooth”?

The number of terms of the series necessary to give a good approximation to a function depends on how rapidly the function changes. To get an idea of what goes wrong when a function is not “smooth”, it is instructive to find the Fourier sine series for the step function

$f(\theta)=\begin{cases}+1, & 0<\theta<\pi,\\ -1, & -\pi<\theta<0.\end{cases}$

Using the expression for $B n$ above it is easy to find:

$f(\theta)=\frac{4}{\pi}\left(\sin\theta+\frac{\sin 3\theta}{3}+\frac{\sin 5\theta}{5}+\dots\right).$

Taking the first half dozen terms in the series already gives a reasonable approximation to the step away from the discontinuity.

As we include more and more terms, the function becomes smoother but, surprisingly, the initial overshoot at the step stays at a finite fraction of the step height.  However, the function recovers more and more rapidly, that is to say, the overshoot and “ringing” at the step take up less and less space.  This overshoot is called Gibbs’ phenomenon, and only occurs in functions with discontinuities.
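The persistence of the overshoot is easy to verify numerically. This sketch (illustrative only) sums the series above for different numbers of terms and measures the maximum of the partial sum just to the right of the step; the peak sits roughly 18% above the step height whether 20 or 200 terms are kept:

```python
import numpy as np

def step_partial_sum(theta, terms):
    """(4/pi)(sin t + sin 3t/3 + sin 5t/5 + ...), keeping `terms` odd harmonics."""
    s = np.zeros_like(theta)
    for m in range(terms):
        n = 2 * m + 1
        s += np.sin(n * theta) / n
    return 4 / np.pi * s

# Sample densely just to the right of the step at theta = 0 and record the peak.
theta = np.linspace(1e-5, np.pi / 2, 400000)
peak_20 = step_partial_sum(theta, 20).max()
peak_200 = step_partial_sum(theta, 200).max()
# Both peaks sit about 18% above the step height 1; adding terms narrows the
# overshoot region but does not lower it.
```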

### How the Sum over N Terms is Related to the Complete Function

To get a clearer idea of how a Fourier series converges to the function it represents, it is useful to stop the series at $N$ terms and examine how that sum, which we denote $f_N(\theta)$, tends towards $f(\theta)$.

So, substituting the values of the coefficients

$A_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\cos n\theta\,d\theta,\qquad B_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\sin n\theta\,d\theta$

in the series

$f(\theta)=\frac{A_0}{2}+\sum_{n=1}^{\infty}\left(A_n\cos n\theta+B_n\sin n\theta\right)$

gives

$f_N(\theta)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta')\,d\theta'+\frac{1}{\pi}\sum_{n=1}^{N}\int_{-\pi}^{\pi}\left(\cos n\theta\cos n\theta'+\sin n\theta\sin n\theta'\right)f(\theta')\,d\theta'=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta')\,d\theta'+\frac{1}{\pi}\sum_{n=1}^{N}\int_{-\pi}^{\pi}\cos n(\theta-\theta')\,f(\theta')\,d\theta'.$

We can now use the trigonometric identity

$\sum_{n=1}^{N}\cos nx=\frac{\sin\left(N+\frac{1}{2}\right)x}{2\sin\frac{1}{2}x}-\frac{1}{2}$

to find

$f_N(\theta)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{\sin\left(N+\frac{1}{2}\right)(\theta-\theta')}{\sin\frac{1}{2}(\theta-\theta')}\,f(\theta')\,d\theta'=\int_{-\pi}^{\pi}\delta_N(\theta-\theta')\,f(\theta')\,d\theta'$

where

$\delta_N(x)=\frac{1}{2\pi}\,\frac{\sin\left(N+\frac{1}{2}\right)x}{\sin\frac{1}{2}x}.$

(Note that proving the trigonometric identity is straightforward: write $z=e^{ix}$, so $\cos nx=\frac{1}{2}\left(z^n+z^{-n}\right)$, and sum the two geometric progressions.)

Going backwards for a moment and writing

$\delta_N(x)=\frac{1}{\pi}\left(\sum_{n=1}^{N}\cos nx+\frac{1}{2}\right)$

it is easy to check that

$\int_{-\pi}^{\pi}\delta_N(x)\,dx=1.$

To help visualize $\delta_N(\theta)$, here is the case $N=20$:


We have just established that the total area under the curve $=1$, and it is clear from the diagram that almost all this area is under the central peak, since the areas away from the center are almost equally positive and negative. The width of the central peak is $\pi/\left(N+\frac{1}{2}\right)$, its height $\left(N+\frac{1}{2}\right)/\pi.$

Exercise: for large $N$, approximately how far down does it dip on the first oscillation? ($N/\pi^2$.)
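A short numerical check (the helper name `delta_N` is ours) confirms the properties just quoted: the area under $\delta_N$ over $(-\pi,\pi)$ is 1, the peak height is $\left(N+\frac{1}{2}\right)/\pi$, and convolving $\delta_N$ with a slowly varying function such as $\cos\theta$ reproduces it essentially exactly:

```python
import numpy as np

def delta_N(x, N):
    """delta_N(x) = sin((N + 1/2)x) / (2 pi sin(x/2)), with the x -> 0 limit."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.full_like(x, (N + 0.5) / np.pi)          # limiting value at x = 0
    nz = np.abs(np.sin(x / 2)) > 1e-12
    out[nz] = np.sin((N + 0.5) * x[nz]) / (2 * np.pi * np.sin(x[nz] / 2))
    return out

N = 20
x = np.linspace(-np.pi, np.pi, 200000, endpoint=False)

area = np.mean(delta_N(x, N)) * 2 * np.pi      # total area over (-pi, pi)
height = delta_N(0.0, N)[0]                    # central peak height

# Convolving delta_N with a low harmonic such as cos(theta) reproduces it
# exactly, since cos(theta) is a trig polynomial of degree <= N.
theta0 = 0.7
conv = np.mean(delta_N(theta0 - x, N) * np.cos(x)) * 2 * np.pi
```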

For functions varying slowly compared with the oscillations, the convolution integral

$f_N(\theta)=\int_{-\pi}^{\pi}\delta_N(\theta-\theta')\,f(\theta')\,d\theta'$

will give $f_N(\theta)$ close to $f(\theta)$, and for these functions $f_N(\theta)$ will tend to $f(\theta)$ as $N$ increases.

It is also clear why convolving this curve with a step function gives an overshoot and oscillations. Suppose the function $f(\theta)$ is a step, jumping from 0 to 1 at $\theta=0.$ From the convolution form of the integral, you should be able to convince yourself that the value of $f_N(\theta)$ at a point $\theta$ is the total area under the curve $\delta_N(\theta)$ to the left of that point (area below zero, that is, below the $x$-axis, of course counting negative). For $\theta=0,$ this must be exactly 0.5 (since all the area under $\delta_N(\theta)$ adds to 1). But if we want the value of $f_N(\theta)$ at $\theta=\pi/\left(N+\frac{1}{2}\right)$ (that is, the first point to the right of the origin where the curve cuts through the $x$-axis), we must add all the area to the left of $\theta=\pi/\left(N+\frac{1}{2}\right)$, which actually adds up to a total area greater than one, since the leftover area to the right of that point is overall negative. That gives the overshoot.

### A Fourier Series in Quantum Mechanics: Electron in a Box

The time-independent Schrödinger wave functions for an electron in a box (here a one-dimensional square well with infinite walls) are just the sine and cosine series determined by the boundary conditions. Therefore, any reasonably smooth initial wavefunction describing the electron can be represented as a Fourier series.  The time development can then be found by multiplying each term in the series by the appropriate time-dependent phase factor.

Important Exercise: prove that for a function $f(\theta)=\sum_{n=-\infty}^{\infty}a_n e^{in\theta}$, with the $a_n$ in general complex,

$\frac{1}{2\pi}\int_{-\pi}^{\pi}\left|f(\theta)\right|^2d\theta=\sum_{n=-\infty}^{\infty}\left|a_n\right|^2.$

(The physical relevance of this result is as follows: for an electron confined to the circumference of a ring of unit radius, $\theta$ is the position of the electron. An orthonormal basis of states of the electron on this ring is the set of functions $(1/\sqrt{2\pi})e^{in\theta}$ with $n$ an integer, and a correctly normalized superposition of these states must have $\sum_{n=-\infty}^{\infty}|a_n|^2=1$, so that the total probability of finding the electron in some state is unity. But this must also mean that the total probability of finding the electron anywhere on the ring is unity -- and that's the left-hand side of the above equation -- the factors of $2\pi$ cancel.)
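The identity in the exercise can be spot-checked numerically for any finite set of coefficients; here is a sketch with an arbitrarily chosen (hypothetical) handful of complex $a_n$:

```python
import numpy as np

# An arbitrarily chosen finite set of complex coefficients a_n (all others zero).
coeffs = {-2: 0.3, -1: 0.5j, 0: 1.0, 3: 0.25 - 0.5j}

theta = np.linspace(-np.pi, np.pi, 100000, endpoint=False)
f = sum(a * np.exp(1j * n * theta) for n, a in coeffs.items())

lhs = np.mean(np.abs(f)**2)                     # (1/2pi) * integral of |f|^2
rhs = sum(abs(a)**2 for a in coeffs.values())   # sum of |a_n|^2
```

On a uniform full-period grid the cross terms $e^{i(n-m)\theta}$ average to zero exactly, so the two sides agree to machine precision.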

### Exponential Fourier Series

In the previous lecture, we discussed briefly how a Gaussian wave packet in $x$-space could be represented as a continuous linear superposition of plane waves that turned out to be another Gaussian wave packet, this time in $k$-space. The plan here is to demonstrate how we can arrive at that representation by carefully taking the limit of the well-defined Fourier series, going from the finite interval $(-\pi,\pi)$ to the whole line, and to outline some of the mathematical problems that arise, and how to handle them.

The first step is a trivial one: we need to generalize from real functions to complex functions, to include wave functions having nonvanishing current. A smooth complex function can be written in a Fourier series simply by allowing $A_n$ and $B_n$ to be complex, but in this case a more natural expansion would be in powers of $e^{i\theta}$, $e^{-i\theta}.$

We write:

$f(\theta)=\sum_{n=-\infty}^{\infty}a_n e^{in\theta},\qquad a_n=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta)\,e^{-in\theta}\,d\theta,$

and retracing the above steps

$f_N(\theta)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\sum_{n=-N}^{N}e^{in(\theta-\theta')}f(\theta')\,d\theta'=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta')\,d\theta'+\frac{1}{\pi}\int_{-\pi}^{\pi}\sum_{n=1}^{N}\cos n(\theta-\theta')\,f(\theta')\,d\theta',$

exactly the same expression as before, therefore giving the same $\delta_N(\theta)$. This isn't surprising, because using $\cos n\theta=\frac{1}{2}\left(e^{in\theta}+e^{-in\theta}\right)$, $\sin n\theta=\frac{1}{2i}\left(e^{in\theta}-e^{-in\theta}\right)$, the first $N$ terms in $A_n$, $B_n$ can simply be rearranged into a sum over $e^{in\theta}$ terms for $-N\le n\le N$.

### Electron out of the Box: the Fourier Transform

To break down a wave packet into its plane wave components, we need to extend the range of integration from the $(-\pi,\pi)$ used above to $(-\infty,\infty)$. We do this by first rescaling from $(-\pi,\pi)$ to $(-L/2,L/2)$ and then taking the limit $L\to\infty.$

Scaling the interval from $2\pi$ to $L$ (in the complex representation) gives:

$f(x)=\sum_n a_n e^{2\pi inx/L},\qquad a_n=\frac{1}{L}\int_{-L/2}^{L/2}f(x)\,e^{-2\pi inx/L}\,dx,$

the sum in $n$ being over all integers. This is an expression for $f(x)$ in terms of plane waves $e^{ikx}$ where the allowed $k$'s are $2\pi n/L$, with $n=0,\pm1,\pm2,\dots$

Retracing the steps above in the derivation of the function $δ N ( x ),$ we find the equivalent function to be

$\delta_N^L(x)=\frac{1}{L}\left(1+2\sum_{n=1}^{N}\cos\frac{2\pi nx}{L}\right)=\frac{\sin\left(\left(2N+1\right)\pi x/L\right)}{L\sin\left(\pi x/L\right)}.$

Studying the expression on the right, it is evident that provided $N$ is much greater than $L$, this has the same peaked-at-the-origin behavior as the $\delta_N(x)$ we considered earlier. But we are interested in the limit $L\to\infty$, and there, for fixed $N$, this function $\delta_N^L(x)$ is low and flat.

Therefore, we must take the limit $N\to\infty$ before taking $L\to\infty.$

This is what we do in the rest of this section.

Provided $L$ is finite, we still have a Fourier series, representing a function of period $L.$ Our main interest in taking $L$ infinite is that we would like to represent a nonperiodic function, for example a localized wave packet, in terms of plane-wave components.

Suppose we have such a wave packet, say of length $L_1$, by which we mean the wave is exactly zero outside a stretch of the axis of length $L_1$. Why not just express it in terms of an infinite $N$ Fourier series based on some large interval $(-L/2,L/2)$, provided the wave packet length $L_1$ is completely inside this interval? The point is that such an analysis would indeed accurately reproduce the wave packet inside the interval, but the same sum of plane waves evaluated over all the $x$-axis would reveal an infinite string of identical wave packets a distance $L$ apart! This is not what we want.

As a preliminary to taking $L$ to infinity, let us write the exponential plane wave terms in the standard $k$-notation,

$e^{2\pi inx/L}=e^{ik_nx}.$

So we are summing over an (infinite $N$) set of plane waves having wave number values

$k_n=2\pi n/L,\quad n=0,\pm1,\pm2,\dots,$

a set of equally spaced $k$'s with separation $\Delta k=2\pi/L.$

Consider now what would happen if we double the basic interval from $(-L/2,L/2)$ to $(-L,L).$

The new allowed $k$ values are $k_n=\pi n/L,\ n=0,\pm1,\pm2,\dots$, so the separation is now $\Delta k=\pi/L$, half of what it was before. It is evident that as we increase $L$, the spacing between successive $k_n$ values gets less and less.

Going back to the interval of length $L$, writing $La_n=a(k_n)$, $k_n=2\pi n/L$, we have

$f(x)=\sum_{n=-\infty}^{\infty}a_n e^{2\pi inx/L}=\frac{1}{L}\sum_{n=-\infty}^{\infty}a(k_n)\,e^{ik_nx}.$

Recall that the Riemann integral can be defined by

$\int f(k)\,dk=\lim_{\Delta k\to 0}\sum f(k_n)\,\Delta k$

with $k_n=n\Delta k,\ n=0,\pm1,\pm2,\dots$

The expression on the right-hand side of the equation for $f(x)$ has the same form as the right-hand side of the Riemann integral definition, and here $\Delta k=2\pi/L.$

That is to say,

$f(x)=\frac{1}{L}\sum_{n=-\infty}^{\infty}a(k_n)\,e^{ik_nx}=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}a(k_n)\,e^{ik_nx}\,\Delta k\to\frac{1}{2\pi}\int_{-\infty}^{\infty}a(k)\,e^{ikx}\,dk$

in the limit $\Delta k\to 0$, or equivalently $L\to\infty.$ We are of course assuming here that the function $a(k_n)$, which we have only defined (for a given $L$) on the set of points $k_n$, tends to a continuous function $a(k)$ in the limit $L\to\infty$.

It follows that in the infinite $L$ limit, we have the Fourier transform equations:

$f(x)=\frac{1}{2\pi}\int_{-\infty}^{\infty}a(k)\,e^{ikx}\,dk,\qquad a(k)=\int_{-\infty}^{\infty}f(x)\,e^{-ikx}\,dx.$
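With the conventions used here ($a(k)=\int f(x)e^{-ikx}dx$, and a factor $1/2\pi$ in the inverse), the transform pair can be checked numerically. The sketch below uses the Gaussian $f(x)=e^{-x^2}$, whose transform is the standard result $a(k)=\sqrt{\pi}\,e^{-k^2/4}$:

```python
import numpy as np

# Conventions in this lecture:
#   a(k) = int f(x) e^{-ikx} dx,    f(x) = (1/2pi) int a(k) e^{ikx} dk.
# Test function: f(x) = exp(-x^2), whose transform is sqrt(pi) exp(-k^2/4).

x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
f = np.exp(-x**2)

k = np.linspace(-12, 12, 801)
dk = k[1] - k[0]

# forward transform by direct Riemann sum
a = np.array([np.sum(f * np.exp(-1j * kk * x)) * dx for kk in k])
err_forward = np.max(np.abs(a - np.sqrt(np.pi) * np.exp(-k**2 / 4)))

# inverse transform recovers f at a sample point
x0 = 0.5
f_back = np.sum(a * np.exp(1j * k * x0)) * dk / (2 * np.pi)
```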

### Dirac’s Delta Function

Now we have taken both $N$ and $L$ to infinity, what has happened to our function $\delta_N^L(x)$? Remember that our procedure for finding $f_N(\theta)$ in terms of $f(\theta)$ gave the equation

$f_N(\theta)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta')\,d\theta'+\frac{1}{\pi}\int_{-\pi}^{\pi}\sum_{n=1}^{N}\cos n(\theta-\theta')\,f(\theta')\,d\theta'.$

Following the same formal procedure with the ($L=\infty$) Fourier transforms, we are forced to take $N$ infinite (recall the procedure only made sense if $N$ was taken to infinity before $L$), so in place of an equation for $f_N(\theta)$ in terms of $f(\theta)$ we get an equation for $f(x)$ in terms of itself! Let's write it down first and think afterwards:

$f(x)=\int_{-\infty}^{\infty}\delta(x-x')\,f(x')\,dx',$

where

$\delta(x)=\int_{-\infty}^{+\infty}\frac{dk}{2\pi}\,e^{ikx}.$

This is the Dirac delta function. This hand-waving approach has given a result which is not clearly defined: the integral over $k$ is linearly divergent at $x=0$, and has finite oscillatory behavior everywhere else. To make any progress, we must provide some form of cutoff in $k$-space; then perhaps we can find a meaningful limit by placing the cutoff further and further away.

From our arguments above, we should be able to recover $\delta(x)$ as a limit of $\delta_N^L(x)$ by first taking $N$ to infinity, then $L.$ That is to say,

$\delta(x)=\lim_{L\to\infty}\left(\lim_{N\to\infty}\delta_N^L(x)\right)=\lim_{L\to\infty}\left(\lim_{N\to\infty}\frac{\sin\left(\left(2N+1\right)\pi x/L\right)}{L\sin\left(\pi x/L\right)}\right).$

A way to understand this limit is to write $M=(2N+1)\pi/L$ and let $M$ go to infinity before $L.$ (This means as we take $L$ large on its way to infinity, we're taking $N$ far larger!)

So the numerator is just $\sin Mx.$ In the limit of infinite $L$, for any finite $x$ the denominator is just $\pi x$, since $\sin\theta\to\theta$ in the limit of small $\theta.$

From this,

$\delta(x)=\lim_{M\to\infty}\frac{\sin Mx}{\pi x}.$

This is still a rather pathological function, in that it is oscillating more and more quickly as the infinite limit is taken. This comes about from the abrupt cutoff in the sum at the frequency $N.$
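Despite the pathological oscillations, $\sin Mx/\pi x$ already acts like a delta function under an integral against a smooth function. A quick sketch (using `np.sinc`, with a Gaussian as the arbitrarily chosen test function):

```python
import numpy as np

def delta_sharp(x, M):
    """sin(Mx)/(pi x), via np.sinc (note np.sinc(t) = sin(pi t)/(pi t))."""
    return (M / np.pi) * np.sinc(M * x / np.pi)

# Integrate against a smooth (Gaussian) test function: as M grows, the
# result tends to the value of the function at the origin, f(0) = 1.
x = np.linspace(-10, 10, 2000001)
dx = x[1] - x[0]
f = np.exp(-x**2)
vals = [np.sum(delta_sharp(x, M) * f) * dx for M in (5, 50, 500)]
```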

To see how this relates to the (also ill-defined) $\delta(x)=\int_{-\infty}^{+\infty}(dk/2\pi)\,e^{ikx}$, recall that $\delta_N^L(x)$ came from the series

$\delta_N^L(x)=\frac{\sin\left(\left(2N+1\right)\pi x/L\right)}{L\sin\left(\pi x/L\right)}=\frac{1}{L}\left(1+2\sum_{n=1}^{N}\cos\frac{2\pi nx}{L}\right).$

Expressing the cosine in terms of exponentials, then replacing the sum by an integral in the large $N$ limit, in the same way we did earlier, we write $k_n=2\pi n/L$, so that the interval between successive $k_n$'s is $\Delta k=2\pi/L$, and $\int f(k)\,dk\cong(2\pi/L)\sum f(k_n)$:

$\delta_N^L(x)=\frac{1}{L}\sum_{n=-N}^{N}e^{2\pi inx/L}\cong\int_{-2\pi N/L}^{2\pi N/L}\frac{dk}{2\pi}\,e^{ikx}=\frac{\sin\left(2\pi Nx/L\right)}{\pi x}.$

So it is clear that we're defining $\delta(x)$ as a limit of the integral $\int_{-2\pi N/L}^{2\pi N/L}(dk/2\pi)\,e^{ikx}$, which is abruptly cut off at the large values $\pm 2\pi N/L.$ In fact, this is not very physical: a much more realistic scenario for a real wave packet would be a gradual diminution in contributions from high frequency (or short wavelength) modes, that is to say, a gentle cutoff in the integral over $k$ that was used to replace the sum over $n.$ For example, a reasonable cutoff procedure would be to multiply the integrand by $e^{-\Delta^2k^2}$, then take the limit of small $\Delta$.

Therefore a more reasonable definition of the delta function, from a physicist's point of view, would be

$\delta(x)=\lim_{\Delta\to 0}\delta_\Delta(x)=\lim_{\Delta\to 0}\int_{-\infty}^{+\infty}\frac{dk}{2\pi}\,e^{ikx}e^{-\Delta^2k^2}=\lim_{\Delta\to 0}\frac{1}{2\Delta\sqrt{\pi}}\,e^{-x^2/4\Delta^2}.$

That is to say, the delta function can be defined as the “narrow limit” of a Gaussian wave packet with total area 1. Unlike the function $\delta_N(\theta)$, $\delta_\Delta(x)$ has no oscillating sidebands, thanks to our smoothing out of the upper $k$-space cutoff, so step discontinuities do not generate Gibbs' phenomenon overshoot -- instead, a step will be smoothed out over a distance of order $\Delta$.
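A sketch of this Gaussian representation (the helper name `delta_gauss` is ours): the area is 1 for every width $\Delta$, and as $\Delta$ shrinks the kernel sifts out the value of a smooth function at a point:

```python
import numpy as np

def delta_gauss(x, Delta):
    """The Gaussian-regularized delta: exp(-x^2 / 4 Delta^2) / (2 Delta sqrt(pi))."""
    return np.exp(-x**2 / (4 * Delta**2)) / (2 * Delta * np.sqrt(np.pi))

x = np.linspace(-5, 5, 400001)
dx = x[1] - x[0]

# The total area is 1 for every width Delta ...
areas = [np.sum(delta_gauss(x, D)) * dx for D in (0.5, 0.1, 0.02)]

# ... and as Delta -> 0 the kernel sifts out the value of a smooth
# function at a point: here, cos(x) at x = 1.
sift = np.sum(delta_gauss(x - 1.0, 0.02) * np.cos(x)) * dx
```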

### Properties of the Delta Function

It is straightforward to verify the following properties from the definition as a limit of a Gaussian wavepacket:

$\int\delta(a-x)\,\delta(x-b)\,dx=\delta(a-b).$

### Yet Another Definition, and a Connection with the Principal Value Integral

There is no unique way to define the delta function, and other cutoff procedures can give useful insights. For example, the $k$-space integral can be split into two and simple exponential cutoffs applied to the two halves; that is, we could take the definition to be

$\delta(x)=\lim_{\varepsilon\to 0}\left(\int_{-\infty}^{0}\frac{dk}{2\pi}\,e^{ikx}e^{\varepsilon k}+\int_{0}^{+\infty}\frac{dk}{2\pi}\,e^{ikx}e^{-\varepsilon k}\right).$

Evaluating the integrals,

$\delta(x)=\lim_{\varepsilon\to 0}\frac{1}{2\pi}\left(\frac{1}{ix+\varepsilon}-\frac{1}{ix-\varepsilon}\right)=\lim_{\varepsilon\to 0}\frac{1}{\pi}\,\frac{\varepsilon}{x^2+\varepsilon^2}.$

It is easy to check that this function is correctly normalized by making the change of variable $x=\varepsilon\tan\theta$ and integrating from $-\pi/2$ to $\pi/2.$ This representation of the delta function will prove to be useful later. Note that regarded as a function of a complex variable, the delta function has two poles on the pure imaginary axis, at $z=\pm i\varepsilon.$
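The normalization and sifting property of this Lorentzian representation can also be confirmed numerically (a minimal sketch; the Gaussian test function is an arbitrary choice):

```python
import numpy as np

def delta_lorentz(x, eps):
    """The Lorentzian representation (1/pi) eps / (x^2 + eps^2)."""
    return eps / (np.pi * (x**2 + eps**2))

eps = 0.01
x = np.linspace(-100, 100, 2000001)
dx = x[1] - x[0]

area = np.sum(delta_lorentz(x, eps)) * dx                  # -> 1 as eps -> 0
sift = np.sum(delta_lorentz(x, eps) * np.exp(-x**2)) * dx  # -> f(0) = 1
```

Note the slowly decaying $1/x^2$ tails: the approach to the limit is only of order $\varepsilon$, much slower than for the Gaussian cutoff.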

The standard definition of the principal value integral is:

$\int_{-D}^{D}f(x)\,\frac{P}{x}\,dx=\lim_{\varepsilon\to 0}\left(\int_{-D}^{-\varepsilon}\frac{f(x)}{x}\,dx+\int_{\varepsilon}^{D}\frac{f(x)}{x}\,dx\right).$

This is equivalent to

$\int_{-D}^{D}f(x)\,\frac{P}{x}\,dx=\lim_{\varepsilon\to 0}\int_{-D}^{D}f(x)\,\frac{x}{x^2+\varepsilon^2}\,dx.$

Putting this together with the similar representation of the delta function above, and taking the limit $\varepsilon\to 0$ to be understood, we have the useful result:

$\frac{1}{x\pm i\varepsilon}=\frac{P}{x}\mp i\pi\delta(x).$
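This identity can be tested by integrating a smooth function against $1/(x-i\varepsilon)$. For the even function $f(x)=e^{-x^2}$ (an arbitrary choice) the principal-value part vanishes, since $f(x)/x$ is odd, so the integral should approach $i\pi f(0)=i\pi$:

```python
import numpy as np

# Integrate the even test function f(x) = exp(-x^2) against 1/(x - i eps).
# The principal-value piece vanishes (f(x)/x is odd), so the result should
# approach  i pi f(0) = i pi  as eps -> 0.
eps = 1e-3
x = np.linspace(-10, 10, 2000001)
dx = x[1] - x[0]
f = np.exp(-x**2)
integral = np.sum(f / (x - 1j * eps)) * dx
```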

Important Exercises:

1. Prove Parseval’s Theorem:

$\int_{-\infty}^{\infty}\left|f(x)\right|^2dx=\frac{1}{2\pi}\int_{-\infty}^{\infty}\left|a(k)\right|^2dk.$

2. Prove the rule for the Fourier Transform of a convolution of two functions: if $h(x)=\int_{-\infty}^{\infty}f_1(x')\,f_2(x-x')\,dx'$, then the transforms defined as above satisfy $a_h(k)=a_{f_1}(k)\,a_{f_2}(k).$