
9. The Simple Harmonic Oscillator

Michael Fowler 

Einstein’s Solution of the Specific Heat Puzzle

The simple harmonic oscillator, a nonrelativistic particle in a potential $\frac{1}{2}kx^2$, is an excellent model for a wide range of systems in nature.  In fact, not long after Planck's discovery that the black body radiation spectrum could be explained by assuming energy to be exchanged in quanta, Einstein applied the same principle to the simple harmonic oscillator, thereby solving a long-standing puzzle in solid state physics: the mysterious drop in specific heat of all solids at low temperatures. Classical thermodynamics, a very successful theory in many ways, predicted no such drop: with the standard equipartition of energy, $kT$ in each mode (potential plus kinetic), the specific heat should remain more or less constant as the temperature was lowered (assuming no phase change).

To explain the anomalous low temperature behavior, Einstein assumed each atom to be an independent (quantum) simple harmonic oscillator, and, just as for black body radiation, he assumed the oscillators could only absorb or emit energy in quanta.  Consequently, at low enough temperatures there is rarely sufficient energy in the ambient thermal excitations to excite the oscillators, and they freeze out, just as blue oscillators do in low temperature black body radiation.  Einstein's picture was later somewhat refined: the basic set of oscillators was taken to be standing sound wave oscillations in the solid rather than individual atoms (making the picture even more like black body radiation in a cavity), but the main conclusion, the drop-off in specific heat at low temperatures, was not affected.

The Classical Simple Harmonic Oscillator 

The classical equation of motion for a one-dimensional simple harmonic oscillator with a particle of mass m  attached to a spring having spring constant k  is

$$m\frac{d^2x}{dt^2} = -kx.$$

The solution is

$$x = x_0\sin(\omega t + \delta), \qquad \omega = \sqrt{\frac{k}{m}},$$

and the momentum p=mv  has time dependence

$$p = mx_0\omega\cos(\omega t + \delta).$$

The total energy

$$\frac{1}{2m}\left(p^2 + m^2\omega^2 x^2\right) = E$$

is clearly constant in time. 

It is often useful to picture the time-development of a system in phase space, in this case a two-dimensional plot with position on the $x$-axis, momentum on the $y$-axis.  Actually, to have $(x, y)$ coordinates with the same dimensions, we use $(m\omega x,\ p)$.

It is evident from the above expression for the total energy that in these variables the point representing the system in phase space moves clockwise around a circle of radius $\sqrt{2mE}$ centered at the origin.
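As a quick numerical illustration (an addition to the text, with arbitrarily chosen values of $m$, $\omega$, $x_0$ and $\delta$), the following minimal Python sketch confirms that the point $(m\omega x, p)$ stays on a circle of radius $\sqrt{2mE}$:

```python
import numpy as np

# Arbitrary illustrative parameters (not from the text)
m, omega, x0, delta = 1.3, 2.0, 0.7, 0.4
E = 0.5 * m * omega**2 * x0**2                      # total energy of the oscillation

t = np.linspace(0.0, 10.0, 1000)
x = x0 * np.sin(omega * t + delta)                  # position
p = m * x0 * omega * np.cos(omega * t + delta)      # momentum

radius = np.sqrt((m * omega * x)**2 + p**2)         # distance from the origin in the (m*omega*x, p) plane
print(np.allclose(radius, np.sqrt(2 * m * E)))      # True: the trajectory is a circle of radius sqrt(2mE)
```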

Note that in the classical problem we could choose any point $(m\omega x, p)$, place the system there, and it would then move in a circle about the origin.  In the quantum problem, on the other hand, we cannot specify the initial coordinates $(m\omega x, p)$ precisely, because of the uncertainty principle. The best we can do is to place the system initially in a small cell in phase space, of size $\Delta x\,\Delta p = \hbar/2$.  In fact, we shall find that in quantum mechanics phase space is always divided into cells of essentially this size for each pair of variables.

Schrödinger’s Equation and the Ground State Wave Function

From the classical expression for total energy given above, the Schrödinger equation for the quantum oscillator follows in standard fashion:

$$-\frac{\hbar^2}{2m}\frac{d^2\psi(x)}{dx^2} + \frac{1}{2}m\omega^2 x^2\,\psi(x) = E\psi(x).$$

What will the solutions to this Schrödinger equation look like?  Since the potential $\frac{1}{2}m\omega^2 x^2$ increases without limit on going away from $x = 0$, it follows that no matter how much kinetic energy the particle has, for sufficiently large $x$ the potential energy dominates, and the (bound state) wavefunction decays with increasing rapidity for further increase in $x$.  (Obviously, for a real physical oscillator there is a limit on the height of the potential; we will assume that limit is much greater than the energies of interest in our problem.)

We know that when a particle penetrates a barrier of constant height $V_0$ (greater than the particle's kinetic energy) the wave function decreases exponentially into the barrier, as $e^{-\alpha x}$, where $\alpha = \sqrt{2m(V_0 - E)/\hbar^2}$.  But, in contrast to this constant height barrier, the "height" of the simple harmonic oscillator potential continues to increase as the particle penetrates to larger $x$.  Obviously, in this situation the decay will be faster than exponential.  If we (rather naïvely) assume it is more or less locally exponential, but with a local $\alpha$ varying with $V_0$, neglecting $E$ relative to $V_0$ in the expression for $\alpha$ suggests that $\alpha$ itself is proportional to $x$ (since the potential is proportional to $x^2$, and $\alpha \propto \sqrt{V}$), so maybe the wavefunction decays as $e^{-({\rm constant})\times x^2}$?

To check this idea, we insert $\psi(x) = e^{-x^2/2b^2}$ in the Schrödinger equation, using

$$\frac{d^2\psi}{dx^2} = -\frac{1}{b^2}\psi + \frac{x^2}{b^4}\psi$$

to find

$$-\frac{\hbar^2}{2m}\left(-\frac{1}{b^2} + \frac{x^2}{b^4}\right)\psi(x) + \frac{1}{2}m\omega^2 x^2\,\psi(x) = E\psi(x).$$

The $\psi(x)$ is just a factor here, and it is never zero, so it can be cancelled out.  This leaves a quadratic expression which must have the same coefficients of $x^0$, $x^2$ on the two sides, that is, the coefficient of $x^2$ on the left hand side must be zero:

$$\frac{\hbar^2}{2mb^4} = \frac{m\omega^2}{2}, \quad\text{so}\quad b = \sqrt{\frac{\hbar}{m\omega}}.$$

This fixes the wave function.  Equating the constant terms fixes the energy:

$$E = \frac{\hbar^2}{2mb^2} = \tfrac{1}{2}\hbar\omega.$$

So the conjectured form for the wave function is in fact the exact solution for the lowest energy state!  (It’s the lowest state because it has no nodes.)
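For readers who like to verify algebra by machine, here is a short sympy sketch (an illustrative addition, using the standard symbols above) confirming that the Gaussian with $b = \sqrt{\hbar/m\omega}$ satisfies the Schrödinger equation with $E = \tfrac12\hbar\omega$:

```python
import sympy as sp

x, m, omega, hbar = sp.symbols('x m omega hbar', positive=True)
b = sp.sqrt(hbar / (m * omega))
psi = sp.exp(-x**2 / (2 * b**2))          # conjectured ground state (unnormalized)

# Apply the Hamiltonian to psi
H_psi = -hbar**2 / (2 * m) * sp.diff(psi, x, 2) + sp.Rational(1, 2) * m * omega**2 * x**2 * psi

E0 = sp.simplify(H_psi / psi)             # should reduce to a constant, the energy eigenvalue
print(E0)                                 # hbar*omega/2
```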

Also note that even in this ground state the energy is nonzero, just as it was for the square well.  The central part of the wave function must have some curvature to join together the decreasing wave function on the left to that on the right.  This "zero point energy" is sufficient in one physical case to melt the lattice: helium is liquid even down to absolute zero temperature (checked down to microkelvins!) because the spread of the wave function destabilizes the solid lattice, which will only form under sufficient external pressure.

Higher Energy States

It is clear from the above discussion of the ground state that $b = \sqrt{\hbar/m\omega}$ is the natural unit of length in this problem, and $\hbar\omega$ that of energy, so to investigate higher energy states we reformulate in dimensionless variables,

$$\xi = \frac{x}{b} = x\sqrt{\frac{m\omega}{\hbar}}, \qquad \varepsilon = \frac{E}{\hbar\omega}.$$

Schrödinger’s equation becomes

$$\frac{d^2\psi(\xi)}{d\xi^2} = (\xi^2 - 2\varepsilon)\,\psi(\xi).$$

Deep in the barrier, the $\varepsilon$ term will become negligible, and just as for the ground state wave function, higher bound state wave functions will have $e^{-\xi^2/2}$ behavior, multiplied by some more slowly varying factor (it turns out to be a polynomial).

Exercise: find the relative contributions to the second derivative from the two terms in $x^n e^{-x^2/2}$.  For given $n$, when do the contributions involving the first term become small?  Define "small".

The standard approach to solving the general problem is to factor out the $e^{-\xi^2/2}$ term,

$$\psi(\xi) = h(\xi)\,e^{-\xi^2/2},$$

giving a differential equation for $h(\xi)$:

$$\frac{d^2h}{d\xi^2} - 2\xi\frac{dh}{d\xi} + (2\varepsilon - 1)h = 0.$$

We try solving this with a power series in ξ:

$$h(\xi) = h_0 + h_1\xi + h_2\xi^2 + \dots\ .$$

Inserting this in the differential equation, and requiring that the coefficient of each power $\xi^n$ vanish identically, leads to a recurrence formula for the coefficients $h_n$:

$$h_{n+2} = \frac{(2n+1-2\varepsilon)}{(n+1)(n+2)}\,h_n.$$

Evidently, the series of odd powers and that of even powers are independent solutions to Schrödinger's equation.  (Actually this isn't surprising: the potential is even in $x$, so the parity operator $P$ commutes with the Hamiltonian. Therefore, unless states are degenerate in energy, the wave functions will be even or odd in $x$.)  For large $n$, the recurrence relation simplifies to

$$h_{n+2} \approx \frac{2}{n}\,h_n, \qquad n \gg \varepsilon.$$

The series therefore tends to

$$\sum \frac{2^n\,\xi^{2n}}{(2n-2)(2n-4)\cdots 2} = 2\xi^2 \sum \frac{\xi^{2(n-1)}}{(n-1)!} = 2\xi^2 e^{\xi^2}.$$

Multiplying this by the $e^{-\xi^2/2}$ factor to recover the full wavefunction, we find that $\psi$ diverges for large $\xi$ as $e^{+\xi^2/2}$.

Actually we should have expected this: for a general value of the energy, the Schrödinger equation has the solution $Ae^{+\xi^2/2} + Be^{-\xi^2/2}$ at large distances, and only at certain energies does the coefficient $A$ vanish to give a normalizable bound state wavefunction.
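This sensitivity to the energy is easy to see numerically.  The sketch below (an illustrative addition, not part of the original lecture) integrates $\psi'' = (\xi^2 - 2\varepsilon)\psi$ outward from $\xi = 0$ for the even solution and shows that $\psi$ blows up unless $\varepsilon$ sits right at an eigenvalue:

```python
from scipy.integrate import solve_ivp

def psi_end(eps, xi_max=5.0):
    """Integrate psi'' = (xi^2 - 2*eps)*psi outward for the even solution psi(0)=1, psi'(0)=0."""
    rhs = lambda xi, y: [y[1], (xi**2 - 2.0 * eps) * y[0]]
    sol = solve_ivp(rhs, (0.0, xi_max), [1.0, 0.0], rtol=1e-12, atol=1e-14)
    return sol.y[0, -1]

for eps in (0.49, 0.50, 0.51):
    print(f"eps = {eps:.2f}   psi(5) = {psi_end(eps): .3e}")
# Only at eps = 1/2 (the ground state) does psi stay small out to xi = 5;
# for nearby energies the growing e^{+xi^2/2} solution takes over and psi is
# many orders of magnitude larger, with opposite signs on the two sides of the eigenvalue.
```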

So how do we find the nondiverging solutions?  It is clear that the infinite power series must be stopped!  The key is in the recurrence relation.

If the energy satisfies

$$2\varepsilon = 2n + 1, \qquad n\ \text{an integer},$$

then $h_{n+2}$ and all higher coefficients vanish.

This requirement in fact completely determines the polynomial (except for an overall constant) because with $2\varepsilon = 2n+1$ the coefficients $h_m$ for $m < n$ are determined by

$$h_{m+2} = \frac{(2m+1-2\varepsilon)}{(m+1)(m+2)}\,h_m = \frac{\bigl(2m+1-(2n+1)\bigr)}{(m+1)(m+2)}\,h_m.$$

This $n^{\rm th}$ order polynomial is called a Hermite polynomial and written $H_n(\xi)$.  The standard normalization of the Hermite polynomials $H_n(\xi)$ is to take the coefficient of the highest power $\xi^n$ to be $2^n$.  The other coefficients then follow using the recurrence relation above, giving:

$$H_0(\xi) = 1,\quad H_1(\xi) = 2\xi,\quad H_2(\xi) = 4\xi^2 - 2,\quad H_3(\xi) = 8\xi^3 - 12\xi,\ \text{etc.}$$

So the bottom line is that the wavefunction for the $n^{\rm th}$ excited state, having energy $\varepsilon = n + \tfrac12$, is $\psi_n(\xi) = C_n H_n(\xi)\,e^{-\xi^2/2}$, where $C_n$ is a normalization constant to be determined in the next section.
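As a cross-check (an illustrative addition, using NumPy's physicists' Hermite routines), one can build the polynomial directly from the recurrence relation with $2\varepsilon = 2n+1$ and compare it with the standard Hermite polynomial:

```python
import numpy as np
from numpy.polynomial.hermite import herm2poly

def hermite_from_recurrence(n):
    """Power-series coefficients from h_{m+2} = (2m+1-2e)/((m+1)(m+2)) h_m with 2e = 2n+1."""
    h = np.zeros(n + 1)
    h[n % 2] = 1.0                       # start the even or odd series with an arbitrary constant
    for m in range(n % 2, n - 1, 2):
        h[m + 2] = (2 * m + 1 - (2 * n + 1)) / ((m + 1) * (m + 2)) * h[m]
    return h * 2**n / h[n]               # rescale so the coefficient of xi^n is 2^n

for n in range(5):
    exact = herm2poly([0] * n + [1])     # power-series coefficients of the standard H_n
    print(n, np.allclose(hermite_from_recurrence(n), exact))    # True for each n
```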

It can be shown (see exercises at the end of this lecture) that $H_n'(\xi) = 2nH_{n-1}(\xi)$.  Using this, beginning with the ground state, one can easily convince oneself that the successive energy eigenstates each have one more node: the $n^{\rm th}$ state has $n$ nodes.  This is also evident from numerical solution using the spreadsheet, watching how the wave function behaves at large $x$ as the energy is cranked up.

The spreadsheet can also be used to plot the wave function for large $n$, say $n = 200$.  It is instructive to compare the probability distribution with that for a classical pendulum, one oscillating with fixed amplitude and observed many times at random intervals. For the pendulum, the probability peaks at the ends of the swing, where the pendulum is slowest and therefore spends most time.  The $n = 200$ distribution follows this pattern, but of course oscillates.  However, in the large $n$ limit these oscillations take place over undetectably small intervals.
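For readers without the spreadsheet handy, here is a small Python sketch (an illustrative addition) making the same comparison.  It evaluates $|\psi_n|^2$ with the stable recurrence $\psi_{k+1} = \sqrt{2/(k+1)}\,\xi\,\psi_k - \sqrt{k/(k+1)}\,\psi_{k-1}$ (which avoids the huge factorials in the Hermite form) and overlays the classical distribution $1/\pi\sqrt{A^2 - \xi^2}$ with turning point $A = \sqrt{2n+1}$:

```python
import numpy as np
import matplotlib.pyplot as plt

n = 200
xi = np.linspace(-22, 22, 4000)                      # sqrt(2n+1) ~ 20, so go a bit beyond the turning points

# Normalized eigenfunctions from the stable recurrence, starting with psi_0
psi_prev = np.zeros_like(xi)
psi = np.pi**-0.25 * np.exp(-xi**2 / 2)              # psi_0 in dimensionless units
for k in range(n):
    psi, psi_prev = np.sqrt(2.0/(k+1)) * xi * psi - np.sqrt(k/(k+1.0)) * psi_prev, psi

A = np.sqrt(2 * n + 1)                               # classical turning point for energy (n + 1/2)
classical = np.where(np.abs(xi) < A,
                     1.0 / (np.pi * np.sqrt(np.clip(A**2 - xi**2, 1e-12, None))), 0.0)

plt.plot(xi, psi**2, lw=0.5, label=r'$|\psi_{200}|^2$')
plt.plot(xi, classical, 'k--', label='classical')
plt.legend(); plt.xlabel(r'$\xi$'); plt.show()
```

The dashed classical curve peaks at the turning points $\pm A$; the quantum curve oscillates rapidly about it.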

The classical pendulum when not at rest clearly has a time-dependent probability distribution—it swings backwards and forwards.  This means it cannot be in an eigenstate of the energy.  In fact, the quantum state most like the classical is a coherent state built up of neighboring energy eigenstates.  We shall discuss coherent states later in the course.

Operator Approach to the Simple Harmonic Oscillator

Having scaled the position coordinate $x$ to the dimensionless $\xi$ by $\xi = x/b = x\sqrt{m\omega/\hbar}$, let us also scale the momentum from $p$ to $\pi = -i\,d/d\xi$ (so $\pi = bp/\hbar = p/\sqrt{m\omega\hbar}$).

The Hamiltonian is

$$H = \frac{p^2 + m^2\omega^2 x^2}{2m} = \frac{\hbar\omega}{2}\left(\pi^2 + \xi^2\right).$$

Dirac had the brilliant idea of factorizing this expression: the obvious thought $(\xi^2 + \pi^2) = (\xi + i\pi)(\xi - i\pi)$ isn't quite right, because it fails to take account of the noncommutativity of the operators, but the symmetrical version

$$H = \frac{\hbar\omega}{4}\left[(\xi + i\pi)(\xi - i\pi) + (\xi - i\pi)(\xi + i\pi)\right]$$

is fine, and we shall soon see that it leads to a very easy way of finding the eigenvalues and operator matrix elements for the oscillator, far simpler than using the wave functions we found above.  Interestingly, Dirac’s factorization here of a second-order differential operator into a product of first-order operators is close to the idea that led to his most famous achievement, the Dirac equation, the basis of the relativistic theory of electrons, protons, etc.

To continue, we define new operators $a$, $a^\dagger$ by

$$a = \frac{\xi + i\pi}{\sqrt{2}} = \frac{1}{\sqrt{2m\omega\hbar}}\left(m\omega x + ip\right), \qquad a^\dagger = \frac{\xi - i\pi}{\sqrt{2}} = \frac{1}{\sqrt{2m\omega\hbar}}\left(m\omega x - ip\right).$$

(We’ve expressed a  in terms of the original variables x,p  for later use.)

From the commutation relation $[i\pi, \xi] = 1$ it follows that

$$[a, a^\dagger] = 1.$$

Therefore the Hamiltonian can be written:

$$H = \hbar\omega\left(a^\dagger a + \tfrac12\right) = \hbar\omega\left(N + \tfrac12\right), \quad \text{where } N = a^\dagger a.$$

Note that the operator N  can only have non-negative eigenvalues, since

$$\langle\psi|N|\psi\rangle = \langle\psi|a^\dagger a|\psi\rangle = \langle\psi_a|\psi_a\rangle \geq 0,$$

where $|\psi_a\rangle \equiv a|\psi\rangle$.

Now

$$[N, a^\dagger] = a^\dagger a a^\dagger - a^\dagger a^\dagger a = a^\dagger[a, a^\dagger] = a^\dagger.$$

Suppose $N$ has an eigenfunction $|\nu\rangle$ with eigenvalue $\nu$,

$$N|\nu\rangle = \nu|\nu\rangle.$$

From the two equations above

$$Na^\dagger|\nu\rangle = a^\dagger N|\nu\rangle + a^\dagger|\nu\rangle = (\nu + 1)\,a^\dagger|\nu\rangle,$$

so $a^\dagger|\nu\rangle$ is an eigenfunction of $N$ with eigenvalue $\nu + 1$.  Operating with $a^\dagger$ again and again, we climb an infinite ladder of eigenstates equally spaced in energy.

$a^\dagger$ is often termed a creation operator, since the quantum of energy $\hbar\omega$ added each time it operates is equivalent to an added photon in black body radiation (electromagnetic oscillations in a cavity).

It is easy to check that the state $a|\nu\rangle$ is an eigenstate with eigenvalue $\nu - 1$, provided it is nonzero, so the operator $a$ takes us down the ladder. However, this cannot go on indefinitely: we have established that $N$ cannot have negative eigenvalues. We must eventually reach a state $|\nu\rangle$ for which $a|\nu\rangle = 0$, the operator $a$ annihilates the state. (At each step down, $a$ annihilates one quantum of energy, so $a$ is often called an annihilation or destruction operator.)

Since the norm squared of $a|\nu\rangle$ is $\left|a|\nu\rangle\right|^2 = \langle\nu|a^\dagger a|\nu\rangle = \langle\nu|N|\nu\rangle = \nu\langle\nu|\nu\rangle$, and since $\langle\nu|\nu\rangle > 0$ for any nonvanishing state, it must be that the lowest eigenstate (the $|\nu\rangle$ for which $a|\nu\rangle = 0$) has $\nu = 0$.  It follows that the $\nu$'s on the ladder are the non-negative integers, so from this point on we relabel the eigenstates with $n$ in place of $\nu$.

That is to say, we have proved that the only possible eigenvalues of N  are zero and the positive integers: 0, 1, 2, 3… .   

N  is called the number operator: it measures the number of quanta of energy in the oscillator above the irreducible ground state energy (that is, above the “zero-point energy” arising from the wave-like nature of the particle).

Since from above the Hamiltonian

$$H = \hbar\omega\left(a^\dagger a + \tfrac12\right) = \hbar\omega\left(N + \tfrac12\right),$$

the energy eigenvalues are

$$H|n\rangle = \left(n + \tfrac12\right)\hbar\omega\,|n\rangle.$$

It is important to appreciate that Dirac's factorization trick, with very little effort, has given us all the eigenvalues of the Hamiltonian

$$H = \frac{\hbar\omega}{2}\left(\pi^2 + \xi^2\right).$$

Contrast the work needed in this section with that in the standard Schrödinger approach. We have also established that the lowest energy state $|0\rangle$, having energy $\tfrac12\hbar\omega$, must satisfy the first-order differential equation $a|0\rangle = 0$, that is,

$$(\xi + i\pi)|0\rangle = \left(\xi + \frac{d}{d\xi}\right)\psi_0(\xi) = 0.$$

The solution, unnormalized, is

$$\psi_0(\xi) = Ce^{-\xi^2/2}.$$

(In fact, we’ve seen this equation and its solution before:  this was the condition for the “least uncertain” wave function in the discussion of the Generalized Uncertainty Principle.)

We denote the normalized set of eigenstates $|0\rangle, |1\rangle, |2\rangle, \dots, |n\rangle, \dots$ with $\langle n|n\rangle = 1$.  Now $a^\dagger|n\rangle = C_n|n+1\rangle$, and $C_n$ is easily found:

$$|C_n|^2 = |C_n|^2\langle n+1|n+1\rangle = \langle n|aa^\dagger|n\rangle = (n+1),$$

and

$$a^\dagger|n\rangle = \sqrt{n+1}\,|n+1\rangle.$$

Therefore, if we take the set of orthonormal states $|0\rangle, |1\rangle, |2\rangle, \dots, |n\rangle, \dots$ as the basis in the Hilbert space, the only nonzero matrix elements of $a^\dagger$ are $\langle n+1|a^\dagger|n\rangle = \sqrt{n+1}$.  That is to say,

$$a^\dagger = \begin{pmatrix} 0 & 0 & 0 & 0 & \cdots\\ \sqrt{1} & 0 & 0 & 0 & \cdots\\ 0 & \sqrt{2} & 0 & 0 & \cdots\\ 0 & 0 & \sqrt{3} & 0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

(The column vectors in the space this matrix operates on have an infinite number of elements: the lowest energy, the ground state component, is the entry at the top of the infinite vector—so up the energy ladder is down the vector!)

The adjoint

$$a = \begin{pmatrix} 0 & \sqrt{1} & 0 & 0 & \cdots\\ 0 & 0 & \sqrt{2} & 0 & \cdots\\ 0 & 0 & 0 & \sqrt{3} & \cdots\\ 0 & 0 & 0 & 0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

So

$$a|n\rangle = \sqrt{n}\,|n-1\rangle.$$

For practical computations, we need to find the matrix elements of the position and momentum variables between the normalized eigenstates.  Now

$$x = \sqrt{\frac{\hbar}{2m\omega}}\left(a^\dagger + a\right), \qquad p = i\sqrt{\frac{m\omega\hbar}{2}}\left(a^\dagger - a\right),$$

so

$$x = \sqrt{\frac{\hbar}{2m\omega}}\begin{pmatrix} 0 & \sqrt{1} & 0 & 0 & \cdots\\ \sqrt{1} & 0 & \sqrt{2} & 0 & \cdots\\ 0 & \sqrt{2} & 0 & \sqrt{3} & \cdots\\ 0 & 0 & \sqrt{3} & 0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \qquad p = i\sqrt{\frac{m\omega\hbar}{2}}\begin{pmatrix} 0 & -\sqrt{1} & 0 & 0 & \cdots\\ \sqrt{1} & 0 & -\sqrt{2} & 0 & \cdots\\ 0 & \sqrt{2} & 0 & -\sqrt{3} & \cdots\\ 0 & 0 & \sqrt{3} & 0 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

These matrices are, of course, Hermitian (not forgetting the $i$ factor in $p$).
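Truncated versions of these matrices are easy to build and check numerically.  The following sketch (an illustrative addition, in units where $\hbar = m = \omega = 1$) constructs $a$, then $x$ and $p$, and verifies that they are Hermitian and that $H = \tfrac12(p^2 + x^2)$ is diagonal with eigenvalues $n + \tfrac12$, apart from the last entry, which is corrupted by the truncation:

```python
import numpy as np

Nmax = 8                                           # truncate the infinite matrices at Nmax x Nmax
a = np.diag(np.sqrt(np.arange(1, Nmax)), k=1)      # annihilation operator: sqrt(n) just above the diagonal
adag = a.conj().T                                  # creation operator

# hbar = m = omega = 1
x = (adag + a) / np.sqrt(2.0)
p = 1j * (adag - a) / np.sqrt(2.0)

print(np.allclose(x, x.conj().T), np.allclose(p, p.conj().T))   # both Hermitian: True True

H = 0.5 * (p @ p + x @ x)
print(np.round(np.real(np.diag(H)), 6))            # 0.5, 1.5, 2.5, ... (last entry spoiled by truncation)
print(np.allclose(a @ adag - adag @ a, np.eye(Nmax)))  # [a, a†] = 1 except in the last slot -> False
```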

To find the matrix elements between eigenstates of any product of $x$'s and $p$'s, express all the $x$'s and $p$'s in terms of $a$'s and $a^\dagger$'s, to give a sum of products of $a$'s and $a^\dagger$'s. Each product in this sum can be evaluated sequentially from the right, because each $a$ or $a^\dagger$ has only one nonzero matrix element when the product operates on one eigenstate.

Normalizing the Eigenstates in x-space

The normalized ground state wave function is

$$\psi_0(\xi) = Ce^{-\xi^2/2} = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-m\omega x^2/2\hbar},$$

where we have gone back to the $x$ variable, and normalized using $\int_{-\infty}^{\infty} e^{-ax^2}\,dx = \sqrt{\pi/a}$.

To find the normalized wave functions for the higher states, they are first constructed formally by applying the creation operator $a^\dagger$ repeatedly on the ground state $|0\rangle$.  Next, the result is translated into $x$-space (actually $\xi = x/b$) by writing $a^\dagger$ as a differential operator, acting on $\psi_0(\xi)$.

Using $\langle n|a^\dagger|n-1\rangle = \sqrt{n}$,

$$|n\rangle = \frac{a^\dagger}{\sqrt{n}}\,|n-1\rangle = \dots = \frac{(a^\dagger)^n}{\sqrt{n!}}\,|0\rangle.$$

Now

$$a^\dagger = \frac{1}{\sqrt{2}}\left(\xi - i\pi\right) = \frac{1}{\sqrt{2}}\left(\xi - \frac{d}{d\xi}\right),$$

so

$$\psi_n(\xi) = \frac{(a^\dagger)^n}{\sqrt{n!}}\,|0\rangle = \frac{1}{\sqrt{n!}}\left(\frac{1}{\sqrt{2}}\left(\xi - \frac{d}{d\xi}\right)\right)^n \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\xi^2/2}.$$

We need to check that this expression is indeed the same as the Hermite polynomial wave function derived earlier, and to do that we need some further properties of the Hermite polynomials.

Some Properties of Hermite Polynomials

The mathematicians define the Hermite polynomials by:

$$H_n(\xi) = (-1)^n e^{\xi^2}\frac{d^n}{d\xi^n}e^{-\xi^2},$$

so

$$H_0(\xi) = 1,\quad H_1(\xi) = 2\xi,\quad H_2(\xi) = 4\xi^2 - 2,\quad H_3(\xi) = 8\xi^3 - 12\xi,\ \text{etc.}$$

It follows immediately from the definition that the coefficient of the leading power is $2^n$.

It is a straightforward exercise to check that $H_n$ is a solution of the differential equation

$$\left(\frac{d^2}{d\xi^2} - 2\xi\frac{d}{d\xi} + 2n\right)H_n(\xi) = 0,$$

so these are indeed the same polynomials we found by the series solution of Schrödinger’s equation earlier (recall the equation for the polynomial component of the wave function was

$$\frac{d^2h}{d\xi^2} - 2\xi\frac{dh}{d\xi} + (2\varepsilon - 1)h = 0,$$

with $2\varepsilon = 2n + 1$.)

We have found $\psi_n(\xi)$ in the form

$$\psi_n(\xi) = \frac{1}{\sqrt{n!}}\left(\frac{1}{\sqrt{2}}\left(\xi - \frac{d}{d\xi}\right)\right)^n \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\xi^2/2}.$$

We shall now prove that the polynomial component is exactly equivalent to the Hermite polynomial as defined at the beginning of this section.

We begin with the operator identity:

$$\left(\xi - \frac{d}{d\xi}\right) = -e^{\xi^2/2}\,\frac{d}{d\xi}\,e^{-\xi^2/2}.$$

Both sides of this expression are to be regarded as operators, that is, it is assumed that both are operating on some function $f(\xi)$.

Now take the $n^{\rm th}$ power of both sides: on the right, we find, for example,

$$\left(-e^{\xi^2/2}\frac{d}{d\xi}e^{-\xi^2/2}\right)^3 = (-1)^3\, e^{\xi^2/2}\frac{d}{d\xi}e^{-\xi^2/2}\cdot e^{\xi^2/2}\frac{d}{d\xi}e^{-\xi^2/2}\cdot e^{\xi^2/2}\frac{d}{d\xi}e^{-\xi^2/2} = (-1)^3\, e^{\xi^2/2}\frac{d^3}{d\xi^3}e^{-\xi^2/2},$$

since the intermediate exponential terms cancel against each other.

So:

$$\left(\xi - \frac{d}{d\xi}\right)^n = (-1)^n e^{\xi^2/2}\frac{d^n}{d\xi^n}e^{-\xi^2/2},$$

and substituting this into the expression for $\psi_n(\xi)$ above,

$$\begin{aligned}\psi_n(\xi) &= \frac{1}{\sqrt{2^n n!}}\,(-1)^n\left(e^{\xi^2/2}\frac{d^n}{d\xi^n}e^{-\xi^2/2}\right)\left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\xi^2/2}\\ &= \frac{1}{\sqrt{2^n n!}}\,(-1)^n\left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\xi^2/2}\left(e^{\xi^2}\frac{d^n}{d\xi^n}e^{-\xi^2}\right)\\ &= \frac{1}{\sqrt{2^n n!}}\left(\frac{m\omega}{\pi\hbar}\right)^{1/4} H_n(\xi)\,e^{-\xi^2/2}, \qquad \text{with } \xi = \sqrt{\frac{m\omega}{\hbar}}\,x.\end{aligned}$$

This establishes the equivalence of the two approaches to Schrödinger's equation for the simple harmonic oscillator, and provides us with the overall normalization constants without doing integrals.  (The expression for $\psi_n(\xi)$ above satisfies $\int |\psi_n|^2\,dx = 1$.)
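A quick symbolic check of this equivalence (an illustrative addition, using sympy) verifies the key identity $(\xi - d/d\xi)^n e^{-\xi^2/2} = H_n(\xi)\,e^{-\xi^2/2}$; the $1/\sqrt{2^n n!}$ factor then comes from the $1/\sqrt{2}$ in each $a^\dagger$ and the $1/\sqrt{n!}$ in the ladder construction:

```python
import sympy as sp

xi = sp.symbols('xi')
phi = sp.exp(-xi**2 / 2)                                  # start from the (unnormalized) ground state

for n in range(1, 7):
    phi = xi * phi - sp.diff(phi, xi)                     # apply (xi - d/dxi) once more
    hermite_form = sp.hermite(n, xi) * sp.exp(-xi**2 / 2) # H_n(xi) e^{-xi^2/2}
    print(n, sp.simplify(phi - hermite_form) == 0)        # True for every n
```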

Exercises:

Use $H_n(\xi) = (-1)^n e^{\xi^2}\dfrac{d^n}{d\xi^n}e^{-\xi^2}$ to prove:

(a) the coefficient of $\xi^n$ is $2^n$.

(b) $H_n'(\xi) = 2nH_{n-1}(\xi)$

(c) $H_{n+1}(\xi) = 2\xi H_n(\xi) - 2nH_{n-1}(\xi)$

(d) $\displaystyle\int_{-\infty}^{\infty} e^{-\xi^2} H_n^2(\xi)\,d\xi = 2^n n!\,\sqrt{\pi}$

(Hint: rewrite as $\displaystyle\int H_n(\xi)\,(-1)^n\frac{d^n}{d\xi^n}e^{-\xi^2}\,d\xi$, then integrate by parts $n$ times, and use (a).)

(e) $\displaystyle\int_{-\infty}^{\infty} e^{-\xi^2} H_n(\xi)H_m(\xi)\,d\xi = 0$, for $m \neq n$.

It’s worth doing these exercises to become more familiar with the Hermite polynomials, but in evaluating matrix elements (and indeed in establishing some of these results) it is almost always far simpler to work with the creation and annihilation operators.
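For a numerical sanity check of (d) and (e) (an illustrative addition), Gauss-Hermite quadrature handles the $e^{-\xi^2}$ weight exactly for polynomial integrands:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval
from math import factorial, sqrt, pi

nodes, weights = hermgauss(60)            # exact for polynomials of degree < 120 under the e^{-xi^2} weight

def H(n, xi):
    """Physicists' Hermite polynomial H_n evaluated at xi."""
    return hermval(xi, [0.0] * n + [1.0])

for n in range(6):
    norm = np.sum(weights * H(n, nodes)**2)                       # exercise (d)
    print(n, np.isclose(norm, 2**n * factorial(n) * sqrt(pi)))    # True

print(np.isclose(np.sum(weights * H(2, nodes) * H(5, nodes)), 0.0, atol=1e-8))   # exercise (e), m != n
```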

Exercise: use the creation and annihilation operators to find $\langle n|x^4|n\rangle$.  This matrix element is useful in estimating the energy change arising on adding a small nonharmonic potential energy term to a harmonic oscillator.
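If you want to check your operator algebra against a machine, the truncated matrices introduced earlier do the job (an illustrative addition; the printed values match the standard closed form $3(2n^2+2n+1)\left(\hbar/2m\omega\right)^2$):

```python
import numpy as np

Nmax = 20                                          # keep the truncation well above the n we probe
a = np.diag(np.sqrt(np.arange(1, Nmax)), k=1)
adag = a.T
x = (adag + a) / np.sqrt(2.0)                      # hbar = m = omega = 1, so x = (a + a_dagger)/sqrt(2)

x4 = np.linalg.matrix_power(x, 4)
for n in range(5):
    # diagonal element <n|x^4|n>, compared with 3*(2n^2+2n+1)/4, i.e. 3(2n^2+2n+1)(hbar/2momega)^2
    print(n, x4[n, n], 3 * (2 * n**2 + 2 * n + 1) / 4)
```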

Time-Dependent Wave Functions

The set of normalized eigenstates $|0\rangle, |1\rangle, |2\rangle, \dots, |n\rangle, \dots$ discussed above are of course solutions to the time-independent Schrödinger equation, or in ket notation eigenstates of the Hamiltonian, $H|n\rangle = (n+\tfrac12)\hbar\omega|n\rangle$.  Putting in the time-dependence explicitly, $|n, t\rangle = e^{-iHt/\hbar}|n, t=0\rangle = e^{-i(n+\frac12)\omega t}|n\rangle$.  It is necessary to include the time dependence when dealing with a state which is a superposition of states of different energies, such as $(1/\sqrt{2})(|0\rangle + |1\rangle)$, which then becomes $(1/\sqrt{2})\left(e^{-i\omega t/2}|0\rangle + e^{-3i\omega t/2}|1\rangle\right)$.  Expectation values of combinations of position and/or momentum operators in such states are best evaluated by expressing everything in terms of annihilation and creation operators.
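As a concrete illustration (an addition in the same truncated-matrix spirit, with $\hbar = m = \omega = 1$), the expectation value of $x$ in the superposition above oscillates at the classical frequency, $\langle x\rangle(t) = \sqrt{\hbar/2m\omega}\,\cos\omega t$:

```python
import numpy as np

Nmax = 4
a = np.diag(np.sqrt(np.arange(1, Nmax)), k=1)
x = (a.T + a) / np.sqrt(2.0)                       # hbar = m = omega = 1

E = np.arange(Nmax) + 0.5                          # energies (n + 1/2)
c0 = np.zeros(Nmax); c0[[0, 1]] = 1 / np.sqrt(2)   # the state (|0> + |1>)/sqrt(2) at t = 0

for t in np.linspace(0, 2 * np.pi, 5):
    c_t = np.exp(-1j * E * t) * c0                 # each component just picks up a phase e^{-iE_n t}
    x_expect = np.real(c_t.conj() @ x @ c_t)
    print(f"t = {t:5.2f}   <x> = {x_expect:+.4f}   cos(t)/sqrt(2) = {np.cos(t)/np.sqrt(2):+.4f}")
```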

Solving Schrödinger’s Equation in Momentum Space

In the lecture on Function Spaces, we established that the basis of $|x\rangle$ states (eigenstates of the position operator) and that of $|k\rangle$ states (eigenstates of the momentum operator) were both complete bases in Hilbert space (physicist's definition) so we could work equally well with either from a formal point of view.  Why then do we almost always work in $x$-space?  Well, probably because we live in $x$-space, but there's another reason. The momentum operator in the $x$-space representation is $p = -i\hbar\,d/dx$, so Schrödinger's equation, written $\left(p^2/2m + V(x)\right)\psi(x) = E\psi(x)$, with $p$ in operator form, is a second-order differential equation.  Now consider what happens to Schrödinger's equation if we work in $p$-space.  Since the operator identity $[x, p] = i\hbar$ is true regardless of representation, we must have $x = i\hbar\,d/dp$.  So for a particle in a potential $V(x)$, writing Schrödinger's equation in $p$-space we are confronted with the nasty looking operator $V(i\hbar\,d/dp)$!  This will produce a differential equation in general a lot harder to solve than the standard $x$-space equation, so we stay in $x$-space.

But there are two potentials that can be handled in momentum space: first, for a linear potential $V(x) = Fx$, the momentum space analysis is actually easier; it's just a first-order equation.  Second, for a particle in a quadratic potential, a simple harmonic oscillator, the two approaches yield the same differential equation. That means that the eigenfunctions in momentum space (scaled appropriately) must be identical to those in position space: the simple harmonic eigenfunctions are their own Fourier transforms!
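This last claim is easy to verify numerically.  The sketch below (an illustrative addition, in dimensionless units $\hbar = m = \omega = 1$) Fourier transforms the first few eigenfunctions on a grid and checks that $\tilde\psi_n(k) = (-i)^n\psi_n(k)$, i.e. each eigenfunction is its own Fourier transform up to a phase:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

def psi(n, x):
    """Normalized oscillator eigenfunction in dimensionless units (hbar = m = omega = 1)."""
    Hn = hermval(x, [0.0] * n + [1.0])
    return Hn * np.exp(-x**2 / 2) / np.sqrt(2.0**n * factorial(n) * np.sqrt(pi))

x = np.linspace(-12, 12, 4001)
dx = x[1] - x[0]
k = np.linspace(-5, 5, 201)

for n in range(4):
    # phi(k) = (1/sqrt(2*pi)) * integral of psi_n(x) e^{-ikx} dx, done as a simple Riemann sum
    phi = np.array([(psi(n, x) * np.exp(-1j * kk * x)).sum() * dx for kk in k]) / np.sqrt(2 * pi)
    print(n, np.allclose(phi, (-1j)**n * psi(n, k), atol=1e-6))   # True: own Fourier transform, up to (-i)^n
```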
