
Time-Dependent Perturbation Theory

Michael Fowler

Introduction: General Formalism

We look at a Hamiltonian $H = H_0 + V(t)$, with $V(t)$ some time-dependent perturbation, so now the wave function will have perturbation-induced time dependence.

Our starting point is the set of eigenstates $|n\rangle$ of the unperturbed Hamiltonian, $H_0|n\rangle = E_n|n\rangle$.  Notice we are not labeling with a zero, no $E_n^0$, because with a time-dependent Hamiltonian energy will not be conserved, so it is pointless to look for energy corrections.  What happens instead, provided the perturbation is not too large, is that the system makes transitions between the eigenstates $|n\rangle$ of $H_0$.

Of course, even for $V=0$, the wave functions have the usual time dependence,

$$ |\psi(t)\rangle = \sum_n c_n\, e^{-iE_n t/\hbar}\,|n\rangle $$

with the $c_n$'s constant.  What happens on introducing $V(t)$ is that the $c_n$'s themselves acquire time dependence,

$$ |\psi(t)\rangle = \sum_n c_n(t)\, e^{-iE_n t/\hbar}\,|n\rangle $$

and this time dependence is determined by Schrödinger’s equation with $H = H_0 + V(t)$:

$$ i\hbar\,\frac{\partial}{\partial t}\sum_n c_n(t)\,e^{-iE_n t/\hbar}\,|n\rangle = \big(H_0 + V(t)\big)\sum_n c_n(t)\,e^{-iE_n t/\hbar}\,|n\rangle $$

so

$$ i\hbar\sum_n \dot{c}_n(t)\,e^{-iE_n t/\hbar}\,|n\rangle = V(t)\sum_n c_n(t)\,e^{-iE_n t/\hbar}\,|n\rangle $$

Taking the inner product with the bra $\langle m|\,e^{iE_m t/\hbar}$, and introducing $\omega_{mn} = (E_m - E_n)/\hbar$,

$$ i\hbar\,\dot{c}_m = \sum_n \langle m|V(t)|n\rangle\, c_n\, e^{i\omega_{mn}t} = \sum_n V_{mn}\,e^{i\omega_{mn}t}\,c_n $$

This is a matrix differential equation for the $c_n$'s:

$$ i\hbar\begin{pmatrix}\dot{c}_1\\ \dot{c}_2\\ \dot{c}_3\\ \vdots\end{pmatrix} = \begin{pmatrix} V_{11} & V_{12}e^{i\omega_{12}t} & \cdots \\ V_{21}e^{i\omega_{21}t} & V_{22} & \cdots \\ \cdots & \cdots & V_{33} & \cdots \\ \vdots & & & \ddots \end{pmatrix}\begin{pmatrix}c_1\\ c_2\\ c_3\\ \vdots\end{pmatrix} $$

and solving this set of coupled equations will give us the $c_n(t)$'s, and hence the probability of finding the system in any particular state at any later time.

If the system is in initial state $|i\rangle$ at $t=0$, the probability amplitude for it being in state $|f\rangle$ at time $t$ is, to leading order in the perturbation,

$$ c_f(t) = \delta_{fi} - \frac{i}{\hbar}\int_0^t V_{fi}(t')\,e^{i\omega_{fi}t'}\,dt'. $$

The probability that the system is in fact in state $|f\rangle$ at time $t$ is therefore

$$ |c_f(t)|^2 = \frac{1}{\hbar^2}\left|\int_0^t V_{fi}(t')\,e^{i\omega_{fi}t'}\,dt'\right|^2. $$

Obviously, this is only going to be a good approximation if it predicts that the probability of transition is small; otherwise we need to go to higher order, using the Interaction Representation (or an exact solution like that in the next section).
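Where does the leading-order amplitude come from?  A minimal sketch: put the zeroth-order values $c_n(t')\approx\delta_{ni}$ into the right-hand side of the coupled equations above and integrate once,

$$ i\hbar\,\dot c_f(t)\approx V_{fi}(t)\,e^{i\omega_{fi}t} \quad\Longrightarrow\quad c_f(t)=\delta_{fi}-\frac{i}{\hbar}\int_0^t V_{fi}(t')\,e^{i\omega_{fi}t'}\,dt'. $$

Iterating once more, with this improved $c_f$ fed back into the right-hand side, generates the second-order amplitude used in the last section of these notes.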

Example: Kicking an Oscillator

Suppose a simple harmonic oscillator is in its ground state $|0\rangle$ at $t=-\infty$.  It is perturbed by a small time-dependent potential $V(t) = eEx\,e^{-t^2/\tau^2}$.  What is the probability of finding it in the first excited state $|1\rangle$ at $t=+\infty$?

Here $V_{fi}(t) = eE\,\langle1|x|0\rangle\,e^{-t^2/\tau^2}$, and $x = \sqrt{\hbar/2m\omega}\,(a + a^\dagger)$, from which the probability can be evaluated.  It is $\left(e^2E^2/\hbar^2\right)\left(\hbar/2m\omega\right)\pi\tau^2\,e^{-\omega^2\tau^2/2}$.
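For completeness, here is how that result follows; the only ingredient beyond the first-order formula above is the standard Gaussian integral (and $\omega_{10}=\omega$ for the oscillator):

$$ c_1(\infty) = -\frac{i}{\hbar}\,eE\sqrt{\frac{\hbar}{2m\omega}}\int_{-\infty}^{\infty}e^{-t'^2/\tau^2}\,e^{i\omega t'}\,dt' = -\frac{i}{\hbar}\,eE\sqrt{\frac{\hbar}{2m\omega}}\;\sqrt{\pi}\,\tau\,e^{-\omega^2\tau^2/4}, $$

and squaring the modulus gives the probability quoted.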

It’s worth thinking through the physical interpretations for very long and for very short pulses, and explaining the significance of the pulse duration $\tau$ for which the probability is a maximum.

The Two-State System: an Exact Solution

For the particular case of a two-state system perturbed by a periodic external field, the matrix equation above can be solved exactly.  Of course, real physical systems have more than two states, but in some important cases two of the states are strongly coupled to each other and only weakly coupled to the rest, and the two-state analysis then becomes relevant.  A famous example, the ammonia maser, is discussed at the end of the section.

For a two-state system, then, the most general wave function is

$$ |\psi(t)\rangle = c_1(t)\,e^{-iE_1t/\hbar}\,|1\rangle + c_2(t)\,e^{-iE_2t/\hbar}\,|2\rangle $$

and the differential equation for the $c_n(t)$'s is:

$$ i\hbar\begin{pmatrix}\dot c_1\\ \dot c_2\end{pmatrix} = \begin{pmatrix}0 & V e^{i\omega t}e^{i\omega_{12}t}\\ V e^{-i\omega t}e^{i\omega_{21}t} & 0\end{pmatrix}\begin{pmatrix}c_1\\ c_2\end{pmatrix}. $$

Writing $\omega + \omega_{12} = \alpha$ for convenience, the coupled equations are:

$$ i\hbar\,\dot c_1 = V e^{i\alpha t}c_2,\qquad i\hbar\,\dot c_2 = V e^{-i\alpha t}c_1. $$

These two first-order equations can be transformed into a single second-order equation by differentiating the second one, then substituting $\dot c_1$ from the first one and $c_1$ from the second one, to give

$$ \ddot c_2 = -i\alpha\,\dot c_2 - \frac{V^2}{\hbar^2}\,c_2. $$
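Explicitly, the one-line check: differentiating $i\hbar\dot c_2 = Ve^{-i\alpha t}c_1$ and using both coupled equations,

$$ i\hbar\,\ddot c_2 = -i\alpha\,Ve^{-i\alpha t}c_1 + Ve^{-i\alpha t}\dot c_1 = -i\alpha\,(i\hbar\,\dot c_2) + \frac{V^2}{i\hbar}\,c_2, $$

and dividing through by $i\hbar$ gives the equation just written.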

This is a standard second-order differential equation, solved by putting in a trial solution $c_2(t) = c_2(0)\,e^{i\Omega t}$.  This satisfies the equation if

$$ \Omega = -\frac{\alpha}{2}\pm\sqrt{\frac{\alpha^2}{4}+\frac{V^2}{\hbar^2}}, $$

so, reverting to the original $\omega + \omega_{12} = \alpha$, the general solution is:

$$ c_2(t) = e^{-i(\omega-\omega_{21})t/2}\left(A\,e^{\,it\sqrt{\left(\frac{\omega-\omega_{21}}{2}\right)^{2}+\frac{V^{2}}{\hbar^{2}}}} + B\,e^{-it\sqrt{\left(\frac{\omega-\omega_{21}}{2}\right)^{2}+\frac{V^{2}}{\hbar^{2}}}}\right). $$

Taking the initial state to be $c_1(0)=1,\ c_2(0)=0$ gives $A=-B$.

To fix the overall constant, note that at t = 0, 

$$ \dot c_2(0) = \frac{V}{i\hbar}\,c_1(0) = \frac{V}{i\hbar}. $$

Therefore

$$ |c_2(t)|^2 = \frac{V^{2}/\hbar^{2}}{\left(\frac{\omega-\omega_{21}}{2}\right)^{2}+\frac{V^{2}}{\hbar^{2}}}\;\sin^{2}\!\left(t\sqrt{\left(\frac{\omega-\omega_{21}}{2}\right)^{2}+\frac{V^{2}}{\hbar^{2}}}\right). $$
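In detail: with $B=-A$ the general solution collapses to a single sine,

$$ c_2(t) = 2iA\,e^{-i(\omega-\omega_{21})t/2}\,\sin\Omega' t,\qquad \Omega'\equiv\sqrt{\left(\frac{\omega-\omega_{21}}{2}\right)^{2}+\frac{V^{2}}{\hbar^{2}}}, $$

so $\dot c_2(0) = 2iA\,\Omega' = V/i\hbar$ fixes $|2A| = V/\hbar\Omega'$, and squaring gives the result above.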

Note in particular the result if $\omega = \omega_{21}$:

$$ |c_2(t)|^2 = \sin^2\!\left(\frac{Vt}{\hbar}\right). $$

Assuming $E_2 > E_1$, and the two-state system to be initially in the ground state $|1\rangle$, this means that after a time $h/4V$ the system will certainly be in state $|2\rangle$, and will then oscillate back and forth between the two states with period $h/2V$.

That is to say, a precisely timed period spent in an oscillating field can drive a collection of molecules all in the ground state to be all in an excited state.  The ammonia maser works by sending a stream of ammonia molecules, traveling at known velocity, down a tube having an oscillating field for a definite length, so the molecules emerging at the other end are all (or almost all, depending on the spread of ingoing velocities, etc.) in the first excited state.  Applying a small amount of electromagnetic radiation of the same frequency to the outgoing molecules causes some of them to decay; the radiation they emit stimulates further decays, so the whole collection decays much more rapidly, emitting intense coherent radiation.
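As a quick numerical sanity check on the resonant result $|c_2(t)|^2 = \sin^2(Vt/\hbar)$, the coupled equations can be integrated directly; a throwaway Python sketch (units with $\hbar=1$, and the coupling value is an arbitrary illustration):

    import numpy as np
    from scipy.integrate import solve_ivp

    hbar = 1.0     # work in units with hbar = 1
    V = 0.05       # coupling strength (arbitrary illustration)
    alpha = 0.0    # detuning omega - omega_21; zero means exact resonance

    def rhs(t, y):
        c1, c2 = y[0] + 1j*y[1], y[2] + 1j*y[3]
        dc1 = V*np.exp(1j*alpha*t)*c2/(1j*hbar)    # i*hbar*dc1/dt = V e^{+i alpha t} c2
        dc2 = V*np.exp(-1j*alpha*t)*c1/(1j*hbar)   # i*hbar*dc2/dt = V e^{-i alpha t} c1
        return [dc1.real, dc1.imag, dc2.real, dc2.imag]

    t_flip = np.pi*hbar/(2*V)                      # predicted complete-transfer time, h/4V
    sol = solve_ivp(rhs, [0, t_flip], [1.0, 0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
    c2 = sol.y[2, -1] + 1j*sol.y[3, -1]
    print(abs(c2)**2)                              # very close to 1, as predicted

Moving alpha away from zero reproduces the reduced oscillation amplitude of the general formula.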

A “Sudden” Perturbation

A sudden perturbation is defined here as a sudden switch from one time-independent Hamiltonian $H_0$ to another one $H_0'$, the time of switching being much shorter than any natural period of the system.  In this case, perturbation theory is irrelevant: if the system is initially in an eigenstate $|n\rangle$ of $H_0$, one simply has to write it as a sum over the eigenstates of $H_0'$, $|n\rangle = \sum_{n'}|n'\rangle\langle n'|n\rangle$.  The nontrivial part of the problem is in establishing that the change is sudden enough, by estimating the actual time taken for the Hamiltonian to change, and the periods of motion associated with the state $|n\rangle$ and with its transitions to neighboring states.

(We discussed one example last semester: an electron in the ground state of a one-dimensional box that suddenly doubles in size.  Other favorite examples include an atom with spin-orbit coupling in a magnetic field that suddenly reverses (Messiah p. 743), and the reaction of orbiting electrons to nuclear $\alpha$- or $\beta$-decay.)
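As a reminder of how the box example goes: the box width jumps from $L$ to $2L$, and the amplitude for the electron (initially in the old ground state) to be found in the new ground state is just the overlap integral,

$$ \langle 1'|1\rangle = \int_0^L\sqrt{\frac{1}{L}}\,\sin\frac{\pi x}{2L}\;\sqrt{\frac{2}{L}}\,\sin\frac{\pi x}{L}\,dx = \frac{4\sqrt{2}}{3\pi},\qquad |\langle 1'|1\rangle|^2 = \frac{32}{9\pi^2}\approx 0.36, $$

with the remaining probability distributed over the excited states of the larger box.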

Harmonic Perturbations: Fermi’s Golden Rule

Let us consider a system in an initial state $|i\rangle$ perturbed by a periodic potential $V(t) = Ve^{-i\omega t}$ switched on at $t=0$.  For example, this could be an atom perturbed by an external oscillating electric field, such as an incident light wave.

What is the probability that at a later time $t$ the system will be in state $|f\rangle$?

Recall the matrix differential equation for the $c_n$'s:

$$ i\hbar\begin{pmatrix}\dot{c}_1\\ \dot{c}_2\\ \dot{c}_3\\ \vdots\end{pmatrix} = \begin{pmatrix} V_{11} & V_{12}e^{i\omega_{12}t} & \cdots \\ V_{21}e^{i\omega_{21}t} & V_{22} & \cdots \\ \cdots & \cdots & V_{33} & \cdots \\ \vdots & & & \ddots \end{pmatrix}\begin{pmatrix}c_1\\ c_2\\ c_3\\ \vdots\end{pmatrix} $$

Since the system is definitely in state $|i\rangle$ at $t=0$, the ket vector on the right is initially $c_i=1,\ c_{j\neq i}=0$.

The first-order approximation is to keep the vector $c_i=1,\ c_{j\neq i}=0$ on the right, that is, to solve the equations

$$ i\hbar\,\dot c_f(t) = V_{fi}(t)\,e^{i\omega_{fi}t}. $$

Integrating this equation, the probability amplitude for an atom in initial state $|i\rangle$ to be in state $|f\rangle$ after time $t$ is, to first order:

$$ c_f(t) = -\frac{i}{\hbar}\int_0^t \langle f|V|i\rangle\,e^{i(\omega_{fi}-\omega)t'}\,dt' = -\frac{i}{\hbar}\,\langle f|V|i\rangle\,\frac{e^{i(\omega_{fi}-\omega)t}-1}{i(\omega_{fi}-\omega)}. $$

The probability of transition is therefore

$$ P_{i\to f}(t) = |c_f|^2 = \frac{1}{\hbar^2}\,|\langle f|V|i\rangle|^2\left(\frac{\sin\big((\omega_{fi}-\omega)t/2\big)}{(\omega_{fi}-\omega)/2}\right)^2 $$

and we’re interested in the large t  limit.

Writing $\alpha = (\omega_{fi}-\omega)/2$, our function has the form $\dfrac{\sin^2\alpha t}{\alpha^2}$.  This function has a peak at $\alpha=0$, with maximum value $t^2$ and width of order $1/t$, so a total weight of order $t$.  The function has subsidiary peaks at $\alpha t = (n+\tfrac12)\pi$, which are bounded by the denominator at $1/\alpha^2$.  For large $t$ their contribution also comes from a range of order $1/t$, and as $t\to\infty$ the function tends to a $\delta$ function at the origin, but multiplied by $t$.

This divergence is telling us that there is a finite probability per unit time for the transition: the transition probability grows linearly with the time elapsed, so we should divide by $t$ to get the transition rate.

To get the quantitative result, we need to evaluate the weight of the $\delta$ function term.  We use the standard result $\int_{-\infty}^{\infty}\left(\frac{\sin\xi}{\xi}\right)^2 d\xi = \pi$ to find $\int_{-\infty}^{\infty}\left(\frac{\sin\alpha t}{\alpha}\right)^2 d\alpha = \pi t$, and therefore

$$ \lim_{t\to\infty}\frac{1}{t}\left(\frac{\sin\alpha t}{\alpha}\right)^2 = \pi\,\delta(\alpha). $$
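A quick numerical check of the weight $\pi t$, for anyone who wants one (a throwaway Python sketch; the grid and cutoff are arbitrary):

    import numpy as np

    # check that (1/t) * integral of (sin(alpha*t)/alpha)^2 d(alpha) tends to pi
    for t in (5.0, 20.0, 80.0):
        alpha = np.linspace(-40.0, 40.0, 400000)   # even point count, so alpha = 0 is not on the grid
        f = (np.sin(alpha*t)/alpha)**2
        print(t, f.sum()*(alpha[1] - alpha[0])/t)  # approaches pi = 3.14159... as t grows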

Now, the transition rate is the probability of transition divided by t  in the large t  limit, that is,

$$ R_{i\to f} = \lim_{t\to\infty}\frac{P_{i\to f}(t)}{t} = \lim_{t\to\infty}\frac{1}{t}\,\frac{1}{\hbar^2}\,|\langle f|V|i\rangle|^2\left[\frac{\sin\big((\omega_{fi}-\omega)t/2\big)}{(\omega_{fi}-\omega)/2}\right]^2 = \frac{1}{\hbar^2}\,|\langle f|V|i\rangle|^2\,\pi\,\delta\!\left(\tfrac{1}{2}(\omega_{fi}-\omega)\right) = \frac{2\pi}{\hbar^2}\,|\langle f|V|i\rangle|^2\,\delta(\omega_{fi}-\omega) $$

This last line is Fermi’s Golden Rule: we shall be using it a lot.  You might worry that in the long time limit we have taken, the probability of transition is in fact diverging, so how can we use first-order perturbation theory?  The point is that for a transition with $\omega_{fi}\neq\omega$, “long time” means $(\omega_{fi}-\omega)t\gg1$, and this can still be a very short time compared with the mean transition time, which depends on the matrix element.  In fact, Fermi’s Rule agrees extremely well with experiment when applied to atomic systems.

Another Derivation of the Golden Rule

Actually, when light falls on an atom, the full periodic potential is not suddenly switched on, on an atomic time scale, but builds up over many cycles (of the atom and of the light).  Baym re-derives the Golden Rule assuming the limit of a very slow switch-on,

$$ V(t) = e^{\varepsilon t}\,V\,e^{-i\omega t} $$

with $\varepsilon$ very small, so that $V$ is switched on very gradually in the past, and we are looking at times much smaller than $1/\varepsilon$.  We can then take the initial time to be $-\infty$, that is,

$$ c_f(t) = -\frac{i}{\hbar}\int_{-\infty}^{t}\langle f|V|i\rangle\,e^{i(\omega_{fi}-\omega-i\varepsilon)t'}\,dt' = -\frac{1}{\hbar}\,\frac{e^{i(\omega_{fi}-\omega-i\varepsilon)t}}{\omega_{fi}-\omega-i\varepsilon}\,\langle f|V|i\rangle $$

so

$$ |c_f(t)|^2 = \frac{1}{\hbar^2}\,\frac{e^{2\varepsilon t}}{(\omega_{fi}-\omega)^2+\varepsilon^2}\,|\langle f|V|i\rangle|^2 $$

and the time rate of change

$$ \frac{d}{dt}|c_f(t)|^2 = \frac{1}{\hbar^2}\,\frac{2\varepsilon\,e^{2\varepsilon t}}{(\omega_{fi}-\omega)^2+\varepsilon^2}\,|\langle f|V|i\rangle|^2. $$

In the limit $\varepsilon\to0$, the function

$$ \frac{2\varepsilon}{(\omega_{fi}-\omega)^2+\varepsilon^2}\;\longrightarrow\;2\pi\,\delta(\omega_{fi}-\omega), $$

giving the Golden Rule again.
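The weight $2\pi$ in that limit is easy to check:

$$ \int_{-\infty}^{\infty}\frac{2\varepsilon}{x^2+\varepsilon^2}\,dx = 2\Big[\arctan\frac{x}{\varepsilon}\Big]_{-\infty}^{\infty} = 2\pi\quad\text{for any }\varepsilon>0, $$

so as $\varepsilon\to0$ the function becomes a spike at $\omega_{fi}=\omega$ of total weight $2\pi$, as required.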

Harmonic Perturbations: Second-Order Transitions

Sometimes the first-order matrix element $\langle f|V|i\rangle$ is identically zero (parity, Wigner-Eckart, etc.) but other matrix elements are nonzero, and the transition can be accomplished by an indirect route.  In the notes on the interaction representation, we derived the probability amplitude for the second-order process,

$$ c_f^{(2)}(t) = \left(\frac{1}{i\hbar}\right)^2\sum_n\int_0^t dt'\int_0^{t'}dt''\; e^{-i\omega_f(t-t')}\,\langle f|V_S(t')|n\rangle\; e^{-i\omega_n(t'-t'')}\,\langle n|V_S(t'')|i\rangle\; e^{-i\omega_i t''}. $$

Taking the gradually switched-on harmonic perturbation $V_S(t) = e^{\varepsilon t}\,V\,e^{-i\omega t}$, and the initial time $-\infty$, as above,

$$ c_f^{(2)}(t) = \left(\frac{1}{i\hbar}\right)^2\sum_n\langle f|V|n\rangle\langle n|V|i\rangle\, e^{-i\omega_f t}\int_{-\infty}^{t}dt'\int_{-\infty}^{t'}dt''\; e^{i(\omega_f-\omega_n-\omega-i\varepsilon)t'}\,e^{i(\omega_n-\omega_i-\omega-i\varepsilon)t''}. $$
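Doing the $t''$ integral first (the intermediate step, sketched):

$$ \int_{-\infty}^{t'}dt''\,e^{i(\omega_n-\omega_i-\omega-i\varepsilon)t''} = \frac{e^{i(\omega_n-\omega_i-\omega-i\varepsilon)t'}}{i(\omega_n-\omega_i-\omega-i\varepsilon)}, $$

so the remaining $t'$ integrand is $e^{i(\omega_f-\omega_i-2\omega-2i\varepsilon)t'}$ divided by $i(\omega_n-\omega_i-\omega-i\varepsilon)$, and the $t'$ integral has exactly the first-order form, with $\omega_f-\omega_i-2\omega$ playing the role of $\omega_{fi}-\omega$.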

Exactly as in the section above on the first-order Golden Rule, we can find the transition rate:

$$ \frac{d}{dt}\big|c_f^{(2)}(t)\big|^2 = \frac{2\pi}{\hbar^4}\left|\sum_n\frac{\langle f|V|n\rangle\langle n|V|i\rangle}{\omega_n-\omega_i-\omega-i\varepsilon}\right|^2\delta(\omega_f-\omega_i-2\omega). $$

(The $\hbar^4$ in the denominator goes to $\hbar$ on replacing the frequencies $\omega$ with energies $E$, both in the denominator and in the delta function; remember that if $E=\hbar\omega$, then $\delta(\omega) = \hbar\,\delta(E)$.)

This is a transition in which the system gains energy $2\hbar\omega$ from the beam; in other words, two photons are absorbed.  The first photon takes the system to the intermediate state of energy $\hbar\omega_n$, which is short-lived and therefore not well defined in energy: there is no energy conservation requirement into this state, only between the initial and final states.

Of course, if an atom in an arbitrary state is exposed to monochromatic light, other second-order processes, in which two photons are emitted, or one is absorbed and one emitted (in either order), are also possible.
