*Michael Fowler 7/6/07*

We look at a Hamiltonian $H = H^0 + V(t)$, with $V(t)$ some time-dependent
perturbation, so now the wave function will have perturbation-induced time
dependence.

Our starting point is the set of eigenstates $|n\rangle$ of the unperturbed
Hamiltonian $H^0$. Notice we are not labeling them with a zero, no $E_n^{(0)}$, because with a time-dependent Hamiltonian, energy will not
be conserved, so it is pointless to look for energy corrections. What happens instead, provided the
perturbation is not too large, is that the system makes transitions between the
eigenstates $|n\rangle$ of $H^0$.

Of course, even for *V*
= 0, the wave functions have the usual time dependence,

$$|\psi(t)\rangle = \sum_n c_n\, e^{-iE_n t/\hbar}\,|n\rangle$$

with the *c _{n}*'s
constant. What happens on introducing $V(t)$ is that the $c_n$'s become time dependent,

$$|\psi(t)\rangle = \sum_n c_n(t)\, e^{-iE_n t/\hbar}\,|n\rangle,$$

and this time dependence is determined by Schrödinger's
equation with $H = H^0 + V(t)$:

$$i\hbar\,\frac{d}{dt}\sum_n c_n(t)\, e^{-iE_n t/\hbar}\,|n\rangle = \left(H^0 + V(t)\right)\sum_n c_n(t)\, e^{-iE_n t/\hbar}\,|n\rangle$$

so

$$i\hbar\sum_n \dot{c}_n(t)\, e^{-iE_n t/\hbar}\,|n\rangle = \sum_n c_n(t)\, e^{-iE_n t/\hbar}\,V(t)\,|n\rangle$$

(the $E_n$ terms from differentiating the exponentials on the left cancel against $H^0|n\rangle = E_n|n\rangle$ on the right).

Taking the inner product with the bra $e^{iE_m t/\hbar}\langle m|$, and introducing $\omega_{mn} = (E_m - E_n)/\hbar$,

$$i\hbar\,\dot{c}_m(t) = \sum_n \langle m|V(t)|n\rangle\, e^{i\omega_{mn}t}\, c_n(t).$$

This is a matrix differential equation for the *c _{n}*'s :

$$i\hbar\begin{pmatrix}\dot{c}_1\\ \dot{c}_2\\ \vdots\end{pmatrix} = \begin{pmatrix}V_{11} & V_{12}\,e^{i\omega_{12}t} & \dots\\ V_{21}\,e^{i\omega_{21}t} & V_{22} & \dots\\ \vdots & \vdots & \ddots\end{pmatrix}\begin{pmatrix}c_1\\ c_2\\ \vdots\end{pmatrix}, \qquad V_{mn} = \langle m|V(t)|n\rangle,$$

and solving this set of coupled equations will give us the $c_n(t)$'s, and hence the probability of finding the system in any particular state at any later time.

If the system is in initial state $|i\rangle$ at *t* = 0,
the probability amplitude for it being in state $|f\rangle$ at time *t* is, *to
leading order* in the perturbation,

$$c_f(t) = \delta_{fi} - \frac{i}{\hbar}\int_0^t \langle f|V(t')|i\rangle\, e^{i\omega_{fi}t'}\,dt'.$$

The probability that the system is in fact in state $|f\rangle$ ($f \neq i$) at time *t* is therefore

$$P_{i\to f}(t) = \left|c_f(t)\right|^2 = \frac{1}{\hbar^2}\left|\int_0^t \langle f|V(t')|i\rangle\, e^{i\omega_{fi}t'}\,dt'\right|^2.$$

Obviously, this is only going to be a good approximation if it predicts that the probability of transition is small—otherwise we need to go to higher order, using the Interaction Representation (or an exact solution like that in the next section).
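The smallness condition is easy to probe numerically. Here is a minimal sketch (not part of the notes; the two-level system, coupling $v$, and frequencies are arbitrary illustration values, with $\hbar = 1$) comparing the leading-order amplitude $c_f(t) \approx -i\int_0^t \langle f|V(t')|i\rangle e^{i\omega_{fi}t'}dt'$ with an exact numerical integration of the coupled equations:

```python
# Hypothetical weakly driven two-level system, ħ = 1.
# Exact coupled equations:  i ċ1 = V(t) e^{-iω0 t} c2 ,  i ċ2 = V(t) e^{iω0 t} c1
# compared with the first-order amplitude  c2 ≈ -i ∫₀ᵀ V(t') e^{iω0 t'} dt'.
import math, cmath

w0 = 1.0    # ω_fi = (E_f - E_i)/ħ
w = 1.3     # driving frequency (off resonance); arbitrary choice
v = 0.01    # weak coupling: ⟨f|V(t)|i⟩ = v cos(ωt)

def V(t):
    return v * math.cos(w * t)

def rhs(t, c1, c2):
    return (-1j * V(t) * cmath.exp(-1j * w0 * t) * c2,
            -1j * V(t) * cmath.exp(1j * w0 * t) * c1)

# exact: fourth-order Runge-Kutta on the coupled equations
c1, c2 = 1.0 + 0j, 0.0 + 0j
h, T = 0.001, 10.0
for k in range(int(T / h)):
    t = k * h
    k1 = rhs(t, c1, c2)
    k2 = rhs(t + h/2, c1 + h/2*k1[0], c2 + h/2*k1[1])
    k3 = rhs(t + h/2, c1 + h/2*k2[0], c2 + h/2*k2[1])
    k4 = rhs(t + h, c1 + h*k3[0], c2 + h*k3[1])
    c1 += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    c2 += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])

# first order: trapezoid rule for the time integral
n = 20000
dt = T / n
integ = sum((0.5 if k in (0, n) else 1.0) * V(k*dt) * cmath.exp(1j*w0*k*dt)
            for k in range(n + 1)) * dt
c2_first = -1j * integ

p_exact, p_first = abs(c2)**2, abs(c2_first)**2
print(p_exact, p_first)
```

For this weak coupling the two probabilities agree to a fraction of a percent; cranking up `v` makes the first-order estimate fail visibly, which is exactly the sense in which the approximation requires the transition probability to stay small.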

*Example: kicking an oscillator*.

Suppose a simple harmonic oscillator is in its ground state $|0\rangle$ at *t* =
– ∞. It is perturbed by a
small time-dependent potential $V(t) = -e\mathcal{E}x\, e^{-t^2/\tau^2}$. What is the
probability of finding it in the first excited state $|1\rangle$ at *t* = + ∞?

Here $\langle 1|V(t)|0\rangle = -e\mathcal{E}\, e^{-t^2/\tau^2}\langle 1|x|0\rangle$, and $\langle 1|x|0\rangle = \sqrt{\hbar/2m\omega}$, from which the probability can be evaluated. It is

$$P_{0\to 1} = \frac{e^2\mathcal{E}^2}{\hbar^2}\,\frac{\hbar}{2m\omega}\left|\int_{-\infty}^{\infty} e^{i\omega t'}\, e^{-t'^2/\tau^2}\,dt'\right|^2 = \frac{\pi e^2\mathcal{E}^2\tau^2}{2m\hbar\omega}\, e^{-\omega^2\tau^2/2}.$$

It’s worth thinking through the physical interpretations for very long and for very short times, and explaining the significance of the time for which the probability is a maximum.
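The closed form $P_{0\to 1} = (\pi e^2\mathcal{E}^2\tau^2/2m\hbar\omega)\,e^{-\omega^2\tau^2/2}$ is easy to verify numerically. A sketch (not part of the notes; units $\hbar = m = \omega = 1$, with arbitrary field strength and pulse width):

```python
# Kicked oscillator: compare |⟨1|x|0⟩ eE ∫ e^{iωt} e^{-t²/τ²} dt|² / ħ²
# with the closed-form Gaussian-integral result.  ħ = m = ω = 1.
import math, cmath

eE, tau, w = 0.1, 2.0, 1.0        # arbitrary illustration values
x01 = math.sqrt(0.5)              # ⟨1|x|0⟩ = √(ħ/2mω)

# trapezoid rule for the Fourier transform of the Gaussian pulse
a, b, n = -40.0, 40.0, 200000
dt = (b - a) / n
integ = 0 + 0j
for k in range(n + 1):
    t = a + k * dt
    wgt = 0.5 if k in (0, n) else 1.0
    integ += wgt * cmath.exp(1j * w * t - t * t / tau**2)
integ *= dt

p_numeric = (eE * x01)**2 * abs(integ)**2
p_closed = math.pi * eE**2 * tau**2 / 2 * math.exp(-w**2 * tau**2 / 2)
print(p_numeric, p_closed)
```

The exponential factor $e^{-\omega^2\tau^2/2}$ makes the physics of the long-time limit explicit: a very slow kick ($\omega\tau \gg 1$) is adiabatic and causes almost no transition.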

For the particular case of a two-state system perturbed by a periodic external field, the matrix equation above can be solved exactly. Of course, real physical systems have more than two states, but in fact for some important cases two of the states may be only weakly coupled to other degrees of freedom and the analysis then becomes relevant. A famous example, the ammonia maser, is discussed at the end of the section.

For a two-state system, then, the most general wave function is

$$|\psi(t)\rangle = c_1(t)\, e^{-iE_1 t/\hbar}\,|1\rangle + c_2(t)\, e^{-iE_2 t/\hbar}\,|2\rangle,$$

and, taking the periodic perturbation to have no diagonal matrix elements and $\langle 1|V(t)|2\rangle = V e^{i\omega t}$, with $\omega_0 = (E_2 - E_1)/\hbar$, the differential equation for the $c_n(t)$'s is

$$i\hbar\begin{pmatrix}\dot{c}_1\\ \dot{c}_2\end{pmatrix} = \begin{pmatrix}0 & V e^{i\omega t}\, e^{-i\omega_0 t}\\ V e^{-i\omega t}\, e^{i\omega_0 t} & 0\end{pmatrix}\begin{pmatrix}c_1\\ c_2\end{pmatrix}.$$

Writing $\Delta = \omega - \omega_0$ for convenience, the coupled
equations are:

$$i\hbar\,\dot{c}_1 = V e^{i\Delta t}\, c_2, \qquad i\hbar\,\dot{c}_2 = V e^{-i\Delta t}\, c_1.$$

These two first-order equations can be transformed into a
single second-order equation by differentiating the second one, then
substituting $\dot{c}_1$ from the first one and
$c_1$ from the second one to
give

$$\ddot{c}_2 + i\Delta\,\dot{c}_2 + \frac{V^2}{\hbar^2}\, c_2 = 0.$$

This is a standard second-order differential equation,
solved by putting in a trial solution $c_2(t) \propto e^{i\Omega t}$. This satisfies the equation if $\Omega = -\tfrac{1}{2}\Delta \pm \sqrt{\tfrac{1}{4}\Delta^2 + V^2/\hbar^2}$, so, reverting to the original $\omega, \omega_0$, the general solution is:

$$c_2(t) = e^{-i(\omega-\omega_0)t/2}\left(A\, e^{i\sqrt{\frac{(\omega-\omega_0)^2}{4} + \frac{V^2}{\hbar^2}}\,t} + B\, e^{-i\sqrt{\frac{(\omega-\omega_0)^2}{4} + \frac{V^2}{\hbar^2}}\,t}\right).$$

Taking the initial state to be $c_1(0) = 1,\; c_2(0) = 0$ gives *A* =
-*B.*

To fix the overall constant, note that at *t* = 0,

$$\dot{c}_2(0) = \frac{V}{i\hbar} = 2iA\sqrt{\frac{(\omega-\omega_0)^2}{4} + \frac{V^2}{\hbar^2}}.$$

Therefore

$$\left|c_2(t)\right|^2 = \frac{V^2/\hbar^2}{\dfrac{(\omega-\omega_0)^2}{4} + \dfrac{V^2}{\hbar^2}}\;\sin^2\!\left(\sqrt{\frac{(\omega-\omega_0)^2}{4} + \frac{V^2}{\hbar^2}}\;t\right).$$

Note in particular the result if $\omega = \omega_0$:

$$\left|c_2(t)\right|^2 = \sin^2\left(\frac{Vt}{\hbar}\right).$$

Assuming $\omega = \omega_0$, and the two-state system to be initially in the ground
state $|1\rangle$, this means that after a time $\pi\hbar/2V$ the system will *certainly* be in state $|2\rangle$, and will oscillate back and forth between the two states
with period $\pi\hbar/V$.
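The exact two-state solution can be checked directly by integrating the coupled equations numerically. A sketch (not from the notes; $\hbar = 1$, with arbitrary coupling and detuning) comparing $|c_2(t)|^2$ against the Rabi formula $|c_2|^2 = \frac{V^2}{\Delta^2/4 + V^2}\sin^2\!\big(\sqrt{\Delta^2/4 + V^2}\,t\big)$:

```python
# Two-state Rabi problem, ħ = 1:  i ċ1 = V e^{iΔt} c2 ,  i ċ2 = V e^{-iΔt} c1.
# RK4-integrate and compare |c2(t)|² with the exact closed-form solution.
import math, cmath

V, delta = 0.3, 0.4   # coupling and detuning Δ = ω - ω0; arbitrary values

def rhs(t, c1, c2):
    return (-1j * V * cmath.exp(1j * delta * t) * c2,
            -1j * V * cmath.exp(-1j * delta * t) * c1)

def rabi(t):
    s = math.sqrt(delta**2 / 4 + V**2)
    return (V**2 / s**2) * math.sin(s * t)**2

c1, c2 = 1.0 + 0j, 0.0 + 0j
t, h, worst = 0.0, 0.0005, 0.0
while t < 20.0:
    k1 = rhs(t, c1, c2)
    k2 = rhs(t + h/2, c1 + h/2*k1[0], c2 + h/2*k1[1])
    k3 = rhs(t + h/2, c1 + h/2*k2[0], c2 + h/2*k2[1])
    k4 = rhs(t + h, c1 + h*k3[0], c2 + h*k3[1])
    c1 += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    c2 += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    t += h
    worst = max(worst, abs(abs(c2)**2 - rabi(t)))
print("max deviation from Rabi formula:", worst)
```

Setting `delta = 0` reproduces the resonant case: complete population transfer at $t = \pi/2V$ (in these units), oscillating with period $\pi/V$.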

That is to say, a precisely timed period spent in an
oscillating field can drive a collection of molecules all in the ground state
to be all in an excited state. The
ammonia *maser* works by sending a
stream of ammonia molecules, traveling at known velocity, down a tube having an
oscillating field for a definite length, so the molecules emerging at the other
end are all (or almost all, depending on the precision of ingoing velocity,
etc.) in the first excited state.
Application of a small amount of electromagnetic radiation of the same
frequency to the outgoing molecules will cause some to decay, generating
intense radiation and therefore a much shorter period for all to decay,
emitting coherent radiation.

A sudden perturbation is defined here as a sudden switch
from one time-independent Hamiltonian $H_0$ to another one $H_0'$, the time of switching being much shorter than any natural
period of the system. In this case,
perturbation theory is irrelevant: if the system is initially in an eigenstate $|n\rangle$ of $H_0$, one simply has to write it as a sum over the eigenstates of
$H_0'$: $|n\rangle = \sum_{n'} |n'\rangle\langle n'|n\rangle$. The nontrivial part
of the problem is in establishing that the change *is* sudden enough, by estimating the actual time taken for the
Hamiltonian to change, and the periods of motion associated with the state $|n\rangle$ and with its
transitions to neighboring states.

(We discussed one example last semester—an electron in the
ground state in a one-dimensional box that suddenly doubles in size. Other favorite examples include an atom with
spin-orbit coupling in a magnetic field that suddenly reverses (Messiah p 743),
and the reaction of orbiting electrons to nuclear $\alpha$- or $\beta$-decay.)
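For the box example, the sudden approximation is a few lines of code: the state does not change during the switch, so one simply re-expands the old ground state in the eigenstates of the doubled box, $c_{n'} = \langle n'|n\rangle$. A sketch (not from the notes; box width arbitrary):

```python
# Sudden doubling of a 1-D box from width L to 2L: expand the old ground
# state in the new eigenstates and check the probabilities sum to one.
import math

L = 1.0

def psi_old(x):                        # ground state of the original box
    return math.sqrt(2 / L) * math.sin(math.pi * x / L)

def psi_new(n, x):                     # eigenstates of the doubled box
    return math.sqrt(2 / (2 * L)) * math.sin(n * math.pi * x / (2 * L))

def overlap(n, steps=20000):           # trapezoid rule on [0, L];
    dx = L / steps                     # psi_old vanishes for x > L
    s = 0.0
    for k in range(steps + 1):
        x = k * dx
        wgt = 0.5 if k in (0, steps) else 1.0
        s += wgt * psi_old(x) * psi_new(n, x)
    return s * dx

probs = [overlap(n)**2 for n in range(1, 41)]
# the n = 2 state of the big box matches psi_old on [0, L] up to
# normalization, so its overlap is 1/√2 and its probability exactly 1/2
print("P(n=2) =", probs[1])
print("sum of first 40 =", sum(probs))
```

Note that all even new states other than $n' = 2$ have zero overlap (orthogonality on $[0, L]$), and the odd-state probabilities fall off fast enough that forty terms already exhaust the total probability.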

Let us consider a system in an initial state $|i\rangle$ perturbed by a
periodic potential $V(t) = V e^{-i\omega t}$ switched on at *t*
= 0. For example, this could be an atom
perturbed by an external oscillating electric field, such as an incident light
wave.

What is the probability that at a later time *t* the system is in state $|f\rangle$?

Recall the matrix differential equation for the *c _{n}*'s :

$$i\hbar\begin{pmatrix}\dot{c}_1\\ \dot{c}_2\\ \vdots\end{pmatrix} = \begin{pmatrix}V_{11} & V_{12}\,e^{i\omega_{12}t} & \dots\\ V_{21}\,e^{i\omega_{21}t} & V_{22} & \dots\\ \vdots & \vdots & \ddots\end{pmatrix}\begin{pmatrix}c_1\\ c_2\\ \vdots\end{pmatrix}.$$

Since the system is definitely in state $|i\rangle$ at *t* = 0, the ket vector on the right is
initially $c_n(0) = \delta_{ni}$.

The first-order approximation is to keep the initial vector $c_n = \delta_{ni}$ on the right, that is,
to solve the equations

$$i\hbar\,\dot{c}_f(t) = V_{fi}\, e^{-i\omega t}\, e^{i\omega_{fi}t}.$$

Integrating this equation, the probability amplitude for an
atom in initial state $|i\rangle$ to be in state $|f\rangle$ after time *t * is, to first order:

$$c_f(t) = -\frac{i}{\hbar}\,V_{fi}\int_0^t e^{i(\omega_{fi}-\omega)t'}\,dt' = -\frac{i}{\hbar}\,V_{fi}\,\frac{e^{i(\omega_{fi}-\omega)t} - 1}{i(\omega_{fi}-\omega)}.$$

The probability of transition is therefore

$$P_{i\to f}(t) = \left|c_f(t)\right|^2 = \frac{\left|V_{fi}\right|^2}{\hbar^2}\left(\frac{\sin\left((\omega_{fi}-\omega)t/2\right)}{(\omega_{fi}-\omega)/2}\right)^2,$$

and we’re interested in the large *t * limit.

Writing $\alpha = (\omega_{fi}-\omega)/2$, our function has the form $\dfrac{\sin^2(\alpha t)}{\alpha^2}$. This function has a
peak at $\alpha = 0$, with maximum value *t*^{2}, and width of order
1/*t*, so a total weight of order
*t*. The function has more peaks at $\alpha t = \left(n + \tfrac{1}{2}\right)\pi$. These are bounded by
the denominator at $1/\alpha^2$. For large *t* their contribution comes from a range of
order 1/*t* also, and as $t \to \infty$ the function tends to
a $\delta$ function at the
origin, but multiplied by *t*.

This divergence is telling us that there is a finite
probability *rate* for the transition, so the likelihood of transition is
proportional to time elapsed. Therefore, we should divide by *t* to get
the transition rate.

To get the quantitative result, we need to evaluate the
weight of the $\delta$ function term. We use
the standard result $\int_{-\infty}^{\infty}\left(\frac{\sin\xi}{\xi}\right)^2 d\xi = \pi$ to find $\int_{-\infty}^{\infty}\frac{\sin^2(\alpha t)}{\alpha^2}\,d\alpha = \pi t$, and therefore

$$\lim_{t\to\infty}\frac{\sin^2(\alpha t)}{\alpha^2} = \pi t\,\delta(\alpha).$$
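The weight $\pi t$ is easy to confirm numerically: a sketch (not in the notes; the cutoff and the $t$ values are arbitrary) integrating $\sin^2(\alpha t)/\alpha^2$ over $\alpha$ and comparing with $\pi t$:

```python
# Check numerically that ∫ sin²(αt)/α² dα = πt, the weight that makes
# the function act like πt·δ(α) at large t.
import math

def f(alpha, t):
    if alpha == 0.0:
        return t * t                   # limiting value at the central peak
    return math.sin(alpha * t)**2 / alpha**2

def weight(t, A=400.0, n=800000):      # trapezoid rule on [-A, A]
    d = 2 * A / n
    s = 0.5 * (f(-A, t) + f(A, t))
    for k in range(1, n):
        s += f(-A + k * d, t)
    return s * d

results = [(t, weight(t)) for t in (3.0, 7.0)]
for t, w in results:
    print(t, w, math.pi * t)
```

The residual discrepancy is just the tail beyond the cutoff, of order $1/A$, confirming the secondary peaks at $\alpha t = (n+\tfrac12)\pi$ contribute only at that level.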

Now, the transition rate is the probability of transition
divided by *t* in the large *t* limit, that is,

$$R_{i\to f} = \lim_{t\to\infty}\frac{1}{t}\,\frac{|V_{fi}|^2}{\hbar^2}\,\pi t\,\delta\!\left(\frac{\omega_{fi}-\omega}{2}\right) = \frac{2\pi}{\hbar^2}\,|V_{fi}|^2\,\delta(\omega_{fi}-\omega) = \frac{2\pi}{\hbar}\,|V_{fi}|^2\,\delta\!\left(E_f - E_i - \hbar\omega\right).$$

This last line is Fermi's Golden Rule: we shall be using it
a lot. You might worry that in the long time
limit we have taken, the probability of transition is in fact diverging, so how
can we use first order perturbation theory?
The point is that for a transition with $\omega_{fi} \neq \omega$, "long time" means $(\omega_{fi}-\omega)t \gg 1$, and this can still be a very short time compared with the mean
transition time, which depends on the matrix element. In fact, Fermi's Golden Rule agrees extremely well
with experiment when applied to atomic systems.

Actually, when light falls on an atom, the full periodic potential is not suddenly switched on, on an atomic time scale, but builds up over many cycles (of the atom and of the light). Baym re-derives the Golden Rule assuming the limit of a very slow switch on,

$$V(t) = e^{\varepsilon t}\, V\, e^{-i\omega t}$$

with $\varepsilon$ very small, so *V* is switched on very gradually in the
past, and we are looking at times much smaller than $1/\varepsilon$. We can then take the initial time to be $-\infty$, that is,

$$c_f(t) = -\frac{i}{\hbar}\,V_{fi}\int_{-\infty}^{t} e^{i(\omega_{fi}-\omega)t' + \varepsilon t'}\,dt' = -\frac{i}{\hbar}\,V_{fi}\,\frac{e^{i(\omega_{fi}-\omega)t + \varepsilon t}}{i(\omega_{fi}-\omega) + \varepsilon},$$

so

$$\left|c_f(t)\right|^2 = \frac{\left|V_{fi}\right|^2}{\hbar^2}\,\frac{e^{2\varepsilon t}}{(\omega_{fi}-\omega)^2 + \varepsilon^2},$$

and the time rate of change

$$\frac{d}{dt}\left|c_f(t)\right|^2 = \frac{\left|V_{fi}\right|^2}{\hbar^2}\,\frac{2\varepsilon\, e^{2\varepsilon t}}{(\omega_{fi}-\omega)^2 + \varepsilon^2}.$$

In the limit $\varepsilon \to 0$, the function

$$\frac{2\varepsilon}{(\omega_{fi}-\omega)^2 + \varepsilon^2} \to 2\pi\,\delta(\omega_{fi}-\omega) = 2\pi\hbar\,\delta(E_f - E_i - \hbar\omega),$$

giving the Golden Rule again.
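The Lorentzian representation of the delta function works because $2\varepsilon/(x^2+\varepsilon^2)$ has total weight $2\pi$ for *any* $\varepsilon$, while its width shrinks to zero. A quick numerical check (sketch, not in the notes; $\varepsilon$ and cutoff values arbitrary):

```python
# Check that ∫ 2ε/(x² + ε²) dx = 2π independent of ε, so that as ε → 0
# the Lorentzian becomes 2π δ(x).
import math

def lorentz_weight(eps, A=2000.0, n=800000):
    d = 2 * A / n                      # trapezoid rule on [-A, A]
    s = 0.0
    for k in range(n + 1):
        x = -A + k * d
        wgt = 0.5 if k in (0, n) else 1.0
        s += wgt * 2 * eps / (x * x + eps * eps)
    return s * d

weights = [(eps, lorentz_weight(eps)) for eps in (0.5, 0.05)]
for eps, w in weights:
    print(eps, w, 2 * math.pi)
```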

Sometimes the first order matrix element $V_{fi}$ is identically zero
(parity, Wigner-Eckart, etc.) but other matrix elements are nonzero, and the
transition can then be accomplished by an indirect route. In the notes on the interaction
representation, we derived the probability amplitude for the second-order
process,

$$c_f^{(2)}(t) = \left(\frac{-i}{\hbar}\right)^2 \sum_n \int_0^t dt'\, e^{i\omega_{fn}t'}\,V_{fn}(t')\int_0^{t'} dt''\, e^{i\omega_{ni}t''}\,V_{ni}(t'')$$

(writing $V_{fn} = \langle f|V|n\rangle$, etc.)

Taking the gradually switched-on harmonic perturbation $V(t) = e^{\varepsilon t}\,V\,e^{-i\omega t}$, and the initial time $-\infty$, as above,

$$c_f^{(2)}(t) = \left(\frac{-i}{\hbar}\right)^2 \sum_n V_{fn}V_{ni}\int_{-\infty}^{t} dt'\, e^{i(\omega_{fn}-\omega)t' + \varepsilon t'}\int_{-\infty}^{t'} dt''\, e^{i(\omega_{ni}-\omega)t'' + \varepsilon t''}.$$

The integrals are straightforward, and yield

$$c_f^{(2)}(t) = \left(\frac{-i}{\hbar}\right)^2 \sum_n \frac{V_{fn}V_{ni}}{i(\omega_{ni}-\omega) + \varepsilon}\;\frac{e^{i(\omega_{fi}-2\omega)t + 2\varepsilon t}}{i(\omega_{fi}-2\omega) + 2\varepsilon}.$$
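The nested time integrals can be checked numerically. This sketch (not in the notes) verifies that, with $a = \omega_{ni}-\omega$ and $b = \omega_{fn}-\omega$, the double integral equals $e^{(i(a+b)+2\varepsilon)t}/\big[(i(a+b)+2\varepsilon)(ia+\varepsilon)\big]$; all the numbers below are arbitrary:

```python
# Numerical check of
#   ∫_{-∞}^t dt' e^{(ib+ε)t'} ∫_{-∞}^{t'} dt'' e^{(ia+ε)t''}
#     = e^{(i(a+b)+2ε)t} / [(i(a+b)+2ε)(ia+ε)]
import cmath

a, b, eps, t = 0.7, -0.4, 0.3, 1.2     # arbitrary illustration values
ca, cb = 1j * a + eps, 1j * b + eps

lower, n = -60.0, 400000               # e^{ε·lower} is negligibly small
d = (t - lower) / n
inner = 0 + 0j                         # running trapezoid of ∫ e^{ca t''} dt''
outer = 0 + 0j
prev_f = cmath.exp(ca * lower)
prev_g = cmath.exp(cb * lower) * inner
for k in range(1, n + 1):
    tp = lower + k * d
    f = cmath.exp(ca * tp)
    inner += 0.5 * (prev_f + f) * d
    g = cmath.exp(cb * tp) * inner     # outer integrand at t' = tp
    outer += 0.5 * (prev_g + g) * d
    prev_f, prev_g = f, g

closed = cmath.exp((ca + cb) * t) / ((ca + cb) * ca)
print(outer, closed)
```

Note the combination $ca + cb = i(\omega_{fi}-2\omega) + 2\varepsilon$ appearing automatically, since $\omega_{fn} + \omega_{ni} = \omega_{fi}$.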

Exactly as in the section above on the first-order Golden Rule, we can find the transition rate:

$$\frac{d}{dt}\left|c_f^{(2)}(t)\right|^2 = \frac{2\pi}{\hbar^4}\left|\sum_n \frac{V_{fn}V_{ni}}{\omega_{ni}-\omega}\right|^2 \delta(\omega_{fi}-2\omega) = \frac{2\pi}{\hbar}\left|\sum_n \frac{V_{fn}V_{ni}}{E_n - E_i - \hbar\omega}\right|^2 \delta\!\left(E_f - E_i - 2\hbar\omega\right).$$

(The $\hbar^4$ in the denominator
goes to $\hbar$ on replacing the
frequencies $\omega$ with energies *E*, both in the denominator and the delta
function; remember that if $E = \hbar\omega$, then $\delta(\omega) = \hbar\,\delta(E)$.)

This is a transition in which the system gains energy $2\hbar\omega$ from the beam, in
other words *two* photons are absorbed,
the first taking the system to the intermediate energy $E_i + \hbar\omega$, which is short-lived and therefore not well defined in
energy—there is no energy conservation requirement into this state, only
between initial and final states.

Of course, if an atom in an arbitrary state is exposed to monochromatic light, other second-order processes, in which two photons are emitted, or one is absorbed and one emitted (in either order), are also possible.