Michael Fowler 7/6/07
We look at a Hamiltonian $H = H^0 + V(t)$, where $V(t)$ is some time-dependent perturbation, so now the wave function will have perturbation-induced time dependence.
Our starting point is the set of eigenstates $|n\rangle$ of the unperturbed Hamiltonian $H^0$, $H^0|n\rangle = E_n|n\rangle$. Notice we are not labeling the energies with a zero, no $E_n^{(0)}$, because with a time-dependent Hamiltonian, energy will not be conserved, so it is pointless to look for energy corrections. What happens instead, provided the perturbation is not too large, is that the system makes transitions between the eigenstates of $H^0$.
Of course, even for $V = 0$, the wave functions have the usual time dependence,
$$|\psi(t)\rangle = \sum_n c_n\, e^{-iE_n t/\hbar}\,|n\rangle$$
with the $c_n$'s constant. What happens on introducing $V(t)$ is that the $c_n$'s themselves acquire time dependence,
$$|\psi(t)\rangle = \sum_n c_n(t)\, e^{-iE_n t/\hbar}\,|n\rangle,$$
and this time dependence is determined by Schrödinger's equation with $H = H^0 + V(t)$:
$$i\hbar\frac{\partial}{\partial t}\sum_n c_n(t)\,e^{-iE_n t/\hbar}\,|n\rangle = \left(H^0 + V(t)\right)\sum_n c_n(t)\,e^{-iE_n t/\hbar}\,|n\rangle.$$
The $H^0$ term on the right cancels against the time derivative of the exponentials on the left, leaving
$$i\hbar\sum_n \dot{c}_n(t)\,e^{-iE_n t/\hbar}\,|n\rangle = V(t)\sum_n c_n(t)\,e^{-iE_n t/\hbar}\,|n\rangle.$$
Taking the inner product with the bra $\langle m|e^{iE_m t/\hbar}$, and introducing $\omega_{mn} = (E_m - E_n)/\hbar$,
$$i\hbar\,\dot{c}_m(t) = \sum_n V_{mn}(t)\,e^{i\omega_{mn}t}\,c_n(t), \qquad V_{mn}(t) = \langle m|V(t)|n\rangle.$$
This is a matrix differential equation for the $c_n$'s:
$$i\hbar\begin{pmatrix}\dot{c}_1\\ \dot{c}_2\\ \vdots\end{pmatrix} = \begin{pmatrix}V_{11} & V_{12}e^{i\omega_{12}t} & \cdots\\ V_{21}e^{i\omega_{21}t} & V_{22} & \cdots\\ \vdots & \vdots & \ddots\end{pmatrix}\begin{pmatrix}c_1\\ c_2\\ \vdots\end{pmatrix},$$
and solving this set of coupled equations will give us the $c_n(t)$'s, and hence the probability of finding the system in any particular state at any later time.
If the system is in initial state $|i\rangle$ at $t = 0$, the probability amplitude for it being in state $|f\rangle$ at time $t$ is, to leading order in the perturbation,
$$c_f(t) = -\frac{i}{\hbar}\int_0^t \langle f|V(t')|i\rangle\, e^{i\omega_{fi}t'}\,dt'.$$
The probability that the system is in fact in state $|f\rangle$ at time $t$ is therefore
$$P_{i\to f}(t) = |c_f(t)|^2 = \frac{1}{\hbar^2}\left|\int_0^t \langle f|V(t')|i\rangle\, e^{i\omega_{fi}t'}\,dt'\right|^2.$$
Obviously, this is only going to be a good approximation if it predicts that the probability of transition is small—otherwise we need to go to higher order, using the Interaction Representation (or an exact solution like that in the next section).
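As a sanity check on the first-order formula, here is a small self-contained numerical sketch (not part of the original notes): it integrates the exact coupled equations for a two-state system with a weak constant perturbation switched on at $t = 0$, in units $\hbar = 1$ and with illustrative values of the level splitting and coupling, and compares the exact transition probability with the first-order expression.

```python
import cmath

# Two-level check of first-order time-dependent perturbation theory.
# Units: hbar = 1.  Level splitting omega0 = (E2 - E1)/hbar, constant real
# coupling V switched on at t = 0 (illustrative values, not from the notes).
omega0 = 1.0
V = 0.02          # weak, so first order should be accurate

def rhs(t, c1, c2):
    # i dc1/dt = V e^{-i omega0 t} c2,   i dc2/dt = V e^{+i omega0 t} c1
    return (-1j * V * cmath.exp(-1j * omega0 * t) * c2,
            -1j * V * cmath.exp(+1j * omega0 * t) * c1)

def integrate(t_end, n_steps=20000):
    """RK4 integration of the exact coupled equations from c1 = 1, c2 = 0."""
    dt = t_end / n_steps
    t, c1, c2 = 0.0, 1.0 + 0j, 0.0 + 0j
    for _ in range(n_steps):
        k1 = rhs(t, c1, c2)
        k2 = rhs(t + dt/2, c1 + dt/2*k1[0], c2 + dt/2*k1[1])
        k3 = rhs(t + dt/2, c1 + dt/2*k2[0], c2 + dt/2*k2[1])
        k4 = rhs(t + dt, c1 + dt*k3[0], c2 + dt*k3[1])
        c1 += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        c2 += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
    return c1, c2

def first_order(t):
    # c2(t) = -i V \int_0^t e^{i omega0 t'} dt' = -(V/omega0)(e^{i omega0 t} - 1)
    return -(V / omega0) * (cmath.exp(1j * omega0 * t) - 1)

t = 5.0
_, c2_exact = integrate(t)
c2_pert = first_order(t)
print(abs(c2_exact)**2, abs(c2_pert)**2)
```

Shrinking $V$ further makes the agreement correspondingly better, which is the sense in which the first-order result is "leading order."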
Example: kicking an oscillator.
Suppose a simple harmonic oscillator is in its ground state $|0\rangle$ at $t = -\infty$. It is perturbed by a small time-dependent potential
$$V(t) = -eEx\,e^{-t^2/\tau^2}.$$
What is the probability of finding it in the first excited state $|1\rangle$ at $t = +\infty$?
Here $V_{10}(t) = -eE\,\langle 1|x|0\rangle\, e^{-t^2/\tau^2}$, and $\langle 1|x|0\rangle = \sqrt{\hbar/2m\omega}$, from which the probability can be evaluated. It is
$$P_{0\to 1} = \frac{e^2E^2}{\hbar^2}\cdot\frac{\hbar}{2m\omega}\left|\int_{-\infty}^{\infty} e^{-t^2/\tau^2}\,e^{i\omega t}\,dt\right|^2 = \frac{\pi e^2E^2\tau^2}{2m\hbar\omega}\,e^{-\omega^2\tau^2/2}.$$
It's worth thinking through the physical interpretations for very long and for very short times $\tau$, and explaining the significance of the value of $\tau$ for which the probability is a maximum.
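A quick numerical sketch (not in the original notes; units $e = E = m = \hbar = 1$, with illustrative $\omega$ and $\tau$) that checks the Gaussian Fourier integral behind the kicked-oscillator probability, and confirms that the transition probability is maximized at $\tau = \sqrt{2}/\omega$:

```python
import cmath, math

# Numerical check of the kicked-oscillator result (units e = E = m = hbar = 1,
# illustrative, not from the notes).  The transition amplitude involves
#   I(tau) = \int e^{-t^2/tau^2} e^{i w t} dt = sqrt(pi) tau e^{-w^2 tau^2 / 4},
# so P_{0->1} = (pi tau^2 / (2 w)) e^{-w^2 tau^2 / 2} in these units.

def gaussian_fourier(w, tau, half_range=8.0, n=4000):
    """Simpson's rule for \\int e^{-t^2/tau^2 + i w t} dt over [-R, R], R = half_range*tau."""
    a, b = -half_range * tau, half_range * tau
    h = (b - a) / n                      # n must be even
    s = 0j
    for k in range(n + 1):
        t = a + k * h
        weight = 1 if k in (0, n) else (4 if k % 2 else 2)
        s += weight * cmath.exp(-t * t / tau**2 + 1j * w * t)
    return s * h / 3

w = 1.0
tau = 1.3
I_num = gaussian_fourier(w, tau)
I_exact = math.sqrt(math.pi) * tau * math.exp(-w**2 * tau**2 / 4)
print(abs(I_num - I_exact))              # should be tiny

def prob(tau):
    return (math.pi * tau**2 / (2 * w)) * math.exp(-w**2 * tau**2 / 2)

# The probability is maximal where d/dtau [tau^2 e^{-w^2 tau^2 / 2}] = 0,
# i.e. at tau = sqrt(2)/w; confirm by a grid search.
taus = [0.01 * k for k in range(1, 400)]
tau_best = max(taus, key=prob)
print(tau_best, math.sqrt(2) / w)
```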
For the particular case of a two-state system perturbed by a periodic external field, the matrix equation above can be solved exactly. Of course, real physical systems have more than two states, but in fact for some important cases two of the states may be only weakly coupled to other degrees of freedom and the analysis then becomes relevant. A famous example, the ammonia maser, is discussed at the end of the section.
For a two-state system, then, the most general wave function is
$$|\psi(t)\rangle = c_1(t)\,e^{-iE_1t/\hbar}\,|1\rangle + c_2(t)\,e^{-iE_2t/\hbar}\,|2\rangle,$$
and, for a periodic perturbation with no diagonal matrix elements, $\langle 1|V(t)|2\rangle = Ve^{i\omega t}$, $\langle 2|V(t)|1\rangle = Ve^{-i\omega t}$, the differential equation for the $c_n(t)$'s is:
$$i\hbar\begin{pmatrix}\dot{c}_1\\ \dot{c}_2\end{pmatrix} = \begin{pmatrix}0 & Ve^{i\omega t}e^{-i\omega_0 t}\\ Ve^{-i\omega t}e^{i\omega_0 t} & 0\end{pmatrix}\begin{pmatrix}c_1\\ c_2\end{pmatrix}, \qquad \omega_0 = \frac{E_2 - E_1}{\hbar}.$$
Writing $\Delta = \omega - \omega_0$ for convenience, the coupled equations are:
$$i\hbar\dot{c}_1 = Ve^{i\Delta t}c_2, \qquad i\hbar\dot{c}_2 = Ve^{-i\Delta t}c_1.$$
These two first-order equations can be transformed into a single second-order equation by differentiating the second one, then substituting $\dot{c}_1$ from the first one and $c_1$ from the second one, to give
$$\ddot{c}_2 + i\Delta\dot{c}_2 + \frac{V^2}{\hbar^2}\,c_2 = 0.$$
This is a standard second-order differential equation, solved by putting in a trial solution $c_2(t) = e^{i\alpha t}$. This satisfies the equation if $\alpha^2 + \Delta\alpha - V^2/\hbar^2 = 0$, that is, $\alpha = -\Delta/2 \pm \Omega$, so, reverting to the original $\omega, \omega_0$, the general solution is:
$$c_2(t) = e^{-i(\omega-\omega_0)t/2}\left(Ae^{i\Omega t} + Be^{-i\Omega t}\right), \qquad \Omega = \sqrt{\frac{(\omega-\omega_0)^2}{4} + \frac{V^2}{\hbar^2}}.$$
Taking the initial state to be $c_1(0) = 1$, $c_2(0) = 0$ gives $A = -B$, so
$$c_2(t) = C\,e^{-i(\omega-\omega_0)t/2}\sin\Omega t.$$
To fix the overall constant, note that at $t = 0$, $i\hbar\dot{c}_2(0) = Vc_1(0) = V$, so $C = V/i\hbar\Omega$ and
$$|c_2(t)|^2 = \frac{V^2/\hbar^2}{(\omega-\omega_0)^2/4 + V^2/\hbar^2}\,\sin^2\Omega t.$$
Note in particular the result if $\omega = \omega_0$:
$$|c_2(t)|^2 = \sin^2(Vt/\hbar).$$
Assuming $\omega = \omega_0$, and the two-state system to be initially in the ground state $|1\rangle$, this means that after a time $\pi\hbar/2V$ the system will certainly be in state $|2\rangle$, and will oscillate back and forth between the two states with period $\pi\hbar/V$.
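The exact oscillation can be checked numerically. The sketch below (not part of the original notes; units $\hbar = 1$, with illustrative values of the coupling $V$ and the detuning $D = \omega - \omega_0$) integrates the two coupled equations directly and compares $|c_2(t)|^2$ with the closed-form Rabi-type expression $(V^2/\Omega^2)\sin^2\Omega t$:

```python
import cmath, math

# Check of the exact two-state (Rabi) solution, in units hbar = 1 with
# illustrative values.  With detuning D = w - w0 and coupling V, the claim is
#   |c2(t)|^2 = (V^2 / Omega^2) sin^2(Omega t),  Omega = sqrt(D^2/4 + V^2).
V, D = 0.3, 0.4
Omega = math.sqrt(D**2 / 4 + V**2)

def rhs(t, c1, c2):
    # i dc1/dt = V e^{+iDt} c2,   i dc2/dt = V e^{-iDt} c1
    return (-1j * V * cmath.exp(1j * D * t) * c2,
            -1j * V * cmath.exp(-1j * D * t) * c1)

def c2_numeric(t_end, n_steps=40000):
    """RK4 for the coupled equations, starting from c1 = 1, c2 = 0."""
    dt = t_end / n_steps
    t, c1, c2 = 0.0, 1.0 + 0j, 0.0 + 0j
    for _ in range(n_steps):
        k1 = rhs(t, c1, c2)
        k2 = rhs(t + dt/2, c1 + dt/2*k1[0], c2 + dt/2*k1[1])
        k3 = rhs(t + dt/2, c1 + dt/2*k2[0], c2 + dt/2*k2[1])
        k4 = rhs(t + dt, c1 + dt*k3[0], c2 + dt*k3[1])
        c1 += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        c2 += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
    return c2

t = 7.0
p_numeric = abs(c2_numeric(t))**2
p_rabi = (V**2 / Omega**2) * math.sin(Omega * t)**2
print(p_numeric, p_rabi)
```

Setting D = 0 reproduces the resonance result: the probability reaches 1 at t = π/(2V).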
That is to say, a precisely timed period spent in an oscillating field can drive a collection of molecules all in the ground state to be all in an excited state. The ammonia maser works by sending a stream of ammonia molecules, traveling at known velocity, down a tube having an oscillating field for a definite length, so the molecules emerging at the other end are all (or almost all, depending on the precision of ingoing velocity, etc.) in the first excited state. Application of a small amount of electromagnetic radiation of the same frequency to the outgoing molecules will cause some to decay; the radiation they emit stimulates further decays, so almost all the molecules decay quickly, emitting intense coherent radiation.
A sudden perturbation is defined here as a sudden switch from one time-independent Hamiltonian $H_0$ to another one $H_0'$, the time of switching being much shorter than any natural period of the system. In this case, perturbation theory is irrelevant: if the system is initially in an eigenstate $|n\rangle$ of $H_0$, one simply has to write it as a sum over the eigenstates of $H_0'$,
$$|n\rangle = \sum_{n'} |n'\rangle\langle n'|n\rangle.$$
The nontrivial part of the problem is in establishing that the change is sudden enough, by estimating the actual time taken for the Hamiltonian to change, and the periods of motion associated with the state $|n\rangle$ and with its transitions to neighboring states.
(We discussed one example last semester: an electron in the ground state in a one-dimensional box that suddenly doubles in size. Other favorite examples include an atom with spin-orbit coupling in a magnetic field that suddenly reverses (Messiah p 743), and the reaction of orbiting electrons to nuclear $\alpha$- or $\beta$-decay.)
Let us consider a system in an initial state $|i\rangle$ perturbed by a periodic potential $V(t) = Ve^{-i\omega t}$ switched on at $t = 0$. For example, this could be an atom perturbed by an external oscillating electric field, such as an incident light wave.
What is the probability that at a later time $t$ the system is in state $|f\rangle$?
Recall the matrix differential equation for the $c_n$'s:
$$i\hbar\begin{pmatrix}\dot{c}_1\\ \dot{c}_2\\ \vdots\end{pmatrix} = \begin{pmatrix}V_{11} & V_{12}e^{i\omega_{12}t} & \cdots\\ V_{21}e^{i\omega_{21}t} & V_{22} & \cdots\\ \vdots & \vdots & \ddots\end{pmatrix}\begin{pmatrix}c_1\\ c_2\\ \vdots\end{pmatrix}.$$
Since the system is definitely in state $|i\rangle$ at $t = 0$, the ket vector on the right is initially $c_i = 1$, with all the other $c_n$'s zero.
The first-order approximation is to keep this initial value of the vector on the right, that is, to solve the equations
$$i\hbar\dot{c}_f(t) = V_{fi}\,e^{i\omega_{fi}t}\,e^{-i\omega t}.$$
Integrating this equation, the probability amplitude for an atom in initial state $|i\rangle$ to be in state $|f\rangle$ after time $t$ is, to first order:
$$c_f(t) = -\frac{i}{\hbar}V_{fi}\int_0^t e^{i(\omega_{fi}-\omega)t'}\,dt' = -\frac{i}{\hbar}V_{fi}\,\frac{e^{i(\omega_{fi}-\omega)t}-1}{i(\omega_{fi}-\omega)}.$$
The probability of transition is therefore
$$P_{i\to f}(t) = |c_f(t)|^2 = \frac{|V_{fi}|^2}{\hbar^2}\cdot\frac{4\sin^2\left((\omega_{fi}-\omega)t/2\right)}{(\omega_{fi}-\omega)^2},$$
and we’re interested in the large t limit.
Writing $\Delta = \omega_{fi}-\omega$, our function has the form $\dfrac{4\sin^2(\Delta t/2)}{\Delta^2}$. This function has a peak at $\Delta = 0$, with maximum value $t^2$, and width of order $1/t$, so a total weight of order $t$. The function has more peaks at $\Delta t/2 = \left(n+\tfrac12\right)\pi$. These are bounded by the denominator at $4/\Delta^2$. For large $t$ their contribution comes from a range of order $1/t$ also, and as $t\to\infty$ the function tends to a $\delta$ function at the origin, but multiplied by $t$.
This divergence is telling us that there is a finite probability rate for the transition, so the likelihood of transition is proportional to the time elapsed. Therefore, we should divide by $t$ to get the transition rate.
To get the quantitative result, we need to evaluate the weight of the $\delta$ function term. We use the standard result
$$\int_{-\infty}^{\infty}\frac{\sin^2 x}{x^2}\,dx = \pi$$
to find
$$\int_{-\infty}^{\infty}\frac{4\sin^2(\Delta t/2)}{\Delta^2}\,d\Delta = 2\pi t,$$
and therefore
$$\lim_{t\to\infty}\frac{4\sin^2(\Delta t/2)}{\Delta^2} = 2\pi t\,\delta(\Delta).$$
Now, the transition rate is the probability of transition divided by $t$ in the large $t$ limit, that is,
$$R_{i\to f} = \lim_{t\to\infty}\frac{|c_f(t)|^2}{t} = \frac{2\pi}{\hbar^2}\,|V_{fi}|^2\,\delta(\omega_{fi}-\omega) = \frac{2\pi}{\hbar}\,|V_{fi}|^2\,\delta(E_f - E_i - \hbar\omega).$$
This last line is Fermi's Golden Rule: we shall be using it a lot. You might worry that in the long time limit we have taken, the probability of transition is in fact diverging, so how can we use first-order perturbation theory? The point is that for a transition with $\omega_{fi}\neq\omega$, "long time" means $(\omega_{fi}-\omega)t\gg 1$, and this can still be a very short time compared with the mean transition time, which depends on the matrix element. In fact, Fermi's Rule agrees extremely well with experiment when applied to atomic systems.
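The $\delta$-function weight that leads to the Golden Rule can be verified numerically. The sketch below (not in the original notes) evaluates $\int \sin^2 x/x^2\,dx$ by Simpson's rule; truncating the infinite range at $\pm L$ costs an error of order $1/L$:

```python
import math

# Numerical check of the weight under the sinc^2 peak used in the Golden Rule:
#   \int_{-inf}^{inf} sin^2(x)/x^2 dx = pi,
# so \int 4 sin^2(Dt/2)/D^2 dD = 2 pi t.  Pure-Python composite Simpson's rule;
# the truncated tails contribute an error of order 1/L.

def sinc2_integral(L=2000.0, n=800000):
    h = 2 * L / n
    total = 0.0
    for k in range(n + 1):
        x = -L + k * h
        f = 1.0 if x == 0 else math.sin(x)**2 / x**2
        weight = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += weight * f
    return total * h / 3

val = sinc2_integral()
print(val, math.pi)     # agree to ~1/L
```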
Actually, when light falls on an atom, the full periodic potential is not suddenly switched on, on an atomic time scale, but builds up over many cycles (of the atom and of the light). Baym re-derives the Golden Rule assuming the limit of a very slow switch on,
$$V(t) = Ve^{\varepsilon t}e^{-i\omega t}$$
with $\varepsilon$ very small, so $V$ is switched on very gradually in the past, and we are looking at times much smaller than $1/\varepsilon$. We can then take the initial time to be $-\infty$, that is,
$$c_f(t) = -\frac{i}{\hbar}V_{fi}\int_{-\infty}^{t}e^{i(\omega_{fi}-\omega)t' + \varepsilon t'}\,dt' = -\frac{i}{\hbar}V_{fi}\,\frac{e^{i(\omega_{fi}-\omega)t+\varepsilon t}}{i(\omega_{fi}-\omega)+\varepsilon},$$
giving
$$|c_f(t)|^2 = \frac{|V_{fi}|^2}{\hbar^2}\,\frac{e^{2\varepsilon t}}{(\omega_{fi}-\omega)^2+\varepsilon^2},$$
and the time rate of change
$$\frac{d}{dt}|c_f(t)|^2 = \frac{|V_{fi}|^2}{\hbar^2}\,\frac{2\varepsilon\,e^{2\varepsilon t}}{(\omega_{fi}-\omega)^2+\varepsilon^2}.$$
In the limit $\varepsilon\to 0$, the function
$$\frac{2\varepsilon}{(\omega_{fi}-\omega)^2+\varepsilon^2} \to 2\pi\,\delta(\omega_{fi}-\omega),$$
giving the Golden Rule again.
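The Lorentzian representation of the $\delta$ function used here can also be checked numerically: smearing $2\varepsilon/(x^2+\varepsilon^2)$ against a smooth test function should give $2\pi f(0)$ as $\varepsilon\to 0$. A small sketch (not in the original notes):

```python
import math

# Check that 2 eps / (x^2 + eps^2) acts as 2 pi delta(x) as eps -> 0:
# smear it against a smooth test function and compare with 2 pi f(0).
def smeared(f, eps, L=50.0, n=200000):
    """Simpson's rule for \\int f(x) * 2 eps / (x^2 + eps^2) dx over [-L, L]."""
    h = 2 * L / n
    total = 0.0
    for k in range(n + 1):
        x = -L + k * h
        weight = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += weight * f(x) * 2 * eps / (x * x + eps * eps)
    return total * h / 3

f = lambda x: math.exp(-x * x)           # test function, f(0) = 1
for eps in (0.1, 0.01):
    print(eps, smeared(f, eps), 2 * math.pi * f(0))
```

The deviation from 2π shrinks linearly with ε, consistent with the limit above.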
Sometimes the first-order matrix element $\langle f|V|i\rangle$ is identically zero (parity, Wigner-Eckart, etc.) but other matrix elements are nonzero, and the transition can be accomplished by an indirect route. In the notes on the interaction representation, we derived the probability amplitude for the second-order process,
$$c^{(2)}(t) = \left(\frac{-i}{\hbar}\right)^2\sum_n\int_0^t dt''\int_0^{t''}dt'\, e^{-i\omega_f(t-t'')}\,V_{fn}(t'')\,e^{-i\omega_n(t''-t')}\,V_{ni}(t')\,e^{-i\omega_i t'}$$
(writing $\omega_i = E_i/\hbar$, $\omega_n = E_n/\hbar$, $\omega_f = E_f/\hbar$).
Taking the gradually switched-on harmonic perturbation $V(t) = Ve^{\varepsilon t}e^{-i\omega t}$, and the initial time $-\infty$, as above,
$$c^{(2)}(t) = \left(\frac{-i}{\hbar}\right)^2 e^{-i\omega_f t}\sum_n V_{fn}V_{ni}\int_{-\infty}^t dt''\, e^{i(\omega_f-\omega_n-\omega)t''+\varepsilon t''}\int_{-\infty}^{t''}dt'\, e^{i(\omega_n-\omega_i-\omega)t'+\varepsilon t'}.$$
The integrals are straightforward, and yield
$$c^{(2)}(t) = \left(\frac{-i}{\hbar}\right)^2 e^{-i\omega_f t}\sum_n V_{fn}V_{ni}\,\frac{e^{i(\omega_f-\omega_i-2\omega)t+2\varepsilon t}}{\left[i(\omega_f-\omega_i-2\omega)+2\varepsilon\right]\left[i(\omega_n-\omega_i-\omega)+\varepsilon\right]}.$$
Exactly as in the section above on the first-order Golden Rule, we can find the transition rate:
$$\frac{d}{dt}\left|c^{(2)}(t)\right|^2 = \frac{2\pi}{\hbar}\left|\sum_n\frac{V_{fn}V_{ni}}{E_n - E_i - \hbar\omega}\right|^2\delta(E_f - E_i - 2\hbar\omega).$$
(The $1/\hbar^4$ from squaring the prefactor $(-i/\hbar)^2$ becomes the $1/\hbar$ above on replacing the frequencies $\omega$ with energies $E$: the energy denominator, which appears squared, supplies $\hbar^2$, and the delta function supplies one more $\hbar$; remember that if $a\neq 0$, $\delta(x/a) = |a|\,\delta(x)$.)
This is a transition in which the system gains energy $2\hbar\omega$ from the beam; in other words, two photons are absorbed, the first taking the system to the intermediate state $|n\rangle$, which is short-lived and therefore not well defined in energy: there is no energy conservation requirement into this state, only between initial and final states.
Of course, if an atom in an arbitrary state is exposed to monochromatic light, other second-order processes, in which two photons are emitted, or one is absorbed and one emitted (in either order), are also possible.