# Rapid Review of Early Quantum Mechanics

*Michael Fowler, UVa*

**Note:** *This is stuff you—hopefully—already know well from an
undergraduate Modern Physics course.
We’re going through it quickly just to remind you.*

The next three lectures online are a more detailed account of this introductory material: lecture 2 on the birth of Quantum Mechanics includes historical stuff you don't really need, but I think it's worth reminding yourself how the physics evolved. Lectures 3, 4 and 5 cover undergraduate material you absolutely need to have down.

*Concerning the course as a whole, our text is Shankar, but a substantial fraction of the material covered in the
first semester can be found in a good undergraduate quantum text, such as Griffiths*.

### What was Wrong with Classical Mechanics?

Basically, classical statistical mechanics wasn’t making sense...

Maxwell and Boltzmann developed the equipartition theorem: a physical system can have many states (a gas with particles having different velocities, or springs in different states of compression, for example).

At nonzero temperature, energy will flow around in the system: it constantly moves from one state to another. So, what is the probability that at any instant the system is in a particular state with energy $E$?

Maxwell and Boltzmann proved it to be proportional to ${e}^{-E/kT}$. The same Boltzmann factor applies to any subsystem of the system: for example, a single molecule.

Notice this means that if a system is a set of oscillators, different masses on springs of different strengths, for example, then in thermal equilibrium *each oscillator has on average the same energy as all the others*. For three-dimensional oscillators in thermal equilibrium, the average energy of each oscillator is $3kT,$ where $k$ is Boltzmann’s constant.

### Black Body Radiation

Now put this together with Maxwell’s discovery that light is an electromagnetic wave: inside a hot oven, Maxwell’s equations can be solved, yielding standing-wave solutions, and the set of allowed standing waves of different wavelengths amounts to an infinite series of oscillators, with no upper limit on the frequencies going far into the ultraviolet. Therefore, from the classical equipartition theorem, an oven in thermal equilibrium at a definite temperature should contain an infinite amount of energy (of order $kT$ in each of an infinite number of modes), and if you let radiation out through a tiny hole in the side, you should see radiation of all frequencies.

This is not, of course, what is observed: as an oven is
warmed, it emits infrared, then red, then yellow light, etc. This means that the higher frequency
oscillators (blue, etc.) are in fact *not*
excited at low temperatures: equipartition isn’t true.

Planck showed that the experimentally observed intensity/frequency curve was exactly reproduced if it was assumed that the radiation was *quantized*: light of frequency $f$ could only be emitted in quanta (now called photons) having energy $hf$, $h$ being Planck’s constant. This was the beginning of quantum mechanics.
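To see quantitatively how Planck’s hypothesis rescues equipartition, here is a minimal numerical sketch comparing Planck’s average energy per mode, $hf/({e}^{hf/kT}-1)$, with the classical equipartition value $kT$ per mode. The temperature and the two frequencies are illustrative choices, nothing more:

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K
h = 6.62607015e-34  # Planck's constant, J*s

def planck_mode_energy(f, T):
    """Planck's average energy per oscillator mode of frequency f at temperature T."""
    x = h * f / (k_B * T)
    return h * f / math.expm1(x)

T = 300.0  # a room-temperature "oven", an illustrative choice

# Low frequency: Planck's result approaches the classical equipartition value kT.
low = planck_mode_energy(1e11, T)
# Ultraviolet frequency: the mode is essentially frozen out.
high = planck_mode_energy(1e15, T)

print(low / (k_B * T))   # close to 1
print(high / (k_B * T))  # essentially 0
```

The high-frequency modes carry exponentially little energy, which is exactly why the oven does not radiate an infinite amount in the ultraviolet.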

### The Photoelectric Effect

Einstein showed the same quantization of electromagnetic radiation explained the photoelectric effect: a photon of energy $hf$ knocks an electron out of a metal. It takes a certain work $W$ to get it out; the rest of the photon energy goes to the kinetic energy of the electron, for the fastest electrons emitted (those that come right from the surface, so encountering no further resistance). Plotting the maximum electron kinetic energy as a function of incident light frequency confirms the hypothesis, giving the same value for $h$ as that needed to explain radiation from an oven. (It had previously been assumed that more intense light would increase the average kinetic energy of each emitted electron; this turned out not to be the case.)
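The arithmetic of Einstein’s relation $K{E}_{\mathrm{max}}=hf-W$ is quick to sketch. The 2.0 eV work function below is an assumed illustrative value, not data for any particular metal:

```python
# Einstein's photoelectric relation: KE_max = h*f - W.
hc_eV_nm = 1239.84      # h*c in eV*nm, a standard convenient combination
wavelength_nm = 300.0   # illustrative ultraviolet light
W_eV = 2.0              # assumed work function (illustrative, not a measured value)

photon_energy_eV = hc_eV_nm / wavelength_nm  # energy of one photon, ~4.13 eV
ke_max_eV = photon_energy_eV - W_eV          # fastest emitted electron, ~2.13 eV
print(round(ke_max_eV, 2))
```

Brighter light of the same frequency ejects more electrons, but each still has the same maximum kinetic energy.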

### The Bohr Atom

Bohr put together this quantization of light energy with Balmer’s formula for the wavelengths of the visible spectral lines of hydrogen:

$\frac{1}{\lambda }={R}_{H}\left(\frac{1}{4}-\frac{1}{{n}^{2}}\right),\quad n=3,4,5,\dots$

( ${R}_{H}$ is now called the Rydberg constant.) Bohr realized these were photons having
energy equal to the *energy difference
between two allowed orbits* of the electron circling the nucleus (the
proton), ${E}_{n}-{E}_{m}=hf$,
leading to the conclusion that the allowed levels must be ${E}_{n}=-hc{R}_{H}/{n}^{2}$.
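As a quick numerical sketch, plugging the standard measured value of ${R}_{H}$ into Balmer’s formula recovers the familiar lines, for example the red H-alpha line near 656 nm:

```python
R_H = 1.0968e7  # Rydberg constant for hydrogen, m^-1 (standard measured value)

def balmer_wavelength_nm(n):
    """Wavelength in nm of the Balmer line for the transition n -> 2."""
    inv_lambda = R_H * (1.0 / 4.0 - 1.0 / n**2)  # Balmer's formula, in m^-1
    return 1e9 / inv_lambda

print(round(balmer_wavelength_nm(3), 1))  # red H-alpha line, ~656.5 nm
print(round(balmer_wavelength_nm(4), 1))  # blue-green H-beta line, ~486.3 nm
```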

How could the quantum $hf$ restricting allowed radiation energies also restrict the allowed electron orbits? Bohr realized there must be a connection, because $h$ has the dimensions of angular momentum! What if the electron were only allowed to be in circular orbits of angular momentum $nKh,$ with $n$ an integer? Bohr did the math for orbits under an inverse-square law, and found that the observed spectra were in fact correctly accounted for by taking $K=1/2\pi .$

But then he realized he didn’t even need the experimental
results to find $K:$ quantum mechanics *must* agree with classical mechanics in the regime where we know
experimentally that classical mechanics (including Maxwell’s equations) is
correct, that is, for systems of macroscopic size. Consider a negative charge
orbiting around a fixed positive charge at a radius of 10 cm, the charges being
such that the speed is of order meters per second (we don’t want relativistic
effects making things more complicated). Then from classical E&M, the
charge will radiate at the orbital frequency.
Now imagine this is actually a hydrogen atom, in a perfect vacuum, in a
high state of excitation. It must be radiating at this same frequency. But
Bohr’s theory can’t just be right for small orbits, so the radiation must
satisfy ${E}_{n}-{E}_{m}=hf.$ The spacing between adjacent levels will vary
slowly for these large orbits, so $h$ times the orbital frequency must be the energy
difference between adjacent levels. Now,
that energy difference depends on the allowed angular momentum step between the
adjacent levels: that is, on $K.$ Reconciling these two expressions for the
radiation frequency gives $K=1/2\pi .$

This *classical limit argument*, then, predicts the Rydberg constant in terms of already known quantities:

${R}_{H}={\left(\frac{1}{4\pi {\epsilon }_{0}}\right)}^{2}\cdot \frac{2{\pi }^{2}m{e}^{4}}{c{h}^{3}}.$
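It’s worth evaluating this formula with modern values of the constants. (Strictly, using the electron mass $m$ gives the infinite-nuclear-mass value, a fraction of a percent above the measured ${R}_{H}$ for hydrogen, since the nucleus also moves slightly.)

```python
import math

# Evaluate Bohr's expression for the Rydberg constant from known constants.
m_e = 9.1093837e-31    # electron mass, kg
e = 1.60217663e-19     # elementary charge, C
h = 6.62607015e-34     # Planck's constant, J*s
c = 2.99792458e8       # speed of light, m/s
eps0 = 8.8541878e-12   # vacuum permittivity, F/m

R = (1.0 / (4.0 * math.pi * eps0))**2 * 2.0 * math.pi**2 * m_e * e**4 / (c * h**3)
print(R)  # ~1.097e7 m^-1, matching the spectroscopically measured value
```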

What’s right about the Bohr atom?

1. It gives the Balmer series spectra.

2. The first orbit size is close to the observed size of the atom; and remember there are no adjustable parameters: the classical limit argument determines both the spectra and the size.

What’s *wrong* with
the Bohr atom?

1. No explanation for why angular momentum should be quantized.

This was solved by de Broglie a little later.

2. Why don’t the circling electrons radiate, as predicted classically?

Well, the fact that radiation is quantized means the classical picture of an accelerating charge smoothly emitting radiation cannot work if the energies involved are of order $h$ times the frequencies involved.

3. The lowest state has nonzero angular momentum.

This is a defect of the model, corrected in the truly quantum model (Schrödinger’s equation).

4. In an inverse square field, orbits are in general elliptical.

This was at first a puzzle: why should there be only
circular orbits allowed? In fact, the
model *does* allow elliptical orbits,
and they don’t show up in the Balmer series because, as proved by Sommerfeld,
if the allowed elliptical orbits have the same allowed angular momenta as
Bohr’s orbits, they have the same set of energies. This is a special property of the inverse
square force.

### De Broglie Waves

The first explanation of why only certain angular momenta are allowed for the circling electron was given by de Broglie: just as photons act like particles (having definite energy and momentum) yet are undoubtedly wavelike, being light, so particles like electrons perhaps have wavelike properties. For photons, the relationship between wavelength and momentum is $p=h/\lambda .$ Assuming this is also true of electrons, and that the allowed circular orbits are standing waves, Bohr’s angular momentum quantization follows.
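The standing-wave argument can be spelled out in two lines: a whole number of wavelengths must fit around a circular orbit of radius $r$, and the de Broglie relation fixes the wavelength in terms of the momentum,

```latex
n\lambda = 2\pi r \quad\text{(standing wave)}, \qquad \lambda = \frac{h}{p}
\;\Longrightarrow\; L = pr = \frac{nh}{2\pi} = n\hbar ,
```

which is exactly Bohr’s quantization condition with $K=1/2\pi .$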

### Schrödinger’s Wave Equation

De Broglie’s idea was clearly on the right track, but waves in space are three-dimensional: thinking of the circular orbit as a string under tension can’t be right, even if the answer is.

Photon waves (electromagnetic waves) obey the equation

${\nabla }^{2}\overrightarrow{E}-\frac{1}{{c}^{2}}\frac{{\partial }^{2}\overrightarrow{E}}{\partial {t}^{2}}=0.$

A solution of definite momentum is the plane wave:

$\left(\frac{{\partial }^{2}}{\partial {x}^{2}}-\frac{1}{{c}^{2}}\frac{{\partial }^{2}}{\partial {t}^{2}}\right){\overrightarrow{E}}_{0}{e}^{i(kx-\omega t)}=-\left({k}^{2}-\frac{{\omega }^{2}}{{c}^{2}}\right){\overrightarrow{E}}_{0}{e}^{i(kx-\omega t)}=0.$

Notice that the last equality is essentially just $\omega =ck,$ where for a plane wave solution the energy and momentum of the photon are translated into differential operators with respect to time and space respectively, to give a differential equation for the wave.

Schrödinger’s wave equation comes from taking the (nonrelativistic) energy-momentum relation $E={p}^{2}/2m,$ and using the same recipe to translate it into a differential equation:

$i\hslash \frac{\partial \psi (x,t)}{\partial t}=-\frac{{\hslash}^{2}}{2m}\frac{{\partial}^{2}\psi (x,t)}{\partial {x}^{2}}.$

Making the natural extension to three dimensions, and assuming we can add a potential term in the most naïve way possible, that is, going from $E={p}^{2}/2m$ to $E={p}^{2}/2m+V\left(x,y,z\right),$ we get

$i\hslash \frac{\partial \psi (x,y,z,t)}{\partial t}=-\frac{{\hslash}^{2}}{2m}{\nabla}^{2}\psi (x,y,z,t)+V(x,y,z)\psi (x,y,z,t).$

This is the equation Schrödinger wrote down and solved. The solutions gave the same set of energies as the Bohr model, but now the ground state had zero angular momentum, and many of the details of the solutions were borne out by experiment, as we shall discuss further later.

### A Conserved Current

Schrödinger also showed that a conserved current could be defined in terms of the wave function $\psi $:

$\begin{array}{c}\dfrac{\partial \rho }{\partial t}+\mathrm{div}\,\overrightarrow{j}=0,\\ \text{where }\rho ={\psi }^{*}\psi =|\psi {|}^{2}\\ \text{and }\overrightarrow{j}=\dfrac{\hslash }{2mi}\left({\psi }^{*}\overrightarrow{\nabla }\psi -\psi \overrightarrow{\nabla }{\psi }^{*}\right).\end{array}$

Schrödinger’s interpretation of his equation was that the electron was simply a wave, not a particle, and $|\psi {|}^{2}$ was the wave intensity. But thinking of electromagnetic waves in this way gave no clue to the quantum photon behavior, so this couldn’t be the whole story.
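The continuity equation is easy to verify symbolically for the simplest case, a free-particle plane wave ${e}^{i(kx-\omega t)}$ with $\omega =\hslash {k}^{2}/2m$. A minimal sketch, assuming the sympy library is available (the plane wave is just my choice of test solution):

```python
import sympy as sp

# Check d(rho)/dt + d(j)/dx = 0 for a free-particle plane wave,
# psi = exp(i(kx - omega t)) with omega = hbar k^2 / 2m.
x, t, k, m, hbar = sp.symbols('x t k m hbar', real=True, positive=True)
psi = sp.exp(sp.I * (k * x - hbar * k**2 / (2 * m) * t))

rho = sp.conjugate(psi) * psi  # probability density |psi|^2
j = hbar / (2 * m * sp.I) * (sp.conjugate(psi) * sp.diff(psi, x)
                             - psi * sp.diff(sp.conjugate(psi), x))

print(sp.simplify(j))                                # the constant current hbar*k/m
print(sp.simplify(sp.diff(rho, t) + sp.diff(j, x)))  # 0: the continuity equation holds
```

For this plane wave $\rho$ is constant and $\overrightarrow{j}=\hslash k/m=p/m$ times the density, i.e. density times velocity, as one would hope.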

### Interpreting the Wave Function

The correct interpretation of the wave function (due to Born) follows by analogy to the electromagnetic case. Let’s review that briefly. The basic example is the two-slit diffraction pattern, as built up by sending photons through one at a time to a bank of photon detectors. The pattern gradually emerges: solve the wave equation, and the predicted local energy density (proportional to ${\left|\overrightarrow{E}\left(x,y,z,t\right)\right|}^{2}dxdydz$) gives the probability that a photon going through the system lands at that spot.

Born suggested that similarly $|\psi {|}^{2}$ at any point was proportional to the probability of detecting the electron at that point. This has turned out to be correct.

### Localizing the Electron

Despite its wavelike properties, we know that an
electron can behave like a particle: specifically, it can move as a fairly
localized entity from one place to another. What’s the wave representation of
that? It’s called a *wave packet*: a localized wave excitation. To see how this can come about, first
remember that the Schrödinger equation is a linear equation, the sum of any two
or more solutions is itself a solution.
If we add together two plane waves close in wavelength, we get beats,
which can be regarded as a string of wave packets. To get a single wave packet, we must add
together a continuous range of wavelengths.

The standard example is the Gaussian wave packet, $\psi (x,t=0)=A{e}^{i{k}_{0}x}{e}^{-{x}^{2}/2{\Delta}^{2}}$ where ${p}_{0}=\hslash {k}_{0}$.

Using the standard result

$\underset{-\infty}{\overset{+\infty}{\int}}{e}^{-a{x}^{2}}dx=\sqrt{\frac{\pi}{a}}$

we find ${\left|A\right|}^{2}={\left(\pi {\Delta}^{2}\right)}^{-1/2}$ so

$\psi (x,t=0)=\frac{1}{{(\pi {\Delta}^{2})}^{1/4}}{e}^{i{k}_{0}x}{e}^{-{x}^{2}/2{\Delta}^{2}}.$

But how do we construct this particular wavepacket by superposing plane waves? That is to say, we need a representation of the form:

$\psi (x)={\displaystyle \underset{-\infty}{\overset{+\infty}{\int}}\frac{dk}{2\pi}{e}^{ikx}\varphi (k)}$

The function $\varphi \left(k\right)$ represents the weighting of plane waves in the neighborhood of wavenumber $k.$ This is a particular example of a *Fourier transform*; we will be discussing the general case in detail a little later in the course. Note that if $\varphi \left(k\right)$ is a bounded function, any particular $k$ value gives a vanishingly small contribution; the plane-wave contribution to $\psi \left(x\right)$ from a range $dk$ is $\varphi \left(k\right)dk/2\pi .$ In fact, $\varphi \left(k\right)$ is given in terms of $\psi \left(x\right)$ by

$\varphi (k)={\displaystyle \underset{-\infty}{\overset{+\infty}{\int}}dx{e}^{-ikx}\psi (x)}.$

It is perhaps worth mentioning at this point that this can be understood *qualitatively* by observing that the plane-wave prefactor ${e}^{-ikx}$ will interfere destructively with all plane-wave components of $\psi \left(x\right)$ except that of wavenumber $k.$ It may at first appear that the contribution from that one component is infinite, but recall that, as stated above, any particular $k$ component has a vanishingly small weight; and, in fact, this is the right answer, as we shall show in more convincing fashion later.

In the present case, the above handwaving argument is unnecessary, because both the integrals can be carried out exactly, using the standard result:

$\underset{-\infty }{\overset{+\infty }{\int }}{e}^{-a{x}^{2}+bx}dx={e}^{{b}^{2}/4a}\sqrt{\frac{\pi }{a}}$

giving

$\varphi (k)={(4\pi {\Delta }^{2})}^{1/4}{e}^{-{\Delta }^{2}{(k-{k}_{0})}^{2}/2}.$
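This closed form is easy to check numerically: do the integral $\int dx\,{e}^{-ikx}\psi (x)$ on a grid and compare. A minimal sketch, with $\Delta =1$ and ${k}_{0}=2$ as arbitrary illustrative choices:

```python
import numpy as np

# Numerical check that phi(k) = (4 pi Delta^2)^(1/4) exp(-Delta^2 (k-k0)^2 / 2)
# is the Fourier transform of the normalized Gaussian packet.
Delta, k0 = 1.0, 2.0
x = np.linspace(-20.0, 20.0, 8001)
dx = x[1] - x[0]
psi = (np.pi * Delta**2)**(-0.25) * np.exp(1j * k0 * x - x**2 / (2 * Delta**2))

def phi_numeric(k):
    # Trapezoid-rule approximation to  integral dx e^{-ikx} psi(x)
    integrand = np.exp(-1j * k * x) * psi
    return (np.sum(integrand) - 0.5 * (integrand[0] + integrand[-1])) * dx

def phi_exact(k):
    return (4 * np.pi * Delta**2)**0.25 * np.exp(-Delta**2 * (k - k0)**2 / 2)

for k in (1.0, 2.0, 3.5):
    print(abs(phi_numeric(k) - phi_exact(k)))  # negligible at each k
```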

### The Uncertainty Principle

Note that the spreads in $x$-space and $p$-space are inversely related: $\Delta x$ is of order $\Delta ,$ $\Delta p=\hslash \Delta k\sim \hslash /\Delta .$ This is of course the Uncertainty Principle: localization in $x$-space requires a large spread in contributing momentum states.
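For the Gaussian packet this can be made precise: computing the root-mean-square spreads from $|\psi (x){|}^{2}$ and $|\varphi (k){|}^{2}/2\pi$ (the normalized $k$-space density, taking ${k}_{0}=0$) gives ${\sigma }_{x}{\sigma }_{k}=1/2,$ i.e. ${\sigma }_{x}{\sigma }_{p}=\hslash /2,$ the minimum-uncertainty value. A quick numerical sketch, with $\Delta$ an arbitrary choice:

```python
import numpy as np

Delta = 1.3  # arbitrary packet width
x = np.linspace(-30.0, 30.0, 12001)
dx = x[1] - x[0]
prob_x = (np.pi * Delta**2)**(-0.5) * np.exp(-x**2 / Delta**2)   # |psi(x)|^2
sigma_x = np.sqrt(np.sum(x**2 * prob_x) * dx)                    # rms spread in x

k = np.linspace(-30.0, 30.0, 12001)
dk = k[1] - k[0]
prob_k = (Delta**2 / np.pi)**0.5 * np.exp(-Delta**2 * k**2)      # |phi(k)|^2 / 2 pi
sigma_k = np.sqrt(np.sum(k**2 * prob_k) * dk)                    # rms spread in k

print(sigma_x * sigma_k)  # 0.5 to numerical accuracy
```

As we shall see later, the Gaussian is special: any other packet shape gives a strictly larger product.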

It’s worth reviewing the undergraduate exercises on applications of the uncertainty principle. They help sharpen one’s appreciation of the wave/particle nature of quantum objects.

There’s a limit to how well the position of an electron can
be determined: it is detected by bouncing a photon off of it, and the photon
wavelength sets the limit on $\Delta x.$ But if the photon has enough energy to create
an electron-positron pair out of the vacuum, you can’t be sure which electron
you’re seeing. This limits $\Delta x\sim \hslash /mc$ at best.
(This length is called the *Compton wavelength*, written ${\lambda }_{C}$; it appears in the *fine structure* of atomic spectra.)
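Numerically, the $\hslash /mc$ limit quoted above (strictly, the *reduced* Compton wavelength) works out to a few hundred femtometers, far below atomic sizes:

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s
m_e = 9.1093837e-31     # electron mass, kg
c = 2.99792458e8        # speed of light, m/s

lambda_C_reduced = hbar / (m_e * c)  # the hbar/mc limit on Delta x
print(lambda_C_reduced)  # ~3.86e-13 m
```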