Michael Fowler
As we discussed in the Linear Algebra lecture, if two physical variables correspond to commuting Hermitian operators, they can be diagonalized simultaneously—that is, they have a common set of eigenstates. In these eigenstates both variables have precise values at the same time; there is no “Uncertainty Principle” requiring that as we know one of them more accurately, we increasingly lose track of the other. For example, the energy and momentum of a free particle can both be specified exactly. More interesting examples will appear in the sections on angular momentum and spin.
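(As a numerical aside, not part of the argument: the point about commuting operators is easy to check. The sketch below builds two Hermitian matrices that commute by construction and confirms that an eigenbasis of one also diagonalizes the other; the dimension and random seed are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Hermitian matrices that commute by construction: rotate two real diagonal
# matrices with the same (random) unitary U.  Dimension and seed are arbitrary.
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(X)
A = U @ np.diag(rng.standard_normal(4)) @ U.conj().T
B = U @ np.diag(rng.standard_normal(4)) @ U.conj().T

print(np.allclose(A @ B - B @ A, 0))        # True: [A, B] = 0

# An eigenbasis of A (columns of V) also diagonalizes B, so each eigenvector is a
# state in which both variables take definite values simultaneously.
_, V = np.linalg.eigh(A)
M = V.conj().T @ B @ V
print(np.allclose(M, np.diag(np.diag(M))))  # True: B is diagonal in the same basis
```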
But if two operators do not commute, in general one cannot specify both values precisely. Of course, such operators could still have some common eigenvectors, but the interesting case arises in attempting to measure A and B simultaneously for a state $|\psi\rangle$ in which the commutator [A, B] has a nonzero expectation value,
$$\langle\psi|\,[A,B]\,|\psi\rangle \neq 0.$$
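As a concrete example (my choice, not from the lecture), take the Pauli matrices $\sigma_x$ and $\sigma_y$: they do not commute, and in the spin-up state the expectation value of their commutator is nonzero and pure imaginary.

```python
import numpy as np

# Pauli matrices sigma_x and sigma_y (an example of my choosing).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

comm = sx @ sy - sy @ sx                   # equals 2i * sigma_z
psi = np.array([1, 0], dtype=complex)      # spin-up along z

# Expectation value of the commutator: 2i, nonzero and pure imaginary,
# as it must be for the commutator of two Hermitian operators.
print(psi.conj() @ comm @ psi)
```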
Our task here is to give a quantitative analysis of how accurately noncommuting variables can be measured together. We found earlier, using a semi-quantitative argument, that for a free particle $\Delta p\,\Delta x \sim \hbar$ at best. To improve on that result, we need to be precise about the uncertainty $\Delta A$ in a state $\psi$.
We define $\Delta A$ as the root mean square deviation:
$$(\Delta A)^2 = \langle\psi|\,(A - \langle A\rangle)^2\,|\psi\rangle.$$
To make the equations more compact, we define â by
$$A = \langle A\rangle + \hat{a}.$$
(We’ll put a caret on the â to remind ourselves it’s an operator—and, of course, it’s a Hermitian operator, like A.) We also drop the $\psi$, on the understanding that this whole argument is for a particular state. Now
$$(\Delta A)^2 = \langle\,\hat{a}^{\,2}\,\rangle.$$
Introduce an operator $\hat{b}$ in exactly similar fashion,
$$B = \langle B\rangle + \hat{b},$$
having the property that
$$(\Delta B)^2 = \langle\,\hat{b}^{\,2}\,\rangle.$$
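These definitions are easy to compute with. The short helper below (a sketch; the function name is mine) returns $\Delta A = \sqrt{\langle A^2\rangle - \langle A\rangle^2}$ for a Hermitian matrix and a normalized state, which is the same thing as $\sqrt{\langle\hat{a}^{\,2}\rangle}$.

```python
import numpy as np

def rms_deviation(A, psi):
    """Delta A = sqrt(<A^2> - <A>^2) for Hermitian matrix A and normalized state psi."""
    mean  = (psi.conj() @ A @ psi).real        # <A>
    mean2 = (psi.conj() @ A @ A @ psi).real    # <A^2>
    return np.sqrt(mean2 - mean**2)            # = sqrt(<a_hat^2>)

# Example: for sigma_x in the spin-up state, <sigma_x> = 0 and <sigma_x^2> = 1,
# so the uncertainty is 1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
print(rms_deviation(sx, np.array([1, 0], dtype=complex)))   # 1.0
```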
Theorem:
$$\Delta A\,\Delta B \;\geq\; \tfrac{1}{2}\,\bigl|\langle[A,B]\rangle\bigr|.$$
(Remark: remember that for A, B Hermitian, [A, B] is antiHermitian, so $i\langle[A,B]\rangle$ is real!)
Proof:
Define
$$|\psi_a\rangle = \hat{a}\,|\psi\rangle, \qquad |\psi_b\rangle = \hat{b}\,|\psi\rangle.$$
Then
$$(\Delta A)^2 = \langle\psi|\,\hat{a}^{\,2}\,|\psi\rangle = \langle\psi_a|\psi_a\rangle, \qquad (\Delta B)^2 = \langle\psi_b|\psi_b\rangle.$$
Using Schwarz’s inequality
$$\bigl|\langle\psi_a|\psi_b\rangle\bigr|^2 \;\leq\; \langle\psi_a|\psi_a\rangle\,\langle\psi_b|\psi_b\rangle$$
gives immediately
$$(\Delta A)^2\,(\Delta B)^2 \;\geq\; \bigl|\langle\psi_a|\psi_b\rangle\bigr|^2 = \bigl|\langle\hat{a}\hat{b}\rangle\bigr|^2.$$
The operator $\hat{a}\hat{b}$ is neither Hermitian nor antiHermitian. To evaluate the mod squared of its expectation value, we break the amplitude into real and imaginary parts:
$$\langle\hat{a}\hat{b}\rangle = \tfrac{1}{2}\,\langle\hat{a}\hat{b} + \hat{b}\hat{a}\rangle + \tfrac{1}{2}\,\langle[\hat{a},\hat{b}]\rangle.$$
(The first term on the right-hand side is the expectation value of a Hermitian operator, and so is real; the second term is the expectation value of an antiHermitian operator, and so is pure imaginary.)
It follows immediately that
$$\bigl|\langle\hat{a}\hat{b}\rangle\bigr|^2 \;\geq\; \tfrac{1}{4}\,\bigl|\langle[\hat{a},\hat{b}]\rangle\bigr|^2.$$
But $[\hat{a},\hat{b}] = [A,B]$, so
$$(\Delta A)^2\,(\Delta B)^2 \;\geq\; \tfrac{1}{4}\,\bigl|\langle[A,B]\rangle\bigr|^2,$$
and taking the square root gives the theorem, $\Delta A\,\Delta B \geq \tfrac{1}{2}\,\bigl|\langle[A,B]\rangle\bigr|$.
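As a sanity check on the theorem (a numerical sketch, with arbitrary dimension and seed), one can draw random Hermitian matrices A, B and a random normalized state and verify that $\Delta A\,\Delta B$ never drops below $\tfrac{1}{2}|\langle[A,B]\rangle|$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5                                           # arbitrary dimension

def random_hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

def delta(Op, psi):
    mean = (psi.conj() @ Op @ psi).real
    return np.sqrt((psi.conj() @ Op @ Op @ psi).real - mean**2)

for _ in range(1000):
    A, B = random_hermitian(n), random_hermitian(n)
    psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    psi /= np.linalg.norm(psi)
    lhs = delta(A, psi) * delta(B, psi)
    rhs = 0.5 * abs(psi.conj() @ (A @ B - B @ A) @ psi)
    assert lhs >= rhs - 1e-10                   # the inequality holds in every trial

print("Delta A * Delta B >= (1/2)|<[A,B]>| in all 1000 random cases")
```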
For a particle in one dimension, denote $A = x$, $B = p = \dfrac{\hbar}{i}\dfrac{d}{dx}$, so that
$$[A,B] = [x,p] = x\,\frac{\hbar}{i}\frac{d}{dx} - \frac{\hbar}{i}\frac{d}{dx}\,x = i\hbar.$$
(It is important in that last step to understand that the operator $\dfrac{d}{dx}$ operates on everything to its right, and, as we are always finding matrix elements of operators, there will be a following ket it operates on, so $\dfrac{d}{dx}\,x\,\psi = \psi + x\,\dfrac{d\psi}{dx}$.)
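The same commutator can be checked symbolically; in the sketch below, sympy applies $x\,\tfrac{\hbar}{i}\tfrac{d}{dx} - \tfrac{\hbar}{i}\tfrac{d}{dx}\,x$ to an arbitrary function $\psi(x)$, and the product rule supplies the extra term just discussed.

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
psi = sp.Function('psi')(x)                    # an arbitrary test function

p = lambda f: (hbar / sp.I) * sp.diff(f, x)    # momentum operator acting on a function

# [x, p] psi = x p psi - p (x psi); the product rule supplies the extra term.
print(sp.simplify(x * p(psi) - p(x * psi)))    # I*hbar*psi(x)
```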
We conclude that
$$\Delta x\,\Delta p \;\geq\; \tfrac{1}{2}\,\bigl|\langle i\hbar\rangle\bigr| = \frac{\hbar}{2}.$$
Question: Is there a wavefunction for which this
inequality becomes an equality?
That would require
$$\bigl|\langle\psi_a|\psi_b\rangle\bigr|^2 = \langle\psi_a|\psi_a\rangle\,\langle\psi_b|\psi_b\rangle,$$
which can only be true if the two vectors are parallel,
$$|\psi_b\rangle = \lambda\,|\psi_a\rangle,$$
or, written explicitly,
$$\left(\frac{\hbar}{i}\frac{d}{dx} - \langle p\rangle\right)\psi(x) = \lambda\,\bigl(x - \langle x\rangle\bigr)\,\psi(x).$$
Actually, that’s not enough: we also need $\langle\hat{a}\hat{b} + \hat{b}\hat{a}\rangle$ to be zero. (Look at the equation above giving $\langle\hat{a}\hat{b}\rangle$ in terms of its real and imaginary parts, and how we used it to establish the inequality.)
Using $|\psi_b\rangle = \lambda|\psi_a\rangle$, we find
$$\langle\hat{a}\hat{b} + \hat{b}\hat{a}\rangle = \langle\psi_a|\psi_b\rangle + \langle\psi_b|\psi_a\rangle = (\lambda + \lambda^{*})\,\langle\psi_a|\psi_a\rangle,$$
so this will be zero if and only if $\lambda$ is pure imaginary.
Turning to the differential equation, we first take the simplest case where $\langle x\rangle$ and $\langle p\rangle$ are both zero. The first requirement just sets the origin, but the second stipulates that our wave function has no net momentum.
For this simple case, the condition $|\psi_b\rangle = \lambda|\psi_a\rangle$ becomes
$$\frac{\hbar}{i}\,\frac{d\psi(x)}{dx} = \lambda\,x\,\psi(x),$$
and recalling that $\lambda$ is pure imaginary, this is a Gaussian wave packet! It is straightforward to check that the solution with $\langle x\rangle$ and $\langle p\rangle$ nonzero is
$$\psi(x) = C\,e^{\,i\langle p\rangle x/\hbar}\,e^{-\alpha\,(x - \langle x\rangle)^2/2\hbar},$$
where $\alpha = -i\lambda$ is real, and $C$ is the usual Gaussian normalization constant.
Exercise: check this.
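One way to check it numerically rather than by hand (a sketch, with $\hbar = 1$ and arbitrary values chosen for $\alpha$, $\langle x\rangle$, $\langle p\rangle$): put the claimed wavefunction on a grid and confirm that $\Delta x\,\Delta p$ comes out as $\hbar/2$.

```python
import numpy as np

hbar, alpha, x0, p0 = 1.0, 0.7, 0.3, -1.2       # arbitrary test values, hbar = 1
x = np.linspace(-40, 40, 20001)
h = x[1] - x[0]                                 # grid spacing

# The claimed minimum-uncertainty packet, normalized on the grid.
psi = np.exp(1j * p0 * x / hbar) * np.exp(-alpha * (x - x0)**2 / (2 * hbar))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * h)

def expect(op_psi):
    """<psi| O |psi> on the grid, where op_psi is O applied to psi."""
    return (np.sum(np.conj(psi) * op_psi) * h).real

dpsi  = np.gradient(psi, h)                     # d psi / dx
d2psi = np.gradient(dpsi, h)                    # d^2 psi / dx^2

Dx = np.sqrt(expect(x**2 * psi) - expect(x * psi)**2)
Dp = np.sqrt(expect(-hbar**2 * d2psi) - expect((hbar / 1j) * dpsi)**2)

print(Dx * Dp, hbar / 2)                        # the two numbers agree closely
```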
The conclusion is then that the Gaussian wave packet gives
the optimum case for minimizing the joint uncertainties in position and
momentum.
Note that the condition $|\psi_b\rangle = \lambda|\psi_a\rangle$ does not mean that $\psi$ is an eigenstate of either $\hat{a}$ or $\hat{b}$, but it is an eigenstate of the nonHermitian operator $\hat{b} - \lambda\hat{a}$, with eigenvalue zero. We shall soon see that this nonHermitian
operator and its adjoint play important roles in the quantum mechanics of the
simple harmonic oscillator.
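To make the “eigenvalue zero” statement concrete, here is a short symbolic check for the $\langle x\rangle = \langle p\rangle = 0$ case (a sketch; the symbol names are mine): with $\lambda = i\alpha$, applying $p - \lambda x$ to the Gaussian gives identically zero.

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar, alpha = sp.symbols('hbar alpha', positive=True)

psi = sp.exp(-alpha * x**2 / (2 * hbar))        # the <x> = <p> = 0 Gaussian from above
lam = sp.I * alpha                              # lambda = i*alpha, pure imaginary

# The nonHermitian combination (p - lambda x) annihilates the Gaussian.
print(sp.simplify((hbar / sp.I) * sp.diff(psi, x) - lam * x * psi))   # 0
```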