Michael Fowler UVa
Introduction: Cartesian Vectors and Tensors
Physics is full of vectors: $\vec{r}, \vec{p}, \vec{L}$ and so on. Classically, a (three-dimensional) vector is defined by its properties under rotation: the three components corresponding to the Cartesian axes transform as
$$V_i' = \sum_j R_{ij} V_j$$
with $R$ the usual $3\times3$ rotation matrix, for example
$$R = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
for rotation about the $z$ axis. (We’ll use $(x, y, z)$ and $(x_1, x_2, x_3)$ interchangeably.)
A tensor is a generalization of such a vector to an object with more than one suffix, such as, for example, $T_{ij}$ or $T_{ijk}$ (having 9 and 27 components respectively in three dimensions), with the requirement that these components mix among themselves under rotation by each individual suffix following the vector rule, for example
$$T_{ijk}' = \sum_{l,m,n} R_{il} R_{jm} R_{kn} T_{lmn}$$
where $R$ is the same rotation matrix that transforms a vector. Tensors written in this way are called Cartesian tensors (since the suffixes refer to Cartesian axes). The number of suffixes is the rank of the Cartesian tensor; a rank $n$ tensor has of course $3^n$ components.
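These two transformation rules are easy to check numerically. Here is a minimal sketch in Python/numpy; the rotation angle and sample vectors are arbitrary illustrative values, not anything from the notes:

```python
import numpy as np

# Sketch: check the vector rule V'_i = R_ij V_j and the rank-2 tensor rule
# T'_ij = R_ik R_jl T_kl for a sample rotation about the z axis.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
assert np.allclose(R @ R.T, np.eye(3))        # R is orthogonal

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
T = np.outer(u, v)                            # a rank-2 tensor, T_ij = u_i v_j

T_rot = np.einsum('ik,jl,kl->ij', R, R, T)    # tensor rule, one R per suffix
# Rotating this outer-product tensor is the same as rotating each factor:
assert np.allclose(T_rot, np.outer(R @ u, R @ v))
```

The `einsum` line applies one copy of $R$ to each suffix, which is exactly the "each suffix follows the vector rule" requirement.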
Tensors are common in physics: they are essential in describing stress, distortion and flow in solids and liquids. The inertial tensor is the basis for analyzing angular motion in classical mechanics. Tensor forces are important in the dynamics of the deuteron, and in fact tensors arise for any charge distribution more complicated than a dipole. Going to four dimensions, and generalizing from rotations to Lorentz transformations, Maxwell’s equations are most naturally expressed in tensor form, and tensors are central to General Relativity.
To get back to non-relativistic physics, since the defining property of a tensor is its behavior under rotations, spherical polar coordinates are sometimes a more natural basis than Cartesian coordinates. In fact, in that basis tensors (called spherical tensors) have rotational properties closely related to those of angular momentum eigenstates, as will become clear in the following sections.
The Rotation Operator in Angular Momentum Eigenket Space
As a preliminary to discussing general tensors in quantum mechanics, we briefly review the rotation operator and quantum vector operators. (A full treatment is given in my 751 lecture.)
Recall that the rotation operator turning a ket through an angle $\vec{\theta}$ (the vector direction denotes the axis of rotation, its magnitude the angle turned through) is
$$U[R(\vec{\theta})] = e^{-i\vec{J}\cdot\vec{\theta}/\hbar}.$$
Since $U$ commutes with the total angular momentum squared $J^2$, we can restrict our attention to a given total angular momentum $j$, having as usual an orthonormal basis set $|j, m\rangle$, or $|m\rangle$ for short, with $2j + 1$ components; a general ket $|\alpha\rangle$ in this space is then:
$$|\alpha\rangle = \sum_{m=-j}^{j} \alpha_m |m\rangle.$$
Rotating this ket,
$$|\alpha'\rangle = U[R(\vec{\theta})]\,|\alpha\rangle.$$
Putting in a complete set of states, and using the standard notation for matrix elements of the rotation operator,
$$\alpha'_m = \sum_{m'} \langle m|U[R]|m'\rangle\,\alpha_{m'} = \sum_{m'} D^{(j)}_{mm'}(R)\,\alpha_{m'}.$$
$$D^{(j)}_{mm'}(R) = \langle m|U[R(\vec{\theta})]|m'\rangle$$
is standard notation (see the earlier lecture).
So the ket rotation transformation is
$$\alpha'_m = \sum_{m'} D^{(j)}_{mm'}\,\alpha_{m'}, \quad \text{or} \quad \alpha' = D\alpha,$$
with the usual matrix-multiplication rules.
Rotating a Basis Ket
Now suppose we apply the rotation operator to one of the basis kets $|j, m\rangle$; what is the result?
$$U|j, m\rangle = \sum_{m'} |j, m'\rangle\langle j, m'|U|j, m\rangle = \sum_{m'} D^{(j)}_{m'm}\,|j, m'\rangle.$$
Note the reversal of $m, m'$ compared with the operation on the set of component coefficients of the general ket.
(You may be thinking: wait a minute, $|j, m\rangle$ is a ket in the space: it can be written with components $\alpha_{m'} = \delta_{m'm}$, so we could use the previous rule to rotate it. Reassuringly, this leads to the same result we just found.)
Rotating an Operator, Scalar Operators
Just as in the Schrödinger versus Heisenberg formulations, we can either apply the rotation operator to the kets and leave the operators alone, or we can leave the kets alone and rotate the operators:
$$A \to U^\dagger A U,$$
which will yield the same matrix elements, so the same physics.
A scalar operator is an operator which is invariant under rotations, for example the Hamiltonian of a particle in a spherically symmetric potential. (There are many less trivial examples of scalar operators, such as the dot product of two vector operators, as in a spin-orbit coupling.)
The transformation of an operator under an infinitesimal rotation $U = 1 - i\varepsilon\,\vec{J}\cdot\hat{n}/\hbar$ is given by:
$$A \to U^\dagger A U = A + \frac{i\varepsilon}{\hbar}\left[\vec{J}\cdot\hat{n},\, A\right].$$
It follows that a scalar operator, which does not change at all, must commute with all the components of the angular momentum operator, and hence must have a common set of eigenkets with, say, $J^2$ and $J_z$.
Vector Operators: Definition and Commutation Properties
A quantum mechanical vector operator is defined by requiring that the expectation values of its three components in any state transform like the components of a classical vector under rotation.
It follows from this that the operator itself must transform vectorially,
$$U^\dagger(R)\, V_i\, U(R) = \sum_j R_{ij} V_j.$$
To see what this implies, it is easiest to look at a simple case. For an infinitesimal rotation through $\varepsilon$ about the $z$ axis, the vector transforms as
$$V_x \to V_x - \varepsilon V_y, \quad V_y \to V_y + \varepsilon V_x, \quad V_z \to V_z.$$
The unitary Hilbert space operator $U$ corresponding to this rotation is $U = e^{-i\varepsilon J_z/\hbar} \approx 1 - i\varepsilon J_z/\hbar$, so
$$U^\dagger V_i U = V_i + \frac{i\varepsilon}{\hbar}\left[J_z,\, V_i\right].$$
The requirement that the two transformations above, the infinitesimal classical rotation generated by $R$ and the infinitesimal unitary transformation $U^\dagger V_i U$, are in fact the same thing yields the commutation relations of a vector operator with angular momentum:
$$[J_z, V_x] = i\hbar V_y, \qquad [J_z, V_y] = -i\hbar V_x, \qquad [J_z, V_z] = 0.$$
From this result and its cyclic equivalents, the components of any vector operator $\vec{V}$ must satisfy:
$$[J_i, V_j] = i\hbar\,\varepsilon_{ijk} V_k.$$
Exercise: verify that the components of $\vec{L} = \vec{r}\times\vec{p}$ do in fact satisfy these commutation relations.
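A closely related numerical check (not the exercise itself): $\vec{J}$ is itself a vector operator, so the spin-1 matrices must satisfy exactly these commutation relations. A sketch with $\hbar = 1$:

```python
import numpy as np

# Spin-1 angular momentum matrices (hbar = 1), built from the raising operator.
Jp = np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Jm = Jp.conj().T
Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / (2j)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
J = [Jx, Jy, Jz]

# Levi-Civita symbol eps_ijk.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# Check [J_i, J_j] = i eps_ijk J_k for every pair of components.
for i in range(3):
    for j in range(3):
        comm = J[i] @ J[j] - J[j] @ J[i]
        expected = 1j * sum(eps[i, j, k] * J[k] for k in range(3))
        assert np.allclose(comm, expected)
```

The same loop would pass for any representation of the angular momentum algebra; spin 1 is just the smallest one that will reappear below as the spherical form of a vector.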
(Note: Confusingly, there is a slightly different situation in which we need to rotate an operator, and it gives an opposite result. Suppose an operator $T$ acts on a ket $|\alpha\rangle$ to give the ket $T|\alpha\rangle$. For the kets $|\alpha\rangle$ and $T|\alpha\rangle$ to go to $U|\alpha\rangle$ and $UT|\alpha\rangle$ respectively under a rotation, $T$ itself must transform as $T \to UTU^\dagger$ (recall $U^\dagger = U^{-1}$). The point is that this is a Schrödinger rather than a Heisenberg-type transformation: we’re rotating the kets, not the operators.)
Warning: Does a vector operator transform like the components of a vector or like the basis kets of the space? You’ll see it written both ways, so watch out!
We’ve already defined it as transforming like the components:
$$U^\dagger(R)\, V_i\, U(R) = \sum_j R_{ij} V_j,$$
but if we now take the opposite rotation, the unitary matrix $U$ is replaced by its inverse, and the rotation matrix $R$ by its inverse. Remember also that the ordinary spatial rotation matrix $R$ is orthogonal, so its inverse is its transpose, and the above equation is equivalent to
$$U(R)\, V_i\, U^\dagger(R) = \sum_j V_j R_{ji}.$$
This second definition of a vector operator is that its elements transform just as do the basis kets of the space, so it’s crucial to look carefully at the equation to figure out which is the rotation matrix, and which is its inverse! This second form of the equation is the one in common use.
Cartesian Tensor Operators
From the definition given earlier, under rotation the elements of a rank two Cartesian tensor transform as:
$$T'_{ij} = \sum_{k,l} R_{ik} R_{jl} T_{kl},$$
where $R$ is the rotation matrix for a vector.
It is illuminating to consider a particular example of a second-rank tensor, $T_{ij} = u_i v_j$, where $\vec{u}$ and $\vec{v}$ are ordinary three-dimensional vectors.
The problem with this tensor is that it is reducible, using the word in the same sense as in our discussion of group representations in discussing addition of angular momenta. That is to say, combinations of the elements can be arranged in sets such that rotations operate only within these sets. This is made evident by writing:
$$u_i v_j = \frac{\vec{u}\cdot\vec{v}}{3}\,\delta_{ij} + \frac{u_i v_j - u_j v_i}{2} + \left(\frac{u_i v_j + u_j v_i}{2} - \frac{\vec{u}\cdot\vec{v}}{3}\,\delta_{ij}\right).$$
The first term, proportional to the dot product of the two vectors, is clearly a scalar under rotation; the second term, an antisymmetric tensor, has three independent components, which are the vector components of the vector product $\vec{u}\times\vec{v}$; and the third term is a symmetric traceless tensor, which has five independent components. Altogether, then, there are $1 + 3 + 5 = 9$ components, as required.
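This decomposition, and the fact that rotation does not mix the three pieces, can be checked numerically. A sketch in Python/numpy, with random sample vectors and an arbitrary rotation about $z$:

```python
import numpy as np

# Decompose T_ij = u_i v_j into scalar (1), antisymmetric (3),
# and symmetric-traceless (5) parts.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)
T = np.outer(u, v)

scalar = np.trace(T) / 3 * np.eye(3)           # (u.v/3) delta_ij
antisym = (T - T.T) / 2                        # carries the components of u x v
sym_traceless = (T + T.T) / 2 - scalar         # five independent components
assert np.allclose(T, scalar + antisym + sym_traceless)

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1.0]])
rot = lambda A: R @ A @ R.T                    # T'_ij = R_ik R_jl T_kl

# Rotation keeps each irreducible piece within its own set:
assert np.allclose(rot(scalar), scalar)                       # scalar invariant
assert np.allclose(rot(antisym), (rot(T) - rot(T).T) / 2)     # stays antisymmetric
assert np.allclose(rot(sym_traceless), rot(sym_traceless).T)  # stays symmetric
assert np.isclose(np.trace(rot(sym_traceless)), 0.0)          # and traceless
```

Since symmetry, antisymmetry, and tracelessness all survive the rotation $RAR^{T}$, the three sets never mix: that is the reducibility being claimed.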
Notice the numbers of elements of these irreducible subgroups: $1, 3, 5$. These are exactly the numbers of elements $2j + 1$ of angular momentum representations for $j = 0, 1, 2$!
This is of course no coincidence: as we shall make more explicit below, a three-dimensional vector is mathematically isomorphic to a quantum spin one, so the tensor we have written is a direct product of two spins one. Exactly as we argued in discussing addition of angular momenta, it will be a reducible representation of the rotation group, and will be a sum of representations corresponding to the possible total angular momenta from adding two spins one, that is, $j = 0, 1, 2$.
As discussed earlier, the matrix elements of the rotation operator $U[R(\vec{\theta})]$
within a definite $j$ subspace are written
$$D^{(j)}_{m'm}(R) = \langle j, m'|U[R(\vec{\theta})]|j, m\rangle,$$
so under the rotation operator a basis state $|j, m\rangle$ transforms as:
$$U|j, m\rangle = \sum_{m'} D^{(j)}_{m'm}\,|j, m'\rangle.$$
The essential point is that these irreducible subgroups into which Cartesian tensors decompose under rotation (generalizing from our one example) form a more natural basis set of tensors for problems with rotational symmetries.
Definition: We define a spherical tensor of rank $k$ as a set of $2k + 1$ operators $T_k^q$, $q = k, k-1, \dots, -k$, such that under rotation they transform among themselves with exactly the same matrix of coefficients as that for the $2k + 1$ angular momentum eigenkets $|k, q\rangle$ for $j = k$; that is,
$$U\, T_k^q\, U^{-1} = \sum_{q'} D^{(k)}_{q'q}\, T_k^{q'}.$$
To see the properties of these spherical tensors, it is useful to evaluate the above equation for infinitesimal rotations, for which
$$D^{(k)}_{q'q} = \langle k, q'|I - i\varepsilon\,\vec{J}\cdot\hat{n}/\hbar|k, q\rangle = \delta_{q'q} - \frac{i\varepsilon}{\hbar}\langle k, q'|\vec{J}\cdot\hat{n}|k, q\rangle.$$
(The matrix element $\langle k, q'|\vec{J}\cdot\hat{n}|k, q\rangle$ is just the familiar Clebsch-Gordan coefficient in changed notation: the rank $k$ corresponds to the usual $j$, and $q$ to the “magnetic” quantum number $m$.)
Specifically, consider an infinitesimal rotation $\vec{J}\cdot\hat{n}\,\varepsilon = J_+\varepsilon$. (Strictly speaking, this is not a real rotation, but the formalism doesn’t care, and the result we derive can be confirmed by rotation about the $x$ and $y$ directions and adding appropriate terms.)
The equation is
$$\left(1 - \frac{i\varepsilon J_+}{\hbar}\right) T_k^q \left(1 + \frac{i\varepsilon J_+}{\hbar}\right) = \sum_{q'}\left(\delta_{q'q} - \frac{i\varepsilon}{\hbar}\langle k, q'|J_+|k, q\rangle\right) T_k^{q'},$$
and equating terms linear in $\varepsilon$,
$$[J_\pm, T_k^q] = \hbar\sqrt{(k \mp q)(k \pm q + 1)}\; T_k^{q\pm1}, \qquad [J_z, T_k^q] = \hbar q\, T_k^q.$$
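These commutation relations can be verified directly for a concrete rank-1 example: the spherical components of $\vec{J}$ itself on the $j = 1$ space. A sketch with $\hbar = 1$ (the dictionary `T` below is just a convenient container for the three components):

```python
import numpy as np

# Spin-1 matrices (hbar = 1).
Jp = np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Jm = Jp.conj().T
Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / (2j)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# Spherical components of the vector operator J: a rank k = 1 spherical tensor.
T = {+1: -(Jx + 1j * Jy) / np.sqrt(2),
      0: Jz,
     -1: (Jx - 1j * Jy) / np.sqrt(2)}
comm = lambda A, B: A @ B - B @ A

k = 1
for q in (-1, 0, 1):
    # [Jz, T^q] = q T^q
    assert np.allclose(comm(Jz, T[q]), q * T[q])
    if q < k:   # [J+, T^q] = sqrt((k-q)(k+q+1)) T^{q+1}
        assert np.allclose(comm(Jp, T[q]),
                           np.sqrt((k - q) * (k + q + 1)) * T[q + 1])
    if q > -k:  # [J-, T^q] = sqrt((k+q)(k-q+1)) T^{q-1}
        assert np.allclose(comm(Jm, T[q]),
                           np.sqrt((k + q) * (k - q + 1)) * T[q - 1])
```

This is exactly Sakurai’s observation in action: the set of commutators alone pins down the spherical-tensor structure.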
Sakurai observes that this set of commutation relations could be taken as the definition of the spherical tensors.
Notational note: we have followed Shankar here in having the rank $k$ as a subscript, the “magnetic” quantum number $q$ as a superscript, the same convention used for the spherical harmonics (but not for the $D$ matrices!). Sakurai, Baym and others have the rank above, usually in parentheses, and the magnetic number below. Fortunately, all use $k$ for rank and $q$ for magnetic quantum number.
A Spherical Vector
The $j = 1$ angular momentum eigenkets are just the familiar spherical harmonics
$$Y_1^0 = \sqrt{\frac{3}{4\pi}}\,\frac{z}{r}, \qquad Y_1^{\pm1} = \mp\sqrt{\frac{3}{8\pi}}\,\frac{x \pm iy}{r}.$$
The rotation operator will transform $(x, y, z)$ as an ordinary vector in three-space, and this is evidently equivalent to the transformation of the kets,
$$U|1, m\rangle = \sum_{m'} D^{(1)}_{m'm}\,|1, m'\rangle.$$
It follows that the spherical representation of a three-vector $(V_x, V_y, V_z)$ has the form:
$$T_1^{\pm1} = \mp\frac{V_x \pm iV_y}{\sqrt{2}}, \qquad T_1^0 = V_z.$$
In line with spherical tensor notation, the components are denoted $T_1^q$, $q = 1, 0, -1$.
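The correspondence between the spherical components of $\vec{r}$ and the $l = 1$ spherical harmonics can be checked numerically; a sketch for a unit vector at sample angles $(\theta, \phi)$:

```python
import numpy as np

# A unit vector at arbitrary sample angles.
theta, phi = 0.9, 2.1
x = np.sin(theta) * np.cos(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(theta)

# Spherical components of (x, y, z): r^{+-1} = -+(x +- iy)/sqrt(2), r^0 = z.
spherical = {+1: -(x + 1j * y) / np.sqrt(2),
              0: z,
             -1: (x - 1j * y) / np.sqrt(2)}

# The l = 1 spherical harmonics at the same angles.
Y1 = {+1: -np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(1j * phi),
       0:  np.sqrt(3 / (4 * np.pi)) * np.cos(theta),
      -1:  np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(-1j * phi)}

# Each spherical component is sqrt(4 pi / 3) times the matching Y_1^q.
for q in (-1, 0, 1):
    assert np.isclose(spherical[q], np.sqrt(4 * np.pi / 3) * Y1[q])
```

So, up to the overall normalization $\sqrt{4\pi/3}$, a vector’s spherical components really are the $Y_1^q$ evaluated along its direction.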
Matrix Elements of Tensor Operators between Angular Momentum Eigenkets
By definition, an irreducible tensor operator $T_k^q$ transforms under rotation like an angular momentum eigenket $|k, q\rangle$. Therefore, rotating the ket $T_k^q|j, m\rangle$,
$$U\, T_k^q\, |j, m\rangle = U\, T_k^q\, U^{-1}\, U\, |j, m\rangle = \sum_{q'}\sum_{m'} D^{(k)}_{q'q}\, D^{(j)}_{m'm}\, T_k^{q'}|j, m'\rangle.$$
The product of the two $D$ matrices appearing is precisely the set of coefficients to rotate the direct product of eigenkets $|k, q\rangle \otimes |j, m\rangle$, where $|k, q\rangle$ is the angular momentum eigenket having $j = k$, $j_z = q$.
We have met this direct product of two angular momentum eigenkets before: this is just a system having two angular momenta, such as orbital plus spin angular momenta. So we see that $T_k^q$ acting on $|j, m\rangle$ generates a state whose rotational behavior is that of adding an angular momentum $k$ to an angular momentum $j$.
To link up (more or less) with Shankar’s notation: our direct product state $|k, q\rangle \otimes |j, m\rangle$ is the same as $|k, q; j, m\rangle$ in the notation for a product state of two angular momenta (possibly including spins). Such a state can be written as a sum over states of the form $|j_{\mathrm{tot}}, m_{\mathrm{tot}}; k, j\rangle$, where this denotes a state of total angular momentum $j_{\mathrm{tot}}$, $z$-direction component $m_{\mathrm{tot}}$, made up of two spins having total angular momentum $k, j$ respectively.
This is the standard Clebsch-Gordan sum:
$$|k, q; j, m\rangle = \sum_{j_{\mathrm{tot}} = |k - j|}^{k + j}\;\sum_{m_{\mathrm{tot}}} |j_{\mathrm{tot}}, m_{\mathrm{tot}}; k, j\rangle\langle j_{\mathrm{tot}}, m_{\mathrm{tot}}; k, j|k, q; j, m\rangle.$$
The summed terms give a unit operator within this $(2k+1)(2j+1)$ dimensional space; the term $\langle j_{\mathrm{tot}}, m_{\mathrm{tot}}; k, j|k, q; j, m\rangle$ is a Clebsch-Gordan coefficient. The only nonzero coefficients have $m_{\mathrm{tot}} = q + m$ and $j_{\mathrm{tot}}$ restricted as noted, so for given $q, m$ we just set $m_{\mathrm{tot}} = q + m$, we don’t sum over $m_{\mathrm{tot}}$, and the sum over $j_{\mathrm{tot}}$ begins at $|k - j|$ or $|m_{\mathrm{tot}}|$, whichever is greater.
Translating into our notation, and cleaning up,
$$T_k^q\, |j, m\rangle = \sum_{j_{\mathrm{tot}}} |j_{\mathrm{tot}}, q + m; k, j\rangle\langle j_{\mathrm{tot}}, q + m; k, j|k, q; j, m\rangle.$$
We are now able to evaluate the angular component of the matrix element of a spherical tensor operator between angular momentum eigenkets: we see that it will only be nonzero for $m' = q + m$, with $j'$ at least $|k - j|$ and at most $k + j$.
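These selection rules are easy to check symbolically with sympy’s Clebsch-Gordan coefficients (the specific $j, m, k, q$ values below are just illustrative):

```python
from sympy import S
from sympy.physics.quantum.cg import CG

# CG(j1, m1, j2, m2, j', m') is the coefficient <j1, m1; j2, m2 | j', m'>.
# Take a rank k = 1 tensor acting on a j = 1 state, with q = 1, m = 0.
j, m, k, q = S(1), S(0), S(1), S(1)

# m' must equal q + m: any other m' gives a vanishing coefficient.
assert CG(j, m, k, q, S(2), S(0)).doit() == 0
# j' must lie between |k - j| = 0 and k + j = 2: j' = 3 vanishes.
assert CG(j, m, k, q, S(3), q + m).doit() == 0
# An allowed combination, j' = 2 with m' = q + m = 1, is nonzero.
assert CG(j, m, k, q, S(2), q + m).doit() != 0
```

The first two assertions are exactly the $m' = q + m$ rule and the triangle rule stated above.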
The Wigner-Eckart Theorem
At this point, we must bear in mind that these tensor operators are not necessarily just functions of angle. For example, the position operator is a spherical vector multiplied by the radial variable $r$, and kets specifying atomic eigenstates will include radial quantum numbers as well as angular momentum, so the matrix element of a tensor between two states will have the form
$$\langle \alpha', j', m'|T_k^q|\alpha, j, m\rangle,$$
where the $j$’s and $m$’s denote the usual angular momentum eigenstates and the $\alpha$’s are nonangular quantum numbers, such as those for radial states.
The basic point of the Wigner-Eckart theorem is that the angular dependence of these matrix elements can be factored out, and it is given by the Clebsch-Gordan coefficients.
Having factored it out, the remaining dependence, which is only on the total angular momentum in each of the kets, not the relative orientation (and of course on the $\alpha$’s), is traditionally written as a bracket with double lines, that is,
$$\langle \alpha', j', m'|T_k^q|\alpha, j, m\rangle = \langle j, m; k, q|j', m'\rangle\,\frac{\langle \alpha', j'\|T_k\|\alpha, j\rangle}{\sqrt{2j + 1}}.$$
The denominator $\sqrt{2j + 1}$ is the conventional normalization of the double-bar matrix element. The proof is given in, for example, Sakurai (page 239) and is not that difficult. The basic strategy is to put the defining identities
$$[J_\pm, T_k^q] = \hbar\sqrt{(k \mp q)(k \pm q + 1)}\; T_k^{q\pm1}, \qquad [J_z, T_k^q] = \hbar q\, T_k^q$$
between bras and kets, then get rid of the $J_\pm, J_z$ by having them operate on the bra or ket. This generates a series of linear equations for matrix elements with variables differing by one, and in fact this set of linear equations is identical to the set that generates the Clebsch-Gordan coefficients, so we must conclude that these spherical tensor matrix elements, ranging over possible $m', m$ and $q$ values, are exactly proportional to the Clebsch-Gordan coefficients, and that is the theorem.
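The theorem can be checked numerically for the rank-1 tensor built from $\vec{J}$ itself on the $j = 1$ space: every allowed matrix element, divided by its Clebsch-Gordan coefficient, should give one and the same reduced matrix element. A sketch (with $\hbar = 1$; `elem` is just a helper for indexing):

```python
import numpy as np
from sympy import S
from sympy.physics.quantum.cg import CG

# Spin-1 matrices and the spherical components of J (hbar = 1).
Jp = np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Jm = Jp.conj().T
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
T = {+1: -Jp / np.sqrt(2), 0: Jz, -1: Jm / np.sqrt(2)}

def elem(mp, q, m):
    """<1, m'| T^q |1, m>; row/column index 0, 1, 2 is m = 1, 0, -1."""
    return T[q][1 - mp, 1 - m]

ratios = []
for q in (-1, 0, 1):
    for m in (-1, 0, 1):
        mp = m + q                      # selection rule m' = m + q
        if abs(mp) > 1:
            continue
        cg = float(CG(S(1), S(m), S(1), S(q), S(1), S(mp)).doit())
        if cg != 0:
            ratios.append(elem(mp, q, m) / cg)

# Every allowed matrix element yields the same reduced matrix element:
assert np.allclose(ratios, ratios[0])
```

The common ratio is (up to normalization convention) the double-bar element $\langle 1\|J\|1\rangle$: all the $m', m, q$ dependence sits in the Clebsch-Gordan coefficients, which is the content of the theorem.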
A Few Hints for Shankar’s problem 15.3.3: that first matrix element comes from adding a spin $j$ to a spin 1, writing the usual maximum-$m$ state, applying the lowering operator to both sides to get the total angular momentum $j + 1$, $m = j$ state, then finding the same-$m$ state orthogonal to that, which corresponds to total angular momentum $j$ (instead of $j + 1$).
For the operator $J_z$, the Wigner-Eckart matrix element simplifies because $J_z$ cannot affect $m$, and also it commutes with $J^2$, so it cannot change the total angular momentum.
So, in the Wigner-Eckart equation, replace the matrix element on the left-hand side by $\langle \alpha, j, m|J_z|\alpha, j, m\rangle$, which is just $\hbar m$. The result of (1) should follow.
(2) First note that a scalar operator cannot change $m$. Since its matrix element is independent of $m$, we can take $m = j$ to evaluate it.