
Tensor Operators

Michael Fowler, 1/13/08

Introduction: Cartesian Vectors and Tensors

Physics is full of vectors: $\vec{x}$, $\vec{p}$, $\vec{L}$ and so on.  Classically, a (three-dimensional) vector is defined by its properties under rotation: the three components corresponding to the Cartesian x, y, and z axes transform as

$$V_i \to \sum_j R_{ij}\,V_j,$$

with the usual rotation matrix, for example

$$R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

for rotation through an angle $\theta$ about the z-axis.  (We’ll use $(x, y, z)$ and $(x_1, x_2, x_3)$ interchangeably.)
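
As a quick concreteness check (not part of the lecture; a short numpy sketch of my own), the rotation $R_z(\theta)$ written above is applied to a vector, and the orthogonality and length-preservation of the rotation are verified:

```python
import numpy as np

def Rz(theta):
    """Rotation matrix about the z-axis, as written above."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

theta = 0.3
V = np.array([1.0, 2.0, 3.0])
V_rot = Rz(theta) @ V                                   # V'_i = sum_j R_ij V_j

print(np.allclose(Rz(theta) @ Rz(theta).T, np.eye(3)))  # True: R is orthogonal
print(np.isclose(V_rot @ V_rot, V @ V))                 # True: length preserved
```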

 

A tensor is a generalization of such a vector to an object with more than one suffix, such as, for example, $T_{ij}$ or $T_{ijk}$ (having 9 and 27 components respectively in three dimensions), with the requirement that these components mix among themselves under rotation by each individual suffix following the vector rule, for example

$$T_{ijk} \to \sum_{l,m,n} R_{il}\,R_{jm}\,R_{kn}\,T_{lmn},$$

where R is the same rotation matrix that transforms a vector.  Tensors written in this way are called Cartesian tensors (since the suffixes refer to Cartesian axes).  The number of suffixes is the rank of the Cartesian tensor; a rank $n$ tensor has of course $3^n$ components.
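
The suffix-by-suffix rule is easy to check numerically; the sketch below (my own example tensors, using numpy's einsum, not anything from the lecture) rotates a rank-two and a rank-three tensor and compares with rotating the underlying vectors directly:

```python
import numpy as np

def Rz(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R = Rz(0.7)
U = np.array([1.0, -2.0, 0.5])
V = np.array([0.3, 1.1, -4.0])

T2 = np.outer(U, V)                                    # rank 2: T_ij = U_i V_j
T2_rot = np.einsum('ik,jl,kl->ij', R, R, T2)           # T'_ij = R_ik R_jl T_kl
print(np.allclose(T2_rot, np.outer(R @ U, R @ V)))     # True: same as rotating U, V first

T3 = np.einsum('i,j,k->ijk', U, V, U)                  # rank 3: 27 components
T3_rot = np.einsum('il,jm,kn,lmn->ijk', R, R, R, T3)   # each suffix follows the vector rule
print(np.allclose(T3_rot, np.einsum('i,j,k->ijk', R @ U, R @ V, R @ U)))  # True
```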

 

Tensors are common in physics: they are essential in describing stress, distortion and flow in solids and liquids.  Tensor forces play an important role in the dynamics of  the deuteron, and in fact tensors arise for any charge distribution more complicated than a dipole.  Going to four dimensions, and generalizing from rotations to Lorentz transformations, Maxwell’s equations are most naturally expressed in tensor form, and tensors are central to General Relativity.

 

To get back to non-relativistic physics, since the defining property of a tensor is its behavior under rotations, spherical polar coordinates are sometimes a more natural basis than Cartesian coordinates.  In fact, in that basis tensors (called spherical tensors) have rotational properties closely related to those of angular momentum eigenstates, as will become clear in the following sections. 

 

 

The Rotation Operator in Angular Momentum Eigenket Space

As a preliminary to discussing general tensors in quantum mechanics, we briefly review the rotation operator and quantum vector operators.  (A full treatment is given in my 751 lecture.)

 

Recall that the rotation operator turning a ket through an angle $\vec{\theta}$ (the vector direction denotes the axis of rotation, its magnitude the angle turned through) is

$$U(R(\vec{\theta})) = e^{-i\,\vec{\theta}\cdot\vec{J}/\hbar}.$$

Since $\vec{J}$, and therefore the rotation operator, commutes with the total angular momentum squared $J^2$, we can restrict our attention to a given total angular momentum $j$, having as usual an orthonormal basis set $|j, m\rangle$, or $|m\rangle$ for short, with $2j+1$ components.  A general ket $|\alpha\rangle$ in this space is then:

$$|\alpha\rangle = \sum_{m=-j}^{j} \alpha_m\,|j, m\rangle.$$

Rotating this ket,

$$|\alpha\rangle \to |\alpha'\rangle = e^{-i\,\vec{\theta}\cdot\vec{J}/\hbar}\,|\alpha\rangle.$$

Putting in a complete set of states, and using the standard notation for matrix elements of the rotation operator,

$$|\alpha'\rangle = \sum_{m'} |j, m'\rangle\langle j, m'|\,e^{-i\,\vec{\theta}\cdot\vec{J}/\hbar}\sum_m \alpha_m\,|j, m\rangle = \sum_{m'} |j, m'\rangle \sum_m D^{(j)}_{m'm}(R)\,\alpha_m.$$

$D^{(j)}_{m'm}(R) = \langle j, m'|\,e^{-i\,\vec{\theta}\cdot\vec{J}/\hbar}\,|j, m\rangle$ is standard notation (see the earlier lecture).

 

So the ket rotation transformation is

$$\alpha'_{m'} = \sum_m D^{(j)}_{m'm}(R)\,\alpha_m, \qquad \text{or} \qquad \alpha' = D\alpha,$$

with the usual matrix-multiplication rules.
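
To see $\alpha' = D\alpha$ in numbers, here is a minimal sketch (assuming scipy for the matrix exponential; units with $\hbar = 1$; not part of the lecture) that builds $D^{(1)}$ for spin one and checks that it is unitary and norm-preserving:

```python
import numpy as np
from scipy.linalg import expm

# Spin-1 matrices in the basis m = +1, 0, -1 (hbar = 1).
sq2 = np.sqrt(2.0)
Jp = np.array([[0, sq2, 0], [0, 0, sq2], [0, 0, 0]], dtype=complex)
Jm = Jp.conj().T
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j)

def D1(theta):
    """D^(1)_{m'm} = <1,m'| exp(-i theta.J) |1,m> for a rotation vector theta."""
    tx, ty, tz = theta
    return expm(-1j * (tx * Jx + ty * Jy + tz * Jz))

alpha = np.array([0.2, 0.5 + 0.1j, -0.3], dtype=complex)   # components alpha_m
D = D1([0.1, -0.4, 0.25])
alpha_rot = D @ alpha                                      # alpha'_{m'} = sum_m D_{m'm} alpha_m

print(np.allclose(D.conj().T @ D, np.eye(3)))                            # True: D is unitary
print(np.isclose(np.vdot(alpha_rot, alpha_rot), np.vdot(alpha, alpha)))  # True: norm preserved
```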

Rotating a Basis Ket

Now suppose we apply the rotation operator to one of the basis kets $|j, m\rangle$: what is the result?

$$U(R)\,|j, m\rangle = \sum_{m'} |j, m'\rangle\langle j, m'|\,U(R)\,|j, m\rangle = \sum_{m'} D^{(j)}_{m'm}(R)\,|j, m'\rangle.$$

Note the reversal of $m, m'$ compared with the operation on the set of component coefficients of the general ket.

 

(You may be thinking: wait a minute, $|j, m\rangle$ is a ket in the space—it can be written $\sum_{m'}\alpha_{m'}|j, m'\rangle$ with $\alpha_{m'} = \delta_{m'm}$, so we could use the previous rule $\alpha'_{m''} = \sum_{m'} D^{(j)}_{m''m'}(R)\,\alpha_{m'}$ to get $\alpha'_{m''} = \sum_{m'} D^{(j)}_{m''m'}(R)\,\delta_{m'm} = D^{(j)}_{m''m}(R)$.  Reassuringly, this leads to the same result we just found.)

Rotating an Operator, Scalar Operators

Just as in the Schrödinger versus Heisenberg formulations, we can either apply the rotation operator to the kets and leave the operators alone, or we can leave the kets alone, and rotate the operators:

$$A \to U^\dagger(R)\,A\,U(R) = e^{i\,\vec{\theta}\cdot\vec{J}/\hbar}\,A\,e^{-i\,\vec{\theta}\cdot\vec{J}/\hbar},$$

which will yield the same matrix elements, so the same physics.

 

A scalar operator is an operator which is invariant under rotations, for example the Hamiltonian of a particle in a spherically symmetric potential. (There are many less trivial examples of scalar operators, such as the dot product of two vector operators, as in a spin-orbit coupling.) 

 

The transformation of an operator under an infinitesimal rotation is given by:

$$A \to U^\dagger(R)\,A\,U(R) = \left(1 + \frac{i\,\delta\vec{\theta}\cdot\vec{J}}{\hbar}\right) A \left(1 - \frac{i\,\delta\vec{\theta}\cdot\vec{J}}{\hbar}\right)$$

from which

$$A \to A + \frac{i}{\hbar}\left[\,\delta\vec{\theta}\cdot\vec{J},\ A\,\right].$$

It follows that a scalar operator S, which does not change at all, must commute with all the components of the angular momentum operator, and hence must have a common set of eigenkets with, say, $J^2$ and $J_z$.
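
A quick numerical confirmation (an illustrative sketch with spin-one matrices and $\hbar = 1$, not part of the lecture): $J^2$, the simplest scalar operator, commutes with each component of $\vec{J}$.

```python
import numpy as np

sq2 = np.sqrt(2.0)
Jp = np.array([[0, sq2, 0], [0, 0, sq2], [0, 0, 0]], dtype=complex)
Jm = Jp.conj().T
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j)

J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz               # here equal to j(j+1) = 2 times the identity
for Ji in (Jx, Jy, Jz):
    print(np.allclose(J2 @ Ji - Ji @ J2, 0))   # True, True, True
```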

Vector Operators:  Definition and Commutation Properties

A quantum mechanical vector operator $\vec{V}$ is defined by requiring that the expectation values of its three components in any state transform like the components of a classical vector under rotation.

 

It follows from this that the operator itself must transform vectorially,

$$V'_i = U^\dagger(R)\,V_i\,U(R) = \sum_j R_{ij}\,V_j.$$

 

 

To see what this implies, it is easiest to look at a simple case.  For an infinitesimal rotation about the z-axis,

$$R_z(\delta\theta) = \begin{pmatrix} 1 & -\delta\theta & 0 \\ \delta\theta & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

the vector transforms

$$V_x \to V_x - \delta\theta\,V_y, \qquad V_y \to V_y + \delta\theta\,V_x, \qquad V_z \to V_z.$$

The unitary Hilbert space operator U corresponding to this rotation is $U = e^{-i\,\delta\theta\,J_z/\hbar}$, so

$$U^\dagger\,V_i\,U = \left(1 + \frac{i\,\delta\theta\,J_z}{\hbar}\right)V_i\left(1 - \frac{i\,\delta\theta\,J_z}{\hbar}\right) = V_i + \frac{i\,\delta\theta}{\hbar}\,\left[\,J_z,\ V_i\,\right].$$

The requirement that the two transformations above, the infinitesimal classical rotation generated by $R_z(\delta\theta)$ and the infinitesimal unitary transformation $U^\dagger\,V_i\,U$, are in fact the same thing yields the commutation relations of a vector operator with angular momentum:

$$[J_z, V_x] = i\hbar\,V_y, \qquad [J_z, V_y] = -i\hbar\,V_x, \qquad [J_z, V_z] = 0.$$

From this result and its cyclic equivalents, the components of any vector operator must satisfy:

$$[\,V_i,\ J_j\,] = i\hbar\,\varepsilon_{ijk}\,V_k.$$

 

Exercise: verify that the components of $\vec{L} = \vec{r}\times\vec{p}$ do in fact satisfy these commutation relations.
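
Independently of the exercise, one easy numerical check (a sketch with $\hbar = 1$; the helper `jmats` is my own, not from the text) is to take $\vec{V} = \vec{J}$ itself, which is certainly a vector operator, and verify $[V_i, J_j] = i\hbar\,\varepsilon_{ijk}V_k$ for spin one:

```python
import numpy as np

def jmats(j):
    """Angular momentum matrices for spin j, basis m = j, j-1, ..., -j (hbar = 1)."""
    m = np.arange(j, -j - 1, -1.0)
    Jz = np.diag(m).astype(complex)
    Jp = np.diag(np.sqrt((j - m[1:]) * (j + m[1:] + 1)), 1).astype(complex)
    Jm = Jp.conj().T
    return (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j), Jz

J = list(jmats(1))                        # take V = J as the test vector operator
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

ok = all(np.allclose(J[i] @ J[j] - J[j] @ J[i],                        # [V_i, J_j]
                     1j * sum(eps[i, j, k] * J[k] for k in range(3)))  # i eps_ijk V_k
         for i in range(3) for j in range(3))
print(ok)   # True
```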

 

(Note:  Confusingly, there is a slightly different situation in which we need to rotate an operator, and it gives an opposite result.  Suppose an operator T acts on a ket $|\alpha\rangle$ to give the ket $T|\alpha\rangle$.  For kets $|\alpha\rangle$ and $T|\alpha\rangle$ to go to $U|\alpha\rangle$ and $UT|\alpha\rangle$ respectively under a rotation U, T itself must transform as $T \to T' = UTU^\dagger$ (recall $U^\dagger = U^{-1}$).  The point is that this is a Schrödinger rather than a Heisenberg-type transformation: we’re rotating the kets, not the operators.)

Cartesian Tensor Operators

From the definition given earlier, under rotation the elements of a rank two Cartesian tensor transform as:

$$T_{ij} \to T'_{ij} = \sum_{k,l} R_{ik}\,R_{jl}\,T_{kl},$$

where $R_{ij}$ is the rotation matrix for a vector.

 

It is illuminating to consider a particular example of a second-rank tensor, $T_{ij} = U_i V_j$, where $\vec{U}$ and $\vec{V}$ are ordinary three-dimensional vectors.

 

The problem with this tensor is that it is reducible, using the word in the same sense as in our discussion of group representations in discussing addition of angular momenta.  That is to say, combinations of the elements can be arranged in sets such that rotations operate only within these sets.  This is made evident by writing:

$$U_i V_j = \frac{\vec{U}\cdot\vec{V}}{3}\,\delta_{ij} + \frac{U_i V_j - U_j V_i}{2} + \left(\frac{U_i V_j + U_j V_i}{2} - \frac{\vec{U}\cdot\vec{V}}{3}\,\delta_{ij}\right).$$

The first term, proportional to the dot product of the two vectors, is clearly a scalar under rotation; the second term, which is an antisymmetric tensor, has three independent components, which are the vector components of the vector product $\vec{U}\times\vec{V}$; and the third term is a symmetric traceless tensor, which has five independent components.  Altogether, then, there are 1 + 3 + 5 = 9 components, as required.
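
Here is a small numpy sketch of that decomposition (my own illustration, with arbitrary vectors U and V): it reassembles the tensor, checks the traceless part, and confirms that the antisymmetric piece packages the cross product.

```python
import numpy as np

U = np.array([1.0, -2.0, 0.5])
V = np.array([0.3, 1.1, -4.0])
T = np.outer(U, V)                                      # T_ij = U_i V_j

scalar_part   = (np.dot(U, V) / 3) * np.eye(3)          # 1 component: U.V
antisym_part  = (T - T.T) / 2                           # 3 components
sym_traceless = (T + T.T) / 2 - scalar_part             # 5 components

print(np.allclose(scalar_part + antisym_part + sym_traceless, T))   # True
print(np.isclose(np.trace(sym_traceless), 0))                       # True

# A_ij = (1/2) eps_ijk (U x V)_k: the three independent antisymmetric
# components are the components of the cross product.
c = np.cross(U, V)
A_from_cross = 0.5 * np.array([[0,     c[2], -c[1]],
                               [-c[2], 0,     c[0]],
                               [c[1], -c[0],  0]])
print(np.allclose(antisym_part, A_from_cross))                       # True
```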

Spherical Tensors

Notice the numbers of elements of these irreducible subsets: 1, 3, 5.  These are exactly the numbers of elements of angular momentum representations for j = 0, 1, 2!

 

This is of course no coincidence: as we shall make more explicit below, a three-dimensional vector is mathematically isomorphic to a quantum spin one; the tensor we have written is therefore a direct product of two spins one, so, exactly as we argued in discussing addition of angular momenta, it will be a reducible representation of the rotation group, and will be a sum of representations corresponding to the possible total angular momenta from adding two spins one, that is, j = 0, 1, 2.

 

As discussed earlier, the matrix elements of the rotation operator

$$U(R(\vec{\theta})) = e^{-i\,\vec{\theta}\cdot\vec{J}/\hbar}$$

within a definite j subspace are written

$$D^{(j)}_{m'm}(R) = \langle j, m'|\,e^{-i\,\vec{\theta}\cdot\vec{J}/\hbar}\,|j, m\rangle,$$

so under the rotation operator a basis state $|j, m\rangle$ transforms as:

$$U(R)\,|j, m\rangle = \sum_{m'} D^{(j)}_{m'm}(R)\,|j, m'\rangle.$$

The essential point is that these irreducible subsets into which Cartesian tensors decompose under rotation (generalizing from our one example) form a more natural basis set of tensors for problems with rotational symmetries.

 

Definition: We define a spherical tensor of rank k as a set of 2k + 1 operators $T^q_k$, $q = k, k-1, \ldots, -k$, such that under rotation they transform among themselves with exactly the same matrix of coefficients as that for the 2j + 1 angular momentum eigenkets $|j, m\rangle$ for k = j,  that is,

$$U(R)\,T^q_k\,U^\dagger(R) = \sum_{q'} D^{(k)}_{q'q}(R)\,T^{q'}_k.$$

 

To see the properties of these spherical tensors, it is useful to evaluate the above equation for infinitesimal rotations, for which

$$D^{(k)}_{q'q}(R) = \langle k, q'|\left(I - \frac{i\,\vec{\epsilon}\cdot\vec{J}}{\hbar}\right)|k, q\rangle = \delta_{q'q} - \frac{i}{\hbar}\,\vec{\epsilon}\cdot\langle k, q'|\,\vec{J}\,|k, q\rangle.$$

Specifically, consider an infinitesimal rotation $U = 1 - i\epsilon J_+/\hbar$.  (Strictly speaking, this is not a real rotation, but the formalism doesn’t care, and the result we derive can be confirmed by rotation about the x and y directions and adding appropriate terms.)

 

The equation is

$$\left(1 - \frac{i\,\epsilon J_+}{\hbar}\right) T^q_k \left(1 + \frac{i\,\epsilon J_+}{\hbar}\right) = \sum_{q'} \langle k, q'|\left(1 - \frac{i\,\epsilon J_+}{\hbar}\right)|k, q\rangle\; T^{q'}_k,$$

and equating terms linear in $\epsilon$ (with the analogous steps for $J_-$ and $J_z$ giving the companion relations),

$$[\,J_\pm,\ T^q_k\,] = \hbar\sqrt{(k\mp q)(k\pm q+1)}\;T^{q\pm1}_k, \qquad [\,J_z,\ T^q_k\,] = \hbar\,q\,T^q_k.$$

Sakurai observes that this set of commutation relations could be taken as the definition of the spherical tensors.
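
The square-root coefficients are nothing but the matrix elements of $J_\pm$ within the spin-$k$ multiplet, which is where they entered the derivation above; a brief numerical sketch ($\hbar = 1$; the matrix construction is my own, not from the text) makes the point for $k = 2$:

```python
import numpy as np

k = 2
m = np.arange(k, -k - 1, -1.0)                            # basis ordering m = k, k-1, ..., -k
Jp = np.diag(np.sqrt((k - m[1:]) * (k + m[1:] + 1)), 1)   # <k, m+1| J+ |k, m> on the superdiagonal

for q in range(-k, k):                                    # q = -k, ..., k-1, so q+1 stays in range
    row, col = int(k - (q + 1)), int(k - q)
    print(q, Jp[row, col], np.sqrt((k - q) * (k + q + 1)))   # the two numbers agree
```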

 

Notational note: we have followed Shankar here in having the rank k as a subscript, the “magnetic” quantum number q as a superscript, the same convention used for the spherical harmonics (but not for the D matrices!)  Sakurai, Baym and others have the rank above, usually in parentheses, and the magnetic number below.  Fortunately, all use k for rank and q for magnetic quantum number.

A Spherical Vector

The j = 1 angular momentum eigenkets are just the familiar spherical harmonics

$$Y_1^1 = -\sqrt{\frac{3}{8\pi}}\,\frac{x+iy}{r}, \qquad Y_1^0 = \sqrt{\frac{3}{4\pi}}\,\frac{z}{r}, \qquad Y_1^{-1} = \sqrt{\frac{3}{8\pi}}\,\frac{x-iy}{r}.$$

The rotation operator will transform (x, y, z) as an ordinary vector in three-space, and this is evidently equivalent to

$$U(R)\,|1, m\rangle = \sum_{m'} D^{(1)}_{m'm}(R)\,|1, m'\rangle.$$

It follows that the spherical representation of a three vector $\vec{V}$ has the form:

$$V_1^{\pm 1} = \mp\frac{V_x \pm iV_y}{\sqrt{2}}, \qquad V_1^{0} = V_z.$$

In line with spherical tensor notation, the components are denoted $T^q_1$, $q = 1, 0, -1$.
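
As a check that these combinations do satisfy the spherical-tensor commutation relations of the previous section, here is a sketch taking $\vec{V} = \vec{J}$ for spin one ($\hbar = 1$; an illustration of my own, not from the text):

```python
import numpy as np

sq2 = np.sqrt(2.0)
Jp = np.array([[0, sq2, 0], [0, 0, sq2], [0, 0, 0]], dtype=complex)
Jm = Jp.conj().T
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / (2 * 1j)

# Spherical components T^q_1 of the vector operator V = J.
T = {+1: -(Jx + 1j * Jy) / sq2, 0: Jz, -1: (Jx - 1j * Jy) / sq2}

k = 1
for q in (-1, 0, 1):
    print(np.allclose(Jz @ T[q] - T[q] @ Jz, q * T[q]))          # [Jz, T^q] = q T^q
    rhs = np.sqrt((k - q) * (k + q + 1)) * T.get(q + 1, 0 * Jz)
    print(np.allclose(Jp @ T[q] - T[q] @ Jp, rhs))               # [J+, T^q] = sqrt(...) T^{q+1}
```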

Matrix Elements of Tensor Operators Between Angular Momentum Eigenkets

By definition, an irreducible tensor operator $T^q_k$ transforms under rotation like an angular momentum eigenket $|k, q\rangle$.  Therefore, rotating the ket $T^q_k\,|j, m\rangle$,

$$U\,T^q_k\,|j, m\rangle = U\,T^q_k\,U^\dagger\;U\,|j, m\rangle = \sum_{q'}\sum_{m'} D^{(k)}_{q'q}\,D^{(j)}_{m'm}\;T^{q'}_k\,|j, m'\rangle.$$

 

The product of the two D matrices appearing is precisely the set of coefficients needed to rotate the direct product of eigenkets $|k, q\rangle\otimes|j, m\rangle$, where $|k, q\rangle$ is the angular momentum eigenket having j = k, m = q.

 

We have met this direct product of two angular momentum eigenkets before: this is just a system having two angular momenta, such as orbital plus spin angular momenta.   So we see that $T^q_k$ acting on $|j, m\rangle$ generates a state having total angular momentum the sum of $(k, q)$ and $(j, m)$.

 

To link up (more or less) with Shankar’s notation: our direct product state $|k, q\rangle\otimes|j, m\rangle$ is the same as $|k\,q;\,j\,m\rangle$ in the notation $|j_1 m_1; j_2 m_2\rangle$ for a product state of two angular momenta (possibly including spins). Such a state can be written as a sum over states of the form $|j_{\rm tot}, m_{\rm tot}; j_1, j_2\rangle$, where this denotes a state of total angular momentum $j_{\rm tot}$, z-direction component $m_{\rm tot}$, made up of two spins having total angular momentum $j_1$, $j_2$ respectively.

 

This is the standard Clebsch-Gordan sum:

$$|j_1 m_1; j_2 m_2\rangle = \sum_{j_{\rm tot}=|j_1-j_2|}^{j_1+j_2}\;\sum_{m_{\rm tot}} |j_{\rm tot}, m_{\rm tot}; j_1, j_2\rangle\,\langle j_{\rm tot}, m_{\rm tot}; j_1, j_2\,|\,j_1 m_1; j_2 m_2\rangle.$$

The summed terms give a unit operator within this $(2j_1 + 1)(2j_2 + 1)$ dimensional space, and the term $\langle j_{\rm tot}, m_{\rm tot}; j_1, j_2\,|\,j_1 m_1; j_2 m_2\rangle$ is a Clebsch-Gordan coefficient.  The only nonzero coefficients have $m_{\rm tot} = m_1 + m_2$, and $j_{\rm tot}$ restricted as noted, so for given $m_1, m_2$ we just set $m_{\rm tot} = m_1 + m_2$, we don’t sum over $m_{\rm tot}$, and the sum over $j_{\rm tot}$ begins at $|m_{\rm tot}|$ (or $|j_1 - j_2|$, whichever is larger).

 

Translating into our $|k, q\rangle\otimes|j, m\rangle$ notation, and cleaning up,

$$|k\,q;\,j\,m\rangle = \sum_{j_{\rm tot}} |j_{\rm tot},\, q+m;\, k, j\rangle\,\langle j_{\rm tot},\, q+m;\, k, j\,|\,k\, q;\, j\, m\rangle.$$

We are now able to evaluate the angular component of the matrix element of a spherical tensor operator between angular momentum eigenkets: we see that it will only be nonzero for $m_{\rm tot} = m_1 + m_2$ (that is, $q + m$), and $j_{\rm tot}$ at least $|m_{\rm tot}|$.
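
The selection rules just stated can be read off from the Clebsch-Gordan coefficients themselves. The sketch below (using sympy's CG class, an outside convenience rather than anything in the lecture) scans all candidate $(j_{\rm tot}, m_{\rm tot})$ for a rank-one tensor component $T^0_1$ acting on $|j = 2, m = 1\rangle$ and prints only the nonzero coefficients:

```python
from sympy.physics.quantum.cg import CG

k, q = 1, 0          # rank-1 tensor component, e.g. T^0_1
j, m = 2, 1          # acting on the state |j = 2, m = 1>

for jtot in range(abs(k - j), k + j + 1):
    for mtot in range(-jtot, jtot + 1):
        coeff = CG(k, q, j, m, jtot, mtot).doit()   # <k q; j m | jtot mtot>
        if coeff != 0:
            print(jtot, mtot, coeff)                # only mtot = q + m = 1 survives
```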

The Wigner-Eckart Theorem

At this point, we must bear in mind that these tensor operators are not necessarily just functions of angle.  For example, the position operator is a spherical vector multiplied by the radial variable r, and kets specifying atomic eigenstates will include radial quantum numbers as well as angular momentum, so the matrix element of a tensor between two states will have the form

$$\langle\alpha_2,\, j_2,\, m_2\,|\,T^q_k\,|\,\alpha_1,\, j_1,\, m_1\rangle,$$

where the j’s and m’s denote the usual angular momentum eigenstates and the $\alpha$’s are nonangular quantum numbers, such as those for radial states.

 

The basic point of the Wigner–Eckart theorem is that the angular dependence of these matrix elements can be factored out, and it is given by the Clebsch-Gordan coefficients. 

 

Having factored it out, the remaining dependence, which is only on the total angular momentum in each of the kets, not the relative orientation (and of course on the $\alpha$’s), is traditionally written as a bracket with double lines, that is,

$$\langle\alpha_2,\, j_2,\, m_2\,|\,T^q_k\,|\,\alpha_1,\, j_1,\, m_1\rangle = \langle j_2\, m_2\,|\,k\, q\,;\,j_1\, m_1\rangle\,\frac{\langle\alpha_2,\, j_2\,\|\,T_k\,\|\,\alpha_1,\, j_1\rangle}{\sqrt{2j_1+1}}.$$

The denominator is the conventional normalization of the double-bar matrix element. The proof is given in, for example, Sakurai (page 239) and is not that difficult.  The basic strategy is to put the defining identities

$$[\,J_\pm,\ T^q_k\,] = \hbar\sqrt{(k\mp q)(k\pm q+1)}\;T^{q\pm1}_k, \qquad [\,J_z,\ T^q_k\,] = \hbar\,q\,T^q_k$$

between $\langle\alpha_2, j_2, m_2|$ bras and $|\alpha_1, j_1, m_1\rangle$ kets, then get rid of the $J_\pm$ and $J_z$ by having them operate on the bra or ket.  This generates a series of linear equations for $\langle\alpha_2, j_2, m_2|T^q_k|\alpha_1, j_1, m_1\rangle$ matrix elements with m variables differing by one, and in fact this set of linear equations is identical to the set that generates the Clebsch-Gordan coefficients, so we must conclude that these spherical tensor matrix elements, ranging over possible m and j values, are exactly proportional to the Clebsch-Gordan coefficients—and that is the theorem.
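
As a numerical illustration of the theorem (not a proof; `jmats` and the choice $\vec{V} = \vec{J}$ with $j = 2$ are my own, $\hbar = 1$), the sketch below checks that every nonzero matrix element of the spherical components of $\vec{J}$ inside a single multiplet, divided by the corresponding Clebsch-Gordan coefficient, gives one and the same constant, i.e. a single reduced matrix element up to the conventional normalization:

```python
import numpy as np
from sympy.physics.quantum.cg import CG

def jmats(j):
    """J+, J-, Jz for spin j in the basis m = j, j-1, ..., -j (hbar = 1)."""
    m = np.arange(j, -j - 1, -1.0)
    Jp = np.diag(np.sqrt((j - m[1:]) * (j + m[1:] + 1)), 1).astype(complex)
    return Jp, Jp.conj().T, np.diag(m).astype(complex)

j = 2
Jp, Jm, Jz = jmats(j)
T = {+1: -Jp / np.sqrt(2), 0: Jz, -1: Jm / np.sqrt(2)}   # spherical components of V = J
ms = [2, 1, 0, -1, -2]                                   # basis ordering m = j, ..., -j

ratios = []
for q, Tq in T.items():
    for i2, m2 in enumerate(ms):
        for i1, m1 in enumerate(ms):
            cg = float(CG(1, q, j, m1, j, m2).doit())    # <1 q; j m1 | j m2>
            if abs(cg) > 1e-12:
                ratios.append(Tq[i2, i1] / cg)

print(np.allclose(ratios, ratios[0]))   # True: one common reduced matrix element
```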

 


 

A Few Hints for Shankar’s problem 15.3.3 (actually 15.3.4 in the new Second Edition, which looks identical to the old Second Edition, so watch out!): that first matrix element comes from adding a spin j to a spin 1, writing the usual maximum m state, applying the lowering operator to both sides to get the total angular momentum j + 1, m = j state, then finding the same m state orthogonal to that, which corresponds to total angular momentum j (instead of j + 1). 

 

For the operator J, the Wigner-Eckart matrix element simplifies because J cannot affect $\alpha$, and also it commutes with $J^2$, so cannot change the total angular momentum.

 

So, in the Wigner-Eckart equation, replace $T^q_k$ on the left-hand side by $J^0_1$, which is just $J_z$.  The result of (1) should follow.

 

(2) First note that a scalar operator cannot change m.  Since c is independent of A we can take A = J to find c.