# Tensor Operators

*Michael Fowler, UVa*

### Introduction: Cartesian Vectors and Tensors

Physics is full of vectors: $\overrightarrow{x},\text{\hspace{0.17em}}\overrightarrow{L},\text{\hspace{0.17em}}\overrightarrow{S}$ and so on. Classically, a (three-dimensional) vector is defined by its properties under rotation: the three components corresponding to the Cartesian $x,y,z$ axes transform as

${V}_{i}\to {\displaystyle \sum _{j}{R}_{ij}\text{\hspace{0.17em}}{V}_{j}}$,

with the usual rotation matrix, for example

${R}_{z}(\theta )=\left(\begin{array}{ccc}\mathrm{cos}\theta & -\mathrm{sin}\theta & 0\\ \mathrm{sin}\theta & \mathrm{cos}\theta & 0\\ 0& 0& 1\end{array}\right)$

for rotation about the $z$-axis. (We’ll use $\left(x,y,z\right)$ and $\left({x}_{1},{x}_{2},{x}_{3}\right)$ interchangeably.)
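As a quick check of the conventions (this numerical aside is not part of the original notes; it assumes Python with `numpy`), the rotation matrix above is orthogonal with unit determinant, and a quarter-turn about $z$ carries the $x$-axis into the $y$-axis:

```python
import numpy as np

def Rz(theta):
    """Rotation matrix about the z-axis, in the convention used above."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R = Rz(np.pi / 2)
assert np.allclose(R @ R.T, np.eye(3))                     # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)                   # proper rotation
assert np.allclose(R @ [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])   # x-axis -> y-axis
```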

A *tensor* is a
generalization of such a vector to an object with more than one suffix, such
as, for example, ${T}_{ij}$ or ${T}_{ijk}$ (having 9 and 27 components respectively in
three dimensions) with the requirement that these components mix among
themselves under rotation by each individual suffix following the vector rule, for
example

${T}_{ijk}\to {\displaystyle \sum _{l,m,n}{R}_{il}\text{\hspace{0.17em}}{R}_{jm}{R}_{kn}{T}_{lmn}}$

where $R$ is the same rotation matrix that transforms a
vector. Tensors written in this way are
called *Cartesian tensors* (since the
suffixes refer to Cartesian axes). The
number of suffixes is the *rank* of the
Cartesian tensor; a rank $n$ tensor of course has ${3}^{n}$ components.
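The multi-suffix transformation rule is conveniently written with `einsum`; here is a sketch (assuming `numpy`; not part of the original notes) that rotates a random rank-3 tensor with one $R$ per suffix and then undoes the rotation using ${R}^{-1}={R}^{T}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def Rz(theta):
    """Rotation about the z-axis, as in the text."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R = Rz(0.7)
T3 = rng.standard_normal((3, 3, 3))   # a rank-3 Cartesian tensor: 27 components

# T_ijk -> R_il R_jm R_kn T_lmn, one rotation matrix per suffix
T3_rot = np.einsum('il,jm,kn,lmn->ijk', R, R, R, T3)

# Rotating back with R^T = R^{-1} recovers the original tensor
T3_back = np.einsum('il,jm,kn,lmn->ijk', R.T, R.T, R.T, T3_rot)
assert np.allclose(T3_back, T3)
```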

Tensors are common in physics: they are essential in
describing stress, distortion and flow in solids and liquids. The inertia tensor is the basis for
analyzing angular motion in classical mechanics. Tensor *forces*
are important in the dynamics of the deuteron,
and in fact tensors arise for any charge distribution more complicated than a
dipole. Going to four dimensions, and
generalizing from rotations to Lorentz transformations, Maxwell’s equations are
most naturally expressed in tensor form, and tensors are central to General
Relativity.

To get back to non-relativistic physics, since the defining
property of a tensor is its behavior under rotations, spherical polar
coordinates are sometimes a more natural basis than Cartesian coordinates. In fact, in that basis tensors (called *spherical tensors*) have rotational
properties closely related to those of angular momentum eigenstates, as will
become clear in the following sections.

### The Rotation Operator in Angular Momentum Eigenket Space

As a preliminary to discussing general tensors in quantum mechanics, we briefly review the rotation operator and quantum vector operators. (A full treatment is given in my 751 lecture.)

Recall that the rotation operator turning a ket through an angle $\overrightarrow{\theta}$ (the vector direction denotes the axis of rotation, its magnitude the angle turned through) is

$U\left(R\left(\overrightarrow{\theta}\right)\right)={e}^{-\frac{i\overrightarrow{\theta}\cdot \overrightarrow{J}}{\hslash}}.$

Since $\overrightarrow{J}$ commutes with the total angular momentum
squared ${\overrightarrow{J}}^{2}=j\left(j+1\right){\hslash}^{2},$ we can restrict our attention to a given *total* angular momentum $j,$ with as usual an orthonormal basis set $|j,m\rangle $,
or $|m\rangle $ for short, of $2j+1$ kets. A general ket $|\alpha \rangle $ in this space is then:

$|\alpha \rangle ={\displaystyle \sum _{m=-j}^{j}{\alpha}_{m}|m\rangle}$.

Rotating this ket,

$|\alpha \rangle \to |{\alpha}^{\prime}\rangle ={e}^{-\frac{i\overrightarrow{\theta}\cdot \overrightarrow{J}}{\hslash}}|\alpha \rangle $

Putting in a complete set of states, and using the standard notation for matrix elements of the rotation operator,

$\begin{array}{c}|{\alpha}^{\prime}\rangle ={e}^{-\frac{i\overrightarrow{\theta}\cdot \overrightarrow{J}}{\hslash}}|\alpha \rangle \\ ={\displaystyle \sum _{{m}^{\prime},m}{\alpha}_{m}|{m}^{\prime}\rangle}\langle {m}^{\prime}|{e}^{-\frac{i\overrightarrow{\theta}\cdot \overrightarrow{J}}{\hslash}}|m\rangle \\ ={\displaystyle \sum _{{m}^{\prime},m}{D}_{{m}^{\prime}m}^{\left(j\right)}\left(R\left(\overrightarrow{\theta}\right)\right){\alpha}_{m}|{m}^{\prime}\rangle}.\end{array}$

${D}_{{m}^{\prime}m}^{\left(j\right)}=\langle {m}^{\prime}|{e}^{-\frac{i\overrightarrow{\theta}\cdot \overrightarrow{J}}{\hslash}}|m\rangle $ is standard notation (see the earlier lecture).

So the ket rotation transformation is

${{\alpha}^{\prime}}_{{m}^{\prime}}={\displaystyle \sum _{m}{D}_{{m}^{\prime}m}^{\left(j\right)}{\alpha}_{m}},\quad \text{or}\quad {\alpha}^{\prime}=D\alpha ,$

with the usual matrix-multiplication rules.
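For the smallest nontrivial case, $j=1/2,$ the $D$ matrix for a rotation about $z$ can be written down explicitly (a numerical sketch assuming `numpy`, not part of the original notes; since ${J}_{z}$ is diagonal, the exponential is taken entry by entry):

```python
import numpy as np

hbar = 1.0
# Spin-1/2 angular momentum operator J_z = (hbar/2) sigma_z
Jz = hbar / 2 * np.diag([1.0, -1.0])

def D_half(theta):
    """D^(1/2) for a rotation by theta about z: exp(-i theta Jz / hbar)."""
    return np.diag(np.exp(-1j * theta * np.diag(Jz) / hbar))

alpha = np.array([0.6, 0.8])        # components alpha_m of a general ket
alpha_rot = D_half(1.2) @ alpha     # alpha' = D alpha, as in the text

# D is unitary, so the norm of the ket is preserved
assert np.isclose(np.vdot(alpha_rot, alpha_rot).real, 1.0)
# A 2*pi rotation multiplies a spin-1/2 ket by -1
assert np.allclose(D_half(2 * np.pi) @ alpha, -alpha)
```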

### Rotating a Basis Ket

Now suppose we apply the rotation operator to one of the *basis
kets* $|j,m\rangle $,
what is the result?

${e}^{-\frac{i\overrightarrow{\theta}\cdot \overrightarrow{J}}{\hslash}}|j,m\rangle ={\displaystyle \sum _{{m}^{\prime}}|j,{m}^{\prime}\rangle \langle j,{m}^{\prime}|{e}^{-\frac{i\overrightarrow{\theta}\cdot \overrightarrow{J}}{\hslash}}|j,m\rangle}={\displaystyle \sum _{{m}^{\prime}}|j,{m}^{\prime}\rangle}\text{\hspace{0.17em}}{D}_{{m}^{\prime}m}^{\left(j\right)}\left(R\right).$

Note the *reversal*
of *m*, *m*′ compared with the operation on the set
of component coefficients of the general ket.

(You may be thinking: wait a minute, $|j,m\rangle $ *is* a
ket in the space: it can be
written ${\displaystyle \sum _{{m}^{\prime \prime}}{\alpha}_{{m}^{\prime \prime}}|j,{m}^{\prime \prime}\rangle}$ with ${\alpha}_{{m}^{\prime \prime}}={\delta}_{{m}^{\prime \prime}m}$,
so we could use the previous rule ${{\alpha}^{\prime}}_{{m}^{\prime}}={\displaystyle \sum _{m}{D}_{{m}^{\prime}m}^{\left(j\right)}{\alpha}_{m}}$ to get ${{\alpha}^{\prime}}_{{m}^{\prime}}={\displaystyle \sum _{{m}^{\prime \prime}}{D}_{{m}^{\prime}{m}^{\prime \prime}}^{\left(j\right)}{\alpha}_{{m}^{\prime \prime}}}={\displaystyle \sum _{{m}^{\prime \prime}}{D}_{{m}^{\prime}{m}^{\prime \prime}}^{\left(j\right)}{\delta}_{{m}^{\prime \prime}m}}={D}_{{m}^{\prime}m}^{\left(j\right)}$. Reassuringly, this leads to the same result
we just found.)

### Rotating an Operator, Scalar Operators

Just as in the Schrödinger versus Heisenberg formulations, we can either apply the rotation operator to the kets and leave the operators alone, or we can leave the kets alone, and rotate the operators:

$A\to {e}^{\frac{i\overrightarrow{\theta}\cdot \overrightarrow{J}}{\hslash}}A{e}^{-\frac{i\overrightarrow{\theta}\cdot \overrightarrow{J}}{\hslash}}={U}^{\dagger}AU$

which will yield the same matrix elements, so the same physics.

A *scalar* operator
is an operator which is *invariant*
under rotations, for example the Hamiltonian of a particle in a spherically
symmetric potential. (There are many less trivial examples of scalar operators,
such as the dot product of two vector operators, as in a spin-orbit
coupling.)

The transformation of an operator under an infinitesimal rotation is given by:

$S\to {U}^{\dagger}(R)SU(R)\quad \text{with}\quad U(R)=1-\frac{i\overrightarrow{\epsilon}\cdot \overrightarrow{J}}{\hslash}$

from which

$S\to S+\left[\frac{i\overrightarrow{\epsilon}\cdot \overrightarrow{J}}{\hslash},\text{\hspace{0.17em}}S\right].$

It follows that a scalar operator $S,$ which does not change at all, must
commute with all the components of the angular momentum operator, and hence
must have a common set of eigenkets with, say, ${\overrightarrow{J}}^{2}$ and ${J}_{z}.$
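This is easy to verify numerically for spin one (an illustrative sketch assuming `numpy`, not part of the original notes): the scalar $\overrightarrow{J}\cdot \overrightarrow{J}$ commutes with every component of $\overrightarrow{J}.$

```python
import numpy as np

hbar = 1.0
s2 = np.sqrt(2.0)
# Spin-1 angular momentum matrices in the basis (m = 1, 0, -1), hbar = 1
Jz = hbar * np.diag([1.0, 0.0, -1.0])
Jp = hbar * np.array([[0, s2, 0], [0, 0, s2], [0, 0, 0]])
Jx = (Jp + Jp.T) / 2
Jy = (Jp - Jp.T) / (2j)

# A scalar operator: S = J.J, which here is j(j+1) hbar^2 times the identity
S = Jx @ Jx + Jy @ Jy + Jz @ Jz
assert np.allclose(S, 2 * hbar**2 * np.eye(3))   # j(j+1) = 2 for j = 1

# S commutes with every component of J, as a scalar operator must
for Ji in (Jx, Jy, Jz):
    assert np.allclose(S @ Ji - Ji @ S, np.zeros((3, 3)))
```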

### Vector Operators: Definition and Commutation Properties

A *quantum mechanical vector operator* $\overrightarrow{V}$ is *defined* by requiring that the
expectation values of its three components in any state *transform like the components of a classical vector* under
rotation.

It follows from this that the operator itself must transform vectorially,

${V}_{i}{}^{\prime}={U}^{\dagger}\left(R\right){V}_{i}U\left(R\right)={\displaystyle \sum _{j}{R}_{ij}{V}_{j}}$.

To see what this implies, it is easiest to look at a simple case. For an infinitesimal rotation about the $z\text{-}$ axis,

${R}_{z}(\epsilon )=\left(\begin{array}{ccc}1& -\epsilon & 0\\ \epsilon & 1& 0\\ 0& 0& 1\end{array}\right)$

the vector transforms

$\left(\begin{array}{c}{V}_{x}\\ {V}_{y}\\ {V}_{z}\end{array}\right)\text{\hspace{0.17em}}\text{\hspace{1em}}\to \text{\hspace{1em}}\left(\begin{array}{ccc}1& -\epsilon & 0\\ \epsilon & 1& 0\\ 0& 0& 1\end{array}\right)\left(\begin{array}{c}{V}_{x}\\ {V}_{y}\\ {V}_{z}\end{array}\right)\text{\hspace{0.17em}}\text{\hspace{1em}}=\text{\hspace{1em}}\left(\begin{array}{c}{V}_{x}-\epsilon {V}_{y}\\ {V}_{y}+\epsilon {V}_{x}\\ {V}_{z}\end{array}\right)$

The unitary Hilbert space operator *U* corresponding to
this rotation is $U\left({R}_{z}\left(\epsilon \right)\right)=1-\frac{i\epsilon {J}_{z}}{\hslash},$ so

$\begin{array}{c}{U}^{\dagger}{V}_{i}U=\left(1+i\epsilon {J}_{z}/\hslash \right){V}_{i}\left(1-i\epsilon {J}_{z}/\hslash \right)\\ ={V}_{i}+\frac{i\epsilon}{\hslash}\left[{J}_{z},\text{\hspace{0.17em}}{V}_{i}\right].\end{array}$

The requirement that the two transformations above, the infinitesimal classical rotation generated by ${R}_{z}(\epsilon )$ and the infinitesimal unitary transformation ${U}^{\dagger}\left(R\right){V}_{i}U\left(R\right)$, are in fact the same thing yields the commutation relations of a vector operator with angular momentum:

$\begin{array}{l}i\left[{J}_{z},{V}_{x}\right]=-\hslash {V}_{y}\\ i\left[{J}_{z},{V}_{y}\right]=+\hslash {V}_{x}.\end{array}$

From this result and its cyclic equivalents, the components
of *any* vector operator $\overrightarrow{V}$ must satisfy:

$\left[{V}_{i},{J}_{j}\right]=i{\epsilon}_{ijk}\hslash {V}_{k}$.

*Exercise*: verify that the components of $\overrightarrow{x},\text{\hspace{0.17em}}\overrightarrow{L},\text{\hspace{0.17em}}\overrightarrow{S}$ do in fact satisfy these commutation
relations.
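The spin part of the exercise can be checked by direct matrix algebra. A numerical sketch (assuming `numpy`, with $\hslash =1$; not part of the original notes), taking $\overrightarrow{V}=\overrightarrow{S}$ with the spin-1 matrices:

```python
import numpy as np

hbar = 1.0
s2 = np.sqrt(2.0)
# Spin-1 matrices in the basis (m = 1, 0, -1), hbar = 1
Jz = hbar * np.diag([1.0, 0.0, -1.0])
Jp = hbar * np.array([[0, s2, 0], [0, 0, s2], [0, 0, 0]])
Jm = Jp.conj().T
Jx = (Jp + Jm) / 2
Jy = (Jp - Jm) / (2j)
J = [Jx, Jy, Jz]

def comm(A, B):
    return A @ B - B @ A

# Build the Levi-Civita symbol eps_ijk
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# Take V = J itself (certainly a vector operator) and check
# [V_i, J_j] = i eps_ijk hbar V_k for every pair i, j
for i in range(3):
    for j in range(3):
        rhs = sum(1j * eps[i, j, k] * hbar * J[k] for k in range(3))
        assert np.allclose(comm(J[i], J[j]), rhs)
```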

(*Note*: Confusingly, there is a slightly different
situation in which we need to rotate an operator, and it gives an opposite
result. Suppose an operator *T*
acts on a ket $|\alpha \rangle $ to give the ket $|{\alpha}^{\prime}\rangle =T|\alpha \rangle $. For kets $|\alpha \rangle $ and $|{\alpha}^{\prime}\rangle $ to go to $U|\alpha \rangle $ and $U|{\alpha}^{\prime}\rangle $ respectively under a rotation $U,$ $T$ itself must transform as $T\to UT{U}^{\dagger}$ (recall ${U}^{\dagger}={U}^{-1}$). The
point is that this is a Schrödinger rather than a Heisenberg-type
transformation: we’re rotating the kets, not the operators.)

*Warning*: Does a vector operator transform like the
components of a vector or like the basis kets of the space? You’ll see it written both ways, so watch
out!

We’ve already defined it as transforming like the components:

$${V}_{i}{}^{\prime}={U}^{\dagger}\left(R\right){V}_{i}U\left(R\right)={\displaystyle \sum _{j}{R}_{ij}{V}_{j}}$$

but if we now take the *opposite*
rotation, the unitary matrix $U\left(R\right)$ is replaced by its inverse ${U}^{\dagger}\left(R\right)$ and *vice versa*. Remember also that the ordinary spatial rotation matrix $R$ is orthogonal, so its inverse is its
transpose, and the above equation is equivalent to

$${V}_{i}{}^{\prime}=U\left(R\right){V}_{i}{U}^{\dagger}\left(R\right)={\displaystyle \sum _{j}{R}_{ji}{V}_{j}}$$.

*This* definition of
a vector operator is that its elements transform just as do the basis kets of
the space, so it’s
crucial to look carefully at the equation to figure out which is the rotation
matrix, and which is its inverse!

This second form of the equation is the one in common use.

### Cartesian Tensor Operators

From the definition given earlier, under rotation the elements of a rank two Cartesian tensor transform as:

${T}_{ij}\to {T}_{ij}{}^{\prime}={\displaystyle \sum _{{i}^{\prime},{j}^{\prime}}{R}_{i{i}^{\prime}}{R}_{j{j}^{\prime}}{T}_{{i}^{\prime}{j}^{\prime}}},$

where ${R}_{ij}$ is the rotation matrix for a vector.

It is illuminating to consider a particular example of a second-rank tensor, ${T}_{ij}={U}_{i}{V}_{j}$, where $\overrightarrow{U}$ and $\overrightarrow{V}$ are ordinary three-dimensional vectors.

The problem with this tensor is that it is *reducible*, using the word in the same sense as in our
discussion of group representations in the addition of angular
momenta. That is to say, combinations
of the elements can be arranged in sets such that rotations operate only within
these sets. This is made evident by
writing:

${U}_{i}{V}_{j}=\frac{\overrightarrow{U}\cdot \overrightarrow{V}}{3}{\delta}_{ij}+\frac{\left({U}_{i}{V}_{j}-{U}_{j}{V}_{i}\right)}{2}+\left(\frac{{U}_{i}{V}_{j}+{U}_{j}{V}_{i}}{2}-\frac{\overrightarrow{U}\cdot \overrightarrow{V}}{3}{\delta}_{ij}\right).$

The first term, the dot product of the two vectors, is
clearly a *scalar* under rotation; the second term, an
antisymmetric tensor, has three independent components, which are the *vector*
components of the vector product $\overrightarrow{U}\times \overrightarrow{V}$,
and the third term is a *symmetric traceless tensor*, which has five
independent components. Altogether,
then, there are $1+3+5=9$ components, as required.
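The decomposition is easy to verify numerically (a sketch assuming `numpy`, not part of the original notes): the three pieces reassemble the full tensor, the antisymmetric part encodes $\overrightarrow{U}\times \overrightarrow{V},$ and the symmetric part is traceless.

```python
import numpy as np

rng = np.random.default_rng(1)
U, V = rng.standard_normal(3), rng.standard_normal(3)

T = np.outer(U, V)                      # T_ij = U_i V_j

scalar = (U @ V) / 3 * np.eye(3)        # 1 component: the dot product
antisym = (T - T.T) / 2                 # 3 components: the cross product
sym_traceless = (T + T.T) / 2 - scalar  # 5 components: symmetric, traceless

# The three irreducible pieces reassemble the full tensor
assert np.allclose(scalar + antisym + sym_traceless, T)
# The antisymmetric part is (U x V)/2, read off the standard slots
cross = np.array([antisym[1, 2], antisym[2, 0], antisym[0, 1]])
assert np.allclose(cross, np.cross(U, V) / 2)
# The symmetric part is indeed traceless
assert np.isclose(np.trace(sym_traceless), 0.0)
```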

### Spherical Tensors

Notice the numbers of elements of these irreducible
subgroups: $1,\text{\hspace{0.17em}}\text{\hspace{0.17em}}3,\text{\hspace{0.17em}}5.$ These are exactly the numbers of elements of
angular momentum representations for $j=0,1,2$!

This is of course no coincidence: as we shall make more explicit below, a three-dimensional vector is mathematically isomorphic to a quantum spin one; the tensor we have written is therefore a direct product of two spins one, so, exactly as we argued in discussing addition of angular momenta, it will be a reducible representation of the rotation group, and will be a sum of representations corresponding to the possible total angular momenta from adding two spins one, that is, $j=0,\text{\hspace{0.17em}}1,\text{\hspace{0.17em}}2.$

As discussed earlier, the matrix elements of the rotation operator

$U\left(R\left(\overrightarrow{\theta}\right)\right)={e}^{-\frac{i\overrightarrow{\theta}\cdot \overrightarrow{J}}{\hslash}}$

within a definite $j$ subspace are written

${D}_{{m}^{\prime}m}^{j}\left(R\left(\overrightarrow{\theta}\right)\right)=\langle j,{m}^{\prime}|{e}^{-\frac{i\overrightarrow{\theta}\cdot \overrightarrow{J}}{\hslash}}|j,m\rangle $

so under the rotation operator a basis state $|j,m\rangle $ transforms as:

${e}^{-\frac{i\overrightarrow{\theta}\cdot \overrightarrow{J}}{\hslash}}|j,m\rangle ={\displaystyle \sum _{{m}^{\prime}}|j,{m}^{\prime}\rangle \langle j,{m}^{\prime}|{e}^{-\frac{i\overrightarrow{\theta}\cdot \overrightarrow{J}}{\hslash}}|j,m\rangle}={\displaystyle \sum _{{m}^{\prime}}|j,{m}^{\prime}\rangle}\text{\hspace{0.17em}}{D}_{{m}^{\prime}m}^{\left(j\right)}\left(R\right).$

The essential point is that these irreducible subgroups into which Cartesian tensors decompose under rotation (generalizing from our one example) form a more natural basis set of tensors for problems with rotational symmetries.

**Definition**: We define a *spherical tensor* of rank $k$ as a set of $2k+1$ operators ${T}_{k}^{q},\text{\hspace{1em}}q=k,k-1,\dots ,-k$ such that under rotation they transform among
themselves with exactly the same matrix of coefficients as that for the $2j+1$ angular momentum eigenkets $|m\rangle $ for $k=j,$ that is,

$U\left(R\right){T}_{k}^{q}{U}^{\dagger}\left(R\right)={\displaystyle \sum _{{q}^{\prime}}{D}_{{q}^{\prime}q}^{\left(k\right)}{T}_{k}^{{q}^{\prime}}}$.

To see the properties of these spherical tensors, it is useful to evaluate the above equation for infinitesimal rotations, for which ${D}_{{q}^{\prime}q}^{\left(k\right)}\left(\overrightarrow{\epsilon}\right)=\langle k,{q}^{\prime}|I-i\overrightarrow{\epsilon}\cdot \overrightarrow{J}/\hslash |k,q\rangle ={\delta}_{{q}^{\prime}q}-i\overrightarrow{\epsilon}\cdot \langle k,{q}^{\prime}|\overrightarrow{J}/\hslash |k,q\rangle .$

(The matrix element $\langle k,{q}^{\prime}|\overrightarrow{J}/\hslash |k,q\rangle $ is just the familiar Clebsch-Gordan
coefficient in changed notation: the rank $k$ corresponds to the usual $j,$ and
$q$ to the “magnetic” quantum number $m.$)

Specifically, consider an infinitesimal rotation $\overrightarrow{\epsilon}\cdot \overrightarrow{J}=\epsilon {J}_{+}.$ (Strictly speaking, this is not a real rotation, but the formalism doesn’t care, and the result we derive can be confirmed by rotation about the $x$ and $y$ directions and adding appropriate terms.)

The equation is

$\left(1-i\epsilon {J}_{+}/\hslash \right){T}_{k}^{q}\left(1+i\epsilon {J}_{+}/\hslash \right)={\displaystyle \sum _{{q}^{\prime}}\left({\delta}_{{q}^{\prime}q}-i\epsilon \langle k,{q}^{\prime}|{J}_{+}/\hslash |k,q\rangle \right){T}_{k}^{{q}^{\prime}}}$

and equating terms linear in $\epsilon ,$

$\begin{array}{l}\left[{J}_{\pm},{T}_{k}^{q}\right]=\hslash \sqrt{\left(k\mp q\right)\left(k\pm q+1\right)}\text{\hspace{0.17em}}{T}_{k}^{q\pm 1}\\ \left[{J}_{z},{T}_{k}^{q}\right]=\hslash q{T}_{k}^{q}.\end{array}$

Sakurai observes that this set of commutation relations
could be taken as the *definition* of the spherical tensors.

*Notational note*: we have followed Shankar here in
having the rank $k$ as a subscript, the “magnetic” quantum number $q$ as a superscript, the same convention used for
the spherical harmonics (but not for the $D$ matrices!). Sakurai, Baym and others have the rank above, usually in parentheses,
and the magnetic number below. Fortunately,
all use $k$ for rank and $q$ for magnetic quantum number.

### A Spherical Vector

The $j=1$ angular momentum eigenkets are just the familiar spherical harmonics

${Y}_{1}^{0}=\sqrt{\frac{3}{4\pi}}\frac{z}{r},\text{\hspace{1em}}{Y}_{1}^{\pm 1}=\mp \sqrt{\frac{3}{4\pi}}\frac{x\pm iy}{\sqrt{2}r}.$

The rotation operator will transform $\left(x,y,z\right)$ as an ordinary vector in three-space, and this is evidently equivalent to

$|j=1,m\rangle \to {\displaystyle \sum _{{m}^{\prime}}|j=1,{m}^{\prime}\rangle}\text{\hspace{0.17em}}{D}_{{m}^{\prime}m}^{\left(j\right)}\left(R\right)$

It follows that the spherical representation of a three vector $\left({V}_{x},\text{\hspace{0.17em}}{V}_{y},\text{\hspace{0.17em}}{V}_{z}\right)$ has the form:

${T}_{1}^{\pm 1}=\mp \frac{{V}_{x}\pm i{V}_{y}}{\sqrt{2}}={V}_{1}^{\pm 1},\text{\hspace{1em}}{T}_{1}^{0}={V}_{z}={V}_{1}^{0}.$

In line with spherical tensor notation, the components $\left({T}_{1}^{1},\text{\hspace{0.17em}}{T}_{1}^{0},\text{\hspace{0.17em}}{T}_{1}^{-1}\right)$ are denoted ${T}_{1}^{q}.$
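These spherical components can be checked directly against the commutation relations of the previous section. A numerical sketch (assuming `numpy`, with $\hslash =1$; not part of the original notes), taking $\overrightarrow{V}=\overrightarrow{J}$ itself with the spin-1 matrices:

```python
import numpy as np

hbar = 1.0
s2 = np.sqrt(2.0)
# Spin-1 matrices, basis (m = 1, 0, -1); we take V = J as the vector operator
Jz = hbar * np.diag([1.0, 0.0, -1.0])
Jp = hbar * np.array([[0, s2, 0], [0, 0, s2], [0, 0, 0]])
Jm = Jp.conj().T
Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / (2j)

# Spherical components T_1^q of the vector, as defined above
T = {+1: -(Jx + 1j * Jy) / s2, 0: Jz, -1: (Jx - 1j * Jy) / s2}

def comm(A, B):
    return A @ B - B @ A

k = 1
for q in (-1, 0, 1):
    # [J_z, T_k^q] = hbar q T_k^q
    assert np.allclose(comm(Jz, T[q]), hbar * q * T[q])
    # [J_+, T_k^q] = hbar sqrt((k - q)(k + q + 1)) T_k^{q+1}
    up = np.sqrt((k - q) * (k + q + 1)) * T.get(q + 1, np.zeros((3, 3)))
    assert np.allclose(comm(Jp, T[q]), hbar * up)
```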

### Matrix Elements of Tensor Operators between Angular Momentum Eigenkets

By definition, an irreducible tensor operator ${T}_{k}^{q}$ transforms under rotation like an angular momentum eigenket $|k,q\rangle $. Therefore, rotating the ket ${T}_{k}^{q}|j,m\rangle $,

$U\text{\hspace{0.05em}}{T}_{k}^{q}|j,m\rangle =U\text{\hspace{0.05em}}{T}_{k}^{q}{U}^{-1}U|j,m\rangle ={\displaystyle \sum _{{q}^{\prime}}{D}_{{q}^{\prime}q}^{\left(k\right)}{T}_{k}^{{q}^{\prime}}}{\displaystyle \sum _{{m}^{\prime}}{D}_{{m}^{\prime}m}^{\left(j\right)}|j,{m}^{\prime}\rangle}$.

The product of the two $D$ matrices appearing is precisely the set of
coefficients to rotate *the direct product of eigenkets* $|k,q\rangle \otimes |j,m\rangle $ where $|k,q\rangle $ is the angular momentum eigenket having $j=k,\text{\hspace{0.17em}}m=q.$

We have met this direct product of two angular momentum eigenkets before: this is just a system having two angular momenta, such as orbital plus spin angular momenta. So we see that ${T}_{k}^{q}$ acting on $|j,m\rangle $ generates a state having total angular momentum the sum of $\left(k,q\right)$ and $\left(j,m\right).$

To link up (more or less) with Shankar’s notation: our direct product state $|k,q\rangle \otimes |j,m\rangle $ is the same as $|k,q;j,m\rangle $ in the notation $|{j}_{1},{m}_{1};{j}_{2},{m}_{2}\rangle $ for a product state of two angular momenta (possibly including spins). Such a state can be written as a sum over states of the form $|{j}_{\text{tot}},{m}_{\text{tot}};{j}_{1},{j}_{2}\rangle $ where this denotes a state of total angular momentum ${j}_{\text{tot}},$ $z\text{-}$ direction component ${m}_{\text{tot}},$ made up of two spins having total angular momentum ${j}_{1},{j}_{2}$ respectively.

This is the standard Clebsch-Gordan sum:

$|{j}_{1},{m}_{1};{j}_{2},{m}_{2}\rangle ={\displaystyle \sum _{{j}_{tot}=\left|{j}_{1}-{j}_{2}\right|}^{{j}_{1}+{j}_{2}}{\displaystyle \sum _{{m}_{tot}=-{j}_{tot}}^{{j}_{tot}}|{j}_{tot},{m}_{tot};{j}_{1},{j}_{2}\rangle \langle {j}_{tot},{m}_{tot};{j}_{1},{j}_{2}|}}{j}_{1},{m}_{1};{j}_{2},{m}_{2}\rangle .$

The summed terms give a unit operator within this $\left(2{j}_{1}+1\right)\left(2{j}_{2}+1\right)$ dimensional space; the term $\langle {j}_{\text{tot}},{m}_{\text{tot}};{j}_{1},{j}_{2}|{j}_{1},{m}_{1};{j}_{2},{m}_{2}\rangle $ is a Clebsch-Gordan coefficient. The only nonzero coefficients have ${m}_{\text{tot}}={m}_{1}+{m}_{2},$ and ${j}_{\text{tot}}$ restricted as noted, so for given ${m}_{1},\text{\hspace{0.17em}}{m}_{2}$ we just set ${m}_{\text{tot}}={m}_{1}+{m}_{2}$ (there is no sum over ${m}_{\text{tot}}$), and the sum over ${j}_{\text{tot}}$ begins at $\left|{m}_{\text{tot}}\right|.$

Translating into our $|k,q\rangle \otimes |j,m\rangle $ notation, and cleaning up,

$|k,q;j,m\rangle ={\displaystyle \sum _{{j}_{tot}=\left|q+m\right|}^{k+j}|{j}_{tot},q+m;k,j\rangle}\langle {j}_{tot},q+m;k,j|k,q;j,m\rangle .$

We are now able to evaluate the angular component of the matrix element of a spherical tensor operator between angular momentum eigenkets: from the Clebsch-Gordan expansion above, it will be nonzero only when the magnetic quantum number of the final state equals $q+m,$ and the total angular momentum of the final state is at least $\left|q+m\right|$ and at most $k+j.$
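These selection rules can be checked against tabulated Clebsch-Gordan coefficients; for instance (a sketch assuming `sympy`, not part of the original notes; `CG(j1, m1, j2, m2, j, m)` is the coefficient $\langle {j}_{1},{m}_{1};{j}_{2},{m}_{2}|j,m\rangle $):

```python
from sympy import Rational, sqrt, S
from sympy.physics.quantum.cg import CG

half = S.Half

# Stretched state: adding k = 1 to j = 1/2 with all m's maximal gives 1
assert CG(1, 1, half, half, Rational(3, 2), Rational(3, 2)).doit() == 1

# Selection rule: the coefficient vanishes unless m = m1 + m2
assert CG(1, 1, half, half, Rational(3, 2), half).doit() == 0

# A familiar nontrivial value: <1, 0; 1/2, 1/2 | 3/2, 1/2> = sqrt(2/3)
assert CG(1, 0, half, half, Rational(3, 2), half).doit() == sqrt(Rational(2, 3))
```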

### The Wigner-Eckart Theorem

At this point, we must bear in mind that these tensor operators are not necessarily just functions of angle. For example, the position operator is a spherical vector multiplied by the radial variable $r,$ and kets specifying atomic eigenstates will include radial quantum numbers as well as angular momentum, so the matrix element of a tensor between two states will have the form

$\langle {\alpha}_{2},{j}_{2},{m}_{2}|{T}_{k}^{q}|{\alpha}_{1},{j}_{1},{m}_{1}\rangle $,

where the $j$ ’s and $m$ ’s denote the usual angular momentum
eigenstates and the $\alpha $ ’s are *nonangular*
quantum numbers, such as those for radial states.

The basic point of the Wigner-Eckart
theorem is that *the angular dependence of these matrix elements can be factored out, and it is
given by the Clebsch-Gordan coefficients*.

Having factored it out, the remaining dependence, which is
only on the *total *angular momentum in
each of the kets, *not* the relative
orientation (and of course on the $\alpha $ ’s), is traditionally written as a bracket
with double lines, that is,

$\langle {\alpha}_{2},{j}_{2},{m}_{2}|{T}_{k}^{q}|{\alpha}_{1},{j}_{1},{m}_{1}\rangle =\frac{\langle {\alpha}_{2},{j}_{2}\Vert {T}_{k}\Vert {\alpha}_{1},{j}_{1}\rangle}{\sqrt{2{j}_{2}+1}}\cdot \langle {j}_{2},{m}_{2}|k,q;{j}_{1},{m}_{1}\rangle .$

The denominator is the conventional normalization of the double-bar matrix element. The proof is given in, for example, Sakurai (page 239) and is not that difficult. The basic strategy is to put the defining identities

$\begin{array}{l}\left[{J}_{\pm},{T}_{k}^{q}\right]=\hslash \sqrt{\left(k\mp q\right)\left(k\pm q+1\right)}\text{\hspace{0.17em}}{T}_{k}^{q\pm 1}\\ \left[{J}_{z},{T}_{k}^{q}\right]=\hslash q{T}_{k}^{q}\end{array}$

between $|\alpha ,j,m\rangle $ bras and kets, then get rid of the ${J}_{\pm}$ and ${J}_{z}$ by having them operate on the bra or ket. This generates a series of linear equations
for $\langle {\alpha}_{2},{j}_{2},{m}_{2}|{T}_{k}^{q}|{\alpha}_{1},{j}_{1},{m}_{1}\rangle $ matrix elements with $m$ variables differing by one, and in fact this
set of linear equations is *identical* to the set that generates the
Clebsch-Gordan coefficients, so we must conclude that these spherical tensor
matrix elements, ranging over possible $m$ and $j$ values, are exactly proportional to the
Clebsch-Gordan coefficients, and that
is the theorem.
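The theorem is easy to illustrate in the simplest case ${T}_{1}^{0}={J}_{z}$ within a $j=1$ multiplet (a sketch assuming `sympy`, with $\hslash =1$; not part of the original notes): the diagonal matrix elements $\langle 1,m|{J}_{z}|1,m\rangle =m$ stand in one fixed, $m$-independent ratio to the Clebsch-Gordan coefficients $\langle 1,m;1,0|1,m\rangle =m/\sqrt{2}.$

```python
from sympy import sqrt, S
from sympy.physics.quantum.cg import CG

# hbar = 1: matrix elements of J_z in the j = 1 multiplet are <1, m|J_z|1, m> = m.
# Wigner-Eckart says these are proportional to the CG coefficients <1, m; 1, 0|1, m>,
# with a single m-independent constant (reduced matrix element over sqrt(2j+1)).
ratios = set()
for m in (-1, 1):                      # m = 0 gives 0 = 0 trivially
    me = S(m)                          # <1, m | J_z | 1, m> = m
    cg = CG(1, m, 1, 0, 1, m).doit()   # <1, m; 1, 0 | 1, m> = m / sqrt(2)
    ratios.add(me / cg)
assert ratios == {sqrt(2)}             # one constant for the whole multiplet

# Elements forbidden by the CG selection rule (q = 0 forces m2 = m1) vanish
assert CG(1, 0, 1, 0, 1, 1).doit() == 0
```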

**A Few Hints for Shankar’s problem 15.3.3**: that first matrix element comes from adding a spin $j$ to a spin 1, writing the usual maximum $m$ state, applying the lowering operator to both sides to get the total angular momentum $j+1,\text{\hspace{0.17em}}m=j$ state, then finding the same $m$ state orthogonal to that, which corresponds to total angular momentum $j$ (instead of $j+1$).

For the operator $J,$ the Wigner-Eckart matrix element simplifies because $J$ cannot affect $\alpha ,$ and also it commutes with ${J}^{2},$ so cannot change the total angular momentum.

So, in the Wigner-Eckart equation, replace ${T}_{k}^{q}$ on the left-hand side by ${J}_{1}^{0}$, which is just ${J}_{z}.$ The result of (1) should follow.

(2) First note that a scalar operator cannot change $m.$ Since $c$ is independent of $A,$ we can take $A=J$ to find $c.$