
Lecture 14

Applications of Radioactivity

Radiometric Dating

The statistically steady rate of nuclear decay has been exploited to determine the age of various materials. The best-known technique is Carbon dating. Natural Carbon consists mainly of the stable isotope Carbon-12, with smaller amounts of the equally stable Carbon-13 and even smaller amounts of unstable Carbon-14, whose half-life is around 5,700 years. Living organisms continuously exchange Carbon with the environment; a living organism therefore contains the same fraction of C-14 as is found in its surroundings. But when the organism dies, it stops recycling its Carbon, and the C-14 it contains starts decaying away without being replenished.

Measuring the ratio of C-14 to C-12 found in an old organic specimen can then tell us how long ago that particular sample belonged to a living organism.

Obviously, Carbon dating is useful only for measuring time periods comparable to the C-14 half-life. It could not be used to date something that died only a few years ago (too few C-14 nuclei would have decayed), nor something that died millions of years ago (almost all of the C-14 would be gone).
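
To make the computation concrete, here is a minimal sketch (in Python) of the age calculation, assuming the ~5,700-year half-life quoted above; the 25% figure is just an illustrative measurement, not real data.

    import math

    HALF_LIFE_C14 = 5700.0  # years (approximate value quoted above)

    def age_from_c14(remaining_fraction):
        # Invert N(t) = N0 * (1/2)**(t / half_life) to find the elapsed time t.
        return -HALF_LIFE_C14 * math.log(remaining_fraction) / math.log(2.0)

    # A specimen retaining 25% of its original C-14 died two half-lives ago:
    print(age_from_c14(0.25))  # ~11,400 years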

Nuclear Power

As mentioned earlier, in a nuclear transition leading to a state with larger binding energy (i.e. one "missing more mass"), the mass missing from the final state is released in the form of kinetic energy. We also know that a disordered increase in the average kinetic energy of a collection of molecules manifests itself as heat. This is how nuclear energy is exploited in nuclear power stations or, when released in an uncontrolled, explosive way, in nuclear weapons. Extensive experimentation showed that there are two different ways of attaining a state of higher binding energy: breaking up a heavy nucleus into smaller fragments (fission) or combining the nuclei of light elements to form a heavier one (fusion).

To understand how two completely opposite types of reactions can both be "exothermic", one should look at a plot of the nuclear binding energy per nucleon as a function of mass number. Starting from the lightest elements, the binding energy per nucleon increases with mass number until it reaches a maximum near Iron, and then it slowly decreases. Consequently, elements to the left of Iron release energy when "fusing" together, while those to the right of Iron release energy when breaking up into smaller nuclei. Both fission and fusion reactions have been exploited to produce nuclear weapons (the so-called A- and H-bombs), but, so far, only fission has been exploited in a controlled way in nuclear power stations.
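
The shape of this curve can be reproduced, at least roughly, with the semi-empirical (Bethe-Weizsäcker) mass formula. The sketch below uses one common textbook set of coefficients; the exact values vary between sources, and the formula is poor for the very lightest nuclei.

    def binding_energy_mev(A, Z):
        # Approximate total binding energy (MeV) of a nucleus with mass
        # number A and Z protons: volume, surface, Coulomb and asymmetry
        # terms (the small pairing term is omitted here).
        return (15.8 * A
                - 18.3 * A ** (2 / 3)
                - 0.714 * Z * (Z - 1) / A ** (1 / 3)
                - 23.2 * (A - 2 * Z) ** 2 / A)

    for name, A, Z in [("C-12", 12, 6), ("Fe-56", 56, 26), ("U-238", 238, 92)]:
        print(name, round(binding_energy_mev(A, Z) / A, 2), "MeV per nucleon")
    # The result peaks near Iron (~8.7 MeV/nucleon), with U-238 lower (~7.6):
    # fusion pays off below Iron, fission above it.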

Fission

In the '30s it was discovered that when certain heavy nuclei absorb a slow neutron, the resulting instability causes the nucleus to break up into smaller fragments, with an accompanying release of energy. Moreover, a few more neutrons are released in the break-up, opening the possibility of initiating a chain reaction. The rest of the story is well known: the operation of a controlled chain reaction, the first "atomic pile", was achieved in 1942 by Enrico Fermi and his collaborators in Chicago, and a few years later the chain reaction was unleashed in an explosive, uncontrolled way in the atomic bomb.

The two main requirements for realizing a self-sustained chain reaction are

1. the availability of a fissionable isotope, and
2. knowledge of the so-called critical mass, i.e. the quantity of the fissionable isotope required to achieve self-sustainment (see the toy sketch after this list).
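
The following toy model (not real reactor physics, just an illustration of the numbers) shows how the effective multiplication factor k, the average number of neutrons from each fission that go on to cause another fission, separates the sub-critical, critical and super-critical regimes.

    def neutron_population(k, generations, n0=1000.0):
        # Toy model: each fission generation multiplies the neutron
        # population by the effective multiplication factor k.
        counts = [n0]
        for _ in range(generations):
            counts.append(counts[-1] * k)
        return counts

    print(neutron_population(0.9, 10))  # k < 1: sub-critical, dies away
    print(neutron_population(1.0, 10))  # k = 1: critical, self-sustained (reactor)
    print(neutron_population(1.5, 10))  # k > 1: super-critical, runaway (bomb)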
Not all heavy isotopes are fissionable; in fact, the only one found in nature is U-235, comprising about 0.7% of natural Uranium (which is mostly U-238). Another commonly used fissionable material is Plutonium-239, which is not found in nature but is readily produced in nuclear reactors, according to the chain:

$U_{92}^{238} + n \rightarrow U_{92}^{239} \stackrel{\beta}{\longrightarrow} Np_{93}^{239} \stackrel{\beta}{\longrightarrow} Pu_{94}^{239}$

Plutonium-239 has a half-life of 24,000 years, so it lives long enough to be stored as fuel. A reactor containing a mixture of U-235 and U-238 will "burn" the U-235 to produce energy and, at the same time, produce more fuel in the form of Pu-239. Unfortunately, Plutonium is among the nastiest substances known to man...

In order to operate a nuclear power station in a controlled way, one needs the correct flux of slow neutrons. This is achieved by means of a moderator, i.e. some substance (water, graphite, etc.) capable of slowing the neutrons down to the required speeds, and by control rods, made of some neutron-absorbing material (Boron, Cadmium), that can be lowered or lifted to regulate the overall neutron flux.

Much simpler, in a sense, is the making of an A-bomb: two or more masses of fissionable material, sub-critical by themselves but critical when joined together, are placed within the bomb's casing. At the desired moment, an explosive charge brings the masses together...

Fusion

Fusion is the reaction that fuels the Sun and the stars, and we will explore it in more detail later. From the applications point of view, the main difficulty in achieving fusion is bringing the light nuclei (e.g. hydrogen or deuterium) close enough together that the nuclear force can intervene and induce fusion. To do so, one has to overcome the long-range electrostatic repulsion. In an H-bomb, this is done by exploding an A-bomb as a trigger (!!!!), but no large-scale, self-sustained fusion has yet been achieved in a controlled way. Still, progress in the field of controlled fusion research has been steady, and fusion might provide (a few decades from now?) the answer to many of our current energy problems.
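
A back-of-the-envelope estimate shows how high this electrostatic barrier is. Using the handy value $e^2/4\pi\epsilon_0 \approx 1.44$ MeV·fm, the sketch below (illustrative numbers only) computes the barrier for two protons brought to about 2 fm apart, and the temperature at which the average thermal energy alone would match it.

    COULOMB_MEV_FM = 1.44      # e^2 / (4 pi eps0), in MeV * femtometers
    K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV per kelvin

    # Electrostatic energy of two protons (Z1 = Z2 = 1) at ~2 fm separation:
    barrier_mev = COULOMB_MEV_FM * 1 * 1 / 2.0
    print(barrier_mev)  # ~0.7 MeV

    # Temperature at which the mean thermal energy kT equals the barrier:
    print(barrier_mev * 1e6 / K_BOLTZMANN_EV)  # ~8e9 kelvin

(Stars manage at much lower core temperatures thanks to quantum tunneling and the high-energy tail of the thermal distribution.)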

Elementary Particles
Around the late 30's to early 40's, with obvious disruptions caused by World War II, subatomic research split into two main directions. On one side, nuclear physicists investigated further the properties of nuclei, radioactive isotopes, nuclear instability, etc., trying to better understand the nature of the strong force and also realizing, as we have just discussed, several practical applications of nuclear phenomena. On the other side, the study of the nature and properties of the most elementary constituents of matter was pursued in what became the discipline of Elementary Particle Physics.

The investigation of elementary particles produced a vast number of surprising and completely unexpected results, and revealed that the apparently simple (and, because of its simplicity, appealing) picture requiring the existence of only three types of particles, electrons, protons and neutrons, was in reality just the outward appearance of a much more complex underlying structure.

In the last fifty years of this century, elementary particle research went through a first stage of ever-increasing complexity and confusion, with the number of known "elementary" particles growing in an inordinate way, unexplained by any existing theory. Later, thanks to continuous progress in the gathering of experimental facts and in theoretical attempts to explain them, the confusion and complexity revealed a surprising new layer of simplicity.

In our discussion, we will not follow the chronological development of the study of elementary particles through all the intermediate stages; instead, after acquainting ourselves with the tools and techniques employed in this research, we will concentrate on the most up-to-date picture of the ultimate structure of matter and of the forces that control its behaviour.

Cosmic Rays and Accelerators

The first indications that the description of matter in terms of electrons, protons and neutrons was not the whole story came from early cosmic-ray experiments. As mentioned earlier, cosmic rays consist mostly of protons (or other nuclei) that have been accelerated to very high energies by the electromagnetic fields present within stars and galaxies (the process by which cosmic rays reach extremely high energies is not completely understood). In their random flight, some cosmic rays cross the Earth's path and will most likely collide with the molecules of the atmosphere. The by-products of such collisions can then be detected by instruments on the ground. Even better, by placing particle detectors at high elevation (on top of a mountain or aboard a balloon), there is a chance that, from time to time, a primary cosmic ray will interact within the detector itself, allowing the products of such reactions to be studied. By performing these kinds of experiments, researchers observed that when an energetic cosmic ray hits a nucleus it can generate a whole host of completely new particles (another example of the conversion of kinetic energy into mass), whose behaviour and properties are completely different from those of the more familiar protons and neutrons.

Research with cosmic rays could only produce results at a very limited rate, since the experimenters had no control over their primary source of particles; they could only wait for the rare cases when some ray would produce an interesting event within their detectors. To make concrete progress in the study of subatomic particles, it was necessary to wait for the development of particle accelerators, capable of providing, in a controlled way, particle beams of well-defined energy and intensity. The new particles could then be produced and studied by sending the high-energy beams against a target. This is in a sense equivalent to Rutherford's experiment, with the $\alpha$ projectiles from radioactive decay replaced by proton or electron beams of higher and higher energy.

The need for high energies is due to the fact that the masses of the new particles observed in cosmic rays were found to be rather large, typically of the same order of magnitude as the proton mass or some fraction of it. Consequently, to produce them in the lab by sending a beam against a target, the beam's energy must be at least equivalent to the masses of the particles to be produced. The following table gives an idea of the energy scales characteristic of different phenomena.

Energy Scales
(masses are converted into energies via $E = mc^2$)

Physical object                                              Equivalent energy
energy levels of atomic electrons                            1 - 10 eV
X-rays                                                       10 - 100 keV
electron mass                                                0.5 MeV
typical $\alpha, \beta, \gamma$ energies                     0.5 - 5 MeV
masses of new particles observed in cosmic-ray reactions     100 - 1000 MeV
energy of accelerators in the late 40's                      up to a few hundred MeV
proton mass                                                  1 GeV (938 MeV)
mass of heaviest known particle (top quark)                  175 GeV
energy of biggest existing accelerator (Fermilab Tevatron)   1 + 1 TeV
energy of biggest planned accelerator (CERN LHC)             8 + 8 TeV
highest energy observed in cosmic rays                       > $10^{15}$ eV
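
As a quick check of the conversions used in the table, the masses can be turned into electron-volts directly. A minimal sketch, using standard values of the constants:

    C = 2.998e8             # speed of light, m/s
    EV_IN_JOULES = 1.602e-19

    def rest_energy_ev(mass_kg):
        # E = m c^2, converted from joules to electron-volts.
        return mass_kg * C ** 2 / EV_IN_JOULES

    print(rest_energy_ev(9.109e-31))  # electron: ~0.511e6 eV = 0.5 MeV
    print(rest_energy_ev(1.673e-27))  # proton:   ~938e6 eV = 938 MeV, i.e. ~1 GeV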

The history of discoveries in the field of elementary particle physics has coincided with the design and construction of higher and higher energy accelerators. What is the reason behind this need for ever higher energies? Apart from the mass/energy relation mentioned earlier, which requires higher and higher energies to produce more and more massive particles, there is another, perhaps more fundamental, reason, connected to the dual wave/particle nature of matter. When learning about Heisenberg's principle, we stated that a wave of a given wavelength can only explore objects of dimensions equal to or greater than that wavelength. If we want to probe deeper and deeper inside the nucleus, i.e. to see what happens at shorter and shorter distances, then we need shorter and shorter wavelengths, i.e. particles of higher and higher energies: a high-energy accelerator can be thought of as the most powerful possible microscope...
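
To put numbers on the "microscope" analogy: for highly relativistic particles the wavelength is approximately $\lambda = hc/E$, with $hc \approx 1240$ MeV·fm. A minimal sketch:

    HC_MEV_FM = 1240.0  # Planck's constant times c, in MeV * femtometers

    def wavelength_fm(energy_mev):
        # Approximate wavelength (in fm) of an ultra-relativistic particle.
        return HC_MEV_FM / energy_mev

    print(wavelength_fm(1.0))     # 1 MeV -> ~1240 fm: much bigger than a nucleus
    print(wavelength_fm(1000.0))  # 1 GeV -> ~1.2 fm: about the size of a proton
    print(wavelength_fm(1e6))     # 1 TeV -> ~0.001 fm: resolves structure inside it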

What is the basic operating principle of an accelerator? In the most common configuration, particles in an accelerator are constrained in a circular path by means of a magnetic field, and are gradually accelerated to the peak energy by receiving a relatively small boost at every revolution. The basic expression relating the radius of curvature r of a particle of mass m, charge q and velocity v to the strength B of the magnetic field is:

r = mv/(qB)

The formula has the following implications: for a constant magnetic field B, as the particle increases its energy (velocity), it moves on a trajectory of larger and larger radius, i.e. its trajectory is an outwardly growing spiral. This is what happened in the early circular accelerators (Lawrence's cyclotron).

Alternatively, one can keep the particles on a constant-radius trajectory by increasing the B-field in synchronism with the growth of the particles' energy. This is the principle employed in modern accelerators (synchrotrons). The formula also has another obvious consequence: given that current technology sets a limit on the maximum attainable magnetic field, the size (i.e. the radius) of the accelerator determines the maximum energy it can reach. For a given magnetic field, if I want to achieve a higher energy I need a bigger machine... The biggest existing accelerator is a 16-mile-long underground ring of magnets!! (You can visit www.fnal.gov or www.cern.ch to learn more about the highest-energy accelerator centers.)
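
To see why the highest-energy machines must be kilometers across, one can plug numbers into the bending-radius formula. The sketch below uses the standard accelerator rule of thumb r(m) = p(GeV/c) / (0.3 B(T)) for a singly charged particle; the 4-tesla field is an illustrative value for superconducting magnets, not a specific machine's specification.

    def bending_radius_m(momentum_gev, b_field_tesla):
        # r = p / (qB) for unit charge, in practical accelerator units:
        # r [meters] = p [GeV/c] / (0.3 * B [tesla]).
        return momentum_gev / (0.3 * b_field_tesla)

    # A 1 TeV proton (1000 GeV/c) in 4 T magnets:
    print(bending_radius_m(1000.0, 4.0))  # ~830 m radius: a kilometer-scale ring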



 
Sergio Conetti
3/19/1998