
Lecture 11

Diodes, Transistors and the Computer


Doped Semiconductors, Diodes and Transistors

In a semiconductor (a material with a large, but not infinitely large, resistivity), electrons are normally bound within their atoms. A good example is silicon, where all electrons would normally be constrained by covalent bonds. In reality, due to thermal agitation, some electrons can shake themselves free and therefore become available for electric conduction. If an electric field is applied, a weak current can propagate through the material.

An electron liberated from an atom will leave behind a vacancy (a hole). In the constant dance of particles, another electron can go and fill the hole, leaving a new hole behind itself. Being "the absence of one electron", a hole can be interpreted as a positive charge. It is often more convenient to treat such processes in terms of "hole translation", and to interpret the hole as a positive particle free to move throughout the body of the material.

The electrical properties of semiconductors can be enhanced by adding small amounts (a few parts per million) of impurities to a pure material. The most widespread application is based on Silicon (4 electrons in the third shell) doped with Aluminum (3 outer electrons) or Phosphorus (5 outer electrons).

In a Phosphorus-doped material, the 5th electron will be relatively free to move around, leaving behind positively charged Phosphorus ions. This type of semiconductor is called an n-type, since the current carriers are negative electrons. Conversely, in an Al-doped material, there are positive "holes"; electrons can move to fill the hole, leaving a hole in a different position. In this case one talks about p-type semiconductors, since one can think in terms of positive charge carriers.

The most interesting effects occur when one builds a diode, by coupling an n-type material with a p-type. Without trying to explain the details, we will just say that, because of the nature of the electric fields present at the interface between the two materials, the so-called pn junction, a diode will allow the flow of electrons in one direction only, from n to p, but not in the opposite one.

The most common application for diodes is as current rectifiers: electric power is transmitted as Alternating Current (AC), but most electrically operated equipment needs current of one sign only. A diode at the input to the appliance will transmit only the positive part of the cycle.
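The half-wave rectification just described can be sketched numerically with an ideal-diode model (a deliberate simplification: a real diode also has a small forward voltage drop, ignored here):

```python
import math

def rectify(v_in):
    """Ideal diode: pass only the positive part of the input voltage."""
    return v_in if v_in > 0 else 0.0

# One cycle of an AC waveform, sampled at 8 points.
samples = [math.sin(2 * math.pi * k / 8) for k in range(8)]
rectified = [rectify(v) for v in samples]

# The negative half of the cycle is blocked:
print([round(v, 2) for v in rectified])
```

Running this shows the second half of the cycle clamped to zero, which is exactly the "positive part only" behavior of the diode at the appliance's input.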

Special diodes, with a thin n-type layer over a thick p-type, can exploit solar energy to generate electricity. Electrons liberated within the n-type material by the sun's energy are attracted into the p-type, and the continuing process sustains a steady current.

Note that this is not the same as the photoelectric effect: in the photoelectric effect, electrons are actually pulled out of the material, and a battery is still needed to generate a current. In the solar cell diode, the "electromotive force" is provided by the electric field at the pn junction.


An even more remarkable device is obtained with a pnp or an npn sandwich, where the three components are called respectively emitter, base and collector. This is the transistor, born in 1947 to start the micro-electronic age. The two most fundamental applications of the transistor are:

1. Amplifier.
2. Switch.
Transistor as an Amplifier: again we will not try to explain how, but we will just state that a transistor is able to translate small variations of current received by the base (the middle part of the "sandwich") into proportionally large variations in the current flowing between the emitter and collector. Note that, even though the term "amplifier" might give the impression that energy is somehow gained, in reality there is no violation of energy conservation. To operate, a transistor must be supplied with an adequate amount of electric power that, under quiescent conditions, results in a steady current between emitter and collector. The signal applied to the base has just the effect of "modulating" this current.
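The proportionality between base and collector currents can be illustrated with a simple linear model (the gain and current values below are illustrative assumptions, typical of small-signal transistors, not numbers from the lecture):

```python
# Linear model of transistor amplification: the collector current
# follows the base current scaled by the current gain beta.
beta = 100                  # assumed current gain
i_base_quiescent = 20e-6    # assumed steady base current, 20 microamps

def collector_current(i_base_signal):
    """Collector current for a small signal added to the quiescent base current."""
    return beta * (i_base_quiescent + i_base_signal)

# A tiny +/- 5 microamp wiggle on the base...
signal = [0e-6, 5e-6, 0e-6, -5e-6]
# ...modulates a much larger emitter-collector current by +/- 0.5 milliamps.
output = [collector_current(s) for s in signal]
print([f"{i * 1e3:.1f} mA" for i in output])
```

Note how the quiescent current (2 mA here) is always flowing; the base signal only modulates it, in agreement with the energy-conservation remark above.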

The first application of transistors was in the field of radio transmission. The process of radio (and TV, and telephone calls, etc.) broadcast is based on the transmission and reception of a signal carried by electromagnetic waves. The transmitted signal is picked up by an antenna, whose electrons respond to the variation in the wave's amplitude (AM) or frequency (FM) and generate tiny electric pulses, oscillating in the same pattern as the original transmitted signal. In order to produce, e.g., a sound, these pulses need to be strengthened, to be able to drive the loudspeakers (this is a mechanical process that requires much more energy than the one available from the electric signal). Hence the need for an amplifier.

You might have never seen one, but you might be aware that radios existed before the invention of transistors. In those radios of the 20's, 30's and 40's, the rectifying and amplifying functions were performed by means of "vacuum tubes": diodes, triodes, etc. As the name says, these devices operated by controlling the flow of electrons in a vacuum, but they were necessarily bulky, required high power to operate, etc.

The advent of transistors opened up the age of miniaturization. In the first step, replacing tubes with transistors reduced the size of a typical radio from that of a cooler to that of a small purse. The miniaturization process has then proceeded at an amazing pace: while the first individual transistors occupied a few square mm, nowadays something of the order of a hundred thousand transistors can be formed over the same small area.

Transistor as a Switch: by applying the proper voltage to the transistor's base, one can stop a current from going from emitter to collector. In this way one can create a rapid sequence of current/no-current (or 0/1) states. Such a feature is the basis for the operation of computers and all sorts of other electronic devices.

COMPUTERS

A few basic facts about computers:

The progress in miniaturization has gone together with an increase in speed (it is just a matter of how far a signal has to travel...). In the early 60's, top of the line mini-computers (the precursors of your PC) ran with 1 MHz clocks.

You might wonder how a computer can do its most complex calculations while handling only zeros and ones. To understand this, we need to get familiar with binary (or base 2) arithmetic.

Our normal way of representing numbers is in base 10: when we write, e.g., 3475, this means
(3 x 1000) + (4 x 100) + (7 x 10) + (5 x 1)
or, if you prefer,
(3 x 10^3) + (4 x 10^2) + (7 x 10^1) + (5 x 10^0).

The use of base 10 also implies that, to represent any number, you need 10 different digits, 0 to 9.

Obviously the choice of 10 as a base, even if it's the only one you are familiar with, is completely arbitrary, and we could represent numbers in any other base. In particular, if we want to limit ourselves to the use of two digits only, 0 and 1, then we could adopt a base 2 representation. In the computer world, each 0 or 1 is a binary digit, or bit. Here is how numbers would look in binary representation, together with their decimal equivalents:

binary   decimal
     1         1
    10         2
    11         3
   100         4
   101         5
  etc.

A number like 1001101 will then be interpreted as
1x2^6 + 0x2^5 + 0x2^4 + 1x2^3 + 1x2^2 + 0x2^1 + 1x2^0 = 77.
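The expansion above, summing each digit times its power of 2, can be sketched in a few lines of Python:

```python
# Interpret a binary string by summing powers of 2,
# following the expansion in the text.
def binary_to_decimal(bits):
    total = 0
    for i, b in enumerate(reversed(bits)):
        total += int(b) * 2 ** i    # each digit times its power of 2
    return total

print(binary_to_decimal("1001101"))   # 77, as in the text
print(binary_to_decimal("101"))       # 5
```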

Additions with binary numbers are quite simple:

0+0=0,
1+0=0+1=1,
1+1=10, or 1+1=0 and carry 1.
For example

110001101 + 100001111 = 1010011100
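The schoolbook carry rules above are easy to turn into code; a minimal digit-by-digit adder:

```python
# Add two binary strings digit by digit, propagating the carry
# exactly as in the rules above (1+1 = 0, carry 1).
def binary_add(a, b):
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)   # pad with leading zeros
    result, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry
        result.append(str(s % 2))   # digit to write down
        carry = s // 2              # carry into the next column
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(binary_add("110001101", "100001111"))   # 1010011100, the example above
```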

What about non-integer numbers? They can be represented in a way similar to our scientific notation: just as we express a number by its significant digits multiplied by a power of 10, we can think of a binary representation given by the significant digits (in binary) times a power of two.

For instance: 0.75 = 3/4 = 3 x 1/2^2 = 3 x 2^-2. In the binary representation this could be expressed as 11 10 1, where 11 (decimal 3) represents the significant digits, 10 (decimal 2) is the exponent, and the last 1 specifies that the exponent is negative. More precisely, one should assign a priori a fixed number of bits to each quantity: if we assign 5 bits to the significant part, 5 to the exponent and one to the sign, the binary representation of 0.75 is 00011000101. And so on...
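The 11-bit encoding just described can be reproduced directly (note this 5+5+1 layout is the lecture's illustration, not a real standard such as IEEE 754):

```python
# Encode 0.75 in the ad-hoc 11-bit format described above:
# 5 bits of significant digits, 5 bits of exponent, 1 sign bit
# for the exponent.
significand = 3     # 0.75 = 3 * 2^-2
exponent = 2
exp_negative = 1    # 1 means the exponent is negative

encoding = f"{significand:05b}{exponent:05b}{exp_negative}"
print(encoding)     # 00011000101, matching the text

# Decoding it back:
value = significand * 2 ** (-exponent if exp_negative else exponent)
print(value)        # 0.75
```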

How does a computer perform a given task? The "brain" of the computer (the so-called CPU, Central Processing Unit) knows a certain number of basic instructions, like ADD, SUBTRACT, COMPARE, TEST IF ZERO, AND, OR, FETCH, JUMP, etc. Each of these instructions will have a well defined binary code, e.g. ADD=0000, SUBTRACT=0001, .... Moreover, for each instruction the computer will need to know where to find the operands (e.g. the two quantities to be added together) and where to put the results. A full computer instruction will then consist of a string of bits comprising several "fields": a few bits to specify the operation, more bits for the "source" (the address of the operands) and more bits for the "destination" (i.e. what to do with the result). The complete specification for a task, a program, will consist of a sequence of instructions, each one made up of a string of bits. To run the program, it is enough to tell the CPU the starting location (the address) of the first instruction, and the CPU will then obediently follow the flow of the program, at the rate controlled by the internal clock.
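A toy decoder makes the "fields" idea concrete. The opcodes, field widths and register layout below are invented for illustration, loosely following the description above; they do not correspond to any real CPU:

```python
# Toy CPU sketch: each instruction is a 10-bit string made of a
# 4-bit operation code, two 2-bit source addresses and a 2-bit
# destination address. Opcodes are the example values from the text.
ADD, SUB = "0000", "0001"

registers = [7, 5, 0, 0]    # four small "memory cells"

def execute(instruction):
    """Decode the fields, fetch the operands, store the result."""
    op   = instruction[0:4]
    src1 = instruction[4:6]
    src2 = instruction[6:8]
    dst  = instruction[8:10]
    a, b = registers[int(src1, 2)], registers[int(src2, 2)]
    registers[int(dst, 2)] = a + b if op == ADD else a - b

# A two-instruction "program": r2 = r0 + r1, then r3 = r0 - r1.
program = ["0000000110", "0001000111"]
for instr in program:
    execute(instr)
print(registers)    # [7, 5, 12, 2]
```

Feeding the CPU nothing but bit strings, it has computed 7+5 and 7-5, which is the whole point of the paragraph above: every task reduces to decoding and executing such strings in sequence.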

The bottom line is that anything a computer does is in fact reduced to the sequential processing of strings of 0's and 1's. Obviously, to program a computer you don't provide such strings directly, but you write your program in some high level language. A special set of programs (Compilers, Interpreters and Assemblers) is responsible for translating the program from "human" to binary language.


 
Sergio Conetti
2/26/1998