done in automotive ignition systems to produce high-voltage spark plug power from a low-voltage DC battery), but pulsed DC is not that different from AC. Perhaps more than any other reason, this is why AC finds such widespread application in power systems. If we were to follow the changing voltage produced by a coil in an alternator from any point on the sine wave graph to that point when the wave shape begins to repeat itself, we would have marked exactly one cycle of that wave. This is most easily shown by spanning the distance between identical peaks, but may be measured between any corresponding points on the graph. The degree marks on the horizontal axis of the graph represent the domain of the trigonometric sine function, and also the angular position of our simple two-pole alternator shaft as it rotates:

[Figure: sine wave plotted against alternator shaft position, from 0 through 360 (0) degrees, with one wave cycle marked between corresponding points.]

Since the horizontal axis of this graph can mark the passage of time as well as shaft position in degrees, the dimension marked for one cycle is often measured in a unit of time, most often seconds or fractions of a second. When expressed as a measurement, this is often called the period of a wave. The period of a wave in degrees is always 360, but the amount of time one period occupies depends on the rate the voltage oscillates back and forth. A more popular measure for describing the alternating rate of an AC voltage or current wave than period is the rate of that back-and-forth oscillation. This is called frequency. The modern unit for frequency is the Hertz (abbreviated Hz), which represents the number of wave cycles completed during one second of time. In the United States of America, the standard power-line frequency is 60 Hz, meaning that the AC voltage oscillates at a rate of 60 complete back-and-forth cycles every second. In Europe, where the power system frequency is 50 Hz, the AC voltage only completes 50 cycles every second.
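To make the degrees-to-voltage mapping concrete, here is a small sketch of the alternator's instantaneous output as a function of shaft angle. The 10-volt peak is an assumed value, not from the text; any peak voltage traces the same shape.

```python
import math

# Sketch of the two-pole alternator's output versus shaft angle.
# V_PEAK is an assumed value for illustration only.
V_PEAK = 10.0  # volts (assumed)

def alternator_voltage(shaft_degrees):
    """Instantaneous voltage at a given shaft position, in degrees."""
    return V_PEAK * math.sin(math.radians(shaft_degrees))

for angle in (0, 90, 180, 270, 360):
    print(angle, round(alternator_voltage(angle), 3))
# The 360-degree value matches the 0-degree value: one full rotation
# of the shaft traces exactly one cycle of the wave.
```
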
A radio station transmitter broadcasting at a frequency of 100 MHz generates an AC voltage oscillating at a rate of 100 million cycles every second. Prior to the canonization of the Hertz unit, frequency was simply expressed as "cycles per second." Older meters and electronic equipment often bore frequency units of "CPS" (Cycles Per Second) instead of Hz. Many people believe the change from self-explanatory units like CPS to Hertz
constitutes a step backward in clarity. A similar change occurred when the unit of "Celsius" replaced that of "Centigrade" for metric temperature measurement. The name Centigrade was based on a 100-count ("Centi-") scale ("-grade") representing the melting and boiling points of H2O, respectively. The name Celsius, on the other hand, gives no hint as to the unit's origin or meaning. Period and frequency are mathematical reciprocals of one another. That is to say, if a wave has a period of 10 seconds, its frequency will be 0.1 Hz, or 1/10 of a cycle per second:

Frequency in Hertz = 1 / (Period in seconds)

An instrument called an oscilloscope is used to display a changing voltage over time on a graphical screen. You may be familiar with the appearance of an ECG or EKG (electrocardiograph) machine, used by physicians to graph the oscillations of a patient's heart over time. The ECG is a special-purpose oscilloscope expressly designed for medical use. General-purpose oscilloscopes have the ability to display voltage from virtually any voltage source, plotted as a graph with time as the independent variable. The relationship between period and frequency is very useful to know when displaying an AC voltage or current waveform on an oscilloscope screen. By measuring the period of the wave on the horizontal axis of the oscilloscope screen and reciprocating that time value (in seconds), you can determine the frequency in Hertz.

[Figure: oscilloscope screen showing a sine wave spanning 16 divisions at 1 ms/div, giving a period of 16 ms; Frequency = 1/period = 1/(16 ms) = 62.5 Hz.]

Voltage and current are by no means the only physical variables subject to variation over time. Much more common to our everyday experience is sound, which is nothing more than the alternating compression and decompression (pressure waves) of air molecules, interpreted by our ears as a physical sensation.
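The reciprocal relationship between period and frequency can be sketched in a few lines (a minimal illustration, using the 16 ms oscilloscope example and the 10-second example above):

```python
def frequency_from_period(period_s):
    """Frequency in Hertz is the reciprocal of the period in seconds."""
    return 1.0 / period_s

print(frequency_from_period(16e-3))  # a 16 ms period gives 62.5 Hz
print(frequency_from_period(10.0))   # a 10 s period gives 0.1 Hz
# The relationship works both ways: 60 Hz mains has a period of about 16.7 ms.
print(1.0 / 60)
```
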
Because alternating current is a wave phenomenon, it shares many of the properties of other wave phenomena, like sound. For this reason, sound (especially structured music) provides an excellent analogy for relating AC concepts. In musical terms, frequency is equivalent to pitch. Low-pitch notes such as those produced by a tuba or bassoon consist of air molecule vibrations that are relatively slow (low frequency). High-pitch notes such as those produced
by a flute or whistle consist of the same type of vibrations in the air, only vibrating at a much faster rate (higher frequency). Here is a table showing the actual frequencies for a range of common musical notes:

Note                  Musical designation   Frequency (in hertz)
A                     A1                    220.00
A sharp (or B flat)   A# or Bb              233.08
B                     B1                    246.94
C (middle)            C                     261.63
C sharp (or D flat)   C# or Db              277.18
D                     D                     293.66
D sharp (or E flat)   D# or Eb              311.13
E                     E                     329.63
F                     F                     349.23
F sharp (or G flat)   F# or Gb              369.99
G                     G                     392.00
G sharp (or A flat)   G# or Ab              415.30
A                     A                     440.00
A sharp (or B flat)   A# or Bb              466.16
B                     B                     493.88
C                     C1                    523.25

Astute observers will notice that all notes on the table bearing the same letter designation are related by a frequency ratio of 2:1. For example, the first frequency shown (designated with the letter "A") is 220 Hz. The next highest "A" note has a frequency of 440 Hz, exactly twice as many sound wave cycles per second. The same 2:1 ratio holds true for the first A sharp (233.08 Hz) and the next A sharp (466.16 Hz), and for all note pairs found in the table. Audibly, two notes whose frequencies are exactly double each other sound remarkably similar. This similarity in sound is musically recognized, the shortest span on a musical scale separating such note pairs being called an octave. Following this rule, the next highest "A" note (one octave above 440 Hz) will be 880 Hz, and the next lowest "A" (one octave below 220 Hz) will be 110 Hz. A view of a piano keyboard helps to put this scale into perspective:

[Figure: piano keyboard spanning several octaves of white and black keys, with one octave marked.]

As you can see, one octave is equal to eight white keys' worth of distance on a piano keyboard. The familiar musical mnemonic (doe-ray-mee-fah-so-lah-tee-doe), yes, the same pattern immortalized in the whimsical Rodgers and Hammerstein song sung in The Sound of Music, covers one octave from C to C. While electromechanical alternators and many other physical phenomena naturally produce sine waves, this is not the only kind of alternating wave in existence. Other "waveforms" of AC are commonly produced within electronic circuitry. Here are but a few sample waveforms and their common designations:

[Figure: square, triangle, and sawtooth waveforms, each with one wave cycle marked.]

These waveforms are by no means the only kinds of waveforms in existence. They're simply a few that are common enough to have been given distinct names. Even in circuits that are supposed to manifest "pure" sine, square, triangle, or sawtooth voltage/current waveforms, the real-life result is often a distorted version of the intended waveshape. Some waveforms are so complex that they defy classification as a particular "type" (including waveforms associated with many kinds of musical instruments). Generally speaking, any waveshape bearing close resemblance to a perfect sine wave is termed sinusoidal, anything different being labeled as nonsinusoidal. Being that the waveform of an AC voltage or current is crucial to its impact in a circuit, we need to be aware of the fact that AC waves come in a variety of shapes.

15.5 Measurements of AC magnitude

So far we know that AC voltage alternates in polarity and AC current alternates in direction. We also know that AC can alternate in a variety of different ways, and by tracing the alternation over time we can plot it as a "waveform." We can measure the rate of alternation by measuring the time it takes for a wave to evolve before it repeats itself (the "period"), and express this as cycles per unit time, or "frequency." In music, frequency is the same as pitch, which is the essential property distinguishing one note from another.
However, we encounter a measurement problem if we try to express how large or small an AC quantity is. With DC, where quantities of voltage and current are generally stable, we have little trouble expressing how much voltage or current we have in any part of a circuit. But how do you grant a single measurement of magnitude to something that is constantly changing? One way to express the
intensity, or magnitude (also called the amplitude), of an AC quantity is to measure its peak height on a waveform graph. This is known as the peak or crest value of an AC waveform:

[Figure: waveform plotted over time, with the peak amplitude marked from the zero line to the crest.]

Another way is to measure the total height between opposite peaks. This is known as the peak-to-peak (P-P) value of an AC waveform:

[Figure: waveform plotted over time, with the peak-to-peak amplitude marked between opposite crests.]

Unfortunately, either one of these expressions of waveform amplitude can be misleading when comparing two different types of waves. For example, a square wave peaking at 10 volts is obviously a greater amount of voltage for a greater amount of time than a triangle wave peaking at 10 volts. The effects of these two AC voltages powering a load would be quite different:

[Figure: a 10 V (peak) square wave dissipates more heat energy than a 10 V (peak) triangle wave through the same load resistance.]

One way of expressing the amplitude of different waveshapes in a more equivalent fashion is to mathematically average the values of all the points on a waveform's graph to a single, aggregate number. This amplitude measure is known simply as the average value of the waveform. If we average all the points on the waveform algebraically (that is, to consider their sign, either positive or negative), the average value for most waveforms is technically zero, because all the positive points cancel out all the negative points over a full cycle:

[Figure: waveform with positive points marked "+" and negative points marked "-"; the true average value of all points (considering their signs) is zero.]

This, of course, will be true for any waveform having equal-area portions above and below the "zero" line of a plot. However, as a practical measure of a waveform's aggregate value, "average" is usually defined as the mathematical mean of all the points' absolute values over a cycle.
In other words, we calculate the practical average value of the waveform by considering all points on the wave as positive quantities, as if the waveform looked like this:

[Figure: fully rectified waveform with every point marked "+"; the practical average treats all values as positive.]

Polarity-insensitive mechanical meter movements (meters designed to respond equally to the positive and negative half-cycles of an alternating voltage or current)
register in proportion to the waveform's (practical) average value, because the inertia of the pointer against the tension of the spring naturally averages the force produced by the varying voltage/current values over time. Conversely, polarity-sensitive meter movements vibrate uselessly if exposed to AC voltage or current, their needles oscillating rapidly about the zero mark, indicating the true (algebraic) average value of zero for a symmetrical waveform. When the "average" value of a waveform is referenced in this text, it will be assumed that the "practical" definition of average is intended unless otherwise specified. Another method of deriving an aggregate value for waveform amplitude is based on the waveform's ability to do useful work when applied to a load resistance. Unfortunately, an AC measurement based on work performed by a waveform is not the same as that waveform's "average" value, because the power dissipated by a given load (work performed per unit time) is not directly proportional to the magnitude of either the voltage or current impressed upon it. Rather, power is proportional to the square of the voltage or current applied to a resistance (P = E²/R, and P = I²R). Although the mathematics of such an amplitude measurement might not be straightforward, the utility of it is. Consider a bandsaw and a jigsaw, two pieces of modern woodworking equipment. Both types of saws use a thin, toothed, motor-powered metal blade to cut wood. But while the bandsaw uses a continuous motion of the blade to cut, the jigsaw uses a back-and-forth motion. The comparison of alternating current (AC) to direct current (DC) may be likened to the comparison of these two saw types:

[Figure: a bandsaw's continuous blade motion (analogous to DC) compared with a jigsaw's back-and-forth blade motion (analogous to AC).]

The problem of trying to describe the changing quantities of AC voltage or current in a single, aggregate measurement is also present in this saw analogy: how might we express the speed of a jigsaw blade?
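The two aggregate measures introduced above, the practical (absolute-value) average and the square-based, work-equivalent measure (commonly called the RMS value), can be sketched numerically for a 10-volt-peak sine wave. This is a minimal illustration; the peak value and sample count are arbitrary choices, not from the text.

```python
import math

N = 100_000  # samples across one full cycle (arbitrary resolution)
samples = [10.0 * math.sin(2 * math.pi * k / N) for k in range(N)]

# Algebraic average: positives cancel negatives over a full cycle.
algebraic_avg = sum(samples) / N

# Practical average: mean of absolute values, about 0.637 * peak for a sine.
practical_avg = sum(abs(v) for v in samples) / N

# Square-based measure (RMS): square root of the mean of the squares,
# about 0.707 * peak for a sine wave.
rms = math.sqrt(sum(v * v for v in samples) / N)

print(round(algebraic_avg, 6), round(practical_avg, 2), round(rms, 2))
```

For a 10 V peak sine wave this gives roughly 0.0, 6.37, and 7.07, matching the 2/π and 1/√2 factors that hold for sinusoids.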
A bandsaw blade moves with a constant speed, similar to the way DC voltage pushes or DC current moves with a constant magnitude. A jigsaw blade, on the other hand, moves back and forth, its blade speed constantly changing. What is more, the back-and-forth motion of any two jigsaws may not be of the
same type, depending on the mechanical design of the saws. One jigsaw might move its blade with a sine-wave motion, while another with a triangle-wave motion. To rate a jigsaw based on its peak blade speed would be quite misleading when comparing one jigsaw to another (or a jigsaw with a bandsaw!). Despite the fact that these different saws move their blades in different manners, they are equal in one respect: they all cut wood, and a quantitative comparison of this common function can serve as a common basis for which to rate blade speed. Picture a jigsaw and bandsaw side-by-side, equipped with identical blades (same tooth pitch, angle, etc.), equally capable of cutting the same thickness of the same type of wood at the same rate. We might say that the two saws were equivalent or equal in their cutting capacity. Might this comparison be used to assign a "bandsaw equivalent" blade speed to the jigsaw's back-and-forth blade motion; to relate the wood-cutting effectiveness of one to the other? This is the general idea used to assign a "DC equivalent" measurement to any AC voltage or current: whatever magnitude of DC voltage or current would produce the same amount of heat energy dissipation through an equal resistance:

[Figure: a 10 V RMS AC source and a 10 V DC source each driving an equal load resistance; both produce 5 A (RMS) of current and dissipate 50 W, demonstrating equal power dissipated through equal resistance loads.]

Suppose we were to wrap a coil of insulated wire around a loop of ferromagnetic material and energize this coil with an AC voltage source:

[Figure: wire coil wound around a loop-shaped iron core.]

As an inductor, we would expect this iron-core coil to oppose the applied voltage with its inductive reactance, limiting current through the coil as predicted by the equations XL = 2πfL and I = E/X (or I = E/Z). For the purposes of this example, though, we need to take a more detailed look at the interactions of voltage, current, and magnetic flux in the device.
Kirchhoff's voltage law describes how the algebraic sum of all voltages in a loop must equal zero. In this example, we could apply this fundamental law of electricity to describe the respective voltages of the source and of the inductor coil. Here, as in any one-source, one-load circuit, the
voltage dropped across the load must equal the voltage supplied by the source, assuming zero voltage dropped along the resistance of any connecting wires. In other words, the load (inductor coil) must produce an opposing voltage equal in magnitude to the source, in order that it may balance against the source voltage and produce an algebraic loop voltage sum of zero. From where does this opposing voltage arise? If the load were a resistor, the opposing voltage would originate from the "friction" of electrons flowing through the resistance of the resistor. With a perfect inductor (no resistance in the coil wire), the opposing voltage comes from another mechanism: the reaction to a changing magnetic flux in the iron core. Michael Faraday discovered the mathematical relationship between magnetic flux (Φ) and induced voltage with this equation:

e = N (dΦ/dt)

Where,
e = (Instantaneous) induced voltage in volts
N = Number of turns in wire coil (straight wire = 1)
Φ = Magnetic flux in Webers
t = Time in seconds

The instantaneous voltage (voltage dropped at any instant in time) across a wire coil is equal to the number of turns of that coil around the core (N) multiplied by the instantaneous rate-of-change in magnetic flux (dΦ/dt) linking with the coil. Graphed, this shows itself as a set of sine waves (assuming a sinusoidal voltage source), the flux wave lagging 90° behind the voltage wave:

[Figure: sinusoidal voltage (e) and magnetic flux (Φ) waveforms, the flux wave lagging the voltage wave by 90°.]

Magnetic flux through a ferromagnetic material is analogous to current through a conductor: it must be motivated by some force in order to occur. In electric circuits, this motivating force is voltage (a.k.a. electromotive force, or EMF). In magnetic "circuits," this motivating force is magnetomotive force, or mmf.
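Faraday's equation can be checked numerically. The sketch below uses assumed values (100 turns and a 60 Hz sinusoidal flux with a 0.01 Wb peak) and a finite-difference estimate of dΦ/dt; it also shows the 90° relationship described above.

```python
import math

N_TURNS = 100    # turns of wire (assumed)
F_HZ = 60        # flux frequency in Hz (assumed)
PHI_MAX = 0.01   # peak flux in webers (assumed)
OMEGA = 2 * math.pi * F_HZ

def flux(t):
    """Sinusoidal magnetic flux in webers at time t (seconds)."""
    return PHI_MAX * math.sin(OMEGA * t)

def induced_voltage(t, dt=1e-9):
    """e = N * dPhi/dt, estimated by a central finite difference."""
    return N_TURNS * (flux(t + dt) - flux(t - dt)) / (2 * dt)

# Voltage peaks where the flux crosses zero (changing fastest), and is
# nearly zero where the flux peaks: the 90-degree lag described above.
print(round(induced_voltage(0.0), 3))             # near N * omega * PHI_MAX
print(round(induced_voltage(1 / (4 * F_HZ)), 3))  # near 0
```
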
Magnetomotive force (mmf) and magnetic flux (Φ) are related to each other by a property of magnetic materials known as reluctance (the latter quantity symbolized by a strange-looking letter "R"). A comparison of "Ohm's Law" for electric and magnetic circuits:

E = IR (Electrical)
mmf = ΦR (Magnetic)

In our example, the mmf required to produce this changing magnetic flux (Φ) must be supplied by a changing current through the coil. Magnetomotive force generated by an electromagnet coil is equal to the amount of current through that coil (in amps) multiplied by the number of turns of that coil
around the core (the SI unit for mmf is the amp-turn). Because the mathematical relationship between magnetic flux and mmf is directly proportional, and because the mathematical relationship between mmf and current is also directly proportional (no rates-of-change present in either equation), the current through the coil will be in-phase with the flux wave:

[Figure: voltage (e), magnetic flux (Φ), and coil current (i) waveforms; the current wave is in-phase with the flux wave, both lagging the voltage wave by 90°.]

This is why alternating current through an inductor lags the applied voltage waveform by 90°: because that is what is required to produce a changing magnetic flux whose rate-of-change produces an opposing voltage in-phase with the applied voltage. Due to its function in providing magnetizing force (mmf) for the core, this current is sometimes referred to as the magnetizing current. It should be mentioned that the current through an iron-core inductor is not perfectly sinusoidal (sine-wave shaped), due to the nonlinear B/H magnetization curve of iron. In fact, if the inductor is cheaply built, using as little iron as possible, the magnetic flux density might reach high levels (approaching saturation), resulting in a magnetizing current waveform that looks something like this:

[Figure: voltage (e), magnetic flux (Φ), and coil current (i) waveforms; the current waveform is distorted, with bell-shaped half-cycle peaks.]

When a ferromagnetic material approaches magnetic flux saturation, disproportionately greater levels of magnetic field force (mmf) are required to deliver equal increases in magnetic field flux (Φ). Because mmf is proportional to current through the magnetizing coil (mmf = NI, where "N" is the number of turns of wire in the coil and "I" is the current through it), the large increases of mmf required to supply the needed increases in flux result in large increases in coil current. Thus, coil current increases dramatically at the peaks in order to maintain a flux waveform that isn't distorted, accounting for the bell-shaped half-cycles of the current waveform in the above plot.
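The chain of proportionalities above (flux requires mmf; mmf comes from coil current) can be sketched using the magnetic "Ohm's Law" analogy. All numerical values here are assumed for illustration only.

```python
def magnetizing_current(flux_wb, reluctance_at_per_wb, turns):
    """I = mmf / N, where mmf = flux * reluctance (the magnetic 'Ohm's Law')."""
    mmf = flux_wb * reluctance_at_per_wb  # amp-turns
    return mmf / turns                    # amps

# Assumed values: 0.002 Wb of flux, a core reluctance of 50,000 At/Wb,
# and a 100-turn coil:
print(magnetizing_current(0.002, 50_000, 100))   # 1.0 A of magnetizing current

# Near saturation the core's effective reluctance rises sharply, so the
# same flux demands disproportionately more current:
print(magnetizing_current(0.002, 250_000, 100))  # 5.0 A
```
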
The situation is further complicated by energy losses within the iron core. The effects of hysteresis and eddy currents conspire to further distort and complicate the current waveform, making it even less sinusoidal and altering its phase to be lagging slightly less than 90° behind the applied voltage waveform. This coil current resulting from the sum total of all magnetic effects in the core
(dΦ/dt magnetization plus hysteresis losses, eddy current losses, etc.) is called the exciting current. The distortion of an iron-core inductor's exciting current may be minimized if it is designed for and operated at very low flux densities. Generally speaking, this requires a core with large cross-sectional area, which tends to make the inductor bulky and expensive. For the sake of simplicity, though, we'll assume that our example core is far from saturation and free from all losses, resulting in a perfectly sinusoidal exciting current. As we've seen already in the inductors chapter, having a current waveform 90° out of phase with the voltage waveform creates a condition where power is alternately absorbed and returned to the circuit by the inductor. If the inductor is perfect (no wire resistance, no magnetic core losses, etc.), it will dissipate zero power. Let us now consider the same inductor device, except this time with a second coil wrapped around the same iron core. The first coil will be labeled the primary coil, while the second will be labeled the secondary:

[Figure: two wire coils wound around a common iron core.]

If this secondary coil experiences the same magnetic flux change as the primary (which it should, assuming perfect containment of the magnetic flux through the common core), and has the same number of turns around the core, a voltage of equal magnitude and phase to the applied voltage will be induced along its length. In the following graph, the induced voltage waveform is drawn slightly smaller than the source voltage waveform simply to distinguish one from the other:

[Figure: primary coil voltage (ep), secondary coil voltage (es), magnetic flux (Φ), and primary coil current (ip) waveforms; ep and es are equal in phase.]

Chapter 16 Electronics

(NOTE TO SELF: Mark: I have very little idea of how to make this flow, fit in or even how best to explain any of it.
All the content in here is just trawled from other GFDL projects: www.wikipedia.com www.wikibooks.com The syllabus document has NO meaningful information on this stuff.)

Electronics:

16.1 capacitive and inductive circuits

16.1.1 A capacitor

(NOTE TO SELF: If we are going to talk of capacitive circuits we need a definition of capacitor.)

A capacitor (historically known as a "condenser") is a device that stores energy in an electric field, by accumulating an internal imbalance of electric charge. It is made of two conductors separated by a dielectric (insulator). The classic example is two parallel plates with a uniform electric field between them. When a voltage is applied, one end of the capacitor is drained of charge while the other end is filled with charge. This is known as charging. Charging creates a charge imbalance between the two plates and creates a reverse voltage that stops the capacitor from charging. This is why, when capacitors are first connected to a voltage, charge flows only to stop as the capacitor becomes charged. When a capacitor is charged, current stops flowing and it becomes an open circuit. It is as if the capacitor gained infinite resistance. Just as the capacitor charges, it can be discharged.

16.1.2 An inductor

(NOTE TO SELF: If we are going to talk of inductive circuits we need a definition of an inductor.)

An inductor is a device which stores energy in a magnetic field. Inductors are formed of a coil of conductive material. When current flows through the wire it creates a magnetic field which exists inside the coil. When the current stops, the magnetic field weakens, but we have learnt that a changing magnetic field induces a current in a wire. So when the current turns off, the decreasing magnetic field induces another current in the wire. As the field decreases in strength, so does the induced current. Normally inductors are made of copper wire, but not always (for example, aluminum wire, or a spiral pattern etched on a circuit board). The material around and within the coil affects its properties; common types are air-core (only a coil of wire), iron-core, and ferrite-core. Iron and ferrite types are more efficient because they conduct the magnetic field much better than air; of the two, ferrite is more efficient because stray electricity cannot flow through it.

Interesting Fact: Some inductors have more than a core, which is just a rod the coil is formed about.
Some are formed like transformers, using two E-shaped pieces facing each other, the wires wound about the central leg of the E’s. The E’s are made of laminated
iron/steel or ferrite.

Important qualities of an inductor

There are several important properties for an inductor.
* Current carrying capacity is determined by wire thickness.
* Q, or quality, is determined by the uniformity of the windings, as well as the core material and how thoroughly it surrounds the coil.
* Last but not least, the inductance of the coil. The inductance is determined by several factors:
  * coil shape: short and squat is best
  * core material
  * windings: winding in opposite directions will cancel out the inductance effect, and you will have only a resistor.

16.2 filters and signal tuning

(NOTE TO SELF: I think this relies on an understanding of second order ODEs and that's beyond the scope of the maths syllabus - we can put something high level but there is no way they'll understand it properly - surely we should teach as little phenomenology as possible - the waves chapter has a ton of it already)

16.3 active circuit elements, diode, LED and field effect transistor, operational amplifier

16.3.1 Diode

A diode functions as the electronic version of a one-way valve. By restricting the direction of movement of charge carriers, it allows an electric current to flow in one direction, but blocks it in the opposite direction. It is a one-way street for current. Diode behavior is analogous to the behavior of a hydraulic device called a check valve. A check valve allows fluid flow through it in one direction only:

[Figure: hydraulic check valve, permitting flow in one direction and blocking it in the other.]

Check valves are essentially pressure-operated devices: they open and allow flow if the pressure across them is of the correct "polarity" to open the gate (in the analogy shown, greater fluid pressure on the right than on the left). If the pressure is of the opposite "polarity," the pressure difference across the check valve will close and hold the gate so that no flow occurs. Like check valves, diodes are essentially "pressure-" operated (voltage-operated) devices.
The essential difference between forward-bias and reverse-bias is the polarity of the voltage dropped across the diode. Let's take a closer look at the simple battery-diode-lamp circuit shown earlier, this time investigating voltage drops across the various components:

[Figure: battery-diode-lamp circuit with voltage drops marked across each component.]

When the diode is forward-biased and
conducting current, there is a small voltage dropped across it, leaving most of the battery voltage dropped across the lamp. When the battery's polarity is reversed and the diode becomes reverse-biased, it drops all of the battery's voltage and leaves none for the lamp. If we consider the diode to be a sort of self-actuating switch (closed in the forward-bias mode and open in the reverse-bias mode), this behavior makes sense. The most substantial difference here is that the diode drops a lot more voltage when conducting than the average mechanical switch (0.7 volts versus tens of millivolts). This forward-bias voltage drop exhibited by the diode is due to the action of the depletion region formed by the P-N junction under the influence of an applied voltage. When there is no voltage applied across a semiconductor diode, a thin depletion region exists around the region of the P-N junction, preventing current through it. The depletion region is for the most part devoid of available charge carriers and so acts as an insulator:

[Figure: P-N junction with its depletion region, devoid of charge carriers, acting as an insulator.]

16.3.2 LED

A light-emitting diode (LED) is a semiconductor device that emits light when charge flows in the correct direction through it. If you apply a voltage to force current to flow in the direction the LED allows, it will light up.

[Figure: LED schematic symbol, a diode with two small arrows pointing away from it.]

This notation of having two small arrows pointing away from the device is common to the schematic symbols of all light-emitting semiconductor devices. Conversely, if a device is light-activated (meaning that incoming light stimulates it), then the symbol will have two small arrows pointing toward it. It is interesting to note, though, that LEDs are capable of acting as light-sensing devices: they will generate a small voltage when exposed to light, much like a solar cell on a small scale. This property can be gainfully applied in a variety of light-sensing circuits.
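The "self-actuating switch" picture described above can be sketched numerically. The 6-volt battery is an assumed value for illustration; the 0.7 V figure is the typical silicon-diode drop quoted in the text.

```python
def lamp_voltage(v_battery, forward_biased, v_diode_drop=0.7):
    """Voltage left for the lamp in the battery-diode-lamp circuit."""
    if forward_biased:
        # Diode conducts, dropping ~0.7 V; the lamp gets the rest.
        return v_battery - v_diode_drop
    # Reverse-biased: the diode acts as an open switch and drops
    # the entire battery voltage, leaving none for the lamp.
    return 0.0

print(lamp_voltage(6.0, True))    # ~5.3 V across the lamp
print(lamp_voltage(6.0, False))   # 0.0 V across the lamp
```
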
The color depends on the semiconducting material used to construct the LED, and can be in the near-ultraviolet, visible or infrared part of the electromagnetic spectrum.

Interesting Fact: Nick Holonyak Jr. (born 1928) of the University of Illinois at Urbana-Champaign developed the first practical visible-spectrum LED in 1962.

Physical function

Because LEDs are made of different chemical substances than normal rectifying diodes, their forward voltage drops will be different. Typically, LEDs have much larger forward voltage drops than rectifying diodes, anywhere from about 1.6 volts to over 3 volts, depending on the color. Typical operating current for a standard-sized LED is around 20 mA. When operating an LED from a DC voltage source greater than the LED's forward voltage, a series-connected "dropping" resistor must be included to prevent full source voltage from damaging the LED. Consider this example circuit:

[Figure: 6 V battery in series with a dropping resistor and an LED.]

With the LED dropping 1.6 volts, there will be 4.4 volts dropped across the resistor. Sizing the resistor for an LED current of 20 mA is as simple as taking its voltage drop (4.4 volts) and dividing by circuit current (20 mA), in accordance with Ohm's Law (R=E/I). This gives us a figure of 220 Ω. Calculating power dissipation for this resistor, we take its voltage drop and multiply by its current (P=IE), and end up with 88 mW, well within the rating of a 1/8 watt resistor. Higher battery voltages will require larger-value dropping resistors, and possibly higher-power rating resistors as well. Consider this example for a supply voltage of 24 volts:

[Figure: 24 V battery in series with a dropping resistor and an LED.]

Here, the dropping resistor must be increased to a size of 1.12 kΩ in order to drop 22.4 volts at 20 mA so that the LED still receives only 1.6 volts. This also makes for a higher resistor power dissipation: 448 mW, nearly one-half a watt of power! Obviously, a resistor rated for 1/8 watt power dissipation or even 1/4 watt dissipation will overheat if used here. Dropping resistor values need not be precise for LED circuits. Suppose we were to use a 1 kΩ resistor instead of a 1.12 kΩ resistor in the circuit shown above. The result would be a slightly greater circuit current and LED voltage drop, resulting in a brighter light from the LED and slightly reduced service life. A dropping resistor with too much resistance (say, 1.5 kΩ instead of 1.12 kΩ) will result in less circuit current, less LED voltage, and a dimmer light.
LEDs are quite tolerant of variation in applied power, so you need not strive for perfection in sizing the dropping resistor. Also because of their unique chemical makeup, LEDs have much, much lower peak-inverse voltage (PIV) ratings than ordinary rectifying diodes
. A typical LED might only be rated at 5 volts in reverse-bias mode. Therefore, when using alternating current to power an LED, you should connect a protective rectifying diode in series with the LED to prevent reverse breakdown every other half-cycle:

[Figure: AC source driving an LED with a protective rectifying diode connected in series.]

Light emission

The wavelength of the light emitted, and therefore its color, depends on the materials forming the pn junction. A normal diode, typically made of silicon or germanium, emits invisible far-infrared light (so it can't be seen), but the materials used for an LED emit light corresponding to near-infrared, visible or near-ultraviolet frequencies.

Considerations in use

Unlike incandescent light bulbs, which can operate with either AC or DC, LEDs require a DC supply of the correct electrical polarity. When the voltage across the pn junction is in the correct direction, a significant current flows and the device is said to be forward biased. The voltage across the LED in this case is fixed for a given LED and is proportional to the energy of the emitted photons. If the voltage is of the wrong polarity, the device is said to be reverse biased, very little current flows, and no light is emitted. Because the voltage versus current characteristics of an LED are much like any diode, they can be destroyed by connecting them to a voltage source much higher than their turn-on voltage. The voltage drop across a forward biased LED increases as the amount of light emitted increases because of the optical power being radiated. One consequence is that LEDs of the same type can be readily operated in parallel. The turn-on voltage of an LED is a function of the color; a higher forward drop is associated with emitting higher energy (bluer) photons. The reverse voltage that most LEDs can sustain without damage is usually only a few volts.
Some LED units contain two diodes, one in each direction and each a different color (typically red and green), allowing two-color operation, or a range of colors to be created by altering the percentage of time the voltage is in each polarity.

LED materials

LED development began with infrared and red devices made with gallium arsenide. Advances in materials science have made possible the production of devices with ever-shorter wavelengths, producing light in a variety of colors. Conventional LEDs are made from a variety of inorganic minerals, producing the following colors:

• aluminium gallium arsenide (AlGaAs): red and infrared
• gallium arsenide phosphide (GaAsP): red, orange-red, orange, and yellow
• gallium nitride (GaN): green, pure green (or emerald green), and blue
• gallium phosphide (GaP): red, yellow and green
• zinc selenide (ZnSe): blue
• indium gallium nitride (InGaN): bluish-green and blue
• silicon carbide (SiC): blue
• diamond (C): ultraviolet
• silicon (Si): under development

(NOTE TO SELF: The above list is taken from public sources, but at least one LED given as blue does not produce blue light. (There is a good chance that almost none do, because of the higher frequency of blue.) This is a common problem in daily life due to the majority of mankind being ignorant of colour theory and conflating blue with light blue with cyan, the latter often called "sky blue". A cyan LED may be distinguished from a blue LED in that adding a yellow phosphor to the output makes green, rather than white, light. And often aqua is called blue-green when in actuality the latter is cyan, and light cyan-green would be aqua. What adds to the confusion is that cyan LEDs are enclosed in blue plastic. A great amount of work is needed to dispel these intuitive myths of colour mixing before accurate descriptions of physical phenomena and their production can happen. - This needs to be sorted out)

Blue and white LEDs

Commercially viable blue LEDs were invented by Shuji Nakamura while working in Japan at Nichia Corporation in 1993, and became widely available in the late 1990s. They can be added to existing red and green LEDs to produce white light. Most "white" LEDs in production today use a 450 nm to 470 nm blue GaN (gallium nitride) LED covered by a yellowish phosphor coating, usually made of cerium-doped yttrium aluminium garnet (YAG:Ce) crystals which have been powdered and bound in a type of viscous adhesive. The LED chip emits blue light, part of which is converted to yellow by the YAG:Ce.
The single crystal form of YAG:Ce is actually considered a scintillator rather than a phosphor. Since yellow light stimulates the red and green receptors of the eye, the resulting mix of blue and yellow light gives the appearance of white.
The newest method used to produce white-light LEDs uses no phosphors at all. It is based on homoepitaxially grown zinc selenide (ZnSe) on a ZnSe substrate, which simultaneously emits blue light from its active region and yellow light from the substrate.

Other colors

Recent color developments include pink and purple. They consist of one or two phosphor layers over a blue LED chip. The first phosphor layer of a pink LED is a yellow-glowing one, and the second phosphor layer is either red- or orange-glowing. Purple LEDs are blue LEDs with an orange-glowing phosphor over the chip. Some pink LEDs have run into issues. For example, some are blue LEDs painted with fluorescent paint or fingernail polish that can wear off, and some are white LEDs with a pink phosphor or dye that unfortunately fades after a short time. Ultraviolet, blue, pure green, white, pink and purple LEDs are relatively expensive compared to the more common reds, oranges, greens, yellows and infrareds, and are thus less commonly used in commercial applications. The semiconducting chip is encased in a solid plastic lens, which is much tougher than the glass envelope of a traditional light bulb or tube. The plastic may be colored, but this is only for cosmetic reasons and does not affect the color of the light emitted.

Operational parameters and efficiency

Most typical LEDs are designed to operate with no more than 30 to 60 milliwatts of electrical power. It is projected that by 2005, 10-watt units will be available. These devices will produce about as much light as a common 50-watt incandescent bulb, and will facilitate the use of LEDs for general illumination needs.

Interesting Fact: In September 2003 a new type of blue LED was demonstrated by the company Cree, Inc. to have 35% efficiency at 20 mA. This produced a commercially packaged white light having 65 lumens per watt at 20 mA, becoming the brightest white LED commercially available at the time.
Organic light-emitting diodes (OLEDs)

If the emissive layer material of an LED is an organic compound, it is known as an Organic Light Emitting Diode (OLED). To function as a semiconductor, the organic emissive material must have conjugated pi bonds. The emissive material can be a small organic molecule in a crystalline phase, or
a polymer. Polymer materials can be flexible; such LEDs are known as PLEDs or FLEDs. Compared with regular LEDs, OLEDs are lighter, and polymer LEDs can have the added benefit of being flexible. Some possible future applications of OLEDs could be:

• light sources
• wall decorations
• luminous cloth

LED applications

Here is a list of known applications for LEDs, some of which are further elaborated upon in the following text:

• in general, commonly used as information indicators in various types of embedded systems (many of which are listed below)
• thin, lightweight message displays, e.g. in public information signs (at airports and railway stations, among other places)
• status indicators, e.g. on/off lights on professional instruments and consumer audio/video equipment
• infrared LEDs in remote controls (for TVs, VCRs, etc.)
• clusters in traffic signals, replacing ordinary bulbs behind colored glass
• car indicator lights and bicycle lighting; also for pedestrians to be seen by car traffic
• calculator and measurement instrument displays (seven-segment displays), although now mostly replaced by LCDs
• red or yellow LEDs in indicator and alphanumeric displays in environments where night vision must be retained: aircraft cockpits, submarine and ship bridges, astronomy observatories, and in the field, e.g. night-time animal watching and military field use
• red or yellow LEDs in photographic darkrooms, for providing lighting which does not lead to unwanted exposure of the film
• illumination, e.g. flashlights (a.k.a. torches, UK) and backlights for LCD screens
• signaling/emergency beacons and strobes
• movement sensors, e.g. in mechanical and optical computer mice and trackballs
• in LED printers, e.g. high-end color printers

LEDs offer benefits in terms of maintenance and safety. The typical working lifetime of a device, including the bulb, is ten years, which is much longer than the lifetimes of most other light sources.
LEDs fail by dimming over time, rather than with the abrupt burn-out of incandescent bulbs. LEDs give off less heat than incandescent light bulbs and are less fragile than fluorescent lamps. Since an individual device is smaller than a centimetre in length, LED-based light sources used for illumination and outdoor signals are built using clusters of tens of devices.

Because they are monochromatic, LED lights have great power advantages over white lights where a specific color is required. Unlike white lights, the LED does not need a filter that absorbs most of the emitted white light. Colored fluorescent lights are made, but they are not widely available. LED lights are inherently colored, and are available in a wide range of colors. One of the most recently introduced colors is emerald green (bluish green, about 500 nm), which meets the legal requirements for traffic signals and navigation lights.

Interesting Fact: The largest LED display in the world is 36 metres high (118 feet), at Times Square, New York, U.S.A.

There are applications that specifically require light that does not contain any blue component. Examples are photographic darkroom safe lights, illumination in laboratories where certain photo-sensitive chemicals are used, and situations where dark adaptation (night vision) must be preserved, such as cockpit and bridge illumination, observatories, etc. Yellow LED lights are a good choice to meet these special requirements because the human eye is more sensitive to yellow light.

16.3.3 Transistor

The transistor is a solid-state semiconductor device used for amplification and switching, and has three terminals. The transistor itself does not amplify current, though, which is a common misconception: a small current or voltage applied to one terminal controls the current through the other two, hence the term transistor, a voltage- or current-controlled resistor. It is the key component in all modern electronics. In digital circuits, transistors are used as very fast electrical switches, and arrangements of transistors can function as logic gates, RAM-type memory and other devices.
In analog circuits, transistors are essentially used as amplifiers. "Transistor" was also the common name in the sixties for a transistor radio, a pocket-sized portable radio that utilized transistors (rather than vacuum tubes) as its active electronics. This is still one of the dictionary definitions of transistor.

The only functional difference between a PNP transistor and an NPN transistor is the proper biasing (polarity) of the junctions when operating. For any given state of operation, the current directions and voltage polarities for each type of transistor are exactly opposite each other. Bipolar transistors work as current-controlled current regulators. In other words, they restrict the amount of current that can go through them according to a smaller, controlling current. The main current that is controlled goes from collector to emitter, or from emitter to collector, depending on the type of transistor (PNP or NPN, respectively). The small current that controls the main current goes from base to emitter, or from emitter to base, once again depending on the type of transistor (PNP or NPN, respectively). According to the confusing standards of semiconductor symbology, the arrow always points against the direction of electron flow.

Bipolar transistors are called bipolar because the main flow of electrons through them takes place in two types of semiconductor material, P and N, as the main current goes from emitter to collector (or vice versa). In other words, two types of charge carriers (electrons and holes) comprise this main current through the transistor. As you can see, the controlling current and the controlled current always mesh together through the emitter wire, and their electrons always flow against the direction of the transistor's arrow. This is the first and foremost rule in the use of transistors: all currents must be going in the proper directions for the device to work as a current regulator. The small, controlling current is usually referred to simply as the base current, because it is the only current that goes through the base wire of the transistor. Conversely, the large, controlled current is referred to as the collector current, because it is the only current that goes through the collector wire. The emitter current is the sum of the base and collector currents, in compliance with Kirchhoff's Current Law.
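The current relationships just described can be put in numbers. A minimal sketch, assuming operation in the active region; the current gain (beta) of 100 and the 50 microamp base current are illustrative values:

```python
# Bipolar transistor current relations in the active region (sketch).
def bjt_currents(i_base, beta=100.0):
    """Return (collector, emitter) currents for a given base current."""
    i_collector = beta * i_base        # controlled current, set by the base current
    i_emitter = i_base + i_collector   # Kirchhoff's Current Law at the device
    return i_collector, i_emitter

ic, ie = bjt_currents(50e-6)  # 50 microamps of base current
print(ic, ie)                 # about 5 mA collector, 5.05 mA emitter
```

Note how the emitter carries the sum of both currents, exactly as the text states: the controlling and controlled currents mesh together through the emitter wire.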
If there is no current through the base of the transistor, it shuts off like an open switch and prevents current through the collector. If there is a base current, then the transistor turns on like a closed switch and allows a proportional amount of current through the collector. Collector current is primarily limited by the base current, regardless of the amount of voltage available to push it. The next section will explore in more detail the use of bipolar transistors as switching elements.

Importance

The transistor is considered by many to be one of the greatest discoveries or inventions in modern history, ranking with banking and the printing press. Key to the importance of the transistor in modern society is its ability to be produced in huge numbers using simple techniques, resulting in vanishingly small prices. Computer "chips" consist of millions of transistors and sell for rands, with per-transistor costs in the thousandths of cents. The low cost has meant that the transistor has become an almost universal tool for non-mechanical tasks. Whereas a common device, say a refrigerator, would once have used a mechanical device for control, today it is often less expensive to simply use a few million transistors and the appropriate computer program to carry out the same task through "brute force". Today transistors have replaced almost all electromechanical devices, most simple feedback systems, and appear in huge numbers in everything from computers to cars.

Hand-in-hand with low cost has been the increasing move to "digitizing" all information. With transistorized computers offering the ability to quickly find (and sort) digital information, more and more effort was put into making all information digital. Today almost all media in modern society is delivered in digital form, converted and presented by computers. Common "analog" forms of information such as television or newspapers spend the vast majority of their time as digital information, being converted to analog only for a small portion of the time.

Interesting Fact: The transistor was invented at Bell Laboratories in December 1947 (first demonstrated on December 23) by John Bardeen, Walter Houser Brattain, and William Bradford Shockley, who were awarded the Nobel Prize in Physics in 1956.

16.3.4 The transistor as a switch

Because a transistor's collector current is proportionally limited by its base current, it can be used as a sort of current-controlled switch. A relatively small flow of electrons sent through the base of the transistor has the ability to exert control over a much larger flow of electrons through the collector.
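This "small current controls a large current" idea can be made concrete with a quick calculation. The sketch below assumes a hypothetical 0.5 A lamp, a current gain of 100, and a factor-of-two overdrive to guarantee saturation; all three numbers are illustrative, not from the text:

```python
# How little control current a switch (or sensor) must supply when a
# transistor handles the load: the current gain does the rest.
lamp_current = 0.5   # amps needed by the lamp (illustrative)
beta = 100.0         # assumed transistor current gain
overdrive = 2.0      # extra factor to drive the transistor well into saturation

base_current = overdrive * lamp_current / beta
print(base_current)  # 0.01 A: a 10 mA signal switches a 500 mA load
```

A switch (or solar cell, or thermocouple, as in the examples that follow) therefore only needs to source a few milliamps, even though the lamp draws half an amp.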
Suppose we had a lamp that we wanted to turn on and off by means of a switch. Such a circuit would be extremely simple. For the sake of illustration, let's insert a transistor in place of the switch to show how it can control the flow of electrons through the lamp. Remember that the controlled current through a transistor must go between collector and emitter. Since it's the current through the lamp that we want to control, we must position the collector and emitter of our transistor where the two contacts of the switch are now. We must also make sure that the lamp's current will move against the direction of the emitter arrow symbol, to ensure that the transistor's junction bias will be correct.

In this example I happened to choose an NPN transistor. A PNP transistor could also have been chosen for the job. The choice between NPN and PNP is really arbitrary. All that matters is that the proper current directions are maintained for the sake of correct junction biasing (electron flow going against the transistor symbol's arrow). Going back to the NPN transistor in our example circuit, we are faced with the need to add something more so that we can have base current. Without a connection to the base wire of the transistor, base current will be zero, and the transistor cannot turn on, resulting in a lamp that is always off. Remember that for an NPN transistor, base current must consist of electrons flowing from emitter to base (against the emitter arrow symbol, just like the lamp current). Perhaps the simplest thing to do would be to connect a switch between the base and collector wires of the transistor.

If the switch is open, the base wire of the transistor will be left "floating" (not connected to anything) and there will be no current through it. In this state, the transistor is said to be in cutoff. If the switch is closed, however, electrons will be able to flow from the emitter through to the base of the transistor, through the switch and up to the left side of the lamp, back to the positive side of the battery. This base current will enable a much larger flow of electrons from the emitter through to the collector, thus lighting up the lamp. In this state of maximum circuit current, the transistor is said to be saturated.

Of course, it may seem pointless to use a transistor in this capacity to control the lamp. After all, we're still using a switch in the circuit, aren't we?
If we're still using a switch to control the lamp, if only indirectly, then what's the point of having a transistor to control the current? Why not just go back to our original circuit and use the switch directly to control the lamp current? There are a couple of points to be made here, actually. First is the fact that, when used in this manner, the switch contacts need only handle what little base current is necessary to turn the transistor on, while the transistor itself handles the majority of the lamp's current. This may be an important advantage if the switch has a low current rating: a small switch may be used to control a relatively high-current load.

Perhaps more important, though, is the fact that the current-controlling behavior of the transistor enables us to use something completely different to turn the lamp on or off. Consider an example where a solar cell is used to control the transistor, which in turn controls the lamp. Or, we could use a thermocouple to provide the necessary base current to turn the transistor on. Even a microphone of sufficient voltage and current output could be used to turn the transistor on, provided its output is rectified from AC to DC so that the emitter-base PN junction within the transistor will always be forward-biased.

The point should be quite apparent by now: any sufficient source of DC current may be used to turn the transistor on, and that source of current need only be a fraction of the amount of current needed to energize the lamp. Here we see the transistor functioning not only as a switch, but as a true amplifier: using a relatively low-power signal to control a relatively large amount of power. Please note that the actual power for lighting up the lamp comes from the battery to the right of the schematic. It is not as though the small signal current from the solar cell, thermocouple, or microphone is being magically transformed into a greater amount of power. Rather, those small power sources are simply controlling the battery's power to light up the lamp.

Field-Effect Transistor (FET)

(NOTE TO SELF: Schematic can be found under GFDL on wikipedia) The schematic symbols for p- and n-channel MOSFETs. The symbols to the right include an extra terminal for the transistor body (allowing for a seldom-used channel bias), whereas in those to the left the body is implicitly connected to the source.
The most common variety of field-effect transistor, the enhancement-mode MOSFET (metal-oxide-semiconductor field-effect transistor), consists of a unipolar conduction channel and a metal gate separated from the main conduction channel by a thin layer of silicon dioxide (SiO2) glass. This is why an alternative name for the FET is 'unipolar transistor'. When a potential difference (of the proper polarity) is impressed across gate and source, charge carriers are introduced to the channel, making it conductive. The amount of this current can be modulated, or (nearly) completely turned off, by varying the gate potential. Because the gate is insulated, no DC current flows to or from the gate electrode. This lack of a gate current, and the ability of the MOSFET to act like a switch, allows particularly efficient digital circuits to be created, with very low power consumption at low frequencies. The power consumption increases markedly with frequency, because the capacitive loading of the FET control terminal takes more energy to slew at higher frequencies, in direct proportion to the frequency. Hence, MOSFETs have become the dominant technology used in computing hardware such as microprocessors and memory devices such as RAM. Bipolar transistors are more rugged and hence more useful for low-impedance loads and inductively reactive (e.g. motor) loads.

Power MOSFETs become less conductive with increasing temperature and can therefore be applied in shunt, to increase current capacity, unlike the bipolar transistor, which has a negative temperature coefficient of resistance and is therefore prone to thermal runaway. The downside of this is that, while the power FET can protect itself from overheating by diminishing the current through it, high temperatures need to be avoided by using a larger heat sink than for an equivalent bipolar device. Macroscopic FET power transistors are actually composed of many little transistors: they are stacked (on-chip) to increase breakdown potential and paralleled to reduce the on-resistance (allowing for more current), with the gates bussed to provide a single control (gate) terminal.

The depletion-mode FET is a little different.
It uses a back-biased diode for the control terminal, which presents a capacitive load to the driving circuit in normal operation. With the gate tied to the source, a DFET is fully on. Changing the gate potential of a DFET (pulling an n-channel gate downward, for example) will turn it off, i.e. 'deplete' the channel (drain-source) of charge carriers. MOSFETs, formerly called IGFETs (for Insulated-Gate Field-Effect Transistor), can be depletion-mode, enhancement-mode, or mixed-mode, but are almost always enhancement-mode in modern commercial practice. This means that, with the source and gate tied together (thus equipotential), the channel will be off (high impedance, or non-conducting). The n-channel device (reverse for p-channel), like the DFET, is turned on by raising the potential of the gate. Typically, the gate on a MOSFET will withstand plus or minus 20 V relative to the source terminal. If one were to raise the gate potential of an n-channel device beyond this without limiting the current to a few milliamps, one would destroy the gate diode, like any other small diode.

Why do we typically think of n-channel devices as the default? In silicon devices, the ones that use electrons rather than holes as majority carriers are slightly faster and can carry more current than their p-type counterparts. The same is true in GaAs devices. The FET is simpler in concept than the bipolar transistor and can be constructed from a wide range of materials.

The most common use of MOSFET transistors today is the CMOS (complementary metal-oxide-semiconductor) integrated circuit, which is the basis for most digital electronic devices. These use a totem-pole arrangement where one transistor (either the pull-up or the pull-down) is on while the other is off. Hence, there is no DC drain, except during the transition from one state to the other, which is very short. As mentioned, the gates are capacitive, and the charging and discharging of the gates each time a transistor switches states is the primary cause of power drain. The C in CMOS stands for 'complementary': the pull-up is a p-channel device (using holes as the mobile charge carriers) and the pull-down is n-channel (electron carriers). This allows bussing of the control terminals, but limits the speed of the circuit to that of the slower P device (in silicon devices).
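The claim that CMOS power drain comes from charging and discharging the gate capacitance, in direct proportion to frequency, can be made concrete with the standard dynamic-power estimate P = C V^2 f. The sketch below uses illustrative values (10 pF of switched capacitance, a 3.3 V supply, 100 MHz), not figures from the text:

```python
# Dynamic power of a switching CMOS node: the energy C*V^2 moved per
# charge/discharge cycle, spent f times per second.
def dynamic_power(capacitance, voltage, frequency):
    """Return dissipated power in watts (capacitance in F, frequency in Hz)."""
    return capacitance * voltage**2 * frequency

p = dynamic_power(10e-12, 3.3, 100e6)  # 10 pF, 3.3 V, 100 MHz
print(p)  # about 0.011 W; doubling f doubles the power
```

Halving the supply voltage cuts this power by a factor of four, which is one reason logic supply voltages have fallen steadily over the years.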
The bipolar solutions to push-pull include the 'cascode', using a current source for the load. Circuits that utilize both unipolar and bipolar transistors are called BiFET. A recent development is called 'vertical P'. Formerly, BiFET chip users had to settle for relatively poor (horizontal) p-type FET devices. This is no longer the case, and it allows for quieter and faster analog circuits.

A clever variant of the FET is the dual-gate device. This allows for two opportunities to turn the device off, as opposed to the dual-base (bipolar) transistor, which presents two opportunities to turn the device on. FETs can switch signals of either polarity, if their amplitude is significantly less than the gate swing, as the devices (especially the parasitic-diode-free DFET) are basically symmetrical. This means that FETs are the most suitable type for analog multiplexing. With this concept, one can construct a solid-state mixing board, for example. The power MOSFET has a 'parasitic diode' (back-biased) normally shunting the conduction channel, with half the current capacity of the conduction channel. Sometimes this is useful in driving dual-coil magnetic circuits (for spike protection), but in other cases it causes problems.

The high impedance of the FET gate makes it rather vulnerable to electrostatic damage, though this is not usually a problem after the device has been installed. A more recent device for power control is the insulated-gate bipolar transistor, or IGBT. This has a control structure akin to a MOSFET coupled with a bipolar-like main conduction channel. These have become quite popular.

16.4 Principles of digital electronics: logic gates and counting circuits

16.4.1 Electronic logic gates

The simplest form of electronic logic is diode logic (DL). This allows AND and OR gates to be built, but not inverters, and so is an incomplete form of logic. To build a complete logic system, valves or transistors can be used. The simplest family of logic gates using bipolar transistors is called resistor-transistor logic, or RTL. Unlike diode logic gates, RTL gates can be cascaded indefinitely to produce more complex logic functions. These gates were used in early integrated circuits.
For higher speed, the resistors used in RTL were replaced by diodes, leading to diode-transistor logic, or DTL. It was then discovered that one transistor could do the job of two diodes in the space of one diode, so transistor-transistor logic, or TTL, was created. In some types of chip, to reduce size and power consumption still further, the bipolar transistors were replaced with complementary field-effect transistors (MOSFETs), resulting in complementary metal-oxide-semiconductor (CMOS) logic.

For small-scale logic, designers now use prefabricated logic gates from families of devices such as the TTL 7400 series invented by Texas Instruments and the CMOS 4000 series invented by RCA, and their more recent descendants. These devices usually contain transistors with multiple emitters, used to implement the AND function, which are not available as separate components. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack a huge number of mixed logic gates into a single integrated circuit.

Electronic logic gates differ significantly from their relay-and-switch equivalents. They are much faster, consume much less power, and are much smaller (all by a factor of a million or more in most cases). Also, there is a fundamental structural difference. The switch circuit creates a continuous metallic path for current to flow (in either direction) between its input and its output. The semiconductor logic gate, on the other hand, acts as a high-gain voltage amplifier, which sinks a tiny current at its input and produces a low-impedance voltage at its output. It is not possible for current to flow between the output and the input of a semiconductor logic gate.

Another important advantage of standardised semiconductor logic gates, such as the 7400 and 4000 families, is that they are cascadable. This means that the output of one gate can be wired to the inputs of one or several other gates, and so on ad infinitum, enabling the construction of circuits of arbitrary complexity without requiring the designer to understand the internal workings of the gates.
In practice, the output of one gate can only drive a finite number of inputs to other gates, a number called the 'fanout limit', but this limit is rarely reached in the newer CMOS logic circuits, as compared to TTL circuits. Also, there is always a delay, called the 'propagation delay', from a change at an input of a gate to the corresponding change in its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed circuits.

(US and IEC circuit symbols for the AND, OR and NOT gates.) In electronics a NOT gate is more commonly called an inverter. The circle on the symbol is called a bubble, and is generally used in circuit diagrams to indicate an inverted input or output. (US and IEC circuit symbols for the NAND and NOR gates.) In practice, the cheapest gate to manufacture is usually the NAND gate. Additionally, Charles Peirce showed that NAND gates alone (as well as NOR gates alone) can be used to reproduce all the other logic gates. Two more gates are the exclusive-OR or XOR function and its inverse, exclusive-NOR or XNOR. Exclusive-OR is true only when exactly one of its inputs is true. In practice, these gates are built from combinations of simpler logic gates. (US and IEC circuit symbols for the XOR gate.)

16.5 Counting circuits

An adder provides the basic functionality of arithmetic operations within a computer, and is a significant component of the arithmetic and logic unit (ALU). Adders are composed of half adders and full adders, which add pairs of binary digits, and of ripple-carry adders and carry-lookahead adders, which perform addition on series of binary numbers. (NOTE TO SELF: Pictures on wikipedia under GFDL)

16.5.1 Half adder

A half adder is a logical circuit that performs an addition operation on two binary digits. The half adder produces a sum and a carry value, which are both binary digits.
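Peirce's claim above, that NAND gates alone can reproduce all the other gates, is easy to check by simulation. A minimal sketch (bits represented as the integers 0 and 1), ending with the half adder just introduced, built entirely from NAND-derived gates:

```python
# Every gate below is built from NAND alone, per Peirce's observation.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    # Exclusive-OR: true only when exactly one input is true (4 NANDs).
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a, b):
    """Add two binary digits; return (sum, carry)."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1): one plus one is binary 10
```

Exhaustively checking all four input combinations against Python's built-in bitwise operators confirms each derived gate behaves correctly.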
Sum (S) = A xor B
Carry (C) = A and B

(Half adder circuit diagram)

Following is the logic table for a half adder:

A B | Sum Carry
0 0 |  0    0
0 1 |  1    0
1 0 |  1    0
1 1 |  0    1

16.5.2 Full adder

A full adder is a logical circuit that performs an addition operation on three binary digits. The full adder produces a sum and a carry value, which are both binary digits.

Sum = (A xor B) xor Cin
Cout = (A nand B) nand (Cin nand (A xor B))

(Full adder circuit diagram)

Following is the logic table for a full adder:

A B Cin | Sum Cout
0 0  0  |  0   0
0 0  1  |  1   0
0 1  0  |  1   0
0 1  1  |  0   1
1 0  0  |  1   0
1 0  1  |  0   1
1 1  0  |  0   1
1 1  1  |  1   1

Table 16.1: Units used in Electronics
Quantity | Symbol | Unit | S.I. Units | Direction

Chapter 17: The Atom

Atoms are the building blocks of matter. They are the basis of all the structures and organisms in the universe. The planets, the sun, grass and trees, the air we breathe, and people are all made up of atoms.

17.1 Models of the Atom

17.2 Structure of the Atom

Atoms are very small and cannot be seen with the naked eye. They consist of two main parts: the positively charged nucleus at the centre, and the negatively charged elementary particles called electrons, which surround the nucleus in their orbitals. (Elementary particle means that the electron cannot be broken down into anything smaller and can be thought of as a point particle.) The nucleus of an atom is made up of a collection of positively charged protons and neutral particles called neutrons.

Interesting fact: the neutrons and protons are not elementary particles. They are actually made up of even smaller particles called quarks. Both protons and neutrons are made of three quarks each. There are all sorts of other particles composed of quarks which nuclear physicists study using huge detectors - you can find out more about this by reading the essay in Chapter ??.

(NOTE TO SELF: Insert diagram of atomic structure - see lab posters)

Atoms are electrically neutral, which means that they have the same number of negative electrons as positive protons. The number of protons in an atom is called the atomic number, which is sometimes also called Z. (NOTE TO SELF: check A and Z) The atomic number is what distinguishes the different chemical elements in the Periodic Table from each other. In fact, the elements are listed on the Periodic Table in order of their atomic numbers.
For example, the first element, hydrogen (H), has one proton, whereas the sixth element, carbon (C), has 6 protons. Atoms with the same number of protons (atomic number) share physical properties and show similar chemical behaviour. The number of neutrons plus protons in the nucleus is called the atomic mass of the atom.

17.3 Isotopes

Two atoms are considered to be the same element if they have the same number of protons (atomic number). However, they do not have to have the same number of neutrons or overall atomic mass. Atoms which have the same number of protons but different numbers of neutrons are called isotopes. For example, the hydrogen atom has one proton and no neutrons. Therefore its atomic number is Z=1 and its atomic mass is A=1. If a neutron is added to the hydrogen nucleus, then a new atom is formed with atomic mass A=2 but atomic number still Z=1. This atom is called deuterium and is an isotope of hydrogen.

17.4 Energy quantization and electron configuration

17.5 Periodicity of ionization energy to support atom arrangement in Periodic Table

17.6 Successive ionisation energies to provide evidence for arrangement of electrons into core and valence

[Brink and Jones sections: de Broglie - matter shows particle and wave characteristics, proved by Davisson and Germer. Schrödinger and Heisenberg developed this model into quantum mechanics]

The nucleus (atomic nucleus) is the center of an atom. It is composed of one or more protons and usually some neutrons as well. The number of protons in an atom's nucleus is called the atomic number, and determines which element the atom is (for example hydrogen, carbon, oxygen, etc.). Though the positively charged protons exert a repulsive electromagnetic force on each other, the distances between nuclear particles are small enough that the strong interaction (which is stronger than the electromagnetic force but decreases more rapidly with distance) predominates. (The gravitational attraction is negligible, being a factor of 10^36 weaker than this electromagnetic repulsion.) The discovery of the electron was the first indication that the atom had internal structure.
This structure was initially imagined according to the "raisin cookie" or "plum pudding" model, in which the small, negatively charged electrons were embedded in a large sphere containing all the positive charge. Ernest Rutherford and Marsden, however, discovered in 1911 that alpha particles from a radium source were sometimes scattered backwards from a gold foil, which led to the acceptance of a planetary model, in which the electrons orbited a tiny nucleus in the same way that the planets orbit the sun.

Interesting Fact: The word atom is derived from the Greek atomos, indivisible, from a-, not, and tomos, a cut. An atom is the smallest portion into which a chemical element can be divided while still retaining its properties. Atoms are the basic constituents of molecules and ordinary matter.

Atoms are composed of subatomic particles, and mostly of empty space. At the center of the atom is a tiny positive nucleus composed of nucleons (protons and neutrons). The rest of the atom contains only the fairly flexible electron shells. Usually atoms are electrically neutral, with as many electrons as protons.

Atoms are generally classified by their atomic number, which corresponds to the number of protons in the atom. For example, carbon atoms are those atoms containing 6 protons. All atoms with the same atomic number share a wide variety of physical properties and exhibit the same chemical behavior. The various kinds of atoms are listed in the Periodic table. Atoms having the same atomic number but different atomic masses (due to their different numbers of neutrons) are called isotopes.

The simplest atom is the hydrogen atom, having atomic number 1 and consisting of one proton and one electron. It has been the subject of much interest in science, particularly in the early development of quantum theory.

The chemical behavior of atoms is largely due to interactions between the electrons. In particular, the electrons in the outermost shell, called the valence electrons, have the greatest influence on chemical behavior. Core electrons (those not in the outer shell) play a role, but it is usually in terms of a secondary effect due to screening of the positive charge in the atomic nucleus. There is a strong tendency for atoms to completely fill (or empty) the outer electron shell, which in hydrogen and helium has space for two electrons, and in all other atoms has space for eight.
This is achieved either by sharing electrons with neighboring atoms or by completely removing electrons from other atoms. When electrons are shared, a covalent bond is formed between the two atoms. Covalent bonds are the strongest type of atomic bond. When one or more electrons are completely removed from one atom by another, ions are formed. Ions are atoms that possess a net charge due to an imbalance in the number of protons and electrons. The ion that stole the electron(s) is called an anion and is negatively charged. The atom that lost the electron(s) is called a cation and is positively charged. Cations and anions are attracted to each other due to coulombic forces between the positive and negative charges. This attraction is called ionic bonding, and it is weaker than covalent bonding.

As mentioned above, covalent bonding implies a state in which electrons are shared equally between atoms, while ionic bonding implies that the electrons are completely confined to the anion. Except for a limited number of extreme cases, neither of these pictures is completely accurate. In most cases of covalent bonding the electron is unequally shared, spending more time around the more electronegative atom, resulting in the covalent bond having some ionic character. Similarly, in ionic bonding the electrons often spend a small fraction of time around the more electropositive atom, resulting in some covalent character for the ionic bond.

Models of the atom:

* Democritus' shaped-atom model (for want of a better name)
* The plum pudding model
* Cubical atom
* The Bohr model
* The quantum mechanical model

The Plum pudding model of the atom was made after the discovery of the electron but before the discovery of the proton or neutron. In it, the atom is envisioned as electrons surrounded by a soup of positive charge, like plums surrounded by pudding. This model was disproved by an experiment by Ernest Rutherford when he discovered the nucleus of the atom.

The Bohr Model is a physical model that depicts the atom as a small positively charged nucleus with electrons in orbit at different levels, similar in structure to the solar system. Because of its simplicity, the Bohr model is still commonly used and taught today. In the early part of the 20th century, experiments by Ernest Rutherford and others had established that atoms consisted of a small dense positively charged nucleus surrounded by orbiting negatively charged electrons.
However, classical physics at that time was unable to explain why the orbiting electrons did not spiral into the nucleus. The simplest possible atom is hydrogen, which consists of a nucleus and one orbiting electron. Since the nucleus and the electron are oppositely charged, they will attract one another by the coulomb force, in much the same way that the sun attracts the earth by gravitational force. However, if the electron orbits the nucleus in a classical orbit, it ought to emit electromagnetic radiation (light) according to well established theories of electromagnetism. If the orbiting electron emits light, it must lose energy and spiral into the nucleus, so why do atoms even exist? What's more, the spectra of atoms show that the orbiting electrons can emit light, but only at certain frequencies. This made no sense at all to the scientists of the time.

These difficulties were resolved in 1913 by Niels Bohr, who proposed that:

* (1) The orbiting electrons exist in orbits that have discrete quantized energies. That is, not every orbit is possible, but only certain specific ones. The exact energies of the allowed orbits depend on the atom in question.

* (2) The laws of classical mechanics do not apply when electrons make the jump from one allowed orbit to another.

* (3) When an electron makes a jump from one orbit to another, the energy difference is carried off (or supplied) by a single quantum of light (called a photon), which has a frequency that directly depends on the energy difference between the two orbits:

f = E / h

where f is the frequency of the photon, E the energy difference, and h is a constant of proportionality known as Planck's constant. Defining ħ = h / 2π, we can write E = ħω, where ω = 2πf is the angular frequency of the photon.

* (4) The allowed orbits depend on quantized (discrete) values of orbital angular momentum L according to the equation L = nħ, where n = 1, 2, 3, ... is called the angular momentum quantum number.

These assumptions explained many of the observations seen at the time, such as why spectra consist of discrete lines. Assumption (4) states that the lowest value of n is 1. This corresponds to a smallest possible radius (for the mathematics see Ohanian's Principles of Physics or any of the large, usually American, college introductory physics textbooks) of 0.0529 nm. This is known as the Bohr radius, and it explains why atoms are stable. Once an electron is in the lowest orbit, it can go no further. It cannot emit any more light, because it would need to go into a lower orbit, but it can't do that if it is already in the lowest allowed orbit.
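Bohr's assumptions can be turned into numbers. The short Python sketch below (Python is our choice for illustration, not the book's) uses the standard Bohr-model results for hydrogen that follow from assumptions (1)-(4): an orbit radius r_n = n^2 * a0, with a0 the 0.0529 nm Bohr radius quoted above, and an orbit energy E_n = -13.6 eV / n^2, where 13.6 eV is the standard hydrogen ground-state binding energy (a value not quoted in this text).

```python
# Bohr-model sketch for hydrogen (standard textbook formulas):
#   r_n = n^2 * a0          allowed orbit radii
#   E_n = -13.6 eV / n^2    allowed orbit energies
BOHR_RADIUS_NM = 0.0529   # smallest allowed radius, as quoted in the text
RYDBERG_EV = 13.6         # hydrogen ground-state binding energy (standard value)

def orbit_radius_nm(n):
    """Radius of the n-th allowed orbit in hydrogen, in nanometres."""
    return n**2 * BOHR_RADIUS_NM

def orbit_energy_ev(n):
    """Energy of the n-th allowed orbit in hydrogen, in electron-volts."""
    return -RYDBERG_EV / n**2

def photon_energy_ev(n_from, n_to):
    """Energy carried off (or supplied) by the photon in a jump n_from -> n_to,
    as in Bohr's assumption (3): the energy difference between the two orbits."""
    return orbit_energy_ev(n_from) - orbit_energy_ev(n_to)

if __name__ == "__main__":
    print(orbit_radius_nm(1))      # the Bohr radius, 0.0529 nm
    print(photon_energy_ev(2, 1))  # ~10.2 eV: the photon emitted in a 2 -> 1 jump
```

Dividing a photon energy by h then recovers the discrete line frequencies f = E / h that make up the spectrum.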
The Bohr model is sometimes known as the semiclassical model because, although it does include some ideas of quantum mechanics, it is not a full quantum mechanical description of the atom. Assumption (2) states that the laws of classical mechanics don't apply during a quantum jump, but doesn't state what laws should replace classical mechanics. Assumption (4) states that angular momentum is quantised, but does not explain why.

In order to fully describe an atom we need to use the full theory of quantum mechanics, which was worked out by a number of people in the years following the Bohr model. This theory treats the electrons as waves, which create 3D standing wave patterns in the atom. (This is why quantum mechanics is sometimes called wave mechanics.) This theory considers the idea of electrons as little billiard-ball-like particles that travel round in orbits to be absurdly wrong; instead, electrons form probability clouds. You might find the electron here with a certain probability; you might find it over there with a different probability. However, it is interesting to note that if you work out the most probable radius of an electron in the lowest possible energy state, it turns out to be exactly equal to the Bohr radius (although it takes many more pages of mathematics to work it out). The full quantum mechanical theory is a beautiful theory that has been experimentally tested and found to be incredibly accurate; however, it is mathematically much more advanced, and often using the much simpler Bohr model will get you the results with much less hassle. The thing to remember is that it is only a model, an aid to understanding. Atoms are not really little solar systems.

* See also: Hydrogen atom, quantum mechanics, Schrödinger equation, Niels Bohr.
* An interactive demonstration (http://webphysics.davidson.edu/faculty/dmb/hydrogen/) of the probability clouds of the electron in hydrogen according to the full QM solution.

17.7 Bohr orbits

Brink and Jones sections: Standing waves (quantisation). Atom seen as positive nucleus with vibrating electron waves surrounding it.
Schrödinger's equation calculates the energy of these waves and their shape and position - the most probable region of movement of electrons, called orbitals (talk about n=1,2 energy levels and spdf orbitals).

17.8 Heisenberg uncertainty Principle

Quantum mechanics is a physical theory that describes the behavior of physical systems at short distances. Quantum mechanics provides a mathematical framework derived from a small set of basic principles capable of producing experimental predictions for three types of phenomena that classical mechanics and classical electrodynamics cannot account for: quantization, wave-particle duality, and quantum entanglement. The related terms quantum physics and quantum theory are sometimes used as synonyms of quantum mechanics, but also to denote a superset of theories, including the pre-quantum-mechanics old quantum theory, or, when the term quantum mechanics is used in a more restricted sense, to include theories like quantum field theory. Quantum mechanics is the underlying theory of many fields of physics and chemistry, including condensed matter physics, quantum chemistry, and particle physics.

17.9 Pauli exclusion principle

The Pauli exclusion principle is a quantum mechanical principle which states that no two identical fermions may occupy the same quantum state. Formulated by Wolfgang Pauli in 1925, it is also referred to as the "exclusion principle" or "Pauli principle." The Pauli principle only applies to fermions, particles which form antisymmetric quantum states and have half-integer spin. Fermions include protons, neutrons, and electrons, the three types of particles which constitute ordinary matter. The Pauli exclusion principle governs many of the distinctive characteristics of matter. Particles like the photon and graviton do not obey the Pauli exclusion principle, because they are bosons (i.e. they form symmetric quantum states and have integer spin) rather than fermions.

The Pauli exclusion principle plays a role in a huge number of physical phenomena. One of the most important, and the one for which it was originally formulated, is the electron shell structure of atoms. An electrically neutral atom contains bound electrons equal in number to the protons in the nucleus. Since electrons are fermions, the Pauli exclusion principle forbids them from occupying the same quantum state. For example, consider a neutral helium atom, which has two bound electrons. Both of these electrons can occupy the lowest-energy (1s) states by acquiring opposite spin.
This does not violate the Pauli principle because spin is part of the quantum state of the electron, so the two electrons are occupying different quantum states. However, the spin can take only two different values (or eigenvalues). In a lithium atom, which contains three bound electrons, the third electron cannot fit into a 1s state, and has to occupy one of the higher-energy 2s states instead. Similarly, successive elements produce successively higher-energy shells. The chemical properties of an element largely depend on the number of electrons in the outermost shell, which gives rise to the periodic table of the elements.

The Pauli principle is also responsible for the large-scale stability of matter. Molecules cannot be pushed arbitrarily close together, because the bound electrons in each molecule are forbidden from entering the same state as the electrons in the other molecules - this is the reason for the repulsive r^-12 term in the Lennard-Jones potential. The Pauli principle is the reason you do not fall through the floor. Astronomy provides the most spectacular demonstrations of this effect, in the form of white dwarf stars and neutron stars. In both types of objects, the usual atomic structures are disrupted by large gravitational forces, leaving the constituents supported only by a "degeneracy pressure" produced by the Pauli exclusion principle. This exotic form of matter is known as degenerate matter. In white dwarfs, the atoms are held apart by the degeneracy pressure of the electrons. In neutron stars, which exhibit even larger gravitational forces, the electrons have merged with the protons to form neutrons, which produce a larger degeneracy pressure. Another physical phenomenon for which the Pauli principle is responsible is ferromagnetism, in which the exclusion effect implies an exchange energy that induces neighboring electron spins to align (whereas classically they would anti-align).

17.10 Ionization Energy (first, second etc.)

17.11 Electron configuration

i.e. filling the orbitals starting from 1s...
Aufbau principle
unpaired and paired electrons
Hund's rule: 1 e- in each orbital before pairing in p orbitals
shorthand: 1s^2 2s^2 2p^1 etc

17.12 Valency

Capacity for bonding. Covalent bonding is a form of chemical bonding characterized by the sharing of one or more pairs of electrons by two atoms, in order to produce a mutual attraction; atoms tend to share electrons so as to fill their outer electron shells. Such bonds are always stronger than the intermolecular hydrogen bond and similar in strength or stronger than the ionic bond.
Commonly, a covalent bond implies the sharing of just a single pair of electrons. The sharing of two pairs is called a double bond and three pairs is called a triple bond. Aromatic rings of atoms and other resonant structures are held together by covalent bonds that are intermediate between single and double. The triple bond is relatively rare in nature, and two atoms are not observed to bond more than triply. Covalent bonding most frequently occurs between atoms with similar electronegativities, where neither atom can provide sufficient energy to completely remove an electron from the other atom. Covalent bonds are more common between non-metals, whereas ionic bonding is more common between a metal atom and a non-metal atom. Covalent bonding tends to be stronger than other types of bonding, such as ionic bonding. In addition, unlike ionic bonding, where ions are held together by a non-directional coulombic attraction, covalent bonds are highly directional. As a result, covalently bonded molecules tend to form in a relatively small number of characteristic shapes, exhibiting specific bonding angles.

Chapter 18 Modern Physics

18.1 Introduction to the idea of a quantum

Imagine that a beam of light is actually made up of little "packets" or "bundles" of energy, called quanta. It's like looking at a crowd of people from above. At first, it seems as though they are one huge patch, without any spaces between them. You would never suspect that they were people. But as you move closer, you slowly begin to see that they are individuals, and when you get even closer, you may even recognize a few. Light seems like a continuous wave at first, but when we zoom in at the subatomic level, we notice that a beam of light actually consists of little "packets" of energy, or quanta. This idea introduces the concept of the quantum (particle) nature of light, which is demonstrated by the photoelectric effect. When a metal surface is illuminated with light, electrons can be emitted from the surface. This is known as the photoelectric effect.

18.2 The wave-particle duality

The wave nature of light is demonstrated by diffraction, interference, and polarization of light; and the particle nature of light is demonstrated by the photoelectric effect.
So light has both wave-like and particle-like properties, but only shows one or the other, depending on the kind of experiment we perform. A wave-type experiment shows the wave nature, and a particle-type experiment shows the particle nature. When you're watching a cricketer on the field, you see only that side of his personality. So to you, he is just a good cricketer. You do not see his golfing side, for example. Only when he is playing golf will that side be revealed to you. The same applies to light. Now, we consider light to behave not as a wave, but as particles. But what do we call a 'particle' of light?

Photon: A photon is a quantum (energy packet) of light.

Imagine a sheet of metal. On the surface, there are electrons that are waiting to be set free. If a photon comes along and strikes the surface of the metal, then it will give its entire energy packet to one electron. This means that the electron now has some energy, and it may escape (leave the surface) if this energy Ek is greater than the minimum energy required to free an electron, Emin. Now, suppose the electron needs 5 eV of kinetic energy to escape. And suppose this little photon has just 2 eV of energy in its energy packet. Then the electron will not leave the surface of the metal. But suppose the photon has 8 eV of energy. This means that the electron will emerge with 3 eV of kinetic energy. Note that this does not mean the photon can give 5 eV of energy to one electron and 3 eV to another. A photon will give all of its energy to just one electron.

The minimum amount of energy needed for an electron to escape (electrons do not normally leave a metal whenever they please) is called the work function of the metal. In our example, the work function is 5 eV. The work function has a different value for each metal: 4.70 eV for copper and 2.28 eV for sodium. It is worth mentioning that the best conductors are those with the smallest work functions. The frequency of the radiation is very important, because if it is below a certain threshold value, no electrons will be emitted. Even if the intensity of the light is increased, and the light is allowed to fall on the surface for a long period of time, if the frequency of the radiation is below the threshold frequency, electrons will not be emitted.
We therefore reason that E = hf, where E is the energy of the photon, h = 6.63 × 10^-34 J·s is Planck's constant, and f is the frequency of the radiation. This means that the kinetic energy acquired by the electron is equal to the energy of the photon minus the work function φ, i.e.

Ek = hf - φ

The electrons emerge with a range of velocities from zero up to a maximum vmax. The maximum kinetic energy, (1/2) m vmax^2, depends (linearly) on the frequency of the radiation, and is independent of its intensity. For incident radiation of a given frequency, the number of electrons emitted per unit time is proportional to the intensity of the radiation. Electron emission takes place from the instant the light shines on the surface, i.e. there is no detectable time delay. What are the uses of the photoelectric effect? For this work, published in 1905, Einstein received the Nobel prize in 1921.

18.3 Practical Applications of Waves: Electromagnetic Waves

In physics, wave-particle duality holds that light and matter simultaneously exhibit properties of waves and of particles. This concept is a consequence of quantum mechanics. In 1905, Einstein reconciled Huygens' view with that of Newton; he explained the photoelectric effect (an effect in which light did not seem to act as a wave) by postulating the existence of photons, quanta of energy with particulate qualities. Einstein postulated that light's frequency, f, is related to the energy, E, of its photons:

E = hf     (18.1)

where h is Planck's constant (6.626 × 10^-34 J·s). In 1924, De Broglie claimed that all matter has a wave-like nature; he related wavelength, λ, and momentum, p:

λ = h / p     (18.2)

This is a generalization of Einstein's equation above, since the momentum of a photon is given by

p = E / c     (18.3)

where c is the speed of light in vacuum, and f = c / λ.

De Broglie's formula was confirmed three years later by guiding a beam of electrons (which have rest mass) through a crystalline grid and observing the predicted interference patterns. Similar experiments have since been conducted with neutrons and protons. Authors of similar recent experiments with atoms and molecules claim that these larger particles also act like waves.
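The photoelectric relation Ek = hf - φ and equations (18.1)-(18.3) are easy to check numerically. The Python sketch below is a minimal illustration (the function names are ours); it uses the standard values of Planck's constant and the electron charge, and reproduces the worked example from the previous section: an 8 eV photon on a metal with a 5 eV work function frees an electron with 3 eV of kinetic energy.

```python
# Photoelectric effect and de Broglie wavelength, equations (18.1)-(18.3).
H = 6.626e-34          # Planck's constant, J s
E_CHARGE = 1.602e-19   # magnitude of the electron charge: J per eV
C = 3.0e8              # speed of light in vacuum, m/s

def photon_energy_ev(frequency_hz):
    """E = h f (eq. 18.1), converted from joules to electron-volts."""
    return H * frequency_hz / E_CHARGE

def photoelectron_ke_ev(frequency_hz, work_function_ev):
    """Max kinetic energy Ek = h f - phi of an emitted electron.
    Below the threshold frequency no electron is emitted, so return None."""
    ke = photon_energy_ev(frequency_hz) - work_function_ev
    return ke if ke > 0 else None

def de_broglie_wavelength_m(momentum_kg_m_s):
    """lambda = h / p (eq. 18.2), in metres."""
    return H / momentum_kg_m_s

if __name__ == "__main__":
    # Frequency of a photon carrying 8 eV:
    f = 8 * E_CHARGE / H
    print(photoelectron_ke_ev(f, 5.0))   # ~3 eV, as in the worked example
    print(photoelectron_ke_ev(f, 9.0))   # None: below threshold, no emission
```

The same helper shows why increasing intensity alone never helps below threshold: intensity changes the number of photons, not the energy h f carried by each one.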
The interpretation of these experiments with larger particles is still a controversial subject, because the experimenters have assumed arguments of wave-particle duality and have assumed the validity of de Broglie's equation in their argument. The Planck constant h is extremely small, and that explains why we don't perceive a wave-like quality of everyday objects: their wavelengths are exceedingly small. The fact that matter can have very short wavelengths is exploited in electron microscopy.

In quantum mechanics, the wave-particle duality is explained as follows: every system and particle is described by state functions which encode the probability distributions of all measurable variables. The position of the particle is one such variable. Before an observation is made, the position of the particle is described in terms of probability waves which can interfere with each other.

Chapter 19 Inside the atomic nucleus

Amazingly enough, the human mind, contained inside a couple of litres of brain, is able to deal with extremely large as well as extremely small objects, such as the whole universe and its smallest building blocks. So, what are these building blocks? As we already know, the universe consists of galaxies, which consist of stars with planets moving around them. The planets are made of molecules, which are bound groups (chemical compounds) of atoms. There are more than 10^20 stars in the universe. Currently, scientists know over 12 million chemical compounds, i.e. 12 million different molecules. All this variety of molecules is made of only about a hundred different atoms. For those who believe in the beauty and harmony of nature, this number is still too large. They would expect to have just a few different things from which all other substances are made. In this chapter, we are going to find out what these elementary things are.

19.1 What the atom is made of

The Greek word ἄτομος (atomos) means indivisible. The discovery of the fact that an atom is actually a complex system and can be broken into pieces was the most important step and pivoting point in the development of modern physics.
It was discovered (by Rutherford in 1911) that an atom consists of a positively charged nucleus and negative electrons moving around it. At first, people tried to visualize an atom as a microscopic analog of our solar system, where planets move around the sun. This naive planetary model assumes that in the world of very small objects the Newtonian laws of classical mechanics are valid. This, however, is not the case. The microscopic world is governed by quantum mechanics, which does not have such a notion as trajectory. Instead, it describes the dynamics of particles in terms of quantum states that are characterized by probability distributions of various observable quantities. For example, an electron in the atom is not moving along a certain trajectory, but rather along all imaginable trajectories with different probabilities. If we were trying to catch this electron, after many such attempts we would discover that the electron can be found anywhere around the nucleus, even very close to and very far from it. However, the probabilities of finding the electron at different distances from the nucleus would be different. What is amazing: the most probable distance corresponds to the classical trajectory!

Figure 19.1: Probability density P(r) for finding the electron at a distance r from the proton in the ground state of the hydrogen atom.

You can visualize the electron inside an atom as moving around the nucleus chaotically and extremely fast, so that for our "mental eyes" it forms a cloud. In some places this cloud is more dense, while in other places it is more thin. The density of the cloud corresponds to the probability of finding the electron in a particular place. The space distribution of this density (probability) is what we can calculate using quantum mechanics. Results of such a calculation for the hydrogen atom are shown in Fig. 19.1. As was mentioned above, the most probable distance (the maximum of the curve) coincides with the Bohr radius.

The quantum mechanical equation for any bound system (like an atom) can have solutions only at a discrete set of energies E1, E2, E3, ..., etc. There are simply no solutions for energies E in between these values, such as, for instance, E1 < E < E2. This is why a bound system of microscopic particles cannot have an arbitrary energy and can only be in one of the quantum states. Each of these states has a certain energy and a certain space configuration, i.e. distribution of the probability.
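The claim that the maximum of the curve in Fig. 19.1 sits exactly at the Bohr radius can be checked numerically. The sketch below assumes the standard textbook form of the hydrogen ground-state radial probability density, P(r) ∝ r^2 exp(-2r/a0), which is not derived in this text; scanning it over a fine grid of distances finds the most probable distance right at a0 = 0.0529 nm.

```python
# Numerical check of Fig. 19.1: for the hydrogen ground state the radial
# probability density P(r) ~ r^2 * exp(-2 r / a0) peaks at the Bohr radius.
import math

A0 = 0.0529  # Bohr radius in nm, as quoted in the text

def radial_probability(r_nm):
    """Unnormalised radial probability density P(r) for the hydrogen 1s state."""
    return r_nm**2 * math.exp(-2.0 * r_nm / A0)

# Scan distances from 0.0001 nm to 0.3 nm and pick the most probable one.
radii = [i * 0.0001 for i in range(1, 3000)]
most_probable = max(radii, key=radial_probability)
print(most_probable)  # ~0.0529 nm: the maximum of the curve is the Bohr radius
```

Setting dP/dr = 0 by hand gives the same answer in one line, r = a0, which is exactly the "classical trajectory" remark above.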
A bound quantum system can make transitions from one quantum state to another, either spontaneously or as a result of interaction with other systems. The energy conservation law is one of the most fundamental and is valid in the quantum world as well as in the classical world. This means that any transition between the states with energies Ei and Ej is accompanied by either emission or absorption of the energy ΔE = |Ei - Ej|. This is how an atom emits light.

The electron is a very light particle. Its mass is negligible compared to the total mass of the atom. For example, in the lightest of all atoms, hydrogen, the electron constitutes only 0.054% of the atomic mass. In the silicon atoms that are the main component of the rocks around us, all 14 electrons make up only 0.027% of the mass. Thus, when holding a heavy rock in your hand, you actually feel the collective weight of all the nuclei that are inside it.

19.2 Nucleus

Is the nucleus a solid body? Is it an elementary building block of nature? No and no! Although it is very small, a nucleus consists of something even smaller.

19.2.1 Proton

The only way to do experiments with such small objects as atoms and nuclei is to collide them with each other and watch what happens. Perhaps you think that this is a barbaric way, like colliding a "Mercedes" and a "Toyota" in order to learn what is under their bonnets. But with microscopic particles nothing else can be done. In the early 1920s, Rutherford and other physicists made many experiments, changing one element into another by striking them with energetic helium nuclei. They noticed that hydrogen nuclei were emitted in the process all the time. It was apparent that the hydrogen nucleus played a fundamental role in nuclear structure and was a constituent part of all other nuclei. By the late 1920s, physicists were regularly referring to the hydrogen nucleus as the proton. The term "proton" seems to have been coined by Rutherford, and first appears in print in 1920.

19.2.2 Neutron

Thus it was established that atomic nuclei consist of protons. The number of protons in a nucleus is what makes up its positive charge. This number, therefore, coincides with the atomic number of the element in Mendeleev's Periodic table. This sounded nice and logical, but serious questions remained. Indeed, how can positively charged protons stay together in a nucleus? Repelling each other by the electric force, they should fly away in different directions.
What keeps them together? Furthermore, the proton mass is not enough to account for the nuclear masses. For example, if protons were the only particles in the nucleus, then a helium nucleus (atomic number 2) would have two protons and therefore only twice the mass of hydrogen. However, it actually is four times heavier than hydrogen. This suggests that there must be something else inside nuclei in addition to protons. These additional particles, which kind of "glue" the protons together and make up the rest of the nuclear mass, apparently are electrically neutral. They were therefore called neutrons. Rutherford predicted the existence of the neutron in 1920. Twelve years later, in 1932, his assistant James Chadwick found it and measured its mass, which turned out to be almost the same as, but slightly larger than, that of the proton.

19.2.3 Isotopes

Thus, in the early 1930s it was finally proved that the atomic nucleus consists of two types of particles, protons and neutrons. The protons are positively charged while the neutrons are electrically neutral. The proton charge is exactly equal but opposite to that of the electron. The masses of the proton and neutron are almost the same, approximately 1836 and 1839 electron masses, respectively. Apart from the electric charge, the proton and neutron have almost the same properties. This is why there is a common name for them: nucleon. Both the proton and neutron are nucleons, like a man and a woman are both humans. In physics literature, the proton is denoted by the letter p and the neutron by n. Sometimes, when the difference between them is unimportant, the letter N is used, meaning nucleon (in the same sense as using the word "person" instead of man or woman).

Chemical properties of an element are determined by the charge of its atomic nucleus, i.e. by the number of protons. This number is called the atomic number and is denoted by the letter Z. The mass of an atom depends on how many nucleons its nucleus contains. The number of nucleons, i.e. the total number of protons and neutrons, is called the atomic mass number and is denoted by the letter A. Standard nuclear notation shows the chemical symbol, the mass number and the atomic number of the isotope.
In this notation, the mass number A (the number of nucleons) is written as a superscript and the atomic number Z (the number of protons) as a subscript, both in front of the chemical symbol X: ^A_Z X. For example, the iron nucleus (occupying the 26th place in Mendeleev's periodic table of the elements), with 26 protons and 30 neutrons, is denoted as ^56_26 Fe, where the total nuclear charge is Z = 26 and the mass number is A = 56. The number of neutrons is the difference N = A − Z (here the same letter N is used as for the nucleon, but this should not cause any confusion). The chemical symbol is inseparably linked with Z. This is why the lower index is sometimes omitted and you may encounter the simplified notation 56Fe. If we add or remove a few neutrons from a nucleus, the chemical properties of the atom remain the same because its charge is the same. This means that such an atom remains in the same place in the periodic table. In Greek, "same place" reads ἴσος τόπος (isos topos). Nuclei having the same number of protons but different numbers of neutrons are therefore called isotopes. Different isotopes of a given element have the same atomic number Z but different mass numbers A, since they have different numbers of neutrons N. The chemical properties of different isotopes of an element are identical, but they often have great differences in nuclear stability. For stable isotopes of the light elements, the number of neutrons is almost equal to the number of protons, but for heavier elements the number of neutrons is always greater than Z, and the neutron excess tends to grow as Z increases. This is because neutrons are a kind of glue that keeps the mutually repelling protons together: the greater the repelling charge, the more glue you need.

19.3 Nuclear force

Since atomic nuclei are very stable, the protons and neutrons must be kept inside them by some force, and this force must be rather strong. What is this force? All of modern particle physics was discovered in the effort to understand this force! Trying to answer this question at the beginning of the 20th century, physicists found that all they knew before was inadequate. Actually, by that time they knew only the gravitational and electromagnetic forces. It was clear that the forces holding nucleons together were not electromagnetic. Indeed, the protons, being positively charged, repel each other, and all nuclei would decay in a split second if some other force did not hold them together.
On the other hand, it was also clear that these forces were not gravitational: gravity would be far too weak for the task. The simple conclusion was that nucleons are able to attract each other by some yet unknown nuclear force, which is stronger than the electromagnetic one. Further studies proved that this hypothesis was
correct. The nuclear force has rather unusual properties. Firstly, it is charge independent. This means that in all pairs nn, pp, and np the nuclear forces are the same. Secondly, at distances of ~10^-13 cm the nuclear force is attractive and very strong, about 100 times stronger than the electromagnetic repulsion. Thirdly, the nuclear force is of a very short range: if the nucleons move away from each other by more than a few fermi (1 fm = 10^-13 cm), the nuclear attraction practically disappears. The nuclear force therefore looks like a "strong man with very short hands".

19.4 Binding energy and nuclear masses

19.4.1 Binding energy

When a system of particles is bound, you have to spend a certain energy to disintegrate it, i.e. to separate the particles. The easiest way to do it is to strike the system with a moving particle that carries kinetic energy, like we can destroy a glass bottle with a bullet or a stone. If our bullet-particle moves too slowly (i.e. does not have enough kinetic energy), it cannot disintegrate the system. On the other hand, if its kinetic energy is too high, the system is not only disintegrated but the separated particles acquire some kinetic energy, i.e. move away with some speed. There is an intermediate value of the energy which is just enough to destroy the system without giving its fragments any speed. This minimal energy needed to break up a bound system is called the binding energy of the system. It is usually denoted by the letter B.

19.4.2 Nuclear energy units

The standard unit of energy, the Joule, is too large to measure the energies associated with individual nuclei. This is why in nuclear physics it is more convenient to use a much smaller unit called the Mega-electron-Volt (MeV). This is the amount of energy that an electron acquires after passing between two charged plates with a potential difference (voltage) of one million Volts. Sounds huge, doesn't it? But look at this relation and think again.
1 MeV = 1.602 × 10^-13 J.

In units of MeV, most of the energies in the nuclear world can be expressed by values with only a few digits before the decimal point and without powers of ten. For example, the binding energy of the proton and neutron (which form the simplest nuclear system, called the deuteron) is

B_pn = 2.225 MeV.

The simplicity of the numbers is not the only advantage of using the unit MeV. Another, more important advantage comes from the fact that most experiments in nuclear physics are collision experiments, where particles are accelerated by an electric field and collide with other particles. From the above value of B_pn, for instance, we immediately know that in order to break up deuterons, we need to bombard them with a flux of electrons accelerated through a voltage of not less than 2.225 million Volts. No calculation is needed! On the other hand, if we know that a charged particle (with a unit charge) passes through a voltage of, say, 5 million Volts, we can say without any calculation that it acquires the energy of 5 MeV. Very convenient, isn't it?

19.4.3 Mass defect

Comparing the masses of atomic nuclei with the masses of the nucleons that constitute them, we encounter a surprising fact: the total mass of the nucleons is greater than the mass of the nucleus! For example, for the deuteron we have

m_d < m_p + m_n,

where m_d, m_p, and m_n are the masses of the deuteron, proton, and neutron, respectively. The difference,

(m_p + m_n) − m_d = 3.968 × 10^-30 kg,

is rather small, but on the nuclear scale it is noticeable, since the mass of the proton itself, m_p = 1672.623 × 10^-30 kg, is also very small. This phenomenon is called the "mass defect". Where does the mass disappear to when nucleons are bound? To answer this question, we notice that the energy of a bound state is lower than the energy of the free particles. Indeed, to liberate them from a bound complex, we have to give them some energy. Thinking in the opposite direction, we conclude that, when forming a bound state, the particles have to get rid of the energy excess, which is exactly equal to the binding energy. This is observed experimentally: when a proton captures a neutron to form a deuteron, the excess energy of 2.225 MeV is emitted via electromagnetic radiation.
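Both numbers quoted in this subsection can be checked in a few lines. The sketch below uses the nucleon and deuteron masses from Table 19.1 (later in this section) and the MeV-to-Joule factor given above; the variable names are ours.

```python
# Deuteron binding energy from the nucleon and deuteron masses (in MeV),
# and the corresponding mass defect, delta_m = B / c^2.
MEV_TO_J = 1.602e-13   # J per MeV (conversion given in the text)
C = 299792458.0        # speed of light, m/s

m_p, m_n, m_d = 938.272, 939.566, 1875.613  # masses in MeV (Table 19.1)

B = m_p + m_n - m_d                  # binding energy, MeV
delta_m = B * MEV_TO_J / C**2        # mass defect, kg
print(f"B = {B:.3f} MeV")            # B = 2.225 MeV
print(f"delta_m = {delta_m:.3e} kg") # close to the quoted 3.968e-30 kg
```

The tiny residual difference from the quoted 3.968 × 10^-30 kg comes from rounding in the conversion factor.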
A logical conclusion follows by itself: when a proton and a neutron bind, some part of their mass disappears together with the energy that is carried away by the radiation. And in the opposite process, when we break up the deuteron, we give
it the energy, some part of which makes up for the lost mass. Albert Einstein came to the idea of the equivalence between mass and energy long before any experimental evidence was found. In his theory of relativity, he showed that the total energy E of a moving body with mass m is

E = mc^2 / sqrt(1 − v^2/c^2),   (19.1)

where v is its velocity and c the speed of light. Applying this equation to a non-moving body (v = 0), we conclude that it possesses the rest energy

E_0 = mc^2   (19.2)

simply because it has mass. As you will see, this very formula is the basis for making nuclear bombs and nuclear power stations! All the development of physics and chemistry preceding the theory of relativity was based on the assumption that the mass and energy of a closed system are conserved in all possible processes, and that they are conserved separately. In reality, it turned out that the conserved quantity is the mass-energy,

E_kin + E_pot + E_rad + mc^2 = const,

i.e. the sum of the kinetic energy, the potential energy, the energy of radiation, and the rest energy of the system. In chemical reactions, the fraction of the mass that is transformed into other forms of energy (and vice versa) is so small that it is not detectable even in the most precise measurements. In nuclear processes, however, the energy release is very often millions of times higher and therefore is observable. You should not think that mutual transformations of mass and energy are features of only nuclear and atomic processes. If you break a piece of rubber or chewing gum, for example, into two parts, then the sum of the masses of these parts will be slightly larger than the mass of the whole piece. Of course we will not be able to detect this "mass defect" with our scales. But we can calculate it, using the Einstein formula (19.2). For this, we would need to measure somehow the mechanical work A used to break up the whole piece (i.e. the amount of energy supplied to it).
This can be done by measuring the force and displacement in the breaking process. Then, according to Eq. (19.2), the mass defect is

Δm = A / c^2.

To estimate the possible effect, let us assume that we need to stretch a piece of rubber by 10 cm before it breaks, and the average force needed for this is 10 N (the weight of approximately 1 kg). Then
A = 10 N × 0.1 m = 1 J,

and hence

Δm = 1 J / (299792458 m/s)^2 ≈ 1.1 × 10^-17 kg.

This is too small a value to measure with a scale, but it is huge compared to typical masses of atoms and nuclei.

19.4.4 Nuclear masses

Apparently, an individual nucleus cannot be put on a scale to measure its mass. Then how can nuclear masses be measured? This is done with the help of devices called mass spectrometers. In them, a flux of identical nuclei, accelerated to a certain energy, is directed at a screen where it makes a visible mark. Before striking the screen, this flux passes through a magnetic field, which is perpendicular to the velocity of the nuclei. As a result, the flux is deflected by a certain angle. The greater the mass, the smaller the angle (because of inertia). Thus, measuring the displacement of the mark from the center of the screen, we can find the deflection angle and then calculate the mass. Since mass and energy are equivalent, in nuclear physics it is customary to measure the masses of all particles in units of energy, namely, in MeV. Examples of masses of subatomic particles are given in Table 19.1. The values given in this table are the energies to which the particle masses are equivalent via the Einstein formula (19.2).

  particle     number of protons  number of neutrons  mass (MeV)
  e            -                  -                   0.511
  p            1                  0                   938.272
  n            0                  1                   939.566
  ^2_1 H       1                  1                   1875.613
  ^3_1 H       1                  2                   2808.920
  ^3_2 He      2                  1                   2808.391
  ^4_2 He      2                  2                   3727.378
  ^7_3 Li      3                  4                   6533.832
  ^9_4 Be      4                  5                   8392.748
  ^12_6 C      6                  6                   11174.860
  ^16_8 O      8                  8                   14895.077
  ^238_92 U    92                 146                 221695.831

Table 19.1: Masses of the electron, nucleons, and some nuclei.

There are several advantages of using units of MeV to measure particle masses. First of all, as with nuclear energies, we avoid handling very small numbers that involve powers of ten. For example, if we were measuring masses in kg, the electron mass would be m_e = 9.1093897 × 10^-31 kg. When masses are given in the equivalent energy units, it is also very easy to calculate the mass defect. Indeed, adding the masses
of the proton and neutron, given in the second and third rows of Table 19.1, and subtracting the mass of ^2_1 H, we obtain the binding energy 2.225 MeV of the deuteron without further ado. One more advantage comes from particle physics. In collisions of very fast moving particles, new particles (like electrons) can be created from the vacuum, i.e. kinetic energy is directly transformed into mass. If the mass is expressed in energy units, we know without calculation how much energy is needed to create this or that particle.

19.5 Radioactivity

As was said before, the nucleus experiences an intense struggle between the electric repulsion of the protons and the nuclear attraction of the nucleons to each other. It therefore should not be surprising that there are many nuclei that are unstable. They can spontaneously (i.e. without an external push) break into pieces. When the fragments reach the distances where the short-range nuclear attraction disappears, they fiercely push each other away by the electric forces. Thus accelerated, they move in different directions like small bullets, causing destruction on their way. This is an example of nuclear radioactivity, but there are several other varieties of radioactive decay.

19.5.1 Discovery of radioactivity

Nuclear radioactivity was discovered by Antoine Henri Becquerel in 1896. Following Wilhelm Roentgen, who discovered the X-rays, Becquerel pursued his own investigations of these mysterious rays. The material Becquerel chose to work with contained uranium. He found that crystals containing uranium, after being exposed to sunlight, made images on photographic plates even when the plates were wrapped in black paper. He mistakenly concluded that the sun's energy was being absorbed by the uranium, which then emitted X-rays. The truth was revealed thanks to bad weather.
On the 26th and 27th of February 1896 the skies over Paris were overcast, and the uranium crystals Becquerel intended to expose to the sun were returned to a drawer and placed (by chance) over the photographic plates. On the first of March, Becquerel developed the plates and, to his surprise, found that the images on them were clear and strong. Therefore the uranium emitted radiation without an external source of energy such as the sun. This was the first observation of nuclear radioactivity. Later, Becquerel demonstrated that the uranium radiation was similar to the X-rays but, unlike them, could be deflected by
a magnetic field and therefore must consist of charged particles. For his discovery of radioactivity, Becquerel was awarded the 1903 Nobel Prize for physics.

19.5.2 Nuclear α, β, and γ rays

The classical experiment that revealed the complex content of nuclear radiation was done as follows. Radium crystals (radium is another radioactive element) were put at the bottom of a narrow straight channel made in a thick piece of lead, open at one side. The lead absorbed everything except the particles moving along the channel. This device therefore produced a flux of particles moving in one direction, like bullets from a machine gun. In front of the channel was a photoplate that could register the particles. Without a magnetic field, the image on the plate was a single dot. When the device was immersed in a perpendicular magnetic field, the flux of particles was split into three fluxes, which was reflected by three dots on the photographic plate. One of the three fluxes was straight, while the two others were deflected in opposite directions. This showed that the initial flux contained positive, negative, and neutral particles. They were named, respectively, the α, β, and γ particles. The α-rays were found to be 4He nuclei, two protons and two neutrons bound together. They have weak penetrating ability; a few centimeters of air or a few sheets of paper can effectively block them. The β-rays proved to be electrons. They have a greater penetrating power than the α-particles and can penetrate 3 mm of aluminum. The γ-rays are not deflected because they are high-energy photons. They have the same nature as radio waves, visible light, and the X-rays, but have a much shorter wavelength and therefore are much more energetic. Among the three, the γ-rays have the greatest penetrating power, being able to pass through several centimeters of lead and still be detected on the other side.
19.5.3 Danger of the ionizing radiation

The α, β, and γ particles moving through matter collide with atoms and knock electrons out of them, i.e. turn the atoms into positive ions. This is why these rays are called ionizing radiation. Apart from ionizing the atoms, this radiation destroys molecules. For humans and all other organisms, this is the most dangerous feature of the radiation.
Imagine thousands of tiny bullets passing through your body and causing destruction on their way. Although people do not feel any pain when exposed to nuclear radiation, it harms the cells of the body and thus can make people sick or even kill them. Illness can strike people years after their exposure to nuclear radiation. For example, the ionizing particles can randomly modify the DNA (the long organic molecules that store all the information on how a particular cell should function in the body). As a result, some cells with wrong DNA may become cancer cells. Fortunately, our body is able to repair some of the damage caused by radiation. Indeed, we are constantly bombarded by radiation coming from outer space as well as from the inner parts of our own planet, and still survive. However, if the amount of damage becomes too large, the body cannot cope with it anymore. There are established norms and acceptable limits for radiation that are considered safe for the human body. If you are going to work with radioactive materials or near them, make sure that the exposure dose is monitored and the limits are adhered to. You should understand that no costume can protect you from γ-rays! Only a thick wall of concrete or metal can stop them. The special costumes and masks that people wear when handling radioactive materials protect them not from the rays but from contamination with those materials. Imagine if a few specks of radioactive dirt stained your everyday clothes, or if you inhaled radioactive atoms. They would remain with you all the time and would shoot the "bullets" at you even when you are sleeping. In many cases, a very effective way of protecting yourself from the radiation is to keep a certain distance. Radiation from nuclear sources is distributed equally in all directions.
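A direct consequence of this isotropy is the inverse-square falloff of the flux with distance, which can be sketched numerically. The helper function and the source strength below are illustrative assumptions, not values from the text.

```python
import math

# Flux through a unit area at distance r from an isotropic source
# that emits N particles per second: n = N / (4 * pi * r**2).
def flux(N, r):
    return N / (4 * math.pi * r**2)

N = 1_000_000  # particles emitted per second (hypothetical source)
# Doubling the distance reduces the exposure by a factor of 4:
print(flux(N, 1.0) / flux(N, 2.0))  # 4.0
```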
Therefore the number n of dangerous particles passing every second through a unit area (say 1 cm²) is the total number N of particles emitted during 1 second, divided by the surface area of a sphere:

n = N / (4πr²),

where r is the distance at which we make the observation. From this simple formula, it is seen that the radiation intensity falls off quadratically with increasing distance. In other words, if you increase the distance by a factor of 2, your exposure to the radiation will be decreased by a factor of 4.

19.5.4 Decay law

Unstable nuclei decay spontaneously. A given nucleus can decay the next moment, the next day, or even in the next century. Nobody can predict when it is going to happen. Despite this seemingly chaotic and "unscientific" situation, there is a strict order in all this. Atomic nuclei, being microscopic objects, are ruled by quantum probabilistic laws. Although we cannot predict the exact moment of its decay, we can calculate the probability that a nucleus will decay within this or that time interval. Nuclei decay because of their internal dynamics and not because they become "old" or somehow "rotten". To illustrate this, let us imagine that yesterday morning we found that a certain nucleus was going to decay within 24 hours with a probability of 50%. However, this morning we found that it is still "alive". This fact does not mean that the decay probability for another 24 hours has increased. Not at all! It remains the same, 50%, because the nucleus remains the same; nothing wrong has happened to it. This can go on and on for centuries. Actually, we never deal with individual nuclei but rather with huge numbers of identical nuclei. For such collections (ensembles) of quantum objects, the probabilistic laws become statistical laws. Let us assume that in the above example we had 1 million identical nuclei instead of only one. Then by this morning only half of these nuclei would survive, because the decay probability for 24 hours was 50%. Among the remaining 500000 nuclei, 250000 will decay by tomorrow morning; then, after another 24 hours, only 125000 will remain, and so on. The number of unstable nuclei that are still "alive" continuously decreases with time according to the curve shown in Fig. 19.2. If initially, at time t = 0, their number is N_0, then after a certain time interval T_1/2 only half of these nuclei will remain, namely N_0/2. One half of the remaining half will decay during another such interval. So, after the time 2T_1/2, we will have only one quarter of the initial amount, and so on. The time interval T_1/2, during which one half of the unstable nuclei decay, is called their half-life.
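The halving pattern just described follows the decay law N(t) = N_0 · (1/2)^(t/T_1/2). A minimal sketch, using the text's own example of 1,000,000 nuclei with a 24-hour half-life (the function names are ours):

```python
import math

# Decay law: N(t) = N0 * (1/2)**(t / T_half).
def remaining(N0, t, T_half):
    return N0 * 0.5 ** (t / T_half)

# Inverting the law gives the elapsed time from the surviving fraction,
# which is the idea behind radioactive dating:
def elapsed(N0, N, T_half):
    return T_half * math.log2(N0 / N)

N0, T_half = 1_000_000, 24.0  # nuclei, hours
for t in (0, 24, 48, 72):
    print(t, round(remaining(N0, t, T_half)))
# 0 1000000 / 24 500000 / 48 250000 / 72 125000

print(elapsed(N0, 500_000, T_half))  # 24.0 — one half-life has passed
```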
It is specific for each unstable nucleus and varies from a fraction of a second to thousands and millions of years. A few examples of such half-lives are given in Table 19.2.

19.5.5 Radioactive dating

Examining the amounts of the decay products makes radioactive dating possible. The most famous variety is carbon dating, which is applicable only to matter that was once living and presumed
to be in equilibrium with the atmosphere, taking in carbon dioxide from the air for photosynthesis. Cosmic ray protons blast nuclei in the upper atmosphere, producing neutrons, which in turn bombard nitrogen, the major constituent of the atmosphere. This neutron bombardment produces the radioactive isotope ^14_6 C.

[Figure 19.2: The time T_1/2 during which one half of the initial amount of unstable particles decays is called their half-life. The curve shows N(t) falling from N_0 to N_0/2, N_0/4, and N_0/8 at the times T_1/2, 2T_1/2, and 3T_1/2.]

  isotope       T_1/2               decay mode
  ^214_84 Po    1.64 × 10^-4 s      α
  ^89_36 Kr     3.16 min            β−
  ^222_86 Rn    3.83 days           α
  ^90_38 Sr     28.5 years          β−
  ^226_88 Ra    1.6 × 10^3 years    α
  ^14_6 C       5.73 × 10^3 years   β−
  ^238_92 U     4.47 × 10^9 years   α
  ^115_49 In    4.41 × 10^14 years  β−

Table 19.2: Half-lives of several unstable isotopes.

The radioactive carbon-14 combines with oxygen to form carbon dioxide and is incorporated into the cycle of living things. The isotope ^14_6 C decays (see Table 19.2) inside living bodies but is replenished from the air and food. Therefore, while an organism is alive, the concentration of this isotope in the body remains constant. After death, the replenishment from breath and food stops, but the isotopes that are in the dead body continue to decay. As a result, the concentration of ^14_6 C in it gradually decreases according to the curve shown in Fig. 19.2. The time t = 0 in this figure corresponds to the moment of death, and N_0 is the equilibrium concentration of ^14_6 C in living organisms. Therefore, by measuring the radioactive emissions from once-living matter and comparing its activity with the equilibrium level of emissions from things living today, an estimate of the elapsed time can be made. For example, if the rate of the radioactive emissions from a piece of wood, caused by the decay of ^14_6 C, is one half of that from living trees, then we can conclude that exactly one half-life has elapsed, i.e. that we are at the point t = T_1/2 on the curve of Fig. 19.2. According to Table 19.2, this means that the tree from which this piece of wood was made was cut approximately 5730 years ago. This is how physicists help archaeologists to assign dates to various organic materials.

19.6 Nuclear reactions

Those of you who have studied chemistry are familiar with the notion of a chemical reaction, which, in essence, is just a regrouping of the atoms that constitute molecules. As a result, reagent chemical compounds are transformed into product compounds. In the world of nuclear particles, similar processes are possible. When nuclei are close to each other, nucleons from one nucleus can "jump" into the other one. This happens because there are attractive and repulsive forces between the nucleons. The complicated interplay of these forces may cause their regrouping. As a result, the reagent particles are transformed into product particles. Such processes are called nuclear reactions. For example, when two ^3_2 He isotopes collide, the six nucleons constituting them can rearrange in such a way that the isotope ^4_2 He is formed and two protons are liberated. Similarly to chemical reactions, this process is denoted as

^3_2 He + ^3_2 He → ^4_2 He + p + p + 12.86 MeV.   (19.3)

The same as in chemical reactions, nuclear reactions can be either exothermic (i.e. releasing energy) or endothermic (i.e. requiring an energy input). The above reaction releases 12.86 MeV of energy. This is because the total mass on the left-hand side of Eq. (19.3) is 12.86 MeV greater than the total mass of the products on the right-hand side (you can check this using Table 19.1). Thus, when considering a particular nuclear reaction, we can always learn whether it releases or absorbs energy. For this, we only need to compare the total masses on the left- and right-hand sides of the equation. Now you can understand why it is very convenient to express masses in units of energy.
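Both checks suggested here — comparing the total masses of the two sides and balancing nucleon number and charge — can be sketched in a few lines with the Table 19.1 masses (the dictionaries and helper code are ours):

```python
# Reaction (19.3): 3He + 3He -> 4He + p + p + 12.86 MeV.
# Masses (MeV, Table 19.1), mass numbers A, and charges Z:
m = {"3He": 2808.391, "4He": 3727.378, "p": 938.272}
A = {"3He": 3, "4He": 4, "p": 1}
Z = {"3He": 2, "4He": 2, "p": 1}

lhs, rhs = ["3He", "3He"], ["4He", "p", "p"]

# Nucleon number and charge must be the same on both sides:
assert sum(A[x] for x in lhs) == sum(A[x] for x in rhs) == 6
assert sum(Z[x] for x in lhs) == sum(Z[x] for x in rhs) == 4

# The released energy is the mass excess of the left-hand side:
Q = sum(m[x] for x in lhs) - sum(m[x] for x in rhs)
print(f"Q = {Q:.2f} MeV")  # Q = 12.86 MeV
```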
Composing equations like (19.3), we should always check the superscripts and subscripts of the nuclei in order to have the same number of nucleons and the same charge on both sides of the equation. In the above example, we have six nucleons and the charge +4 in both the initial and final states of the reaction. To make the checking of nucleon number and charge conservation easier, the proton and neutron are sometimes also written with superscripts and subscripts, namely ^1_1 p and ^1_0 n. In this case, all we need is to check that the sum of the superscripts and the sum of the subscripts are the same on both sides of the equation.

19.7 Detectors

How can we observe such tiny things as protons and α-particles? There is no microscope that would be able to discern them. From the very beginning of the sub-atomic era, scientists have been working on the development of special instruments called particle detectors. These devices enable us either to register the mere fact that a certain particle has passed through a certain point in space, or to observe the trace of its path (the trajectory). Actually, this is as good as watching the particle. Although the particle sizes are awfully small, when passing through some substances they leave behind visible traces of tens of centimeters in length. By measuring the curvature of the trajectory of a particle deflected in an electric or magnetic field, a physicist can determine the charge and mass of the particle and thus can identify it.

19.7.1 Geiger counter

The most familiar device for registering charged particles is the Geiger counter. It cannot tell you anything about the particle except the fact that it has passed through the counter. The counter consists of a thin metal cylinder filled with gas. A wire electrode runs along the center of the tube and is kept at a high voltage (~2000 V) relative to the cylinder. When a particle passes through the tube, it causes ionization of the gas atoms and thus an electric discharge between the cylinder and the wire. The electric pulse can be counted by a computer or made to produce a "click" in a loudspeaker. The number of counts per second tells us about the intensity of the radiation.

19.7.2 Fluorescent screen

The very first detector was the fluorescent screen.
When a charged particle hits the screen, a human eye can discern a flash of light at the point of impact. In fact, we all use this kind of detector every day when watching TV or looking at a computer (if it does not have an LCD screen, of course). Indeed, the images on the screens of their electron-ray tubes are formed by accelerated electrons.

19.7.3 Photo-emulsion

Another type of particle detector, dating back to Becquerel, is the nuclear photographic emulsion. The passage of charged particles is recorded in the emulsion in the same way that ordinary black-and-white photographic film records a picture. The only difference is that nuclear photo-emulsion is made rather thick in order to catch a significant part of the particle path. After developing, a permanent record of the charged particle trajectory is available.

19.7.4 Wilson's chamber

In the fields of sub-atomic physics and nuclear physics, Wilson's cloud chamber is the most fundamental device for observing the trajectories of particles. Its basic principle was discovered by C. T. R. Wilson in 1897, and it was put to practical use in 1911. The top and the side of the chamber are covered with round glasses several centimeters in diameter. At the bottom of the chamber, a piston is placed. The air filling the chamber is saturated with water vapor. When the piston is pulled down quickly, the volume of the chamber expands and the temperature goes down. As a result, the air inside becomes supersaturated with the vapor. If a fast-moving charged particle enters the chamber when it is in such a supersaturated state, the water vapor condenses along the line of the ions generated by the particle, i.e. along the path of the particle. Thus we can observe the trace, and also take a photograph. To make the trace clearer, light is sometimes shone from the side. When placing the cloud chamber in a magnetic field, we can obtain various information about the charged particle by measuring the curvature of the trace and other data. The bubble chamber and the spark chamber have taken the place of the cloud chamber, which is nowadays used only for educational purposes. Wilson's cloud chamber has, however, played a very important role in the history of physics.

19.7.5 Bubble chamber

The bubble chamber was a particle detector of major importance during the initial years of high-energy physics.
The bubble chamber produced a wealth of physics from about 1955 well into the 1970s. It is based on the principle of bubble formation in a liquid heated above its boiling point, which is then suddenly expanded: boiling starts where passing charged particles have ionized the atoms of the liquid. The technique was honoured by the Nobel prize awarded to D. Glaser in 1960. Even today, bubble chamber photographs provide the aesthetically most appealing visualization of subnuclear collisions.

19.7.6 Spark chamber

The spark chamber is a historic device using electric discharges over a gap between two electrodes with a large potential difference to render passing particles visible. Sparks occur where the gas has been ionized. Most often, multiple short gaps were used, but wide-gap chambers with gaps up to 40 cm were also built. The spark chamber is still of great scientific value in that it remains relatively simple and cheap to build, as well as enabling an observer to view the paths of charged particles.

19.8 Nuclear energy

Nuclei can produce energy via two different types of reactions, namely, fission and fusion reactions. Fission is the break-up of a nucleus into two or more pieces (smaller nuclei). Fusion is the opposite process: the formation of a bigger nucleus from two small nuclei. A question may arise: how can two opposite processes both produce energy? Can we make an inexhaustible source of energy by breaking up and then fusing the same nuclei? Of course not! The energy conservation law cannot be circumvented in any way. When speaking about fusion and fission, we speak about different ranges of nuclei. Energy can only be released when either light nuclei fuse or heavy nuclei fission. To understand why this is so, let us recollect that for releasing energy the mass of the initial nuclei must be greater than the mass of the products of the nuclear reaction. The mass difference is transformed into the released energy. And why can the product nuclei lose some mass as compared to the initial nuclei? Because they are more tightly bound, i.e. their binding energies are larger. Fig. 19.3 shows the dependence of the binding energy B per nucleon on the number A of nucleons constituting a nucleus. As you can see, the curve reaches its maximum value of about 9 MeV per nucleon at around A ≈ 50. The nuclei with such a number of nucleons can produce energy neither through fusion nor through fission.
They are a kind of "ashes" and cannot serve as a fuel. In contrast to them, very light nuclei make more tightly bound products when fused with each other, as do very heavy nuclei when split up into lighter fragments.

[Figure 19.3: Binding energy per nucleon, B/A (MeV), versus the number of nucleons A, from 0 to about 250. The curve rises steeply in the fusion region at small A, peaks near A ≈ 50, and falls slowly in the fission region at large A.]

In fission processes, which were discovered and used first, a heavy nucleus like, for example, uranium or plutonium splits up into two fragments, which are both positively charged. These fragments repel each other by the electric force and move apart at high speed, distributing their kinetic energy in the surrounding material. In fusion reactions everything goes in the opposite direction. Very light nuclei, like hydrogen or helium isotopes, when approaching each other to a distance of a few fm (1 fm = 10^-13 cm), experience a strong attraction which overpowers their Coulomb (that is, electric) repulsion. As a result the two nuclei fuse into a single nucleus. They collapse towards each other with extremely high speeds. To form a stable nucleus they must get rid of the excess energy. This energy is emitted by ejecting a neutron or a photon.

19.8.1 Nuclear reactors

Since the discovery of radioactivity it was known that heavy nuclei release energy in the process of spontaneous decay. This process, however, is rather slow and cannot be influenced (sped up or slowed down) by humans, and therefore could not be effectively used for large-scale energy production. Nonetheless, it is ideal for feeding devices that must work autonomously in remote places for a long time and do not require much energy. For this, heat from the spontaneous decays can be converted into electric power in a radioisotope thermoelectric generator. Such generators have been used to power space probes and some lighthouses built by Russian engineers. A much more effective way of using nuclear energy is based on another type of nuclear decay, which is considered next.

Chain reaction

The discovery that opened up the era of nuclear energy was made in 1939 by the German physicists O. Hahn, L. Meitner, F. Strassmann, and O. Frisch.
They found that a uranium nucleus, after absorbing a neutron, splits into two fragments. This was not a spontaneous but an induced fission,

n + ²³⁵₉₂U → ¹⁴⁰₅₄Xe + ⁹⁴₃₈Sr + n + n + 185 MeV,    (19.4)

which released 185 MeV of energy as well as two neutrons which could cause similar reactions in the surrounding nuclei. The fact that instead of one initial neutron we obtain two neutrons in the reaction (19.4) is crucial. This gives us the possibility to make the so-called chain reaction shown schematically in Fig. 19.4. Figure 19.4: Chain reaction on uranium nuclei. In such a process, one neutron breaks one heavy nucleus, the two released neutrons break two more heavy nuclei and produce four neutrons which, in turn, can break another four nuclei, and so on. This process develops extremely fast. In a split second a huge amount of energy can be released, which means an explosion. In fact, this is how the so-called atomic bomb works. Can we control the development of the chain reaction? Yes we can! This is done in nuclear reactors that produce energy for our use. How can it be done? Critical mass First of all, if the piece of material containing fissile nuclei is too small, some neutrons may reach its surface and escape without causing further fissions. For each type of fissile material there is therefore a minimal mass of a sample that can support an explosive chain reaction. It is called the critical mass. For example, the critical mass of ²³⁵₉₂U is approximately 50 kg. If the mass is below the critical value, a nuclear explosion is not possible, but energy is still released and the sample becomes hot. The closer the mass is to its critical value, the more energy is released and the more intensive is the neutron radiation from the sample. The criticality of a sample (i.e. its closeness to the critical state) can be reduced by changing its geometry (making its surface bigger) or by putting inside it some other material (boron or cadmium) that is able to absorb neutrons. On the other hand, the criticality can be increased by putting neutron reflectors around the sample. These reflectors work like mirrors from which the escaped neutrons bounce back into the sample.
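Before going on, it is worth getting a feel for the scale of the 185 MeV released per fission in reaction (19.4) by converting it to energy per kilogram of uranium-235. The constants below are standard values, not taken from this chapter.

```python
# Energy released by fissioning 1 kg of U-235, at 185 MeV per fission (Eq. 19.4).
AVOGADRO = 6.022e23        # atoms per mole
MOLAR_MASS_U235 = 235.0    # grams per mole
MEV_TO_J = 1.602e-13       # joules per MeV

atoms_per_kg = 1000.0 / MOLAR_MASS_U235 * AVOGADRO
energy_joules = atoms_per_kg * 185 * MEV_TO_J
print(f"{energy_joules:.2e} J per kg")  # about 7.6e13 J, millions of times more than a chemical fuel
```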
Thus, by moving the absorbing material and the reflectors in and out, we can keep the sample close to the critical state. How a nuclear reactor works In a typical nuclear reactor, the fuel is not in one piece, but in the form of several hundred vertical rods, like a brush. Another system of rods that contain a neutron-absorbing material (control rods) can move up and down in between the fuel rods. When fully inserted, the control rods absorb so many neutrons that the reactor is shut down. To start the reactor, the operator gradually moves the control rods up. In an emergency situation they are dropped down automatically. To collect the energy, water flows through the reactor core. It becomes extremely hot and goes to a steam generator. There, the heat passes to water in a secondary circuit, which becomes steam for use outside the reactor enclosure to rotate the turbines that generate electricity. Nuclear power in South Africa By 2004 South Africa had only one commercial nuclear reactor supplying power into the national grid. It operates at Koeberg, located 30 km north of Cape Town. A small research reactor was also operated at Pelindaba as part of the nuclear weapons program, but it was dismantled. The Koeberg Nuclear Power Station is a uranium Pressurized Water Reactor (PWR). In such a reactor, the primary coolant loop is pressurised so that the water does not boil, and heat exchangers, called steam generators, are used to transmit heat to a secondary coolant which is allowed to boil to produce steam. To remove as much heat as possible, the water temperature in the primary loop is allowed to rise up to about 300 °C, which requires a pressure of 150 atmospheres (to keep the water from boiling). The Koeberg power station has the largest turbine generators in the southern hemisphere and produces 10000 MWh of electric energy. Construction of Koeberg began in 1976 and two of its Units were commissioned in 1984-1985. Since then, the plant has been in more or less continuous operation and there have been no serious incidents. Eskom, which operates this power station, may be the current technology leader. It is developing a new type of nuclear reactor, a modular pebble-bed reactor (PBMR). In contrast to traditional nuclear reactors, in this new type of reactor the fuel is not assembled in the form of rods.
The uranium, thorium or plutonium fuel is in oxide (ceramic) form, contained within spherical pebbles made of pyrolytic graphite. The pebbles, having the size of a tennis ball, are held in a bin or can. An inert gas (helium, nitrogen or carbon dioxide) circulates through the spaces between the fuel pebbles. This carries heat away from the reactor. Ideally, the heated gas is run directly through a turbine. However, since the gas from the primary coolant can be made radioactive by the neutrons in the reactor, usually it is brought to a heat exchanger, where it heats another gas, or steam. The primary advantage of pebble-bed reactors is that they can be designed to be inherently safe. When a pebble-bed reactor gets hotter, the more rapid motion of the atoms in the fuel increases the probability of neutron capture by ²³⁸₉₂U isotopes through an effect known as Doppler broadening. This isotope does not split up after capturing a neutron. This reduces the number of neutrons available to cause ²³⁵₉₂U fission, reducing the power output of the reactor. This natural negative feedback places an inherent upper limit on the temperature of the fuel without any operator intervention. The reactor is cooled by an inert, fireproof gas, so it cannot have a steam explosion as a water reactor can. A pebble-bed reactor can thus have all of its supporting machinery fail, and the reactor will not crack, melt, explode or spew hazardous wastes. It simply goes up to a designed "idle" temperature and stays there. In that state, the reactor vessel radiates heat, but the vessel and fuel spheres remain intact and undamaged. The machinery can be repaired or the fuel can be removed. A large advantage of the pebble-bed reactor over a conventional water reactor is that it operates at higher temperatures. The reactor can directly heat fluids for low-pressure gas turbines. The high temperatures permit systems to extract more mechanical energy from the same amount of thermal energy. Another advantage is that fuel pebbles for different fuels might be used in the same basic design of reactor (though perhaps not at the same time). Proponents claim that some kinds of pebble-bed reactors should be able to use thorium, plutonium and natural unenriched uranium, as well as the customary enriched uranium. One of the projects in progress is to develop pebbles and reactors that use the plutonium from surplus or expired nuclear explosives.
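The negative temperature feedback described above can be caricatured in a few lines of code. Assume, purely for illustration, that the neutron multiplication factor k drops linearly as the fuel heats up, and that the core heats while it is supercritical (k > 1) and cools while it is subcritical. Both the model and every number in it are invented, not taken from a real PBMR design.

```python
# Caricature of the pebble-bed negative feedback: Doppler broadening pulls the
# multiplication factor k down as the fuel heats, so the core self-limits at
# the "idle" temperature where k = 1. All numbers are invented for illustration.
T_IDLE = 900.0    # hypothetical idle temperature where k = 1 (deg C)
FEEDBACK = 1e-3   # hypothetical drop in k per degree of heating

def k_eff(T):
    """Multiplication factor falling linearly with fuel temperature."""
    return 1.0 + FEEDBACK * (T_IDLE - T)

T = 400.0  # start well below idle: the reactor is supercritical
for _ in range(5000):
    T += 10.0 * (k_eff(T) - 1.0)  # supercritical -> heats up; subcritical -> cools down

print(round(T, 1))  # -> 900.0: the core settles at the designed idle temperature
```

No operator action appears anywhere in the loop; the settling is built into k_eff itself, which is the point of the "inherently safe" claim.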
On June 25, 2003, the South African Republic's Department of Environmental Affairs and Tourism approved ESKOM's prototype 110 MW pebble-bed modular reactor for Koeberg. Eskom also has approval for a pebble-bed fuel production plant at Pelindaba. The uranium for this fuel is to be imported from Russia. If the trial is successful, Eskom says it will build up to ten local PBMR plants on South Africa's seacoast. Eskom also wants to export up to 20 PBMR plants per year. The estimated export revenue is 8 billion rand a year, and could employ about 57000 people. 19.8.2 Fusion energy For a given mass of fuel, a fusion reaction like

²₁H + ³₁H → ⁴₂He + n + 17.59 MeV    (19.5)

yields several times more energy than a fission reaction. This is clear from the curve given in Fig. 19.3. Indeed, the change of the binding energy (per nucleon) is much more significant for a fusion reaction than for a fission reaction. Fusion is, therefore, a much more powerful source of energy. For example, 10 g of deuterium, which can be extracted from 500 litres of water, and 15 g of tritium, produced from 30 g of lithium, would give enough fuel for the lifetime electricity needs of an average person in an industrialised country. But this is not the only reason why fusion has attracted so much attention from physicists. Another, more fundamental, reason is that fusion reactions were responsible for the synthesis of the initial amount of light elements at primordial times when the universe was created. Furthermore, the synthesis of nuclei continues inside the stars, where fusion reactions produce all the energy which reaches us in the form of light. Thermonuclear reactions If fusion is so advantageous, why is it not used instead of fission reactors? The problem is in the electric repulsion of the nuclei. Before the nuclei on the left-hand side of Eq. (19.5) can fuse, we have to bring them somehow close to each other, to a distance of ≈10⁻¹³ cm. This is not an easy task! They are both positively charged and "refuse" to approach each other. What we can do is to make a mixture of the atoms containing such nuclei and heat it up. At high temperatures the atoms move very fast. They fiercely collide and lose all their electrons. The mixture becomes a plasma, i.e. a mixture of bare nuclei and freely moving electrons.
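The claim that fusion releases several times more energy than fission for a given mass of fuel can be checked directly from reactions (19.4) and (19.5): divide the energy released by the number of nucleons of fuel involved in each reaction.

```python
# Energy release per nucleon of fuel: D-T fusion (Eq. 19.5) vs U-235 fission (Eq. 19.4).
fusion_mev, fusion_nucleons = 17.59, 2 + 3      # d + t -> He-4 + n
fission_mev, fission_nucleons = 185.0, 235 + 1  # n + U-235 -> Xe-140 + Sr-94 + 2n

fusion_per_nucleon = fusion_mev / fusion_nucleons     # ~3.5 MeV per nucleon
fission_per_nucleon = fission_mev / fission_nucleons  # ~0.78 MeV per nucleon
print(round(fusion_per_nucleon / fission_per_nucleon, 1))  # -> 4.5
```

So, gram for gram, the fusion fuel is indeed several times more potent, exactly as the slopes of the curve in Fig. 19.3 suggest.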
If the temperature is high enough, the colliding nuclei can overcome the electric repulsion and approach each other to within fusion distance. When the nuclei fuse, they release much more energy than was spent to heat up the plasma. Thus the initial energy "investment" pays off. The typical temperature needed to ignite a reaction of the type (19.5) is extremely high. In fact, it is the same temperature that our sun has in its centre, namely, 15 million degrees. This is why the reactions (19.3), (19.5), and the like are called thermonuclear reactions. Human-made thermonuclear reactions As with fission reactions, the first application of thermonuclear reactions was in weapons, namely, in the hydrogen bomb, where fusion is ignited by the explosion of an ordinary (fission) plutonium bomb which heats up the fuel to solar temperatures. In attempting to make fusion controllable, people encounter the problem of holding the plasma. It is relatively easy to achieve a high temperature (with laser pulses, for example). But as soon as the plasma touches the walls of the container, it immediately cools down. To keep it from touching the walls, various ingenious methods are tried, such as strong magnetic fields and laser beams directed at the plasma from all sides. In spite of all efforts and ingenious tricks, all such attempts have so far failed. Most probably this straightforward approach to controllable fusion is doomed, because one has to hold in one's hands a "piece of burning sun". Cold fusion To visualize the struggle of the nuclei approaching each other, imagine yourself pushing a metallic ball up the slope shown in Fig. 19.5. The more kinetic energy you give to the ball, the higher it can climb. Your purpose is to make it fall into the narrow well that is behind the barrier.

Figure 19.5: Effective nucleus-nucleus potential V_eff as a function of the separation R between the nuclei: a Coulomb barrier with a deep narrow well behind it.

In fact, the curve in Fig. 19.5 shows the dependence of the relative potential energy V_eff between two nuclei on the distance R separating them.
The deep narrow well corresponds to the strong short-range attraction, and the 1/R barrier represents the Coulomb (electric) repulsion. The nuclei need to overcome this barrier in order to "touch" each other and fuse, i.e. to fall into the narrow and deep potential well. One way to achieve this is to give them enough kinetic energy, which means to raise the temperature. However, there is another way, based on the quantum laws. As you remember, when discussing the motion of the electron inside an atom (see Sec. 19.1), we said that it forms a "cloud" of probability around the nucleus. The density of this cloud diminishes at very short and very long distances but never disappears completely. This means that we can find the electron even inside the nucleus, though with a rather small probability. The nuclei moving towards each other, being microscopic objects, obey the quantum laws as well. The probability density for finding one nucleus at a distance R from another one also forms a cloud. This density is non-zero even under the barrier and on the other side of the barrier. This means that, in contrast to classical objects, quantum particles like nuclei can penetrate through potential barriers even if they do not have enough energy to go over them! This is called the tunneling effect. The tunneling probability strongly depends on the thickness of the barrier. Therefore, instead of lifting the nuclei against the barrier (which means raising the temperature), we can try to make the barrier itself thinner, or to keep the nuclei close to the barrier for such a long time that even a low penetration probability would be realized. How can this be done? The idea is to put the nuclei we want to fuse inside a molecule, where they can stay close to each other for a long time. Furthermore, in a molecule the Coulomb barrier becomes thinner because of electron screening. In this way fusion may proceed even at room temperature. This idea of cold fusion was originally discussed (in 1947) by F. C. Frank and put forward (in 1948) by A. D. Sakharov, the "father" of the Russian hydrogen bomb, who at the later stages of his career was known worldwide as a prominent human rights activist and a winner of the Nobel Prize for Peace.
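To see why the barrier in Fig. 19.5 is so hard to climb over, compare its height for two hydrogen nuclei with the typical thermal energy at the sun's central temperature. The sketch below uses the standard Coulomb energy e²/(4πε₀R) and Boltzmann's constant; the 2 fm "touching" separation is a rough assumption of ours, not a value from the text.

```python
# Coulomb barrier between two hydrogen nuclei vs thermal energy in the sun's core.
E2_COULOMB = 1.44e6      # e^2 / (4*pi*eps0) in eV*fm (standard value)
R_TOUCH = 2.0            # assumed separation at which the nuclei "touch", in fm
K_BOLTZMANN = 8.617e-5   # eV per kelvin

barrier_ev = E2_COULOMB / R_TOUCH      # ~0.7 MeV barrier height
thermal_ev = K_BOLTZMANN * 15e6        # kT at 15 million K, ~1.3 keV
print(round(barrier_ev / thermal_ev))  # the barrier is hundreds of times larger than kT
```

Even in the sun's core the average thermal energy is far below the barrier top, which is why tunneling (plus the fastest nuclei in the tail of the distribution) is what actually makes thermonuclear fusion go.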
When working on the bomb project, Sakharov initiated research into peaceful applications of nuclear energy and suggested the fusion of two hydrogen isotopes via the reaction (19.5) by forming a molecule of them in which one of the electrons is replaced by a muon. The muon is an elementary particle (see Sec. 19.9) which has the same characteristics as an electron. The only difference between them is that the muon is 200 times heavier than the electron. In other words, a muon is a heavy electron. What will happen if we make a muonic atom of hydrogen, that is, a bound state of a proton and a muon? Due to its large mass, the muon would be very close to the proton and the size of such an atom would be 200 times smaller than that of an ordinary atom. This is clearly seen from the formula for the atomic Bohr radius,

R_Bohr = ħ²/(me²),

where the mass is in the denominator. Now, what happens if we make a muonic molecule? It will also be 200 times smaller than an ordinary molecule. The Coulomb barrier will be 200 times thinner and the nuclei 200 times closer to each other. This is just what we need! Speaking in terms of the effective nucleus-nucleus potential shown in Fig. 19.5, we can say that the muon modifies this potential in such a way that a second minimum appears. Such a modified potential is (schematically) shown in Fig. 19.6.

Figure 19.6: Effective nucleus-nucleus potential (thick curve) for nuclei confined in a molecule. The thin curve shows the corresponding distribution of the probability for finding the nuclei at a given distance from each other.

The molecule is a bound state in the shallow but wide minimum of this potential. Most of the time, the nuclei are at the distance corresponding to the maximum of the probability density distribution (shown by the thin curve). Observe that this density is not zero under the barrier (though it is rather small) and even at R = 0. This means that the system can (with a small probability) jump from the shallow well into the deep well through the barrier, i.e. it can tunnel and fuse. Unfortunately, the muon is not a stable particle. Its lifetime is only 10⁻⁶ s. This means that a muonic molecule cannot exist longer than 1 microsecond. As a matter of fact, from a quantum mechanical point of view, this is quite a long interval. The quantum mechanical wave function (which describes the probability density) oscillates with a frequency proportional to the energy of the system. With a typical binding energy of a muonic molecule of ≈300 eV, this frequency is ≈10¹⁷ s⁻¹. This means that the particle hits the barrier with this frequency, and during 1 microsecond it makes ≈10¹¹ attempts to jump through it. The calculations show that the penetration probability is ≈10⁻⁷. Therefore, during 1 microsecond the nuclei can penetrate through the barrier ≈10000 times, and fusion can happen much faster than the decay of the muon. Cold fusion via the formation of muonic molecules was achieved in many laboratories, but unfortunately it cannot solve the problem of energy production for our needs. The obstacle is the negative efficiency, i.e. to make muonic cold fusion we have to spend more energy than it produces. The reason is that muons do not exist around us like protons or electrons. We have to produce them in accelerators. This takes a lot of energy. Actually, the muon serves as a catalyst for the fusion reaction. After helping one pair of nuclei to fuse, the muon is liberated from the molecule and can form another molecule, and so on. It was estimated that the efficiency of the energy production would be positive only if each muon ignited at least 1000 fusion events. Experimentalists have tried their best, but so far the record number is only 150 fusion events per muon. This is too few. The main reason why the muon does not catalyze more reactions is that it is eventually trapped by a ⁴He nucleus, which is a by-product of the fusion. Helium captures the muon into an atomic orbit with a large binding energy, and it cannot escape. Nonetheless, research in the field of cold fusion continues. There are some other ideas on how to keep nuclei close to each other. One of them is to put the nuclei inside a crystal. Another way out is to increase the penetration probability by using molecules with special properties, namely, those that have quantum states with almost the same energies as the excited states of the compound nucleus. Scientists try all possibilities, since the energy demands of mankind grow continuously and therefore the stakes in this quest are high.
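The numbers quoted in the muonic-molecule discussion can be cross-checked with one line of arithmetic: an oscillation frequency of 10¹⁷ s⁻¹ over the muon's 10⁻⁶ s lifetime, at a penetration probability of 10⁻⁷ per attempt, should indeed give of order 10⁴ barrier penetrations.

```python
# Order-of-magnitude check of the muon-catalyzed fusion estimate quoted in the text.
frequency = 1e17     # barrier hits per second (typical muonic-molecule value)
lifetime = 1e-6      # muon lifetime in seconds
probability = 1e-7   # tunneling probability per attempt

attempts = frequency * lifetime           # ~1e11 attempts before the muon decays
expected_fusions = attempts * probability
print(f"{expected_fusions:.0e}")          # 1e+04: fusion easily wins the race against muon decay
```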
19.9 Elementary particles In our quest for the elementary building blocks of the universe, we delved inside the atomic nucleus and found that it is composed of protons and neutrons. Are the three particles e, p, and n the blocks we are looking for? The answer is "no". Even before the structure of the atom was understood, Becquerel discovered radioactivity (see Sec. 19.5.1), which afterwards puzzled physicists and forced them to look deeper, i.e. inside protons and neutrons. 19.9.1 β decay Among the three types of radioactivity, the α and γ rays were easily explained. The emission of an α particle is a kind of fission reaction, in which an initial nucleus spontaneously decays into two fragments, one of which is the nucleus ⁴₂He (i.e. the α particle). The γ rays are just electromagnetic quanta emitted by a nuclear system when it transits from one quantum state to another (in the same way that an atom emits light). The β rays posed a puzzle. On the one hand, they are just electrons, and you may think that this looks simple. But on the other hand, they are not the electrons from the atomic shell. It was found that they come from inside the nucleus! After the β decay, the charge of the nucleus increases by one unit,

ᴬ_Z(parent nucleus) → ᴬ_(Z+1)(daughter nucleus) + e⁻,

which is in accordance with the charge conservation law. There was another puzzle associated with the β decay: the emitted electrons did not have a definite energy. Measuring their kinetic energies, you could find very fast and very slow electrons, as well as electrons with all intermediate speeds. How could identical parent nuclei, after losing different amounts of energy, become identical daughter nuclei? Maybe energy is not conserved in the quantum world? The fact was so astonishing that even Niels Bohr put forward the idea of a statistical nature of the energy conservation law. To explain the first puzzle, it was naively suggested that the neutron is a bound state of a proton and an electron. At that time, physicists believed that if something is emitted from an object, this something must be present inside that object before the emission. They could not imagine that a particle could be created from the vacuum. The naive (pe) model of the neutron contradicted the facts. Indeed, it was known already that the pe bound state is the hydrogen atom. The neutron is much smaller than the atom.
Therefore, it would have to be an unusually tight binding, perhaps with something else involved that keeps the size small. By the way, this "something else" could also save the energy conservation law. In 1930, Wolfgang Pauli suggested that in addition to the electron, the β decay involves another particle, ν, that is emitted along with the electron and carries away part of the energy. For example,

²³⁴₉₀Th → ²³⁴₉₁Pa + e⁻ + ν̄.    (19.6)

This additional particle was called the neutrino (in Italian, the word "neutrino" means small neutron). The neutrino is electrically neutral, has an extremely small mass (maybe even zero, which is still a question in 2004) and interacts very weakly with matter. This is why it was not detected experimentally until 1956. The "bar" over ν in Eq. (19.6) means that in this reaction it is actually the anti-neutrino that is emitted (see the discussion on anti-particles further down in Sec. 19.9.2). 19.9.2 Particle physics In an attempt to explain the β decay and to understand the internal structure of the neutron, a new branch of physics was born: particle physics. The only way to explore the structure of subatomic particles is to strike them with other particles in order to knock out their "constituent" parts. Simple logic says: the more powerful the impact, the smaller the parts that can be knocked out. At the beginning, the only source of energetic particles with which to strike other particles was the cosmic rays. Earth is constantly bombarded by all sorts of particles coming from outer space. The atmosphere protects us from most of them, but many still reach the ground. Antiparticles In 1932, studying the cosmic rays with a cloud chamber, Carl Anderson made a photograph of two symmetrical tracks of charged particles. The measurements of the track curvatures showed that one track belonged to an electron and the other was made by a particle having the same mass and an equal but positive charge. These particles were created when a cosmic γ quantum of high energy collided with a nucleus. The discovered particle was called the positron and denoted as e⁺ to distinguish it from the electron, which is sometimes denoted as e⁻. It was the first antiparticle discovered. Later, it was found that every particle has its "mirror reflection", the antiparticle. To denote an antiparticle, a "bar" is placed over the particle symbol.
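Bookkeeping like that in reaction (19.6) can be automated: check that the mass number A and the charge Z balance on both sides (the electron counts as A = 0, Z = −1, and the anti-neutrino carries neither). A minimal sketch; the helper name is ours, not standard notation:

```python
# Check A (mass number) and Z (charge) conservation in the beta decay of Eq. (19.6):
# Th-234 -> Pa-234 + e- + anti-neutrino
def balanced(lhs, rhs):
    """Each particle is a tuple (A, Z); conservation means equal column sums."""
    total = lambda side: tuple(map(sum, zip(*side)))
    return total(lhs) == total(rhs)

th234 = (234, 90)
pa234 = (234, 91)
electron = (0, -1)
antineutrino = (0, 0)

print(balanced([th234], [pa234, electron, antineutrino]))  # True
```

Note that the anti-neutrino contributes nothing to either sum: charge and mass-number conservation alone could never have revealed its existence, which is why the energy spectrum was the crucial clue.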
The anti-proton p̄, for example, has the same mass as an ordinary proton but a negative charge. When a particle collides with its "mirror reflection", they annihilate, i.e. they burn out completely. In this collision, all their mass is transformed into electromagnetic energy in the form of γ quanta. For example, if an electron collides with a positron, the following reaction may take place,

e⁻ + e⁺ → γ + γ,    (19.7)

where two photons are needed to conserve the total momentum of the system. In principle, stable antimatter can exist. For example, the pair p̄ and e⁺ can form an atom of anti-hydrogen with exactly the same energy states as ordinary hydrogen. Experimentally, atoms of anti-hydrogen have been obtained. The problem with them is that, surrounded by ordinary matter, they cannot live long. Colliding with ordinary atoms, they annihilate very fast. There are speculations that our universe should be symmetric with respect to particles and antiparticles. Indeed, why should preference be given to matter and not to anti-matter? This implies that somewhere very far away there must be an equal amount of anti-matter, i.e. an anti-universe. Can you imagine what happens if they meet? Muon, mesons, and the others In yet another cosmic-ray experiment, a particle having the same properties as the electron but 207 times heavier was discovered in 1935. It was given the name muon and the symbol μ. For a long time it remained an "unnecessary" particle in the picture of the world. Only the modern theories have harmoniously included the muon as a constituent part of matter (see Sec. 19.9.3). The same inexhaustible cosmic rays revealed the π and K mesons in 1947. The π mesons (or simply pions) had been theoretically predicted twelve years earlier by Yukawa as the mediators of the strong forces between nucleons. The K mesons, however, were unexpected. Furthermore, they showed very strange behaviour. They were easily created, but only in pairs. The probability of the inverse process (i.e. their decay) was 10¹³ times lower than the probability of their creation. It was suggested that these particles possess a new type of charge, the strangeness, which is conserved in the strong interactions.
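In reaction (19.7), if the electron and positron are essentially at rest, the two back-to-back photons share the total rest energy 2·m_e·c², so each photon carries one electron rest energy. A quick check with standard constants (not quoted in this chapter):

```python
# Photon energy from e- e+ annihilation at rest (Eq. 19.7): the total rest
# energy 2*m_e*c^2 is shared equally by the two back-to-back photons.
M_E = 9.109e-31    # electron mass, kg
C = 2.998e8        # speed of light, m/s
J_PER_MEV = 1.602e-13

total_mev = 2 * M_E * C**2 / J_PER_MEV
print(round(total_mev / 2, 3))  # each photon: 0.511 MeV
```

This 0.511 MeV line is exactly what positron-annihilation detectors (for example, in PET scanners) look for.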
When a pair of such strange particles is created, one of them has strangeness +1 and the other −1, so the total strangeness remains zero. When decaying, they act individually, and therefore the strangeness is not conserved. According to the suggestion, this is only possible through the weak interactions, which are much weaker than the strong interactions (see Sec. 19.9.4), and thus the decay probability is much lower. The golden age of particle physics began in the 1950s with the advent of particle accelerators, machines that produce beams of electrons or protons with high kinetic energy. Having such beams available, experimentalists could plan an experiment and repeat it, while with the cosmic rays they were at the mercy of chance. When the accelerators became the main tool of exploration, particle physics acquired its second name, high energy physics. During the last half-century, experimentalists have discovered so many new particles (a few of them are listed in Table 19.3) that it became obvious that they cannot all be elementary. When colliding with each other, they produce some other particles. Mutual transformation of the particles is their main property.

Table 19.3: A few representatives of different particle families.

family    particle            symbol       mass (MeV)   lifetime T_1/2 (s)
photon    photon              γ            0            stable
leptons   electron            e⁻, e⁺       0.511        stable
leptons   muon                μ⁻, μ⁺       105.7        2.2 × 10⁻⁶
leptons   tau                 τ⁻, τ⁺       1777         ≈ 10⁻¹³
leptons   electron neutrino   ν_e          ≈ 0          stable
leptons   muon neutrino       ν_μ          ≈ 0          stable
leptons   tau neutrino        ν_τ          ≈ 0          stable
hadrons   pion                π⁺, π⁻       139.6        2.6 × 10⁻⁸
hadrons   pion                π⁰           135.0        0.8 × 10⁻¹⁶
hadrons   kaon                K⁺, K⁻       493.7        1.2 × 10⁻⁸
hadrons   kaon                K⁰_S         497.7        0.9 × 10⁻¹⁰
hadrons   kaon                K⁰_L         497.7        5.2 × 10⁻⁸
hadrons   eta meson           η⁰           548.8        ≈ 10⁻¹⁸
hadrons   proton              p            938.3        stable
hadrons   neutron             n            939.6        ≈ 900
hadrons   lambda              Λ⁰           1116         2.6 × 10⁻¹⁰
hadrons   sigma               Σ⁺           1189         0.8 × 10⁻¹⁰
hadrons   sigma               Σ⁰           1192         ≈ 6 × 10⁻²⁰
hadrons   sigma               Σ⁻           1197         1.5 × 10⁻¹⁰
hadrons   omega               Ω⁻, Ω⁺       1672         0.8 × 10⁻¹⁰

Physicists faced a problem of particle classification similar to the problems of classifying animals, plants, and chemical elements. The first approach was very simple: the particles were divided into four groups according to their mass: leptons (light particles, like the electron), mesons (intermediate mass, like the pion), baryons (heavy particles, like the proton or neutron), and hyperons (very heavy particles). Then it was realized that it would be more logical to divide the particles into three families according to their ability to interact via the weak, electromagnetic, and strong forces (in addition to that, all particles experience gravitational attraction towards each other). Except for the gravitational interaction, the photon (γ quantum) participates only in electromagnetic interactions, the leptons take part in both weak and electromagnetic interactions, and the hadrons are able to interact via all forces of nature (see Sec. 19.9.4). In addition to the conservation of strangeness, several other conservation laws were discovered. For example, the number of leptons is conserved. This is why in the reaction (19.6) we have an electron (lepton number +1) and an anti-neutrino (lepton number −1) in the final state. Similarly, the number of baryons is conserved in all reactions. The quest for the constituent parts of the neutron has led us to something unexpected. We found that there are several hundreds of different particles that can be "knocked out" of the neutron, but none of them are its parts. Actually, the neutron itself can be knocked out of some of them! What a mess! Further efforts of experimentalists could not find an order, which was finally discovered by theoreticians who introduced the notion of quarks. 19.9.3 Quarks and leptons While experimentalists seemed to be lost in the maze, the theoreticians groped for the way out. Using an extremely complicated mathematical technique, they managed to group the hadrons into families in a way which implied that all known (and yet unknown) hadrons are built of only six types of particles with fractional charges. The main credit for this (in the form of the Nobel Prize) was given to M. Gell-Mann and G. Zweig.
At first, they considered a subset of the hadrons and developed a theory with only three types of such truly elementary particles. When Murray Gell-Mann thought of a name for them, he came across the book "Finnegans Wake" by James Joyce. The line "Three quarks for Muster Mark..." appears in that fanciful book (in German, the word "Quark" means cottage cheese). He needed a name for three particles and this was the answer. Thus the term quark was coined. Later, the theory was generalized to include all known particles, which required six types of quarks. Modern theories also require that the number of different leptons should be the same as the number of different quark types. According to these theories, the quarks and leptons are truly elementary, i.e. they do not have any internal structure and therefore are of zero size (pointlike). Thus, the world is constructed of just twelve types of elementary building blocks, which are given in Table 19.4. Amazingly enough, the electron, which was discovered before all other particles, more than a century ago, turned out to be one of them! After Gell-Mann, who used a funny name (quark) for an elementary particle, fundamental physics was flooded with such names. For example, the six quark types are called flavors (for cottage cheese, this is appropriate indeed), the three different states in which each quark can be are called colors (red, green, blue), etc. Modern physics is so complicated and mathematical that the people working in it need such kinds of jokes to "spice an unsavoury dish with flavors". The funny names should not confuse anybody. Elementary particles do not have any smell, taste, or colour. These terms simply denote certain properties (similar to electric charge) that do not exist in the human world.

Table 19.4: Elementary building blocks of the universe.

family    elementary particle   symbol   charge   lepton number   baryon number   mass (MeV)
leptons   electron              e⁻       −1       1               0               0.511
leptons   muon                  μ⁻       −1       1               0               105.7
leptons   tau                   τ⁻       −1       1               0               1777
leptons   electron neutrino     ν_e      0        1               0               ≈ 0
leptons   muon neutrino         ν_μ      0        1               0               ≈ 0
leptons   tau neutrino          ν_τ      0        1               0               ≈ 0
quarks    up                    u        +2/3     0               1/3             ≈ 360
quarks    down                  d        −1/3     0               1/3             ≈ 360
quarks    strange               s        −1/3     0               1/3             ≈ 540
quarks    charmed               c        +2/3     0               1/3             ≈ 1500
quarks    top (truth)           t        +2/3     0               1/3             174000
quarks    bottom (beauty)       b        −1/3     0               1/3             ≈ 5000

Hadrons There are particles that are able to interact with each other by the
so-called strong forces. Another name for these forces is nuclear forces. They are very strong at short distances (≈10⁻¹⁵ m) and vanish very quickly when the distance between the particles increases. All these particles are called hadrons. The protons and neutrons are examples of hadrons.

As you remember, we learned about the existence of a huge variety of particles when trying to look inside a nucleon, more particularly, the neutron. So, what is the neutron made of? Can we get the answer at last, after learning about the quarks? Yes, we can. According to modern theories, all hadrons are composed of quarks. The quarks can be combined in groups of two or three. The bound states of two quarks are called mesons, and the bound complexes of three quarks are called baryons. No other numbers of quarks can form observable particles.¹ Nucleons are baryons and therefore consist of three quarks, while the pion is a meson containing only two quarks, as schematically shown in Fig. 19.7. Comparing this figure with Table 19.4, you can see why quarks have fractional charges. Counting the total charge of a hadron, you should not forget that anti-quarks have the opposite charges. The baryon number of an anti-quark also has the opposite sign (negative). This is why mesons actually consist of a quark and an anti-quark, in order to have total baryon number zero.

¹Recently, experimentalists and theoreticians have started to actively discuss the possibility of the existence of pentaquarks, exotic particles that are bound complexes of five quarks.

Figure 19.7: Quark content of the proton (uud), neutron (udd), and π⁺ meson (ud̄).

Particle reactions

At the early stages of the development of particle physics, in order to find the constituent parts of various particles, experimentalists simply collided them and watched the "fragments". However, this straightforward approach led to confusion. For example, the reaction between the π⁻
meson and proton,

    π⁻ + p → K⁰ + Λ⁰ ,    (19.8)

would suggest (if naively interpreted) that either the K⁰ or the Λ⁰ is a constituent part of the nucleon, while the pion is incorporated into the other "fragment". On the other hand, the same collision can knock out different "fragments" from the same proton. For example,

    π⁻ + p → π⁰ + n ,    (19.9)

which leads to the absurd suggestion that the neutron is a constituent part of the proton.

The quark model explains all such "puzzles" nicely and logically. Similarly to chemical reactions, which are just rearrangements of atoms, the particle reactions of the type (19.8) and (19.9) are just rearrangements of the quarks. The only difference is that, in contrast to chemistry, where the number of atoms does not change, the number of quarks before the collision is not necessarily equal to their number after the collision. This is because a quark from one colliding particle can annihilate with the corresponding antiquark from another particle. Moreover, if the collision is sufficiently powerful, quark-antiquark pairs can be created from the vacuum.

It is convenient to depict the particle transformations in the form of so-called quark flow diagrams. On such diagrams, the quarks are represented by lines that may be visualized as trajectories showing their movement from left to right. For example, the diagram given in Fig. 19.8 shows the quark rearrangement for the reaction (19.8). As you can see, when the pion collides with the proton, its ū quark annihilates with a u quark from the proton. At the same time, an s̄s pair is created from the vacuum. Then the s̄ quark binds with the d quark to form the strange meson K⁰, while the s quark goes together with the ud pair as the strange baryon Λ⁰. The charge-exchange reaction (19.9) is a simpler rearrangement process, shown in Fig. 19.9. You may wonder why the quark and antiquark of the same flavor in the π⁰ meson do not annihilate. Yes, they do, but
not immediately. And due to this annihilation, the lifetime of the π⁰ is 100 million times shorter than the lifetime of the π± (see Table 19.3).

Figure 19.8: Quark-flow diagram for the reaction π⁻ + p → K⁰ + Λ⁰.

Figure 19.9: Quark-flow diagram for the reaction π⁻ + p → π⁰ + n.

Despite its simplicity, the quark-flow diagram technique is a very powerful method, not only for explaining the observed reactions but also for predicting new reactions that have not yet been seen in experiments. Knowing the quark content of particles (which is available in modern physics handbooks), you can draw plenty of such diagrams that will describe possible particle transformations. The only rule is to keep the lines continuous. They can disappear or emerge only as a quark-antiquark pair of the same flavor. However, the continuity of the quark lines is valid only for processes caused by the strong interaction. Indeed, the β-decay of a free neutron (caused by the weak forces), as well as the β-decay of nuclei, indicates that quarks can change flavor. In particular, the β-decay (19.10) or (19.6) happens because the d quark transforms into a u quark,

    n → p + e⁻ + ν̄e ,    (19.10)

    d → u + e⁻ + ν̄e ,    (19.11)

due to the weak interaction, as shown in Fig. 19.10.

Figure 19.10: Quark-flow diagram for the β-decay of the neutron.

Quark confinement

At this point, it is very logical to ask if anybody has ever observed an isolated quark. The answer is "no". Why? And how can one be so confident of the quark model when no one has ever seen an isolated quark? Basically, you cannot see an isolated quark because the quark-quark attractive force does not let them go
. In contrast to all other systems, the attraction between quarks grows with the distance separating them. It is like a rubber cord connecting two balls. When the balls are close to each other, the cord is not stretched and the balls do not feel any force. If, however, you try to separate the balls, the cord pulls them back. The more you stretch the cord, the stronger the force becomes (according to Hooke's law of elasticity). Of course, a real rubber cord would eventually break. This does not happen with the quark-quark force. It can grow to infinity. This phenomenon is called the confinement of quarks.

Nonetheless, we are sure that the nucleon consists of three quarks having fractional charges. A hundred years ago Rutherford, by observing the scattering of charged particles from an atom, proved that its positive charge is concentrated in a small nucleus. Nowadays, similar experiments prove the existence of fractional point-like charges inside the nucleon. The quark model actually is much more complicated than the quark-flow diagrams. It is a consistent mathematical theory that explains a vast variety of experimental data. This is why nobody doubts that it reflects reality.

19.9.4 Forces of nature

If asked how many types of forces exist, many people start counting on their fingers, and when the count exceeds ten, they answer "plenty". Indeed, there are gravitational forces, electrical, magnetic, elastic, and frictional forces, and also the forces of wind, of expanding steam, of contracting muscles, etc. If, however, we analyze the root causes of all these forces, we can reduce their number to just a few fundamental forces (or fundamental interactions, as physicists say). For example, the elastic force of a stretched rubber cord is due to the attraction between the molecules that the rubber is made of. Looking deeper, we find that the molecules attract each other because of the electromagnetic attraction between the electrons of one molecule and the nuclei of the other. Similarly, if we compress a piece of rubber, it resists because the molecules refuse to approach each other too closely, due to the electric repulsion of the nuclei. Therefore the elasticity of rubber has an electromagnetic origin. Any other force in the human world can be analyzed in the same manner. After doing this, we will find that all forces that we see around us (in the macroworld),
are either of gravitational or electromagnetic nature. As we also know, in the microworld there are two other types of forces: the strong (nuclear) forces that act between all hadrons, and the weak forces that are responsible for changing the quark flavors. Therefore, all interactions in the Universe are governed by only four fundamental forces: strong, electromagnetic, weak, and gravitational.

These forces are very different in strength and range. Their relative strengths are given in Table 19.5. The strongest is the nuclear interaction. The strength of the electromagnetic forces is a hundred times lower. The weak forces are nine orders of magnitude weaker than the nuclear forces, and gravity is 38 orders of magnitude weaker! It is amazing that this subtle interaction governs the cosmic processes. The reason is that the gravitational forces are long-range and always attractive. There is no such thing as negative mass that would screen the gravitational field, the way negative electrons screen the field of positive nuclei.

    Force             Relative strength   Range
    Strong            1                   short
    Electromagnetic   0.0073              long
    Weak              10⁻⁹                very short
    Gravitational     10⁻³⁸               long

    Table 19.5: Four fundamental forces and their relative strengths.

Towards the unified force

Physicists always try to simplify things. Since there are only four fundamental forces, it is tempting to ask: "If only four, then why not only one?". Can it be that all interactions are just different faces of one master force? The first to start the quest for the unification of forces was Einstein. After completing his general theory of relativity, he spent 30 years in unsuccessful attempts to unify the electromagnetic and gravitational forces. At that time, it seemed logical because both of them were infinite in range and obeyed the same inverse square law. Einstein failed because the unification should be done on the basis of quantum laws, but he tried to do it using classical concepts.
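The enormous gap between gravity and electromagnetism in Table 19.5 can be verified for a concrete pair of particles. The sketch below is our own illustration (not from the text), using standard approximate values of the constants; since both forces fall off as 1/r², their ratio does not depend on the separation:

```python
# Compare the gravitational and electrostatic forces between two protons.
# Both forces scale as 1/r^2, so the ratio is independent of distance.
G = 6.674e-11      # gravitational constant, N m^2 / kg^2
k = 8.988e9        # Coulomb constant, N m^2 / C^2
m_p = 1.6726e-27   # proton mass, kg
e = 1.602e-19      # elementary charge, C

ratio = (G * m_p**2) / (k * e**2)   # F_grav / F_coulomb
print(f"F_grav / F_coulomb = {ratio:.2e}")   # roughly 8e-37
```

The result, around 10⁻³⁶, is not identical to the 10⁻³⁸ of Table 19.5 (which is normalized to the strong force), but it makes the same point: gravity dominates the cosmos only because it is long-range, always attractive, and bulk matter is electrically neutral.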
Electro-weak unification

Now it is known that despite the similarities in form of the gravitational and electromagnetic forces, gravity will be the last to yield to unification. The seemingly less plausible unification of the electromagnetic and weak forces turned out to be the first successful step towards the unified interaction. In 1979, the Nobel prize was awarded to Weinberg, Salam, and Glashow, who developed a unified theory of electromagnetic and weak interactions. According to that theory, the electromagnetic and weak forces converge to one electro-weak interaction at very high collision energies. The theory also predicted the existence of heavy particles, the W and Z, with masses around 80000 MeV and 90000 MeV, respectively. These particles were discovered in 1983, which brought experimental verification to the new theory.

Grand unification

The next step was to try to combine the electro-weak theory with the theory of the strong interactions (i.e. the quark theory) in a single theory. This work was called grand unification. Currently, physicists discuss versions of such a theory that predict the convergence of the three forces at awfully high energies, ≈10¹⁷ MeV. The quarks and leptons in this theory are unified into leptoquarks.

The grand unification is not as successful as the electro-weak theory. It has problems of mathematical consistency and contradicts at least one experiment. The matter is that it predicts the proton decay

    p → e⁺ + π⁰ ,

which conserves neither the baryon nor the lepton number, with a lifetime of ≈10²⁹ years. The measurements show, however, that the lifetime of the proton is at least 10³² years.

Theory of everything

Some people believe that the grand unification has an inherent principal flaw. According to them, one cannot unify the forces step by step (leaving gravity out), and the correct way is to combine all four forces in a so-called theory of everything. There are a few different approaches to unifying everything. One of them suggests that all fundamental particles (quarks and leptons) are just vibrating modes of string loops in multidimensional space. The electron is a string vibrating one way, the up-quark is a string vibrating another way, and so on. Another approach introduces a new level of fundamental particles, the preons, that could be constituent parts of quarks and leptons.
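The statement that the proton decay p → e⁺ + π⁰ conserves neither baryon nor lepton number can be made explicit with a little bookkeeping. The sketch below is our own illustration (the names are not from the text); charges and baryon numbers follow Table 19.4, each lepton carries lepton number +1, and antiparticles carry the opposite signs:

```python
from fractions import Fraction

# (charge, baryon number, lepton number) for the particles involved.
# The pi0 is a quark-antiquark pair, so its baryon number is zero.
NUMBERS = {
    "p":   (Fraction(1), Fraction(1), Fraction(0)),   # proton (uud)
    "e+":  (Fraction(1), Fraction(0), Fraction(-1)),  # positron (anti-lepton)
    "pi0": (Fraction(0), Fraction(0), Fraction(0)),   # neutral pion
}

def balance(initial, final):
    """Return the (charge, baryon, lepton) change from initial to final state."""
    totals = lambda side: [sum(NUMBERS[p][i] for p in side) for i in range(3)]
    return tuple(f - i for i, f in zip(totals(initial), totals(final)))

dq, db, dl = balance(["p"], ["e+", "pi0"])
print(dq, db, dl)   # charge is conserved; baryon and lepton numbers both drop by 1
```

Note that the baryon and lepton numbers change by the same amount, so the combination B − L is conserved, which is characteristic of the grand unified models that predict this decay.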
The quest goes on. Everyone agrees that constructing the theory of everything would in no way mean that biology, geology, chemistry, or even physics had been solved. The universe is so rich and complex that the discovery of the fundamental theory would not mean the end of science. The ultimate theory of everything would provide an unshakable pillar of coherence, forever assuring us that the universe is a comprehensible place.

19.10 Origin of the universe

Looking deep inside microscopic particles, physicists need to collide them with high kinetic energies. The smaller the parts of matter they want to observe, the higher the energy they need. This is why they build more and more powerful accelerators. However, the accelerators have natural limitations. Indeed, an accelerator cannot be bigger than the size of our planet. And even if we managed to build a circular accelerator around the whole earth (along the equator, for example), it would not be able to reach the energy of ≈10¹⁷ MeV at which the grand unification of fundamental interactions takes place.

So, what are we to do? How can we test the theory of everything? Is it possible at all? Yes, it is! The astronomically high values, like ≈10¹⁷ MeV, should be looked for in the cosmos, of course. Our journey towards extremely small objects eventually leads us to extremely large objects, like the whole universe.

The equations of Einstein's theory of relativity can describe the evolution of the universe. Physicists solved these equations back in time and found that the universe had a beginning. Approximately 15 billion years ago, it started from a point of zero size that exploded and rapidly expanded to its present tremendous scale. At the first instants after the explosion, the matter was at such incredibly high density and temperature that all particles had kinetic energies even higher than the unification energy ≈10¹⁷ MeV. This means that at the very beginning there was only one single force and no difference among fundamental particles. Everything was unified and "simple".

You may ask: "So what? How can such a distant past help us?". In many ways! The development of the universe was governed by the fundamental forces.
If our theories about them are correct, we should be able to reproduce (with calculations) how that development proceeded step by step. During the expansion, all the nuclei and atoms in the cosmos were created. The amounts of different nuclei are not the same. Why? Their relative abundances were determined by the processes in the first moments after the explosion. Thus, comparing what follows from the theories with the observed abundances of the chemical elements, we can judge the validity of our theories.

Nowadays, the most popular theory describing the history of the universe is the so-called Big-Bang model. The diagram given in Fig. 19.11 shows the sequence of events which led to the creation of matter in its present form. Nobody knows what was before the Big Bang and why it happened, but it is assumed that just after this enigmatic cataclysm, the universe was so dense and hot that all four forces of nature (strong, electromagnetic, weak, and gravitational) were indistinguishable and therefore gravity was governed by quantum laws, like the other three types of interactions. A complete theory of quantum gravity has not been constructed yet, and this very first "epoch" of our history remains as enigmatic as the Big Bang itself.

The ideal "democracy" (equality) among the forces lasted only a small fraction of a second. By the time t ≈ 10⁻⁴³ sec the universe had cooled down to ≈10³² K and gravity separated. The other three forces, however, remained unified into one universal interaction mediated by an extremely heavy particle, the so-called X boson, which could transform leptons into quarks and vice versa.

When at t ≈ 10⁻³⁵ sec most of the X bosons decayed, the quarks combined in trios and pairs to form nucleons, mesons, and other hadrons. The only symmetry which lasted up to 10⁻¹⁰ sec was that between the electromagnetic and weak forces, mediated by the Z and W particles. From the moment when this last symmetry was broken (≈10⁻¹⁰ sec) until the universe was about one second old, neutrinos played the most significant role by mediating the neutron-proton transmutations and therefore fixing their balance (the neutron to proton ratio).

Already a few seconds after the Big Bang, nuclear reactions started to occur. The protons
Figure 19.11: Schematic "history" of the universe. From the Big Bang (a single unified force, ≈10³² K), the gravitational force separates at ≈10⁻⁴³ sec, the strong force at ≈10⁻³⁵ sec, and the weak force at ≈10⁻¹⁰ sec; n+ν → p+e⁻ and p+ν̄ → n+e⁺ around 1 sec; p+n → ²H+γ and ²H+²H → ⁴He+γ around 10 sec; the pp-chain around 500 sec; today, after ≈15×10⁹ years, the temperature is 2.9 K.

and neutrons combined very rapidly to form deuterium and then helium. During the very first seconds there were too many very energetic photons around, which destroyed these nuclei immediately after their formation. Very soon, however, the continuing expansion of the universe changed the conditions in favour of these newly born nuclei. The density decreased, and the photons could not destroy them that fast anymore. During a short period of cosmic history, between about 10 and 500 seconds, the entire universe behaved as a giant nuclear fusion reactor burning hydrogen. This burning took place via a chain of nuclear reactions, which is called the pp-chain because the first reaction in this sequence is the proton-proton collision leading to the formation of a deuteron. Nowadays, the same pp-chain is the main source of energy in our sun and other stars.

But how do we know that the scenario was like this? In other words, how can we check the Big-Bang theory? Is it possible to prove something which happened 15 billion years ago and in such a short time? Yes, it is! The pp-chain fusion,

    pp-chain:
        p + p → ²H + e⁺ + νe         e⁻ + p + p → ²H + νe
        p + ²H → ³He + γ
        ³He + ³He → ⁴He + p + p
        ³He + ⁴He → ⁷Be + γ
        e⁻ + ⁷Be → ⁷Li + νe          p + ⁷Be → ⁸B + γ
        p + ⁷Li → ⁴He + ⁴He          ⁸B → ⁸Be* + e⁺ + νe
                                     ⁸Be* → ⁴He + ⁴He

is the key to such a proof.

Figure 19.12: Mass fractions ρ (relative to hydrogen ρp) of primordial deuterium and ⁴He versus the time elapsed since the Big Bang.

As soon as the nucleosynthesis started, the amounts of deuterons, helium isotopes, and other light nuclei started to increase. This is shown in Fig
. 19.12 for ²H and ⁴He. The temperature and the density, however, continued to decrease. After a few minutes the temperature dropped to such a level that the fusion practically stopped, because the kinetic energy of the nuclei was no longer sufficient to overcome the electric repulsion between them. Therefore the abundances of light elements in the cosmos were fixed (we call them the primordial abundances). Since then, they have remained practically unchanged, like a photograph of the past events, and astronomers can measure them. Comparing the measurements with the predictions of the theory, we can check whether our assumptions about the first seconds of the universe are correct or not.

Astronomy and the physics of the microworld come to the same point from different directions. The Big Bang theory is only one example of their common interest. Another example is related to the mass of the neutrino. When Pauli suggested this tiny particle to explain the nuclear β-decay, it was considered massless, like the photon. However, experiments conducted recently indicate that neutrinos may have small non-zero masses of just a few eV. In the world of elementary particles, this is an extremely small mass, but it makes a huge difference in the cosmos. The universe continues to expand despite the fact that the gravitational forces pull everything back together. The estimates show that the visible mass of all the galaxies is not sufficient to stop and reverse the expansion. The universe is filled with a tremendous number of neutrinos. Even at a few eV per neutrino, this amounts to a huge total mass, which is invisible but could reverse the expansion.

Thus, the cooperation of astronomers and particle physicists has led to significant advances in our understanding of the universe and its evolution. The quest goes on. As Albert Einstein famously remarked, "The most incomprehensible thing about this Universe is that it is comprehensible."
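As a closing illustration, each step of the pp-chain listed above can be checked for charge and baryon-number conservation with a few lines of bookkeeping. This sketch is our own (names and encoding are not from the text): each nucleus is encoded by its charge number Z and its mass (baryon) number A, and the light particles by their charge and zero baryon number:

```python
# (charge Z, baryon number A) for each participant in the pp-chain.
SPECIES = {
    "p": (1, 1), "2H": (1, 2), "3He": (2, 3), "4He": (2, 4),
    "7Be": (4, 7), "7Li": (3, 7), "8B": (5, 8), "8Be": (4, 8),
    "e-": (-1, 0), "e+": (1, 0), "nu": (0, 0), "gamma": (0, 0),
}

# The pp-chain reactions, written as (initial particles, final particles).
REACTIONS = [
    (["p", "p"],       ["2H", "e+", "nu"]),
    (["e-", "p", "p"], ["2H", "nu"]),
    (["p", "2H"],      ["3He", "gamma"]),
    (["3He", "3He"],   ["4He", "p", "p"]),
    (["3He", "4He"],   ["7Be", "gamma"]),
    (["e-", "7Be"],    ["7Li", "nu"]),
    (["p", "7Li"],     ["4He", "4He"]),
    (["p", "7Be"],     ["8B", "gamma"]),
    (["8B"],           ["8Be", "e+", "nu"]),   # 8Be here stands for 8Be*
    (["8Be"],          ["4He", "4He"]),
]

def conserved(initial, final):
    """True if total charge and total baryon number match on both sides."""
    tot = lambda side: tuple(map(sum, zip(*(SPECIES[s] for s in side))))
    return tot(initial) == tot(final)

assert all(conserved(i, f) for i, f in REACTIONS)
print("all", len(REACTIONS), "pp-chain steps conserve charge and baryon number")
```

(Lepton number works out as well, since every e⁺ or e⁻ in these reactions is accompanied by a neutrino on the appropriate side.)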
Appendix A

GNU Free Documentation License

Version 1.2, November 2002
Copyright © 2000, 2001, 2002 Free Software Foundation, Inc.
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

PREAMBLE

The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or non-commercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.

APPLICABILITY AND DEFINITIONS

This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".

Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG.
Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.

The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.

A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.

The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.

VERBATIM COPYING

You may copy and distribute the Document in any medium, either commercially or non-commercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section A.

You may also lend copies, under the same conditions stated above, and you may publicly display copies.
COPYING IN QUANTITY

If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.

MODIFICATIONS

You may copy and distribute a Modified Version of the Document under the conditions of sections A and A above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

1. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document).
You may use the same title as a previous version if the original publisher of that version gives permission.

2. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.

3. State on the Title Page the name of the publisher of the Modified Version, as the publisher.

4. Preserve all the copyright notices of the Document.

5. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.

6. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.

7. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.

8. Include an unaltered copy of this License.

9. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.

10. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.

11. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.

12. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.

13.
Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version. 14. Do not re-title any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section. 15. Preserve any Warranty Disclaimers. If the Modified
Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles. You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties; for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard. You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one. The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version. COMBINING DOCUMENTS You may combine the Document with other documents released under this License, under the terms defined in section A above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number
. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work. In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements". COLLECTIONS OF DOCUMENTS You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects. You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document. AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document. If the Cover Text requirement of section A is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form.
Otherwise they must appear on printed covers that bracket the whole aggregate. TRANSLATION Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section A. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail. If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section A) to Preserve its Title (section A) will typically require changing the actual title. TERMINATION You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. FUTURE REVISIONS OF THIS LICENSE The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/. Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. ADDENDUM: How to use this License for your documents To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page: Copyright © YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License". If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the "with...Texts." line with this: with
the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST. If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation. If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.

meet; with this point of intersection as a centre and with any line as a radius, describe a circle; let L be the length of the arc of this circle intercepted between the two lines and let R be the length of the radius of the circle; then by definition the ratio L/R is the numerical value of the angle. Thus, in the figure, AB and CD are the two lines lying in a plane; O is their point of intersection; OP is the radius of the circle, i.e. R; PQ is the intercepted arc, i.e. L; hence the angle has the value PQ/OP. [FIG. 1. — Numerical value of an angle.] This definition of the value for an angle is adopted because it is known from geometry that the ratio of the intercepted arc to the radius of its circle is the same for all values of the radius; and therefore it is not necessary to specify the latter. If N is the numerical value of the angle, this relation can be expressed N = L/R, or L = RN; in words, the length of the intercepted arc equals the product of the length of the corresponding radius and the value of the angle. A unit angle is called a "radian"; it is an angle such that the lengths of the intercepted arc and the radius are equal. In ordinary language angles are expressed in "degrees," "minutes," and "seconds"; there being 60 seconds in a minute, 60 minutes in a degree, and 90 degrees in a right angle.
INTRODUCTION
It is easy to find the relation between a radian and a degree; the arc intercepted by two lines making a right angle is one quarter of a circumference, i.e. πR/2, where π = 3.14159 approximately; and therefore π/2, or 1.5708, is the value of a right angle in terms of radians.
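In modern terms the radian-degree relation above can be sketched as follows (a brief aside in Python, not part of the original text):

```python
import math

# Degree-radian conversion, following N = L/R: a full circle's arc is
# 2*pi*R, so 360 degrees corresponds to 2*pi radians.

def deg_to_rad(deg):
    return deg * math.pi / 180.0

def rad_to_deg(rad):
    return rad * 180.0 / math.pi

print(round(deg_to_rad(90), 4))  # 1.5708, i.e. pi/2 radians in a right angle
print(round(rad_to_deg(1), 4))   # 57.2958 degrees per radian

# Arc length is radius times angle, L = R * N (with N in radians)
R = 2.0
print(R * deg_to_rad(90))        # quarter-circle arc of a circle of radius 2
```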
Hence, 1.5708 radians = 90°, or 1 radian = 57.2958° = 57° 17′ 45″. An angle has a sign as well as a numerical value. Thus, if OA be chosen as a fixed direction, there is a difference between the angles (AOB) and (AOC), although they are numerically equal. The latter corresponds to a rotation like that of the hands of a watch; the former to a contrary rotation. One angle, it is immaterial which, is called plus (+); the other, minus (−). When an angle becomes very small, the ratio of the value of its sine to its own value approaches unity. For, referring to the cut, the value of the angle between OA and OB is the ratio of the arc AB to the radius; the value of the sine of this angle is defined to be the ratio of CB to the radius; therefore, the ratio of the angle to its sine is the ratio of the arc AB to CB. [FIG. 3. — In the limit sin (AOB)/(AOB) approaches unity.] This equals the ratio of an arc to its chord; and, as the angle is made smaller and smaller, this ratio approaches unity, because in the limit an arc and its chord are equal. 3. Vectors and vector quantities. — A vector is a limited portion of a straight line in a definite direction. Thus the straight lines AB and CD are vectors; their lengths are the distances between A and B and between C and D; and their directions are indicated by the arrows. Three ideas are involved: the direction of the line, the sense of this direction (i.e. a distinction is made between a line drawn to the right and one drawn to the left, etc.), and the length of the line. The position of the line is immaterial; so two vectors of the same length and in the same direction, wherever placed, are equal. A vector, then, is a straight line traced by a point moving from one position to another, as is indicated by the use of an "arrow" in the line. [FIG. 4. — Two vectors.] The
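The small-angle statement above, that the ratio of an angle's sine to its own value approaches unity, is easy to check numerically (a modern aside, not part of the original text):

```python
import math

# The ratio sin(N)/N approaches 1 as the angle N (in radians) is made
# smaller and smaller, just as the arc-to-chord ratio does.
for N in (0.5, 0.1, 0.01, 0.001):
    print(N, math.sin(N) / N)
```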
process of "addition of vectors" is defined as follows: move one vector parallel to itself until one of its ends meets that end of the other which causes the arrows to indicate continuous advance from the free end of one vector to that of the other, and then join the former free end to the latter by a straight line. The "sum" is therefore a vector. Thus AB and CD may be added in two ways: (1) move CD parallel to itself until C coincides with B, — the arrows now indicate continuous advance from A to D, — and join these points by a straight line, thus forming the vector AD; (2) move CD parallel to itself until D coincides with A, — the arrows now indicate continuous advance from C to B, — and join these points by a straight line, thus forming the vector CB. [FIG. 5. — Three methods for the addition of the vectors AB and CD.] It is evident from geometry that these two vectors are identical, having the same length and the same direction and sense. (If a parallelogram is formed, having the two vectors as adjacent sides, both starting from the same point, the diagonal is their sum.) This process is called "geometrical addition"; and it can obviously be extended to three and more vectors. The simplest case is evidently that when the two vectors are in the same straight line: if they are in the same sense, the numerical value of the sum is the ordinary arithmetical sum; while, if they are in opposite senses, it is their arithmetical difference. If, then, two vectors are in the same line and in the same sense, both may be called positive; but if they have opposite senses, we should call one positive (+) and the other negative (−); and their geometrical sum equals in numerical value the algebraic sum in both cases, and has the direction of the two vectors. Its sense of direction in the former case is that of both vectors; in the latter, that of the greater. [FIG. 6. — Addition of vectors in the same line.]
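In modern component notation the head-to-tail construction above reduces to adding corresponding components; a minimal sketch (the numbers are illustrative assumptions, not from the text):

```python
# A vector's position is immaterial, so (x, y) components describe it fully.

def add(v, w):
    # geometrical addition: place w's tail at v's head and join the end points
    return (v[0] + w[0], v[1] + w[1])

AB = (3.0, 1.0)
CD = (1.0, 2.0)

# Either order of the construction gives the same sum (the parallelogram rule).
print(add(AB, CD))  # (4.0, 3.0)
print(add(CD, AB))  # (4.0, 3.0)

# Vectors in the same line: same sense gives the arithmetical sum,
# opposite senses the arithmetical difference.
print(add((5.0, 0.0), (2.0, 0.0)))   # (7.0, 0.0)
print(add((5.0, 0.0), (-2.0, 0.0)))  # (3.0, 0.0)
```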
Looking at this process of geometrical addition in a converse manner, it may be said that the vector AD is the geometrical sum of AB and CD,
where AB and CD are any two vectors such that, when added, their initial and final points are A and D: the vector AD is said to be "resolved into components." The case when the two components are at right angles is the most important. Let AD be any vector and OP any straight line; drop perpendiculars AA′ and DD′ upon OP; A′D′ is called the "projection of AD upon OP." Draw through A a line parallel to OP; it intersects DD′ in B. Then the vector AD equals the geometrical sum of the vectors AB and BD. [FIG. 7. — Resolution of the vector AD into components. FIG. 8. — Projection of the vector AD upon the line OP.] Let the lengths of AB, BD, and AD be b, v, and h; then, by geometry, h² = b² + v²; and, if N is the angle (BAD), by the definitions of trigonometry: v/h = sine N, b/h = cosine N, v/b = tangent N, or, as ordinarily written, v = h sin N, b = h cos N, v = b tan N. So the projection of AD on OP equals the product of AD and cosine N.* The vector AB is called "the component in the direction OP of the vector AD." (If AB and BD were not perpendicular, i.e. if (ABD) were not a right angle, the latter vector might be so resolved as to have a component in the direction OP; and in that case the former would not be the only component of the vector AD in this direction.) But, as just shown, AB = AD cos N. The general rule, then, for * In a similar manner if perpendicular lines are dropped upon a plane from the points forming the contour of any limited surface, the area inclosed by the feet of these lines is called the projection on this plane of the limited surface. If this surface is plane and has the area A, if the projected area is A₁, and if the angle between the two planes (i.e. between lines perpendicular to them) is N, it is seen that A₁ = A cos N. obtaining the numerical value of the component in a particular direction
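The resolution just described, with b = h cos N along the line and v = h sin N perpendicular to it, can be checked numerically; a small sketch with assumed values (not from the original):

```python
import math

def components(h, N):
    # resolve a vector of length h making angle N (radians) with the line OP
    b = h * math.cos(N)  # component along OP, the projection
    v = h * math.sin(N)  # component perpendicular to OP
    return b, v

h = 5.0
N = math.atan2(3.0, 4.0)  # the angle of a 3-4-5 right triangle
b, v = components(h, N)
print(b, v)              # 4.0 and 3.0, up to rounding
print(math.hypot(b, v))  # recovers h, since h^2 = b^2 + v^2
```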
a₂ = ra₁, a₃ = ra₂ = r²a₁, etc. It is seen that a₂ = √(a₁a₃). So, in general, the "geometric mean" of two similar quantities a and b is defined to be √(ab).
MECHANICS AND PROPERTIES OF MATTER
INTRODUCTION
We have recognized three so-called fundamental properties of matter: inertia, weight, and the one which includes the varied characteristics of size and shape. Each of these will now be considered in greater or less detail. As has been said before, the science of Mechanics is that branch of Physics which deals with the inertia of matter. It is often divided into two parts, "Kinematics" and "Kinetics": the former is the science of motion considered apart from matter; that is, it treats of possible motion; the latter is strictly the science of the inertia of matter. If there is no change of any kind in the motion, what we call "rest" being a special case of this, the science is called "Statics"; while if the motion is changing, the science is called "Dynamics." These two sciences are branches, then, of Kinetics. Statics can, however, be considered as a special limiting case of Dynamics; and this plan is adopted in the present book: so Mechanics will be treated under the two divisions, Kinematics and Dynamics. In the former, the question as to the different possible kinds of motion will be discussed; in the latter, the physical conditions under which these types of motion occur. Kinematics is a geometrical science; dynamics, a physical one. Weight will be discussed under the more general head of Gravitation. This will be followed by several chapters on the properties of solids, liquids, and gases.
CHAPTER I
KINEMATICS
General Description. — Kinematics has been defined as the science of motion apart from matter; that is, it is concerned with the study of the possible motions of the geometrical quantities: a point, a plane figure, and a solid figure.
For the sake of illustration, many material bodies will be referred to; but all the statements and theorems are meant to apply to figures, not
to bodies, unless the contrary is expressly noted. If the motion of any actual body is observed (for instance, a stick thrown at random in the air, a moving baseball, the wheel of a moving wagon), it is seen that there are two types of motion involved: the object moves as a whole, and it also turns. These motions are independent of each other; one may occur without the other. In the up and down motion of an elevator, in the motion of a railway car on a straight track, etc., there is no turning; in the motion of the fly-wheel of a stationary engine, in the opening or closing of a door, etc., we may say that the motion is one of turning only. The name "translation" is given to that kind of motion during which all lines in the figure remain parallel to their original positions; further, all points of the figure move through paths that are geometrically identical. Thus, to describe completely any case of translation, all that is necessary is to describe the motion of any one point of the moving figure. The name "rotation" is given to that kind of motion during which each point of the figure moves in a circle; e.g. a door as it opens or closes. The planes of these circles are parallel, and their centres all lie on a straight line which is called the "axis." If a plane section, perpendicular to the axis, is taken through the rotating figure, all the lines of the figure in this plane have identically the same angular motions: otherwise the figure would break up into parts. To describe, therefore, motion of rotation at any instant, we must know two things: the position and motion of the axis and the angular motion of any line fixed in the figure with reference to any line fixed in space, provided the lines lie in the same plane perpendicular to the axis. Thus, consider the rotation of a figure like that of a grindstone; its axis is the line of the axle. [FIG. 12. — Rotation of a figure.]
In the cut, which represents a cross section by a plane perpendicular to the axis, let PQ be a line fixed in the moving figure and AB be a line fixed in space; the position and motion of the figure at any instant are given by
a knowledge of the angle between these lines and of its changes in value. It should be noted that this is a special case of rotation, because the axis does not change position, as it does in general. To describe the most complicated motion, then, one must consider it resolved into two parts, a translation and a rotation, and must discuss each separately.
Translation
In translation, as has been already explained, it is necessary to describe the motion of a point only. The simplest case of this is motion along a straight line; but the more general case, that of motion along a curved line, is not difficult. To describe this motion the first thing that it is necessary to know is the position of the point at any instant with reference to some fixed figure. Linear Displacement. — Let the path of the point with reference to some fixed figure be represented by the curve in the cut. Let O be its position at any instant, and P that at a later time. The vector OP is called the "linear displacement" of the point with reference to the fixed figure during this interval of time. This same vector might be the displacement for any motion that passed through O and P; or, in other words, a point may pass from O to P by various paths. The displacement is then a vector quantity and may be resolved into components in as many ways as we choose; conversely, two or more displacements may be compounded by geometrical addition. [FIG. 13. — The vector OP is the linear displacement of P with reference to O.] The importance of mentioning the "fixed figure of reference" may be seen from an illustration: if a stone is dropped from the top of the mast of a moving steamer, it will fall at its foot; the displacement with reference to the steamer is a vertical line, while with reference to the earth it is an oblique one, being the geometrical sum of the vertical line and the displacement of the steamer. Linear Velocity.
— If the interval of time taken for the displacement is extremely small, P is very close to O; and, in the limit, the displacement OP coincides with the actual path along the curve, and has, in fact, the direction of the tangent to the curve at
the point O. If, as the displacement becomes very small, its length is represented by Δx, and the corresponding interval of time by Δt, the ratio Δx/Δt in the limit is called the "linear velocity" at the point O with reference to the fixed figure; that is, it is the "rate of change" of the displacement. It is evidently a vector quantity, for it is defined by its numerical value, which is that of Δx/Δt in the limit, and by the direction and "sense" of the displacement in the limit: viz., its direction is that of the tangent at O drawn from O to P, when P is close to O. The numerical value of the linear velocity is called the "linear speed"; so that the velocity at any point is characterized by the value of the speed and by the direction and "sense" of the tangent to the path at that point. If the motion is uniform along a straight line, that is, if the velocity is constant (both in amount and in direction), the speed is equal numerically to the distance traversed in a unit of time; and if the motion is not uniform, the speed at any instant is the distance which the point would travel during the next unit of time if the motion were to remain uniform. The unit of linear speed on the C. G. S. system is the speed of "1 cm. in 1 sec."; and the unit of linear velocity in a definite direction is the unit of speed in that direction. (This C. G. S. unit speed has not received a name; in fact, the only system of units in which there is a unit speed which has received a name is that based on the nautical mile — 6080 ft. — as the unit length, and the hour as the unit time: the speed "one nautical mile in one hour" is called a "Knot." The expression "16 knots per hour" is therefore incorrect; for a knot is a speed, not a length.) Since the linear velocity is a vector quantity, it can be resolved into components; and, conversely, two or more linear velocities may be compounded by geometrical addition.
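The compounding of velocities just stated reduces, in components, to the same geometrical addition used for displacements; a brief sketch with assumed velocities (a modern aside, not from the text):

```python
import math

def add(v, w):
    # geometrical addition of two velocities, in (x, y) components
    return (v[0] + w[0], v[1] + w[1])

# A man walking northeast at s = 8 cm/s: his northward component is
# s*cos(45 deg), or more properly s*cos(pi/4).
s = 8.0
walk_ne = (s * math.cos(math.pi / 4), s * math.sin(math.pi / 4))
print(round(walk_ne[1], 4))  # 5.6569

# A boat rowed across a stream: the velocity relative to the earth is the
# geometrical sum of the stream's velocity and that due to the oars.
stream = (3.0, 0.0)
oars = (0.0, 4.0)
print(add(stream, oars))     # (3.0, 4.0), a speed of 5.0 cm/s overall
```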
These statements are illustrated by many familiar facts: if a man walks across a moving railway carriage, his velocity with reference to the ground is compounded of that of the train and of that which he would have
if the train were at rest; if a boat is rowed across a river, the actual velocity with reference to the earth is the geometrical sum of that of the water of the river and of that due to the oars; the velocity of a raindrop with reference to the window pane of a moving carriage as it strikes it is the geometrical sum of a velocity equal but opposite to that of the carriage and of its own downward velocity at that instant; if a man walks in a northeast direction with a speed of 8 cm. per second, his velocity may be represented by a vector PQ whose length is proportional to 8 and whose direction is northeast; and his velocity in a northern direction is given by the component, PR, in the direction north and south, whose numerical value is 8 cos 45°, or, more properly, 8 cos (π/4). Similarly, one velocity may be subtracted from another, the difference being also a velocity. We will consider two illustrations: a body falling freely toward the earth and an extremely small particle moving in a circle with a constant speed. In the first case, the velocity at any instant is represented by a vertical vector AB and at some later instant by another vertical vector CD of greater length, because as the body falls its speed increases. [FIG. 15. — Rectilinear motion: AB and CD are the velocities at different instants; BD is their difference.] Call the length of AB s₁ and of CD s₂. The change in velocity is the difference between these vectors; that is, it is a vertical vector BD of length equal to s₂ − s₁. In the second case, in which the particle is moving in a circle, let the constant speed be s; and let the direction of motion be that indicated by the arrows. When the particle is at the point A, its velocity has the direction of the tangent and the numerical value s; it can therefore be represented by the vector PQ which is parallel to the tangent at A, and has a length proportional to s.
Similarly, when the particle is at the point B, its velocity can be represented by the vector PS which is parallel to the tangent at B and whose length is equal to that of PQ (since the speed does not alter). The change in the velocity in the time taken for the particle to move from A to B is the
difference between the vectors PS and PQ; that is, it is the vector QS. [FIG. 16. — Uniform motion in a circle: PQ and PS are the velocities of the point at A and B.] Linear Acceleration. — To return to the original problem, that of describing the general case of the motion of a point in a curved path, we have defined the displacement and the velocity at any point, the latter being the rate of change of the former. The velocity may change, however, both in direction and in speed; and its rate of change at any instant is called the "linear acceleration" at that instant; that is, if Δv is the change in the velocity during the time Δt, the limiting value of Δv/Δt is the acceleration. Moreover, since the change in the velocity, Δv, is a vector quantity, so is also the acceleration. Its numerical value is that of Δv/Δt in the limit; and its direction is that of Δv in the limit. We may consider separately two cases; in one of which the direction remains constant but the speed changes, while in the other the speed remains constant but the direction changes. As an illustration of the former we may take the motion of a falling body; and of the latter, the uniform motion of a particle in a circle. These two cases have already been partially discussed. In the former motion let the change in speed from s₁ to s₂ take place in the interval of time T₂ − T₁; then the acceleration has the numerical value (s₂ − s₁)/(T₂ − T₁) when the interval T₂ − T₁ is taken infinitely small, and its direction is vertically down. If the acceleration is constant, it is, therefore, the change in the speed in a unit of time.
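The falling-body case can be checked with numbers; a sketch assuming fall from rest with g = 980 cm/s² in C.G.S. units (the values are illustrative assumptions, not from the text):

```python
g = 980.0  # assumed acceleration of free fall, in cm/s^2

def speed(t):
    # speed of a body falling from rest after t seconds
    return g * t

T1, T2 = 1.0, 1.5
s1, s2 = speed(T1), speed(T2)
accel = (s2 - s1) / (T2 - T1)  # change in speed per unit time
print(accel)  # 980.0, directed vertically down
```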
In the latter case, that of uniform motion in a circle, let the interval of time during which the particle moves from A to B, and the velocity accordingly changes from PQ to PS, be taken extremely small, so that the length of the arc AB becomes minute also; then, if this interval of time is called Δt, the acceleration at the point A is the limiting value of (vector QS)/Δt. Call the lengths of the various straight lines in the diagram by the letters marking their terminal points: by geometry the triangles (SPQ) and (BOA) are
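The similar-triangles argument begun here yields the familiar limit: the numerical value of (vector QS)/Δt approaches s²/R. That limit can be verified numerically (assumed values, a modern aside):

```python
import math

s = 10.0  # constant speed of the particle
R = 2.0   # radius of the circle

def velocity(theta):
    # velocity vector at angular position theta, tangent to the circle
    return (-s * math.sin(theta), s * math.cos(theta))

for dt in (0.1, 0.01, 0.001):
    dtheta = (s / R) * dt                      # angle swept in time dt
    vx0, vy0 = velocity(0.0)
    vx1, vy1 = velocity(dtheta)
    change = math.hypot(vx1 - vx0, vy1 - vy0)  # length of the vector QS
    print(dt, change / dt)                     # approaches s*s/R = 50.0
```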