CHAPTER 8. TIME-VARYING FIELDS

• The induced potential is proportional to the rate of change of B. If B is constant in time, then there is no induction.

Finally, the current in the loop is simply

$$I = \frac{V_T}{R} \tag{8.9}$$

Again, electromagnetic induction induces potential, and the current flows only in response to the induced potential as determined by Ohm's Law. In particular, if the resistor is removed, then $R \to \infty$ and $I \to 0$, but $V_T$ is unchanged. One final comment is that even though the current $I$ is not a direct result of electromagnetic induction, we can use $I$ as a check of the result using Lenz's Law (Section 8.2). We'll demonstrate this in the example below.

Example 8.2. Induction in a motionless circular loop by a linearly-increasing magnetic field.

Let the loop be planar in the $z = 0$ plane and circular with radius $a = 10$ cm. Let the magnetic field be $\hat{z}B(t)$ where

$$B(t) = \begin{cases} 0, & t < 0 \\ B_0 t/t_0, & 0 \le t \le t_0 \\ B_0, & t > t_0 \end{cases} \tag{8.10}$$

i.e., $B(t)$ begins at zero and increases linearly to $B_0$ at time $t_0$, after which it remains constant at $B_0$. Let $B_0 = 0.2$ T, $t_0 = 3$ s, and let the loop be closed by a resistor $R = 1$ kΩ. What current $I$ flows in the loop?

Solution. Adopting the sign conventions of Figure 8.5, we first note that $\hat{n} = +\hat{z}$; this is determined by the right-hand rule with respect to the indicated polarity of $V_T$. Thus, Equation 8.8 becomes

$$V_T = -\left(\hat{b}\cdot\hat{z}\right) A \frac{\partial}{\partial t}B(t)$$

Note $\left(\hat{b}\cdot\hat{z}\right)A = A$ since $\hat{b} = \hat{z}$; i.e., because the plane of the loop is perpendicular to the magnetic field lines. Since the loop is circular, $A = \pi a^2$. Also

$$\frac{\partial}{\partial t}B(t) = \begin{cases} 0, & t < 0 \\ B_0/t_0, & 0 \le t \le t_0 \\ 0, & t > t_0 \end{cases}$$

Putting this all together:

$$V_T = -\pi a^2 \frac{B_0}{t_0} \approx -2.09~\text{mV}, \quad 0 \le t \le t_0$$

and $V_T = 0$ before and after this time interval, since $B$ is constant during those times. Subsequently the induced current is

$$I = \frac{V_T}{R} \approx -2.09~\mu\text{A}, \quad 0 \le t \le t_0$$

and $I = 0$ before and after this time interval. We have found that the induced current is a constant clockwise flow that exists only while $B$ is increasing.
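The arithmetic in Example 8.2 is easy to check numerically. The following Python snippet is a quick sanity check (not part of the text) that evaluates $V_T = -\pi a^2 B_0/t_0$ and $I = V_T/R$ for the stated values:

```python
import math

# Parameters from Example 8.2
a = 0.10    # loop radius (m)
B0 = 0.2    # final flux density (T)
t0 = 3.0    # ramp duration (s)
R = 1e3     # load resistance (ohms)

A = math.pi * a**2          # loop area (m^2)
dBdt = B0 / t0              # dB/dt during the ramp (T/s)
VT = -A * dBdt              # induced potential (V)
I = VT / R                  # induced current (A)

print(VT * 1e3, I * 1e6)    # approximately -2.09 mV and -2.09 uA
```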
Finally, let's see if the result is consistent with Lenz's Law. The current induced while $B$ is changing gives rise to an induced magnetic field $\mathbf{B}_{ind}$. From the right-hand rule that relates the direction of $I$ to the direction of $\mathbf{B}_{ind}$ (Section 7.5), the direction of $\mathbf{B}_{ind}$ is generally $-\hat{z}$ inside the loop. In other words, the magnetic field associated with the induced current opposes the increasing impressed magnetic field that induced the current, in accordance with Lenz's Law.

Example 8.3. Induction in a motionless circular loop by a sinusoidally-varying magnetic field.

Let us repeat the previous example, but now with

$$B(t) = B_0 \sin 2\pi f_0 t$$

with $f_0 = 1$ kHz.

Solution. Now

$$\frac{\partial}{\partial t}B(t) = 2\pi f_0 B_0 \cos 2\pi f_0 t$$
So

$$V_T = -2\pi^2 f_0 a^2 B_0 \cos 2\pi f_0 t$$

Subsequently

$$I = \frac{V_T}{R} = -\frac{2\pi^2 f_0 a^2 B_0}{R}\cos 2\pi f_0 t$$

Substituting values, we have:

$$I = -(39.5~\text{mA})\cos\left[(6.28~\text{krad/s})\,t\right]$$

It should be no surprise that $V_T$ and $I$ vary sinusoidally, since the source ($B$) varies sinusoidally. A bit of useful trivia here is that $V_T$ and $I$ are 90° out of phase with the source. It is also worth noting what happens when $B(t) = 0$. This occurs twice per period, at $t = n/(2f_0)$ where $n$ is any integer, including $t = 0$. At these times $B(t)$ is zero, but $V_T$ and hence $I$ are decidedly non-zero; in fact, they are at their maximum magnitude. Again, it is the change in $B$ that induces voltage and subsequently current, not $B$ itself.

8.5 Transformers: Principle of Operation

[m0031]

A transformer is a device that connects two electrical circuits through a shared magnetic field. Transformers are used in impedance transformation, voltage level conversion, circuit isolation, conversion between single-ended and differential signal modes, and other applications.² The underlying electromagnetic principle is Faraday's Law (Section 8.3) – in particular, transformer emf.

The essential features of a transformer can be derived from the simple experiment shown in Figures 8.6 and 8.7. In this experiment, two coils are arranged along a common axis. The winding pitch is small, so that all magnetic field lines pass through the length of the coil, and no lines pass between the windings. To further contain the magnetic field, we assume both coils are wound on the same core, consisting of some material exhibiting high permeability. The upper coil has $N_1$ turns and the lower coil has $N_2$ turns. In Part I of this experiment (Figure 8.6), the upper coil is connected to a sinusoidally-varying voltage source $V_1^{(1)}$, in which the subscript refers to the coil and the superscript refers to "Part I" of this experiment.
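As a quick numerical cross-check of Example 8.3 (not part of the text), the peak current amplitude $2\pi^2 f_0 a^2 B_0/R$ and the angular frequency $2\pi f_0$ can be evaluated directly:

```python
import math

f0 = 1e3   # source frequency (Hz)
a = 0.10   # loop radius (m)
B0 = 0.2   # peak flux density (T)
R = 1e3    # load resistance (ohms)

# Peak of I(t) = -(2 pi^2 f0 a^2 B0 / R) cos(2 pi f0 t)
I_peak = 2 * math.pi**2 * f0 * a**2 * B0 / R
omega = 2 * math.pi * f0

print(I_peak * 1e3, omega / 1e3)  # approximately 39.5 mA and 6.28 krad/s
```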
The voltage source creates a current in the coil, which in turn creates a time-varying magnetic field $\mathbf{B}$ in the core. The lower coil has $N_2$ turns wound in the opposite direction and is open-circuited. Given the closely-spaced windings and use of a high-permeability core, we assume that the magnetic field within the lower coil is equal to $\mathbf{B}$ generated in the upper coil. The potential induced in the lower coil is $V_2^{(1)}$ with reference polarity indicated in the figure. From Faraday's Law we have

$$V_2^{(1)} = -N_2 \frac{\partial}{\partial t}\Phi_2 \tag{8.11}$$

where $\Phi_2$ is the flux through a single turn of the lower coil.

²See "Additional Reading" at the end of this section for more on these applications.
Figure 8.6: Part I of an experiment demonstrating the linking of electric circuits using a transformer.

Figure 8.7: Part II of an experiment demonstrating the linking of electric circuits using a transformer.

Thus:

$$V_2^{(1)} = -N_2 \frac{\partial}{\partial t}\int_S \mathbf{B}\cdot(-\hat{z}\,ds) \tag{8.12}$$

Note that the direction of $d\mathbf{s} = -\hat{z}\,ds$ is determined by the polarity we have chosen for $V_2^{(1)}$.

In Part II of the experiment (Figure 8.7), we make changes as follows. We apply a voltage $V_2^{(2)}$ to the lower coil and open-circuit the upper coil. Further, we adjust $V_2^{(2)}$ such that the induced magnetic flux density is again $\mathbf{B}$ – that is, equal to the field in Part I of the experiment. Now we have

$$V_1^{(2)} = -N_1 \frac{\partial}{\partial t}\Phi_1 \tag{8.13}$$

which is

$$V_1^{(2)} = -N_1 \frac{\partial}{\partial t}\int_S \mathbf{B}\cdot(+\hat{z}\,ds) \tag{8.14}$$

For reasons that will become apparent in a moment, let's shift the leading minus sign into the integral. We then have

$$V_1^{(2)} = +N_1 \frac{\partial}{\partial t}\int_S \mathbf{B}\cdot(-\hat{z}\,ds) \tag{8.15}$$

Comparing this to Equations 8.11 and 8.12, we see that we may rewrite this in terms of the flux in the lower coil in Part I of the experiment:

$$V_1^{(2)} = +N_1 \frac{\partial}{\partial t}\Phi_2 \tag{8.16}$$

In fact, we can express this in terms of the potential in Part I of the experiment:

$$V_1^{(2)} = -\frac{N_1}{N_2}\left(-N_2 \frac{\partial}{\partial t}\Phi_2\right) = -\frac{N_1}{N_2} V_2^{(1)} \tag{8.17}$$

We have found that the potential in the upper coil in Part II is related in a simple way to the potential in the lower coil in Part I of the experiment. If we had done Part II first, we would have obtained the same result but with the superscripts swapped. Therefore, it must be generally true – regardless of the arrangement of terminations – that

$$V_1 = -\frac{N_1}{N_2} V_2 \tag{8.18}$$
This expression should be familiar from elementary circuit theory – except possibly for the minus sign. The minus sign is a consequence of the fact that the coils are wound in opposite directions. We can make the above expression a little more general as follows:

$$\frac{V_1}{V_2} = p\frac{N_1}{N_2} \tag{8.19}$$

where $p$ is defined to be $+1$ when the coils are wound in the same direction and $-1$ when the coils are wound in opposite directions. (It is an excellent exercise to confirm that this is true by repeating the above analysis with the winding direction changed for either the upper or lower coil, for which $p$ will then turn out to be $+1$.) This is the "transformer law" of basic electric circuit theory, from which all other characteristics of transformers as two-port circuit devices can be obtained (see Section 8.6 for follow-up on that). Summarizing:

The ratio of coil voltages in an ideal transformer is equal to the ratio of turns with sign determined by the relative directions of the windings, per Equation 8.19.

A more familiar transformer design is shown in Figure 8.8 – coils wound on a toroidal core as opposed to a cylindrical core. Why do this? This arrangement confines the magnetic field linking the two coils to the core, as opposed to allowing field lines to extend beyond the device. This confinement is important in order to prevent fields originating outside the transformer from interfering with the magnetic field linking the coils, which would lead to electromagnetic interference (EMI) and electromagnetic compatibility (EMC) problems. The principle of operation is in all other respects the same.

Additional Reading:

• "Transformer" on Wikipedia.
• "Balun" on Wikipedia.

Figure 8.8: Transformer implemented as coils sharing a toroidal core (©BillC, modified, CC BY SA 3.0). Here $p = +1$.
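To make Equation 8.19 concrete, here is a small helper (hypothetical, not from the text; the numeric values are made up for illustration) that applies the transformer law for a given winding sense $p$:

```python
def secondary_voltage(V1, N1, N2, p=+1):
    """Return V2 given V1, turns N1 and N2, and winding sense p (+1 or -1),
    per the transformer law V1/V2 = p * N1/N2."""
    assert p in (+1, -1)
    return V1 * N2 / (p * N1)

# A 10:1 step-down transformer with oppositely-wound coils (p = -1):
V2 = secondary_voltage(120.0, N1=1000, N2=100, p=-1)
print(V2)  # -12.0: stepped down by 10x and inverted by the winding sense
```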
8.6 Transformers as Two-Port Devices

[m0032]

Section 8.5 explains the principle of operation of the ideal transformer. The relationship governing the terminal voltages $V_1$ and $V_2$ was found to be

$$\frac{V_1}{V_2} = p\frac{N_1}{N_2} \tag{8.20}$$

where $N_1$ and $N_2$ are the number of turns in the associated coils and $p$ is either $+1$ or $-1$ depending on the relative orientation of the windings; i.e., whether the reference direction of the associated fluxes is the same or opposite, respectively.

We shall now consider ratios of current and impedance in ideal transformers, using the two-port model shown in Figure 8.9. By virtue of the reference current directions and polarities chosen in this figure, the power delivered by the source $V_1$ is $V_1 I_1$, and the power absorbed by the load $Z_2$ is $-V_2 I_2$. Assuming the transformer windings have no resistance, and assuming the magnetic flux is perfectly contained within the core, the power absorbed by the load must equal the power provided by the source; i.e.,
186 CHAPTER 8. TIME-VARYING FIELDS V1I1 = −V2I2. Thus, we have3 I1 I2 = −V2 V1 = −pN2 N1 (8.21) We can develop an impedance relationship for ideal transformers as follows. Let Z1 be the input impedance of the transformer; that is, the impedance looking into port 1 from the source. Thus, we have Z1 ≜V1 I1 = +p (N1/N2) V2 −p (N2/N1) I2 = − N1 N2 2 V2 I2 (8.22) In Figure 8.9, Z2 is the the output impedance of port 2; that is, the impedance looking out port 2 into the load. Therefore, Z2 = −V2/I2 (once again the minus sign is a result of our choice of sign conventions in Figure 8.9). Substitution of this result into the above equation yields Z1 = N1 N2 2 Z2 (8.23) and therefore Z1 Z2 = N1 N2 2 (8.24) 3 The minus signs in this equation are a result of the reference polarity and directions indicated in Figure 8.9. These are more-or- less standard in electrical two-port theory (see “Additional Reading” at the end of this section), but are certainly not the only reason- able choice. If you see these expressions without the minus signs, it probably means that a different combination of reference direc- tions/polarities is in effect. I 2 V + − I1 V1 + − +_ Z2 Z1 Figure 8.9: The transformer as a two-port circuit de- vice. Thus, we have demonstrated that a transformer scales impedance in proportion to the square of the turns ratio N1/N2. Remarkably, the impedance transformation depends only on the turns ratio, and is independent of the relative direction of the windings (p). The relationships developed above should be viewed as AC expressions, and are not normally valid at DC. This is because transformers exhibit a fundamental limitation in their low-frequency performance. To see this, first recall Faraday’s Law: V = −N ∂ ∂tΦ (8.25) If the magnetic flux Φ isn’t time-varying, then there is no induced electric potential, and subsequently no linking of the signals associated with the coils. 
At very low but non-zero frequencies, we encounter another problem that gets in the way – magnetic saturation. To see this, note that we can obtain an expression for $\Phi$ from Faraday's Law by integrating with respect to time, yielding

$$\Phi(t) = -\frac{1}{N}\int_{t_0}^{t} V(\tau)\,d\tau + \Phi(t_0) \tag{8.26}$$

where $t_0$ is some earlier time at which we know the value of $\Phi(t_0)$. Let us assume that $V(t)$ is sinusoidally-varying. Then the peak value of $\Phi$ after $t = t_0$ depends on the frequency of $V(t)$. If the frequency of $V(t)$ is very low, then $\Phi$ can become very large. Since the associated cross-sectional areas of the coils are presumably constant, this means that the magnetic field becomes very large. The problem with that is that most practical high-permeability materials suitable for use as transformer cores exhibit magnetic saturation; that is, the rate at which the magnetic field is able to increase decreases with increasing magnetic field magnitude (see Section 7.16). The result of all this is that a transformer may work fine at (say) 1 MHz, but at (say) 1 Hz the transformer may exhibit an apparent loss associated with this saturation. Thus, practical transformers exhibit highpass frequency response.

It should be noted that the highpass behavior of practical transformers can be useful. For example, a transformer can be used to isolate an undesired DC offset and/or low-frequency noise in the circuit
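The low-frequency limitation can be seen numerically from Equation 8.26. For $V(t) = V_0\cos 2\pi f t$, integrating gives a peak flux of $V_0/(2\pi f N)$, which grows without bound as $f \to 0$. A minimal sketch (all values hypothetical):

```python
import math

def peak_flux(V0, f, N):
    """Peak magnitude of Phi(t) for V(t) = V0*cos(2*pi*f*t), per Equation 8.26:
    integrating gives V0 / (2*pi*f*N). Diverges as f -> 0, driving saturation."""
    return V0 / (2 * math.pi * f * N)

V0, N = 10.0, 100
for f in (1e6, 1e3, 1.0):
    print(f, peak_flux(V0, f, N))
# The peak flux at 1 Hz is a million times larger than at 1 MHz, which is why
# a core that is fine at 1 MHz may saturate at 1 Hz.
```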
Figure 8.10: Transformers used to convert a single-ended ("unbalanced") signal to a differential (balanced) signal, and back (©SpinningSpark, CC BY SA 3.0, modified).

attached to one coil from the circuit attached to the other coil. The DC-isolating behavior of a transformer also allows the transformer to be used as a balun, as illustrated in Figure 8.10. A balun is a two-port device that transforms a single-ended ("unbalanced") signal – that is, one having an explicit connection to a datum (e.g., ground) – into a differential ("balanced") signal, for which there is no explicit connection to a datum. Differential signals have many benefits in circuit design, whereas inputs and outputs to devices must often be in single-ended form. Thus, a common use of transformers is to effect the conversion between single-ended and differential circuits. Although a transformer is certainly not the only device that can be used as a balun, it has one very big advantage, namely bandwidth.

Additional Reading:

• "Transformer" on Wikipedia.
• "Two-port network" on Wikipedia.
• "Saturation (magnetic)" on Wikipedia.
• "Balun" on Wikipedia.
• S.W. Ellingson, "Differential Circuits" (Sec. 8.7) in Radio Systems Engineering, Cambridge Univ. Press, 2016.

8.7 The Electric Generator

[m0030]

A generator is a device that transforms mechanical energy into electrical energy, typically by electromagnetic induction via Faraday's Law (Section 8.3). For example, a generator might consist of a gasoline engine that turns a crankshaft to which is attached a system of coils and/or magnets. This rotation changes the relative orientations of the coils with respect to the magnetic field in a time-varying manner, resulting in a time-varying magnetic flux and subsequently induced electric potential.
In this case, the induced potential is said to be a form of motional emf, as it is due entirely to changes in geometry as opposed to changes in the magnitude of the magnetic field. Coal- and steam-fired generators, hydroelectric generators, wind turbines, and other energy generation devices operate using essentially this principle.

Figure 8.11 shows a rudimentary generator, which serves to illustrate the relevant points. This generator consists of a planar loop that rotates around the $z$ axis; therefore, the rotation can be parameterized in $\phi$. In this case, the direction of rotation is specified to be in the $+\hat{\phi}$ direction. The frequency of rotation is $f_0$; that is, the time required for the loop to make one complete revolution is $1/f_0$.

We assume a time-invariant and spatially-uniform magnetic flux density $\mathbf{B} = \hat{b}B_0$ where $\hat{b}$ and $B_0$ are both constants. For illustration purposes, the loop is indicated to be circular. However, because the magnetic field is time-invariant and spatially-uniform, the specific shape of the loop is not important, as we shall see in a moment. Only the area of the loop will matter.

The induced potential is indicated as $V_T$ with reference polarity as indicated in the figure. This potential is given by Faraday's Law:

$$V_T = -\frac{\partial}{\partial t}\Phi \tag{8.27}$$

Here $\Phi$ is the magnetic flux associated with an open surface $S$ bounded by the loop:

$$\Phi = \int_S \mathbf{B}\cdot d\mathbf{s} \tag{8.28}$$

As usual, $S$ can be any surface that intersects all
Figure 8.11: A rudimentary single-loop generator, shown at time $t = 0$.

magnetic field lines passing through the loop, and also as usual, the simplest choice is simply the planar area bounded by the loop. The differential surface element $d\mathbf{s}$ is $\hat{n}\,ds$, where $\hat{n}$ is determined by the reference polarity of $V_T$ according to the "right hand rule" convention from Stokes' Theorem. Making substitutions, we have

$$\Phi = \int_S \hat{b}B_0 \cdot (\hat{n}\,ds) = \left[\hat{b}\cdot\hat{n}\right] B_0 \int_S ds = \left[\hat{b}\cdot\hat{n}\right] A B_0 \tag{8.29}$$

where $A$ is the area of the loop.

To make headway, an expression for $\hat{n}$ is needed. The principal difficulty here is that $\hat{n}$ is rotating with the loop, so it is time-varying. To sort this out, first consider the situation at time $t = 0$, which is the moment illustrated in Figure 8.11. Beginning at the "−" terminal, we point the thumb of our right hand in the direction that leads to the "+" terminal by traversing the loop; $\hat{n}$ is then the direction perpendicular to the plane of the loop in which the fingers of our right hand pass through the loop. We see that at $t = 0$, $\hat{n} = +\hat{y}$. A bit later, the loop will have rotated by one-fourth of a complete rotation, and at that time $\hat{n} = -\hat{x}$. This occurs at $t = 1/(4f_0)$. Later still, the loop will have rotated by one-half of a complete rotation, and then $\hat{n} = -\hat{y}$. This occurs at $t = 1/(2f_0)$. It is apparent that

$$\hat{n}(t) = -\hat{x}\sin 2\pi f_0 t + \hat{y}\cos 2\pi f_0 t \tag{8.30}$$

as can be confirmed by checking the three special cases identified above. Now applying Faraday's Law, we find

$$V_T = -\frac{\partial}{\partial t}\Phi = -\frac{\partial}{\partial t}\left[\hat{b}\cdot\hat{n}(t)\right] A B_0 = -\left[\hat{b}\cdot\frac{\partial}{\partial t}\hat{n}(t)\right] A B_0 \tag{8.31}$$

For notational convenience we make the following definition:

$$\hat{n}'(t) \triangleq -\frac{1}{2\pi f_0}\frac{\partial}{\partial t}\hat{n}(t) \tag{8.32}$$

which is simply the time derivative of $\hat{n}$ divided by $2\pi f_0$ so as to retain a unit vector. The reason for including a change of sign will become apparent in a moment.
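Equation 8.30 can be checked against the three special cases identified in the text. This snippet is a verification aid (not from the text):

```python
import math

def n_hat(t, f0):
    """Unit normal of the rotating loop per Equation 8.30:
    n(t) = -x*sin(2*pi*f0*t) + y*cos(2*pi*f0*t); returns (nx, ny)."""
    w = 2 * math.pi * f0
    return (-math.sin(w * t), math.cos(w * t))

f0 = 1.0
print(n_hat(0, f0))            # approximately (0, 1):  n = +y at t = 0
print(n_hat(1 / (4 * f0), f0)) # approximately (-1, 0): n = -x after a quarter turn
print(n_hat(1 / (2 * f0), f0)) # approximately (0, -1): n = -y after a half turn
```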
Applying this definition, we find

$$\hat{n}'(t) = +\hat{x}\cos 2\pi f_0 t + \hat{y}\sin 2\pi f_0 t \tag{8.33}$$

Note that this is essentially the definition of the radial basis vector $\hat{\rho}$ from the cylindrical coordinate system (which is why we applied the minus sign in Equation 8.32). Recognizing this, we write

$$\hat{\rho}(t) = +\hat{x}\cos 2\pi f_0 t + \hat{y}\sin 2\pi f_0 t \tag{8.34}$$

and finally

$$V_T = +2\pi f_0 A B_0\, \hat{b}\cdot\hat{\rho}(t) \tag{8.35}$$

If the purpose of this device is to generate power, then presumably we would choose the magnetic field to be in a direction that maximizes the maximum value of $\hat{b}\cdot\hat{\rho}(t)$. Therefore, power is optimized for $\mathbf{B}$ polarized entirely in some combination of $\hat{x}$ and $\hat{y}$, and with $\mathbf{B}\cdot\hat{z} = 0$. Under that constraint, we see that $V_T$ varies sinusoidally with frequency $f_0$ and exhibits peak magnitude

$$\max |V_T(t)| = 2\pi f_0 A B_0 \tag{8.36}$$
It's worth noting that the maximum voltage magnitude is achieved when the plane of the loop is parallel to $\mathbf{B}$; i.e., when $\hat{b}\cdot\hat{n}(t) = 0$, so that $\Phi(t) = 0$. Why is that? Because this is when $\Phi(t)$ is most rapidly increasing or decreasing. Conversely, when the plane of the loop is perpendicular to $\mathbf{B}$ (i.e., $\hat{b}\cdot\hat{n}(t) = 1$), $|\Phi(t)|$ is maximum but its time derivative is zero, so $V_T(t) = 0$ at this instant.

Example 8.4. Rudimentary electric generator.

The generator in Figure 8.11 consists of a circular loop of radius $a = 1$ cm rotating at 1000 revolutions per second in a static and spatially-uniform magnetic flux density of 1 mT in the $+\hat{x}$ direction. What is the induced potential?

Solution. From the problem statement, $f_0 = 1$ kHz, $B_0 = 1$ mT, and $\hat{b} = +\hat{x}$. Therefore $\hat{b}\cdot\hat{\rho}(t) = \hat{x}\cdot\hat{\rho}(t) = \cos 2\pi f_0 t$. The area of the loop is $A = \pi a^2$. From Equation 8.35 we obtain

$$V_T(t) \cong (1.97~\text{mV})\cos\left[(6.28~\text{krad/s})\,t\right]$$

Finally, we note that it is not really necessary for the loop to rotate in the presence of a magnetic field with constant $\hat{b}$; it works equally well for the loop to be stationary and for $\hat{b}$ to rotate – in fact, this is essentially the same problem. In some practical generators, both the potential-generating coils and fields (generated by some combination of magnets and coils) rotate.

Additional Reading:

• "Electric Generator" on Wikipedia.

8.8 The Maxwell-Faraday Equation

[m0050]

In this section, we generalize Kirchhoff's Voltage Law (KVL), previously encountered as a principle of electrostatics in Sections 5.10 and 5.11. KVL states that in the absence of a time-varying magnetic flux, the electric potential accumulated by traversing a closed path $C$ is zero.
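The peak voltage in Example 8.4 follows directly from Equation 8.36; a quick numerical check (not part of the text):

```python
import math

f0 = 1e3    # rotation frequency (Hz)
a = 0.01    # loop radius (m)
B0 = 1e-3   # flux density (T)

A = math.pi * a**2                   # loop area (m^2)
VT_peak = 2 * math.pi * f0 * A * B0  # Equation 8.36
print(VT_peak * 1e3)                 # approximately 1.97 mV
```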
Here is that idea in mathematical form:

$$V = \oint_C \mathbf{E}\cdot d\mathbf{l} = 0 \tag{8.37}$$

Now recall Faraday's Law (Section 8.3):

$$V = -\frac{\partial}{\partial t}\Phi = -\frac{\partial}{\partial t}\int_S \mathbf{B}\cdot d\mathbf{s} \tag{8.38}$$

Here, $S$ is any open surface that intersects all magnetic field lines passing through $C$, with the relative orientations of $C$ and $d\mathbf{s}$ determined in the usual way by the Stokes' Theorem convention (Section 4.9). Note that Faraday's Law agrees with KVL in the magnetostatic case. If magnetic flux is constant, then Faraday's Law says $V = 0$. However, Faraday's Law is very clearly not consistent with KVL if magnetic flux is time-varying. The correction is simple enough; we can simply set these expressions to be equal. Here we go:

$$\oint_C \mathbf{E}\cdot d\mathbf{l} = -\frac{\partial}{\partial t}\int_S \mathbf{B}\cdot d\mathbf{s} \tag{8.39}$$

This general form is known by a variety of names; here we refer to it as the Maxwell-Faraday Equation (MFE).

The integral form of the Maxwell-Faraday Equation (Equation 8.39) states that the electric potential associated with a closed path $C$ is due entirely to electromagnetic induction, via Faraday's Law.

Despite the great significance of this expression as one of Maxwell's Equations, one might argue that all we have done is simply to write Faraday's Law in a slightly more verbose way. This is true. The real power of the MFE is unleashed when it is expressed
in differential, as opposed to integral, form. Let us now do this.

We can transform the left-hand side of Equation 8.39 into an integral over $S$ using Stokes' Theorem. Applying Stokes' Theorem on the left, we obtain

$$\int_S (\nabla\times\mathbf{E})\cdot d\mathbf{s} = -\frac{\partial}{\partial t}\int_S \mathbf{B}\cdot d\mathbf{s} \tag{8.40}$$

Now exchanging the order of integration and differentiation on the right-hand side:

$$\int_S (\nabla\times\mathbf{E})\cdot d\mathbf{s} = \int_S \left(-\frac{\partial}{\partial t}\mathbf{B}\right)\cdot d\mathbf{s} \tag{8.41}$$

The surface $S$ on both sides is the same, and we have not constrained $S$ in any way. $S$ can be any mathematically-valid open surface anywhere in space, having any size and any orientation. The only way the above expression can be universally true under these conditions is if the integrands on each side are equal at every point in space. Therefore,

$$\nabla\times\mathbf{E} = -\frac{\partial}{\partial t}\mathbf{B} \tag{8.42}$$

which is the MFE in differential form.

What does this mean? Recall that the curl of $\mathbf{E}$ is a way to take a derivative of $\mathbf{E}$ with respect to position (Section 4.8). Therefore, the MFE constrains spatial derivatives of $\mathbf{E}$ to be simply related to the rate of change of $\mathbf{B}$. Said plainly:

The differential form of the Maxwell-Faraday Equation (Equation 8.42) relates the change in the electric field with position to the change in the magnetic field with time.

Now that is arguably new and useful information. We now see that electric and magnetic fields are coupled not only for line integrals and fluxes, but also at each point in space.

Additional Reading:

• "Faraday's Law of Induction" on Wikipedia.
• "Maxwell's Equations" on Wikipedia.

8.9 Displacement Current and Ampere's Law

[m0053]

In this section, we generalize Ampere's Law, previously encountered as a principle of magnetostatics in Sections 7.4 and 7.9. Ampere's Law states that the current $I_{encl}$ flowing through a closed path $C$ is equal to the line integral of the magnetic field intensity $\mathbf{H}$ along $C$. That is:

$$\oint_C \mathbf{H}\cdot d\mathbf{l} = I_{encl} \tag{8.43}$$

We shall now demonstrate that this equation is unreliable if the current is not steady; i.e., not DC.
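Equation 8.42 can be spot-checked numerically for a uniform plane wave. The following sketch (not from the text; the wave parameters are arbitrary) compares the only non-zero component of $\nabla\times\mathbf{E}$, which for $\mathbf{E} = \hat{x}\cos(\omega t - kz)$ reduces to $\partial E_x/\partial z$, against $-\partial B_y/\partial t$ using central differences:

```python
import math

# A plane wave satisfying the MFE: E = x*cos(w*t - k*z), B = y*(k/w)*cos(w*t - k*z)
w, k = 2 * math.pi * 1e6, 0.02   # angular frequency (rad/s), wavenumber (rad/m); arbitrary

Ex = lambda z, t: math.cos(w * t - k * z)
By = lambda z, t: (k / w) * math.cos(w * t - k * z)

z0, t0 = 1.234, 5.678e-7   # arbitrary test point
dz, dt = 1e-4, 1e-12       # central-difference steps

# (curl E)_y reduces to dEx/dz for this field; the MFE requires dEx/dz = -dBy/dt
curlE_y = (Ex(z0 + dz, t0) - Ex(z0 - dz, t0)) / (2 * dz)
minus_dBy_dt = -(By(z0, t0 + dt) - By(z0, t0 - dt)) / (2 * dt)

print(curlE_y, minus_dBy_dt)  # the two values agree to numerical precision
```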
First, consider the situation shown in Figure 8.12. Here, a current $I$ flows in the wire, subsequently generating a magnetic field $\mathbf{H}$ that circulates around the wire (Section 7.5). When we perform the integration in Ampere's Law along any path $C$ enclosing the wire, the result is $I$, as expected. In this case, Ampere's Law is working even when $I$ is time-varying.

Now consider the situation shown in Figure 8.13, in which we have introduced a parallel-plate capacitor. In the DC case, this situation is simple. No current flows, so there is no magnetic field and Ampere's Law is trivially true. In the AC case, the current $I$ can be non-zero, but we must be clear about the physical origin of this current. What is happening is that for one half of a period, a source elsewhere in the circuit is moving positive charge to one side of the capacitor and negative charge to the other side. For the other half-period, the source is exchanging the charge, so that negative charge appears on the previously positively-charged side and vice-versa. Note that at no point is current flowing directly from one side of the capacitor to the other; instead, all current must flow through the circuit in order to arrive at the other plate. Even though there is no current between the plates, there is current in the wire, and therefore there is also a magnetic field associated with that current.

Now we are ready to shine a light on the problem. Recall that from Stokes' Theorem (Section 4.9), the line integral over $C$ is mathematically equivalent to an
Figure 8.12: Ampere's Law applied to a continuous line of current (C. Burks, modified).

Figure 8.13: Ampere's Law applied to a parallel-plate capacitor (C. Burks, modified).

integral over any open surface $S$ that is bounded by $C$. Two such surfaces are shown in Figure 8.12 and Figure 8.13, indicated as $S_1$ and $S_2$. In the wire-only scenario of Figure 8.12, the choice of $S$ clearly doesn't matter; any valid surface intersects current equal to $I$. Similarly in the scenario of Figure 8.13, everything seems fine if we choose $S = S_1$. If, on the other hand, we select $S_2$ in the parallel-plate capacitor case, then we have a problem. There is no current flowing through $S_2$, so the right side of Equation 8.43 is zero even though the left side is potentially non-zero. So, it appears that something necessary for the time-varying case is missing from Equation 8.43.

To resolve the problem, we postulate an additional term in Ampere's Law that is non-zero in the above scenario. Specifically, we propose:

$$\oint_C \mathbf{H}\cdot d\mathbf{l} = I_c + I_d \tag{8.44}$$

where $I_c$ is the enclosed current (formerly identified as $I_{encl}$) and $I_d$ is the proposed new term. If we are to accept this postulate, then here is a list of things we know about $I_d$:

• $I_d$ has units of current (A).
• $I_d = 0$ in the DC case and is potentially non-zero in the AC case. This implies that $I_d$ is the time derivative of some other quantity.
• $I_d$ must be somehow related to the electric field.

How do we know $I_d$ must be related to the electric field? This is because the Maxwell-Faraday Equation (Section 8.8) tells us that spatial derivatives of $\mathbf{E}$ are related to time derivatives of $\mathbf{H}$; i.e., $\mathbf{E}$ and $\mathbf{H}$ are coupled in the time-varying (here, AC) case. This coupling between $\mathbf{E}$ and $\mathbf{H}$ must also be at work here, but we have not yet seen $\mathbf{E}$ play a role. This is pretty strong evidence that $I_d$ depends on the electric field.
Without further ado, here's $I_d$:

$$I_d = \int_S \frac{\partial\mathbf{D}}{\partial t}\cdot d\mathbf{s} \tag{8.45}$$

where $\mathbf{D}$ is the electric flux density (units of C/m²) and is equal to $\epsilon\mathbf{E}$ as usual, and $S$ is the same open surface associated with $C$ in Ampere's Law. Note that
this expression meets our expectations: It is determined by the electric field, it is zero when the electric field is constant (i.e., not time-varying), and has units of current.

The quantity $I_d$ is commonly known as displacement current. It should be noted that this name is a bit misleading, since $I_d$ is not a current in the conventional sense. Certainly, it is not a conduction current – conduction current is represented by $I_c$, and there is no current conducted through an ideal capacitor. It is not unreasonable to think of $I_d$ as current in a more general sense, for the following reason. At one instant, charge is distributed one way and at another, it is distributed in another way. If you define current as a time variation in the charge distribution relative to $S$ – regardless of the path taken by the charge – then $I_d$ is a current. However, this distinction is a bit philosophical, so it may be less confusing to interpret "displacement current" instead as a separate electromagnetic quantity that just happens to have units of current.

Now we are able to write the general form of Ampere's Law that applies even when sources are time-varying. Here it is:

$$\oint_C \mathbf{H}\cdot d\mathbf{l} = I_c + \int_S \frac{\partial\mathbf{D}}{\partial t}\cdot d\mathbf{s} \tag{8.46}$$

As is the case in the Maxwell-Faraday Equation, most of the utility of Ampere's Law is unleashed when expressed in differential form. To obtain this form, the first step is to write $I_c$ as an integral over $S$; this is simply (see Section 6.2):

$$I_c = \int_S \mathbf{J}\cdot d\mathbf{s} \tag{8.47}$$

where $\mathbf{J}$ is the volume current density (units of A/m²). So now we have

$$\oint_C \mathbf{H}\cdot d\mathbf{l} = \int_S \mathbf{J}\cdot d\mathbf{s} + \int_S \frac{\partial\mathbf{D}}{\partial t}\cdot d\mathbf{s} = \int_S \left(\mathbf{J} + \frac{\partial\mathbf{D}}{\partial t}\right)\cdot d\mathbf{s} \tag{8.48}$$

We can transform the left side of the above equation into an integral over $S$ using Stokes' Theorem (Section 4.9). We obtain

$$\int_S (\nabla\times\mathbf{H})\cdot d\mathbf{s} = \int_S \left(\mathbf{J} + \frac{\partial\mathbf{D}}{\partial t}\right)\cdot d\mathbf{s} \tag{8.49}$$

The surface $S$ on both sides is the same, and we have not constrained $S$ in any way. $S$ can be any mathematically-valid open surface anywhere in space, having any size and any orientation.
The only way the above expression can be universally true under these conditions is if the integrands on each side are equal at every point in space. Therefore:

$$\nabla\times\mathbf{H} = \mathbf{J} + \frac{\partial}{\partial t}\mathbf{D} \tag{8.50}$$

which is Ampere's Law in differential form.

What does Equation 8.50 mean? Recall that the curl of $\mathbf{H}$ is a way to describe the direction and rate of change of $\mathbf{H}$ with position. Therefore, this equation constrains spatial derivatives of $\mathbf{H}$ to be simply related to $\mathbf{J}$ and the time derivative of $\mathbf{D}$ (displacement current). Said plainly:

The differential form of the general (time-varying) form of Ampere's Law (Equation 8.50) relates the change in the magnetic field with position to the change in the electric field with time, plus current.

As is the case in the Maxwell-Faraday Equation (Section 8.8), we see that electric and magnetic fields become coupled at each point in space when sources are time-varying.

Additional Reading:

• "Displacement Current" on Wikipedia.
• "Ampere's Circuital Law" on Wikipedia.
• "Maxwell's Equations" on Wikipedia.

[m0126]
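A numeric illustration of why the displacement-current term fixes the capacitor paradox (the capacitor dimensions and drive values below are hypothetical, not from the text): for an ideal parallel-plate capacitor driven sinusoidally, the displacement current $I_d = \epsilon_0 A\,\partial E/\partial t$ between the plates equals the conduction current $I_c = C\,dV/dt$ in the wire, so either choice of surface now gives the same answer:

```python
import math

eps0 = 8.854e-12   # permittivity of free space (F/m)
A = 1e-2           # plate area (m^2), hypothetical
d = 1e-3           # plate separation (m), hypothetical
C = eps0 * A / d   # ideal parallel-plate capacitance

V0, f = 5.0, 1e3   # drive amplitude (V) and frequency (Hz), hypothetical
w = 2 * math.pi * f
t = 1e-4           # arbitrary instant

# Conduction current in the wire: Ic = C * dV/dt, with V(t) = V0*sin(w*t)
Ic = C * V0 * w * math.cos(w * t)

# Displacement current between the plates: Id = eps0 * A * dE/dt, with E = V/d
Id = eps0 * A * (V0 / d) * w * math.cos(w * t)

print(Ic, Id)  # identical: the "missing" current through S2 is Id
```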
Image Credits

Fig. 8.1: © Y. Qin, https://commons.wikimedia.org/wiki/File:M0003_fCoilBarMagnet.svg, CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/), modified from the original.

Fig. 8.2: E. Bach, https://commons.wikimedia.org/wiki/File:Faraday_emf_experiment.svg, public domain via CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/deed.en), modified from the original.

Fig. 8.8: © BillC, https://commons.wikimedia.org/wiki/File:Transformer3d_col3.svg, CC BY SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/). Modified by author.

Fig. 8.10: © SpinningSpark, https://en.wikipedia.org/wiki/File:Transformer_balance.svg, CC BY SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/). Modified by author.

Fig. 8.12: C. Burks, https://en.wikipedia.org/wiki/File:Displacement_current_in_capacitor.svg, public domain. Heavily modified by author.

Fig. 8.13: C. Burks, https://en.wikipedia.org/wiki/File:Displacement_current_in_capacitor.svg, public domain. Heavily modified by author.
Chapter 9 Plane Waves in Lossless Media 9.1 Maxwell’s Equations in Differential Phasor Form [m0042] In this section, we derive the phasor form of Maxwell’s Equations from the general time-varying form of these equations. Here we are interested exclusively in the differential (“point”) form of these equations. It is assumed that the reader is comfortable with phasor representation and its benefits; if not, a review of Section 1.5 is recommended before attempting this section. Maxwell’s Equations in differential time-domain form are Gauss’ Law (Section 5.7): ∇· D = ρv (9.1) the Maxwell-Faraday Equation (MFE; Section 8.8): ∇× E = −∂ ∂tB (9.2) Gauss’ Law for Magnetism (GSM; Section 7.3): ∇· B = 0 (9.3) and Ampere’s Law (Section 8.9): ∇× H = J + ∂ ∂tD (9.4) We begin with Gauss’s Law (Equation 9.1). We define eD and eρv as phasor quantities through the usual relationship: D = Re n eDejωto (9.5) and ρv = Re eρvejωt (9.6) Substituting these expressions into Equation 9.1: ∇· h Re n eDejωtoi = Re eρvejωt (9.7) Divergence is a real-valued linear operator. Therefore, we may exchange the order of the “Re” and “∇·” operations (this is “Claim 2” from Section 1.5): Re n ∇· h eDejωtio = Re eρvejωt (9.8) Next, we note that the differentiation associated with the divergence operator is with respect to position and not with respect to time, so the order of operations may be further rearranged as follows: Re nh ∇· eD i ejωto = Re eρvejωt (9.9) Finally, we note that the equality of the left and right sides of the above equation implies the equality of the associated phasors (this is “Claim 1” from Section 1.5); thus, ∇· eD = eρv (9.10) In other words, the differential form of Gauss’ Law for phasors is identical to the differential form of Gauss’ Law for physical time-domain quantities. The same procedure applied to the MFE is only a little more complicated. 
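The claim used in the last step above ("Claim 1" of Section 1.5), that equality of the time-domain quantities for all t implies equality of the phasors, can be illustrated numerically. The following sketch is not from the text; the phasor value is an arbitrary choice. It recovers a phasor from two samples of its time-domain projection:

```python
import cmath, math

# Illustrative sketch (not from the book): recovering a phasor from its
# time-domain projection x(t) = Re{X exp(j w t)}. If two phasors produce
# the same x(t) for all t, they must be the same phasor ("Claim 1").

def time_domain(X, wt):
    """Physical (real-valued) quantity at phase wt for phasor X."""
    return (X * cmath.exp(1j * wt)).real

X = 3.0 - 4.0j                          # arbitrary phasor
re_part = time_domain(X, 0.0)           # wt = 0 yields Re{X}
im_part = time_domain(X, -math.pi / 2)  # wt = -pi/2 yields Im{X}
X_recovered = re_part + 1j * im_part
print(X_recovered)  # (3-4j)
```

Sampling at ωt = 0 and ωt = −π/2 yields the real and imaginary parts, so the time-domain waveform determines the phasor uniquely.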
First, we establish the phasor representations of the electric and magnetic fields: E = Re n eEejωto (9.11) B = Re n eBejωto (9.12) Electromagnetics Vol 1. c⃝2018 S.W. Ellingson CC BY SA 4.0. https://doi.org/10.21061/electromagnetics-vol-1 | Electromagnetics_Vol1_Page_209_Chunk1513 |
9.1. MAXWELL'S EQUATIONS IN DIFFERENTIAL PHASOR FORM 195 After substitution into Equation 9.2: ∇× [Re{eEejωt}] = −(∂/∂t)[Re{eBejωt}] (9.13) Both curl and time-differentiation are real-valued linear operations, so we are entitled to change the order of operations as follows: Re{∇× [eEejωt]} = −Re{(∂/∂t)[eBejωt]} (9.14) On the left, we note that the time dependence ejωt can be pulled out of the argument of the curl operator, since it does not depend on position: Re{[∇× eE] ejωt} = −Re{(∂/∂t)[eBejωt]} (9.15) On the right, we note that eB is constant with respect to time (because it is a phasor), so: Re{[∇× eE] ejωt} = −Re{eB (∂/∂t)ejωt} = −Re{eBjωejωt} = Re{[−jω eB] ejωt} (9.16) And so we have found: ∇× eE = −jω eB (9.17) Let's pause for a moment to consider the above result. In the general time-domain version of the MFE, we must take spatial derivatives of the electric field and time derivatives of the magnetic field. In the phasor version of the MFE, the time derivative operator has been replaced with multiplication by jω. This is a tremendous simplification since the equations now involve differentiation over position only. Furthermore, no information is lost in this simplification – for a reminder of why that is, see the discussion of Fourier Analysis at the end of Section 1.5. Without this kind of simplification, much of what is now considered "basic" engineering electromagnetics would be intractable. The procedure for conversion of the remaining two equations is very similar, yielding: ∇· eB = 0 (9.18) ∇× eH = eJ + jω eD (9.19) The details are left as an exercise for the reader. The differential form of Maxwell's Equations (Equations 9.10, 9.17, 9.18, and 9.19) involves operations on the phasor representations of the physical quantities. These equations have the advantage that differentiation with respect to time is replaced by multiplication by jω.
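The replacement of ∂/∂t by multiplication by jω can be checked numerically. This is an illustrative sketch with an arbitrary frequency and phasor value, not taken from the text:

```python
import cmath, math

# Numerical check (illustrative) that differentiating the physical field
# B(t) = Re{B~ exp(j w t)} in time matches multiplying the phasor B~ by j*w,
# which is the step used to obtain Equation 9.17. Values are assumed.
w = 2 * math.pi * 1e6   # angular frequency, arbitrary choice (1 MHz)
B_ph = 2.0 + 1.5j       # arbitrary phasor B~

def B(t):
    """Physical (real-valued) field at time t."""
    return (B_ph * cmath.exp(1j * w * t)).real

t, dt = 1e-7, 1e-13
dBdt_numeric = (B(t + dt) - B(t - dt)) / (2 * dt)            # central difference
dBdt_phasor = ((1j * w) * B_ph * cmath.exp(1j * w * t)).real  # j*w rule
print(abs(dBdt_numeric - dBdt_phasor) / abs(dBdt_phasor) < 1e-4)  # True
```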
196 CHAPTER 9. PLANE WAVES IN LOSSLESS MEDIA 9.2 Wave Equations for Source-Free and Lossless Regions [m0036] Electromagnetic waves are solutions to a set of coupled differential simultaneous equations – namely, Maxwell’s Equations. The general solution to these equations includes constants whose values are determined by the applicable electromagnetic boundary conditions. However, this direct approach can be difficult and is often not necessary. In unbounded homogeneous regions that are “source free” (containing no charges or currents), a simpler approach is possible. In this section, we reduce Maxwell’s Equations to wave equations that apply to the electric and magnetic fields in this simpler category of scenarios. Before reading further, the reader should consider a review of Section 1.3 (noting in particular Equation 1.1) and Section 3.6 (wave equations for voltage and current on a transmission line). This section seeks to develop the analogous equations for electric and magnetic waves. We can get the job done using the differential “point” phasor form of Maxwell’s Equations, developed in Section 9.1. Here they are: ∇· eD = eρv (9.20) ∇× eE = −jω eB (9.21) ∇· eB = 0 (9.22) ∇× eH = eJ + jω eD (9.23) In a source-free region, there is no net charge and no current, hence eρv = 0 and eJ = 0 in the present analysis. The above equations become ∇· eD = 0 (9.24) ∇× eE = −jω eB (9.25) ∇· eB = 0 (9.26) ∇× eH = +jω eD (9.27) Next, we recall that eD = ǫeE and that ǫ is a real-valued constant for a medium that is homogeneous, isotropic, and linear (Section 2.8). Similarly, eB = µ ˜H and µ is a real-valued constant. Thus, under these conditions, it is sufficient to consider either eD or eE and either eB or eH. The choice is arbitrary, but in engineering applications it is customary to use eE and eH. 
Eliminating the now-redundant quantities eD and eB, the above equations become ∇· eE = 0 (9.28) ∇× eE = −jωµ eH (9.29) ∇· eH = 0 (9.30) ∇× eH = +jωǫeE (9.31) It is important to note that requiring the region of interest to be source-free precludes the possibility of loss in the medium. To see this, let’s first be clear about what we mean by “loss.” For an electromagnetic wave, loss is observed as a reduction in the magnitude of the electric and magnetic field with increasing distance. This reduction is due to the dissipation of power in the medium. This occurs when the conductivity σ is greater than zero because Ohm’s Law for Electromagnetics (eJ = σeE; Section 6.3) requires that power in the electric field be transferred into conduction current, and is thereby lost to the wave (Section 6.6). When we required J to be zero above, we precluded this possibility; that is, we implicitly specified σ = 0. The fact that the constitutive parameters µ and ǫ appear in Equations 9.28–9.31, but σ does not, is further evidence of this. Equations 9.28–9.31 are Maxwell’s Equations for a region comprised of isotropic, homogeneous, and source-free material. Because there can be no conduction current in a source-free region, these equations apply only to material that is lossless (i.e., having negligible σ). Before moving on, one additional disclosure is appropriate. It turns out that there actually is a way to use Equations 9.28–9.31 for regions in which loss is significant. This requires a redefinition of ǫ as a complex-valued quantity. We shall not consider this technique in this section. We mention this simply because one should be aware that if permittivity | Electromagnetics_Vol1_Page_211_Chunk1515 |
9.2. WAVE EQUATIONS FOR SOURCE-FREE AND LOSSLESS REGIONS 197 appears as a complex-valued quantity, then the imaginary part represents loss. To derive the wave equations we begin with the MFE, Equation 9.29. Taking the curl of both sides of the equation we obtain ∇× ∇× eE = ∇× −jωµ eH = −jωµ ∇× eH (9.32) On the right we can eliminate ∇× eH using Equation 9.31: −jωµ ∇× eH = −jωµ +jωǫeE = +ω2µǫeE (9.33) On the left side of Equation 9.32, we apply the vector identity ∇× ∇× A = ∇(∇· A) −∇2A (9.34) which in this case is ∇× ∇× eE = ∇ ∇· eE −∇2 eE (9.35) We may eliminate the first term on the right using Equation 9.28, yielding ∇× ∇× eE = −∇2 eE (9.36) Substituting these results back into Equation 9.32 and rearranging terms we have ∇2 eE + ω2µǫeE = 0 (9.37) This is the wave equation for eE. Note that it is a homogeneous (in the mathematical sense of the word) differential equation, which is expected since we have derived it for a source-free region. It is common to make the following definition β ≜ω√µǫ (9.38) so that Equation 9.37 may be written ∇2 eE + β2 eE = 0 (9.39) Why go to the trouble of defining β? One reason is that β conveniently captures the contribution of the frequency, permittivity, and permeability all in one constant. Another reason is to emphasize the connection to the parameter β appearing in transmission line theory (see Section 3.8 for a reminder). It should be clear that β is a phase propagation constant, having units of 1/m (or rad/m, if you prefer), and indicates the rate at which the phase of the propagating wave progresses with distance. The wave equation for eH is obtained using essentially the same procedure, which is left as an exercise for the reader. It should be clear from the duality apparent in Equations 9.28-9.31 that the result will be very similar.
One finds: ∇2 eH + β2 eH = 0 (9.40) Equations 9.39 and 9.40 are the wave equations for eE and eH, respectively, for a region comprised of isotropic, homogeneous, lossless, and source-free material. Looking ahead, note that eE and eH are solutions to the same homogeneous differential equation. Consequently, eE and eH cannot be different by more than a constant factor and a direction. In fact, we can also determine something about the factor simply by examining the units involved: Since eE has units of V/m and eH has units of A/m, this factor will be expressible in units of the ratio of V/m to A/m, which is Ω. This indicates that the factor will be an impedance. This factor is known as the wave impedance and will be addressed in Section 9.5. This impedance is analogous to the characteristic impedance of a transmission line (Section 3.7). Additional Reading: • "Wave Equation" on Wikipedia. • "Electromagnetic Wave Equation" on Wikipedia.
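As a quick numeric illustration of the definition β ≜ ω√µǫ (Equation 9.38), the following sketch evaluates β for free space at 1 GHz. The frequency and the standard values of µ0 and ǫ0 are assumptions made for the example, not values given in the text:

```python
import math

# Illustrative calculation: the phase propagation constant
# beta = w*sqrt(mu*eps) of Equation 9.38, evaluated for free space at 1 GHz.
mu0 = 4e-7 * math.pi   # permeability of free space, H/m (standard value)
eps0 = 8.854e-12       # permittivity of free space, F/m (standard value)
f = 1e9                # frequency, Hz (assumed for this example)
w = 2 * math.pi * f
beta = w * math.sqrt(mu0 * eps0)
print(round(beta, 2), "rad/m")  # 20.96 rad/m
```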
198 CHAPTER 9. PLANE WAVES IN LOSSLESS MEDIA 9.3 Types of Waves [m0142] Solutions to the electromagnetic wave equations (Section 9.2) exist in a variety of forms, representing different types of waves. It is useful to identify three particular geometries for unguided waves. Each of these geometries is defined by the shape formed by surfaces of constant phase, which we refer to as phasefronts. (Keep in mind the analogy between electromagnetic waves and sound waves (described in Section 1.3), and note that sound waves also exhibit these geometries.) A spherical wave has phasefronts that form concentric spheres, as shown in Figure 9.1. Waves are well-modeled as spherical when the dimensions of the source of the wave are small relative to the scale at which the wave is observed. For example, the wave radiated by an antenna having dimensions of 10 cm, when observed in free space over a scale of 10 km, appears to have phasefronts that are very nearly spherical. Note that the magnitude of the field on a phasefront of a spherical wave may vary significantly, but it is the shape of phasefronts that make it a spherical wave. A cylindrical wave exhibits phasefronts that form concentric cylinders, as shown in Figure 9.2. Said differently, the phasefronts of a cylindrical wave are circular in one dimension, and planar in the perpendicular direction. A cylindrical wave is often a good description of the wave that emerges from a line-shaped source. A plane wave exhibits phasefronts that are planar, with planes that are parallel to each other as shown in Figure 9.3. There are two conditions in which waves are well-modeled as plane waves. First, some structures give rise to waves that appear to have planar phasefronts over a limited area; a good example is the wave radiated by a parabolic reflector, as shown in Figure 9.4. Second, all waves are well-modeled as plane waves when observed over a small region located sufficiently far from the source. 
In particular, spherical waves are “locally planar” in the sense that they are well-modeled as planar when observed over a small portion of the spherical c⃝Y. Qin CC BY 4.0 Figure 9.1: The phasefronts of a spherical wave form concentric spheres. c⃝Y. Qin CC BY 4.0 Figure 9.2: The phasefronts of a cylindrical wave form concentric cylinders. | Electromagnetics_Vol1_Page_213_Chunk1517 |
9.4. UNIFORM PLANE WAVES: DERIVATION 199 c⃝Y. Qin CC BY 4.0 Figure 9.3: The phasefronts of a plane wave form parallel planes. c⃝Y. Qin CC BY 4.0 Figure 9.4: Plane waves formed in the region in front of a parabolic reflector antenna. phasefront, as shown in Figure 9.5. An analogy is that the Earth seems "locally flat" to an observer on the ground, even though it is clearly spherical to an observer in orbit. The "locally planar" approximation is often employed because it is broadly applicable and simplifies analysis. Most waves are well-modeled as spherical, cylindrical, or plane waves. Plane waves (having planar phasefronts) are of particular importance due to the wide applicability of the "locally planar" approximation. c⃝Y. Qin CC BY 4.0 Figure 9.5: "Locally planar" approximation of a spherical wave over a limited area. 9.4 Uniform Plane Waves: Derivation [m0038] Section 9.2 showed how Maxwell's Equations could be reduced to a pair of phasor-domain "wave equations," namely: ∇2 eE + β2 eE = 0 (9.41) ∇2 eH + β2 eH = 0 (9.42) where β = ω√µǫ, assuming unbounded homogeneous, isotropic, lossless, and source-free media. In this section, we solve these equations for the special case of a uniform plane wave. A uniform plane wave is one for which both eE and eH have constant magnitude and phase in a specified plane. Despite being a special case, the solution turns out to be broadly applicable, appearing as a common building block in many practical and theoretical problems in unguided propagation (as explained in Section 9.3), as well as in more than a few transmission line and waveguide problems. To begin, let us assume that the plane over which eE and eH have constant magnitude and phase is a plane of constant z. First, note that we may make this
200 CHAPTER 9. PLANE WAVES IN LOSSLESS MEDIA assumption with no loss of generality. For example, we could alternatively select a plane of constant y, solve the problem, and then simply exchange variables to get a solution for planes of constant z (or x).1 Furthermore, the solution for any planar orientation not corresponding to a plane of constant x, y, or z may be similarly obtained by a rotation of coordinates, since the physics of the problem does not depend on the orientation of this plane – if it does, then the medium is not isotropic! We may express the constraint that the magnitude and phase of eE and eH are constant over a plane that is perpendicular to the z axis as follows: ∂eE/∂x = ∂eE/∂y = ∂eH/∂x = ∂eH/∂y = 0 (9.43) Let us identify the Cartesian components of each of these fields as follows: eE = ˆx eEx + ˆy eEy + ˆz eEz , and (9.44) eH = ˆx eHx + ˆy eHy + ˆz eHz (9.45) Now Equation 9.43 may be interpreted in detail for eE as follows: ∂eEx/∂x = ∂eEy/∂x = ∂eEz/∂x = 0 (9.46) ∂eEx/∂y = ∂eEy/∂y = ∂eEz/∂y = 0 (9.47) and for eH as follows: ∂eHx/∂x = ∂eHy/∂x = ∂eHz/∂x = 0 (9.48) ∂eHx/∂y = ∂eHy/∂y = ∂eHz/∂y = 0 (9.49) The wave equation for eE (Equation 9.41) written explicitly in Cartesian coordinates is (∂2/∂x2 + ∂2/∂y2 + ∂2/∂z2)(ˆx eEx + ˆy eEy + ˆz eEz) + β2 (ˆx eEx + ˆy eEy + ˆz eEz) = 0 (9.50) 1By the way, this is a highly-recommended exercise for the student. Decomposing this equation into separate equations for each of the three coordinates, we obtain the following: (∂2/∂x2 + ∂2/∂y2 + ∂2/∂z2) eEx + β2 eEx = 0 (9.51) (∂2/∂x2 + ∂2/∂y2 + ∂2/∂z2) eEy + β2 eEy = 0 (9.52) (∂2/∂x2 + ∂2/∂y2 + ∂2/∂z2) eEz + β2 eEz = 0 (9.53) Applying the constraints of Equations 9.46 and 9.47, we note that many of the terms in Equations 9.51–9.53 are zero. We are left with: ∂2eEx/∂z2 + β2 eEx = 0 (9.54) ∂2eEy/∂z2 + β2 eEy = 0 (9.55) ∂2eEz/∂z2 + β2 eEz = 0 (9.56) Now we will show that Equation 9.43 also implies that eEz must be zero.
To show this, we use Ampere’s Law for a source-free region (Section 9.2): ∇× eH = +jωǫeE (9.57) and take the dot product with ˆz on both sides: ˆz · ∇× eH = +jωǫ eEz ∂ ∂y eHx −∂ ∂x eHy = +jωǫ eEz (9.58) Again applying the constraints of Equation 9.43, the left side of Equation 9.58 must be zero; therefore, eEz = 0. The exact same procedure applied to eH (using the Maxwell-Faraday Equation; also given in Section 9.2) reveals that eHz is also zero.2 Here is what we have found: If a wave is uniform over a plane, then the electric and magnetic field vectors must lie in this plane. This conclusion is a direct consequence of the fact that Maxwell’s Equations require the electric field to 2Showing this is a highly-recommended exercise for the reader. | Electromagnetics_Vol1_Page_215_Chunk1519 |
9.4. UNIFORM PLANE WAVES: DERIVATION 201 be proportional to the curl of the magnetic field and vice-versa. The general solution to Equation 9.54 is: eEx = E+ x0e−jβz + E− x0e+jβz (9.59) where E+ x0 and E− x0 are complex-valued constants. The values of these constants are determined by boundary conditions – possibly sources – outside the region of interest. Since in this section we are limiting our scope to source-free and homogeneous regions, we may for the moment consider the values of E+ x0 and E− x0 to be arbitrary, since any values will satisfy the associated wave equation.3 Similarly we have for eEy: eEy = E+ y0e−jβz + E− y0e+jβz (9.60) where E+ y0 and E− y0 are again arbitrary constants. Summarizing, we have found eE = ˆx eEx + ˆy eEy (9.61) where eEx and eEy are given by Equations 9.59 and 9.60, respectively. Note that Equations 9.59 and 9.60 are essentially the same equations encountered in the study of waves in lossless transmission lines; for a reminder, see Section 3.6. Specifically, factors containing e−jβz describe propagation in the +z direction, whereas factors containing e+jβz describe propagation in the −z direction. We conclude: If a wave is uniform over a plane, then possible directions of propagation are the two directions perpendicular to the plane. Since we previously established that the electric and magnetic field vectors must lie in the plane, we also conclude: The direction of propagation is perpendicular to the electric and magnetic field vectors. This conclusion turns out to be generally true; i.e., it is not limited to uniform plane waves. Although we 3The reader is encouraged to confirm that these are solutions by substitution into the associated wave equation. 
will not provide a rigorous proof of this, one way to see that this is true is to imagine that any type of wave can be interpreted as the sum (formally, a linear combination) of uniform plane waves, so perpendicular orientation of the field vectors with respect to the direction of propagation is inescapable. The same procedure yields the uniform plane wave solution to the wave equation for eH, which is eH = ˆx eHx + ˆy eHy (9.62) where eHx = H+ x0e−jβz + H− x0e+jβz (9.63) eHy = H+ y0e−jβz + H− y0e+jβz (9.64) and where H+ x0, H− x0, H+ y0 and H− y0 are arbitrary constants. Note that the solution is essentially the same as that for eE, with the sole difference being that the arbitrary constants may apparently have different values. To this point, we have seen no particular relationship between the electric and magnetic fields, and it may appear that the electric and magnetic fields are independent of each other. However, Maxwell’s Equations – specifically, the two curl equations – make it clear that there must be a relationship between these fields. Subsequently the arbitrary constants in the solutions for eE and eH must also be related. In fact, there are two considerations here: • The magnitude and phase of eE must be related to the magnitude and phase of eH. Since both fields are solutions to the same differential (wave) equation, they may differ by no more than a multiplicative constant. Since the units of eE and eH are V/m and A/m respectively, this constant must be expressible in units of V/m divided by A/m; i.e., in units of Ω, an impedance. • The direction of eE must be related to direction of eH. Let us now address these considerations. Consider an electric field that points in one of the cardinal directions – let’s say +ˆx – and make the definition E0 ≜E+ x0 for notational convenience. Then the electric field intensity may be written as follows: eE = ˆxE0e−jβz (9.65) | Electromagnetics_Vol1_Page_216_Chunk1520 |
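The footnote's suggestion, confirming by substitution that these expressions satisfy the associated wave equation, can be spot-checked numerically. The following sketch uses assumed values for β and for the arbitrary constants, and verifies the general solution of Equation 9.59 against Equation 9.54 by central differences:

```python
import cmath

# Numerical spot-check (illustrative) that
#   E~x(z) = E0+ exp(-j beta z) + E0- exp(+j beta z)     (Equation 9.59)
# satisfies  d^2 E~x/dz^2 + beta^2 E~x = 0               (Equation 9.54).
# beta and the constants E0+, E0- are arbitrary assumed values.
beta = 20.96                     # rad/m (assumed)
Ep, Em = 1.0 + 0.5j, 0.2 - 0.3j  # arbitrary constants E0+ and E0-

def Ex(z):
    return Ep * cmath.exp(-1j * beta * z) + Em * cmath.exp(+1j * beta * z)

z, h = 0.37, 1e-5
d2Edz2 = (Ex(z - h) - 2 * Ex(z) + Ex(z + h)) / h**2  # central difference
residual = d2Edz2 + beta**2 * Ex(z)                  # should be ~0
print(abs(residual) < 1e-3)  # True
```

Any choice of the two constants satisfies the equation, consistent with the remark that their values are fixed only by boundary conditions outside the region of interest.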
202 CHAPTER 9. PLANE WAVES IN LOSSLESS MEDIA Again, there is no loss of generality here since the coordinate system could be rotated in such a way that any uniform plane could be described in this way. We may now determine eH from eE using the Maxwell-Faraday Equation (Section 9.2): ∇× eE = −jωµ eH (9.66) Solving this equation for eH, we find: eH = ∇× eE −jωµ = ∇× ˆxE0e−jβz −jωµ (9.67) Now let us apply the curl operator. The complete expression for the curl operator in Cartesian coordinates is given in Section B.2. Here let us consider one component at a time, starting with the ˆx component: ˆx · ∇× eE = ∂eEz ∂y −∂eEy ∂z (9.68) Since eEy = eEz = 0, the above expression is zero and subsequently eHx = 0. Next, the ˆy component: ˆy · ∇× eE = ∂eEx ∂z −∂eEz ∂x (9.69) Here eEz = 0, so we have simply ˆy · ∇× eE = ∂eEx ∂z (9.70) It is not necessary to repeat this procedure for eHz, since we know in advance that eH must be perpendicular to the direction of propagation and subsequently eHz = 0. Returning to Equation 9.67, we obtain: eH = ˆy 1 −jωµ ∂ ∂z eEx = ˆy 1 −jωµ ∂ ∂z | Electromagnetics_Vol1_Page_217_Chunk1521 |
9.5. UNIFORM PLANE WAVES: CHARACTERISTICS 203 The wave impedance in free space, assigned the symbol η0, is η0 ≜√(µ0/ǫ0) ∼= 377 Ω. (9.74) Wrapping up our solution, we find that if eE is as given by Equation 9.65, then eH = ˆy (E0/η) e−jβz (9.75) 9.5 Uniform Plane Waves: Characteristics [m0039] In Section 9.4, expressions for the electric and magnetic fields are determined for a uniform plane wave in lossless media. If the planar phasefront is perpendicular to the z axis, then waves may propagate in either the +ˆz direction or the −ˆz direction. If we consider only the former, and select eE to point in the +ˆx direction, then we find eE = +ˆxE0e−jβz (9.76) eH = +ˆy (E0/η) e−jβz (9.77) where β = ω√µǫ is the phase propagation constant, η = √(µ/ǫ) is the wave impedance, and E0 is a complex-valued constant associated with the magnitude and phase of the source. This result is in fact completely general for uniform plane waves, since any other possibility may be obtained by simply rotating coordinates. In fact, this is pretty easy because (as determined in Section 9.4) eE, eH, and the direction of propagation are mutually perpendicular, with the direction of propagation pointing in the same direction as eE × eH. In this section, we identify some important characteristics of uniform plane waves, including wavelength and phase velocity. Chances are that much of what appears here will be familiar to the reader; if not, a quick review of Sections 1.3 ("Fundamentals of Waves") and 3.8 ("Wave Propagation on a Transmission Line") is recommended. First, recall that eE and eH are phasors representing physical (real-valued) fields and are not the field values themselves. The actual, physical electric field intensity is E = Re{eEejωt} (9.78) = Re{ˆxE0e−jβzejωt} = ˆx |E0| cos (ωt −βz + ψ) where ψ is the phase of E0. Similarly: H = ˆy (|E0|/η) cos (ωt −βz + ψ) (9.79)
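Equation 9.74 can be evaluated directly from the standard free-space constants; the numerical values below are assumed (standard CODATA-style values), not given in the text:

```python
import math

# Illustrative check of Equation 9.74: the wave impedance of free space,
# eta0 = sqrt(mu0/eps0), using standard (assumed) values of the constants.
mu0 = 4e-7 * math.pi   # H/m
eps0 = 8.854e-12       # F/m
eta0 = math.sqrt(mu0 / eps0)
print(round(eta0, 1), "ohms")  # 376.7 ohms, commonly rounded to 377
```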
204 CHAPTER 9. PLANE WAVES IN LOSSLESS MEDIA c⃝E. Boutet (Modified) CC BY SA 3.0 Figure 9.7: Relationship between the electric field direction, magnetic field direction, and direction of propagation (here, +ˆz). This result is illustrated in Figure 9.7. Note that both E and H (as well as their phasor representations) have the same phase and have the same frequency and position dependence cos (ωt −βz + ψ). Since β is a real-valued constant for lossless media, only the phase, and not the magnitude, varies with z. If ω →0, then β →0 and the fields no longer depend on z; in this case, the field is not propagating. For ω > 0, cos (ωt −βz + ψ) is periodic in z; specifically, it has the same value each time z increases or decreases by 2π/β. This is, by definition, the wavelength λ: λ ≜2π/β (9.80) If we observe this wave at some fixed point (i.e., hold z constant), we find that the electric and magnetic fields are also periodic in time; specifically, they have the same value each time t increases by 2π/ω. We may characterize the speed at which the wave travels by comparing the distance required to experience 2π of phase rotation at a fixed time, which is 2π/β, to the time it takes to experience 2π of phase rotation at a fixed location, which is 2π/ω. This is known as the phase velocity4 vp: vp ≜(2π/β)/(2π/ω) = ω/β (9.81) Note that vp has the units expected from its definition, namely (rad/s)/(rad/m) = m/s. If we make the 4We acknowledge that this is a misnomer, since velocity is properly defined as speed in a specified direction, and vp by itself does not specify direction. In fact, the phase velocity in this case is properly said to be +ˆzvp. Nevertheless, we adopt the prevailing terminology. substitution β = ω√µǫ, we find vp = ω/(ω√µǫ) = 1/√µǫ (9.82) Note that vp, like the wave impedance η, depends only on material properties.
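Equations 9.80 and 9.82 can be spot-checked numerically. The following sketch assumes standard free-space constants, a frequency of 1 GHz, and a hypothetical lossless dielectric with ǫr = 4 and µr = 1; none of these numbers come from the text:

```python
import math

# Illustrative numbers for Equations 9.80 and 9.82: phase velocity
# vp = 1/sqrt(mu*eps) and wavelength lambda = 2*pi/beta. All values assumed.
mu0, eps0 = 4e-7 * math.pi, 8.854e-12   # standard free-space constants
f = 1e9                                  # 1 GHz, arbitrary choice
w = 2 * math.pi * f
vp_free = 1 / math.sqrt(mu0 * eps0)           # Equation 9.82, free space
eps_r = 4.0                                   # hypothetical dielectric, mu_r = 1
vp_med = 1 / math.sqrt(mu0 * eps_r * eps0)    # slower by 1/sqrt(eps_r)
beta_free = w * math.sqrt(mu0 * eps0)
lam_free = 2 * math.pi / beta_free            # Equation 9.80; equals vp/f
print(round(vp_free / 1e8, 2))      # 3.0  (x 10^8 m/s)
print(round(vp_med / vp_free, 2))   # 0.5: half the free-space value
print(round(lam_free, 3), "m")      # 0.3 m at 1 GHz
```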
For example, the phase velocity of an electromagnetic wave in free space, given the special symbol c, is c ≜vp|µ=µ0,ǫ=ǫ0 = 1/√µ0ǫ0 ∼= 3.00 × 10^8 m/s (9.83) This constant is commonly referred to as the speed of light, but in fact it is the phase velocity of an electromagnetic field at any frequency (not just optical frequencies) in free space. Since the permittivity ǫ and permeability µ of any material are greater than those of a vacuum, vp in any material is less than the phase velocity in free space. Summarizing: Phase velocity is the speed at which any point of constant phase appears to travel along the direction of propagation. Phase velocity is maximum (= c) in free space, and slower by a factor of 1/√µrǫr in any other lossless medium. Finally, we note the following relationship between frequency f, wavelength, and phase velocity: λ = 2π/β = 2π/(ω/vp) = 2π/(2πf/vp) = vp/f (9.84) Thus, given any two of the parameters f, λ, and vp, we may solve for the remaining quantity. Also note that as a consequence of the inverse-proportional relationship between λ and vp, we find: At a given frequency, the wavelength in any material is shorter than the wavelength in free space. Example 9.1. Wave propagation in a lossless dielectric. Polyethylene is a low-loss dielectric having ǫr ∼= 2.3. What is the phase velocity in polyethylene? What is wavelength in
9.5. UNIFORM PLANE WAVES: CHARACTERISTICS 205 polyethylene? The frequency of interest is 1 GHz. Solution. Low-loss dielectrics exhibit µr ∼= 1 and σ ≈0. Therefore the phase velocity is vp = 1/√µ0ǫrǫ0 = c/√ǫr ∼= 1.98 × 10^8 m/s (9.85) i.e., very nearly two-thirds of the speed of light in free space. The wavelength at 1 GHz is λ = vp/f ∼= 19.8 cm (9.86) Again, about two-thirds of a wavelength at the same frequency in free space. Returning to polarization and magnitude, it is useful to note that Equation 9.79 could be written in terms of E as follows: H = (1/η) ˆz × E (9.87) i.e., H is perpendicular to both the direction of propagation (here, +ˆz) and E, with magnitude that differs from that of E by the wave impedance η. This simple expression is very useful for quickly determining H given E when the direction of propagation is known. In fact, it is so useful that it is commonly generalized explicitly as follows: H = (1/η) ˆk × E (9.88) where ˆk (here, +ˆz) is the direction of propagation. Similarly, given H, ˆk, and η, we may find E using E = −ηˆk × H (9.89) These spatial relationships can be readily verified using Figure 9.7. Equations 9.88 and 9.89 are known as the plane wave relationships. The plane wave relationships apply equally well to the phasor representations of E and H; i.e., eE = −ηˆk × eH (9.90) eH = (1/η) ˆk × eE (9.91) These equations can be readily verified by applying the definition of the associated phasors (e.g., Equation 9.78). It also turns out that these relationships apply at each point in space, even if the waves are not planar, uniform, or in lossless media. Example 9.2. Analysis of a radially-directed plane wave. Consider the scenario illustrated in Figure 9.8. Here a uniform plane wave having frequency f = 3 GHz is propagating along a path of constant φ, where φ is known but not specified. The phase of the electric field is π/2 radians at ρ = 0 (the origin) and t = 0. The material is an effectively unbounded region of free space.
The electric field is oriented in the +ˆz direction and has peak magnitude of 1 mV/m. Find (a) the electric field intensity in phasor representation, (b) the magnetic field intensity in phasor representation, and (c) the actual, physical electric field along the radial path. Solution: First, realize that this "radially-directed" plane wave is in fact a plane wave, and not a cylindrical wave. It may well be that if we "zoom out" far enough, we are able to perceive a cylindrical wave (for more on this idea, see Section 9.3), or it might simply be that the wave is exactly planar, and cylindrical coordinates just happen to be a convenient coordinate system for this application. In either case, the direction of propagation ˆk = +ˆρ and the solution to this example will be the same. Here's the phasor representation for the electric field intensity of a uniform plane wave in a lossless medium, analogous to Equation 9.76: eE = +ˆzE0e−jβρ (9.92) From the problem statement, |E0| = 1 mV/m. Also from the problem statement, the phase of E0 is π/2 radians; in fact, we could just write E0 = +j |E0|. Thus, the answer to (a) is eE = +ˆzj |E0| e−jβρ (9.93)
206 CHAPTER 9. PLANE WAVES IN LOSSLESS MEDIA where the propagation constant β = ω√µǫ = 2πf√µ0ǫ0 ∼= 62.9 rad/m. The answer to (b) is easiest to obtain from the plane wave relationship: eH = 1 η ˆk × E = 1 η ˆρ × | Electromagnetics_Vol1_Page_221_Chunk1525 |
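The plane wave relationships invoked in step (b) (Equations 9.88 and 9.89) can also be verified numerically. The sketch below uses hypothetical Cartesian values, with ˆk = +ˆz and a 1 mV/m field, rather than the example's cylindrical coordinates:

```python
# Illustrative check of the plane wave relationships (Equations 9.88-9.89),
# using hypothetical values: with E along +x and propagation along +z,
# H = (1/eta) k x E points along +y, and E = -eta k x H recovers E.

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

eta = 376.7             # free-space wave impedance, ohms (assumed value)
k = (0.0, 0.0, 1.0)     # direction of propagation, +z
E = (1e-3, 0.0, 0.0)    # E field, 1 mV/m along +x (assumed)

H = tuple(v / eta for v in cross(k, E))          # Equation 9.88
E_back = tuple(-eta * v for v in cross(k, H))    # Equation 9.89
print(H[1])  # positive: H points along +y
print(all(abs(a - b) < 1e-12 for a, b in zip(E_back, E)))  # True: E recovered
```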
9.6. WAVE POLARIZATION 207 by RJB1 (Modified) Figure 9.9: Linear polarization. Here Ex is shown in blue, Ey is shown in green, and E is shown in red with black vector symbols. In this example the phases of Ex and Ey are zero and φ = −π/4. Ey/Ex. This wave too is said to exhibit linear polarization, because, again, the direction of the electric field is constant with both time and position. In fact, all linearly-polarized uniform plane waves propagating in the +ˆz direction in lossless media can be described as follows: eE = ˆρEρe−jβz (9.99) This is so because ˆρ could be ˆx, ˆy, or any other direction that is perpendicular to ˆz. If one is determined to use Cartesian coordinates, the above expression may be rewritten using (Section 4.3) ˆρ = ˆx cos φ + ˆy sin φ (9.100) yielding eE = (ˆx cos φ + ˆy sin φ) Eρe−jβz (9.101) When written in this form, φ = 0 corresponds to eE = eEx, φ = π/2 corresponds to eE = eEy, and any other value of φ corresponds to some other constant orientation of the electric field vector; see Figure 9.9 for an example. A wave is said to exhibit linear polarization if the direction of the electric field vector does not vary with either time or position. Linear polarization arises when the source of the wave is linearly polarized. A common example is the wave radiated by a straight wire antenna, such as a dipole or a monopole. Linear polarization may also be created by passing a plane wave through a polarizer; this is particularly common at optical frequencies (see "Additional Reading" at the end of this section). A commonly-encountered alternative to linear polarization is circular polarization. For an explanation, let us return to the linearly-polarized plane waves eEx and eEy defined earlier.
If both of these waves exist simultaneously, then the total electric field intensity is simply the sum: eE = eEx + eEy = (ˆxEx + ˆyEy) e−jβz (9.102) If the phases of Ex and Ey are the same, then Ex = Eρ cos φ, Ey = Eρ sin φ, and the above expression is essentially the same as Equation 9.101. In this case, eE is linearly polarized. But what if the phases of Ex and Ey are different? In particular, let’s consider the following case. Let Ex = E0, some complex-valued constant, and let Ey = +jE0, which is E0 phase-shifted by +π/2 radians. With no further math, it is apparent that eEx and eEy are different only in that one is phase-shifted by π/2 radians relative to the other. For the physical (real-valued) fields, this means that Ex has maximum magnitude when Ey is zero and vice versa. As a result, the direction of E = Ex + Ey will rotate in the x−y plane, as shown in Figure 9.10. The rotation of the electric field vector can also be identified mathematically. When Ex = E0 and Ey = +jE0, Equation 9.102 can be written: eE = (ˆx + jˆy) E0e−jβz (9.103) Now reverting from phasor notation to the physical field: E = Re{eE ejωt} = Re{(ˆx + jˆy) E0e−jβzejωt} = ˆx |E0| cos (ωt − βz) + ˆy |E0| cos (ωt − βz + π/2) (9.104) As anticipated, we see that both Ex and Ey vary sinusoidally, but are π/2 radians out of phase | Electromagnetics_Vol1_Page_222_Chunk1526 |
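The rotation implied by Equation 9.104 is easy to check numerically. The following sketch evaluates the two field components at a fixed point over one period; the field magnitude, frequency, and propagation constant are arbitrary illustrative values (not taken from the text):

```python
import numpy as np

# Sketch: evaluate the two components of Equation 9.104 at a fixed
# point in space and confirm that the total field has constant
# magnitude while its direction changes with time (i.e., it rotates).
# E0, omega, and beta are arbitrary illustrative values (assumed).
E0 = 1.0                   # V/m, peak magnitude (assumed)
omega = 2 * np.pi * 1e9    # rad/s (assumed 1 GHz)
beta = 20.0                # rad/m (assumed)
z = 0.0                    # fixed observation point

t = np.linspace(0.0, 1e-9, 1000)              # one full period
Ex = E0 * np.cos(omega * t - beta * z)
Ey = E0 * np.cos(omega * t - beta * z + np.pi / 2)

magnitude = np.sqrt(Ex**2 + Ey**2)            # |E| at each instant
angle = np.arctan2(Ey, Ex)                    # direction of E

print(bool(np.allclose(magnitude, E0)))       # True: constant magnitude
print(bool(np.ptp(angle) > 0))                # True: direction varies
```

Since cos²(θ) + cos²(θ + π/2) = 1, the magnitude is constant while the direction sweeps around the x−y plane, which is exactly the circular rotation described above.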
208 CHAPTER 9. PLANE WAVES IN LOSSLESS MEDIA Figure 9.10: A circularly-polarized wave (in red, with black vector symbols) resulting from the addition of orthogonal linearly-polarized waves (shown in green and blue) that are phase-shifted by π/2 radians. resulting in rotation in the plane perpendicular to the direction of propagation. In the example above, the electric field vector rotates either clockwise or counter-clockwise relative to the direction of propagation. The direction of this rotation can be identified by pointing the thumb of the left hand in the direction of propagation; in this case, the fingers of the left hand curl in the direction of rotation. For this reason, this particular form of circular polarization is known as left circular (or “left-hand” circular) polarization (LCP). If we instead had chosen Ey = −jE0 = −jEx, then the direction of E rotates in the opposite direction, giving rise to right circular (or “right-hand” circular) polarization (RCP). These two conditions are illustrated in Figure 9.11. A wave is said to exhibit circular polarization if the electric field vector rotates with constant magnitude. Left- and right-circular polarizations may be identified by the direction of rotation with respect to the direction of propagation. In engineering applications, circular polarization is useful when the relative orientations of transmit and receive equipment are variable and/or when the medium is able to rotate the electric field vector. For example, radio communications involving satellites in Figure 9.11: Left-circular polarization (LCP; top) and right-circular polarization (RCP; bottom). | Electromagnetics_Vol1_Page_223_Chunk1527 |
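The claim that Ey = +jE0 and Ey = −jE0 produce opposite senses of rotation can also be verified numerically. This sketch (amplitude and frequency are arbitrary assumed values) tracks the sign of the initial change in the field angle at a fixed point:

```python
import numpy as np

# Sketch: the field angle at a fixed point either increases or
# decreases with time depending on whether Ey leads or lags Ex by
# pi/2. Amplitude and frequency are arbitrary (assumed) values.
def rotation_sense(ey_phase):
    """+1 if the field angle initially increases with time, else -1."""
    omega, E0 = 2 * np.pi, 1.0          # assumed values; observe at z = 0
    t = np.array([0.0, 0.01])
    Ex = E0 * np.cos(omega * t)
    Ey = E0 * np.cos(omega * t + ey_phase)
    angles = np.unwrap(np.arctan2(Ey, Ex))
    return 1 if angles[1] > angles[0] else -1

# Ey = +jE0 and Ey = -jE0 correspond to phase shifts of +pi/2 and
# -pi/2 in the physical field; the two rotation senses are opposite.
print(rotation_sense(+np.pi / 2) != rotation_sense(-np.pi / 2))  # True
```

Which sense is labeled "left" versus "right" is a matter of the convention stated in the text; the sketch only confirms that reversing the sign of the phase shift reverses the rotation.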
9.7. WAVE POWER IN A LOSSLESS MEDIUM 209 non-geosynchronous orbits typically employ circular polarization. In particular, satellites of the U.S. Global Positioning System (GPS) transmit circular polarization because of the variable geometry of the space-to-earth radio link and the tendency of the Earth’s ionosphere to rotate the electric field vector through a mechanism known as Faraday rotation (sometimes called the “Faraday effect”). If GPS were instead to transmit using a linear polarization, then a receiver would need to continuously adjust the orientation of its antenna in order to optimally receive the signal. Circularly-polarized radio waves can be generated (or received) using pairs of perpendicularly-oriented dipoles that are fed the same signal but with a 90◦ phase shift, or alternatively by using an antenna that is intrinsically circularly-polarized, such as a helical antenna (see “Additional Reading” at the end of this section). Linear and circular polarization are certainly not the only possibilities. Elliptical polarization results when Ex and Ey do not have equal magnitude. Elliptical polarization is typically not an intended condition, but rather is most commonly observed as a degradation in a system that is nominally linearly- or circularly-polarized. For example, most antennas that are said to be “circularly polarized” instead produce circular polarization only in one direction and various degrees of elliptical polarization in all other directions. Additional Reading: • “Polarization (waves)” on Wikipedia. • “Dipole antenna” on Wikipedia. • “Polarizer” on Wikipedia. • “Faraday effect” on Wikipedia. • “Helical antenna” on Wikipedia. 9.7 Wave Power in a Lossless Medium [m0041] In many applications involving electromagnetic waves, one is less concerned with the instantaneous values of the electric and magnetic fields than with the power associated with the wave. 
In this section, we address the issue of how much power is conveyed by an electromagnetic wave in a lossless medium. The relevant concepts are readily demonstrated in the context of uniform plane waves, as shown in this section. A review of Section 9.5 (“Uniform Plane Waves: Characteristics”) is recommended before reading further. Consider the following uniform plane wave, described in terms of the phasor representation of its electric field intensity: eE = ˆxE0e−jβz (9.105) Here E0 is a complex-valued constant associated with the source of the wave, and β is the positive real-valued propagation constant. Therefore, the wave is propagating in the +ˆz direction in lossless media. The first thing that should be apparent is that the amount of power conveyed by this wave is infinite. The reason is as follows. If the power passing through any finite area is greater than zero, then the total power must be infinite because, for a uniform plane wave, the electric and magnetic field intensities are constant over a plane of infinite area. In practice, we never encounter this situation because all practical plane waves are only “locally planar” (see Section 9.3 for a refresher on this idea). Nevertheless, we seek some way to express the power associated with such waves. The solution is not to seek total power, but rather power per unit area. This quantity is known as the spatial power density, or simply “power density,” and has units of W/m2 (see footnote 5). Then, if we are interested in total power passing through some finite area, then we may 5 Be careful: The quantities power spectral density (W/Hz) and power flux density (W/(m2·Hz)) are also sometimes referred to as “power density.” In this section, we will limit the scope to spatial power density (W/m2). | Electromagnetics_Vol1_Page_224_Chunk1528 |
210 CHAPTER 9. PLANE WAVES IN LOSSLESS MEDIA simply integrate the power density over this area. Let’s skip to the answer, and then consider where this answer comes from. It turns out that the instantaneous power density of a uniform plane wave is the magnitude of the Poynting vector S ≜ E × H (9.106) Note that this equation is dimensionally correct; i.e., the units of E (V/m) times the units of H (A/m) yield the units of spatial power density (V·A/m2, which is W/m2). Also, the direction of E × H is in the direction of propagation (reminder: Section 9.5), which is the direction in which the power is expected to flow. Thus, we have some compelling evidence that |S| is the power density we seek. However, this is not proof – for that, we require the Poynting Theorem, which is a bit outside the scope of the present section, but is addressed in the “Additional Reading” at the end of this section. A bit later we’re going to need to know S for a uniform plane wave, so let’s work that out now. From the plane wave relationships (Section 9.5) we find that the magnetic field intensity associated with the electric field in Equation 9.105 is eH = ˆy (E0/η) e−jβz (9.107) where η = √(µ/ǫ) is the real-valued impedance of the medium. Let ψ be the phase of E0; i.e., E0 = |E0|ejψ. Then E = Re{eE ejωt} = ˆx |E0| cos (ωt − βz + ψ) (9.108) and H = Re{eH ejωt} = ˆy (|E0|/η) cos (ωt − βz + ψ) (9.109) Now applying Equation 9.106, S = ˆz (|E0|2/η) cos2 (ωt − βz + ψ) (9.110) As noted earlier, |S| is only the instantaneous power density, which is still not quite what we are looking for. What we are actually looking for is the time-average power density Save – that is, the average value of |S| over one period T of the wave. This may be calculated as follows: Save = (1/T) ∫ |S| dt = (|E0|2/η) (1/T) ∫ cos2 (ωt − βz + ψ) dt (9.111) where both integrals are taken over one period, from t = t0 to t = t0 + T. Since ω = 2πf = 2π/T, the definite integral equals T/2. We obtain Save = |E0|2/(2η) (9.112) It is useful to check units again at this point. 
Note (V/m)2 divided by Ω is W/m2, as expected. Equation 9.112 is the time-average power density (units of W/m2) associated with a sinusoidally-varying uniform plane wave in lossless media. Note that Equation 9.112 is analogous to a well-known result from electric circuit theory. Recall the time-average power Pave (units of W) associated with a voltage phasor eV across a resistance R is Pave = |eV|2/(2R) (9.113) which closely resembles Equation 9.112. The result is also analogous to the result for a voltage wave on a transmission line (Section 3.20), for which: Pave = |V0+|2/(2Z0) (9.114) where V0+ is a complex-valued constant representing the magnitude and phase of the voltage wave, and Z0 is the characteristic impedance of the transmission line. Here is a good point at which to identify a common pitfall. |E0| and |eV| are the peak magnitudes of the associated real-valued physical quantities. However, these quantities are also routinely given as root mean square (“rms”) quantities. Peak magnitudes are greater by a factor of √2, so Equation 9.112 | Electromagnetics_Vol1_Page_225_Chunk1529 |
9.7. WAVE POWER IN A LOSSLESS MEDIUM 211 expressed in terms of the rms quantity lacks the factor of 1/2. Example 9.3. Power density of a typical radio wave. A radio wave transmitted from a distant location may be perceived locally as a uniform plane wave if there is no nearby structure to scatter the wave; a good example of this is the wave arriving at the user of a cellular telephone in a rural area with no significant terrain scattering. The range of possible signal strengths varies widely, but a typical value of the electric field intensity arriving at the user’s location is 10 µV/m rms. What is the corresponding power density? Solution: From the problem statement, |E0| = 10 µV/m rms. We assume propagation occurs in air, which is indistinguishable from free space at cellular frequencies. If we use Equation 9.112, then we must first convert |E0| from rms to peak magnitude, which is done by multiplying by √ 2. Thus: Save = |E0|2 2η ∼= | Electromagnetics_Vol1_Page_226_Chunk1530 |
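The arithmetic of Example 9.3 is easy to reproduce. The following sketch carries out the rms-to-peak conversion and applies Equation 9.112 with the free-space wave impedance (about 376.7 Ω):

```python
import math

# Sketch of the calculation in Example 9.3: time-average power
# density of a plane wave with |E| = 10 uV/m rms in free space.
mu0 = 4 * math.pi * 1e-7          # H/m, permeability of free space
eps0 = 8.854e-12                  # F/m, permittivity of free space
eta0 = math.sqrt(mu0 / eps0)      # ~376.7 ohms, free-space impedance

E_rms = 10e-6                     # V/m, rms (from the problem statement)
E_peak = math.sqrt(2) * E_rms     # Equation 9.112 expects peak magnitude

S_ave = E_peak**2 / (2 * eta0)    # Equation 9.112
print(S_ave)                      # about 2.65e-13 W/m^2
```

Note that the factor of √2 squared cancels the factor of 1/2, so working directly with rms values gives the same answer via Save = |E0,rms|²/η, consistent with the remark above about the missing factor of 1/2 in rms form.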
212 CHAPTER 9. PLANE WAVES IN LOSSLESS MEDIA Image Credits Fig. 9.1: c⃝Y. Qin, https://commons.wikimedia.org/wiki/File:M0142 fSphericalPhasefront.svg, CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). Fig. 9.2 c⃝Y. Qin, https://commons.wikimedia.org/wiki/File:M0142 fCylindricalPhasefront.svg, CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). Fig. 9.3 c⃝Y. Qin, https://commons.wikimedia.org/wiki/File:M0142 fPlanarPhasefront.svg, CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). Fig. 9.4 c⃝Y. Qin, https://commons.wikimedia.org/wiki/File:M0142 fPlaneWavesReflector.svg, CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). Fig. 9.5 c⃝Y. Qin, https://commons.wikimedia.org/wiki/File:M0142 fLocallyPlanar.svg, CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). Fig. 9.7: c⃝E. Boutet, https://en.wikipedia.org/wiki/File:Onde electromagnetique.svg, CC BY SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/). Heavily modified by author. Fig. 9.9: RJB1, https://commons.wikimedia.org/wiki/File:Linearly Polarized Wave.svg, public domain. Modified by author. Fig. 9.10: Dave3457, https://en.wikipedia.org/wiki/File:Circular.Polarization.Circularly.Polarized.Light With.Components Left.Handed.svg, public domain. Fig. 9.11: Dave3457, https://en.wikipedia.org/wiki/File:Circular.Polarization.Circularly.Polarized.Light Without.Components Left.Handed.svg and https://en.wikipedia.org/wiki/File:Circular.Polarization.Circularly.Polarized.Light Without.Components Right.Handed.svg, public domain. | Electromagnetics_Vol1_Page_227_Chunk1531 |
Appendix A Constitutive Parameters of Some Common Materials A.1 Permittivity of Some Common Materials [m0135] The values below are relative permittivity ǫr ≜ ǫ/ǫ0 for a few materials that are commonly encountered in electrical engineering applications, and for which permittivity emerges as a consideration. Note that “relative permittivity” is sometimes referred to as dielectric constant. Here we consider only the physical (real-valued) permittivity, which is the real part of the complex permittivity (typically indicated as ǫ′ or ǫ′r) for materials exhibiting significant loss. Permittivity varies significantly as a function of frequency. The values below are representative of frequencies from a few kHz to about 1 GHz. The values given are also representative of optical frequencies for materials such as silica that are used in optical applications. Permittivity also varies as a function of temperature. In applications where precision better than about 10% is required, primary references accounting for frequency and temperature should be consulted. The values presented here are gathered from a variety of references, including those indicated in “Additional References.”

Free Space (vacuum): ǫr ≜ 1

Solid Dielectrics:

  Material                     ǫr    Common uses
  Styrofoam(1)                 1.1
  Teflon(2)                    2.1
  Polyethylene                 2.3   coaxial cable
  Polypropylene                2.3
  Silica                       2.4   optical fiber(3)
  Polystyrene                  2.6
  Polycarbonate                2.8
  Rogers RO3003                3.0   PCB substrate
  FR4 (glass epoxy laminate)   4.5   PCB substrate

  (1) Properly known as extruded polystyrene foam (XPS).
  (2) Properly known as polytetrafluoroethylene (PTFE).
  (3) Typically doped with small amounts of other materials to slightly raise or lower the index of refraction (= √ǫr).

Non-conducting spacing materials used in discrete capacitors exhibit ǫr ranging from about 5 to 50. Semiconductors commonly appearing in electronics – including carbon, silicon, germanium, indium phosphide, and so on – typically exhibit ǫr in the range 5–15. 
Glass exhibits ǫr in the range 4–10, depending on composition. Gases, including air, typically exhibit ǫr ∼= 1 to within a tiny fraction of a percent. Liquid water typically exhibits ǫr in the range 72–81. Distilled water exhibits ǫr ≈ 81 at room temperature, whereas sea water tends to be at the
Electromagnetics Vol 1. © 2018 S.W. Ellingson CC BY SA 4.0. https://doi.org/10.21061/electromagnetics-vol-1 | Electromagnetics_Vol1_Page_228_Chunk1532 |
214 APPENDIX A. CONSTITUTIVE PARAMETERS OF SOME COMMON MATERIALS lower end of the range. Other liquids typically exhibit ǫr in the range 10–90, with considerable variation as a function of temperature and frequency. Animal flesh and blood consists primarily of liquid matter and so also exhibits permittivity in this range. Soil typically exhibits ǫr in the range 2.5–3.5 when dry and higher when wet. The permittivity of soil varies considerably depending on composition. Additional Reading: • CRC Handbook of Chemistry and Physics. • “Relative permittivity” on Wikipedia. • “Miscellaneous Dielectric Constants” on microwaves101.com. A.2 Permeability of Some Common Materials [m0136] The values below are relative permeability µr ≜µ/µ0 for a few materials that are commonly encountered in electrical engineering applications, and for which µr is significantly different from 1. These materials are predominantly ferromagnetic metals and (in the case of ferrites) materials containing significant ferromagnetic metal content. Nearly all other materials exhibit µr that is not significantly different from that of free space. The values presented here are gathered from a variety of references, including those indicated in “Additional References” at the end of this section. Be aware that permeability may vary significantly with frequency; values given here are applicable to the frequency ranges for applications in which these materials are typically used. Also be aware that materials exhibiting high permeability are also typically non-linear; that is, permeability depends on the magnitude of the magnetic field. Again, values reported here are those applicable to applications in which these materials are typically used. Free Space (vacuum): µr ≜1. Iron (also referred to by the chemical notation “Fe”) appears as a principal ingredient in many materials and alloys employed in electrical structures and devices. 
Iron exhibits µr that is very high, but which decreases with decreasing purity. 99.95% pure iron exhibits µr ∼ 200,000. This decreases to ∼5000 at 99.8% purity and is typically below 100 for purity less than 99%. Steel is an iron alloy that comes in many forms, with a correspondingly broad range of permeabilities. Electrical steel, commonly used in electrical machinery and transformers when high permeability is desired, exhibits µr ∼ 4000. Stainless steel, encompassing a broad range of alloys used in mechanical applications, exhibits µr in the range 750–1800. Carbon steel, including a broad class of alloys commonly used in structural applications, exhibits µr on the order of 100. | Electromagnetics_Vol1_Page_229_Chunk1533 |
A.3. CONDUCTIVITY OF SOME COMMON MATERIALS 215 Ferrites include a broad range of ceramic materials that are combined with iron and various combinations of other metals and are used as magnets and magnetic devices in various electrical systems. Common ferrites exhibit µr in the range 16–640. Additional Reading: • Section 7.16 (“Magnetic Materials”) • CRC Handbook of Chemistry and Physics. • “Magnetic Materials” on microwaves101.com. • “Permeability (electromagnetism)” on Wikipedia. • “Iron” on Wikipedia. • “Electrical steel” on Wikipedia. • “Ferrite (magnet)” on Wikipedia. A.3 Conductivity of Some Common Materials [m0137] The values below are conductivity σ for a few materials that are commonly encountered in electrical engineering applications, and for which conductivity emerges as a consideration. Note that materials in some applications are described instead in terms of resistivity, which is simply the reciprocal of conductivity. Conductivity may vary significantly as a function of frequency. The values below are representative of frequencies from a few kHz to a few GHz. Conductivity also varies as a function of temperature. In applications where precise values are required, primary references accounting for frequency and temperature should be consulted. The values presented here are gathered from a variety of references, including those indicated in “Additional References” at the end of this section.

Free Space (vacuum): σ ≜ 0

Commonly encountered elements:

  Material    σ (S/m)
  Copper      5.8 × 10⁷
  Gold        4.4 × 10⁷
  Aluminum    3.7 × 10⁷
  Iron        1.0 × 10⁷
  Platinum    0.9 × 10⁷
  Carbon      1.3 × 10⁵
  Silicon     4.4 × 10⁻⁴

Water exhibits σ ranging from about 6 µS/m for highly distilled water (thus, a very poor conductor) to about 5 S/m for seawater (thus, a relatively good conductor), varying also with temperature and pressure. Tap water is typically in the range 5–50 mS/m, depending on the level of impurities present. 
Soil typically exhibits σ in the range 10⁻⁴ S/m for dry soil to about 10⁻¹ S/m for wet soil, varying also due to chemical composition. | Electromagnetics_Vol1_Page_230_Chunk1534 |
216 APPENDIX A. CONSTITUTIVE PARAMETERS OF SOME COMMON MATERIALS Non-conductors. Most other materials that are not well-described as conductors or semiconductors and are dry exhibit σ < 10⁻¹² S/m. Most materials that are considered to be insulators, including air and common dielectrics, exhibit σ < 10⁻¹⁵ S/m, often by several orders of magnitude. Additional Reading: • CRC Handbook of Chemistry and Physics. • “Conductivity (electrolytic)” on Wikipedia. • “Electrical resistivity and conductivity” on Wikipedia. • “Soil resistivity” on Wikipedia. | Electromagnetics_Vol1_Page_231_Chunk1535 |
Appendix B Mathematical Formulas B.1 Trigonometry [m0138] ejθ = cos θ + j sin θ (B.1) cos θ = 1 2 | Electromagnetics_Vol1_Page_232_Chunk1536 |
218 APPENDIX B. MATHEMATICAL FORMULAS Gradient in spherical coordinates: ∇f = ˆr (∂f/∂r) + ˆθ (1/r)(∂f/∂θ) + ˆφ (1/(r sin θ))(∂f/∂φ) (B.8) Divergence Divergence in Cartesian coordinates: ∇· A = ∂Ax/∂x + ∂Ay/∂y + ∂Az/∂z (B.9) Divergence in cylindrical coordinates: ∇· A = (1/ρ) ∂(ρAρ)/∂ρ + (1/ρ) ∂Aφ/∂φ + ∂Az/∂z (B.10) Divergence in spherical coordinates: ∇· A = (1/r2) ∂/∂r | Electromagnetics_Vol1_Page_233_Chunk1537 |
B.3. VECTOR IDENTITIES 219 B.3 Vector Identities [m0140] Algebraic Identities A · (B × C) = B · (C × A) = C · (A × B) (B.18) A × (B × C) = B (A · C) − C (A · B) (B.19) Identities Involving Differential Operators ∇· (∇× A) = 0 (B.20) ∇× (∇f) = 0 (B.21) ∇× (fA) = f (∇× A) + (∇f) × A (B.22) ∇· (A × B) = B · (∇× A) − A · (∇× B) (B.23) ∇· (∇f) = ∇2f (B.24) ∇× ∇× A = ∇(∇· A) − ∇2A (B.25) ∇2A = ∇(∇· A) − ∇× (∇× A) (B.26) Divergence Theorem: Given a closed surface S enclosing a contiguous volume V, ∫V (∇· A) dv = ∮S A · ds (B.27) where the surface normal ds is pointing out of the volume. Stokes’ Theorem: Given a closed curve C bounding a contiguous surface S, ∫S (∇× A) · ds = ∮C A · dl (B.28) where the direction of the surface normal ds is related to the direction of integration along C by the “right-hand rule.” | Electromagnetics_Vol1_Page_234_Chunk1538 |
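Identities such as B.19 are easy to spot-check numerically. This sketch verifies the "BAC-CAB" rule using arbitrary randomly generated vectors:

```python
import numpy as np

# Sketch: numerically spot-check identity B.19,
#   A x (B x C) = B (A . C) - C (A . B),
# using randomly generated 3-vectors.
rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3))   # three random 3-vectors

lhs = np.cross(A, np.cross(B, C))
rhs = B * np.dot(A, C) - C * np.dot(A, B)

print(bool(np.allclose(lhs, rhs)))      # True
```

A passing check with random inputs is of course not a proof, but it is a quick way to catch a mis-remembered sign or ordering when applying these identities.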
Appendix C Physical Constants [m0141] The speed of light in free space (c), which is the phase velocity of any electromagnetic radiation in free space, is ∼= 2.9979 × 10⁸ m/s. This is commonly rounded up to 3 × 10⁸ m/s. This rounding incurs error of ∼= 0.07%, which is usually much less than other errors present in electrical engineering calculations. The charge of an electron is ∼= −1.602 × 10⁻¹⁹ C. The constant e ≜ +1.602176634 × 10⁻¹⁹ C is known as the “elementary charge,” so the charge of the electron is said to be −e. The permittivity of free space (ǫ0) is ∼= 8.854 × 10⁻¹² F/m. The permeability of free space (µ0) is 4π × 10⁻⁷ H/m. The wave impedance of free space (η0) is the ratio of the magnitude of the electric field intensity to that of the magnetic field intensity in free space and is √(µ0/ǫ0) ∼= 376.7 Ω. This is also sometimes referred to as the intrinsic impedance of free space. | Electromagnetics_Vol1_Page_235_Chunk1539 |
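These constants are not independent: c = 1/√(µ0ǫ0) and η0 = √(µ0/ǫ0). A quick sketch confirms that the quoted values are mutually consistent:

```python
import math

# Sketch: check that the free-space constants quoted above are
# mutually consistent via c = 1/sqrt(mu0*eps0) and
# eta0 = sqrt(mu0/eps0).
mu0 = 4 * math.pi * 1e-7       # H/m
eps0 = 8.854e-12               # F/m

c = 1 / math.sqrt(mu0 * eps0)  # speed of light in free space
eta0 = math.sqrt(mu0 / eps0)   # wave impedance of free space

print(c)     # ~2.998e8 m/s
print(eta0)  # ~376.7 ohms
```

The tiny discrepancy from 2.9979 × 10⁸ m/s comes from using the four-digit value of ǫ0; with the full-precision constants the agreement is exact by definition.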
Index acoustic wave equation, see wave equation, acoustic admittance, 64–66 characteristic, see characteristic admittance air, 123, 124 aluminum, 170, 215 Ampere’s law general form, 190–192 magnetostatics, 88, 149–150, 152, 155, 158, 159, 167 amplifier, 59, 62 antenna, 207 dipole, 207 helical, 209 impedance matching, 58, 62 monopole, 207 patch, 58 arcing, 124 attenuation constant, see propagation constant balun, 187 Biot-Savart Law, 26 boundary conditions, 118, 120, 160, 161 capacitance, 34, 124–126 definition, 124 capacitor, 125–126 energy storage, 130 parallel plate, 18, 116–117, 126–128 carbon, 215 Cartesian coordinate system, 70, 76–77 ceramic, 170 CGS (system of units), 14 characteristic admittance, 64 characteristic impedance, 33, 37–40, 48, 63, 210 coaxial cable, 43 definition, 39 measurement, 56 charge density line, 95, 96, 101, 112 surface, 95, 97, 112 volume, 85, 95, 99, 113 charge mobility, 136 circulation, 88 coax, see transmission line cobalt, 171 coil, 152–154, 163, 165 toroidal, see toroidal coil conductance, 34, 141–142 definition, 141 conductivity, 27, 136–138 definition, 136 of common materials, 215–216 conductor good, 136, 137 perfect, 21, 118, 120, 122, 137, 144 perfect (definition), 138 poor, 123 constitutive parameters, 27 copper, 171, 215 Coulomb’s law, 93–94, 100, 103, 105 cross product, 74–75 curl, 88–89, 218 definition, 89 current, 22 conduction, 134 convection, 134 displacement, see displacement current current density, 135–136 line, 135 surface, 135, 157 volume, 135 cylindrical coordinate system, 77–80 datum, 110 diamagnetic material, 170 dielectric (material), 42, 45, 123, 204 examples, 213 lossless, 137 lossy, 137 dielectric breakdown, 124, 138 dielectric constant, 20, 213 dielectric strength, 124 221 | Electromagnetics_Vol1_Page_236_Chunk1540 |
222 INDEX dipole, see antenna displacement current, 150, 192 divergence, 85–86, 218 definition, 85 divergence theorem, 87, 104, 148, 219 dot product, 73–74 duality, 146, 197 electric field, see field, electric electric field intensity, 17–19, 94, 124 boundary conditions, 118 definition, 18 related to current, 136 electric flux density, 21–22, 85 boundary conditions, 120 definition, 21 electrical length, 53 measurement, 56 electromagnetic compatibility (EMC), 185 electromagnetic force, 19, 24 electromagnetic interference (EMI), 30, 185 electromagnetic spectrum, see spectrum electromotive force, see emf electron, 220 electrostatics (description), 93 emf, 179 motional, 180, 187 transformer, 180–183 emitter induction, 55 energy electrostatic, 130–132 kinetic, 17 magnetostatic, 169–170 potential, 17, 105–107, 130 English system of units, 14 Faraday rotation, 209 Faraday’s law, 85, 176, 178–181, 189 ferrite, 170, 215 ferromagnetic material, 171 field definition, 17 electric, 17–19, 21–22 magnetic, 22–27 scalar, 5, 17 vector, 17 field intensity, magnetic, see magnetic field intensity field line (magnetic), 24 filter, 55 flux, 21, 24, 77, 79, 83, 85, 135, 161 definition, 85 magnetic, 163, 178 flux density, electric, see electric flux density flux density, magnetic, see magnetic flux density flux linkage seelinkage, 163 force, 18, 23, 70, 105, 124 electromotive, see emf Fourier series, 12 Fourier transform, 12 FR4, 46, 123, 213 fringing (field), 117, 126, 153, 166, 167 gamma ray, 3 Gauss’ law electric field, 85, 100–101, 103 magnetic field, 147–148 generator, 33, 180, 187–189 geranium, 213 glass, 123, 213 Global Positioning System (GPS), 209 gold, 171, 215 gradient, 84, 114, 217 ground, 111 ground plane, 128 heat, 144 homogeneity, 27 homogeneous (media), 21, 26, 27, 196 hysteresis, 171 impedance characteristic, see characteristic impedance matching, 55, 62–68 of resistance, 138 wave, see wave impedance impedance inverter, 58 independence of path, 107–108 index of 
refraction, 213 indium phosphide, 213 inductance, 34, 163–165 definition, 163, 164 induction, 175–178, 180 inductor, 152, 164 coaxial, 167–168 hysteresis, 171 straight coil, 165–166 infrared, 3 insulator good, 123 | Electromagnetics_Vol1_Page_237_Chunk1541 |
INDEX 223 perfect, 137 poor, 137 inverse square law, 20, 25 iron, 171, 214, 215 isotropic (media), 28, 196 isotropy, 28 joule heating, 144 Joule’s law, 143 kinetic energy, see energy, kinetic Kirchoff’s voltage law electrostatics, 88, 105, 108–110, 118, 180, 189 Laplace’s equation, 115, 116 Laplacian (operator), 91, 218 Lenz’s law, 175, 178, 182 lightning, 124, 138 linear (media), 28, 124, 196 linear and time-invariant (LTI), 28 linearity, 28 linkage, 163 load matched, 48 reactive, 48 lumped-element model, see transmission line magnesium, 170 magnetic field intensity, 26–27 definition, 26 magnetic flux, see flux, magnetic magnetic flux density, 22–25, 85 definition, 24 magnetostatics (description), 146 materials (properties), 27–28, 170–172 materials, magnetic, 170–172 Maxwell’s equations, 175, 196 differential phasor form, 194–196 source-free lossless region, 196 Maxwell-Faraday equation, 88, 180, 189–191, 202 metric system, 14 mica, 124 microstrip, 31, 44–46, 56, 63 mode, 31 monopole, see antenna motional emf, see emf mutual inductance, 164 neper, 40 nickel, 171 non-magnetic (material), 123 nonlinear (media), 27 notation, 15 Ohm’s law, 134, 136, 143, 196 ohmic heating, 144 open circuit, 48, 49, 54 optical (frequency), 3 optical fiber multimode, 31 single mode, 32 paramagnetic material, 170 patch antenna, see antenna perfect conductor, see conductor, perfect permanent magnet, 22 permeability, 25–27, 123, 170 of common materials, 214–215 relative, 25, 214 permittivity, 20, 27, 124, 137 complex-valued, 197 effective, 45 of common materials, 213–214 relative, 20, 123, 213 phase velocity, 3, 8, 40, 204 coaxial cable, 43 in microstrip, 46 measurement, 57 phasefront, 198, 203 phasor, 9–13, 194, 203, 207 phasor (definition), 9 photoelectric effect, 4 photon, 19 pipe, 140 plane wave relationships, 205 platinum, 170, 215 Poisson’s equation, 91, 115–116 polarization, 206 circular, 207 elliptical, 209 linear, 206 polarizer, 207 polycarbonate, 213 polyethylene, 213 
polypropylene, 213 polystyrene, 213 polytetrafluoroethylene, 213 potential difference, 106 potential energy, see energy, potential power density, see waves | Electromagnetics_Vol1_Page_238_Chunk1542 |
224 INDEX power dissipation, 143–144 Poynting theorem, 210 Poynting vector, 210 printed circuit board (PCB), 44, 46, 123 capacitance, 127–128 material, 213 propagation constant, 37 attenuation, 33, 40 phase, 6, 33, 40, 197 quantum mechanics, 19, 23 quarter-wave inverter, 58 radio (frequency), 3 reactance, 165 reflection coefficient, 47–48 definition, 48 relative permeability, see permeability, relative relative permittivity, see permittivity, relative resistance, 34, 138–140 definition, 138 of wire (DC), 139–140 resistivity, 215 resistor, 1 RG-59, 43, 130, 142 right-hand rule cross product, 74 magnetic field of a wire, 152, 155, 157, 182 Stokes’ theorem, 90, 151, 158, 159, 161, 163, 178, 181, 219 saturation (magnetic), 171, 186 scalar field, see field, scalar scalar product, see dot product semiconductor (material), 137, 213 shield, 42 short circuit, 48, 49, 54 shot noise, 94 SI (system of units), 14 signum (sgn) function, 98 silica, 213 silicon, 171, 213, 215 single-stub matching, 66–68 skin effect, 165 soil, 214, 215 solenoid, 152 sound (waves), 5 spatial frequency, 6 spectrum, 3 speed of light, 3, 204, 220 spherical coordinate system, 81–83 standing wave, 41, 49–50 definition, 49 standing wave ratio (SWR), 51–52 definition, 51 steel, 140, 214 Stokes’ theorem, 90, 109, 163, 178, 181, 189, 190, 219 stripline, 44 stub applications, 55, 66–68 definition, 54 open circuit, 54 short circuit, 54 styrofoam, 213 superposition, 9, 28 susceptance, 64, 67 symmetry, 100, 102, 151 teflon, 123, 213 telegrapher’s equations, 35–37 TEM line, see transmission line Th´evenin equivalent circuit, 33 time-invariance (media), 28 toroid, 155 toroidal coil, 155–157 transformer, 164, 183–187 emf, see emf hysteresis, 171 induction, 176 transmission line coaxial, 31, 42–44, 62, 130, 141–142 definition, 30 input impedance, 52–55 lossless, 40 low loss, 40, 41, 43 lumped-element model, 34–35, 41 microstrip, see microstrip quarter-wavelength, 57–60 stripline, 44 stub, see stub time-average 
power, 61 transverse electromagnetic (TEM), 31, 37, 42, 45 two-port representation, 33–34 wave equations, 37 two-port, 33 ultraviolet, 3 unit vector, 70 units, 13–14 | Electromagnetics_Vol1_Page_239_Chunk1543 |
INDEX 225 vector arithmetic, 70–75, 219 definition, 70 identity, 219 position-fixed, 70 position-free, 70 unit, 70 velocity, 70 voltage reflection coefficient, see reflection coefficient voltage standing wave ratio (VSWR), 51 water, 213, 215 wave equation acoustic, 6 electromagnetic, 91 source-free lossless region, 196–197 TEM transmission line, 37 wave impedance, 197, 202, 220 waveguide, 31 wavelength, 2, 3, 7, 204 in microstrip, 46 wavenumber, 6 waves cylindrical, 198 definition, 17 fundamentals, 5–8 guided, 9, 30 plane, 198–206, 209 polarization, 206–209 power density, 209–211 spherical, 198 standing, see standing wave transmission line analogy, 210 unguided, 9 wire DC resistance of, 139–140 work, 105 X-ray, 3 | Electromagnetics_Vol1_Page_240_Chunk1544 |
Computer Science I Dr. Chris Bourke cbourke@cse.unl.edu Department of Computer Science & Engineering University of Nebraska–Lincoln Lincoln, NE 68588, USA 2018/10/12 08:09:12 Version 1.3.7 | ComputerScienceOne_Page_1_Chunk1545 |
Copyleft (Copyright)

The entirety of this book is free and is released under a Creative Commons Attribution-ShareAlike 4.0 International License (see http://creativecommons.org/licenses/by-sa/4.0/ for details).
Draft Notice

This book is a draft that has been released for evaluation and comment. Some of the later chapters are included as placeholders and indicators for the intended scope of the final draft, but are intentionally left blank. The author encourages people to send feedback including suggestions, corrections, and reviews to inform and influence the final draft. Thank you in advance to anyone helping out or sending constructive criticisms.
Preface “If you really want to understand something, the best way is to try and explain it to someone else. That forces you to sort it out in your own mind... that’s really the essence of programming. By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve certainly learned something about it yourself.” —Douglas Adams, Dirk Gently’s Holistic Detective Agency [8] “The world of A.D. 2014 will have few routine jobs that cannot be done better by some machine than by any human being. Mankind will therefore have become largely a race of machine tenders. Schools will have to be oriented in this direction. All the high-school students will be taught the fundamentals of computer technology, will become proficient in binary arithmetic and will be trained to perfection in the use of the computer languages that will have developed out of those like the contemporary Fortran” —Isaac Asimov 1964 I’ve been teaching Computer Science since 2008 and was a Teaching Assistant long before that. Before that I was a student. During that entire time I’ve been continually disappointed in the value (note, not quality) of textbooks, particularly Computer Science textbooks and especially introductory textbooks. Of primary concern are the costs, which have far outstripped inflation over the last decade [30] while not providing any real additional value. New editions with trivial changes are released on a regular basis in an attempt to nullify the used book market. Publishers engage in questionable business practices and unfortunately many institutions are complicit in this process. In established fields such as mathematics and physics, new textbooks are especially questionable as the material and topics don’t undergo many changes. However, in Computer Science, new languages and technologies are created and change at breakneck speeds. 
Faculty and students are regularly trying to give away stacks of textbooks (“Learn Java 4!,” “Introduction to Cold Fusion,” etc.) that are only a few years old and yet are completely obsolete and worthless. The problem is that such books have built-in obsolescence by focusing too much on technological specifics and not enough on concepts. There are dozens of introductory textbooks for Computer Science; add in the fact that there are multiple languages and many gimmicks (“Learn Multimedia Java,” “Gaming with JavaScript,” “Build a Robot with C!”), and it is a publisher’s paradise: hundreds of variations, a growing market, and customers with few alternatives.
That’s why I like organizations like OpenStax (http://openstaxcollege.org/) that attempt to provide free and “open” learning materials. Though they have textbooks for a variety of disciplines, Computer Science is not one of them (currently, that is). This might be due to the fact that there is already a huge number of resources available online such as tutorials, videos, online open courses, and even interactive code learning tools. With such a wealth of resources, why write this textbook then? Firstly, lay off. Secondly, I don’t really expect this book to have much impact beyond my own courses or department. I wanted a resource that presented an introduction to Computer Science the way I teach it in my courses and it wasn’t available. However, if it does find its way into another instructor’s classes or into the hands of an aspiring student who wants to learn, then great! Several years ago our department revamped our introductory courses in a “Renaissance in Computing” initiative in which we redeveloped several different “flavors” of Computer Science I (one intended for Computer Science majors, one for Computer Engineering majors, one for non-CE engineering majors, one for humanities majors, etc.). The courses are intended to be equivalent in content but have a broader appeal to those in different disciplines. The intent was to provide multiple entry points into Computer Science. Once a student had a solid foundation, they could continue into Computer Science II and pick up a second programming language with little difficulty. This basic idea informed how I structured this book. There is a separation of concepts and programming language syntax. The first part of this book uses pseudocode with a minimum of language-specific elements. Subsequent parts of the book recapitulate these concepts but in the context of a specific programming language.
This allows for a “plug-in” style approach to Computer Science: the same book could theoretically be used for multiple courses or the book could be extended by adding another part for a new language with minimal effort. Another inspiration for the structure of this book is the Computer Science I Honors course that I developed. Usually Computer Science majors take CS1 using Java as the primary language while CE students take CS1 using C. Since the honors course consists of both majors (as well as some of the top students), I developed the Honors version to cover both languages at the same time in parallel. This has led to many interesting teaching moments: by covering two languages, it provides opportunities to highlight fundamental differences and concepts in programming languages. It also keeps concepts as the focus of the course emphasizing that syntax and idiosyncrasies of individual languages are only of secondary concern. Finally, actively using multiple languages in the first class provides a better opportunity to extend knowledge to other programming languages–once a student has a solid foundation in one language learning a new one should be relatively easy. The exercises in this book are a variety of exercises I’ve used in my courses over the years. They have been made as generic as possible so that they could be assigned using any language. While some have emphasized the use of “real-world” exercises (whatever that means), my exercises have focused more on solving problems of a mathematical
nature (most of my students have been Engineering students). Some of them are more easily understood if students have had Calculus but it is not absolutely necessary. It may be cliché, but the two quotes above exemplify what I believe a Computer Science I course is about. The second is from Isaac Asimov who was asked at the 1964 World’s Fair what he thought the world of 2014 would look like. His prediction didn’t become entirely true, but I do believe we are on the verge of a fundamental social change that will be caused by more and more automation. Like the industrial revolution, but on a much smaller time scale and to a far greater extent, automation will fundamentally change how we live and not work (I say “not work” because automation will very easily destroy the vast majority of today’s jobs–this represents a huge economic and political challenge that will need to be addressed). The time is quickly approaching where being able to program and develop software will be considered a fundamental skill as essential as arithmetic. I hope this book plays some small role in helping students adjust to that coming world. The first quote describes programming, or more fundamentally Computer Science and “problem solving.” Computers do not solve problems, humans do. Computers only make it possible to automate solutions on a large scale. At the end of the day, the human race is still responsible for tending the machines and will be for some time despite what Star Trek and the most optimistic of AI advocates think. I hope that people find this book useful. If value is a ratio of quality vs cost then this book has already succeeded in having infinite value.¹ If you have suggestions on how to improve it, please feel free to contact me. If you end up using it and finding it useful, please let me know that too!

¹ Or it might be undefined, or NaN, or this book is Exceptional, depending on which language sections you read.
Acknowledgements

I’d like to thank the Department of Computer Science & Engineering at the University of Nebraska–Lincoln for their support during the writing and maintenance of this book. This book is dedicated to my family.
Contents

Copyleft (Copyright) . . . i
Draft Notice . . . iii
Preface . . . v
Acknowledgements . . . ix

1. Introduction . . . 1
   1.1. Problem Solving . . . 2
   1.2. Computing Basics . . . 4
   1.3. Basic Program Structure . . . 5
   1.4. Syntax Rules & Pseudocode . . . 12
   1.5. Documentation, Comments, and Coding Style . . . 14
2. Basics . . . 17
   2.1. Control Flow . . . 17
        2.1.1. Flowcharts . . . 17
   2.2. Variables . . . 18
        2.2.1. Naming Rules & Conventions . . . 19
        2.2.2. Types . . . 22
        2.2.3. Declaring Variables: Dynamic vs. Static Typing . . . 31
        2.2.4. Scoping . . . 32
   2.3. Operators . . . 33
        2.3.1. Assignment Operators . . . 33
        2.3.2. Numerical Operators . . . 35
        2.3.3. String Concatenation . . . 37
        2.3.4. Order of Precedence . . . 38
        2.3.5. Common Numerical Errors . . . 38
        2.3.6. Other Operators . . . 40
   2.4. Basic Input/Output . . . 41
        2.4.1. Standard Input & Output . . . 42
        2.4.2. Graphical User Interfaces . . . 42
        2.4.3. Output Using printf()-style Formatting . . . 43
        2.4.4. Command Line Input . . . 44
   2.5. Debugging . . . 46
        2.5.1. Types of Errors . . . 46
        2.5.2. Strategies . . . 49
   2.6. Examples . . . 50
        2.6.1. Temperature Conversion . . . 50
        2.6.2. Quadratic Roots . . . 51
   2.7. Exercises . . . 51
3. Conditionals . . . 65
   3.1. Logical Operators . . . 65
        3.1.1. Comparison Operators . . . 66
        3.1.2. Negation . . . 68
        3.1.3. Logical And . . . 69
        3.1.4. Logical Or . . . 70
        3.1.5. Compound Statements . . . 71
        3.1.6. Short Circuiting . . . 74
   3.2. The If Statement . . . 75
   3.3. The If-Else Statement . . . 76
   3.4. The If-Else-If Statement . . . 78
   3.5. Ternary If-Else Operator . . . 82
   3.6. Examples . . . 82
        3.6.1. Meal Discount . . . 82
        3.6.2. Look Before You Leap . . . 83
        3.6.3. Comparing Elements . . . 84
        3.6.4. Life & Taxes . . . 85
   3.7. Exercises . . . 87
4. Loops . . . 95
   4.1. While Loops . . . 97
        4.1.1. Example . . . 98
   4.2. For Loops . . . 99
        4.2.1. Example . . . 100
   4.3. Do-While Loops . . . 100
   4.4. Foreach Loops . . . 102
   4.5. Other Issues . . . 103
        4.5.1. Nested Loops . . . 103
        4.5.2. Infinite Loops . . . 103
        4.5.3. Common Errors . . . 105
        4.5.4. Equivalency of Loops . . . 106
   4.6. Problem Solving With Loops . . . 106
   4.7. Examples . . . 107
        4.7.1. For vs While Loop . . . 107
        4.7.2. Primality Testing . . . 108
        4.7.3. Paying the Piper . . . 109
   4.8. Exercises . . . 111
5. Functions . . . 133
   5.1. Defining & Using Functions . . . 134
        5.1.1. Function Signatures . . . 134
        5.1.2. Calling Functions . . . 136
        5.1.3. Organizing . . . 137
   5.2. How Functions Work . . . 137
        5.2.1. Call By Value . . . 140
        5.2.2. Call By Reference . . . 140
   5.3. Other Issues . . . 142
        5.3.1. Functions as Entities . . . 142
        5.3.2. Function Overloading . . . 144
        5.3.3. Variable Argument Functions . . . 145
        5.3.4. Optional Parameters & Default Values . . . 145
   5.4. Exercises . . . 146
6. Error Handling . . . 151
   6.1. Error Handling . . . 153
   6.2. Error Handling Strategies . . . 153
        6.2.1. Defensive Programming . . . 153
        6.2.2. Exceptions . . . 155
   6.3. Exercises . . . 157
7. Arrays, Collections & Dynamic Memory . . . 159
   7.1. Basic Usage . . . 160
   7.2. Static & Dynamic Memory . . . 162
        7.2.1. Dynamic Memory . . . 164
        7.2.2. Shallow vs. Deep Copies . . . 166
   7.3. Multidimensional Arrays . . . 166
   7.4. Other Collections . . . 167
   7.5. Exercises . . . 168
8. Strings . . . 177
   8.1. Basic Operations . . . 177
   8.2. Comparisons . . . 178
   8.3. Tokenizing . . . 179
   8.4. Exercises . . . 179
9. File Input/Output . . . 183
   9.1. Processing Files . . . 183
        9.1.1. Paths . . . 184
        9.1.2. Error Handling . . . 185
        9.1.3. Buffered and Unbuffered . . . 187
        9.1.4. Binary vs Text Files . . . 187
   9.2. Exercises . . . 188
10. Encapsulation & Objects . . . 197
   10.1. Objects . . . 198
        10.1.1. Defining . . . 198
        10.1.2. Creating . . . 199
        10.1.3. Using Objects . . . 200
   10.2. Design Principles & Best Practices . . . 200
   10.3. Exercises . . . 201
11. Recursion . . . 203
   11.1. Writing Recursive Functions . . . 204
        11.1.1. Tail Recursion . . . 205
   11.2. Avoiding Recursion . . . 206
        11.2.1. Memoization . . . 208
   11.3. Exercises . . . 209
12. Searching & Sorting . . . 211
   12.1. Searching . . . 211
        12.1.1. Linear Search . . . 212
        12.1.2. Binary Search . . . 213
        12.1.3. Analysis . . . 215
   12.2. Sorting . . . 220
        12.2.1. Selection Sort . . . 221
        12.2.2. Insertion Sort . . . 224
        12.2.3. Quick Sort . . . 227
        12.2.4. Merge Sort . . . 232
        12.2.5. Other Sorts . . . 237
        12.2.6. Comparison & Summary . . . 237
   12.3. Searching & Sorting In Practice . . . 238
        12.3.1. Using Libraries and Comparators . . . 238
        12.3.2. Preventing Arithmetic Errors . . . 239
        12.3.3. Avoiding the Difference Trick . . . 241
        12.3.4. Importance of a Total Order . . . 242
        12.3.5. Artificial Ordering . . . 242
        12.3.6. Sorting Stability . . . 243
   12.4. Exercises . . . 244
13. Graphical User Interfaces & Event Driven Programming . . . 247
14. Introduction to Databases & Database Connectivity . . . 249
I. The C Programming Language . . . 251

15. Basics . . . 253
   15.1. Getting Started: Hello World . . . 253
   15.2. Basic Elements . . . 254
        15.2.1. Basic Syntax Rules . . . 255
        15.2.2. Preprocessor Directives . . . 255
        15.2.3. Comments . . . 258
        15.2.4. The main() Function . . . 259
   15.3. Variables . . . 260
        15.3.1. Declaration & Assignment . . . 260
   15.4. Operators . . . 262
   15.5. Basic I/O . . . 263
   15.6. Examples . . . 265
        15.6.1. Converting Units . . . 265
        15.6.2. Computing Quadratic Roots . . . 267
16. Conditionals . . . 271
   16.1. Logical Operators . . . 271
        16.1.1. Order of Precedence . . . 273
        16.1.2. Comparing Strings and Characters . . . 273
   16.2. If, If-Else, If-Else-If Statements . . . 274
   16.3. Examples . . . 276
        16.3.1. Computing a Logarithm . . . 276
        16.3.2. Life & Taxes . . . 277
        16.3.3. Quadratic Roots Revisited . . . 279
17. Loops . . . 283
   17.1. While Loops . . . 283
   17.2. For Loops . . . 285
   17.3. Do-While Loops . . . 285
   17.4. Other Issues . . . 286
   17.5. Examples . . . 287
        17.5.1. Normalizing a Number . . . 287
        17.5.2. Summation . . . 287
        17.5.3. Nested Loops . . . 288
        17.5.4. Paying the Piper . . . 288
18. Functions . . . 291
   18.1. Defining & Using Functions . . . 291
        18.1.1. Declaration: Prototypes . . . 291
        18.1.2. Void Functions . . . 293
        18.1.3. Organizing Functions . . . 294
        18.1.4. Calling Functions . . . 294
   18.2. Pointers . . . 295
        18.2.1. Passing By Reference . . . 297
        18.2.2. Function Pointers . . . 300
   18.3. Examples . . . 301
        18.3.1. Generalized Rounding . . . 301
        18.3.2. Quadratic Roots . . . 303
19. Error Handling . . . 305
   19.1. Language Supported Error Codes . . . 305
        19.1.1. POSIX Error Codes . . . 306
   19.2. Error Handling By Design . . . 308
   19.3. Enumerated Types . . . 309
   19.4. Using Enumerated Types for Error Codes . . . 310
20. Arrays . . . 313
   20.1. Basic Usage . . . 313
   20.2. Dynamic Memory . . . 316
   20.3. Using Arrays with Functions . . . 318
   20.4. Multidimensional Arrays . . . 319
        20.4.1. Contiguous 2-D Arrays . . . 322
   20.5. Dynamic Data Structures . . . 324
21. Strings . . . 325
   21.1. Character Arrays . . . 325
   21.2. String Library . . . 327
   21.3. Arrays of Strings . . . 330
   21.4. Comparisons . . . 331
   21.5. Conversions . . . 332
   21.6. Tokenizing . . . 333
22. File I/O . . . 335
   22.1. Opening Files . . . 335
   22.2. Reading & Writing . . . 336
        22.2.1. Plaintext Files . . . 336
        22.2.2. Binary Files . . . 338
   22.3. Closing Files . . . 339
23. Structures . . . 341
   23.1. Defining Structures . . . 341
        23.1.1. Alternative Declarations . . . 342
        23.1.2. Nested Structures . . . 343
   23.2. Usage . . . 344
        23.2.1. Declaration & Initialization . . . 344
        23.2.2. Selection Operators . . . 346
   23.3. Arrays of Structures . . . 348
   23.4. Using Structures With Functions . . . 351
        23.4.1. Factory Functions . . . 353
        23.4.2. To String Functions . . . 354
        23.4.3. Passing Arrays of Structures . . . 355
24. Recursion . . . 357
25. Searching & Sorting . . . 361
   25.1. Comparator Functions . . . 361
   25.2. Function Pointers . . . 365
   25.3. Searching & Sorting . . . 372
        25.3.1. Searching . . . 372
        25.3.2. Sorting . . . 374
        25.3.3. Examples . . . 374
   25.4. Other Considerations . . . 377
        25.4.1. Sorting Pointers to Elements . . . 377

II. The Java Programming Language . . . 381

26. Basics . . . 383
   26.1. Getting Started: Hello World . . . 384
   26.2. Basic Elements . . . 385
        26.2.1. Basic Syntax Rules . . . 385
        26.2.2. Program Structure . . . 386
        26.2.3. The main() Method . . . 389
        26.2.4. Comments . . . 389
   26.3. Variables . . . 390
        26.3.1. Declaration & Assignment . . . 391
   26.4. Operators . . . 393
   26.5. Basic I/O . . . 395
   26.6. Examples . . . 396
        26.6.1. Converting Units . . . 396
        26.6.2. Computing Quadratic Roots . . . 400
27. Conditionals . . . 403
   27.1. Logical Operators . . . 403
        27.1.1. Order of Precedence . . . 405
        27.1.2. Comparing Strings and Characters . . . 406
   27.2. If, If-Else, If-Else-If Statements . . . 407
   27.3. Examples . . . 408
        27.3.1. Computing a Logarithm . . . 408
        27.3.2. Life & Taxes . . . 410
        27.3.3. Quadratic Roots Revisited . . . 411
28. Loops . . . 415
   28.1. While Loops . . . 415
   28.2. For Loops . . . 417
   28.3. Do-While Loops . . . 417
   28.4. Enhanced For Loops . . . 418
   28.5. Examples . . . 419
        28.5.1. Normalizing a Number . . . 419
        28.5.2. Summation . . . 419
        28.5.3. Nested Loops . . . 420
        28.5.4. Paying the Piper . . . 421
29. Methods . . . 423
   29.1. Defining Methods . . . 424
        29.1.1. Void Methods . . . 426
        29.1.2. Using Methods . . . 426
        29.1.3. Passing By Reference . . . 427
   29.2. Examples . . . 428
        29.2.1. Generalized Rounding . . . 428
30. Error Handling & Exceptions . . . 431
   30.1. Exceptions . . . 431
        30.1.1. Catching Exceptions . . . 431
        30.1.2. Throwing Exceptions . . . 433
        30.1.3. Creating Custom Exceptions . . . 433
        30.1.4. Checked Exceptions . . . 434
   30.2. Enumerated Types . . . 436
        30.2.1. More Tricks . . . 437
31. Arrays . . . 439
   31.1. Basic Usage . . . 439
   31.2. Dynamic Memory . . . 442
   31.3. Using Arrays with Methods . . . 442
   31.4. Multidimensional Arrays . . . 443
   31.5. Dynamic Data Structures . . . 444
32. Strings . . . 449
   32.1. Basics . . . 449
   32.2. String Methods . . . 450
   32.3. Arrays of Strings . . . 452
   32.4. Comparisons . . . 453
   32.5. Tokenizing . . . 455
Contents 33.File I/O 457 33.1. File Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457 33.2. File Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459 34.Objects 461 34.1. Data Visibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462 34.2. Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463 34.2.1. Accessor & Mutator Methods . . . . . . . . . . . . . . . . . . . . 464 34.3. Constructors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467 34.4. Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469 34.5. Common Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469 34.6. Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472 34.7. Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474 35.Recursion 479 36.Searching & Sorting 483 36.1. Comparators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483 36.2. Searching & Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487 36.2.1. Searching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487 36.2.2. Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488 36.3. Other Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488 36.3.1. Sorted Collections . . . . . . . . . . . . . . . . . . . . . . . . . . . 488 36.3.2. Handling null values . . . . . . . . . . . . . . . . . . . . . . . . 490 36.3.3. Importance of equals() and hashCode() Methods . . . . . . . 491 36.3.4. Java 8: Lambda Expressions . . . . . . . . . . . . . . . . . . . . . 493 III. The PHP Programming Language 495 37.Basics 497 37.1. Getting Started: Hello World . . . . . . . . . . . . . . . . . | ComputerScienceOne_Page_21_Chunk1568 |
. . . . . . . 498 37.2. Basic Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499 37.2.1. Basic Syntax Rules . . . . . . . . . . . . . . . . . . . . . . . . . . 499 37.2.2. PHP Tags . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500 37.2.3. Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500 37.2.4. Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500 37.2.5. Entry Point & Command Line Arguments . . . . . . . . . . . . . 502 37.3. Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502 37.3.1. Using Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503 37.4. Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504 37.4.1. Type Juggling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505 37.4.2. String Concatenation . . . . . . . . . . . . . . . . . . . . . . . . . 508 37.5. Basic I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508 xix | ComputerScienceOne_Page_21_Chunk1569 |
Contents 37.6. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509 37.6.1. Converting Units . . . . . . . . . . . . . . . . . . . . . . . . . . . 509 37.6.2. Computing Quadratic Roots . . . . . . . . . . . . . . . . . . . . . 512 38.Conditionals 515 38.1. Logical Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515 38.1.1. Order of Precedence . . . . . . . . . . . . . . . . . . . . . . . . . 517 38.2. If, If-Else, If-Else-If Statements . . . . . . . . . . . . . . . . . . . . . . . 517 38.3. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519 38.3.1. Computing a Logarithm . . . . . . . . . . . . . . . . . . . . . . . 519 38.3.2. Life & Taxes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520 38.3.3. Quadratic Roots Revisited . . . . . . . . . . . . . . . . . . . . . . 522 39.Loops 527 39.1. While Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527 39.2. For Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528 39.3. Do-While Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529 39.4. Foreach Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529 39.5. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530 39.5.1. Normalizing a Number . . . . . . . . . . . . . . . . . . . . . . . . 530 39.5.2. Summation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530 39.5.3. Nested Loops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531 39.5.4. Paying the Piper . . . . . . . . . . . . . . . . . . . . . . . . . . . 531 40.Functions 535 40.1. Defining & Using Functions . . . . . . . . . . . . . . . . . . . . . . . . . 535 40.1.1. Declaring Functions . . . . . . . . . . | ComputerScienceOne_Page_22_Chunk1570 |
. . . . . . . . . . . . . . . . 535 40.1.2. Organizing Functions . . . . . . . . . . . . . . . . . . . . . . . . . 537 40.1.3. Calling Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 537 40.1.4. Passing By Reference . . . . . . . . . . . . . . . . . . . . . . . . . 538 40.1.5. Optional & Default Parameters . . . . . . . . . . . . . . . . . . . 539 40.1.6. Function Pointers . . . . . . . . . . . . . . . . . . . . . . . . . . . 540 40.2. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540 40.2.1. Generalized Rounding . . . . . . . . . . . . . . . . . . . . . . . . 540 40.2.2. Quadratic Roots . . . . . . . . . . . . . . . . . . . . . . . . . . . 541 41.Error Handling & Exceptions 543 41.1. Throwing Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543 41.2. Catching Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543 41.3. Creating Custom Exceptions . . . . . . . . . . . . . . . . . . . . . . . . . 544 42.Arrays 547 42.1. Creating Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547 xx | ComputerScienceOne_Page_22_Chunk1571 |
Contents 42.2. Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547 42.2.1. Strings as Indices . . . . . . . . . . . . . . . . . . . . . . . . . . . 549 42.2.2. Non-Contiguous Indices . . . . . . . . . . . . . . . . . . . . . . . 549 42.2.3. Key-Value Initialization . . . . . . . . . . . . . . . . . . . . . . . 549 42.3. Useful Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550 42.4. Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551 42.5. Adding Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552 42.6. Removing Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553 42.7. Using Arrays in Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 554 42.8. Multidimensional Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . 555 43.Strings 557 43.1. Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557 43.2. String Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558 43.3. Arrays of Strings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560 43.4. Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560 43.5. Tokenizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561 44.File I/O 563 44.1. Opening Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563 44.2. Reading & Writing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564 44.2.1. Using URLs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565 44.2.2. Closing Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565 45.Objects 567 45.1. Data Visibility . . . . . . . . . . . . . . . | ComputerScienceOne_Page_23_Chunk1572 |
. . . . . . . . . . . . . . . . . 568 45.2. Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568 45.2.1. Accessor & Mutator Methods . . . . . . . . . . . . . . . . . . . . 570 45.3. Constructors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571 45.4. Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572 45.5. Common Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573 45.6. Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573 45.7. Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574 46.Recursion 577 47.Searching & Sorting 581 47.1. Comparator Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581 47.1.1. Searching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584 47.1.2. Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584 Glossary 587 Acronyms 599 xxi | ComputerScienceOne_Page_23_Chunk1573 |
Contents Index 610 References 613 xxii | ComputerScienceOne_Page_24_Chunk1574 |
List of Algorithms

1.1. An example of pseudocode: finding a minimum value
2.1. Assignment Operator Demonstration
2.2. Addition and Subtraction Demonstration
2.3. Multiplication and Division Demonstration
2.4. Temperature Conversion Program
2.5. Quadratic Roots Program
3.1. An if-statement
3.2. An if-else Statement
3.3. Example If-Else-If Statement
3.4. General If-Else-If Statement
3.5. If-Else-If Statement With a Bug
3.6. A simple receipt program
3.7. Preventing Division By Zero Using an If Statement
3.8. Comparing Students by Name
3.9. Computing Tax Liability with If-Else-If
3.10. Computing Tax Credit with If-Else-If
4.1. Counter-Controlled While Loop
4.2. Normalizing a Number With a While Loop
4.3. A General For Loop
4.4. Counter-Controlled For Loop
4.5. Summation of Numbers in a For Loop
4.6. Counter-Controlled Do-While Loop
4.7. Flag-Controlled Do-While Loop
4.8. Example Foreach Loop
4.9. Foreach Loop Computing Grades
4.10. Nested For Loops
4.11. Infinite Loop
4.12. Computing the Geometric Series Using a For Loop
4.13. Computing the Geometric Series Using a While Loop
4.14. Determining if a Number is Prime or Composite
4.15. Counting the number of primes
4.16. Computing a loan amortization schedule
4.17. Scaling a Value
5.1. A function in pseudocode
5.2. Using a function
11.1. Recursive CountDown(n) Function
11.2. Recursive Fibonacci(n) Function
11.3. Recursive Fibonacci(n) Function With Memoization
12.1. Linear Search
12.2. Recursive Binary Search Algorithm, BinarySearch(A, l, r, ek)
12.3. Iterative Binary Search Algorithm, BinarySearch(A, ek)
12.4. Selection Sort
12.5. Insertion Sort
12.6. QuickSort
12.7. In-Place Partition
12.8. MergeSort
12.9. Merge
List of Code Samples

1.1. A simple program in C
1.2. A simple program in C, compiled to assembly
1.3. A simple program in C, resulting machine code formatted in hexadecimal (partial)
2.1. Example of variable scoping in C
2.2. Compound Assignment Operators in C
2.3. printf() examples in C
2.4. Output Result
4.1. Zune Bug
15.1. Hello World Program in C
15.2. Fahrenheit-to-Celsius Conversion Program in C
15.3. Quadratic Roots Program in C
16.1. Examples of Conditional Statements in C
16.2. Logarithm Calculator Program in C
16.3. Tax Program in C
16.4. Quadratic Roots Program in C With Error Checking
17.1. While Loop in C
17.2. Flag-controlled While Loop in C
17.3. For Loop in C
17.4. Do-While Loop in C
17.5. Normalizing a Number with a While Loop in C
17.6. Summation of Numbers using a For Loop in C
17.7. Nested For Loops in C
17.8. Loan Amortization Program in C
19.1. Using the errno.h library
23.1. A Student structure declaration
25.1. C Function Pointer Syntax Examples
25.2. C Search Examples
25.3. C Sort Examples
25.4. C Comparator Function for Strings
25.5. Sorting Structures via Pointers
25.6. Handling Null Values
26.1. Hello World Program in Java
26.2. Basic Input/Output in Java
26.3. Fahrenheit-to-Celsius Conversion Program in Java
26.4. Quadratic Roots Program in Java
27.1. Examples of Conditional Statements in Java
27.2. Logarithm Calculator Program in Java
27.3. Tax Program in Java
27.4. Quadratic Roots Program in Java With Error Checking
28.1. While Loop in Java
28.2. Flag-controlled While Loop in Java
28.3. For Loop in Java
28.4. Do-While Loop in Java
28.5. Enhanced For Loops in Java Example 1
28.6. Enhanced For Loops in Java Example 2
28.7. Normalizing a Number with a While Loop in Java
28.8. Summation of Numbers using a For Loop in Java
28.9. Nested For Loops in Java
28.10. Loan Amortization Program in Java
34.1. The completed Java Student class
36.1. Java Search Examples
36.2. Using Java Collection's Sort Method
36.3. Handling Null Values in Java Comparators
37.1. Hello World Program in PHP
37.2. Hello World Program in PHP with HTML
37.3. Type Juggling in PHP
37.4. Fahrenheit-to-Celsius Conversion Program in PHP
37.5. Quadratic Roots Program in PHP
38.1. Examples of Conditional Statements in PHP
38.2. Logarithm Calculator Program in PHP
38.3. Tax Program in PHP
38.4. Quadratic Roots Program in PHP With Error Checking
39.1. While Loop in PHP
39.2. Flag-controlled While Loop in PHP
39.3. For Loop in PHP
39.4. Do-While Loop in PHP
39.5. Normalizing a Number with a While Loop in PHP
39.6. Summation of Numbers using a For Loop in PHP
39.7. Nested For Loops in PHP
39.8. Loan Amortization Program in PHP
44.1. Processing a file line-by-line in PHP
45.1. The completed PHP Student class
47.1. Using PHP's usort() Function
List of Figures

1.1. Depiction of Computer Memory
1.2. A Compiling Process
2.1. Types of Flowchart Nodes
2.2. Example of a flowchart for a simple ATM process
2.3. Elements of a printf() statement in C
2.4. Intersection of two circles
3.1. Control flow diagrams for sequential control flow and an if-statement
3.2. An if-else Flow Chart
3.3. Control Flow for an If-Else-If Statement
3.4. Quadrants of the Cartesian Plane
3.5. Three types of triangles
3.6. Intersection of Two Rectangles
3.7. Examples of Floor Tiling
4.1. A Typical Loop Flow Chart
4.2. A Do-While Loop Flow Chart. The continuation condition is checked after the loop body.
4.3. Plot of f(x) = sin(x)/x
4.4. A rectangle for the interval [−5, 5]
4.5. Follow the bouncing ball
4.6. Sampling points in a circle
4.7. Regular polygons
4.8. A polygon and its centroid. Whoo!
5.1. A function declaration (prototype) in the C programming language with the return type, identifier, and parameter list labeled
5.2. Program Stack
5.3. Demonstration of Pass By Value
5.4. Demonstration of Pass By Reference
7.1. Example of an Array
7.2. Example returning a static array
7.3. Pitfalls of Returning Static Arrays
7.4. Depiction of Application Memory
7.5. Shallow vs. Deep Copies
9.1. Linux Tree Directory Structure
9.2. An example polygon for n = 5
9.3. A Word Search
9.4. A solved Sudoku puzzle
9.5. A DNA Sequence
9.6. Codon Table for RNA to Protein Translation
11.1. Recursive Fibonacci Computation Tree
12.1. Array of Integers
12.2. A Sorted Array
12.3. Binary Search Example
12.4. Example of the benefit of ordered (indexed) elements in Windows 7
12.5. Selection Sort Example
12.6. Insertion Sort Example
12.7. Partitioning Example 1
12.8. Partitioning Example 2
12.9. Partitioning Example 3
12.10. Merge Sort Example
12.11. Merge Example
12.12. Generalized Sorting with a Comparator
18.1. Pointer Operations
20.1. Dynamically Allocating Multidimensional Arrays
20.2. Contiguous Two Dimensional Array
21.1. Example of a character array (string) in C
23.1. Contiguous Structure Array
23.2. Array of Structure Pointers
23.3. Hybrid Array of Structures
1. Introduction

Computers are awesome. The human race has seen more advancements in the last 50 years than in the previous 10,000 years of human history. Technology has transformed the way we live our daily lives and how we interact with each other, and it has changed the course of our history. Today, everyone carries a smartphone with more computational power than supercomputers from even 20 years ago. Computing has become ubiquitous; the "internet of things" will soon become a reality in which every device is interconnected and data is collected and available about even the smallest of minutiae.

However, computers are also dumb. Despite the most fantastical depictions in science fiction and the hopes of Artificial Intelligence, computers can only do what they are told to do. The fundamental art of Computer Science is problem solving. Computers are not good at problem solving; you are the problem solver. It is still up to you, the user, to approach a complex problem, study it, understand it, and develop a solution to it. Computers are only good at automating solutions once you have solved the problem.

Computational sciences have become a fundamental tool of almost every discipline. Scholars have used textual analysis and data mining techniques to analyze classical literature and historic texts, providing new insights and opening new areas of study. Astrophysicists have used computational analysis to detect dozens of new exoplanets. Complex visualizations and models can predict astronomical collisions on a galactic scale. Physicists have used big data analytics to push the boundaries of our understanding of matter in the search for the Higgs boson and the study of elementary particles. Chemists simulate the interaction of millions of combinations of compounds without the need for expensive and time-consuming physical experiments. Biologists use massively distributed computing models to simulate protein folding and other complex processes.
Meteorologists can predict weather and climatic changes with ever greater accuracy. Technology and data analytics have changed how political campaigns are run and how products are marketed and even delivered. Social networks can be data mined to track and predict the spread of flu epidemics.

Computing and automation will only continue to grow. The time is soon coming when basic computational thinking and the ability to develop software will be considered a basic skill necessary in every discipline, a requirement for many jobs, and an essential skill akin to arithmetic.

Computer Science is not programming. Programming is a necessary skill, but it is only the beginning. This book is intended to get you started on your journey.
1.1. Problem Solving

At its heart, Computer Science is about problem solving. That is not to say that only Computer Science is about problem solving. It would be hubris to think that Computer Science holds a monopoly on "problem solving." Indeed, it would be hard to find any discipline in which solving problems was not a substantial aspect or motivation, if not integral. Instead, Computer Science is the study of computers and computation. It involves studying and understanding computational processes and the development of algorithms and techniques, and how they apply to problems.

Problem solving skills are not something that can be distilled down into a single step-by-step process. Each area and each problem comes with its own unique challenges and considerations. General problem solving techniques can be identified, studied, and taught, but problem solving skills are something that come with experience, hard work, and, most importantly, failure. Problem solving is part and parcel of the human experience.

That doesn't mean we can't identify techniques and strategies for approaching problems, in particular problems that lend themselves to computational solutions. A prerequisite to solving a problem is understanding it. What is the problem? Who or what entities are involved in the problem? How do those entities interact with each other? What are the problems or deficiencies that need to be addressed? By answering these questions, we get an idea of where we are.

Ultimately, what is desired in a solution? What are the objectives that need to be achieved? What would an ideal solution look like or what would it do? Who would use the solution and how would they use it? By answering these questions, we get an idea of where we want to be. Once we know where we are and where we want to be, the problem solving process can begin: how do we get from point A to point B?

One of the first things a good engineer asks is: does a solution already exist?
If a solution already exists, then the problem is already solved! Ideally, the solution is an "off-the-shelf" solution: something that already exists and which may have been designed for a different purpose but that can be repurposed for our problem. However, there may be exceptions to this. The existing solution may be infeasible: it may be too resource intensive or expensive. It may be too difficult or too expensive to adapt to our problem. It may solve most of our problem, but may not work in some corner cases. It may need to be heavily modified in order to work. Still, this basic question may save a lot of time and effort in many cases.

In a very broad sense, the problem solving process is one that involves

1. Design
2. Implementation
3. Testing
4. Refinement

After one has a good understanding of a problem, they can start designing a solution. A design is simply a plan for the construction of a solution. A design "on paper" allows you to see what the potential solution would look like before investing the resources in building it. It also allows you to identify possible impediments or problems that were not readily apparent. A design gives you an opportunity to think through possible alternative solutions and weigh the advantages and disadvantages of each. Designing a solution also allows you to understand the problem better. Design can involve gathering requirements and developing use cases. How would an individual use the proposed solution? What features would they need or want?

Implementation can involve building prototype solutions to test the feasibility of the design. It can involve building individual components and integrating them together.

Testing involves finding, designing, and developing test cases: actual instances of the problem that can be used to test your solution. Ideally, a test case instance involves not only the "input" of the problem, but also the "output" of the problem: a feasible or optimal solution that is known to be correct via other means. Test cases allow us to check our solution to see if it gives correct and perhaps optimal solutions.

Refinement is a process by which we can redesign, reimplement, and retest our solution. We may want to make the solution more efficient, cheaper, simpler, or more elegant. We may find there are components that are redundant or unnecessary and try to eliminate them. We may find errors or bugs in our solution that fail to solve the problem for some or many instances. We may have misinterpreted requirements, or there may have been miscommunication, misunderstanding, or differing expectations in the solution between the designers and stakeholders.
Situations may change, requirements may be modified, or new requirements may be created, and the solution needs to be adapted. Each of these steps may need to be repeated many times until an ideal, or at least acceptable, solution is achieved. Yet another phase of problem solving is maintenance. The solution we create may need to be maintained in order to remain functional and relevant. Design flaws or bugs may become apparent that were missed in previous phases. The solution may need to be updated to adapt to new technology or requirements. In software design there are two general techniques for problem solving: top-down and bottom-up design. A top-down design strategy approaches a problem by breaking it down into smaller and smaller problems until either a solution is obvious or trivial, or a preexisting solution (the aforementioned "off-the-shelf" solution) exists. The solutions to the subproblems are then combined and made to interact to solve the overall problem. A bottom-up strategy attempts to first completely define the smallest components or
entities that make up a system. Once these have been defined and implemented, they are combined, and the interactions between them are defined, to produce a more complex system.

1.2. Computing Basics

Everyone has some level of familiarity with computers and computing devices, just as everyone has some familiarity with automotive basics. However, just because you drive a car every day doesn't mean you can tell the difference between a crankshaft and a piston. To get started, let's familiarize ourselves with some basic concepts. A computer is a device, usually electronic, that stores, receives, processes, and outputs information. Modern computing devices include everything from simple sensors to mobile devices, tablets, desktops, mainframes/servers, supercomputers, and huge grid clusters consisting of multiple computers networked together. Computer hardware usually refers to the physical components in a computing system, which include input devices such as a mouse/touchpad, keyboard, or touchscreen; output devices such as monitors; storage devices such as hard disks and solid state drives; and the electronic components such as graphics cards, main memory, motherboards, and the chips that make up the Central Processing Unit (CPU). Computer processors are complex electronic circuits (referred to as Very Large Scale Integration (VLSI)) which contain thousands of microscopic electronic transistors: electronic "gates" that can perform logical operations and complex instructions. In addition to the CPU, a processor may contain an Arithmetic and Logic Unit (ALU) that performs arithmetic operations such as addition, multiplication, division, etc. Computer software usually refers to the actual machine instructions that are run on a processor. Software is usually written in a high-level programming language such as C or Java and then converted to machine code that the processor can execute. Computers "speak" in binary code.
Binary is nothing more than a structured collection of 0s and 1s. A single 0 or 1 is referred to as a bit. Bits can be collected to form larger chunks of information: 8 bits form a byte, 1,024 bytes are referred to as a kilobyte, etc. Table 1.1 contains several more binary units. Each unit is in terms of a power of 2 instead of a power of 10. As humans, we are more familiar with decimal (base-10) numbers, and so units are usually expressed as powers of 10: kilo- refers to 10^3, mega- to 10^6, etc. However, since binary is base-2 (0 or 1), binary units are associated with the closest power of 2. Computers are binary machines because binary is the most practical system to implement in electronic devices: 0s and 1s can easily be represented by low/high voltage, low/high frequency, on/off, etc. It is much easier to design and implement systems that switch between only two states.
Unit            2^n    Number of bytes
Kilobyte (KB)   2^10   1,024
Megabyte (MB)   2^20   1,048,576
Gigabyte (GB)   2^30   1,073,741,824
Terabyte (TB)   2^40   1,099,511,627,776
Petabyte (PB)   2^50   1,125,899,906,842,624
Exabyte (EB)    2^60   1,152,921,504,606,846,976
Zettabyte (ZB)  2^70   1,180,591,620,717,411,303,424
Yottabyte (YB)  2^80   1,208,925,819,614,629,174,706,176

Table 1.1.: Various units of digital information with respect to bytes. Memory is usually measured using powers of two.

Computer memory can refer to secondary memory: typically long-term storage devices such as hard disks, flash drives, SD cards, optical disks (CDs, DVDs), etc. These generally have a large capacity but are slower (the time it takes to access a chunk of data is longer). Or, it can refer to main memory (or primary memory): data stored on chips that is much faster but also more expensive and thus generally smaller. The first hard disk (the IBM 350) was developed in 1956 by IBM; it had a capacity of 3.75 MB and cost $3,200 ($27,500 in 2015 dollars) per month to lease. For perspective, the first commercially available terabyte hard drive was released in 2007. As of 2015, terabyte hard disks can commonly be purchased for $50–$100. Main memory, sometimes referred to as Random Access Memory (RAM), consists of a collection of addresses along with their contents. An address usually refers to a single byte of memory (called byte-addressing). The content, that is, the byte of data stored at an address, can be anything: it can represent a number, a letter, etc. To the computer it is all just a bunch of 0s and 1s. For convenience, memory addresses are represented using hexadecimal, a base-16 counting system using the symbols 0, 1, ..., 9, a, b, c, d, e, f. Numbers are prefixed with 0x to indicate they are hexadecimal. Figure 1.1 depicts memory and its addresses/contents. Separate computing devices can be connected to each other through a network.
Networks can be wired, with electrical signals or with light as in fiber optics, which provides large bandwidth (the amount of data that can be sent at any one time) but can be expensive to build and maintain. They can also be wireless, which provides shorter range and lower bandwidth.

1.3. Basic Program Structure

Programs start out as source code, a collection of instructions usually written in a high-level programming language. A source file containing source code is nothing more than
[Figure 1.1 shows a column of byte addresses (0x7fff58310b6e through 0x7fff58310b8f) alongside their contents. For example, the six bytes at addresses 0x7fff58310b6e–0x7fff58310b73 hold the characters H, e, l, l, o, \0 (the string "Hello"), while other address ranges hold multi-byte integers and floating-point numbers such as 1,458,321 and 3.14159265359.]

Figure 1.1.: Depiction of Computer Memory. Each address refers to a byte, but different types of data (integers, floating-point numbers, characters) may require different amounts of memory. Memory addresses and some data are represented in hexadecimal.
a plain text file that can be edited by any text editor. However, many developers and programmers utilize a modern Integrated Development Environment (IDE), which provides a text editor with code highlighting: various elements are displayed in different colors to make the code more readable and easily identified. Mistakes such as unclosed comments or curly brackets are readily apparent with such editors. IDEs can also provide automated compile/build features and other tools that make the development process easier and faster. Some languages are compiled languages, meaning that a source file must be translated into machine code that a processor can understand and execute. This is actually a multistep process. A compiler may first preprocess the source file(s) and perform some pre-compile operations. It may then transform the source code into another language such as an assembly language: a lower-level, more machine-like language. Ultimately, the compiler transforms the source code into object code, a binary format that the machine can understand. To produce an executable file that can actually be run, a linker may then take the object code and link in any other necessary objects or precompiled library code to produce a final program. Finally, an executable file (still just a bunch of binary code) is produced. Once an executable file has been produced, we can run the program. When a program is executed, a request is sent to the operating system to load and run the program. The operating system loads the executable file into memory and may set up additional memory for its variables as well as its call stack (memory that enables the program to make function calls). Once the program is loaded and set up, the operating system begins executing instructions at the program's entry point. In many languages, a program's entry point is defined by a main function or method.
A program may contain many functions and pieces of code, but this special function is defined as the one that gets invoked when a program starts. Without a main function, the code may still be useful: libraries contain many useful functions and procedures so that you don't have to write a program from scratch. However, these functions are not intended to be run by themselves. Instead, they are written so that other programs can use them. A program becomes executable only when a main entry point is provided. This compile-link-execute process is roughly depicted in Figure 1.2. An example of a simple C program can be found in Code Sample 1.1, along with the resulting assembly code produced by a compiler in Code Sample 1.2 and the final machine code, represented in hexadecimal, in Code Sample 1.3. In contrast, some languages are interpreted, not compiled. The source code is contained in a file usually referred to as a script. Rather than the script being run directly by the operating system, the operating system loads and executes another program called an interpreter. The interpreter then loads the script, parses it, and executes its instructions. Interpreted
[Figure 1.2 depicts the compiling process as a flow: a source file, written in a text editor or IDE, is fed to the compiler, which either reports syntax errors or, on success, produces an object file; the linker combines the object file with other object files and libraries to produce an executable file, which is then run with input to produce results and output.]

Figure 1.2.: A Compiling Process
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

int main(int argc, char **argv) {

    if(argc != 2) {
        fprintf(stderr, "Usage: %s x\n", argv[0]);
        exit(1);
    }

    double x = atof(argv[1]);
    double result = sqrt(x);

    if(x < 0) {
        fprintf(stderr, "Cannot handle complex roots\n");
        exit(2);
    }

    printf("square root of %f = %f\n", x, result);

    return 0;
}

Code Sample 1.1.: A simple program in C
        .section __TEXT,__text,regular,pure_instructions
        .globl _main
        .align 4, 0x90
_main:                                  ## @main
        .cfi_startproc
## BB#0:
        pushq   %rbp
Ltmp2:
        .cfi_def_cfa_offset 16
Ltmp3:
        .cfi_offset %rbp, -16
        movq    %rsp, %rbp
Ltmp4:
        .cfi_def_cfa_register %rbp
        subq    $48, %rsp
        movl    $0, -4(%rbp)
        movl    %edi, -8(%rbp)
        movq    %rsi, -16(%rbp)
        cmpl    $2, -8(%rbp)
        je      LBB0_2
## BB#1:
        leaq    L_.str(%rip), %rsi
        movq    ___stderrp@GOTPCREL(%rip), %rax
        movq    (%rax), %rdi
        movq    -16(%rbp), %rax
        movq    (%rax), %rdx
        movb    $0, %al
        callq   _fprintf
        movl    $1, %edi
        movl    %eax, -36(%rbp)         ## 4-byte Spill
        callq   _exit
LBB0_2:
        movq    -16(%rbp), %rax
        movq    8(%rax), %rdi
        callq   _atof
        xorps   %xmm1, %xmm1
        movsd   %xmm0, -24(%rbp)
        movsd   -24(%rbp), %xmm0
        sqrtsd  %xmm0, %xmm0
        movsd   %xmm0, -32(%rbp)
        ucomisd -24(%rbp), %xmm1
        jbe     LBB0_4
## BB#3:
        leaq    L_.str1(%rip), %rsi
        movq    ___stderrp@GOTPCREL(%rip), %rax
        movq    (%rax), %rdi
        movb    $0, %al
        callq   _fprintf
        movl    $2, %edi
        movl    %eax, -40(%rbp)         ## 4-byte Spill
        callq   _exit
LBB0_4:
        leaq    L_.str2(%rip), %rdi
        movsd   -24(%rbp), %xmm0
        movsd   -32(%rbp), %xmm1
        movb    $2, %al
        callq   _printf
        movl    $0, %ecx
        movl    %eax, -44(%rbp)         ## 4-byte Spill
        movl    %ecx, %eax
        addq    $48, %rsp
        popq    %rbp
        retq
        .cfi_endproc
        .section __TEXT,__cstring,cstring_literals
L_.str:                                 ## @.str
        .asciz  "Usage: %s x\n"
L_.str1:                                ## @.str1
        .asciz  "Cannot handle complex roots\n"
L_.str2:                                ## @.str2
        .asciz  "square root of %f = %f\n"
        .subsections_via_symbols

Code Sample 1.2.: A simple program in C, compiled to assembly
00000e40  55 48 89 e5 48 83 ec 30 c7 45 fc 00 00 00 00 89  |UH..H..0.E......|
00000e50  7d f8 48 89 75 f0 81 7d f8 02 00 00 00 0f 84 2c  |}.H.u..}.......,|
00000e60  00 00 00 48 8d 35 f2 00 00 00 48 8b 05 9f 01 00  |...H.5....H.....|
00000e70  00 48 8b 38 48 8b 45 f0 48 8b 10 b0 00 e8 94 00  |.H.8H.E.H.......|
00000e80  00 00 bf 01 00 00 00 89 45 dc e8 81 00 00 00 48  |........E......H|
00000e90  8b 45 f0 48 8b 78 08 e8 6e 00 00 00 0f 57 c9 f2  |.E.H.x..n....W..|
00000ea0  0f 11 45 e8 f2 0f 10 45 e8 f2 0f 51 c0 f2 0f 11  |..E....E...Q....|
00000eb0  45 e0 66 0f 2e 4d e8 0f 86 25 00 00 00 48 8d 35  |E.f..M...%...H.5|
00000ec0  a5 00 00 00 48 8b 05 45 01 00 00 48 8b 38 b0 00  |....H..E...H.8..|
00000ed0  e8 41 00 00 00 bf 02 00 00 00 89 45 d8 e8 2e 00  |.A.........E....|
00000ee0  00 00 48 8d 3d 9d 00 00 00 f2 0f 10 45 e8 f2 0f  |..H.=.......E...|
00000ef0  10 4d e0 b0 02 e8 22 00 00 00 b9 00 00 00 00 89  |.M....".........|
00000f00  45 d4 89 c8 48 83 c4 30 5d c3 ff 25 08 01 00 00  |E...H..0]..%....|
00000f10  ff 25 0a 01 00 00 ff 25 0c 01 00 00 ff 25 0e 01  |.%.....%.....%..|
00000f20  00 00 00 00 4c 8d 1d dd 00 00 00 41 53 ff 25 cd  |....L......AS.%.|
00000f30  00 00 00 90 68 00 00 00 00 e9 e6 ff ff ff 68 0c  |....h.........h.|
00000f40  00 00 00 e9 dc ff ff ff 68 18 00 00 00 e9 d2 ff  |........h.......|
00000f50  ff ff 68 27 00 00 00 e9 c8 ff ff ff 55 73 61 67  |..h'........Usag|
00000f60  65 3a 20 25 73 20 78 0a 00 43 61 6e 6e 6f 74 20  |e: %s x..Cannot |
00000f70  68 61 6e 64 6c 65 20 63 6f 6d 70 6c 65 78 20 72  |handle complex r|
00000f80  6f 6f 74 73 0a 00 73 71 75 61 72 65 20 72 6f 6f  |oots..square roo|
00000f90  74 20 6f 66 20 25 66 20 3d 20 25 66 0a 00 00 00  |t of %f = %f....|
00000fa0  01 00 00 00 1c 00 00 00 00 00 00 00 1c 00 00 00  |................|
00000fb0  00 00 00 00 1c 00 00 00 02 00 00 00 40 0e 00 00  |............@...|
00000fc0  34 00 00 00 34 00 00 00 0b 0f 00 00 00 00 00 00  |4...4...........|
00000fd0  34 00 00 00 03 00 00 00 0c 00 01 00 10 00 01 00  |4...............|
00000fe0  00 00 00 00 00 00 00 01 14 00 00 00 00 00 00 00  |................|
00000ff0  01 7a 52 00 01 78 10 01 10 0c 07 08 90 01 00 00  |.zR..x..........|
00001000  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
00001010  00 00 00 00 00 00 00 00 34 0f 00 00 01 00 00 00  |........4.......|
00001020  3e 0f 00 00 01 00 00 00 48 0f 00 00 01 00 00 00  |>.......H.......|
00001030  52 0f 00 00 01 00 00 00 00 00 00 00 00 00 00 00  |R...............|
00001040  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  |................|
...(cut for room)...
00002000  11 22 18 54 00 00 00 00 11 40 5f 5f 5f 73 74 64  |.".T.....@___std|
00002010  65 72 72 70 00 51 72 10 90 40 64 79 6c 64 5f 73  |errp.Qr..@dyld_s|
00002020  74 75 62 5f 62 69 6e 64 65 72 00 80 e8 ff ff ff  |tub_binder......|
00002030  ff ff ff ff ff 01 90 00 72 18 11 40 5f 61 74 6f  |........r..@_ato|
00002040  66 00 90 00 72 20 11 40 5f 65 78 69 74 00 90 00  |f...r .@_exit...|
00002050  72 28 11 40 5f 66 70 72 69 6e 74 66 00 90 00 72  |r(.@_fprintf...r|
00002060  30 11 40 5f 70 72 69 6e 74 66 00 90 00 00 00 00  |0.@_printf......|
00002070  00 01 5f 00 05 00 02 5f 6d 68 5f 65 78 65 63 75  |.._...._mh_execu|
00002080  74 65 5f 68 65 61 64 65 72 00 21 6d 61 69 6e 00  |te_header.!main.|
00002090  25 02 00 00 00 03 00 c0 1c 00 00 00 00 00 00 00  |%...............|
000020a0  c0 1c 00 00 00 00 00 00 fa de 0c 05 00 00 00 14  |................|
000020b0  00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  |................|
000020c0  02 00 00 00 0f 01 10 00 00 00 00 00 01 00 00 00  |................|
000020d0  16 00 00 00 0f 01 00 00 40 0e 00 00 01 00 00 00  |........@.......|
000020e0  1c 00 00 00 01 00 00 01 00 00 00 00 00 00 00 00  |................|
000020f0  27 00 00 00 01 00 00 01 00 00 00 00 00 00 00 00  |'...............|
00002100  2d 00 00 00 01 00 00 01 00 00 00 00 00 00 00 00  |-...............|
00002110  33 00 00 00 01 00 00 01 00 00 00 00 00 00 00 00  |3...............|
00002120  3c 00 00 00 01 00 00 01 00 00 00 00 00 00 00 00  |<...............|
00002130  44 00 00 00 01 00 00 01 00 00 00 00 00 00 00 00  |D...............|
00002140  03 00 00 00 04 00 00 00 05 00 00 00 06 00 00 00  |................|
00002150  07 00 00 00 00 00 00 40 02 00 00 00 03 00 00 00  |.......@........|
00002160  04 00 00 00 05 00 00 00 06 00 00 00 20 00 5f 5f  |............ .__|
00002170  6d 68 5f 65 78 65 63 75 74 65 5f 68 65 61 64 65  |mh_execute_heade|
00002180  72 00 5f 6d 61 69 6e 00 5f 5f 5f 73 74 64 65 72  |r._main.___stder|
00002190  72 70 00 5f 61 74 6f 66 00 5f 65 78 69 74 00 5f  |rp._atof._exit._|
000021a0  66 70 72 69 6e 74 66 00 5f 70 72 69 6e 74 66 00  |fprintf._printf.|
000021b0  64 79 6c 64 5f 73 74 75 62 5f 62 69 6e 64 65 72  |dyld_stub_binder|
000021c0  00 00 00 00                                      |....|
000021c4

Code Sample 1.3.: A simple program in C, resulting machine code formatted in hexadecimal (partial)
languages may still have a predefined main function, but in general a script starts executing with the first instruction in the script file. Adhering to the syntax rules is still important, but since interpreted languages are not compiled, syntax errors become runtime errors: a program may run fine until its first syntax error, at which point it fails. There are other ways of compiling and running programs. Java, for example, represents a compromise between compiled and interpreted languages. Java source code is compiled into Java bytecode, which is not actually machine code that the operating system and hardware can run directly. Instead, it is compiled code for a Java Virtual Machine (JVM). This allows a developer to write highly portable code: compile it once and it is runnable on any JVM on any system (write-once, compile-once, run-anywhere). In general, interpreted languages are slower than compiled languages because they are run through another program (the interpreter) instead of being executed directly by the processor. Modern tools have been introduced to address this problem. Just In Time (JIT) compilers take scripts that are not usually compiled and compile them to a native machine code format, which has the potential to run much faster than when interpreted. Modern web browsers typically do this for JavaScript code (Google Chrome's V8 JavaScript engine, for example). Another related technology is the transpiler. Transpilers are source-to-source compilers: they don't produce assembly or machine code; instead, they translate code in one high-level programming language into another high-level programming language. This is sometimes done to ensure that scripting languages like JavaScript are backwards compatible with previous versions of the language.
Transpilers can also be used to translate one language into the same language but with different aspects (such as parallel or synchronized code) automatically added. They can also be used to translate older languages such as Pascal into more modern languages as a first step in updating a legacy system.

1.4. Syntax Rules & Pseudocode

Programming languages are a lot like human languages in that they have syntax rules. These rules dictate the appropriate arrangements of words, punctuation, and other symbols that form valid statements in the language. For example, in many programming languages, commands or statements are terminated by semicolons (just as most sentences end with a period); this is an example of "punctuation" in a programming language. In English, paragraphs are separated by line breaks; in programming languages, blocks of code are separated by curly brackets. Variables are comparable to nouns, and operations and functions are comparable to verbs. Complex documents often have footnotes that provide additional explanations; code has comments that provide documentation and explanation for important elements. English is read top-to-bottom, left-to-right. Programming