for cos θe > cos θc. Therefore, the criterion for a Wenzel state giving way to a Cassie state is identical to that for spontaneous wicking.

MIT OCW: 18.357 Interfacial Phenomena, Prof. John W. M. Bush

15.4 Cassie-Baxter State

Summary:
• Hydrophilic: Wenzel's law ceases to apply at small θe when demi-wicking sets in, and the Cassie state emerges.
• Hydrophobic: there is a discontinuous jump in θ* as θe exceeds π/2 ⇒ Cassie state. The jump is largest for large roughness (small φS).

Historical note:
1. Early studies of wetting were motivated by insecticides.
2. Chemists have since been trying to design superhydrophobic (or oleophobic) surfaces using combinations of chemistry and texture.
3. Recent advances in microfabrication have achieved θ* ∼ π, Δθ ∼ 0 (e.g. the Lichen surface of McCarthy).

16. More forced wetting

Some clarifying notes on wetting. Figure 16.1: Three different wetting states.
Last class, we discussed the Cassie state only in the context of drops in a Fakir state, i.e. suspended partially on a bed of air. There is also a "wet Cassie" state. More generally, the Cassie-Baxter model applies to wetting on a planar but chemically heterogeneous surface. Consider a surface with two species, one with area fraction f1 and equilibrium contact angle θ1, the other with area fraction f2 and angle θ2. The energy variation associated with the front advancing a distance dx is

dE = f1 (γSL − γSV)1 dx + f2 (γSL − γSV)2 dx + γ cos θ* dx.

Thus dE = 0 when

cos θ* = f1 cos θ1 + f2 cos θ2   (Cassie-Baxter relation)   (16.1)

Figure 16.2: Wetting of a tiled (chemically heterogeneous) surface.

Special case: in the Fakir state, the two phases are the solid (θ1 = θe, f1 = φS) and air (θ2 = π, f2 = 1 − φS), so we have

cos θ* = φS cos θe − 1 + φS   (16.2)

as previously.
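Equations (16.1) and (16.2) are easily evaluated numerically. The sketch below is illustrative only; the function names and parameter values are mine, not from the notes:

```python
import math

def cassie_baxter(f1, theta1, f2, theta2):
    """Apparent contact angle θ* from eq. (16.1); angles in radians."""
    return math.acos(f1 * math.cos(theta1) + f2 * math.cos(theta2))

def fakir(phi_s, theta_e):
    """Fakir special case, eq. (16.2): solid (θe, φS) plus air (π, 1 − φS)."""
    return cassie_baxter(phi_s, theta_e, 1.0 - phi_s, math.pi)
```

For a sparse pillar texture with φS = 0.1 and θe = 110°, eq. (16.2) gives θ* ≈ 159°: a modest intrinsic hydrophobicity is amplified towards superhydrophobicity.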
As before, in this hydrophobic case, the Wenzel state is energetically favourable when dEW < dEC, i.e. cos θC < cos θe < 0, where cos θC = (φS − 1)/(r − φS); that is, when θe lies between π/2 and θC. However, experiments indicate that even in this regime air may remain trapped, so that a metastable Cassie state emerges.

16.1 Hydrophobic Case: θe > π/2, cos θe < 0

In the Fakir state, the two phases are the solid (θ1 = θe, f1 = φS) and vapour (θ2 = π, f2 = 1 − φS). Cassie-Baxter:

cos θ* = φS cos θe − 1 + φS   (16.3)

as deduced previously. As previously, the Wenzel state is energetically favourable when dEW < dEC, i.e. cos θC < cos θe < 0, where cos θC = (φS − 1)/(r − φS). Experiments indicate that even in this regime air may remain trapped, leading to a metastable Fakir state.

Figure 16.3: Relationship between cos θ* and cos θe for the different wetting states.
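The critical angle cos θC = (φS − 1)/(r − φS) can be used to classify which state is energetically favoured. A minimal sketch, with illustrative values and names of my choosing:

```python
import math

def cos_theta_C(phi_s, r):
    """Critical angle separating Wenzel- and Cassie-favoured regimes: cos θC = (φS − 1)/(r − φS)."""
    return (phi_s - 1.0) / (r - phi_s)

def favoured_state(theta_e, phi_s, r):
    """Energetically favoured state for a hydrophobic surface (θe > π/2)."""
    return "Wenzel" if math.cos(theta_e) > cos_theta_C(phi_s, r) else "Cassie"
```

For r = 2 and φS = 0.1, θC ≈ 118°: a drop with θe = 110° sits in the Wenzel-favoured regime (though a metastable Cassie state may persist), while θe = 130° favours Cassie.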
16.2 Hydrophilic Case: θe < π/2

Here, the Cassie state corresponds to a tiled surface with two phases: the solid (θ1 = θe, f1 = φS) and the fluid (θ2 = 0, f2 = 1 − φS). Cassie-Baxter ⇒ cos θ* = 1 − φS + φS cos θe, which describes a "wet Cassie" state. The energy variation is

dE = (r − φS)(γSL − γSV) dx + (1 − φS) γ dx,

so dE < 0, and a film impregnates the texture, provided

cos θe = (γSV − γSL)/γ > (1 − φS)/(r − φS) ≡ cos θc   (16.4)

For θe < θc, a film will impregnate the rough solid. The criterion for this transition can also be deduced by equating the energies of the Cassie and Wenzel states, i.e. r cos θe = 1 − φS + φS cos θe ⇒ θe = θc. Therefore, when π/2 > θe > θc, the solid remains dry ahead of the drop and Wenzel's law applies; when θe < θc, the film penetrates the texture and the system is described by the "wet Cassie" state.

Johnson & Dettre (1964) examined water drops on wax, whose roughness they varied by baking.
They showed an increase and then a decrease of Δθ = θa − θr as the roughness increased, as the system went from the smooth to the Wenzel to the Cassie state.

Water repellency is important for corrosion-resistant, self-cleaning and drag-reducing surfaces. It requires the maintenance of a Cassie state: the curvature pressure induced by the roughness must exceed the impregnation pressure.

E.g. 1 Static drop in a Fakir state (see Fig. 16.5): the interface sags between pillars by δ ∼ ℓ²/R, as follows from the pressure balance σδ/ℓ² ∼ σ/R, where ℓ is the pillar spacing and R the drop radius. The interface touches down if δ > h, i.e. ℓ²/R > h, i.e. R < ℓ²/h; thus taller pillars maintain the Fakir state.

E.g. 2 Impacting raindrop: the impregnation pressure is ΔP ∼ ρU², or ρUc where c is the speed of sound in water.

E.g. 3 Submerged surface, e.g. the side of a boat: ΔP = ρgz is the impregnation pressure.
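A quick numerical check of the Cassie-robustness criterion; all values below are assumed for illustration, not taken from the notes:

```python
# Cassie-state robustness: the curvature pressure σ/ℓ supplied by the texture
# must exceed the impregnation pressure, here taken as the dynamic pressure
# ρU² of a gentle impact (E.g. 2). Illustrative values for water:
sigma = 0.072   # N/m, surface tension
rho = 1000.0    # kg/m³, density
ell = 10e-6     # m, pillar spacing (assumed)
U = 1.0         # m/s, impact speed (assumed)

p_curvature = sigma / ell         # ≈ 7.2 kPa
p_impact = rho * U**2             # = 1.0 kPa
robust = p_curvature > p_impact   # texture resists impregnation at this speed
```

At raindrop terminal speeds (U ∼ 8 m/s), ρU² ≈ 64 kPa exceeds σ/ℓ for this spacing, so a finer texture would be needed.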
Figure 16.4: Contact angle as a function of surface roughness for water drops on wax. Figure 16.5: To remain in a Cassie state, the internal drop pressure P0 + 2σ/R must not exceed the curvature pressure induced by the roughness, roughly σ/ℓ.

16.3 Forced Wetting: the Landau-Levich-Derjaguin Problem

Withdraw a plate from a viscous fluid at constant speed V. What is the thickness of the film that coats the plate? For relatively thick films (Ca ∼ 1), balancing viscous stresses and gravity,

µV/h² ∼ ρg ⇒ h ∼ (µV/ρg)^1/2 ∼ ℓc Ca^1/2   (Derjaguin 1943)   (16.5)

where ℓc = (σ/ρg)^1/2 is the capillary length and Ca = µV/σ is the Capillary number. But this scaling is not observed at low Ca, where the coating is resisted principally by curvature pressure rather than gravity. Recall the static meniscus (Lecture 6): η(x) = √2 ℓc (1 − sin θ(x))^1/2, with internal pressure p(x) = p0 − ρg η(x). As x → 0, η(x) → √2 ℓc and p(x) → p0 − √2 ρg ℓc.
It is this capillary suction inside the meniscus that resists the rise of thin films.

Thin film wetting. We describe the flow in terms of two distinct regions:

Region I: static meniscus. The balance is between gravity and curvature pressure: ρgη ∼ σ∇·n, so the curvature is ∇·n ∼ 1/ℓc.

Region II: dynamic meniscus (coating zone). The balance here is between viscous stresses and curvature pressure. Define this region as the zone over which the film thickness decreases from 2h to h, with vertical extent L to be specified by pressure matching. In region II the curvature is ∇·n ∼ h/L².

Matching pressures at point A: p0 − σh/L² ∼ p0 − √2 ρg ℓc ⇒ L ∼ (ℓc h)^1/2, the geometric mean of ℓc and h.

Force balance in zone II, viscous stress vs. curvature pressure gradient: µV/h² ∼ ΔP/L ∼ σh/L³. Substituting for L gives µV/h² ∼ σ/(ℓc^3/2 h^1/2) ⇒ h ∼ ℓc Ca^2/3. Implicit in the above: h ≪ L ≪ ℓc.

Figure 16.6: The two regions of the meniscus next to a moving wall.
Matched asymptotics give h ≈ 0.94 ℓc Ca^2/3, where ℓc = (σ/ρg)^1/2 and Ca = µV/σ, valid for Ca^1/3 ≪ 1.

E.g. 1 Jump out of a pool at 1 m/s: Ca ∼ 10⁻², so h ∼ 0.1 mm ⇒ roughly 300 g of water entrained.
E.g. 2 Drink water from a glass, V ∼ 1 cm/s ⇒ Ca ∼ 10⁻⁴ ⇒ h ∼ ℓc Ca^2/3, a film of a few microns.

Figure 16.7: Left: A static meniscus. Right: Meniscus next to a wall moving upwards with speed V.

17. Coating: Dynamic Contact Lines

Last time we considered the Landau-Levich-Derjaguin problem and deduced h ∼ ℓc Ca^2/3 for Ca < 10⁻³, and h ∼ ℓc Ca^1/2 as Ca → 1.

The influence of surfactants: surfactants decrease σ, which affects h only slightly; the principal effect is to generate Marangoni stresses that increase fluid emplacement, so that h typically doubles. Figure 17.1: The influence of surfactants on fiber coating. Gradients in Γ induce Marangoni stresses that enhance deposition.

Fiber coating: the normal stress balance at the interface gives p0 + σ(1/R1 + 1/R2) = p0 − ρgz. If the fiber radius b ≪ ℓc, then σ/b ≫ ρgℓc: curvature pressures are dominant and cannot be balanced by gravity. Thus, the interface must take the form of a catenoid:
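The LLD prediction and E.g. 1 can be checked numerically. The sketch below assumes water properties (µ = 10⁻³ Pa·s, σ = 0.072 N/m) and an entraining body area of ∼2 m², all illustrative values of mine:

```python
import math

def lld_thickness(mu, V, sigma, rho, g=9.8):
    """Landau-Levich-Derjaguin film thickness h ≈ 0.94 ℓc Ca^(2/3), valid for Ca ≪ 1."""
    ell_c = math.sqrt(sigma / (rho * g))   # capillary length
    Ca = mu * V / sigma                    # capillary number
    return 0.94 * ell_c * Ca ** (2.0 / 3.0)

# E.g. 1: emerging from a pool at V = 1 m/s
h = lld_thickness(1e-3, 1.0, 0.072, 1000.0)   # ≈ 0.15 mm
mass = 1000.0 * h * 2.0                       # kg entrained over ~2 m² of skin
```

This gives h ≈ 0.15 mm and mass ≈ 0.3 kg, consistent with the ∼300 g quoted in the notes.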
1/R1 + 1/R2 = 0. For total wetting, θe = 0 ⇒ r(z) = b cosh((z − h)/b), where h ≈ b ln(2ℓc/b). Note:
1. Gravity prevents the meniscus from extending to ∞ ⇒ h is deduced by cutting the catenoid off at ℓc.
2. h is just a few times b (h ≪ ℓc) ⇒ the meniscus's lateral extent greatly exceeds its height.

Forced wetting on fibers, e.g. optical fiber coating. Figure 17.2: Etching of the microtips of Atomic Force Microscopes. As the fiber is withdrawn from the acid bath, the meniscus retreats and a sharp tip forms. Figure 17.3: Left: Forced wetting on a fiber. Right: The coating thickness as a function of the Reynolds number Re.

pfilm ∼ p0 + σ/b, pmeniscus ∼ p0 ⇒ Δp ∼ σ/b resists entrainment. Force balance: µU/e² ∼ Δp/L ∼ σ/(bL). Pressure match: σe/L² ∼ σ/b ⇒ L ∼ (be)^1/2. Substituting into the force balance, we find

e ≈ b Ca^2/3   (Bretherton's Law)   (17.1)

Notes:
• This scaling is valid when e ≪ b, i.e. Ca^2/3 ≪ 1.
• At higher Ca, the film is the viscous boundary layer that develops during pulling: δ ∼ (µLs/(ρU))^1/2, where Ls is the submerged length.
Displacement of an interface in a tube. E.g. air evacuating a water-filled pipette, or pumping oil out of rock with water. Figure 17.4: Left: Displacing a liquid with a vapour in a tube. Right: The dependence of the film thickness left by the intruding front on Ca = µU/σ.

In the limit h ≪ r, the pressure gradient in the dynamic meniscus is ∇p ∼ σ/(rl), where l is the extent of the dynamic meniscus. As on a fiber, matching the curvature pressure σh/l² in the dynamic meniscus to that of the spherical cap, ∼ 2σ/r, gives l ∼ (hr)^1/2 when h ≪ r. Force balance: µU/h² ∼ σ/(rl) ∼ σ/(r(hr)^1/2) ⇒

h ∼ r Ca^2/3   (Bretherton 1961)   (17.2)

where Ca = µU/σ. Thick films: what if h = ord(r)? For h ∼ r, Taylor (1961) found h ∼ (r − h) Ca^2/3.
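The fiber (17.1) and tube (17.2) results share the Ca^2/3 scaling. A minimal sketch, with the O(1) prefactor omitted and illustrative values:

```python
def bretherton_film(r, mu, U, sigma):
    """Film thickness left on a tube wall, h ~ r Ca^(2/3) (Bretherton 1961).
    Scaling only: the O(1) prefactor is omitted; valid for Ca^(2/3) << 1."""
    Ca = mu * U / sigma
    return r * Ca ** (2.0 / 3.0)

# Air displacing water in a 1 mm radius tube at U = 1 cm/s (illustrative):
h = bretherton_film(1e-3, 1e-3, 0.01, 0.072)   # ≈ 2.7 µm
```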
17.1 Contact Line Dynamics

Figure 17.5: The form of a moving meniscus near a wall or inside a tube, for three different speeds.

We consider the withdrawal of a plate from a fluid bath (Fig. 16.6), or fluid displacement within a cylindrical tube. Observations:
• at low speeds, the contact line advances at the dynamic contact angle θd < θe;
• the dynamic contact angle θd decreases progressively as U increases, until U = UM;
• at sufficiently high speed, the contact line cannot keep up with the imposed speed, and a film is entrained onto the solid.

Now consider a clean system free of hysteresis. The force of traction pulling liquid towards a dry region is F(θd) = γSV − γSL − γ cos θd. Note:
• F(θe) = 0 in equilibrium. How does F depend on U? What is θd(U)?
• the retreating contact line (F < 0) has been examined with retraction experiments, e.g. plate withdrawal;
• the advancing contact line (F > 0) was examined by Hoffmann (1975) for the case θe = 0;
• he found θd ∼ U^1/3 ∼ Ca^1/3 (Tanner's Law).

Dussan (1979): a drop in the vicinity of the contact line advances like a tractor tread. Figure 17.6: Dynamic contact angle θd as a function of the differential speed U. For U > UM, the fluid wets the solid.
Figure 17.7: The advancing and retreating contact angles of a drop. Figure 17.8: A drop advancing over a solid boundary behaves like a tractor tread (Dussan 1979), advancing through a rolling motion.

Flow near an advancing contact line. We now consider the flow near the contact line of a spreading liquid (θd > θe):
• consider θd ≪ 1, so that the slope tan θd = z/x ≈ θd ⇒ z ≈ θd x;
• velocity gradient: dU/dz ≈ U/(θd x);
• rate of viscous dissipation in the corner:

Φ = µ ∫∫corner (du/dz)² dz dx ≈ µ ∫ (U/(θd x))² θd x dx = (µU²/θd) ∫a^L dx/x

With de Gennes' approximation,

Φ = (3µU²/θd) ℓD, where ℓD ≡ ln(L/a),

with L the drop size and a the molecular size. From experiments, 15 < ℓD < 20.
Energetics: F U = Φ = (3µℓD/θd) U²; the rate of work done by surface forces equals the rate of viscous dissipation. Recall:
• F = γSV − γSL − γ cos θd = γ(cos θe − cos θd);
• in the limit θe < θd ≪ 1, cos θ ≈ 1 − θ²/2, so

F ≈ (γ/2)(θd² − θe²)   (17.3)

• substituting F into the energetics equation gives the contact line speed:

U = (U*/6ℓD) θd (θd² − θe²)   (17.4)

where U* = γ/µ ≈ 70 m/s for water. Note:
1. This rationalizes Hoffmann's data (obtained for θe = 0) ⇒ U ∼ θd³.
2. U = 0 for θd = θe (static equilibrium).
3. U → 0 as θd → 0: dissipation in the sharp wedge impedes motion.
4. The retraction speed |U(θd)| has a maximum where dU/dθd = (U*/6ℓD)(3θd² − θe²) = 0, i.e.

θd = θe/√3   (17.5)   ⇒   Umax = (U*/(9√3 ℓD)) θe³   (17.6)
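A numerical sketch of (17.4)-(17.6) for water (U* = γ/µ = 70 m/s); the function names and default values are mine:

```python
import math

def contact_line_speed(theta_d, theta_e, gamma=0.07, mu=1e-3, ell_D=20.0):
    """de Gennes contact-line speed, eq. (17.4): U = (U*/6 ℓD) θd (θd² − θe²)."""
    U_star = gamma / mu
    return U_star / (6.0 * ell_D) * theta_d * (theta_d**2 - theta_e**2)

def u_max_retraction(theta_e, gamma=0.07, mu=1e-3, ell_D=20.0):
    """Maximum retraction speed, eq. (17.6): Umax = U* θe³ / (9√3 ℓD)."""
    return (gamma / mu) * theta_e**3 / (9.0 * math.sqrt(3.0) * ell_D)
```

With θe = 0.1 rad and ℓD = 20, u_max_retraction(0.1) ≈ 0.22 mm/s, matching the ∼0.2 mm/s bound quoted in the notes.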
Figure 17.9: Left: Schematic illustration of the flow in the vicinity of an advancing contact line. Right: The dependence of the dynamic contact angle on the speed of withdrawal.

E.g. In water, U* = 70 m/s. With θe = 0.1 radians and ℓD = 20, Umax ≈ 0.2 mm/s ⇒ this sets an upper bound on the extraction speed for water coating flows.

18. Spreading

Recall gravity currents: the spreading of heavy fluid under the influence of gravity. Further reading: John E. Simpson, Gravity Currents: In the Environment and the Laboratory.

Stage I: Re ≫ 1. Flow is forced by gravity and resisted by fluid inertia: ρU²/R ∼ Δρ g h/R ⇒ U ∼ (g′h)^1/2, where g′ = (Δρ/ρ) g. Continuity: V = πR²(t) h(t) = const ⇒ h(t) ∼ V/R²(t). Thus U = dR/dt ∼ (g′V)^1/2/R ⇒ R dR ∼ (g′V)^1/2 dt ⇒ R(t) ∼ (g′V)^1/4 t^1/2.

Note: U ∼ (g′h)^1/2 decreases until Re = UR/ν ≤ 1. Figure 18.1: Spreading of a fluid volume under the influence of gravity.

Stage II: Re ≪ 1.
Flow is forced by gravity and resisted by viscosity. From the lubrication balance ∂p/∂r = ν ∂²u/∂z², i.e. g′h/R ∼ νU/h², substituting h(t) = V/R²(t) gives

U = dR/dt ∼ g′V³/(νR⁷) ⇒ R(t) ∼ (g′V³/ν)^1/8 t^1/8   (18.1)

18.1 Spreading of small drops on solids

For a drop of undeformed radius R placed on a solid substrate, spreading will in general be driven by both gravity and curvature pressure. Gravity: ∇pg ∼ ρgh/R. Curvature: ∇pc ∼ γh/R³. Continuity: V = πR²(t)h(t) = const. Which dominates? Recall

Δpg/Δpc ∼ ρgR²/γ ∼ ρgV/(γh) = Bo, the Bond number,

so gravity becomes progressively more important as the drop spreads. Drop behaviour depends on S = γSV − γSL − γ:
• When S < 0: partial wetting. Spreading proceeds until a puddle forms.
• When S > 0: complete wetting. Here one expects spreading forced by the unbalanced tension at the contact line. Thus we expect

(µU/h) · πR² ∼ S · 2πR
(viscous stress × drop area ∼ contact line force × perimeter)

With U = dR/dt and h = V/(πR²), this gives R³ dR ∼ (SV/µ) dt ⇒ R ∼ (SV t/µ)^1/4   (18.2)
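The two gravity-current stages above can be encoded directly; a scaling sketch with order-one prefactors omitted:

```python
def R_inertial(t, g_prime, V):
    """Stage I (Re >> 1): R(t) ~ (g'V)^(1/4) t^(1/2)."""
    return (g_prime * V) ** 0.25 * t ** 0.5

def R_viscous(t, g_prime, V, nu):
    """Stage II (Re << 1): R(t) ~ (g'V^3/nu)^(1/8) t^(1/8), eq. (18.1)."""
    return (g_prime * V**3 / nu) ** 0.125 * t ** 0.125
```

The exponents can be read off by comparing two times: quadrupling t doubles R in Stage I, while a 256-fold increase in t is needed to double R in Stage II.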
But this is not observed; instead, one sees R ∼ t^1/10. Why? Hardy (1919) observed a precursor film, the evidence for which was the disturbance of dust ahead of the drop. This precursor film is otherwise invisible, with thickness e ∼ 20 Å. Its origins lie in the force imbalance at the contact line (S > 0), and its stability results from interactions between the fluid and solid (e.g. van der Waals).

Physical picture. Since the solid ahead of the apparent contact line is coated by the precursor film, the force at the apparent contact line is

F = γ + γSL − γSL − γ cos θd = γ(1 − cos θd) ≈ γθd²/2 for small θd.

Note that F ≪ S. Now recall from last class: F U = Φ = (3µℓD/θd) U². Letting F → γθd²/2, we find U = F θd/(3µℓD) = (U*/6ℓD) θd³, where U* = γ/µ. Since the drop is small, it is a section of a sphere, so that V = (π/4) R³ θd = const.
Since V = (π/4) R³ θd is constant, differentiating gives (1/θd) dθd/dt = −3 (1/R) dR/dt. Now substitute R = L θd^(−1/3), where L ≡ (4V/π)^1/3, and dR/dt = (U*/6ℓD) θd³ from above, to find

dθd/dt ∼ −(U*/L) θd^(13/3) ⇒ θd ∼ (L/(U* t))^3/10   (18.4)

Figure 18.2: The precursor film of a spreading drop.

Using (18.4) then yields

R ∼ L (U* t/L)^1/10   (Tanner's Law)   (18.5)

which is consistent with observation.

18.2 Immiscible Drops at an Interface (Pujado & Scriven 1972)

Gravitationally unstable configurations can arise (ρa < ρb < ρc or ρc < ρa < ρb):
• the weight of the drop is supported by interfacial tensions;
• if the drop size R < ℓbc ∼ (γbc/((ρb − ρc) g))^1/2, it can be suspended by the interface.

Sessile lens, ρa < ρc < ρb: stable for drops of any size, e.g. oil on water. Figure 18.3: An immiscible liquid drop floats on a liquid bath.
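Returning to Tanner's law: the spreading ODE (18.4) has a closed-form solution that makes the exponents easy to verify. The constants θ0, L and U* below are illustrative assumptions:

```python
import math

# Integrating dθd/dt = −(U*/L) θd^(13/3) exactly gives
# θd(t) = θ0 (1 + (10/3)(U*/L) θ0^(10/3) t)^(−3/10),
# so θd ~ t^(−3/10) and R = L θd^(−1/3) ~ t^(1/10) at late times.
U_star, L, theta0 = 70.0, 1e-3, 0.5   # illustrative values

def theta_d(t):
    return theta0 * (1.0 + (10.0 / 3.0) * (U_star / L) * theta0 ** (10.0 / 3.0) * t) ** (-0.3)

def R(t):
    return L * theta_d(t) ** (-1.0 / 3.0)

# late-time log-log slope of R(t), expected ≈ 1/10:
slope_R = (math.log(R(100.0)) - math.log(R(10.0))) / math.log(10.0)
```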
18.3 Oil Spill

Three distinct phases:

Phase I, inertia vs. gravity: U ∼ (g′h(t))^1/2 ⇒ R(t) ∼ (g′V0)^1/4 t^1/2.

Phase II, viscosity vs. gravity: as previously, R ∼ (g′V0³/ν)^1/8 t^1/8.

Phase III, line tension vs. viscosity: for S < 0, an equilibrium configuration arises ⇒ the drop takes the form of a sessile lens. For S > 0, the oil will completely cover the water, spreading to a layer of molecular thickness.

Phase IIIa, viscous resistance from dissipation within the oil. As previously: (µU/h) πR² ∼ 2πRS ⇒ R ∼ (SV0 t/µ)^1/4 ∼ t^1/4.

Phase IIIb, spreading driven by S, resisted by viscous dissipation in the underlying fluid. A Blasius boundary layer grows on the base of the spreading current, δ ∼ (νt)^1/2, so (µU/δ) πR² ∼ S · 2πR ⇒ R dR ∼ (S/µ)(νt)^1/2 dt ⇒ R ∼ (S/µ)^1/2 ν^1/4 t^3/4.

18.4 Oil on water: a brief review

When an oil drop is emplaced on the water surface, its behaviour depends on the spreading coefficient

S ≡ σaw − σoa − σow   (18.6)
For S > 0, the droplet completely wets the underlying liquid, and so spreads to a layer of molecular thickness. References: Franklin (1760); Fay (1963); DePietro & Cox (1980); Foda & Cox (1980); Joanny (1987); Brochard-Wyart et al. (1996); Fraaije & Cazabat (1989).

For S < 0, an equilibrium configuration arises: the drop assumes the form of a sessile lens. The statics of the sessile lens were considered by Langmuir (1933) and Pujado & Scriven (1972); their dynamics have been treated by Wilson & Williams (1997) and Miksis & Vanden-Broeck (2001). Figure 18.4: An oil drop spreading on the water surface.

18.5 The Beating Heart (Stocker & Bush, JFM 2007)

When a drop of mineral oil containing a small quantity of a non-ionic, water-insoluble surfactant (Tergitol) is so emplaced, a sessile lens with S < 0 is formed. (Figure 18.5: An oil drop oscillates on the water surface. Note the ring of impurities that marks the edge of the internal circulation.) However, no equilibrium shape emerges: the lens is characterized by periodic fluctuations in radius, and so resembles a beating heart.
The phenomenon was first reported by Buetschli (1894), a professor of zoology at the University of Heidelberg, in his treatise Investigations on Protoplasm. It was subsequently described qualitatively by Sebba (1979, 1981).

Motivation: "The ultimate goal of physiologists is to be able to explain living behaviour in terms of physicochemical forces. Thus, any expansion of our knowledge of such forces, based on inanimate systems, should be examined to see whether this might not offer insight into biological behaviour." (Sebba 1979). Many biological systems exhibit periodic behaviour, e.g. oscillations of nerve and muscle cells, oscillations in mitochondria, and biological clocks. The conversion of chemical into mechanical energy is one of the main processes in biological movement, e.g. chloroplast movements and muscle contraction.

Observations:
• lens behaviour is independent of water depth, but strongly dependent on the surfactant concentration Γ
• for Γ = 0, no beating: a stable sessile lens
• for moderate Γ, steady beating is observed
• for high Γ, the drop edges become unstable to fingers
• for the highest Γ, the lens explodes into a series of smaller beating lenses
• beating is marked by slow expansion, then rapid retraction
• the odour of Tergitol always accompanies beating
• placing a lid on the chamber suppresses the oscillations ⇒ evaporation is a critical ingredient.

Physical picture:
Stage I: slow expansion of the drop. Adsorption of surfactant onto the oil-water interface ⇒ σow decreases. Evaporation of surfactant from the air-water surface ⇒ σaw increases.
Stage II: rapid retraction. Flushing of surfactant onto the air-water interface ⇒ σaw decreases and σow increases. BUT WHY?

Internal circulation is confined to the outer extremities of the lens, and absent in the flat central region. The Marangoni flow associated with the gradient in Γ indicates that Γ is lowest at the drop edge, consistent with a radial gradient in the adsorption flux along the surface; this reflects a geometric constraint: less surfactant is available to the corners than to the bulk. Such Marangoni shear layers are unstable to longitudinal rolls or transverse waves (as in the wine glass). The flushing events are associated with breaking Marangoni waves (Frenkel & Halpern 2005). Figure 18.6: Internal circulation of the "beating heart".

Another surfactant-induced auto-oscillation, the Spitting Drop (Fernandez & Homsy 2004):
• a chemical reaction produces surfactant at the drop surface
• following the release of the first drop, periodic spitting ensues
• rationalized in terms of tip-streaming (Taylor 1934), which arises only in the presence of surfactant (de Bruijn 1993), for µ/µd ≈ 10⁴ and Ca = µG/σ > 0.4
19. Water waves

We consider waves that might arise from disturbing the surface of a pond. For a free surface z = ζ(x, t), the normal is n = (−ζx, 1)/(1 + ζx²)^1/2 and the curvature is ∇·n = −ζxx/(1 + ζx²)^3/2. We assume the fluid motion is inviscid and irrotational, u = ∇φ, and must deduce the solution for the velocity potential φ satisfying ∇²φ = 0. B.C.s:

1. ∂φ/∂z = 0 on z = −h.
2. Kinematic B.C.: Dζ/Dt = uz ⇒ ∂ζ/∂t + (∂φ/∂x)(∂ζ/∂x) = ∂φ/∂z on z = ζ.
3. Dynamic B.C. (time-dependent Bernoulli applied at the free surface): ρ ∂φ/∂t + ½ρ|∇φ|² + ρgζ + pS = f(t), independent of x, where pS = p0 + σ∇·n = p0 − σζxx/(1 + ζx²)^3/2 is the surface pressure.

Figure 19.1: Waves on the surface of an inviscid, irrotational fluid.

Recall, for unsteady inviscid flows, the Navier-Stokes equations give
ρ [∂u/∂t + ∇(½|u|²) − u × (∇ × u)] = −∇(p + Ψ)   (19.1)

For irrotational flows, u = ∇φ, so that time-dependent Bernoulli gives ρ ∂φ/∂t + ½ρ|∇φ|² + p + Ψ = F(t), a function of time only.

Now consider small-amplitude waves and linearize the governing equations and B.C.s (assume ζ, φ are small, so we can neglect the nonlinear terms φ², ζ², φζ, etc.) ⇒ ∇²φ = 0 in −h ≤ z ≤ 0, to be solved subject to the B.C.s:

1. ∂φ/∂z = 0 on z = −h.
2. ∂ζ/∂t = ∂φ/∂z on z = 0.
3. ρ ∂φ/∂t + ρgζ + p0 − σζxx = f(t) on z = 0.

Seek travelling-wave solutions ζ(x, t) = ζ̂ e^{ik(x−ct)}, φ(x, z, t) = φ̂(z) e^{ik(x−ct)}, i.e. travelling waves in the x-direction with phase speed c and wavelength λ = 2π/k.
Substituting φ into ∇²φ = 0 yields φ̂zz − k²φ̂ = 0, with solutions φ̂(z) = e^{kz}, e^{−kz}, or equivalently sinh kz, cosh kz. To satisfy B.C. 1, ∂φ̂/∂z = 0 on z = −h, choose φ̂(z) = A cosh k(z + h). From B.C. 2:

−ikc ζ̂ = A k sinh kh ⇒ A = −ic ζ̂/sinh kh   (19.2)

From B.C. 3, with the common factor e^{ik(x−ct)} removed so that f(t) is independent of x:

−ikcρ A cosh kh + ρg ζ̂ + σk² ζ̂ = 0   (19.3)

Substituting (19.2) into (19.3) yields the dispersion relation

c² = (g/k + σk/ρ) tanh kh, i.e. ω² = (gk + σk³/ρ) tanh kh   (19.4)

which defines the phase speed c = ω/k.

Note: as h → ∞, tanh kh → 1, and we obtain the deep-water dispersion relation deduced in our wind-over-water lecture.

Physical interpretation:
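The dispersion relation (19.4) and the deep-water minimum phase speed are easily checked numerically; water properties (σ = 0.072 N/m, ρ = 1000 kg/m³) are assumed:

```python
import math

def omega(k, h, sigma=0.072, rho=1000.0, g=9.8):
    """Angular frequency from the dispersion relation (19.4): ω² = (gk + σk³/ρ) tanh kh."""
    return math.sqrt((g * k + sigma * k**3 / rho) * math.tanh(k * h))

def phase_speed(k, h, **kw):
    return omega(k, h, **kw) / k

# Deep-water minimum phase speed, c_min = (4gσ/ρ)^(1/4) at k_min = (ρg/σ)^(1/2):
k_min = math.sqrt(1000.0 * 9.8 / 0.072)        # ≈ 369 m⁻¹, i.e. λ ≈ 1.7 cm
c_min = (4.0 * 9.8 * 0.072 / 1000.0) ** 0.25   # ≈ 0.23 m/s
```

phase_speed(k_min, 1.0) reproduces c_min, confirming that waves of ∼1.7 cm wavelength are the slowest on deep water.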
• The relative importance of σ and g is prescribed by the Bond number Bo = ρg/(σk²) = ρgλ²/((2π)²σ) = λ²/((2π)² ℓc²), where ℓc = (σ/ρg)^1/2 is the capillary length.
• For air-water, Bo ∼ 1 for λ ∼ 2πℓc ∼ 1.7 cm.
• Bo ≫ 1, λ ≫ 2πℓc: surface tension effects are negligible ⇒ gravity waves.
• Bo ≪ 1, λ ≪ 2πℓc: the influence of g is negligible ⇒ capillary waves.

Special cases: deep and shallow water. One can expand via Taylor series: for kh ≪ 1, tanh kh = kh − (kh)³/3 + O((kh)⁵); for kh ≫ 1, tanh kh ≈ 1.

A. Gravity waves, Bo ≫ 1: c² = (g/k) tanh kh.
Shallow water (kh ≪ 1) ⇒ c = (gh)^1/2. All wavelengths travel at the same speed (i.e. non-dispersive), so one can only surf in shallow water.
Deep water (kh ≫ 1) ⇒ c = (g/k)^1/2, so longer waves travel faster, e.g. after dropping a large stone into a pond.

B. Capillary waves, Bo ≪ 1: c² = (σk/ρ) tanh kh.
Deep water (kh ≫ 1) ⇒ c = (σk/ρ)^1/2, so short waves travel fastest, e.g. a raindrop in a puddle.
Shallow water (kh ≪ 1) ⇒ c = (σh/ρ)^1/2 k.

An interesting note: in lab modelling of shallow-water waves (kh ≪ 1),

c² = (g/k + σk/ρ)(kh − (kh)³/3 + O((kh)⁵)) = gh + (σh/ρ − gh³/3) k² + O((kh)⁴)

In ripple tanks, choose h = (3σ/(ρg))^1/2 to get a good approximation to nondispersive waves; in water, h = (3·70/10³)^1/2 ∼ 0.5 cm (cgs units).

Figure 19.2: Deep-water capillary waves, whose speed increases as wavelength decreases. Image courtesy of Andrew Davidhazy. Used with permission.

From c(k) one can deduce the minimum phase speed cmin = (4gσ/ρ)^1/4, attained at kmin = (ρg/σ)^1/2.

Group velocity: when c = c(λ), a wave is called dispersive, since its different Fourier components (corresponding to different k or λ) separate or disperse; e.g. deep-water gravity waves, c ∼ λ^1/2. In a dispersive system, the energy of a wave component does not propagate at the phase speed c = ω/k, but at the group velocity:
cg = dω/dk = d(ck)/dk   (19.5)

Deep gravity waves: ω = ck = (gk)^1/2 ⇒ cg = ∂ω/∂k = (1/2)(g/k)^1/2 = c/2.
Deep capillary waves: c = (σ/ρ)^1/2 k^1/2, ω = (σ/ρ)^1/2 k^3/2 ⇒ cg = ∂ω/∂k = (3/2)(σ/ρ)^1/2 k^1/2 = (3/2) c.

Flow past an obstacle: if U < cmin, no steady waves are generated by the obstacle. If U > cmin, there are two k-values for which c = U:
1. the smaller k is a gravity wave with cg = c/2 < c ⇒ its energy is swept downstream;
2. the larger k is a capillary wave with cg = 3c/2 > c, so its energy is swept upstream.

Figure 19.3: Phase speed c of surface waves as a function of their wavelength λ.
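These two group-velocity results can be confirmed with a finite-difference derivative; water properties are assumed and the helper names are mine:

```python
import math

def omega_deep_gravity(k, g=9.8):
    """Deep-water gravity waves: ω = (gk)^(1/2)."""
    return math.sqrt(g * k)

def omega_deep_capillary(k, sigma=0.072, rho=1000.0):
    """Deep-water capillary waves: ω = (σ/ρ)^(1/2) k^(3/2)."""
    return math.sqrt(sigma / rho) * k ** 1.5

def group_velocity(omega_fn, k, dk=1e-4):
    """cg = dω/dk via central difference."""
    return (omega_fn(k + dk) - omega_fn(k - dk)) / (2.0 * dk)
```

At k = 100 m⁻¹, group_velocity(omega_deep_gravity, 100.0) is half the phase speed, while group_velocity(omega_deep_capillary, 100.0) is 1.5 times it.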
MIT OpenCourseWare
http://ocw.mit.edu
18.357 Interfacial Phenomena, Fall 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
LINEAR ALGEBRA: VECTOR SPACES AND OPERATORS
B. Zwiebach, October 21, 2013

Contents
1 Vector spaces and dimensionality
2 Linear operators and matrices
3 Eigenvalues and eigenvectors
4 Inner products
5 Orthonormal basis and orthogonal projectors
6 Linear functionals and adjoint operators
7 Hermitian and Unitary operators

1 Vector spaces and dimensionality

In quantum mechanics the state of a physical system is a vector in a complex vector space. Observables are linear operators, in fact Hermitian operators, acting on this complex vector space. The purpose of this chapter is to learn the basics of vector spaces, the structures that can be built on those spaces, and the operators that act on them.

Complex vector spaces are somewhat different from the more familiar real vector spaces; I would say they have more powerful properties. In order to understand complex vector spaces more generally, it is useful to compare them often to their real counterparts. We will follow here the discussion of the book Linear Algebra Done Right, by Sheldon Axler.

In a vector space one has vectors and numbers. We can add vectors to get vectors, and we can multiply vectors by numbers to get vectors.
https://ocw.mit.edu/courses/8-05-quantum-physics-ii-fall-2013/04b0570b349e84d74129eef504498472_MIT8_05F13_Chap_03.pdf
More generally, the numbers we use belong to what is called in mathematics a 'field', denoted by the letter F. We will discuss just two cases: F = R, meaning that the numbers are real, and F = C, meaning that the numbers are complex. The definition of a vector space is the same for F being R or C.

A vector space V is a set of vectors with an operation of addition (+) that assigns an element u + v ∈ V to each u, v ∈ V. This means that V is closed under addition. There is also a scalar multiplication by elements of F, with av ∈ V for any a ∈ F and v ∈ V. This means the space V is closed under multiplication by numbers. These operations must satisfy the following additional properties:

1. u + v = v + u for all u, v ∈ V (addition is commutative).
2. u + (v + w) = (u + v) + w and (ab)u = a(bu) for any u, v, w ∈ V and a, b ∈ F (associativity).
3. There is a vector 0 ∈ V such that 0 + u = u for all u ∈ V (additive identity).
4. For each v ∈ V there is a u ∈ V such that v + u = 0 (additive inverse).
5. The element 1 ∈ F satisfies 1v = v for all v ∈ V (multiplicative identity).
6. a(u + v) = au + av and (a + b)v = av + bv for every u, v ∈ V and a, b ∈ F (distributive property).

This definition is very efficient. Several familiar properties follow from it by short proofs (which we will not give, but which are not complicated and you may try to produce):

• The additive identity is unique: any vector 0′ that acts like 0 is actually equal to 0.
• 0v = 0 for any v ∈ V, where the first zero is a number and the second one is a vector. This means that the number zero acts as expected when multiplying a vector.
• a0 = 0 for any a ∈ F. Here both zeroes are vectors. This means that the zero vector multiplied by any number is still the zero vector.
• The additive inverse of any vector v ∈ V is unique. It is denoted by −v, and in fact −v = (−1)v.
We must emphasize that while the numbers in F are sometimes real and sometimes complex, we never speak of the vectors themselves as real or complex. A vector multiplied by a complex number, for example, is not said to be a complex vector! The vectors in a real vector space are not themselves real, nor are the vectors in a complex vector space complex.

We have the following examples of vector spaces:

1. The set of N-component column vectors

    (a1, a2, . . . , aN),   ai ∈ R,  i = 1, 2, . . . , N,    (1.1)

form a real vector space.

2. The set of M × N matrices with complex entries,

    ( a11  · · ·  a1N )
    ( a21  · · ·  a2N )
    (  ·    ·      ·  )
    ( aM1  · · ·  aMN ),   aij ∈ C,    (1.2)

is a complex vector space. Here multiplication by a constant multiplies each entry of the matrix by the constant.
3. We can have matrices with complex entries that naturally form a real vector space. The space of two-by-two hermitian matrices defines a real vector space. It does not form a complex vector space, since multiplication of a hermitian matrix by a complex number ruins the hermiticity.

4. The set P(F) of polynomials p(z). Here the variable z ∈ F and p(z) ∈ F. Each polynomial p(z) has coefficients a0, a1, . . . , an, also in F:

    p(z) = a0 + a1 z + a2 z^2 + . . . + an z^n.    (1.3)

By definition, the integer n is finite but it can take any nonnegative value. Addition of polynomials works as expected and multiplication by a constant is also the obvious multiplication. The space P(F) of all polynomials so defined forms a vector space over F.

5. The set F^∞ of infinite sequences (x1, x2, . . .) of elements xi ∈ F. Here

    (x1, x2, . . .) + (y1, y2, . . .) = (x1 + y1, x2 + y2, . . .),
    a(x1, x2, . . .) = (ax1, ax2, . . .),  a ∈ F.    (1.4)

This is a vector space over F.
6. The set of complex functions on an interval x ∈ [0, L] forms a vector space over C.

To better understand a vector space one can try to figure out its possible subspaces. A subspace of a vector space V is a subset of V that is also a vector space. To verify that a subset U of V is a subspace you must check that U contains the vector 0, and that U is closed under addition and scalar multiplication.

Sometimes a vector space V can be described clearly in terms of a collection U1, U2, . . . , Um of subspaces of V. We say that the space V is the direct sum of the subspaces U1, U2, . . . , Um, and we write

    V = U1 ⊕ U2 ⊕ · · · ⊕ Um    (1.5)

if any vector in V can be written uniquely as the sum u1 + u2 + . . . + um, where ui ∈ Ui. To check uniqueness one can, alternatively, verify that the only way to write 0 as a sum u1 + u2 + . . . + um with ui ∈ Ui is by taking all ui's equal to zero.
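A minimal numerical sketch of the direct-sum idea, with an assumed decomposition of R^3 into a line U and a plane W (the basis vectors below are chosen purely for illustration):

```python
import numpy as np

# Illustrative choice: V = R^3 written as U (+) W with U = span(u1)
# and W = span(w1, w2).
u1 = np.array([1.0, 1.0, 0.0])
w1 = np.array([0.0, 1.0, 0.0])
w2 = np.array([0.0, 0.0, 1.0])

# Columns of B are the combined basis; B invertible means the decomposition
# v = a*u1 + b*w1 + c*w2 exists and is unique for every v in R^3.
B = np.column_stack([u1, w1, w2])
v = np.array([2.0, 5.0, -1.0])
a, b, c = np.linalg.solve(B, v)

u_part = a * u1                  # the U-component of v
w_part = b * w1 + c * w2         # the W-component of v
```

Uniqueness of the split v = u_part + w_part is exactly the invertibility of B.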
taking all ui’s equal to zero. For the case of two subspaces V = U W and that U U and w W = 0. ⊕ ∈ ∈ ∈ ∈ ∩ Given a vector space we can produce lists of vectors. A list (v1, v2, . . . , vn) of vectors in V contains, by definition, a finite number of vectors. The number of vectors in the list is the length of the list. The span of a list of vectors (v1, v2, , vn), is the set of all linear vn) in V , denoted as span(v1, v2, · · · · · · combinations of these vectors a1v1 + a2v2 + . . . anvn , F ai ∈ (1.6) 3 A vector space V is spanned by a list (v1, v2, vn) if V = span(v1, v2, vn). Now comes a very natural definition: A vector space V spanned by some list of vectors in V . If V · · · · · · is said to be finite dimensional if it is is not finite dimensional, it is infinite dimensional. In such case, no list of vectors from V can span V . Let us show that the vector space of all polynomials p(z) considered in Example 4 is an infinite dimensional vector space. Indeed, consider any list of polynomials. In
In this list there is a polynomial of maximum degree (recall the list is finite). Thus polynomials of higher degree are not in the span of the list. Since no list can span the space, it is infinite dimensional.

For Example 1, consider the list of vectors (e1, e2, . . . , eN) with

    e1 = (1, 0, . . . , 0),  e2 = (0, 1, . . . , 0),  . . . ,  eN = (0, 0, . . . , 1).    (1.7)

This list spans the space (the general N-component vector is a1 e1 + a2 e2 + . . . + aN eN). This vector space is finite dimensional.

A list of vectors (v1, v2, . . . , vn), with vi ∈ V, is said to be linearly independent if the equation

    a1 v1 + a2 v2 + . . . + an vn = 0    (1.8)

only has the solution a1 = a2 = · · · = an = 0.
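Linear independence can be tested numerically: stacking the vectors as columns, the list is linearly independent exactly when the matrix has rank equal to the list's length (a rank deficiency is a nontrivial solution of (1.8)). A sketch with the basis of (1.7) and a deliberately dependent list:

```python
import numpy as np

# Linear independence of a list (v1, ..., vn) in R^m is equivalent to the
# m x n matrix with those columns having rank n.
e = np.eye(4)                                   # e1..e4 of R^4, as in (1.7)
independent = e[:, :3]                          # the list (e1, e2, e3)
dependent = np.column_stack([e[:, 0], e[:, 1], e[:, 0] + e[:, 1]])
```

The third column of `dependent` is e1 + e2, so its rank drops to 2.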
One can show that the length of any linearly independent list is shorter than or equal to the length of any spanning list. This is reasonable: spanning lists can be arbitrarily long (adding vectors to a spanning list still gives a spanning list), but a linearly independent list cannot be enlarged beyond a certain point.

Finally, we get to the concept of a basis for a vector space. A basis of V is a list of vectors in V that both spans V and is linearly independent. Mathematicians easily prove that any finite dimensional vector space has a basis. Moreover, all bases of a finite dimensional vector space have the same length. The dimension of a finite-dimensional vector space is given by the length of any list of basis vectors. One can also show that for a finite dimensional vector space a list of vectors of length dim V is a basis if it is a linearly independent list or if it is a spanning list.

For Example 1 we see that the list (e1, e2, . . . , eN) in (1.7) is not only a spanning list but a linearly independent list (prove it!). Thus the dimensionality of this space is N.

For Example 3, recall that the most general hermitian two-by-two matrix takes the form
    ( a0 + a3     a1 − i a2 )
    ( a1 + i a2   a0 − a3   ),   a0, a1, a2, a3 ∈ R.    (1.9)

Now consider the following list of four 'vectors': (1, σ1, σ2, σ3). All entries in this list are hermitian matrices, so this is a list of vectors in the space. Moreover, they span the space, since the most general hermitian matrix, as shown above, is simply a0·1 + a1σ1 + a2σ2 + a3σ3. The list is linearly independent, as a0·1 + a1σ1 + a2σ2 + a3σ3 = 0 implies that

    ( a0 + a3     a1 − i a2 )     ( 0  0 )
    ( a1 + i a2   a0 − a3   )  =  ( 0  0 ),    (1.10)

and you can quickly see that this implies a0, a1, a2, and a3 are zero. So the list is a basis, and the space in question is a four-dimensional real vector space.

Exercise. Explain why the vector space in Example 2 has dimension M N.

It seems pretty obvious that the vector space in Example 5 is infinite dimensional, but it actually takes a bit of work to prove it.
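The claim that (1, σ1, σ2, σ3) is a basis can be sketched numerically: build a hermitian matrix from chosen real coefficients and recover them. The recovery formula a_i = tr(H B_i)/2 relies on tr(B_i B_j) = 2δ_ij, a standard fact about this basis that is not derived in the text:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [I2, s1, s2, s3]

# Build H = a0*1 + a1*s1 + a2*s2 + a3*s3 with real coefficients, as in (1.9).
a = [1.5, 0.2, -0.7, 3.0]
H = sum(c * B for c, B in zip(a, basis))

# Recover each coefficient via a_i = tr(H B_i)/2, using tr(B_i B_j) = 2*delta_ij.
recovered = [np.trace(H @ B).real / 2 for B in basis]
```

Recovering the unique coefficients is exactly the statement that the list is a basis of the real vector space of hermitian two-by-two matrices.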
2 Linear operators and matrices

A linear map refers in general to a certain kind of function from one vector space V to another vector space W. When the linear map takes the vector space V to itself, we call the linear map a linear operator. We will focus our attention on those operators. Let us then define a linear operator.

A linear operator T on a vector space V is a function that takes V to V with the properties:

1. T(u + v) = Tu + Tv, for all u, v ∈ V.
2. T(au) = aTu, for all a ∈ F and u ∈ V.

We call L(V) the set of all linear operators that act on V. This can be a very interesting set, as we will see below. Let us consider a few examples of linear operators.

1. Let V denote the space of real polynomials p(x) of a real variable x with real coefficients. Here are two linear operators:

• Let T denote differentiation: Tp = p′. This operator is linear because (p1 + p2)′ = p1′ + p2′ and (ap)′ = ap′.
• Let S denote multiplication by x: Sp = xp. S is also a linear operator.
2. In the space F^∞ of infinite sequences, define the left-shift operator L by

    L(x1, x2, x3, . . .) = (x2, x3, . . .).    (2.11)

We lose the first entry, but that is perfectly consistent with linearity. We also have the right-shift operator R that acts as follows:

    R(x1, x2, . . .) = (0, x1, x2, . . .).    (2.12)

Note that the first entry in the result is zero. It could not be any other number, because the zero element (a sequence of all zeroes) should be mapped to itself (by linearity).

3. For any V, the zero map 0, such that 0v = 0. This map is linear and maps all elements of V to the zero element.

4. For any V, the identity map I, for which Iv = v for all v ∈ V. This map leaves all vectors invariant.

Since operators on V can be added and can also be multiplied by numbers, the set L(V) introduced above is itself a vector space (the vectors being the operators!). Indeed, for any two operators T, S ∈ L(V) we have the natural definitions

    (S + T)v = Sv + Tv,  (aS)v = a(Sv).    (2.13)

The additive identity in the vector space L(V) is the zero map of example 3.
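The two shift operators can be sketched on finite tuples, a stand-in for the infinite sequences of F^∞, so this is only illustrative:

```python
# Finite tuples stand in for the infinite sequences of F-infinity; the shift
# operators (2.11)-(2.12) act the same way on a prefix.
def left_shift(x):
    # L(x1, x2, x3, ...) = (x2, x3, ...): the first entry is lost
    return x[1:]

def right_shift(x):
    # R(x1, x2, ...) = (0, x1, x2, ...): a zero is prepended
    return (0,) + tuple(x)

x = (1, 2, 3, 4)
lr = left_shift(right_shift(x))     # L(R(x)) recovers x
rl = right_shift(left_shift(x))     # R(L(x)) forgets x1
```

The failure of R(L(x)) = x, while L(R(x)) = x holds, previews the discussion of injectivity and surjectivity of these operators below.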
In this vector space there is a surprising new structure: the vectors (the operators!) can be multiplied. There is a multiplication of linear operators that gives a linear operator: we just let one operator act first and the second later. So given S, T ∈ L(V) we define the operator ST as

    (ST)v ≡ S(Tv).    (2.14)

You should convince yourself that ST is a linear operator. This product structure in the space of linear operators is associative: S(TU) = (ST)U for linear operators S, T, U. Moreover, it has an identity element: the identity map of example 4. Most crucially, this multiplication is, in general, noncommutative. We can check this using the two operators T and S of example 1 acting on the polynomial p = x^n. Since T differentiates and S multiplies by x, we get

    (TS)x^n = T(Sx^n) = T(x^{n+1}) = (n + 1)x^n,  while  (ST)x^n = S(Tx^n) = S(nx^{n−1}) = nx^n.    (2.15)

We can quantify this failure of commutativity by writing the difference
    (TS − ST)x^n = (n + 1)x^n − nx^n = x^n = I x^n,    (2.16)

where we inserted the identity operator at the last step. Since this relation is true for any x^n, it also holds acting on any polynomial, namely on any element of the vector space. So we write

    [T, S] = I,    (2.17)

where we introduced the commutator [·, ·] of two operators X, Y, defined as [X, Y] ≡ XY − YX.

The most basic features of an operator are captured by two simple concepts: its null space and its range. Given some linear operator T on V, it is of interest to consider those elements of V that are mapped to the zero element. The null space (or kernel) of T ∈ L(V) is the subset of vectors in V that are mapped to zero by T:

    null T = { v ∈ V ; Tv = 0 }.    (2.18)

Actually, null T is a subspace of V. (The only nontrivial part of this proof is to show that T(0) = 0. This follows from T(0) = T(0 + 0) = T(0) + T(0), and then adding to both sides of this equation the additive inverse of T(0).)
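The commutator relation (2.17) can be checked concretely by representing a polynomial as its list of coefficients; the helper functions below are ours, not the text's notation:

```python
def T(p):
    # Differentiate: entry p[k] is the coefficient of x^k.
    return [k * p[k] for k in range(1, len(p))] or [0]

def S(p):
    # Multiply by x: shift every coefficient up by one degree.
    return [0] + list(p)

def sub(p, q):
    # Coefficient-wise subtraction, padding the shorter list with zeros.
    n = max(len(p), len(q))
    p = list(p) + [0] * (n - len(p))
    q = list(q) + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

# [T, S] p = T(S p) - S(T p) should equal p for any polynomial p, as in (2.17).
p = [3, 0, 5, 2]                      # 3 + 5x^2 + 2x^3
commutator = sub(T(S(p)), S(T(p)))
```

Acting on the monomial coefficients directly reproduces the computation (2.15)-(2.16) for every degree at once.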
A linear operator T : V → V is said to be injective if Tu = Tv, with u, v ∈ V, implies u = v. An injective map is called a one-to-one map, because no two different elements can be mapped to the same one. In fact, physicist Sean Carroll has suggested that a better name would be two-to-two, as injectivity really means that two different elements are mapped by T to two different elements! We leave for you as an exercise to prove the following important characterization of injective maps:

Exercise. Show that T is injective if and only if null T = {0}.

Given a linear operator T on V it is also of interest to consider the elements of V of the form Tv. The linear operator may not produce by its action all of the elements of V. We define the range of T as the image of V under the map T:

    range T = { Tv ; v ∈ V }.    (2.19)

Actually, range T is a subspace of V (can you prove it?). The linear operator T is said to be surjective if range T = V, that is, if the image of V under T is the complete V.
Since both the null space and the range of a linear operator T : V → V are subspaces of V, one can assign a dimension to them, and the following theorem is nontrivial:

    dim V = dim (null T) + dim (range T).    (2.20)

Example. Describe the null space and range of the operator

    T = ( 0  1 )
        ( 0  0 ).    (2.21)

Let us now consider invertible linear operators. A linear operator T ∈ L(V) is invertible if there exists another linear operator S ∈ L(V) such that ST and TS are identity maps (written as I). The linear operator S is called the inverse of T. The inverse is actually unique: say S and S′ are inverses of T; then we have

    S = SI = S(TS′) = (ST)S′ = IS′ = S′.    (2.22)

Note that we required the inverse S to be an inverse acting from the left and acting from the right. This is useful for infinite dimensional vector spaces. For finite-dimensional vector spaces one suffices; one can then show that ST = I if and only if TS = I.
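The dimension formula (2.20) and the example operator (2.21) can be checked with a short sketch:

```python
import numpy as np

T = np.array([[0.0, 1.0],
              [0.0, 0.0]])                 # the operator of (2.21)

n = T.shape[1]                             # dim V = 2
rank = np.linalg.matrix_rank(T)            # dim (range T)
nullity = n - rank                         # dim (null T), by (2.20)

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
# T sends e2 -> e1 and e1 -> 0, so here null T = range T = span{e1}.
```

This operator is neither injective nor surjective, consistent with the equivalences discussed next.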
It is useful to have a good characterization of invertible linear operators. For a finite-dimensional vector space V the following three statements are equivalent:

    Finite dimension:  T is invertible ←→ T is injective ←→ T is surjective.    (2.23)

For infinite dimensional vector spaces injectivity and surjectivity are not equivalent (each can fail independently). In that case invertibility is equivalent to injectivity plus surjectivity:

    Infinite dimension:  T is invertible ←→ T is injective and surjective.    (2.24)

The left-shift operator L is not injective (it maps (x1, 0, . . .) to zero) but it is surjective. The right-shift operator is not surjective, although it is injective.

Now we consider the matrix associated to a linear operator T that acts on a vector space V. This matrix will depend on the basis we choose for V. Let us declare that our basis is the list (v1, v2, . . . , vn). It is clear that the full knowledge of the action of T on V is encoded in the action of T on the basis vectors, that is, on the values (Tv1, Tv2, . . . , Tvn). Since Tvj is in V, it can be written as a linear combination of basis vectors. We then have

    Tvj = T1j v1 + T2j v2 + . . . + Tnj vn,    (2.25)
where we introduced the constants Tij, which are known if the operator T is known. As we will see, these entries form the matrix representation of the operator T in the chosen basis. The above relation can be written more briefly as

    Tvj = Σ_{i=1}^{n} Tij vi.    (2.26)

When we deal with different bases it can be useful to use a notation where we replace

    Tij → Tij({v}),    (2.27)

making clear that T is being represented using the v basis (v1, . . . , vn).

I want to make clear why (2.25) is reasonable before we show that it makes for a consistent association between operator multiplication and matrix multiplication. The left-hand side, where we have the action of the matrix for T on the j-th basis vector, can be viewed concretely as

    Tvj  ←→  ( T11  · · ·  T1j  · · ·  T1n ) ( 0 )
             ( T21  · · ·  T2j  · · ·  T2n ) ( · )
             (  ·           ·           ·  ) ( 1 )  ← j-th position
             ( Tn1  · · ·  Tnj  · · ·  Tnn ) ( · )
                                             ( 0 )    (2.28)
where the column vector has zeroes everywhere except on the j-th entry. The product, by the usual rule of matrix multiplication, is the column vector (T1j, T2j, . . . , Tnj), which expands as

    T1j (1, 0, . . . , 0) + T2j (0, 1, . . . , 0) + . . . + Tnj (0, 0, . . . , 1)  ←→  T1j v1 + . . . + Tnj vn,    (2.29)
which we identify with the right-hand side of (2.25). So (2.25) is reasonable.

Exercise. Verify that the matrix representation of the identity operator is a diagonal matrix with an entry of one at each element of the diagonal. This is true for any basis.

Let us now examine the product of two operators and their matrix representation. Consider the operator TS acting on vj:

    (TS)vj = T(Svj) = T( Σ_p Spj vp ) = Σ_p Spj Tvp = Σ_p Σ_i Spj Tip vi,    (2.30)

so that, changing the order of the sums, we find

    (TS)vj = Σ_i ( Σ_p Tip Spj ) vi.    (2.31)

Using the identification implicit in (2.26) we see that the object in parentheses is the (i, j) matrix element of the matrix that represents TS. Therefore we found

    (TS)ij = Σ_p Tip Spj,    (2.32)

which is precisely the right formula for matrix multiplication. In other words, the matrix that represents TS is the product of the matrix that represents T with the matrix that represents S, in that order.

Changing basis

While matrix representations are very useful for concrete visualization, they are basis dependent.
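Before studying basis changes in detail, the content of (2.32), that composing operators corresponds to multiplying their matrices, can be spot-checked numerically (random matrices, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(0)             # arbitrary seed
n = 4
Tm = rng.standard_normal((n, n))           # matrix of an operator T in some basis
Sm = rng.standard_normal((n, n))           # matrix of an operator S in the same basis

# Column j of the matrix representing TS is (TS)e_j, per (2.25);
# assembling these columns should reproduce the matrix product Tm @ Sm.
e = np.eye(n)
TS = np.column_stack([Tm @ (Sm @ e[:, j]) for j in range(n)])
```

Building the representation column by column from the action on basis vectors is exactly the prescription (2.25), and it lands on the ordinary matrix product, as (2.32) asserts.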
It is a good idea to try to figure out if there are quantities that can be calculated using a matrix representation that are, nevertheless, guaranteed to be basis independent. One such quantity is the trace of the matrix representation of a linear operator. The trace is the sum of the matrix elements on the diagonal. Remarkably, that sum is the same independent of the basis used. Consider a linear operator T in L(V) and two sets of basis vectors (v1, . . . , vn) and (u1, . . . , un) for V. Using the explicit notation (2.27) for the matrix representation, we state this property as

    tr T({v}) = tr T({u}).    (2.33)

We will establish this result below. On the other hand, if this trace is actually basis independent, there should be a way to define the trace of the linear operator T without using its matrix representation. This is actually possible, as we will see. Another basis independent quantity is the determinant of the matrix representation of T.

Let us then consider the effect of a change of basis on the matrix representation of an operator. Consider a vector space V and a change of basis from (v1, . . . , vn) to (u1, . . . , un) defined by the linear operator A as follows:

    A : vk → uk,  for k = 1, . . . , n.    (2.34)
This can also be written as

    Avk = uk.    (2.35)

Since we know how A acts on every element of the basis, we know, by linearity, how it acts on any vector. The operator A is clearly invertible because, letting B : uk → vk, or

    Buk = vk,    (2.36)

we have

    BAvk = B(Avk) = Buk = vk,  and  ABuk = A(Buk) = Avk = uk,    (2.37)

showing that BA = I and AB = I. Thus B is the inverse of A. Using the definition of matrix representation, the right-hand sides of the relations uk = Avk and vk = Buk can be written so that the equations take the form

    uk = Ajk vj,  vk = Bjk uj,    (2.38)

where we used the convention that repeated indices are summed over. Here Aij are the elements of the matrix representation of A in the v basis and Bij are the elements of the matrix representation of B in the u basis. Replacing the second relation in the first, and then the first in the second, we get
    uk = Ajk Bij ui = Bij Ajk ui,  vk = Bjk Aij vi = Aij Bjk vi.    (2.39)

Since the u's and v's are basis vectors, we must have

    Bij Ajk = δik  and  Aij Bjk = δik,    (2.40)

which means that the B matrix is the inverse of the A matrix. We have thus learned that

    vk = (A^{−1})jk uj.    (2.41)

We can now apply these preparatory results to the matrix representations of the operator T. We have, by definition,

    Tvk = Tik({v}) vi.    (2.42)

We now want to calculate T on uk so that we can read off the matrix for T in the u basis:

    Tuk = Tik({u}) ui.    (2.43)

Computing the left-hand side, using the linearity of the operator T, we have

    Tuk = T(Ajk vj) = Ajk Tvj = Ajk Tpj({v}) vp,    (2.44)

and using (2.41) we get

    Tuk = Ajk Tpj({v}) (A^{−1})ip ui = (A^{−1})ip Tpj({v}) Ajk ui = ( A^{−1} T({v}) A )ik ui.    (2.45)

Comparing with (2.43) we get
    Tij({u}) = ( A^{−1} T({v}) A )ij  →  T({u}) = A^{−1} T({v}) A.    (2.46)

This is the result we wanted to obtain.

The trace of a matrix Tij is given by Tii, where a sum over i is understood. To show that the trace of T is basis independent, we write

    tr(T({u})) = Tii({u}) = (A^{−1})ij Tjk({v}) Aki = Aki (A^{−1})ij Tjk({v}) = δkj Tjk({v}) = Tjj({v}) = tr(T({v})).    (2.47)

For the determinant we recall that det(AB) = (det A)(det B). Therefore det(A) det(A^{−1}) = 1. From (2.46) we then get

    det T({u}) = det(A^{−1}) det T({v}) det A = det T({v}).    (2.48)

Thus the determinant of the matrix that represents a linear operator is independent of the basis used.
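Both invariance statements, (2.47) and (2.48), are easy to spot-check numerically; the matrices below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)                    # arbitrary seed
n = 4
Tv = rng.standard_normal((n, n))                  # T({v}): T in the v basis
A = rng.standard_normal((n, n)) + n * np.eye(n)   # change-of-basis matrix;
                                                  # the diagonal shift makes it
                                                  # invertible in practice
Tu = np.linalg.inv(A) @ Tv @ A                    # T({u}) = A^{-1} T({v}) A, (2.46)

trace_v, trace_u = np.trace(Tv), np.trace(Tu)
det_v, det_u = np.linalg.det(Tv), np.linalg.det(Tu)
```

The individual entries of Tu differ completely from those of Tv, yet the trace and determinant agree to numerical precision.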
3 Eigenvalues and eigenvectors

In quantum mechanics we need to consider eigenvalues and eigenstates of hermitian operators acting on complex vector spaces. These operators are called observables, and their eigenvalues represent possible results of a measurement. In order to acquire a better perspective on these matters, we consider the eigenvalue/eigenvector problem in more generality.

One way to understand the action of an operator T ∈ L(V) on a vector space V is to understand how it acts on subspaces of V, as those are smaller than V and thus possibly simpler to deal with. Let U denote a subspace of V. In general, the action of T may take elements of U outside U. We have a noteworthy situation if T acting on any element of U gives an element of U. In this case U is said to be invariant under T, and T is then a well-defined linear operator on U. A very interesting situation arises if a suitable list of invariant subspaces gives the space V as a direct sum.

Of all subspaces, one-dimensional ones are the simplest. Given some vector u ∈ V, one can consider the one-dimensional subspace U spanned by u:

    U = { cu : c ∈ F }.    (3.49)

We can ask if the one-dimensional subspace U is left invariant by the operator T. For this, Tu must be equal to a number times u, as this guarantees that Tu ∈ U. Calling the number λ, we write

    Tu = λu.    (3.50)
This equation is so ubiquitous that names have been invented to label the objects involved. The number λ ∈ F is called an eigenvalue of the linear operator T if there is a nonzero vector u ∈ V satisfying this equation. Suppose we find for some specific λ a nonzero vector u satisfying the equation; then cu, for any c ∈ F, also satisfies it, so that the solution space of the equation includes the subspace U, which is now said to be an invariant subspace under T. It is convenient to call any vector that satisfies (3.50) for a given λ an eigenvector of T corresponding to λ. In doing so we are including the zero vector as a solution, and thus as an eigenvector. It can often happen that for a given λ there are several linearly independent eigenvectors. In this case the invariant subspace associated with the eigenvalue λ is higher dimensional. The set of eigenvalues of T is called the spectrum of T.

Our equation above is equivalent to

    (T − λI) u = 0,    (3.51)

for some nonzero u. It is therefore the case that

    λ is an eigenvalue  ←→  (T − λI) is not injective.    (3.52)
Using (2.23) we conclude that λ being an eigenvalue also means that (T − λI) is not invertible, and not surjective. We also note that

    the set of eigenvectors of T corresponding to λ = null (T − λI).    (3.53)

It should be emphasized that the eigenvalues of T and the invariant subspaces (or eigenvectors associated with fixed eigenvalues) are basis independent objects: nowhere in our discussion did we have to invoke the use of a basis, nor did we have to use any matrix representation. Below, we will discuss the familiar calculation of eigenvalues and eigenvectors using a matrix representation of the operator T in some particular basis.

Let us consider some examples. Take a real three-dimensional vector space V (our space, to great accuracy!). Consider the rotation operator T that rotates all vectors by a fixed small angle about the z axis. To find eigenvalues and eigenvectors we just think of the invariant subspaces. We must ask: which are the vectors for which this rotation does not change their direction and effectively just multiplies them by a number? Only the vectors along the z-direction do not change direction upon this rotation. So the vector space spanned by ez is the invariant subspace, or the space of eigenvectors.
The eigenvectors are associated with the eigenvalue of one, as the vectors are not altered at all by the rotation.

Consider now the case where T is a rotation by ninety degrees on a two-dimensional real vector space V. Are there one-dimensional subspaces left invariant by T? No: all vectors are rotated, none remains pointing in the same direction. Thus there are no eigenvalues, nor, of course, eigenvectors. If you tried calculating the eigenvalues by the usual recipe, you would find complex numbers. A complex eigenvalue is meaningless in a real vector space.

Although we will not prove the following result, it follows from the facts we have introduced and no extra machinery. It is of interest, being completely general and valid for both real and complex vector spaces:

Theorem: Let T ∈ L(V) and assume λ1, . . . , λn are distinct eigenvalues of T and u1, . . . , un are corresponding nonzero eigenvectors. Then (u1, . . . , un) are linearly independent.

Note that we cannot ask if the eigenvectors are orthogonal to each other, as we have not yet introduced an inner product on the vector space V.
In this theorem there may be more than one linearly independent eigenvector associated with some eigenvalues. In that case any one eigenvector will do. Since an n-dimensional vector space V does not have more than n linearly independent vectors, no linear operator on V can have more than n distinct eigenvalues.

We saw that some linear operators in real vector spaces can fail to have eigenvalues. Complex vector spaces are nicer: in fact, every linear operator on a finite-dimensional complex vector space has at least one eigenvalue. This is a fundamental result. It can be proven without using determinants with an elegant argument, but the proof using determinants is quite short.

When λ is an eigenvalue, we have seen that T − λI is not an invertible operator. This also means that, using any basis, the matrix representative of T − λI is non-invertible. The condition of non-invertibility of a matrix is identical to the condition that its determinant vanish:

    det(T − λ1) = 0.    (3.54)

In an N-dimensional vector space this condition looks like

    det ( T11 − λ   T12       · · ·  T1N      )
        ( T21       T22 − λ   · · ·  T2N      )
        (  ·         ·         ·      ·       )
        ( TN1       TN2       · · ·  TNN − λ  )  =  0.    (3.55)
https://ocw.mit.edu/courses/8-05-quantum-physics-ii-fall-2013/04b0570b349e84d74129eef504498472_MIT8_05F13_Chap_03.pdf
$$\det \begin{pmatrix} T_{11}-\lambda & T_{12} & \cdots & T_{1N} \\ T_{21} & T_{22}-\lambda & \cdots & T_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ T_{N1} & T_{N2} & \cdots & T_{NN}-\lambda \end{pmatrix} = 0 . \qquad (3.55)$$

The left-hand side is a polynomial f(λ) in λ of degree N called the characteristic polynomial:

f(λ) = det(T − λ1) = (−λ)^N + b_{N−1} λ^{N−1} + . . . + b1 λ + b0 ,   (3.56)

where the bi are constants. We are interested in the equation f(λ) = 0, as this determines all possible eigenvalues. If we are working on real vector spaces, the constants bi are real, but there is no guarantee of real roots for f(λ) = 0. With complex vector spaces the constants bi will be complex, but a complex solution for f(λ) = 0 always exists. Indeed, over the complex numbers we can factor the polynomial f(λ) as follows:

f(λ) = (−1)^N (λ − λ1)(λ − λ2) . . . (λ − λN) ,   (3.57)

where the notation does not preclude the possibility that some of the λi's may be equal. The λi's are the eigenvalues, since they lead to f(λ) = 0 for λ = λi.
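The factorization (3.57) can be explored numerically. In the sketch below (an illustration with an arbitrarily chosen 3×3 matrix, not an example from the notes) the characteristic polynomial coefficients are computed and its roots recover the eigenvalues, including a repeated one.

```python
import numpy as np

# A 3x3 matrix whose characteristic polynomial has a repeated root.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

coeffs = np.poly(T)       # coefficients of the characteristic polynomial
roots = np.roots(coeffs)  # its roots are the eigenvalues lambda_i
# lambda = 2 appears twice: a degenerate eigenvalue of multiplicity 2.
print(sorted(roots.real))
```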
If all eigenvalues of T are different, the spectrum of T is said to be non-degenerate. If an eigenvalue appears k times, it is said to be a degenerate eigenvalue of multiplicity k. Even in the most degenerate case we must have at least one eigenvalue. The eigenvectors exist because (T − λI) non-invertible means it is not injective, and therefore there are nonzero vectors that are mapped to zero by this operator.

4 Inner products

We have been able to go a long way without introducing extra structure on the vector spaces. We have considered linear operators, matrix representations, traces, invariant subspaces, eigenvalues and eigenvectors. It is now time to put some additional structure on the vector spaces. In this section we consider a function called an inner product that allows us to construct numbers from vectors. A vector space equipped with an inner product is called an inner-product space.

An inner product on a vector space V over F is a machine that takes an ordered pair of elements of V, that is, a first vector and a second vector, and yields a number in F. In order to motivate the definition of an inner product we first discuss the familiar way in which we associate a length to a vector.
The length of a vector, or norm of a vector, is a real number that is positive, or zero if the vector is the zero vector. In R^n a vector a = (a1, . . . , an) has norm defined by

|a| = √(a1^2 + . . . + an^2) .   (4.58)

Squaring this, one may think of |a|^2 as the dot product of a with a:

|a|^2 = a · a = a1^2 + . . . + an^2 .   (4.59)

Based on this, the dot product of any two vectors a and b is defined by

a · b = a1 b1 + . . . + an bn .   (4.60)

If we try to generalize this dot product, we may require as needed properties the following:

1. a · a ≥ 0, for all vectors a.
2. a · a = 0 if and only if a = 0.
3. a · (b1 + b2) = a · b1 + a · b2. Additivity in the second entry.
4. a · (α b) = α a · b, with α a number.
5. a · b = b · a.
Along with these axioms, the length |a| of a vector a is the positive or zero number defined by the relation

|a|^2 = a · a .   (4.61)

These axioms are satisfied by the definition (4.60) but do not require it. A new dot product defined by a · b = c1 a1 b1 + . . . + cn an bn, with c1, . . . , cn positive constants, would do equally well! So whatever can be proven with these axioms holds true not only for the conventional dot product.

The above axioms guarantee that the Schwarz inequality holds:

|a · b| ≤ |a| |b| .   (4.62)

To prove this, consider two (nonzero) vectors a and b and then consider the shortest vector joining the tip of a to the line defined by the direction of b (see the figure below). This is the vector a⊥, given by

a⊥ ≡ a − (a · b / b · b) b .   (4.63)

The subscript ⊥ is there because the vector is perpendicular to b, namely a⊥ · b = 0, as you can quickly see. To write the above vector we subtracted from a the component of a parallel to b. Note that the vector a⊥ is not changed as b → cb; it does not depend on the overall length of b. Moreover, as it should, the vector a⊥ is zero if and only if the vectors a and b are parallel.
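The construction (4.63) is easy to verify numerically. The sketch below (a minimal illustration with arbitrarily chosen vectors) checks that a⊥ is perpendicular to b and that the resulting Schwarz inequality holds, with saturation when the vectors are parallel.

```python
import numpy as np

a = np.array([3.0, 1.0, -2.0])
b = np.array([1.0, 2.0, 2.0])

# The component of a perpendicular to b (eq. 4.63).
a_perp = a - (np.dot(a, b) / np.dot(b, b)) * b
assert np.isclose(np.dot(a_perp, b), 0.0)   # perpendicular to b

# a_perp . a_perp >= 0 gives the Schwarz inequality (4.62)/(4.65).
assert np.dot(a, b)**2 <= np.dot(a, a) * np.dot(b, b)

# Saturation when a is a multiple of b:
c = 2.5 * b
assert np.isclose(np.dot(c, b)**2, np.dot(c, c) * np.dot(b, b))
```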
All this is only motivation; we could have just said "consider the following vector a⊥".

Given axiom (1) we have a⊥ · a⊥ ≥ 0, and therefore, using (4.63),

a⊥ · a⊥ = a · a − (a · b)^2 / (b · b) ≥ 0 .   (4.64)

Since b is not the zero vector we then have

(a · b)^2 ≤ (a · a)(b · b) .   (4.65)

Taking the square root of this relation we obtain the Schwarz inequality (4.62). The inequality becomes an equality only if a⊥ = 0 or, as discussed above, when a = cb with c a real constant.

For complex vector spaces some modification is necessary. Recall that the length |γ| of a complex number γ is given by |γ| = √(γ*γ), where the asterisk superscript denotes complex conjugation. It is not hard to generalize this a bit. Let z = (z1, . . . , zn) be a vector in C^n.
Then the length |z| of the vector z is a real number, greater than or equal to zero, given by

|z| = √(z1* z1 + . . . + zn* zn) .   (4.66)

We must use complex conjugates, denoted by the asterisk superscript, to produce a real number greater than or equal to zero. Squaring this we have

|z|^2 = z1* z1 + . . . + zn* zn .   (4.67)

This suggests that for vectors z = (z1, . . . , zn) and w = (w1, . . . , wn) an inner product could be given by

w1* z1 + . . . + wn* zn ,   (4.68)

and we see that we are not treating the two vectors in an equivalent way. There is a first vector, in this case w, whose components are conjugated, and a second vector z whose components are not conjugated. If the order of the vectors is reversed, we get for the inner product the complex conjugate of the original value. As was mentioned at the beginning of the section, the inner product requires an ordered pair of vectors. It certainly does for complex vector spaces. Moreover, one can define an inner product in general in a way that applies both to complex and real vector spaces.
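A short numerical check of (4.68): conjugating the first vector's components makes ⟨z, z⟩ real and non-negative, while reversing the order of the arguments conjugates the result. (The vectors below are arbitrary choices for illustration.)

```python
import numpy as np

# Inner product on C^n: conjugate the FIRST vector's components (eq. 4.68).
def ip(w, z):
    return np.sum(np.conj(w) * z)

z = np.array([1 + 2j, 3 - 1j])
w = np.array([2 - 1j, 1j])

# Reversing the order of the arguments conjugates the result...
assert np.isclose(ip(w, z), np.conj(ip(z, w)))

# ...and <z, z> is real and non-negative: it is |z|^2.
assert np.isclose(ip(z, z).imag, 0.0) and ip(z, z).real >= 0
```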
An inner product on a vector space V over F is a map from an ordered pair (u, v) of vectors in V to a number ⟨u, v⟩ in F. The axioms for ⟨u, v⟩ are inspired by the axioms we listed for the dot product:

1. ⟨v, v⟩ ≥ 0, for all vectors v ∈ V.
2. ⟨v, v⟩ = 0 if and only if v = 0.
3. ⟨u, v1 + v2⟩ = ⟨u, v1⟩ + ⟨u, v2⟩. Additivity in the second entry.
4. ⟨u, α v⟩ = α ⟨u, v⟩, with α ∈ F. Homogeneity in the second entry.
5. ⟨u, v⟩ = ⟨v, u⟩*. Conjugate exchange symmetry.

This time the norm |v| of a vector v ∈ V is the positive or zero number defined by the relation

|v|^2 = ⟨v, v⟩ .   (4.69)

From the axioms above, the only major difference is in number five, where we find that the inner product is not symmetric. We know what complex conjugation is in C. For the above axioms to apply to vector spaces over R we just define the obvious: complex conjugation of a real number is the number itself.
In a real vector space the conjugation does nothing and the inner product is strictly symmetric in its inputs.

A few comments. One can use (3) with v2 = 0 to show that ⟨u, 0⟩ = 0 for all u ∈ V, and thus, by (5), also ⟨0, u⟩ = 0. Properties (3) and (4) amount to full linearity in the second entry. It is important to note that additivity holds for the first entry as well:

⟨u1 + u2, v⟩ = ⟨v, u1 + u2⟩* = (⟨v, u1⟩ + ⟨v, u2⟩)* = ⟨v, u1⟩* + ⟨v, u2⟩* = ⟨u1, v⟩ + ⟨u2, v⟩ .   (4.70)

Homogeneity works differently on the first entry, however:

⟨α u, v⟩ = ⟨v, α u⟩* = (α ⟨v, u⟩)* = α* ⟨u, v⟩ .   (4.71)

Thus we get conjugate homogeneity on the first entry. This is a very important fact. Of course, for a real vector space conjugate homogeneity is the same as just plain homogeneity.

Two vectors u, v ∈ V are said to be orthogonal if ⟨u, v⟩ = 0. This, of course, means that ⟨v, u⟩ = 0 as well.
The zero vector is orthogonal to all vectors (including itself). Any vector orthogonal to all vectors in the vector space must be equal to zero. Indeed, if x ∈ V is such that ⟨x, v⟩ = 0 for all v, pick v = x, so that ⟨x, x⟩ = 0 implies x = 0 by axiom 2. This property is sometimes stated as the non-degeneracy of the inner product.

The "Pythagorean" identity holds for the norm-squared of orthogonal vectors in an inner-product vector space. As you can quickly verify,

|u + v|^2 = |u|^2 + |v|^2 ,   for u, v ∈ V orthogonal vectors.   (4.72)

The Schwarz inequality can be proven by an argument fairly analogous to the one we gave above for dot products. The result now reads

Schwarz Inequality:  |⟨u, v⟩| ≤ |u| |v| .   (4.73)

The inequality is saturated if and only if one vector is a multiple of the other. Note that on the left-hand side |·| denotes the norm of a complex number and on the right-hand side each |·| denotes the norm of a vector. You will prove this inequality in a slightly different way in the homework.
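Both (4.72) and (4.73) can be spot-checked in C^n with the conjugate-first inner product. (The vectors below are arbitrary; the last assertion shows saturation for a complex multiple.)

```python
import numpy as np

def ip(u, v):
    return np.sum(np.conj(u) * v)

def norm(v):
    return np.sqrt(ip(v, v).real)

u = np.array([1 + 1j, 0, 2j])
v = np.array([0, 3 - 1j, 0])    # orthogonal to u by construction

# Pythagorean identity (4.72) for orthogonal vectors:
assert np.isclose(norm(u + v)**2, norm(u)**2 + norm(v)**2)

# Schwarz inequality (4.73), saturated only when one vector is a
# (complex) multiple of the other:
assert abs(ip(u, v)) <= norm(u) * norm(v)
assert np.isclose(abs(ip(u, 2j * u)), norm(u) * norm(2j * u))
```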
You will also consider there the triangle inequality

|u + v| ≤ |u| + |v| ,   (4.74)

which is saturated when u = cv for c a real, positive constant. Our definition (4.69) of norm on a vector space V is mathematically sound: a norm is required to satisfy the triangle inequality. Other properties are required as well: (i) |v| ≥ 0 for all v, (ii) |v| = 0 if and only if v = 0, and (iii) |cv| = |c| |v| for c some constant. Our norm satisfies all of them.

A complex vector space with an inner product as we have defined is a Hilbert space if it is finite dimensional. If the vector space is infinite dimensional, an extra completeness requirement must be satisfied for the space to be a Hilbert space: all Cauchy sequences of vectors must converge to vectors in the space. An infinite sequence of vectors vi, with i = 1, 2, . . . , ∞, is a Cauchy sequence if for any ε > 0 there is an N such that |vn − vm| < ε whenever n, m > N.

5 Orthonormal basis and orthogonal projectors
In an inner-product space we can demand that basis vectors have special properties. A list of vectors is said to be orthonormal if all vectors have norm one and are pairwise orthogonal. Consider a list (e1, . . . , en) of orthonormal vectors in V. Orthonormality means that

⟨ei, ej⟩ = δij .   (5.75)

We also have a simple expression for the norm of a1 e1 + . . . + an en, with ai ∈ F:

|a1 e1 + . . . + an en|^2 = ⟨a1 e1 + . . . + an en , a1 e1 + . . . + an en⟩ = ⟨a1 e1, a1 e1⟩ + . . . + ⟨an en, an en⟩ = |a1|^2 + . . . + |an|^2 .   (5.76)

This result implies the somewhat nontrivial fact that the vectors in any orthonormal list are linearly independent. Indeed, if a1 e1 + . . . + an en = 0 then its norm is zero and so is |a1|^2 + . . . + |an|^2. This implies all ai = 0, thus proving the claim.

An orthonormal basis of V is a list of orthonormal vectors that is also a basis for V.
Let (e1, . . . , en) denote an orthonormal basis. Then any vector v can be written as

v = a1 e1 + . . . + an en ,   (5.77)

for some constants ai that can be calculated as follows:

⟨ei, v⟩ = ⟨ei, ai ei⟩ = ai ,   (i not summed).   (5.78)

Therefore any vector v can be written as

v = ⟨e1, v⟩ e1 + . . . + ⟨en, v⟩ en = Σ_i ⟨ei, v⟩ ei .   (5.79)

To find an orthonormal basis on an inner-product space V we just need to start with a basis and then use an algorithm to turn it into an orthonormal basis. In fact, a little more generally:

Gram-Schmidt: Given a list (v1, . . . , vn) of linearly independent vectors in V, one can construct a list (e1, . . . , en) of orthonormal vectors such that both lists span the same subspace of V.

The Gram-Schmidt algorithm goes as follows. You take e1 to be v1, normalized to have unit norm: e1 = v1/|v1|. Then take v2 + α e1 and fix the constant α so that this vector is orthogonal to e1.
The answer is clearly v2 − ⟨e1, v2⟩ e1. This vector, normalized by dividing it by its norm, is set equal to e2. In fact we can write the general vector in a recursive fashion. If we know e1, e2, . . . , e_{j−1}, we can write ej as follows:

ej = ( vj − ⟨e1, vj⟩ e1 − . . . − ⟨e_{j−1}, vj⟩ e_{j−1} ) / | vj − ⟨e1, vj⟩ e1 − . . . − ⟨e_{j−1}, vj⟩ e_{j−1} | .   (5.80)

It should be clear to you by inspection that this vector is orthogonal to the vectors ei with i < j and has unit norm. The Gram-Schmidt procedure is quite practical.

With an inner product we can construct interesting subspaces of a vector space V. Consider a subset U of vectors in V (not necessarily a subspace). Then we can define a subspace U⊥, called the orthogonal complement of U, as the set of all vectors orthogonal to the vectors in U:

U⊥ = { v ∈ V | ⟨v, u⟩ = 0, for all u ∈ U } .   (5.81)

This is clearly a subspace of V. When U is a subspace, then U and U⊥ actually give a direct sum decomposition of the full space:
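The recursion (5.80) translates directly into code. The sketch below (a minimal implementation for numeric vectors, with arbitrarily chosen input) orthonormalizes a list and then checks both orthonormality (5.75) and the expansion formula (5.79).

```python
import numpy as np

def ip(u, v):
    # Inner product with the first argument conjugated.
    return np.sum(np.conj(u) * v)

def gram_schmidt(vectors):
    """Turn linearly independent vectors into an orthonormal list (eq. 5.80)."""
    es = []
    for v in vectors:
        w = v.astype(complex)
        for e in es:                 # subtract components along earlier e_i
            w = w - ip(e, w) * e
        es.append(w / np.sqrt(ip(w, w).real))   # normalize the remainder
    return es

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)

# Orthonormality: <e_i, e_j> = delta_ij  (eq. 5.75).
for i, ei in enumerate(es):
    for j, ej in enumerate(es):
        assert np.isclose(ip(ei, ej), 1.0 if i == j else 0.0)

# Expansion of an arbitrary vector in the resulting basis (eq. 5.79).
v = np.array([2.0, -1.0, 0.5])
assert np.allclose(sum(ip(e, v) * e for e in es), v)
```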
Theorem: If U is a subspace of V, then V = U ⊕ U⊥.

Proof: This is a fundamental result and is not hard to prove. Let (e1, . . . , en) be an orthonormal basis for U. We can clearly write any vector v in V as

v = ( ⟨e1, v⟩ e1 + . . . + ⟨en, v⟩ en ) + ( v − ⟨e1, v⟩ e1 − . . . − ⟨en, v⟩ en ) .   (5.82)

On the right-hand side the first vector in parentheses is clearly in U, as it is written as a linear combination of U basis vectors. The second vector is clearly in U⊥, as one can see that it is orthogonal to any vector in U. To complete the proof one must show that there is no vector except the zero vector in the intersection U ∩ U⊥ (recall the comments below (1.5)). Let v ∈ U ∩ U⊥. Then v is in U and in U⊥, so it should satisfy ⟨v, v⟩ = 0. But then v = 0, completing the proof.
Given this decomposition, any vector v ∈ V can be written uniquely as v = u + w, where u ∈ U and w ∈ U⊥. One can define a linear operator PU, called the orthogonal projection of V onto U, that acting on v above gives the vector u. It is clear from this definition that: (i) the range of PU is U, (ii) the null space of PU is U⊥, (iii) PU is not invertible and, (iv) acting on U, the operator PU is the identity operator. The formula for the vector u can be read from (5.82):

PU v = ⟨e1, v⟩ e1 + . . . + ⟨en, v⟩ en .   (5.83)

It is a straightforward but good exercise to verify that this formula is consistent with the fact that, acting on U, the operator PU is the identity operator. Thus if we act twice in succession with PU on a vector, the second action has no effect, as it is already acting on a vector in U. It follows from this that

PU PU = PU ,   that is,   PU^2 = PU .   (5.84)

The eigenvalues and eigenvectors of PU are easy to describe. Since all vectors in U are left invariant by the action of PU, an orthonormal basis of U provides a set of orthonormal eigenvectors of PU, all with eigenvalue one.
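A numerical sketch of (5.83) and (5.84), using an arbitrarily chosen two-dimensional subspace U of R^3; the matrix form also exhibits the trace counting the dimension of U.

```python
import numpy as np

# Orthonormal basis of a 2-dimensional subspace U of R^3.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)

# Orthogonal projector (5.83) as a matrix: P = e1 e1^T + e2 e2^T.
P = np.outer(e1, e1) + np.outer(e2, e2)

v = np.array([3.0, 1.0, 5.0])
u = P @ v

assert np.allclose(P @ u, u)             # P^2 = P  (eq. 5.84)
assert np.isclose(np.dot(v - u, e1), 0)  # v - P v lies in U-perp
assert np.isclose(np.dot(v - u, e2), 0)
assert np.isclose(np.trace(P), 2.0)      # trace counts dim U
```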
If we choose on U⊥ an orthonormal basis, that basis provides orthonormal eigenvectors of PU, all with eigenvalue zero. In fact, equation (5.84) implies that the eigenvalues of PU can only be one or zero. The eigenvalues of an operator satisfy whatever equation the operator satisfies (as shown by letting the equation act on a presumed eigenvector); thus λ^2 = λ is needed, and this gives λ(λ − 1) = 0, and λ = 0, 1 as the only possibilities.

Consider a vector space V = U ⊕ U⊥ that is (n + k)-dimensional, where U is n-dimensional and U⊥ is k-dimensional. Let (e1, . . . , en) be an orthonormal basis for U and (f1, . . . , fk) an orthonormal basis for U⊥. We then see that the list of vectors (g1, . . . , g_{n+k}) defined by

(g1, . . . , g_{n+k}) = (e1, . . . , en, f1, . . . , fk)   is an orthonormal basis for V.   (5.85)
Exercise: Use PU ei = ei, for i = 1, . . . , n, and PU fi = 0, for i = 1, . . . , k, to show that in the above basis the projector operator is represented by the diagonal matrix

PU = diag( 1, . . . , 1, 0, . . . , 0 ) ,   (5.86)

with n entries equal to one followed by k entries equal to zero.

We see that, as expected from its non-invertibility, det(PU) = 0. But more interestingly we see that the trace of the matrix PU is n. Therefore

tr PU = dim U .   (5.87)

The dimension of U is the rank of the projector PU. Rank-one projectors are the most common projectors. They project to one-dimensional subspaces of the vector space.

Projection operators are useful in quantum mechanics, where observables are described by operators. The effect of measuring an observable on a physical state vector is to turn this original vector instantaneously into another vector. This resulting vector is the orthogonal projection of the original vector down to some eigenspace of the operator associated with the observable.

6 Linear functionals and adjoint operators

When we consider a linear operator T on a vector space V that has an inner product, we can construct a related linear operator T† on V called the adjoint of T. This is a very useful operator and is typically different from T.
When the adjoint T† happens to be equal to T, the operator is said to be Hermitian. To understand adjoints, we first need to develop the concept of a linear functional.

A linear functional φ on the vector space V is a linear map from V to the numbers F: for v ∈ V, φ(v) ∈ F. A linear functional has the following two properties:

1. φ(v1 + v2) = φ(v1) + φ(v2), with v1, v2 ∈ V.
2. φ(a v) = a φ(v), for v ∈ V and a ∈ F.

As an example, consider the three-dimensional real vector space R^3 with inner product equal to the familiar dot product. Writing a vector v as the triplet v = (v1, v2, v3), we take

φ(v) = 3v1 + 2v2 − 4v3 .   (6.1)

Linearity is clear, as the right-hand side features the components v1, v2, v3 appearing linearly. We can use the vector u = (3, 2, −4) to write the linear functional as an inner product. Indeed, one can readily see that

φ(v) = ⟨u, v⟩ .   (6.2)

This is no accident, in fact. We can prove that any linear functional φ(v) admits such a representation with some suitable choice of vector u.
Theorem: Let φ be a linear functional on V. There is a unique vector u ∈ V such that φ(v) = ⟨u, v⟩ for all v ∈ V.

Proof: Consider an orthonormal basis (e1, . . . , en) and write the vector v as

v = ⟨e1, v⟩ e1 + . . . + ⟨en, v⟩ en .   (6.3)

When φ acts on v we find, first by linearity and then by conjugate homogeneity,

φ(v) = φ( ⟨e1, v⟩ e1 + . . . + ⟨en, v⟩ en )
     = ⟨e1, v⟩ φ(e1) + . . . + ⟨en, v⟩ φ(en)
     = ⟨φ(e1)* e1, v⟩ + . . . + ⟨φ(en)* en, v⟩
     = ⟨φ(e1)* e1 + . . . + φ(en)* en , v⟩ .   (6.4)

We have thus shown that, as claimed,

φ(v) = ⟨u, v⟩   with   u = φ(e1)* e1 + . . . + φ(en)* en .   (6.5)

Next, we prove that this u is unique.
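The construction (6.5) can be tried out directly. The functional below is an arbitrary example on C^3 (not the real one from the notes); the representing vector built from the standard orthonormal basis reproduces φ.

```python
import numpy as np

def ip(u, v):
    return np.sum(np.conj(u) * v)

# An arbitrary linear functional on C^3 (hypothetical example).
def phi(v):
    return 3 * v[0] + 2j * v[1] - 4 * v[2]

# Representing vector (6.5): u = phi(e1)* e1 + ... + phi(en)* en,
# built from the standard orthonormal basis.
basis = np.eye(3)
u = sum(np.conj(phi(e)) * e for e in basis)

v = np.array([1 + 1j, 2.0, -1j])
assert np.isclose(phi(v), ip(u, v))
```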
If there exists another vector, u′, that also gives the correct result for all v, then ⟨u − u′, v⟩ = ⟨u, v⟩ − ⟨u′, v⟩ = 0 for all v. Taking v = u − u′, this shows that u − u′ = 0, which implies u′ = u, proving uniqueness.¹

We can modify a bit the notation when needed, to write

φ_u(v) ≡ ⟨u, v⟩ ,   (6.6)

where the left-hand side makes it clear that this is a functional acting on v that depends on u.

We can now address the construction of the adjoint. Consider φ(v) = ⟨u, T v⟩, which is clearly a linear functional, whatever the operator T is. Since any linear functional can be written as ⟨w, v⟩ with some suitable vector w, we write

⟨u, T v⟩ = ⟨w, v⟩ .   (6.7)

Of course, the vector w must depend on the vector u that appears on the left-hand side. Moreover, it must have something to do with the operator T, which does not appear anymore on the right-hand side. So we must look for some good notation here.

¹ This theorem holds for infinite-dimensional Hilbert spaces, for continuous linear functionals.
We can think of w as a function of the vector u and thus write w = T†u, where T† denotes a map (not obviously linear) from V to V. So we think of T†u as the vector obtained by acting with some function T† on u. The above equation is then written as

⟨u, T v⟩ = ⟨T†u, v⟩ .   (6.8)

Our next step is to show that, in fact, T† is a linear operator on V. The operator T† is called the adjoint of T. Consider

⟨u1 + u2, T v⟩ = ⟨T†(u1 + u2), v⟩ ,

and work on the left-hand side to get

⟨u1 + u2, T v⟩ = ⟨u1, T v⟩ + ⟨u2, T v⟩ = ⟨T†u1, v⟩ + ⟨T†u2, v⟩ = ⟨T†u1 + T†u2, v⟩ .   (6.9)

Comparing the right-hand sides of the last two equations we get the desired

T†(u1 + u2) = T†u1 + T†u2 .   (6.10)

Having established additivity, we now establish homogeneity. Consider

⟨a u, T v⟩ = ⟨T†(a u), v⟩ .   (6.11)

The left-hand side is

⟨a u, T v⟩ = a* ⟨u, T v⟩ = a* ⟨T†u, v⟩ = ⟨a T†u, v⟩ .   (6.12)
This time we conclude that

T†(a u) = a T†u .   (6.13)

This concludes the proof that T†, so defined, is a linear operator on V.

A couple of important properties are readily proven:

Claim: (ST)† = T†S†. We can show this as follows: ⟨u, ST v⟩ = ⟨S†u, T v⟩ = ⟨T†S†u, v⟩ .   (6.14)

Claim: The adjoint of the adjoint is the original operator: (S†)† = S. We can show this as follows: ⟨u, S†v⟩ = ⟨(S†)†u, v⟩. Now, additionally, ⟨u, S†v⟩ = ⟨S†v, u⟩* = ⟨v, S u⟩* = ⟨S u, v⟩. Comparing with the first result, we have shown that (S†)†u = S u for any u, which proves the claim.

Example: Let v = (v1, v2, v3), with vi ∈ C, denote a vector in the three-dimensional complex vector space C^3. Define a linear operator T that acts on v as follows:

T(v1, v2, v3) = ( 0v1 + 2v2 + iv3 , v1 − iv2 + 0v3 , 3iv1 + v2 + 7v3 ) .   (6.15)
Calculate the action of T† on a vector. Give the matrix representations of T and T† using the orthonormal basis e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1). Assume the inner product is the standard one on C^3.

Solution: We introduce a vector u = (u1, u2, u3) and will use the basic identity ⟨u, T v⟩ = ⟨T†u, v⟩. The left-hand side of the identity gives

⟨u, T v⟩ = u1*(2v2 + iv3) + u2*(v1 − iv2) + u3*(3iv1 + v2 + 7v3) .   (6.16)

This is now rewritten by factoring the various vi's:

⟨u, T v⟩ = (u2* + 3i u3*) v1 + (2u1* − i u2* + u3*) v2 + (i u1* + 7u3*) v3 .   (6.17)

Identifying the right-hand side with ⟨T†u, v⟩ we now deduce that

T†(u1, u2, u3) = ( u2 − 3i u3 , 2u1 + i u2 + u3 , −i u1 + 7u3 ) .   (6.18)
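The example can be double-checked numerically: the matrices below encode (6.15) and (6.18), and the assertions verify the defining identity ⟨u, T v⟩ = ⟨T†u, v⟩ as well as the conjugate-transpose relation between the two matrices. (The test vectors are arbitrary choices.)

```python
import numpy as np

# The operator T of eq. (6.15) on C^3 and the adjoint found in (6.18).
T = np.array([[0, 2, 1j],
              [1, -1j, 0],
              [3j, 1, 7]])
T_dag = np.array([[0, 1, -3j],
                  [2, 1j, 1],
                  [-1j, 0, 7]])

u = np.array([1 + 1j, 2.0, -1j])
v = np.array([0.5, 1j, 3.0])

# <u, T v> = <T_dag u, v> with the standard inner product on C^3:
lhs = np.sum(np.conj(u) * (T @ v))
rhs = np.sum(np.conj(T_dag @ u) * v)
assert np.isclose(lhs, rhs)

# And T_dag is the transpose and complex conjugate of T:
assert np.allclose(T_dag, T.conj().T)
```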
This gives the action of T†. To find the matrix representation we begin with T. Using basis vectors, we have from (6.15)

T e1 = T(1, 0, 0) = (0, 1, 3i) = e2 + 3i e3 = T11 e1 + T21 e2 + T31 e3 ,   (6.19)

and deduce that T11 = 0, T21 = 1, T31 = 3i. This can be repeated, and the rule becomes clear quickly: the coefficients of vi read left to right fit into the i-th column of the matrix. Thus, we have

$$T = \begin{pmatrix} 0 & 2 & i \\ 1 & -i & 0 \\ 3i & 1 & 7 \end{pmatrix} \qquad \text{and} \qquad T^\dagger = \begin{pmatrix} 0 & 1 & -3i \\ 2 & i & 1 \\ -i & 0 & 7 \end{pmatrix} . \qquad (6.20)$$

These matrices are related: one is the transpose and complex conjugate of the other! This is not an accident. Let us reframe this using matrix notation. Let u = ei and v = ej, where ei and ej are orthonormal basis vectors. Then the definition ⟨u, T v⟩ = ⟨T†u, v⟩ can be written as
⟨ei, T ej⟩ = ⟨T†ei, ej⟩
⟨ei, Tkj ek⟩ = ⟨(T†)ki ek, ej⟩
Tkj δik = ((T†)ki)* δkj
Tij = ((T†)ji)* .   (6.21)

Relabeling i and j and taking the complex conjugate, we find the familiar relation between a matrix and its adjoint:

(T†)ij = (Tji)* .   (6.22)

The adjoint matrix is the transpose and complex conjugate matrix only if we use an orthonormal basis. If we did not, in the equation above the use of ⟨ei, ej⟩ = δij would be replaced by ⟨ei, ej⟩ = gij, where gij is some constant matrix that would appear in the rule for the construction of the adjoint matrix.

7 Hermitian and Unitary operators

Before we begin looking at special kinds of operators, let us consider a very surprising fact about operators on complex vector spaces, as opposed to operators on real vector spaces. Suppose we have an operator T that is such that for any vector v ∈ V the following inner product vanishes:

⟨v, T v⟩ = 0   for all v ∈ V.   (7.23)
What can we say about the operator T? The condition states that T is an operator that, starting from a vector, gives a vector orthogonal to the original one. In a two-dimensional real vector space, this is simply the operator that rotates any vector by ninety degrees! It is quite surprising and important that for complex vector spaces the result is very strong: any such operator T necessarily vanishes. This is a theorem:

Theorem: Let T be a linear operator in a complex vector space V. If ⟨v, T v⟩ = 0 for all v ∈ V, then T = 0.   (7.24)

Proof: Any proof must be such that it fails to work for real vector spaces. Note that the result follows if we could prove that ⟨u, T v⟩ = 0 for all u, v ∈ V. Indeed, if this holds, then take u = T v; then ⟨T v, T v⟩ = 0 for all v implies that T v = 0 for all v, and therefore T = 0. We will thus try to show that ⟨u, T v⟩ = 0 for all u, v ∈ V. All we know is that objects of the form ⟨#, T #⟩ vanish, whatever # is. So we must aim to form linear combinations of such terms in order to reproduce ⟨u, T v⟩.
We begin by trying the following:

⟨u + v, T(u + v)⟩ − ⟨u − v, T(u − v)⟩ = 2⟨u, T v⟩ + 2⟨v, T u⟩ .   (7.25)

We see that the "diagonal" terms vanished, but instead of getting just ⟨u, T v⟩ we also got ⟨v, T u⟩. Here is where complex numbers help: we can get the same two terms but with opposite signs by trying

⟨u + iv, T(u + iv)⟩ − ⟨u − iv, T(u − iv)⟩ = 2i⟨u, T v⟩ − 2i⟨v, T u⟩ .   (7.26)

It follows from the last two relations that

⟨u, T v⟩ = (1/4) [ ⟨u + v, T(u + v)⟩ − ⟨u − v, T(u − v)⟩ + (1/i)⟨u + iv, T(u + iv)⟩ − (1/i)⟨u − iv, T(u − iv)⟩ ] .   (7.27)

The condition ⟨v, T v⟩ = 0 for all v implies that each term of the above right-hand side vanishes, thus showing that ⟨u, T v⟩ = 0 for all u, v ∈ V. As explained above, this proves the result.
An operator T is said to be Hermitian if T† = T. Hermitian operators are pervasive in quantum mechanics. The above theorem in fact helps us discover Hermitian operators. It is familiar that the expectation value of a Hermitian operator, on any state, is real. It is also true, however, that any operator whose expectation value is real for all states must be Hermitian:

T = T†   if and only if   ⟨v, T v⟩ ∈ R for all v.   (7.28)

To prove this, first go from left to right. If T = T†,

⟨v, T v⟩ = ⟨T†v, v⟩ = ⟨T v, v⟩ = ⟨v, T v⟩* ,   (7.29)

showing that ⟨v, T v⟩ is real. To go from right to left, first note that the reality condition means that

⟨v, T v⟩ = ⟨T v, v⟩ = ⟨v, T†v⟩ ,   (7.30)

where the last equality follows because (T†)† = T. Now the leftmost and rightmost terms can be combined to give ⟨v, (T − T†)v⟩ = 0, which, holding for all v, implies by the theorem that T = T†.
T † . T †)v v, (T ( − ) We can prove two additional results of Hermitian operators rather easily. We have discussed earlier the fact that on a complex vector space any linear operator has at least one eigenvalue. Here we learn that the eigenvalues of a hermitian operator are real numbers. Moreover, while we have noted that eigenvectors corresponding to different eigenvalues are linearly independent, for Hermitian operators they are guaranteed to be orthogonal. Thus we have the following theorems Theorem 1: The eigenvalues of Hermitian operators are real. Theorem 2: Different eigenvalues of a Hermitian operator correspond to orthogonal eigenfunctions. Proof 1: Let v be a nonzero eigenvector of the Hermitian operator T with eigenvalue λ: T v = λv. Taking the inner product with v we have that v, T v ( ) = v, λv ( ) = λ v, v ( ) . Since T is hermitian, we can also evaluate v, T v ( ) as follows v, T v ( ) = T v, v ( ) = λv, v ( ) = λ ∗ v, v ( ) . (7.31) (7.32) The above equations give (λ showing the reality of λ. λ∗) v, v ( ) − = 0 and since v is not the zero vector, we conclude that λ∗ = λ, Proof 2:
Proof 2: Let v1 and v2 be eigenvectors of the operator T:

Tv1 = λ1v1,   Tv2 = λ2v2,    (7.33)

with λ1 and λ2 real (previous theorem) and different from each other. Consider the inner product ⟨v2, Tv1⟩ and evaluate it in two different ways. First,

⟨v2, Tv1⟩ = ⟨v2, λ1v1⟩ = λ1⟨v2, v1⟩,    (7.34)

and second, using the Hermiticity of T,

⟨v2, Tv1⟩ = ⟨Tv2, v1⟩ = ⟨λ2v2, v1⟩ = λ2⟨v2, v1⟩.    (7.35)

From these two evaluations we conclude that

(λ1 − λ2)⟨v2, v1⟩ = 0,    (7.36)

and the assumption λ1 ≠ λ2 leads to ⟨v2, v1⟩ = 0, showing the orthogonality of the eigenvectors.

Let us now consider another important class of linear operators on a complex vector space, the so-called unitary operators. An operator U ∈ L(V) on a complex vector space V is said to be a unitary operator if it is surjective and does not change the magnitude of the vector it acts upon:
|Uu| = |u|,  for all u ∈ V.    (7.37)

We tailored the definition to be useful even for infinite-dimensional spaces. Note that U can only kill vectors of zero length, and since the only such vector is the zero vector, null U = {0} and U is injective. Since U is also assumed to be surjective, a unitary operator U is always invertible.

A simple example of a unitary operator is the operator λI, with λ a complex number of unit norm: |λ| = 1. Indeed, |λIu| = |λu| = |λ||u| = |u| for all u. Moreover, the operator is clearly surjective.

For another useful characterization of unitary operators we begin by squaring (7.37):

⟨Uu, Uu⟩ = ⟨u, u⟩.    (7.38)

By the definition of the adjoint,

⟨u, U†Uu⟩ = ⟨u, u⟩  →  ⟨u, (U†U − I)u⟩ = 0 for all u.    (7.39)

So by our theorem U†U = I, and since U is invertible this means U† is the inverse of U; we also have UU† = I:
U†U = UU† = I.    (7.40)

Unitary operators preserve inner products in the following sense:

⟨Uu, Uv⟩ = ⟨u, v⟩.    (7.41)

This follows immediately by moving the second U to act on the first input and using U†U = I.

Assume the vector space V is finite-dimensional and has an orthonormal basis (e1, . . . , en). Consider the new set of vectors (f1, . . . , fn), where the f's are obtained from the e's by the action of a unitary operator U:

fi = U ei.    (7.42)

This also means that ei = U†fi. We readily see that the f's are also a basis, because they are linearly independent: acting on a1f1 + · · · + anfn = 0 with U†, we find a1e1 + · · · + anen = 0, and thus ai = 0. We now see that the new basis is also orthonormal:

⟨fi, fj⟩ = ⟨Uei, Uej⟩ = ⟨ei, ej⟩ = δij.    (7.43)

The matrix elements of U in the e-basis are

Uki = ⟨ek, Uei⟩.    (7.44)

Let us compute the matrix elements U′ki of U in the f-basis:
U′ki = ⟨fk, Ufi⟩ = ⟨Uek, Ufi⟩ = ⟨ek, fi⟩ = ⟨ek, Uei⟩ = Uki.    (7.45)

The matrix elements are the same! Can you find an explanation for this result?

MIT OpenCourseWare (http://ocw.mit.edu)
8.05 Quantum Physics II, Fall 2013
For information about citing these materials or our Terms of Use, visit http://ocw.mit.edu/terms.
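The basis-independence result (7.45) is easy to verify numerically. A sketch in which the e-basis is the standard basis of C^4 (so that Uki = ⟨ek, Uei⟩ is just the matrix of U) and the f-basis vectors are the columns of U:

```python
import numpy as np

rng = np.random.default_rng(2)

# A random unitary from the QR decomposition of a complex matrix.
Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(Z)
assert np.allclose(U.conj().T @ U, np.eye(4), atol=1e-12)   # U†U = I

F = U                          # columns of F are f_i = U e_i
# U'_{ki} = <f_k, U f_i>, assembled for all k, i at once:
Uprime = F.conj().T @ U @ F    # equals U† U U = U, as (7.45) predicts
assert np.allclose(Uprime, U, atol=1e-12)
```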
Lectures 13 & 14
Packet Multiple Access: The Aloha Protocol
Eytan Modiano, Massachusetts Institute of Technology

Multiple Access
• Shared transmission medium
  – a receiver can hear multiple transmitters
  – a transmitter can be heard by multiple receivers
• The major problem with multi-access is allocating the channel among the users; the nodes do not know when the other nodes have data to send
  – need to coordinate transmissions

Examples of Multiple Access Channels
• Local area networks (LANs)
  – traditional Ethernet
  – recent trend toward non-multi-access LANs
• Satellite channels
• Multi-drop telephone
• Wireless radio

Within the protocol stack (NET / DLC / PHY), the data link control layer splits into:
• Medium Access Control (MAC): regulates access to the channel
• Logical Link Control (LLC): all other DLC functions

Approaches to Multiple Access
• Fixed assignment (TDMA, FDMA, CDMA)
  – each node is allocated a fixed fraction of the bandwidth
  – equivalent to circuit switching
  – very inefficient for low-duty-factor traffic
• Contention systems
  – polling
  – reservations and scheduling
  – random access

Aloha
• Single receiver, many transmitters (e.g., a satellite system, or wireless)

Slotted Aloha
• Time is divided into "slots" of one packet duration
  – e.g., fixed-size packets
• When a node has a packet to send, it waits until the start of the next slot to send it
  – requires synchronization
• If no other node attempts transmission during that slot, the transmission is successful
  – otherwise a "collision" occurs
  – collided packets are retransmitted after a random delay

(Figure: five example slots: 1 success, 2 idle, 3 collision, 4 idle, 5 success.)

Slotted Aloha Assumptions
https://ocw.mit.edu/courses/6-263j-data-communication-networks-fall-2002/04ca100a8b247ecf18d328b752f1b929_Lectures13_14.pdf
• Poisson external arrivals
• No capture
  – packets involved in a collision are lost
  – capture models are also possible
• Immediate feedback: idle (0), success (1), collision (e)
• If a new packet arrives during a slot, it is transmitted in the next slot
• If a transmission suffers a collision, the node becomes backlogged
  – while backlogged, it transmits in each slot with probability qr until successful
• Infinitely many nodes, where each arriving packet arrives at a new node
  – equivalent to no buffering at a node (queue size = 1)
  – a pessimistic assumption that gives a lower bound on Aloha performance

Markov Chain for Slotted Aloha

(Figure: Markov chain on the backlog with states 0, 1, 2, 3, . . . and transition probabilities P_ij.)

• The state n of the system is the number of backlogged nodes
  – p(i, i−1) = prob. of one backlogged attempt and no new arrival
  – p(i, i) = prob. of one new arrival and no backlogged attempts, or no new arrival and no success
  – p(i, i+1) = prob. of one new arrival and one or more backlogged attempts
  – p(i, i+j) = prob. of j new arrivals and one or more backlogged attempts, or j+1 new arrivals and no backlogged attempts
• Steady-state probabilities do not exist
  – the backlog tends to infinity => the system is unstable
  – more on this later

Slotted Aloha: attempt rate
• Let g(n) be the attempt rate (the expected number of packets transmitted in a slot) in state n:
  g(n) = λ + n·qr
• The number of attempted packets per slot in state n is approximately a Poisson random variable with mean g(n):
  – P(m attempts) = g(n)^m e^(−g(n)) / m!
  – P(idle) = probability of no attempts in a slot = e^(−g(n))
  – P(success) = probability of exactly one attempt in a slot = g(n) e^(−g(n))
  – P(collision) = P(two or more attempts) = 1 − P(idle) − P(success)

Throughput of Slotted Aloha
• The throughput is the fraction of slots that contain a successful transmission = P(success) = g(n) e^(−g(n))
  – when the system is stable, the throughput must also equal the external arrival rate λ
• What value of g(n) maximizes throughput?
  – g(n) < 1 => too many idle slots
  – g(n) > 1 => too many collisions
  – d/dg [g e^(−g)] = e^(−g) − g e^(−g) = 0  =>  g(n) = 1  =>  P(success) = g(n) e^(−g(n)) = 1/e ≈ 0.36
• If g(n) can be kept close to 1, an external arrival rate of 1/e packets per slot can be sustained

(Figure: departure rate g e^(−g) vs. attempt rate g, peaking at 1/e for g = 1.)

Instability of Slotted Aloha
• If the backlog increases beyond the unstable point (bad luck), it tends to increase without limit and the departure rate drops to 0
• The drift in state n, D(n), is the expected change in backlog over one time slot:
  – D(n) = λ − P(success) = λ − g(n) e^(−g(n))

(Figure: arrival rate λ and departure rate g e^(−g) vs. g = λ + n·qr, showing a stable equilibrium with negative drift and an unstable one beyond which the drift is positive.)
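The slot probabilities and the throughput maximum above can be verified directly; a minimal sketch (the function name is ours, not from the slides):

```python
import math

def attempt_probs(g):
    """Idle / success / collision probabilities when the number of
    attempts in a slot is Poisson with mean g."""
    p_idle = math.exp(-g)
    p_success = g * math.exp(-g)
    return p_idle, p_success, 1.0 - p_idle - p_success

# At the optimal operating point g = 1:
p_idle, p_success, p_collision = attempt_probs(1.0)
assert abs(p_idle - 1 / math.e) < 1e-12
assert abs(p_success - 1 / math.e) < 1e-12          # throughput 1/e ~ 0.368
assert abs(p_collision - (1 - 2 / math.e)) < 1e-12

# A coarse scan confirms that g = 1 maximizes the throughput g * e^(-g).
grid = [i / 1000.0 for i in range(1, 5001)]
g_star = max(grid, key=lambda g: g * math.exp(-g))
assert abs(g_star - 1.0) < 1e-9
```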
Stabilizing Slotted Aloha
• Choosing qr small increases the backlog at which instability occurs (since g(n) = λ + n·qr), but also increases delay (since the mean retry time is 1/qr)
• Solution: estimate the backlog n from past feedback
  – given the backlog estimate, choose qr to keep g(n) = 1
  – assume all arrivals are immediately backlogged: g(n) = n·qr, and P(success) = n·qr (1 − qr)^(n−1)
  – to maximize P(success), choose qr = min{1, 1/n}
  – when the estimate of n is perfect: idles occur with probability 1/e, successes with probability 1/e, and collisions with probability 1 − 2/e
  – when the estimate is too large, too many idle slots occur
  – when the estimate is too small, too many collisions occur
• Nodes can use the feedback information (0, 1, e) to make estimates
  – a good rule is to increase the estimate of n on each collision and to decrease it on each idle or successful slot; the increase on a collision should be (e − 2)^(−1) times as large as the decrease on an idle slot

Stabilized Slotted Aloha
• Assume all arrivals are immediately backlogged
  – g(n) = n·qr = attempt rate
  – P(success) = n·qr (1 − qr)^(n−1); for maximum throughput set g(n) = 1 => qr = min{1, 1/n′}, where n′ is the estimate of n
  – let n_k = estimate of the backlog after the k-th slot:
      n_{k+1} = max{λ, n_k + λ − 1}        on idle or success
      n_{k+1} = n_k + λ + (e − 2)^(−1)     on collision
  – can be shown to be stable for λ < 1/e
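The backlog-estimate update rule can be written as a small deterministic function; a sketch (the function name and signature are ours, not from the slides):

```python
import math

def update_estimate(n_est, feedback, lam):
    """Pseudo-Bayesian backlog update from one slot of (0, 1, e) feedback:
    decrease the estimate on an idle ('0') or success ('1') slot,
    increase it by an extra (e - 2)^-1 on a collision ('e')."""
    if feedback in ('0', '1'):
        return max(lam, n_est + lam - 1.0)
    return n_est + lam + 1.0 / (math.e - 2.0)

lam = 0.25
n = lam
for fb in ['e', '0', '1']:   # a collision, then an idle, then a success
    n = update_estimate(n, fb, lam)

# The retransmission probability each backlogged node would then use:
q_r = min(1.0, 1.0 / n)
```

With the estimate below 1 after this short feedback history, the rule falls back to q_r = 1, i.e., retransmit immediately.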
TDM vs. Slotted Aloha

(Figure: average delay vs. arrival rate for TDM with m = 8 and m = 16 nodes, and for slotted Aloha.)

• Aloha achieves lower delays when arrival rates are low
• TDM results in very large delays with a large number of users, while Aloha's delay is independent of the number of users

Pure (Unslotted) Aloha
• New arrivals are transmitted immediately (no slots)
  – no need for synchronization
  – no need for fixed-length packets
• A backlogged packet is retried after an exponentially distributed random delay with some mean 1/x
• The total arrival process is a time-varying Poisson process of rate g(n) = λ + n·x (n = backlog, 1/x = average time between retransmissions)
• Note that an attempt suffers a collision if the previous attempt is not yet finished (t_i − t_{i−1} < 1) or the next attempt starts too soon (t_{i+1} − t_i < 1)

(Figure: timeline of new arrivals and retransmissions, illustrating overlapping attempts causing a collision.)

Throughput of Unslotted Aloha
• An attempt is successful if the inter-attempt intervals on both sides exceed 1 (for unit-duration packets)
  – P(success) = e^(−g(n)) · e^(−g(n)) = e^(−2g(n))
  – throughput (success rate) = g(n) e^(−2g(n))
  – the maximum throughput occurs at g(n) = 1/2: throughput = 1/(2e) ≈ 0.18
  – stabilization issues are similar to slotted Aloha
  – advantages of unslotted Aloha are simplicity and the possibility of unequal-length packets

Splitting Algorithms
• A more efficient approach to resolving collisions
  – simple feedback (0, 1, e)
  – basic idea: assume only two packets are involved in a collision
  – suppose all other nodes remain quiet until the collision is resolved, and the nodes in the collision each transmit with probability 1/2 until one is successful
  – on the next slot after this success, the other node transmits
  – the expected number of slots for the first success is 2, so the expected number of slots to transmit the 2 packets is 3; the throughput over these 3 slots is 2/3
  – in practice the above algorithm cannot really work: we cannot assume only two users are involved in a collision, so a practical algorithm must allow for collisions involving an unknown number of users

Tree Algorithms
• After a collision, all new arrivals and all backlogged packets not involved in the collision wait
• Each colliding packet randomly joins one of two groups (Left and Right), e.g. by the toss of a fair coin
  – the Left group transmits during the next slot while the Right group waits
  – if a collision occurs, the Left group splits again (stack algorithm) and the Right group waits until the Left collision is resolved
  – when the Left group is done, the Right group transmits

(Figure: example tree for packets 1–4: (1,2,3,4) collide; (1,2,3) collide, then 1 succeeds; (2,3) collide, an idle slot, (2,3) collide again, then 2 and 3 succeed; finally 4 succeeds.)

• Notice that after the idle slot, the collision between (2,3) was sure to happen and could have been avoided
• There are many variations and improvements on the original tree-splitting algorithm

Throughput Comparison
• Stabilized pure Aloha: T = 0.184 (= 1/(2e))
• Stabilized slotted Aloha: T = 0.368 (= 1/e)
• Basic tree algorithm: T = 0.434
• Best known variation on the tree algorithm: T = 0.4878
• Upper bound for any collision-resolution algorithm with (0, 1, e) feedback: T <= 0.568
• TDM achieves throughputs of up to 1 packet per slot, but its delay increases linearly with the number of nodes
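The first two entries of the comparison, and the two-packet resolution argument from the splitting discussion, can be checked numerically; a sketch with an arbitrary seed:

```python
import math
import random

# Peak throughputs from the formulas derived above.
slotted_max = 1.0 * math.exp(-1.0)    # g e^(-g) at g = 1
pure_max = 0.5 * math.exp(-1.0)       # g e^(-2g) at g = 1/2
assert abs(slotted_max - 0.368) < 1e-3
assert abs(pure_max - 0.184) < 1e-3
assert abs(pure_max - slotted_max / 2) < 1e-12   # exactly half

# Two collided nodes each transmit with probability 1/2 until exactly one
# succeeds; the other node then transmits in the following slot.  The
# expected number of slots to deliver both packets should be 3.
random.seed(42)

def slots_to_resolve_pair():
    slots = 0
    while True:
        slots += 1
        a = random.random() < 0.5
        b = random.random() < 0.5
        if a != b:                 # exactly one transmitted: success
            return slots + 1       # one more slot for the other packet

mean_slots = sum(slots_to_resolve_pair() for _ in range(20000)) / 20000
```

The simulated mean lands near 3 slots per pair, i.e., a throughput of about 2/3 during resolution, as argued above.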
Lecture Notes on Geometrical Optics (02/18/14)
2.71/2.710 Introduction to Optics – Nick Fang

Outline:
A. Optical Invariant
B. Composite Lenses
C. Ray Vector and Ray Matrix
D. Location of Principal Planes for an Optical System
E. Aperture Stops, Pupils and Windows

A. Optical Invariant

What happens to an arbitrary "axial" ray that originates from the axial intercept of the object, after passing through a series of lenses? If we make use of the relationship between the launching angle and the imaging conditions, we have:

θ_in = x_in / s_o   and   θ_out = − x_in / s_i .

Rearranging, we obtain:

θ_in / θ_out = − s_i / s_o = h_i / h_o   =>   θ_in h_o = θ_out h_i .

We see that the product of the image height and the angle with respect to the axis (the components of the ray vector!) remains constant. Indeed a more general result,

n h_o sin θ_in = n′ h_i sin θ_out ,

is constant (often referred to as the Lagrange invariant in different textbooks) across any surface of the imaging system.

- The invariant may be used to deduce other quantities of the optical system, without the necessity of certain intermediate ray-tracing calculations.
- You may regard it as a precursor to wave optics: the angles are approximately proportional to the lateral momentum of light, and the image height is equivalent to the separation of two geometric points. For two points that are separated far apart, there is a limiting angle that can transmit their information across the imaging system.

B. Composite Lenses

To elaborate the effect of lenses in combination, let's first consider two lenses separated by a distance d. We may apply the thin-lens equation and cascade the imaging process by taking the image formed by lens 1 as the object for lens 2.
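Before cascading lenses, the optical invariant of Section A can be verified with a quick thin-lens calculation (the numbers are an illustrative example, not from the notes):

```python
# Thin lens with f = 10 and an object at s_o = 30 (arbitrary units).
f, s_o = 10.0, 30.0
s_i = 1.0 / (1.0 / f - 1.0 / s_o)        # thin-lens equation: s_i = 15
m = -s_i / s_o                            # transverse magnification = -1/2

h_o = 2.0                                 # object height
h_i = m * h_o                             # image height = -1

x = 1.0                                   # launch height of the axial ray
theta_in = x / s_o                        # angle leaving the axial object point
theta_out = -x / s_i                      # angle arriving at the image point

# Optical invariant: theta_in * h_o == theta_out * h_i
lhs = theta_in * h_o
rhs = theta_out * h_i
assert abs(lhs - rhs) < 1e-12
```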
https://ocw.mit.edu/courses/2-71-optics-spring-2014/04d4bff4ff4a9ae5e2c8ba7500905557_MIT2_71S14_lec4_notes.pdf
1/s_o1 + 1/s_i2 = (1/f1 + 1/f2) − d / ((d − s_i1) s_i1)

A few limiting cases:

a) Parallel beams from the left: s_i2 is the back focal length (BFL):

   1/BFL = (1/f1 + 1/f2) − d / ((d − f1) f1)

b) Collimated beams to the right: s_o1 is the front focal length (FFL):

   1/FFL = (1/f1 + 1/f2) − d / ((d − f2) f2)

   The composite lens does not have the same apparent focal length at its front and back ends!

c) d = f1 + f2: parallel beams illuminating the composite lens remain parallel at the exit; such a system is called afocal. This is in fact the principle used in most telescopes, as the object is located at infinity and the function of the instrument is to send the image to the eye with a large angle of view. On the other hand, a point source located at the left focus of the first lens is imaged at the right focus of the second lens (the two are called conjugate points). This is often used as a condenser for illumination.

(Figure: two-lens combinations with focal lengths f1, f2 separated by d.)

Practice Example: Huygens eyepiece

A Huygens eyepiece is designed with two plano-convex lenses separated by the average of the two focal lengths. Ideally, such an eyepiece should produce a virtual image at infinite distance. Let f1 =
30 cm and f2 = 10 cm, so the spacing is d = 20 cm. Let's find these parameters: a) the BFL and FFL, b) the location of the principal planes (PPs), c) the EFL.

C. Ray Vector and Ray Matrix

In principle, ray tracing can help us analyze image formation in any given optical system as the rays refract or reflect at all interfaces in the optical train. If we restrict the analysis to paraxial rays only, then this process can be described with a matrix approach. In the Feb 10 lecture, we defined a light ray by two coordinates:

a. its position, x
b. its slope, θ

These parameters define a ray vector, which will change with distance as the ray propagates through the optics. Associated with the input ray vector (x_in, θ_in) and the output ray vector (x_out, θ_out), we can express the effect of the optical elements in the general form of a 2×2 ray matrix:

   ( x_out )   ( A  B ) ( x_in )
   ( θ_out ) = ( C  D ) ( θ_in )

(Figure: an optical system represented by its ABCD ray matrix.)

These matrices are often (uncreatively) called ABCD matrices. Since the displacements and angles are assumed to be small, we can think of the entries in terms of partial derivatives, and connect the matrix components with the functions of the imaging elements:

A = ∂x_out / ∂x_in : spatial magnification
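The Huygens-eyepiece practice example can be worked out both from the BFL/FFL formulas of Section B and from a cascade of ABCD matrices. A sketch, assuming the common (x, θ) convention with thin-lens matrix [[1, 0], [−1/f, 1]] and free-space matrix [[1, d], [0, 1]] (the helper names are ours):

```python
import numpy as np

f1, f2, d = 30.0, 10.0, 20.0   # Huygens eyepiece: d = (f1 + f2) / 2

# Closed-form focal lengths from the Section B formulas.
bfl = 1.0 / (1.0 / f1 + 1.0 / f2 - d / ((d - f1) * f1))
ffl = 1.0 / (1.0 / f1 + 1.0 / f2 - d / ((d - f2) * f2))

# ABCD cascade: lens 1, free space d, lens 2 (rightmost matrix acts first).
def lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def space(d):
    return np.array([[1.0, d], [0.0, 1.0]])

M = lens(f2) @ space(d) @ lens(f1)
A, B, C, D = M.ravel()

efl = -1.0 / C        # effective focal length of the combination
bfl_matrix = -A / C   # back focal distance, measured from lens 2
ffl_matrix = -D / C   # front focal distance, measured from lens 1
```

Both routes agree: BFL = 5 cm, FFL = −15 cm and EFL = 15 cm, where the signs follow the conventions of the formulas as written; part b) of the exercise, locating the principal planes, then follows from the EFL and these focal distances.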