Elementary Differential Equations with Boundary Value Problems: Answers to Selected Exercises
Answers to Selected Exercises

12.3.24 (p. 663) $u(x,y)=\sum_{n=1}^\infty \alpha_n\frac{\cosh(n\pi x/b)}{\cosh(n\pi a/b)}\sin\frac{n\pi y}{b}$, where $\alpha_n=\frac{2}{b}\int_0^b g(y)\sin\frac{n\pi y}{b}\,dy$;
$u(x,y)=\frac{96}{\pi^5}\sum_{n=1}^\infty \frac{\cosh(2n-1)\pi x}{(2n-1)^5\cosh(2n-1)\pi}\sin(2n-1)\pi y$.

12.3.25 (p. 664) $u(x,y)=\sum_{n=1}^\infty \alpha_n\frac{\cosh((2n-1)\pi x/2b)}{\cosh((2n-1)\pi a/2b)}\cos\frac{(2n-1)\pi y}{2b}$, where $\alpha_n=\frac{2}{b}\int_0^b g(y)\cos\frac{(2n-1)\pi y}{2b}\,dy$;
$u(x,y)=-\frac{128}{\pi^3}\sum_{n=1}^\infty(-1)^n\frac{\cosh((2n-1)\pi x/4)}{(2n-1)^3\cosh((2n-1)\pi/2)}\cos\frac{(2n-1)\pi y}{4}$.

12.3.26 (p. 664) $u(x,y)=\frac{b}{\pi}\sum_{n=1}^\infty \alpha_n\frac{\cosh(n\pi x/b)}{n\sinh(n\pi a/b)}\sin\frac{n\pi y}{b}$, where $\alpha_n=\frac{2}{b}\int_0^b g(y)\sin\frac{n\pi y}{b}\,dy$;
$u(x,y)=\frac{64}{\pi^3}\sum_{n=1}^\infty(-1)^{n+1}\frac{\cosh((2n-1)\pi x/4)}{(2n-1)^3\sinh((2n-1)\pi/4)}\sin\frac{(2n-1)\pi y}{4}$.

12.3.27 (p. 664) $u(x,y)=-\frac{2b}{\pi}\sum_{n=1}^\infty \alpha_n\frac{\cosh((2n-1)\pi(x-a)/2b)}{(2n-1)\sinh((2n-1)\pi a/2b)}\sin\frac{(2n-1)\pi y}{2b}$, where $\alpha_n=\frac{2}{b}\int_0^b g(y)\sin\frac{(2n-1)\pi y}{2b}\,dy$;
$u(x,y)=192\sum_{n=1}^\infty\left[1+\frac{4(-1)^n}{(2n-1)\pi}\right]\frac{\cosh((2n-1)(x-1)/2)}{(2n-1)^4\sinh((2n-1)/2)}\sin\frac{(2n-1)y}{2}$.

12.3.28 (p. 664) $u(x,y)=\alpha_0(x-a)+\frac{b}{\pi}\sum_{n=1}^\infty \alpha_n\frac{\sinh(n\pi(x-a)/b)}{n\cosh(n\pi a/b)}\cos\frac{n\pi y}{b}$, where $\alpha_0=\frac{1}{b}\int_0^b g(y)\,dy$ and $\alpha_n=\frac{2}{b}\int_0^b g(y)\cos\frac{n\pi y}{b}\,dy$;
$u(x,y)=\frac{\pi(x-2)}{2}-\frac{4}{\pi}\sum_{n=1}^\infty\frac{\sinh((2n-1)(x-2))}{(2n-1)^3\cosh(2(2n-1))}\cos(2n-1)y$.

12.3.29 (p. 664) $u(x,y)=\alpha_0+\sum_{n=1}^\infty \alpha_n e^{-n\pi y/a}\cos\frac{n\pi x}{a}$, where $\alpha_0=\frac{1}{a}\int_0^a f(x)\,dx$ and $\alpha_n=\frac{2}{a}\int_0^a f(x)\cos\frac{n\pi x}{a}\,dx$, $n\ge1$;
$u(x,y)=\frac{\pi^3}{2}-\frac{48}{\pi}\sum_{n=1}^\infty\frac{1}{(2n-1)^4}e^{-(2n-1)y}\cos(2n-1)x$.

12.3.30 (p. 664) $u(x,y)=\sum_{n=1}^\infty \alpha_n e^{-(2n-1)\pi y/2a}\cos\frac{(2n-1)\pi x}{2a}$, where $\alpha_n=\frac{2}{a}\int_0^a f(x)\cos\frac{(2n-1)\pi x}{2a}\,dx$;
$u(x,y)=-\frac{288}{\pi^3}\sum_{n=1}^\infty\frac{(-1)^n}{(2n-1)^3}e^{-(2n-1)\pi y/6}\cos\frac{(2n-1)\pi x}{6}$.

12.3.31 (p. 664) $u(x,y)=\sum_{n=1}^\infty \alpha_n e^{-(2n-1)\pi y/2a}\sin\frac{(2n-1)\pi x}{2a}$, where $\alpha_n=\frac{2}{a}\int_0^a f(x)\sin\frac{(2n-1)\pi x}{2a}\,dx$;
$u(x,y)=\frac{32}{\pi}\sum_{n=1}^\infty\frac{1}{(2n-1)^3}e^{-(2n-1)y/2}\sin\frac{(2n-1)x}{2}$.
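Several of these answers share the pattern of 12.3.24: Fourier coefficients of the boundary data $g$, each weighted by a ratio of hyperbolic cosines. That pattern can be spot-checked numerically; the sketch below evaluates the 12.3.24-type partial sum with an illustrative boundary function $g(y)=y(1-y)$ on a unit square (the domain and $g$ are assumptions for the demonstration, not data from the exercises). On the edge $x=a$ the hyperbolic ratio equals 1, so the partial sum should reproduce $g$.

```python
import math

def alpha(n, g, b, panels=2000):
    # Sine coefficient a_n = (2/b) * integral_0^b g(y) sin(n pi y / b) dy,
    # approximated by the midpoint rule (g assumed smooth on [0, b]).
    h = b / panels
    return (2.0 / b) * h * sum(
        g((k + 0.5) * h) * math.sin(n * math.pi * (k + 0.5) * h / b)
        for k in range(panels))

def u(x, y, g, a, b, N=60):
    # Partial sum of the 12.3.24-type answer:
    #   u(x, y) = sum_n a_n * cosh(n pi x / b)/cosh(n pi a / b) * sin(n pi y / b)
    s = 0.0
    for n in range(1, N + 1):
        ratio = math.cosh(n * math.pi * x / b) / math.cosh(n * math.pi * a / b)
        s += alpha(n, g, b) * ratio * math.sin(n * math.pi * y / b)
    return s

# Illustrative boundary data (not from the text): g(y) = y(1 - y), a = b = 1.
g = lambda y: y * (1.0 - y)
print(u(1.0, 0.5, g, 1.0, 1.0))   # approximately g(0.5) = 0.25
```

The truncation at N = 60 terms is safe here because the coefficients decay like $1/n^3$.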
12.3.32 (p. 664) $u(x,y)=-\frac{a}{\pi}\sum_{n=1}^\infty\frac{\alpha_n}{n}e^{-n\pi y/a}\sin\frac{n\pi x}{a}$, where $\alpha_n=\frac{2}{a}\int_0^a f(x)\sin\frac{n\pi x}{a}\,dx$;
$u(x,y)=4\sum_{n=1}^\infty\frac{1+2(-1)^n}{n^4}e^{-ny}\sin nx$.

12.3.33 (p. 664) $u(x,y)=-\frac{2a}{\pi}\sum_{n=1}^\infty\frac{\alpha_n}{2n-1}e^{-(2n-1)\pi y/2a}\cos\frac{(2n-1)\pi x}{2a}$, where $\alpha_n=\frac{2}{a}\int_0^a f(x)\cos\frac{(2n-1)\pi x}{2a}\,dx$;
$u(x,y)=\frac{5488}{\pi^3}\sum_{n=1}^\infty\frac{1}{(2n-1)^3}\left[1+\frac{4(-1)^n}{(2n-1)\pi}\right]e^{-(2n-1)\pi y/14}\cos\frac{(2n-1)\pi x}{14}$.

12.3.34 (p. 664) $u(x,y)=-\frac{2a}{\pi}\sum_{n=1}^\infty\frac{\alpha_n}{2n-1}e^{-(2n-1)\pi y/2a}\sin\frac{(2n-1)\pi x}{2a}$, where $\alpha_n=\frac{2}{a}\int_0^a f(x)\sin\frac{(2n-1)\pi x}{2a}\,dx$;
$u(x,y)=-\frac{2000}{\pi^3}\sum_{n=1}^\infty\frac{1}{(2n-1)^3}\left[(-1)^n+\frac{4}{(2n-1)\pi}\right]e^{-(2n-1)\pi y/10}\sin\frac{(2n-1)\pi x}{10}$.

12.3.35 (p. 664) $u(x,y)=\sum_{n=1}^\infty\frac{A_n\sinh(n\pi(b-y)/a)+B_n\sinh(n\pi y/a)}{\sinh(n\pi b/a)}\sin\frac{n\pi x}{a}+\sum_{n=1}^\infty\frac{C_n\sinh(n\pi(a-x)/b)+D_n\sinh(n\pi x/b)}{\sinh(n\pi a/b)}\sin\frac{n\pi y}{b}$.

12.3.36 (p. 664) $u(x,y)=C+\frac{a}{\pi}\sum_{n=1}^\infty\frac{B_n\cosh(n\pi y/a)-A_n\cosh(n\pi(y-b)/a)}{n\sinh(n\pi b/a)}\cos\frac{n\pi x}{a}+\frac{b}{\pi}\sum_{n=1}^\infty\frac{D_n\cosh(n\pi x/b)-C_n\cosh(n\pi(x-a)/b)}{n\sinh(n\pi a/b)}\cos\frac{n\pi y}{b}$.

Section 12.4 Answers, pp. 672–673

12.4.1 (p. 672) $u(r,\theta)=\alpha_0\frac{\ln(r/\rho)}{\ln(\rho_0/\rho)}+\sum_{n=1}^\infty\frac{r^n\rho^{-n}-\rho^n r^{-n}}{\rho_0^n\rho^{-n}-\rho^n\rho_0^{-n}}(\alpha_n\cos n\theta+\beta_n\sin n\theta)$, where $\alpha_0=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(\theta)\,d\theta$, $\alpha_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\cos n\theta\,d\theta$, and $\beta_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\sin n\theta\,d\theta$, $n=1,2,3,\dots$

12.4.2 (p. 672) $u(r,\theta)=\sum_{n=1}^\infty \alpha_n\frac{\rho_0^{-n\pi/\gamma}r^{n\pi/\gamma}-\rho_0^{n\pi/\gamma}r^{-n\pi/\gamma}}{\rho_0^{-n\pi/\gamma}\rho^{n\pi/\gamma}-\rho_0^{n\pi/\gamma}\rho^{-n\pi/\gamma}}\sin\frac{n\pi\theta}{\gamma}$, where $\alpha_n=\frac{2}{\gamma}\int_0^\gamma f(\theta)\sin\frac{n\pi\theta}{\gamma}\,d\theta$, $n=1,2,3,\dots$

12.4.3 (p. 672) $u(r,\theta)=\rho\alpha_0\ln\frac{r}{\rho_0}+\frac{\rho\gamma}{\pi}\sum_{n=1}^\infty\frac{\alpha_n}{n}\,\frac{\rho_0^{-n\pi/\gamma}r^{n\pi/\gamma}-\rho_0^{n\pi/\gamma}r^{-n\pi/\gamma}}{\rho_0^{-n\pi/\gamma}\rho^{n\pi/\gamma}+\rho_0^{n\pi/\gamma}\rho^{-n\pi/\gamma}}\cos\frac{n\pi\theta}{\gamma}$, where $\alpha_0=\frac{1}{\gamma}\int_0^\gamma f(\theta)\,d\theta$ and $\alpha_n=\frac{2}{\gamma}\int_0^\gamma f(\theta)\cos\frac{n\pi\theta}{\gamma}\,d\theta$, $n=1,2,3,\dots$

12.4.4 (p. 672) $u(r,\theta)=\sum_{n=1}^\infty \alpha_n\frac{r^{(2n-1)\pi/2\gamma}}{\rho^{(2n-1)\pi/2\gamma}}\cos\frac{(2n-1)\pi\theta}{2\gamma}$, where $\alpha_n=\frac{2}{\gamma}\int_0^\gamma f(\theta)\cos\frac{(2n-1)\pi\theta}{2\gamma}\,d\theta$, $n=1,2,3,\dots$

12.4.5 (p. 673) $u(r,\theta)=\frac{2\gamma\rho_0}{\pi}\sum_{n=1}^\infty\frac{\alpha_n}{2n-1}\,\frac{\rho^{-(2n-1)\pi/2\gamma}r^{(2n-1)\pi/2\gamma}+\rho^{(2n-1)\pi/2\gamma}r^{-(2n-1)\pi/2\gamma}}{\rho^{-(2n-1)\pi/2\gamma}\rho_0^{(2n-1)\pi/2\gamma}-\rho^{(2n-1)\pi/2\gamma}\rho_0^{-(2n-1)\pi/2\gamma}}\sin\frac{(2n-1)\pi\theta}{2\gamma}$, where
$\alpha_n=\frac{2}{\gamma}\int_0^\gamma g(\theta)\sin\frac{(2n-1)\pi\theta}{2\gamma}\,d\theta$, $n=1,2,3,\dots$

12.4.6 (p. 673) $u(r,\theta)=\alpha_0+\sum_{n=1}^\infty \alpha_n\frac{r^{n\pi/\gamma}}{\rho^{n\pi/\gamma}}\cos\frac{n\pi\theta}{\gamma}$, where $\alpha_0=\frac{1}{\gamma}\int_0^\gamma f(\theta)\,d\theta$ and $\alpha_n=\frac{2}{\gamma}\int_0^\gamma f(\theta)\cos\frac{n\pi\theta}{\gamma}\,d\theta$, $n=1,2,3,\dots$

12.4.7 (p. 673) $v_n(r,\theta)=\frac{r^n}{n\rho^{n-1}}(\alpha_n\cos n\theta+\beta_n\sin n\theta)$; $u(r,\theta)=c+\sum_{n=1}^\infty\frac{r^n}{n\rho^{n-1}}(\alpha_n\cos n\theta+\beta_n\sin n\theta)$, where $\alpha_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\cos n\theta\,d\theta$ and $\beta_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\theta)\sin n\theta\,d\theta$, $n=1,2,3,\dots$

Section 13.1 Answers, pp. 684–686

13.1.2 (p. 684) $y=-x+\frac{2}{e-1}$
13.1.12 (p. 685) $y=\frac{\sinh(x-a)}{\sinh(b-a)}\int_x^b F(t)\sinh(t-b)\,dt+\frac{\sinh(x-b)}{\sinh(b-a)}\int_a^x F(t)\sinh(t-a)\,dt$

13.1.13 (p. 685) $y=-\frac{\sinh(x-a)}{\cosh(b-a)}\int_x^b F(t)\cosh(t-b)\,dt-\frac{\cosh(x-b)}{\cosh(b-a)}\int_a^x F(t)\sinh(t-a)\,dt$

13.1.14 (p. 685) $y=-\frac{\cosh(x-a)}{\sinh(b-a)}\int_x^b F(t)\cosh(t-b)\,dt-\frac{\cosh(x-b)}{\sinh(b-a)}\int_a^x F(t)\cosh(t-a)\,dt$

13.1.15 (p. 685) $y=-\frac{1}{2}\left[e^{x}\int_x^b e^{-t}F(t)\,dt+e^{-x}\int_a^x e^{t}F(t)\,dt\right]$

13.1.16 (p. 685) If $\omega$ isn't a positive integer, then $y=\frac{1}{\omega\sin\omega\pi}\left[\sin\omega x\int_x^\pi F(t)\sin\omega(t-\pi)\,dt+\sin\omega(x-\pi)\int_0^x F(t)\sin\omega t\,dt\right]$. If $\omega=n$ (a positive integer), then $\int_0^\pi F(t)\sin nt\,dt=0$ is necessary for existence of a solution; in this case $y=-\frac{1}{n}\left[\sin nx\int_x^\pi F(t)\cos nt\,dt+\cos nx\int_0^x F(t)\sin nt\,dt\right]+c_1\sin nx$, with $c_1$ arbitrary.

13.1.17 (p. 685) If $\omega\ne n+1/2$ ($n$ an integer), then $y=-\frac{\sin\omega x}{\omega\cos\omega\pi}\int_x^\pi F(t)\cos\omega(t-\pi)\,dt-\frac{\cos\omega(x-\pi)}{\omega\cos\omega\pi}\int_0^x F(t)\sin\omega t\,dt$. If $\omega=n+1/2$ ($n$ an integer), then $\int_0^\pi F(t)\sin(n+1/2)t\,dt=0$ is necessary for existence of a solution; in this case $y=-\frac{\sin(n+1/2)x}{n+1/2}\int_x^\pi F(t)\cos(n+1/2)t\,dt-\frac{\cos(n+1/2)x}{n+1/2}\int_0^x F(t)\sin(n+1/2)t\,dt+c_1\sin(n+1/2)x$, with $c_1$ arbitrary.

13.1.18 (p. 685) If $\omega\ne n+1/2$ ($n$ an integer), then $y=\frac{\cos\omega x}{\omega\cos\omega\pi}\int_x^\pi F(t)\sin\omega(t-\pi)\,dt+\frac{\sin\omega(x-\pi)}{\omega\cos\omega\pi}\int_0^x F(t)\cos\omega t\,dt$. If $\omega=n+1/2$ ($n$ an integer), then $\int_0^\pi F(t)\cos(n+1/2)t\,dt=0$ is necessary for existence of a solution; in this case $y=\frac{\cos(n+1/2)x}{n+1/2}\int_x^\pi F(t)\sin(n+1/2)t\,dt+\frac{\sin(n+1/2)x}{n+1/2}\int_0^x F(t)\cos(n+1/2)t\,dt+c_1\cos(n+1/2)x$, with $c_1$ arbitrary.

13.1.19 (p. 685) If $\omega$ isn't a positive integer, then $y=\frac{1}{\omega\sin\omega\pi}\left[\cos\omega x\int_x^\pi F(t)\cos\omega(t-\pi)\,dt+\cos\omega(x-\pi)\int_0^x F(t)\cos\omega t\,dt\right]$.
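Formulas like 13.1.12–13.1.15 are variation-of-parameters representations built from two integrals against the forcing function. The 13.1.15 formula can be checked numerically: its kernel $e^{\pm(x-t)}$ suggests it solves $y''-y=F$ (the underlying boundary value problem is inferred, not stated in this excerpt), so the residual $y''-y-F$ should be near zero. A sketch, with the test function $F(t)=\sin t$ and the interval $[0,1]$ as illustrative assumptions:

```python
import math

def y(x, F, a, b, n=4000):
    # Evaluate y(x) = -(1/2) [ e^x  * int_x^b e^{-t} F(t) dt
    #                        + e^-x * int_a^x e^{t}  F(t) dt ]   (13.1.15)
    def quad(f, lo, hi):
        # Composite midpoint rule with n panels.
        if hi <= lo:
            return 0.0
        h = (hi - lo) / n
        return h * sum(f(lo + (k + 0.5) * h) for k in range(n))
    left = math.exp(x) * quad(lambda t: math.exp(-t) * F(t), x, b)
    right = math.exp(-x) * quad(lambda t: math.exp(t) * F(t), a, x)
    return -0.5 * (left + right)

# Residual check of y'' - y = F at an interior point; second derivative
# approximated by central differences.
F = math.sin
a, b, x, h = 0.0, 1.0, 0.4, 1e-3
ypp = (y(x - h, F, a, b) - 2 * y(x, F, a, b) + y(x + h, F, a, b)) / h**2
print(abs(ypp - y(x, F, a, b) - F(x)))   # near zero
```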
If $\omega=n$ (a positive integer), then $\int_0^\pi F(t)\cos nt\,dt=0$ is necessary for existence of a solution; in this case $y=-\frac{1}{n}\left[\cos nx\int_x^\pi F(t)\sin nt\,dt+\sin nx\int_0^x F(t)\cos nt\,dt\right]+c_1\cos nx$, with $c_1$ arbitrary.

13.1.20 (p. 685) $y_1=B_1(z_2)z_1-B_1(z_1)z_2$

13.1.21 (p. 685) (a) $G(x,t)=\begin{cases}\frac{(t-a)(x-b)}{b-a}, & a\le t\le x,\\ \frac{(x-a)(t-b)}{b-a}, & x\le t\le b;\end{cases}$ $y=\frac{1}{b-a}\left[(x-a)\int_x^b(t-b)F(t)\,dt+(x-b)\int_a^x(t-a)F(t)\,dt\right]$
(b) $G(x,t)=\begin{cases}a-t, & a\le t\le x,\\ a-x, & x\le t\le b;\end{cases}$ $y=(a-x)\int_x^b F(t)\,dt+\int_a^x(a-t)F(t)\,dt$
(c) $G(x,t)=\begin{cases}x-b, & a\le t\le x,\\ t-b, & x\le t\le b;\end{cases}$ $y=\int_x^b(t-b)F(t)\,dt+(x-b)\int_a^x F(t)\,dt$
(d) $\int_a^b F(t)\,dt=0$ is a necessary condition for existence of a solution; then $y=\int_x^b tF(t)\,dt+x\int_a^x F(t)\,dt+c_1$, with $c_1$ arbitrary.

13.1.22 (p. 686) $G(x,t)=\begin{cases}-\frac{(2+t)(3-x)}{5}, & 0\le t\le x,\\ -\frac{(2+x)(3-t)}{5}, & x\le t\le 1;\end{cases}$ (a) $y=\frac{x^2-x-2}{2}$ (b) $y=\frac{5x^2-7x-14}{30}$ (c) $y=\frac{5x^4-9x-18}{60}$

13.1.23 (p. 686) $G(x,t)=\begin{cases}\frac{\cos t\sin x}{t^{3/2}\sqrt{x}}, & \pi/2\le t\le x,\\ \frac{\cos x\sin t}{t^{3/2}\sqrt{x}}, & x\le t\le\pi;\end{cases}$ (a) $y=\frac{1+\cos x-\sin x}{\sqrt{x}}$ (b) $y=\frac{x+\pi\cos x-(\pi/2)\sin x}{\sqrt{x}}$

13.1.24 (p. 686) $G(x,t)=\begin{cases}\frac{(t-1)x(x-2)}{t^3}, & 1\le t\le x,\\ \frac{x(x-1)(t-2)}{t^3}, & x\le t\le 2;\end{cases}$ (a) $y=x(x-1)(x-2)$ (b) $y=x(x-1)(x-2)(x+3)$

13.1.25 (p. 686) $G(x,t)=\begin{cases}-\frac{1}{22}\left(3+\frac{1}{t^2}\right)\left(x+\frac{4}{x}\right), & 1\le t\le x,\\ -\frac{1}{22}\left(3x+\frac{1}{x}\right)\left(1+\frac{4}{t^2}\right), & x\le t\le 2;\end{cases}$ (a) $y=\frac{x^2-11x+4}{11x}$ (b) $y=\frac{11x^3-45x^2-4}{33x}$ (c) $y=\frac{11x^4-139x^2-28}{88x}$
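The Green's function in 13.1.21(a) is the classical one for $y''=F$ with $y(a)=y(b)=0$ (the underlying boundary value problem is inferred from the kernel, not stated in this excerpt). For $F\equiv 1$ the exact solution is $(x-a)(x-b)/2$, which the quadrature sketch below reproduces:

```python
import math

def solve_dirichlet(F, a, b, x, n=2000):
    # Solution built from the 13.1.21(a) Green's function:
    #   y(x) = (1/(b-a)) [ (x-a) * int_x^b (t-b) F(t) dt
    #                    + (x-b) * int_a^x (t-a) F(t) dt ]
    def quad(f, lo, hi):
        # Composite midpoint rule (exact for integrands linear in t).
        if hi <= lo:
            return 0.0
        h = (hi - lo) / n
        return h * sum(f(lo + (k + 0.5) * h) for k in range(n))
    return ((x - a) * quad(lambda t: (t - b) * F(t), x, b)
            + (x - b) * quad(lambda t: (t - a) * F(t), a, x)) / (b - a)

# For F = 1 on [0, 1], the exact solution of y'' = 1, y(0) = y(1) = 0
# is x(x - 1)/2.
print(solve_dirichlet(lambda t: 1.0, 0.0, 1.0, 0.3))   # approx 0.3*(0.3-1)/2 = -0.105
```

Because the midpoint rule integrates linear functions exactly, the constant-forcing case is reproduced to roundoff accuracy.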
13.1.26 (p. 686) $\alpha(\rho+\delta)-\beta\rho\ne0$; $G(x,t)=\begin{cases}\frac{(\beta-\alpha t)(\rho+\delta-\rho x)}{\alpha(\rho+\delta)-\beta\rho}, & 0\le t\le x,\\ \frac{(\beta-\alpha x)(\rho+\delta-\rho t)}{\alpha(\rho+\delta)-\beta\rho}, & x\le t\le 1\end{cases}$

13.1.27 (p. 686) $\alpha\delta-\beta\rho\ne0$; $G(x,t)=\begin{cases}\frac{(\beta\cos t-\alpha\sin t)(\delta\cos x-\rho\sin x)}{\alpha\delta-\beta\rho}, & 0\le t\le x,\\ \frac{(\beta\cos x-\alpha\sin x)(\delta\cos t-\rho\sin t)}{\alpha\delta-\beta\rho}, & x\le t\le\pi\end{cases}$

13.1.28 (p. 686) $\alpha\rho+\beta\delta\ne0$; $G(x,t)=\begin{cases}\frac{(\beta\cos t-\alpha\sin t)(\rho\cos x+\delta\sin x)}{\alpha\rho+\beta\delta}, & 0\le t\le x,\\ \frac{(\beta\cos x-\alpha\sin x)(\rho\cos t+\delta\sin t)}{\alpha\rho+\beta\delta}, & x\le t\le\pi\end{cases}$

13.1.29 (p. 686) $\alpha\delta-\beta\rho\ne0$; $G(x,t)=\begin{cases}\frac{e^{x-t}(\beta\cos t-(\alpha+\beta)\sin t)(\delta\cos x-(\rho+\delta)\sin x)}{\alpha\delta-\beta\rho}, & 0\le t\le x,\\ \frac{e^{x-t}(\beta\cos x-(\alpha+\beta)\sin x)(\delta\cos t-(\rho+\delta)\sin t)}{\alpha\delta-\beta\rho}, & x\le t\le\pi\end{cases}$

13.1.30 (p. 686) $\beta\delta+(\alpha+\beta)(\rho+\delta)\ne0$; $G(x,t)=\begin{cases}\frac{e^{x-t}(\beta\cos t-(\alpha+\beta)\sin t)((\rho+\delta)\cos x+\delta\sin x)}{\beta\delta+(\alpha+\beta)(\rho+\delta)}, & 0\le t\le x,\\ \frac{e^{x-t}(\beta\cos x-(\alpha+\beta)\sin x)((\rho+\delta)\cos t+\delta\sin t)}{\beta\delta+(\alpha+\beta)(\rho+\delta)}, & x\le t\le\pi/2\end{cases}$

13.1.31 (p. 686) $(\rho+\delta)(\alpha-\beta)e^{b-a}-(\rho-\delta)(\alpha+\beta)e^{a-b}\ne0$; $G(x,t)=\begin{cases}\frac{((\alpha-\beta)e^{t-a}-(\alpha+\beta)e^{-(t-a)})((\rho-\delta)e^{x-b}-(\rho+\delta)e^{-(x-b)})}{2[(\rho+\delta)(\alpha-\beta)e^{b-a}-(\rho-\delta)(\alpha+\beta)e^{a-b}]}, & a\le t\le x,\\ \frac{((\alpha-\beta)e^{x-a}-(\alpha+\beta)e^{-(x-a)})((\rho-\delta)e^{t-b}-(\rho+\delta)e^{-(t-b)})}{2[(\rho+\delta)(\alpha-\beta)e^{b-a}-(\rho-\delta)(\alpha+\beta)e^{a-b}]}, & x\le t\le b\end{cases}$

Section 13.2 Answers, pp. 696–700

13.2.1 (p. 696) $(e^{bx}y')'+ce^{bx}y=0$
13.2.2 (p. 696) $(xy')'+\left(x-\frac{\nu^2}{x}\right)y=0$
13.2.3 (p. 696) $(\sqrt{1-x^2}\,y')'+\frac{\alpha^2}{\sqrt{1-x^2}}\,y=0$
13.2.4 (p. 696) $(x^b y')'+cx^{b-2}y=0$
13.2.5 (p. 696) $(e^{-x^2}y')'+2\alpha e^{-x^2}y=0$
13.2.6 (p. 696) $(xe^{-x}y')'+\alpha e^{-x}y=0$
13.2.7 (p. 696) $((1-x^2)y')'+\alpha(\alpha+1)y=0$
13.2.9 (p. 696) $\lambda_n=n^2\pi^2$, $y_n=e^{-x}\sin n\pi x$ ($n$ a positive integer)
13.2.10 (p. 697) $\lambda_0=-1$, $y_0=1$; $\lambda_n=n^2\pi^2$, $y_n=e^{-x}(n\pi\cos n\pi x+\sin n\pi x)$ ($n$ a positive integer)
13.2.11 (p. 697) (a) $\lambda=0$ is an eigenvalue, $y_0=2-x$ (b) none (c) 5.0476821, 14.9198790, 29.7249673, 49.4644528; $y=2\sqrt{\lambda}\cos\sqrt{\lambda}\,x-\sin\sqrt{\lambda}\,x$
13.2.12 (p. 697) (a) $\lambda=0$ isn't an eigenvalue (b) $-0.5955245$; $y=\cosh\sqrt{-\lambda}\,x$ (c) 8.8511386, 38.4741053, 87.8245457, 156.9126094; $y=\cos\sqrt{\lambda}\,x$
13.2.13 (p.
697) (a) $\lambda=0$ isn't an eigenvalue (b) none (c) 0.1470328, 1.4852833, 4.5761411, 9.6059439; $y=\sqrt{\lambda}\cos\sqrt{\lambda}\,x+\sin\sqrt{\lambda}\,x$
13.2.14 (p. 697) (a) $\lambda=0$ isn't an eigenvalue (b) $-0.1945921$; $y=2\sqrt{-\lambda}\cosh\sqrt{-\lambda}\,x-\sinh\sqrt{-\lambda}\,x$ (c) 1.9323619, 5.9318981, 11.9317920, 19.9317507; $y=2\sqrt{\lambda}\cos\sqrt{\lambda}\,x-\sin\sqrt{\lambda}\,x$
13.2.15 (p. 697) (a) $\lambda=0$ isn't an eigenvalue (b) $-1.0664054$; $y=\cosh\sqrt{-\lambda}\,x$ (c) 1.5113188, 8.8785880, 21.2104662, 38.4805610; $y=\cos\sqrt{\lambda}\,x$
13.2.16 (p. 697) (a) $\lambda=0$ isn't an eigenvalue (b) $-1.0239346$; $y=\sqrt{-\lambda}\cosh\sqrt{-\lambda}\,x-\sinh\sqrt{-\lambda}\,x$ (c) 2.0565705, 9.3927144, 21.7169130, 38.9842177; $y=\sqrt{\lambda}\cos\sqrt{\lambda}\,x-\sin\sqrt{\lambda}\,x$
13.2.17 (p. 697) (a) $\lambda=0$ isn't an eigenvalue (b) $-0.4357577$; $y=2\sqrt{-\lambda}\cosh\sqrt{-\lambda}\,x-\sinh\sqrt{-\lambda}\,x$ (c) 0.3171423, 3.7055350, 9.1970150, 16.8760401; $y=2\sqrt{\lambda}\cos\sqrt{\lambda}\,x-\sin\sqrt{\lambda}\,x$
13.2.18 (p. 697) (a) $\lambda=0$ isn't an eigenvalue (b) $-2.1790546$, $-9.0006633$; $y=\sqrt{-\lambda}\cosh\sqrt{-\lambda}\,x-3\sinh\sqrt{-\lambda}\,x$ (c) 5.8453181, 17.9260967, 35.1038567, 57.2659330; $y=\sqrt{\lambda}\cos\sqrt{\lambda}\,x-3\sin\sqrt{\lambda}\,x$
13.2.19 (p. 697) (a) $\lambda=0$ is an eigenvalue, $y_0=2-x$ (b) $-1.0273046$; $y=2\sqrt{-\lambda}\cosh\sqrt{-\lambda}\,x-\sinh\sqrt{-\lambda}\,x$ (c) 8.8694608, 16.5459202, 26.4155505, 38.4784094; $y=2\sqrt{\lambda}\cos\sqrt{\lambda}\,x-\sin\sqrt{\lambda}\,x$
13.2.20 (p. 697) (a) $\lambda=0$ isn't an eigenvalue (b) $-7.9394171$, $-3.1542806$; $y=2\sqrt{-\lambda}\cosh\sqrt{-\lambda}\,x-5\sinh\sqrt{-\lambda}\,x$ (c) 29.3617465, 78.777456, 147.8866417, 236.7229622; $y=2\sqrt{\lambda}\cos\sqrt{\lambda}\,x-5\sin\sqrt{\lambda}\,x$
13.2.21 (p. 697) $\lambda=0$, $y=xe^{-x}$; 20.1907286, 118.8998692, 296.5544121, 553.1646458; $y=e^{-x}\sin\sqrt{\lambda}\,x$
13.2.22 (p. 697) $\lambda_n=n^2\pi^2$, $y_n=x\sin n\pi(x-2)$ ($n$ a positive integer)
13.2.23 (p. 697) $\lambda=0$, $y=x(2-x)$; 20.1907286, 118.8998692, 296.5544121, 553.1646458; $y=x\sin\sqrt{\lambda}(x-2)$
13.2.24 (p. 698) 3.3730893, 23.1923372, 62.6797232, 121.8999231, 200.8578309; $y=x\sin\sqrt{\lambda}(x-1)$
13.2.25 (p. 698) (a) $-L<\delta<0$ (b) $\delta=-L$
13.2.26 (p. 698) $\lambda_0=-1/\alpha^2$, $y_0=e^{-x/\alpha}$; $\lambda_n=n^2$, $y_n=n\alpha\cos nx-\sin nx$, $n=1,2,\dots$
13.2.27 (p. 698) (a) $y=x-\alpha$ (b) $y=\alpha k\cosh kx-\sinh kx$ (c) $y=\alpha k\cos kx-\sin kx$
13.2.29 (p. 698) (b) $\lambda=-\alpha^2/\beta^2$, $y=e^{-\alpha x/\beta}$
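The decimal eigenvalues in 13.2.11–13.2.24 come from locating roots of transcendental equations numerically. The same technique can be sketched on a standard Sturm–Liouville problem (the specific problem below, $y''+\lambda y=0$, $y(0)=0$, $y(1)+y'(1)=0$, is an assumed illustration, not one of the exercises above): nontrivial solutions are $y=\sin(\sqrt{\lambda}\,x)$ with $\tan\sqrt{\lambda}=-\sqrt{\lambda}$, and bisection brackets each root between consecutive odd multiples of $\pi/2$.

```python
import math

def eigenvalues(m):
    # Positive eigenvalues of y'' + lam*y = 0, y(0) = 0, y(1) + y'(1) = 0.
    # With y = sin(k x) and k = sqrt(lam), the right-hand condition gives
    # sin k + k cos k = 0, i.e. tan k = -k.  On each branch
    # ((2j-1)pi/2, (2j+1)pi/2), f(k) = tan k + k increases from -inf to +inf,
    # so it has exactly one sign change and bisection converges.
    f = lambda k: math.tan(k) + k
    lams = []
    for j in range(1, m + 1):
        lo = (2 * j - 1) * math.pi / 2 + 1e-9
        hi = (2 * j + 1) * math.pi / 2 - 1e-9
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        lams.append((0.5 * (lo + hi)) ** 2)
    return lams

print(eigenvalues(3))   # first entry near 2.0288**2 = 4.1159
```

The tiny offsets 1e-9 keep the bracket endpoints off the poles of the tangent.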
Index

A

Abel's formula, 199–202, 468
Accelerated payment, 139
Acceleration due to gravity, 151
Airy's equation, 319
Amplitude, of oscillation, 271; time-varying, 279
Amplitude–phase form, 272
Aphelion distance, 300
Apogee, 300
Applications: of first order equations, 130–192 (autonomous second order equations, 162–179; cooling problems, 140–141; curves, 179–192; elementary mechanics, 151–177; growth and decay, 130–140; mixing problems, 143–150); of linear second order equations, 268–302 (motion under a central force, 295–302; motion under inverse square law force, 299–301; RLC circuit, 289–295; spring–mass systems, 268–289)
Autonomous second order equations, 162–183: conversion to first order equations, 162; damped, 173–178 (pendulum, 174; spring–mass system, 173); Newton's second law of motion and, 163; undamped, 164–172 (pendulum, 164–169; spring–mass system, 164–173); stability and instability conditions for, 170–178

B

Beat, 275
Bernoulli's equation, 63–64
Bessel functions of order ν, 360
Bessel's equation, 205, 287, 348: of order ν, 360; of order zero, 377; ordinary point of, 319; singular point of, 319, 342
Bifurcation value, 54, 176
Birth rate, 2
Boundary conditions, 580: in heat equation, 618; Laplace equation, 649–651; periodic, 580; separated, 676; for two-point boundary value problems, 676
Boundary points, 676
Boundary value problems, 618: initial-, 618; mixed, 649; two-point, 676–686 (assumptions, 676; boundary conditions for, 676; defined, 676; Green's functions for, 681; homogeneous and nonhomogeneous, 676; orthogonality and, 583)

C

Capacitance, 290
Capacitor, 290
Carbon dating, 136
Central force, motion under a, 295–302; in terms of polar coordinates, 296–298
Characteristic equation, 210: with complex conjugate roots, 214–217; with distinct real roots, 211–217; with repeated real root, 212, 217
Characteristic polynomial, 210, 340, 475
Charge, 290; steady state, 293
Chebyshev polynomials, 322
Chebyshev's equation, 322
Circuit, RLC. See RLC circuit
Closed circuit, 289
Coefficient(s). See also Constant coefficient equations: computing recursively, 322; Fourier, 587; in Frobenius solutions, 352–358; undetermined, method of, 229–248, 475–496 (principle of superposition and, 235)
Coefficient matrix, 516
Competition, species, 6, 541
Complementary equation, 35, 469
Complementary system, 568
Compound interest, continuous, 131, 134
Constant: damping, 173; decay, 130; separation, 619; spring, 268; temperature decay, 140
Constant coefficient equations, 210, 475: homogeneous, 210–221 (with complex conjugate roots, 214–217; with distinct real roots, 211, 217; higher order, see Higher order constant coefficient homogeneous equations; with repeated real roots, 212, 217); with impulses, 452–460; nonhomogeneous, 229–248; with piecewise continuous forcing functions, 430–439
Constant coefficient homogeneous linear systems of differential equations, 529–568: geometric properties of solutions when n = 2, 536–539, 551–554, 562–565; with complex eigenvalue of constant matrix, 556–565; with defective constant matrix, 542–551; with linearly independent eigenvectors, 529–541
Constant solutions of separable first order equations, 48–53
Converge absolutely, 307
Convergence: of improper integral, 393; open interval of, 306; radius of, 306
Convergent infinite series, 620
Convergent power series, 306
Convolution, 440–452: convolution integral, 445–450; defined, 441; theorem, 441; transfer functions, 446–448; Volterra integral equation, 445
Cooling, Newton's law of, 3, 140
Cooling problems, 140–141, 148
Cosine series, Fourier, 603–604; mixed, 606–608
Critically damped motion, 280–281; oscillation, 291–294
Critical point, 163
Current, 289: steady state, 293; transient, 293
Curves, 179–192: equipotential, 185; geometric problems, 183; isothermal, 185; one-parameter families of, 179–183 (defined, 179; differential equation for, 180); orthogonal trajectories, 186–190, 192 (finding, 186–190)

D

D'Alembert's solution, 637
Damped
autonomous second order equations, 172–179 (for pendulum, 174; for spring–mass system, 173)
Damped motion, 268
Damping, RLC circuit in forced oscillation with, 293; spring–mass systems with, 173, 268, 279–288 (critically damped motion, 280–283; forced vibrations, 283–287; free vibrations, 279–283; overdamped motion, 279; underdamped motion, 279); spring–mass systems without, 269–277 (forced oscillation, 273–277)
Damping constant, 173
Damping forces, 163, 268
Dashpot, 268
Dating, carbon, 135–136
Death rate, 3
Decay. See Exponential growth and decay
Decay constant, 130
Derivatives, Laplace transform of, 413–415
Differential equations: defined, 8; order of, 8; ordinary, 8; partial, 8; solutions of, 9–11
Differentiation of power series, 308
Dirac, Paul A. M., 452
Dirac delta function, 452
Direction fields for first order equations, 16–27
Dirichlet, Peter G. L., 662
Dirichlet condition, 662
Dirichlet problem, 662
Discontinuity: jump, 398; removable, 408
Distributions, theory of, 453
Divergence of improper integral, 393
Divergent power series, 306

E

Eccentricity of orbit, 300
Eigenfunction associated with λ, 580, 689
Eigenvalue, 580, 689
Eigenvalue problems. See also Boundary value problems: Sturm–Liouville problems, 686–700 (defined, 689; orthogonality in, 695; solving, 688)
Elliptic orbit, 300
Epidemics, 5–53
Equidimensional equation, 486
Equilibrium, 163; spring–mass system, 268
Equilibrium position, 268
Equipotentials, 185
Error(s): in applying numerical methods, 96; in Euler's method, 97–102; at the i-th step, 96; truncation, 96 (global, 102, 111, 119; local, 100; local, numerical methods with O(h³), 114–116)
Escape velocity, 159
Euler's equation, 229, 343–346
Euler's identity, 96–108
Euler's method, 96–108: error in, 97–100; truncation, 100–102; improved, 109–114; semilinear, 102–106; step size and accuracy of, 97
Even functions, 592
Exact first order equations, 73–82: implicit solutions of, 73–74; procedures for solving, 76
Exactness condition, 75
Existence of solutions of nonlinear first order equations, 56–62
Existence theorem, 40, 56
Exponential growth and decay, 130–140: carbon dating, 136; interest compounded continuously, 134; mixed growth and decay, 134; radioactive decay, 130; savings program, 136
Exponential order, function of, 400

F

First order equations, 31–93: applications of, see under Applications; autonomous second order equation converted to, 162; direction fields for, 16–20; exact, 73–82 (implicit solution of, 73; procedures for solving, 76); linear, 31–44 (homogeneous, 30–35; nonhomogeneous, 35–41; solutions of, 30); nonlinear, 41, 52, 56–72 (existence and uniqueness of solutions of, 56–62; transformation into separable equations, 62–72); numerical methods for solving,
see Numerical methods; separable, 45–55, 68–72 (constant solutions of, 48–50; implicit solutions of, 47–48)
First order systems of equations: higher order systems written as, 511; scalar differential equations written as, 512
First shifting theorem, 397
Force(s): damping, 163, 268; gravitational, 151, 158; impulsive, 453; lines of, 185; motion under central, 296–302; motion under inverse square law, 299–302
Forced motion, 268: oscillation (damped, 293–294; undamped, 273–277); vibrations, 283–287
Forcing function, 194: without exponential factors, 238–241, 487–494; with exponential factors, 242–244
piecewise continuous, constant coefficient equations with, 430–439
Fourier coefficients, 587
Fourier series, 586–616: convergence of, 589; cosine series, 603–604 (convergence of, 607; mixed, 606); defined, 588; even and odd functions, 592; sine series, 605 (convergence of, 605; mixed, 609)
Fourier solutions of partial differential equations, 618–673: heat equation, 618–629 (boundary conditions in, 618; defined, 618; formal and actual solutions of, 620; initial-boundary value problem, 618–625; initial condition, 618; nonhomogeneous problems, 623–625; separation of variables to solve, 618–620); Laplace's equation, 649–673 (boundary conditions, 649–662; defined, 649; formal solutions of, 651–662; in polar coordinates, 666–673; for semi-infinite strip, 660)
Free fall under constant gravity, 13
Free motion, 268: oscillation, RLC circuit in, 291–292; vibrations, 279–283
Frequency, 279; of simple harmonic motion, 292
Frobenius solutions, 347–390: indicial equation with distinct real roots differing by an integer, 378–390; indicial equation with distinct real roots not differing by an integer, 351–364; indicial equation with repeated root, 364–378; power series in, 348; recurrence relationship in, 350; two term, 353–355; verifying, 357
Function(s): even and odd, 592; piecewise smooth, 588
Fundamental matrix, 524
Fundamental set of solutions: of higher order constant coefficient homogeneous equations, 480–482; of homogeneous linear second order equations, 198, 202; of homogeneous linear systems of differential equations, 522, 524; of linear higher order equations, 466

G

Gamma function, 403
Generalized Riccati equation, 72, 255
General solution: of higher order constant coefficient homogeneous equations, 475–480; of homogeneous linear second order equations, 198; of homogeneous linear systems of differential equations, 521, 524; of linear higher order equations, 465, 469; of nonhomogeneous linear first order equations, 30, 40; of nonhomogeneous linear second order equations, 221, 248–255
Geometric problems, 183–184
Gibbs phenomenon, 589, 597
Global truncation error in Euler's method, 102
Glucose absorption by the body, 5
Gravitation, Newton's law of, 151, 176, 295, 510, 515
Gravity, acceleration due to, 151
Green's function, 504, 681, 682, 685–686
Grid, rectangular, 17
Growth and decay: carbon dating, 136; exponential, 130–140; interest compounded continuously, 133–134; mixed growth and decay, 134; radioactive decay, 130; savings program, 136

H

Half-life, 130
Half-line, 536
Half-plane, 552
Harmonic conjugate function, 82
Harmonic function, 82
Harmonic motion, simple, 166, 271, 292: amplitude of oscillation, 272; natural frequency of, 272; phase angle of, 272
Heat equation, 618–629: boundary conditions in, 618; defined, 618; formal and actual solutions of, 620; initial-boundary value problems, 618, 625; initial condition, 618; nonhomogeneous problems, 623–625
separation of variables to solve, 618
Heat flow lines, 185
Heaviside's method, 407, 412
Hermite's equation, 322
Heun's method, 116
Higher order constant coefficient homogeneous equations, 475–487: characteristic polynomial of, 475–478; fundamental sets of solutions of, 480; general solution of, 476–482
Homogeneous linear first order equations, 30–34: general solutions of, 33; separation of variables in, 35
Homogeneous linear higher order equations, 465
Homogeneous linear second order equations, 194–221: constant coefficient, 210–221 (with complex conjugate roots, 214–217; with distinct real roots, 210–217; with repeated real roots, 210–214, 217); solutions of, 194, 198; the Wronskian and Abel's formula, 199–202
Homogeneous linear systems of differential equations, 516: basic theory of, 521–528; constant coefficient, 529–568 (with complex eigenvalues of coefficient matrix, 556–568; with defective coefficient matrix, 542–551; geometric properties of solutions when n = 2, 529–539, 551–554, 562–565; with linearly independent eigenvectors, 529–539); fundamental set of solutions of, 521, 524; general solution of, 521, 524; trivial and nontrivial solutions of, 521; Wronskian of solution set of, 523
Homogeneous nonlinear equations: defined, 65; transformation into separable equations, 65–68
Hooke's law, 268–269

I

Imaginary part, 215
Implicit function theorem, 47
Implicit solution(s), 73–74: of exact first order equations, 73–74; of initial value problems, 47; of separable first order equations, 47–49
Impressed voltage, 53
Improper integral, 393
Improved Euler method, 108–112, 120–122: semilinear, 112–114
Impulse function, 452
Impulse response, 448, 455
Impulses, constant coefficient equations with, 452–461
Independence, linear: of n functions, 466; of two functions, 198; of vector functions, 525
Indicial equation, 343, 351: with distinct real roots differing by an integer, 378–390; with distinct real roots not differing by an integer, 351–364; with repeated root, 364–378
Indicial
polynomial, 343, 351
Inductance, 290
Infinite series, convergent, 620
Initial-boundary value problem, 618: heat equation, 618–625; wave equation, 630–649
Initial conditions, 11
Initial value problems, 11–14: implicit solution of, 47; Laplace transforms to solve, 413–419 (formula for, 443–444; second order equations, 415–419)
Integral curves, 9, 17–27
Integrals: convolution, 445; improper, 393
Integrating factors, 82–93: finding, 83–93
Interest compounded continuously, 131–134
Interval of validity, 12
Inverse Laplace transforms, 404–413: defined, 404; linearity property of, 405; of rational functions, 406–413
Inverse square law force, motion under, 299–301
Irregular singular point, 342
Isothermal curves, 185

J

Jump discontinuity, 398

K

Kepler's second law, 296
Kepler's third law, 301
Kirchhoff's law, 290

L

Laguerre's equation, 348
Lambda-eigenfunctions, 580, 687
Laplace's equation, 649–673
boundary conditions, 649–651; defined, 649; formal solutions of, 651–662; in polar coordinates, 666–673; for semi-infinite strip, 660
Laplace transforms, 393–461: computation of simple, 393–396; of constant coefficient equations (with impulses, 452–461; with piecewise continuous forcing functions, 430–439); convolution, 440–452 (convolution integral, 445; defined, 441; theorem, 441; transfer functions, 446–448); definition of, 393; existence of, 398; first shifting theorem, 397; inverse, 404 (defined, 403; linearity property of, 405; of rational functions, 406–411); linearity of, 396; of piecewise continuous functions, 421–430 (unit step function and, 419–430); second shifting theorem, 425; to solve initial value problems, 413–419 (derivatives, 413–415; formula for, 443–444; second order equations, 415); tables of, 396
Legendre's equation, 204, 319: ordinary points of, 319; singular points of, 319, 347
Limit, 398
Limit cycle, 176
Linear combination(s), 198, 465, 521: of power series, 313–316
Linear difference equations, second order homogeneous, 340
Linear first order equations, 30–44: homogeneous, 30–35 (general solution of, 33; separation of variables, 35); nonhomogeneous, 30, 35–41 (general solution of, 35–41; solutions in integral form, 38–39; variation of parameters to solve, 35, 38); solutions of, 30–31
Linear higher order equations, 465–505: fundamental set of solutions of, 465, 466; general solution of, 465, 469; higher order constant coefficient homogeneous equations, 475–487 (characteristic polynomial of, 475–480; fundamental sets of solutions of, 479–482; general solution of, 476–478); homogeneous, 465; nonhomogeneous, 465, 469; trivial and nontrivial solutions of, 465; undetermined coefficients for, 487–496; variation of parameters for, 497–505 (derivation of method, 497–499; fourth order equations, 501; third order equations, 499); Wronskian of solutions of, 467–469
Linear independence, 198: of n functions, 466; of two functions, 198; of vector functions, 521–523
Linearity: of inverse Laplace transform, 405;
of Laplace transform, 396
Linear second order equations, 194–264: applications of, see under Applications; defined, 194; homogeneous, 194–221 (constant coefficient, 210–221; solutions of, 194–198; the Wronskian and Abel's formula, 199–202); nonhomogeneous, 194, 221–264, 465, 469 (comparison of methods for solving, 262; complementary equation for, 221; constant coefficient, 229–248; general solution of, 221–225; particular solution of, 221, 225–227; reduction of order to find general solution of, 248–255; superposition principle and, 225–227; undetermined coefficients method for, 229–248; variation of parameters to find particular solution of, 255–264); series solutions of, 306–390 (Euler's equation, 343–347; Frobenius solutions, 347–390; near an ordinary point, 319–339; with regular singular points, 342–347)
Linear systems of differential equations, 515–577: defined, 515; homogeneous, 515; basic theory of, 521–529; constant coefficient, 529–568; fundamental set of solutions of, 521–524; general solution of, 521, 524;
linear independence of, 521, 524; trivial and nontrivial solutions of, 521; Wronskian of solution set of, 523; nonhomogeneous, 516 (variation of parameters for, 568–576); solutions to initial value problem, 515–517
Lines of force, 185
Liouville, Joseph, 689
Local truncation error, 100–102: numerical methods with O(h³), 114–116
Logistic equation, 3

M

Maclaurin series, 308
Magnitude of acceleration due to gravity at Earth's surface, 151
Malthusian model, 2
Mathematical models, 2: validity of, 137, 140, 149
Matrix/matrices, 516–518: coefficient, complex eigenvalue of, 557–568; defective, 542; fundamental, 524
Mechanics, elementary, 151–178: escape velocity, 158–159, 162; motion through resisting medium under constant gravitational force, 151–157; Newton's second law of motion, 151; pendulum motion (damped, 174–176; undamped, 164–169); spring–mass system (damped, 173–174, 269, 279–289; undamped, 164–173, 268); units used in, 151
Midpoint method, 109
Mixed boundary value problems, 649
Mixed Fourier cosine series, 606–608
Mixed Fourier sine series, 609
Mixed growth and decay, 134
Mixing problems, 143–148
Models, mathematical, 2–3: validity of, 137, 140, 149
Motion: damped, 268 (critically, 280; overdamped, 279–280; underdamped, 279); elementary, see Mechanics, elementary; equation of, 269; forced, 270; free, 270; Newton's second law of, 6, 151, 163, 165, 173–174, 268, 296, 510 (autonomous second order equations and, 163); simple harmonic, 166, 269–273 (amplitude of oscillation, 271; frequency of, 272; phase angle of, 271); through resisting medium under constant gravitational force, 152–157; under a central force, 295–302; under inverse square law force, 299–301; undamped, 268
Multiplicity, 479

N

Natural frequency, 272
Natural length of spring, 268
Negative half-plane, 552
Neumann condition, 649
Neumann problem, 649
Newton's law of cooling, 3, 140–141, 148–150
Newton's law of gravitation, 151, 176, 295, 510, 519
Newton's second law of motion, 151, 163, 166, 173, 176, 268, 296, 509, 510:
autonomous second order equations and, 163
Nonhomogeneous linear first order equations, 30, 35–41: general solution of, 35–38, 40–41; solutions in integral form, 38; variation of parameters to solve, 35, 38
Nonhomogeneous linear second order equations, 194, 221–264: comparison of methods for solving, 262; complementary equation for, 221; constant coefficient, 229–255; general solution of, 221–225; particular solution of, 221–226, 229–235, 255–262; reduction of order to find general solution of, 248–255; superposition principle and, 225–227; undetermined coefficients method for, 229–255 (forcing functions with exponential factors, 242–244; forcing functions without exponential factors, 238–241; superposition principle and, 235); variation of parameters to find particular solution of, 255–264
Nonhomogeneous linear systems of differential equations, 515: variation of parameters for, 568–577
Nonlinear first order equations, 52, 56–72:
existence and uniqueness of solutions of, 56–72; transformation into separable equations, 62–72
Nonoscillatory solution, 358
Nontrivial solutions: of homogeneous linear first order equations, 30; of homogeneous linear higher order equations, 465; of homogeneous linear second order equations, 194; of homogeneous linear systems of differential equations, 521
Numerical methods, 96–127, 514: with O(h³) local truncation error, 114–116; error in, 96; Euler's method, 96–108 (error in, 97–102; semilinear, 102–106; step size and accuracy of, 97; truncation error in, 99–102); Heun's method, 115 (semilinear, 106); improved Euler method, 106, 109–112 (semilinear, 112); midpoint, 116; Runge–Kutta method, 98, 106, 119–127, 513–514 (for cases where x₀ isn't the left endpoint, 122–124; semilinear, 106, 122; for systems of differential equations, 514)
Numerical quadrature, 119, 127

O

Odd functions, 592
One-parameter families of curves, 179–183: defined, 179; differential equation for, 180
One-parameter families of functions, 30
Open interval of convergence, 306
Open rectangle, 56
Orbit, 301: eccentricity of, 300; elliptic, 300; period of, 301
Order of differential equation, 8
Ordinary differential equation, defined, 8
Ordinary point, series solutions of linear second order equations near, 319–341
Orthogonality, 583: in Sturm–Liouville problems, 695
Orthogonal trajectories, 186–190: finding, 186
Orthogonal with respect to a weighting function, 331
Oscillation: amplitude of, 271; critically damped, 292; overdamped, 292; RLC circuit in forced, with damping, 293; RLC circuit in free, 291–293; undamped forced, 273–277; underdamped, 291
Oscillatory solutions, 176, 232, 347
Overdamped motion, 279

P

Partial differential equations: defined, 8; Fourier solutions of, 618–673 (heat equation, 618–630; Laplace's equation, 649–673; wave equation, 630–649)
Partial fraction expansions, software packages to find, 411
Particular solutions of nonhomogeneous higher order equations, 469, 487–505
Particular solutions of nonhomogeneous
linear second order equations, 221, 225–226, 229–235, 255–261 Particular solutions of nonhomogeneous linear systems of differential equations, 568–577 Pendulum damped, 174–176 undamped, 173–169 Perigee, 300 Perihelion distance, 300 Periodic functions, 404 Period of orbit, 296 Phase angle of simple harmonic motion, 271–272 Phase plane equivalent, 162 Piecewise continuous functions, 399 forcing, constant coefficient equations with, 430–439 Piecewise smooth function, 588 Laplace transforms of, 398–401, 421–430 unit step functions and, 419–430 Plucked string, wave equation applied to, 638–642 Poincaré, Henri, 162 Polar coordinates central force in terms of, 296–298 in amplitude-phase form, 271 Laplace's equation in, 666–673 Polynomial(s) characteristic, 210, 340, 482
Index 795 of higher order constant coefficient homogeneous equations, 475–478 Chebyshev, 322 indicial, 343, 351 Taylor, 309 trigonometric, 602 Polynomial operator, 475 Population growth and decay, 2 Positive half-plane, 552 Potential equation, 649 Power series, 306–319 convergent, 306–307 defined, 306 differentiation of, 308–309 divergent, 306 linear combinations of, 313–316 radius of convergence of, 306, 307 shifting summation index in, 310–312 solutions of linear second order equations, represented by, 319–341 Taylor polynomials, 309 Taylor series, 308 uniqueness of, 309 Q Quasi-period, 279 R Radioactive decay, 130–131 Radius of convergence of power series, 306, 307 Rational functions, inverse Laplace transforms of, 406–413 Rayleigh, Lord, 171 Rayleigh's equation, 177 Real part, 215 Rectangle, open, 56 Rectangular grid, 17 Recurrence relations, 322 in Frobenius solutions, 351 two term, 353–355 Reduction of order, 213, 248–255 Regular singular points, 342–347 at x0 = 0, 347–364 Removable discontinuity, 398 Resistance, 290 Resistor, 290 Resonance, 277 Riccati, Jacopo Francesco, 72 Riccati equation, 72 RLC circuit, 289–294 closed, 289 in forced oscillation with damping, 293 in free oscillation, 291–293 Roundoff errors, 96 Runge-Kutta method, 98, 119–127, 514 for cases where x0 isn't the left endpoint, 122 for linear systems of differential equations, 514 semilinear, 106, 122 S Savings program, growth of, 136 Scalar differential equations, 512 Second order differential equation, 6 autonomous, 162–178 conversion to first order equation, 162 damped, 173–178 Newton's second law of motion and, 163 undamped, 164–169 Laplace transform to solve, 415–418 linear, See Linear second order equations two-point boundary value problems for, 676–686 assumptions, 676 boundary conditions for, 676 defined, 676 Green's function for, 682 homogeneous and nonhomogeneous, 676 procedure for solving, 677 Second order homogeneous linear difference equation, 340 Second Shifting Theorem,
425–427 Semilinear Euler method, 102 Semilinear improved Euler method, 106, 112 Semilinear Runge-Kutta method, 108, 124 Separable first order equations, 45–55 constant solutions of, 48–50 implicit solutions, 47 transformations of nonlinear equations to, 62–63 Bernoulli's equation, 62–68 homogeneous nonlinear equations, 65–68 other equations, 64 Separated boundary conditions, 676 Separation constant, 619 Separation of variables, 35, 45 to solve heat equation, 618 to solve Laplace's equation, 651–662, 667–673 to solve wave equation, 632 Separatrix, 171, 170 Series, power. See Power series Series solution of linear second order equations, 306–390 Frobenius solutions, 347–390 near an ordinary point, 319 Shadow trajectory, 564–565 Shifting theorem, first, 397
796 Index second, 425–427 Simple harmonic motion, 269–273 amplitude of oscillation, 271 natural frequency of, 272 phase angle of, 272 Simpson's rule, 127 Singular point, 319 irregular, 342 regular, 342–347 Solution(s), 9–10. See also Frobenius solutions; Nontrivial solutions; Series solutions of linear second order equations; Trivial solution nonoscillatory, 358 oscillatory, 358 Solution curve, 9 Species, interacting, 6, 540 Spring, natural length of, 268, 269 Spring constant, 268 Spring-mass systems, 268–289 damped, 172, 269, 287–289 critically damped motion, 287–283 forced vibrations, 283–287 free vibrations, 283–285 overdamped motion, 279 underdamped motion, 279 in equilibrium, 268 simple harmonic motion, 269–273 amplitude of oscillation, 272 natural frequency of, 272 phase angle of, 272 undamped, 164–166, 269–277 forced oscillation, 273–287 Stability of equilibrium and critical point, 163–164 Steady state, 135 Steady state charge, 293 Steady state component, 285, 447 Steady state current, 293 String motion, wave equation applied to, 630–638 plucked, 638–642 vibrating, 630–638 Sturm-Liouville equation, 689 Sturm-Liouville expansion, 696 Sturm-Liouville problems, 687–700 defined, 687 orthogonality in, 695 solving, 687 Summation index in power series, 310–312 Superposition, principle of, 44, 225, 235, 470 method of undetermined coefficients and, 235 Systems of differential equations, 507–518. See also Linear systems of differential equations first order, higher order systems rewritten as, 322–512 scalar differential equations rewritten as, 512 numerical solutions of, 514 two first order equations in two unknowns, 507–510 T Tangent lines, 181 Taylor polynomials, 309 Taylor series, 308 Temperature, Newton's law of cooling, 3, 140–141, 148–149 Temperature decay constant of the medium, 140 Terminal velocity, 152 Time-varying amplitude, 279 Total impulse, 452 Trajectory(ies), of autonomous second order equations, 162 orthogonal, 186–190 finding, 186–244 shadow, 564 of 2 ×
2 systems, 536–539, 551–554, 562–565 Transfer functions, 446 Transformation of nonlinear equations to separable first order equations, 62–81 Bernoulli's equation, 63 homogeneous nonlinear equations, 65–68 other equations, 64 Transform pair, 393 Transient current, 293 Transient components, 285, 447 Transient solutions, 292 Trapezoid rule, 119 Trivial solution, of homogeneous linear first order equations, 30 of homogeneous linear second order equations, 194 of homogeneous linear systems of differential equations, 521 of linear higher order differential equations, 465 Truncation error(s), 96 in Euler's method, 100 global, 102, 109 local, 100 numerical methods with O(h3), 114–116 Two-point boundary value problems, 676–686 assumptions, 676 boundary conditions for, 676 defined, 676 Green's function for, 681 homogeneous and nonhomogeneous, 677 U
Index 797 Undamped autonomous second order equations, 164–171 pendulum, 173–169 spring-mass system, 164–166 stability and instability conditions for, 170–171 Undamped motion, 268 Underdamped motion, 279 Underdamped oscillation, 291 Undetermined coefficients for linear higher order equations, 487–496 forcing functions, 487–494 for linear second order equations, 229–248 principle of superposition, 235 Uniqueness of solutions of nonlinear first order equations, 56–62 Uniqueness theorem, 40, 56, 194, 465, 516 Unit step function, 422–430 V Validity, interval of, 12 Vandermonde, 484 Vandermonde determinant, 484 van der Pol's equation, 176 Variables, separation of, 35, 45 Variation of parameters for linear first order equations, 35 for linear higher order equations, 497–505 derivation of method, 497–498 fourth order equations, 501–502 third order equations, 499 for linear second order equations, 255 for nonhomogeneous linear systems of differential equations, 568–577 Velocity escape, 158–151 terminal, 152–156 Verhulst, Pierre, 3 Verhulst model, 3, 27, 69 Vibrating strings, wave equation applied to, 630 Vibrations forced, 283–287 free, 279–283 Voltage, impressed, 289 Voltage drop, 290 Volterra, Vito, 445 Volterra integral equation, 445 W Wave equation, 630–649 defined, 630 plucked string, 637–642 vibrating string, 630–637 assumptions, 630 formal solution, 632–637 Wave, traveling, 639 Weighting function, orthogonal with respect to, 331, 587 Wronskian of solutions of homogeneous linear systems of differential equations, 523 of solutions of homogeneous second order differential equations, 199–201 of solutions of homogeneous linear higher order differential equations, 467–469
MULTIVARIABLE CALCULUS Don Shimamoto | Multivariable_Calculus_Shimamoto_Page_1_Chunk3219 |
Multivariable Calculus Don Shimamoto Swarthmore College | Multivariable_Calculus_Shimamoto_Page_5_Chunk3220 |
©2019 Don Shimamoto ISBN: 978-1-7082-4699-0

This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Unless noted otherwise, the graphics in this work were created using the software packages Cinderella, Mathematica, and, in a couple of instances, Adobe Illustrator and Canvas.
Contents

Preface vii

I Preliminaries 1

1 Rn 3
1.1 Vector arithmetic 3
1.2 Linear transformations 6
1.3 The matrix of a linear transformation 7
1.4 Matrix multiplication 10
1.5 The geometry of the dot product 11
1.6 Determinants 15
1.7 Exercises for Chapter 1 18

II Vector-valued functions of one variable 23

2 Paths and curves 25
2.1 Parametrizations 25
2.2 Velocity, acceleration, speed, arclength 28
2.3 Integrals with respect to arclength 30
2.4 The geometry of curves: tangent and normal vectors 31
2.5 The cross product 33
2.6 The geometry of space curves: Frenet vectors 38
2.7 Curvature and torsion 40
2.8 The Frenet-Serret formulas 42
2.9 The classification of space curves 42
2.10 Exercises for Chapter 2 45

III Real-valued functions 53

3 Real-valued functions: preliminaries 55
3.1 Graphs and level sets 55
3.2 More surfaces in R3 58
3.3 The equation of a plane in R3 60
3.4 Open sets 62
3.5 Continuity 66
3.6 Some properties of continuous functions 69
3.7 The Cauchy-Schwarz and triangle inequalities 71
3.8 Limits 72
3.9 Exercises for Chapter 3 73

4 Real-valued functions: differentiation 83
4.1 The first-order approximation 83
4.2 Conditions for differentiability 88
4.3 The mean value theorem 89
4.4 The C1 test 90
4.5 The Little Chain Rule 92
4.6 Directional derivatives 92
4.7 ∇f as normal vector 94
4.8 Higher-order partial derivatives 96
4.9 Smooth functions 98
4.10 Max/min: critical points 98
4.11 Classifying nondegenerate critical points 102
4.11.1 The second-order approximation 102
4.11.2 Sums and differences of squares 104
4.12 Max/min: Lagrange multipliers 105
4.13 Exercises for Chapter 4 108

5 Real-valued functions: integration 121
5.1 Volume and iterated integrals 121
5.2 The double integral 126
5.3 Interpretations of the double integral 131
5.4 Parametrizations of surfaces 134
5.4.1 Polar coordinates (r, θ) in R2 136
5.4.2 Cylindrical coordinates (r, θ, z) in R3 137
5.4.3 Spherical coordinates (ρ, ϕ, θ) in R3 138
5.5 Integrals with respect to surface area 140
5.6 Triple integrals and beyond 144
5.7 Exercises for Chapter 5 147

IV Vector-valued functions 155

6 Differentiability and the chain rule 157
6.1 Continuity revisited 157
6.2 Differentiability revisited 159
6.3 The chain rule: a conceptual approach 161
6.4 The chain rule: a computational approach 162
6.5 Exercises for Chapter 6 166

7 Change of variables 173
7.1 Change of variables for double integrals 174
7.2 A word about substitution 177
7.3 Examples: linear changes of variables, symmetry 178
7.4 Change of variables for n-fold integrals 182
7.5 Exercises for Chapter 7 189

V Integrals of vector fields 195

8 Vector fields 197
8.1 Examples of vector fields 197
8.2 Exercises for Chapter 8 200

9 Line integrals 203
9.1 Definitions and examples 203
9.2 Another word about substitution 210
9.3 Conservative fields 211
9.4 Green's theorem 215
9.5 The vector field W 219
9.6 The converse of the mixed partials theorem 222
9.7 Exercises for Chapter 9 226

10 Surface integrals 235
10.1 What the surface integral measures 235
10.2 The definition of the surface integral 240
10.3 Stokes's theorem 245
10.4 Curl fields 250
10.5 Gauss's theorem 253
10.6 The inverse square field 257
10.7 A more substitution-friendly notation for surface integrals 259
10.8 Independence of parametrization 261
10.9 Exercises for Chapter 10 264

11 Working with differential forms 273
11.1 Integrals of differential forms 274
11.2 Derivatives of differential forms 275
11.3 A look back at the theorems of multivariable calculus 279
11.4 Exercises for Chapter 11 280

Answers to selected exercises 285
Index 307
Preface

This book is based on a course that I taught several times at Swarthmore College. The material is standard, though this particular course is geared towards students who enjoy learning mathematics for its own sake. As a result, there is a priority placed on understanding why things are true and a recognition that, when details are sketched or omitted, that should be acknowledged. Otherwise, the level of rigor is fairly normal. The course has a prerequisite of a semester of linear algebra. That is not necessary for this book, though it helps. Many of the students go on to more advanced courses in mathematics, and the course is calibrated to be part of the path on the way to the more theoretical, proof-oriented nature of upper-level material. Indeed, the hope is that the course will help inspire the students to continue on.

Roughly speaking, the book is organized into three main parts depending on the type of function being studied: vector-valued functions of one variable, real-valued functions of many variables, and, finally, the general case of vector-valued functions of many variables. The table of contents gives a pretty good idea of the topics that are covered, but here are a few notes:

• Chapter 1 contains the basics of working with vectors in Rn. For students who have studied linear algebra, this is review.

• Chapter 2 is concerned with curves in Rn. This belongs to the study of functions of one variable, and many students will have studied parametric equations in their first-year calculus courses, at least for curves in the plane. One of the appealing aspects of the topic is that it is possible to give a reasonably complete proof of a substantial result—the classification of space curves—in a way that reinforces the basic vector concepts that have just been introduced. Many of the proofs along the way and in the related exercises feel like calculations.
This allows the students to ease into the mindset of proving things before encountering the more argument-based proofs to come.

• Chapters 3, 4, and 5 study real-valued functions of many variables. This includes the topics most closely associated with multivariable calculus: partial derivatives and multiple integrals. The discussion of differentiation emphasizes first-order approximations and the notion of differentiability. For integration, the focus is almost entirely on functions of two variables and, after the change of variables theorem is introduced later on, functions of three variables.

• Chapters 6 and 7 begin the study of vector-valued functions of many variables. The main results here are the chain rule and the change of variables theorem. Their importance to the subsequent theory is reiterated throughout the rest of the book.

• Chapters 8, 9, and 10 introduce vector fields and their integrals over curves in Rn and surfaces in R3, in other words, line and surface integrals. This leads to the theorems of Green, Stokes, and Gauss, which are often taken as the target destinations of a multivariable calculus course.

• The book closes in Chapter 11 with a brief discussion of differential forms and their relation to what has come before. The treatment is superficial but hopefully still illuminating. The chapter is a success if the students come away believing that something interesting is going on and curious enough to want to learn more.

As is always the case, the most useful way for students to learn the material is by doing problems, and this book is written to get to the exercises as quickly as possible. In some cases, proofs are sketched in the text and the details are left for the exercises. I have tried to make sure that there is enough information in the text for the students to be able to do all the problems, but some students and instructors may find it helpful to see more worked examples. I believe that there are enough exercises in the book that instructors can choose to include some of them—ranging from routine calculations to proofs—as part of their own presentations of the material. Or students can try some of the exercises on their own and check their answers against those in the back of the book. The exercises are written with these possibilities in mind. I hope that this also gives instructors the flexibility to integrate the approach taken in the book more easily with their personal perspectives on the subject.

In writing the book, it has become clear that my own view of the material is heavily influenced by the multivariable calculus course I took as a student. It was taught by Greg Brumfiel, and I thank him for getting me excited about the subject in such a lasting way. I also thank my colleague Ralph Gomez for reading an entire draft of the manuscript and making numerous suggestions that improved the book considerably. I wish I had thought of them myself. Lastly, I thank the students who took my courses over the years for their responsiveness and feedback, which shaped my approach to the material as it evolved to its current state, such as it is.
| Multivariable_Calculus_Shimamoto_Page_12_Chunk3231 |
Part I Preliminaries 1 | Multivariable_Calculus_Shimamoto_Page_13_Chunk3232 |
Chapter 1

Rn

Let R denote the set of real numbers. Its elements are also called scalars. If n is a positive integer, then Rn is defined to be the set of all sequences x of n real numbers:

x = (x1, x2, . . . , xn). (1.1)

The elements of Rn are called points, vectors, or n-tuples. We follow the convention of indicating vectors in boldface and scalars in plainface. For a vector x, the individual scalar entries xi for i = 1, 2, . . . , n are called coordinates or components. Multivariable calculus studies functions between these sets, that is, functions of the form f : Rn → Rm, or, more accurately, of the form f : A → Rm, where A is a subset of Rn. In this context, if x represents a typical point of Rn, the coordinates x1, x2, . . . , xn are referred to as variables. For example, first-year calculus studies real-valued functions of one variable, functions of the form f : R → R. This chapter collects some of the background information about Rn that we use throughout the book. The presentation is meant to be self-contained, though readers who have studied linear algebra are likely to have a greater perspective on how the pieces fit together as part of a bigger picture.

1.1 Vector arithmetic

There are two basic algebraic operations in Rn. Let x = (x1, x2, . . . , xn) and y = (y1, y2, . . . , yn) be elements of Rn, and let c be a real number. The aforementioned operations are defined as follows.

• Addition: x + y = (x1 + y1, x2 + y2, . . . , xn + yn).
• Scalar multiplication: cx = (cx1, cx2, . . . , cxn).

Because of our familiarity with R2, we illustrate these concepts there in some detail. An element of R2 is an ordered pair x = (x1, x2). Geometrically, x is a point in the plane plotted in the usual way. In particular, the origin (0, 0) is called the zero vector and is denoted by 0. Alternatively, we may visualize x by drawing the arrow starting at (0, 0) and ending at (x1, x2). We'll go back and forth freely between the point/arrow viewpoints.
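The componentwise definitions translate directly into a few lines of code. The sketch below is not from the text; the helper names `add` and `scale` are my own, chosen only to mirror the two operations just defined.

```python
# Componentwise vector arithmetic in R^n, following the definitions above.
def add(x, y):
    """Vector addition: (x1 + y1, ..., xn + yn)."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def scale(c, x):
    """Scalar multiplication: (c*x1, ..., c*xn)."""
    return tuple(c * xi for xi in x)

x = (1, 2)
y = (3, 4)
print(add(x, y))    # (4, 6)
print(scale(3, x))  # (3, 6)
```

Note that nothing here depends on n = 2; the same two lines work for tuples of any length, which is the point of defining the operations componentwise.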
Given two vectors x = (x1, x2) and y = (y1, y2) in R2, the sum x + y as defined above is the point that results from adding the displacements in each of the horizontal and vertical directions, respectively. For instance, if x = (1, 2) and y = (3, 4), then x + y = (4, 6). Thinking of x and y as arrows emanating from 0, this places x + y at the vertex opposite the origin in the parallelogram determined by x and y, as on the left of Figure 1.1. If we think of x + y as an arrow as well, it is one of the diagonals of the parallelogram, as shown on the right.

Figure 1.1: Vector addition

Another way to reach x + y is to move the arrow representing y so that it retains the same length and direction but begins at the endpoint of x. This is called a "translation" of the original vector y. Then x + y is the destination if you go along x followed by the translated version of y, as illustrated in Figure 1.2, like following two displacements x and y in succession.

Figure 1.2: Using the translation of a vector

It is often convenient to translate vectors, especially when they represent quantities for which length and direction are the most relevant characteristics. Nevertheless, it's important to remember that the translations are only copies: the real vector starts at the origin. Similarly, if c is a real number, then cx results from multiplying each of the coordinate displacements by a factor of c. For instance, if x = (1, 2), then 3x = (3, 6). In general, cx is an arrow |c| times as long as x and in the same direction as x if c > 0, the opposite direction if c < 0. See Figure 1.3. In particular, (−1)x = (−x1, −x2) is a copy of x rotated by 180° to reverse the direction. It is usually denoted by −x since it satisfies x + (−1)x = 0. Going back to the parallelogram used to visualize x + y, we could also look at the other diagonal, say drawn as an arrow from y to x, as indicated in Figure 1.4. We sometimes denote this arrow by −→yx. It is what you would add to y to get to x. In other words, it is the difference x − y: −→yx = x − y. Again, the real x − y should be returned so that it starts at 0.
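The characterization of the difference as "what you would add to y to get to x" is easy to check numerically. The helper `sub` below is mine, not the book's notation:

```python
# The arrow from y to x is the difference x - y, so y + (x - y) lands back at x.
def sub(x, y):
    return tuple(xi - yi for xi, yi in zip(x, y))

x, y = (1, 2), (3, 4)
d = sub(x, y)                                  # the arrow y -> x, drawn from the origin
print(d)                                       # (-2, -2)
print(tuple(yi + di for yi, di in zip(y, d)))  # (1, 2): y + (x - y) = x
```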
Figure 1.3: Scalar multiplication

Figure 1.4: The difference of two vectors

In the case of R2, the coordinates are usually denoted by x, y, rather than x1, x2. Then R2 is the usual xy-plane. The notation is potentially confusing since x is also often used to denote the generic vector in Rn, as in equation (1.1) above. Hopefully, the context and the use of boldface will clarify whether a coordinate or vector is intended. Similarly, R3 represents three-dimensional space. Its coordinates are often denoted by x, y, z. Returning to the general case, every vector x = (x1, x2, . . . , xn) in Rn can be decomposed as a sum along the coordinate directions:

x = (x1, 0, . . . , 0) + (0, x2, . . . , 0) + · · · + (0, 0, . . . , xn)
  = x1(1, 0, . . . , 0) + x2(0, 1, . . . , 0) + · · · + xn(0, 0, . . . , 1).

The vectors e1 = (1, 0, . . . , 0), e2 = (0, 1, . . . , 0), . . . , en = (0, 0, . . . , 1), with a 1 in a single component and zeros everywhere else, are called the standard basis vectors. Thus:

x = x1e1 + x2e2 + · · · + xnen. (1.2)

The scalar coefficients xi are the coordinates of x. In general, a set of n vectors {v1, v2, . . . , vn} in Rn is called a basis if every x in Rn can be written in a unique way as a combination x = c1v1 + c2v2 + · · · + cnvn for some scalars c1, c2, . . . , cn. Any basis can be used to define its own coordinate system in Rn, and Rn has many different bases. Apart from a few occasions, however, we'll stick with the standard basis. In R2, the standard basis vectors are usually denoted by i = (1, 0) and j = (0, 1). In R3, the corresponding names are i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1).
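Equation (1.2) can be verified mechanically: building the standard basis and recombining it with the coordinates as coefficients recovers the original vector. This sketch is illustrative only; the function names are mine.

```python
# Decompose x in R^n into the standard basis e_1, ..., e_n, per equation (1.2).
def standard_basis(n):
    """The vectors e_i with a 1 in component i and zeros elsewhere."""
    return [tuple(1 if i == j else 0 for j in range(n)) for i in range(n)]

def recompose(x):
    """Rebuild x as x1*e1 + x2*e2 + ... + xn*en."""
    n = len(x)
    total = (0,) * n
    for xi, ei in zip(x, standard_basis(n)):
        total = tuple(t + xi * e for t, e in zip(total, ei))
    return total

print(standard_basis(3))        # [(1, 0, 0), (0, 1, 0), (0, 0, 1)] -- i, j, k in R^3
print(recompose((5, -1, 2)))    # (5, -1, 2): the coordinates are the coefficients
```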
1.2 Linear transformations

Linear transformations are functions that respect vector addition and scalar multiplication. More precisely:

Definition. A function T : Rn → Rm is called a linear transformation if:
• T(x + y) = T(x) + T(y) and
• T(cx) = c T(x).
The conditions must be satisfied for all x, y in Rn and all c in R.

This definition may seem austere, but many familiar functions are linear transformations. We just may not be used to thinking of them that way.

Example 1.1. Let T : R2 → R2 be counterclockwise rotation by π/3 about the origin. That is, if x ∈ R2, then T(x) is the point that is reached after x is rotated counterclockwise by π/3 about 0. We give a geometric argument that T satisfies the two requirements for being a linear transformation. First, consider the parallelogram determined by vectors x and y in R2. Then T rotates this parallelogram to another parallelogram, the one determined by T(x) and T(y). In particular, the vertex x + y is rotated to the vertex T(x) + T(y). This says exactly that T(x + y) = T(x) + T(y). This is illustrated at the left of Figure 1.5.

Figure 1.5: Rotations are linear transformations.

This geometric argument is so simple that it may not be clear that there is any actual reasoning behind it. The reader is encouraged to go through it carefully to pin down how it proves what we want. Also, the case where x and y are collinear, so that they don't determine an honest parallelogram, requires a separate argument. We won't give it, though see the reasoning in the next paragraph. Similarly, T rotates the line through the origin and x to the line through the origin and T(x). The point on the first line c times as far from the origin as x is rotated to the point c times as far from the origin as T(x), as indicated on the right of the figure. In other words, T(cx) = c T(x).

Example 1.2. Likewise, the function T : R2 → R2 that reflects a point x = (x1, x2) in the x1-axis is a linear transformation.
The appropriate supporting diagrams are shown in Figure 1.6.

Example 1.3. Let T : R3 → R2 be the function that projects a point x = (x1, x2, x3) in R3 onto the x1x2-plane, that is, T(x1, x2, x3) = (x1, x2). Rather than use pictures, this time we show that T is linear by calculating. For instance:

T(x + y) = T(x1 + y1, x2 + y2, x3 + y3) = (x1 + y1, x2 + y2).
1.3. THE MATRIX OF A LINEAR TRANSFORMATION 7 Figure 1.6: So are reflections. On the other hand: T(x) + T(y) = (x1, x2) + (y1, y2) = (x1 + y1, x2 + y2). Both expressions equal the same thing, so T(x + y) = T(x) + T(y). Similarly, T(cx) = T(cx1, cx2, cx3) = (cx1, cx2) while c T(x) = c(x1, x2) = (cx1, cx2). Thus T(cx) = c T(x) as well. 1.3 The matrix of a linear transformation Let T : Rn →Rm be a linear transformation. One of the things that make such a function easy to work with is that, although T(x) is defined for all x in Rn, the transformation is actually determined by a finite amount of input. To make sense of this, recall from equation (1.2) that every x in Rn can be expressed in terms of the standard basis vectors: x = x1e1 + x2e2 + · · · + xnen. Thus, by the defining properties of a linear transformation: T(x) = T(x1e1 + x2e2 + · · · + xnen) = T(x1e1) + T(x2e2) + · · · + T(xnen) = x1T(e1) + x2T(e2) + · · · + xnT(en) = x1a1 + x2a2 + · · · + xnan, (1.3) where aj = T(ej) for j = 1, 2, . . . , n. Conversely, given any n vectors a1, a2, . . . , an in Rm, it’s straightforward to check by calculation that the formula T(x) = x1a1 + x2a2 + · · · + xnan satisfies the two requirements for being a linear transformation. This is Exercise 2.3 at the end of the chapter. Moreover, T(ej) = T(0, 0, . . . , 1, . . . , 0) = 0 · a1 + 0 · a2 + · · · + 1 · aj + · · · + 0 · an = aj for each j. In other words, a linear transformation T : Rn →Rm is completely determined, as in (1.3), once you know the values T(e1) = a1, T(e2) = a2, . . . , T(en) = an, and there are no restrictions on what these values can be. To understand how this might be used, we consider first the case of a linear transformation T : Rn →R whose values are real numbers. Then every T(ej) is a scalar, say T(ej) = aj, and equation (1.3) becomes: T(x) = x1a1 + x2a2 + · · · + xnan. (1.4) Sums of this type are an indispensable part of vector algebra. Definition. Given x = (x1, x2, . . . , xn) and y = (y1, y2, . . . 
, yn) in Rn, the dot product, denoted by x · y, is defined by: x · y = x1y1 + x2y2 + · · · + xnyn. | Multivariable_Calculus_Shimamoto_Page_19_Chunk3237 |
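The definition translates directly into a one-line function. The following small script is an illustration of ours, not part of the text, and the helper name `dot` is our own choice:

```python
# The dot product of the Definition: x . y = x1 y1 + x2 y2 + ... + xn yn.
# (Illustrative helper; the name `dot` is ours, not the book's.)

def dot(x, y):
    assert len(x) == len(y)          # the vectors must live in the same Rn
    return sum(a * b for a, b in zip(x, y))

v = dot((1, 2, 3), (4, 5, 6))        # 1*4 + 2*5 + 3*6
w = dot((1, 0), (0, 1))              # orthogonal standard basis vectors
```

The value `v` matches the hand computation carried out in the text.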
8 CHAPTER 1. Rn For example, in R3, if x = (1, 2, 3) and y = (4, 5, 6), then x·y = 1·4+2·5+3·6 = 4+10+18 = 32. The dot product satisfies a variety of elementary properties, such as x · y = y · x. The ones we shall use are pretty obvious, so we won’t bother listing them out, though please see Exercises 5.5–5.10 if you’d like to see some of them collected together. Returning to the main point, we have shown in equation (1.4) that every real-valued linear transformation T : Rn →R has the form: T(x) = a · x for some vector a = (a1, a2, . . . , an) in Rn. The analysis for the general case of a linear transformation T : Rn →Rm follows the same pattern except that now the values aj = T(ej) are vectors in Rm. We record these values by putting them in the columns of a rectangular table. That is, for each j = 1, 2, . . . , n, say aj is the vector (a1j, a2j, . . . , amj) in Rm, and let A be the table: A = a11 a12 · · · a1j · · · a1n a21 a22 · · · a2j · · · a2n ... ... ... ... am1 am2 · · · amj · · · amn , where aj is highlighted in red in the jth column. Such a table is called a matrix. In fact, A is called an m by n matrix, which means that it has m rows and n columns. The subscripting has been chosen so that aij is the entry in row i, column j, where the rows are numbered starting from the top and the columns starting from the left. The matrix obtained in this way is called the matrix of T with respect to the standard bases. Since, by equation (1.3), the transformation T is completely determined by the columns aj, the matrix contains all the data we need to find T(x) for all x in Rn. We illustrate this with the three examples of linear transformations considered earlier. Example 1.4. Let T : R2 →R2 be the counterclockwise rotation by π 3 about the origin. Then T rotates the vector e1 = (1, 0) to the vector on the unit circle that makes an angle of π 3 with the positive x1-axis. That is, T(e1) = (cos π 3 , sin π 3 ) = (1 2, √ 3 2 ). 
Similarly, T(e2) = (cos 5π 6 , sin 5π 6 ) = (− √ 3 2 , 1 2). Hence the matrix of T with respect to the standard bases is: A = " 1 2 − √ 3 2 √ 3 2 1 2 # . (1.5) Example 1.5. If T : R2 →R2 is the reflection in the x1-axis, then T(e1) = T(1, 0) = (1, 0) and T(e2) = T(0, 1) = (0, −1). Hence: A = 1 0 0 −1 . (1.6) Example 1.6. Lastly, if T : R3 →R2 is the projection of x1x2x3-space onto the x1x2-plane, then T(e1) = T(1, 0, 0) = (1, 0), T(e2) = T(0, 1, 0) = (0, 1), and T(e3) = T(0, 0, 1) = (0, 0), so: A = 1 0 0 0 1 0 . (1.7) | Multivariable_Calculus_Shimamoto_Page_20_Chunk3238 |
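The pattern of Examples 1.4–1.6 — the jth column of A is T(ej) — can be mimicked numerically. This sketch is ours, not the book's; it rebuilds the rotation matrix (1.5) from the images of e1 and e2:

```python
import math

# Rebuilding matrix (1.5): the columns are T(e1) and T(e2) for the
# counterclockwise rotation by pi/3. (Row-of-lists storage is our choice.)

theta = math.pi / 3
col1 = (math.cos(theta), math.sin(theta))                              # T(e1)
col2 = (math.cos(theta + math.pi / 2), math.sin(theta + math.pi / 2))  # T(e2)

A = [[col1[0], col2[0]],
     [col1[1], col2[1]]]
```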
1.3. THE MATRIX OF A LINEAR TRANSFORMATION 9 To use the matrix A to compute T(x) in a systematic way, we observe the convention that vectors are identified with matrices having a single column. Thus: x = x1 x2 ... xn and aj = a1j a2j ... amj represent vectors in Rn and Rm, respectively. Sometimes they’re referred to as column vectors. With this notation, equation (1.3) becomes: T(x) = x1 a11 a21 ... am1 + x2 a12 a22 ... am2 + · · · + xn a1n a2n ... amn = a11x1 + a12x2 + · · · + a1nxn a21x1 + a22x2 + · · · + a2nxn ... am1x1 + am2x2 + · · · + amnxn . (1.8) This is similar to the real-valued case (1.4) in that each component is a dot product. Specifically, the ith component is the dot product of the ith row of A and x. This expression is given a name. Definition (Preliminary version of matrix multiplication). Let A be an m by n matrix, and let x be a column vector in Rn. Then the product Ax is defined to be the column vector y in Rm whose ith component is the dot product of the ith row of A and x: yi = ai1x1 + ai2x2 + · · · + ainxn. In other words, Ax is the column vector given by equation (1.8) above. We’ve labeled this as a “preliminary” version, because we define a more general matrix multi- plication shortly that includes this as a special case. Using this definition, equation (1.8) can be written in the simple form T(x) = Ax. The preceding discussion is summarized in the following result. Proposition 1.7. 1. Given any linear transformation T : Rn →Rm, there is an m by n matrix A such that: • the jth column of A is T(ej) for j = 1, 2, . . . , n, and • T(x) = Ax for all x in Rn. 2. Conversely, given any m by n matrix A, the formula T(x) = Ax defines a linear transforma- tion T : Rn →Rm. We apply this to the examples previously considered. 
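Before returning to the examples, the preliminary matrix–vector product itself can be sketched in code. This is an illustration of ours; the helper `matvec` is not from the text:

```python
# The "preliminary" matrix multiplication: the i-th component of Ax is the
# dot product of row i of A with x, as in (1.8). (matvec is our name.)

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x)))
            for i in range(len(A))]

# The projection matrix (1.7) applied to a sample point:
P = [[1, 0, 0],
     [0, 1, 0]]
y = matvec(P, [7, 8, 9])
```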
For instance, if T is the counterclockwise rotation of R2 by π/3 about the origin, then by (1.5):

T(x) = [ 1/2   −√3/2 ] [ x1 ]  =  ( (1/2)x1 − (√3/2)x2,  (√3/2)x1 + (1/2)x2 ).
       [ √3/2    1/2 ] [ x2 ]
10 CHAPTER 1. Rn Once you get the hang of it, this may be the simplest way to find a formula for a rotation. For the reflection in the x1-axis, (1.6) gives: T(x) = 1 0 0 −1 x1 x2 = 1 · x1 + 0 · x2 0 · x1 −1 · x2 = x1 −x2 = (x1, −x2). We didn’t need matrix methods to come up with this formula, but at least it’s correct. Lastly, for the projection of R3 onto the x1x2-plane, we find from (1.7) that: T(x) = 1 0 0 0 1 0 x1 x2 x3 = x1 + 0 + 0 0 + x2 + 0 = x1 x2 = (x1, x2), as expected. 1.4 Matrix multiplication Let T : Rn →Rm and S : Rm →Rp be functions, where, as indicated, the codomain of T equals the domain of S. The composition S ◦T : Rn →Rp is defined to be the function given by (S ◦T)(x) = S(T(x)) for all x in Rn. If S and T are linear transformations, it’s easy to check that S ◦T is a linear transformation, too: • S(T(x + y)) = S(T(x) + T(y)) = S(T(x)) + S(T(y)), • S(T(cx)) = S(c T(x)) = c S(T(x)). In each case, the first equation follows from the definition of linear transformation applied to T and the second applied to S. As a result, S ◦T is represented by a matrix C with respect to the standard bases. Let A and B be the matrices of S and T, respectively, with respect to the standard bases. We shall describe C in terms of A and B. By Proposition 1.7, the jth column cj of C is S(T(ej)). For the same reason, T(ej) is the jth column of B. Let’s call it bj, so cj = S(bj). On the other hand, S is represented by the matrix A, so cj is the matrix product cj = Abj. Its ith component is the ith row of A dotted with bj, that is, the dot product of the ith row of A and the jth column of B. Doing this for all j = 1, 2, . . . , n, fills out the matrix C. The matrix obtained in this way is called the product of A and B. Definition (Final version of matrix multiplication). Let A be a p by m matrix and B an m by n matrix. 
Their product, denoted by AB, is the p by n matrix C whose (i, j)th entry is the dot product of the ith row of A and the jth column of B: cij = ai1b1j + ai2b2j + · · · + aimbmj = m X k=1 aikbkj. Visually, the action is something like this: ... · · · cij · · · ... = ... ... ... ai1 ai2 · · · aim ... ... ... · · · b1j · · · · · · b2j · · · ... · · · bmj · · · . This only makes sense when the number of columns of A equals the number of rows of B. The definitions have been rigged so that the following statement is an immediate consequence. | Multivariable_Calculus_Shimamoto_Page_22_Chunk3240 |
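The entry-by-entry recipe of the definition translates directly into a short function. The sketch below is ours, not the book's:

```python
# Matrix multiplication: c_ij = sum_k a_ik b_kj, i.e. row i of A dotted
# with column j of B. (matmul is our own minimal helper.)

def matmul(A, B):
    m = len(B)                                   # columns of A = rows of B
    assert all(len(row) == m for row in A)
    return [[sum(A[i][k] * B[k][j] for k in range(m))
             for j in range(len(B[0]))]
            for i in range(len(A))]

C = matmul([[1, 2],
            [3, 4]],
           [[5, 6],
            [7, 8]])
```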
1.5. THE GEOMETRY OF THE DOT PRODUCT 11 Proposition 1.8. Composition of linear transformations corresponds to matrix multiplication. That is, let S : Rm →Rp and T : Rn →Rm be linear transformations with matrices A and B, respectively, with respect to the standard bases. Then S ◦T is a linear transformation, and its matrix with respect to the standard bases is the product AB. Example 1.9. Let T : R2 →R2 be the counterclockwise rotation by π 3 about the origin, S : R2 → R2 the reflection in the x1-axis, and consider the composition S◦T (first rotate, then reflect). Using the matrices from (1.5) and (1.6), the matrix of S ◦T with respect to the standard bases is the product: 1 0 0 −1 " 1 2 − √ 3 2 √ 3 2 1 2 # = " 1 · 1 2 + 0 · √ 3 2 1 · (− √ 3 2 ) + 0 · 1 2 0 · 1 2 + (−1) · √ 3 2 0 · (− √ 3 2 ) + (−1) · 1 2 # = " 1 2 − √ 3 2 − √ 3 2 −1 2 # . Working backwards and looking at the columns of this matrix, this tells us that (S ◦T)(e1) = (1 2, − √ 3 2 ) = (cos(−π 3 ), sin(−π 3 )) and (S ◦T)(e2) = (− √ 3 2 , −1 2) = (cos(7π 6 ), sin(7π 6 )). These points are plotted in Figure 1.7. A linear transformation that has the same effect on e1 and e2 is the reflection Figure 1.7: The composition of a rotation and a reflection in the line ℓthat passes through the origin and makes an angle of −π 6 with the positive x1-axis. But linear transformations are completely determined by what they do to the standard basis: two transformations that do the same thing must be the same transformation. Thus we conclude that S ◦T is the reflection in ℓ. This can be verified with geometric reasoning as well. By comparison, the composition T ◦S (first reflect, then rotate) is represented by the same matrices multiplied in the opposite order: " 1 2 − √ 3 2 √ 3 2 1 2 # 1 0 0 −1 = " 1 2 √ 3 2 √ 3 2 −1 2 # . (1.9) Note that this is different from the matrix of S ◦T. Matrix multiplication need not obey the commutative law AB = BA. 
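The failure of the commutative law in Example 1.9 can be checked numerically. The following script is an illustration of ours (the helper `matmul` is not from the text); A is the reflection matrix (1.6) and B the rotation matrix (1.5):

```python
import math

# AB represents S o T (rotate, then reflect); BA represents T o S.
# The two products differ, so the compositions are different maps.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

s3 = math.sqrt(3) / 2
A = [[1, 0], [0, -1]]           # reflection in the x1-axis, matrix (1.6)
B = [[0.5, -s3], [s3, 0.5]]     # rotation by pi/3, matrix (1.5)

AB = matmul(A, B)               # matrix of S o T
BA = matmul(B, A)               # matrix of T o S, equation (1.9)
differ = AB != BA
```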
(Can you describe geometrically the linear transformation that (1.9) represents?) 1.5 The geometry of the dot product We now discuss some geometric properties of the dot product, or, perhaps more accurately, how the dot product can be used to develop geometric intuition about Rn when n is large enough to be outside direct experience. | Multivariable_Calculus_Shimamoto_Page_23_Chunk3241 |
12 CHAPTER 1. Rn

Definition. The norm, or magnitude, of a vector x = (x1, x2, . . . , xn) in Rn, denoted by ∥x∥, is defined to be:

∥x∥ = √(x1² + x2² + · · · + xn²).

For instance, in R3, if x = (1, 2, 3), then ∥x∥ = √(1 + 4 + 9) = √14. The following simple property gets used a lot.

Proposition 1.10. If x ∈ Rn, then x · x = ∥x∥².

Proof. Both sides equal x1² + x2² + · · · + xn².

We return to the familiar setting of the plane and examine these notions there. For instance, if x = (x1, x2), then ∥x∥ = √(x1² + x2²). By the Pythagorean theorem, this is the length of the hypotenuse of a right triangle with legs |x1| and |x2|. If we think of x as an arrow emanating from the origin, then ∥x∥ is the length of the arrow. If we think of x as a point, then ∥x∥ is the distance from x to the origin.

Given two points x and y in R2, the distance between them is the length of the arrow that connects them, the arrow from y to x, which is x − y. Hence:

Distance between x and y = ∥x − y∥.

Next, let x = (x1, x2) and y = (y1, y2) be nonzero elements of R2, regarded as arrows emanating from the origin. Suppose that the arrows are perpendicular. If x and y do not form a horizontal/vertical pair, then the slopes of the lines through the origin that contain them are defined and are negative reciprocals. The slope is the ratio of vertical displacement to horizontal displacement, so this gives x2/x1 = −y1/y2. See Figure 1.8.

Figure 1.8: Perpendicular vectors in the plane: for instance, note that x has slope x2/x1.

This is easily rearranged to become x1y1 + x2y2 = 0, or x · y = 0. If x and y do form a horizontal/vertical pair, say x = (x1, 0) and y = (0, y2), then x · y = 0 once again. Conversely, if x · y = 0, the preceding reasoning can be reversed to conclude that x and y are perpendicular. Thus:

In R2, x and y are perpendicular if and only if x · y = 0.

For vectors in the plane in general, let θ denote the angle between two given vectors x and y, where 0 ≤ θ ≤ π.
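The norm and the perpendicularity criterion x · y = 0 can be exercised with a short script. This is an illustration of ours; the helpers `dot` and `norm` are our own names:

```python
import math

# norm(x) = sqrt(x . x), using Proposition 1.10; perpendicularity in R2
# is detected by x . y = 0.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

n = norm((1, 2, 3))                      # sqrt(14), as in the text
perp = dot((2, 3), (-3, 2)) == 0         # slopes 3/2 and -2/3: perpendicular
```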
To study the relationship between the dot product and θ, assume for the moment that neither x nor y is a scalar multiple of the other, so θ ̸= 0, π, and consider the triangle whose | Multivariable_Calculus_Shimamoto_Page_24_Chunk3242 |
1.5. THE GEOMETRY OF THE DOT PRODUCT 13 Figure 1.9: Vectors x and y in R2 and the angle θ between them vertices are 0, x, and y. Two of the sides of this triangle have lengths ∥x∥and ∥y∥, and the length of the third side is the length of the arrow −→ yx = x −y. See Figure 1.9. Thus, by the law of cosines, ∥x −y∥2 = ∥x∥2 + ∥y∥2 −2∥x∥∥y∥cos θ. By Proposition 1.10, this is the same as: (x −y) · (x −y) = x · x + y · y −2∥x∥∥y∥cos θ. (1.10) Multiplying out the left-hand side gives (x−y)·(x−y) = x·x−x·y−y·x+y·y = x·x−2x·y+y·y. This expansion uses some of the elementary, but obvious, algebraic properties of the dot product that we declined to list. After substitution into (1.10), we obtain: x · x −2x · y + y · y = x · x + y · y −2∥x∥∥y∥cos θ. Hence, after some cancellation: x · y = ∥x∥∥y∥cos θ. (1.11) In the case that x or y is a scalar multiple of the other, then θ = 0 or π. Say, for instance, that y = cx. Then x·y = x·(cx) = c∥x∥2. Meanwhile, ∥y∥= |c| ∥x∥, so ∥x∥∥y∥cos θ = |c| ∥x∥2 cos θ = ±c∥x∥2 cos θ. The plus sign occurs when |c| = c, i.e., the scalar multiple c is nonnegative, in which case θ = 0 and cos θ = 1. The minus sign occurs when c < 0, whence cos θ = cos π = −1. Either way, ∥x∥∥y∥cos θ = c∥x∥2, the same as x · y. Thus (1.11) is valid for all x, y in R2. We use our knowledge of the plane to try to create some intuition about Rn when n ≥3. This is especially important when n > 3, since visualization in those spaces is almost completely an act of imagination. For instance, if v is a nonzero vector in Rn, we think of the set of all scalar multiples cv as a “line.” If w is a second vector, not a scalar multiple of v, and c and d are scalars, the parallelogram law of addition suggests that the combination cv + dw should lie in a “plane” determined by v and w and that, as c and d range over all possible scalars, cv + dw should sweep out this plane. As a result, the set of all vectors cv + dw, where c, d ∈R, is called the plane spanned by v and w. 
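Formula (1.11) can be turned around to compute the angle between two vectors, θ = arccos(x · y / (∥x∥ ∥y∥)). The sketch below is ours, not the book's:

```python
import math

# Solving (1.11) for theta. (The helper names dot and angle are ours.)

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def angle(x, y):
    return math.acos(dot(x, y) / math.sqrt(dot(x, x) * dot(y, y)))

t1 = angle((1, 0), (1, 1))      # should be pi/4
t2 = angle((1, 0), (-1, 0))     # opposite directions: pi
```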
We denote it by P. The only reason that it's a plane is because that's what we have chosen to call it. We would like to make P into a replica of the familiar plane R2. We sketch one approach for doing this, without filling in many technical details. We begin by choosing two vectors u1 and u2 in P such that u1 · u1 = u2 · u2 = 1 and u1 · u2 = 0. These vectors play the role of the standard basis vectors e1 and e2, which of course satisfy the same relations. (It's not hard to show that such vectors u1 and u2 exist. In fact, in terms of the spanning vectors v and w, one can check that u1 = (1/∥v∥) v and u2 = (1/∥z∥) z, where z = w − ((v · w)/(v · v)) v, is one possibility, though there are many others.) We use u1 and u2 to establish a two-dimensional coordinate system internal to P. Every element of P can be written in the form x = a1u1 + a2u2 for some scalars a1 and a2. In terms of our budding intuition, we think of √(a1² + a2²) as representing the distance from the origin, or the length, of x.
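The parenthetical construction of u1 and u2 can be verified numerically. This sketch is ours (the vectors v and w are sample data, and the helper names are our own); it checks the three required relations u1 · u1 = u2 · u2 = 1 and u1 · u2 = 0:

```python
import math

# u1 = v/|v|; u2 = z/|z| with z = w - ((v.w)/(v.v)) v, as in the text.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def scale(c, x):
    return tuple(c * a for a in x)

v = (1.0, 1.0, 0.0, 0.0)     # sample spanning vectors in R4 (our choice)
w = (1.0, 0.0, 1.0, 0.0)

u1 = scale(1 / math.sqrt(dot(v, v)), v)
z = tuple(wi - (dot(v, w) / dot(v, v)) * vi for wi, vi in zip(w, v))
u2 = scale(1 / math.sqrt(dot(z, z)), z)

checks = (abs(dot(u1, u1) - 1) < 1e-12,
          abs(dot(u2, u2) - 1) < 1e-12,
          abs(dot(u1, u2)) < 1e-12)
```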
14 CHAPTER 1. Rn

If y = b1u1 + b2u2 is another element of P, then:

x · y = (a1u1 + a2u2) · (b1u1 + b2u2) = a1b1 u1 · u1 + (a1b2 + a2b1) u1 · u2 + a2b2 u2 · u2 = a1b1 + a2b2.

Thus the dot product in Rn agrees with the result we would expect in terms of the newly created internal coordinates in P. In particular, ∥x∥² = x · x = a1² + a2², so the norm in Rn represents our intuitive notion of length in P. This is true even though the coordinates of x = (x1, x2, . . . , xn) in Rn don't really have anything to do with P. Continuing in this way, we can build up P as a copy of R2 and transfer over the familiar concepts of plane geometry, such as distance and angle. The following terminology reflects this intuition.

Definition. A vector x in Rn is called a unit vector if ∥x∥ = 1.

For example, the standard basis vectors ei = (0, 0, . . . , 0, 1, 0, . . . , 0) are unit vectors.

Corollary 1.11. If x is a nonzero vector in Rn, then u = (1/∥x∥) x is a unit vector. It is called the unit vector in the direction of x.

Proof. We calculate that u · u = ((1/∥x∥) x) · ((1/∥x∥) x) = (1/∥x∥²)(x · x) = ∥x∥²/∥x∥² = 1, so ∥u∥ = 1.
1.6. DETERMINANTS 15 1.6 Determinants The determinant is a function that assigns a real number to an n by n matrix. There’s a separate function for each n. We shall focus almost exclusively on the cases n = 2 and n = 3, since those are the cases we really need later. The determinant is not defined for matrices in which the numbers of rows and columns are unequal. The determinant of a 2 by 2 matrix is defined to be: det a11 a12 a21 a22 = a11a22 −a12a21. It’s the product of the diagonal entries minus the product of the off-diagonal entries. For instance, det [ 1 2 3 4 ] = 1 · 4 −2 · 3 = 4 −6 = −2. One often thinks of the determinant as a function of the rows of the matrix. If x = (x1, x2) and y = (y1, y2), let x y denote the matrix whose rows are x and y: det x y = det x1 x2 y1 y2 = x1y2 −x2y1. The main geometric property of 2 by 2 determinants is the following. Proposition 1.13. Let x and y be vectors in R2, neither a scalar multiple of the other. Then: det x y = Area of the parallelogram determined by x and y. Proof. We show the equivalent result that | Multivariable_Calculus_Shimamoto_Page_27_Chunk3245 |
1.6. DETERMINANTS 17 For 3 by 3 matrices, the determinant is defined in terms of the 2 by 2 case: det a11 a12 a13 a21 a22 a23 a31 a32 a33 = a11 · det a22 a23 a32 a33 −a12 · det a21 a23 a31 a33 + a13 · det a21 a22 a31 a32 . (1.14) The signs in the sum alternate, and the pattern is that the terms run along the entries of the first row, a1j, each multiplied by the 2 by 2 determinant obtained by deleting the first row and jth column of the original matrix. The process is known as expansion along the first row. For instance: det 1 2 3 4 5 6 7 8 9 = 1 · det 5 6 8 9 −2 · det 4 6 7 9 + 3 · det 4 5 7 8 = 1 · (45 −48) −2 · (36 −42) + 3 · (32 −35) = −3 + 12 −9 = 0. One can show that 3 by 3 determinants satisfy all four of the algebraic properties of Proposition 1.14. The calculations are longer than in the 2 by 2 case but still straightforward, except for the product formula det(AB) = (det A)(det B) which calls for a new approach. One consequence of the properties is that there’s a formula for expanding the determinant along any row. To expand along the ith row, interchange row i with the row above it repeatedly until it reaches the top, then expand using equation (1.14). The result has the same form as (1.14), except that the leading scalar factors come from the ith row and sometimes the signs might alternate beginning with a minus sign depending on how many sign flips were introduced in getting the ith row to the top. In addition, since det(At) = det A, any general statement about rows applies to columns as well, so there are formulas for expanding along any column, too. We won’t write down those formulas precisely. There is also a geometric interpretation of 3 by 3 determinants in terms of three-dimensional volume. This is discussed in the next chapter. For larger matrices, the same pattern continues, that is, n by n determinants can be defined in terms of (n −1) by (n −1) determinants using expansion along the first row. 
For the 4 by 4 case, the formula is: det a11 a12 a13 a14 a21 a22 a23 a24 a31 a32 a33 a34 a41 a42 a43 a44 = a11 · det a22 a23 a24 a32 a33 a34 a42 a43 a44 −a12 · det a21 a23 a24 a31 a33 a34 a41 a43 a44 + a13 · det a21 a22 a24 a31 a32 a34 a41 a42 a44 −a14 · det a21 a22 a23 a31 a32 a33 a41 a42 a43 . The algebraic properties of Proposition 1.14 remain true for n by n determinants whatever the value of n, though, to prove this, one should really develop the algebraic structure of determinants in a systematic way rather than hope for success with brute force calculation. We leave this level of generality for a course in linear algebra. In what follows, we work for the most part with the cases n = 2 and n = 3. | Multivariable_Calculus_Shimamoto_Page_29_Chunk3247 |
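The 2 by 2 formula and the first-row expansion (1.14) translate directly into code. The helpers `det2`, `minor`, and `det3` below are our own names; the script reproduces the two worked examples from this section:

```python
# det2: product of diagonal entries minus product of off-diagonal entries.
# det3: expansion along the first row, equation (1.14).

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def minor(M, j):
    # delete row 0 and column j of the 3 by 3 matrix M
    return [[M[i][k] for k in range(3) if k != j] for i in (1, 2)]

def det3(M):
    return sum((-1) ** j * M[0][j] * det2(minor(M, j)) for j in range(3))

d2 = det2([[1, 2], [3, 4]])        # the 2 by 2 example in the text
d3 = det3([[1, 2, 3],
           [4, 5, 6],
           [7, 8, 9]])             # the 3 by 3 example in the text
```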
18 CHAPTER 1. Rn 1.7 Exercises for Chapter 1 Section 1 Vector arithmetic In Exercises 1.1–1.4, let x and y be the vectors x = (1, 2, 3) and y = (4, −5, 6) in R3. Also, 0 denotes the zero vector, 0 = (0, 0, 0). 1.1. Find x + y, 2x, and 2x −3y. 1.2. Find −→ yx and y + −→ yx. 1.3. If x + y + z = 0, find z. 1.4. If x −2y + 3z = 0, find z. 1.5. Let x and y be points in Rn. (a) Show that the midpoint of line segment xy is given by m = 1 2x + 1 2y. (Hint: What do you add to x to get to the midpoint?) (b) Find an analogous expression in terms of x and y for the point p that is 2/3 of the way from x to y. (c) Let x = (1, 1, 0) and y = (0, 1, 1). Find the point z in R3 such that y is the midpoint of line segment xz. Section 2 Linear transformations 2.1. Let T : R3 →R2 be the projection of x1x2x3-space onto the x1x2-plane, T(x1, x2, x3) = (x1, x2). Draw pictures that show that T satisfies the requirements for being a linear trans- formation. 2.2. Let T : R2 →R3 be the function defined by T(x1, x2) = (x1, x2, 0). Show that T is a linear transformation. 2.3. Let a1, a2, . . . , an be vectors in Rm. Verify that the function T : Rn →Rm given by T(x1, x2, . . . , xn) = x1a1 + x2a2 + · · · + xnan is a linear transformation. Section 3 The matrix of a linear transformation 3.1. Let T : R2 →R2 be the linear transformation such that T(e1) = (1, 2) and T(e2) = (3, 4). (a) Find the matrix of T with respect to the standard bases. (b) Find T(5, 6). (c) Find a general formula for T(x1, x2). 3.2. Let T : R3 →R3 be the linear transformation such that T(e1) = (1, 0, −1), T(e2) = (−1, 1, 0), and T(e3) = (0, −1, 1). (a) Find the matrix of T with respect to the standard bases. (b) Find T(1, −1, 1). | Multivariable_Calculus_Shimamoto_Page_30_Chunk3248 |
1.7. EXERCISES FOR CHAPTER 1 19 (c) Find the set of all points x = (x1, x2, x3) in R3 such that T(x) = 0. In Exercises 3.3–3.6, find the matrix of the given linear transformation T : R2 →R2 with respect to the standard bases. 3.3. T is the counterclockwise rotation by π 2 about the origin. 3.4. T is the reflection in the x2-axis. 3.5. T is the identity function, T(x) = x. 3.6. T is the dilation about the origin by a factor of 2, T(x) = 2x. 3.7. Let ρθ : R2 →R2 be the rotation about the origin counterclockwise by an angle θ. Show that the matrix of ρθ with respect to the standard bases is: Rθ = cos θ −sin θ sin θ cos θ . (1.15) The matrix Rθ is called a rotation matrix. 3.8. Let T : R2 →R2 be the linear transformation whose matrix with respect to the standard bases is A = 0 1 1 0 . Describe T geometrically. 3.9. Let T : R2 →R2 be the linear transformation whose matrix with respect to the standard bases is A = 1 2 2 4 . (a) Find T(e1) and T(e2). (b) Describe the image of T, that is, the set of all y in R2 such that y = T(x) for some x in R2. 3.10. Let v1 = (1, 1) and v2 = (−1, 1). (a) Find scalars c1, c2 such that c1v1 + c2v2 = e1. (b) Find scalars c1, c2 such that c1v1 + c2v2 = e2. (c) Let T : R2 →R2 be the linear transformation such that T(v1) = (−1, −1) and T(v2) = (−2, 2). Find the matrix of T with respect to the standard bases. (d) Let S : R2 →R2 be the linear transformation such that S(v1) = e1 and S(v2) = e2. Find the matrix of S with respect to the standard bases. 3.11. Let T : R2 →R3 be the linear transformation given by T(x1, x2) = (x1, x2, 0). Find the matrix of T with respect to the standard bases. 3.12. Let T : R3 →R3 be the rotation about the x3-axis by π 2 counterclockwise as viewed looking down from the positive x3-axis. Find the matrix of T with respect to the standard bases. 3.13. Let T : R3 →R3 be the rotation by π about the line x1 = x2, x3 = 0, in the x1x2-plane. Find the matrix of T with respect to the standard bases. 3.14. 
Let A and B be m by n matrices. If Ax = Bx for all column vectors x in Rn, show that A = B. | Multivariable_Calculus_Shimamoto_Page_31_Chunk3249 |
20 CHAPTER 1. Rn Section 4 Matrix multiplication In Exercises 4.1–4.6, find the indicated matrix products. 4.1. AB and BA, where A = 1 −1 1 1 and B = 2 4 −1 3 4.2. AB and BA, where A = 1 2 3 4 and B = 1 0 0 1 4.3. AB and BA, where A = 0 0 1 1 0 0 0 1 0 and B = 0 1 0 0 0 1 1 0 0 4.4. AB and BA, where A = 1 2 3 0 4 5 0 0 6 and B = 1 1 1 0 1 1 0 0 1 4.5. AB and BA, where A = 2 −1 −4 3 2 1 and B = 2 3 −1 2 −4 1 4.6. AB and BA, where A = 1 2 3 and B = 4 5 6 4.7. We have seen that the commutative law AB = BA does not hold in general for matrix multiplication. In fact, the situation is worse than that: it’s rare for AB and BA even to both be defined. In that sense, the preceding half dozen exercises are misleading. (a) Find an example of matrices A and B such that neither AB nor BA is defined. (b) Find an example where AB is defined but BA is not. 4.8. (a) Let T : Rn →Rm, S : Rm →Rp and R: Rp →Rq be functions. Show that: | Multivariable_Calculus_Shimamoto_Page_32_Chunk3250 |
1.7. EXERCISES FOR CHAPTER 1 21 4.9. Use matrices to prove that r ◦ρθ ◦r = ρ−θ. 4.10. (a) Compute the product A = RθSR−θ. (b) By thinking about the corresponding composition of linear transformations, give a ge- ometric description of the linear transformation T : R2 →R2 that is represented with respect to the standard bases by A. Section 5 The geometry of the dot product 5.1. Let x = (−1, 1, −2) and y = (4, −1, −1). (a) Find x · y. (b) Find ∥x∥and ∥y∥. (c) Find the angle between x and y. 5.2. Find the unit vector in the direction of x = (2, −1, −2). 5.3. Find all unit vectors in R2 that are orthogonal to x = (1, 2). 5.4. What does the sign of x · y tell you about the angle between x and y? In Exercises 5.5–5.10, show that the dot product satisfies the given property. The properties are true for vectors in Rn, though you may assume in your arguments that the vectors are in R2, i.e., x = (x1, x2), y = (y1, y2), and so on. The proofs for Rn in general are similar. 5.5. x · y = y · x 5.6. (cx) · y = c(x · y) for any scalar c 5.7. ∥cx∥= |c| ∥x∥for any scalar c 5.8. w · (x + y) = w · x + w · y 5.9. (v + w) · (x + y) = v · x + v · y + w · x + w · y (Hint: Make use of the preceding exercises.) 5.10. (x + y) · (x −y) = ∥x∥2 −∥y∥2 5.11. Recall that a rhombus is a planar quadrilateral whose sides all have the same length. Use the dot product to show that the diagonals of a rhombus are perpendicular to each other. 5.12. The Pythagorean theorem states that, for a right triangle in the plane, a2 + b2 = c2, where a and b are the lengths of the legs and c is the length of the hypotenuse. Use vector algebra and the dot product to verify that the theorem remains true for right triangles in Rn. (Hint: The hypotenuse is a diagonal of a rectangle.) Section 6 Determinants In Exercises 6.1–6.4, find the given determinant. 6.1. det 2 5 −3 4 6.2. det 0 5 −3 4 | Multivariable_Calculus_Shimamoto_Page_33_Chunk3251 |
22 CHAPTER 1. Rn 6.3. det 1 −2 3 −4 5 −6 7 −8 9 6.4. det 1 0 0 2 3 0 4 5 6 6.5. Find the area of the parallelogram in R2 determined by x = (4, 0) and y = (1, 3). 6.6. Find the area of the parallelogram in R2 determined by x = (−2, −3) and y = (−3, 2). 6.7. Let A = 1 −2 2 −4 . Use the product rule for determinants to show that there is no 2 by 2 matrix B such that AB = 1 0 0 1 . 6.8. Let A be an n by n matrix, and let At be its transpose. Show that det(AtA) = (det A)2. 6.9. If (x1, x2, x3) ∈R3, let: T(x1, x2, x3) = det x1 x2 x3 1 2 3 4 5 6 . (1.16) (a) Expand the determinant to find a formula for T(x1, x2, x3). (b) Show that equation (1.16) defines a linear transformation T : R3 →R. (c) Find the matrix of T with respect to the standard bases. (d) The matrix you found in part (c) should be a 1 by 3 matrix. Thinking of it as a vector in R3, show that it is orthogonal to both (1, 2, 3) and (4, 5, 6), the second and third rows of the matrix used to define T(x1, x2, x3). | Multivariable_Calculus_Shimamoto_Page_34_Chunk3252 |
Part II Vector-valued functions of one variable 23 | Multivariable_Calculus_Shimamoto_Page_35_Chunk3253 |
Chapter 2 Paths and curves This chapter is concerned with curves in Rn. While we may have an intuitive sense of what a curve is, at least in R2 or R3, the formal description here is somewhat indirect in that, rather than requiring a curve to have a defining equation, we describe it by how it is swept out, like the trace of a skywriter. Thus, in addition to studying geometric features of the curve, such as its length, we also look at quantities related to the skywriter’s motion, such as its velocity and acceleration. The goal of the chapter is a remarkable result about curves in R3 that describes measurements that characterize the geometry of a space curve completely. Along the way, we shall gain valuable experience applying vector methods. The functions in this chapter take their values in Rn, but they are functions of one variable. As a result, we treat the material as a continuation of first-year calculus without going back to redefine or redevelop concepts that are introduced there. At times, this assumed familiarity may lead to a rather loose treatment of certain basic topics, such as continuity, derivatives, and integrals. When we study functions of more than one variable, we shall go back and define these concepts carefully, and what we say then applies retroactively to what we cover here. We hope that any reader who becomes anxious about a possible lack of rigor will be willing to wait. 2.1 Parametrizations Let I be an interval of real numbers, typically, I = [a, b], (a, b), or R. Definition. A continuous function α: I →Rn is called a path. As t varies over I, α(t) traces out a curve C. More precisely: C = {x ∈Rn : x = α(t) for some t in I}. This is also known as the image of α. We say that α is a parametrization of the curve. We often refer to the input variable t as time and think of α(t) as the position of a moving object at time t. See Figure 2.1. A path is a vector-valued function of one variable. 
To follow our notational convention, we should write the value in boldface as α(t), but for the most part we continue to use plainface, usually reserving boldface for a particular kind of vector-valued function that we begin studying in Chapter 8. For each t in I, α(t) is a point of Rn, so we may write α(t) = (x1(t), x2(t), . . . , xn(t)), where each of the n coordinates is a real number that depends on t, i.e., a real-valued function of one variable. Here are some of the standard examples of parametrized curves that we shall refer to frequently. 25 | Multivariable_Calculus_Shimamoto_Page_37_Chunk3254 |
26 CHAPTER 2. PATHS AND CURVES

Figure 2.1: A parametrization α: [a, b] → Rn of a curve C

Example 2.1. Circles in R2: x² + y² = a², where a is the radius of the circle. The equation of the circle can be rewritten as
2.1. PARAMETRIZATIONS 27 Example 2.3. Helices in R3. A helix winds around a circular cylinder, say of radius a. If the axis of the cylinder is the z-axis, then the x and y-coordinates along the helix satisfy the equation of the circle as in Example 2.1, while the z-coordinate changes at a constant rate. Thus we set x = a cos t, y = a sin t, and z = bt for some constant b. That is, the helix is parametrized by α: R →R3, where: α(t) = (a cos t, a sin t, bt), a, b constant, a > 0. If b > 0, the helix is said to be “right-handed” and if b < 0 “left-handed.” Each type is shown in Figure 2.4. Figure 2.4: Helices: right-handed (at left) and left-handed (at right) Example 2.4. Lines in Rn. Given a point a and a nonzero vector v in Rn, the line through a and parallel to v is parametrized by α: R →Rn, where: α(t) = a + tv. See Figure 2.5. If a = (a1, a2, . . . , an) and v = (v1, v2, . . . , vn), then α(t) = (a1, a2, . . . , an) + t(v1, v2, . . . , vn) = (a1 + tv1, a2 + tv2, . . . , an + tvn), so in terms of coordinates: x1 = a1 + tv1 x2 = a2 + tv2 ... xn = an + tvn. If the domain of α is restricted to an interval of finite length, then α parametrizes a finite segment of the line. Figure 2.5: As t varies, α(t) = a + tv traces out a line. | Multivariable_Calculus_Shimamoto_Page_39_Chunk3256 |
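Parametrizations like these are easy to probe numerically. The short Python sketch below (the function name `helix` and the sample constants a = 2, b = 0.5 are our choices, not the text's) confirms that every point of α(t) = (a cos t, a sin t, bt) lies on the cylinder x² + y² = a² while the z-coordinate rises at the constant rate b:

```python
import math

def helix(t, a=2.0, b=0.5):
    # alpha(t) = (a cos t, a sin t, b t), a right-handed helix since b > 0
    return (a * math.cos(t), a * math.sin(t), b * t)

# every point lies on the cylinder x^2 + y^2 = a^2, and z rises at rate b
for t in [0.0, 1.0, 2.5, 7.0]:
    x, y, z = helix(t)
    assert abs(x * x + y * y - 4.0) < 1e-12
    assert abs(z - 0.5 * t) < 1e-12
```

Any other a > 0 and b would do equally well; the same check with b < 0 describes a left-handed helix.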
Example 2.5. Find a parametrization of the line in R3 that passes through the points a = (1, 2, 3) and b = (4, 5, 6). The line passes through the point a = (1, 2, 3) and is parallel to the vector from a to b, namely v = b − a = (4, 5, 6) − (1, 2, 3) = (3, 3, 3). The setup is indicated in Figure 2.6. Therefore one parametrization is: α(t) = (1, 2, 3) + t(3, 3, 3) = (1 + 3t, 2 + 3t, 3 + 3t).

Figure 2.6: Parametrizing the line through two given points a and b

Any nonzero scalar multiple of v = (3, 3, 3) is also parallel to the line and could be used as part of a parametrization as well. For instance, w = (1, 1, 1) is such a multiple, and β(t) = (1, 2, 3) + t(1, 1, 1) = (1 + t, 2 + t, 3 + t) is another parametrization of the same line.

2.2 Velocity, acceleration, speed, arclength

We now introduce some basic aspects of calculus for paths and curves. We discuss derivatives of paths in this section and integrals of real-valued functions over paths in the next.

Definition. Given a path α: I → Rn, the derivative of α is defined by: α′(t) = lim_{h→0} (1/h) (α(t + h) − α(t))
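The two parametrizations of Example 2.5 can be compared directly in code. The sketch below (the function names `alpha` and `beta` are ours) checks that β covers exactly the same points as α, just a third as fast: β(t) = α(t/3).

```python
def alpha(t):
    # alpha(t) = (1, 2, 3) + t(3, 3, 3), as in Example 2.5
    return (1 + 3 * t, 2 + 3 * t, 3 + 3 * t)

def beta(t):
    # beta(t) = (1, 2, 3) + t(1, 1, 1): same line, a third of the speed
    return (1 + t, 2 + t, 3 + t)

# beta(t) = alpha(t / 3): both trace the line through (1, 2, 3) and (4, 5, 6)
for t in [0.0, 1.5, 3.0, -3.0]:
    p, q = beta(t), alpha(t / 3)
    assert all(abs(pi - qi) < 1e-12 for pi, qi in zip(p, q))
```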
2.2. VELOCITY, ACCELERATION, SPEED, ARCLENGTH 29 change of distance ds dt as the speed. Unfortunately, these terms should be defined more carefully, and, to get everything in the right logical order, it seems best to define the speed first. To define what we expect ds dt to be, we make the intuitive approximation that the distance △s along the curve between two nearby points α(t) and α(t + △t) is approximately the straight line distance between them, as indicated in Figure 2.7. We assume that △t is positive. If △t is small: “distance along curve” ≈straight line distance, or △s ≈∥α(t + △t) −α(t)∥. Thus △s △t ≈∥α(t+△t)−α(t) △t ∥. In the limit as △t goes to 0, this approaches ∥α′(t)∥, the magnitude of Figure 2.7: △s ≈∥α(t + △t) −α(t)∥ the velocity. We take this as the definition of speed. Definition. If α: I →Rn is a differentiable path, then its speed, denoted by v(t), is defined to be: v(t) = ∥v(t)∥. Note that the speed v(t) is a scalar quantity, whereas the velocity v(t) is a vector. Now, to define the length of the path from t = a to t = b, we integrate the speed. Definition. The arclength from t = a to t = b is defined to be R b a v(t) dt = R b a ∥v(t)∥dt. The arclength function s(t) we considered above is then given by integrating the speed v(t), so, by the fundamental theorem of calculus, ds dt = v(t). Thus the definitions realize the intuitive relations with which we began. Example 2.7. For the helix parametrized by α(t) = (cos t, sin t, t), find: (a) the speed and (b) the arclength from t = 0 to t = 4π, i.e., two full twists around the helix. First, as just shown in Example 2.6, the velocity is v(t) = (−sin t, cos t, 1). Thus: (a) speed: v(t) = ∥v(t)∥= p sin2 t + cos2 t + 1 = √ 2. (b) arclength: R 4π 0 v(t) dt = R 4π 0 √ 2 dt = √ 2 t 4π 0 = 4π √ 2. Summary of definitions velocity v(t) = α′(t) acceleration a(t) = α′′(t) speed v(t) = ∥v(t)∥= ds dt arclength = R b a v(t) dt | Multivariable_Calculus_Shimamoto_Page_41_Chunk3258 |
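The chord-length approximation Δs ≈ ∥α(t + Δt) − α(t)∥ that motivated the definition also gives a practical way to estimate arclength numerically. A minimal Python sketch (the helper name `arclength` and the subdivision size are our choices; any sufficiently fine subdivision would do) recovers the value 4π√2 from Example 2.7(b):

```python
import math

def arclength(path, a, b, n=20000):
    # sum of chord lengths ||alpha(t_k) - alpha(t_{k-1})||, the Delta-s idea above
    total, prev = 0.0, path(a)
    for k in range(1, n + 1):
        cur = path(a + (b - a) * k / n)
        total += math.dist(prev, cur)
        prev = cur
    return total

helix = lambda t: (math.cos(t), math.sin(t), t)
L = arclength(helix, 0.0, 4 * math.pi)
assert abs(L - 4 * math.pi * math.sqrt(2)) < 1e-3   # Example 2.7(b): 4*pi*sqrt(2)
```

The chord sum slightly underestimates the true arclength of a smooth curve, but the error shrinks quadratically as the subdivision is refined.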
30 CHAPTER 2. PATHS AND CURVES 2.3 Integrals with respect to arclength Let C be a curve in Rn, and let f : C →R be a real-valued function defined on C. To formulate the definition of the integral of f over C, we adapt the usual approach of first-year calculus for integrating a function over an interval, namely: • Chop up the domain C into a sequence of short pieces of lengths △s1, △s2, . . . , △sk. • Choose a point x1, x2, . . . , xk in each piece. • Consider sums of the form P j f(xj) △sj. • Take the limit of the sums as △sj goes to 0. See Figure 2.8. Figure 2.8: Defining the integral with respect to arclength We can rewrite the sums in a way that leads to a normal first-year calculus integral over an interval by using a parametrization α: [a, b] →Rn of C. We assume that [a, b] can be subdivided into a sequence of consecutive subintervals of lengths △t1, △t2, . . . , △tk such that the subinterval of length △tj gets mapped to the curve segment of length △sj for each j = 1, 2, . . . , k. Then the sum that models the integral can be written as: X j f(xj) △sj = X j f(α(cj)) △sj △tj △tj, where cj is a value of the parameter for which α(cj) = xj. In the limit as △tj goes to 0, these sums approach the integral R b a f(α(t))ds dt dt. In this last expression, ds dt is the speed, also denoted by v(t). This intuition is formalized in the following definition. Definition. Let α: [a, b] →Rn be a differentiable parametrization of a curve C, and let v(t) denote its speed. If f : C →R is a continuous real-valued function, the integral of f with respect to arclength, denoted by R α f ds, is defined to be: Z α f ds = Z b a f(α(t)) v(t) dt. We also denote this integral by R C f ds, though this raises some issues that are addressed later when we have more practice with parametrizations and their effect on calculations. (See page 208.) Example 2.8. If f = 1 (constant function), then the definition of the integral says R C 1 ds = R b a v(t) dt. 
On the other hand, this is also precisely the definition of arclength. Hence: Z C 1 ds = arclength of C. | Multivariable_Calculus_Shimamoto_Page_42_Chunk3259 |
2.4. THE GEOMETRY OF CURVES: TANGENT AND NORMAL VECTORS 31 Example 2.9. Consider again the portion of the helix parametrized by α(t) = (cos t, sin t, t), 0 ≤t ≤4π. Find R C (x + y + z) ds. Here, f(x, y, z) = x + y + z, so f(α(t)) = f(cos t, sin t, t) = cos t + sin t + t. In other words, we read off the components of the parametrization to substitute x = cos t, y = sin t, and z = t into the formula for f. From Example 2.7, v(t) = ∥v(t)∥= √ 2. Thus: Z C (x + y + z) ds = Z 4π 0 (cos t + sin t + t) √ 2 dt = √ 2 (sin t −cos t + 1 2t2) 4π 0 = √ 2 | Multivariable_Calculus_Shimamoto_Page_43_Chunk3260 |
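The definition of the integral with respect to arclength turns directly into a quadrature rule. The sketch below (midpoint rule; the helper name `line_integral` is ours) approximates the integral of Example 2.9, whose closed form evaluates to 8√2 π²:

```python
import math

def line_integral(f, path, speed, a, b, n=20000):
    # midpoint-rule approximation of the defining integral of f(alpha(t)) v(t)
    h = (b - a) / n
    return sum(f(*path(a + (k + 0.5) * h)) * speed(a + (k + 0.5) * h) * h
               for k in range(n))

path = lambda t: (math.cos(t), math.sin(t), t)
speed = lambda t: math.sqrt(2.0)              # constant speed, Example 2.7(a)
f = lambda x, y, z: x + y + z

I = line_integral(f, path, speed, 0.0, 4 * math.pi)
assert abs(I - 8 * math.sqrt(2) * math.pi ** 2) < 1e-4
```

Taking f = 1 in the same routine reproduces the arclength, exactly as Example 2.8 predicts.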
Figure 2.10: The unit tangent vector T(t), translated to start at α(t)

The remaining two curve-related coordinate directions are orthogonal to this first one. In R3, there is a whole plane of orthogonal possibilities, but one of the possibilities turns out to be naturally distinguished. Identifying it requires some preliminary work.

Proposition 2.10 (Product rule for the dot product). If α, β : I → Rn are differentiable paths, then: (α · β)′ = α′ · β + α · β′.

Proof. Note that α · β is real-valued, so by definition of the derivative in first-year calculus: (α · β)′(t) = lim_{h→0} (1/h) (α(t + h) · β(t + h) − α(t) · β(t))
2.5. THE CROSS PRODUCT 33

Definition. If T′(t) ≠ 0, then: N(t) = (1/∥T′(t)∥) T′(t) is called the principal normal vector. See Figure 2.11.

Figure 2.11: The unit tangent and principal normal vectors, T(t) and N(t)

Example 2.13. For the helix parametrized by α(t) = (cos t, sin t, t), find the unit tangent T(t) and the principal normal N(t). As calculated in Examples 2.6 and 2.7, α′(t) = (−sin t, cos t, 1) and ∥α′(t)∥ = v(t) = √2. Thus: T(t) = (1/∥α′(t)∥) α′(t) = (1/√2)(−sin t, cos t, 1). Continuing with this, T′(t) = (1/√2)(−cos t, −sin t, 0), and ∥T′(t)∥ = (1/√2)√(cos² t + sin² t + 0²) = 1/√2. Hence: N(t) = (1/∥T′(t)∥) T′(t) = (1/(1/√2)) · (1/√2)(−cos t, −sin t, 0) = (−cos t, −sin t, 0). As a check, we verify that the unit tangent and principal normal are orthogonal, as predicted by the theory: T(t) · N(t) = (1/√2)(−sin t, cos t, 1) · (−cos t, −sin t, 0) = (1/√2)(sin t cos t − cos t sin t + 0) = 0.
34 CHAPTER 2. PATHS AND CURVES Our goal for the moment is this: given vectors v and w in R3, find a vector, written v × w, that is orthogonal to both v and w. We concentrate initially not so much on what v × w actually is but rather on requiring it to have a certain critical property. Namely, whatever v × w is, we insist that it satisfy the following. Key Property. For all x in R3, x · (v × w) = det x1 x2 x3 v1 v2 v3 w1 w2 w3 , that is: x · (v × w) = det x v w (P) where x v w denotes the 3 by 3 matrix whose rows are x, v, w. For instance, i × j must satisfy: x · (i × j) = det x i j = det x1 x2 x3 1 0 0 0 1 0 = x1 · (0 −0) −x2 · (0 −0) + x3 · (1 −0) = x3 for all x = (x1, x2, x3) in R3. But x · k = (x1, x2, x3) · (0, 0, 1) = x3 for all x as well, so k satisfies the key property (P) required of i × j. We would expect that i × j = k. Assuming for the moment that v × w can be defined in general to satisfy (P), our main goal follows immediately. Proposition 2.14. v × w is orthogonal to v and w (Figure 2.12). Proof. We take dot products and use the key property: v · (v × w) = det v v w = 0 (two equal rows ⇒det = 0). Thus v × w is orthogonal to v. For the same reason, w · (v × w) = det w v w = 0. Figure 2.12: The cross product v × w is orthogonal to v and w. It remains to define v × w in a way that satisfies (P). It turns out that there is no choice about how to do this. Set v × w = (c1, c2, c3). The components c1, c2, c3 are determined by (P). For instance: c1 = (1, 0, 0) · (c1, c2, c3) = i · (v × w) = det i v w = det 1 0 0 v1 v2 v3 w1 w2 w3 = det v2 v3 w2 w3 . | Multivariable_Calculus_Shimamoto_Page_46_Chunk3263 |
2.5. THE CROSS PRODUCT 35 Similarly, c2 and c3 are determined by looking at j · (v × w) and k · (v × w), respectively. Definition. Given vectors v and w in R3, their cross product is defined by: v × w = det v2 v3 w2 w3 , −det v1 v3 w1 w3 , det v1 v2 w1 w2 . One can check that this is the same as: v × w = det i j k v1 v2 v3 w1 w2 w3 . (×) Working backwards, it is not hard to verify that the formula given by (×) does indeed satisfy (P) (Exercise 5.8) so that we have actually accomplished something. The odd-looking determinant in (×) is somewhat illegitimate in that the entries in the top row are vectors, not scalars. It is meant to be calculated in the usual way by expansion along the top row. To understand how it works, perhaps it is best to do some examples. Example 2.15. First, we reproduce our result for i × j, putting i = (1, 0, 0) and j = (0, 1, 0) in the second and third rows of (×) and expanding along the first row: i × j = det i j k 1 0 0 0 1 0 = i · (0 −0) −j · (0 −0) + k · (1 −0) = k, as expected. Similarly, j × k = i and k × i = j. Example 2.16. If v = (1, 2, 3) and w = (4, 5, 6), then: v × w = det i j k 1 2 3 4 5 6 = i · (12 −15) −j · (6 −12) + k · (5 −8) = −3i + 6j −3k = (−3, 6, −3). As a check against possible computational error, one can go back and confirm that the result is orthogonal to both v and w. Example 2.17. With the same v and w as in the previous example, let u = (1, −2, 3). Then: u · (v × w) = (1, −2, 3) · (−3, 6, −3) = −3 −12 −9 = −24. (2.1) Suppose that we use the same three factors but in a permuted order, say w · (v × u). Taking advantage of the result of (2.1), what can you say about the value of this product without calculating the individual dot and cross products involved? We use (P) and properties of the determinant: w · (v × u) = det w v u (property (P)) = −det u v w (row switch) = −u · (v × w) (property (P)) = −(−24) (by (2.1)) = 24. | Multivariable_Calculus_Shimamoto_Page_47_Chunk3264 |
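The component formula translates directly into code. The sketch below (the helper names `cross` and `dot` are ours) reproduces Examples 2.16 and 2.17 and the orthogonality of Proposition 2.14:

```python
def cross(v, w):
    # the component formula for v x w displayed above
    return (v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0])

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

v, w, u = (1, 2, 3), (4, 5, 6), (1, -2, 3)
assert cross(v, w) == (-3, 6, -3)                 # Example 2.16
assert dot(v, cross(v, w)) == 0                   # Proposition 2.14
assert dot(w, cross(v, w)) == 0
assert dot(u, cross(v, w)) == -24                 # Example 2.17, equation (2.1)
assert dot(w, cross(v, u)) == 24                  # the permuted triple product
```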
The basic properties of the cross product are summarized below.

1. w × v = −v × w (Justification. Interchanging rows in (×) changes the sign of the determinant.)

2. v × v = 0 (Justification. Equal rows in (×) imply that the determinant is zero.)

3. (The length of the cross product.)

∥v × w∥ = ∥v∥∥w∥ sin θ, where θ is the angle between v and w (2.2)

(Justification. This appears as Exercise 5.9. It uses v · w = ∥v∥∥w∥ cos θ.)

The length formula (2.2) for v × w has some useful consequences:

(a) v × w = 0 if and only if v = 0, w = 0, or θ = 0 or π. This follows immediately from (2.2). In other words, v × w = 0 if and only if v or w is a scalar multiple of the other.

(b) Consider the parallelogram determined by v and w. We can find its area by thinking of v as base and dropping a perpendicular from w to get the height, as in the case of Figure 1.10 in Chapter 1. See Figure 2.13. Area of parallelogram = (base)(height) = ∥v∥(∥w∥ sin θ) = ∥v × w∥.

Figure 2.13: The parallelogram determined by v and w

In other words: ∥v × w∥ is the area of the parallelogram determined by v and w.

(c) We can now describe the geometric interpretation of 3 by 3 determinants as volume that was mentioned in Chapter 1. Let u, v, and w be points in R3 such that 0, u, v, and w are not coplanar. This determines a parallelepiped, which is the three-dimensional analogue of a parallelogram in the plane, as shown in Figure 2.14. It consists of all points of the form au + bv + cw, where a, b, c are scalars such that 0 ≤ a ≤ 1, 0 ≤ b ≤ 1, and 0 ≤ c ≤ 1. Then: det [u; v; w], the determinant of the 3 by 3 matrix with rows u, v, w, is the volume of the parallelepiped determined by u, v, and w.
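Consequences (b) and (c) are easy to check on the vectors from Examples 2.16 and 2.17. In the sketch below (helper names ours), note that for a left-handed triple the triple product u · (v × w) comes out negative, so the unsigned volume is its absolute value:

```python
import math

def cross(v, w):
    return (v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0])

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

v, w, u = (1, 2, 3), (4, 5, 6), (1, -2, 3)

# (b): area of the parallelogram spanned by v and w is ||v x w|| = ||(-3, 6, -3)||
area = math.sqrt(dot(cross(v, w), cross(v, w)))
assert abs(area - math.sqrt(54)) < 1e-12

# (c): signed volume of the parallelepiped on u, v, w is u . (v x w) = det[u; v; w];
# here it is -24, so the unsigned volume is 24
assert dot(u, cross(v, w)) == -24
assert abs(dot(u, cross(v, w))) == 24
```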
2.5. THE CROSS PRODUCT 37 Figure 2.14: The parallelepiped determined by u, v, w in R3 This follows from property (P), which we can use to reverse course and take what we’ve learned about dot and cross products to tell us about determinants. We leave the details for the exercises (namely, Exercise 5.13), though the relevant ingredients are labeled in Figure 2.14. In the figure, φ denotes the angle between u and v × w. 4. (The direction of the cross product.) We have seen that the cross product v×w is orthogonal to v and w, but this does not pin down its direction completely. There are two possible orthogonal directions, each opposite the other. We present a criterion that distinguishes the possibilities in terms of v and w. Definition. Let (v1, v2, . . . , vn) be a basis of Rn, where the vectors have been listed in a particular order. We say that (v1, v2, . . . , vn) is right-handed if det v1 v2 . . . vn > 0 and left-handed if det v1 v2 . . . vn < 0. (As usual, v1 v2 . . . vn is the n by n matrix whose rows are v1, v2, . . . , vn.) Example 2.18. Determine the orientation of the standard basis (e1, e2, . . . , en). det e1 e2 . . . en = det 1 0 . . . 0 0 1 . . . 0 ... ... ... ... 0 0 . . . 1 = 1. This is positive, so (e1, e2, . . . , en) is right-handed. By comparison, (e2, e1, . . . , en) is left-handed (switching two rows flips the sign of the determinant). The orientation of a basis depends on the order in which the basis vectors are presented. In R3, given v and w, what is the orientation of (v, w, v × w)? Note that: det v w v × w = + det v × w v w (two row switches) = (v × w) · (v × w) (property (P) with x = v × w) = ∥v × w∥2. (2.3) As shown earlier, v × w ̸= 0 provided that v and w are not scalar multiples of one another. If this is the case, then equation (2.3) implies that det v w v × w is positive. Thus: Proposition 2.19. If v and w are vectors in R3, neither a scalar multiple of the other, then the triple (v, w, v × w) is right-handed. 
In R3, the right-handed orientation gets its name from a rule of thumb known as the “right-hand rule.” This is based on a convention regarding the standard basis (i, j, k), namely, if you rotate the | Multivariable_Calculus_Shimamoto_Page_49_Chunk3266 |
38 CHAPTER 2. PATHS AND CURVES fingers of your right hand from i to j, your thumb points in the direction of k. This is not so much a fact as it is an agreement: whenever we draw R3, we agree to orient the positive x, y, and z-axes so that the right-hand rule for (i, j, k) is satisfied, as in Figure 2.15. As we continue with further material, figures in R3 become increasingly prominent, so it is probably good to state this convention explicitly. More generally, given a basis (v, w, z) of R3, the plane P spanned by v and w separates R3 into two half-spaces, or “sides.” Once we adopt the right-hand rule for (i, j, k), it turns out that (v, w, z) is right-handed in the sense of the definition if and only if, when you rotate the fingers of your right hand from v to w, your thumb lies on the same side of P as z. In particular, by Proposition 2.19, your thumb lies on the same side as v × w. In this way, your right thumb gives the direction of v × w. Figure 2.15: The circle of life: i × j = k, j × k = i, k × i = j Proving the connection between the sign of a determinant and the right-hand rule would be too great a digression. One approach uses the continuity of the determinant and the geometry of rotations to show that, if the rule works for (i, j, k), then it works in general. In any case, the right-hand rule is sometimes a convenient way to find cross products geometrically. For instance, in addition to i × j = k, it allows us to see that j × k = i, not −i, and that k × i = j, as pictured in Figure 2.15. 2.6 The geometry of space curves: Frenet vectors We take up the following question: How can one tell when two curves in R3 are congruent, that is, either curve can be translated, rotated, and/or reflected so that it fits exactly on top of the other? As mentioned earlier, the main idea is to construct a basis of orthogonal unit vectors that moves along the curve and then to study how the basis vectors change during the motion. 
So let C be a curve in R3, parametrized by a differentiable path α: I →R3, where I is an interval in R. In Section 2.4, we defined two of the vectors in our moving basis: • the unit tangent vector T(t) = 1 ∥α′(t)∥α′(t) = 1 v(t)v(t) and • the principal normal vector N(t) = 1 ∥T′(t)∥T′(t). These definitions make sense only if α′(t) ̸= 0 and T′(t) ̸= 0, so we assume that this is the case from now on. Both T(t) and N(t) are constructed to be unit vectors, and we showed before that T(t) · N(t) = 0. At this point, the third basis vector is easily defined. Definition. The cross product B(t) = T(t) × N(t) is called the binormal vector. | Multivariable_Calculus_Shimamoto_Page_50_Chunk3267 |
As a cross product, B(t) is orthogonal to T(t) and N(t). Moreover, its length is ∥B(t)∥ = ∥T(t)∥∥N(t)∥ sin θ = 1 · 1 · sin(π/2) = 1. Thus (T(t), N(t), B(t)) is a collection of orthogonal unit vectors. The vectors are called the Frenet vectors of α and are illustrated in Figure 2.16.

Figure 2.16: The Frenet vectors T(t), N(t), B(t)

Example 2.20 (The Frenet vectors of a helix). We shall use the helix with parametrization α(t) = (a cos t, a sin t, bt) as a running example to illustrate new concepts as they arise. Here, a and b are constant, and a > 0. Recall that the helix is called right-handed if b > 0 and left-handed if b < 0. If b = 0, the curve collapses to a circle of radius a in the xy-plane. To find the Frenet vectors T(t), N(t), and B(t) of the helix, we simply and carefully follow the definitions.

Unit tangent: α′(t) = (−a sin t, a cos t, b), so ∥α′(t)∥ = √(a² sin² t + a² cos² t + b²) = √(a² + b²). Thus: T(t) = (1/∥α′(t)∥) α′(t) = (1/√(a² + b²)) (−a sin t, a cos t, b).

Principal normal: From the last calculation, T′(t) = (1/√(a² + b²)) (−a cos t, −a sin t, 0), so ∥T′(t)∥ = (1/√(a² + b²)) √(a² cos² t + a² sin² t + 0²) = a/√(a² + b²). Hence: N(t) = (1/∥T′(t)∥) T′(t) = (√(a² + b²)/a) · (1/√(a² + b²)) (−a cos t, −a sin t, 0) = (1/a)(−a cos t, −a sin t, 0) = (−cos t, −sin t, 0).

Binormal: Lastly: B(t) = T(t) × N(t) = det [i, j, k; −a sin t/√(a² + b²), a cos t/√(a² + b²), b/√(a² + b²); −cos t, −sin t, 0] = i · (b sin t/√(a² + b²)) − j · (b cos t/√(a² + b²)) + k · ((a sin² t + a cos² t)/√(a² + b²)) = (1/√(a² + b²)) (b sin t, −b cos t, a).
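The closed forms for the helix's Frenet vectors, including B(t) = (1/√(a² + b²))(b sin t, −b cos t, a), can be sanity-checked numerically. In the Python sketch below (the function names and the sample constants a = 2, b = 1 are our choices), the three vectors come out orthonormal at every sampled t, as the text predicts:

```python
import math

a, b = 2.0, 1.0                     # sample constants; any a > 0 works
s = math.sqrt(a * a + b * b)        # ||alpha'(t)||, the speed

def T(t):   # unit tangent of the helix, from Example 2.20
    return (-a * math.sin(t) / s, a * math.cos(t) / s, b / s)

def N(t):   # principal normal
    return (-math.cos(t), -math.sin(t), 0.0)

def B(t):   # binormal T x N
    return (b * math.sin(t) / s, -b * math.cos(t) / s, a / s)

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

for t in [0.0, 0.7, 2.0, 5.5]:
    for V in (T(t), N(t), B(t)):
        assert abs(dot(V, V) - 1.0) < 1e-12       # unit length
    assert abs(dot(T(t), N(t))) < 1e-12           # mutually orthogonal
    assert abs(dot(T(t), B(t))) < 1e-12
    assert abs(dot(N(t), B(t))) < 1e-12
```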
2.7 Curvature and torsion

We use the Frenet vectors (T, N, B) to make two geometric measurements:

• curvature, the rate of turning, and
• torsion, the rate of wobbling.

First, curvature measures how fast the tangent direction is changing, which is represented by ∥T′(t)∥, the magnitude of the rate of change of the unit tangent vector. A given curve can be traced out in many different ways, however, so, if we want curvature to reflect purely the geometry of the curve, we must take into account the effect of the parametrization. This is an important point, and it is an issue that we address more carefully later when we discuss the effect of parametrization on vector integrals over curves. (See Exercise 1.13 in Chapter 9.) For the time being, let's agree informally that we can normalize for the effect of the parametrization by dividing by the speed. For instance, if you traverse the same curve a second time, twice as fast as before, the unit tangent will change twice as fast, too. Dividing ∥T′(t)∥ by the speed maintains a constant ratio.

Definition. The curvature is defined by: κ(t) = ∥T′(t)∥/v(t), where v(t) is the speed.²

Note that, by definition, curvature is a nonnegative quantity.

Example 2.21 (The curvature of a helix). For the helix α(t) = (a cos t, a sin t, bt), the ingredients that go into the definition of curvature were obtained as part of the work of Example 2.20, namely, ∥T′(t)∥ = a/√(a² + b²) and v(t) = ∥α′(t)∥ = √(a² + b²). Thus:

Curvature of a helix: κ(t) = (a/√(a² + b²)) / √(a² + b²) = a/(a² + b²).

In particular, for a circle of radius a, in which case b = 0, the curvature is κ = a/(a² + 0²) = 1/a.

Next, to measure wobbling, we look at the binormal B, which is orthogonal to the plane spanned by T and N. This plane is the "plane of motion" in the sense that it contains the velocity and acceleration vectors (see Exercise 9.4), so B′ can be thought of as representing the rate of wobble of that plane.
We prove in Lemma 2.24 below that B′(t) is always a scalar multiple of N(t), that is: B′(t) = c(t)N(t), where c(t) is a scalar that depends on t. For the moment, let's accept this as true. Then c(t) represents the rate of wobble. We again normalize by dividing by the speed. Moreover, the convention is to throw in a minus sign.

Definition. Given that B′(t) = c(t)N(t) as above, then the torsion is defined by: τ(t) = −c(t)/v(t), where v(t) is the speed.

²What Exercise 1.13 in Chapter 9 shows is that the curvature is essentially a function of the points on the curve C in the sense that, if x ∈ C and if there is only one value of the parameter t such that α(t) = x, then the quantity ∥T′(t)∥/v(t) is the same for all such parametrizations of C. Thus we could denote the curvature by κ(x) and calculate it without worrying about which parametrization we use.
2.7. CURVATURE AND TORSION 41 Example 2.22 (The torsion of a helix). To find the torsion of the helix α(t) = (a cos t, a sin t, bt), we again piggyback on the calculations of Example 2.20. For instance, using the result that B(t) = 1 √ a2+b2 (b sin t, −b cos t, a), we obtain B′(t) = 1 √ a2+b2 (b cos t, b sin t, 0). To find c(t), we want to manipulate this to be a scalar multiple of N(t) = (−cos t, −sin t, 0): B′(t) = 1 √ a2 + b2 (b cos t, b sin t, 0) = − b √ a2 + b2 (−cos t, −sin t, 0) = − b √ a2 + b2 N(t). Therefore c(t) = − b √ a2+b2 , so: Torsion of a helix: τ(t) = −c(t) v(t) = − − b √ a2+b2 √ a2 + b2 = b a2 + b2 . Note that the torsion is positive if the helix is right-handed (b > 0) and negative if left-handed (b < 0). In the case of a circle (b = 0), the torsion is zero. This reflects the fact that planar curves don’t wobble in space at all. To complete the discussion of torsion, it remains to justify the claim that B′ is a scalar multiple of N. The argument uses a product rule for the cross product. Proposition 2.23 (Product rules). Let I be an interval of real numbers, and let α, β : I →Rn be differentiable paths and f : I →R a differentiable real-valued function. Then: 1. (Dot product) (α · β)′ = α′ · β + α · β′. 2. (Cross product) For paths α, β in R3, (α × β)′ = α′ × β + α × β′. 3. (Scalar multiplication) (fα)′ = f′α + fα′. Proof. The first of these was proven earlier (Proposition 2.10), and the others are the same, swap- ping in the appropriate type of product. We leave them as exercises (Exercise 7.3). Lemma 2.24. The derivative B′ of the binormal is always a scalar multiple of the principal normal N. Proof. We show that B′ is orthogonal (a) to B and (b) to T. The only vectors orthogonal to both are precisely the scalar multiples of N, so the lemma follows. (a) B is a unit vector, so ∥B∥is constant. Hence B and its derivative B′ are always orthogonal (Corollary 2.12). 
(b) By definition, B = T × N, so using the product rule: B′ = T′ × N + T × N′ = (∥T′∥N) × N + T × N′ (definition of N) = ∥T′∥0 + T × N′ (v × v = 0 always) = T × N′. As a cross product, B′ is orthogonal to T (and to N′ for that matter). | Multivariable_Calculus_Shimamoto_Page_53_Chunk3270 |
2.8 The Frenet-Serret formulas

We saw in Examples 2.21 and 2.22 that, for the helix α(t) = (a cos t, a sin t, bt), a > 0, the curvature and torsion are given by: κ = a/(a² + b²) and τ = b/(a² + b²), respectively. For example, if α(t) = (2 cos t, 2 sin t, t), i.e., a = 2 and b = 1, then κ = 2/5 and τ = 1/5. In particular, helices have constant curvature and torsion. We are about to see that the geometry of a curve in R3 is determined by its curvature and torsion. For instance, it turns out that any curve with constant curvature and torsion is necessarily congruent to a helix. The proof is based on the following formulas, which describe precisely how the Frenet vectors change within the coordinate systems that they determine.

Proposition 2.25 (Frenet-Serret formulas). Let α: I → R3 be a differentiable path in R3 with speed v, Frenet vectors (T, N, B), curvature κ, and torsion τ. Assume that v ≠ 0 and T′ ≠ 0 so that the Frenet vectors are defined for all t. Then:

(1) T′ = κvN,
(2) N′ = −κvT + τvB,
(3) B′ = −τvN.

Proof. The first and third of these are basically the definitions of curvature and torsion, so we prove them first. To prove (1), we know that, by definition, N = (1/∥T′∥) T′ and κ = ∥T′∥/v. Thus: T′ = ∥T′∥N = κvN. For (3), we know that B′ = cN and τ = −c/v for some scalar-valued function c. Hence c = −τv, and B′ = −τvN. Lastly, for (2), the right-hand rule gives N = B × T (see Figure 2.16). Therefore, according to the product rule and the two Frenet-Serret formulas that were just proven: N′ = B′ × T + B × T′ = −τvN × T + B × (κvN). But, again by the right-hand rule, N × T = −B and B × N = −T. Substituting into the last expression gives N′ = −κvT + τvB, completing the proof.

2.9 The classification of space curves

Our discussion of curves in R3 culminates with a result on the connection between curvature and torsion—that is, turning and wobbling—and the intrinsic geometry of a curve.
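The three Frenet-Serret formulas can be verified numerically for the helix of this section (a = 2, b = 1, so κ = 2/5, τ = 1/5, and v = √5). The sketch below (helper names ours) compares finite-difference derivatives of T, N, and B against the stated right-hand sides:

```python
import math

a, b = 2.0, 1.0
v = math.sqrt(a * a + b * b)             # speed, sqrt(5)
kappa = a / (a * a + b * b)              # 2/5
tau = b / (a * a + b * b)                # 1/5

T = lambda t: (-a * math.sin(t) / v, a * math.cos(t) / v, b / v)
N = lambda t: (-math.cos(t), -math.sin(t), 0.0)
B = lambda t: (b * math.sin(t) / v, -b * math.cos(t) / v, a / v)

def deriv(F, t, h=1e-6):
    # componentwise central finite difference
    return tuple((p - m) / (2 * h) for p, m in zip(F(t + h), F(t - h)))

t = 1.3
for i in range(3):
    assert abs(deriv(T, t)[i] - kappa * v * N(t)[i]) < 1e-6                          # (1)
    assert abs(deriv(N, t)[i] - (-kappa * v * T(t)[i] + tau * v * B(t)[i])) < 1e-6   # (2)
    assert abs(deriv(B, t)[i] - (-tau * v * N(t)[i])) < 1e-6                         # (3)
```

The same check passes at any other value of t, reflecting the fact that these are identities, not coincidences at a single point.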
On the one hand, moving a curve rigidly in space does not change geometric features of the curve such as its length. It is plausible that this extends to not changing the curvature and torsion as well, and we begin by elaborating on this point. For example, suppose that a path α(t) is translated by a constant amount c to obtain a new path β(t) = α(t) + c. Then β′(t) = α′(t), so α and β have the same velocity and therefore the same speed. It follows from the definition that they also have the same unit tangent vector T(t). Following along with further definitions, they then have the same principal normal N(t), hence the same binormal B(t), and finally the same curvature κ(t) and torsion τ(t). In other words, speed, curvature, and torsion are preserved under translations. | Multivariable_Calculus_Shimamoto_Page_54_Chunk3271 |
2.9. THE CLASSIFICATION OF SPACE CURVES 43 The same conclusion applies to rotations, though we don’t really have the tools to give a rigorous proof at this point. So we try an intuitive explanation. Suppose that α is rotated in R3 to obtain a new path β. The velocities α′(t) and β′(t) are related by the same rotation, and then so are the respective Frenet vectors (T(t), N(t), B(t)). So the Frenet vectors of the two paths are not the same, but, since they are rotated versions of one another, the corresponding vectors change at the same rates. In particular, the speed, curvature, and torsion are the same. Our main theorem is a converse to these considerations. That is, we show that, if two paths α and β have the same speed, curvature, and torsion, then they differ by a translation and/or rotation in the sense that there is a composition of translations and rotations F : R3 →R3 that transforms α into β, i.e., F(α(t)) = β(t) for all t. Hence measuring the three scalar quantities v, κ, τ suffices to determine the geometry of a space curve. It is striking and satisfying that such a complete answer is possible. In the spirit of the theorems in plane geometry about congruent triangles (SAS, ASA, SSS, etc.), we call the theorem the “vκτ theorem.” Theorem 2.26 (vκτ theorem). Let I be an interval of real numbers, and let α, β : I →R3 be differentiable paths in R3 such that v and T′ are nonzero so that the Frenet vectors are defined for all t. Then either path can be translated and rotated so that it fits exactly on top of the other if and only if they have: • the same speed v(t), • the same curvature κ(t), and • the same torsion τ(t). In particular, two paths with the same speed, curvature, and torsion are congruent to each other. Proof. That translations and rotations preserve speed, curvature, and torsion was discussed above. For the converse, assume that α and β have the same speed, curvature, and torsion. We proceed in three stages to move α on top of β. Step 1. 
A translation. Choose now and for the rest of the proof a point a in the interval I. We first translate α so that α(a) moves to β(a). In other words, we shift α by the constant vector d = β(a) − α(a) to get a new path γ(t) = α(t) + d = α(t) + β(a) − α(a). Then: γ(a) = α(a) + β(a) − α(a) = β(a). Moreover, as a translate, γ has the same speed, curvature, and torsion as α and hence as β as well.

Step 2. Two rotations. Let x0 denote the common point γ(a) = β(a). The unit tangents to γ and β at x0 may not be equal, but we can rotate γ about x0 until they are. Then, rotate γ again using this tangent direction as axis of rotation until the principal normals at x0 coincide, too. The binormals are then automatically the same, since they are the cross products of T and N. Moreover, neither rotation changes the speed, curvature, or torsion. Call this final path α̃. In this way, we have translated and rotated α into a path α̃ satisfying the following three conditions:

• α̃(a) = β(a) = x0,
• at the point x0, α̃ and β have the same Frenet vectors, and
• α̃ and β have the same speed, curvature, and torsion for all t.
Step 3. A calculation. We show that α̃ = β. Since α̃ has been obtained from α by translation and rotation, the theorem follows. The argument uses the Frenet-Serret formulas, which we repeat for convenience: T′ = κvN, N′ = −κvT + τvB, B′ = −τvN. We denote the Frenet vectors of α̃ by (T̃, Ñ, B̃) and those of β by (T, N, B). Using similar notation for the speed, curvature, and torsion, we have ṽ = v, κ̃ = κ, and τ̃ = τ by construction.

Let θ be the angle between T̃(t) and T(t). Then T̃(t) · T(t) = ∥T̃(t)∥∥T(t)∥ cos θ = 1 · 1 · cos θ = cos θ. Therefore T̃(t) · T(t) is less than or equal to 1 always and is equal to 1 if and only if θ = 0, that is, if and only if T̃(t) = T(t). The same comments apply to Ñ(t) · N(t) and B̃(t) · B(t). Thus, if we define f : I → R by f(t) = T̃(t) · T(t) + Ñ(t) · N(t) + B̃(t) · B(t), then the previous remarks imply: f(t) ≤ 3 for all t, with f(t) = 3 if and only if T̃(t) = T(t), Ñ(t) = N(t), and B̃(t) = B(t). For instance, f(a) = 3 by Step 2.

Now, by the product rule, f′ = (T̃′ · T + T̃ · T′) + (Ñ′ · N + Ñ · N′) + (B̃′ · B + B̃ · B′). We use the Frenet-Serret formulas to substitute for every derivative in sight, keeping in mind that ṽ = v, κ̃ = κ, and τ̃ = τ:

f′ = κvÑ · T + T̃ · κvN + (−κvT̃ + τvB̃) · N + Ñ · (−κvT + τvB) + (−τvÑ) · B + B̃ · (−τvN)
= (κvÑ · T + κvT̃ · N) + (−κvT̃ · N + τvB̃ · N − κvÑ · T + τvÑ · B) − (τvÑ · B + τvB̃ · N).

Miraculously, the terms cancel in pairs so that f′ = 0 for all t. Hence f is a constant function. Since f(a) = 3, it follows that f(t) = 3 for all t. As noted above, this in turn implies that T̃(t) = T(t) for all t (and Ñ(t) = N(t), B̃(t) = B(t) as well, but we don't need them). By definition of the unit tangent, this gives (1/v(t)) α̃′(t) = (1/v(t)) β′(t), so α̃′(t) = β′(t) for all t. Consequently α̃ and β differ by a constant vector: α̃(t) = β(t) + c. Since α̃(a) = x0 = β(a), the constant is c = 0. Thus α̃(t) = β(t) for all t.
We have succeeded in translating and rotating α to reach a path α̃ that lies on top of β. Note that the theorem does not say what happens when a path is reflected in a plane, that is, how the speed, curvature, and torsion of a path are related to those of its mirror image. For a clue as to what happens in this case, see Exercise 9.14.
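The classification theorem can also be sanity-checked numerically. The sketch below (helper names are ours, not the text's) applies a rigid motion — a rotation about the z-axis followed by a translation — to a helix and confirms that distances between corresponding points and the speed are unchanged, as the proof requires.

```python
import math

def helix(t):
    return (math.cos(t), math.sin(t), t)

def moved(p):
    # a rigid motion: rotate 90 degrees about the z-axis, then translate by (5, -2, 7)
    x, y, z = p
    return (-y + 5.0, x - 2.0, z + 7.0)

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def speed(path, t, h=1e-6):
    # chord-length approximation of the speed at t
    return dist(path(t + h), path(t - h)) / (2 * h)

# distances between corresponding points and speeds are unchanged by the motion
d1 = dist(helix(0.3), helix(1.1))
d2 = dist(moved(helix(0.3)), moved(helix(1.1)))
s1 = speed(helix, 0.5)
s2 = speed(lambda t: moved(helix(t)), 0.5)
```

For this helix the speed is the constant √2, and both quantities match before and after the motion to within the finite-difference error.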
2.10. EXERCISES FOR CHAPTER 2 45 2.10 Exercises for Chapter 2 Section 1 Parametrizations In Exercises 1.1–1.8, sketch the curve parametrized by the given path. Indicate on the curve the direction in which it is being traversed. 1.1. α: [0, 2] → R2, α(t) = (t, t2) 1.2. α: [0, 6π] → R2, α(t) = (t cos t, t sin t) 1.3. α: R → R2, α(t) = (et, e−t) (Hint: How are the x and y-coordinates related?) 1.4. α: R → R3, α(t) = (cos t, sin t, t) 1.5. α: R → R3, α(t) = (cos t, sin t, −t) 1.6. α: R → R3, α(t) = (sin t, cos t, t) 1.7. α: R → R3, α(t) = (sin t, cos t, −t) 1.8. α: [0, 2π] → R3, α(t) = (cos t, sin t, cos 2t) In Exercises 1.9–1.12, find a parametrization of the given curve. 1.9. The portion of the parabola y = x2 in R2 with −1 ≤ x ≤ 1 1.10. The portion of the curve xy = 1 in R2 with 1 ≤ x ≤ 2 1.11. The circle x2 + y2 = a2 in R2 traversed once in the clockwise direction 1.12. A right-handed helix in R3 that: • lies on the circular cylinder of radius 2 whose axis is the z-axis, • makes one complete twist as it rises from z = 0 to z = 1. 1.13. Find a parametrization of the line in R3 that passes through the point a = (1, 2, 3) and is parallel to v = (4, 5, 6). 1.14. Let ℓ be the line in R3 parametrized by α(t) = (1 + 2t, −3t, 4 + 5t). Find a point that lies on ℓ and a vector that is parallel to ℓ. 1.15. Find a parametrization of the line in R3 that passes through the points a = (1, 0, 0) and b = (0, 0, 1). 1.16. Find a parametrization of the line in R3 that passes through the points a = (1, −2, 3) and b = (−4, 5, −6). 1.17. Let ℓ be the line in Rn parametrized by α(t) = a + tv, and let p be a point in Rn. Suppose that q is the foot of the perpendicular dropped from p to ℓ (Figure 2.17). (a) Since q lies on ℓ, there is a value of t such that q = a + tv. Find a formula for t in terms of a, v, and p. (Hint: ℓ lies in a direction perpendicular to p − q.) (b) Find a formula for the point q.
Figure 2.17: The foot of the perpendicular from p to ℓ 1.18. Let ℓ be the line in R3 parametrized by α(t) = (1 + t, 1 + 2t, 1 + 3t), and let p = (0, 0, 4). Find the foot of the perpendicular dropped from p to ℓ. 1.19. Let ℓ and m be lines in R3 parametrized by: α(t) = a + tv and β(t) = b + tw, respectively, where a = (a1, a2, a3), v = (v1, v2, v3), b = (b1, b2, b3), and w = (w1, w2, w3). (a) Explain why the problem of determining whether ℓ and m intersect in R3 amounts to solving a system of three equations in two unknowns t1 and t2. (b) Determine whether the lines parametrized by α(t) = (−1, 0, 1) + t(1, −3, 2) and β(t) = (1, 5, 4) + t(−1, 1, 6) intersect by writing down the corresponding system of equations, as described in part (a), and solving the system. (c) Repeat for the lines parametrized by α(t) = (2, 1, 0) + t(0, 0, 1) and β(t) = (0, 1, 3) + t(1, 0, 0). Section 2 Velocity, acceleration, speed, arclength 2.1. Let α(t) = (t3, 3t2, 6t). (a) Find the velocity, speed, and acceleration. (b) Find the arclength from t = 0 to t = 4. 2.2. Let α(t) = ((1/2)t2, 2t, (4/3)t3/2). (a) Find the velocity, speed, and acceleration. (b) Find the arclength from t = 2 to t = 4. 2.3. Let α(t) = (cos 2t, sin 2t, 4t3/2). (a) Find the velocity, speed, and acceleration. (b) Find the arclength from t = 1 to t = 2. 2.4. Let a and b be real numbers, a > 0, and let α(t) = (a cos t, a sin t, bt). (a) Find the velocity, speed, and acceleration. (b) Find the arclength from t = 0 to t = 2π. 2.5. Find a parametrization of the line through the points a = (1, 1, 1) and b = (2, 3, 4) that traverses the line with constant speed 1.
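Arclength computations like those in Section 2 can be spot-checked numerically by integrating the speed. A minimal sketch (helper names are ours, not the text's): for the helix α(t) = (cos t, sin t, t) the speed is the constant √2, so the arclength from 0 to 2π should be 2π√2.

```python
import math

def speed(t):
    # α(t) = (cos t, sin t, t) has velocity (−sin t, cos t, 1)
    v = (-math.sin(t), math.cos(t), 1.0)
    return math.sqrt(sum(c * c for c in v))

def arclength(a, b, n=10000):
    # composite trapezoid rule for the integral of speed(t) from a to b
    h = (b - a) / n
    total = 0.5 * (speed(a) + speed(b))
    total += sum(speed(a + i * h) for i in range(1, n))
    return total * h

L = arclength(0.0, 2 * math.pi)
```

The same routine, with `speed` replaced by the speed of the path in question, checks the hand computations in Exercises 2.1–2.4.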
Section 3 Integrals with respect to arclength In Exercises 3.1–3.2, evaluate the integral with respect to arclength for the curve C with the given parametrization α. 3.1. ∫C xyz ds, where α(t) = (t3, 3t2, 6t), 0 ≤ t ≤ 1 3.2. ∫C z ds, where α(t) = (cos t + t sin t, sin t − t cos t, t), 0 ≤ t ≤ 4 3.3. Let C be the line segment in R2 from (1, 0) to (0, 1). With as little calculation as possible, find the values of the following integrals. (a) ∫C 5 ds (b) ∫C (x + y) ds 3.4. Let C be the line segment in R2 parametrized by α(t) = (t, t), 0 ≤ t ≤ 1. (a) Evaluate ∫C (x + y) ds. (b) Find a triangle in R3 whose base is the segment C (in the xy-plane) and whose area is represented by the integral in part (a). (Hint: Area under a curve.) Section 4 The geometry of curves: tangent and normal vectors Note: Further exercises on this material appear in Section 9. In Exercises 4.1–4.2, find the unit tangent T(t) and the principal normal N(t) for the given path α. Then, verify that T(t) and N(t) are orthogonal. 4.1. α(t) = (sin 4t, cos 4t, 3t) 4.2. α(t) = (t, t2, 0) 4.3. Let α be a differentiable path in Rn. (a) If the speed is constant, show that the velocity and acceleration vectors are always orthogonal. (Hint: Consider ∥v(t)∥2.) (b) Conversely, if the velocity and acceleration are always orthogonal, show that the speed is constant. 4.4. Let C be a curve in Rn traced out by a differentiable path α: (a, b) → Rn defined on an open interval (a, b). Assume that there is a point α(t0) on C that is closer to the origin than any other point of C. Prove that the tangent vector α′(t0) is orthogonal to α(t0). Draw a sketch that illustrates the conclusion. Section 5 The cross product In Exercises 5.1–5.4, (a) find the cross product v × w and (b) verify that it is orthogonal to v and w. 5.1. v = (1, 1, 1), w = (1, 3, 5) 5.2. v = (1, 2, 3), w = (2, 4, 6) 5.3. v = (1, −2, 4), w = (−2, 1, 3)
5.4. v = (2, 0, −1), w = (0, −2, 1) 5.5. Find a nonzero vector in R3 that points in a direction perpendicular to the plane that contains the origin and the points p = (1, 1, 1) and q = (2, 1, −3). 5.6. Find a nonzero vector in R3 that points in a direction perpendicular to the plane that contains the points p = (1, 0, 0), q = (0, 1, 0), and r = (0, 0, 1). 5.7. Let v = (−1, 1, −2), w = (4, −1, −1), and u = (−3, 2, 1). (a) Find v × w. (b) What is the area of the parallelogram determined by v and w? (c) Find u · (v × w). Then, use your answer to find v · (u × w) and (u × v) · w. (d) What is the volume of the parallelepiped determined by u, v, and w? 5.8. Verify that the cross product v × w as defined in equation (×) satisfies the key property that x · (v × w) = det [x v w] for all x in R3, where det [x v w] denotes the determinant of the 3 by 3 matrix whose rows are x, v, w. 5.9. Let v = (v1, v2, v3) and w = (w1, w2, w3) be vectors in R3. (a) Use the definitions of the dot and cross products in terms of coordinates to prove that: ∥v × w∥2 = ∥v∥2∥w∥2 − (v · w)2. (b) Use part (a) to give a proof that ∥v × w∥ = ∥v∥∥w∥ sin θ, where θ is the angle between v and w. 5.10. Do there exist nonzero vectors v and w in R3 such that v · w = 0 and v × w = 0? Explain. 5.11. In R3, let ℓ be the line parametrized by α(t) = a + tv, and let p be a point. Figure 2.18: The distance from a point to a line (a) Show that the distance from p to ℓ is given by the formula: d = ∥(p − a) × v∥/∥v∥. See Figure 2.18. (b) Find the distance from the point p = (1, 1, −1) to the line parametrized by α(t) = (2, 1, 2) + t(1, −2, 2).
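Hand computations for Exercises 5.1–5.4 can be checked with a few lines of code. A sketch (function names are ours), assuming the componentwise definition of the cross product:

```python
def cross(v, w):
    # the cross product in R3, componentwise
    return (v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0])

def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

v, w = (1, 1, 1), (1, 3, 5)
n = cross(v, w)  # gives (2, -4, 2) for this pair
# the defining property: n is orthogonal to both v and w
```

The orthogonality check `dot(n, v) == dot(n, w) == 0` confirms the computation, and for parallel vectors such as those in Exercise 5.2 the cross product comes out as the zero vector.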
5.12. True or false: u × (v × w) = (u × v) × w for all u, v, w in R3. Either give a proof or find a counterexample. 5.13. Let u, v, and w be points in R3 such that 0, u, v, and w are not coplanar. Prove that the volume of the parallelepiped determined by u, v, and w is equal to the absolute value of det [u v w], the determinant of the 3 by 3 matrix whose rows are u, v, w. (Hint: See Figure 2.14, where φ is the angle between u and v × w. Keep in mind that u and v × w may lie on opposite sides of the plane spanned by v and w.) 5.14. Let (v, w, z) be a basis of R3, and let θ be the angle between v × w and z, where 0 ≤ θ ≤ π, as usual. Use the definition of handedness to show that θ < π/2 if (v, w, z) is right-handed and θ > π/2 if left-handed. Draw pictures to illustrate both situations. (Hint: Consider z · (v × w).) 5.15. Strictly speaking, the cross product is defined only in R3, but there is an analogous operation in R4 if three factors are allowed. That is, given vectors u, v, w in R4, the product u × v × w is a vector in R4 that satisfies the property: x · (u × v × w) = det [x u v w] for all x in R4, where the determinant is of the 4 by 4 matrix whose rows are x, u, v, w. (a) Assuming for the moment that such a product u × v × w exists, show that it is orthogonal to each of u, v, and w. (b) If u × v × w ≠ 0, is the quadruple (u, v, w, u × v × w) right-handed or left-handed? (c) Describe how you might find a formula for u × v × w, and use it to compute the product if u = (1, −1, 1, −1), v = (1, 1, −1, −1), and w = (−1, 1, 1, −1). 5.16. In general, the “cross product” in Rn is a function of n − 1 factors. In R2, this means that there is only one factor. (a) Given a vector v = (a, b) in R2, find a reasonable formula for “v×”, and explain how you came up with it. (b) Draw a sketch that illustrates v and your candidate for v×. Section 6 The geometry of space curves: Frenet vectors Note: Further exercises on this material appear in Section 9. In Exercises 6.1–6.2, find the binormal B(t) for the given path α.
These problems are continuations of Exercises 4.1–4.2. 6.1. α(t) = (sin 4t, cos 4t, 3t) 6.2. α(t) = (t, t2, 0) 6.3. Consider the line in R3 parametrized by α(t) = (1 + t, 1 + 2t, 1 + 3t). What happens when you try to find its Frenet vectors (T(t), N(t), B(t))? Section 7 Curvature and torsion Note: Further exercises on this material appear in Section 9. 7.1. Find a parametrization of a curve that has constant curvature 1/2 and constant torsion 0. 7.2. Find a parametrization of a curve that has constant curvature 1/2 and constant torsion 1/4. 7.3. Prove the product rule for cross products: If α and β are differentiable paths in R3, then: (α × β)′ = α′ × β + α × β′.
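Exercise 7.3 calls for a proof, but the product rule can first be sanity-checked numerically by comparing a finite-difference derivative of α × β against α′ × β + α × β′. A sketch with our own helper names (a check, not a proof):

```python
import math

def cross(v, w):
    return (v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0])

def alpha(t):
    return (math.cos(t), math.sin(t), t)

def beta(t):
    return (t, t * t, 1.0)

def deriv(path, t, h=1e-5):
    # central-difference derivative of a path in R3
    p, m = path(t + h), path(t - h)
    return tuple((p[i] - m[i]) / (2 * h) for i in range(3))

t = 0.9
lhs = deriv(lambda s: cross(alpha(s), beta(s)), t)   # (α × β)′
rhs = tuple(x + y for x, y in zip(cross(deriv(alpha, t), beta(t)),
                                  cross(alpha(t), deriv(beta, t))))  # α′ × β + α × β′
```

The two sides agree to within the finite-difference error, as the product rule predicts.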
Section 9 The classification of space curves 9.1. Consider the curve parametrized by: α(t) = (cos t, −sin t, t). (a) Find the speed v(t). (b) Find the unit tangent T(t). (c) Find the principal normal N(t). (d) Find the binormal B(t). (e) Find the curvature κ(t). (f) Find the torsion τ(t). (g) The curve traced out by α is congruent to a curve with a parametrization of the form β(t) = (a cos t, a sin t, bt), where a and b are constant and a > 0. Find the values of a and b. Then, sketch the curves traced out by α and β, and describe a motion of R3 that takes the path α and puts it right on top of β. 9.2. Let α be the path defined by: α(t) = (6 cos t, 8 cos t, 10 sin t). (a) Find the speed v(t). (b) Find the unit tangent T(t). (c) Find the principal normal N(t). (d) Find the binormal B(t). (e) Find the curvature κ(t). (f) Find the torsion τ(t). (g) Based on your answers to parts (e) and (f), what can you say about the type of curve traced out by α? 9.3. Let α be the path defined by: α(t) = (t + √3 sin t, √3 t − sin t, 2 cos t). The following calculations are kind of messy, but the conclusion may be unexpected. (a) Find the speed v(t). (b) Find the unit tangent T(t). (c) Find the principal normal N(t). (d) Find the binormal B(t). (e) Find the curvature κ(t). (f) Find the torsion τ(t). (g) Based on your answers to parts (e) and (f), what can you say about the type of curve traced out by α?
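Answers to the curvature parts of these exercises can be checked numerically using the formula κ = ∥v × a∥/v3 of Exercise 9.6, with v and a approximated by finite differences. A sketch (helper names are ours), tested against a helix, whose curvature a/(a2 + b2) was computed earlier in the chapter:

```python
import math

def kappa(path, t, h=1e-4):
    # κ = ∥v × a∥ / v^3 (Exercise 9.6), with v, a from central differences
    p0, pp, pm = path(t), path(t + h), path(t - h)
    v = tuple((pp[i] - pm[i]) / (2 * h) for i in range(3))
    a = tuple((pp[i] - 2 * p0[i] + pm[i]) / h ** 2 for i in range(3))
    cx = (v[1] * a[2] - v[2] * a[1],
          v[2] * a[0] - v[0] * a[2],
          v[0] * a[1] - v[1] * a[0])
    speed = math.sqrt(sum(c * c for c in v))
    return math.sqrt(sum(c * c for c in cx)) / speed ** 3

helix = lambda t: (2 * math.cos(t), 2 * math.sin(t), t)
k = kappa(helix, 1.3)
# for a = 2, b = 1, the exact curvature is a/(a^2 + b^2) = 2/5
```

Substituting any of the paths from Exercises 9.1–9.3 for `helix` gives the same kind of check.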
In Exercises 9.4–9.8, α is a path in R3 with velocity v(t), speed v(t), acceleration a(t), and Frenet vectors T(t), N(t), and B(t). You may assume that v(t) ≠ 0 and T′(t) ≠ 0 for all t so that the Frenet vectors are defined. 9.4. Prove the following statements. (a) v = vT (b) a = v′T + κv2N (Hence v′ is the tangential component of acceleration and κv2 the normal component.) 9.5. Prove that v · a = vv′. 9.6. Show that the curvature is given by κ = ∥v × a∥/v3. 9.7. Let C be the curve in R3 parametrized by α(t) = (t, t2, t3). Use the formula from the previous exercise to find the curvature at the point (1, 1, 1). 9.8. According to Exercise 9.4, the velocity and acceleration vectors lie in the plane spanned by T and N. Thus, like the binormal vector B, the cross product v × a is orthogonal to that plane. It follows that the unit vector in the direction of v × a must be ±B. Show that in fact it is B. In other words, show that B = (v × a)/∥v × a∥. 9.9. Let α(t) = (2 cos t, 2 sin t, 2 cos t). (a) Find the curvature κ(t). (You may use the result of Exercise 9.6 if you wish.) (b) For this curve, what does v × a tell you about the torsion τ(t)? 9.10. Let α be a path in R3 such that v ≠ 0 and T′ ≠ 0, and let: w = τvT + κvB. Prove the Darboux formulas: w × T = T′, w × N = N′, w × B = B′. 9.11. Suppose that the Darboux vector w = τvT + κvB from the previous exercise is a constant vector. (a) Prove that τv and κv are both constant. (Hint: Calculate w′, and use the Frenet-Serret formulas. Note that T and B need not be constant even if w is.) (b) If α has constant speed, prove that α is congruent to a helix. 9.12. Let α be a path in R3 that lies on the sphere of radius a centered at the origin, that is: ∥α(t)∥ = a for all t. Let T(t) be the unit tangent, N(t) the principal normal, v(t) the speed, and κ(t) the curvature. Assume that v(t) ≠ 0 and T′(t) ≠ 0 for all t so that the Frenet vectors are defined. (a) Show that α(t) · T(t) = 0 for all t.
(b) Show that κ(t)α(t) · N(t) = −1 for all t. (Hint: Differentiate part (a).) | Multivariable_Calculus_Shimamoto_Page_63_Chunk3280 |
(c) Show that κ(t) ≥ 1/a for all t. 9.13. Let α be a path in R3 that has: • constant speed v = 1, • positive torsion τ(t), and • binormal vector B(t) =
Part III Real-valued functions 53 | Multivariable_Calculus_Shimamoto_Page_65_Chunk3282 |
Chapter 3 Real-valued functions: preliminaries Thus far, we have studied functions α for which the input is a real number t and the output is a vector α(t) = (x1(t), x2(t), . . . , xn(t)). The techniques from calculus that we used were basically familiar from first-year calculus. Now, we reverse the roles and consider functions where the input is a vector x = (x1, x2, . . . , xn) and the output is a real number y = f(x) = f(x1, x2, . . . , xn). Thinking of the coordinates x1, x2, . . . , xn as variables, these are real-valued functions of n variables. More formally, they are functions f : A → R, where the domain A is a subset of Rn. When more than one variable is involved, we shall need methods that go beyond first-year calculus. Example 3.1. We have seen that a helix can be parametrized by α(t) = (a cos t, a sin t, bt). The values of a and b give the radius of the cylinder around which the helix winds and the rate at which it rises, respectively. If a and b vary, the size and shape of the helix change. The curvature of the helix is given by κ = a/(a2 + b2). We can think of this as a real-valued function that describes how the curvature depends on the geometric parameters a and b. In a way, it’s like a function on the set of helices, but, strictly speaking, it is a function κ: A → R whose domain as far as helices go is A = {(a, b) ∈ R2 : a > 0}, a subset of R2. Likewise, the torsion of a helix, τ = b/(a2 + b2), is a real-valued function of the same two variables with the same domain. 3.1 Graphs and level sets In order to analyze a function, it seems like a good idea to try to visualize its behavior somehow. It is easy enough to come up with ways of doing this in principle, but it takes a lot of practice to do it effectively when there are two or more variables. For instance, to graph a real-valued function of n variables, we replace y = f(x) in the one-variable case with xn+1 = f(x1, x2, . . . , xn). Definition. Let A be a subset of Rn, and let f : A → R be a function.
The graph of f is the set: {(x1, x2, . . . , xn+1) ∈ Rn+1 : (x1, x2, . . . , xn) ∈ A and xn+1 = f(x1, x2, . . . , xn)}. Note that the graph is a subset of Rn+1, so we can hope to draw it only if n equals 1 or 2, i.e., when there are one or two variables. In the case of two variables, which is our focus, the domain A is a subset of R2, and the graph is the surface in R3 that satisfies the equation z = f(x, y) and whose projection on the xy-plane is A. Example 3.2. Let f : R2 → R be given by f(x, y) = √(x2 + y2). To sketch the graph z = √(x2 + y2), we slice with various well-chosen planes and then try to reconstruct an image of the surface from the cross-sections.
56 CHAPTER 3. REAL-VALUED FUNCTIONS: PRELIMINARIES For instance, consider the cross-sections with the three coordinate planes. The yz-plane is where x = 0, so the equation of the intersection of the graph with this plane is z = √(02 + y2) = |y| and x = 0, a V-shaped curve. Similarly, the cross-section with the xz-plane is z = |x|, y = 0, another V-shaped curve at right angles to the first. These cross-sections are shown in Figure 3.1. Figure 3.1: Two cross-sections with coordinate planes: z = |y|, x = 0 (left) and z = |x|, y = 0 (right) Lastly, the cross-section with the xy-plane, where z = 0, is 0 = √(x2 + y2), z = 0. This consists of the single point (0, 0, 0). In general, cross-sections with horizontal planes z = c are also useful. Here, they are described by c = √(x2 + y2), or x2 + y2 = c2, where c ≥ 0. This is a circle of radius c in the plane z = c. As c increases, the circles get bigger. Some of them are shown in Figure 3.2. Figure 3.2: Some horizontal cross-sections By putting all this information together, we can assemble a pretty good picture of the whole graph z = √(x2 + y2). It is the circular cone with vertex at the origin pictured in Figure 3.3. Figure 3.3: A circular cone: the graph z = √(x2 + y2)
3.1. GRAPHS AND LEVEL SETS 57 The information about horizontal cross-sections in Figure 3.2 can also be presented by projecting the cross-sections onto the xy-plane and presenting them like a topographical map. As we saw above, the cross-section with z = c is described by the points (x, y) such that x2 + y2 = c2, a circle of radius c. When projected onto the xy-plane, this is called a level curve. Sketching a representative sample of level curves in one picture and labeling them with the corresponding value of c is called a contour map. In this case, it consists of concentric circles about the origin. See Figure 3.4. The function f is constant on each circle and increases uniformly as you move out. Figure 3.4: Level curves of f(x, y) = √(x2 + y2) corresponding to c = 0, 0.5, 1, 1.5, 2, 2.5 Definition. Let A be a subset of Rn, and let f : A → R be a real-valued function. Given a real number c, the level set corresponding to c is {x ∈ A : f(x) = c}. It is the set of all points in the domain at which f has a value of c. For functions of two variables, level sets are also called level curves and, for functions of three variables, level surfaces. A level set is a subset of the domain A, which is contained in Rn. So we can hope to draw level sets only for functions of one, two, or three variables. Example 3.3. Let f : R2 → R be given by f(x, y) = x2 − y2. Sketch some level sets and the graph of f. Level sets: We set f(x, y) = x2 − y2 = c and choose a few values of c. For example: for c = 1, the level set is x2 − y2 = 1, a hyperbola; for c = 0, it is x2 − y2 = 0, or y = ±x, two lines; for c = −1, it is x2 − y2 = −1, or y2 − x2 = 1, a hyperbola. These curves and a couple of others are sketched in Figure 3.5. Figure 3.5: Level curves of f(x, y) = x2 − y2 corresponding to c = −2, −1, 0, 1, 2
The graph: For the graph, the level sets are taken out of the xy-plane and raised to height z = c. Furthermore, identifying the cross-sections with the coordinate planes provides some additional framework. In the yz-plane (x = 0), the cross-section is z = −y2, a downward parabola; in the xz-plane (y = 0), it is z = x2, an upward parabola. These are shown in Figure 3.6. Figure 3.6: Two cross-sections with coordinate planes: z = −y2, x = 0 (left) and z = x2, y = 0 (right) Reconstructing the graph from these pieces gives a saddle-like surface, also known as a hyperbolic paraboloid. See Figure 3.7. Figure 3.7: A saddle surface: the graph z = x2 − y2 It has the distinctive feature that, in the cross-section with the yz-plane, the origin looks like a local maximum, while, in the xz-plane, it looks like a local minimum. (This last information can be deduced from the contour map, too. See Figure 3.5.) 3.2 More surfaces in R3 Not all surfaces in R3 are graphs z = f(x, y). We look at a few examples of some other surfaces and the equations that define them. Example 3.4. A sphere of radius a centered at the origin. A point (x, y, z) lies on the sphere if and only if ∥(x, y, z)∥ = a, in other words, if and only if √(x2 + y2 + z2) = a. Hence: Equation of a sphere: x2 + y2 + z2 = a2 The sphere of radius 1 is shown in Figure 3.8, left.
3.2. MORE SURFACES IN R3 59 Figure 3.8: The unit sphere x2 + y2 + z2 = 1 (left) and the cylinder x2 + y2 = 1 (right) Example 3.5. A circular cylinder of radius a whose axis is the z-axis. Here, as long as (x, y) satisfies the condition for being on the circle of radius a, z can be anything. Thus: Equation of a circular cylinder: x2 + y2 = a2 An example is shown on the right of Figure 3.8. Example 3.6. Sketch the surface x2 + y2 − z2 = 1. This is not the graph z = f(x, y) of a function: given an input (x, y), there can be two choices of z that satisfy the equation. In fact, the surface is the union of two graphs, namely, z = √(x2 + y2 − 1) and z = −√(x2 + y2 − 1). Nevertheless, as was the case with graphs, we can try to construct a sketch by looking at cross-sections with various well-chosen planes. For example: in the yz-plane (x = 0), the cross-section is y2 − z2 = 1, a hyperbola; in the xz-plane (y = 0), it is x2 − z2 = 1, a hyperbola; and in a horizontal plane z = c, it is x2 + y2 = 1 + c2, a circle of radius √(1 + c2). See Figure 3.9. Figure 3.9: Cross-sections with the yz-plane (left), xz-plane (middle), and horizontal planes (right) These cross-sections fit together to form a surface that looks like a nuclear cooling tower, as shown in Figure 3.10. In fact, each of these last three surfaces is a level set of a function of three variables. The sphere is the level set of f(x, y, z) = x2 + y2 + z2 corresponding to c = a2; the cylinder the
level set of f(x, y, z) = x2 + y2 corresponding to c = a2; and the cooling tower the level set of f(x, y, z) = x2 + y2 − z2 corresponding to c = 1. Figure 3.10: The cooling tower x2 + y2 − z2 = 1 This suggests a way to visualize the behavior of functions of three variables, which would require four dimensions to graph. Namely, determine the level sets given by f(x, y, z) = c. Then the way these surfaces morph into one another as c varies reflects which points get mapped to which values. For example, the function f(x, y, z) = x2 + y2 + z2 is constant on its level sets, which are spheres, and the larger the sphere, the greater the value of f. Example 3.7. A graph of a function f(x, y) of two variables can be regarded as a level set, too, but of a function of three variables. For let F(x, y, z) = f(x, y) − z. Then the graph z = f(x, y) is the level set of F corresponding to c = 0. Both are defined by the condition F(x, y, z) = f(x, y) − z = 0. 3.3 The equation of a plane in R3 Suppose that a real-valued function of three variables T : R3 → R happens to be a linear transformation. As with any linear transformation, T is represented by a matrix, in this case, the 1 by 3 matrix [A B C], so that T(x, y, z) = Ax + By + Cz. The level set of T corresponding to a value D is the set of all (x, y, z) in R3 such that: Ax + By + Cz = D. (3.1) To understand what this level set is, we rewrite (3.1) as a dot product, (A, B, C) · (x, y, z) = D, or, writing the typical point (x, y, z) of R3 as x: n · x = D, where n = (A, B, C). Suppose that we happen to know one particular point p on the level set. Then n · p = D, so all points on the level set satisfy: n · x = D = n · p. (3.2)
3.3. THE EQUATION OF A PLANE IN R3 61 In other words, n · (x − p) = 0. This says that x lies on the level set if and only if x − p is orthogonal to n. Geometrically, this is true if and only if x lies in the plane that both passes through p and is perpendicular to n. This is illustrated in Figure 3.11. Figure 3.11: The plane through p with normal vector n We say that n is a normal vector to the plane. Thus the level sets of the linear transformation T are planes, all having as normal vector the vector n whose components are the entries of the matrix that represents T. For future reference, we record the results of (3.1) and (3.2). Proposition 3.8. 1. In R3, the equation of the plane through p with normal vector n is: n · x = n · p, where x = (x, y, z). 2. Alternatively, the equation Ax + By + Cz = D is the equation of a plane with normal vector n = (A, B, C). Example 3.9. Find an equation of the plane containing the point p = (1, 2, 3) and the line parametrized by α(t) = (4, 5, 6) + t(7, 8, 9). See Figure 3.12. Figure 3.12: The plane determined by a given point and line We use the form n · x = n · p. As a point in the desired plane, we can take p = (1, 2, 3). From its parametrization, the line α passes through the point a = (4, 5, 6) and is parallel to v = (7, 8, 9). Hence, as normal vector to the plane, we can use the cross product: −→ap × v = (p − a) × v = (−3, −3, −3) × (7, 8, 9) = (−3, 6, −3).
The scalar multiple n = (1, −2, 1) is also a normal vector, and, since it’s a little simpler, it’s the one we use. Substituting into n · x = n · p gives: (1, −2, 1) · (x, y, z) = (1, −2, 1) · (1, 2, 3), x − 2y + z = 1 − 4 + 3, or x − 2y + z = 0. Example 3.10. Are the planes x + y − z = 5 and x − 2y + 3z = 6 perpendicular? The angle between two planes is the same as the angle between their normal vectors (Figure 3.13), so it suffices to check if the normals are orthogonal. Figure 3.13: The angle between two planes Reading off the coefficients from the equations that define the planes, the normals are n1 = (1, 1, −1) and n2 = (1, −2, 3). Then n1 · n2 = 1 − 2 − 3 = −4, which is nonzero. The planes are not perpendicular. 3.4 Open sets We prepare for the important concept of continuity. In first-year calculus, a real-valued function f of one variable is said to be continuous if its graph has no holes or breaks. Intuitively, whenever x approaches a point a, the value f(x) approaches the value f(a). This can be written succinctly as limx→a f(x) = f(a). Unfortunately, this idea is difficult to formulate in a rigorous way. The notions of continuity and limit are closely related. In many respects, limits are more fundamental, but we shall work more directly with continuity. Hence we base our discussion on the definition of a continuous function and modify it later to incorporate limits. We begin by introducing the most natural type of subset of Rn for taking limits. These are sets in which there is a cushion around every point so that one can approach the point from all possible directions. Definition. Let a be a point of Rn, and assume that r > 0. The open ball with center a and radius r, denoted by B(a, r), is defined to be: B(a, r) = {x ∈ Rn : ∥x − a∥ < r}. In other words, it is the set of all points within r units of a. Example 3.11. In R, an open ball is an open interval (a − r, a + r).
In R2, it is a disk, the region in the interior of a circle. These are shown in Figure 3.14. In R3, an open ball is a solid ball, the region in the interior of a sphere. | Multivariable_Calculus_Shimamoto_Page_74_Chunk3290 |
3.4. OPEN SETS 63 Figure 3.14: Open balls in R and R2 Definition. A subset U of Rn is called open if, given any point a of U, there exists a positive real number r such that B(a, r) ⊂U. So to show that a set U is open, one starts with an arbitrary point a of U and finds a value of r so that the open ball about a of radius r stays entirely within U. The value of r depends typically on the conditions that define the set U as well as on the point a. For the time being, we shall be content to find a concrete expression for r and accept geometric intuition as justification that it works. If, for some point a of U, there is no value of r that works, then U is not open. Example 3.12. In R2, let U be the open ball B((0, 0), 1) of radius 1 centered at the origin. This is the set of all x in R2 such that ∥x∥< 1. We verify that this open ball is in fact an open set. Let a be a point of U, and let r = 1 −∥a∥. Note that r > 0 since ∥a∥< 1. Then B(a, r) ⊂U, so, given any a, we’ve found an r that works. See Figure 3.15. Hence U is an open set. Figure 3.15: Open balls are open sets. To those who would like a more detailed justification that B(a, r) ⊂U, please see Exercise 7.2. Note that the expression r = 1 −∥a∥gets smaller as a approaches the boundary of the disk. That this is necessary makes sense from the geometry of the situation. By modifying the argument slightly, one can show that every open ball in Rn is an open set in Rn (Exercise 7.3). Example 3.13. Prove that the set U = {(x, y) ∈R2 : x > 0, y < 0} is open in R2. This is the fourth quadrant of the plane, not including the coordinate axes. Let a be a point of U. Write a = (c, d), so c > 0 and d < 0. The closest point on the boundary of U lies on one of the axes, either c or |d| units away (Figure 3.16). We choose r to be whichever of these is smaller, that is, r = min{c, |d|}. Then B(a, r) ⊂U. Thus U is open. Example 3.14. Is U = {(x, y) ∈R2 : x > 0, y ≤0} open in R2? 
This is the same set as in the previous example with the positive x-axis added in. The change is enough to keep the set from being open. For instance, the point a = (1, 0) is in U, but every open | Multivariable_Calculus_Shimamoto_Page_75_Chunk3291 |
Figure 3.16: An open quadrant ball about a includes points outside of U, namely, points in the upper half-plane. This is shown in Figure 3.17. Thus no value of r works for this choice of a. Figure 3.17: A nonopen quadrant Example 3.15. If U and V are open sets in Rn, prove that their intersection U ∩ V is open, too. Let a be a point of U ∩ V . Then a ∈ U, and, since U is open, there exists a positive number r1 such that B(a, r1) ⊂ U. Likewise, a ∈ V , and there exists an r2 such that B(a, r2) ⊂ V . Let r = min{r1, r2} (Figure 3.18). Then B(a, r) ⊂ B(a, r1) ⊂ U and B(a, r) ⊂ B(a, r2) ⊂ V , so Figure 3.18: The intersection of two open sets B(a, r) ⊂ U ∩ V . In other words, given any a in U ∩ V , there’s an r that works. By similar reasoning, the intersection U1 ∩ U2 ∩ · · · ∩ Uk of a finite number of open sets in Rn is open in Rn. On the other hand, an infinite intersection U1 ∩ U2 ∩ · · · need not be open. For instance, in R2, consider the sequence of concentric open balls U1 = B((0, 0), 1), U2 = B((0, 0), 1/2),
U3 = B((0, 0), 1/3), etc. Each Un is open, but the only point common to all of them is the origin. Thus U1 ∩ U2 ∩ · · · = {(0, 0)}. This is no longer an open set. There is also a notion of closed set, though the definition may not be what one would guess. Definition. A subset K of Rn is called closed if Rn − K = {x ∈ Rn : x ∉ K} is open. The set Rn − K is called the complement of K in Rn. Example 3.16. Let K = {x ∈ R2 : ∥x∥ ≤ 1}. This is the open ball centered at the origin of radius 1 together with its boundary, the unit circle, as shown in Figure 3.19. Its complement R2 − K is the set of all points x such that ∥x∥ > 1, which is an open set. (Briefly, given a in R2 − K, then r = ∥a∥ − 1 works in the definition of open set.) Hence K is closed. Figure 3.19: Closed balls are closed sets. More generally, if a ∈ Rn and r ≥ 0, then K(a, r) = {x ∈ Rn : ∥x − a∥ ≤ r} is called a closed ball. These are closed subsets of Rn as well by a similar argument. Example 3.17. Let K = {x = (x, y) ∈ R2 : either ∥x∥ < 1 or x2 + y2 = 1 where y ≥ 0}. This is the same as the closed ball of the preceding example except that the lower semicircle has been deleted. See Figure 3.20. Figure 3.20: A set that is neither open nor closed K is no longer closed: for instance, the point a = (0, −1) belongs to R2 − K, but no open ball about a is contained entirely within R2 − K. At the same time, K is not open either: the point a = (0, 1) belongs to K, but no open ball about a stays within K.
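The choice r = min{c, |d|} in Example 3.13 can be illustrated with a Monte Carlo check (a sketch, not a proof; helper names are ours): sample points of the open ball B(a, r) and confirm that every one of them lands in the open quadrant U.

```python
import math
import random

random.seed(0)

def in_quadrant(p):
    # U = {(x, y) : x > 0, y < 0}
    return p[0] > 0 and p[1] < 0

def ball_points(center, r, n=1000):
    # sample points strictly inside B(center, r)
    for _ in range(n):
        theta = random.uniform(0, 2 * math.pi)
        rho = r * random.uniform(0, 0.999)
        yield (center[0] + rho * math.cos(theta),
               center[1] + rho * math.sin(theta))

a = (0.3, -2.0)
r = min(a[0], abs(a[1]))  # the radius chosen in Example 3.13
ok = all(in_quadrant(p) for p in ball_points(a, r))
```

Moving `a` closer to either axis shrinks `r`, matching the geometric picture in Figure 3.16.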
3.5 Continuity

We are ready for continuity. Intuitively, the idea is that a function f is continuous at a point a if lim_{x→a} f(x) = f(a). That is, as x gets close to a, f(x) gets close to f(a). We make this precise by expressing the requirement in terms of open balls.

Definition. Let U be an open set in Rⁿ, and let f : U → R be a real-valued function. We say that f is continuous at a point a of U if, given any open ball B(f(a), ϵ) about f(a), there exists an open ball B(a, δ) about a such that:

f(B(a, δ)) ⊂ B(f(a), ϵ).

In other words, if x ∈ B(a, δ), then f(x) ∈ B(f(a), ϵ) (Figure 3.21). Equivalently, writing out the definition of open ball, f is continuous at a if, given any ϵ > 0, there exists a δ > 0 such that:

if ∥x − a∥ < δ, then |f(x) − f(a)| < ϵ.

Figure 3.21: f maps the open ball B(a, δ) about a into the open ball (f(a) − ϵ, f(a) + ϵ) about f(a).

The definition says that f is continuous at a if you can guarantee that f(x) will be as close to f(a) as you want ("within ϵ") by making sure that x is close enough to a ("within δ"). The strategy for proving that a function is continuous is similar formally to proving that a set is open. There, one starts with a point a and tries to find a radius r. Here, one starts with an ϵ and tries to find a δ.

It may seem that using the open ball B(f(a), ϵ) in the definition is an unnecessarily confusing way to write the interval (f(a) − ϵ, f(a) + ϵ), but we have in mind the generalization of continuity to vector-valued functions in Chapter 6. The definition phrased in terms of open balls extends naturally to the more general case, as we shall see.

Definition. A function f : U → R is simply called continuous if it is continuous at every point of its domain U.

Example 3.18. Let f : R² → R be defined by f(x, y) = x. In other words, f is the projection of the xy-plane onto the x-axis. Is f a continuous function? Let a be a point of R².
To get a sense of what answer to expect, we consider lim_{x→a} f(x). (Of course, we haven't defined rigorously what a limit is yet, so this is meant to be completely informal.) Write x = (x, y) and a = (c, d). Then:

lim_{x→a} f(x) = lim_{(x,y)→(c,d)} f(x, y) = lim_{(x,y)→(c,d)} x = c.
At the same time, f(a) = f(c, d) = c, which agrees. Thus we expect intuitively that f is continuous at a.

To prove this rigorously, let ϵ > 0 be given. We want to find a radius δ so that the open ball B(a, δ) is mapped by f into B(f(a), ϵ). Since |f(x, y) − f(c, d)| = |x − c| ≤ ∥(x, y) − (c, d)∥, the choice δ = ϵ works: if ∥(x, y) − (c, d)∥ < δ, then |f(x, y) − f(c, d)| < ϵ.
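The estimate |x − c| ≤ ∥(x, y) − (c, d)∥ behind the choice δ = ϵ for the projection can be spot-checked numerically. This is an informal sketch, with names of our own choosing:

```python
import math
import random

def f(x, y):
    # The projection onto the x-axis from Example 3.18.
    return x

# Key estimate behind delta = epsilon:
# |f(x, y) - f(c, d)| = |x - c| <= ||(x, y) - (c, d)||.
random.seed(0)
for _ in range(1000):
    x, y, c, d = (random.uniform(-10, 10) for _ in range(4))
    assert abs(f(x, y) - f(c, d)) <= math.dist((x, y), (c, d)) + 1e-12
```

Since the distance on the right is at least |x − c|, any δ ≤ ϵ satisfies the definition of continuity at every point.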
…suppose that we approach along the x-axis. Then f(x, 0) = (x · 0)/(x² + 0²) = 0, so:

lim_{(x,0)→(0,0)} f(x, 0) = lim_{(x,0)→(0,0)} 0 = 0.

On the other hand, if we approach along the line y = x, then f(x, x) = (x · x)/(x² + x²) = 1/2, so:

lim_{(x,x)→(0,0)} f(x, x) = lim_{(x,x)→(0,0)} 1/2 = 1/2.

Thus f(x, y) can approach different values depending on how you approach the origin. As a result, intuitively, the limit does not exist, so f cannot be continuous regardless of what c is.

To prove this rigorously, we need to articulate what it means for the criterion for continuity to fail. The definition says that a function is continuous if, for every ϵ, there exists a δ. The negation of this is that there is some ϵ for which there is no δ. We try to find such a bad ϵ.

Our intuitive calculations above showed that there are points arbitrarily close to the origin where f = 0 and other points arbitrarily close where f = 1/2. We choose ϵ so that these values of f cannot both be within ϵ of f(0, 0) = c. For this, let ϵ = 1/8. Then B(f(0, 0), ϵ) = B(c, 1/8) = (c − 1/8, c + 1/8), an interval of length 1/4. As just noted, in any open ball B((0, 0), δ) about the origin, there will be points (x, 0) where f(x, 0) = 0 and points (x, x) where f(x, x) = 1/2. It's impossible to fit both these values inside an interval of length 1/4. So for ϵ = 1/8, there is no δ > 0 such that f maps B((0, 0), δ) into B(f(0, 0), ϵ). Hence f is not continuous at the origin, no matter what value c is assigned to f(0, 0).
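The two path computations can be reproduced directly. A small Python sketch, our own illustration rather than anything from the book:

```python
def f(x, y):
    # f(x, y) = xy / (x^2 + y^2), undefined at the origin.
    return x * y / (x**2 + y**2)

# Approach the origin along the x-axis: values are identically 0.
along_x_axis = [f(t, 0.0) for t in (0.1, 0.01, 0.001)]
# Approach along the line y = x: values are identically 1/2.
along_diagonal = [f(t, t) for t in (0.1, 0.01, 0.001)]

assert all(v == 0.0 for v in along_x_axis)
assert all(abs(v - 0.5) < 1e-12 for v in along_diagonal)
```

The two sequences of values stay a fixed distance 1/2 apart no matter how close to the origin we sample, which is exactly why no single limit (and no choice of c) can work.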
We get a consistent answer of 0, but this does not prove that the limit is 0. We have not exhausted all possible ways of approaching the origin, and being close to the origin is different from being close to it along any individual curve. Nevertheless, the evidence suggests that choosing f(0, 0) = 0 might make f continuous, and this gives us something concrete to shoot for.

So set f(0, 0) = 0, and let ϵ > 0 be given. We want to find a δ > 0 such that, if ∥(x, y) − (0, 0)∥ < δ, then |f(x, y) − 0| < ϵ. This is satisfied automatically when (x, y) = (0, 0) for any value of δ since f(0, 0) = 0, so we assume (x, y) ≠ (0, 0) and look for a δ such that, if ∥(x, y)∥ < δ, then |(x³ + y³)/(x² + y²)| < ϵ.

To hunt for a connection between the quantities ∥(x, y)∥ and (x³ + y³)/(x² + y²), we introduce polar coordinates: x = r cos θ and y = r sin θ, where r = √(x² + y²) = ∥(x, y)∥. See Figure 3.25.

Figure 3.25: Polar coordinates r and θ

Then, if (x, y) ≠ (0, 0):

(x³ + y³)/(x² + y²) = r³(cos³ θ + sin³ θ)/r² = r(cos³ θ + sin³ θ).

Since |cos³ θ + sin³ θ| ≤ 2, this implies that:

|(x³ + y³)/(x² + y²)| = |r(cos³ θ + sin³ θ)| ≤ 2r = 2√(x² + y²) = 2∥(x, y)∥.

Thus, if ∥(x, y)∥ < a, then |(x³ + y³)/(x² + y²)| ≤ 2∥(x, y)∥ < 2a. So let δ = ϵ/2. With this choice, if ∥(x, y)∥ < δ, then |f(x, y)| < 2δ = ϵ, i.e., f maps B((0, 0), δ) inside (−ϵ, ϵ). In other words, given any ϵ > 0, we've found a δ > 0 that satisfies the definition of continuity, namely, δ = ϵ/2. Therefore setting f(0, 0) = 0 makes f continuous at (0, 0).

3.6 Some properties of continuous functions

The definition of continuity makes for unimpeachable arguments, but, as we learn more about continuous functions, we might prefer to build on what we learn and not have to start from scratch with the definition every time. We present a couple of general properties that are especially useful. These results are behind the working principle that most functions that look continuous really are continuous.
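The bound |f(x, y)| ≤ 2∥(x, y)∥ and the resulting choice δ = ϵ/2 can be sanity-checked on random points. An informal sketch, with our own function name:

```python
import math
import random

def f(x, y):
    # f(x, y) = (x^3 + y^3)/(x^2 + y^2), extended by f(0, 0) = 0.
    return (x**3 + y**3) / (x**2 + y**2) if (x, y) != (0.0, 0.0) else 0.0

# Check the polar-coordinate estimate |f(x, y)| <= 2 ||(x, y)||.
random.seed(1)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert abs(f(x, y)) <= 2 * math.hypot(x, y) + 1e-9

# With delta = eps/2, any point with ||(x, y)|| < delta gives |f(x, y)| < eps.
eps = 1e-3
delta = eps / 2
x, y = delta / 2, -delta / 3   # a sample point inside B((0, 0), delta)
assert math.hypot(x, y) < delta and abs(f(x, y)) < eps
```

The assertions mirror the proof: the linear bound in r is what lets a single δ serve every direction of approach at once.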
One need not bring out the ϵ's and δ's.

Proposition 3.21. Let f, g : U → R be real-valued functions defined on an open set U in Rⁿ. If f and g are continuous at a point a of U, then so are:
- f + g,
- cf for any scalar c,
- fg, the product of f and g,
- f/g, the quotient, assuming that g(a) ≠ 0.
Before proving this result, let's apply it to some examples.

Example 3.22. We showed earlier in Example 3.18 that the projection onto the x-axis f : R² → R, f(x, y) = x, is continuous. Likewise, the projection onto the y-axis g : R² → R, g(x, y) = y, is continuous. It then follows immediately from the proposition that functions like x + y, 3x, xy, x² = x · x, x³ − 4x²y² + 5y, and, at points other than (0, 0), xy/(x² + y²) are all continuous.

Proof. We prove the statement about the sum f + g and leave the rest for the exercises. (See Exercises 7.4–7.7.) For this, we want to argue that, as x gets close to a, f(x) + g(x) gets close to f(a) + g(a). Since f(x) and g(x) get close to f(a) and g(a) individually, this conclusion seems clear, and it's just a matter of feeding the intuition into the formal definition. The key ingredient in the argument below is the "triangle inequality."

Let ϵ > 0 be given. We need to come up with a δ > 0 such that f(x) + g(x) ∈ B(f(a) + g(a), ϵ) whenever x ∈ B(a, δ), in other words, a δ such that:

|(f(x) + g(x)) − (f(a) + g(a))| < ϵ whenever x ∈ B(a, δ).

Note that:

|(f(x) + g(x)) − (f(a) + g(a))| = |(f(x) − f(a)) + (g(x) − g(a))|.   (3.3)

The triangle inequality says that, for any real numbers r and s, |r + s| ≤ |r| + |s|. This seems reasonable, but it has a generalization to Rⁿ that is important enough that we discuss it separately in the next section. Accepting that it is true for the time being, we find from equation (3.3) that:

|(f(x) + g(x)) − (f(a) + g(a))| ≤ |f(x) − f(a)| + |g(x) − g(a)|.   (3.4)

This separates the f contribution and the g contribution into terms that can be manipulated independently. Since f is continuous at a, taking the positive quantity ϵ/2 as the given input in the definition of continuity, there is a δ1 > 0 such that |f(x) − f(a)| < ϵ/2 whenever x ∈ B(a, δ1). Similarly, there is a δ2 > 0 such that |g(x) − g(a)| < ϵ/2 whenever x ∈ B(a, δ2). Let δ = min{δ1, δ2}.
Then the last two inequalities remain true for all x in B(a, δ), and substituting them into (3.4) gives:

|(f(x) + g(x)) − (f(a) + g(a))| < ϵ/2 + ϵ/2 = ϵ whenever x ∈ B(a, δ).

This shows that f + g is continuous at a.

Proposition 3.23. Compositions of continuous functions are continuous. That is, let U be an open set in Rⁿ and V an open set in R, and let f : U → R and g : V → R be functions such that f(x) ∈ V for all x in U. (This assumption guarantees that the composition g ∘ f : U → R is defined.) Then, if f and g are continuous, so is g ∘ f.

Again, we look at some examples first.

Example 3.24. We accept without comment that the familiar functions of one variable from first-year calculus that were said to be continuous there really are continuous. This includes |x|, the n-th root ⁿ√x, sin x, cos x, eˣ, and ln x. The proposition then implies that functions like √(x² + y²), sin(x + y), and e^{xy} are continuous. For example, the first is the composition (x, y) ↦ x² + y² ↦ √(x² + y²), which is a composition of continuous steps.
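The key inequality (3.4) can be spot-checked with concrete functions; the particular f and g below are hypothetical choices for illustration, continuous by Proposition 3.21:

```python
import random

def f(x, y):
    # A sample continuous function on R^2 (polynomial in x and y).
    return x * x + y

def g(x, y):
    # Another sample continuous function on R^2.
    return 3.0 * x - y

# Inequality (3.4): |(f+g)(x) - (f+g)(a)| <= |f(x)-f(a)| + |g(x)-g(a)|.
random.seed(2)
for _ in range(1000):
    x, y, a, b = (random.uniform(-4, 4) for _ in range(4))
    lhs = abs((f(x, y) + g(x, y)) - (f(a, b) + g(a, b)))
    rhs = abs(f(x, y) - f(a, b)) + abs(g(x, y) - g(a, b))
    assert lhs <= rhs + 1e-12
```

This is exactly the splitting that lets the ϵ/2 budget for f and the ϵ/2 budget for g combine into a single ϵ for the sum.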
Proof. Let a be a point of U, and let ϵ > 0 be given. We want to find an open ball about a that is mapped by g ∘ f inside the interval B(g(f(a)), ϵ) = (g(f(a)) − ϵ, g(f(a)) + ϵ).
Proof. It's equivalent to square both sides and prove that ∥v + w∥² ≤ (∥v∥ + ∥w∥)². But:

∥v + w∥² = (v + w) · (v + w) = v · v + v · w + w · v + w · w = ∥v∥² + 2v · w + ∥w∥².   (3.5)

Also, v · w ≤ |v · w| ≤ ∥v∥∥w∥, where the second inequality is Cauchy-Schwarz. Substituting this into (3.5) gives:

∥v + w∥² ≤ ∥v∥² + 2∥v∥∥w∥ + ∥w∥² = (∥v∥ + ∥w∥)².

The connection with triangles is illustrated in Figure 3.27.

Figure 3.27: The geometry of the triangle inequality: the length ∥v + w∥ of one side of a triangle is less than or equal to the sum ∥v∥ + ∥w∥ of the lengths of the other two sides.

3.8 Limits

We used our intuition about limits as a way to think about continuity. Now that continuity has been defined rigorously, we turn the tables and use continuity to give a formal definition of limits.

One technical, though important, point is that, in determining how f(x) behaves as x approaches a, what is happening right at a does not matter. The value of f(a), or even whether f is defined there, is irrelevant.

Definition. Let U be an open set in Rⁿ, and let a be a point of U. If f is a real-valued function that is defined on U, except possibly at the point a, then we say that lim_{x→a} f(x) exists if there is a number L such that the function f̃ : U → R defined by

f̃(x) = f(x) if x ≠ a, and f̃(x) = L if x = a

is continuous at a. When this happens, we write lim_{x→a} f(x) = L.

By carefully applying the definition of continuity to the function f̃, the definition of limit can be restated using ϵ's and δ's, which is its more customary form. Namely, lim_{x→a} f(x) = L means:

Given any ϵ > 0, there exists a δ > 0 such that |f(x) − L| < ϵ whenever ∥x − a∥ < δ, except possibly when x = a.

We shall never use this formulation, however. If f is defined at a, then it's basically a tautology from the definition that lim_{x→a} f(x) = f(a) if and only if f is continuous at a, thus completing the intuitive connection between the two concepts with which we began.
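Both the Cauchy-Schwarz inequality |v · w| ≤ ∥v∥∥w∥ and the triangle inequality ∥v + w∥ ≤ ∥v∥ + ∥w∥ can be verified numerically on random vectors. A minimal sketch in Rⁿ with n = 5:

```python
import math
import random

def dot(v, w):
    # Standard dot product on R^n.
    return sum(vi * wi for vi, wi in zip(v, w))

def norm(v):
    # Euclidean norm ||v|| = sqrt(v . v).
    return math.sqrt(dot(v, v))

random.seed(3)
for _ in range(1000):
    v = [random.uniform(-1, 1) for _ in range(5)]
    w = [random.uniform(-1, 1) for _ in range(5)]
    # Cauchy-Schwarz: |v . w| <= ||v|| ||w||
    assert abs(dot(v, w)) <= norm(v) * norm(w) + 1e-12
    # Triangle inequality: ||v + w|| <= ||v|| + ||w||
    s = [vi + wi for vi, wi in zip(v, w)]
    assert norm(s) <= norm(v) + norm(w) + 1e-12
```

The small tolerance guards against floating-point rounding; mathematically both inequalities hold exactly, with the triangle inequality following from Cauchy-Schwarz exactly as in the proof above.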